WO2023029551A1 - A multi-UAV-based image stitching method and system - Google Patents

A multi-UAV-based image stitching method and system

Info

Publication number
WO2023029551A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
metadata
full
motion video
image frames
Prior art date
Application number
PCT/CN2022/091638
Other languages
English (en)
French (fr)
Inventor
袁睿
周单
雷明
刘夯
Original Assignee
成都纵横自动化技术股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 成都纵横自动化技术股份有限公司 filed Critical 成都纵横自动化技术股份有限公司
Priority to KR1020247007241A priority Critical patent/KR20240058858A/ko
Priority to EP22862694.1A priority patent/EP4398183A1/en
Publication of WO2023029551A1 publication Critical patent/WO2023029551A1/zh

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/698Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/78Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038Image mosaicing, e.g. composing plane images from plane sub-images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/85Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression
    • H04N19/88Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression involving rearrangement of data among different coding units, e.g. shuffling, interleaving, scrambling or permutation of pixel data or permutation of transform coefficient data among different blocks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence

Definitions

  • The invention relates to the technical field of drone imaging, and in particular to an image stitching method and system based on multiple drones.
  • Unlike ordinary consumer drones, industrial drones can complete more tasks, such as area monitoring, inspection, surveying and mapping, and investigation and evidence collection; they can play a greater role in industrial applications such as emergency rescue, disaster monitoring, traffic and road monitoring, border patrol, and area surveillance.
  • A UAV can carry video acquisition equipment, including single-camera electro-optical pods, multi-camera panoramic devices, and the like.
  • Video data is transmitted to the ground so that the situation of the flight area can be known in real time, or algorithms such as artificial intelligence can be used to obtain more effective information.
  • Traditional image stitching methods include: based on image feature-point matching, computing the motion between images; using a homography matrix as the image registration model, stitching the images incrementally; and repeating the above steps to obtain a global stitched image. This method, however, suffers from drift caused by accumulated error.
  • Because multiple images are not used to constrain the computed result, the error is very large; and when video images, or several video streams at once, are used, dense video-stream images cannot be processed, while repeated computation reduces efficiency.
  • The present invention provides a multi-UAV-based image stitching method and system, which solves the poor stitching quality of traditional multi-UAV-based image stitching methods.
  • The technical solution of the present invention is a multi-drone-based image stitching method, including: obtaining multiple sets of image compression code streams and metadata; encapsulating the multiple sets of metadata, frame by frame and in a time-synchronized manner, into preset fields of the image compression code streams, and generating multiple full-motion video code streams containing the metadata; transmitting the multiple full-motion video code streams to a receiving end through communication links; parsing each full-motion video code stream into image frames and their corresponding metadata, and calculating the overlapping relationship between the image frames based on the metadata; and generating a stitched image based on the multiple image frames and their overlapping relationships.
  • Parsing each full-motion video code stream into multiple pairs of image frames and corresponding metadata, and calculating the overlapping relationship between the image frames based on the metadata, includes: each time one time-synchronized image frame is parsed from each of the full-motion video code streams, calculating the geographic-range overlap data between the different image frames by parsing the geographic location information that is contained in the metadata and time-synchronized with the image frames, and generating the overlapping relationship; or, when multiple image frames are parsed from each of the full-motion video code streams, calculating the geographic-range overlap data between the time-synchronized image frames by parsing that geographic location information, and generating the overlapping relationship.
  • Parsing each full-motion video code stream into multiple pairs of image frames and corresponding metadata, and calculating the overlapping relationship between the image frames based on the metadata, further includes: when multiple image frames are parsed from each of the full-motion video code streams, calculating the geographic-range overlap data among the multiple image frames contained in the same full-motion video code stream by parsing the geographic location information that is contained in the metadata and time-synchronized with the image frames, and generating the overlapping relationship.
  • Generating the stitched image based on the multiple image frames and their overlapping relationships includes: after unifying the coordinate systems of the multiple image frames, generating the stitched image based on the overlapping relationship.
  • Acquiring the image compression code streams and metadata includes: outputting, based on the same reference clock circuit, a first control signal for collecting the image compression code stream and a second control signal for collecting the metadata, both having the same reference clock; and acquiring, based on the first control signal and the second control signal, the image compression code stream and the metadata, each containing absolute time.
  • the metadata includes at least GNSS positioning data.
  • The image stitching method further includes performing visualization processing on the image frames, which includes: establishing a corresponding first layer for each full-motion video code stream; if the overlapping relationship is non-overlapping, composing the stitched image by updating each image frame participating in the calculation of the overlapping relationship on the first layer corresponding to the full-motion video code stream to which it belongs.
  • If the overlapping relationship is overlapping, the first layers corresponding to the full-motion video code streams of all the image frames participating in the calculation of the overlapping relationship are invoked and regarded, as a whole, as a second layer; orthorectification and photogrammetric coordinate-system unification are performed on the multiple image frames, and after the stitched image is generated based on the overlapping relationship, the stitched image is updated on the second layer.
  • the update frequency is configured to be the same as the frame rate of the full-motion video code stream.
  • A multi-UAV-based image stitching system includes: a sending end, configured to acquire multiple sets of image compression code streams and metadata, encapsulate the multiple sets of metadata, frame by frame and in a time-synchronized manner, into preset fields of the image compression code streams, generate multiple full-motion video code streams containing the metadata, and transmit the multiple full-motion video code streams to a receiving end through multiple communication links; and the receiving end, configured to parse each full-motion video code stream into multiple pairs of image frames and corresponding metadata, calculate the overlapping relationship between the image frames based on the metadata, and then generate a stitched image based on the multiple image frames and their overlapping relationships.
  • The receiving end includes: a judging unit for selecting the multiple pairs of image frames that need to be stitched; a positioning unit for calculating the overlapping relationship between the image frames; a stitching unit for generating the stitched image; and a display unit configured to display the stitched image.
  • The primary improvement of the present invention is the provided multi-UAV-based image stitching method.
  • By constructing full-motion video streams in which UAV metadata and video data are synchronized in time and space, the metadata provides accurate information such as UAV flight conditions and geographic positioning, improving the video stitching effect and thereby avoiding the errors and cumulative drift of traditional image stitching methods.
  • Since the metadata is strictly synchronized with the image frames, the field-of-view range and field-of-view centre coordinates of the image frame corresponding to the metadata can be accurately resolved from the metadata, so the overlapping relationship between different image frames can be computed precisely, which solves the inaccurate stitching caused by unsynchronized data when stitching images from multiple drones.
  • Fig. 1 is a simplified flowchart of the multi-UAV-based image stitching method of the present invention;
  • Fig. 2 is a simplified unit connection diagram of the multi-UAV-based image stitching system of the present invention.
  • A multi-UAV-based image stitching method includes: obtaining multiple sets of image compression code streams and metadata; encapsulating the multiple sets of metadata, frame by frame and in a time-synchronized manner, into preset fields of the image compression code streams, and generating multiple full-motion video code streams containing the metadata; transmitting the multiple full-motion video code streams to a receiving end through communication links; parsing each full-motion video code stream into image frames and their corresponding metadata, and calculating the overlapping relationship between the image frames based on the metadata; and generating a stitched image based on the multiple image frames and their overlapping relationships.
  • The metadata may include GNSS (Global Navigation Satellite System) data, altitude data, field-of-view data, flight attitude data, and the like.
  • The preset field may be chosen as follows: when the communication transmission protocol used is H.264 or H.265, the preset field may be an SEI (Supplemental Enhancement Information) field; when the communication transmission protocol used is the TS (MPEG-2 Transport Stream) encapsulation protocol, the preset field is a custom field.
  • the type of metadata information varies according to the type of mobile device equipped with sensors. For example, when the device is a ship, the metadata may include device status data, and the device status data includes at least GNSS data, wind direction data, and heading data, etc.
  • the metadata includes at least aircraft POS (Position and Orientation System) data, aircraft state data, load sensor type data, pod POS data, pod state data and image processing board data etc.
  • the metadata information may include information such as positioning, viewing direction, pitch angle, field of view, tower height, channel, transmission bandwidth, device ID (Identity document, identity identification number), etc.
  • The carrier-aircraft POS data includes at least carrier-aircraft yaw angle data, pitch angle data, roll angle data, latitude and longitude data, altitude data, distance from the departure point, azimuth relative to the departure point, and flight speed data.
  • The pod POS data includes at least visible-light horizontal field-of-view data, visible-light vertical field-of-view data, infrared horizontal field-of-view data, infrared vertical field-of-view data, camera focal length data, pod heading Euler angle data, pod pitch Euler angle data, heading frame angle data, pitch frame angle data, roll frame angle data, target longitude, latitude and altitude data, target speed data, target speed azimuth data, and estimated distance of the target from the carrier aircraft.
  • Each full-motion video code stream is parsed into multiple pairs of image frames and the corresponding metadata, and the overlapping relationship between the image frames is calculated based on the metadata, including: each time one time-synchronized image frame is parsed from each of the full-motion video code streams, the geographic-range overlap data between the different image frames is calculated by parsing the geographic location information that is contained in the metadata and time-synchronized with the image frames, and the overlapping relationship is generated; or, when multiple image frames are parsed from each of the full-motion video code streams, the geographic-range overlap data between the time-synchronized image frames is calculated by parsing that geographic location information, and the overlapping relationship is generated.
  • Calculating the geographic-range overlap data between different image frames includes: based on the geographic location information, field-of-view angle data, and pitch and yaw angles that are contained in the metadata and time-synchronized with the image frames, calculating the field-of-view range and field-of-view centre coordinates of the image frame corresponding to the metadata, which constitute the geographic-range data of the image frame; calculating the overlapping region of the geographic-range data of different image frames; and generating the geographic-range overlap data.
  • Parsing each full-motion video code stream into multiple pairs of image frames and the corresponding metadata, and calculating the overlapping relationship between the image frames based on the metadata, further includes: when multiple image frames are parsed from each of the full-motion video code streams, calculating the geographic-range overlap data among the multiple image frames contained in the same full-motion video code stream by parsing the geographic location information that is contained in the metadata and time-synchronized with the image frames, and generating the overlapping relationship.
  • generating the spliced image based on the multiple image frames and their overlapping relationship includes: after unifying the coordinate systems of the multiple image frames, generating the spliced image based on the overlapping relationship.
  • Since the camera-unit parameters of different UAVs and their flight altitudes may differ, the spatial resolutions of the image frames collected by different UAVs may also differ; therefore, after orthorectification and photogrammetric coordinate-system unification, the spatial resolutions of the multiple image frames can also be unified to improve the stitching result; when the overlapping region represented by the overlapping relationship is large, image stitching can also be performed after image-motion-change calculation on the multiple image frames participating in the calculation of the overlapping relationship.
  • After the stitched image is generated, the existing stitching result and the metadata of the multiple images can be used for global optimization, with methods including but not limited to georeferencing and camera pose optimization, to improve the stitching effect.
  • The image stitching method further includes performing visualization processing on the image frames, which includes: establishing a corresponding first layer for each full-motion video code stream; if the overlapping relationship is non-overlapping, composing the stitched image by updating each image frame participating in the calculation of the overlapping relationship on the first layer corresponding to the full-motion video code stream to which it belongs; if the overlapping relationship is overlapping, invoking the first layers corresponding to the full-motion video code streams of all the image frames participating in the calculation and regarding them, as a whole, as a second layer, performing orthorectification and photogrammetric coordinate-system unification on the multiple image frames, and, after generating the stitched image based on the overlapping relationship, updating the stitched image on the second layer.
  • The update frequency is configured to be the same as the frame rate of the full-motion video code stream, so that the visual presentation of the stitched image is completed in real time; the update frequency can also be configured such that the frame rate of the full-motion video code stream is a multiple of the update frequency, for example a 50-frame stream with a 25-frame update frequency, so that the computing load can be reduced when the visual presentation of the stitched image does not need to be completed in real time.
  • This application also provides an image stitching method, specifically: each time one time-synchronized image frame is parsed from each of the multiple full-motion video code streams, the geographic-range overlap data between the different image frames is calculated by parsing the geographic location information contained in the metadata and time-synchronized with the image frames, and a first overlapping relationship is generated; if the first overlapping relationship is overlapping, the first layers corresponding to the full-motion video code streams of all the image frames participating in the calculation are invoked and regarded, as a whole, as a second layer, orthorectification and photogrammetric coordinate-system unification are performed on the multiple image frames, and after the stitched image is generated based on the first overlapping relationship, the stitched image is updated on the second layer; if the first overlapping relationship is non-overlapping, the stitched image is composed by updating each image frame participating in the calculation of the first overlapping relationship on the first layer corresponding to the full-motion video code stream to which it belongs.
  • Acquiring the image compression code streams and metadata includes: outputting, based on the same reference clock circuit, a first control signal for collecting the image compression code stream and a second control signal for collecting the metadata, both having the same reference clock; and acquiring, based on the first control signal and the second control signal, the image compression code stream and the metadata, each containing absolute time.
  • The present invention uses the reference clock signal output by the same reference clock circuit for both the first control signal and the second control signal, so that the timestamps contained in the image compression code stream and in the metadata all refer to the same clock source; therefore, within a system sharing that clock source, the timestamps of the image compression code stream and the timestamps of the metadata can be regarded as absolute time relative to each other.
  • A specific way of generating the first control signal and the second control signal may be: after receiving an instruction from the ground station, the payload processing subunit of the UAV outputs the first control signal and the second control signal based on the same reference clock circuit.
  • The image stitching method further includes: when the sending end transmits the full-motion video code stream to the receiving end through a communication link, if the communication link contains at least one transmission node, the transmission node can parse the full-motion video code stream into multiple image frames and their time-synchronized metadata; after the metadata is modified, the modified metadata is encapsulated, in a time-synchronized manner, into the preset fields of the image frames, and a full-motion video code stream containing the modified metadata is generated; the full-motion video code stream containing the modified metadata then continues to be transmitted to the receiving end.
  • The present invention synchronously encapsulates the metadata into the image compression code stream, so that when a transmission node exists in the communication link, the node can, without destroying the compressed code stream, re-extract the pure image frames from the full-motion video code stream and extract the metadata synchronized with those frames from the stream's preset fields; the transmission node can therefore not only implement various application scenarios based on the image frames and metadata, but can also modify the metadata, re-encapsulate it with the image frames, and transmit the result to the receiving end, which ensures modifiability and diversity of application scenarios while the full-motion video code stream is transmitted over the communication link.
  • The present invention constructs full-motion video streams in which UAV metadata and video data are synchronized in time and space, and uses the metadata to provide accurate information such as UAV flight conditions and geographic positioning, improving the video stitching effect and thereby avoiding problems such as the errors and cumulative drift of traditional image stitching methods.
  • Since the metadata is strictly synchronized with the image frames, the field-of-view range and field-of-view centre coordinates of the image frame corresponding to the metadata can be accurately resolved from the metadata, so the overlapping relationship between different image frames can be computed precisely, which solves the inaccurate stitching caused by unsynchronized data when stitching images from multiple drones.
  • The present invention provides a multi-UAV-based image stitching system, including: a sending end, configured to obtain multiple sets of image compression code streams and metadata, encapsulate the multiple sets of metadata, frame by frame and in a time-synchronized manner, into preset fields of the image compression code streams, generate multiple full-motion video code streams containing the metadata, and transmit the multiple full-motion video code streams to a receiving end through multiple communication links; and the receiving end, configured to parse each full-motion video code stream into multiple pairs of image frames and corresponding metadata, calculate the overlapping relationship between the image frames based on the metadata, and then generate a stitched image based on the multiple image frames and their overlapping relationships.
  • the sending end may be an unmanned aerial vehicle;
  • The receiving end may be a ground station or another back-end data processing unit; where the receiving end is another back-end data processing unit, the ground station may be regarded as an intermediate transmission node used for transparent data transmission and forwarding.
  • The receiving end includes: a judging unit, used to select the multiple pairs of image frames that need to be stitched; a positioning unit, used to calculate the overlapping relationship between the image frames; a stitching unit, used to generate the stitched image; and a display unit, configured to display the stitched image.
  • RAM (random access memory), ROM (read-only memory), EPROM (electrically programmable ROM), EEPROM (electrically erasable programmable ROM), registers, a hard disk, a removable disk, a CD-ROM, or any other known storage medium.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Library & Information Science (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)
  • Image Processing (AREA)

Abstract

A multi-UAV-based image stitching method and system, including: obtaining multiple sets of image compression code streams and metadata; encapsulating the multiple sets of metadata, frame by frame and in a time-synchronized manner, into preset fields of the image compression code streams, and generating multiple full-motion video code streams containing the metadata; transmitting the multiple full-motion video code streams to a receiving end through communication links; parsing each full-motion video code stream into image frames and their corresponding metadata, and calculating the overlapping relationship between the image frames based on the metadata; and generating a stitched image based on the multiple image frames and their overlapping relationships. By constructing full-motion video streams in which UAV metadata and video data are synchronized in time and space, the metadata provides accurate information such as UAV flight conditions and geographic positioning, improving the video stitching effect. Moreover, since the metadata is strictly synchronized with the image frames, the overlapping relationship between different image frames can be computed precisely.

Description

A multi-UAV-based image stitching method and system
This application claims priority to the Chinese patent application filed with the China Patent Office on August 30, 2021, with application number 202111006658.X and entitled "一种基于多无人机的图像拼接方法及其系统" (A multi-UAV-based image stitching method and system), the entire contents of which are incorporated herein by reference.
Technical Field
The present invention relates to the technical field of UAV imaging, and in particular to a multi-UAV-based image stitching method and system.
Background Art
With the development of UAV technology, many kinds of UAVs have been used in different industries to complete different flight missions. Unlike ordinary consumer drones, industrial drones can complete more tasks, such as area monitoring, inspection, surveying and mapping, and investigation and evidence collection; they can play a greater role in more industrial applications, such as emergency rescue, disaster monitoring, traffic and road monitoring, border patrol, and area surveillance. By equipping a UAV with video acquisition devices, including single-camera electro-optical pods, multi-camera panoramic devices and the like, and by means of various data transmission links, video data is transmitted to the ground so that the situation of the flight area can be known in real time, or algorithms such as artificial intelligence can be used to obtain more effective information.
In the existing UAV video data transmission process, generally only video data or only metadata is available. In practical real-time applications such as inspection, this single data type is unfavourable for completing further tasks, such as real-time monitoring of the flight state or further information processing based on video data and UAV metadata, including panoramic image construction and target positioning. Stitching technology based on UAV video or images uses the sequence of images transmitted in real time over the UAV data link to reconstruct, on the ground with an incremental algorithm, a two-dimensional image of the flight area. A post-processing approach can likewise be used: a two-dimensional image of the flight area is reconstructed through a global optimization algorithm and orthorectified based on the reconstructed digital surface model.
Traditional image stitching methods include the following. (1) Based on image feature-point matching, the motion between images is computed; using a homography matrix as the image registration model, the images are stitched incrementally; the above steps are repeated to obtain a global stitched image. However, this method suffers from drift caused by accumulated error; moreover, since multiple images are not used to constrain the computed result, the error is very large, and when video images, or several video streams at the same time, are used, dense video-stream images cannot be processed, and repeated computation reduces efficiency. (2) Based on image feature-point matching, the motion between images is computed; the scene structure is recovered by triangulation; the camera pose is computed by a 2D-3D method; the images are stitched incrementally; the above steps are repeated to obtain a global stitched image. However, this method likewise suffers from drift caused by accumulated error, and if scene-structure reconstruction fails, registration fails, or the aircraft motion changes greatly, tracking is easily lost and stitching has to be restarted; moreover, when video images or several video streams are used at the same time, dense video-stream images cannot be processed, and repeated computation reduces efficiency. (3) Real-time video images from multiple UAVs are received, the images are pre-transformed based on target positioning, and then the overlapping regions are stitched to obtain the stitched image. However, this method still stitches the images by feature-point matching and needs to pre-transform the images based on target-positioning information, which easily degrades stitching quality; after the transformation the image matching may no longer be accurate; and when processing multi-UAV video images, only the overlapping regions are handled, so the case where the flight areas of multiple UAVs do not overlap cannot be processed.
In summary, traditional multi-UAV-based image stitching methods suffer from poor stitching quality.
Summary of the Application
In view of this, the present invention provides a multi-UAV-based image stitching method and system, which solves the poor stitching quality of traditional multi-UAV-based image stitching methods.
To solve the above problems, the technical solution of the present invention is a multi-UAV-based image stitching method, including: obtaining multiple sets of image compression code streams and metadata; encapsulating the multiple sets of metadata, frame by frame and in a time-synchronized manner, into preset fields of the image compression code streams, and generating multiple full-motion video code streams containing the metadata; transmitting the multiple full-motion video code streams to a receiving end through communication links; parsing each full-motion video code stream into image frames and their corresponding metadata, and calculating the overlapping relationship between the image frames based on the metadata; and generating a stitched image based on the multiple image frames and their overlapping relationships.
Optionally, parsing each full-motion video code stream into multiple pairs of image frames and corresponding metadata, and calculating the overlapping relationship between the image frames based on the metadata, includes: each time one time-synchronized image frame is parsed from each of the multiple full-motion video code streams, calculating the geographic-range overlap data between the different image frames by parsing the geographic location information contained in the metadata and time-synchronized with the image frames, and generating the overlapping relationship; or, when multiple image frames are parsed from each of the multiple full-motion video code streams, calculating the geographic-range overlap data between the time-synchronized image frames by parsing the geographic location information contained in the metadata and time-synchronized with the image frames, and generating the overlapping relationship.
Optionally, parsing each full-motion video code stream into multiple pairs of image frames and corresponding metadata, and calculating the overlapping relationship between the image frames based on the metadata, further includes: when multiple image frames are parsed from each of the multiple full-motion video code streams, calculating the geographic-range overlap data among the multiple image frames contained in the same full-motion video code stream by parsing the geographic location information contained in the metadata and time-synchronized with the image frames, and generating the overlapping relationship.
Optionally, generating a stitched image based on the multiple image frames and their overlapping relationships includes: after unifying the coordinate systems of the multiple image frames, generating the stitched image based on the overlapping relationship.
Optionally, obtaining the image compression code streams and metadata includes: outputting, based on the same reference clock circuit, a first control signal for collecting the image compression code stream and a second control signal for collecting the metadata, both having the same reference clock; and obtaining, based on the first control signal and the second control signal, the image compression code stream and the metadata, each containing absolute time.
Optionally, the metadata includes at least GNSS positioning data.
Optionally, the image stitching method further includes performing visualization processing on the image frames, which includes: establishing a corresponding first layer for each full-motion video code stream; if the overlapping relationship is non-overlapping, composing the stitched image by updating each image frame participating in the calculation of the overlapping relationship on the first layer corresponding to the full-motion video code stream to which it belongs; if the overlapping relationship is overlapping, invoking the first layers corresponding to the full-motion video code streams of all the image frames participating in the calculation of the overlapping relationship and regarding them, as a whole, as a second layer, performing orthorectification and photogrammetric coordinate-system unification on the multiple image frames, and, after generating the stitched image based on the overlapping relationship, updating the stitched image on the second layer.
Optionally, the update frequency is configured to be the same as the frame rate of the full-motion video code stream.
Correspondingly, the present invention provides a multi-UAV-based image stitching system, including: a sending end, configured to obtain multiple sets of image compression code streams and metadata, encapsulate the multiple sets of metadata, frame by frame and in a time-synchronized manner, into preset fields of the image compression code streams, generate multiple full-motion video code streams containing the metadata, and transmit the multiple full-motion video code streams to a receiving end through multiple communication links; and the receiving end, configured to parse each full-motion video code stream into multiple pairs of image frames and corresponding metadata, calculate the overlapping relationship between the image frames based on the metadata, and then generate a stitched image based on the multiple image frames and their overlapping relationships.
Optionally, the receiving end includes: a judging unit, used to select the multiple pairs of image frames that need to be stitched; a positioning unit, used to calculate the overlapping relationship between the image frames; a stitching unit, used to generate the stitched image; and a display unit, used to display the stitched image.
The primary improvement of the present invention is the provided multi-UAV-based image stitching method: by constructing full-motion video streams in which UAV metadata and video data are synchronized in time and space, the metadata provides accurate information such as UAV flight conditions and geographic positioning, improving the video stitching effect and thereby avoiding problems such as the errors and cumulative drift of traditional image stitching methods. Moreover, since the metadata is strictly synchronized with the image frames, the field-of-view range and field-of-view centre coordinates of the image frame corresponding to the metadata can be accurately resolved from the metadata, so the overlapping relationship between different image frames can be computed precisely, which solves the inaccurate stitching caused by unsynchronized data when stitching images from multiple UAVs.
Brief Description of the Drawings
Fig. 1 is a simplified flowchart of the multi-UAV-based image stitching method of the present invention;
Fig. 2 is a simplified unit connection diagram of the multi-UAV-based image stitching system of the present invention.
Detailed Description of the Embodiments
The technical solutions in the present application will be described clearly and completely below with reference to the accompanying drawings of the specification. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application without creative effort fall within the scope of protection of the present application.
Embodiment 1
As shown in Fig. 1, a multi-UAV-based image stitching method includes: obtaining multiple sets of image compression code streams and metadata; encapsulating the multiple sets of metadata, frame by frame and in a time-synchronized manner, into preset fields of the image compression code streams, and generating multiple full-motion video code streams containing the metadata; transmitting the multiple full-motion video code streams to a receiving end through communication links; parsing each full-motion video code stream into image frames and their corresponding metadata, and calculating the overlapping relationship between the image frames based on the metadata; and generating a stitched image based on the multiple image frames and their overlapping relationships. The metadata may include GNSS (Global Navigation Satellite System) data, altitude data, field-of-view data, flight attitude data, and the like. The preset field may be chosen as follows: when the communication transmission protocol used is H.264 or H.265, the preset field may be an SEI (Supplemental Enhancement Information) field; when the communication transmission protocol used is the TS (MPEG-2 Transport Stream) encapsulation protocol, the preset field is a custom field. Specifically, the kinds of metadata information vary with the kind of mobile device carrying the sensors. For example, when the device is a ship, the metadata may include device status data, which includes at least GNSS data, wind direction data, heading data, and the like; when the device is an aircraft, the metadata includes at least carrier-aircraft POS (Position and Orientation System) data, carrier-aircraft status data, payload sensor type data, pod POS data, pod status data, image processing board data, and the like; when the sensor device is a fixed camera, the metadata information may include positioning, viewing direction, pitch angle, field-of-view angle, tower height, channel, transmission bandwidth, device ID (identity document) and other information. The carrier-aircraft POS data includes at least carrier-aircraft yaw angle data, pitch angle data, roll angle data, latitude and longitude data, altitude data, distance from the departure point, azimuth relative to the departure point, and flight speed data. The pod POS data includes at least visible-light horizontal field-of-view data, visible-light vertical field-of-view data, infrared horizontal field-of-view data, infrared vertical field-of-view data, camera focal length data, pod heading Euler angle data, pod pitch Euler angle data, heading frame angle data, pitch frame angle data, roll frame angle data, target longitude, latitude and altitude data, target speed data, target speed azimuth data, and estimated distance of the target from the carrier aircraft.
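As an illustration of the SEI-based encapsulation described above, the sketch below packs a per-frame metadata payload into an H.264 `user_data_unregistered` SEI NAL unit (payload type 5) and parses it back. It is a minimal sketch, not the patent's actual wire format: the 16-byte UUID is a made-up placeholder, the JSON payload is for readability only, and the emulation-prevention bytes a real H.264 muxer must insert are omitted.

```python
import uuid

# Hypothetical producer-chosen UUID identifying this private metadata payload
# (user_data_unregistered SEI payloads begin with a 16-byte UUID).
METADATA_UUID = uuid.UUID("00000000-0000-0000-0000-000000000001").bytes

def build_sei_nal(payload: bytes) -> bytes:
    """Wrap `payload` in an Annex-B H.264 SEI NAL unit, payload type 5
    (user_data_unregistered). Simplified: no emulation-prevention bytes."""
    body = METADATA_UUID + payload
    size = len(body)
    # payloadSize coding: a run of 0xFF bytes, then a final byte < 0xFF.
    size_bytes = b"\xff" * (size // 255) + bytes([size % 255])
    # NAL header (type 6 = SEI), payloadType 5, size, body, rbsp trailing bits.
    rbsp = bytes([0x06, 0x05]) + size_bytes + body + b"\x80"
    return b"\x00\x00\x00\x01" + rbsp  # Annex-B start code

def parse_sei_payload(nal: bytes) -> bytes:
    """Inverse of build_sei_nal for this simplified layout."""
    assert nal[:4] == b"\x00\x00\x00\x01" and nal[4] == 0x06 and nal[5] == 0x05
    i, size = 6, 0
    while nal[i] == 0xFF:   # accumulate the 0xFF run of the size field
        size += 255
        i += 1
    size += nal[i]
    i += 1
    body = nal[i:i + size]
    assert body[:16] == METADATA_UUID, "not our metadata SEI"
    return body[16:]
```

A real encoder would emit such an SEI unit immediately before each frame's slice NAL units, so the metadata travels frame by frame with the compressed video.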
Further, parsing each full-motion video code stream into multiple pairs of image frames and corresponding metadata, and calculating the overlapping relationship between the image frames based on the metadata, includes: each time one time-synchronized image frame is parsed from each of the multiple full-motion video code streams, calculating the geographic-range overlap data between the different image frames by parsing the geographic location information contained in the metadata and time-synchronized with the image frames, and generating the overlapping relationship; or, when multiple image frames are parsed from each of the multiple full-motion video code streams, calculating the geographic-range overlap data between the time-synchronized image frames by parsing the geographic location information contained in the metadata and time-synchronized with the image frames, and generating the overlapping relationship. Calculating the geographic-range overlap data between different image frames includes: based on the geographic location information, field-of-view angle data, and pitch and yaw angles contained in the metadata and time-synchronized with the image frames, calculating the field-of-view range and field-of-view centre coordinates of the image frame corresponding to the metadata, which constitute the geographic-range data of the image frame; calculating the overlapping region of the geographic-range data of different image frames; and generating the geographic-range overlap data.
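The geographic-range overlap computation described above can be sketched as follows. The example makes strong simplifying assumptions the text does not: a nadir-looking camera with zero yaw, a flat local tangent plane, and an axis-aligned rectangular footprint. A real system would project all four image corners using the full pitch/yaw attitude from the metadata.

```python
import math

def footprint(lat_deg, lon_deg, alt_m, hfov_deg, vfov_deg):
    """Approximate one frame's ground footprint as (min_x, min_y, max_x, max_y)
    in metres on a local tangent plane (nadir view, zero yaw assumed)."""
    half_w = alt_m * math.tan(math.radians(hfov_deg) / 2.0)
    half_h = alt_m * math.tan(math.radians(vfov_deg) / 2.0)
    # Rough metres-per-degree conversion near the given latitude.
    x = lon_deg * 111_320.0 * math.cos(math.radians(lat_deg))
    y = lat_deg * 110_540.0
    return (x - half_w, y - half_h, x + half_w, y + half_h)

def overlap_area(a, b):
    """Intersection area of two footprints in m^2; 0.0 means non-overlapping."""
    w = min(a[2], b[2]) - max(a[0], b[0])
    h = min(a[3], b[3]) - max(a[1], b[1])
    return w * h if w > 0.0 and h > 0.0 else 0.0
```

The "overlapping relationship" then reduces to checking whether `overlap_area` is positive for a pair of time-synchronized frames.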
Further still, parsing each full-motion video code stream into multiple pairs of image frames and corresponding metadata, and calculating the overlapping relationship between the image frames based on the metadata, further includes: when multiple image frames are parsed from each of the multiple full-motion video code streams, calculating the geographic-range overlap data among the multiple image frames contained in the same full-motion video code stream by parsing the geographic location information contained in the metadata and time-synchronized with the image frames, and generating the overlapping relationship.
Further, generating a stitched image based on the multiple image frames and their overlapping relationships includes: after unifying the coordinate systems of the multiple image frames, generating the stitched image based on the overlapping relationship. Since the camera-unit parameters of different UAVs and their flight altitudes may differ, the spatial resolutions of the image frames collected by different UAVs may also differ; therefore, after orthorectification and photogrammetric coordinate-system unification of the multiple image frames, their spatial resolutions can also be unified to improve the stitching result. When the overlapping region represented by the overlapping relationship is large, image stitching may also be performed after image-motion-change calculation on the multiple image frames participating in the calculation of the overlapping relationship.
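One concrete way to read "unifying the spatial resolutions" is to resample every frame to the coarsest ground sample distance (GSD) in the set, so that no frame is artificially sharpened. The sketch below computes per-frame scale factors under the same nadir-view assumption as before; the dictionary field names are our own, not the patent's.

```python
import math

def ground_sample_distance(alt_m, hfov_deg, width_px):
    """Metres of ground covered per pixel for a nadir view."""
    ground_width = 2.0 * alt_m * math.tan(math.radians(hfov_deg) / 2.0)
    return ground_width / width_px

def unified_scale_factors(frames):
    """Scale factor per frame that brings every frame to the coarsest GSD
    in the set (pure downsampling, never upsampling)."""
    gsds = [ground_sample_distance(f["alt_m"], f["hfov_deg"], f["width_px"])
            for f in frames]
    target = max(gsds)  # the coarsest resolution wins
    return [g / target for g in gsds]
```

A frame flying at twice the altitude of another (same camera) covers twice the ground per pixel, so the lower frame is scaled to half size before mosaicking.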
Still further, after the stitched image is generated, global optimization may be performed using the existing stitching result and the metadata of the multiple images, employing methods including but not limited to georeferencing and camera pose optimization, to improve the stitching result.
Further, the image stitching method also includes visualizing the image frames, which includes: creating a corresponding first layer for each full-motion video stream; if the overlap relationship is non-overlapping, composing the stitched image by updating each image frame involved in computing the overlap relationship on the first layer corresponding to the full-motion video stream to which that image frame belongs; if the overlap relationship is overlapping, invoking the first layers corresponding to all full-motion video streams whose image frames are involved in computing the overlap relationship and treating them as a single second layer, orthorectifying the multiple image frames and unifying them into a common photogrammetric coordinate system, generating the stitched image based on the overlap relationship, and then updating the stitched image on the second layer. The update frequency is configured to be the same as the frame rate of the full-motion video streams, so that the visualization of the stitched image is completed in real time. Alternatively, the update frequency may be configured such that the frame rate of the full-motion video streams is a multiple of the update frequency; for example, when the frame rate is 50 fps, the update frequency is 25 fps, which reduces the computational load when real-time visualization of the stitched image is not required.
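The configurable update frequency can be sketched as a simple frame-index filter. This assumes, as the paragraph above does, that the stream frame rate is an integer multiple of the update rate; the function name is illustrative.

```python
def update_frames(stream_fps: int, update_fps: int, n_frames: int) -> list:
    """Indices of decoded frames that trigger a layer update when the
    stream frame rate is an integer multiple of the update rate
    (e.g. 50 fps stream, 25 fps updates -> every second frame)."""
    if stream_fps % update_fps:
        raise ValueError("stream fps must be a multiple of the update rate")
    step = stream_fps // update_fps
    return [i for i in range(n_frames) if i % step == 0]
```

With `update_fps == stream_fps` every frame updates its layer (real-time visualization); with a divisor, the intermediate frames are decoded but skipped by the compositor, halving (or further reducing) the stitching workload.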
To make the present application suitable for other application scenarios with different requirements, the present application also provides an image stitching method, specifically: each time the multiple full-motion video streams each yield one time-synchronized image frame, parsing the geolocation information contained in the metadata and time-synchronized with the image frames, computing geographic-extent overlap data between the different image frames, and generating a first overlap relationship; if the first overlap relationship is overlapping, invoking the first layers corresponding to all full-motion video streams whose image frames are involved in computing the overlap relationship and treating them as a single second layer, orthorectifying the multiple image frames and unifying them into a common photogrammetric coordinate system, generating the stitched image based on the first overlap relationship, and then updating the stitched image on the second layer; if the first overlap relationship is non-overlapping, composing the stitched image by updating each image frame involved in computing the first overlap relationship on the first layer corresponding to the full-motion video stream to which that image frame belongs. The way of updating an image frame involved in computing the first overlap relationship on the first layer may be: computing geographic-extent overlap data between the current image frame and the previous image frame in the full-motion video stream and generating a second overlap relationship; orthorectifying the current and previous image frames and unifying them into a common photogrammetric coordinate system; generating the stitched image based on the second overlap relationship; and then updating the stitched image on the first layer.
Further, acquiring the compressed image streams and metadata includes: outputting, based on the same reference clock circuit, a first control signal referenced to the same reference clock for capturing the compressed image stream and a second control signal for capturing the metadata; and acquiring, based on the first control signal and the second control signal, the compressed image stream and the metadata containing absolute time. By using the reference clock signal output by the same reference clock circuit as both the first and second control signals, the present invention makes the timestamps contained in the compressed image stream and in the metadata refer to the same clock source; the timestamp of the compressed image stream and the timestamp of the metadata can therefore be regarded as absolute time with respect to each other within a system sharing that clock source. A specific way of generating the first and second control signals may be: after receiving an instruction from the ground station, the payload processing subunit of the UAV outputs the first and second control signals based on the same reference clock circuit.
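A software stand-in for the shared reference clock might look like the following: both the image path and the metadata path stamp their samples from one clock object, so the two timestamp streams share a single time base. The class and function names are illustrative, and a real system would drive both capture paths from the hardware clock circuit rather than an OS timer.

```python
import time

class ReferenceClock:
    """Single clock shared by the image path and the metadata path, so both
    timestamp streams are expressed in the same monotonic time base."""
    def __init__(self):
        self._t0 = time.monotonic_ns()

    def now_us(self) -> int:
        """Microseconds elapsed since the clock was started."""
        return (time.monotonic_ns() - self._t0) // 1_000

clk = ReferenceClock()

def capture_frame(encode):       # driven by the "first control signal"
    return {"ts_us": clk.now_us(), "nal": encode()}

def capture_metadata(sample):    # driven by the "second control signal"
    return {"ts_us": clk.now_us(), "pos": sample()}
```

Because both records carry `ts_us` from the same `clk`, the receiver can pair a frame with its metadata by timestamp alone, which is what makes the frame-by-frame SEI encapsulation "strictly synchronized".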
Further, the image stitching method also includes: when the sending end transmits the full-motion video stream to the receiving end over a communication link containing at least one transit node, the transit node is able to parse the full-motion video stream into multiple image frames and their time-synchronized metadata; after modifying the metadata, the transit node encapsulates the modified metadata, in a time-synchronized manner, into the preset fields of the image frames and generates a full-motion video stream containing the modified metadata; the full-motion video stream containing the modified metadata then continues over the communication link to the receiving end. By synchronously encapsulating the metadata into the compressed image stream, the present invention enables a transit node in the communication link to re-extract, without corrupting the compressed stream, the clean image frames from the full-motion video stream and, from its preset fields, the metadata synchronized with those image frames. The transit node can thus not only support various application scenarios based on the image frames and metadata, but also modify the metadata, re-encapsulate it with the image frames, and transmit the result to the receiving end, guaranteeing that the full-motion video stream remains modifiable in transit and applicable to diverse scenarios.
By constructing a full-motion video stream carrying temporally and spatially synchronized UAV metadata and video data, the present invention uses the metadata to provide accurate information such as the UAV flight state and geolocation, improving the video stitching result and thereby avoiding the errors and accumulated drift of conventional image stitching methods. Moreover, because the metadata is strictly synchronized with the image frames, the field-of-view extent and field-of-view center coordinates of the image frame corresponding to the metadata can be accurately resolved from the metadata, so that the overlap relationship between different image frames can be computed precisely, solving the stitching inaccuracy caused by unsynchronized data when stitching across multiple UAVs.
Embodiment 2
Correspondingly, as shown in Fig. 2, the present invention provides a multi-UAV-based image stitching system, including: a sending end configured to acquire multiple sets of compressed image streams and metadata, encapsulate the multiple sets of metadata, frame by frame and in a time-synchronized manner, into preset fields of the compressed image streams, generate multiple full-motion video streams containing the metadata, and transmit the multiple full-motion video streams to a receiving end over multiple communication links; and the receiving end, configured to parse each full-motion video stream into multiple pairs of image frames and their corresponding metadata, compute the overlap relationship between the image frames based on the metadata, and then generate a stitched image based on the multiple image frames and their overlap relationship. The sending end may be a UAV; the receiving end may be a ground station or another back-end data processing unit. When the receiving end is another back-end data processing unit, the ground station may be regarded as an intermediate transit node for transparent data forwarding.
Further, the receiving end includes: a judging unit configured to select the multiple pairs of image frames to be stitched; a positioning unit configured to compute the overlap relationship between the image frames; a stitching unit configured to generate the stitched image; and a display unit configured to display the stitched image.
The multi-UAV-based image stitching method and system provided by the embodiments of the present invention have been described in detail above. The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the others, and for the parts that are the same or similar the embodiments may refer to one another. Since the apparatus disclosed in the embodiments corresponds to the method disclosed therein, its description is relatively brief, and the relevant points can be found in the description of the method. It should be noted that a person of ordinary skill in the art may make several improvements and modifications to the present invention without departing from its principles, and such improvements and modifications also fall within the scope of protection of the claims of the present invention.
A skilled person will further appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware, computer software, or a combination of the two. To clearly illustrate the interchangeability of hardware and software, the composition and steps of each example have been described above generally in terms of function. Whether these functions are performed in hardware or software depends on the specific application and design constraints of the technical solution. A skilled person may use different methods to implement the described functions for each specific application, but such implementations should not be considered beyond the scope of the present invention.
The steps of the method or algorithm described in connection with the embodiments disclosed herein may be implemented directly in hardware, in a software module executed by a processor, or in a combination of the two. The software module may reside in random access memory (RAM), memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.

Claims (10)

  1. A multi-UAV-based image stitching method, characterized by comprising:
    acquiring multiple sets of compressed image streams and metadata;
    encapsulating the multiple sets of metadata, frame by frame and in a time-synchronized manner, into preset fields of the compressed image streams, and generating multiple full-motion video streams containing the metadata;
    transmitting the multiple full-motion video streams to a receiving end over communication links;
    parsing each full-motion video stream into image frames and their corresponding metadata, and computing an overlap relationship between the image frames based on the metadata;
    generating a stitched image based on the multiple image frames and their overlap relationship.
  2. The image stitching method according to claim 1, characterized in that parsing each full-motion video stream into multiple pairs of image frames and their corresponding metadata and computing the overlap relationship between the image frames based on the metadata comprises:
    each time the multiple full-motion video streams each yield one time-synchronized image frame, parsing geolocation information contained in the metadata and time-synchronized with the image frames, computing geographic-extent overlap data between the different image frames, and generating the overlap relationship; or,
    when the multiple full-motion video streams have each yielded multiple image frames, parsing geolocation information contained in the metadata and time-synchronized with the image frames, computing geographic-extent overlap data between time-synchronized image frames, and generating the overlap relationship.
  3. The image stitching method according to claim 2, characterized in that parsing each full-motion video stream into multiple pairs of image frames and their corresponding metadata and computing the overlap relationship between the image frames based on the metadata further comprises:
    when the multiple full-motion video streams have each yielded multiple image frames, parsing geolocation information contained in the metadata and time-synchronized with the image frames, computing geographic-extent overlap data between the multiple image frames contained in the same full-motion video stream, and generating the overlap relationship.
  4. The image stitching method according to claim 1, characterized in that generating the stitched image based on the multiple image frames and their overlap relationship comprises:
    unifying the coordinate systems of the multiple image frames, and then generating the stitched image based on the overlap relationship.
  5. The image stitching method according to claim 1, characterized in that acquiring the compressed image streams and metadata comprises:
    outputting, based on the same reference clock circuit, a first control signal referenced to the same reference clock for capturing the compressed image stream and a second control signal for capturing the metadata;
    acquiring, based on the first control signal and the second control signal, the compressed image stream and the metadata containing absolute time.
  6. The image stitching method according to claim 5, characterized in that the metadata comprises at least GNSS positioning data.
  7. The image stitching method according to claim 4, characterized in that the image stitching method further comprises visualizing the image frames, which comprises:
    creating a corresponding first layer for each full-motion video stream;
    if the overlap relationship is non-overlapping, composing the stitched image by updating each image frame involved in computing the overlap relationship on the first layer corresponding to the full-motion video stream to which that image frame belongs;
    if the overlap relationship is overlapping, invoking the first layers corresponding to all full-motion video streams whose image frames are involved in computing the overlap relationship and treating them as a single second layer, orthorectifying the multiple image frames and unifying them into a common photogrammetric coordinate system, generating the stitched image based on the overlap relationship, and then updating the stitched image on the second layer.
  8. The image stitching method according to claim 7, characterized in that the update frequency is configured to be the same as the frame rate of the full-motion video streams.
  9. A multi-UAV-based image stitching system, characterized by comprising:
    a sending end configured to acquire multiple sets of compressed image streams and metadata, encapsulate the multiple sets of metadata, frame by frame and in a time-synchronized manner, into preset fields of the compressed image streams, generate multiple full-motion video streams containing the metadata, and transmit the multiple full-motion video streams to a receiving end over multiple communication links;
    the receiving end, configured to parse each full-motion video stream into multiple pairs of image frames and their corresponding metadata, compute the overlap relationship between the image frames based on the metadata, and then generate a stitched image based on the multiple image frames and their overlap relationship.
  10. The image stitching system according to claim 9, characterized in that the receiving end comprises:
    a judging unit configured to select the multiple pairs of image frames to be stitched;
    a positioning unit configured to compute the overlap relationship between the image frames;
    a stitching unit configured to generate the stitched image;
    a display unit configured to display the stitched image.
PCT/CN2022/091638 2021-08-30 2022-05-09 Multi-UAV-based image stitching method and system WO2023029551A1

Priority Applications (2)

Application Number Priority Date Filing Date Title
KR1020247007241A 2021-08-30 2022-05-09 Multi-UAV-based image stitching method and system
EP22862694.1A EP4398183A1 (en) 2021-08-30 2022-05-09 Image stitching method and system based on multiple unmanned aerial vehicles

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111006658.XA 2021-08-30 2021-08-30 Multi-UAV-based image stitching method and system
CN202111006658.X 2021-08-30

Publications (1)

Publication Number Publication Date
WO2023029551A1 true WO2023029551A1 (zh) 2023-03-09

Family

ID=85291030

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/091638 WO2023029551A1 (zh) 2021-08-30 2022-05-09 一种基于多无人机的图像拼接方法及其系统

Country Status (4)

Country Link
EP (1) EP4398183A1 (zh)
KR (1) KR20240058858A (zh)
CN (1) CN115731100A (zh)
WO (1) WO2023029551A1 (zh)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116821414B * 2023-05-17 2024-07-19 Chengdu Zongheng Dapeng UAV Technology Co., Ltd. Method and system for forming a field-of-view projection map based on UAV video
CN116363185B * 2023-06-01 2023-08-01 Chengdu Zongheng Automation Technology Co., Ltd. Georeferencing method and apparatus, electronic device, and readable storage medium
CN116958519B * 2023-09-15 2023-12-08 Sichuan Hongbao Runye Engineering Technology Co., Ltd. Method for aligning UAV video images with UAV position data

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107371040A * 2017-08-28 2017-11-21 Jingmen Chengyuan Electronic Technology Co., Ltd. Efficient UAV image processing system
US10104286B1 (en) * 2015-08-27 2018-10-16 Amazon Technologies, Inc. Motion de-blurring for panoramic frames
CN110383847A * 2017-03-10 2019-10-25 Raytheon Company Real-time frame alignment in video data
CN111837383A * 2018-07-13 2020-10-27 LG Electronics Inc. Method and apparatus for transmitting and receiving metadata about the coordinate system of a dynamic viewpoint
CN113542926A * 2021-07-07 2021-10-22 Dongfeng Yuexiang Technology Co., Ltd. 5G parallel driving system and control method based on a Sharing-Smart unmanned sweeper

Also Published As

Publication number Publication date
EP4398183A1 (en) 2024-07-10
KR20240058858A (ko) 2024-05-03
CN115731100A (zh) 2023-03-03


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 22862694; Country of ref document: EP; Kind code of ref document: A1)
ENP Entry into the national phase (Ref document number: 20247007241; Country of ref document: KR; Kind code of ref document: A)
WWE Wipo information: entry into national phase (Ref document number: 2022862694; Country of ref document: EP)
NENP Non-entry into the national phase (Ref country code: DE)
ENP Entry into the national phase (Ref document number: 2022862694; Country of ref document: EP; Effective date: 20240402)