WO2023029551A1 - Image stitching method and system based on multiple unmanned aerial vehicles - Google Patents
- Publication number
- WO2023029551A1 (PCT/CN2022/091638)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- metadata
- full-motion video
- image frames
- Prior art date
Classifications
- H04N23/698 — Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture
- G06F16/78 — Information retrieval of video data; retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06T3/4038 — Image mosaicing, e.g. composing plane images from plane sub-images
- G06T5/50 — Image enhancement or restoration using two or more images, e.g. averaging or subtraction
- G06T7/70 — Determining position or orientation of objects or cameras
- H04N19/88 — Video coding using pre-/post-processing adapted for compression, involving rearrangement of data among different coding units
- G06T2207/10016 — Image analysis indexing scheme: video; image sequence
Definitions
- The invention relates to the technical field of drone imaging, and in particular to an image stitching method and system based on multiple drones.
- Industrial drones can complete many tasks, such as regional monitoring, inspection, surveying and mapping, and investigation and evidence collection, and play an important role in industrial applications such as emergency rescue, disaster monitoring, traffic and road monitoring, and border patrol.
- Drones carry video acquisition equipment such as single-camera electro-optical pods and multi-camera panoramic rigs.
- Video data is transmitted to the ground so that operators can observe the flight area in real time, or so that algorithms such as artificial intelligence can extract more useful information from it.
- A traditional image stitching pipeline matches feature points between images, estimates the inter-image motion with a homography registration model, and incrementally stitches each new image onto the result; repeating these steps yields a global mosaic, but the accumulated registration error causes the mosaic to drift.
- When the error is large, or when one or several dense video streams must be processed simultaneously, this approach cannot keep up, and repeated computation reduces efficiency.
- The present invention provides a multi-UAV-based image stitching method and system that address the poor stitching quality of traditional multi-UAV image stitching methods.
- The technical solution of the present invention is a multi-UAV image stitching method comprising: obtaining multiple sets of image compression code streams and metadata; encapsulating each set of metadata, frame by frame and in a time-synchronized manner, into preset fields of the corresponding image compression code stream, thereby generating multiple full-motion video code streams containing the metadata; and transmitting the multiple full-motion video code streams to the receiving end over communication links.
- Each full-motion video code stream is parsed into image frames and their corresponding metadata; the overlapping relationships between the image frames are calculated based on the metadata; and a stitched image is generated based on the multiple image frames and their overlapping relationships.
- Parsing each full-motion video code stream into pairs of image frames and their corresponding metadata, and calculating the overlapping relationships between the image frames based on the metadata, includes: each time every full-motion video code stream yields one time-synchronized image frame, parsing the geographic position information contained in the metadata that is time-synchronized with the frames, computing the geographic-range overlap data between the different image frames, and generating the overlapping relationship; or, when the multiple full-motion video code streams each yield multiple image frames, parsing the geographic position information that is time-synchronized with those frames, computing the geographic-range overlap data between the time-synchronized image frames, and generating the overlapping relationship.
- Parsing each full-motion video code stream into pairs of image frames and their corresponding metadata, and calculating the overlapping relationships based on the metadata, further includes: when the multiple full-motion video code streams each yield multiple image frames, parsing the geographic position information that is time-synchronized with the frames, computing the geographic-range overlap data among the multiple image frames contained in the same full-motion video code stream, and generating the overlapping relationship.
- Generating the stitched image based on the multiple image frames and their overlapping relationships includes: unifying the coordinate systems of the multiple image frames, then generating the stitched image based on the overlapping relationships.
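As a concrete illustration of this step, the sketch below places frames onto a shared ground grid after their coordinate systems have been unified. It is a simplified assumption, not the patent's implementation: footprints are axis-aligned rectangles already expressed in one metres-grid coordinate system, frames are single-channel arrays whose size matches their footprint at the given ground sample distance, and overlap is resolved by letting later frames overwrite earlier ones.

```python
# Sketch: place geo-referenced frames on a shared ground grid (illustrative;
# assumes axis-aligned footprints in a common metres grid and a shared GSD).
import numpy as np

def mosaic(frames, footprints, gsd):
    """frames: list of HxW arrays; footprints: (west, south, east, north) in metres."""
    west  = min(f[0] for f in footprints)
    south = min(f[1] for f in footprints)
    east  = max(f[2] for f in footprints)
    north = max(f[3] for f in footprints)
    H = int(round((north - south) / gsd))
    W = int(round((east - west) / gsd))
    canvas = np.zeros((H, W), dtype=frames[0].dtype)
    for img, (w, s, e, n) in zip(frames, footprints):
        r0 = int(round((north - n) / gsd))   # row of the frame's top edge
        c0 = int(round((w - west) / gsd))    # column of the frame's left edge
        h, wd = img.shape
        canvas[r0:r0 + h, c0:c0 + wd] = img  # later frames overwrite the overlap
    return canvas
```

A real implementation would additionally resample each frame to the target grid and blend the overlap rather than overwrite it.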
- Acquiring the image compression code stream and metadata includes: outputting, from the same reference clock circuit, a first control signal for capturing the image compression code stream and a second control signal for capturing the metadata, both carrying the same reference clock; and acquiring, based on the first and second control signals, the image compression code stream and the metadata, each containing absolute time.
- The metadata includes at least GNSS positioning data.
- The image stitching method further includes visual processing of the image frames, which includes: establishing a corresponding first layer for each full-motion video code stream; if the overlapping relationship is non-overlapping, constituting the stitched image by updating each image frame that participated in the overlap calculation on the first layer of the full-motion video code stream it belongs to.
- If the overlapping relationship is overlapping, the first layers of all full-motion video code streams whose image frames participated in the overlap calculation are combined and treated as a single second layer; the multiple image frames are orthorectified and unified to one photogrammetric coordinate system, the stitched image is generated based on the overlapping relationship, and the stitched image is then updated on the second layer.
- The update frequency is configured to equal the frame rate of the full-motion video code stream.
- A multi-UAV-based image stitching system includes: a sending end, configured to obtain multiple sets of image compression code streams and metadata, encapsulate the metadata frame by frame and in a time-synchronized manner into preset fields of the image compression code streams, generate multiple full-motion video code streams containing the metadata, and transmit the multiple full-motion video code streams to the receiving end over multiple communication links; and the receiving end, configured to parse each full-motion video code stream into pairs of image frames and their corresponding metadata, calculate the overlapping relationships between the image frames based on the metadata, and then generate a stitched image based on the multiple image frames and their overlapping relationships.
- The receiving end includes: a judging unit for selecting the pairs of image frames to be stitched; a positioning unit for calculating the overlapping relationships between the image frames; a stitching unit for generating the stitched image; and a display unit for displaying the stitched image.
- The primary improvement of the present invention is to provide an image stitching method based on multiple UAVs.
- The metadata provides accurate information such as UAV flight state and geographic position, improving the video stitching result and avoiding the error accumulation and drift of traditional image stitching methods.
- Because the metadata is strictly synchronized with the image frames, the field-of-view extent and centre coordinates of each frame can be derived precisely from its metadata, so the overlap between different frames can be calculated accurately; this solves the misalignment caused by unsynchronized data when stitching imagery from multiple drones.
- Fig. 1 is a simplified flowchart of the multi-UAV-based image stitching method of the present invention;
- Fig. 2 is a simplified unit connection diagram of the multi-UAV-based image stitching system of the present invention.
- In one embodiment, the multi-UAV-based image stitching method includes: obtaining multiple sets of image compression code streams and metadata; encapsulating the metadata frame by frame into preset fields of the image compression code streams and generating multiple full-motion video code streams containing the metadata; transmitting the multiple full-motion video code streams to the receiving end over communication links; parsing each full-motion video code stream into image frames and their corresponding metadata and calculating the overlapping relationships between the image frames based on the metadata; and generating a stitched image based on the multiple image frames and their overlapping relationships.
- The metadata can include GNSS (Global Navigation Satellite System) data, altitude data, field-of-view data, flight attitude data, and so on.
- The preset field depends on the transport protocol: when the communication transmission protocol is H.264 or H.265, the preset field may be an SEI (Supplemental Enhancement Information) field; when the protocol is TS (MPEG-2 Transport Stream) encapsulation, the preset field is a custom field.
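For the H.264/H.265 case, one common way to carry per-frame metadata is a `user_data_unregistered` SEI message (payload type 5). The sketch below illustrates that container format only; it is not the patent's encoder. It omits emulation-prevention bytes, uses a placeholder UUID, and serializes the metadata as JSON purely for demonstration — a production muxer must handle all three properly.

```python
# Simplified sketch of wrapping per-frame metadata into an H.264 SEI NAL unit
# (payload type 5, user_data_unregistered). Illustrative only: no
# emulation-prevention bytes, placeholder UUID, JSON payload.
import json

SEI_UUID = b"\x00" * 16  # placeholder UUID identifying the metadata format

def sei_nal(metadata: dict) -> bytes:
    payload = SEI_UUID + json.dumps(metadata).encode()
    body = bytes([5])                 # payload_type = 5 (user_data_unregistered)
    size = len(payload)
    while size >= 255:                # 0xFF-chunked payload-size coding
        body += b"\xff"
        size -= 255
    body += bytes([size]) + payload + b"\x80"  # payload + rbsp_trailing_bits
    return b"\x00\x00\x00\x01\x06" + body      # start code + NAL header (type 6 = SEI)
```

The receiving end (or an intermediate transmission node) can locate these SEI units in the stream, decode the JSON, and recover metadata that is frame-accurate by construction.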
- The type of metadata varies with the type of sensor-equipped mobile device. For example, when the device is a ship, the metadata may include device status data comprising at least GNSS data, wind direction data, and heading data.
- For a UAV, the metadata includes at least carrier-aircraft POS (Position and Orientation System) data, aircraft state data, payload sensor type data, pod POS data, pod state data, and image processing board data.
- In other scenarios, the metadata may include information such as positioning, viewing direction, pitch angle, field of view, tower height, channel, transmission bandwidth, and device ID (identity document number).
- The carrier-aircraft POS data includes at least the yaw angle, pitch angle, roll angle, latitude and longitude, altitude, distance from the starting point, azimuth relative to the starting point, and flight speed of the carrier aircraft.
- The pod POS data includes at least visible-light horizontal and vertical field-of-view data, infrared horizontal and vertical field-of-view data, camera focal length, pod heading Euler angle, pod pitch Euler angle, heading frame angle, pitch frame angle, roll frame angle, target longitude, latitude, and height, target speed, target speed azimuth, and the estimated distance from the target to the carrier aircraft.
- Each full-motion video code stream is parsed into pairs of image frames and their corresponding metadata, and the overlapping relationships between the image frames are calculated based on the metadata: each time every stream yields one time-synchronized image frame, the geographic position information that is time-synchronized with the frames is parsed, the geographic-range overlap data between the different image frames is computed, and the overlapping relationship is generated; or, when the streams each yield multiple image frames, the geographic position information that is time-synchronized with the frames is parsed, the geographic-range overlap data between the time-synchronized image frames is computed, and the overlapping relationship is generated.
- Calculating the geographic-range overlap data between different image frames includes: from the geographic position, field-of-view, and pitch/yaw angle data contained in the metadata that is time-synchronized with each frame, computing the field-of-view extent and its centre coordinates for that frame, which together constitute the frame's geographic-range data; then computing the overlap area of the geographic-range data between different frames to generate the geographic-range overlap data.
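One minimal way to realize this calculation can be sketched as below, under two simplifying assumptions the patent does not make: the camera points straight down (so the pitch/yaw correction mentioned above is ignored) and positions are already in a local metres grid. The helper names `footprint` and `overlap_area` are illustrative.

```python
# Sketch: derive a frame's ground extent from metadata, then intersect extents.
# Assumes a nadir-pointing camera and positions in a local metres grid.
import math

def footprint(x, y, alt, hfov_deg, vfov_deg):
    """Ground rectangle (west, south, east, north) centred on (x, y)."""
    hw = alt * math.tan(math.radians(hfov_deg) / 2)  # half-width on the ground
    hh = alt * math.tan(math.radians(vfov_deg) / 2)  # half-height on the ground
    return (x - hw, y - hh, x + hw, y + hh)

def overlap_area(a, b):
    """Area of intersection of two (west, south, east, north) rectangles."""
    w = min(a[2], b[2]) - max(a[0], b[0])
    h = min(a[3], b[3]) - max(a[1], b[1])
    return w * h if w > 0 and h > 0 else 0.0
```

With oblique views, the footprint becomes a quadrilateral computed from the full attitude angles rather than an axis-aligned rectangle, but the intersection idea is the same.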
- Parsing each full-motion video code stream into pairs of image frames and their corresponding metadata, and calculating the overlapping relationships based on the metadata, further includes: when the multiple streams each yield multiple image frames, parsing the geographic position information that is time-synchronized with the frames, computing the geographic-range overlap data among the multiple image frames contained in the same full-motion video code stream, and generating the overlapping relationship.
- Generating the stitched image based on the multiple image frames and their overlapping relationships includes: unifying the coordinate systems of the multiple image frames, then generating the stitched image based on the overlapping relationships; in this way multiple image frames can be processed together.
- The spatial resolution of the multiple image frames can also be unified to improve the mosaic quality; and when the overlap area indicated by the overlapping relationship is large, the stitching can be performed after computing the image motion between the frames that participated in the overlap calculation.
- The existing stitching result and the metadata of multiple images can be used for global optimization; methods including but not limited to georeferencing and camera pose optimization can further improve the stitching result.
- The image stitching method further includes visual processing of the image frames, which includes: establishing a corresponding first layer for each full-motion video code stream; if the overlapping relationship is non-overlapping, constituting the stitched image by updating each image frame that participated in the overlap calculation on the first layer of the full-motion video code stream it belongs to; if the overlapping relationship is overlapping, combining the first layers of all full-motion video code streams whose image frames participated in the overlap calculation into a single second layer, orthorectifying the multiple image frames and unifying them to one photogrammetric coordinate system, generating the stitched image based on the overlapping relationship, and then updating the stitched image on the second layer.
- The update frequency is configured to equal the frame rate of the full-motion video code stream, so that the stitched image is presented visually in real time. Alternatively, the frame rate of the full-motion video code stream can be a multiple of the update frequency; for example, with a stream frame rate of 50 frames per second and an update frequency of 25 frames per second, the computational load is reduced when real-time presentation of the stitched image is not required.
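The frame-rate/update-frequency relationship above amounts to simple decimation of the decoded stream. A minimal sketch, assuming the frame rate is an integer multiple of the update rate (the helper name is illustrative):

```python
def should_update(frame_index, frame_rate, update_rate):
    """Update the display layer only on every Nth decoded frame, where
    N = frame_rate / update_rate (assumed to be an integer multiple)."""
    step = frame_rate // update_rate   # e.g. 50 fps / 25 Hz -> every 2nd frame
    return frame_index % step == 0
```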
- This application also provides an image stitching method in which, each time one time-synchronized image frame is parsed from each of the multiple full-motion video code streams, the geographic position information time-synchronized with the frames is parsed, the geographic-range overlap data between the different frames is computed, and a first overlapping relationship is generated. If the first overlapping relationship is overlapping, the first layers of all full-motion video code streams whose frames participated in the calculation are combined into a single second layer; the multiple frames are orthorectified and unified to one photogrammetric coordinate system, and after the stitched image is generated based on the first overlapping relationship, it is updated on the second layer. If the first overlapping relationship is non-overlapping, each participating frame is updated on the first layer of the full-motion video code stream it belongs to.
- Acquiring the image compression code stream and metadata includes: outputting, from the same reference clock circuit, a first control signal for capturing the image compression code stream and a second control signal for capturing the metadata, both carrying the same reference clock; and acquiring, based on the first and second control signals, the image compression code stream and the metadata containing absolute time.
- The present invention uses the reference clock signal output by the same reference clock circuit for both the first and second control signals, so that the timestamps contained in the image compression code stream and in the metadata all refer to the same clock source; within a system sharing that clock source, the code-stream timestamps and the metadata timestamps can be treated as absolute time relative to each other.
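The practical benefit of a shared clock source is that decoded frames and metadata records can be paired by direct timestamp comparison. A sketch under that assumption follows; the `pair_by_timestamp` helper and the tolerance value are illustrative, and metadata records are assumed to be sorted by time.

```python
def pair_by_timestamp(frames, metas, tol=0.02):
    """frames, metas: lists of (timestamp_s, payload) stamped against the same
    reference clock, metas sorted by time. Returns (frame, meta) pairs whose
    timestamps differ by at most tol seconds."""
    pairs = []
    mi = 0
    for ft, frame in sorted(frames):
        # advance to the metadata record closest in time to this frame
        while mi + 1 < len(metas) and abs(metas[mi + 1][0] - ft) <= abs(metas[mi][0] - ft):
            mi += 1
        if metas and abs(metas[mi][0] - ft) <= tol:
            pairs.append((frame, metas[mi][1]))
    return pairs
```

Without a common clock source, such a comparison is meaningless, which is why both control signals derive from one reference clock circuit.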
- The first and second control signals may be generated as follows: after receiving an instruction from the ground station, the UAV's payload processing subunit outputs the first and second control signals based on the same reference clock circuit.
- The image stitching method further includes: when the sending end transmits the full-motion video code stream to the receiving end over a communication link containing at least one transmission node, the node can parse the stream into multiple image frames and their time-synchronized metadata; after modifying the metadata, it encapsulates the modified metadata, time-synchronized, into the preset fields of the image frames, generating a full-motion video code stream containing the modified metadata; this stream is then transmitted onward to the receiving end.
- Because metadata is synchronously encapsulated into the image compression code stream, a transmission node in the communication link can, without corrupting the compressed stream, extract the pure image frames and the synchronized metadata from the preset fields of the full-motion video code stream. The node can therefore support various applications based on frames and metadata, and can also modify the metadata, re-encapsulate it with the image frames, and forward the result to the receiving end, ensuring modifiability and a diversity of application scenarios while the full-motion video stream transits the link.
- The present invention constructs a full-motion video stream in which UAV metadata and video data are synchronized in time and space, and uses the metadata to provide accurate information such as flight state and geographic position, improving the stitching result and avoiding the error accumulation and drift of traditional image stitching methods.
- Because the metadata is strictly synchronized with the image frames, the field-of-view extent and centre coordinates of each frame can be derived precisely from its metadata, so overlaps between different frames are calculated accurately; this solves the misalignment caused by unsynchronized data when stitching imagery from multiple drones.
- The present invention provides a multi-UAV-based image stitching system, comprising: a sending end, configured to obtain multiple sets of image compression code streams and metadata, encapsulate the metadata frame by frame and in a time-synchronized manner into preset fields of the image compression code streams, generate multiple full-motion video code streams containing the metadata, and transmit them to the receiving end over multiple communication links; and the receiving end, configured to parse each full-motion video code stream into pairs of image frames and their corresponding metadata, calculate the overlapping relationships between the image frames based on the metadata, and then generate a stitched image.
- The sending end may be an unmanned aerial vehicle.
- The receiving end may be a ground station or another back-end data processing unit; when the receiving end is a back-end data processing unit, the ground station may act as an intermediate transmission node providing transparent data forwarding.
- The receiving end includes: a judging unit for selecting the pairs of image frames to be stitched; a positioning unit for calculating the overlapping relationships between the image frames; a stitching unit for generating the stitched image; and a display unit for displaying the stitched image.
- Storage media include RAM (random access memory), ROM (read-only memory), EPROM (electrically programmable ROM), EEPROM (electrically erasable programmable ROM), registers, a hard disk, a removable disk, a CD-ROM, or any other known storage medium.
Abstract
Description
Claims (10)
- A multi-UAV-based image stitching method, characterized by comprising: obtaining multiple sets of image compression code streams and metadata; encapsulating the multiple sets of metadata, frame by frame and in a time-synchronized manner, into preset fields of the image compression code streams, and generating multiple full-motion video code streams containing the metadata; transmitting the multiple full-motion video code streams to a receiving end over communication links; parsing each full-motion video code stream into image frames and their corresponding metadata, and calculating the overlapping relationships between the image frames based on the metadata; and generating a stitched image based on the multiple image frames and their overlapping relationships.
- The image stitching method according to claim 1, characterized in that parsing each full-motion video code stream into multiple pairs of image frames and their corresponding metadata, and calculating the overlapping relationships between the image frames based on the metadata, comprises: each time the multiple full-motion video code streams each yield one time-synchronized image frame, parsing the geographic position information contained in the metadata and time-synchronized with the image frames, calculating the geographic-range overlap data between the different image frames, and generating the overlapping relationship; or, when the multiple full-motion video code streams each yield multiple image frames, parsing the geographic position information contained in the metadata and time-synchronized with the image frames, calculating the geographic-range overlap data between the time-synchronized image frames, and generating the overlapping relationship.
- The image stitching method according to claim 2, characterized in that parsing each full-motion video code stream into multiple pairs of image frames and their corresponding metadata, and calculating the overlapping relationships between the image frames based on the metadata, further comprises: when the multiple full-motion video code streams each yield multiple image frames, parsing the geographic position information contained in the metadata and time-synchronized with the image frames, calculating the geographic-range overlap data among the multiple image frames contained in the same full-motion video code stream, and generating the overlapping relationship.
- The image stitching method according to claim 1, characterized in that generating a stitched image based on the multiple image frames and their overlapping relationships comprises: unifying the coordinate systems of the multiple image frames, then generating the stitched image based on the overlapping relationships.
- The image stitching method according to claim 1, characterized in that obtaining the image compression code streams and metadata comprises: outputting, from a single reference clock circuit, a first control signal for capturing the image compression code stream and a second control signal for capturing the metadata, both carrying the same reference clock; and acquiring, based on the first and second control signals, the image compression code stream and the metadata containing absolute time.
- The image stitching method according to claim 5, characterized in that the metadata comprises at least GNSS positioning data.
- The image stitching method according to claim 4, characterized in that the method further comprises visual processing of the image frames, comprising: establishing a corresponding first layer for each full-motion video code stream; if the overlapping relationship is non-overlapping, constituting the stitched image by updating each image frame that participated in the overlap calculation on the first layer of the full-motion video code stream it belongs to; if the overlapping relationship is overlapping, combining the first layers of all full-motion video code streams whose image frames participated in the overlap calculation into a single second layer, orthorectifying the multiple image frames and unifying them to one photogrammetric coordinate system, generating the stitched image based on the overlapping relationship, and then updating the stitched image on the second layer.
- The image stitching method according to claim 7, characterized in that the update frequency is configured to equal the frame rate of the full-motion video code stream.
- A multi-UAV-based image stitching system, characterized by comprising: a sending end, configured to obtain multiple sets of image compression code streams and metadata, encapsulate the multiple sets of metadata, frame by frame and in a time-synchronized manner, into preset fields of the image compression code streams, generate multiple full-motion video code streams containing the metadata, and transmit the multiple full-motion video code streams to a receiving end over multiple communication links; and the receiving end, configured to parse each full-motion video code stream into multiple pairs of image frames and their corresponding metadata, calculate the overlapping relationships between the image frames based on the metadata, and then generate a stitched image based on the multiple image frames and their overlapping relationships.
- The image stitching system according to claim 9, characterized in that the receiving end comprises: a judging unit for selecting the pairs of image frames to be stitched; a positioning unit for calculating the overlapping relationships between the image frames; a stitching unit for generating the stitched image; and a display unit for displaying the stitched image.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020247007241A KR20240058858A (ko) | 2021-08-30 | 2022-05-09 | Multi-UAV-based image stitching method and system |
EP22862694.1A EP4398183A1 (en) | 2021-08-30 | 2022-05-09 | Image stitching method and system based on multiple unmanned aerial vehicles |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111006658.XA CN115731100A (zh) | 2021-08-30 | 2021-08-30 | Image stitching method and system based on multiple UAVs |
CN202111006658.X | 2021-08-30 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2023029551A1 | 2023-03-09 |
Family
ID=85291030
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2022/091638 WO2023029551A1 (zh) | 2022-05-09 | Image stitching method and system based on multiple UAVs |
Country Status (4)
Country | Link |
---|---|
EP (1) | EP4398183A1 (zh) |
KR (1) | KR20240058858A (zh) |
CN (1) | CN115731100A (zh) |
WO (1) | WO2023029551A1 (zh) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116821414 (zh) * | 2023-05-17 | 2024-07-19 | 成都纵横大鹏无人机科技有限公司 | Method and system for forming a field-of-view projection map from UAV video |
CN116363185 (zh) * | 2023-06-01 | 2023-08-01 | 成都纵横自动化技术股份有限公司 | Georeferencing method and apparatus, electronic device, and readable storage medium |
CN116958519 (zh) * | 2023-09-15 | 2023-12-08 | 四川泓宝润业工程技术有限公司 | Method for aligning UAV video images with UAV position data |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107371040A (zh) * | 2017-08-28 | 2017-11-21 | 荆门程远电子科技有限公司 | An efficient UAV image processing system |
US10104286B1 (en) * | 2015-08-27 | 2018-10-16 | Amazon Technologies, Inc. | Motion de-blurring for panoramic frames |
CN110383847A (zh) * | 2017-03-10 | 2019-10-25 | 雷索恩公司 | Real-time frame alignment in video data |
CN111837383A (zh) * | 2018-07-13 | 2020-10-27 | LG电子株式会社 | Method and apparatus for transmitting and receiving metadata about the coordinate system of a dynamic viewpoint |
CN113542926A (zh) * | 2021-07-07 | 2021-10-22 | 东风悦享科技有限公司 | 5G parallel driving system and control method based on a Sharing-Smart unmanned sweeper |
- 2021-08-30: CN application CN202111006658.XA filed (published as CN115731100A, status: pending)
- 2022-05-09: KR application KR1020247007241A filed (KR20240058858A, status: search and examination)
- 2022-05-09: PCT application PCT/CN2022/091638 filed (WO2023029551A1, application filing)
- 2022-05-09: EP application EP22862694.1A filed (EP4398183A1, status: pending)
Also Published As
Publication number | Publication date |
---|---|
EP4398183A1 (en) | 2024-07-10 |
KR20240058858A (ko) | 2024-05-03 |
CN115731100A (zh) | 2023-03-03 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 22862694; Country of ref document: EP; Kind code of ref document: A1 |
| | ENP | Entry into the national phase | Ref document number: 20247007241; Country of ref document: KR; Kind code of ref document: A |
| | WWE | Wipo information: entry into national phase | Ref document number: 2022862694; Country of ref document: EP |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | ENP | Entry into the national phase | Ref document number: 2022862694; Country of ref document: EP; Effective date: 20240402 |