CN101976429A - Cruise image based imaging method of water-surface aerial view


Info

Publication number
CN101976429A
Authority
CN
China
Prior art keywords
image
video
water surface
map
camera
Prior art date
Legal status
Granted
Application number
CN201010521191.8A
Other languages
Chinese (zh)
Other versions
CN101976429B (en)
Inventor
李勃
董蓉
顾昊
江登表
张潇
吴聪
郁健
刘虎
陈晨
陈启美
Current Assignee
Nanjing University
Original Assignee
Nanjing University
Priority date
Filing date
Publication date
Application filed by Nanjing University
Priority to CN2010105211918A
Publication of CN101976429A
Application granted
Publication of CN101976429B
Status: Expired - Fee Related
Anticipated expiration

Landscapes

  • Image Processing (AREA)

Abstract

A water-surface bird's-eye-view imaging method based on cruising images. An imaging system is first established, comprising a server image processing system, a client interface display and interaction system, and a cruising-boat video acquisition system. A video camera and a GPS positioning device are installed on the boat to collect water-surface video data; the server image processing system processes the video data to obtain a bird's-eye-view image of the water surface and feeds it to the client interface display and interaction system. The invention collects water-surface video from cruising boats and converts it into a bird's-eye-view image of the water surface, with low distortion, little noise interference, a short update cycle, high timeliness, low investment and good compatibility.

Description

Imaging method of a bird's-eye view of the water surface based on cruising images

Technical Field

The invention belongs to the field of image processing technology and involves image stabilization, camera calibration, image inpainting and related techniques. It converts video frame images captured by patrol boats on the water surface into bird's-eye views of the water surface to display large-area water surface conditions, such as blue-green algae; specifically, it is a water-surface bird's-eye-view imaging method based on cruising images.

Background Art

Since the 1960s, the rapid economic development of the Taihu Lake basin has put enormous pressure on the environment, resulting in serious damage to the ecosystem and a decline in environmental quality. According to the current Surface Water Environmental Quality Standard GB3838-88, the average water quality of Taihu Lake reached Class IV in the mid-1990s, with one third of the lake area at Class V; on average the water quality has dropped by one class every 10 years, and the decline has clearly accelerated in the past decade. With the acceleration of eutrophication, cyanobacteria blooms occur frequently in Meiliang Bay, Sishan Lake and the waters along the western shore of Taihu Lake. Abnormally growing cyanobacteria readily accumulate, decay and settle to form blooms that silt up in estuaries and near the shore; this not only destroys the water landscape and the balance of the ecosystem, but the toxins released by cyanobacteria during growth and the consumption of dissolved oxygen also cause mass die-offs of aquatic organisms, and the deteriorating water quality seriously threatens the drinking-water safety of the areas around the lake. The cyanobacteria outbreaks in Taihu Lake in 2007 and 2008 were especially severe: they started early and were large in scale and intensity, seriously threatening the drinking-water safety of Wuxi and other cities and causing huge economic losses. Preventing and controlling lake water pollution and maintaining ecological balance to ensure a normal living environment has therefore become one of the most important issues of modern society.

Besides cyanobacteria, other water conditions also require large-scale monitoring, such as red tides at sea and water pollution caused by oil spills.

On July 16, 2010, an oil pipeline in Dalian Bay exploded, causing an estimated tens of thousands of tons of crude oil to leak into the sea, severely harming ecological security and causing incalculable losses to mariculture and tourism. Large numbers of marine organisms died of poisoning or suffocation, and toxic compounds contained in crude oil, such as benzene and toluene, entered the food chain. The potential damage extends further into the local ecosystem: surviving organisms will pass on the effects of the toxins to their offspring for years.

On October 11, 2010, typhoon Fanapi made landfall in Zhangpu, Fujian, bringing southeasterly winds and triggering a large-scale red tide in Yangang, Huidong. Fish in more than 2,500 net cages in Yangang died within three days from red-tide-induced hypoxia, causing direct economic losses of more than 17 million yuan.

Water quality monitoring is the main basis for water quality evaluation and water pollution prevention and control. As water pollution grows more serious, water quality monitoring has become a major issue that must be solved for sustainable social and economic development. At present, four methods are mainly used for inland water quality monitoring in China:

(1) Direct detection. Water samples are collected from the lake to be monitored and analyzed in the laboratory to obtain chemical parameters; the parameters at the sampling points are taken to represent the nearby waters, or statistical methods are used to analyze their spatial and temporal distribution. The advantage of this method is that it can measure many parameters closely related to water quality, including dissolved oxygen, water temperature, pH, conductivity, transparency, permanganate index, ammonia nitrogen, total nitrogen, total phosphorus, chlorophyll-a and total algae density. The drawback is that for large water areas the method is time-consuming, costly and labor-intensive; it is difficult to obtain state parameters over a wide area, and it cannot meet the requirements of real-time, continuous, large-scale monitoring and evaluation of water quality.

(2) Fixed camera monitoring. Taking Taihu Lake as an example, the Wuxi environmental monitoring center has reached an agreement with China Telecom to use the "Global Eye" technology to provide real-time video surveillance of the waters along Meiliang Lake. The system has been largely completed, but several shortcomings remain.

1. High cost: the two parties adopt a service-rental model in which China Telecom pays for all construction, including future maintenance; the Wuxi environmental monitoring center rents the service and pays a network rental fee of roughly 500,000 yuan per year.

2. Mediocre picture quality: the Telecom "Global Eye" uses 10 Mbps optical fiber access, and the video at the monitoring center is not very smooth. The resolution currently used is 4CIF with a frame rate of no less than 20 frames per second; the cameras are mounted on pan-tilt units that allow remote horizontal and vertical rotation and remote adjustment of the lens focal length.

3. Limited monitoring range: close-up views are basically clear, but when the lens zooms out the scene on the water cannot be distinguished. This is related partly to camera quality and partly to the mounting height of the pan-tilt unit.

(3) Manual patrol surveillance. Manual patrols are carried out jointly by the regions and departments along the water. Inspection points are set up along the shore, mainly to record wind direction, water color, polluted area and algae accumulation in the surrounding waters, with the observation frequency determined by the severity of the event. Manual inspection is highly subjective; judgment criteria for water color and algae accumulation are hard to standardize, parameters such as polluted area can only be estimated, and high monitoring frequencies consume large amounts of manpower and material resources.

(4) Remote sensing monitoring. The remote sensing monitoring of inland water quality commonly used at present selects remote sensing band data based on experience, statistical analysis and the spectral characteristics of water quality parameters, performs statistical analysis, and builds retrieval algorithms for the water quality parameters. Since the early 1970s, remote sensing research on terrestrial water bodies has developed from simple water identification to remote sensing monitoring, mapping and prediction of water quality parameters. With deeper research into the spectral characteristics of substances, improved algorithms and continuous innovation in remote sensing technology itself, remote sensing of water quality has developed from qualitative to quantitative, and the number of water quality parameters that can be estimated by remote sensing has gradually increased, including suspended particulate matter, water transparency, chlorophyll-a concentration, dissolved organic matter, the vertical attenuation coefficients of incident and outgoing light in water, and comprehensive pollution indicators such as the trophic state index.

Remote sensing monitoring falls into two categories: one is the direct extraction of lake-related information from high-spatial-resolution data, such as HRV data from the French SPOT satellites, TM (Thematic Mapper) data from the US LANDSAT land observation satellites, or ASTER data carried on the Terra satellite; the other extracts target information from medium- and low-spatial-resolution data, such as AVHRR from the NOAA meteorological satellites and MODIS (Moderate-resolution Imaging Spectroradiometer) carried on the Terra (EOS AM-1) satellite launched by the United States in 1999, combined with the relevant algorithms.

Remote sensing is a widely watched method for large-scale water quality monitoring, but the following problems remain:

1. High-resolution remote sensing images have low temporal resolution and are expensive. For example, the repeat coverage period of ASTER is 4 to 16 days, the revisit period of TM is 16 days, and the orbit of SPOT is "phased" with a repeat coverage period of 26 days. An ASTER image costs about 800 yuan per scene; a TM image 4,000 yuan per scene; SPOT 1/2/3/4 image data 9,900 yuan per scene; SPOT 5 at 10 m and 5 m resolution 14,900 yuan per scene, and at 2.5 m as much as 29,800 yuan per scene.

2. Medium- and low-resolution remote sensing data suffer from the mixed-pixel problem, which limits accuracy. Remote sensing detects ground objects at the level of pixels; a pixel covers a certain area and is rarely composed of a single uniform land cover class, but is generally a mixture of several classes. Although different natural features have different spectral, temporal and angular characteristics, a remotely sensed pixel records only a single set of such characteristics, i.e. the mixed characteristics, which complicates remote sensing interpretation. The ground resolution of the MODIS instrument is 250 m, 500 m and 1000 m; the nadir resolution of AVHRR is 1.1 km, and because of the large scan angle the edges of the image are strongly distorted, so in practice the most useful part lies within ±15° (the ground resolution at 15° is 1.5 km). The mixed-pixel problem is a major obstacle to the further quantitative development of remote sensing.

3. Remote sensing images are easily affected by clouds and the atmosphere, so ground images may be unobtainable or heavily disturbed, and in rainy seasons it is difficult to track dynamic changes of the water surface continuously in real time. The Taihu basin is rainy, with an average annual precipitation of about 1,100 mm, and lies in the subtropical monsoon climate zone. At the turn of spring and summer each year, the warm air flowing north and the cold air flowing south meet here and are evenly matched; the standoff lasts a month or even fifty to sixty days, producing the continuous plum rains. This is also precisely the period when cyanobacteria blooms peak, so remote sensing monitoring is of little use. Satellite sensors receive the solar radiation reflected by ground objects; in the interaction between solar radiation and the Earth's atmosphere, absorption, reflection, scattering and emission occur, all of which distort the signal received by the sensor and degrade image quality. This is especially true for water bodies, whose reflected spectral signal is weak compared with land, so atmospheric conditions strongly disturb the monitoring results.

All kinds of water areas face the above problems whenever an overall visual picture of the water surface is required.

Summary of the Invention

The problem to be solved by the present invention is that existing techniques for monitoring the water surface are inadequate: they cannot meet the requirements of monitoring large water areas, their real-time performance is insufficient, their cost is high, and their accuracy is easily affected. A water-surface monitoring method that is simple, easy to implement, accurate and effective is needed.

The technical solution of the present invention is a water-surface bird's-eye-view imaging method based on cruising images. First an imaging system is established, comprising a server image processing system, a client interface display and interaction system, and a cruising-boat video acquisition system. The cruising-boat video acquisition system comprises a video acquisition device and a GPS positioning device; the server image processing system processes the video data to obtain a bird's-eye-view image of the water surface and feeds it to the client interface display and interaction system. The server image processing system comprises a video stabilization module, a camera calibration module, an image transformation module and an image stitching module; the client interface display and interaction system comprises a map filling and display module and a grid map construction module. The method comprises the following steps:

1) A video camera and a GPS positioning device are installed on the boat. The video camera records the water-surface scene within the field of view along the patrol boat's route and outputs a video frame sequence; the GPS positioning device acquires the GPS information of the cruising boat in real time, and the GPS information and the video frame sequence are matched one to one by time.

2) The video frame sequence is input to the server image processing system. The video stabilization module preprocesses the video frame sequence using global motion estimation, motion compensation and image inpainting, removing the image shake caused by the pitching of the boat and producing a stabilized video sequence.

3) The camera calibration module computes geometric information in the water-surface world coordinate system from the image information acquired by the video camera and completes the camera calibration; the resulting information is used to reconstruct the bird's-eye-view image of the water surface.

4) The image transformation module uses the calibration information from the camera calibration module, including the intrinsic and extrinsic parameters of the video camera and the camera attitude such as the pitch angle, to convert the viewpoint of the video frame sequence from side view to top view, and then uses image reconstruction techniques to fill holes in the top-view image and sharpen it, obtaining a clear bird's-eye view after the viewpoint transformation; the image reconstruction uses template-based and grid-based algorithms.

5) The image stitching module stitches the isolated top views of the video frame sequence into one large image; the stitching relies mainly on image feature point matching and the logarithmic image processing (LIP) model, with the geographic coordinates from the GPS information as an auxiliary.

6) The grid map construction module divides the map of the water surface into grid cells according to the correspondence between latitude/longitude information and map coordinates, and generates an electronic grid map based on a geographic information system (GIS). Scalable Vector Graphics (SVG) is used as the map format, GPS coordinates are converted to water-surface map coordinates, the map is divided into different cells through the correspondence between GPS latitude/longitude and map coordinates, the bird's-eye views are filled into the corresponding cells according to their GPS coordinates, and the resulting overall bird's-eye view of the water area is published on the Web.

In step 2), the processing flow of the video stabilization module is as follows:

21) Global motion estimation: block matching and parametric model estimation are used;

22) Image motion compensation: first, the parameters of the unintentional and intentional motion of the video frame sequence are estimated. Kalman state filtering is used for the intentional motion parameters: a physical state-space model describing the dynamics of the intentional and unintentional motion parameters is built using statistical methods, and the intentional motion parameters are estimated with a Kalman filter. The unintentional motion is then compensated through an image transformation so that the motion of the image sequence agrees with the estimated intentional motion model, achieving video stabilization. Let the transformation parameters of the n-th frame f_n be (T_n, d_n); the transformation model is:

$\bar{p}_n = \bar{T}_n p_n + \bar{d}_n$    (1)

where $\bar{p}_n$ is the point coordinate vector after the image transformation. From the state and observation equations of the Kalman filter we obtain:

$\bar{p}_n = \hat{T}_n p_n + \hat{d}_n = \bigl(\hat{T}_n(\tilde{T}_n)^{-1}\bigr)p_n - \hat{T}_n(\tilde{T}_n)^{-1}\tilde{d}_n + \hat{d}_n$    (2)

Therefore, the transformation of the n-th frame f_n can be described as follows:

$f'_n(p) = f_n\bigl((\bar{T}_n)^{-1} p_n - (\bar{T}_n)^{-1}\bar{d}_n\bigr)$    (3)

23) Image inpainting: an image model is obtained from the image data through filtering, parametric or non-parametric estimation and entropy methods. A digital image inpainting technique based on partial differential equations is used: the edge information of the region to be repaired is exploited, a coarse-to-fine approach estimates the direction of the isophotes, and the information around the region to be repaired is propagated into it to restore the image.

As a preferred embodiment, the video camera of the present invention is a pinhole camera. In step 3), on the basis of conventional camera calibration methods, the intrinsic and extrinsic parameters are calibrated separately. A calibration block of known structure is used as the spatial reference; the three-dimensional coordinates of a set of non-degenerate points on the block and the corresponding image coordinates are used, through a series of mathematical transformations and computations, to obtain the intrinsic and extrinsic parameters of the camera model. The intrinsic calibration is completed in an indoor laboratory environment using a conventional calibration-template-based method; the extrinsic parameters are calibrated online by extracting fixed reference features in the scene.

In step 4), the image transformation module converts the viewpoint of the video frames through inverse image mapping, building a mapping matrix from the destination domain to the reference domain, where the destination domain is the bird's-eye view and the reference domain is the video frame image. The information obtained from camera calibration is used to reconstruct the depth information of the destination domain in advance. For a point (X, Y) of the bird's-eye view to be constructed, its image-plane coordinates (x, y) are computed through the mapping matrix, and the pixel value at (x, y) is assigned to (X, Y), converting the viewpoint of the video frame sequence from side view to top view; image reconstruction uses a template-based algorithm.
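
The following is a minimal sketch of this inverse-mapping idea, assuming the water surface is approximately planar so that the destination-to-reference mapping reduces to a 3x3 homography H derived from the calibration; the numerical values of H, the output size and the file names are illustrative assumptions, not values given in the patent:

    import cv2
    import numpy as np

    # H maps destination (bird's-eye) coordinates (X, Y, 1) to source image
    # coordinates (x, y, 1); it would normally be built from K, R, T of the
    # calibrated camera. The numbers below are placeholders.
    H = np.array([[1.2, 0.1,  -50.0],
                  [0.0, 2.5, -300.0],
                  [0.0, 0.004,  1.0]], dtype=np.float64)

    frame = cv2.imread("stabilized_frame.png")   # one stabilized video frame
    bird_size = (800, 600)                       # (width, height) of the bird's-eye view

    # With WARP_INVERSE_MAP set, warpPerspective evaluates, for every destination
    # pixel (X, Y), the source coordinates H * (X, Y, 1) and samples the frame
    # there, which is exactly the backward (destination-to-reference) mapping.
    bird = cv2.warpPerspective(frame, H, bird_size,
                               flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP)
    cv2.imwrite("bird_view.png", bird)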

Further, the processing flow of the image stitching module in step 5) comprises image preprocessing, image registration and image fusion, wherein:

Image preprocessing handles the reference image and the image to be stitched; the reference image is the first frame of the video sequence. Preprocessing includes the basic image processing operations, building the matching template of the image, and transforming the image. The reference image is used as the matching template and may be cropped according to the actual situation. The image is transformed to extract image features, using the Fourier transform, wavelet transform or Gabor transform; the feature set of the image is then extracted, and the features are used to compute the rough positional relationship between the reference image and the image to be stitched, i.e. to roughly locate the image to be stitched, find the approximate overlap region, narrow the matching range and increase speed;

Image registration uses an improved normalized grayscale correlation (NGC) method combined with the GPS positioning data. In the improved NGC method, the gray level of each pixel in the matching template is multiplied by the gray level of the corresponding pixel covered by the template in the image to be stitched, and the resulting sums (NC values) are stored in a two-dimensional array; both the matching template and the image to be stitched are bird's-eye views already transformed in step 4):

$NC(i, j) = \sum_{m=1}^{M}\sum_{n=1}^{N} S_{i,j}(m, n)\, T(m, n)$    (10)

where T is the matching template containing M×N pixels, and S_{i,j} is the region of the image covered by the template when its origin is moved to position (i, j) of the image to be stitched, called a sub-image. The position of the matching template in the image to be stitched is determined by the sub-image most similar to it. Treating the matching template T and the sub-image S_{i,j} as two vectors, the cosine formula (11) for the angle between vectors is used to compute the angle θ between S_{i,j} and T; the position of the S_{i,j} that minimizes θ is the position of the template in the image:

$\cos\theta_{i,j} = \dfrac{\sum_{m=1}^{M}\sum_{n=1}^{N} S_{i,j}(m,n)\,T(m,n)}{\sqrt{\sum_{m=1}^{M}\sum_{n=1}^{N} [S_{i,j}(m,n)]^2}\;\sqrt{\sum_{m=1}^{M}\sum_{n=1}^{N} [T(m,n)]^2}}$    (11)
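
A minimal NumPy sketch of formulas (10) and (11), assuming the template and the image are 2-D grayscale arrays; the array names and the brute-force search over all positions are illustrative, not prescribed by the patent:

    import numpy as np

    def ngc_match(image, template):
        """Slide the template over the image and return the position (i, j)
        whose sub-image maximizes cos(theta) of formula (11), i.e. minimizes
        the angle between template and sub-image treated as vectors."""
        H, W = image.shape
        M, N = template.shape
        t = template.astype(np.float64)
        t_norm = np.sqrt(np.sum(t ** 2))
        best_pos, best_cos = (0, 0), -1.0
        for i in range(H - M + 1):
            for j in range(W - N + 1):
                sub = image[i:i + M, j:j + N].astype(np.float64)
                nc = np.sum(sub * t)                         # formula (10)
                cos_theta = nc / (np.sqrt(np.sum(sub ** 2)) * t_norm + 1e-12)
                if cos_theta > best_cos:
                    best_cos, best_pos = cos_theta, (i, j)
        return best_pos, best_cos

    # usage: position, score = ngc_match(bird_view_gray, template_gray)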

Image fusion uses the logarithmic image processing (LIP) model: the images to be stitched are converted into the LIP domain for processing, achieving image enhancement and brightness correction.

As a preferred embodiment, during image mosaicking the brightness value of the pixel located at position $\vec{r}$ is treated as a three-dimensional vector, called the color vector and denoted $\vec{F}(\vec{r})$:

$\vec{F}(\vec{r}) = \bigl(F_r(\vec{r}),\; F_g(\vec{r}),\; F_b(\vec{r})\bigr)^{T}$    (18)

The addition and scalar multiplication operations defined in the LIP domain, together with the mapping function ψ, are used in the computation, and the R, G and B channels of the color image are processed separately. On top of the LIP model for grayscale images, a logarithmic color image processing (CLIP) model is established, and the water-surface image is processed in the CLIP domain so that the image is closer to real visual characteristics.

Preferably, when the map of the water surface is divided into grid cells in step 6), it is divided into a hexagonal grid of uniform cell size.
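
A small sketch of assigning a map point to a cell of a uniform hexagonal grid, using axial coordinates and cube rounding for pointy-top hexagons; the coordinate convention and cell size are illustrative assumptions, as the patent does not specify an indexing scheme:

    import math

    def point_to_hex_cell(x, y, size):
        """Return the axial (q, r) index of the pointy-top hexagon of the given
        size (center-to-corner distance) that contains map point (x, y)."""
        qf = (math.sqrt(3.0) / 3.0 * x - y / 3.0) / size
        rf = (2.0 / 3.0 * y) / size
        # cube rounding: round each cube coordinate, then fix the one with the
        # largest rounding error so that q + r + s == 0 still holds
        sf = -qf - rf
        q, r, s = round(qf), round(rf), round(sf)
        dq, dr, ds = abs(q - qf), abs(r - rf), abs(s - sf)
        if dq > dr and dq > ds:
            q = -r - s
        elif dr > ds:
            r = -q - s
        return q, r

    # usage: cell = point_to_hex_cell(map_x, map_y, size=500.0)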

The present invention uses a shipborne camera as the front end of the system to collect water-surface images and input the video sequence into the system. The system first uses video stabilization to remove the image shake caused by the pitching of the boat, obtaining a stabilized video sequence. The image transformation module then uses camera calibration to automatically calibrate the intrinsic and extrinsic camera parameters and to measure the camera attitude such as the pitch angle, uses image transformation to convert the viewpoint from side view to top view, and then uses hole filling, image sharpening and other techniques to process the top view, obtaining a clear bird's-eye view after the viewpoint transformation. Next, the image stitching module stitches the isolated video images into one large image, with image feature point matching and CLIP processing as the main basis and GPS geographic coordinates as an auxiliary. The stitched large image of the water surface is published on the Web in the form of a grid map.

To address the inability of satellite remote sensing to obtain ground images, or the heavy interference it suffers, in overcast and rainy weather, the present invention proposes a new method for monitoring the state of large water areas based on cruising boats and video cameras, complementing existing monitoring methods. High-resolution remote sensing images are extremely expensive, whereas the shipborne camera of the present invention, mounted on existing patrol boats, has moderate cost, can collect images of equal or even higher resolution, and its close-range imaging is far less disturbed by the atmosphere and weather. Building a Taihu cyanobacteria distribution map system based on cruising images therefore has broad application prospects.

The invention builds a video processing, analysis, integration and publishing system with patrol video recording, positioning, map reproduction and human-computer interaction functions. The system integrates GPS geolocation data and cruising image data to provide a map display, which can be used to guide the patrol direction of the boats and help them tour the lake area without omission, avoiding blind cruising.

The invention proposes a new method for constructing a bird's-eye view of a lake area, reducing dependence on satellite remote sensing systems. Image mapping and reconstruction are realized on the basis of camera calibration, converting perspective views into bird's-eye views; image stitching is then used, and the brightness differences between block images caused by uneven illumination are eliminated, forming a bird's-eye view of the whole lake area.

The proposed grid map model builds high-resolution grid image blocks that are processed independently and updated dynamically, which makes processing convenient and highly real-time. Building the Taihu grid image on a GIS map window with SVG technology can both display the lake-wide distribution of cyanobacteria in real time and extract individual grid blocks to show high-resolution detail.

The proposed image processing based on the logarithmic-domain LIP and CLIP models solves overflow and distortion problems in image computation and brings it closer to the human visual system. The ordinary addition and scalar multiplication commonly used in image processing are inconsistent with the laws of image formation; for images formed by transmitted light, for human visual perception and for many real digital images they are unsuitable and often cause computational overflow. Logarithmic-domain image processing solves this problem well: its closed operations do not overflow.

The present invention has the following advantages:

1. Low distortion and little noise interference. Unlike remote sensing imaging, the radiation path in camera imaging from patrol boats is very short, so its influence is small; the imaging is not affected by clouds, there is no mixed-pixel problem, and no complex preprocessing such as atmospheric compensation is needed. Real-time images of lake conditions can be captured even in overcast and rainy weather.

2. Short cycle and high timeliness. According to surveys, to grasp the water surface conditions in real time, routine monitoring must report data once a day; during the worst period of a cyanobacteria outbreak, emergency monitoring items must be measured at least twice a day, and automatic online monitoring must report data every 2 to 4 hours. The cycle of satellite remote sensing images is too long to meet real-time requirements. In the present invention, the lake images collected by patrol boats can be processed and published within seconds, so the timeliness is very high.

3. Low investment and good compatibility. Water conditions can be monitored simply by fitting video cameras to existing patrol boats; compared with satellite remote sensing images that must be purchased at high prices, this is a cheap and efficient means of monitoring.

Brief Description of the Drawings

Fig. 1 is a structural diagram of the imaging system of the present invention.

Fig. 2 is a flow chart of the method of the present invention.

Fig. 3 is a processing flow chart of the video stabilization module.

Fig. 4 is the ideal pinhole model of the camera.

Fig. 5 is the perspective projection model based on a planar template used for the intrinsic calibration of the camera calibration module of the present invention.

Fig. 6 is the camera imaging model used for the extrinsic calibration of the camera calibration module of the present invention.

Fig. 7 is a schematic diagram of forward mapping in image mapping.

Fig. 8 is a schematic diagram of backward mapping in image mapping.

Fig. 9 is a schematic diagram of the backward mapping adopted in the present invention.

Fig. 10 is a schematic diagram of the image reconstruction method of the present invention.

Fig. 11 is a flow chart of image stitching in the present invention.

Fig. 12 is a schematic diagram of the linear transition method for overlap regions in image stitching.

Fig. 13 is a schematic diagram of the conversion between GPS coordinates and electronic map coordinates in the present invention.

Fig. 14 is a schematic diagram of the hexagonal grid used in the present invention.

Fig. 15 is a schematic diagram of the WebGIS platform structure.

Fig. 16 is a schematic diagram of the functions of Ajax in the Web publishing of the electronic grid map of the present invention.

Fig. 17 is a schematic diagram of the Ajax asynchronous communication process in the Web publishing of the electronic grid map of the present invention.

Detailed Description of the Embodiments

The present invention builds a bird's-eye view of the water surface based on a shipborne camera. Unlike land-based fixed-camera monitoring systems, the cruising nature of the images and the fact that water-surface imagery differs from land scenery mean that the system needs special processing. Fig. 1 shows the architecture of the imaging system of the present invention, comprising a server image processing system, a client interface display and interaction system, and a cruising-boat video acquisition system. The cruising-boat video acquisition system comprises a video acquisition device and a GPS positioning device; the server image processing system processes the video data to obtain a bird's-eye-view image of the water surface and feeds it to the client interface display and interaction system. The server image processing system comprises a video stabilization module, a camera calibration module, an image transformation module and an image stitching module; the client interface display and interaction system comprises a map filling and display module and a grid map construction module. As shown in Fig. 2, the present invention comprises the following steps:

1) A video camera and a GPS positioning device are installed on the boat. The video camera records the water-surface scene within the field of view along the patrol boat's route and outputs a video frame sequence; the GPS positioning device acquires the GPS information of the cruising boat in real time, and the GPS information and the video frame sequence are matched one to one by time (a small code sketch of this time-based matching is given after the list of steps).

2) The video frame sequence is input to the server image processing system. The video stabilization module preprocesses the video frame sequence using global motion estimation, motion compensation and image inpainting, removing the image shake caused by the pitching of the boat and producing a stabilized video sequence.

3) The camera calibration module computes geometric information in the water-surface world coordinate system from the image information acquired by the video camera and completes the camera calibration; the resulting information is used to reconstruct the bird's-eye-view image of the water surface.

4) The image transformation module uses the calibration information from the camera calibration module, including the intrinsic and extrinsic parameters of the video camera and the camera attitude such as the pitch angle, to convert the viewpoint of the video frame sequence from side view to top view, and then uses image reconstruction techniques to fill holes in the top-view image and sharpen it, obtaining a clear bird's-eye view after the viewpoint transformation; the image reconstruction uses template-based and grid-based algorithms.

5) The image stitching module stitches the isolated top views of the video frame sequence into one large image; the stitching relies mainly on image feature point matching and the logarithmic image processing (LIP) model, with the geographic coordinates from the GPS information as an auxiliary.

6) The grid map construction module divides the map of the water surface into grid cells according to the correspondence between latitude/longitude information and map coordinates, and generates an electronic grid map based on a geographic information system (GIS). Scalable Vector Graphics (SVG) is used as the map format, GPS coordinates are converted to water-surface map coordinates, the map is divided into different cells through the correspondence between GPS latitude/longitude and map coordinates, the bird's-eye views are filled into the corresponding cells according to their GPS coordinates, and the resulting overall bird's-eye view of the water area is published on the Web (see also the sketch after this list of steps).
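
A minimal sketch of the one-to-one association by time mentioned in step 1), assuming the GPS fixes and the video frames each carry a timestamp in seconds; the data structures and field layout are illustrative assumptions, not specified by the patent:

    import bisect

    def match_frames_to_gps(frame_times, gps_fixes):
        """For each frame timestamp return the GPS fix closest in time.
        gps_fixes is a list of (timestamp, lat, lon) tuples sorted by
        timestamp; frame_times is a list of frame timestamps."""
        gps_times = [t for t, _, _ in gps_fixes]
        matches = []
        for ft in frame_times:
            k = bisect.bisect_left(gps_times, ft)
            # compare the neighbouring fixes and keep the nearer one
            candidates = [c for c in (k - 1, k) if 0 <= c < len(gps_fixes)]
            best = min(candidates, key=lambda c: abs(gps_times[c] - ft))
            matches.append(gps_fixes[best])
        return matches

And a minimal sketch of step 6): converting a GPS fix into water-surface map coordinates and then into a grid cell index, using a simple linear latitude/longitude-to-map approximation and, for brevity, a rectangular grid (the patent's preferred hexagonal grid is sketched earlier). The reference corner, scale factors and cell size are placeholders; an actual system would use the projection of its GIS base map:

    import math

    # assumed north-west corner of the lake map and map scale
    ORIGIN_LAT, ORIGIN_LON = 31.60, 119.90
    PX_PER_DEG_LAT = 80000.0
    PX_PER_DEG_LON = 80000.0 * math.cos(math.radians(ORIGIN_LAT))  # E-W degrees are shorter
    CELL_SIZE = 512        # grid cell size in map pixels

    def gps_to_map(lat, lon):
        """Convert GPS latitude/longitude to map pixel coordinates (x right, y down)."""
        x = (lon - ORIGIN_LON) * PX_PER_DEG_LON
        y = (ORIGIN_LAT - lat) * PX_PER_DEG_LAT
        return x, y

    def map_to_grid_cell(x, y):
        """Return the (column, row) index of the grid cell containing the map point."""
        return int(x // CELL_SIZE), int(y // CELL_SIZE)

    # usage: the bird's-eye view tile is filled into the cell of its GPS position
    col, row = map_to_grid_cell(*gps_to_map(31.45, 120.15))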

The implementation of the present invention is described in detail below, taking the monitoring of the cyanobacteria distribution in the Taihu Lake waters as an example.

1.1.1. Video stabilization technology

Video stabilization has three main steps: global motion estimation of the image, image motion compensation, and video inpainting. The algorithm flow chart is shown in Fig. 3:

1) Global motion estimation of the image

Since the motion of the camera relative to the background is a global motion, i.e. the image change it causes is a global change, global motion estimation can be used for compensation in the video stabilization algorithm. Global motion estimation methods fall mainly into block-matching methods and parametric-model estimation methods.

The basic idea of the block-matching method is to divide the current frame into blocks and, for each block of the current frame, search within a certain region of the reference frame (the search window) for the block with the minimum matching error according to a given matching criterion; that block is the matching block of the current block, and the coordinate displacement between the matching block and the current block is the motion vector. The complexity of block-matching motion estimation depends mainly on the computational cost of the matching criterion and the search algorithm used. The method is computationally simple, but it usually cannot estimate well the motion caused by camera rotation, zooming and so on.

Parametric-model estimation describes different forms of motion by building different parametric models. A parametric model describes the orthographic or perspective projection of the three-dimensional motion onto the image plane, using a small number of parameters to describe the motion of every pixel. Compared with block matching, parametric models describe motion more compactly and are less susceptible to noise. The main models in use are the six-parameter model based on parallel projection and the eight-parameter model based on perspective projection; such algorithms can describe camera motion such as rotation and zooming.
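
A minimal sketch of estimating global frame-to-frame motion with a low-order parametric model, here an affine model fitted robustly to tracked corner points with OpenCV; the feature tracker, the affine model and the parameter values are illustrative choices, not the specific estimator prescribed by the patent:

    import cv2
    import numpy as np

    def estimate_global_motion(prev_gray, curr_gray):
        """Estimate a 2x3 affine matrix describing the global motion from the
        previous frame to the current frame using tracked corner points."""
        pts_prev = cv2.goodFeaturesToTrack(prev_gray, maxCorners=300,
                                           qualityLevel=0.01, minDistance=10)
        pts_curr, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray,
                                                       pts_prev, None)
        good_prev = pts_prev[status.flatten() == 1]
        good_curr = pts_curr[status.flatten() == 1]
        # robust (RANSAC by default) fit of a rotation + translation + scale model
        A, inliers = cv2.estimateAffinePartial2D(good_prev, good_curr)
        return A  # [[a, -b, tx], [b, a, ty]] for a similarity-type motion

    # usage: A = estimate_global_motion(gray_frame_n_minus_1, gray_frame_n)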

2) Image motion compensation

The purpose of video stabilization is to reduce or eliminate the unintentional, harmful inter-frame motion of the image sequence caused by the shaking of the camera platform, while preserving the main or intentional motion present in the sequence. Motion compensation involves two steps: first, the actual motion and the intentional motion of the image sequence are estimated parametrically; then the unintentional motion is compensated by an appropriate image transformation, achieving video stabilization.

Traditional algorithms for estimating the intentional motion parameters include curve fitting and moving-average filtering. Curve fitting usually applies least squares with a low-order model to fit a curve to the actual parameters obtained from the global motion estimation, thereby estimating the intentional motion parameters. The motion curve is a superposition of intentional motion and random jitter; the intentional motion usually corresponds to low-frequency components and the random jitter to high-frequency components, so moving-average filtering can be used to filter out the high-frequency components that represent random jitter. Both methods, however, have significant drawbacks: they could be used in the present invention, but their computational cost is huge, and with the speed of current hardware their real-time performance cannot be guaranteed, whereas a target tracking system has high real-time requirements for data processing and should finish processing one frame before the next frame arrives.

Kalman state filtering can be used to estimate the intentional motion parameters. Its main idea is to use statistical methods to build a physical state-space model describing the dynamics of the intentional and unintentional motion parameters, and to estimate the intentional motion parameters with a Kalman filter. The dynamic motion model constructed by this algorithm can realistically describe the inter-frame motion caused by camera motion and obtains the best intentional motion parameters through recursive estimation. In addition, the method is flexible: by appropriately modifying the state-space model it can exploit all kinds of available prior knowledge, such as the characteristics of intentional and unintentional motion.

Both traditional curve fitting and moving-average filtering use the motion parameters of subsequent image frames, so they clearly cannot meet high real-time requirements. Moreover, curve fitting has difficulty adapting to more complex forms of motion, while moving-average filtering suffers from "over-stabilization" and "under-stabilization". The Kalman state filter, in contrast, estimates the current intentional motion parameters from the actual image motion parameters observed up to the present moment, with a small computational cost, overcoming the inability of traditional motion filtering algorithms to meet high real-time requirements.

The purpose of the image transformation is to compensate the unintentional motion so that the motion of the image sequence agrees with the estimated intentional motion model. Let the transformation parameters of the n-th frame be (T_n, d_n); the transformation model is:

$\bar{p}_n = \bar{T}_n p_n + \bar{d}_n$    (1)

where $\bar{p}_n$ is the point coordinate vector after the image transformation. From the state and observation equations of the Kalman filter we obtain:

$\bar{p}_n = \hat{T}_n p_n + \hat{d}_n = \bigl(\hat{T}_n(\tilde{T}_n)^{-1}\bigr)p_n - \hat{T}_n(\tilde{T}_n)^{-1}\tilde{d}_n + \hat{d}_n$    (2)

Therefore, the transformation of the n-th frame f_n can be described as follows:

$f'_n(p) = f_n\bigl((\bar{T}_n)^{-1} p_n - (\bar{T}_n)^{-1}\bar{d}_n\bigr)$    (3)
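
A minimal sketch of this compensation step for a pure-translation motion model: the accumulated frame trajectory is smoothed with a simple constant-velocity Kalman filter to obtain the intentional motion, and each frame is warped by the difference between the actual and the smoothed motion. The one-dimensional state model, the noise covariances and the translation-only simplification are illustrative assumptions, not the exact state-space model of the patent:

    import cv2
    import numpy as np

    def kalman_smooth_1d(trajectory, q=1e-3, r=1e-1):
        """Estimate the intentional (smooth) component of a 1-D trajectory with a
        constant-velocity Kalman filter; q and r are process/measurement noise."""
        x = np.zeros(2)                        # state: [position, velocity]
        P = np.eye(2)
        F = np.array([[1.0, 1.0], [0.0, 1.0]])
        H = np.array([[1.0, 0.0]])
        Q = q * np.eye(2)
        R = np.array([[r]])
        smooth = []
        for z in trajectory:
            x = F @ x                          # predict
            P = F @ P @ F.T + Q
            S = H @ P @ H.T + R                # update with the measured position z
            K = P @ H.T @ np.linalg.inv(S)
            x = x + K @ (np.array([z]) - H @ x)
            P = (np.eye(2) - K @ H) @ P
            smooth.append(x[0])
        return np.array(smooth)

    def compensate(frame, dx_actual, dy_actual, dx_smooth, dy_smooth):
        """Warp the frame so that its motion follows the smoothed trajectory."""
        M = np.float32([[1, 0, dx_smooth - dx_actual],
                        [0, 1, dy_smooth - dy_actual]])
        h, w = frame.shape[:2]
        return cv2.warpAffine(frame, M, (w, h))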

3) Image inpainting technology

There are currently two broad classes of image inpainting techniques, and both can be applied in the present invention. One is digital image inpainting for repairing small-scale defects. It uses the edge information of the region to be repaired, estimates the direction of the isophotes with a coarse-to-fine approach, and uses a propagation mechanism to spread information into the region to be repaired, so as to obtain a good result. In essence it is an inpainting algorithm based on partial differential equations (PDEs), whose main idea is to use the heat diffusion equation of physics to propagate the information surrounding the region to be repaired into that region. The other class is image completion, which fills in large blocks of missing information in an image. This class currently also includes two methods; one is restoration based on image decomposition, whose main idea is to decompose the image into a structure part and a texture part, repair the structure part with an inpainting algorithm and fill the texture part with texture synthesis.

The present invention deals mainly with water-surface scenes, which contain little texture information, and the video sequence provides many data sources for inpainting, so large blocks of information are not lost; simply solving partial differential equations to diffuse the neighbourhood is sufficient.

From a mathematical point of view, image inpainting fills the image into the region to be repaired according to the information around that region. Image inpainting, however, is usually an ill-posed problem, because there is generally not enough information to guarantee that the damaged part can be restored uniquely and correctly; people therefore analyze it from the standpoint of visual psychology and introduce various assumptions and constraints to solve the problem. Image inpainting thus belongs to the research field of image restoration.

During acquisition an image is usually affected by various factors that degrade its quality. In the field of image restoration, the commonly used degradation model is

$I_0 = I + N$    (4)

where I_0 is the observed image, I is the original image (I = {I(x)}), and N is additive white noise. For most image inpainting problems the data model has the form:

$I_0|_{\Omega \setminus D} = [I + N]_{\Omega \setminus D}$    (5)

where Ω denotes the whole image domain, D the region to be repaired in which information is missing, Ω\D the region in which no information is lost, I_0 the available image on Ω\D, and I the target image to be restored. Assuming N is Gaussian, the energy function E of the data model is usually defined by the minimum mean square error:

$E[I_0 \mid I] = \dfrac{\lambda}{2}\int_{\Omega \setminus D} (I - I_0)^2 \, dx$    (6)

Since there is no usable data in the repair region D, the image (prior) model is more important for inpainting algorithms than for other traditional restoration problems (such as denoising and deblurring); the image model can be obtained from the image data through filtering, parametric or non-parametric estimation and entropy methods.
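
A minimal sketch of PDE-style inpainting using OpenCV's Navier-Stokes-based inpainting, which diffuses surrounding information into a masked region; the file names, the mask construction and the inpaint radius are illustrative assumptions, and the patent does not prescribe this particular implementation:

    import cv2
    import numpy as np

    frame = cv2.imread("stabilized_frame.png")

    # mask: non-zero where pixels are missing, e.g. the border revealed by the
    # motion-compensation warp; here the mask is simply every all-black pixel
    mask = np.all(frame == 0, axis=2).astype(np.uint8) * 255

    # Navier-Stokes based inpainting (cv2.INPAINT_NS) propagates neighbouring
    # information into the masked region, in the spirit of PDE inpainting
    repaired = cv2.inpaint(frame, mask, 3, cv2.INPAINT_NS)
    cv2.imwrite("repaired_frame.png", repaired)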

1.1.2.摄像机标定技术1.1.2. Camera Calibration Technology

The pinhole model is the simplest approximation suitable for many computer-vision applications; a pinhole camera performs a perspective projection. Figure 4 depicts the pinhole camera model, which defines the following four coordinate systems [2]:

Euclidean world coordinate system (subscript w): the origin is at O_w.

Euclidean camera coordinate system (subscript c): the origin is at the lens focus C = O_c, and the axis Z_c coincides with the optical axis and points out of the image plane.

Euclidean image-plane coordinate system (subscript i): the origin is the camera principal point (the intersection of the optical axis with the image plane); X_i and Y_i lie in the image plane, parallel to the X and Y axes of the camera coordinate system.

Image array coordinate system (subscript f): the pixel coordinates of the image in the computer frame buffer.

Based on these four coordinate systems, the pinhole model performs the linear transformation from 3-D world coordinates to 2-D pixel coordinates:

[U, V, W]ᵀ = K (R, T) [X_w, Y_w, Z_w, 1]ᵀ    (7)

X_f = U / W,  Y_f = V / W    (8)

where K is the camera intrinsic matrix and (R, T) the extrinsic matrix, with R a 3×3 rotation matrix and T a translation vector. Linear camera calibration amounts to solving for the intrinsic and extrinsic matrices. In a high-precision visual inspection system, camera lens distortion must also be taken into account.
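As a small numerical illustration of equations (7)–(8) (not taken from the patent itself; the intrinsic and extrinsic values below are made-up assumptions), a world point can be projected to pixel coordinates as follows:

```python
import numpy as np

# Assumed intrinsics: focal lengths fx, fy and principal point (cx, cy) in pixels
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

# Assumed extrinsics: camera tilted 10 degrees about the X axis, offset 3 m
t_angle = np.deg2rad(10.0)
R = np.array([[1.0, 0.0,              0.0],
              [0.0, np.cos(t_angle), -np.sin(t_angle)],
              [0.0, np.sin(t_angle),  np.cos(t_angle)]])
T = np.array([[0.0], [3.0], [0.0]])

Xw = np.array([[2.0], [0.0], [20.0], [1.0]])     # homogeneous world point (metres)

UVW = K @ np.hstack([R, T]) @ Xw                 # equation (7)
Xf, Yf = UVW[0, 0] / UVW[2, 0], UVW[1, 0] / UVW[2, 0]   # equation (8)
print(Xf, Yf)                                    # pixel coordinates of the point
```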

Camera-calibration algorithms are most commonly divided into three families: traditional calibration, camera self-calibration, and calibration based on active vision. All of them could be used for calibration in the present invention; active-vision calibration, however, imposes many constraints on camera motion, while the motion of a shipborne camera is complex, so its advantages must be absorbed and a calibration algorithm suited to the actual situation adopted.

Traditional camera calibration is carried out under specific experimental conditions with a calibration block of known structure as the spatial reference; from the three-dimensional coordinates of a set of non-degenerate points on the block and the corresponding image coordinates, the intrinsic and extrinsic parameters of the camera model are obtained through a series of mathematical transformations and computations. Traditional calibration already has fairly mature methods and theory.

Self-calibration needs no known reference object; it calibrates the camera solely from the correspondences between images of the surroundings taken while the camera moves. Its advantage is that it runs online, independent of any calibration rig. An ideal self-calibration technique would rely on no reference block at all, only on correspondences between features in a series of captured scenes, but this demands extremely good image recognition and feature matching. Practical camera self-calibration more or less relies on particular reference objects such as circles, straight lines and corners, or directly exploits everyday objects with regular geometry, such as doors, windows, boxes and cases.

Self-calibration based on active vision likewise requires no known reference object; it determines the intrinsic parameters by controlling the camera motion so as to obtain specific constraints, which greatly simplifies calibration. To avoid solving nonlinear equations, however, the method places many restrictions on the camera motion.

Since the world coordinate system changes continuously as the boat cruises, while the intrinsics of a fixed camera remain constant, the present invention calibrates the intrinsic and extrinsic parameters separately: the intrinsics are calibrated in an indoor laboratory environment with the traditional template-based method to guarantee accuracy, and the extrinsics are calibrated online by extracting relevant features from the scene, such as the position of the horizon and feature points on the bow.

● Intrinsic calibration based on a planar template

Zhang's planar-template method needs only a template printed with a grid pattern (Figure 5) to be rotated three or more times in front of the camera; calibration is completed without knowing the motion parameters of the template, the procedure is simple and the accuracy high, making it the representative method. Under the planar-template condition Z_w = 0 can be assumed, and equation (7) can be rewritten as

[U, V, W]ᵀ = H [X_w, Y_w, 1]ᵀ,  i.e.  s [u, v, 1]ᵀ = H [X_w, Y_w, 1]ᵀ    (9)

where s is a scale factor, (u, v) are pixel coordinates, and H is the homography from the template plane to the camera; it establishes the perspective relation between the template plane and the image plane. Given N known pairs of calibration points, the homography can be solved with the maximum-likelihood criterion.
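A minimal sketch of solving the homography of equation (9) from known template–image point pairs, here using OpenCV's findHomography (the point values below are illustrative assumptions, not data from the patent):

```python
import numpy as np
import cv2

# Assumed template-plane points (X_w, Y_w) in millimetres and their observed pixels (u, v)
template_pts = np.array([[0, 0], [30, 0], [60, 0],
                         [0, 30], [30, 30], [60, 30]], dtype=np.float32)
image_pts    = np.array([[102, 88], [176, 90], [251, 93],
                         [100, 161], [175, 163], [250, 166]], dtype=np.float32)

# Robust estimate of H such that s*[u, v, 1]^T = H*[X_w, Y_w, 1]^T
H, _ = cv2.findHomography(template_pts, image_pts, cv2.RANSAC, 3.0)

# Reproject one template point to check the fit
p = H @ np.array([30.0, 30.0, 1.0])
print(p[:2] / p[2])   # should land close to the observed pixel (175, 163)
```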

● Online extrinsic calibration

The patrol boat travels at about 20 km/h, so its position relative to the Taihu coordinate frame changes continuously; the boat itself is located with the GPS positioning system, while the camera mounted on it is stationary relative to the hull. The origin of the world coordinate system used in calibration can therefore simply be chosen as a point on the hull, and since the local water surface can be approximated as a plane, calibration reduces to solving the mapping between two planes. Figure 6 shows the abstract shipborne camera model.

Assume the camera's roll angle relative to the horizontal plane is zero and its yaw relative to the boat's heading is zero (if not, they can be made so by adjustment), so only the tilt angle t needs to be solved. When the horizon is visible in the image, t can be computed from the vertical image coordinate of the horizon; when the horizon is not visible but the bow is, a calibration target can be placed on the bow and the imaging matrix solved directly.
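As a hedged illustration of the horizon-based case: for an ideal pinhole camera with zero roll, a horizontal viewing ray projects fy·tan(t) pixels above the principal point, so the tilt can be recovered from the horizon row. The sketch below assumes that relation and uses illustrative intrinsics; it is not the patent's exact formula.

```python
import numpy as np

def tilt_from_horizon(v_horizon, fy, cy):
    """Estimate the camera tilt t (radians, positive = looking down) from the
    horizon row, assuming zero roll and the pinhole relation
        v_horizon = cy - fy * tan(t)
    """
    return np.arctan((cy - v_horizon) / fy)

# Illustrative numbers: principal point row 240, focal length 800 px,
# horizon detected at image row 99 -> tilt of about 10 degrees downwards
t = tilt_from_horizon(v_horizon=99.0, fy=800.0, cy=240.0)
print(np.rad2deg(t))
```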

1.1.3. Image Transformation Technology

● Image warping algorithm

Let a spatial point X be projected to a reference viewpoint C_r and a destination viewpoint C_d, giving a reference pixel X_r and a destination pixel X_d. Forward mapping works from the reference domain R_r towards the destination domain R_d, building from the known information the relation X_d = f_forward(X_r), as shown in Figure 7; backward mapping works the other way round, building X_r = f_backward(X_d), as shown in Figure 8.

Because the depth is known in the reference domain, forward mapping is efficient to execute, but it commonly suffers from: (1) out-of-bounds mapping, since not every reference pixel maps into the destination domain, so many invalid out-of-range pixels are computed; (2) holes, since the information shortage caused by image expansion produces many small gaps; (3) occlusion and many-to-one mapping, with unavoidable redundant computation for occluded pixels and repeated computation when several reference pixels map to one destination pixel.

Backward mapping follows a rendering-driven principle and builds the mapping from the destination domain backwards, which effectively removes the problems of forward mapping; but because the depth of the destination domain is unknown, backward mapping usually requires a pixel search of average complexity O(width/2), which becomes the computational bottleneck. It can be optimised in two ways: (1) use the epipolar geometry to narrow the search; (2) reconstruct the 3-D information of the scene to obtain the depth of the destination domain in advance, turning the backward mapping into a forward-mapping problem.

Since the present invention uses camera calibration, the depth information of the destination domain can be reconstructed in advance, which simplifies the image mapping; the procedure is shown in Figure 9: for a point (X, Y) of the bird's-eye view being built, its image-plane coordinates (x, y) are computed through the inverse mapping of the imaging matrix, and the pixel value at (x, y) is assigned to (X, Y). In practice issues such as rounding and out-of-range coordinates arise and must be handled with suitable strategies.
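A minimal sketch of this backward mapping using OpenCV: given a homography that maps bird's-eye-view (ground-plane) coordinates to image-plane coordinates, warpPerspective with WARP_INVERSE_MAP samples the source frame for every destination pixel, which is exactly the per-pixel assignment described above. The matrix name and output size are assumptions used only for illustration.

```python
import numpy as np
import cv2

def birds_eye_view(frame, H_bird_to_image, out_size=(800, 600)):
    """Backward-map a side-view frame into a bird's-eye view.

    H_bird_to_image maps (X, Y, 1) in the bird's-eye map to (x, y, 1) in the
    frame, so it is passed directly as the destination-to-source transform.
    """
    return cv2.warpPerspective(frame, H_bird_to_image, out_size,
                               flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP,
                               borderMode=cv2.BORDER_CONSTANT)
```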

● Image reconstruction algorithms

Image reconstruction in the present invention may use splat-based (template-based) and mesh-based algorithms, as shown in Figure 10:

(1) Splat-based algorithms. First used in volume rendering, their main idea is to reconstruct each reference pixel with a kernel whose size varies with the pixel's depth, orientation and other parameters. A pixel is thus expanded into a small region by a Gaussian or other kernel, and overlapping regions are blended; the kernel size can be estimated per pixel or from the bounding box of a scene node. Thanks to the variety of reconstruction kernels, splat-based algorithms give good rendering results, but they have drawbacks: they do not distinguish spatial structure, so aliasing occurs easily; the splats are computed entirely on the CPU, which is expensive at high resolution or with complex kernels; and the kernel size is hard to tune, too large degrading the display and increasing the cost of edge blending, too small leaving many small holes along edges.

(2) Mesh-based algorithms. These weave the discrete pixels into a continuous mesh surface according to an assumed spatial continuity, with the pixels as mesh vertices and the faces filled by interpolation. They can be executed in hardware, but cannot resolve the spatial topology reliably: when adjacent pixels do not lie on the same continuous surface, a non-existent surface is created, often producing the "rubber sheet" artefact between the foreground and background of the scene.

The present invention mainly reconstructs water-surface images; the water surface can be approximated as a plane with little three-dimensional structure, and high-speed computing platforms such as CUDA (Compute Unified Device Architecture) can absorb the heavy computation, so the splat-based algorithm is the better choice.
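A purely illustrative sketch of splat-based reconstruction: each warped sample is spread into the output grid with a small Gaussian kernel and the accumulated weights are normalised. The kernel width, radius and array shapes are assumptions, not values from the patent.

```python
import numpy as np

def splat_reconstruct(points_xy, values, out_shape, sigma=1.0, radius=2):
    """Scatter sparse samples (x, y, value) into a dense image with Gaussian splats."""
    acc = np.zeros(out_shape, dtype=np.float64)
    wsum = np.zeros(out_shape, dtype=np.float64)
    h, w = out_shape
    for (x, y), v in zip(points_xy, values):
        cx, cy = int(round(x)), int(round(y))
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                px, py = cx + dx, cy + dy
                if 0 <= px < w and 0 <= py < h:
                    wgt = np.exp(-(dx * dx + dy * dy) / (2.0 * sigma * sigma))
                    acc[py, px] += wgt * v        # weighted contribution
                    wsum[py, px] += wgt           # accumulated kernel weight
    return np.where(wsum > 0, acc / np.maximum(wsum, 1e-12), 0.0)
```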

1.1.4. Image Stitching Technology

Image stitching consists of three main steps: image preprocessing, image registration and image fusion, as shown in Figure 11. Preprocessing aims to guarantee the accuracy of the subsequent registration. It applies some geometric and coordinate transformations to the original images, including basic image-processing operations such as histogram processing and smoothing filtering; builds matching templates for the images; transforms the images, for example with Fourier, wavelet or Gabor transforms; extracts feature sets; and performs a rough localisation to find the approximate overlap region, which narrows the matching range and increases speed.

● Image registration algorithms

Many image-registration methods have been produced at home and abroad; each is oriented to a particular range of applications and has its own characteristics. According to the image information they exploit, they fall into four classes: registration based on grey-level information, on the transform domain, on features, and on models. All four can be used for image matching in the present invention; the grey-level-based class gives the best results but also has the highest computational cost, so several methods are often combined when both efficiency and quality are required.

Grey-level-based methods register images directly from their intensity information, by globally optimising some similarity measure between pixel pairs (mutual information, minimum mean-square error, etc.). They require no segmentation or feature extraction, so they are accurate and robust, but they are very sensitive to intensity changes, do not fully exploit the statistics of the grey levels, and depend heavily on the intensity of every single pixel.

A classic transform-domain method is phase correlation: the images are taken from the spatial to the frequency domain with the Fourier transform, and registration follows from the Fourier shift theorem. Since translation, rotation and scaling all have frequency-domain counterparts, matching can be done entirely in the frequency domain. Transform-domain methods are insensitive to noise, computationally efficient, supported by a mature fast algorithm (the FFT) and easy to implement in hardware; in general they provide a good initial registration for image stitching.

Feature-based methods first process the two images to be registered, extracting feature sets that satisfy the particular application; these features serve as control structures, and the mapping between the control structures of the two images is sought. After feature extraction the number of feature points drops sharply, so registration is faster, but its quality largely depends on the accuracy of feature extraction and feature matching.

Model-based methods perform a nonlinear correction of the images according to a mathematical model of the image distortion. A typical algorithm is the transform-optimisation method proposed by Szeliski: a transformation model between the image sequences is established first, and the model parameters are then found iteratively by an optimisation algorithm, registering the images to be stitched. The method can handle translation, rotation, scaling and other geometric transformations between image sequences, needs no feature points, converges quickly and is statistically optimal; but to converge, the initial estimate, i.e. the manually chosen initial correspondences, must be sufficiently accurate, otherwise registration fails.

Considering that water-surface images contain few feature corners, while cyanobacteria show rich, changing colours such as red, yellow, green, brown and black depending on age, physiological state and ambient light, which directly produces colour and brightness differences in the images, and that red tides, oil spills and other sea-surface conditions also show clear colour differences, the present invention performs registration with an improved normalised grey-level correlation (NGC) method combined with the GPS positioning data.

The original NGC method multiplies the grey level of each pixel in the matching template by that of the corresponding pixel covered by the template in the image to be stitched; the resulting sums (NC values) are stored in a two-dimensional array, as in equation (10), and the position of the array element with the largest NC value is the position of the template in the image:

NC(i, j) = Σ_{m=1}^{M} Σ_{n=1}^{N} S_{i,j}(m, n) · T(m, n)    (10)

where T is the matching template of M×N pixels, and S_{i,j} is the sub-image, the region of the image covered by the template when its origin is moved to position (i, j) of the image to be stitched, with i, j the coordinates of the sub-image in the image. The position of the template in the image to be stitched is decided by the sub-image most similar to it: treating T and S_{i,j} as two vectors, the angle θ between them is obtained from the cosine formula (11), and the S_{i,j} that minimises θ gives the position of the template in the image.

Given the colour variability of cyanobacteria, the present invention considers not relying on the brightness parameter alone; matching on parameters such as hue H and saturation S may give better results.

The dot product itself does not localise accurately, because the NC values in the array are also affected by the pixel grey levels: if some region of the image is very bright, its pixels have large grey levels and its NC value naturally grows. This algorithm therefore focuses on the angle between the vectors, which removes the influence of brightness changes on the localisation.

From the above analysis, the following formula is obtained:

cos θ_{i,j} = Σ_{m=1}^{M} Σ_{n=1}^{N} S_{i,j}(m, n) T(m, n) / ( √(Σ_{m=1}^{M} Σ_{n=1}^{N} [S_{i,j}(m, n)]²) · √(Σ_{m=1}^{M} Σ_{n=1}^{N} [T(m, n)]²) )    (11)
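Equation (11) is the normalised cross-correlation of the template with each sub-image. A brief sketch (illustrative, not the patent's exact implementation) using OpenCV's TM_CCORR_NORMED, which computes exactly this cosine score; the GPS-derived search window is an assumed coarse prior that merely restricts where the template is searched:

```python
import numpy as np
import cv2

def ngc_match(image, template, search_window=None):
    """Locate `template` in `image` by normalised grey-level correlation (eq. 11).

    search_window : optional (x0, y0, x1, y1) box derived from GPS-based coarse
                    alignment, restricting the region in which the template is sought.
    """
    roi, offset = image, (0, 0)
    if search_window is not None:
        x0, y0, x1, y1 = search_window
        roi, offset = image[y0:y1, x0:x1], (x0, y0)
    scores = cv2.matchTemplate(roi, template, cv2.TM_CCORR_NORMED)
    _, best, _, loc = cv2.minMaxLoc(scores)        # loc = (i, j) of the best sub-image
    return (loc[0] + offset[0], loc[1] + offset[1]), best
```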

● Image blending algorithms

When the two images to be stitched differ, the seam becomes very visible, for example when uneven illumination at capture time leaves a large brightness mismatch in the overlap, or when lens distortion deforms the image geometrically. Three methods are commonly used to remove abrupt intensity changes and make the intensity transition smooth:

(1) Weighted averaging

Szeliski uses a "hat function" to weight the corresponding pixels of every overlapping frame in the average; the function is lowest at the image edges and contributes most at the centre. The weight assigned to each image sample in the composite is defined so that pixels closer to the image centre contribute more to the final result. The weight distribution is triangular, shaped like a hat, hence the name hat function.

(2) Effective weights based on Euclidean distance

Every pixel of an image is assigned a weight proportional to its distance to the edge (or to the nearest invisible point). The main aim during stitching is to reduce the intensity contribution of pixels close to the edges. The blending algorithm computes a distance map d(x, y), using the block distance and the Euclidean distance to the nearest transparent point or edge, and fuses all the warped images as

C(x, y) = Σ_k w(d(x, y)) I′_k(x, y) / Σ_k w(d(x, y))    (12)

where w is a monotone function, w(x) = x, I_k is the intensity function of the k-th warped image, and d is simply the minimum of the distances to the four sides of the rectangle.

(3) Multi-resolution spline technique

Burt (P. J. Burt) uses the Laplacian multi-resolution pyramid to decompose each image into a set of images in different frequency bands; in each band a weighted average is taken near the overlap boundary, and finally the composites from all bands are summed into one image. The multi-resolution spline method processes the region near the boundary in every frequency band, so the workload is large but the quality high.

To remove the seam in the overlap region, a linear transition across the overlap currently works well, as shown in Figure 12. Concretely, let the overlap have width L and the transition factor be σ (0 ≤ σ ≤ 1); with X_min, X_max and Y_min, Y_max the extreme x and y values of the overlap of the two images, the transition factor is

σ = (X_max − x) / (X_max − X_min)    (13)

and the image intensity in the overlap region is

I = σ · I_A(x, y) + (1 − σ) · I_B(x, y)    (14)

where I_A and I_B are the corresponding pixel values of images A and B. This makes the transition smooth, with no visible step.
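A short sketch of the linear transition of equations (13)–(14) over a horizontal overlap; the array shapes and overlap bounds are illustrative assumptions:

```python
import numpy as np

def linear_blend(img_a, img_b, x_min, x_max):
    """Blend two aligned grayscale images (2-D arrays of the same size)
    over the horizontal overlap columns [x_min, x_max)."""
    out = img_a.astype(np.float64).copy()
    out[:, x_max:] = img_b[:, x_max:]                 # right of the overlap: image B only
    x = np.arange(x_min, x_max)
    sigma = (x_max - x) / float(x_max - x_min)        # equation (13)
    sigma = sigma[np.newaxis, :]                      # broadcast over rows
    out[:, x_min:x_max] = (sigma * img_a[:, x_min:x_max] +
                           (1.0 - sigma) * img_b[:, x_min:x_max])   # equation (14)
    return out
```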

● Logarithmic image processing (LIP) technique

The present invention introduces a further image-processing framework, the logarithmic image processing model (LIP). Compared with ordinary-domain digital image processing, processing in the LIP domain has distinctive advantages:

(1) The LIP model is a complete mathematical theory which prescribes a special set of operations for bounded grey-tone images. The ordinary addition and scalar multiplication used in most image processing are inconsistent with the laws of image formation and are unsuitable for images formed by transmitted light, for human visual perception and for many practical digital images.

(2) The LIP operations are closed. In digital image processing the ordinary sum of two pixel grey values may exceed the grey-level range of the image, i.e. overflow; the closed operations of LIP handle this problem well.

(3) LIP introduces a spatial mapping function ψ that converts grey values to and from the logarithmic space, so the complicated LIP addition and subtraction operations (⊕, ⊖) become direct additions and subtractions of the mapped functions, greatly reducing the amount of computation.

(4) LIP has been shown in practice to agree with the multiple-transmission and multiple-reflection imaging models and with the law of human brightness perception (brightness inversely proportional to saturation); it is consistent with the psychological notion of contrast, closer to the human visual system and to the real environment as the eye perceives it.

In LIP-domain processing the grey-tone function of an image is defined on a non-empty spatial domain D, called the spatial support; D is a bounded real interval [0, M], with M usually taken as 256. The basic LIP operations are defined as follows:

∀ f, g ∈ S:  f ⊕ g = f + g − f·g / M

∀ f ∈ S, ∀ λ ∈ R:  λ ⊗ f = M − M (1 − f/M)^λ

The subtraction operation defines the difference between two grey-tone functions f and g:

f ⊖ g = M (f − g) / (M − g)

Equipped with these special operations ⊕ and ⊖, the grey-tone functions form a vector space E, on which the mapping ψ is defined:

Ψ(f) = −M ln(1 − f / M)    (17)

Under the mapping ψ, the space E is algebraically isomorphic to the real space R, so the complicated ⊕ and ⊖ operations can be turned into direct additions and subtractions of the mapped functions, which greatly reduces the computation. LIP techniques can be used for image background updating, image reconstruction, image enhancement, contrast estimation, edge detection, image segmentation and so on. The present invention uses LIP to solve the image-enhancement and brightness-correction problems: the images to be stitched are converted to the LIP domain for processing, which copes better with the large brightness differences between images caused by uneven sunlight, backlit imaging and similar conditions. Applying logarithmic-domain image processing in the present invention brings a very clear improvement.

● Colour logarithmic image processing (CLIP) technique

In conventional colour image processing, a colour image is treated as an ordinary two-dimensional image whose pixel brightness simply has three degrees of freedom (R, G, B). In CLIP colour logarithmic-domain processing, the brightness value of the pixel at each position is instead treated as a three-dimensional vector, called the colour vector and written

F(v) = [F_r(v), F_g(v), F_b(v)]ᵀ    (18)

The ⊕ and ⊖ operations and the mapping function ψ defined in the LIP domain are applied to the R, G and B channels separately; on top of the LIP model for grey-tone images, a logarithmic CLIP model for colour images is built, and the cyanobacteria images are processed in the CLIP domain so that they come closer to real visual perception.

1.1.5. Grid Map Construction Technology

● Grid image generation based on SVG maps

SVG (Scalable Vector Graphics) is an XML-based specification for describing two-dimensional scalable vector graphics, defined by the W3C to keep pace with the rapid development of Internet applications. The weaknesses of the traditional static HTML page-description language, fixed and limited tags with no semantics and no support for vector graphics, have become increasingly apparent and can no longer satisfy the development of WebGIS. The SWF format proposed by Macromedia is popular on the Web for its vectorised graphics, small files and interactivity, but it still falls short of SVG in some respects. XML is widely recognised as the future unified format standard and is used in ever more fields; SVG, as the XML subset for describing vector graphics, offers a new solution to the problems WebGIS faces, such as static content, heterogeneous data formats, platform-dependent presentation of Web content, lack of interactivity and slow network transmission.

Compared with HTML, SVG has the following advantages:

(1) It breaks the constraint of HTML's fixed tag set, so documents can be richer, more complex and more easily organised into a complete information system;

(2) SVG is a vector image format, well suited to transmission and use on the network; SVG images are generally smaller and download faster than other Web image formats such as GIF or JPEG;

(3) The vector image is built from text. This gives SVG files good cross-platform behaviour and lets them be edited and modified conveniently through the DOM (Document Object Model); another notable advantage is that the text inside an SVG file can be found by Web search engines as keywords;

(4) It is dynamic and interactive. SVG images can respond to user actions in different ways, for example with highlighting, sound, special effects or animation. In addition, Microsoft's IE 6.0 already integrates a plug-in for viewing SVG files, which makes browsing SVG more convenient and easier.

Compared with SVG, the shortcomings of SWF are:

(1) The SWF standard is not open. SWF is a relatively closed technology with no fully integrated path to other open standards; as XML and other open standards develop, this incompatibility will become more and more prominent.

(2) SWF is poorly editable. SWF is the output format of Flash; as the final animation format, its authoring process is sealed inside the SWF file and cannot be edited again, whereas SVG uses a text format that can be opened and edited with ordinary tools.

(3) SWF does not support image search. Because SWF is not a text format, the text cannot exist independently of the image, so an image-search capability like SVG's cannot be built.

Given the above characteristics and advantages of SVG, the present invention chooses SVG as the map format for storing, transmitting and displaying the cyanobacteria-distribution images of the Taihu waters.

● Conversion of GPS latitude/longitude data to local map coordinates

The positioning information received by GPS is based on the WGS-84 coordinate system and cannot be applied directly to the electronic map. The GPS positions in WGS-84 must therefore be put into correspondence with the coordinates of the electronic map, so that the grid partition of the SVG map can be carried out accurately.

The positioning data obtained from the GPS receiver are three-dimensional, containing longitude L, latitude B and height H, whereas the point (Xt, Yt) finally required on the electronic map is a two-dimensional pixel; considering also that the height data have no unified datum, a simplified algorithm that discards the height H can be adopted. The WGS-84 coordinates (L, B, H) are first projected onto the Gauss plane to obtain the corresponding point (x, y), and a simplified planar four-parameter transformation then rotates, translates and scales it into the plane rectangular coordinates (Xt, Yt) of the electronic map. In this way the positioning data received by the GPS receiver can be displayed directly on the electronic map; the conversion flow is shown in Figure 13.

The WGS-84 coordinates are projected onto the Gauss plane with the Gauss forward projection formula, equation (19), where B is the geodetic latitude of the projected point; l = L − L0, with L the geodetic longitude of the point and L0 the geodetic longitude of the central meridian; N is the prime-vertical radius of curvature at the projected point; t = tan B; and η = e′·cos B, with e′ the second eccentricity of the ellipsoid.

x = X₀ + (1/2)·N·t·cos²B·l² + (1/24)·N·t·(5 − t² + 9η² + 4η⁴)·cos⁴B·l⁴ + (1/720)·N·t·(61 − 58t² + t⁴ + 270η² − 330η²t²)·cos⁶B·l⁶
y = N·cosB·l + (1/6)·N·(1 − t² + η²)·cos³B·l³ + (1/120)·N·(5 − 18t² + t⁴ + 14η² − 58η²t²)·cos⁵B·l⁵    (19)
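In practice this projection does not have to be coded by hand. The sketch below uses the pyproj library (an assumption of this example, not something mandated by the patent) to project WGS-84 longitude/latitude onto a transverse-Mercator (Gauss) plane with an assumed central meridian of 120°E, roughly appropriate for Taihu Lake:

```python
from pyproj import Transformer

# Assumed Gauss / transverse-Mercator plane: central meridian 120 deg E, WGS-84 ellipsoid
transformer = Transformer.from_crs(
    "EPSG:4326",
    "+proj=tmerc +lat_0=0 +lon_0=120 +k=1 +x_0=500000 +y_0=0 +ellps=WGS84 +units=m",
    always_xy=True,
)

lon, lat = 120.20, 31.25          # illustrative point near Taihu Lake
x, y = transformer.transform(lon, lat)
# pyproj returns (easting, northing); formula (19) labels the northing x and the easting y
print(x, y)
```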

Only a coordinate transformation in the two-dimensional plane is needed, so the simpler four-parameter transformation model can be used. The mathematical meanings of its four parameters in the Gauss plane are: ΔX, the translation of coordinate x; ΔY, the translation of coordinate y; m, the scale factor; and θ, the rotation.

To solve for the transformation parameters, the coordinates of at least three corresponding points in the WGS-84 (Gauss-plane) system and in the map coordinate system must be known. Let these common points P1, P2, …, PN (N ≥ 3) have coordinates (Xg, Yg)_i and (Xd, Yd)_i in the two systems; then for point Pi

[Xd − Xg; Yd − Yg]_i = [1, 0, Xg, Yg; 0, 1, Yg, Xg]_i · [ΔX; ΔY; m; θ]    (20)

The transformation parameters (ΔX, ΔY, m, θ) are obtained by the least-squares principle. Substituting a Gauss-plane point (x, y) to be converted into formula (21), the four parameters give the plane rectangular coordinates (Xt, Yt) of the point:

[Xt; Yt] = [ΔX; ΔY] + (1 + m)·[x; y] + [0, θ; θ, 0]·[x; y]    (21)

After this conversion the WGS-84 coordinates become the map coordinates in use, completing the transformation from latitude/longitude to map coordinates.
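A brief numpy sketch of solving formula (20) by least squares and applying formula (21); the function names and control-point arrays are illustrative assumptions:

```python
import numpy as np

def fit_four_params(gauss_pts, map_pts):
    """Least-squares estimate of (dX, dY, m, theta) from N >= 3 common points.

    gauss_pts, map_pts : (N, 2) arrays of (Xg, Yg) and (Xd, Yd), following eq. (20).
    """
    rows, rhs = [], []
    for (xg, yg), (xd, yd) in zip(gauss_pts, map_pts):
        rows.append([1.0, 0.0, xg, yg]); rhs.append(xd - xg)
        rows.append([0.0, 1.0, yg, xg]); rhs.append(yd - yg)
    params, *_ = np.linalg.lstsq(np.asarray(rows), np.asarray(rhs), rcond=None)
    return params                        # dX, dY, m, theta

def apply_four_params(params, x, y):
    """Map a Gauss-plane point (x, y) to map coordinates (Xt, Yt) with eq. (21)."""
    dX, dY, m, theta = params
    Xt = dX + (1.0 + m) * x + theta * y
    Yt = dY + (1.0 + m) * y + theta * x
    return Xt, Yt
```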

● Grid image generation

The invention draws the Taihu water-area map as an SVG image, splits the map into grid cells through the correspondence between latitude/longitude and map coordinates, fills the real-time local cyanobacteria-distribution images into the corresponding cells, and so generates the overall cyanobacteria distribution map of the water area. Compared with a rectangular grid, a hexagonal grid is closer to a circle and better matches the camera's imaging field of view, which helps image-block cropping and stitching; the lake area is therefore divided with a hexagonal grid, as shown in Figure 14.
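A small sketch of generating such a hexagonal grid as SVG polygons in Python; the cell size, extent and styling are illustrative assumptions, and a real map layer would also carry the geographic metadata:

```python
import math

def hex_grid_svg(width, height, r, out_path="hex_grid.svg"):
    """Write a flat-topped hexagonal grid covering a width x height pixel area."""
    dx, dy = 1.5 * r, math.sqrt(3.0) * r          # horizontal / vertical cell spacing
    cells, col, x = [], 0, 0.0
    while x < width + r:
        y = 0.0 if col % 2 == 0 else dy / 2.0     # odd columns shifted half a cell
        while y < height + r:
            pts = [(x + r * math.cos(math.radians(60 * k)),
                    y + r * math.sin(math.radians(60 * k))) for k in range(6)]
            pts_str = " ".join(f"{px:.1f},{py:.1f}" for px, py in pts)
            cells.append(f'<polygon points="{pts_str}" fill="none" stroke="gray"/>')
            y += dy
        x += dx
        col += 1
    svg = (f'<svg xmlns="http://www.w3.org/2000/svg" width="{width}" height="{height}">'
           + "".join(cells) + "</svg>")
    with open(out_path, "w") as f:
        f.write(svg)

hex_grid_svg(800, 600, 40)
```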

● A lightweight WebGIS platform combining MapTile and vector-data technology

In a WebGIS platform based on MapTile technology, the raster map is first processed into an image pyramid and cut into tiles, which are quadtree-encoded and stored on the server; when the client zooms or pans, it only has to compute the tile URLs dynamically, fetch them and display them. The tiles are small, so browsing is smooth when network conditions are good. Because all the images are generated in advance and kept on the server, the server needs storage devices of very large capacity; and since all the attribute elements of the map are baked into each tile, effects such as highlighting are hard to achieve.

A lightweight WebGIS platform based on vector data mainly uses SVG (Scalable Vector Graphics) to present the map. The vector map data are merely stored on the server and sent to the client in one piece on request; basic operations such as zooming, panning and querying are then performed on the client through the interaction of the SVG image with JavaScript, without further exchanges with the server.

In line with the characteristics of the present invention, MapTile and vector-data technology are combined to build a lightweight WebGIS platform that publishes the real-time distribution of cyanobacteria in the Taihu waters. The platform publishes the Taihu water-area map with the vector technology SVG, and uses MapTile technology to fill the local cyanobacteria-distribution images into the already partitioned SVG map; the platform structure is sketched in Figure 15.

By function, the server side can be divided into a map-publishing server and a grid-image-publishing server, which in the present invention may run on the same machine. When a client accesses the server with a Web browser, the map-publishing service sends the SVG map of the Taihu waters to the client for display; afterwards, basic operations such as map zooming, panning, querying and the stitching of grid images are carried out on the client by JavaScript. The grid-image server supplies the local cyanobacteria-distribution images of the different regions and sends them to the client, where the client-side script assembles them.

The communication between the client and the grid-image server is implemented with Ajax. Ajax (Asynchronous JavaScript and XML) is a combination of several technologies, or rather a design approach, bringing together JavaScript, XHTML and CSS, the DOM, XML, XSLT and XMLHttpRequest; the role each plays in Ajax is shown in Figure 16.

The whole exchange is asynchronous, as shown in Figure 17; "asynchronous" refers to the way Ajax contacts the server. In the traditional model, the Web browser would refresh the current window every time the user sent a latitude/longitude request to the server for a local cyanobacteria image block. Figure X shows the asynchronous model adopted by Ajax: the browser obtains the required image blocks without waiting on user actions and without refreshing the whole window. Data packaged in XML are passed back and forth, and the running JavaScript code communicates with the server; only when the result arrives at the client is the browser notified to fill the image block into the corresponding grid cell at a suitable time.

Claims (7)

1. An imaging method for a water-surface aerial view based on cruise images, characterized in that an imaging system is first established, comprising a server image-processing system, a client interface display and interaction system, and a cruising-boat video acquisition system; the cruising-boat video acquisition system comprises a video acquisition device and a GPS positioning device; the server image-processing system processes the video data to obtain the water-surface aerial-view image and feeds it to the client interface display and interaction system; wherein the server image-processing system comprises a video stabilization module, a camera calibration module, an image transformation module and an image stitching module, and the client interface display and interaction system comprises a map filling and display module and a grid-map construction module; the method comprises the following steps:
1) a video camera and a GPS positioning device are installed on the boat; the camera records the water-surface scene within its field of view along the patrol route and outputs a video frame sequence, while the GPS device obtains the boat's GPS information in real time during the cruise, the GPS information corresponding one-to-one in time with the video frames;
2) the video frame sequence is fed to the server image-processing system; the video stabilization module pre-processes it by global motion estimation, motion compensation and image restoration, removing the drift caused by the rocking of the boat and yielding a stabilized video image sequence;
3) the camera calibration module computes the geometric information of the water-surface spatial coordinate system from the image information obtained from the camera, completing the camera calibration; the resulting information is used to reconstruct the aerial-view image of the water surface;
4) the image transformation module uses the calibration information from the camera calibration module, including the camera's intrinsic and extrinsic parameters and camera attitude such as the pitch angle, to convert the viewpoint of the frame sequence from a side view to a top view, and then uses image reconstruction to fill holes in and sharpen the top-view image, obtaining a clear aerial view after the viewpoint transformation, the image reconstruction using splat-based and mesh-based algorithms;
5) the image stitching module pieces the isolated top-view frames together one by one into a large image; the stitching relies on image feature-point matching and the logarithmic image processing (LIP) model, assisted by the geographic coordinates from the GPS information;
6) the grid-map construction module partitions the water-surface map into grid cells according to the correspondence between latitude/longitude and map coordinates and generates an electronic grid map based on a geographic information system (GIS), using Scalable Vector Graphics (SVG) as the map format; the GPS coordinates are converted to water-surface map coordinates, the map is divided into cells through the correspondence between GPS latitude/longitude and map coordinates, the aerial-view images are filled into the corresponding map cells according to the GPS coordinates, and the resulting overall aerial view of the water area is published on the Web.
2. The water-surface aerial-view imaging method based on cruise images according to claim 1, characterized in that in step 2) the processing flow of the video stabilization module is as follows:
21) global motion estimation: a block-matching algorithm and a parametric-model estimation method are used;
22) image motion compensation: the intentional and unintentional motion of the frame sequence is first estimated; the intentional motion parameters are estimated with a Kalman state-filtering method, a physical state-space model is built with statistical methods to describe the dynamics of the intentional and unintentional motion parameters, and the intentional motion parameters are estimated with a Kalman filter; the unintentional motion is then compensated by an image transformation so that the motion of the image sequence agrees with the estimated intentional motion model, achieving video stabilization; assuming the transformation parameters of the n-th frame f_n are (T_n, d_n), the transformation model is
p̄_n = T̄_n p_n + d̄_n    (1)
where p_n is the point-coordinate vector after the image transformation; from the state and observation equations of the Kalman filter,
p̄_n = T̂_n p_n + d̂_n = (T̂_n (T̃_n)⁻¹) p_n − T̂_n (T̃_n)⁻¹ d̃_n + d̂_n    (2)
so the transformation of the n-th frame f_n can be described as
f′_n(p) = f_n((T̄_n)⁻¹ p_n − (T̄_n)⁻¹ d̄_n)    (3);
23) image restoration: an image model is obtained from the image data by filtering, parametric or non-parametric estimation and entropy methods; a digital image inpainting technique based on partial differential equations is adopted, which uses the edge information around the region to be repaired, estimates the direction of the isophotes with a coarse-to-fine scheme, and propagates the information surrounding the region to be repaired into it, accomplishing the image restoration.
3. The water-surface aerial-view imaging method based on cruise images according to claim 1, characterized in that the camera is a pinhole camera, and in the camera calibration of step 3) the intrinsic and extrinsic parameters are calibrated separately on the basis of the traditional camera-calibration method: with a calibration block of known structure as the spatial reference, the intrinsic and extrinsic parameters of the camera model are obtained from the three-dimensional coordinates of a set of non-degenerate points on the block and the corresponding image coordinates through a series of mathematical transformations and computations; the intrinsic calibration is completed in an indoor laboratory environment with the traditional template-based method, while the extrinsic calibration is performed online by extracting fixed reference features from the scene.
4. The water-surface aerial-view imaging method based on cruise images according to claim 1, characterized in that in step 4) the image transformation module converts the viewpoint of the video frames by backward image mapping: a mapping matrix from the destination domain to the reference domain is established, the destination domain being the aerial view and the reference domain the video frame image; the depth information of the destination domain is reconstructed in advance from the information obtained by camera calibration; for a point with coordinates (X, Y) in the aerial view to be built, its image-plane coordinates (x, y) are computed from the mapping matrix, and the pixel value at (x, y) is assigned to (X, Y), converting the viewpoint of the frame sequence from a side view to a top view; the image reconstruction uses the splat-based algorithm.
5. the water surface general view formation method based on the image that cruises according to claim 1 is characterized in that the treatment scheme of the image mosaic module of step 5) comprises image pre-service, image registration, image co-registration, wherein:
The image pre-service is carried out pre-service to reference picture and image to be spliced, reference picture is got first two field picture of video sequence, pre-service comprises the basic operation of Flame Image Process, set up the matching template of image and image is carried out conversion, wherein with reference picture as matching template, can do some shearings according to actual conditions, image is carried out conversion in order to extract characteristics of image, comprise Fourier transform, wavelet transformation, the Gabor conversion, extract the characteristic set of image afterwards, utilize the rough position relation of feature calculation reference picture and image to be spliced, promptly treat stitching image and carry out coarse localization, find overlapping region roughly, dwindle matching range, raising speed;
Image registration adopts improved Normalized Grey Level level correlation technique NGC to carry out image registration in conjunction with the GPS locator data, improved Normalized Grey Level level correlation technique NGC is: the gray level that the gray level of pixel in the matching template of image be multiply by the respective pixel that is covered by matching template in the image to be spliced, the summation NC value that obtains is stored in the two-dimensional array, the general view that described matching template and image to be spliced have not all transformed through step 4):
$NC(i,j)=\sum_{m=1}^{M}\sum_{n=1}^{N}S_{i,j}(m,n)\,T(m,n)$    (10)
wherein T is the matching template containing M × N pixels, and S_{i,j} is the sub-image, i.e. the region of the image to be stitched covered by the template when the starting point of the moving matching template is at position (i, j); the position of the matching template in the image to be stitched is determined by the sub-image most similar to the matching template: regarding the matching template T and the sub-image S_{i,j} as two vectors, the cosine formula (11) for the angle between vectors is used to solve for the angle θ between S_{i,j} and T, and the position of the S_{i,j} that minimizes θ is the position of the template in the image (a minimal matching sketch is given after this claim):
$\cos\theta_{i,j}=\dfrac{\sum_{m=1}^{M}\sum_{n=1}^{N}S_{i,j}(m,n)\,T(m,n)}{\sqrt{\sum_{m=1}^{M}\sum_{n=1}^{N}\bigl[S_{i,j}(m,n)\bigr]^{2}}\;\sqrt{\sum_{m=1}^{M}\sum_{n=1}^{N}\bigl[T(m,n)\bigr]^{2}}}$    (11);
Image fusion adopts the logarithmic image processing model LIP: the image to be stitched is converted into the LIP domain for processing, realizing image enhancement and gamma correction (a per-channel LIP sketch is given after claim 6).
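To make the registration criterion of equations (10) and (11) concrete, here is a minimal NumPy sketch of normalized grey-level correlation matching; it is an illustration rather than the claimed implementation, and the GPS-guided restriction of the search range is replaced by a plain exhaustive scan.

    # Minimal sketch of NGC matching, eqs. (10)-(11): treat the template T and each
    # M×N sub-image S_ij as vectors and pick the position with the largest cos(theta),
    # i.e. the smallest angle theta between the two vectors.
    import numpy as np

    def ngc_match(image: np.ndarray, template: np.ndarray):
        """Return the (i, j) position of the best match of `template` in `image` (both grayscale)."""
        M, N = template.shape
        Hgt, Wid = image.shape
        t = template.astype(np.float64)
        t_norm = np.sqrt((t * t).sum())

        best, best_cos = (0, 0), -1.0
        for i in range(Hgt - M + 1):
            for j in range(Wid - N + 1):
                s = image[i:i + M, j:j + N].astype(np.float64)
                nc = (s * t).sum()                                           # eq. (10)
                cos_theta = nc / (np.sqrt((s * s).sum()) * t_norm + 1e-12)   # eq. (11)
                if cos_theta > best_cos:
                    best_cos, best = cos_theta, (i, j)
        return best, best_cos

In the claim, the GPS positioning data first narrows the candidate (i, j) range, which is what keeps this correlation search tractable on full-resolution frames.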
6. The imaging method of a water-surface aerial view based on cruise images according to claim 5, characterized in that, in the image fusion, the brightness value of the pixel located at position $v$ is regarded as a three-dimensional vector, called the colour vector and denoted $\vec{F}(v)$:

$\vec{F}(v)=\begin{bmatrix}F_{r}(v)\\ F_{g}(v)\\ F_{b}(v)\end{bmatrix}$    (18)

in the computation, the LIP-domain operators $\oplus$ and $\otimes$ and the mapping function ψ are applied to the R, G and B channels of the colour image separately, so that, on the basis of the LIP model for grey-level images, a logarithmic colour-image processing model CLIP is established; the water-surface image is processed in the CLIP domain, making the image closer to the true visual characteristics.
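Claim 6 extends the grey-level LIP model to the three colour channels (the CLIP model). The sketch below shows the classical LIP operators ⊕ and ⊗ and the isomorphism ψ applied independently to the R, G, B channels; the grey-tone range M = 256 and the enhancement factor are assumptions for illustration, not the patent's parameters.

    # Minimal per-channel LIP sketch: the classical operators f⊕g and λ⊗f and the
    # mapping function ψ, applied to each of the R, G, B channels of a colour image.
    import numpy as np

    M = 256.0  # grey-tone range of the LIP model (assumption)

    def lip_add(f, g):            # f ⊕ g = f + g − f·g/M
        return f + g - f * g / M

    def lip_scalar(lam, f):       # λ ⊗ f = M − M·(1 − f/M)^λ
        return M - M * np.power(1.0 - f / M, lam)

    def psi(f):                   # ψ(f) = −M·ln(1 − f/M), the LIP isomorphism
        return -M * np.log(1.0 - f / M)

    def psi_inv(y):               # inverse mapping back to grey tones
        return M * (1.0 - np.exp(-y / M))

    def clip_enhance(rgb: np.ndarray, lam: float = 1.3) -> np.ndarray:
        """Illustrative CLIP-domain enhancement of an HxWx3 uint8 image, channel by channel."""
        out = np.empty_like(rgb, dtype=np.float64)
        for c in range(3):                       # R, G, B processed separately
            f = np.clip(rgb[..., c].astype(np.float64), 0.0, M - 1.0)
            out[..., c] = psi_inv(lam * psi(f))  # equivalent to λ ⊗ f, computed via ψ
        return np.clip(out, 0, 255).astype(np.uint8)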
7. The imaging method of a water-surface aerial view based on cruise images according to claim 1, characterized in that, in step 6), when the map of the water surface is partitioned by a mesh, it is divided into a hexagonal-mesh map whose cells are of uniform size.
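A common way to realize the uniform hexagonal partition of claim 7 is axial hex indexing with cube rounding; the sketch below assigns a planar map coordinate to its hexagonal cell. The pointy-top orientation, the cell size and the sample coordinates are assumptions for illustration.

    # Minimal sketch: assign a planar map point (x, y) to a hexagonal grid cell of
    # uniform size, using axial coordinates and cube rounding (pointy-top hexes).
    import math

    def point_to_hex(x: float, y: float, size: float):
        """Return the axial (q, r) index of the hexagon of circumradius `size` containing (x, y)."""
        q = (math.sqrt(3) / 3 * x - y / 3) / size
        r = (2 * y / 3) / size
        s = -q - r                                  # cube constraint: q + r + s = 0
        rq, rr, rs = round(q), round(r), round(s)
        dq, dr, ds = abs(rq - q), abs(rr - r), abs(rs - s)
        if dq > dr and dq > ds:                     # fix the component with the largest rounding error
            rq = -rr - rs
        elif dr > ds:
            rr = -rq - rs
        return int(rq), int(rr)

    # Example: bucket a water-surface sample into its display cell (hypothetical numbers).
    print("hex cell:", point_to_hex(125.4, 78.9, size=10.0))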
CN2010105211918A 2010-10-27 2010-10-27 Cruise image based imaging method of water-surface aerial view Expired - Fee Related CN101976429B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2010105211918A CN101976429B (en) 2010-10-27 2010-10-27 Cruise image based imaging method of water-surface aerial view

Publications (2)

Publication Number Publication Date
CN101976429A true CN101976429A (en) 2011-02-16
CN101976429B CN101976429B (en) 2012-11-14

Family

ID=43576311

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2010105211918A Expired - Fee Related CN101976429B (en) 2010-10-27 2010-10-27 Cruise image based imaging method of water-surface aerial view

Country Status (1)

Country Link
CN (1) CN101976429B (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030165255A1 (en) * 2001-06-13 2003-09-04 Hirohiko Yanagawa Peripheral image processor of vehicle and recording medium
CN101138015A (en) * 2005-03-02 2008-03-05 株式会社纳维泰 Map display apparatus and method
US20080252729A1 (en) * 2007-01-03 2008-10-16 Delta Electronics, Inc. Advanced bird view visual system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Ma Jun, Zhu Hengjun, "Using P_Buffer to simulate the rear-view mirror and bird's-eye view of a virtual scene", 《计算机辅助设计与图形学报》 (Journal of Computer-Aided Design & Computer Graphics), Vol. 18, No. 5, 31 May 2006. 2 *

Cited By (62)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102256061B (en) * 2011-07-29 2013-06-05 武汉大学 Two-dimensional and three-dimensional hybrid video stabilizing method
CN102256061A (en) * 2011-07-29 2011-11-23 武汉大学 Two-dimensional and three-dimensional hybrid video stabilizing method
US10602129B2 (en) 2011-09-28 2020-03-24 Kabushiki Kaisha Topcon Image acquiring device and image acquiring system
CN103033169B (en) * 2011-09-28 2015-04-29 株式会社拓普康 Image acquiring device and image acquiring system
CN103033169A (en) * 2011-09-28 2013-04-10 株式会社拓普康 Image acquiring device and image acquiring system
US9544575B2 (en) 2011-09-28 2017-01-10 Kabushiki Kaisha Topcon Image acquiring device and image acquiring system
CN103179386A (en) * 2013-03-29 2013-06-26 苏州皓泰视频技术有限公司 Monitoring method and monitoring apparatus based on vector electronic map
CN105121999B (en) * 2013-04-05 2017-11-14 莱卡地球系统公开股份有限公司 The image for the Aerial Images collection that nadir for UAV is aligned triggers control
CN105121999A (en) * 2013-04-05 2015-12-02 莱卡地球系统公开股份有限公司 Control of image triggering for aerial image capturing in nadir alignment for an unmanned aircraft
US10273000B2 (en) 2013-04-05 2019-04-30 Leica Geosystems Ag Control of image triggering for aerial image capturing in nadir alignment for an unmanned aircraft
CN103309943A (en) * 2013-05-14 2013-09-18 广东南方数码科技有限公司 Three-dimensional geographic information platform and topographic data processing method thereof
CN105765624B (en) * 2013-11-27 2019-04-12 微软技术许可有限责任公司 Perception of content image rotation
CN105765624A (en) * 2013-11-27 2016-07-13 微软技术许可有限责任公司 Content-aware image rotation
CN103686078A (en) * 2013-12-04 2014-03-26 广东威创视讯科技股份有限公司 Method and device for visually monitoring water surface
CN104376332A (en) * 2014-12-09 2015-02-25 深圳市捷顺科技实业股份有限公司 License plate recognition method and device
CN104484898A (en) * 2014-12-22 2015-04-01 山东鲁能软件技术有限公司 Power grid GIS (geographic information system) platform based three-dimensional modeling method for high-resolution remote sensing image equipment
CN104501720A (en) * 2014-12-24 2015-04-08 河海大学常州校区 Non-contact object size and distance image measuring instrument
CN104501720B (en) * 2014-12-24 2017-07-14 河海大学常州校区 Non-contact object size and range image measuring instrument
CN104506826A (en) * 2015-01-13 2015-04-08 中南大学 Fixed-point directional video real-time mosaic method without valid overlapping variable structure
CN104506826B (en) * 2015-01-13 2017-09-12 中南大学 A kind of real-time splicing apparatus of fixed point orientation video without effective overlapping region
CN108475437B (en) * 2015-04-10 2021-12-03 邦迪克斯商用车系统有限责任公司 360 degree look-around system for vehicle with corner-mounted camera, calibration system and method
CN108475437A (en) * 2015-04-10 2018-08-31 邦迪克斯商用车系统有限责任公司 360 ° of viewing systems of vehicle of video camera, calibration system and method are placed with corner
CN106293657A (en) * 2015-05-20 2017-01-04 阿里巴巴集团控股有限公司 UI control water curtain display packing and device
CN106293657B (en) * 2015-05-20 2019-09-24 阿里巴巴集团控股有限公司 UI control water curtain display methods and device
CN105245760A (en) * 2015-09-18 2016-01-13 深圳市安健科技有限公司 CCD image brightness rectification method and system
CN109074069A (en) * 2016-03-29 2018-12-21 智动科技有限公司 Autonomous vehicle with improved vision-based detection ability
CN106092929A (en) * 2016-06-07 2016-11-09 同济大学 Eutrophication reservoir surface water algae distribution Landsat remote-sensing monitoring method
CN106530307A (en) * 2016-09-30 2017-03-22 四川农业大学 System and method of processing landscape node images, based on neighborhood algorithm
CN107972582A (en) * 2016-10-25 2018-05-01 北京计算机技术及应用研究所 A kind of full HD ring car overlooks display system
CN108986204A (en) * 2017-06-01 2018-12-11 哈尔滨工业大学 A kind of full-automatic quick indoor scene three-dimensional reconstruction apparatus based on dual calibration
CN108986204B (en) * 2017-06-01 2021-12-21 哈尔滨工业大学 Full-automatic quick indoor scene three-dimensional reconstruction device based on dual calibration
CN107423753A (en) * 2017-06-15 2017-12-01 新疆大学 A kind of rapid fusion operation method of multi-source Spatial Data
CN107621281A (en) * 2017-08-25 2018-01-23 吴世贵 A kind of paddy environment detects control method
CN108009556A (en) * 2017-12-23 2018-05-08 浙江大学 A kind of floater in river detection method based on fixed point graphical analysis
CN108009556B (en) * 2017-12-23 2021-08-24 浙江大学 A detection method of river floating objects based on fixed-point image analysis
WO2019148311A1 (en) * 2018-01-30 2019-08-08 深圳前海达闼云端智能科技有限公司 Information processing method and system, cloud processing device and computer program product
CN108510497A (en) * 2018-04-10 2018-09-07 四川和生视界医药技术开发有限公司 The display methods and display device of retinal images lesion information
CN108510497B (en) * 2018-04-10 2022-04-26 四川和生视界医药技术开发有限公司 Method and device for displaying focus information of retina image
CN108765510A (en) * 2018-05-24 2018-11-06 河南理工大学 A kind of quick texture synthesis method based on genetic optimization search strategy
CN109064542A (en) * 2018-06-06 2018-12-21 链家网(北京)科技有限公司 Threedimensional model surface hole complementing method and device
CN108833874B (en) * 2018-07-04 2020-11-03 长安大学 A color correction method of panoramic image for driving recorder
CN108833874A (en) * 2018-07-04 2018-11-16 长安大学 A panoramic image color correction method for driving recorder
CN109443325A (en) * 2018-09-25 2019-03-08 上海市保安服务总公司 Utilize the space positioning system of floor-mounted camera
WO2020125131A1 (en) * 2018-12-18 2020-06-25 影石创新科技股份有限公司 Panoramic video anti-shake method and portable terminal
US11483478B2 (en) 2018-12-18 2022-10-25 Arashi Vision Inc. Panoramic video anti-shake method and portable terminal
CN110516014B (en) * 2019-01-18 2023-05-26 南京泛在地理信息产业研究院有限公司 A method for mapping urban road surveillance video to a two-dimensional map
CN110516014A (en) * 2019-01-18 2019-11-29 南京泛在地理信息产业研究院有限公司 A method for mapping urban road surveillance video to a two-dimensional map
CN111239148B (en) * 2019-03-11 2020-12-04 青田县君翔科技有限公司 Water quality detection method
CN111239148A (en) * 2019-03-11 2020-06-05 绿桥(泰州)生态修复有限公司 Water quality detection method
CN111982031A (en) * 2020-08-24 2020-11-24 江苏科技大学 Water surface area measuring method based on unmanned aerial vehicle vision
CN111982031B (en) * 2020-08-24 2021-12-31 衡阳市大雁地理信息有限公司 Water surface area measuring method based on unmanned aerial vehicle vision
WO2022142009A1 (en) * 2020-12-30 2022-07-07 平安科技(深圳)有限公司 Blurred image correction method and apparatus, computer device, and storage medium
CN113409459A (en) * 2021-06-08 2021-09-17 北京百度网讯科技有限公司 Method, device and equipment for producing high-precision map and computer storage medium
CN113592975A (en) * 2021-06-30 2021-11-02 浙江城建规划设计院有限公司 Aerial view rapid mapping system based on remote sensing
CN114020943A (en) * 2022-01-05 2022-02-08 武汉幻城经纬科技有限公司 Basin water surface mixing drawing method and system, electronic equipment and storage medium
CN114998112A (en) * 2022-04-22 2022-09-02 南通悦福软件有限公司 Image denoising method and system based on adaptive frequency domain filtering
CN114998112B (en) * 2022-04-22 2024-06-28 广州市天誉创高电子科技有限公司 Image denoising method and system based on self-adaptive frequency domain filtering
CN114913076A (en) * 2022-07-19 2022-08-16 成都智明达电子股份有限公司 Image scaling and rotating method, device, system and medium
CN116309851A (en) * 2023-05-19 2023-06-23 安徽云森物联网科技有限公司 Position and orientation calibration method for intelligent park monitoring camera
CN116309851B (en) * 2023-05-19 2023-08-11 安徽云森物联网科技有限公司 Position and orientation calibration method for intelligent park monitoring camera
CN116721363A (en) * 2023-08-07 2023-09-08 江苏省地质调查研究院 Ecological disaster identification and motion prediction method and system
CN116721363B (en) * 2023-08-07 2023-11-03 江苏省地质调查研究院 Ecological disaster identification and motion prediction method and system

Also Published As

Publication number Publication date
CN101976429B (en) 2012-11-14

Similar Documents

Publication Publication Date Title
CN101976429B (en) Cruise image based imaging method of water-surface aerial view
US11521379B1 (en) Method for flood disaster monitoring and disaster analysis based on vision transformer
CN109840553B (en) Extraction method and system of cultivated land crop type, storage medium and electronic equipment
CN103236160B (en) Road network traffic condition monitoring system based on video image processing technology
Luo et al. Semantic Riverscapes: Perception and evaluation of linear landscapes from oblique imagery using computer vision
CN109242291A (en) River and lake basin water environment wisdom management method
CN107862293A (en) Radar based on confrontation generation network generates colored semantic image system and method
CN106780586B (en) A Solar Energy Potential Assessment Method Based on Ground Laser Point Cloud
CN116188671A (en) River course and land integrated three-dimensional real scene modeling method
Yang et al. MSFusion: Multistage for remote sensing image spatiotemporal fusion based on texture transformer and convolutional neural network
CN116258608A (en) Water conservancy real-time monitoring information management system integrating GIS and BIM three-dimensional technology
Zhang et al. A 3d visualization system for hurricane storm-surge flooding
CN113538679A (en) Mixed real-scene three-dimensional channel scene construction method
CN110298271A (en) Seawater method for detecting area based on critical point detection network and space constraint mixed model
CN115661932A (en) Fishing behavior detection method
CN114120145A (en) Monitoring method, monitoring device, electronic equipment and computer readable storage medium
CN111726535A (en) Smart city CIM video big data image quality control method based on vehicle perception
Gu et al. Surveying and mapping of large-scale 3D digital topographic map based on oblique photography technology
CN118229901A (en) Three-dimensional visualization method and system based on multi-source spatial data under Cesium map engine
De Santis et al. Visual risk communication of urban flooding in 3D environments based on terrestrial laser scanning
Meng Remote Sensing Data Preprocessing Technology
Xu et al. Real-time panoramic map modeling method based on multisource image fusion and three-dimensional rendering
CN118536057B (en) Urban waterlogging point-surface monitoring data intelligent generation method based on multi-mode data
Zheng Wetland park environmental data monitoring based on GIS high resolution images and machine learning
Yu et al. Urban Landscape Information Construction and Visual Communication Design Based on Digital Image Matrix Reconstruction

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20121114

Termination date: 20151027

EXPY Termination of patent right or utility model