WO2018006659A1 - Method and device for acquiring a channel monitoring target - Google Patents

Method and device for acquiring a channel monitoring target

Info

Publication number
WO2018006659A1
WO2018006659A1 (PCT/CN2017/085126)
Authority
WO
WIPO (PCT)
Prior art keywords
video frame
determining
video
monitoring target
water
Prior art date
Application number
PCT/CN2017/085126
Other languages
English (en)
French (fr)
Inventor
杨伟
田池
郭海训
宋其毅
Original Assignee
杭州海康威视数字技术股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 杭州海康威视数字技术股份有限公司 filed Critical 杭州海康威视数字技术股份有限公司
Publication of WO2018006659A1 publication Critical patent/WO2018006659A1/zh

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Definitions

  • the present application relates to the field of monitoring technologies, and in particular, to a method and device for acquiring a channel monitoring target.
  • the monitoring of the navigation channel mainly refers to monitoring the ships in the navigation channel. Therefore, it is necessary to obtain the monitoring target (mainly the target ship), and the monitoring of the navigation channel can be realized by tracking the monitoring target.
  • In the prior art, marine radar or CCTV (closed-circuit television) systems are commonly used to obtain navigation channel monitoring targets.
  • However, marine radar and CCTV systems are both susceptible to weather: ordinary marine radar becomes inaccurate in severe weather, so the monitoring targets it obtains are inaccurate; and CCTV systems fail under low visibility, for example in fog, rain, or at night, when the ships on the water surface cannot be seen. Therefore, neither can accurately obtain the navigation channel monitoring targets.
  • the purpose of the embodiments of the present application is to provide a method and device for acquiring a channel monitoring target to accurately acquire a channel monitoring target.
  • the embodiment of the present application discloses a method for acquiring a channel monitoring target, including:
  • the determining the target area in the first video frame may include:
  • the entire area in the first video frame is determined as the target area.
  • the determining whether the water-day boundary line exists in the first video frame may include:
  • the determining the water area in the first video frame according to the water-sky boundary line may include:
  • the region with the smaller average gray value is determined as the water region in the first video frame.
  • the method may further include:
  • the first video frame is subjected to denoising processing and/or enhancement processing.
  • the method may further include:
  • the preset number of video frames refers to: video frames in the infrared thermal imaging video captured before the acquisition time of the first video frame;
  • Flow statistics are performed on the navigation channel corresponding to the infrared thermal imaging video according to the determined traveling direction.
  • the method may further include:
  • the second monitoring target is monitored.
  • the embodiment of the present application further discloses a channel monitoring target acquiring device, including:
  • a first obtaining module configured to obtain a first video frame of the infrared thermography video
  • a first determining module configured to determine a target area in the first video frame
  • a generating module configured to generate a Gaussian pyramid corresponding to the target area;
  • an extraction determining module configured to extract a feature value of the Gaussian pyramid, perform a local visual contrast calculation on the feature value, and determine a first monitoring target from the target region according to the calculation result.
  • the first determining module may include:
  • a determining submodule configured to determine whether a water-sky boundary line exists in the first video frame, and if yes, triggering the first determining submodule, and if not, triggering the second determining submodule;
  • the first determining submodule is configured to determine a water area in the first video frame according to the water-sky boundary line, and determine the water area as a target area;
  • the second determining submodule is configured to determine all areas in the first video frame as a target area.
  • the determining submodule may include:
  • a calculating unit configured to calculate a standard deviation of gray values of each row of pixels in the first video frame
  • a statistical determining unit configured to calculate a gradient value between standard deviations of gray values of pixels of adjacent rows, and determine a maximum gradient value
  • a judging unit configured to judge whether the maximum gradient value is greater than a preset threshold: if not, determining that no water-sky boundary line exists in the first video frame; if yes, determining that a water-sky boundary line exists in the first video frame;
  • a determining unit configured to, when the result of the judging unit is YES, determine the water-sky boundary line according to the pixel rows adjacent to the pixel row corresponding to the maximum gradient value.
  • the first determining submodule may be specifically configured to:
  • the region with the smaller average gray value is determined as the water region in the first video frame.
  • the device may further include:
  • a processing module configured to perform denoising processing and/or enhancement processing on the first video frame.
  • the device may further include:
  • an acquiring module configured to acquire monitoring targets in a preset number of video frames, where the preset number of video frames refers to: video frames in the infrared thermal imaging video captured before the acquisition time of the first video frame;
  • a second determining module configured to determine a driving direction of the first monitoring target according to the first monitoring target and the acquired monitoring target
  • the statistics module is configured to perform flow statistics on the navigation channel corresponding to the infrared thermal imaging video according to the determined traveling direction.
  • the device may further include:
  • a second obtaining module configured to obtain a second video frame of the visible light video, where the second video frame and the first video frame are video frames obtained by video capturing the same area at the same time;
  • a third determining module configured to determine a second monitoring target corresponding to the first monitoring target in the second video frame
  • a monitoring module configured to monitor the second monitoring target.
  • an embodiment of the present application further discloses an electronic device, including: a housing, a processor, a memory, a circuit board, and a power supply circuit, wherein the circuit board is disposed inside the space enclosed by the housing, and the processor and the memory are disposed on the circuit board; the power supply circuit is used to supply power to each circuit or device of the electronic device; the memory is used to store executable program code; and the processor runs the program corresponding to the executable program code by reading the executable program code stored in the memory, so as to execute the above-described channel monitoring target acquisition method.
  • an embodiment of the present application further discloses an executable program code for being executed to execute the above-described channel monitoring target acquisition method.
  • an embodiment of the present application further discloses a storage medium for storing executable program code for being executed to execute the above-described channel monitoring target acquisition method.
  • the first video frame of the infrared thermal imaging video is obtained, and the first monitoring target is obtained in the first video frame. That is to say, this solution acquires monitoring targets through infrared thermal imaging video.
  • infrared thermal imaging video acquisition devices are not susceptible to weather: even in bad weather and low visibility, such as heavy fog, rain, or at night, infrared thermal imaging remains clear. Therefore, applying the solution provided in this application, the channel monitoring targets can be accurately obtained.
  • FIG. 1 is a schematic flowchart of a method for acquiring a channel monitoring target according to an embodiment of the present application
  • FIG. 2 is a schematic structural diagram of a channel monitoring target acquiring apparatus according to an embodiment of the present application.
  • FIG. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
  • the embodiment of the present application provides a method and device for acquiring a channel monitoring target.
  • the method can be performed by an infrared thermal imaging video capture device, a cell phone, a tablet, a personal computer, a server, and the like.
  • the following describes the method for acquiring the channel monitoring target provided by the embodiment of the present application in detail.
  • FIG. 1 is a schematic flowchart of a method for acquiring a channel monitoring target according to an embodiment of the present application, including:
  • If the execution body of the solution is an infrared thermal imaging video capture device, the device obtains the first video frame from the infrared thermal imaging video it collects.
  • If the execution body of the solution is a mobile phone, a tablet computer, a personal computer, a server, or the like, it receives the infrared thermal imaging video transmitted by the infrared thermal imaging video capture device and obtains the first video frame from the received video.
  • the first video frame is any video frame in the infrared thermographic video; it is referred to as the first video frame to distinguish it from the second video frame of the visible light video.
  • this step may include:
  • the entire area in the first video frame is determined as the target area.
  • The water-sky boundary line is the boundary between the water area and the sky area. It can be understood that this solution needs to acquire the monitoring target in the water area; determining the monitoring target within the water area narrows the search region compared to determining it in the entire first video frame, which reduces the complexity of determining the monitoring target. However, if there is no water-sky boundary line in the first video frame, that is, the water area cannot be identified, the monitoring target can only be determined in the entire area of the first video frame.
  • determining whether there is a water-sky boundary line in the first video frame may include:
  • the gray value of each pixel may be determined according to the RGB data of each pixel in the row, thereby determining the standard deviation of the gray value of the row of pixels.
  • the standard deviation of the gray value of the pixel in the first row is 10
  • the standard deviation of the gray value of the pixel in the second row is 8
  • the standard deviation of the gray values of the pixels in the third row is 3
  • the standard deviation of the gray value of the pixel in the 4th row is 2
  • the standard deviation of the gray value of the pixel in the 5th row is 1.
  • Assuming the preset threshold is 3: the gradient value between the standard deviations of the third row and the second row is 5, which is greater than the preset threshold, so it is determined that a water-sky boundary line exists in the first video frame, and the line formed by the pixels of the second row or the third row may be determined as the water-sky boundary line.
  • The standard deviation of the gray values of the pixels in the sky region differs significantly from that of the pixels in the water region, while the standard deviations of adjacent rows within the sky region, or within the water region, differ little. Therefore, if the standard deviations of the gray values of two adjacent rows differ greatly, that is, the gradient value between them is large, the two rows of pixels belong to different regions: one row belongs to the sky region and the other to the water region.
  • the line formed by either of the two rows of pixels can be determined as the water-sky boundary line.
  • If the maximum gradient value is not greater than the preset threshold, it is determined that no water-sky boundary line exists in the first video frame, and the entire area of the first video frame is determined as the target area.
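  • The row-statistics procedure described above can be sketched in Python (an illustrative example, not part of the patent disclosure; the function name, the synthetic frame, and the threshold value of 3 are assumptions drawn from the example in the text):

```python
import numpy as np

def detect_water_sky_line(gray_frame, threshold=3.0):
    """Return the row index of the water-sky boundary line, or None if
    the maximum gradient between adjacent row standard deviations does
    not exceed the threshold (illustrative sketch)."""
    # Standard deviation of the gray values of each row of pixels.
    row_std = gray_frame.std(axis=1)
    # Gradient values between standard deviations of adjacent rows.
    gradients = np.abs(np.diff(row_std))
    max_idx = int(np.argmax(gradients))
    if gradients[max_idx] <= threshold:
        return None  # no water-sky boundary line in this frame
    # Either of the two adjacent rows may serve as the boundary line.
    return max_idx + 1

# Synthetic frame: 5 high-variance "sky" rows over 5 uniform "water" rows.
sky = np.tile(np.array([0.0, 255.0] * 4), (5, 1))
water = np.full((5, 8), 100.0)
frame = np.vstack([sky, water])
print(detect_water_sky_line(frame))  # boundary at row 5
```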
  • determining the water area in the first video frame according to the water-sky boundary line may include:
  • the region with the smaller average gray value is determined as the water region in the first video frame.
  • The average gray value of the pixels in the sky region is generally larger than that of the pixels in the water region. Therefore, the average gray values of the pixels in the two regions separated by the water-sky boundary line can be calculated separately, and the region with the smaller average value determined as the water region.
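  • The region selection by average gray value can be sketched as follows (illustrative only; the function name and the assumption that the frame is split by a horizontal boundary row are ours):

```python
import numpy as np

def pick_water_region(gray_frame, boundary_row):
    """Split the frame at the water-sky boundary row and return the
    region with the smaller mean gray value as the water region
    (the sky region is generally brighter, as noted in the text)."""
    upper = gray_frame[:boundary_row]
    lower = gray_frame[boundary_row:]
    return lower if lower.mean() < upper.mean() else upper

# Bright "sky" rows over dark "water" rows: the darker region is chosen.
frame = np.vstack([np.full((4, 6), 200.0), np.full((6, 6), 50.0)])
water = pick_water_region(frame, boundary_row=4)
print(water.shape)  # (6, 6)
```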
  • the target area determined above may be downsampled multiple times to obtain images of successively lower resolutions, thereby generating a Gaussian pyramid of multi-layer images.
  • S104 Extract a feature value of the Gaussian pyramid, perform a local visual contrast calculation on the feature value, and determine a first monitoring target from the target area according to the calculation result.
  • the feature values may include grayscale values, gradient values, and texture feature values.
  • grayscale, gradient, and texture features may be separately extracted from each layer image in the generated Gaussian pyramid, thereby obtaining grayscale values, gradient values, and texture feature values of the respective layer images.
  • the center-surround algorithm is used to calculate the local visual contrast of the gray value, gradient value and texture feature value of each layer image, and the contrast result corresponding to each layer image is obtained.
  • This step can be understood as performing weight calculation on the gray value, the gradient value and the texture feature value of each pixel in each layer image, and obtaining the weight value corresponding to each pixel point.
  • the weight value corresponding to each pixel point in each layer image can be understood as the contrast result corresponding to the layer image.
  • the contrast result corresponding to each layer image in the Gaussian pyramid is integrated by normalization processing to obtain a final visual saliency map.
  • This step can be simply understood as taking, for each pixel in the image, the pixel's weight value as its gray value. Those skilled in the art will understand that gray values range from 0 to 255; if any pixel's weight value exceeds 255, the weight values of all pixels in the image are scaled down proportionally, for example to 1/10 of their original values.
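  • The center-surround contrast and normalization steps can be sketched as follows (a simplified illustration: the neighborhood size, the mean-of-surround difference, and min-max normalization are assumptions, not the patent's exact formulas):

```python
import numpy as np

def center_surround_contrast(feature_map, surround=3):
    """Local visual contrast: absolute difference between each pixel
    (center) and the mean of its surrounding neighborhood."""
    h, w = feature_map.shape
    padded = np.pad(feature_map, surround, mode="edge")
    out = np.empty((h, w), dtype=float)
    size = 2 * surround + 1
    for i in range(h):
        for j in range(w):
            window = padded[i:i + size, j:j + size]
            out[i, j] = abs(feature_map[i, j] - window.mean())
    return out

def saliency_map(feature_maps):
    """Normalize each contrast map to [0, 1] and average the maps into
    a single visual saliency map (stand-in for the normalization and
    integration step described in the text)."""
    normed = []
    for fm in feature_maps:
        c = center_surround_contrast(fm)
        rng = c.max() - c.min()
        normed.append((c - c.min()) / rng if rng > 0 else c)
    return np.mean(normed, axis=0)

# A single bright target on dark water stands out in the saliency map.
gray = np.zeros((10, 10))
gray[5, 5] = 1.0
s = saliency_map([gray])
print(s[5, 5] == s.max())  # True
```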
  • After the monitoring target (that is, the ship target) is determined, information such as the length and height of the ship may be acquired according to the ratio between objects in the first video frame and the corresponding actual objects.
  • the infrared imaging video acquisition device has the advantages of strong anti-interference ability, strong adaptability to the climatic environment, and continuous passive detection day and night. In foggy days, rainy days and nights, or under conditions of strong radar clutter, marine radar and CCTV systems cannot accurately obtain the channel monitoring targets; the infrared imaging video capture device, however, can clearly display the original appearance of the objects in the acquisition area by virtue of its unique detection capability, so that the channel monitoring target can be accurately acquired.
  • the solution for acquiring the channel monitoring target by using the infrared imaging video capture device requires neither that ships on the navigation channel install a GPS ship application system, nor that the exchange protocols supported by different GPS ship application systems be mutually compatible, greatly reducing the complexity of acquiring navigation channel monitoring targets.
  • a first video frame of the infrared thermal imaging video is obtained, and a first monitoring target is obtained in the first video frame. That is to say, this solution acquires monitoring targets through infrared thermal imaging video.
  • infrared thermal imaging video acquisition devices are not susceptible to weather: even in bad weather and low visibility, such as heavy fog, rain, or at night, infrared thermal imaging remains clear. Therefore, applying the solution provided in this application, the channel monitoring targets can be accurately obtained.
  • the first video frame may be subjected to denoising processing and/or enhancement processing before determining the target area in the first video frame.
  • the first video frame is pre-processed, and the pre-processing may include denoising processing, enhancement processing, and the like. It will be understood by those skilled in the art that after preprocessing the first video frame, it is easier to subsequently determine the target area, generate a Gaussian pyramid, and the like.
  • the traffic statistics of the channel corresponding to the infrared thermal imaging video may also be performed.
  • Traffic statistics can be understood as statistics on the number of ships passing through the channel in a unit time.
  • the specific process can include:
  • the preset number of video frames refers to: video frames in the infrared thermal imaging video captured before the acquisition time of the first video frame;
  • Flow statistics are performed on the navigation channel corresponding to the infrared thermal imaging video according to the determined traveling direction.
  • the first video frame is any video frame in the infrared thermal imaging video, and in the infrared thermal imaging video, a preset number of video frames before the first video frame are acquired. That is to say, the acquisition time of the preset number of video frames is before the acquisition time of the first video frame, and the acquisition area of the preset number of video frames is the same as the acquisition area of the first video frame.
  • the monitoring target is separately determined in the preset number of video frames. It can be understood that the monitoring target in the preset number of video frames has a corresponding relationship with the first monitoring target in the first video frame.
  • Suppose three first monitoring targets are determined in the first video frame, namely ship A, ship B, and ship C.
  • The time difference between adjacent video frames is usually 1/24 second; that is, the time difference between the preset number of video frames and the first video frame is very short, and the picture difference is small, so the same three monitoring targets, ship A, ship B, and ship C, may also be determined in the preset number of video frames.
  • The traveling directions of, for example, ship A and ship C may then be determined as follows.
  • Assume the coordinate axes in the video frame are the x-y axes, with the x axis extending east and the y axis extending north, and assume the preset number is 3. The 4 video frames (the first video frame and the previous 3 video frames) are numbered in order of acquisition time: the coordinates of ship A in the video frame numbered 1 are (50, 60); in the video frame numbered 2, (52, 62); in the video frame numbered 3, (54, 64); and in the video frame numbered 4, (56, 66). It can be seen that the traveling direction of ship A is the northeast direction.
  • the coordinates of ship C in the video frame numbered 1 are (70, 40)
  • the coordinates of ship C in the video frame numbered 2 are (68, 38)
  • the coordinates of ship C in the video frame numbered 3 are (66, 36)
  • the coordinates of ship C in the video frame numbered 4 are (64, 34). It can be seen that the traveling direction of ship C is the southwest direction (both its x and y coordinates decrease).
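  • The direction inference from per-frame coordinates can be sketched as follows (illustrative; it compares only the first and last coordinates, with x extending east and y extending north as in the example above):

```python
def travel_direction(coords):
    """Infer a compass travel direction from a ship's coordinates in
    consecutive video frames, given as a list of (x, y) pairs."""
    (x0, y0), (x1, y1) = coords[0], coords[-1]
    ew = "east" if x1 > x0 else ("west" if x1 < x0 else "")
    ns = "north" if y1 > y0 else ("south" if y1 < y0 else "")
    return (ns + ew) or "stationary"

# Ship A and ship C from the example above.
print(travel_direction([(50, 60), (52, 62), (54, 64), (56, 66)]))  # northeast
print(travel_direction([(70, 40), (68, 38), (66, 36), (64, 34)]))  # southwest
```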
  • the traffic statistics may be in a cross-line counting manner, that is, a counting line is set in a video frame, and the number of ships crossing the counting line is counted.
  • the ships may be counted both in total and by direction: not only counting the total number of ships crossing the counting line, but also, according to each ship's traveling direction, counting ships traveling east, west, south, and north separately.
  • If, according to the coordinates of the monitoring targets determined in multiple video frames of the infrared thermal imaging video, ships A, B, and C all cross the counting line, the total ship count is increased by three.
  • the number of ships on the eastbound line will be increased by two
  • the number of ships on the northbound line will be increased by two
  • the number of ships on the westbound side will be increased by one
  • the number of ships on the southbound line will be increased by one.
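  • The cross-line counting can be sketched for a horizontal counting line as follows (a simplified sketch: it tallies only north/south crossings of one horizontal line; per-direction counts for east and west would use a vertical counting line analogously):

```python
def count_line_crossings(tracks, count_line_y):
    """Count ships whose track crosses a horizontal counting line at
    y = count_line_y, broken down by crossing direction. Each track is
    a list of (x, y) coordinates ordered by acquisition time."""
    total = northbound = southbound = 0
    for track in tracks:
        ys = [y for _, y in track]
        # Opposite signs mean the track starts and ends on different
        # sides of the counting line, i.e. the ship crossed it.
        if (ys[0] - count_line_y) * (ys[-1] - count_line_y) < 0:
            total += 1
            if ys[-1] > ys[0]:
                northbound += 1
            else:
                southbound += 1
    return total, northbound, southbound

ship_a = [(50, 60), (52, 62), (54, 64), (56, 66)]   # crosses y = 63, northbound
ship_c = [(70, 40), (68, 38), (66, 36), (64, 34)]   # stays below the line
print(count_line_crossings([ship_a, ship_c], count_line_y=63))  # (1, 1, 0)
```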
  • the infrared thermal imaging video and the visible light video can also be combined to monitor the monitoring target.
  • the specific process can include:
  • the second monitoring target is monitored.
  • a visible light video capture device may be disposed in the vicinity of the infrared thermal imaging video capture device to ensure that the infrared thermal imaging video capture device and the visible light video capture device can perform video capture for the same region.
  • the monitoring target can be monitored by visible light video.
  • the video frame in the infrared thermography video is referred to as a first video frame
  • the video frame in the visible light video is referred to as a second video frame.
  • the infrared thermal imaging video capture device and the visible light video capture device can perform video capture for the same area. Therefore, the first video frame and the second video frame can have a one-to-one correspondence.
  • the collection time and the acquisition area of the corresponding first video frame and the second video frame are the same.
  • the coordinates of the first monitoring target in the first video frame may be corresponding to the second video frame, thereby determining the second monitoring target.
  • the first monitoring target and the second monitoring target represent the same ship, except that the first monitoring target displays the ship in the form of infrared thermal imaging, while the second monitoring target displays the ship in the visible light video.
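  • Mapping the first monitoring target's coordinates into the second video frame can be sketched as a resolution scaling (an assumption on our part: this presumes the two co-located devices capture aligned views of the same area, so a simple scaling suffices; a real deployment might require a calibrated mapping such as a homography):

```python
def map_target(coords, ir_size, vis_size):
    """Map a target's pixel coordinates from the infrared frame to the
    visible-light frame of the same scene, assuming aligned views that
    differ only in resolution. Sizes are (width, height) pairs."""
    (x, y), (iw, ih), (vw, vh) = coords, ir_size, vis_size
    return (x * vw / iw, y * vh / ih)

# A target at (100, 50) in a 640x480 IR frame maps to a 1920x1440
# visible-light frame at the proportionally scaled position.
print(map_target((100, 50), (640, 480), (1920, 1440)))  # (300.0, 150.0)
```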
  • the monitoring manner of the second monitoring target may include zooming, capturing, recording, tracking, alarming, and the like. That is, after the second monitoring target corresponding to the first monitoring target is determined in the second video frame, the visible light video capture device may be configured to perform linkage operations on the second monitoring target, such as zooming, capturing, recording, tracking, and alarming.
  • The following information can be obtained by analyzing the two channels of video: for example, the length, height, and traveling speed of a ship, its distance from the video capture device, and its position in the video frame. A video compression algorithm, such as H.264 or MPEG-4, may then be used to compress the two videos, and the information obtained above may be packaged into TCP/IP packets.
  • The compressed two-way video and the packaged information are then transmitted over a wired or wireless network to the display system for presentation to the user; in addition, the compressed two-way video and the packaged information may be stored to facilitate subsequent video browsing and playback.
  • it also provides a data source for evidence collection regarding water surface traffic violations.
  • the infrared thermal imaging video capture device and the visible light video capture device may be disposed on the bank of the river or on the coastline, and each collection device and processing device (a device that processes the captured video, such as a computer) may be Information exchange through the network to achieve seamless monitoring of the entire channel.
  • the embodiment of the present application further provides a channel monitoring target acquiring device.
  • FIG. 2 is a schematic structural diagram of a channel monitoring target acquiring apparatus according to an embodiment of the present disclosure, including:
  • a first obtaining module 201 configured to obtain a first video frame of the infrared thermal imaging video
  • a first determining module 202 configured to determine a target area in the first video frame
  • a generating module 203 configured to generate a Gaussian pyramid corresponding to the target area by using a ship detection algorithm
  • the extraction determining module 204 is configured to extract feature values of the Gaussian pyramid, perform local visual contrast calculation on the feature values, and determine a first monitoring target from the target region according to the calculation result.
  • the first determining module 202 may include: a determining submodule, a first determining submodule, and a second determining submodule (not shown), where
  • a determining submodule configured to determine whether a water-sky boundary line exists in the first video frame, and if yes, triggering the first determining submodule, and if not, triggering the second determining submodule;
  • the first determining submodule is configured to determine a water area in the first video frame according to the water-sky boundary line, and determine the water area as a target area;
  • the second determining submodule is configured to determine all areas in the first video frame as a target area.
  • the determining sub-module may include: a calculating unit, a statistical determining unit, a judging unit, and a determining unit (not shown), wherein
  • a calculating unit configured to calculate a standard deviation of gray values of each row of pixels in the first video frame
  • a statistical determining unit configured to calculate a gradient value between the standard deviations of the gray values of pixels of adjacent rows, and determine the maximum gradient value;
  • a judging unit configured to judge whether the maximum gradient value is greater than a preset threshold: if not, determining that no water-sky boundary line exists in the first video frame; if yes, determining that a water-sky boundary line exists in the first video frame;
  • a determining unit configured to, when the result of the judging unit is YES, determine the water-sky boundary line according to the pixel rows adjacent to the pixel row corresponding to the maximum gradient value.
  • the first determining sub-module may be specifically used to:
  • the region with the smaller average gray value is determined as the water region in the first video frame.
  • the device may further include:
  • a processing module (not shown) for performing denoising processing and/or enhancement processing on the first video frame.
  • the device may further include: an obtaining module, a second determining module, and a statistic module (not shown), wherein
  • an acquiring module configured to acquire monitoring targets in a preset number of video frames, where the preset number of video frames refers to: video frames in the infrared thermal imaging video captured before the acquisition time of the first video frame;
  • a second determining module configured to determine a driving direction of the first monitoring target according to the first monitoring target and the acquired monitoring target
  • the statistics module is configured to perform flow statistics on the navigation channel corresponding to the infrared thermal imaging video according to the determined traveling direction.
  • the apparatus may further include: a second obtaining module, a third determining module, and a monitoring module (not shown), wherein
  • a second obtaining module configured to obtain a second video frame of the visible light video, where the second video frame and the first video frame are video frames obtained by video capturing the same area at the same time;
  • a third determining module configured to determine, in the second video frame, a second monitoring target corresponding to the first monitoring target;
  • a monitoring module configured to monitor the second monitoring target.
  • the first video frame of the infrared thermal imaging video is obtained, and the first monitoring target is obtained in the first video frame. That is to say, this solution acquires monitoring targets through infrared thermal imaging video.
  • infrared thermal imaging video acquisition devices are not susceptible to weather: even in bad weather and low visibility, such as heavy fog, rain, or at night, infrared thermal imaging remains clear. Therefore, applying the solution provided in this application, the channel monitoring targets can be accurately obtained.
  • the embodiment of the present application further provides an electronic device, as shown in FIG. 3, including: a housing 301, a processor 302, a memory 303, a circuit board 304, and a power circuit 305, wherein the circuit board 304 is disposed in the housing 301.
  • the processor 302 and the memory 303 are disposed on the circuit board 304;
  • the power circuit 305 is used to supply power to various circuits or devices of the electronic device;
  • the memory 303 is used to store executable program code; and the processor 302, by reading the executable program code stored in the memory 303, runs the program corresponding to the executable program code, so as to execute the channel monitoring target acquisition method, the method comprising:
  • the electronic device can be an infrared thermal imaging video capture device, a mobile phone, a tablet computer, a personal computer, a server, etc., and is not limited.
  • a first video frame of the infrared thermography video is obtained, and a first monitoring target is obtained in the first video frame. That is to say, this solution acquires monitoring targets through infrared thermal imaging video.
  • infrared thermal imaging video acquisition devices are not susceptible to weather: even in bad weather and low visibility, such as heavy fog, rain, or at night, infrared thermal imaging remains clear. Therefore, applying the solution provided in this application, the channel monitoring target can be accurately obtained.
  • an embodiment of the present application further provides executable program code, where the executable program code is configured to be run to execute the channel monitoring target acquisition method, the method comprising: obtaining a first video frame of an infrared thermal imaging video; determining a target region in the first video frame; generating a Gaussian pyramid corresponding to the target region; and extracting feature values of the Gaussian pyramid, performing local visual contrast computation on the feature values, and determining a first monitoring target in the target region according to the computation result.
  • a first video frame of the infrared thermal imaging video is obtained, and a first monitoring target is obtained in the first video frame. That is to say, this solution acquires monitoring targets through infrared thermal imaging video.
  • infrared thermal imaging video acquisition devices are not susceptible to weather; even in bad weather and low visibility, such as heavy fog, rainy days, and nights, infrared thermal imaging remains clear. Therefore, using the solution provided in this application, the channel monitoring target can be accurately obtained.
  • an embodiment of the present application further provides a storage medium for storing executable program code, where the executable program code is configured to be run to execute the channel monitoring target acquisition method, the method comprising: obtaining a first video frame of an infrared thermal imaging video; determining a target region in the first video frame; generating a Gaussian pyramid corresponding to the target region; and extracting feature values of the Gaussian pyramid, performing local visual contrast computation on the feature values, and determining a first monitoring target in the target region according to the computation result.
  • a first video frame of the infrared thermal imaging video is obtained, and a first monitoring target is obtained in the first video frame. That is to say, this solution acquires monitoring targets through infrared thermal imaging video.
  • infrared thermal imaging video acquisition devices are not susceptible to weather; even in bad weather and low visibility, such as heavy fog, rainy days, and nights, infrared thermal imaging remains clear. Therefore, the channel monitoring target can be accurately obtained by applying the solution provided by the present application.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Closed-Circuit Television Systems (AREA)
  • Image Processing (AREA)

Abstract

Embodiments of the present application disclose a channel monitoring target acquisition method and device. The method includes: obtaining a first video frame of an infrared thermal imaging video; determining a target region in the first video frame; generating a Gaussian pyramid corresponding to the target region; and extracting feature values of the Gaussian pyramid, performing local visual contrast computation on the feature values, and determining a first monitoring target in the target region according to the computation result. That is to say, this solution acquires the monitoring target through infrared thermal imaging video. Compared with marine radar and CCTV systems, infrared thermal imaging video capture devices are not susceptible to weather; even in bad weather and low visibility, such as heavy fog, rainy days, and nights, infrared thermal imaging remains clear. Therefore, applying the solution provided by the present application, the channel monitoring target can be acquired accurately.

Description

A Channel Monitoring Target Acquisition Method and Device
The present application claims priority to Chinese Patent Application No. 201610546159.2, filed with the China Patent Office on July 8, 2016 and entitled "A Channel Monitoring Target Acquisition Method and Device", the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the field of monitoring technologies, and in particular, to a channel monitoring target acquisition method and device.
Background
Monitoring a navigation channel mainly refers to monitoring the ships in the channel. It is therefore necessary to acquire monitoring targets (mainly target ships), and monitoring of the channel can be realized only by tracking the monitoring targets.
At present, marine radar or CCTV (Closed-Circuit Television) systems are commonly used to acquire channel monitoring targets. However, both marine radar and CCTV systems are susceptible to weather: an ordinary marine radar produces inaccurate images in severe weather, so the acquired monitoring targets are inaccurate; a CCTV system cannot clearly see the ships on the water surface under low visibility, for example in heavy fog, on rainy days, and at night, so the acquired channel monitoring targets are also inaccurate.
Summary
An objective of the embodiments of the present application is to provide a channel monitoring target acquisition method and device, so as to acquire channel monitoring targets accurately.
To achieve the above objective, an embodiment of the present application discloses a channel monitoring target acquisition method, including:
obtaining a first video frame of an infrared thermal imaging video;
determining a target region in the first video frame;
generating a Gaussian pyramid corresponding to the target region;
extracting feature values of the Gaussian pyramid, performing local visual contrast computation on the feature values, and determining a first monitoring target in the target region according to the computation result.
Optionally, determining the target region in the first video frame may include:
judging whether a water-sky boundary exists in the first video frame;
if so, determining a water region in the first video frame according to the water-sky boundary, and determining the water region as the target region;
if not, determining the entire region of the first video frame as the target region.
Optionally, judging whether a water-sky boundary exists in the first video frame may include:
calculating the standard deviation of gray values of each row of pixels in the first video frame;
computing gradient values between the standard deviations of gray values of adjacent rows of pixels, and determining the maximum gradient value;
judging whether the maximum gradient value is greater than a preset threshold;
if not, determining that no water-sky boundary exists in the first video frame;
if so, determining that a water-sky boundary exists in the first video frame, and determining the water-sky boundary according to the adjacent rows of pixels corresponding to the maximum gradient value.
Optionally, determining the water region in the first video frame according to the water-sky boundary may include:
calculating the average gray value of the pixels in each of the two regions separated by the water-sky boundary;
determining the region with the smaller average as the water region in the first video frame.
Optionally, before determining the target region in the first video frame, the method may further include:
performing denoising and/or enhancement processing on the first video frame.
Optionally, after determining the first monitoring target in the target region according to the computation result, the method may further include:
acquiring monitoring targets in a preset number of video frames, where the preset number of video frames are video frames in the infrared thermal imaging video whose capture moments precede the capture moment of the first video frame;
determining the traveling direction of the first monitoring target according to the first monitoring target and the acquired monitoring targets;
performing traffic statistics on the channel corresponding to the infrared thermal imaging video according to the determined traveling direction.
Optionally, the method may further include:
obtaining a second video frame of a visible light video, where the second video frame and the first video frame are video frames captured for the same region at the same moment;
determining a second monitoring target in the second video frame corresponding to the first monitoring target;
monitoring the second monitoring target.
To achieve the above objective, an embodiment of the present application further discloses a channel monitoring target acquisition device, including:
a first obtaining module, configured to obtain a first video frame of an infrared thermal imaging video;
a first determining module, configured to determine a target region in the first video frame;
a generating module, configured to generate a Gaussian pyramid corresponding to the target region;
an extracting and determining module, configured to extract feature values of the Gaussian pyramid, perform local visual contrast computation on the feature values, and determine a first monitoring target in the target region according to the computation result.
Optionally, the first determining module may include:
a judging submodule, configured to judge whether a water-sky boundary exists in the first video frame; if so, trigger a first determining submodule; if not, trigger a second determining submodule;
the first determining submodule, configured to determine a water region in the first video frame according to the water-sky boundary, and determine the water region as the target region;
the second determining submodule, configured to determine the entire region of the first video frame as the target region.
Optionally, the judging submodule may include:
a calculating unit, configured to calculate the standard deviation of gray values of each row of pixels in the first video frame;
a statistics determining unit, configured to compute gradient values between the standard deviations of gray values of adjacent rows of pixels and determine the maximum gradient value;
a judging unit, configured to judge whether the maximum gradient value is greater than a preset threshold: if not, determine that no water-sky boundary exists in the first video frame; if so, determine that a water-sky boundary exists in the first video frame;
a determining unit, configured to, when the judging unit's result is yes, determine the water-sky boundary according to the adjacent rows of pixels corresponding to the maximum gradient value.
Optionally, the first determining submodule may be specifically configured to:
calculate the average gray value of the pixels in each of the two regions separated by the water-sky boundary;
determine the region with the smaller average as the water region in the first video frame.
Optionally, the device may further include:
a processing module, configured to perform denoising and/or enhancement processing on the first video frame.
Optionally, the device may further include:
an acquiring module, configured to acquire monitoring targets in a preset number of video frames, where the preset number of video frames are video frames in the infrared thermal imaging video whose capture moments precede the capture moment of the first video frame;
a second determining module, configured to determine the traveling direction of the first monitoring target according to the first monitoring target and the acquired monitoring targets;
a statistics module, configured to perform traffic statistics on the channel corresponding to the infrared thermal imaging video according to the determined traveling direction.
Optionally, the device may further include:
a second obtaining module, configured to obtain a second video frame of a visible light video, where the second video frame and the first video frame are video frames captured for the same region at the same moment;
a third determining module, configured to determine a second monitoring target in the second video frame corresponding to the first monitoring target;
a monitoring module, configured to monitor the second monitoring target.
To achieve the above objective, an embodiment of the present application further discloses an electronic device, including: a housing, a processor, a memory, a circuit board, and a power circuit, where the circuit board is disposed inside the space enclosed by the housing, and the processor and the memory are disposed on the circuit board; the power circuit is configured to supply power to each circuit or component of the electronic device; the memory is configured to store executable program code; and the processor runs a program corresponding to the executable program code by reading the executable program code stored in the memory, so as to execute the above channel monitoring target acquisition method.
To achieve the above objective, an embodiment of the present application further discloses executable program code, where the executable program code is configured to be run to execute the above channel monitoring target acquisition method.
To achieve the above objective, an embodiment of the present application further discloses a storage medium, configured to store executable program code, where the executable program code is configured to be run to execute the above channel monitoring target acquisition method.
As can be seen from the above technical solutions, by applying the embodiments of the present application, a first video frame of an infrared thermal imaging video is obtained, and a first monitoring target is obtained in the first video frame. That is to say, this solution acquires the monitoring target through infrared thermal imaging video. Compared with marine radar and CCTV systems, infrared thermal imaging video capture devices are not susceptible to weather; even in bad weather and low visibility, such as heavy fog, rainy days, and nights, infrared thermal imaging remains clear. Therefore, applying the solution provided by the present application, the channel monitoring target can be acquired accurately.
Brief Description of the Drawings
To describe the technical solutions of the embodiments of the present application and of the prior art more clearly, the drawings required by the embodiments and the prior art are briefly introduced below. Obviously, the drawings described below are merely some embodiments of the present application; those of ordinary skill in the art may derive other drawings from these drawings without creative effort.
FIG. 1 is a schematic flowchart of a channel monitoring target acquisition method provided by an embodiment of the present application;
FIG. 2 is a schematic structural diagram of a channel monitoring target acquisition device provided by an embodiment of the present application;
FIG. 3 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
Detailed Description
To make the objectives, technical solutions, and advantages of the present application clearer, the present application is further described in detail below with reference to the drawings and embodiments. Obviously, the described embodiments are merely some rather than all of the embodiments of the present application. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present application without creative effort fall within the protection scope of the present application.
To solve the above technical problem, embodiments of the present application provide a channel monitoring target acquisition method and device. The method may be executed by an infrared thermal imaging video capture device, a mobile phone, a tablet computer, a personal computer, a server, or the like. The channel monitoring target acquisition method provided by the embodiments of the present application is first described in detail below.
FIG. 1 is a schematic flowchart of a channel monitoring target acquisition method provided by an embodiment of the present application, including:
S101: obtaining a first video frame of an infrared thermal imaging video.
If the execution subject of this solution is an infrared thermal imaging video capture device, the device obtains the first video frame from the infrared thermal imaging video it captures; if the execution subject is a mobile phone, a tablet computer, a personal computer, a server, or the like, it receives the infrared thermal imaging video transmitted by an infrared thermal imaging video capture device and obtains the first video frame from the received video.
The first video frame is any video frame in the infrared thermal imaging video; it is called the first video frame to distinguish it from the second video frame of the visible light video.
S102: determining a target region in the first video frame.
In the embodiment shown in the present application, this step may include:
judging whether a water-sky boundary exists in the first video frame;
if so, determining a water region in the first video frame according to the water-sky boundary, and determining the water region as the target region;
if not, determining the entire region of the first video frame as the target region.
The water-sky boundary, as its name implies, is the boundary dividing the water region from the sky region. It can be understood that this solution needs to acquire monitoring targets in the water region. Determining the monitoring target in the water region, compared with determining it in the entire first video frame, narrows the region in which the monitoring target is determined and thus reduces the complexity of determining the monitoring target. However, if no water-sky boundary exists in the first video frame, that is, the water region cannot be identified, the monitoring target can only be determined in the entire region of the first video frame.
As an implementation of the present application, judging whether a water-sky boundary exists in the first video frame may include:
calculating the standard deviation of gray values of each row of pixels in the first video frame;
computing gradient values between the standard deviations of gray values of adjacent rows of pixels, and determining the maximum gradient value;
judging whether the maximum gradient value is greater than a preset threshold;
if not, determining that no water-sky boundary exists in the first video frame;
if so, determining that a water-sky boundary exists in the first video frame, and determining the water-sky boundary according to the adjacent rows of pixels corresponding to the maximum gradient value.
Specifically, for each row of pixels, the gray value of each pixel in the row can be determined according to the RGB data of that pixel, and the standard deviation of the gray values of the row can then be determined.
A simplified example is given below to illustrate the above implementation:
Assume that the first video frame has 5 rows of pixels, and that the standard deviations of the gray values of rows 1 to 5 are 10, 8, 3, 2, and 1, respectively.
Compute the gradient values between the standard deviations of adjacent rows: the gradient between rows 1 and 2 is 10-8=2; between rows 2 and 3 it is 8-3=5; between rows 3 and 4 it is 3-2=1; and between rows 4 and 5 it is 2-1=1.
Assume the preset threshold is 3. The gradient between rows 2 and 3 is 5, which is greater than the preset threshold, so it is determined that a water-sky boundary exists in the first video frame, and the line formed by the pixels of row 3 or row 2 can be determined as the water-sky boundary.
Those skilled in the art can understand that the standard deviation of pixel gray values in the sky region differs markedly from that in the water region, whereas within the sky region or within the water region the standard deviations of adjacent rows differ little. Therefore, if the standard deviations of the gray values of two adjacent rows differ greatly, that is, the gradient between them is large, the two rows belong to different regions: one row belongs to the sky region and the other to the water region. The line formed by either of the two rows of pixels can be determined as the water-sky boundary.
Of course, if none of the computed gradient values is greater than the preset threshold, that is, no two adjacent rows have greatly differing standard deviations of gray values, it is determined that no water-sky boundary exists in the first video frame, and the entire region of the first video frame is determined as the target region.
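The row-statistics procedure above can be condensed into a short sketch. This is a minimal illustration under stated assumptions, not the patent's implementation: the frame is taken to be a list of rows of gray values, the function names are the author's, and the boundary is reported as the lower of the two adjacent rows with the largest standard-deviation gradient (the text allows either row to be chosen).

```python
import math

def row_std(row):
    """Population standard deviation of one row of gray values."""
    mean = sum(row) / len(row)
    return math.sqrt(sum((v - mean) ** 2 for v in row) / len(row))

def detect_water_sky_boundary(frame, threshold):
    """Return the row index of the water-sky boundary, or None if absent.

    frame: list of pixel rows, each a list of gray values.
    """
    stds = [row_std(r) for r in frame]
    # gradient between the standard deviations of adjacent rows
    grads = [abs(stds[i] - stds[i + 1]) for i in range(len(stds) - 1)]
    best = max(range(len(grads)), key=lambda i: grads[i])
    if grads[best] <= threshold:
        return None   # no boundary: treat the whole frame as the target region
    return best + 1   # index of the lower of the two adjacent rows

# The 5-row example: row standard deviations 10, 8, 3, 2, 1, threshold 3.
example = [[0, 20], [0, 16], [0, 6], [0, 4], [0, 2]]
print(detect_water_sky_boundary(example, 3))  # prints 2 (boundary between rows 2 and 3)
```

With the threshold raised above the largest gradient, the function returns None, matching the "no water-sky boundary" branch of the text.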
In the embodiment shown in the present application, determining the water region in the first video frame according to the water-sky boundary may include:
calculating the average gray value of the pixels in each of the two regions separated by the water-sky boundary;
determining the region with the smaller average as the water region in the first video frame.
Those skilled in the art can understand that the average gray value of pixels in the sky region is generally larger than that in the water region. Therefore, the average gray value of the pixels in each of the two regions separated by the water-sky boundary can be calculated, and the region with the smaller average is determined as the water region.
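Under the same toy representation (a frame as a list of gray-value rows), the brighter-sky heuristic above can be sketched as follows; the function name and the 'above'/'below' return values are illustrative assumptions of this example.

```python
def locate_water_region(frame, boundary_row):
    """Return 'above' or 'below': the side of the boundary with the
    smaller mean gray value, taken as the water region (sky is brighter)."""
    above = [v for row in frame[:boundary_row] for v in row]
    below = [v for row in frame[boundary_row:] for v in row]
    mean = lambda vals: sum(vals) / len(vals)
    return 'below' if mean(below) < mean(above) else 'above'

# Bright sky rows on top, dark water rows underneath:
frame = [[200, 220], [190, 210], [10, 30], [5, 15]]
print(locate_water_region(frame, 2))  # prints: below
```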
S103: generating a Gaussian pyramid corresponding to the target region.
Specifically, the determined target region can be downsampled multiple times to obtain images at successively lower resolutions, thereby generating a multi-layer Gaussian pyramid.
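The repeated downsampling of step S103 can be sketched as below, with one caveat: a true Gaussian pyramid (e.g. OpenCV's `pyrDown`) blurs with a Gaussian kernel before each 2x downsample; plain 2x2 block averaging is used here only to keep the sketch dependency-free, and the function names are the author's.

```python
def downsample(img):
    """Halve the resolution by averaging non-overlapping 2x2 blocks."""
    return [[(img[2 * i][2 * j] + img[2 * i][2 * j + 1] +
              img[2 * i + 1][2 * j] + img[2 * i + 1][2 * j + 1]) / 4.0
             for j in range(len(img[0]) // 2)]
            for i in range(len(img) // 2)]

def image_pyramid(img, levels):
    """Return `levels` images, each half the size of the previous one."""
    pyramid = [img]
    for _ in range(levels - 1):
        img = downsample(img)
        pyramid.append(img)
    return pyramid

region = [[8] * 4 for _ in range(4)]  # a flat 4x4 target region
sizes = [len(layer) for layer in image_pyramid(region, 3)]
print(sizes)  # prints [4, 2, 1]
```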
S104: extracting feature values of the Gaussian pyramid, performing local visual contrast computation on the feature values, and determining a first monitoring target in the target region according to the computation result.
In this embodiment, the feature values may include gray values, gradient values, and texture feature values. Specifically, gray, gradient, and texture features can be extracted from each layer of the generated Gaussian pyramid, yielding the gray values, gradient values, and texture feature values of each layer. Then, a center-surround algorithm is applied to the gray values, gradient values, and texture feature values of each layer to compute local visual contrast, yielding a contrast result for each layer. This step can be understood as computing a weight from the gray value, gradient value, and texture feature value of each pixel in each layer, so that each pixel has a corresponding weight value; the weight values of the pixels in a layer can be understood as that layer's contrast result.
Finally, the contrast results of the layers of the Gaussian pyramid are integrated through normalization to obtain the final visual saliency map. This step can be understood simply as, for each pixel in the image, taking the pixel's weight value as its gray value. Those skilled in the art can understand that gray values range from 0 to 255; if any pixel's weight value exceeds 255, the weight values of all pixels in the image are normalized, for example all scaled down to 1/10 of their original values. If the gray values, gradient values, and texture feature values of one region of the image all differ greatly from those of the other regions, then after the above processing the difference between that region's gray values and those of the other regions becomes even more pronounced. The above processing therefore yields a final visual saliency map with pronounced contrast.
In infrared thermal imaging, the pixels corresponding to the water surface have low gray values, slowly varying gradients, and little texture, whereas the pixels corresponding to a ship have relatively high gray values, sharply varying gradients, and distinct texture features. Therefore, after the above processing is applied to the infrared thermal image to obtain the final visual saliency map, the monitoring target, namely the ship target, can be determined in the saliency map by simple threshold segmentation. In addition, information such as the length and height of the ship can be obtained according to the scale between objects in the first video frame and the actual objects.
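The center-surround computation can be illustrated in miniature. The patent applies it to gray, gradient, and texture features of every pyramid layer and then normalizes the results; the sketch below reduces this to single-channel gray contrast on one image, and the window radius, threshold, and function names are assumptions of this example.

```python
def local_contrast(img, i, j, r=1):
    """Center-surround contrast: |pixel - mean of its neighborhood|."""
    h, w = len(img), len(img[0])
    neigh = [img[y][x]
             for y in range(max(0, i - r), min(h, i + r + 1))
             for x in range(max(0, j - r), min(w, j + r + 1))
             if (y, x) != (i, j)]
    return abs(img[i][j] - sum(neigh) / len(neigh))

def salient_pixels(img, thresh):
    """Threshold the contrast map: bright, high-contrast ship pixels stand out."""
    return [(i, j) for i in range(len(img)) for j in range(len(img[0]))
            if local_contrast(img, i, j) > thresh]

dark_water = [[0] * 5 for _ in range(5)]
dark_water[2][2] = 100  # one hot "ship" pixel on a cold water surface
print(salient_pixels(dark_water, 50))  # prints [(2, 2)]
```

The hot pixel differs sharply from its surround and survives the threshold, while the uniform water pixels do not, mirroring the threshold segmentation of the saliency map described above.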
It should be noted that infrared imaging video capture devices have the advantages of strong anti-interference capability, strong adaptability to climate, and continuous passive detection day and night. In heavy fog, on rainy days and at night, or under conditions of strong radar clutter, neither marine radar nor a CCTV system can accurately acquire channel monitoring targets. An infrared imaging video capture device, however, can rely on its unique detection capability to present the objects in the captured region clearly as they are, and can therefore acquire channel monitoring targets accurately.
In addition, compared with solutions that acquire channel monitoring targets using a GPS ship application system, the solution provided by the embodiments of the present application, which uses an infrared imaging video capture device, requires neither that every ship in the channel be fitted with a GPS ship application system nor that the exchange protocols supported by different GPS ship application systems be mutually compatible, greatly reducing the complexity of acquiring channel monitoring targets.
By applying the embodiment shown in FIG. 1 of the present application, a first video frame of an infrared thermal imaging video is obtained, and a first monitoring target is obtained in the first video frame. That is to say, this solution acquires the monitoring target through infrared thermal imaging video. Compared with marine radar and CCTV systems, infrared thermal imaging video capture devices are not susceptible to weather; even in bad weather and low visibility, such as heavy fog, rainy days, and nights, infrared thermal imaging remains clear. Therefore, applying the solution provided by the present application, the channel monitoring target can be acquired accurately.
As an implementation of the present application, before the target region in the first video frame is determined, denoising and/or enhancement processing may be performed on the first video frame.
After the first video frame is obtained, it is first preprocessed; preprocessing may include denoising, enhancement, and so on. Those skilled in the art can understand that after the first video frame is preprocessed, the subsequent processing, such as determining the target region and generating the Gaussian pyramid, becomes easier.
In the embodiments of the present application, after the first monitoring target is determined in the target region according to the computation result, traffic statistics may further be performed on the channel corresponding to the infrared thermal imaging video. Traffic statistics can be understood as counting the number of ships passing through the channel per unit time. The specific process may include:
acquiring monitoring targets in a preset number of video frames, where the preset number of video frames are video frames in the infrared thermal imaging video whose capture moments precede the capture moment of the first video frame;
determining the traveling direction of the first monitoring target according to the first monitoring target and the acquired monitoring targets;
performing traffic statistics on the channel corresponding to the infrared thermal imaging video according to the determined traveling direction.
The first video frame is any video frame in the infrared thermal imaging video; a preset number of video frames preceding the first video frame are acquired from that video. That is, the capture moments of the preset number of video frames precede the capture moment of the first video frame, and their capture region is the same as that of the first video frame. Applying the solution provided by the embodiments of the present application, monitoring targets are determined in each of the preset number of video frames. It can be understood that the monitoring targets in the preset number of video frames correspond to the first monitoring targets in the first video frame.
Assume that 3 first monitoring targets are determined in the first video frame: ship A, ship B, and ship C. Those skilled in the art can understand that the time difference between video frames is usually 1/24 second; that is, the time difference between the preset number of video frames and the first video frame is very short and the pictures differ little, so the same 3 monitoring targets, ship A, ship B, and ship C, can also be determined in the preset number of video frames.
According to the positions of ships A, B, and C determined in the first video frame and their positions determined in the preset number of video frames, the traveling directions of ships A, B, and C can be determined.
Assume that the coordinate axes in the video frames are the x- and y-axes, with the x-axis extending east and the y-axis extending north; assume the preset number is 3, and number the 4 video frames (the first video frame and the 3 preceding frames) in order of capture moment. In frame 1 the coordinates of ship A are (50, 60); in frame 2, (52, 62); in frame 3, (54, 64); and in frame 4, (56, 66). It can thus be seen that ship A is traveling northeast.
Similarly, assume that in frame 1 the coordinates of ship B are (20, 30); in frame 2, (22, 28); in frame 3, (24, 26); and in frame 4, (26, 24). It can thus be seen that ship B is traveling southeast.
Assume that in frame 1 the coordinates of ship C are (70, 40); in frame 2, (68, 42); in frame 3, (66, 44); and in frame 4, (64, 46). It can thus be seen that ship C is traveling northwest.
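The direction inference in the coordinate example above can be sketched as follows; it compares only the first and last positions of a track, uses the example's convention (x grows east, y grows north), and the function name is illustrative.

```python
def travel_direction(positions):
    """Compass direction of a ship from its per-frame (x, y) positions."""
    (x0, y0), (x1, y1) = positions[0], positions[-1]
    ns = 'north' if y1 > y0 else 'south' if y1 < y0 else ''
    ew = 'east' if x1 > x0 else 'west' if x1 < x0 else ''
    return (ns + ew) or 'stationary'

ship_a = [(50, 60), (52, 62), (54, 64), (56, 66)]
print(travel_direction(ship_a))  # prints northeast
```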
In the embodiment shown in the present application, traffic statistics may use line-crossing counting: a counting line is set in the video frame, and the number of ships crossing the counting line is counted. For example, in the above example, the line y=30 is taken as the counting line; according to the monitoring targets determined in multiple video frames, it is judged whether a monitoring target passes the counting line, and if so, the counted number of ships is incremented by 1.
In addition, as an implementation of the present application, the ship counting may be of a total-plus-breakdown type: in addition to the total number of ships crossing the counting line, ships are counted separately in the four traveling directions of eastbound, westbound, southbound, and northbound.
For example, in the above example, assume that according to the coordinates of the monitoring targets determined in multiple video frames of the infrared thermal imaging video, it is judged that ships A, B, and C all crossed the counting line; then the counted total number of ships is incremented by 3. In addition, the number of eastbound ships is incremented by 2, the number of northbound ships by 2, the number of westbound ships by 1, and the number of southbound ships by 1.
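The line-crossing count with a per-direction breakdown can be sketched as below. The track representation, the crossing test (the track's y-range straddles the counting line), and the names are assumptions of this example, and the sample tracks are invented rather than the ships A, B, C of the text.

```python
def count_crossings(tracks, line_y):
    """Count ships whose track crosses the horizontal counting line y = line_y,
    with a breakdown by traveling direction (east/west/north/south)."""
    tally = {'total': 0, 'east': 0, 'west': 0, 'north': 0, 'south': 0}
    for positions in tracks.values():
        ys = [y for _, y in positions]
        if not (min(ys) <= line_y <= max(ys) and ys[0] != ys[-1]):
            continue  # track never straddles the counting line
        tally['total'] += 1
        (x0, y0), (x1, y1) = positions[0], positions[-1]
        if x1 > x0: tally['east'] += 1
        if x1 < x0: tally['west'] += 1
        if y1 > y0: tally['north'] += 1
        if y1 < y0: tally['south'] += 1
    return tally

tracks = {'s1': [(0, 28), (2, 33)],   # crosses y=30 going northeast
          's2': [(5, 35), (3, 29)],   # crosses y=30 going southwest
          's3': [(9, 40), (9, 45)]}   # stays above the line
print(count_crossings(tracks, 30))
```

A northeast crossing increments both the eastbound and northbound counters, which is exactly the total-plus-breakdown bookkeeping described above.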
In the embodiment shown in the present application, the infrared thermal imaging video and a visible light video may also be combined to monitor the monitoring target. The specific process may include:
obtaining a second video frame of the visible light video, where the second video frame and the first video frame are video frames captured for the same region at the same moment;
determining a second monitoring target in the second video frame corresponding to the first monitoring target;
monitoring the second monitoring target.
As an implementation of the present application, a visible light video capture device may be arranged near the infrared thermal imaging video capture device, so that the infrared thermal imaging video capture device and the visible light video capture device can capture video of the same region.
After the monitoring target is determined using the infrared thermal imaging video frame, the monitoring target can be monitored through the visible light video. For convenience of description, a video frame in the infrared thermal imaging video is called a first video frame, and a video frame in the visible light video is called a second video frame. Since the infrared thermal imaging video capture device and the visible light video capture device can capture video of the same region, there can be a one-to-one correspondence between first video frames and second video frames; corresponding first and second video frames have the same capture moment and the same capture region.
After the first monitoring target is determined in the first video frame, the second monitoring target corresponding to the first monitoring target is determined in the second video frame corresponding to that first video frame. As an implementation, the coordinates of the first monitoring target in the first video frame can be mapped into the second video frame to determine the second monitoring target. It can be understood that the first monitoring target and the second monitoring target represent the same ship; the first monitoring target presents the ship in infrared thermal imaging, while the second monitoring target presents it in visible light video.
As an implementation of the present application, the ways of monitoring the second monitoring target may include zooming, snapshot capture, recording, tracking, alarming, and so on. That is, after the second monitoring target corresponding to the first monitoring target is determined in the second video frame, the visible light video capture device can be set to perform linked operations such as zooming, snapshot capture, recording, tracking, and alarming on the second monitoring target.
As an implementation of the present application, after the infrared thermal imaging video and the visible light video are acquired, information such as the following can be obtained by analyzing the two video streams: the ship's length, height, and traveling speed, its distance from the video capture device, its position in the video frame, and so on. The two video streams are then compressed using a video compression algorithm, such as H.264 or MPEG4, and the information obtained above is packed into TCP/IP packets. The compressed video streams and the packed information are transmitted over a wired or wireless network system to a display system for presentation to the user. In addition, the compressed video streams and the packed information can be stored to facilitate subsequent browsing and playback of the video, and also to provide a data source for evidence collection in violation incidents in water traffic.
In the embodiment shown in the present application, the infrared thermal imaging video capture devices and visible light video capture devices can be arranged on inland riverbanks or coastlines, and the capture devices and processing devices (devices that process the captured video, such as computers) can exchange information over a network, realizing seamless monitoring of the entire channel.
Corresponding to the above method embodiments, an embodiment of the present application further provides a channel monitoring target acquisition device.
FIG. 2 is a schematic structural diagram of a channel monitoring target acquisition device provided by an embodiment of the present application, including:
a first obtaining module 201, configured to obtain a first video frame of an infrared thermal imaging video;
a first determining module 202, configured to determine a target region in the first video frame;
a generating module 203, configured to generate, using a ship detection algorithm, a Gaussian pyramid corresponding to the target region;
an extracting and determining module 204, configured to extract feature values of the Gaussian pyramid, perform local visual contrast computation on the feature values, and determine a first monitoring target in the target region according to the computation result.
In the embodiment shown in the present application, the first determining module 202 may include a judging submodule, a first determining submodule, and a second determining submodule (not shown in the figure), where:
the judging submodule is configured to judge whether a water-sky boundary exists in the first video frame; if so, trigger the first determining submodule; if not, trigger the second determining submodule;
the first determining submodule is configured to determine a water region in the first video frame according to the water-sky boundary, and determine the water region as the target region;
the second determining submodule is configured to determine the entire region of the first video frame as the target region.
In the embodiment shown in the present application, the judging submodule may include a calculating unit, a statistics determining unit, a judging unit, and a determining unit (not shown in the figure), where:
the calculating unit is configured to calculate the standard deviation of gray values of each row of pixels in the first video frame;
the statistics determining unit is configured to compute gradient values between the standard deviations of gray values of adjacent rows of pixels and determine the maximum gradient value;
the judging unit is configured to judge whether the maximum gradient value is greater than a preset threshold: if not, determine that no water-sky boundary exists in the first video frame; if so, determine that a water-sky boundary exists in the first video frame;
the determining unit is configured to, when the judging unit's result is yes, determine the water-sky boundary according to the adjacent rows of pixels corresponding to the maximum gradient value.
In the embodiment shown in the present application, the first determining submodule may be specifically configured to:
calculate the average gray value of the pixels in each of the two regions separated by the water-sky boundary;
determine the region with the smaller average as the water region in the first video frame.
In the embodiment shown in the present application, the device may further include:
a processing module (not shown in the figure), configured to perform denoising and/or enhancement processing on the first video frame.
In the embodiment shown in the present application, the device may further include an acquiring module, a second determining module, and a statistics module (not shown in the figure), where:
the acquiring module is configured to acquire monitoring targets in a preset number of video frames, where the preset number of video frames are video frames in the infrared thermal imaging video whose capture moments precede the capture moment of the first video frame;
the second determining module is configured to determine the traveling direction of the first monitoring target according to the first monitoring target and the acquired monitoring targets;
the statistics module is configured to perform traffic statistics on the channel corresponding to the infrared thermal imaging video according to the determined traveling direction.
In the embodiment shown in the present application, the device may further include a second obtaining module, a third determining module, and a monitoring module (not shown in the figure), where:
the second obtaining module is configured to obtain a second video frame of a visible light video, where the second video frame and the first video frame are video frames captured for the same region at the same moment;
the third determining module is configured to determine a second monitoring target in the second video frame corresponding to the first monitoring target;
the monitoring module is configured to monitor the second monitoring target.
By applying the embodiment shown in FIG. 2 of the present application, a first video frame of an infrared thermal imaging video is obtained, and a first monitoring target is obtained in the first video frame. That is to say, this solution acquires the monitoring target through infrared thermal imaging video. Compared with marine radar and CCTV systems, infrared thermal imaging video capture devices are not susceptible to weather; even in bad weather and low visibility, such as heavy fog, rainy days, and nights, infrared thermal imaging remains clear. Therefore, applying the solution provided by the present application, the channel monitoring target can be acquired accurately.
An embodiment of the present application further provides an electronic device, as shown in FIG. 3, including: a housing 301, a processor 302, a memory 303, a circuit board 304, and a power circuit 305, where the circuit board 304 is disposed inside the space enclosed by the housing 301, and the processor 302 and the memory 303 are disposed on the circuit board 304; the power circuit 305 is configured to supply power to each circuit or component of the electronic device; the memory 303 is configured to store executable program code; and the processor 302 runs a program corresponding to the executable program code by reading the executable program code stored in the memory 303, so as to execute the channel monitoring target acquisition method, the method including:
obtaining a first video frame of an infrared thermal imaging video;
determining a target region in the first video frame;
generating a Gaussian pyramid corresponding to the target region;
extracting feature values of the Gaussian pyramid, performing local visual contrast computation on the feature values, and determining a first monitoring target in the target region according to the computation result.
The electronic device may be an infrared thermal imaging video capture device, a mobile phone, a tablet computer, a personal computer, a server, or the like, and is not specifically limited.
By applying the embodiment shown in FIG. 3 of the present application, a first video frame of an infrared thermal imaging video is obtained, and a first monitoring target is obtained in the first video frame. That is to say, this solution acquires the monitoring target through infrared thermal imaging video. Compared with marine radar and CCTV systems, infrared thermal imaging video capture devices are not susceptible to weather; even in bad weather and low visibility, such as heavy fog, rainy days, and nights, infrared thermal imaging remains clear. Therefore, applying the solution provided by the present application, the channel monitoring target can be acquired accurately.
An embodiment of the present application further provides executable program code, the executable program code being configured to be run to execute the channel monitoring target acquisition method, the method including:
obtaining a first video frame of an infrared thermal imaging video;
determining a target region in the first video frame;
generating a Gaussian pyramid corresponding to the target region;
extracting feature values of the Gaussian pyramid, performing local visual contrast computation on the feature values, and determining a first monitoring target in the target region according to the computation result.
By applying the embodiment shown in the present application, a first video frame of an infrared thermal imaging video is obtained, and a first monitoring target is obtained in the first video frame. That is to say, this solution acquires the monitoring target through infrared thermal imaging video. Compared with marine radar and CCTV systems, infrared thermal imaging video capture devices are not susceptible to weather; even in bad weather and low visibility, such as heavy fog, rainy days, and nights, infrared thermal imaging remains clear. Therefore, applying the solution provided by the present application, the channel monitoring target can be acquired accurately.
An embodiment of the present application further provides a storage medium, the storage medium being configured to store executable program code, and the executable program code being configured to be run to execute the channel monitoring target acquisition method, the method including:
obtaining a first video frame of an infrared thermal imaging video;
determining a target region in the first video frame;
generating a Gaussian pyramid corresponding to the target region;
extracting feature values of the Gaussian pyramid, performing local visual contrast computation on the feature values, and determining a first monitoring target in the target region according to the computation result.
By applying the embodiment shown in the present application, a first video frame of an infrared thermal imaging video is obtained, and a first monitoring target is obtained in the first video frame. That is to say, this solution acquires the monitoring target through infrared thermal imaging video. Compared with marine radar and CCTV systems, infrared thermal imaging video capture devices are not susceptible to weather; even in bad weather and low visibility, such as heavy fog, rainy days, and nights, infrared thermal imaging remains clear. Therefore, applying the solution provided by the present application, the channel monitoring target can be acquired accurately.
It should be noted that, in this document, relational terms such as first and second are used merely to distinguish one entity or operation from another, and do not necessarily require or imply any such actual relationship or order between these entities or operations. Moreover, the terms "comprise", "include", or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or device that includes a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or device that includes the element.
The embodiments in this specification are described in a related manner; for identical or similar parts between the embodiments, reference may be made to one another, and each embodiment focuses on its differences from the other embodiments. In particular, since the device, electronic device, executable program code, and storage medium embodiments are substantially similar to the method embodiments, their description is relatively brief, and reference may be made to the relevant parts of the description of the method embodiments.
Those of ordinary skill in the art can understand that all or part of the steps in the above method implementations can be completed by a program instructing relevant hardware, and the program can be stored in a computer-readable storage medium, such as a ROM/RAM, a magnetic disk, or an optical disc.
The above are merely preferred embodiments of the present application and are not intended to limit the present application. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present application shall fall within the protection scope of the present application.

Claims (15)

  1. A channel monitoring target acquisition method, characterized by comprising:
    obtaining a first video frame of an infrared thermal imaging video;
    determining a target region in the first video frame;
    generating a Gaussian pyramid corresponding to the target region;
    extracting feature values of the Gaussian pyramid, performing local visual contrast computation on the feature values, and determining a first monitoring target in the target region according to the computation result.
  2. The method according to claim 1, characterized in that determining the target region in the first video frame comprises:
    judging whether a water-sky boundary exists in the first video frame;
    if so, determining a water region in the first video frame according to the water-sky boundary, and determining the water region as the target region;
    if not, determining the entire region of the first video frame as the target region.
  3. The method according to claim 2, characterized in that judging whether a water-sky boundary exists in the first video frame comprises:
    calculating the standard deviation of gray values of each row of pixels in the first video frame;
    computing gradient values between the standard deviations of gray values of adjacent rows of pixels, and determining the maximum gradient value;
    judging whether the maximum gradient value is greater than a preset threshold;
    if not, determining that no water-sky boundary exists in the first video frame;
    if so, determining that a water-sky boundary exists in the first video frame, and determining the water-sky boundary according to the adjacent rows of pixels corresponding to the maximum gradient value.
  4. The method according to claim 2, characterized in that determining the water region in the first video frame according to the water-sky boundary comprises:
    calculating the average gray value of the pixels in each of the two regions separated by the water-sky boundary;
    determining the region with the smaller average as the water region in the first video frame.
  5. The method according to claim 1, characterized in that after determining the first monitoring target in the target region according to the computation result, the method further comprises:
    acquiring monitoring targets in a preset number of video frames, wherein the preset number of video frames are video frames in the infrared thermal imaging video whose capture moments precede the capture moment of the first video frame;
    determining the traveling direction of the first monitoring target according to the first monitoring target and the acquired monitoring targets;
    performing traffic statistics on the channel corresponding to the infrared thermal imaging video according to the determined traveling direction.
  6. The method according to claim 1, characterized in that the method further comprises:
    obtaining a second video frame of a visible light video, wherein the second video frame and the first video frame are video frames captured for the same region at the same moment;
    determining a second monitoring target in the second video frame corresponding to the first monitoring target;
    monitoring the second monitoring target.
  7. A channel monitoring target acquisition device, characterized by comprising:
    a first obtaining module, configured to obtain a first video frame of an infrared thermal imaging video;
    a first determining module, configured to determine a target region in the first video frame;
    a generating module, configured to generate a Gaussian pyramid corresponding to the target region;
    an extracting and determining module, configured to extract feature values of the Gaussian pyramid, perform local visual contrast computation on the feature values, and determine a first monitoring target in the target region according to the computation result.
  8. The device according to claim 7, characterized in that the first determining module comprises:
    a judging submodule, configured to judge whether a water-sky boundary exists in the first video frame; if so, trigger a first determining submodule; if not, trigger a second determining submodule;
    the first determining submodule, configured to determine a water region in the first video frame according to the water-sky boundary, and determine the water region as the target region;
    the second determining submodule, configured to determine the entire region of the first video frame as the target region.
  9. The device according to claim 8, characterized in that the judging submodule comprises:
    a calculating unit, configured to calculate the standard deviation of gray values of each row of pixels in the first video frame;
    a statistics determining unit, configured to compute gradient values between the standard deviations of gray values of adjacent rows of pixels and determine the maximum gradient value;
    a judging unit, configured to judge whether the maximum gradient value is greater than a preset threshold: if not, determine that no water-sky boundary exists in the first video frame; if so, determine that a water-sky boundary exists in the first video frame;
    a determining unit, configured to, when the judging unit's result is yes, determine the water-sky boundary according to the adjacent rows of pixels corresponding to the maximum gradient value.
  10. The device according to claim 8, characterized in that the first determining submodule is specifically configured to:
    calculate the average gray value of the pixels in each of the two regions separated by the water-sky boundary;
    determine the region with the smaller average as the water region in the first video frame.
  11. The device according to claim 7, characterized in that the device further comprises:
    an acquiring module, configured to acquire monitoring targets in a preset number of video frames, wherein the preset number of video frames are video frames in the infrared thermal imaging video whose capture moments precede the capture moment of the first video frame;
    a second determining module, configured to determine the traveling direction of the first monitoring target according to the first monitoring target and the acquired monitoring targets;
    a statistics module, configured to perform traffic statistics on the channel corresponding to the infrared thermal imaging video according to the determined traveling direction.
  12. The device according to claim 7, characterized in that the device further comprises:
    a second obtaining module, configured to obtain a second video frame of a visible light video, wherein the second video frame and the first video frame are video frames captured for the same region at the same moment;
    a third determining module, configured to determine a second monitoring target in the second video frame corresponding to the first monitoring target;
    a monitoring module, configured to monitor the second monitoring target.
  13. An electronic device, characterized by comprising: a housing, a processor, a memory, a circuit board, and a power circuit, wherein the circuit board is disposed inside the space enclosed by the housing, and the processor and the memory are disposed on the circuit board; the power circuit is configured to supply power to each circuit or component of the electronic device; the memory is configured to store executable program code; and the processor runs a program corresponding to the executable program code by reading the executable program code stored in the memory, so as to execute the channel monitoring target acquisition method according to any one of claims 1-6.
  14. Executable program code, characterized in that the executable program code is configured to be run to execute the channel monitoring target acquisition method according to any one of claims 1-6.
  15. A storage medium, characterized in that the storage medium is configured to store executable program code, and the executable program code is configured to be run to execute the channel monitoring target acquisition method according to any one of claims 1-6.
PCT/CN2017/085126 2016-07-08 2017-05-19 A channel monitoring target acquisition method and device WO2018006659A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610546159.2A CN107613244A (zh) 2016-07-08 2016-07-08 A channel monitoring target acquisition method and device
CN201610546159.2 2016-07-08

Publications (1)

Publication Number Publication Date
WO2018006659A1 true WO2018006659A1 (zh) 2018-01-11

Family

ID=60901386

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/085126 WO2018006659A1 (zh) 2016-07-08 2017-05-19 一种航道监控目标获取方法及装置

Country Status (2)

Country Link
CN (1) CN107613244A (zh)
WO (1) WO2018006659A1 (zh)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110738109A (zh) * 2019-09-10 2020-01-31 浙江大华技术股份有限公司 Method and device for detecting a user standing up, and computer storage medium
CN112102249A (zh) * 2020-08-19 2020-12-18 深圳数联天下智能科技有限公司 Method for detecting the presence of a human body, and related device
CN112953796A (zh) * 2021-02-03 2021-06-11 北京小米移动软件有限公司 Device state judging method and device, and storage medium
CN113743151A (zh) * 2020-05-27 2021-12-03 顺丰科技有限公司 Method and device for detecting objects thrown onto a road surface, and storage medium
CN113936221A (zh) * 2021-12-17 2022-01-14 北京威摄智能科技有限公司 Method and system applied to highway environment monitoring in plateau regions
CN114449144A (zh) * 2022-01-04 2022-05-06 航天科工智慧产业发展有限公司 Multi-camera snapshot linkage device and method

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109803076B (zh) * 2019-01-21 2020-12-04 刘善成 Method for ship image capture and ship name recognition at inland waterway traffic checkpoints
CN109948434B (zh) * 2019-01-31 2023-07-21 平安科技(深圳)有限公司 Method and device for counting persons boarding a ship, computer equipment, and storage medium
CN110390288B (zh) * 2019-04-26 2021-05-25 上海鹰觉科技有限公司 Computer-vision-based intelligent target search, positioning, and tracking system and method
CN111163290B (zh) * 2019-11-22 2021-06-25 东南大学 Method for detecting and tracking ships navigating at night

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101214851A (zh) * 2008-01-10 2008-07-09 黄席樾 Intelligent all-weather active safety early-warning system for ship navigation and early-warning method thereof
CN203084949U (zh) * 2013-02-16 2013-07-24 上海一航凯迈光机电设备有限公司 Integrated ship channel monitoring device for bridge safety
WO2013132057A1 (de) * 2012-03-09 2013-09-12 Rheinmetall Defence Electronics Gmbh Reconnaissance and warning system for ships for protection against pirate attacks
CN103398710A (zh) * 2013-08-06 2013-11-20 大连海事大学 Ship harbor entry and exit navigation system under night fog conditions and construction method thereof
CN104297176A (zh) * 2014-09-17 2015-01-21 武汉理工大学 Device, system, and method for all-weather visibility monitoring of mountainous reaches of the Yangtze River
CN105405132A (zh) * 2015-11-04 2016-03-16 河海大学 SAR image man-made target detection method based on visual contrast and information entropy

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS62204381A (ja) * 1986-03-04 1987-09-09 Mitsubishi Heavy Ind Ltd Ship image recognition apparatus
CN101527824A (zh) * 2009-04-07 2009-09-09 上海海事大学 Maritime search-and-rescue instrument based on an infrared detector
CN103583037B (zh) * 2011-04-11 2017-04-26 菲力尔系统公司 Infrared camera systems and methods
CN102231205A (zh) * 2011-06-24 2011-11-02 北京戎大时代科技有限公司 Multi-mode monitoring device and method

Also Published As

Publication number Publication date
CN107613244A (zh) 2018-01-19

Similar Documents

Publication Publication Date Title
WO2018006659A1 (zh) 一种航道监控目标获取方法及装置
US11643076B2 (en) Forward collision control method and apparatus, electronic device, program, and medium
CN107133559B (zh) 基于360度全景的运动物体检测方法
CN112800860B (zh) 一种事件相机和视觉相机协同的高速抛撒物检测方法和系统
CN111754394B (zh) 鱼眼图像中的对象检测方法、装置及存储介质
Xu et al. Fast vehicle and pedestrian detection using improved Mask R‐CNN
CN109448326B (zh) 一种基于快速图像识别的地质灾害智能群防监测系统
CN111259868B (zh) 基于卷积神经网络的逆行车辆检测方法、系统及介质
Almagbile Estimation of crowd density from UAVs images based on corner detection procedures and clustering analysis
Sharma Human detection and tracking using background subtraction in visual surveillance
Filonenko et al. Real-time flood detection for video surveillance
KR102579542B1 (ko) 군중 밀집도 기반의 위험 지역 자동 알림 시스템
CN110852179A (zh) 基于视频监控平台的可疑人员入侵的检测方法
CN114140745A (zh) 施工现场人员属性检测方法、系统、装置及介质
Shu et al. Small moving vehicle detection via local enhancement fusion for satellite video
Wu et al. Registration-based moving vehicle detection for low-altitude urban traffic surveillance
Kumar et al. Traffic surveillance and speed limit violation detection system
Poostchi et al. Spatial pyramid context-aware moving vehicle detection and tracking in urban aerial imagery
Tian et al. Multi-scale object detection for high-speed railway clearance intrusion
Tang Development of a multiple-camera tracking system for accurate traffic performance measurements at intersections
Li et al. A fog level detection method based on grayscale features
US20180342048A1 (en) Apparatus, system, and method for determining an object's location in image video data
Jo et al. Pothole detection algorithm based on saliency map for improving detection performance
Tong et al. Human positioning based on probabilistic occupancy map
Lin et al. Accurate coverage summarization of UAV videos

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17823471

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17823471

Country of ref document: EP

Kind code of ref document: A1
