WO2020010659A1 - Motor vehicle illegal whistle capture system based on image registration - Google Patents
Motor vehicle illegal whistle capture system based on image registration
- Publication number
- WO2020010659A1 (PCT/CN2018/098427)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- feature point
- image
- microphone array
- coordinates
- resolution image
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
- G06T7/33—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
- G06V20/54—Surveillance or monitoring of activities, e.g. for recognising suspicious objects of traffic, e.g. cars on the road, trains or boats
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/60—Type of objects
- G06V20/62—Text, e.g. of license plates, overlay texts or captions on TV images
- G06V20/63—Scene text, e.g. street names
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V30/00—Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
- G06V30/10—Character recognition
- G06V30/19—Recognition using electronic means
- G06V30/19007—Matching; Proximity measures
- G06V30/19013—Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
- G06V30/1902—Shifting or otherwise transforming the patterns to accommodate for positional errors
- G06V30/19067—Matching configurations of points or features, e.g. constellation matching
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/01—Detecting movement of traffic to be counted or controlled
- G08G1/0104—Measuring and analyzing of parameters relative to traffic conditions
- G08G1/0108—Measuring and analyzing of parameters relative to traffic conditions based on the source of data
- G08G1/0116—Measuring and analyzing of parameters relative to traffic conditions based on the source of data from roadside infrastructure, e.g. beacons
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/01—Detecting movement of traffic to be counted or controlled
- G08G1/0104—Measuring and analyzing of parameters relative to traffic conditions
- G08G1/0125—Traffic data processing
- G08G1/0133—Traffic data processing for classifying traffic situation
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/01—Detecting movement of traffic to be counted or controlled
- G08G1/017—Detecting movement of traffic to be counted or controlled identifying vehicles
- G08G1/0175—Detecting movement of traffic to be counted or controlled identifying vehicles by photographing vehicles, e.g. when violating traffic rules
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/01—Detecting movement of traffic to be counted or controlled
- G08G1/04—Detecting movement of traffic to be counted or controlled using optical or ultrasonic detectors
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20016—Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; Pyramid transform
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20092—Interactive image processing based on input by user
- G06T2207/20101—Interactive definition of point of interest, landmark or seed
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/60—Type of objects
- G06V20/62—Text, e.g. of license plates, overlay texts or captions on TV images
- G06V20/625—License plates
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V30/00—Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
- G06V30/10—Character recognition
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R1/00—Details of transducers, loudspeakers or microphones
- H04R1/20—Arrangements for obtaining desired frequency or directional characteristics
- H04R1/32—Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only
- H04R1/40—Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only by combining a number of identical transducers
- H04R1/406—Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only by combining a number of identical transducers microphones
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R2201/00—Details of transducers, loudspeakers or microphones covered by H04R1/00 but not provided for in any of its subgroups
- H04R2201/40—Details of arrangements for obtaining desired directional characteristic by combining a number of identical transducers covered by H04R1/40 but not provided for in any of its subgroups
- H04R2201/401—2D or 3D arrays of transducers
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R2430/00—Signal processing covered by H04R, not provided for in its groups
- H04R2430/20—Processing of the output signals of the acoustic transducers of an array for obtaining a desired directivity characteristic
Definitions
- the present invention relates to a technology in the field of intelligent traffic management, and in particular, to a motor vehicle illegal whistle capture system based on image registration.
- in existing motor vehicle illegal whistle capture systems, the coordinate transformation coefficients between the whistle sound source center coordinates obtained by the microphone array and the high-definition pictures obtained by the high-definition camera generally need to be calibrated after on-site installation.
- the coordinate transformation coefficients depend on the installation position, angle, and height of the microphone array and the HD camera.
- the general procedure of traditional calibration is: using a test car, sound the horn at different positions within the capture range; the microphone array collects the audio signal and computes the center coordinates of the whistle sound source; the pixel corresponding to the test vehicle's license plate at the whistle moment is located in the high-definition image; and the coordinate transformation coefficients between the sound source center coordinates and the license plate pixel are computed by a linear coordinate transformation formula.
- the present invention proposes an image-registration-based illegal whistle capture system for motor vehicles, which can automatically and accurately calibrate the coordinate transformation coefficients between the whistle sound source center coordinates obtained by the microphone array and the pixels of the high-resolution picture obtained by the HD camera;
- the invention relates to an image-registration-based illegal whistle capture system for motor vehicles, which includes an integrated microphone array for real-time acquisition of audio and images, a high-definition camera for real-time acquisition of high-resolution images, a control unit, and an image registration system.
- the integrated microphone array includes a microphone array composed of multiple microphones and a camera for capturing low-resolution images.
- when the control unit detects that a whistle has occurred, it acquires from the integrated microphone array one frame of low-resolution image superimposed with the whistle sound source center coordinates, simultaneously acquires one frame of high-resolution image from the HD camera, and outputs both to the image registration system.
- the image registration system obtains the inter-image transformation coefficients between the low-resolution image and the high-resolution image, determines the position on the high-resolution image corresponding to the sound source center coordinates, and identifies the license plate of the vehicle at that position by image target recognition.
- the camera and the microphone array in the integrated microphone array are an integrated design: once the integrated microphone array itself is assembled and calibrated, the relative positions of the camera and all microphones remain unchanged during installation and use of the illegal whistle capture system, so the preset coordinate coefficients between the sound source center coordinates computed by the array and the pixels of the low-resolution image also remain unchanged.
- the integrated microphone array outputs the low-resolution image superimposed with the whistle sound source center coordinates to the control unit.
- the invention relates to an image registration method based on the above system.
- two frames are collected via the control unit from the integrated microphone array and the high-definition camera at the same instant, and image feature points and corresponding feature point descriptors are obtained through the image registration system.
- feature point similarity is computed from the descriptor information and ranked; the feature points with the greatest similarity and fixed positions on the images are selected as the matching feature points of the two frames; the inter-image transformation coefficients between the integrated microphone array camera and the high-definition camera are then calculated by linear coordinate transformation, realizing position registration between the low-resolution image and the high-resolution image.
- obtaining the feature points of an image means: applying Gaussian blur to the picture and constructing a Gaussian pyramid to obtain different scale spaces; detecting local extremum points in the different scale spaces; and finally performing preliminary accurate localization of the local extremum points and eliminating edge response points to obtain the feature points.
- the feature points are preferably at least four pairs, and more preferably six pairs.
- a feature point descriptor is generated from the gradient magnitude and direction of the feature point.
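The selection of matching feature points described above can be sketched in NumPy. This is an illustrative sketch, not the patent's implementation: it assumes 128-dimensional descriptors, uses Euclidean distance as the similarity measure, and adds a mutual nearest-neighbour filter (not mentioned in the text) to suppress ambiguous matches before keeping the six best pairs.

```python
import numpy as np

def match_descriptors(desc_lo, desc_hi, n_pairs=6):
    """Rank candidate matches between two descriptor sets (rows are 128-D
    descriptors) by similarity and keep the best n_pairs mutual matches."""
    # Pairwise Euclidean distances: smaller distance = higher similarity.
    d = np.linalg.norm(desc_lo[:, None, :] - desc_hi[None, :, :], axis=2)
    nn_lo = d.argmin(axis=1)   # best high-res index for each low-res descriptor
    nn_hi = d.argmin(axis=0)   # best low-res index for each high-res descriptor
    # Keep only mutual nearest neighbours (ambiguous matches are dropped).
    mutual = [(i, nn_lo[i]) for i in range(len(desc_lo)) if nn_hi[nn_lo[i]] == i]
    # Sort by distance (ascending similarity ranking) and keep the top n_pairs.
    mutual.sort(key=lambda ij: d[ij[0], ij[1]])
    return mutual[:n_pairs]
```

A full system would additionally check that the selected points have stable, fixed positions across frames, as the text requires.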
- the present invention relates to an image registration system for implementing the above method, including: an image preprocessing module, a feature point calculation module, a feature point descriptor calculation module, a feature point similarity calculation module, a ranking module, and a linear coordinate transformation module, wherein:
- the image pre-processing module is connected to the feature point calculation module and outputs gray-scaled pictures to the feature point calculation module;
- the feature point calculation module is connected to the feature point descriptor calculation module and outputs feature point information to the feature point descriptor calculation module;
- the feature point descriptor calculation module is connected to the feature point similarity calculation module and outputs the feature point descriptor information to the feature point similarity calculation module;
- the feature point similarity calculation module is connected to the ranking module and outputs the feature point similarity information to the ranking module;
- the ranking module is connected to the linear coordinate transformation module and outputs matching feature points to the linear coordinate transformation module;
- the linear coordinate transformation module calculates the inter-image transformation coefficients of the two frames of images.
- compared with the prior art, the present invention uses an integrated microphone array and a high-definition camera to monitor whistling simultaneously, and completes the coordinate transformation between them by an image registration method, so the inter-image transformation coefficients of the integrated microphone array and the HD camera can be calibrated quickly and accurately, without any on-site horn tests with a vehicle; the calibration error is very small and is unaffected by the installation environment.
- during use, the system can optionally recompute the inter-image transformation coefficients by image registration at every whistle capture, so that even if natural or human factors change the relative position of the integrated microphone array and the HD camera, the accuracy of every whistle capture is still guaranteed.
- FIG. 1 is a flowchart of the present invention
- FIG. 2 is a schematic diagram of an embodiment
- FIG. 3 is a schematic diagram of the integrated microphone array
- 1 is the microphone in the microphone array, and 2 is the camera;
- FIG. 4 is a flowchart of the image registration calculation
- FIG. 5 shows the six pairs of matching feature points of the two frames, with the highest similarity and fixed positions on the images, obtained by the image registration calculation
- a is a low-resolution image
- b is a high-resolution image
- FIG. 6 is a graphical flowchart of evidence collection according to the present invention.
- this embodiment includes: an integrated microphone array, a high-definition camera, a control unit, and an image registration system.
- the integrated microphone array includes a microphone array composed of multiple microphones and a camera, and acquires in real time audio information and static and/or dynamic low-resolution images, while the HD camera captures high-resolution images in real time; the camera included in the integrated microphone array usually has a lower resolution, so the images it obtains are low-resolution images; the images obtained by the HD camera are high-resolution images and can be used to identify vehicle license plate information.
- the image registration system includes: an image preprocessing module, a feature point calculation module, a feature point descriptor calculation module, a feature point similarity calculation module, a ranking module, and a linear coordinate transformation module, wherein: the image preprocessing module is connected to the feature point calculation module and outputs grayscaled pictures to it; the feature point calculation module is connected to the feature point descriptor calculation module and outputs feature point information to it; the feature point descriptor calculation module is connected to the feature point similarity calculation module and outputs the descriptor information to it; the feature point similarity calculation module is connected to the ranking module and outputs the similarity information to it; the ranking module is connected to the linear coordinate transformation module and outputs the matching feature points to it; the linear coordinate transformation module calculates the inter-image transformation coefficients of the two frames of images.
- when the control unit determines that a whistle has occurred, it obtains from the integrated microphone array one frame of low-resolution image superimposed with the whistle sound source center coordinates and simultaneously acquires one frame of high-resolution image from the HD camera; the image registration system is used to obtain the inter-image transformation coefficients between the low-resolution image and the high-resolution image and the position on the high-resolution image corresponding to the sound source center coordinates, and the license plate information of the vehicle at that position is then identified.
- the image registration system automatically collects the two same-instant frames from the integrated microphone array camera and the HD camera, obtains the feature points of the images and the corresponding feature point descriptors, computes feature point similarity from the descriptor information, ranks the similarities, automatically selects several feature points with the greatest similarity and fixed positions on the images as the matching feature points of the two frames, and then calculates the inter-image transformation coefficients between the integrated microphone array camera and the HD camera by linear coordinate transformation.
- the inter-image transformation coefficients are used to calculate, for any pixel coordinate on the image obtained by the integrated microphone array camera, the corresponding pixel coordinate on the image obtained by the HD camera.
- the first step is to grayscale the image.
- the second step is to apply Gaussian blur to the picture and construct a Gaussian pyramid to obtain different scale spaces, and then detect local extremum points in the different scale spaces.
- feature points are obtained after preliminary accurate localization of the local extremum points and elimination of edge response points; feature point descriptors are generated from the gradient magnitude and direction of the feature points; feature point similarity is computed from the descriptor information and ranked.
- the n pairs of feature points with the greatest similarity and fixed positions on the images can be selected automatically as the matching feature points of the two frames, or the n pairs can be selected manually, with n ≥ 4.
- the Gaussian blur refers to: computing a two-dimensional blur template from the N-dimensional Gaussian function, and convolving this template with the original image to blur it.
- the N-dimensional Gaussian function is G(r) = (1/(2πσ²)^(N/2)) · e^(−r²/(2σ²)), where σ is the standard deviation of the normal distribution (the larger σ, the more blurred the image) and r is the blur radius, i.e. the distance from a template element to the template center.
- the two-dimensional blur template has size m × n, and the Gaussian formula for element (x, y) of the template is G(x, y) = (1/(2πσ²)) · e^(−((x − m/2)² + (y − n/2)²)/(2σ²)).
- a 5 × 5 Gaussian template is shown in the table in the embodiment below.
- the number of layers of the Gaussian pyramid is determined according to the original size of the image and the size of the tower top image.
- the gradient magnitude and direction at a feature point are computed from its neighbors in the scale space L: the gradient magnitude is m(x, y) = √((L(x+1, y) − L(x−1, y))² + (L(x, y+1) − L(x, y−1))²), and the gradient direction is θ(x, y) = arctan((L(x, y+1) − L(x, y−1)) / (L(x+1, y) − L(x−1, y))).
- the feature point descriptor refers to: describing a feature point with a set of vectors, so that the vector does not change with, for example, illumination or viewing angle; the vector, i.e. the feature point descriptor, covers the feature point and the surrounding pixels that contribute to it, and should be highly distinctive in order to improve the probability of correct matching of feature points.
- the feature point descriptor is calculated as follows:
- A3: compute the gradient magnitude and direction of each pixel within the radius region around the feature point, multiply each gradient magnitude by a Gaussian weight of the form e^(−(x_k² + y_k²)/(2σ_w²)), and accumulate the weighted magnitudes into a direction histogram, where x_k is the column distance from the pixel to the feature point, y_k is the row distance, σ_w equals half of (the descriptor window width 3σ × the number of histogram columns), d_r is the contribution factor to adjacent rows, d_c is the contribution factor to adjacent columns, and d_o is the contribution factor to adjacent directions.
- A4 The feature point descriptor vector is normalized.
- the feature points are sorted by similarity from high to low; the n pairs with the highest similarity and fixed positions on the images can be selected automatically, or n pairs can be selected manually; these are defined as the matching feature points, with n ≥ 4.
- in this embodiment n = 6 is selected, but n is not limited to the value 6.
- the third step is the spatial coordinate transformation:
- A1: projective transformation formula: x' = (h11·x + h21·y + h31)/(h13·x + h23·y + 1) and y' = (h12·x + h22·y + h32)/(h13·x + h23·y + 1), where (x, y) are the sound source center coordinates on the low-resolution image obtained by the integrated microphone array, (x', y') are the pixel coordinates on the high-resolution image obtained by the HD camera, [[h11, h12], [h21, h22]] is the rotation matrix, [h31, h32] is the translation vector, and the remaining coefficient (h33, fixed to 1) sets the zoom scale.
- A2: the inter-image transformation coefficients between the low-resolution image pixel coordinates obtained by the integrated microphone array and the high-resolution image pixel coordinates obtained by the HD camera are established by the following system of equations, solved by the least-squares method: h11·x_i + h21·y_i + h31 − x_i'·(h13·x_i + h23·y_i) = x_i' and h12·x_i + h22·y_i + h32 − y_i'·(h13·x_i + h23·y_i) = y_i', for i = 1, …, n.
- (x_i, y_i) are the coordinates of the matching feature points on the low-resolution image obtained by the integrated microphone array.
- (x_i', y_i') are the coordinates of the matching feature points on the high-resolution image obtained by the HD camera.
- T = [h11 h12 h13 h21 h22 h23 h31 h32]ᵀ is the vector of inter-image transformation coefficients.
- A3: from the projective transformation formula, the sound source center coordinates (x, y) on the low-resolution image correspond to the coordinates (x', y') in the high-resolution image given by the two equations of A1.
- for the whistle sound source center coordinates, a spherical-wave beamforming algorithm with auto-spectra removed is used to localize the honking vehicle's sound source and generate a sound pressure map; the coordinates of the maximum sound pressure in the map are the whistle sound source center coordinates.
- the beamforming algorithm is specifically: V(k, w) = (1/(M² − M)) · Σ_{m=1..M} Σ_{n≠m} v_m*(k) · C_nm · v_n(k), where V(k, w) is the mean-square beamformer output, k is the focus direction, w is the angular frequency, M is the number of sensors in the microphone array, v_m(k) is the spherical-wave steering element for microphone m (depending on k through the distance from the microphone to the focus point), C_nm is the cross-spectrum of the sound pressure signal received by microphone m relative to that received by microphone n, r_m is the coordinate vector of microphone m, and r_n is the coordinate vector of microphone n.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Multimedia (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Chemical & Material Sciences (AREA)
- Analytical Chemistry (AREA)
- Studio Devices (AREA)
- Image Analysis (AREA)
Abstract
A motor vehicle illegal whistle capture system based on image registration, comprising: an integrated microphone array for real-time acquisition of audio and images, a high-definition camera for real-time acquisition of high-resolution images, a control unit, and an image registration system. A low-resolution image and a high-resolution image are acquired at the same instant; feature points and feature point descriptors of the two frames are extracted; feature point similarity is computed from the descriptors; the six pairs of feature points with the highest similarity and fixed positions on the images are then selected as matching feature points, either automatically by similarity ranking or manually; finally, the inter-image transformation coefficients of the two frames are calculated by linear coordinate transformation, achieving accurate calibration of the whistle capture system. The invention can automatically and accurately calibrate the relative position information of the integrated microphone array and the high-definition camera, and can recalibrate their positions after every whistle trigger.
Description
The present invention relates to a technology in the field of intelligent traffic management, and in particular to a motor vehicle illegal whistle capture system based on image registration.
In existing motor vehicle illegal whistle capture systems, the coordinate transformation coefficients between the whistle sound source center coordinates obtained by the microphone array and the high-definition picture obtained by the high-definition camera generally need to be calibrated after on-site installation; these coefficients depend on the installation position, angle, and height of the microphone array and the high-definition camera. The traditional calibration procedure is: using a test car, sound the horn at different positions within the capture range; the microphone array collects the audio signal and computes the center coordinates of the whistle sound source; the pixel corresponding to the test vehicle's license plate at the whistle moment is located in the high-definition image; and the coordinate transformation coefficients between the sound source center coordinates and the license plate pixel are computed by a linear coordinate transformation formula.
This calibration method has three problems. 1. The procedure is cumbersome: a test car must be used to deliberately produce horn sounds at the installation site, which may disturb residents. 2. The coefficients calibrated by on-site horn tests may contain large errors, ultimately causing inaccurate localization of illegally honking vehicles. The error has two sources: on-site noise can bias the sound source center coordinates computed by the microphone array; and different test cars have different horn mounting positions and structures, so the actual sound emission position varies widely, while calibration assumes it coincides with the license plate position. Since both the computed sound source center coordinates and the designated emission position in the high-definition image contain errors, the calibrated transformation coefficients necessarily contain errors. 3. Once calibration is completed, the coefficients are fixed; however, weather, temperature, human interference, and other factors may change the relative position of the microphone array and the high-definition camera during use, so the original coefficients no longer apply, the capture accuracy degrades, and the wrong vehicle may even be captured.
Summary of the Invention
To address the shortcomings of the prior art, the present invention proposes a motor vehicle illegal whistle capture system based on image registration, which can automatically and accurately calibrate the coordinate transformation coefficients between the whistle sound source center coordinates obtained by the microphone array and the pixels of the high-definition picture obtained by the high-definition camera;
The present invention is realized by the following technical solutions:
The present invention relates to a motor vehicle illegal whistle capture system based on image registration, comprising: an integrated microphone array for real-time acquisition of audio and images, a high-definition camera for real-time acquisition of high-resolution images, a control unit, and an image registration system, wherein: the integrated microphone array comprises a microphone array of multiple microphones and a camera for acquiring low-resolution images; when the control unit detects that a whistle has occurred, it acquires from the integrated microphone array one frame of low-resolution image superimposed with the whistle sound source center coordinates, simultaneously acquires one frame of high-resolution image from the high-definition camera, and outputs both to the image registration system; the image registration system obtains the inter-image transformation coefficients between the low-resolution image and the high-resolution image, determines the position on the high-resolution image corresponding to the sound source center coordinates, and identifies the license plate of the vehicle at that position by image target recognition.
The camera and the microphone array in the integrated microphone array are an integrated design: once the integrated microphone array itself is assembled and calibrated, the relative positions of the camera and all microphones remain unchanged during installation and use of the illegal whistle capture system, so the preset coordinate coefficients between the sound source center coordinates computed by the integrated microphone array and the pixels of the low-resolution image remain unchanged; the integrated microphone array outputs the low-resolution image superimposed with the whistle sound source center coordinates to the control unit.
The present invention also relates to an image registration method based on the above system: the control unit acquires two frames of images, at the same instant, from the integrated microphone array and the high-definition camera; the image registration system extracts feature points and corresponding feature point descriptors, computes feature point similarity from the descriptor information, ranks the similarities, and selects several feature points with the highest similarity and fixed positions on the images as the matching feature points of the two frames; the inter-image transformation coefficients between the integrated microphone array camera and the high-definition camera are then computed by linear coordinate transformation, realizing position registration from the low-resolution image to the high-resolution image.
Extracting the feature points of an image refers to: applying Gaussian blur to the picture and constructing a Gaussian pyramid to obtain different scale spaces; detecting local extremum points in the different scale spaces; and finally performing preliminary accurate localization of the local extremum points and eliminating edge response points to obtain the feature points.
The feature points are preferably at least four pairs, and more preferably six pairs.
A feature point descriptor is generated from the gradient magnitude and direction of the feature point.
The feature point similarity refers to: for a feature point descriptor R_i = (r_i1, r_i2, …, r_i128) in the high-resolution image and a feature point descriptor S_j = (s_j1, s_j2, …, s_j128) in the low-resolution image, computing a similarity measure between any two descriptors, e.g. the Euclidean distance d(R_i, S_j) = √(Σ_{k=1..128} (r_ik − s_jk)²), with a smaller distance meaning a higher similarity.
The present invention also relates to an image registration system implementing the above method, comprising: an image preprocessing module, a feature point calculation module, a feature point descriptor calculation module, a feature point similarity calculation module, a ranking module, and a linear coordinate transformation module, wherein: the image preprocessing module is connected to the feature point calculation module and outputs grayscaled pictures to it; the feature point calculation module is connected to the feature point descriptor calculation module and outputs feature point information to it; the feature point descriptor calculation module is connected to the feature point similarity calculation module and outputs the descriptor information to it; the feature point similarity calculation module is connected to the ranking module and outputs the similarity information to it; the ranking module is connected to the linear coordinate transformation module and outputs the matching feature points to it; the linear coordinate transformation module computes the inter-image transformation coefficients of the two frames.
Technical Effects
Compared with the prior art, the present invention uses an integrated microphone array and a high-definition camera to monitor whistling simultaneously, and completes the coordinate transformation between them by an image registration method, so the inter-image transformation coefficients of the integrated microphone array and the high-definition camera can be calibrated quickly and accurately, without any on-site horn tests with a vehicle; the calibration error is very small and unaffected by the installation environment. During use, the system can optionally recompute the inter-image transformation coefficients by image registration at every whistle capture, so that even if natural or human factors change the relative position of the integrated microphone array and the high-definition camera, the accuracy of every capture is still guaranteed.
FIG. 1 is a flowchart of the present invention;
FIG. 2 is a schematic diagram of the embodiment;
FIG. 3 is a schematic diagram of the integrated microphone array;
In the figure: 1 is a microphone in the microphone array, 2 is the camera;
FIG. 4 is a flowchart of the image registration calculation;
FIG. 5 shows the six pairs of matching feature points of the two frames, with the highest similarity and fixed positions on the images, obtained by the image registration calculation;
In the figure: a is the low-resolution image, b is the high-resolution image;
FIG. 6 is a graphical flowchart of evidence collection according to the present invention;
The license plate numbers and digits in the drawings have been modified and bear no relation to the actual vehicle with that plate.
As shown in FIG. 2 and FIG. 3, this embodiment comprises: an integrated microphone array, a high-definition camera, a control unit, and an image registration system, wherein: the integrated microphone array comprises a microphone array of multiple microphones and one camera, acquiring in real time audio information and static and/or dynamic low-resolution images, while the high-definition camera acquires high-resolution images in real time; the camera included in the integrated microphone array usually has a lower resolution, so its images are low-resolution images; the images obtained by the high-definition camera are high-resolution images and can be used to identify vehicle license plate information.
The image registration system comprises: an image preprocessing module, a feature point calculation module, a feature point descriptor calculation module, a feature point similarity calculation module, a ranking module, and a linear coordinate transformation module, wherein: the image preprocessing module is connected to the feature point calculation module and outputs grayscaled pictures to it; the feature point calculation module is connected to the feature point descriptor calculation module and outputs feature point information to it; the feature point descriptor calculation module is connected to the feature point similarity calculation module and outputs the descriptor information to it; the feature point similarity calculation module is connected to the ranking module and outputs the similarity information to it; the ranking module is connected to the linear coordinate transformation module and outputs the matching feature points to it; the linear coordinate transformation module computes the inter-image transformation coefficients of the two frames.
As shown in FIG. 1 and FIG. 6, when the control unit determines that a whistle has occurred, it acquires from the integrated microphone array one frame of low-resolution image superimposed with the whistle sound source center coordinates and simultaneously acquires one frame of high-resolution image from the high-definition camera; the image registration system obtains the inter-image transformation coefficients between the low-resolution image and the high-resolution image and the position on the high-resolution image corresponding to the sound source center coordinates, and, combined with image target recognition, outputs the license plate information of the vehicle at that position.
As shown in FIG. 4, the image registration system automatically acquires the two same-instant frames from the integrated microphone array camera and the high-definition camera, extracts the feature points and corresponding descriptors, computes and ranks feature point similarity, automatically selects several feature points with the highest similarity and fixed positions on the images as the matching feature points of the two frames, computes the inter-image transformation coefficients between the two cameras by linear coordinate transformation, and uses these coefficients to compute, for any pixel coordinate on the image from the integrated microphone array camera, the corresponding pixel coordinate on the image from the high-definition camera. The specific procedure is as follows:
Step 1: grayscale the images.
Step 2: apply Gaussian blur to the pictures and construct a Gaussian pyramid to obtain different scale spaces; detect local extremum points in the different scale spaces; obtain the feature points after preliminary accurate localization of the extremum points and elimination of edge response points; generate feature point descriptors from the gradient magnitude and direction of the feature points; compute feature point similarity from the descriptor information and rank it; then either automatically select the n pairs of feature points with the highest similarity and fixed positions on the images as the matching feature points of the two frames, or select the n pairs manually, with n ≥ 4. In this embodiment n = 6 is chosen, but n is not limited to the value 6, as shown in FIG. 5.
Gaussian blur refers to: computing a two-dimensional blur template from the N-dimensional Gaussian function and convolving this template with the original image to blur it.
Convolution refers to: determining the size of the Gaussian template matrix from the value of σ, computing the template values, and convolving with the original image to obtain its Gaussian-blurred version; that is, the scale space of an image is defined as L(x, y, σ) = G(x, y, σ) * I(x, y), the convolution of a variable-scale Gaussian G(x, y, σ) with the original image I(x, y), where * denotes the convolution operation.
The two-dimensional blur template has size m × n, and the Gaussian formula for element (x, y) of the template is G(x, y) = (1/(2πσ²)) · e^(−((x − m/2)² + (y − n/2)²)/(2σ²)).
To ensure the template elements lie in [0, 1], the template matrix is preferably normalized; a 5 × 5 Gaussian template is shown in the following table:
6.58573e-006 | 0.000424781 | 0.00170354 | 0.000424781 | 6.58573e-006 |
0.000424781 | 0.0273984 | 0.109878 | 0.0273984 | 0.000424781 |
0.00170354 | 0.109878 | 0.440655 | 0.109878 | 0.00170354 |
0.000424781 | 0.0273984 | 0.109878 | 0.0273984 | 0.000424781 |
6.58573e-006 | 0.000424781 | 0.00170354 | 0.000424781 | 6.58573e-006 |
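The table above can be reproduced, up to rounding, by normalizing a sampled two-dimensional Gaussian. The value σ = 0.6 is an assumption, since the text does not state the standard deviation used, but it matches the ratios between the table entries:

```python
import numpy as np

def gaussian_template(size=5, sigma=0.6):
    """Build a normalized size x size Gaussian blur template.

    sigma = 0.6 is an assumed value; the patent's table does not state it,
    but this sigma reproduces the table's entry ratios."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    g = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    return g / g.sum()   # normalize so the template sums to 1

t = gaussian_template()
```

Convolving an image with `t` yields its Gaussian-blurred version, as described in the convolution definition above.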
The number of layers of the Gaussian pyramid is determined jointly by the original image size and the size of the top image: n = log2(min(M, N)) − t, with t ∈ [0, log2(min(M, N))], where M, N are the dimensions of the original image and t is the base-2 logarithm of the smallest dimension of the top image.
In the Gaussian pyramid, a Difference-of-Gaussians operator (DoG operator) is used for extremum detection, i.e. D(x, y, σ) = [G(x, y, kσ) − G(x, y, σ)] * I(x, y) = L(x, y, kσ) − L(x, y, σ); each pixel is compared with its neighbors in both the image domain and the scale domain to obtain the local extremum points.
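A single DoG layer can be sketched as follows; this is a minimal illustration using SciPy's Gaussian filter, not the full pyramid:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_layer(image, sigma, k=2**0.5):
    """Difference of Gaussians: D(x, y, sigma) = L(x, y, k*sigma) - L(x, y, sigma).

    L is the image blurred at the given scale; extrema of D across space and
    scale are the candidate feature points. k = sqrt(2) is a common choice."""
    L1 = gaussian_filter(image.astype(float), sigma)
    L2 = gaussian_filter(image.astype(float), k * sigma)
    return L2 - L1
```

A full detector would stack several such layers per octave and compare each pixel with its 26 neighbors in space and scale.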
Preliminary accurate localization refers to: the Taylor expansion (fitting function) of the DoG operator in scale space is D(X) = D + (∂D/∂X)ᵀ X + ½ Xᵀ (∂²D/∂X²) X, where X = (x, y, σ)ᵀ; setting the derivative to zero gives the extremum offset X̂ = −(∂²D/∂X²)⁻¹ (∂D/∂X), and the corresponding function value is D(X̂) = D + ½ (∂D/∂X)ᵀ X̂; all extremum points with |D(X̂)| < 0.04 are deleted, yielding the preliminary localization result; 0.04 is an empirical value.
Preferably, unstable edge response points are further eliminated from the preliminary localization result, specifically: construct the Hessian matrix H = [[D_xx, D_xy], [D_xy, D_yy]], with Tr(H) = D_xx + D_yy = α + β and Det(H) = D_xx · D_yy − (D_xy)² = αβ, where α represents the gradient in the x direction and β the gradient in the y direction. To check whether the principal curvature is below a threshold r (generally r = 10), test whether Tr(H)² / Det(H) < (r + 1)² / r holds; if it holds, the feature point is kept, otherwise it is eliminated, yielding the accurate localization result.
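The edge-response test can be sketched directly from the formulas above; this is a minimal illustration, where `dxx`, `dyy`, `dxy` are the second-order differences of the DoG image at the candidate point:

```python
def keep_feature(dxx, dyy, dxy, r=10.0):
    """Edge-response test: keep a candidate point only if
    Tr(H)^2 / Det(H) < (r + 1)^2 / r, with H = [[dxx, dxy], [dxy, dyy]]."""
    tr = dxx + dyy
    det = dxx * dyy - dxy * dxy
    if det <= 0:           # curvatures of opposite sign: reject outright
        return False
    return tr * tr / det < (r + 1.0) ** 2 / r
```

A corner-like point has two similar curvatures and passes; a point on an edge has one large and one small curvature, so the ratio blows up and the point is rejected.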
A feature point descriptor refers to: describing a feature point with a set of vectors so that the vector does not change with, for example, illumination or viewing angle; the vector, i.e. the feature point descriptor, covers the feature point and the surrounding pixels that contribute to it, and should be highly distinctive in order to improve the probability of correct matching of feature points.
The feature point descriptor is computed as follows:
A3: compute the gradient magnitude and direction of each pixel within the radius region around the feature point, multiply each gradient magnitude by a Gaussian weight of the form e^(−(x_k² + y_k²)/(2σ_w²)), and accumulate the weighted magnitudes into a direction histogram, where x_k is the column distance from the pixel to the feature point, y_k is the row distance, σ_w equals half of (the descriptor window width 3σ × the number of histogram columns), d_r is the contribution factor to adjacent rows, d_c the contribution factor to adjacent columns, and d_o the contribution factor to adjacent directions.
The feature point similarity refers to: for a feature point descriptor R_i = (r_i1, r_i2, …, r_i128) in the high-resolution image and a feature point descriptor S_j = (s_j1, s_j2, …, s_j128) in the low-resolution image, computing a similarity measure between any two descriptors, e.g. the Euclidean distance d(R_i, S_j) = √(Σ_{k=1..128} (r_ik − s_jk)²), with a smaller distance meaning a higher similarity.
Feature point similarity ranking: the feature points are sorted from high to low similarity; the n pairs of feature points with the highest similarity and fixed positions on the images can be selected automatically, or n pairs can be selected manually; these are defined as the matching feature points, with n ≥ 4; this embodiment chooses n = 6, and n is not limited to the value 6.
Step 3: spatial coordinate transformation:
A1: projective transformation formula: x' = (h11·x + h21·y + h31)/(h13·x + h23·y + 1) and y' = (h12·x + h22·y + h32)/(h13·x + h23·y + 1), where (x, y) are the sound source center coordinates on the low-resolution image obtained by the integrated microphone array, (x', y') are the pixel coordinates on the high-resolution image obtained by the high-definition camera, [[h11, h12], [h21, h22]] is the rotation matrix, [h31, h32] is the translation vector, and the remaining coefficient (h33, fixed to 1) sets the zoom scale.
A2: the inter-image transformation coefficients between the low-resolution image pixel coordinates obtained by the integrated microphone array and the high-resolution image pixel coordinates obtained by the high-definition camera are established by the following system of equations, solved by the least-squares method to obtain the coefficients: for i = 1, 2, …, n (n ≥ 4): h11·x_i + h21·y_i + h31 − x_i'·(h13·x_i + h23·y_i) = x_i' and h12·x_i + h22·y_i + h32 − y_i'·(h13·x_i + h23·y_i) = y_i', where (x_i, y_i) are the matching feature point coordinates of the low-resolution image obtained by the integrated microphone array, (x_i', y_i') are those of the high-resolution image obtained by the high-definition camera, and T = [h11 h12 h13 h21 h22 h23 h31 h32]ᵀ is the vector of inter-image transformation coefficients.
A3: from the projective transformation formula, the sound source center coordinates (x, y) on the low-resolution image obtained by the integrated microphone array correspond to the coordinates (x', y') in the high-resolution image obtained by the high-definition camera: x' = (h11·x + h21·y + h31)/(h13·x + h23·y + 1), y' = (h12·x + h22·y + h32)/(h13·x + h23·y + 1).
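Steps A2 and A3 can be sketched as a linear least-squares solve. This is an illustrative sketch under the convention above ([h31, h32] as the translation terms, h33 = 1); the point coordinates used in the example are hypothetical:

```python
import numpy as np

def solve_homography(pts_lo, pts_hi):
    """Step A2: solve for T = [h11 ... h32] (h33 = 1) by linear least squares
    from n >= 4 matched point pairs."""
    rows, rhs = [], []
    for (x, y), (xp, yp) in zip(pts_lo, pts_hi):
        # x' (h13 x + h23 y + 1) = h11 x + h21 y + h31  ->  linear in the h's
        rows.append([x, 0, -x * xp, y, 0, -y * xp, 1, 0]); rhs.append(xp)
        rows.append([0, x, -x * yp, 0, y, -y * yp, 0, 1]); rhs.append(yp)
    T, *_ = np.linalg.lstsq(np.array(rows, float), np.array(rhs, float), rcond=None)
    return T  # [h11, h12, h13, h21, h22, h23, h31, h32]

def map_point(T, x, y):
    """Step A3: map low-resolution coordinates (x, y) to (x', y')."""
    h11, h12, h13, h21, h22, h23, h31, h32 = T
    w = h13 * x + h23 * y + 1.0
    return ((h11 * x + h21 * y + h31) / w, (h12 * x + h22 * y + h32) / w)
```

With exact correspondences the 2n × 8 system is consistent, so the least-squares solution recovers the transform exactly; with noisy matches it gives the best fit in the least-squares sense.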
For the whistle sound source center coordinates, a spherical-wave beamforming algorithm with auto-spectra removed is used to localize the honking vehicle's sound source and generate a sound pressure map; the coordinates of the maximum sound pressure in the map are the whistle sound source center coordinates.
The beamforming algorithm is specifically: V(k, w) = (1/(M² − M)) · Σ_{m=1..M} Σ_{n≠m} v_m*(k) · C_nm · v_n(k), where V(k, w) is the mean-square beamformer output, k is the focus direction, w is the angular frequency, M is the number of sensors in the microphone array, v_m(k) is the spherical-wave steering element for microphone m (depending on k through the distance from the microphone to the focus point), C_nm is the cross-spectrum of the sound pressure signal received by microphone m relative to that received by microphone n, r_m is the coordinate vector of microphone m, and r_n is the coordinate vector of microphone n.
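The diagonal-removal beamformer can be sketched as follows. This is a minimal free-field, single-frequency illustration, not the patent's implementation; the microphone layout and source position in the example are hypothetical:

```python
import numpy as np

def power_map(mics, csm, freq, grid, c=343.0):
    """Cross-spectral beamforming with the auto-spectra (m = n terms) removed.

    mics: (M, 2) microphone positions [m]; csm: (M, M) complex cross-spectral
    matrix; grid: (G, 2) candidate focus points in the source plane (kept 2-D
    for brevity). Returns the beamformer output at each focus point."""
    M = len(mics)
    k = 2.0 * np.pi * freq / c                   # wavenumber
    out = np.empty(len(grid))
    for g, pt in enumerate(grid):
        d = np.linalg.norm(mics - pt, axis=1)    # mic-to-focus distances
        v = np.exp(-1j * k * d)                  # spherical-wave steering
        c_nd = csm - np.diag(np.diag(csm))       # remove auto-spectra
        out[g] = np.real(np.conj(v) @ c_nd @ v) / (M * M - M)
    return out
```

Scanning the focus grid over the capture area yields the sound pressure map; the grid point with the maximum output is taken as the sound source center, as described above.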
The specific implementation above may be locally adjusted in different ways by those skilled in the art without departing from the principle and spirit of the present invention; the scope of protection of the present invention is defined by the claims and is not limited by the specific implementation above, and every implementation within that scope is bound by the present invention.
Claims (10)
- A motor vehicle illegal whistle capture system based on image registration, comprising: an integrated microphone array for real-time acquisition of audio and images, a high-definition camera for real-time acquisition of high-resolution images, a control unit, and an image registration system, wherein: the integrated microphone array comprises a microphone array of multiple microphones and a camera for acquiring low-resolution images; when the control unit detects that a whistle has occurred, it acquires from the integrated microphone array one frame of low-resolution image superimposed with the whistle sound source center coordinates, simultaneously acquires one frame of high-resolution image from the high-definition camera, and outputs both to the image registration system; the image registration system obtains the inter-image transformation coefficients between the low-resolution image and the high-resolution image, determines the position on the high-resolution image corresponding to the sound source center coordinates, and identifies the license plate of the vehicle at that position by image target recognition.
- The system according to claim 1, characterized in that the camera and the microphone array in the integrated microphone array are an integrated design, i.e. the relative positions of the camera and all microphones of the microphone array remain unchanged; the preset coordinate coefficients between the sound source center coordinates computed by the integrated microphone array and the pixels of the low-resolution image remain unchanged; and the integrated microphone array outputs the low-resolution image superimposed with the whistle sound source center coordinates to the control unit.
- An image registration method based on the system of claim 1 or 2, characterized in that the control unit acquires two same-instant frames of images from the integrated microphone array and the high-definition camera; the image registration system extracts feature points and corresponding feature point descriptors, computes feature point similarity from the descriptor information, ranks the similarities, and selects several feature points with the highest similarity and fixed positions on the images as the matching feature points of the two frames; the inter-image transformation coefficients between the integrated microphone array camera and the high-definition camera are then computed by linear coordinate transformation, realizing position registration from the low-resolution image to the high-resolution image.
- The method according to claim 3, characterized in that extracting the feature points of an image refers to: applying Gaussian blur to the picture and constructing a Gaussian pyramid to obtain different scale spaces, detecting local extremum points in the different scale spaces, and finally performing preliminary accurate localization of the local extremum points and eliminating edge response points to obtain the feature points.
- The method according to claim 3, characterized in that there are at least four pairs of feature points.
- The method according to claim 3, characterized in that the feature point descriptors are generated from the gradient magnitude and direction of the feature points.
- The method according to claim 3, characterized in that the feature point descriptor is computed as follows: A3: compute the gradient magnitude and direction of each pixel within the radius region around the feature point, multiply each gradient magnitude by a Gaussian weight parameter, and generate a direction histogram, where x_k is the column distance from the pixel to the feature point, y_k is the row distance, σ_w equals half of (the descriptor window width 3σ × the number of histogram columns), d_r is the contribution factor to adjacent rows, d_c the contribution factor to adjacent columns, and d_o the contribution factor to adjacent directions;
- The method according to claim 3, characterized in that the linear coordinate transformation comprises the following steps: A1: the projective transformation formula (equation image omitted in source), where (x, y) are the sound-source center-position coordinates on the low-resolution image obtained by the integrated microphone array, (x', y') are the pixel coordinates of the high-resolution image obtained by the high-definition camera, the 2×2 coefficient matrix is the rotation matrix, [h31 h32] is the translation vector, and the remaining coefficient is the scale factor; A2: for the inter-image transformation coefficients between the pixel coordinates of the low-resolution image obtained by the integrated microphone array and the pixel coordinates of the high-resolution image obtained by the high-definition camera, set up the following system of equations (equation image omitted in source) and solve it by least squares to obtain the coefficients, where (x_i, y_i) are the matched feature-point coordinates on the low-resolution image from the integrated microphone array, (x_i', y_i') are the matched feature-point coordinates on the high-resolution image from the high-definition camera, i = 1, 2, 3, …, n with n ≥ 4, and [h11 h12 h13 h21 h22 h23 h31 h32]^T is the vector of inter-image transformation coefficients;
- An image registration system implementing the method of any one of claims 3 to 9, characterized by comprising: an image preprocessing module, a feature-point computation module, a feature-point descriptor computation module, a feature-point similarity computation module, a sorting module, and a linear coordinate transformation module, wherein: the image preprocessing module is connected to the feature-point computation module and outputs grayscale-converted images to it; the feature-point computation module is connected to the feature-point descriptor computation module and outputs feature-point information to it; the feature-point descriptor computation module is connected to the feature-point similarity computation module and outputs descriptor information to it; the feature-point similarity computation module is connected to the sorting module and outputs feature-point similarity information to it; the sorting module is connected to the linear coordinate transformation module and outputs the matched feature points to it; and the linear coordinate transformation module computes the inter-image transformation coefficients of the two frames.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP18899032.9A EP3822854A4 (en) | 2018-07-10 | 2018-08-03 | ILLEGAL MOTOR VEHICLE WHISTLING SPONTANEOUS PHOTOGRAPHY SYSTEM BASED ON IMAGE RECORDING |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810750656.3 | 2018-07-10 | ||
CN201810750656.3A CN109858479B (zh) | 2018-07-10 | Motor-vehicle illegal-honking snapshot system based on image registration |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020010659A1 true WO2020010659A1 (zh) | 2020-01-16 |
Family
ID=66889678
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2018/098427 WO2020010659A1 (zh) | Motor-vehicle illegal-honking snapshot system based on image registration | 2018-07-10 | 2018-08-03 |
Country Status (3)
Country | Link |
---|---|
EP (1) | EP3822854A4 (zh) |
CN (1) | CN109858479B (zh) |
WO (1) | WO2020010659A1 (zh) |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110647949B (zh) * | 2019-10-21 | 2024-02-20 | 中国计量大学 | A calibration method for a deep-learning-based vehicle-honking snapshot device |
CN111476829A (zh) * | 2020-03-12 | 2020-07-31 | 梦幻世界科技(珠海)有限公司 | An optical-flow-based method and device for dynamic registration of virtual and real images in augmented reality |
CN111556282A (zh) * | 2020-03-16 | 2020-08-18 | 浙江大华技术股份有限公司 | System, method, computer device and storage medium for long-range audio and video acquisition |
CN111915918A (zh) * | 2020-06-19 | 2020-11-10 | 中国计量大学 | Field calibration system and method for a vehicle-honking snapshot device based on dynamic characteristics |
CN111932593B (zh) * | 2020-07-21 | 2024-04-09 | 湖南中联重科智能技术有限公司 | Image registration method, system and device based on touch-screen gesture correction |
CN112511817B (zh) * | 2020-11-23 | 2022-09-16 | 浙江省计量科学研究院 | A dynamic-parameter calibration method and calibration device for a honking snapshot system |
CN116736227B (zh) * | 2023-08-15 | 2023-10-27 | 无锡聚诚智能科技有限公司 | A method for jointly calibrating the sound-source position with a microphone array and a camera |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2002243534A (ja) * | 2001-02-20 | 2002-08-28 | Omron Corp | Noise measuring device |
JP2004191180A (ja) * | 2002-12-11 | 2004-07-08 | Matsushita Electric Ind Co Ltd | Moving-object detection device |
CN101042803A (zh) * | 2007-04-23 | 2007-09-26 | 凌子龙 | Electronic evidence-collection device for illegal vehicle honking, electronic police system, and evidence-collection method |
CN105096606A (zh) * | 2015-08-31 | 2015-11-25 | 成都众孚理想科技有限公司 | Snapshot system for vehicles honking and running red lights |
CN107045784A (zh) * | 2017-01-22 | 2017-08-15 | 苏州奇梦者网络科技有限公司 | An electronic traffic-police system |
CN207082221U (zh) * | 2017-05-24 | 2018-03-09 | 长沙艾克赛普仪器设备有限公司 | Electronic police system for real-time multi-target snapshot of illegal honking |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104766319B (zh) * | 2015-04-02 | 2017-06-13 | 西安电子科技大学 | Method for improving registration accuracy of images photographed at night |
CN105046644B (zh) * | 2015-07-06 | 2021-08-13 | 嘉恒医疗科技(上海)有限公司 | Ultrasound and CT image registration method and system based on linear correlation |
CN106296714B (zh) * | 2016-08-23 | 2019-05-03 | 东方网力科技股份有限公司 | An image registration method and device |
CN106384510B (zh) * | 2016-10-11 | 2019-10-08 | 擎翌(上海)智能科技有限公司 | An illegal-honking snapshot system |
CN106355893A (zh) * | 2016-10-28 | 2017-01-25 | 东方智测(北京)科技有限公司 | Method and system for real-time localization of honking motor vehicles |
CN106875678A (zh) * | 2017-01-23 | 2017-06-20 | 上海良相智能化工程有限公司 | A vehicle-honking law-enforcement evidence-collection system |
2018
- 2018-07-10: CN CN201810750656.3A, patent CN109858479B (zh), active
- 2018-08-03: EP EP18899032.9A, patent EP3822854A4 (en), pending
- 2018-08-03: WO PCT/CN2018/098427, patent WO2020010659A1 (zh), status unknown
Non-Patent Citations (1)
Title |
---|
See also references of EP3822854A4 * |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113936472A (zh) * | 2021-10-27 | 2022-01-14 | 安徽隼波科技有限公司 | Method and device for detecting solid-line lane changes in expressway diverging areas |
CN113936472B (zh) * | 2021-10-27 | 2023-12-22 | 安徽隼波科技有限公司 | Method and device for detecting solid-line lane changes in expressway diverging areas |
WO2023122927A1 (en) * | 2021-12-28 | 2023-07-06 | Boe Technology Group Co., Ltd. | Computer-implemented method, apparatus, and computer-program product |
WO2024077366A1 (pt) * | 2022-10-11 | 2024-04-18 | Perkons S/A | System and method for detecting motor-vehicle noise, and corresponding computer-readable memory |
CN117132913A (zh) * | 2023-10-26 | 2023-11-28 | 山东科技大学 | Surface horizontal-displacement calculation method based on UAV remote sensing and feature-recognition matching |
CN117132913B (zh) * | 2023-10-26 | 2024-01-26 | 山东科技大学 | Surface horizontal-displacement calculation method based on UAV remote sensing and feature-recognition matching |
Also Published As
Publication number | Publication date |
---|---|
CN109858479A (zh) | 2019-06-07 |
EP3822854A4 (en) | 2022-04-27 |
CN109858479B (zh) | 2022-11-18 |
EP3822854A1 (en) | 2021-05-19 |
Legal Events
Date | Code | Title | Description
---|---|---|---
| 121 | Ep: the EPO has been informed by WIPO that EP was designated in this application | Ref document number: 18899032; Country of ref document: EP; Kind code of ref document: A1
| NENP | Non-entry into the national phase | Ref country code: DE