CN108182383B - Vehicle window detection method and device

Vehicle window detection method and device

Info

Publication number
CN108182383B
CN108182383B
Authority
CN
China
Prior art keywords
matching
image
detected
points
template image
Prior art date
Legal status
Active
Application number
CN201711285415.8A
Other languages
Chinese (zh)
Other versions
CN108182383A (en)
Inventor
蔡丹平
周祥明
Current Assignee
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd
Priority to CN201711285415.8A
Publication of CN108182383A
Application granted
Publication of CN108182383B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/751 Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/13 Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V20/584 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of vehicle lights or traffic lights

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The embodiment of the invention provides a vehicle window detection method and device, which are used for solving the technical problem of poor vehicle window detection performance in the prior art. The method comprises the following steps: determining a vehicle window candidate region in a vehicle image, and dividing the vehicle window candidate region to obtain at least two images to be detected; matching each of the at least two images to be detected with at least one vehicle window template image, and determining the vehicle window template image with the highest matching degree corresponding to each image to be detected, wherein each vehicle window template image is a gray-scale image comprising at least one vehicle window corner point and a plurality of matching points, and the matching points are used for representing the edge lines of the at least one vehicle window corner point; and positioning vehicle window corner points in the at least two regions to be detected according to the determined vehicle window template images with the highest matching degree, and determining the position of a vehicle window in the vehicle image according to the positioned vehicle window corner points.

Description

Vehicle window detection method and device
Technical Field
The invention relates to the field of traffic, in particular to a method and equipment for detecting a car window.
Background
With the development of transportation, the number of vehicles keeps growing and traffic control becomes increasingly complex and burdensome. In order to monitor and manage urban traffic more efficiently and accurately, intelligent traffic monitoring technology has emerged. Vehicle window positioning is a relatively new research direction: the size and position of the window provide very important feature information for problems such as vehicle type recognition, seat belt detection, annual inspection sticker detection and front-windshield ornament detection, and window positioning is therefore a key technology in vehicle recognition systems.
At present, the commonly used car window detection methods mainly include two methods:
the first is a color-difference-based window detection method: because glass transmits and absorbs light, the saturation and brightness of most window pixels are low, so the window area can be segmented using the color features of the window and the vehicle body; however, this method only detects well when the body color clearly differs from the window color, and performs poorly when the two colors are similar;
the second type is a car window detection method based on linear characteristics, which firstly carries out horizontal edge detection, combines hough transformation to carry out linear connection and detects the upper and lower horizontal boundaries of the car window. And secondly, removing small short branches by using mathematical morphology, and finally detecting the boundaries of the two sides of the car window by combining an integral projection method. Because the number of lines on the vehicle body is large, the vehicle window detection method based on the linear characteristic is easy to cause false detection, and the accuracy rate is low.
In conclusion, the detection effect on the vehicle window in the prior art is poor.
Disclosure of Invention
The embodiment of the invention provides a method and equipment for detecting a vehicle window, which are used for solving the technical problem of poor vehicle window detection effect in the prior art.
In a first aspect, an embodiment of the present invention provides a method for detecting a vehicle window, including the following steps:
determining a vehicle window candidate region in a vehicle image, and dividing the vehicle window candidate region to obtain at least two images to be detected;
matching each image to be detected in the at least two images to be detected with at least one vehicle window template image, and determining the vehicle window template image with the highest matching degree corresponding to each image to be detected; each car window template image is a gray scale image comprising at least one car window corner point and a plurality of matching points, and the matching points are used for representing edge lines of the at least one car window corner point;
and positioning window corner points in the at least two regions to be detected according to the determined window template image with the highest matching degree, and determining the position of a window in the vehicle image according to the positioned window corner points.
Optionally, before matching each image to be detected of the at least two images to be detected with at least one vehicle window template image, the method further includes:
carrying out gray level processing on a car window image containing at least one car window corner point to obtain a car window template image corresponding to the car window image;
determining characteristic parameters of edge points on edge lines corresponding to the at least one vehicle window corner point in the vehicle window template image; the characteristic parameters are used for representing the gradient amplitude of the position coordinates of the edge points in the car window template image;
and determining a plurality of edge points of which the gradient amplitudes are larger than the threshold amplitudes as a plurality of matching points of the car window template image according to the characteristic parameters.
Optionally, after determining, according to the feature parameters, that the plurality of edge points with the gradient amplitudes larger than the threshold amplitude among the edge points are a plurality of matching points of the vehicle window template image, the method further includes:
calculating a matching parameter corresponding to each matching point according to a first gradient amplitude of each matching point in the car window template image and a second gradient amplitude of each matching point in the car window image; the matching parameters are used for indicating the matching degree of the car window template image and the car window image at the corresponding matching points;
setting the weight values of the plurality of matching points according to the matching degree represented by the matching parameters;
sorting the matching points according to the weight values of the matching points, and determining the arrangement sequence of the matching points; and the weight is inversely proportional to the matching degree represented by the matching parameter.
Optionally, matching each image to be detected of the at least two images to be detected with at least one vehicle window template image, and determining the vehicle window template image with the highest matching degree corresponding to each image to be detected, includes:
determining an arrangement sequence of a plurality of matching points included in each of the at least one window template image;
matching a plurality of matching points of each car window template image in the at least one car window template image with corresponding images to be detected in the at least two images to be detected according to a corresponding arrangement sequence, and recording the matching degree between each car window template image and the corresponding images to be detected;
and determining the vehicle window template image with the highest matching degree corresponding to each image to be detected in the at least two images to be detected.
Optionally, matching a plurality of matching points of each car window template image in the at least one car window template image with corresponding to-be-detected images in the at least two to-be-detected images according to the arrangement sequence, and recording the matching degree between each car window template image and the corresponding to-be-detected image, including:
traversing each car window template image in the at least one car window template image in the corresponding image to be detected for N times, wherein N is less than or equal to the number of pixel points in the image to be detected, and is a positive integer;
when the kth traversal in the N traversals is executed, determining a first gradient amplitude of each matching point in the multiple matching points in a car window template image and a second gradient amplitude of each matching point in an image to be detected, sequentially calculating and accumulating the matching degree of each matching point in the multiple matching points according to the arrangement sequence of the multiple matching points according to the first gradient amplitude and the second gradient amplitude, determining the kth accumulated matching degree corresponding to the kth traversal, wherein k is an integer from 1 to N;
and recording the maximum cumulative matching degree in the N cumulative matching degrees traversed for N times, and taking the maximum cumulative matching degree as the matching degree between the current car window template image and the image to be detected.
Optionally, while sequentially calculating and accumulating the matching degree of each of the plurality of matching points according to the arrangement order of the plurality of matching points according to the first gradient magnitude and the second gradient magnitude, the method further includes:
obtaining the current accumulated matching degree; the current accumulated matching degree is the accumulated matching degree after at least two matching points in the multiple matching points are matched in the kth traversal;
if the current accumulated matching degree is determined to be smaller than a preset threshold value, ending the k-th traversal; the preset threshold is related to the number of matched matching points and the minimum matching threshold;
and determining the current accumulated matching degree as the kth accumulated matching degree corresponding to the kth traversal.
Optionally, the step of positioning window corner points in the at least two areas to be detected according to the determined window template image with the highest matching degree, and determining the position of a window in the vehicle image according to the positioned window corner points includes:
determining corner point coordinates of a vehicle window corner point included in each to-be-detected area in the at least two to-be-detected areas according to a first coordinate position of the determined vehicle window template image with the highest matching degree in the corresponding to-be-detected image and a second coordinate position of the vehicle window corner point in the vehicle window template image with the highest matching degree;
mapping the determined corner point coordinates into the vehicle image, and connecting the corner point coordinates mapped in the vehicle image;
and determining the position of the area formed by the coordinate connection of the corner points as the position of the vehicle window.
In a second aspect, an embodiment of the present invention provides a vehicle window detecting apparatus, including:
the processing module is used for determining a vehicle window candidate region in a vehicle image, and dividing the vehicle window candidate region to obtain at least two images to be detected;
the matching module is used for matching each image to be detected in the at least two images to be detected with at least one vehicle window template image and determining the vehicle window template image with the highest matching degree corresponding to each image to be detected; each car window template image is a gray scale image comprising at least one car window corner point and a plurality of matching points, and the matching points are used for representing edge lines of the at least one car window corner point;
and the positioning module is used for positioning the window corner points in the at least two areas to be detected according to the determined window template image with the highest matching degree, and determining the position of a window in the vehicle image according to the positioned window corner points.
Optionally, the vehicle window detecting apparatus further includes:
the first acquisition module is used for carrying out gray processing on the car window image containing at least one car window angular point before each image to be detected in the at least two images to be detected is matched with at least one car window template image to obtain a car window template image corresponding to the car window image;
the first determining module is used for determining the characteristic parameters of the edge points on the edge line corresponding to the at least one vehicle window corner point in the vehicle window template image; the characteristic parameters are used for representing the gradient amplitude of the position coordinates of the edge points in the car window template image;
and the second determining module is used for determining a plurality of edge points of which the gradient amplitudes are larger than the threshold amplitudes as a plurality of matching points of the car window template image according to the characteristic parameters.
Optionally, the second determining module is further configured to:
after determining a plurality of edge points of which the gradient amplitudes are larger than the threshold amplitude as a plurality of matching points of a car window template image according to the characteristic parameters, calculating matching parameters corresponding to each matching point according to a first gradient amplitude of each matching point in the car window template image and a second gradient amplitude of each matching point in the car window image; the matching parameters are used for indicating the matching degree of the car window template image and the car window image at the corresponding matching points;
setting the weight values of the plurality of matching points according to the matching degree represented by the matching parameters;
sorting the matching points according to the weight values of the matching points, and determining the arrangement sequence of the matching points; and the weight is inversely proportional to the matching degree represented by the matching parameter.
Optionally, the matching module is configured to:
determining an arrangement sequence of a plurality of matching points included in each of the at least one window template image;
matching a plurality of matching points of each car window template image in the at least one car window template image with corresponding images to be detected in the at least two images to be detected according to a corresponding arrangement sequence, and recording the matching degree between each car window template image and the corresponding images to be detected;
and determining the vehicle window template image with the highest matching degree corresponding to each image to be detected in the at least two images to be detected.
Optionally, the matching module is configured to:
traversing each car window template image in the at least one car window template image in the corresponding image to be detected for N times, wherein N is less than or equal to the number of pixel points in the image to be detected, and is a positive integer;
when the kth traversal in the N traversals is executed, determining a first gradient amplitude of each matching point in the multiple matching points in a car window template image and a second gradient amplitude of each matching point in an image to be detected, sequentially calculating and accumulating the matching degree of each matching point in the multiple matching points according to the arrangement sequence of the multiple matching points according to the first gradient amplitude and the second gradient amplitude, determining the kth accumulated matching degree corresponding to the kth traversal, wherein k is an integer from 1 to N;
and recording the maximum cumulative matching degree in the N cumulative matching degrees traversed for N times, and taking the maximum cumulative matching degree as the matching degree between the current car window template image and the image to be detected.
Optionally, the vehicle window detecting apparatus further includes:
the second obtaining module is used for obtaining the currently accumulated matching degree while the matching module sequentially calculates and accumulates the matching degree of each matching point in the plurality of matching points according to the arrangement sequence of the plurality of matching points according to the first gradient amplitude and the second gradient amplitude; the current accumulated matching degree is the accumulated matching degree after at least two matching points in the multiple matching points are matched in the kth traversal;
the matching module is further configured to: if the current accumulated matching degree is determined to be smaller than a preset threshold value, ending the k-th traversal; and the preset threshold is related to the number of matched matching points and the minimum matching threshold, and the current accumulated matching degree is determined as the k accumulated matching degree corresponding to the k traversal.
Optionally, the positioning module is configured to:
determining corner point coordinates of a vehicle window corner point included in each to-be-detected area in the at least two to-be-detected areas according to a first coordinate position of the determined vehicle window template image with the highest matching degree in the corresponding to-be-detected image and a second coordinate position of the vehicle window corner point in the vehicle window template image with the highest matching degree;
mapping the determined corner point coordinates into the vehicle image, and connecting the corner point coordinates mapped in the vehicle image;
and determining the position of the area formed by the coordinate connection of the corner points as the position of the vehicle window.
In a third aspect, an embodiment of the present invention provides a computer apparatus, which includes a processor, and the processor is configured to implement the method according to the first aspect when executing a computer program stored in a memory.
In a fourth aspect, the present invention provides a computer-readable storage medium storing computer instructions, which when executed on a computer, cause the computer to perform the method according to the first aspect.
In the embodiment of the invention, each car window template image is a gray-scale image comprising at least one car window corner point and a plurality of matching points, and the plurality of matching points can represent the edge lines of the at least one car window corner point. Therefore, by matching each image to be detected with the corresponding car window template images and determining the car window template image with the highest matching degree for each image to be detected, the car window corner point in the corresponding image to be detected can be positioned according to that template image, and the car window position in the vehicle image can then be determined from the positioned corner points. Since what is detected are the edge lines of the car window corner points, the accuracy of car window detection can be improved.
Drawings
FIG. 1 is a schematic illustration of a window template image in an embodiment of the present invention;
FIG. 2 is a flow chart of a method of vehicle window detection in an embodiment of the present invention;
FIG. 3 is a schematic diagram illustrating connection of positioning corner points according to an embodiment of the present invention;
fig. 4 is a block diagram of a vehicle window detecting apparatus according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of a computer device according to an embodiment of the invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the present invention will be described in further detail with reference to the accompanying drawings, and it is apparent that the described embodiments are only a part of the embodiments of the present invention, not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
First, some terms in the embodiments of the present invention are explained so as to be easily understood by those skilled in the art.
1) Gradient magnitude. The image gradient is a vector, and the gradient magnitude is its modulus. In the embodiment of the present invention, the gradient magnitude may refer to the gray-level gradient produced at any point of an image by the Sobel operator (Sobel operator) when edge detection is performed with the Sobel operator. The Sobel operator detects edges from the weighted gray-level differences between a pixel and its upper, lower, left and right neighbors, using the fact that this difference reaches an extreme value at an edge.
In practical application, the Sobel operator consists of two 3 × 3 matrices, one horizontal and one vertical, which are convolved with the image to obtain approximate horizontal and vertical luminance differences. If A denotes the original image, and Gx and Gy denote the results of horizontal and vertical edge detection respectively, the gradient magnitude G of each pixel of the image can be expressed as:
G = √(Gx² + Gy²)
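As a minimal illustration of this computation (not part of the patent text), the gradient maps and the magnitude can be obtained with OpenCV's Sobel operator; the file name and the 3 × 3 kernel size below are assumptions:

```python
import cv2
import numpy as np

# Load an image and convert it to grayscale (the path is a placeholder).
gray = cv2.cvtColor(cv2.imread("vehicle.jpg"), cv2.COLOR_BGR2GRAY)

# Horizontal and vertical Sobel gradients (3x3 kernels).
gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)

# Per-pixel gradient magnitude G = sqrt(Gx^2 + Gy^2).
magnitude = np.sqrt(gx ** 2 + gy ** 2)
```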
2) the window template image may be a grayscale image obtained by processing an image including window corner points. Generally, a window includes 4 window corner points, and correspondingly, 4 window template images may be set, that is, each window template image may include a window corner point and its corresponding edge line.
For example, several window template images provided in the embodiment of the present invention are shown in the four figures labeled a-d in fig. 1, where figures a and b are the template images corresponding to the upper-left and upper-right corner points of a window, figures c and d are the template images corresponding to the lower-left and lower-right corner points of a window, the black area in each template image is the window portion, and the marked black/white dot is the window corner point.
Of course, in the embodiment of the present invention, one window template image may also include two window corner points, for example, a top left corner point and a top right corner point of a window, which are not illustrated here.
3) In the embodiments of the present invention, "a plurality" means two or more. "and/or" describes the association relationship of the associated objects, meaning that there may be three relationships, e.g., a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the character "/" generally indicates that the preceding and following related objects are in an "or" relationship, unless otherwise specified.
The embodiments of the present invention will be described in further detail with reference to the drawings attached hereto.
Example one
Fig. 2 provides a method for detecting a vehicle window according to an embodiment of the present invention, where the method may be applied to a corresponding detection device, for example, a detection device in an intelligent traffic monitoring system. The process of the method can be described as follows:
s11: determining a window candidate region in the vehicle image, and dividing the window candidate region to obtain at least two images to be detected.
In the embodiment of the invention, the vehicle image can be an image collected by the detection equipment. After the vehicle image is collected, a vehicle detection algorithm can be used for determining the region of the vehicle in the vehicle image, and then the approximate position of the window region is estimated based on the region of the vehicle, and the position can be used as a window candidate region.
After the window candidate area is obtained, it can be divided into blocks to obtain the corresponding images to be detected, as sketched after this paragraph. For example, the window candidate area may be divided into 2 blocks (for example, upper/lower or left/right), in which case the window candidate area corresponds to 2 images to be detected; alternatively, the window candidate region may be divided into 4 blocks, in which case it corresponds to 4 images to be detected.
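A possible way to carry out this division is sketched below; the patent does not prescribe how the blocks are cut, so the left/right and quadrant splits used here are only illustrative assumptions:

```python
import numpy as np

def split_candidate_region(candidate: np.ndarray, blocks: int = 4) -> list:
    """Split a window candidate region into 2 or 4 images to be detected.

    Illustrative sketch only; the patent states that the candidate region is
    divided into at least two blocks, not how the split is computed.
    """
    h, w = candidate.shape[:2]
    if blocks == 2:
        # Left / right halves.
        return [candidate[:, : w // 2], candidate[:, w // 2 :]]
    # Four quadrants, one per window corner point.
    return [
        candidate[: h // 2, : w // 2],   # upper left
        candidate[: h // 2, w // 2 :],   # upper right
        candidate[h // 2 :, : w // 2],   # lower left
        candidate[h // 2 :, w // 2 :],   # lower right
    ]
```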
S12: matching each image to be detected in at least two images to be detected with at least one vehicle window template image, and determining the vehicle window template image with the highest matching degree corresponding to each image to be detected; each car window template image is a gray scale image comprising at least one car window corner point and a plurality of matching points, and the matching points are used for representing edge lines of the at least one car window corner point.
In the embodiment of the present invention, before S12, a plurality of window template images may be selected and created according to existing window types; that is, a window template image corresponding to a window image is obtained by performing gray-scale processing on a selected window image containing at least one window corner point, and some of the created window template images are shown in fig. 1.
Of course, in practical application, the corresponding template image can be added according to the type of the car window which may appear. Each car window template image is a gray level image and only comprises three gray levels at most, so that interference information is reduced, main characteristic information is reserved, and the calculated amount in subsequent template matching is reduced.
Meanwhile, in the embodiment of the invention, when selecting images to create the template images, the information around the window corner points is fully considered; for example, the white parts in figures a and b of fig. 1 represent the roof, and including this region can effectively reduce false detections and make the edge lines of the window corner points more distinguishable. In practical application, creating the template images is a fairly critical step: a poorly chosen template directly affects the result, a template that is too simple easily causes false detections, and a template that is too complex easily causes missed detections.
In the embodiment of the invention, after the car window template image is created, the characteristic parameters of the edge points on the edge line corresponding to at least one car window corner point in the car window template image can be further determined, and the characteristic parameters are used for representing the gradient amplitude of the position coordinates of the edge points in the car window template image.
Carrying out edge detection through a Sobel operator to obtain a gradient image of the template image in the x and y directions, and further calculating the gradient amplitude (M) of an edge point, namely:
M = √(GxT² + GyT²)

wherein GxT represents the gradient map of the pixel coordinates in the x-direction, and GyT represents the gradient map of the pixel coordinates in the y-direction.
Furthermore, the preferred matching points among the edge points may be determined according to the gradient magnitudes; for example, the edge points whose gradient magnitude is greater than or equal to a threshold T and greater than or equal to the gradient magnitudes of their neighboring points may be used as the preferred matching points of the window template image, as illustrated in the sketch below.
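The following sketch illustrates one way to select such matching points, assuming the Sobel gradient maps of the template have already been computed; the 4-neighborhood comparison and the handling of border pixels are assumptions, not taken from the patent:

```python
import numpy as np

def select_matching_points(gx: np.ndarray, gy: np.ndarray, threshold: float) -> list:
    """Pick edge points whose gradient magnitude is >= threshold and >= the
    magnitudes of their upper/lower/left/right neighbors, as candidate
    matching points. Illustrative sketch; border pixels are skipped."""
    mag = np.sqrt(gx ** 2 + gy ** 2)
    points = []
    for y in range(1, mag.shape[0] - 1):
        for x in range(1, mag.shape[1] - 1):
            m = mag[y, x]
            if (m >= threshold and m >= mag[y - 1, x] and m >= mag[y + 1, x]
                    and m >= mag[y, x - 1] and m >= mag[y, x + 1]):
                points.append((x, y))
    return points
```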
Further, after the preferred matching points in the window template image are determined, the window template image can be aligned with the corresponding original window image, the first gradient magnitude of each matching point in the window template image and the second gradient magnitude of each matching point in the window image are determined, and a matching parameter of each matching point is calculated, which indicates the matching degree between the window template image and the window image at the corresponding matching point. The matching parameter may be referred to herein as a matching score.
That is, in a small sample set, each matching point on the template image may be regarded as a sub-template, and the matching scores may be accumulated by performing template matching separately. The matching score calculation formula of a single matching point is as follows:
m_i(u, v) = (GxS · GxT + GyS · GyT) / ( √(GxS² + GyS²) · √(GxT² + GyT²) )

wherein m_i(u, v) is the matching score of the i-th matching point among the plurality of matching points, (u, v) are the coordinates of the template image in the window image, GxS and GyS are the x-direction and y-direction gradients of the window image at the corresponding matching point, and GxT and GyT are the x-direction and y-direction gradients of the template image at that matching point, respectively.
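A hedged sketch of this per-point score is given below; it assumes the score is the normalized dot product of the image gradient and the template gradient (so that a perfect match gives 1), which is one standard choice consistent with the description, and the function and argument names are invented for illustration:

```python
import numpy as np

def point_match_score(gxs, gys, gxt, gyt, u, v, xi, yi):
    """Matching score m_i(u, v) of one matching point: normalized dot product
    of the image gradient at (u + xi, v + yi) and the template gradient at
    (xi, yi). A perfect match yields 1. Sketch under the assumption that the
    patent's formula is this standard edge-based similarity measure."""
    sx, sy = gxs[v + yi, u + xi], gys[v + yi, u + xi]   # image gradient
    tx, ty = gxt[yi, xi], gyt[yi, xi]                   # template gradient
    denom = np.hypot(sx, sy) * np.hypot(tx, ty)
    return (sx * tx + sy * ty) / denom if denom > 0 else 0.0
```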
Then, according to the matching score of each matching point in the multiple matching points, a corresponding weight can be set for each matching point in the multiple matching points, and the principle of setting the weight is as follows: the larger the cumulative score, the less weight is assigned to the matching point. Further, the plurality of matching points may be sorted according to the weight of each matching point, for example, sorted in the order of the weights from large to small, and the arrangement order of the plurality of matching points may be obtained. The arrangement sequence can represent the importance degree of the matching points representing the edge lines in the car window template image in the matching process, for example, the more unique the matching points are, the greater the weight is, the higher the importance degree is.
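One possible realization of this weighting and ordering step is sketched below; the exact mapping from accumulated score to weight is not specified in the text, so the reciprocal used here is only an assumed example of a "larger score, smaller weight" rule:

```python
def order_matching_points(points, scores):
    """Assign each matching point a weight that decreases as its accumulated
    matching score grows, then sort the points by weight in descending order.

    The patent does not give the exact weight formula; the reciprocal used
    here is only one possible choice satisfying "larger score, smaller weight".
    """
    weights = [1.0 / (1.0 + s) for s in scores]           # assumed mapping
    order = sorted(range(len(points)), key=lambda i: weights[i], reverse=True)
    return [points[i] for i in order], [weights[i] for i in order]
```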
Then, when each image to be detected is matched with at least one window template image in S12, the arrangement order of the plurality of matching points included in each window template image in the at least one window template image may be acquired. That is to say, when one image to be detected is matched with at least one selected vehicle window template image, the arrangement sequence of a plurality of matching points contained in the current vehicle window template image can be obtained.
Furthermore, a plurality of matching points of the current car window template image can be matched with the corresponding image to be detected according to the corresponding arrangement sequence, and the matching degree between each car window template image and the corresponding image to be detected is recorded.
Particularly, when the image to be detected is matched with the car window template image based on a plurality of matching points, the current car window template image can be traversed for a plurality of times in the image to be detected according to the pixel point coordinates, and the number of times of traversal can be less than or equal to the number of the pixel points in the image to be detected.
In each traversal, matching may be performed based on a plurality of matching points, for example, if the current window template image includes n matching points, matching in the current traversal may be performed according to the arrangement order of the n matching points.
At this time, according to the arrangement sequence, a first gradient amplitude of each matching point in the vehicle window template image and a second gradient amplitude of each matching point in the image to be detected are determined, the matching degree of each matching point in the n matching points is calculated and accumulated according to the first gradient amplitude and the second gradient amplitude, and the accumulated matching degree corresponding to the current traversal is determined, namely, the calculation formula of the accumulated matching degree is as follows:
S(u, v) = ( Σ_{i=1}^{n} λ_i · m_i(u + X_i, v + Y_i) ) / ( Σ_{i=1}^{n} λ_i )

wherein n is the total number of matching points (edge points), λ_i is the weight of the i-th matching point, (u, v) are the coordinates in the image to be detected of a reference point of the template image (for example, the first pixel in its upper-left corner), (X_i, Y_i) are the coordinates of the i-th matching point in the template image, and (u + X_i, v + Y_i) are the corresponding coordinates of the i-th matching point in the image to be detected. When S(u, v) equals 1, the window template image and the image to be detected are identical.
In another embodiment of the present invention, while the matching degree is being accumulated within one traversal, the currently accumulated matching degree may be determined; the currently accumulated matching degree may be the matching degree accumulated after at least two of the plurality of matching points have been matched in the current traversal. For example, when the scores are calculated and accumulated in the arrangement order of the n matching points, the currently accumulated matching degree may be the score accumulated from the 1st matching point to the j-th matching point.
If the currently accumulated matching degree is determined to be smaller than a preset threshold, the current traversal can be ended; the preset threshold is related to the number of matching points already matched and to the minimum matching threshold. For example, if, after the j-th of the n matching points has been evaluated, the currently accumulated matching degree S_j satisfies:

S_j < Ts_min + Sum(j) - 1, where Sum(j) = ( Σ_{i=1}^{j} λ_i ) / ( Σ_{i=1}^{n} λ_i )

and λ_i is the weight of the i-th matching point, then the matching of the current traversal is stopped and the currently accumulated matching degree is taken as the matching degree of this traversal, so as to enter the next traversal. Intuitively, in this case even a perfect match of all remaining matching points could no longer bring the final score up to the minimum matching threshold Ts_min.
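Putting the pieces together, the sketch below evaluates one template position with weighted accumulation and the early-stopping test described above; it reuses the illustrative point_match_score() from the earlier sketch, and it assumes the matching points and weights are already sorted by decreasing weight and that Sum(j) is the normalized partial sum of weights:

```python
def match_template_at(gxs, gys, gxt, gyt, points, weights, u, v, ts_min):
    """Accumulate the weighted scores of the ordered matching points at one
    template position (u, v), stopping early once the accumulated score can
    no longer reach the minimum matching threshold ts_min.

    Sketch only: it reuses point_match_score() defined above and assumes the
    weights are sorted in descending order."""
    total_w = sum(weights)
    score, partial_w = 0.0, 0.0
    for (xi, yi), w in zip(points, weights):
        score += w * point_match_score(gxs, gys, gxt, gyt, u, v, xi, yi) / total_w
        partial_w += w / total_w
        # Even if every remaining point matched perfectly, the final score
        # could not exceed score + (1 - partial_w); give up early if so.
        if score < ts_min + partial_w - 1.0:
            break
    return score
```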
Therefore, in the embodiment of the invention, in the template matching stage, gradient calculation and matching calculation are carried out only at the matching points of the template image, the calculation proceeds in order of decreasing matching-point weight, and, combined with the early-stopping condition above, the amount of calculation is greatly reduced.
In practical application, after one traversal is finished, different search step sizes can be set according to the total matching score. The search step size includes a horizontal step size and a vertical step size. For example, if the total score (i.e. the total matching degree) of the current traversal is greater than threshold 1, the horizontal step size may be set to 1 and the vertical step size to 1; if the total score is between threshold 2 and threshold 1, the horizontal step size is set to 2 and the vertical step size to 1; if the total score is less than threshold 2, the horizontal step size is set to 2 and the vertical step size to 2.
Then, corresponding pixel points can be selected for traversal according to the search step length during subsequent traversal in the image to be detected, and the reference point of the car window template image does not need to be aligned with each pixel point in the image to be detected for traversal, so that the complexity of the algorithm is reduced, and meanwhile, the matching efficiency is improved.
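A minimal sketch of this step-size rule follows; the concrete threshold values are not given in the text, and the assumption threshold_1 > threshold_2 is made here:

```python
def next_search_step(total_score, threshold_1, threshold_2):
    """Choose the (horizontal, vertical) search step for the next traversal
    position from the score of the current one. Assumes threshold_1 >
    threshold_2; the concrete values are not specified in the patent."""
    if total_score > threshold_1:
        return 1, 1          # promising area: search densely
    if total_score > threshold_2:
        return 2, 1          # moderately promising
    return 2, 2              # unpromising: skip ahead faster
```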
Therefore, N accumulated matching degrees corresponding to N times of traversal can be finally obtained, and the maximum accumulated matching degree in the N accumulated matching degrees is used as the matching degree between the current car window template image and the image to be detected.
In practical application, if the matching score corresponding to the maximum cumulative matching degree is smaller than the minimum matching threshold Ts_min, the template can be replaced and the matching recomputed, and finally the result with the largest score is selected.
S13: and positioning window corner points in at least two areas to be detected according to the determined window template image with the highest matching degree, and determining the position of a window in the vehicle image according to the positioned window corner points.
After the window template with the highest matching degree corresponding to each image to be detected is determined, the first coordinate position of that window template image in the corresponding image to be detected and the second coordinate position of the window corner point within that window template image can be determined, and then the corner point coordinates of the window corner point included in each of the at least two regions to be detected can be determined from the first coordinate position and the second coordinate position. By mapping the determined corner point coordinates into the vehicle image, the positions of the window corner points in the vehicle image can be determined.
For example, if the coordinates corresponding to the first pixel point (reference point) in the upper left corner of the template image in the image to be detected are (a, b), and (x, y) are the coordinates of the corner point of the vehicle window in the template image relative to the reference point, the coordinates corresponding to the corner point of the vehicle window in the image to be detected are (a + x, b + y), so that the position of the corner point of the vehicle window in the image to be detected can be located.
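The coordinate mapping in this example can be written as a small helper; the region_offset argument (the position of the image to be detected inside the vehicle image) is an assumption needed to go from sub-image coordinates to vehicle-image coordinates:

```python
def locate_corner(match_u, match_v, corner_x, corner_y, region_offset):
    """Map a window corner point to vehicle-image coordinates.

    (match_u, match_v): position of the template's reference point (its
    upper-left pixel) in the image to be detected; (corner_x, corner_y):
    corner position inside the template relative to that reference point;
    region_offset: upper-left corner of the image to be detected inside the
    full vehicle image (assumed input, not named in the patent)."""
    off_x, off_y = region_offset
    return off_x + match_u + corner_x, off_y + match_v + corner_y
```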
In practical application, after all the window corner points in the at least two images to be detected have been located, it can be judged whether the number of window corner points is greater than or equal to 4; if the number is less than 4, the window detection can be considered to have failed. If the number of window corner points is equal to 4, the window corner points can be mapped into the vehicle image.
Further, the corner point coordinates mapped in the vehicle image are connected, and whether a quadrangle formed by the connected corner point coordinates is in a regular shape is judged, for example, whether the quadrangle is similar to a trapezoid or not is judged, if the quadrangle is in a trapezoid, the detection result is considered to be reliable, and the position of the area in the vehicle image can be determined as the position of the vehicle window.
If the quadrangle is not close to a trapezoid, the corner points that may be inaccurately located can be identified according to the characteristics of the trapezoid, and their positions can be estimated.
For example, the quadrangle formed by connecting the located corner points is shown in fig. 3, where G, H and I denote possible inaccurately located positions of corner point C. In a correctly detected window quadrangle ABCD, AB and DC are approximately parallel, angle ADC and angle BCD are acute angles most probably lying in the range [70°, 85°], angle DAB and angle CBA are obtuse angles most probably lying in the range [95°, 110°], and DE and FC are similar in length.
Generally, with the window detection method of the embodiment of the present invention, it is unlikely that two or more erroneous results occur at the same time (for example, that G, H and I all occur). If one corner point is inaccurate, the result is first judged to be unreliable from the fact that the upper and lower lines are no longer parallel; then, because the angles vary little and the vehicle body is unlikely to be tilted, A, B and D are judged to be reliable while the points G, H and I are unreliable; further, the true position of C can be estimated from the property that the upper-left and upper-right angles of the trapezoid are equal, or that the lower-left and lower-right angles are equal, and the position of the area enclosed by ABCD is then the position of the window in the vehicle image.
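A rough plausibility check of the located quadrangle along the lines described above might look as follows; the 5-degree parallelism tolerance and the exact vertex ordering are assumptions:

```python
import math

def quadrilateral_is_plausible(a, b, c, d):
    """Rough check that corner points A, B, C, D, taken in order around the
    quadrilateral with AB roughly parallel to DC, form a window-like
    trapezoid: angles at D and C acute in [70, 85] degrees, angles at A and B
    obtuse in [95, 110] degrees. Angle ranges follow the description above;
    the parallelism tolerance is an assumption."""
    def angle(p, q, r):                      # interior angle at q, in degrees
        v1 = (p[0] - q[0], p[1] - q[1])
        v2 = (r[0] - q[0], r[1] - q[1])
        cos = (v1[0] * v2[0] + v1[1] * v2[1]) / (math.hypot(*v1) * math.hypot(*v2))
        return math.degrees(math.acos(max(-1.0, min(1.0, cos))))

    ab_dir = math.atan2(b[1] - a[1], b[0] - a[0])
    dc_dir = math.atan2(c[1] - d[1], c[0] - d[0])
    parallel = abs(ab_dir - dc_dir) < math.radians(5)   # assumed tolerance
    acute_ok = all(70 <= angle(*t) <= 85 for t in [(a, d, c), (b, c, d)])
    obtuse_ok = all(95 <= angle(*t) <= 110 for t in [(d, a, b), (c, b, a)])
    return parallel and acute_ok and obtuse_ok
```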
In the embodiment of the invention, the window corner points are located accurately by means of the edge-based template matching method, and the location accuracy remains high even under partial occlusion and nonlinear illumination changes.
example two
Based on the same inventive concept, the embodiment of the invention also provides a vehicle window detection device, and the vehicle window detection device can execute the vehicle window detection method in the first embodiment. As shown in fig. 4, the apparatus comprises a processing module 21, a matching module 22 and a positioning module 23.
The processing module 21 is configured to determine a window candidate region in a vehicle image, and divide the window candidate region to obtain at least two images to be detected;
the matching module 22 is used for matching each image to be detected in the at least two images to be detected with at least one vehicle window template image, and determining the vehicle window template image with the highest matching degree corresponding to each image to be detected; each car window template image is a gray scale image comprising at least one car window corner point and a plurality of matching points, and the matching points are used for representing edge lines of the at least one car window corner point;
the positioning module 23 is further configured to position window corner points in the at least two areas to be detected according to the determined window template image with the highest matching degree, and determine the position of a window in the vehicle image according to the positioned window corner points.
Optionally, the vehicle window detecting apparatus further includes:
the first acquisition module is used for carrying out gray processing on the car window image containing at least one car window angular point before each image to be detected in the at least two images to be detected is matched with at least one car window template image to obtain a car window template image corresponding to the car window image;
the first determining module is used for determining the characteristic parameters of the edge points on the edge line corresponding to the at least one vehicle window corner point in the vehicle window template image; the characteristic parameters are used for representing the gradient amplitude of the position coordinates of the edge points in the car window template image;
and the second determining module is used for determining a plurality of edge points of which the gradient amplitudes are larger than the threshold amplitudes as a plurality of matching points of the car window template image according to the characteristic parameters.
Optionally, the second determining module is further configured to:
after determining a plurality of edge points of which the gradient amplitudes are larger than the threshold amplitude as a plurality of matching points of a car window template image according to the characteristic parameters, calculating matching parameters corresponding to each matching point according to a first gradient amplitude of each matching point in the car window template image and a second gradient amplitude of each matching point in the car window image; the matching parameters are used for indicating the matching degree of the car window template image and the car window image at the corresponding matching points;
setting the weight values of the plurality of matching points according to the matching degree represented by the matching parameters;
sorting the matching points according to the weight values of the matching points, and determining the arrangement sequence of the matching points; and the weight is inversely proportional to the matching degree represented by the matching parameter.
Optionally, the matching module 22 may be configured to:
determining an arrangement sequence of a plurality of matching points included in each of the at least one window template image;
matching a plurality of matching points of each car window template image in the at least one car window template image with corresponding images to be detected in the at least two images to be detected according to a corresponding arrangement sequence, and recording the matching degree between each car window template image and the corresponding images to be detected;
and determining the vehicle window template image with the highest matching degree corresponding to each image to be detected in the at least two images to be detected.
Optionally, the matching module 22 may be configured to:
traversing each car window template image in the at least one car window template image in the corresponding image to be detected for N times, wherein N is less than or equal to the number of pixel points in the image to be detected, and is a positive integer;
when the kth traversal in the N traversals is executed, determining a first gradient amplitude of each matching point in the multiple matching points in a car window template image and a second gradient amplitude of each matching point in an image to be detected, sequentially calculating and accumulating the matching degree of each matching point in the multiple matching points according to the arrangement sequence of the multiple matching points according to the first gradient amplitude and the second gradient amplitude, determining the kth accumulated matching degree corresponding to the kth traversal, wherein k is an integer from 1 to N;
and recording the maximum cumulative matching degree in the N cumulative matching degrees traversed for N times, and taking the maximum cumulative matching degree as the matching degree between the current car window template image and the image to be detected.
Optionally, the vehicle window detecting apparatus further includes:
the second obtaining module is used for obtaining the currently accumulated matching degree while the matching module sequentially calculates and accumulates the matching degree of each matching point in the plurality of matching points according to the arrangement sequence of the plurality of matching points according to the first gradient amplitude and the second gradient amplitude; the current accumulated matching degree is the accumulated matching degree after at least two matching points in the multiple matching points are matched in the ith traversal;
the matching module 22 is further configured to: if the current accumulated matching degree is determined to be smaller than a preset threshold value, ending the k-th traversal; and the preset threshold is related to the number of matched matching points and the minimum matching threshold, and the current accumulated matching degree is determined as the k accumulated matching degree corresponding to the k traversal.
Optionally, the positioning module 23 may be configured to:
determining corner point coordinates of a vehicle window corner point included in each to-be-detected area in the at least two to-be-detected areas according to a first coordinate position of the determined vehicle window template image with the highest matching degree in the corresponding to-be-detected image and a second coordinate position of the vehicle window corner point in the vehicle window template image with the highest matching degree;
mapping the determined corner point coordinates into the vehicle image, and connecting the corner point coordinates mapped in the vehicle image;
and determining the position of the area formed by the coordinate connection of the corner points as the position of the vehicle window.
EXAMPLE III
In an embodiment of the present invention, a computer apparatus is further provided, which is configured as shown in fig. 5, and includes a processor 31 and a memory 32, where the processor 31 is configured to implement the steps of the method for detecting a vehicle window provided in the first embodiment of the present invention when executing a computer program stored in the memory 32.
Optionally, the processor 31 may specifically be a central processing unit, an application-specific integrated circuit (ASIC), one or more integrated circuits for controlling program execution, a hardware circuit developed using a field-programmable gate array (FPGA), or a baseband processor.
Optionally, the processor 31 may include at least one processing core.
Optionally, the computer apparatus further includes the memory 32, which may include a read-only memory (ROM), a random access memory (RAM), and disk storage. The memory 32 is used for storing data required by the processor 31 during operation. There may be one or more memories 32.
Example four
The embodiment of the present invention further provides a computer-readable storage medium, where the computer-readable storage medium stores computer instructions, and when the computer instructions are executed on a computer, the steps of the method for detecting a vehicle window provided in an embodiment of the present invention may be implemented.
In the embodiment of the present invention, it should be understood that the disclosed method and apparatus for detecting a vehicle window may be implemented in other ways. For example, the above-described embodiments of the apparatus are merely illustrative, and for example, a division of a unit is merely a logical division, and an actual implementation may have another division, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical or other form.
The functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may be an independent physical module.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, all or part of the technical solutions of the embodiments of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device, such as a personal computer, a server, or a network device, or a Processor (Processor), to execute all or part of the steps of the methods of the embodiments of the present invention. And the aforementioned storage medium includes: various media capable of storing program codes, such as a Universal Serial Bus flash drive (USB), a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, and an optical disk.
The above embodiments are only used to describe the technical solutions of the present invention in detail, but the above embodiments are only used to help understanding the method of the embodiments of the present invention, and should not be construed as limiting the embodiments of the present invention. Variations or substitutions that may be readily apparent to one skilled in the art are intended to be included within the scope of the embodiments of the present invention.

Claims (12)

1. A method of vehicle window detection, comprising:
determining a vehicle window candidate region in a vehicle image, and dividing the vehicle window candidate region to obtain at least two images to be detected;
carrying out gray level processing on a car window image containing at least one car window corner point to obtain a car window template image corresponding to the car window image;
determining characteristic parameters of edge points on edge lines corresponding to the at least one vehicle window corner point in the vehicle window template image; the characteristic parameters are used for representing the gradient amplitude of the position coordinates of the edge points in the car window template image;
determining a plurality of edge points of which the gradient amplitudes are larger than a threshold amplitude in the edge points as a plurality of matching points of the car window template image according to the characteristic parameters;
calculating a matching score corresponding to each matching point according to a first gradient amplitude of each matching point in the car window template image and a second gradient amplitude of each matching point in the car window image; the matching score is used for indicating the matching degree of the car window template image and the car window image at the corresponding matching point; setting corresponding weight according to the matching score of each matching point, wherein the more the accumulated matching score is, the smaller the weight is; sequencing the matching points according to the weight of each matching point, and determining the arrangement sequence of the matching points, wherein the arrangement sequence is used for representing the importance degree of the matching points of the edge line in the car window template image in the matching process;
the calculation formula of the matching score is as follows:
$$ m_i = \frac{Gx_T(x_i, y_i)\,Gx_S(u+x_i, v+y_i) + Gy_T(x_i, y_i)\,Gy_S(u+x_i, v+y_i)}{\sqrt{Gx_T(x_i, y_i)^2 + Gy_T(x_i, y_i)^2}\,\sqrt{Gx_S(u+x_i, v+y_i)^2 + Gy_S(u+x_i, v+y_i)^2}} $$
wherein m_i represents the matching score of the i-th matching point of the plurality of matching points, (x_i, y_i) are the coordinates of the i-th matching point in the template image, (u, v) are the coordinates of the template image in the car window image, Gx_S and Gy_S are respectively the gradient maps, in the x direction and the y direction, of the car window image on which the template image is placed, and Gx_T and Gy_T are respectively the gradient maps of the template image in the x direction and the y direction;
when each image to be detected in the at least two images to be detected is matched with at least one vehicle window template image, matching the plurality of matching points of the vehicle window template image with the corresponding image to be detected according to the arrangement sequence, recording the matching degree between each vehicle window template image and the corresponding image to be detected, and determining the vehicle window template image with the highest matching degree corresponding to each image to be detected; each car window template image is a gray scale image comprising at least one car window corner point and a plurality of matching points, and the matching points are used for representing edge lines of the at least one car window corner point;
and positioning window corner points in the at least two regions to be detected according to the determined window template image with the highest matching degree, and determining the position of a window in the vehicle image according to the positioned window corner points.
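The following is a minimal illustrative sketch, not the patented implementation: it selects matching points from a gray-scale window template by thresholding gradient amplitudes and scores one candidate offset (u, v) with a normalized gradient dot product in the spirit of the formula in claim 1. The Sobel kernel size, the amplitude threshold, and the function names are assumptions.

```python
import cv2
import numpy as np

def select_matching_points(template_gray, amp_threshold=30.0):
    """Pick edge points whose gradient amplitude exceeds the threshold."""
    gx_t = cv2.Sobel(template_gray, cv2.CV_32F, 1, 0, ksize=3)
    gy_t = cv2.Sobel(template_gray, cv2.CV_32F, 0, 1, ksize=3)
    amplitude = np.hypot(gx_t, gy_t)
    ys, xs = np.where(amplitude > amp_threshold)
    return list(zip(xs, ys)), gx_t, gy_t

def accumulated_match_score(points, gx_t, gy_t, gx_s, gy_s, u, v):
    """Average the per-point scores m_i of the template placed at offset (u, v)."""
    total = 0.0
    for x, y in points:
        tx, ty = gx_t[y, x], gy_t[y, x]
        sx, sy = gx_s[v + y, u + x], gy_s[v + y, u + x]
        denom = np.hypot(tx, ty) * np.hypot(sx, sy)
        if denom > 1e-6:
            total += (tx * sx + ty * sy) / denom  # normalized gradient dot product
    return total / max(len(points), 1)            # matching degree at (u, v)
```

Sorting the selected points by the weights described in claim 1 would only change the iteration order of the scoring loop; the per-point arithmetic stays the same.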
2. The method of claim 1, wherein the step of matching each image to be detected of the at least two images to be detected with at least one window template image to determine the window template image with the highest matching degree corresponding to each image to be detected comprises the steps of:
determining an arrangement sequence of a plurality of matching points included in each of the at least one window template image;
matching a plurality of matching points of each car window template image in the at least one car window template image with corresponding images to be detected in the at least two images to be detected according to a corresponding arrangement sequence, and recording the matching degree between each car window template image and the corresponding images to be detected;
and determining the vehicle window template image with the highest matching degree corresponding to each image to be detected in the at least two images to be detected.
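The arrangement sequence recited in claims 1 and 2 can be illustrated with a minimal sketch, under two assumptions that the claims do not fix: the weight is taken as a simple decreasing function of the accumulated matching score, and the points with the largest weights are matched first.

```python
def arrangement_sequence(points, accumulated_scores):
    """Sort matching points by weight: a larger accumulated score gives a smaller weight."""
    weights = [1.0 / (1.0 + s) for s in accumulated_scores]    # assumed weighting rule
    paired = sorted(zip(points, weights), key=lambda t: t[1], reverse=True)
    return [p for p, _ in paired]   # highest-weight points are matched first
```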
3. The method of claim 2, wherein matching the plurality of matching points of each of the at least one window template image with the corresponding image to be detected of the at least two images to be detected in the arranged order, and recording the degree of matching between each window template image and the corresponding image to be detected comprises:
traversing each car window template image of the at least one car window template image over the corresponding image to be detected N times, wherein N is less than or equal to the number of pixel points in the image to be detected and N is a positive integer;
when the kth traversal of the N traversals is executed, determining a first gradient amplitude of each matching point of the plurality of matching points in the car window template image and a second gradient amplitude of each matching point in the image to be detected, sequentially calculating and accumulating, according to the first gradient amplitude and the second gradient amplitude, the matching degree of each matching point of the plurality of matching points in the arrangement sequence of the plurality of matching points, and determining the kth accumulated matching degree corresponding to the kth traversal, wherein k is an integer from 1 to N;
and recording the maximum accumulated matching degree among the N accumulated matching degrees obtained from the N traversals, and taking the maximum accumulated matching degree as the matching degree between the current car window template image and the image to be detected.
4. The method according to claim 3, wherein, while the matching degree of each of the plurality of matching points is sequentially calculated and accumulated in the arrangement sequence of the plurality of matching points based on the first gradient amplitude and the second gradient amplitude, the method further comprises:
obtaining the current accumulated matching degree; wherein the current accumulated matching degree is the accumulated matching degree after at least two matching points of the plurality of matching points have been matched in the kth traversal;
if the current accumulated matching degree is determined to be smaller than a preset threshold, ending the kth traversal; wherein the preset threshold is related to the number of matching points already matched and to the minimum matching threshold;
and determining the current accumulated matching degree as the kth accumulated matching degree corresponding to the kth traversal.
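A minimal sketch of the traversal described in claims 3 and 4, under assumptions the claims leave open: per-point scores lie in [0, 1], `point_score` is a hypothetical helper returning the matching degree of a single matching point at offset (u, v), and the traversals cover every admissible offset of the template inside the image to be detected. A traversal is abandoned early once the running sum can no longer reach the minimum matching threshold.

```python
def best_accumulated_match(ordered_points, point_score, image_size, template_size,
                           s_min=0.8):
    """Slide the template over the image to be detected and keep the best score."""
    img_h, img_w = image_size
    tpl_h, tpl_w = template_size
    n = len(ordered_points)
    best = 0.0
    for v in range(img_h - tpl_h + 1):            # each offset is one traversal
        for u in range(img_w - tpl_w + 1):
            acc = 0.0
            for j, pt in enumerate(ordered_points, start=1):
                acc += point_score(pt, u, v)
                # Early termination: even if every remaining point scored 1.0,
                # the final sum could no longer reach s_min * n.
                if acc < s_min * n - (n - j):
                    break
            best = max(best, acc / n)             # accumulated matching degree of this traversal
    return best
```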
5. The method according to any one of claims 1 to 4, wherein locating window corner points in the at least two areas to be detected according to the determined window template image with the highest matching degree, and determining the position of a window in the vehicle image according to the located window corner points comprises:
determining corner point coordinates of a vehicle window corner point included in each to-be-detected area in the at least two to-be-detected areas according to a first coordinate position of the determined vehicle window template image with the highest matching degree in the corresponding to-be-detected image and a second coordinate position of the vehicle window corner point in the vehicle window template image with the highest matching degree;
mapping the determined corner point coordinates into the vehicle image, and connecting the corner point coordinates mapped in the vehicle image;
and determining the position of the area formed by connecting the corner point coordinates as the position of the vehicle window.
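A small sketch of the corner localization in claim 5, assuming three offsets are available for each image to be detected: where that image sits inside the full vehicle image, where the best-matching template sits inside the image to be detected (the first coordinate position), and where the corner point sits inside the template (the second coordinate position). All names are illustrative.

```python
def locate_window_corner(roi_offset, template_offset, corner_in_template):
    """Compose the three offsets into vehicle-image coordinates of one corner."""
    rx, ry = roi_offset          # image to be detected inside the vehicle image
    tx, ty = template_offset     # first coordinate position
    cx, cy = corner_in_template  # second coordinate position
    return rx + tx + cx, ry + ty + cy

# Connecting the corners mapped for all images to be detected (e.g. the four
# window corners) then outlines the window position in the vehicle image.
```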
6. A vehicle window detecting apparatus, characterized by comprising:
the processing module is used for determining a vehicle window candidate region in a vehicle image, and dividing the vehicle window candidate region to obtain at least two images to be detected;
the first acquisition module is used for carrying out gray processing on the car window image containing at least one car window corner point, before each image to be detected in the at least two images to be detected is matched with at least one car window template image, to obtain a car window template image corresponding to the car window image;
the first determining module is used for determining the characteristic parameters of the edge points on the edge line corresponding to the at least one vehicle window corner point in the vehicle window template image; wherein the characteristic parameters are used for representing the gradient amplitudes at the position coordinates of the edge points in the car window template image;
the second determining module is used for determining, from the edge points and according to the characteristic parameters, a plurality of edge points whose gradient amplitudes are larger than a threshold amplitude as a plurality of matching points of the car window template image;
the matching module is used for calculating a matching score corresponding to each matching point according to a first gradient amplitude of each matching point in the car window template image and a second gradient amplitude of each matching point in the car window image, the matching score being used for indicating the matching degree between the car window template image and the car window image at the corresponding matching point; setting a corresponding weight according to the matching score of each matching point, wherein the larger the accumulated matching score, the smaller the weight; sorting the matching points according to the weight of each matching point, and determining the arrangement sequence of the matching points, wherein the arrangement sequence is used for representing the importance degree, in the matching process, of the matching points on the edge line in the car window template image; when each image to be detected in the at least two images to be detected is matched with at least one vehicle window template image, matching the plurality of matching points of the vehicle window template image with the corresponding image to be detected according to the arrangement sequence, recording the matching degree between each vehicle window template image and the corresponding image to be detected, and determining the vehicle window template image with the highest matching degree corresponding to each image to be detected; each car window template image is a gray scale image comprising at least one car window corner point and a plurality of matching points, and the matching points are used for representing edge lines of the at least one car window corner point; wherein the calculation formula of the matching score is as follows:
$$ m_i = \frac{Gx_T(x_i, y_i)\,Gx_S(u+x_i, v+y_i) + Gy_T(x_i, y_i)\,Gy_S(u+x_i, v+y_i)}{\sqrt{Gx_T(x_i, y_i)^2 + Gy_T(x_i, y_i)^2}\,\sqrt{Gx_S(u+x_i, v+y_i)^2 + Gy_S(u+x_i, v+y_i)^2}} $$
wherein m_i represents the matching score of the i-th matching point of the plurality of matching points, (x_i, y_i) are the coordinates of the i-th matching point in the template image, (u, v) are the coordinates of the template image in the car window image, Gx_S and Gy_S are respectively the gradient maps, in the x direction and the y direction, of the car window image on which the template image is placed, and Gx_T and Gy_T are respectively the gradient maps of the template image in the x direction and the y direction;
and the positioning module is used for positioning the window corner points in the at least two areas to be detected according to the determined window template image with the highest matching degree, and determining the position of a window in the vehicle image according to the positioned window corner points.
7. The device of claim 6, wherein the matching module is configured to:
determining an arrangement sequence of a plurality of matching points included in each of the at least one window template image;
matching a plurality of matching points of each car window template image in the at least one car window template image with corresponding images to be detected in the at least two images to be detected according to a corresponding arrangement sequence, and recording the matching degree between each car window template image and the corresponding images to be detected;
and determining the vehicle window template image with the highest matching degree corresponding to each image to be detected in the at least two images to be detected.
8. The device of claim 7, wherein the matching module is configured to:
traversing each car window template image of the at least one car window template image over the corresponding image to be detected N times, wherein N is less than or equal to the number of pixel points in the image to be detected and N is a positive integer;
when the kth traversal of the N traversals is executed, determining a first gradient amplitude of each matching point of the plurality of matching points in the car window template image and a second gradient amplitude of each matching point in the image to be detected, sequentially calculating and accumulating, according to the first gradient amplitude and the second gradient amplitude, the matching degree of each matching point of the plurality of matching points in the arrangement sequence of the plurality of matching points, and determining the kth accumulated matching degree corresponding to the kth traversal, wherein k is an integer from 1 to N;
and recording the maximum accumulated matching degree among the N accumulated matching degrees obtained from the N traversals, and taking the maximum accumulated matching degree as the matching degree between the current car window template image and the image to be detected.
9. The apparatus according to claim 8, wherein the window detecting apparatus further comprises:
the second obtaining module is used for obtaining the current accumulated matching degree while the matching module, according to the first gradient amplitude and the second gradient amplitude, sequentially calculates and accumulates the matching degree of each matching point of the plurality of matching points in the arrangement sequence of the plurality of matching points; wherein the current accumulated matching degree is the accumulated matching degree after at least two matching points of the plurality of matching points have been matched in the kth traversal;
the matching module is further configured to: if the current accumulated matching degree is determined to be smaller than a preset threshold, end the kth traversal, wherein the preset threshold is related to the number of matching points already matched and to the minimum matching threshold; and determine the current accumulated matching degree as the kth accumulated matching degree corresponding to the kth traversal.
10. The apparatus of any one of claims 6 to 9, wherein the positioning module is configured to:
determining corner point coordinates of a vehicle window corner point included in each to-be-detected area in the at least two to-be-detected areas according to a first coordinate position of the determined vehicle window template image with the highest matching degree in the corresponding to-be-detected image and a second coordinate position of the vehicle window corner point in the vehicle window template image with the highest matching degree;
mapping the determined corner point coordinates into the vehicle image, and connecting the corner point coordinates mapped in the vehicle image;
and determining the position of the area formed by connecting the corner point coordinates as the position of the vehicle window.
11. A computer device, characterized in that the computer device comprises a processor configured to implement the method according to any one of claims 1 to 5 when executing a computer program stored in a memory.
12. A computer-readable storage medium having stored thereon computer instructions which, when executed on a computer, cause the computer to perform the method of any one of claims 1-5.
CN201711285415.8A 2017-12-07 2017-12-07 Vehicle window detection method and device Active CN108182383B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711285415.8A CN108182383B (en) 2017-12-07 2017-12-07 Vehicle window detection method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711285415.8A CN108182383B (en) 2017-12-07 2017-12-07 Vehicle window detection method and device

Publications (2)

Publication Number Publication Date
CN108182383A CN108182383A (en) 2018-06-19
CN108182383B true CN108182383B (en) 2021-07-20

Family

ID=62545852

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711285415.8A Active CN108182383B (en) 2017-12-07 2017-12-07 Vehicle window detection method and device

Country Status (1)

Country Link
CN (1) CN108182383B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109190639A (en) * 2018-08-16 2019-01-11 新智数字科技有限公司 A kind of vehicle color identification method, apparatus and system
CN110245674B (en) * 2018-11-23 2023-09-15 浙江大华技术股份有限公司 Template matching method, device, equipment and computer storage medium
CN109741406A (en) * 2019-01-03 2019-05-10 广州广电银通金融电子科技有限公司 A kind of body color recognition methods under monitoring scene
CN111724335A (en) * 2019-03-21 2020-09-29 深圳中科飞测科技有限公司 Detection method and detection system
CN110866949A (en) * 2019-11-15 2020-03-06 广东利元亨智能装备股份有限公司 Center point positioning method and device, electronic equipment and storage medium
CN111738232B (en) * 2020-08-06 2020-12-15 深圳须弥云图空间科技有限公司 Method and device for marking pile foundation
CN111986169A (en) * 2020-08-12 2020-11-24 深圳华芯信息技术股份有限公司 Door and window detection method, system, terminal and medium
CN113686562B (en) * 2021-08-26 2024-03-12 浙江吉利控股集团有限公司 Method and device for detecting off-line of vehicle door
CN114998424B (en) * 2022-08-04 2022-10-21 中国第一汽车股份有限公司 Vehicle window position determining method and device and vehicle

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106548147A (en) * 2016-11-02 2017-03-29 南京鑫和汇通电子科技有限公司 A kind of quick noise robustness image foreign matter detection method and TEDS systems

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9349189B2 (en) * 2013-07-01 2016-05-24 Here Global B.V. Occlusion resistant image template matching using distance transform

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106548147A (en) * 2016-11-02 2017-03-29 南京鑫和汇通电子科技有限公司 A kind of quick noise robustness image foreign matter detection method and TEDS systems

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Research on the insensitivity of weighted template matching to binarization thresholds; Wang Gang et al.; Computer Technology and Development; 2012-01-31; Vol. 22, No. 1; page 123, column 1, last paragraph, and page 123, column 2, paragraphs 1-5 *
Tang Hongqiang. Vehicle type and series recognition based on the GRM template matching algorithm. China Master's Theses Full-text Database, Information Science and Technology Series. 2016. *
Vehicle type and series recognition based on the GRM template matching algorithm; Tang Hongqiang; China Master's Theses Full-text Database, Information Science and Technology Series; 2016-03-15; page 10, paragraphs 1-3, page 11, paragraph 3, page 15, paragraph 3, page 37, paragraphs 1-3, and Figures 2-1, 4-1, 4-2 *

Also Published As

Publication number Publication date
CN108182383A (en) 2018-06-19

Similar Documents

Publication Publication Date Title
CN108182383B (en) Vehicle window detection method and device
CN110472580B (en) Method, device and storage medium for detecting parking stall based on panoramic image
CN110502982B (en) Method and device for detecting obstacles in expressway and computer equipment
KR101609303B1 (en) Method to calibrate camera and apparatus therefor
CN112598922B (en) Parking space detection method, device, equipment and storage medium
CN108921813B (en) Unmanned aerial vehicle detection bridge structure crack identification method based on machine vision
CN103400151A (en) Optical remote-sensing image, GIS automatic registration and water body extraction integrated method
CN110490839B (en) Method and device for detecting damaged area in expressway and computer equipment
CN109376740A (en) A kind of water gauge reading detection method based on video
CN113240623B (en) Pavement disease detection method and device
CN110660072B (en) Method and device for identifying straight line edge, storage medium and electronic equipment
CN110910445B (en) Object size detection method, device, detection equipment and storage medium
CN104318559A (en) Quick feature point detecting method for video image matching
CN112052782A (en) Around-looking-based parking space identification method, device, equipment and storage medium
CN104732510A (en) Camera lens black spot detecting method and device
CN111488808A (en) Lane line detection method based on traffic violation image data
CN113850749A (en) Method for training defect detector
CN117094975A (en) Method and device for detecting surface defects of steel and electronic equipment
CN114663859A (en) Sensitive and accurate complex road condition lane deviation real-time early warning system
CN116091503B (en) Method, device, equipment and medium for discriminating panel foreign matter defects
CN102313740A (en) Solar panel crack detection method
CN105844260A (en) Multifunctional smart cleaning robot apparatus
CN113822105B (en) Artificial intelligence water level monitoring system based on online two classifiers of SVM water scale
CN103679738B (en) Method for detecting image edge based on color radius adjacent domains pixel classifications
CN112132036A (en) Big data image processing method and system based on safe area

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant