CN103065147B - A vehicle monitoring method based on image matching and recognition technology - Google Patents

A vehicle monitoring method based on image matching and recognition technology

Info

Publication number
CN103065147B
CN103065147B (application CN201210569107.9A / CN201210569107A)
Authority
CN
China
Prior art keywords
image
pixel
value
gray
mark
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201210569107.9A
Other languages
Chinese (zh)
Other versions
CN103065147A (en)
Inventor
张玲
严玉华
陈智也
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianze Information Industry Corp
Original Assignee
Tianze Information Industry Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianze Information Industry Corp filed Critical Tianze Information Industry Corp
Priority to CN201210569107.9A priority Critical patent/CN103065147B/en
Publication of CN103065147A publication Critical patent/CN103065147A/en
Application granted granted Critical
Publication of CN103065147B publication Critical patent/CN103065147B/en
Legal status: Active

Landscapes

  • Image Analysis (AREA)

Abstract

The present invention relates to vehicle monitoring, and in particular to a vehicle monitoring method based on image recognition, suitable for vehicle monitoring in the logistics industry. The method comprises: 1) using image matching and processing to locate each destination icon in a map image, derive the longitude and latitude of each destination icon, and record the latitude and longitude information in the map system; 2) when a vehicle performs a delivery, if the vehicle's longitude and latitude are approximately equal to those of a destination icon recorded in step 1), the delivery is confirmed; otherwise the delivery is considered false. The method uses image matching and recognition algorithms, combining feature matching with gray-scale matching, to reliably and effectively confirm vehicle activities such as delivery and refueling, and can improve the accuracy of vehicle information management systems.

Description

A vehicle monitoring method based on image matching and recognition technology
Technical field
The present invention relates to vehicle monitoring, and in particular to a vehicle monitoring method based on image recognition, suitable for vehicle monitoring in the logistics industry.
Background technology
With the wide application of vehicle information management systems across industries, tracking and monitoring vehicle activity has become especially important. Activities that affect operating cost, such as refueling and delivery, need particularly reliable monitoring means and methods in order to achieve transparent management and cost control for the user. For refueling, the vehicle terminal currently uploads data such as longitude and latitude, time, speed, activity state and sensor voltage, from which the fuel level and fuel consumption of the vehicle at each moment can be calculated; comparing the fuel levels of adjacent moments yields candidate refueling time points. However, because driving behavior and other factors disturb the uploaded data, it is difficult to conclude whether such a time point really is a refueling event. Other similar vehicle activities share the same problems of unreliability and difficult judgment.
Summary of the invention
The object of the present invention is to overcome the above shortcomings by providing a vehicle monitoring method based on image matching and recognition technology. Combining image matching and recognition, it can monitor vehicle activities such as delivery and refueling, tracking and monitoring the vehicle's activity state so as to achieve transparent management and cost control for the user, with high reliability, low cost and a high degree of automation.
The vehicle monitoring method based on image matching and recognition technology of the present invention adopts the following technical scheme. The method comprises the steps of:
1) using image matching and processing to locate each destination icon in a map image, derive the longitude and latitude of each destination icon, and record the latitude and longitude information in the map system;
2) when a vehicle performs a delivery, if the vehicle's longitude and latitude are approximately equal to those of a destination icon recorded in step 1), the delivery is confirmed; otherwise the delivery is considered false.
The image matching and processing in step 1) comprises the following steps.
1st step, gray-scale conversion of the color map image
Gray-scale conversion turns the color image into a gray image. In the RGB color model, a color with R = G = B represents a shade of gray, and the common value of R, G and B is the gray value; each pixel of a gray image therefore needs only one byte to store its gray value (also called intensity or brightness), with a range of 0 to 255. The gray value of each pixel is obtained from the true-color image as
Gray(x, y) = 0.299 R(x, y) + 0.587 G(x, y) + 0.114 B(x, y)
where (x, y) denotes the pixel coordinate.
After this step, the gray values are normalized within 0 to 255, completing the gray-scale conversion of the color map image.
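As an illustration, the gray-scale conversion described above can be sketched in Python with NumPy (the patent itself works in MATLAB; the function name and the standard luminance weights here are assumptions for illustration, not code from the patent):

```python
import numpy as np

def rgb_to_gray(rgb):
    """Convert an H x W x 3 true-color image (values 0-255) to a gray image.

    Uses the standard luminance weights; the resulting gray values
    stay within the 0-255 range, as the step above requires.
    """
    weights = np.array([0.299, 0.587, 0.114])
    gray = rgb.astype(np.float64) @ weights
    return np.clip(gray, 0, 255).astype(np.uint8)

# A pixel with R = G = B keeps its value: (128, 128, 128) -> 128.
img = np.array([[[128, 128, 128], [255, 0, 0]]], dtype=np.uint8)
print(rgb_to_gray(img))
```

Note that a pixel with R = G = B maps to that common value, consistent with the RGB gray definition above.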
2nd step, binarization of the gray image obtained in the 1st step
After gray-scale conversion, each pixel of the image has a gray value reflecting its brightness. To support the subsequent algorithms, the gray image must also be binarized. Binarization sets the gray value of every pixel to either 0 or 255, so that the whole image shows a clear black-and-white effect. By choosing a suitable threshold, the 256-level gray image is reduced to a binary image that still reflects the overall and local features of the original. Many mature binarization algorithms exist, using either adaptive or fixed thresholds; the key is to determine a threshold that separates object from background while preserving shape: the binary result must not lose useful shape information or introduce spurious holes. At the same time, intelligent transportation systems require fast processing of large amounts of data, and working on binary images greatly improves processing efficiency.
The key of binarization is to find a suitable threshold t that distinguishes object from background. The binarization process can be expressed as
G(x, y) = 255 if f(x, y) >= t, and G(x, y) = 0 otherwise,
where f(x, y) is the original gray image and G(x, y) is the binarized image.
The choice of the threshold t is crucial; it can be expressed in terms of a triple, namely
t = T((x, y), f(x, y), N(x, y))
where (x, y) is the pixel position in the image, f(x, y) is the gray value at (x, y), and N(x, y) is the gray feature of the surrounding neighborhood.
Many methods exist for choosing the binarization threshold; they fall mainly into three classes: global threshold methods, local threshold methods and dynamic threshold methods.
(1) Global threshold methods
A global threshold method determines one threshold from the histogram or the spatial gray distribution of the whole image, and converts the gray image to a binary image with that single threshold. Typical global methods include the Otsu method and the maximum entropy method.
(2) Local threshold methods
A local threshold method determines the threshold of each pixel from its own gray value and the local gray characteristics of its surroundings; the Bernsen algorithm is a typical local threshold method. Conditions such as uneven illumination affect the gray distribution of the whole image but not its local properties, so local methods tolerate them; the definition of the neighborhood and the choice of the neighborhood computation template are then the key factors determining the algorithm's effectiveness.
(3) Dynamic threshold methods
In a dynamic threshold method the threshold depends not only on the gray value of the pixel and its surrounding pixels but also on the coordinate position of the pixel.
The 2nd step uses the Otsu method, a global threshold method, to binarize the gray image; it can be implemented with the graythresh function in the MATLAB toolbox.
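The Otsu criterion used here (maximizing between-class variance over the histogram) can be sketched in Python/NumPy as a stand-in for MATLAB's graythresh; this minimal version is illustrative only and the function names are assumptions:

```python
import numpy as np

def otsu_threshold(gray):
    """Return the Otsu threshold t (0-255) maximizing between-class variance."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    total = hist.sum()
    sum_all = np.dot(np.arange(256), hist)
    best_t, best_var = 0, -1.0
    w0 = 0.0    # background pixel count so far
    sum0 = 0.0  # background gray-value sum so far
    for t in range(256):
        w0 += hist[t]
        if w0 == 0:
            continue
        w1 = total - w0
        if w1 == 0:
            break
        sum0 += t * hist[t]
        m0 = sum0 / w0                  # background mean
        m1 = (sum_all - sum0) / w1      # foreground mean
        var_between = w0 * w1 * (m0 - m1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

def binarize(gray, t):
    """G(x, y) = 255 if f(x, y) > t else 0, as in the step above."""
    return np.where(gray > t, 255, 0).astype(np.uint8)
```

On a bimodal image the threshold lands between the two gray clusters, separating object from background as the step requires.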
3rd step, labeling the connected regions of the binary image obtained in the 2nd step
The connected regions of the binary image obtained in the 2nd step are labeled. Because the image to be matched is a connected graph, only the connected regions need to be operated on, which reduces the search and execution time of the matching algorithm.
The binary image is labeled as follows:
3.1) Label the upper-left corner of the image, i.e. the pixel in the first row and first column.
If its pixel value is 255, mark this point with label 1; otherwise, move on to the pixel in the first row, second column.
3.2) Label the remaining pixels of the first row.
No equivalent pairs can arise here, so none need to be recorded. For each pixel of this row whose value is 255, check whether its left neighbor is 255; if so, give it the same label as the left neighbor, otherwise give it the previous label value plus one. If the pixel value is 0, continue to the next pixel.
3.3) Label the rows after the first; equivalent pairs can now arise and must be recorded.
3.3.1) Process the first column first. If the pixel value is 0, move on to the next pixel of the row; otherwise examine the pixel values at the upper and upper-right positions. If the upper pixel is labeled, copy its label to this point; then check whether the upper-right pixel is also labeled, and if both are labeled with different labels, record the pair (upper, upper-right) as an equivalence in the equivalence table. If the upper pixel is unlabeled but the upper-right one is labeled, copy the upper-right label. If neither is labeled, the label of this point is the previous label value plus one.
3.3.2) Process the middle columns. If the pixel value is 255, examine the pixel values at the left, upper-left, upper and upper-right positions. If all four are 0, the label of this point is the previous label value plus one. If exactly one of the four is 255, copy its label. If m of them (1 < m <= 4) are 255, determine the label of this point by the priority left, upper-left, upper, upper-right, then analyze the labels of these m positions for equivalent pairs and record them accordingly.
3.3.3) Process the last column in the same way.
3.3.4) Continue scanning until all pixels have been scanned.
3.4) In a second scan, assign to each connected component the smallest label among its equivalent labels in the equivalence table, producing the final connected-component labeling.
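The two-pass scan above can be sketched as follows in Python/NumPy; a union-find forest plays the role of the equivalence table, and the resolution step assigns each component its smallest label. This is an illustrative reimplementation, not the patent's own code:

```python
import numpy as np

def label_two_pass(binary):
    """Two-pass 8-connectivity labeling of a binary image (foreground = 255).

    First pass: assign provisional labels from already-visited neighbors
    (left, upper-left, upper, upper-right) and record equivalent pairs.
    Second pass: replace each label with the minimum of its equivalence class.
    """
    h, w = binary.shape
    labels = np.zeros((h, w), dtype=int)
    parent = {}  # union-find forest over provisional labels

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[max(ra, rb)] = min(ra, rb)  # keep the smaller label as root

    next_label = 1
    for y in range(h):
        for x in range(w):
            if binary[y, x] != 255:
                continue
            neigh = []
            for dy, dx in ((0, -1), (-1, -1), (-1, 0), (-1, 1)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and labels[ny, nx] > 0:
                    neigh.append(labels[ny, nx])
            if not neigh:
                labels[y, x] = next_label
                parent[next_label] = next_label
                next_label += 1
            else:
                labels[y, x] = min(neigh)
                for n in neigh:  # record equivalences among neighbor labels
                    union(neigh[0], n)
    # second pass: resolve each provisional label to its class minimum
    for y in range(h):
        for x in range(w):
            if labels[y, x] > 0:
                labels[y, x] = find(labels[y, x])
    return labels
```

A U-shaped region, whose two arms first receive different provisional labels, ends up with a single label after the second pass, exactly the situation the equivalence table handles.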
4th step, detecting the edges of the connected regions labeled in the 3rd step
The essence of edge detection is to use an algorithm to extract the boundary lines between objects and background in the image. We define an edge as a region boundary in the image where the gray level changes sharply. The change of image gray level is reflected by the gradient of the gray distribution, so edge detection operators can be derived from local differential techniques.
The connected regions consist of connected objects with black borders on a white background, and the edge detection must find these black borders of the connected objects. After the 4th step, all connected objects can be labeled separately.
Edge detection proceeds as follows:
4.1) Filtering
A filter is used to improve the noise robustness of the edge detector. Edge detection algorithms are mainly based on the first and second derivatives of the image intensity, and derivative computation is very sensitive to noise, so a filter must be applied. Note that most filters lose edge strength while reducing noise, so a compromise must be struck between enhancing edges and suppressing noise.
4.2) Enhancement
Edge enhancement is generally done by computing the gradient magnitude. Its basis is determining the change of intensity in the neighborhood of each image point; the enhancement highlights points whose neighborhood (local) intensity changes significantly.
4.3) Detection
Edge points are determined by a gradient-magnitude threshold criterion. Many points in the image have a large gradient magnitude, yet in a specific application not all of them are edges, so some method is needed to decide which points are edge points. The simplest criterion is a threshold on the gradient magnitude.
4.4) Localization
The edge position is estimated at sub-pixel resolution. If an application requires the edge position to be determined, it can be estimated at sub-pixel resolution, and the orientation of the edge can also be estimated.
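The filter/enhance/detect pipeline above can be sketched with Sobel derivatives and a gradient-magnitude threshold. This is a simplified illustration, not the full Canny detector the patent adopts (it omits non-maximum suppression, hysteresis and sub-pixel localization):

```python
import numpy as np

def gradient_edges(gray, threshold):
    """Edge map by thresholding the gradient magnitude.

    The 3x3 Sobel kernels both smooth (filtering, 4.1) and estimate the
    first derivatives; the magnitude is the enhancement of 4.2, and the
    threshold implements the detection criterion of 4.3.
    """
    g = gray.astype(np.float64)
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float64)
    ky = kx.T
    h, w = g.shape
    mag = np.zeros((h, w))
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            win = g[y - 1:y + 2, x - 1:x + 2]
            gx = np.sum(win * kx)
            gy = np.sum(win * ky)
            mag[y, x] = np.hypot(gx, gy)  # gradient amplitude
    return (mag > threshold).astype(np.uint8) * 255
```

On a black/white step image, the points flagged 255 lie exactly along the step, while flat regions stay 0.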
5th step, labeling the binary image after edge detection
All connected objects that underwent edge detection in the 4th step are labeled: if L is a label array of the same size as the connected-object image, then L = i denotes the i-th connected object.
6th step, feature matching
The height and width of a labeled connected object from the 5th step are compared with the height and width of the destination-icon template. If they are equal or close (in principle equality is required, but to allow for imperfect edge detection the algorithm also accepts a width or height differing from the template by one pixel), proceed to the 8th step; if they are neither equal nor close, proceed to the 7th step.
7th step, matching partial regions of the template image
Check whether the result of the 6th step falls into one of the following four cases: 1) close to the lower half of the template image; 2) close to the upper half of the template image; 3) close to the left half of the template image; 4) close to the right half of the template image. If any of these cases occurs, proceed to the 8th step.
This step is needed because map tiling may split an image to be matched in two, so matches cannot be detected by the 6th step alone.
8th step, gray-scale matching
Cut out from the gray image of the original map the subimage corresponding to an object that satisfies the conditions, and match it against the template by gray values, i.e. compute the correlation between the gray values of the two images. If the correlation coefficient is greater than 0.9, the two are considered a match, and the starting position of the subimage is assigned to the output array variable.
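The gray-value correlation with the 0.9 acceptance threshold can be sketched as the Pearson correlation coefficient between two equal-size images; the exact correlation formula is not spelled out in the patent, so this normalized version is an assumption:

```python
import numpy as np

def correlation_coefficient(a, b):
    """Pearson correlation between the gray values of two equal-size images."""
    a = a.astype(np.float64).ravel()
    b = b.astype(np.float64).ravel()
    a -= a.mean()
    b -= b.mean()
    denom = np.sqrt((a ** 2).sum() * (b ** 2).sum())
    if denom == 0:
        return 0.0
    return float((a * b).sum() / denom)

def is_gray_match(subimage, template, threshold=0.9):
    """Accept a match when the correlation exceeds the patent's 0.9 bound."""
    return correlation_coefficient(subimage, template) > threshold
```

Because the coefficient is computed on mean-centered values, a uniform brightness shift between map and template does not spoil the match, while an unrelated pattern falls well below 0.9.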
9th step, outputting longitude and latitude
Through the steps above, the exact location of each destination address on the map can be matched, yielding accurate latitude and longitude information for every destination on the map.
When a monitored vehicle drives to a destination address to make a delivery, the vehicle information management system automatically matches the vehicle's coordinates against the destination's latitude and longitude; when the two are less than 1 meter apart, the vehicle is considered to be making a delivery, and an alert for this activity is shown to the user, improving the accuracy of the user's vehicle monitoring.
The 4th step uses the Canny edge detection algorithm, and the 6th step matches connected regions using 8-connectivity. Based on the image features, the connected regions in the original image are found and Canny edge detection is applied, excluding connected regions irrelevant to the template so as to narrow the search; gray-scale matching is then applied within the target regions to improve retrieval precision. This method can effectively match the template image.
Advantages of the present invention: the method uses image matching and recognition algorithms, combining feature matching with gray-scale matching, to reliably and effectively confirm vehicle activities such as delivery and refueling, improving the accuracy of vehicle information management systems. By applying image matching and recognition to obtain the positions of all supermarket icons in a given region of a rasterized map, it achieves transparent management and cost control for the user; it can also be applied to landmark collection and target tracking in other projects, reducing manpower input and saving cost.
Accompanying drawing explanation
The invention is further described below with reference to the accompanying drawings.
Fig. 1 is the flow chart of the image matching and processing method of the present invention.
Fig. 2 is the map image after gray-scale conversion of the color image in the image matching and processing method of the present invention.
Fig. 3 is the map image after binarization in the image matching and processing method of the present invention.
Fig. 4 is the map image after edge detection in the image matching and processing method of the present invention.
Fig. 5 is the map image in which the destination icon is finally successfully matched by the image matching and processing method of the present invention.
Embodiment
Referring to Figs. 1 to 5, the vehicle monitoring method based on image matching and recognition technology of the present invention comprises the steps of:
1) using image matching and processing to locate each destination icon in a map image, derive the longitude and latitude of each destination icon, and record the latitude and longitude information in the map system;
2) when a vehicle performs a delivery, if the vehicle's longitude and latitude are approximately equal to those of a destination icon recorded in step 1), the delivery is confirmed; otherwise the delivery is considered false.
The image matching and processing in step 1) comprises the following steps.
1st step, gray-scale conversion of the color map image
Gray-scale conversion turns the color image into a gray image. In the RGB color model, a color with R = G = B represents a shade of gray, and the common value of R, G and B is the gray value; each pixel of a gray image therefore needs only one byte to store its gray value (also called intensity or brightness), with a range of 0 to 255. The gray value of each pixel is obtained from the true-color image as
Gray(x, y) = 0.299 R(x, y) + 0.587 G(x, y) + 0.114 B(x, y)
where (x, y) denotes the pixel coordinate.
After this step, the gray values are normalized within 0 to 255, completing the gray-scale conversion of the color map image.
2nd step, binarization of the gray image obtained in the 1st step
After gray-scale conversion, each pixel of the image has a gray value reflecting its brightness. To support the subsequent algorithms, the gray image must also be binarized. Binarization sets the gray value of every pixel to either 0 or 255, so that the whole image shows a clear black-and-white effect. By choosing a suitable threshold, the 256-level gray image is reduced to a binary image that still reflects the overall and local features of the original. Many mature binarization algorithms exist, using either adaptive or fixed thresholds; the key is to determine a threshold that separates object from background while preserving shape: the binary result must not lose useful shape information or introduce spurious holes. At the same time, intelligent transportation systems require fast processing of large amounts of data, and working on binary images greatly improves processing efficiency.
The key of binarization is to find a suitable threshold t that distinguishes object from background. The binarization process can be expressed as
G(x, y) = 255 if f(x, y) >= t, and G(x, y) = 0 otherwise,
where f(x, y) is the original gray image and G(x, y) is the binarized image.
The choice of the threshold t is crucial; it can be expressed in terms of a triple, namely
t = T((x, y), f(x, y), N(x, y))
where (x, y) is the pixel position in the image, f(x, y) is the gray value at (x, y), and N(x, y) is the gray feature of the surrounding neighborhood.
Many methods exist for choosing the binarization threshold; they fall mainly into three classes: global threshold methods, local threshold methods and dynamic threshold methods.
(1) Global threshold methods
A global threshold method determines one threshold from the histogram or the spatial gray distribution of the whole image, and converts the gray image to a binary image with that single threshold. Typical global methods include the Otsu method and the maximum entropy method.
(2) Local threshold methods
A local threshold method determines the threshold of each pixel from its own gray value and the local gray characteristics of its surroundings; the Bernsen algorithm is a typical local threshold method. Conditions such as uneven illumination affect the gray distribution of the whole image but not its local properties, so local methods tolerate them; the definition of the neighborhood and the choice of the neighborhood computation template are then the key factors determining the algorithm's effectiveness.
(3) Dynamic threshold methods
In a dynamic threshold method the threshold depends not only on the gray value of the pixel and its surrounding pixels but also on the coordinate position of the pixel.
The 2nd step uses the Otsu method, a global threshold method, to binarize the gray image; it can be implemented with the graythresh function in the MATLAB toolbox.
3rd step, labeling the connected regions of the binary image obtained in the 2nd step
The connected regions of the binary image obtained in the 2nd step are labeled. Because the image to be matched is a connected graph, only the connected regions need to be operated on, which reduces the search and execution time of the matching algorithm.
The binary image is labeled as follows:
3.1) Label the upper-left corner of the image, i.e. the pixel in the first row and first column.
If its pixel value is 255, mark this point with label 1; otherwise, move on to the pixel in the first row, second column.
3.2) Label the remaining pixels of the first row.
No equivalent pairs can arise here, so none need to be recorded. For each pixel of this row whose value is 255, check whether its left neighbor is 255; if so, give it the same label as the left neighbor, otherwise give it the previous label value plus one. If the pixel value is 0, continue to the next pixel.
3.3) Label the rows after the first; equivalent pairs can now arise and must be recorded.
3.3.1) Process the first column first. If the pixel value is 0, move on to the next pixel of the row; otherwise examine the pixel values at the upper and upper-right positions. If the upper pixel is labeled, copy its label to this point; then check whether the upper-right pixel is also labeled, and if both are labeled with different labels, record the pair (upper, upper-right) as an equivalence in the equivalence table. If the upper pixel is unlabeled but the upper-right one is labeled, copy the upper-right label. If neither is labeled, the label of this point is the previous label value plus one.
3.3.2) Process the middle columns. If the pixel value is 255, examine the pixel values at the left, upper-left, upper and upper-right positions. If all four are 0, the label of this point is the previous label value plus one. If exactly one of the four is 255, copy its label. If m of them (1 < m <= 4) are 255, determine the label of this point by the priority left, upper-left, upper, upper-right, then analyze the labels of these m positions for equivalent pairs and record them accordingly.
3.3.3) Process the last column in the same way.
3.3.4) Continue scanning until all pixels have been scanned.
3.4) In a second scan, assign to each connected component the smallest label among its equivalent labels in the equivalence table, producing the final connected-component labeling.
4th step, detecting the edges of the connected regions labeled in the 3rd step
The essence of edge detection is to use an algorithm to extract the boundary lines between objects and background in the image. We define an edge as a region boundary in the image where the gray level changes sharply. The change of image gray level is reflected by the gradient of the gray distribution, so edge detection operators can be derived from local differential techniques.
The connected regions consist of connected objects with black borders on a white background, and the edge detection must find these black borders of the connected objects. After the 4th step, all connected objects can be labeled separately.
Edge detection proceeds as follows:
4.1) Filtering
A filter is used to improve the noise robustness of the edge detector. Edge detection algorithms are mainly based on the first and second derivatives of the image intensity, and derivative computation is very sensitive to noise, so a filter must be applied. Note that most filters lose edge strength while reducing noise, so a compromise must be struck between enhancing edges and suppressing noise.
4.2) Enhancement
Edge enhancement is generally done by computing the gradient magnitude. Its basis is determining the change of intensity in the neighborhood of each image point; the enhancement highlights points whose neighborhood (local) intensity changes significantly.
4.3) Detection
Edge points are determined by a gradient-magnitude threshold criterion. Many points in the image have a large gradient magnitude, yet in a specific application not all of them are edges, so some method is needed to decide which points are edge points. The simplest criterion is a threshold on the gradient magnitude.
4.4) Localization
The edge position is estimated at sub-pixel resolution. If an application requires the edge position to be determined, it can be estimated at sub-pixel resolution, and the orientation of the edge can also be estimated.
5th step, labeling the binary image after edge detection
All connected objects that underwent edge detection in the 4th step are labeled: if L is a label array of the same size as the connected-object image, then L = i denotes the i-th connected object.
6th step, feature matching
The height and width of a labeled connected object from the 5th step are compared with the height and width of the destination-icon template. If they are equal or close (in principle equality is required, but to allow for imperfect edge detection the algorithm also accepts a width or height differing from the template by one pixel), proceed to the 8th step; if they are neither equal nor close, proceed to the 7th step.
7th step, matching partial regions of the template image
Check whether the result of the 6th step falls into one of the following four cases: 1) close to the lower half of the template image; 2) close to the upper half of the template image; 3) close to the left half of the template image; 4) close to the right half of the template image. If any of these cases occurs, proceed to the 8th step.
This step is needed because map tiling may split an image to be matched in two, so matches cannot be detected by the 6th step alone.
8th step, gray-scale matching
Cut out from the gray image of the original map the subimage corresponding to an object that satisfies the conditions, and match it against the template by gray values, i.e. compute the correlation between the gray values of the two images. If the correlation coefficient is greater than 0.9, the two are considered a match, and the starting position of the subimage is assigned to the output array variable. Fig. 4 shows the final matching result.
9th step, outputting longitude and latitude
Through the steps above, the exact location of each destination address on the map can be matched, yielding accurate latitude and longitude information for every destination on the map.
When a monitored vehicle drives to a destination address to make a delivery, the vehicle information management system automatically matches the vehicle's coordinates against the destination's latitude and longitude; when the two are less than 1 meter apart, the vehicle is considered to be making a delivery, and an alert for this activity is shown to the user, improving the accuracy of the user's vehicle monitoring.
The Canny algorithm is used for edge detection in the 4th step, and 8-connectivity is used for connected-region matching in the 6th step. Based on the features of the image, the connected regions in the original image are found and Canny edge detection is applied, excluding those connected regions irrelevant to the template image so as to narrow the search; gray-scale matching is then applied within the target region to improve the precision of retrieval. This method matches the template image reliably and effectively; the result is shown in Figure 3.
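An 8-connected region extraction, such as the 6th step relies on, can be sketched with a flood fill (equivalent in output to the two-pass labelling described in claim 3); the 0/255 test image is illustrative:

```python
# 8-connected region labelling on a binary image (0 = background,
# 255 = foreground), via a simple stack-based flood fill.

def label_regions(img):
    """img: list of rows of 0/255 pixels. Returns (labels, n): a parallel
    grid with 0 for background and 1..n for each 8-connected region."""
    h, w = len(img), len(img[0])
    labels = [[0] * w for _ in range(h)]
    n = 0
    for sy in range(h):
        for sx in range(w):
            if img[sy][sx] == 255 and labels[sy][sx] == 0:
                n += 1                      # found a new region
                stack = [(sy, sx)]
                while stack:
                    y, x = stack.pop()
                    if 0 <= y < h and 0 <= x < w and img[y][x] == 255 and labels[y][x] == 0:
                        labels[y][x] = n
                        # push all 8 neighbours
                        stack.extend((y + dy, x + dx)
                                     for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                                     if (dy, dx) != (0, 0))
    return labels, n

img = [[255, 0,   0,   0],
       [0,   255, 0,   255],
       [0,   0,   0,   255]]
labels, n = label_regions(img)
print(n)  # 2: the diagonal pair counts as one 8-connected region
```

Under 4-connectivity the same image would contain three regions, which is why the choice of connectivity matters for matching icon shapes.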
Embodiment 1:
In monitoring of refueling activity, each filling station has its own icon on the map; at the moment of refueling, the vehicle's latitude and longitude should be approximately equal to those of the corresponding filling station. Therefore, from the computed latitude and longitude of the vehicle at the refueling moment, we look up the positions of all filling stations in that latitude/longitude region and compare them, so that the refueling activity of the vehicle can be reliably confirmed.
Obtaining the positions of all filling stations in a given region of the map is an image-processing problem. Image-processing techniques have been studied in depth: intelligent algorithms such as BP neural networks, genetic algorithms, and wavelet techniques, as well as digital image processing. Most intelligent algorithms, however, require features to be extracted from the input image, and the feature-extraction scheme changes as the input changes. Considering the generality and scalability of the project, digital image processing was finally chosen to solve the image problems in the actual project.
During development, several methods were tried; either their running time was too long or they restricted the images that could be matched, and the image matching algorithm succeeded only after combining several methods. Taking filling stations as an example, the station icon is cut out of the map and saved as the template image. The map area to be searched is divided into sub-images of 256 × 256 pixels; each sub-image is then taken as the original image and searched for the position of the template. From the latitude and longitude information of the original image itself, the user obtains the latitude and longitude of the matched position.
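The tiling described above, and the conversion of a match position inside a tile back to full-map pixel coordinates, can be sketched as follows (the 256-pixel tile size is from the text; the final pixel-to-latitude/longitude georeferencing step is omitted and depends on the map system):

```python
# Split a map raster into 256x256-pixel sub-images and convert a template
# hit inside a tile back to a pixel position in the full map, from which
# latitude/longitude follows given the map's georeference.

TILE = 256

def tiles(width, height):
    """Yield (tile_x, tile_y, origin_px, origin_py) for every tile covering
    a map of the given pixel dimensions."""
    for ty in range(0, height, TILE):
        for tx in range(0, width, TILE):
            yield tx // TILE, ty // TILE, tx, ty

def hit_to_map_pixel(tile_x, tile_y, px, py):
    """Map a match position (px, py) inside a tile to full-map pixels."""
    return tile_x * TILE + px, tile_y * TILE + py

# A match found at (40, 12) inside tile (3, 2):
print(hit_to_map_pixel(3, 2, 40, 12))  # (808, 524)
print(len(list(tiles(1024, 512))))     # 8 tiles: a 4 x 2 grid
```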

Claims (6)

1. A vehicle monitoring method based on image matching and recognition technology, characterized in that the method comprises the following steps:
1) using image matching and processing, search the map image for each destination icon, derive the longitude and latitude of each destination icon, and record the latitude and longitude information in the map system;
2) when the vehicle carries out a delivery, if the vehicle's latitude and longitude are approximately equal to those of a destination icon recorded in step 1), the vehicle is confirmed to be making a delivery; otherwise the delivery activity is considered false;
the image matching and processing in step 1) comprises the following steps:
1st step, converting the color map image to gray scale
The gray value of each pixel is computed from the true-color image as
Gray(x, y) = 0.299 R(x, y) + 0.587 G(x, y) + 0.114 B(x, y)
where (x, y) denotes the pixel coordinate;
after this operation the gray values are normalized to the range 0 ~ 255, completing the grayscale conversion of the color map image;
2nd step, binarizing the gray image obtained in the 1st step
The binarization is expressed as
G(x, y) = 255 if f(x, y) ≥ T, and G(x, y) = 0 otherwise,
where f(x, y) is the original gray image and G(x, y) is the binarized image;
T denotes the threshold and is a function of a triple, namely
T = T[(x, y), N(x, y), f(x, y)]
where (x, y) is the pixel position in the image, f(x, y) represents the gray value at (x, y), and N(x, y) is a gray-level feature of the surrounding neighborhood;
3rd step, labelling the connected regions of the binary image obtained in the 2nd step
Each connected region of the binary image obtained in the 2nd step is given a mark;
4th step, detecting the edges of the connected regions marked in the 3rd step
The black borders containing the connected objects are detected; after the 4th-step operation, all connected objects can be labelled separately;
5th step, labelling the binary image after edge detection
All connected objects remaining after the edge detection of the 4th step are labelled;
6th step, feature matching
The height and width of a connected object labelled in the 5th step are compared with the height and width of the destination-icon template image; if they are equal or close, proceed to the 8th step; if neither equal nor close, proceed to the 7th step;
7th step, matching the regions around the template image
For an object rejected in the 6th step, judge whether any of the following four cases holds: 1) it is close to the lower half of the template image; 2) it is close to the upper half of the template image; 3) it is close to the left half of the template image; 4) it is close to the right half of the template image; if any one of them holds, proceed to the 8th step;
8th step, Gray-scale Matching
From the gray-scale version of the original image, cut out the sub-image corresponding to an object that satisfies the above conditions and perform gray-scale matching against the template image, i.e. compute the correlation between the gray values of the two images; if the correlation coefficient is greater than 0.9, the two are considered to match, and the starting position of the sub-image is assigned to the output array variable;
9th step, outputting the longitude and latitude
Through the steps above, the exact location of each destination address on the map is matched, and the accurate latitude and longitude information of each destination on the map is obtained.
2. The vehicle monitoring method based on image matching and recognition technology according to claim 1, characterized in that in the 2nd step the Otsu global-threshold method is used to binarize the gray image, implemented with the graythresh function of the MATLAB toolbox.
3. The vehicle monitoring method based on image matching and recognition technology according to claim 1, characterized in that in the 3rd step the pixels of the binary image are labelled as follows:
3.1) label the pixel at the upper-left corner of the image, i.e. the first row, first column
If its value is 255, mark this point as 1; otherwise, begin scanning from the pixel at the first row, second column;
3.2) label the remaining pixels of the first row
For each pixel of this row whose value is 255, check whether the pixel to its left is 255; if so, give this point the same mark as the left pixel; otherwise, give it the previous mark value plus one; if the pixel value of this point is 0, continue scanning the next pixel;
3.3) label the rows other than the first
3.3.1) Process the first column first: if the pixel value is 0, scan the next pixel of this row; otherwise, examine the pixel values at the upper and upper-right positions; if the upper pixel is marked, give this point the upper pixel's mark;
then check whether the upper-right pixel is also marked; if it is, compare the marks of the upper and upper-right pixels, and if they are unequal, record the upper and upper-right marks as an equivalent pair in the equivalence record table; if the upper pixel is unmarked but the upper-right pixel is marked, give this point the upper-right mark; if neither the upper nor the upper-right pixel is marked, the mark of this point is the previous mark value plus one;
3.3.2) Process the middle columns: if the pixel value of the point is 255, examine the pixel values at the left, upper-left, upper, and upper-right positions; if all four are 0, the mark of this point is the previous mark value plus one; if exactly one of the four is 255, this point takes that pixel's mark; if m of those pixels are 255, the mark of this point is determined according to the priority left, upper-left, upper, upper-right, the marks of those m pixel positions are analysed as equivalent pairs, and the pairs are recorded accordingly; said m is greater than 1 and less than or equal to 4;
3.3.3) Process the last column, in the same way as step 3.3.2;
3.3.4) Scan in this order until all pixels have been scanned;
3.4) In a second scan, all equivalent labels are replaced by the smallest label of their equivalence class in the record table, giving the final connected-component labelling.
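The two-pass labelling of steps 3.1 to 3.4 can be sketched as follows; the union-find structure is one common realisation of the equivalence record table, and the 0/255 test image is illustrative:

```python
# Two-pass connected-component labelling: pass one assigns provisional
# labels from the already-scanned left/upper-left/upper/upper-right
# neighbours and records equivalent pairs; pass two rewrites every label
# to the smallest label of its equivalence class.

def two_pass_label(img):
    h, w = len(img), len(img[0])
    labels = [[0] * w for _ in range(h)]
    parent = {}                              # equivalence record table

    def find(a):
        while parent[a] != a:
            a = parent[a]
        return a

    def union(a, b):
        ra, rb = find(a), find(b)
        parent[max(ra, rb)] = min(ra, rb)    # keep the smaller label as root

    next_label = 0
    for y in range(h):
        for x in range(w):
            if img[y][x] != 255:
                continue
            # previously scanned neighbours: left, upper-left, upper, upper-right
            neigh = [labels[ny][nx]
                     for ny, nx in ((y, x - 1), (y - 1, x - 1), (y - 1, x), (y - 1, x + 1))
                     if 0 <= ny and 0 <= nx < w and labels[ny][nx] > 0]
            if not neigh:
                next_label += 1
                parent[next_label] = next_label
                labels[y][x] = next_label
            else:
                labels[y][x] = min(neigh)
                for n in neigh:              # record equivalent pairs
                    union(min(neigh), n)
    # second pass: replace each label by the smallest equivalent label
    for y in range(h):
        for x in range(w):
            if labels[y][x]:
                labels[y][x] = find(labels[y][x])
    return labels

img = [[255, 0,   255],
       [255, 255, 255]]  # a U-shape: provisional labels 1 and 2 merge on row 2
out = two_pass_label(img)
print(sorted({v for row in out for v in row if v}))  # [1]: a single component
```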
4. The vehicle monitoring method based on image matching and recognition technology according to claim 1 or 2, characterized in that in the 3rd step only the first connected region needs to be processed, which reduces the search and execution time of the matching algorithm.
5. The vehicle monitoring method based on image matching and recognition technology according to claim 1, characterized in that the Canny algorithm is used for edge detection in the 4th step and 8-connectivity is used for connected-region matching in the 6th step; based on the features of the image, the connected regions in the original image are found and Canny edge detection is applied, excluding those connected regions irrelevant to the template image so as to narrow the search; gray-scale matching is then applied within the target region to improve the precision of retrieval.
6. The vehicle monitoring method based on image matching and recognition technology according to claim 1, characterized in that the edge detection in the 4th step proceeds as follows:
4.1) filtering
A filter is used to improve the noise-related performance of the edge detector;
4.2) enhancement
Edge enhancement is completed by computing the gradient magnitude;
4.3) detection
A gradient-magnitude threshold criterion is used to determine the edge points, realizing the edge detection;
4.4) localization
The edge position is estimated at sub-pixel resolution.
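Steps 4.1 to 4.3 can be illustrated on a single image row; this is a deliberately reduced sketch (full Canny additionally performs non-maximum suppression and hysteresis thresholding), and the kernel and threshold values are illustrative:

```python
# A reduced illustration of edge-detection steps 4.1-4.3 on one image row:
# smoothing filter, gradient-magnitude enhancement, threshold detection.

def detect_edges(row, threshold=40):
    # 4.1 filtering: a small [1, 2, 1]/4 smoothing kernel suppresses noise
    smooth = [row[0]] + [(row[i - 1] + 2 * row[i] + row[i + 1]) // 4
                         for i in range(1, len(row) - 1)] + [row[-1]]
    # 4.2 enhancement: central-difference gradient magnitude
    grad = [0] + [abs(smooth[i + 1] - smooth[i - 1]) // 2
                  for i in range(1, len(smooth) - 1)] + [0]
    # 4.3 detection: gradient-magnitude threshold criterion
    return [i for i, g in enumerate(grad) if g >= threshold]

row = [10, 10, 10, 10, 200, 200, 200, 200]  # a step edge between index 3 and 4
print(detect_edges(row))  # [3, 4]
```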
CN201210569107.9A 2012-12-25 2012-12-25 A kind of vehicle monitoring method based on images match and recognition technology Active CN103065147B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210569107.9A CN103065147B (en) 2012-12-25 2012-12-25 A kind of vehicle monitoring method based on images match and recognition technology


Publications (2)

Publication Number Publication Date
CN103065147A CN103065147A (en) 2013-04-24
CN103065147B true CN103065147B (en) 2015-10-07

Family

ID=48107770

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210569107.9A Active CN103065147B (en) 2012-12-25 2012-12-25 A kind of vehicle monitoring method based on images match and recognition technology

Country Status (1)

Country Link
CN (1) CN103065147B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103258434A (en) * 2013-04-25 2013-08-21 广州中国科学院软件应用技术研究所 Image border detecting system based on video traffic flow detection and vehicle identification
CN103678559A (en) * 2013-12-06 2014-03-26 中国航天科工集团第四研究院指挥自动化技术研发与应用中心 Method and device for displaying monitoring data
CN104156724A (en) * 2014-07-09 2014-11-19 宁波摩视光电科技有限公司 Two-value image connected domain marking algorithm based on AOI bullet apparent defect detecting system
CN106032967B (en) * 2015-02-11 2018-09-18 贵州景浩科技有限公司 The automatic multiplying power method of adjustment of electronic sighting device
CN114549649A (en) * 2022-04-27 2022-05-27 江苏智绘空天技术研究院有限公司 Feature matching-based rapid identification method for scanned map point symbols

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101398894A (en) * 2008-06-17 2009-04-01 浙江师范大学 Automobile license plate automatic recognition method and implementing device thereof
CN201590160U (en) * 2010-02-04 2010-09-22 北京交通大学 Agricultural materials chain administration distribution vehicle information management and monitoring scheduling system
CN102426584A (en) * 2011-10-13 2012-04-25 天泽信息产业股份有限公司 Service system for obtaining accurate geographical position of vehicle and obtaining method thereof

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7787711B2 (en) * 2006-03-09 2010-08-31 Illinois Institute Of Technology Image-based indexing and classification in image databases


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Method for recognizing special icons in maps based on feature-template matching; Li Yang et al.; Journal of Electronic Measurement and Instrumentation; 2012-07-15; vol. 26, no. 7; 605-609 *


Similar Documents

Publication Publication Date Title
CN103065147B (en) A kind of vehicle monitoring method based on images match and recognition technology
CN101701818B (en) Method for detecting long-distance barrier
CN107578035A (en) Human body contour outline extracting method based on super-pixel polychrome color space
Cho et al. 2D barcode detection using images for drone-assisted inventory management
CN106530297A (en) Object grabbing region positioning method based on point cloud registering
CN104751187A (en) Automatic meter-reading image recognition method
CN107066933A (en) A kind of road sign recognition methods and system
CN110008900B (en) Method for extracting candidate target from visible light remote sensing image from region to target
CN104156731A (en) License plate recognition system based on artificial neural network and method
CN108090459B (en) Traffic sign detection and identification method suitable for vehicle-mounted vision system
CN102243705B (en) Method for positioning license plate based on edge detection
CN106408555A (en) Bearing surface flaw detection method based on image vision
CN102663723B (en) Image segmentation method based on color sample and electric field model
CN105574542A (en) Multi-vision feature vehicle detection method based on multi-sensor fusion
CN103593695A (en) Method for positioning DPM two-dimension code area
CN102194102A (en) Method and device for classifying a traffic sign
CN110751619A (en) Insulator defect detection method
CN104899892A (en) Method for quickly extracting star points from star images
CN103914849A (en) Method for detecting red date image
CN109389165A (en) Oil level gauge for transformer recognition methods based on crusing robot
CN101383005A (en) Method for separating passenger target image and background by auxiliary regular veins
CN103996031A (en) Self adaptive threshold segmentation lane line detection system and method
CN106845458A (en) A kind of rapid transit label detection method of the learning machine that transfinited based on core
CN102073868A (en) Digital image closed contour chain-based image area identification method
CN112926365A (en) Lane line detection method and system

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20130424

Assignee: JIANGSU SEA LEVEL DATA TECHNOLOGY Co.,Ltd.

Assignor: TIANZE INFORMATION INDUSTRY Corp.

Contract record no.: X2020320000015

Denomination of invention: Vehicle monitoring method based on image matching and recognition technology

Granted publication date: 20151007

License type: Exclusive License

Record date: 20200518