CN113158728A - Parking space state detection method based on gray level co-occurrence matrix - Google Patents

Parking space state detection method based on gray level co-occurrence matrix

Info

Publication number
CN113158728A
CN113158728A (application CN202011617045.5A; granted as CN113158728B)
Authority
CN
China
Prior art keywords
gray
vehicle
state
effective area
matrix
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011617045.5A
Other languages
Chinese (zh)
Other versions
CN113158728B (en)
Inventor
华璟
俞庭
彭浩宇
胡峥
吕佳俊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Tuge Technology Co ltd
Original Assignee
Hangzhou Tuge Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Tuge Technology Co ltd filed Critical Hangzhou Tuge Technology Co ltd
Priority to CN202011617045.5A priority Critical patent/CN113158728B/en
Publication of CN113158728A publication Critical patent/CN113158728A/en
Application granted granted Critical
Publication of CN113158728B publication Critical patent/CN113158728B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00: Scenes; scene-specific elements
    • G06V 20/50: Context or environment of the image
    • G06V 20/52: Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00: Scenes; scene-specific elements
    • G06V 20/40: Scenes; scene-specific elements in video content
    • G06V 20/41: Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00: Scenes; scene-specific elements
    • G06V 20/40: Scenes; scene-specific elements in video content
    • G06V 20/46: Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00: Road transport of goods or passengers
    • Y02T 10/10: Internal combustion engine [ICE] based vehicles
    • Y02T 10/40: Engine management systems

Abstract

The invention discloses a parking space state detection method based on a gray level co-occurrence matrix, comprising the steps of: obtaining a video stream image; converting the video stream image into a single-channel grayscale image; obtaining the gray matrix of each frame and extracting the gray matrix of the parking space region from the complete gray matrix; normalizing the regional gray matrix and calculating the gray level co-occurrence matrix; and calculating the corresponding features from the gray level co-occurrence matrix to evaluate the state of the parking space. The invention is suited to judging from an image whether a parking space in a clear field of view is empty; it can evaluate the parking space state quickly and accurately, and to a certain extent reduces errors caused by illumination changes.

Description

Parking space state detection method based on gray level co-occurrence matrix
[ technical field ]
The invention belongs to the field of digital image processing, and particularly relates to a parking space state detection method based on a gray level co-occurrence matrix.
[ background of the invention ]
With the improvement of living standards and growing automobile consumption, cars have entered more and more households. Accordingly, the demand for parking spaces keeps rising, and the contradiction between the supply of and demand for parking resources grows more acute. With the rapid development of automatic driving, parking space monitoring and parking guidance technology has gradually shown its necessity. How to monitor parking spaces in real time, accurately track their usage, and guide and allocate vehicles has become an important topic.
In recent years, the management of private, closed parking lots has relied on intelligent barrier gates. Because such lots are enclosed and have few entrances and exits, the spaces inside can be counted and managed to a certain extent with these gates. However, most parking lots only track the number of remaining spaces; few can locate idle spaces and guide entering vehicles to them. For open parking areas the situation is worse: drivers often cannot find a space, and random parking is frequent.
Building a real-time parking space monitoring network that connects the monitoring devices, servers and terminals through the Internet of Things, so that people can query vacant spaces nearby in real time and the system can allocate spaces and guide and navigate parking vehicles, is therefore of great practical significance.
Counting traffic flow at the entrance of a parking lot only yields the number of remaining spaces; it cannot report which spaces are empty in real time. Installing sensors in each parking space places relatively high demands on the surrounding environment and is very susceptible to interference. It is therefore necessary to design a parking space state detection method that adapts to various environments and has strong anti-interference capability.
[ summary of the invention ]
To address the defects of the prior art, the invention provides a parking space state detection method based on a gray level co-occurrence matrix, which monitors the real-time state of the parking spaces in a parking lot by erecting monitoring cameras at suitable positions in the lot and analyzing the parking spaces in the images with image processing techniques.
In order to achieve the purpose, the invention adopts the following technical scheme:
a parking space state detection method based on a gray level co-occurrence matrix comprises the following steps:
SO1: acquiring a video stream of the parking space through a monitoring camera arranged over the parking space, selecting several frames as sample images and using the remaining frames as images to be detected, and obtaining the effective area of the parking space in the field of view of each sample frame, wherein the effective area is the largest inscribed rectangle of the boundary quadrangle of the parking space in the field of view; the effective area has a "vehicle present" state and a "no vehicle" state, the "vehicle present" frames forming sample A and the "no vehicle" frames forming sample B;
SO2: converting the sample images into single-channel grayscale images by the averaging method, normalizing the gray levels to compress them into 16 levels, extracting the gray matrix of the effective area from the normalized image, computing the gray level co-occurrence matrix from the gray matrix of the effective area, and computing first feature quantities F from the co-occurrence matrix, the first feature quantities F comprising the angular second moment F_ASM, the entropy F_ENT, the inverse difference moment F_IDM and the contrast F_CON, thereby obtaining a "vehicle present" feature set M and a "no vehicle" feature set N for each first feature quantity F;
SO3: for each type of feature in feature set M and feature set N, respectively computing the sample centers O_full and O_empty of the first feature quantity F of the corresponding type, and computing the effective ranges R_full and R_empty of each first feature quantity;
SO4: for the O_full and O_empty of each first feature quantity F, respectively computing the demarcation point flag between the "vehicle present" state and the "no vehicle" state according to the proportional relation between the effective range and the sample center;
SO5: computing second feature quantities F' of the image to be detected according to step SO2, comprising the angular second moment F'_ASM, the entropy F'_ENT, the inverse difference moment F'_IDM and the contrast F'_CON; for each type of second feature quantity F', computing the O_full, O_empty and demarcation point flag of the corresponding type according to steps SO3 and SO4, and evaluating the probability of the "vehicle present" state, yielding the angular second moment probability P_ASM, the entropy probability P_ENT, the inverse difference moment probability P_IDM and the contrast probability P_CON of the image to be detected, each of the four probabilities representing the probability, evaluated from the corresponding second feature quantity F', that the parking space is occupied;
SO6: assigning weights to the "vehicle present" probabilities evaluated from the second feature quantities and computing the weighted sum of all the probabilities as the total probability P that the effective area of the frame is in the "vehicle present" state, wherein the parking space is in the "vehicle present" state when P exceeds a critical value, the critical value being no less than 0.8; if the effective areas in the images to be detected remain in the "vehicle present" state for more than three consecutive seconds, the parking space is considered occupied.
Further, the step SO2 includes the following sub-steps:
Firstly, converting the three RGB channels of the sample image into a single-channel grayscale image with the averaging method, storing the converted image separately in a two-dimensional array whose size matches the resolution of the converted image, and then normalizing each element of the two-dimensional array as follows:
B(i,j)=int((double)(A(i,j)–minSubLevel)/(double)(maxSubLevel–minSubLevel)*16);
where A(i,j) is an element of the two-dimensional array, B(i,j) is the element to be computed, minSubLevel is the minimum element value and maxSubLevel is the maximum element value;
finding out the pixel coordinate starting point of the effective area in the converted single-channel gray image and the width and height of the effective area, and storing elements in the corresponding range in the two-dimensional array in the step I into a new array, wherein the new array is a gray matrix of the effective area;
finding the maximum value maxGraylevel and the minimum value minGraylevel of the gray level from the gray level matrix of the effective area;
fourthly, traversing the gray matrix of the effective area, carrying out normalization processing on each gray value src, and storing the normalized gray value src into the gray matrix of the effective area again, wherein the normalization mode is as follows:
B’(i,j)=int((double)(A’(i,j)–minGrayLevel)/(double)(maxGrayLevel–minGrayLevel)*16),
wherein, A '(i, j) is the gray value in the gray matrix of the effective area, and B' (i, j) is the gray value to be solved;
Fifthly, creating a new 16 × 16 matrix C with the initial value of each element being 0, traversing the gray matrix of the effective area and selecting horizontal pixel pairs B(i,j) and B(i,j+1); where the value of pixel B(i,j) is m and the value of pixel B(i,j+1) is n, incrementing C(m,n) by 1; after the traversal of the gray matrix of the effective area is finished, the resulting matrix C is the gray level co-occurrence matrix;
Sixthly, respectively computing the angular second moment F_ASM, the entropy F_ENT, the inverse difference moment F_IDM and the contrast F_CON:
F_ASM = sum(C(i,j)^2); F_ENT = sum(C(i,j) * (-log C(i,j)));
F_IDM = sum(C(i,j) / (1 + (i-j)^2)); F_CON = sum((i-j)^2 * C(i,j)).
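The region normalization, co-occurrence counting and four feature formulas above can be sketched in Python as follows. The function name, the horizontal-neighbor offset and the use of NumPy are illustrative assumptions; note that the patent's ×16 scaling can produce index 16, so this sketch maps to levels 0–15 to keep all indices inside the 16 × 16 matrix.

```python
import numpy as np

def glcm_features(gray_region, levels=16):
    """Quantize a grayscale region to `levels` gray levels, build a
    horizontal-neighbor gray level co-occurrence matrix, and return the
    four texture features of step SO2 (ASM, entropy, IDM, contrast).
    Illustrative sketch; names are ours, not the patent's."""
    a = gray_region.astype(np.float64)
    lo, hi = a.min(), a.max()
    # Normalize gray values into [0, levels-1], as in B'(i,j) above
    # (levels-1 instead of the patent's 16, to stay inside the matrix).
    q = ((a - lo) / max(hi - lo, 1e-12) * (levels - 1)).astype(int)
    # Count horizontal pixel pairs B(i,j), B(i,j+1) into C(m,n).
    C = np.zeros((levels, levels), dtype=np.float64)
    for m, n in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        C[m, n] += 1
    C /= C.sum()  # normalize counts to joint probabilities
    i, j = np.indices(C.shape)
    asm = np.sum(C ** 2)                          # angular second moment
    ent = -np.sum(C[C > 0] * np.log(C[C > 0]))    # entropy (skip zero cells)
    idm = np.sum(C / (1 + (i - j) ** 2))          # inverse difference moment
    con = np.sum((i - j) ** 2 * C)                # contrast
    return asm, ent, idm, con
```

A uniform region yields high ASM and IDM with near-zero contrast, while a textured region (e.g. a car body with edges and shadows) shifts all four features, which is what the classifier in steps SO3 to SO6 exploits.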
Further, the step SO3 includes the following sub-steps:
Firstly, obtaining the feature quantity F_n of the effective area of each frame of sample A from feature set M, where n = 1, 2, 3 … N and N is the total number of frames in sample A;
Secondly, removing one maximum value and one minimum value from sample A, and computing the average of the feature quantities of the effective area in the new sample A as the sample center O_full:
O_full = (1/N′) ∑ F_n (n = 1, …, N′),
where N′ is the number of samples remaining after the maximum and the minimum are removed;
Thirdly, computing the deviation of each feature quantity in sample A from O_full and taking the maximum deviation as the effective range R_full of that feature quantity in the "vehicle present" state;
Fourthly, computing the sample center O_empty and the effective range R_empty of sample B according to the first three sub-steps.
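A minimal sketch of the sample center and effective range computation above (the function name is our own; as in the patent, one maximum and one minimum are dropped before averaging):

```python
def center_and_range(samples):
    """Step SO3 sketch: drop one max and one min, average the rest to get
    the sample center O, and take the largest deviation |F_n - O| among
    the remaining values as the effective range R."""
    if len(samples) < 3:
        raise ValueError("need at least 3 samples to trim max and min")
    s = sorted(samples)[1:-1]        # remove one maximum and one minimum
    O = sum(s) / len(s)              # sample center (mean over N' values)
    R = max(abs(f - O) for f in s)   # effective range
    return O, R
```

Running it once on the "vehicle present" feature set and once on the "no vehicle" set gives (O_full, R_full) and (O_empty, R_empty) for each of the four features.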
Further, in step SO4, the demarcation point flag is calculated by the following formula:
[equation image not extracted in the source: the flag is determined from O_full, O_empty, R_full and R_empty through the proportional relation between the effective ranges and the sample centers]
further, the "vehicle present" probability of each second feature quantity F′ is calculated as:
[four equation images not extracted in the source: P_ASM, P_ENT, P_IDM and P_CON each map the corresponding second feature quantity F′, relative to O_empty and the demarcation point flag, to a "vehicle present" probability]
where O_empty is the sample center of the "no vehicle" state and flag is the state demarcation point.
Further, the step SO6 includes the following sub-steps:
Firstly, computing the reliability evaluation quantity Q of each feature quantity, wherein Q measures how well a feature quantity distinguishes the "vehicle present" state from the "no vehicle" state of the effective area; the better the two states are separated, the larger the value of Q. Q is calculated by the following formula:
[equation image not extracted in the source: Q is computed from O_full, O_empty, R_full and R_empty]
Substituting the four quantities O_full, O_empty, R_full and R_empty of each feature quantity into the above formula yields the reliability evaluation quantities Q_ASM, Q_ENT, Q_IDM and Q_CON;
Secondly, calculating the probability weight W of each characteristic quantity according to the reliability evaluation quantity Q:
[equation image not extracted in the source; a natural choice consistent with the weighted sum below is W_X = Q_X / (Q_ASM + Q_ENT + Q_IDM + Q_CON) for X in {ASM, ENT, IDM, CON}, so that the weights sum to 1]
and thirdly, calculating the weighted sum P of all probabilities:
P = P_ASM * W_ASM + P_ENT * W_ENT + P_IDM * W_IDM + P_CON * W_CON
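The weighting and fusion above can be sketched as follows; the normalized-weight form W_X = Q_X / ΣQ is an assumption on our part, since the patent's W formula survives only as an image.

```python
def fuse(probs, qs):
    """Step SO6 sketch: weights proportional to the reliability
    quantities Q (normalized to sum to 1, an assumed form), then the
    total probability P is the weighted sum of the four per-feature
    'vehicle present' probabilities."""
    total_q = sum(qs)
    weights = [q / total_q for q in qs]          # W_X = Q_X / sum(Q)
    return sum(p * w for p, w in zip(probs, weights))
```

With equal reliabilities the fusion reduces to a plain average; a feature with larger Q pulls P toward its own estimate.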
after adopting the above technical scheme, the invention has the following advantages:
The equipment only relies on cameras, and one camera can monitor several parking spaces; depending on site conditions, a single camera can cover 4 to 8 spaces.
Users can erect the image-collecting cameras according to their own needs and the layout of each site. The cameras do not occupy the parking space area and are unlikely to be damaged, so compared with traditional sensor-based monitoring the cost can be effectively reduced.
The cameras erected for image collection can also carry other functions, such as security surveillance or license plate recognition, serving multiple purposes.
These features and advantages of the present invention will be disclosed in more detail in the following detailed description and the accompanying drawings. The best mode or means of the present invention will be described in detail with reference to the accompanying drawings, but the present invention is not limited thereto. In addition, the features, elements and components appearing in each of the following and in the drawings are plural and different symbols or numerals are labeled for convenience of representation, but all represent components of the same or similar construction or function.
[ description of the drawings ]
The accompanying drawings, which are incorporated in and constitute a part of this application, illustrate embodiments of the application and, together with the description, serve to explain the application and not to limit the application.
FIG. 1 is a flow chart of a parking space state detection method based on a gray level co-occurrence matrix;
FIG. 2 is an overall view of a parking space in the field of view of a camera;
FIG. 3 is a detail view of a single slot.
[ detailed description ]
The technical solutions of the embodiments of the present invention are explained and illustrated below with reference to the drawings, but the following embodiments are only preferred embodiments of the present invention, not all of them. Other embodiments obtained by those skilled in the art without creative effort, based on the embodiments of the present invention, fall within the protection scope of the present invention.
In addition, it is also to be understood that, unless otherwise indicated, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. Also, unless the context clearly dictates otherwise, the singular form is also intended to include the plural form when the terms "comprises" and/or "comprising" are used in this specification to specify the presence of features, steps, operations, devices, components and/or combinations thereof.
The first embodiment is as follows:
as shown in fig. 1 and fig. 2, the present embodiment provides a parking space state detection method based on a gray level co-occurrence matrix, including the following steps:
SO1: acquiring a video stream of the parking space through a camera erected near the parking space for collecting parking space video material, selecting several frames as sample images and using the remaining frames as images to be detected, and obtaining the effective area of the parking space in the field of view of each sample frame, wherein the effective area is the largest inscribed rectangle of the boundary quadrangle of the parking space in the field of view, as shown in FIG. 3; the effective area has a "vehicle present" state and a "no vehicle" state, the "vehicle present" frames forming sample A and the "no vehicle" frames forming sample B;
SO2: converting the sample images into single-channel grayscale images by the averaging method, normalizing the gray levels to compress them into 16 levels, extracting the gray matrix of the effective area from the normalized image, computing the gray level co-occurrence matrix from the gray matrix of the effective area, and computing first feature quantities F from the co-occurrence matrix, the first feature quantities F comprising the angular second moment F_ASM, the entropy F_ENT, the inverse difference moment F_IDM and the contrast F_CON, thereby obtaining a "vehicle present" feature set M and a "no vehicle" feature set N for each first feature quantity F.
The method comprises the following substeps:
Firstly, converting the three RGB channels of the sample image into a single-channel grayscale image with the averaging method, storing the converted image separately in a two-dimensional array whose size matches the resolution of the converted image, and then normalizing each element of the two-dimensional array as follows:
B(i,j)=int((double)(A(i,j)–minSubLevel)/(double)(maxSubLevel–minSubLevel)*16);
where A(i,j) is an element of the two-dimensional array, B(i,j) is the element to be computed, minSubLevel is the minimum element value and maxSubLevel is the maximum element value;
finding out the pixel coordinate starting point of the effective area in the converted single-channel gray image and the width and height of the effective area, and storing elements in the corresponding range in the two-dimensional array in the step I into a new array, wherein the new array is a gray matrix of the effective area;
finding the maximum value maxGraylevel and the minimum value minGraylevel of the gray level from the gray level matrix of the effective area;
fourthly, traversing the gray matrix of the effective area, carrying out normalization processing on each gray value src, and storing the normalized gray value src into the gray matrix of the effective area again, wherein the normalization mode is as follows:
B’(i,j)=int((double)(A’(i,j)–minGrayLevel)/(double)(maxGrayLevel–minGrayLevel)*16),
wherein, A '(i, j) is the gray value in the gray matrix of the effective area, and B' (i, j) is the gray value to be solved;
Fifthly, creating a new 16 × 16 matrix C with the initial value of each element being 0, traversing the gray matrix of the effective area and selecting horizontal pixel pairs B(i,j) and B(i,j+1); where the value of pixel B(i,j) is m and the value of pixel B(i,j+1) is n, incrementing C(m,n) by 1; after the traversal of the gray matrix of the effective area is finished, the resulting matrix C is the gray level co-occurrence matrix;
Sixthly, respectively computing the angular second moment F_ASM, the entropy F_ENT, the inverse difference moment F_IDM and the contrast F_CON:
F_ASM = sum(C(i,j)^2); F_ENT = sum(C(i,j) * (-log C(i,j)));
F_IDM = sum(C(i,j) / (1 + (i-j)^2)); F_CON = sum((i-j)^2 * C(i,j));
SO3: for each type of feature in feature set M and feature set N, respectively computing the sample centers O_full and O_empty of the first feature quantity F of the corresponding type, and computing the effective ranges R_full and R_empty of each first feature quantity;
Step SO3 includes the following substeps:
firstly, obtaining the feature quantity F_n of the effective area of each frame of sample A from feature set M, where n = 1, 2, 3 … N and N is the total number of frames in sample A;
Secondly, removing one maximum value and one minimum value from sample A, and computing the average of the feature quantities of the effective area in the new sample A as the sample center O_full:
O_full = (1/N′) ∑ F_n (n = 1, …, N′),
where N′ is the number of samples remaining after the maximum and the minimum are removed;
Thirdly, computing the deviation of each feature quantity in sample A from O_full and taking the maximum deviation as the effective range R_full of that feature quantity in the "vehicle present" state;
Fourthly, computing the sample center O_empty and the effective range R_empty of sample B according to the first three sub-steps.
SO4: for each O_full and O_empty, respectively determining the demarcation point flag between the "vehicle present" state and the "no vehicle" state according to the proportional relation between the effective range and the sample center, the flag being calculated by the following formula:
[equation image not extracted in the source: the flag is determined from O_full, O_empty, R_full and R_empty through the proportional relation between the effective ranges and the sample centers]
SO5: computing second feature quantities F′ of the image to be detected according to step SO2, comprising the angular second moment F′_ASM, the entropy F′_ENT, the inverse difference moment F′_IDM and the contrast F′_CON; for each type of second feature quantity F′, computing the O_full, O_empty and demarcation point flag of the corresponding type according to steps SO3 and SO4, and evaluating the probability of the "vehicle present" state, yielding the angular second moment probability P_ASM, the entropy probability P_ENT, the inverse difference moment probability P_IDM and the contrast probability P_CON of the image to be detected, each of the four probabilities representing the probability, evaluated from the corresponding F′, that the parking space is occupied:
[four equation images not extracted in the source: P_ASM, P_ENT, P_IDM and P_CON each map the corresponding second feature quantity F′, relative to O_empty and the demarcation point flag, to a "vehicle present" probability]
where O_empty is the sample center of the "no vehicle" state and flag is the state demarcation point.
SO6: assigning weights to the "vehicle present" probabilities evaluated from the second feature quantities and computing the weighted sum of all the probabilities as the total probability P that the effective area of the frame is in the "vehicle present" state, wherein the parking space is in the "vehicle present" state when P exceeds a critical value, the critical value being no less than 0.8; if the effective areas in the images to be detected remain in the "vehicle present" state for more than three consecutive seconds, the parking space is considered occupied;
step SO6 includes the following substeps:
Firstly, computing the reliability evaluation quantity Q of each feature quantity, wherein Q measures how well a feature quantity distinguishes the "vehicle present" state from the "no vehicle" state of the effective area; the better the two states are separated, the larger the value of Q. Q is calculated by the following formula:
[equation image not extracted in the source: Q is computed from O_full, O_empty, R_full and R_empty]
Substituting the four quantities O_full, O_empty, R_full and R_empty of each feature quantity into the above formula yields the reliability evaluation quantities Q_ASM, Q_ENT, Q_IDM and Q_CON;
Secondly, calculating the probability weight W of each characteristic quantity according to the reliability evaluation quantity Q:
[equation image not extracted in the source; a natural choice consistent with the weighted sum below is W_X = Q_X / (Q_ASM + Q_ENT + Q_IDM + Q_CON) for X in {ASM, ENT, IDM, CON}, so that the weights sum to 1]
and thirdly, calculating the weighted sum P of all probabilities:
P = P_ASM * W_ASM + P_ENT * W_ENT + P_IDM * W_IDM + P_CON * W_CON
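The decision rule of step SO6 (total probability above a critical value of at least 0.8, held for more than three consecutive seconds) can be sketched as follows; the frame rate, function name and parameters are illustrative assumptions.

```python
def slot_occupied(frame_probs, fps, threshold=0.8, hold_seconds=3):
    """Step SO6 decision sketch: report a parking space as occupied only
    when the per-frame 'vehicle present' probability P stays above the
    threshold for more than `hold_seconds` of consecutive frames."""
    need = int(fps * hold_seconds)  # frames required for the persistence check
    run = 0
    for p in frame_probs:
        run = run + 1 if p > threshold else 0  # reset on any non-occupied frame
        if run > need:
            return True
    return False
```

The persistence check suppresses transient false positives such as a pedestrian or a passing vehicle briefly crossing the effective area.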
while the present invention has been described with reference to the preferred embodiments, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims. Any modification which does not depart from the functional and structural principles of the present invention is intended to be included within the scope of the claims.

Claims (6)

1. A parking space state detection method based on a gray level co-occurrence matrix is characterized by comprising the following steps:
SO1: acquiring a video stream of the parking space through a monitoring camera arranged over the parking space, selecting several frames as sample images and using the remaining frames as images to be detected, and obtaining the effective area of the parking space in the field of view of each sample frame, wherein the effective area is the largest inscribed rectangle of the boundary quadrangle of the parking space in the field of view; the effective area has a "vehicle present" state and a "no vehicle" state, the "vehicle present" frames forming sample A and the "no vehicle" frames forming sample B;
SO2: converting the sample images into single-channel grayscale images by the averaging method, normalizing the gray levels to compress them into 16 levels, extracting the gray matrix of the effective area from the normalized image, computing the gray level co-occurrence matrix from the gray matrix of the effective area, and computing first feature quantities F from the co-occurrence matrix, the first feature quantities F comprising the angular second moment F_ASM, the entropy F_ENT, the inverse difference moment F_IDM and the contrast F_CON, thereby obtaining a "vehicle present" feature set M and a "no vehicle" feature set N for each first feature quantity F;
SO3: for each type of feature in feature set M and feature set N, respectively computing the sample centers O_full and O_empty of the first feature quantity F of the corresponding type, and computing the effective ranges R_full and R_empty of each first feature quantity;
SO4: for the O_full and O_empty of each first feature quantity F, respectively computing the demarcation point flag between the "vehicle present" state and the "no vehicle" state according to the proportional relation between the effective range and the sample center;
SO5: computing second feature quantities F' of the image to be detected according to step SO2, comprising the angular second moment F'_ASM, the entropy F'_ENT, the inverse difference moment F'_IDM and the contrast F'_CON; for each type of second feature quantity F', computing the O_full, O_empty and demarcation point flag of the corresponding type according to steps SO3 and SO4, and evaluating the probability of the "vehicle present" state, yielding the angular second moment probability P_ASM, the entropy probability P_ENT, the inverse difference moment probability P_IDM and the contrast probability P_CON of the image to be detected, each of the four probabilities representing the probability, evaluated from the corresponding second feature quantity F', that the parking space is occupied;
SO6: assigning weights to the "vehicle present" probabilities evaluated from the second feature quantities and computing the weighted sum of all the probabilities as the total probability P that the effective area of the frame is in the "vehicle present" state, wherein the parking space is in the "vehicle present" state when P exceeds a critical value, the critical value being no less than 0.8; if the effective areas in the images to be detected remain in the "vehicle present" state for more than three consecutive seconds, the parking space is considered occupied.
2. The method according to claim 1, wherein said step SO2 includes the following sub-steps:
Firstly, converting the three RGB channels of the sample image into a single-channel grayscale image with the averaging method, storing the converted image separately in a two-dimensional array whose size matches the resolution of the converted image, and then normalizing each element of the two-dimensional array as follows:
B(i,j)=int((double)(A(i,j)–minSubLevel)/(double)(maxSubLevel–minSubLevel)*16);
where A(i,j) is an element of the two-dimensional array, B(i,j) is the element to be computed, minSubLevel is the minimum element value and maxSubLevel is the maximum element value;
② finding the pixel-coordinate starting point of the effective area in the converted single-channel gray image together with the width and height of the effective area, and storing the elements in the corresponding range of the two-dimensional array of step ① into a new array; the new array is the gray matrix of the effective area;
③ finding the maximum gray value maxGrayLevel and the minimum gray value minGrayLevel in the gray matrix of the effective area;
④ traversing the gray matrix of the effective area, normalizing each gray value src, and storing the result back into the gray matrix of the effective area, the normalization being:
B'(i,j) = int((double)(A'(i,j) - minGrayLevel) / (double)(maxGrayLevel - minGrayLevel) * 16),
wherein A'(i, j) is a gray value in the gray matrix of the effective area and B'(i, j) is the gray value to be solved;
⑤ creating a new 16 × 16 matrix C in which the initial value of each element is 0; traversing the gray matrix of the effective area and selecting each pixel pair B(i, j) and B(i, j+1), wherein the value of pixel B(i, j) is m and the value of pixel B(i, j+1) is n, and letting C(m, n) = C(m, n) + 1; after the traversal of the gray matrix of the effective area is finished, the resulting new matrix is the gray level co-occurrence matrix;
⑥ respectively calculating the angular second moment F_ASM, the entropy F_ENT, the inverse differential moment F_IDM and the contrast F_CON:
F_ASM = sum(C(i,j)^2); F_ENT = sum(C(i,j) * (-log C(i,j)));
F_IDM = sum(C(i,j) / (1 + (i-j)^2)); F_CON = sum((i-j)^2 * C(i,j)).
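The sub-steps of claim 2 can be sketched in Python as follows. The use of NumPy is an assumption, the quantization multiplies by 15 rather than the claim's 16 so the top gray level stays a valid index into the 16 × 16 matrix, and the normalization of C into joint probabilities is also an assumption (the claim accumulates raw counts):

```python
import numpy as np

def glcm_features(gray_region):
    """Compute the four texture features of claim 2 for the effective area.

    gray_region: 2-D array of gray values (the gray matrix of step 2).
    Returns (F_ASM, F_ENT, F_IDM, F_CON).
    """
    a = np.asarray(gray_region, dtype=np.float64)
    # Step 4: quantize gray values into 16 levels; the claim multiplies
    # by 16, but 15 is used here so the top level stays a valid index
    lo, hi = a.min(), a.max()
    b = ((a - lo) / (hi - lo) * 15).astype(int) if hi > lo else np.zeros(a.shape, int)

    # Step 5: accumulate horizontal neighbor pairs B(i, j), B(i, j+1)
    # into a 16x16 co-occurrence matrix
    c = np.zeros((16, 16), dtype=np.float64)
    for row in b:
        for m, n in zip(row[:-1], row[1:]):
            c[m, n] += 1
    c /= c.sum()  # normalize to joint probabilities (assumed; claim keeps raw counts)

    # Step 6: the four features, term by term as in the claim
    i, j = np.indices(c.shape)
    log_c = np.log(c, where=c > 0, out=np.zeros_like(c))
    f_asm = np.sum(c ** 2)                  # angular second moment
    f_ent = np.sum(c * (-log_c))            # entropy
    f_idm = np.sum(c / (1 + (i - j) ** 2))  # inverse differential moment
    f_con = np.sum((i - j) ** 2 * c)        # contrast
    return f_asm, f_ent, f_idm, f_con
```

A uniform region yields the degenerate extremes (F_ASM = F_IDM = 1, F_ENT = F_CON = 0), which is why these features separate flat empty pavement from the varied texture of a parked vehicle.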
3. The method according to claim 1, wherein said step SO3 comprises the sub-steps of:
① obtaining the feature quantity F_n of the effective area of each frame image of sample A from the feature set M, wherein n = 1, 2, 3, …, N and N is the total number of frames in sample A;
② removing one maximum value and one minimum value from sample A, and calculating the average value of the feature quantities of the effective area over the remaining sample A as the sample center O_full:
O_full = (F_1 + F_2 + ... + F_N') / N'
wherein N' is the number of samples remaining after the maximum value and the minimum value are removed;
③ calculating the distance between each characteristic quantity in sample A and O_full, and finding the maximum distance, which is the effective range R_full of each characteristic quantity in the 'vehicle present' state;
④ calculating, according to steps ① to ③, the sample center O_empty and the effective range R_empty of sample B.
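The trimmed-mean center and effective range of claim 3 can be sketched as follows; applying step ③ to the trimmed sample rather than the full sample A is an assumption:

```python
def sample_center_and_range(features):
    """Claim-3 statistics for one feature quantity over a labelled sample
    (e.g. sample A, the frames with a vehicle present).

    features: feature values F_n, one per frame (N values in total).
    Returns (O, R): the sample center and the effective range.
    """
    # Step 2: drop one maximum and one minimum, then average the
    # remaining N' values to obtain the sample center O
    trimmed = sorted(features)[1:-1]
    center = sum(trimmed) / len(trimmed)
    # Step 3: the effective range R is the largest distance from the
    # center (computed here over the trimmed sample, an assumption)
    radius = max(abs(f - center) for f in trimmed)
    return center, radius
```

Running the same routine over sample B yields O_empty and R_empty, per step ④; trimming one extreme at each end keeps a single outlier frame from inflating the effective range.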
4. The method according to claim 1, wherein in step SO4 the demarcation point flag is calculated by the following formula:
(formula published only as an image in the source)
5. The method according to claim 1, wherein the 'vehicle present' probability of each second feature quantity F' is calculated as:
P_ASM, P_ENT, P_IDM, P_CON: (the four probability formulas are published only as images in the source)
wherein O_empty is the data center of the 'no vehicle' state and flag is the state demarcation point.
6. The method according to claim 1, wherein said step SO6 comprises the following sub-steps:
① calculating a reliability evaluation quantity Q of each characteristic quantity, wherein Q is the degree to which that characteristic quantity distinguishes the 'vehicle present' state from the 'no vehicle' state of the effective area; the higher the degree of distinction, the larger the value of Q; the calculation formula of Q is:
(formula published only as an image in the source)
substituting the four quantities O_full, O_empty, R_full and R_empty of each characteristic quantity into the above formula to obtain its reliability evaluation quantity, giving Q_ASM, Q_ENT, Q_IDM and Q_CON;
② calculating the probability weight W of each characteristic quantity according to the reliability evaluation quantity Q:
(weight formula published only as an image in the source)
③ calculating the weighted sum P of all probabilities:
P = P_ASM * W_ASM + P_ENT * W_ENT + P_IDM * W_IDM + P_CON * W_CON.
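Steps ② and ③ of claim 6 can be sketched as follows; the weight formula appears only as an image in the source, so the normalization W = Q / sum(Q), which makes the four weights sum to 1, is an assumption:

```python
def fuse_probabilities(probs, q):
    """Weighted fusion of the four per-feature 'vehicle present'
    probabilities (claim 6).

    probs: dict of probabilities keyed by 'ASM', 'ENT', 'IDM', 'CON'.
    q:     dict of reliability evaluation quantities, same keys.
    The weight of each feature is its reliability divided by the sum of
    all four reliabilities (assumed; the source shows the formula only
    as an image), so more discriminative features dominate the sum.
    """
    total_q = sum(q.values())
    weights = {k: v / total_q for k, v in q.items()}
    return sum(probs[k] * weights[k] for k in probs)
```

With equal reliabilities the fusion reduces to a plain average; per claim 1, the space is reported as 'vehicle present' when the result exceeds the critical value of at least 0.8.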
CN202011617045.5A 2020-12-31 2020-12-31 Parking space state detection method based on gray level co-occurrence matrix Active CN113158728B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011617045.5A CN113158728B (en) 2020-12-31 2020-12-31 Parking space state detection method based on gray level co-occurrence matrix

Publications (2)

Publication Number Publication Date
CN113158728A true CN113158728A (en) 2021-07-23
CN113158728B CN113158728B (en) 2023-06-09

Family

ID=76878161

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011617045.5A Active CN113158728B (en) 2020-12-31 2020-12-31 Parking space state detection method based on gray level co-occurrence matrix

Country Status (1)

Country Link
CN (1) CN113158728B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106600585A (en) * 2016-12-08 2017-04-26 北京工商大学 Skin condition quantitative evaluation method based on gray level co-occurrence matrix
CN106600612A (en) * 2016-12-27 2017-04-26 重庆大学 Damage identification and detection method for electric automobile before and after renting
CN108563994A (en) * 2018-03-14 2018-09-21 吉林大学 A kind of parking position recognition methods based on image similarity degree
US20190179027A1 (en) * 2017-12-13 2019-06-13 Luminar Technologies, Inc. Processing point clouds of vehicle sensors having variable scan line distributions using two-dimensional interpolation and distance thresholding
CN109993991A (en) * 2018-11-30 2019-07-09 浙江工商大学 Parking stall condition detection method and system
CN110689761A (en) * 2019-12-11 2020-01-14 上海赫千电子科技有限公司 Automatic parking method
CN111081064A (en) * 2019-12-11 2020-04-28 上海赫千电子科技有限公司 Automatic parking system and automatic passenger-replacing parking method of vehicle-mounted Ethernet
CN111402215A (en) * 2020-03-07 2020-07-10 西南交通大学 Contact net insulator state detection method based on robust principal component analysis method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
邹超 (Zou Chao) et al.: "Texture defect detection method based on the fuzzy category co-occurrence matrix" *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115082443A (en) * 2022-07-25 2022-09-20 山东天意机械股份有限公司 Concrete product quality detection method based on intelligent monitoring platform
CN115082443B (en) * 2022-07-25 2022-11-08 山东天意机械股份有限公司 Concrete product quality detection method based on intelligent monitoring platform

Also Published As

Publication number Publication date
CN113158728B (en) 2023-06-09

Similar Documents

Publication Publication Date Title
CN106373426B (en) Parking stall based on computer vision and violation road occupation for parking monitoring method
CN109033950B (en) Vehicle illegal parking detection method based on multi-feature fusion cascade depth model
CN107665603B (en) Real-time detection method for judging parking space occupation
WO2019223586A1 (en) Method and apparatus for detecting parking space usage condition, electronic device, and storage medium
CN103824452B (en) A kind of peccancy parking detector based on panoramic vision of lightweight
CN208521463U (en) Managing system of car parking and its parking space management system, Car license recognition charging system
CN108765975B (en) Roadside vertical parking lot management system and method
CN113205107A (en) Vehicle type recognition method based on improved high-efficiency network
CN113158728B (en) Parking space state detection method based on gray level co-occurrence matrix
CN115063746A (en) Vehicle warehousing management method and device, computer equipment and storage medium
WO2023241595A1 (en) Parking space range processing method and computing device
CN112560814A (en) Method for identifying vehicles entering and exiting parking spaces
CN111785068A (en) Garage management system
CN114627420B (en) Urban management violation event information acquisition method and system
CN110765900A (en) DSSD-based automatic illegal building detection method and system
DE202022102745U1 (en) An IoT-based smart indoor parking system
CN115588047A (en) Three-dimensional target detection method based on scene coding
CN115131986A (en) Intelligent management method and system for closed parking lot
KR20200025384A (en) Parking Information Providing Device Using Real-Time Image Processing and System thereof
CN212624419U (en) Garage management system
Bachtiar et al. Parking management by means of computer vision
CN114627653B (en) 5G intelligent barrier gate management system based on binocular recognition
CN112419780A (en) Outdoor parking management method and system based on video identification technology
KR101611258B1 (en) Remote real-time parking information system using image matching technique
CN113591814B (en) Road cleanliness detection method and detection system based on dynamic vision

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant