WO2018076281A1 - Parking space state detection method, detection device, and electronic device - Google Patents

Parking space state detection method, detection device, and electronic device

Info

Publication number
WO2018076281A1
Authority
WO
WIPO (PCT)
Prior art keywords
parking space
occlusion
detecting
image
area
Prior art date
Application number
PCT/CN2016/103778
Other languages
English (en)
French (fr)
Inventor
Zhang Guocheng
Wang Qi
Original Assignee
FUJITSU LIMITED
Zhang Guocheng
Wang Qi
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by FUJITSU LIMITED, Zhang Guocheng, and Wang Qi
Priority to PCT/CN2016/103778 priority Critical patent/WO2018076281A1/zh
Publication of WO2018076281A1 publication Critical patent/WO2018076281A1/zh

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/20: Image preprocessing
    • G06V10/24: Aligning, centring, orientation detection or correction of the image

Definitions

  • the present application relates to the field of information technology, and in particular, to a method for detecting a parking space state, a detecting device, and an electronic device.
  • image processing technology is being used more and more widely in various fields, including the field of parking space status detection.
  • in prior application CN201510705589.X, a method for quickly and accurately detecting the state of a parking space is described.
  • the method detects the motion state of objects in the parking space based on the monitoring image of the parking space, generates a steady-state parking space image for a parking space that contains no object motion, sharpens the image by certain post-processing techniques, and then determines the state of the parking space based on a contour method and classifier detection.
  • the inventors of the present application have found that in many parking lots in busy areas, vehicles frequently pass along the lane around the parking spaces. Vehicles moving on the lane can be recognized by the method of the above-mentioned prior application document 1, and therefore do not affect the detection result of the parking space state.
  • however, some vehicles stop on the lane for several minutes to tens of minutes to pick up or drop off passengers, to load or unload goods, or to wait for another vehicle to leave a parking space. In the monitoring image, such vehicles stopped on the lane occlude the images of the vehicles in several parking spaces, and these static vehicles cause a significant change in the steady-state parking space image, resulting in errors in the monitoring of the parking state.
  • if a parking space that actually contains a vehicle is occluded by such a statically stopped vehicle, no vehicle information is detected in that parking space, and it is erroneously reported as empty until the occluding vehicle leaves and the parking space returns to the correct occupied state.
  • water in the detection area acts like a mirror that reflects the surrounding scene. If there is a car in the parking space, the entire front face of the car is reflected in the water on the road; after the car leaves the parking space, the reflected front face disappears. In this case, the change of the reflection produces a strong foreground in the occlusion detection area, and because of its rich detail this foreground cannot be filtered out by the flatness detection method, yielding the erroneous detection result that the parking space is occluded. Thus, although the state of the actual parking space has changed, the state cannot be updated due to the erroneous detection result, which seriously affects the usability of the parking space.
  • the causes of false foregrounds may also include rapid shadow changes caused by sunlight; the present application takes water on the road area merely as an example.
  • An embodiment of the present application provides a method for detecting a parking space state, a detecting device, and an electronic device, which determine whether a parking space is occluded by determining whether an image of a predetermined area around the parking space matches a reference image, and thereby determine the parking space state. In this way, the detection accuracy of the parking space state can be improved.
  • a parking space state detecting device which detects a state of a parking space based on a monitoring image of a parking space, the detecting device comprising:
  • a parking space motion detecting unit configured to detect whether there is a moving object in the parking space according to a monitoring image of the parking space
  • An occlusion motion detecting unit that detects whether there is a moving object in the occlusion detecting area in a case where there is no moving object in the parking space, wherein the occlusion detecting area is adjacent to the parking space;
  • a first occlusion detecting unit that, in a case where there is no moving object in the occlusion detection area, detects whether there is an occluding object in the occlusion detection area that blocks the parking space, according to whether an image of a predetermined area in the monitoring image matches a reference image; and
  • a parking space state determining unit that determines a state of the parking space based on a detection result of the first occlusion detecting unit.
  • a method for detecting a parking space state is provided, which detects the state of the parking space based on a monitoring image of the parking space, and the detection method includes:
  • an electronic device including the parking space state detecting device according to the first aspect of the embodiment of the present application.
  • the beneficial effects of the present application are: improving the detection accuracy of the parking space state.
  • FIG. 1 is a schematic diagram of a detecting device of Embodiment 1 of the present application.
  • FIG. 2 is a schematic diagram of a first occlusion detecting unit according to Embodiment 1 of the present application;
  • FIG. 3 is a schematic diagram of a reference image generating unit according to Embodiment 1 of the present application.
  • FIG. 4 is a schematic diagram of an image of a predetermined area in the case where there is no occlusion in the daytime according to Embodiment 1 of the present application;
  • Figure 5 is a schematic diagram of the binary image corresponding to Figure 4.
  • FIG. 6 is a schematic diagram of a partial image of a reference image in a daytime case according to Embodiment 1 of the present application;
  • FIG. 7 is a schematic diagram of an image of a predetermined area in the case where there is occlusion in the daytime according to Embodiment 1 of the present application;
  • Figure 8 is a schematic diagram of a binary image corresponding to Figure 7;
  • FIG. 9 is a schematic diagram of an image of a predetermined area in the case where there is no occlusion at night according to Embodiment 1 of the present application;
  • Figure 10 is a schematic diagram of the binary image corresponding to Figure 9;
  • FIG. 11 is a schematic diagram of a partial image of a reference image in the nighttime case according to Embodiment 1 of the present application;
  • FIG. 12 is a schematic diagram showing a configuration of an electronic device according to Embodiment 2 of the present application.
  • FIG. 13 is a schematic diagram of a detection method according to Embodiment 3 of the present application.
  • FIG. 14 is a schematic diagram of step 1303 of Embodiment 3 of the present application.
  • FIG. 15 is a schematic diagram of step 1401 of Embodiment 3 of the present application; and
  • Fig. 16 is a flow chart showing a method of detecting a parking space state in the third embodiment of the present application.
  • Embodiment 1 of the present application provides a parking space state detecting device that detects a state of a parking space based on a monitoring image of a parking space.
  • the detecting device 100 may include: a parking space motion detecting unit 101, an occlusion motion detecting unit 102, a first occlusion detecting unit 103, and a parking space state determining unit 104.
  • the parking space motion detecting unit 101 detects whether there is a moving object in the parking space according to the monitoring image of the parking space; the occlusion motion detecting unit 102 detects, in a case where there is no moving object in the parking space, whether there is a moving object in the occlusion detection area, wherein the occlusion detection area is adjacent to the parking space; the first occlusion detecting unit 103 detects, in a case where there is no moving object in the occlusion detection area, whether there is an occluding object in the occlusion detection area that blocks the parking space, according to whether the image of the predetermined area in the monitoring image matches the reference image; and the parking space state determining unit 104 determines the state of the parking space according to the detection result of the first occlusion detecting unit 103.
  • the monitoring image of the parking space can be obtained using the prior art, for example, by setting a camera in the parking lot to capture the parking space.
  • the parking space motion detecting unit 101 can detect whether there is a moving object in the parking space according to the prior art.
  • the parking space motion detecting unit 101 can process the monitoring image by using a foreground detection method, thereby detecting whether there is a moving object in the parking space.
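As a minimal illustrative sketch (not the patent's implementation), foreground detection over a parking-space region can be approximated by differencing two consecutive grayscale frames; the function name, the ROI format, and both threshold values are assumptions for illustration only:

```python
import numpy as np

def has_moving_object(prev_gray, curr_gray, roi, diff_thresh=25, motion_ratio=0.02):
    """Flag motion inside a parking-space ROI by differencing consecutive frames.

    prev_gray / curr_gray: 2-D uint8 grayscale frames.
    roi: (y0, y1, x0, x1) bounds of the parking space in the image.
    """
    y0, y1, x0, x1 = roi
    prev_patch = prev_gray[y0:y1, x0:x1].astype(np.int16)
    curr_patch = curr_gray[y0:y1, x0:x1].astype(np.int16)
    changed = np.abs(curr_patch - prev_patch) > diff_thresh  # per-pixel change mask
    # "Moving object present" when a large-enough fraction of the ROI changed.
    return bool(changed.mean() > motion_ratio)
```

A production system would more likely use a learned background model, but the decision shape is the same: a per-pixel change mask followed by an area criterion.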
  • the moving object of the embodiment may be a moving car or a moving person or the like.
  • when the parking space motion detecting unit 101 detects that there is a moving object in the parking space, this indicates that the parking space is in an unstable state, for example, a vehicle is entering or leaving the parking space; when the parking space motion detecting unit 101 detects that there is no moving object in the parking space, this indicates that the parking space is in a stable state. Since the stable state may be either the state in which the parking space is occupied or the state in which it is empty, the parking space state determining unit 104 needs to make this determination.
  • in a case where there is no moving object in the parking space, the occlusion motion detecting unit 102 may further detect whether there is a moving object in the occlusion detection area. The occlusion detection area is an area adjacent to the parking space, and its shape and size may be set as needed; for example, the occlusion detection area may be located at the entrance and/or exit of the parking space, on the outside of the parking space, and may be a rectangle whose side length is substantially the same as the width of the parking space.
  • the method by which the occlusion motion detecting unit 102 detects the presence or absence of a moving object in the occlusion detection area may refer to the prior art, and is not limited in this embodiment.
  • when the occlusion motion detecting unit 102 detects that there is a moving object in the occlusion detection area, this indicates that the occlusion detection area is in an unstable state, for example, a vehicle is traveling through the occlusion detection area; when the occlusion motion detecting unit 102 detects that there is no moving object in the occlusion detection area, this indicates that the occlusion detection area is in a stable state. Since the stable state of the occlusion detection area may be either a state in which an occluding object is present or a state in which no object is present, detection is performed by the first occlusion detecting unit 103.
  • the first occlusion detecting unit 103 can detect whether there is an occlusion object in the occlusion detecting region that blocks the parking space.
  • the first occlusion detecting unit 103 can detect whether or not there is an occlusion object in the occlusion detection area according to various methods, and the description of these detection methods will be described later.
  • the parking space state determining unit 104 can determine the state of the parking space according to the detection result of the first occlusion detecting unit 103. For example, when the first occlusion detecting unit 103 detects that there is no occluding object in the occlusion detection area, the parking space state determining unit 104 may generate a steady-state image of the parking space and determine the state of the parking space based on that steady-state image.
  • when the first occlusion detecting unit 103 detects that there is an occluding object in the occlusion detection area, the parking space state determining unit 104 may determine that the state of the parking space is unchanged; that is, if the parking space was previously determined to be occupied, it continues to be determined to be occupied, and if it was previously determined to be empty, it continues to be determined to be empty.
  • the method for determining the state of the parking space based on the steady state image of the parking space may refer to the above-mentioned prior application document 1, which is not described in this embodiment.
  • the detecting device 100 may further have a classifying unit 105, and the classifying unit 105 may detect whether or not a vehicle exists in the parking space based on the trained classifier.
  • regardless of whether the parking space is occluded, as long as a parking space image exists in the monitoring image, the classification unit 105 can run detection on the parking space image based on the classifier, and as long as a vehicle image exists in the parking space image, the classification unit 105 can correctly detect the vehicle image.
  • for the detection method of the classification unit 105 and the method of obtaining the trained classifier, reference may be made to the prior art, which is not described in this embodiment.
  • the detecting device 100 may further have a foreground detecting unit 106 and a flat detecting unit 107.
  • the foreground detecting unit 106 may be configured to perform foreground detection on the occlusion detection area to determine whether there is an occluding object; the flat detecting unit 107 may be configured to detect whether the occlusion detection area is flat in a case where the foreground detecting unit 106 detects a foreground and the classification unit 105 detects no vehicle in the parking space.
  • the first occlusion detecting unit 103 may further detect whether the image of the predetermined area in the monitoring image matches the reference image, thereby determining whether there is an occluding object in the occlusion detection area.
  • the parking space state determining unit 104 can determine the state of the parking space based on the detection result of the first occlusion detecting unit 103, the classification result of the classifying unit 105, the detection result of the foreground detecting unit 106, and the detection result of the flat detecting unit 107.
  • the first occlusion detecting unit 103 can detect whether there is an occluding object in the occlusion detection area according to whether the image of the predetermined area matches the reference image; thereby, the detection result is less affected by false foregrounds and is therefore more accurate.
  • the first occlusion detecting unit 103 may include a reference image generating unit 201 and a matching determining unit 202.
  • the reference image generating unit 201 may generate the reference image according to a predetermined number of frame monitoring images before the current frame monitoring image; the matching determining unit 202 may be configured to determine whether the image of the predetermined area in the current frame monitoring image is The reference image matches.
  • the predetermined number of frame monitoring images may be continuous or non-contiguous, and the predetermined number may be, for example, 100.
  • the set of monitoring images that make up the predetermined number of frames may also change as the current frame monitoring image changes. For example, after detection on the current frame monitoring image is completed, the current frame monitoring image may replace one frame of the predetermined number of frame monitoring images corresponding to it, thereby forming the predetermined number of frame monitoring images corresponding to the next frame monitoring image; in this way, the predetermined number of frame monitoring images can be updated.
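The frame-replacement scheme above behaves like a fixed-length sliding window, which can be sketched as follows; the constant `PREDETERMINED` matches the example value of 100, and the hook name `on_frame_processed` is hypothetical:

```python
from collections import deque

PREDETERMINED = 100  # example number of history frames behind the reference image

# A bounded deque implements the replacement rule: appending the just-processed
# frame automatically drops the oldest one, so the reference image can track
# gradual changes in the environment.
history = deque(maxlen=PREDETERMINED)

def on_frame_processed(binary_frame):
    """Fold the current frame into the history after its detection completes."""
    history.append(binary_frame)
```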
  • the reference image generating unit 201 generates the reference image according to a predetermined number of frame monitoring images, whereby the reference image can be updated as the environment changes, so that the detection result adapts to environmental changes and erroneous detection results are avoided.
  • the reference image generated by the reference image generating unit 201 may be a binary image, but the embodiment is not limited thereto, and the reference image may also be a color image or a grayscale image.
  • the matching determination unit 202 may determine whether the image of the predetermined area in the occlusion detection area of the current frame monitoring image matches the reference image, and the matching may be matching of the image shape, matching of the area, and/or matching of other image features.
  • the current frame monitoring image can be converted into the same image type as the reference image, thereby facilitating the matching determination; for example, when the reference image is a binary image, a color image, or a grayscale image, the current frame monitoring image can likewise be converted into a binary image, a color image, or a grayscale image.
  • if the matching determination unit 202 determines that the image of the predetermined area in the current frame monitoring image matches the reference image, this indicates that there is no occluding object; if the matching determination unit 202 determines that the image of the predetermined area in the current frame monitoring image does not match the reference image, this indicates that there is an occluding object.
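One possible matching rule can be sketched as a pixel-agreement test between the binarized predetermined-area image and the corresponding part of the reference image. The patent leaves the exact criterion open (shape, area, or other feature matching), so the agreement-ratio threshold here is an assumed stand-in:

```python
import numpy as np

def matches_reference(binary_patch, reference_patch, agree_ratio=0.9):
    """Declare a match when a large fraction of pixels agree between the
    binarized predetermined-area image and the reference image.
    The 0.9 agreement ratio is an assumed tuning value, not from the patent."""
    agreement = (binary_patch == reference_patch).mean()
    return bool(agreement >= agree_ratio)  # True -> no occluding object
```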
  • the predetermined area in the occlusion detection area may be an area that can still reflect the parking space position in the case of a change in illumination, for example, the predetermined area may be an area where at least a part of the parking space dividing line is located;
  • since the dividing line on the side of the parking space entrance, or the dividing line on the other side opposite to the entrance, may be blocked by a car in the parking space and thus not appear in the monitoring image, the predetermined area may be the area where at least a part of the dividing line on the other side opposite to the entrance is located, or the area where at least a part of the dividing line on the side of the parking space entrance is located.
  • the detection according to the image of the area where the parking space dividing line is located can avoid the influence of the illumination change on the detection result.
  • the predetermined area may also be other areas, and the embodiment is not limited thereto.
  • FIG. 3 is a schematic diagram of the reference image generating unit 201 of Embodiment 1 of the present application.
  • the reference image generating unit 201 may include a binary image converting unit 301 and a generating unit 302.
  • the binary image conversion unit 301 can be configured to convert the predetermined number of frame monitoring images into corresponding binary images; the generating unit 302 can form the reference image based on the pixel values of the pixels at the same position in the predetermined number of binary images.
  • the binary image conversion unit 301 may compare each frame of the predetermined number of frame monitoring images with a threshold corresponding to that frame, setting pixels whose value is equal to or higher than the threshold to white and pixels whose value is lower than the threshold to black, thereby generating the predetermined number of binary images.
  • the threshold corresponding to the frame monitoring image may be set according to the pixel value of each pixel in the frame monitoring image.
  • for example, the frame monitoring image may be filtered and a gray histogram generated, and the threshold calculated from the pixel value of the brightest 25% of the pixels and the pixel value of the darkest 25% of the pixels; for example, the threshold may be the midpoint of the pixel value of the brightest 25% of the pixels and the pixel value of the darkest 25% of the pixels.
  • the embodiment may not be limited thereto, and the threshold corresponding to each frame monitoring image may be set based on other methods.
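One plausible reading of the per-frame threshold rule can be sketched as follows; interpreting "the pixel value of the brightest/darkest 25% of pixels" as the mean over those quartiles is an assumption, and the function names are illustrative:

```python
import numpy as np

def frame_threshold(gray):
    """Midpoint of the mean value of the brightest 25% of pixels and the mean
    value of the darkest 25% (one plausible reading of the described rule)."""
    values = np.sort(gray.ravel())
    quarter = max(1, values.size // 4)
    darkest = values[:quarter].mean()
    brightest = values[-quarter:].mean()
    return (darkest + brightest) / 2.0

def to_binary(gray):
    """White (1) where a pixel reaches the per-frame threshold, black (0) below."""
    return (gray >= frame_threshold(gray)).astype(np.uint8)
```

Because the threshold is recomputed per frame from the frame's own histogram, the binarization adapts to day/night illumination changes, which is what makes the binary comparison in FIGs. 5-11 stable.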
  • the binary image conversion unit 301 can also process the current frame monitoring image to convert the current frame monitoring image into a binary image.
  • the generating unit 302 may form the reference image according to the number of pixels having predetermined pixel values at the same position in the predetermined number of binary images. For example, for a pixel in the reference image, if, among the predetermined number of binary images, at least a first number of frames have the pixel value 1 at the same position as that pixel, the pixel value of that pixel in the reference image is set to 1; otherwise, it is set to 0. The first number may be, for example, a number greater than or equal to half of the predetermined number; for example, the predetermined number is 100 and the first number is 60.
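The majority-vote rule above can be sketched directly; the function name is illustrative, and the default `first_number` follows the example values in the text (60 of 100 frames):

```python
import numpy as np

def build_reference(binary_frames, first_number=60):
    """Reference pixel = 1 when at least `first_number` of the history frames
    have value 1 at that position, else 0."""
    stack = np.stack(binary_frames)   # shape: (n_frames, height, width)
    votes = stack.sum(axis=0)         # per-pixel count of white (1) pixels
    return (votes >= first_number).astype(np.uint8)
```

Pixels that are white only transiently (passing vehicles, glints) fall below the vote threshold, so only stable structures such as the dividing line survive into the reference image.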
  • the reference image generated by the reference image generating unit 201 is a binary image. Since the pixel value of each pixel in the binary image occupies only 1 bit of data, the requirement for the storage amount is reduced. Moreover, the pixel value of the pixel in the binary image is less affected by the change of the external light, so the accuracy of the detection can be improved.
  • the present embodiment is not limited thereto, and the reference image generating unit 201 may also adopt a structure similar to that of FIG. 3 to generate a color image or a grayscale image as the reference image.
  • the area including the parking space dividing line at the apex of the parking space entrance may be used as the predetermined area; for example, the predetermined area may be set centered on the parking space dividing line at the apex of the parking space entrance.
  • the position of the predetermined area is known. Since there are vertices on both sides of the entrance to the parking space, there may be two of the predetermined areas.
  • other areas can be selected as the predetermined area.
  • only the method in which the first occlusion detecting unit 103 performs detection based on one predetermined area will be described; the method of detecting based on two or more predetermined areas may refer to this description.
  • FIG. 4 is a schematic diagram of an image of a predetermined area in the case where there is no occlusion in the daytime, and as shown in FIG. 4, the image 400 of the predetermined area may be a part of the current frame monitoring image.
  • FIG. 5 is a schematic diagram of the binary image corresponding to FIG. 4.
  • an image 501 corresponding to the parking space line can be observed in the image 500 of the binarized predetermined area.
  • the current frame monitoring image corresponding to FIG. 4 may be filtered to generate a grayscale histogram, and the midpoint of the pixel value of the brightest 25% of the pixels and the pixel value of the darkest 25% of the pixels is taken as the threshold, thereby converting the current frame monitoring image into a binary image; the portion of the binary image corresponding to the image 400 of the predetermined area becomes the image 500 of the binarized predetermined area.
  • FIG. 6 is a schematic diagram of a partial image of a reference image in the daytime case of the present embodiment, as shown in FIG. 6, the partial image 600 of the reference image has the same position and size as the image 500 of the binarized predetermined region, and An image 601 corresponding to the parking space line can be observed in the partial image 600 of the reference image.
  • the reference image may be obtained according to the binarized image corresponding to the 100-frame monitoring image before the current frame monitoring image.
  • the first occlusion detecting unit 103 can determine that the image 500 of the binarized predetermined area shown in FIG. 5 and the partial image 600 of the reference image shown in FIG. 6 match; therefore, the detection result of the first occlusion detecting unit 103 may be that there is no occluding object in the occlusion detection area.
  • FIG. 7 is a schematic diagram of an image of a predetermined area in the case where there is occlusion in the daytime;
  • FIG. 8 is a schematic diagram of a binary image corresponding to the image 700 of the predetermined area of FIG. 7.
  • the first occlusion detecting unit 103 can determine that the image 800 of the binarized predetermined area shown in FIG. 8 and the partial image 600 of the reference image shown in FIG. 6 do not match; therefore, the detection result of the first occlusion detecting unit 103 may be that there is an occluding object in the occlusion detection area.
  • FIG. 9 is a schematic diagram of an image of a predetermined area in the case where there is no occlusion at night;
  • FIG. 10 is a schematic diagram of a binary image corresponding to the image 900 of the predetermined area of FIG. 9; and
  • FIG. 11 is a schematic diagram of a partial image of the reference image in the nighttime case of the present embodiment.
  • the first occlusion detecting unit 103 can determine that the image 1000 of the binarized predetermined area shown in FIG. 10 and the partial image 1100 of the reference image shown in FIG. 11 match; therefore, the detection result of the first occlusion detecting unit 103 may be that there is no occluding object in the occlusion detection area.
  • Embodiment 2 of the present application provides an electronic device, which includes: a parking space state detecting device as described in Embodiment 1.
  • Fig. 12 is a block diagram showing the configuration of an electronic apparatus according to a second embodiment of the present application.
  • the electronic device 1200 can include a central processing unit (CPU) 1201 and a memory 1202, the memory 1202 being coupled to the central processing unit 1201. The memory 1202 can store various data; in addition, a program for detecting the state of the parking space is also stored and is executed under the control of the central processing unit 1201.
  • the functionality of the detecting device can be integrated into the central processing unit 1201.
  • the central processing unit 1201 can be configured to:
  • detect, according to whether the image of the predetermined area in the monitoring image matches the reference image, whether there is an occluding object in the occlusion detection area that blocks the parking space, and determine the state of the parking space according to the detection result of whether the occluding object exists in the occlusion detection area.
  • the central processing unit 1201 can also be configured to:
  • if it is detected that the occlusion detection area is not flat, detect whether the image of the predetermined area matches the reference image, thereby determining whether the occluding object exists in the occlusion detection area.
  • the central processing unit 1201 can also be configured to:
  • the predetermined area in the occlusion detection area includes an area in which at least a portion of the parking space dividing line of the parking space is located.
  • the electronic device 1200 may further include: an input and output unit 1203 and a display unit 1204, etc., whose functions are similar to those of the prior art and are not described herein again. It should be noted that the electronic device 1200 does not have to include all the components shown in FIG. 12; in addition, the electronic device 1200 may further include components not shown in FIG. 12, for which reference may be made to the prior art.
  • Embodiment 3 of the present application provides a method for detecting a parking space state, which detects a state of a parking space based on a monitoring image of a parking space, and corresponds to the detecting device 100 of the first embodiment.
  • FIG. 13 is a schematic diagram of a detection method of this embodiment. As shown in FIG. 13, the method includes:
  • Step 1301 Detect whether there is a moving object in the parking space according to a monitoring image of the parking space;
  • Step 1302 If there is no moving object in the parking space, detecting whether there is a moving object in the occlusion detecting area, wherein the occlusion detecting area is adjacent to the parking space;
  • Step 1303 If there is no moving object in the occlusion detection area, according to whether the image of the predetermined area in the monitoring image matches the reference image, detecting whether there is an occluding object in the occlusion detection area that blocks the parking space; and
  • Step 1304 Determine a state of the parking space according to whether a detection result of the occlusion object exists in the occlusion detection area.
  • FIG. 14 is a schematic diagram of step 1303 of the embodiment. As shown in FIG. 14, step 1303 may include:
  • Step 1401 Generate the reference image according to a predetermined number of frame monitoring images before the current frame monitoring image; and
  • Step 1402 Determine whether an image of the predetermined area in the current frame monitoring image matches the reference image.
  • FIG. 15 is a schematic diagram of step 1401 of the embodiment. As shown in FIG. 15, step 1401 may include:
  • Step 1501 Convert the predetermined number of frame monitoring images into corresponding binary images; and
  • Step 1502 form the reference image according to pixel values of pixels at the same position in the binary image of the predetermined number of frames.
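Steps 1501 and 1502 can be sketched in a few lines of code. This is a minimal illustration, not the patented implementation: it assumes grayscale frames held as nested lists, a per-frame threshold supplied by the caller, and uses the example counts given in the description (100 frames, first number 60) as defaults.

```python
def to_binary(frame, threshold):
    # Step 1501: pixels at or above the threshold become 1 (white), others 0 (black).
    return [[1 if px >= threshold else 0 for px in row] for row in frame]

def build_reference(binary_frames, first_number=60):
    # Step 1502: a reference pixel is set to 1 when at least `first_number`
    # of the binary frames have value 1 at the same position.
    h, w = len(binary_frames[0]), len(binary_frames[0][0])
    ref = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            votes = sum(f[y][x] for f in binary_frames)
            ref[y][x] = 1 if votes >= first_number else 0
    return ref

# Tiny illustration: 100 frames of a 2x2 region; one pixel is bright
# (value 200) in 70 frames, everything else stays dark (value 10).
frames = [to_binary([[200 if i < 70 else 10, 10], [10, 10]], 128)
          for i in range(100)]
ref = build_reference(frames)
print(ref)  # [[1, 0], [0, 0]]
```

Only the pixel that was bright in at least 60 of the 100 frames survives the vote, which is what makes the reference image robust to transient foreground.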
  • Fig. 16 is a flow chart of the method of detecting the parking space state of this embodiment. As shown in Fig. 16, the detection method includes:
  • Step 1601 According to the monitoring image, detecting that there is no moving object in the current parking space and no moving object in the occlusion detection area of the current parking space;
  • Step 1602 Performing foreground detection on the occlusion detection area of the current parking space;
  • Step 1603 Determining whether there is foreground in the occlusion detection area; if the determination is "No", proceed to step 1604; if "Yes", proceed to step 1605;
  • Step 1604 Detecting, based on the trained classifier, whether a vehicle exists in the current parking space;
  • Step 1606 Determining whether a vehicle is detected in the current parking space; "Yes" proceeds to step 1607, "No" proceeds to step 1615;
  • Step 1607 Setting the current parking space status to "occupied";
  • Step 1605 Same as step 1604, detecting, based on the trained classifier, whether a vehicle exists in the current parking space;
  • Step 1608 Determining whether a vehicle is detected in the current parking space; "Yes" proceeds to step 1607, "No" proceeds to step 1609;
  • Step 1609 Performing flatness detection on the occlusion detection area of the current parking space;
  • Step 1610 Determining whether the occlusion detection area is flat; "Yes" proceeds to step 1615, "No" proceeds to step 1611;
  • Step 1611 Generating the reference image based on the predetermined number of monitoring images before the current frame;
  • Step 1612 Detecting whether the image of the predetermined area in the current frame monitoring image matches the reference image;
  • Step 1613 Determining whether they match; "Yes" proceeds to step 1615, "No" proceeds to step 1614;
  • Step 1614 The parking space status is not updated.
  • In this embodiment, the current frame monitoring image may be used to update the predetermined number of frame monitoring images corresponding to the next frame monitoring image, so that the reference image corresponding to the next frame monitoring image is updated.
  • In Fig. 16, only the detection method for one parking space is shown; for each parking space in the monitoring image, the method shown in Fig. 16 can be used to detect the state of that parking space.
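The branching in Fig. 16 can be condensed into a single decision function. This is a sketch only: the four boolean inputs stand in for the detections described above, and since the text never states what step 1615 does, it is returned here as an opaque label rather than guessed at.

```python
def detect_state(has_foreground, vehicle_detected, area_is_flat, region_matches_reference):
    """Decision flow of Fig. 16 for one parking space (a sketch; the
    boolean arguments are hypothetical stand-ins for the detectors)."""
    if not has_foreground:                       # steps 1602-1603, "No" branch
        # steps 1604 / 1606
        return "occupied" if vehicle_detected else "step_1615"
    # foreground present: steps 1605 / 1608
    if vehicle_detected:
        return "occupied"                        # step 1607
    if area_is_flat:                             # steps 1609-1610
        return "step_1615"
    # not flat: fall back to reference-image matching, steps 1611-1613
    if region_matches_reference:
        return "step_1615"
    return "keep_previous_state"                 # step 1614: occlusion confirmed

# Occlusion case: foreground, no vehicle, not flat, region does not match.
print(detect_state(True, False, False, False))  # keep_previous_state
```

The last print illustrates the key point of the patent: when the predetermined area no longer matches the reference image, the space is treated as occluded and its previous state is retained rather than wrongly flipped to empty.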
  • The embodiment of the present application further provides a computer readable program, wherein when the program is executed in a detecting device or an electronic device, the program causes the detecting device or the electronic device to perform the detecting method described in Embodiment 3.
  • The embodiment of the present application further provides a storage medium storing a computer readable program, wherein the computer readable program causes the detecting device or the electronic device to perform the detecting method described in Embodiment 3.
  • the detection device described in connection with the embodiments of the present invention may be directly embodied as hardware, a software module executed by a processor, or a combination of both.
  • one or more of the functional blocks shown in Figures 1-3 and/or one or more combinations of functional blocks may correspond to various software modules of a computer program flow, or to individual hardware modules.
  • These software modules may correspond to the respective steps shown in Embodiment 3, respectively.
  • These hardware modules can be implemented, for example, by solidifying these software modules using a field programmable gate array (FPGA).
  • the software module can reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, removable disk, CD-ROM, or any other form of storage medium known in the art.
  • a storage medium can be coupled to the processor to enable the processor to read information from, and write information to, the storage medium; or the storage medium can be an integral part of the processor.
  • the processor and the storage medium can be located in an ASIC.
  • the software module can be stored in the memory of the mobile terminal or in a memory card that can be inserted into the mobile terminal.
  • the software module can be stored in the MEGA-SIM card or a large-capacity flash memory device.
  • One or more of the functional blocks described with respect to Figures 1-3 and/or one or more combinations of the functional blocks may be implemented as a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any suitable combination thereof, for performing the functions described in the present application.
  • One or more of the functional blocks described with respect to Figures 1-3 and/or one or more combinations of the functional blocks may also be implemented as a combination of computing devices, for example, a combination of a DSP and a microprocessor, multiple microprocessors, one or more microprocessors in communication with a DSP, or any other such configuration.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

Embodiments of the present application provide a parking space state detection method, detection apparatus and electronic device. The detection apparatus includes: a parking space motion detection unit that detects, from a monitoring image of a parking space, whether there is a moving object in the parking space; an occlusion motion detection unit that, when there is no moving object in the parking space, detects whether there is a moving object in an occlusion detection area, the occlusion detection area being adjacent to the parking space; a first occlusion detection unit that, when there is no moving object in the occlusion detection area, detects whether there is an occlusion object in the occlusion detection area that occludes the parking space, according to whether an image of a predetermined area in the monitoring image matches a reference image; and a parking space state judgment unit that determines the state of the parking space according to the detection result of the first occlusion detection unit. According to this embodiment, the detection accuracy of the parking space state can be improved.

Description

Parking space state detection method, detection apparatus and electronic device

Technical Field

The present application relates to the field of information technology, and in particular to a parking space state detection method, detection apparatus and electronic device.

Background

Nowadays, more and more households own and use cars, and the accompanying parking problem increasingly troubles vehicle users, who urgently need to know at any time and place whether nearby parking lots are full or have free spaces, so as to park more efficiently. Parking lots therefore need to detect the state of their parking spaces, so that this information can be reported to users in real time. For a large parking lot, it is clearly impractical to track the state changes of every parking space manually.

With the advance of technology, image processing techniques are being applied more and more widely in various fields, including the field of parking space state detection.

The applicant's prior application 1 (CN201510705589.X) describes a method for fast and accurate parking space state detection. Based on monitoring images of a parking space, the method detects the motion state of objects in the parking space, generates a steady-state parking space image for parking spaces without object motion, sharpens the image through certain post-processing techniques, and then determines the state of the parking space based on a contour method and classifier detection.

It should be noted that the above introduction of the technical background is given merely to facilitate a clear and complete description of the technical solutions of the present application and to aid the understanding of those skilled in the art. These solutions should not be regarded as well known to those skilled in the art simply because they are set forth in the background section of the present application.
Summary of the Application

The inventors of the present application found that, for parking lots in busy areas, vehicles frequently pass along the lanes around the parking spaces. The above prior application 1 can recognize vehicles moving along these lanes, so they do not affect the detection result of the parking space state.

In some cases, however, a vehicle stops in the lane for several minutes to several tens of minutes, waiting to pick up or drop off passengers, load or unload goods, or waiting for another vehicle to leave a parking space. In the monitoring image, such a vehicle standing in the lane can occlude the images of vehicles in several parking spaces, and it causes obvious changes in the steady-state parking space image, so that the detection result of the parking space state becomes wrong. For example, because of the occlusion caused by the stationary vehicle, no vehicle information can be detected in a parking space that actually holds a car, and the space is switched to the empty state; this wrong empty state lasts until the occluding vehicle leaves, only then returning to the correct occupied state.

Although foreground detection and flatness detection can be performed on the occlusion detection area around a parking space to determine whether the space is occluded, in some cases false foreground arises and is difficult to filter out by flatness detection, so it is difficult to accurately detect whether the parking space is occluded.

For example, on rainy days there is much standing water on the ground, and the water in the occlusion detection area reflects the surroundings like a mirror. If there is a car in the parking space, the whole front face of the car is reflected in the water on the road; when the car leaves the space, the reflection disappears. In this case, the change of the reflection produces a strong foreground image in the occlusion detection area, and because this foreground image is rich in detail it cannot be filtered out by flatness detection, so the wrong detection result that the parking space is occluded is obtained. As a result, although the actual state of the parking space has changed, the wrong detection result prevents the state from being updated, seriously affecting the usability of the parking space.

In addition, false foreground can also be caused by, for example, rapidly changing shadows due to sunlight; the present application only takes standing water on the road as an example.

Embodiments of the present application provide a parking space state detection method, detection apparatus and electronic device, which judge whether a parking space is occluded by judging whether the image of a predetermined area around the parking space matches a reference image, and thereby determine the state of the parking space. The detection accuracy of the parking space state can thus be improved.
According to a first aspect of the embodiments of the present application, there is provided a parking space state detection apparatus that detects the state of a parking space based on monitoring images of the parking space, the detection apparatus including:

a parking space motion detection unit that detects, from a monitoring image of the parking space, whether there is a moving object in the parking space;

an occlusion motion detection unit that, when there is no moving object in the parking space, detects whether there is a moving object in an occlusion detection area, the occlusion detection area being adjacent to the parking space;

a first occlusion detection unit that, when there is no moving object in the occlusion detection area, detects whether there is an occlusion object in the occlusion detection area that occludes the parking space, according to whether an image of a predetermined area in the monitoring image matches a reference image; and

a parking space state judgment unit that determines the state of the parking space according to the detection result of the first occlusion detection unit.

According to a second aspect of the embodiments of the present application, there is provided a parking space state detection method that detects the state of a parking space based on monitoring images of the parking space, the detection method including:

detecting, from a monitoring image of the parking space, whether there is a moving object in the parking space;

when there is no moving object in the parking space, detecting whether there is a moving object in an occlusion detection area, the occlusion detection area being adjacent to the parking space;

when there is no moving object in the occlusion detection area, detecting whether there is an occlusion object in the occlusion detection area that occludes the parking space, according to whether an image of a predetermined area in the monitoring image matches a reference image; and

determining the state of the parking space according to the detection result of whether the occlusion object exists in the occlusion detection area.

According to a third aspect of the embodiments of the present application, there is provided an electronic device including the parking space state detection apparatus of the first aspect of the embodiments of the present application.

A beneficial effect of the present application lies in improving the detection accuracy of the parking space state.

With reference to the following description and drawings, particular embodiments of the present application are disclosed in detail, indicating the manner in which the principles of the present application may be employed. It should be understood that the embodiments of the present application are not thereby limited in scope. Within the spirit and scope of the appended claims, the embodiments of the present application include many changes, modifications and equivalents.

Features described and/or illustrated for one embodiment may be used in the same or a similar way in one or more other embodiments, combined with features in other embodiments, or substituted for features in other embodiments.

It should be emphasized that the term "comprises/comprising" when used herein refers to the presence of features, integers, steps or components, but does not preclude the presence or addition of one or more other features, integers, steps or components.
Brief Description of the Drawings

The accompanying drawings, which are included to provide a further understanding of the embodiments of the present application and constitute a part of the specification, illustrate embodiments of the present application and, together with the description, serve to explain the principles of the present application. Obviously, the drawings described below are only some embodiments of the present application, and other drawings can be obtained from them by those of ordinary skill in the art without inventive effort. In the drawings:

Fig. 1 is a schematic diagram of the detection apparatus of Embodiment 1 of the present application;

Fig. 2 is a schematic diagram of the first occlusion detection unit of Embodiment 1;

Fig. 3 is a schematic diagram of the reference image generation unit of Embodiment 1;

Fig. 4 is a schematic diagram of the image of the predetermined area without occlusion in the daytime in Embodiment 1;

Fig. 5 is a schematic diagram of the binary image corresponding to Fig. 4;

Fig. 6 is a schematic diagram of a partial image of the reference image in the daytime in Embodiment 1;

Fig. 7 is a schematic diagram of the image of the predetermined area with occlusion in the daytime in Embodiment 1;

Fig. 8 is a schematic diagram of the binary image corresponding to Fig. 7;

Fig. 9 is a schematic diagram of the image of the predetermined area without occlusion at night in Embodiment 1;

Fig. 10 is a schematic diagram of the binary image corresponding to Fig. 9;

Fig. 11 is a schematic diagram of a partial image of the reference image at night in Embodiment 1;

Fig. 12 is a schematic diagram of the configuration of the electronic device of Embodiment 2;

Fig. 13 is a schematic diagram of the detection method of Embodiment 3;

Fig. 14 is a schematic diagram of step 1303 of Embodiment 3;

Fig. 15 is a schematic diagram of step 1401 of Embodiment 3;

Fig. 16 is a flow chart of the parking space state detection method of Embodiment 3.
Detailed Description

With reference to the drawings, the foregoing and other features of the present application will become apparent from the following description. In the description and drawings, particular embodiments of the present application are disclosed, indicating some of the embodiments in which the principles of the present application may be employed. It should be understood that the present application is not limited to the described embodiments; rather, the present application includes all modifications, variations and equivalents falling within the scope of the appended claims. Various embodiments of the present application are described below with reference to the drawings. These embodiments are only exemplary and do not limit the present application.
Embodiment 1

Embodiment 1 of the present application provides a parking space state detection apparatus that detects the state of a parking space based on monitoring images of the parking space.

Fig. 1 is a schematic diagram of the detection apparatus of Embodiment 1. As shown in Fig. 1, the detection apparatus 100 may include a parking space motion detection unit 101, an occlusion motion detection unit 102, a first occlusion detection unit 103, and a parking space state judgment unit 104.

In this embodiment, the parking space motion detection unit 101 detects, from a monitoring image of the parking space, whether there is a moving object in the parking space; when there is no moving object in the parking space, the occlusion motion detection unit 102 detects whether there is a moving object in the occlusion detection area, the occlusion detection area being adjacent to the parking space; when there is no moving object in the occlusion detection area, the first occlusion detection unit 103 detects whether there is an occlusion object in the occlusion detection area that occludes the parking space, according to whether the image of a predetermined area in the monitoring image matches a reference image; and the parking space state judgment unit 104 determines the state of the parking space according to the detection result of the first occlusion detection unit 103.

According to this embodiment, whether the parking space is occluded is judged by judging whether the image of a predetermined area around the parking space matches a reference image, and the state of the parking space is determined accordingly. The detection accuracy of the parking space state can thus be improved, and the detection speed can be increased as well.

In this embodiment, the monitoring image of the parking space may be obtained using the prior art, for example by installing a camera in the parking lot to capture the parking space.

In this embodiment, the parking space motion detection unit 101 may detect whether there is a moving object in the parking space according to the prior art; for example, it may process the monitoring image using a foreground detection method to detect whether a moving object exists in the parking space. In addition, a moving object in this embodiment may be a moving car, a moving person, or the like.

When the parking space motion detection unit 101 detects a moving object in the parking space, the parking space is in an unsteady state, for example a vehicle is entering or leaving the space; when the parking space motion detection unit 101 detects no moving object in the parking space, the parking space is in a steady state, and whether this steady state is an occupied state or an empty state needs to be determined by the parking space state judgment unit 104.

In this embodiment, when the parking space motion detection unit 101 detects no moving object in the parking space, the occlusion motion detection unit 102 may further detect whether there is a moving object in the occlusion detection area. The occlusion detection area may be an area adjacent to the parking space, and its shape and size may be set as needed; for example, the occlusion detection area may be located at the entrance and/or exit of the parking space, outside the space, and may be a rectangle whose side length is approximately equal to the width of the parking space.

In this embodiment, the method by which the occlusion motion detection unit 102 detects whether a moving object exists in the occlusion detection area may refer to the prior art and is not limited in this embodiment.

When the occlusion motion detection unit 102 detects a moving object in the occlusion detection area, the occlusion detection area is in an unsteady state, for example a vehicle is driving through it; when the occlusion motion detection unit 102 detects no moving object in the occlusion detection area, the area is in a steady state, and whether this steady state is a state with an occlusion object or a state without one needs to be detected by the first occlusion detection unit 103.

In this embodiment, when the occlusion motion detection unit 102 detects no moving object in the occlusion detection area, the first occlusion detection unit 103 may detect whether there is an occlusion object in the occlusion detection area that occludes the parking space.

In this embodiment, the first occlusion detection unit 103 may detect whether an occlusion object exists in the occlusion detection area according to various methods, which will be described later.

In this embodiment, the parking space state judgment unit 104 can determine the state of the parking space according to the detection result of the first occlusion detection unit 103. For example, when the first occlusion detection unit 103 detects that no occlusion object exists in the occlusion detection area, the parking space state judgment unit 104 may generate a steady-state image of the parking space and determine the state of the space based on it; when the first occlusion detection unit 103 detects that an occlusion object exists in the occlusion detection area, the parking space state judgment unit 104 may determine that the state of the parking space is unchanged, i.e. if the space was previously determined to be occupied it is again determined to be occupied, and if it was previously determined to be empty it is again determined to be empty.

In this embodiment, the method by which the parking space state judgment unit 104 determines the state of the parking space from its steady-state image may refer to the above prior application 1 and is not described again in this embodiment.

In addition, in this embodiment, as shown in Fig. 1, the detection apparatus 100 may further include a classification unit 105, which may detect whether a vehicle exists in the parking space based on a trained classifier. In this embodiment, regardless of whether the parking space is occluded, as long as a parking space image exists in the monitoring image, the classification unit 105 can detect that image based on the classifier, and as long as a vehicle image exists in the parking space image, the classification unit 105 can correctly detect the vehicle image. For the detection method of the classification unit 105, and the method of obtaining the trained classifier, reference may be made to the prior art, and no further description is given in this embodiment.
In addition, in this embodiment, as shown in Fig. 1, the detection apparatus 100 may further include a foreground detection unit 106 and a flatness detection unit 107.

In this embodiment, the foreground detection unit 106 may be used to perform foreground detection on the occlusion detection area to judge whether an occlusion object exists; the flatness detection unit 107 may be used to detect whether the occlusion detection area is flat when the foreground detection unit 106 detects foreground and the classification unit 105 detects no vehicle in the parking space.

In this embodiment, when the flatness detection unit 107 detects that the occlusion detection area is flat, it may be judged that no occlusion object exists in the occlusion detection area; when the flatness detection unit 107 detects that the occlusion detection area is not flat, the first occlusion detection unit 103 may further detect whether the image of the predetermined area in the monitoring image matches the reference image, thereby judging whether an occlusion object exists in the occlusion detection area.

In this embodiment, when the detection apparatus 100 includes the classification unit 105, the foreground detection unit 106 and the flatness detection unit 107, the parking space state judgment unit 104 may determine the state of the parking space according to the detection result of the first occlusion detection unit 103, the classification result of the classification unit 105, the detection result of the foreground detection unit 106, and the detection result of the flatness detection unit 107.

The structure of the first occlusion detection unit 103 is described below with reference to the drawings.

In this embodiment, the first occlusion detection unit 103 may detect whether an occlusion object exists in the occlusion detection area according to whether the image of the predetermined area matches the reference image; the detection result is therefore less affected by false foreground and is more accurate.

Fig. 2 is a schematic diagram of the first occlusion detection unit 103 of Embodiment 1 of the present application. As shown in Fig. 2, the first occlusion detection unit 103 may include a reference image generation unit 201 and a matching judgment unit 202.

In this embodiment, the reference image generation unit 201 may generate the reference image according to a predetermined number of frames of monitoring images preceding the current frame monitoring image; the matching judgment unit 202 may be used to judge whether the image of the predetermined area in the current frame monitoring image matches the reference image.

In this embodiment, the predetermined number of frames of monitoring images may be consecutive or non-consecutive, and the predetermined number may be, for example, 100.

In this embodiment, as the current frame monitoring image changes, the monitoring images contained in the predetermined number of frames may also change. For example, after detection on the current frame is completed, the current frame may be used to replace one of the predetermined number of frames corresponding to the current frame, thereby forming the predetermined number of frames corresponding to the next frame; the predetermined number of frames of monitoring images can thus be updated.

In this embodiment, the reference image generation unit 201 generates the reference image from the predetermined number of frames of monitoring images, so the reference image can be updated as the environment changes, making the detection result adapt to environmental changes and avoiding wrong detection results.

In this embodiment, the reference image generated by the reference image generation unit 201 may be a binary image, but this embodiment is not limited thereto; the reference image may also be a color image or a grayscale image.

In this embodiment, the matching judgment unit 202 may judge whether the image of the predetermined area within the occlusion detection area of the current frame monitoring image matches the reference image; the matching may be matching of image shape, matching of area, and/or matching of other image features. In this embodiment, the current frame monitoring image may be converted into the same image type as the reference image to facilitate the matching judgment; for example, when the reference image is a binary image, a color image or a grayscale image, the current frame monitoring image may also be converted into a binary image, a color image or a grayscale image, respectively.

In this embodiment, if the matching judgment unit 202 judges that the image of the predetermined area in the current frame monitoring image matches the reference image, no occlusion object exists; if the matching judgment unit 202 judges that they do not match, an occlusion object exists.

In this embodiment, the predetermined area within the occlusion detection area may be an area that still reflects the position of the parking space under illumination changes; for example, the predetermined area may be the area where at least a part of a parking space dividing line is located. Moreover, since, when there is a car in the parking space, the dividing line on the entrance side of the space or the dividing line on the side opposite the entrance may be occluded by the car and thus not appear in the monitoring image, the predetermined area may also be the area where at least a part of the dividing line on the side opposite the entrance is located, or the area where at least a part of the dividing line on the entrance side is located. Since parking space dividing lines are generally lines of relatively high brightness, performing detection based on the image of the area where the dividing line is located can avoid the influence of illumination changes on the detection result. In addition, in this embodiment, the predetermined area may also be another area; this embodiment is not limited thereto.
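One plausible realization of the matching judgment is a simple pixel-agreement ratio between the binarized predetermined-area image and the corresponding part of the reference image. This is a sketch only: the embodiment deliberately leaves the matching criterion open (shape, area and/or other features), and the 90% agreement threshold used here is an illustrative assumption, not a value from the text.

```python
def matches(region_binary, reference_patch, min_agreement=0.9):
    # Fraction of pixels with the same binary value at the same position;
    # min_agreement is a hypothetical tuning parameter.
    total = agree = 0
    for row_a, row_b in zip(region_binary, reference_patch):
        for a, b in zip(row_a, row_b):
            total += 1
            agree += (a == b)
    return agree / total >= min_agreement

line = [[0, 1, 1, 0]] * 4        # binarized dividing-line pattern
occluded = [[0, 0, 0, 0]] * 4    # line hidden by an occluding vehicle
print(matches(line, line))       # match -> no occlusion object
print(matches(occluded, line))   # mismatch -> occlusion object present
```

Because the dividing line is a high-brightness stripe in both the current frame and the reference image, this comparison stays stable under gradual illumination changes while still failing sharply when a vehicle covers the line.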
Fig. 3 is a schematic diagram of the reference image generation unit 201 of Embodiment 1 of the present application. As shown in Fig. 3, the reference image generation unit 201 may include a binary image conversion unit 301 and a generation unit 302.

In this embodiment, the binary image conversion unit 301 may be used to convert the predetermined number of frames of monitoring images into corresponding binary images; the generation unit 302 may form the reference image according to the pixel values of pixels at the same position in the predetermined number of binary images.

In this embodiment, the binary image conversion unit 301 may compare each of the predetermined number of frames of monitoring images with the threshold corresponding to that frame, set pixels whose pixel value is equal to or higher than the threshold to white, and set pixels whose pixel value is lower than the threshold to black, thereby generating the predetermined number of binary images.

In this embodiment, the threshold corresponding to a frame may be set according to the pixel values of the pixels in that frame. For example, a grayscale histogram may be generated after filtering the frame, and the threshold may be computed from the pixel values of the brightest 25% of pixels and the pixel values of the darkest 25% of pixels; for example, the threshold may be the midpoint between the pixel value of the brightest 25% of pixels and the pixel value of the darkest 25% of pixels. Of course, this embodiment is not limited thereto, and the threshold corresponding to each frame may also be set based on other methods.

In addition, in this embodiment, the binary image conversion unit 301 may also process the current frame monitoring image to convert it into a binary image.

In this embodiment, the generation unit 302 may form the reference image according to the number of pixels having a predetermined pixel value at the same position in the predetermined number of binary images. For example, for a pixel in the reference image, if at least a first number of the predetermined number of binary images have a pixel value of 1 at the position corresponding to that pixel, the pixel value of that pixel in the reference image is set to 1; otherwise, it is set to 0. The first number may be, for example, a number greater than or equal to half the predetermined number; for example, the predetermined number is 100 and the first number is 60.

In this embodiment, the reference image generated by the reference image generation unit 201 is a binary image. Since the pixel value of each pixel in a binary image occupies only 1 bit, the storage requirement is reduced; furthermore, the pixel values of a binary image are less affected by changes in ambient light, so the detection accuracy can be improved.

Of course, this embodiment is not limited thereto; the reference image generation unit 201 may also adopt a structure similar to that of Fig. 3 to generate a color image or a grayscale image as the reference image.
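The per-frame threshold described above, the midpoint between the brightest-25% and darkest-25% pixel values, might be computed as follows. Note that the text does not fix how each 25% tail is summarized into a single value; taking the mean of each tail is an assumption of this sketch, and filtering before histogram computation is omitted for brevity.

```python
def frame_threshold(frame):
    """Midpoint between the darkest-25% and brightest-25% pixel values
    of one grayscale frame (tail means are an assumption of this sketch)."""
    pixels = sorted(px for row in frame for px in row)
    quarter = max(1, len(pixels) // 4)
    dark = sum(pixels[:quarter]) / quarter      # darkest 25% of pixels
    bright = sum(pixels[-quarter:]) / quarter   # brightest 25% of pixels
    return (dark + bright) / 2

frame = [[10, 20, 30, 40],
         [50, 60, 200, 210],
         [220, 230, 15, 25],
         [35, 45, 55, 65]]
print(frame_threshold(frame))  # 116.25
```

Because the threshold is re-derived for every frame, the same bright dividing line binarizes to white both in daylight and at night, which is what allows one matching rule to serve both lighting conditions.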
The working principle of the first occlusion detection unit 103 is described below with an example.

In the following example, the area containing the parking space dividing line at a vertex of the parking space entrance may be taken as the predetermined area; for example, the predetermined area may be an area of predetermined size centered on the parking space dividing line at a vertex of the entrance. Since parking space dividing lines are drawn artificially, the position of the predetermined area is known. Since there is a vertex on each of the two sides of the entrance, there may be two predetermined areas. Other areas may also be selected as predetermined areas. The following example only describes the method by which the first occlusion detection unit 103 performs detection based on one predetermined area; for detection based on two or more predetermined areas, reference may be made to this description.

Fig. 4 is a schematic diagram of the image of the predetermined area without occlusion in the daytime in this embodiment. As shown in Fig. 4, the image 400 of the predetermined area may be a part of the current frame monitoring image.

Fig. 5 is a schematic diagram of the binary image corresponding to Fig. 4. As shown in Fig. 5, the image 501 corresponding to the parking space line can be observed in the binarized image 500 of the predetermined area. In this embodiment, a grayscale histogram may be generated after filtering the current frame monitoring image corresponding to Fig. 4, and the midpoint between the pixel value of the brightest 25% of pixels and the pixel value of the darkest 25% of pixels may be taken as the threshold, so as to convert the current frame monitoring image into a binary image; the part of that binary image corresponding to the image 400 of the predetermined area becomes the binarized image 500 of the predetermined area.

Fig. 6 is a schematic diagram of a partial image of the reference image in the daytime in this embodiment. As shown in Fig. 6, the partial image 600 of the reference image has the same position and size as the binarized image 500 of the predetermined area, and the image 601 corresponding to the parking space line can be observed in the partial image 600. In this embodiment, the reference image may be obtained from the binarized images corresponding to the 100 frames of monitoring images preceding the current frame monitoring image.

In this embodiment, the first occlusion detection unit 103 may judge the binarized image 500 of the predetermined area shown in Fig. 5 and the partial image 600 of the reference image shown in Fig. 6 to match; the detection result of the first occlusion detection unit 103 may therefore be that no occlusion object exists in the occlusion detection area.

Fig. 7 is a schematic diagram of the image of the predetermined area with occlusion in the daytime in this embodiment, and Fig. 8 is a schematic diagram of the binary image corresponding to the image 700 of the predetermined area of Fig. 7.

In this embodiment, the first occlusion detection unit 103 may judge the binarized image 800 of the predetermined area shown in Fig. 8 and the partial image 600 of the reference image shown in Fig. 6 not to match; the detection result of the first occlusion detection unit 103 may therefore be that an occlusion object exists in the occlusion detection area.

Fig. 9 is a schematic diagram of the image of the predetermined area without occlusion at night in this embodiment, Fig. 10 is a schematic diagram of the binary image corresponding to the image 900 of the predetermined area of Fig. 9, and Fig. 11 is a schematic diagram of a partial image of the reference image at night in this embodiment.

In this embodiment, the first occlusion detection unit 103 may judge the binarized image 1000 of the predetermined area shown in Fig. 10 and the partial image 1100 of the reference image shown in Fig. 11 to match; the detection result of the first occlusion detection unit 103 may therefore be that no occlusion object exists in the occlusion detection area.

In this embodiment, according to whether the image of the predetermined area in the monitoring image matches the reference image, it can be accurately detected whether an occlusion object that occludes the parking space exists, and the detection result is prevented from being affected by changes of the surrounding ambient light.
Embodiment 2

Embodiment 2 of the present application provides an electronic device including the parking space state detection apparatus described in Embodiment 1.

Fig. 12 is a schematic diagram of the configuration of the electronic device of Embodiment 2 of the present application. As shown in Fig. 12, the electronic device 1200 may include a central processing unit (CPU) 1201 and a memory 1202, the memory 1202 being coupled to the central processing unit 1201. The memory 1202 may store various data, and in addition stores a program for parking space state detection, which is executed under the control of the central processing unit 1201.

In one embodiment, the functions of the detection apparatus may be integrated into the central processing unit 1201.

The central processing unit 1201 may be configured to:

detect, from a monitoring image of the parking space, whether there is a moving object in the parking space;

when there is no moving object in the parking space, detect whether there is a moving object in an occlusion detection area, the occlusion detection area being adjacent to the parking space;

when there is no moving object in the occlusion detection area, detect whether there is an occlusion object in the occlusion detection area that occludes the parking space, according to whether the image of a predetermined area in the monitoring image matches a reference image; and determine the state of the parking space according to the detection result of whether the occlusion object exists in the occlusion detection area.

The central processing unit 1201 may further be configured to:

generate the reference image according to a predetermined number of frames of monitoring images preceding the current frame monitoring image; and judge whether the image of the predetermined area in the current frame monitoring image matches the reference image.

The central processing unit 1201 may further be configured to:

convert the predetermined number of frames of monitoring images into corresponding binary images; and form the reference image according to the pixel values of pixels at the same position in the predetermined number of binary images.

The central processing unit 1201 may further be configured to:

detect, based on a trained classifier, whether a vehicle exists in the parking space.

The central processing unit 1201 may further be configured to:

perform foreground detection on the occlusion detection area to judge whether an occlusion object exists; and, when foreground is detected and no vehicle is detected in the parking space based on the classifier, detect whether the occlusion detection area is flat.

The central processing unit 1201 may further be configured to:

when the occlusion detection area is detected not to be flat, detect whether the image of the predetermined area matches the reference image, so as to judge whether the occlusion object exists in the occlusion detection area.

The central processing unit 1201 may further be configured such that:

the predetermined area within the occlusion detection area includes the area where at least a part of a parking space dividing line of the parking space is located.

In addition, as shown in Fig. 12, the electronic device 1200 may further include an input and output unit 1203, a display unit 1204, and the like, whose functions are similar to those of the prior art and are not described again here. It is worth noting that the electronic device 1200 need not include all the components shown in Fig. 12; furthermore, the electronic device 1200 may also include components not shown in Fig. 12, for which reference may be made to the prior art.
Embodiment 3

Embodiment 3 of the present application provides a parking space state detection method that detects the state of a parking space based on monitoring images of the parking space, corresponding to the detection apparatus 100 of Embodiment 1.

Fig. 13 is a schematic diagram of the detection method of this embodiment. As shown in Fig. 13, the method includes:

Step 1301: detecting, from a monitoring image of the parking space, whether there is a moving object in the parking space;

Step 1302: when there is no moving object in the parking space, detecting whether there is a moving object in an occlusion detection area, the occlusion detection area being adjacent to the parking space;

Step 1303: when there is no moving object in the occlusion detection area, detecting whether there is an occlusion object in the occlusion detection area that occludes the parking space, according to whether the image of a predetermined area in the monitoring image matches a reference image; and

Step 1304: determining the state of the parking space according to the detection result of whether the occlusion object exists in the occlusion detection area.

Fig. 14 is a schematic diagram of step 1303 of this embodiment. As shown in Fig. 14, step 1303 may include:

Step 1401: generating the reference image according to a predetermined number of frames of monitoring images preceding the current frame monitoring image; and

Step 1402: judging whether the image of the predetermined area in the current frame monitoring image matches the reference image.

Fig. 15 is a schematic diagram of step 1401 of this embodiment. As shown in Fig. 15, step 1401 may include:

Step 1501: converting the predetermined number of frames of monitoring images into corresponding binary images; and

Step 1502: forming the reference image according to the pixel values of pixels at the same position in the predetermined number of binary images.

For the description of each step in this embodiment, reference may be made to the description of each unit in Embodiment 1, which is not repeated here.

The parking space state detection method of this embodiment is described below with an example.

Fig. 16 is a flow chart of the parking space state detection method of this embodiment. As shown in Fig. 16, the detection method includes:

Step 1601: detecting, from the monitoring image, that there is no moving object in the current parking space and also no moving object in the occlusion detection area of the current parking space;

Step 1602: performing foreground detection on the occlusion detection area of the current parking space;

Step 1603: judging whether foreground exists in the occlusion detection area; if the judgment is "No", proceed to step 1604; if "Yes", proceed to step 1605;

Step 1604: detecting, based on a trained classifier, whether a vehicle exists in the current parking space;

Step 1606: judging whether a vehicle is detected in the current parking space; if "Yes", proceed to step 1607; if "No", proceed to step 1615;

Step 1607: setting the state of the current parking space to "occupied";

Step 1605: same as step 1604, detecting, based on the trained classifier, whether a vehicle exists in the current parking space;

Step 1608: judging whether a vehicle is detected in the current parking space; if "Yes", proceed to step 1607; if "No", proceed to step 1609;

Step 1609: performing flatness detection on the occlusion detection area of the current parking space;

Step 1610: judging whether the occlusion detection area is flat; if "Yes", proceed to step 1615; if "No", proceed to step 1611;

Step 1611: generating the reference image based on the predetermined number of monitoring images preceding the current frame;

Step 1612: detecting whether the image of the predetermined area in the current frame monitoring image matches the reference image;

Step 1613: judging whether they match; if "Yes", proceed to step 1615; if "No", proceed to step 1614;

Step 1614: not updating the state of the parking space.

In this embodiment, the current frame monitoring image may be used to update the predetermined number of frames of monitoring images corresponding to the next frame monitoring image, so that the reference image corresponding to the next frame monitoring image is updated.

Fig. 16 shows the detection method for only one parking space; for each parking space in the monitoring image, the method shown in Fig. 16 can be used to detect the state of that parking space.

According to this embodiment, whether the parking space is occluded is judged by judging whether the image of the predetermined area of the parking space matches the reference image, and the state of the parking space is determined accordingly. The detection accuracy of the parking space state can thus be improved, and the detection speed can be increased as well.
An embodiment of the present application further provides a computer readable program, wherein when the program is executed in a detection apparatus or an electronic device, the program causes the detection apparatus or the electronic device to perform the detection method described in Embodiment 3.

An embodiment of the present application further provides a storage medium storing a computer readable program, wherein the computer readable program causes a detection apparatus or an electronic device to perform the detection method described in Embodiment 3.

The detection apparatus described in connection with the embodiments of the present invention may be embodied directly as hardware, as a software module executed by a processor, or as a combination of the two. For example, one or more of the functional blocks shown in Figs. 1-3 and/or one or more combinations of the functional blocks may correspond to software modules of a computer program flow, or to hardware modules. These software modules may correspond respectively to the steps shown in Embodiment 3. These hardware modules may be implemented, for example, by solidifying these software modules using a field programmable gate array (FPGA).

A software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. A storage medium may be coupled to the processor so that the processor can read information from, and write information to, the storage medium; or the storage medium may be an integral part of the processor. The processor and the storage medium may reside in an ASIC. The software module may be stored in the memory of a mobile terminal or in a memory card insertable into the mobile terminal; for example, if a device (such as a mobile terminal) uses a large-capacity MEGA-SIM card or a large-capacity flash memory device, the software module may be stored in that MEGA-SIM card or large-capacity flash memory device.

One or more of the functional blocks described with respect to Figs. 1-3 and/or one or more combinations of the functional blocks may be implemented as a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any suitable combination thereof, for performing the functions described in the present application. They may also be implemented as a combination of computing devices, for example a combination of a DSP and a microprocessor, multiple microprocessors, one or more microprocessors in communication with a DSP, or any other such configuration.

The present application has been described above with reference to particular embodiments, but it should be clear to those skilled in the art that these descriptions are exemplary and do not limit the protection scope of the present application. Those skilled in the art may make various variations and modifications to the present application according to its principles, and such variations and modifications also fall within the scope of the present application.

Claims (15)

  1. A parking space state detection apparatus that detects the state of a parking space based on monitoring images of the parking space, the detection apparatus comprising:
    a parking space motion detection unit that detects, from a monitoring image of the parking space, whether there is a moving object in the parking space;
    an occlusion motion detection unit that, when there is no moving object in the parking space, detects whether there is a moving object in an occlusion detection area, wherein the occlusion detection area is adjacent to the parking space;
    a first occlusion detection unit that, when there is no moving object in the occlusion detection area, detects whether there is an occlusion object in the occlusion detection area that occludes the parking space, according to whether an image of a predetermined area in the monitoring image matches a reference image; and
    a parking space state judgment unit that determines the state of the parking space according to the detection result of the first occlusion detection unit.
  2. The parking space state detection apparatus according to claim 1, wherein the first occlusion detection unit comprises:
    a reference image generation unit that generates the reference image according to a predetermined number of frames of monitoring images preceding the current frame monitoring image; and
    a matching judgment unit for judging whether the image of the predetermined area in the current frame monitoring image matches the reference image.
  3. The parking space state detection apparatus according to claim 2, wherein the reference image generation unit comprises:
    a binary image conversion unit for converting the predetermined number of frames of monitoring images into corresponding binary images; and
    a generation unit for forming the reference image according to the pixel values of pixels at the same position in the predetermined number of binary images.
  4. The parking space state detection apparatus according to claim 1, wherein the detection apparatus further comprises:
    a classification unit that detects, based on a trained classifier, whether a vehicle exists in the parking space.
  5. The parking space state detection apparatus according to claim 4, wherein the detection apparatus further comprises:
    a foreground detection unit for performing foreground detection on the occlusion detection area to judge whether an occlusion object exists; and
    a flatness detection unit for detecting whether the occlusion detection area is flat when the foreground detection unit detects foreground and the classification unit detects no vehicle in the parking space.
  6. The parking space state detection apparatus according to claim 5, wherein,
    when the flatness detection unit detects that the occlusion detection area is not flat, the first occlusion detection unit detects whether the image of the predetermined area matches the reference image, so as to judge whether the occlusion object exists in the occlusion detection area.
  7. The parking space state detection apparatus according to claim 1, wherein
    the predetermined area within the occlusion detection area includes an area where at least a part of a parking space dividing line of the parking space is located.
  8. An electronic device comprising the parking space state detection apparatus according to any one of claims 1-7.
  9. A parking space state detection method that detects the state of a parking space based on monitoring images of the parking space, the detection method comprising:
    detecting, from a monitoring image of the parking space, whether there is a moving object in the parking space;
    when there is no moving object in the parking space, detecting whether there is a moving object in an occlusion detection area, wherein the occlusion detection area is adjacent to the parking space;
    when there is no moving object in the occlusion detection area, detecting whether there is an occlusion object in the occlusion detection area that occludes the parking space, according to whether an image of a predetermined area in the monitoring image matches a reference image; and
    determining the state of the parking space according to the detection result of whether the occlusion object exists in the occlusion detection area.
  10. The parking space state detection method according to claim 9, wherein detecting whether there is an occlusion object in the occlusion detection area that occludes the parking space, according to whether the image of the predetermined area in the monitoring image matches the reference image, comprises:
    generating the reference image according to a predetermined number of frames of monitoring images preceding the current frame monitoring image; and
    judging whether the image of the predetermined area in the current frame monitoring image matches the reference image.
  11. The parking space state detection method according to claim 10, wherein generating the reference image comprises:
    converting the predetermined number of frames of monitoring images into corresponding binary images; and
    forming the reference image according to the pixel values of pixels at the same position in the predetermined number of binary images.
  12. The parking space state detection method according to claim 9, wherein the detection method further comprises:
    detecting, based on a trained classifier, whether a vehicle exists in the parking space.
  13. The parking space state detection method according to claim 12, wherein the detection method further comprises:
    performing foreground detection on the occlusion detection area to judge whether an occlusion object exists; and
    when foreground is detected and no vehicle is detected in the parking space based on the classifier, detecting whether the occlusion detection area is flat.
  14. The parking space state detection method according to claim 13, wherein,
    when the occlusion detection area is detected not to be flat, detecting whether the image of the predetermined area matches the reference image, so as to judge whether the occlusion object exists in the occlusion detection area.
  15. The parking space state detection method according to claim 9, wherein
    the predetermined area within the occlusion detection area includes an area where at least a part of a parking space dividing line of the parking space is located.
PCT/CN2016/103778 2016-10-28 2016-10-28 停车位状态的检测方法、检测装置和电子设备 WO2018076281A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2016/103778 WO2018076281A1 (zh) 2016-10-28 2016-10-28 停车位状态的检测方法、检测装置和电子设备

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2016/103778 WO2018076281A1 (zh) 2016-10-28 2016-10-28 停车位状态的检测方法、检测装置和电子设备

Publications (1)

Publication Number Publication Date
WO2018076281A1 true WO2018076281A1 (zh) 2018-05-03

Family

ID=62023151

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/103778 WO2018076281A1 (zh) 2016-10-28 2016-10-28 停车位状态的检测方法、检测装置和电子设备

Country Status (1)

Country Link
WO (1) WO2018076281A1 (zh)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111428547A (zh) * 2019-06-24 2020-07-17 杭州海康威视数字技术股份有限公司 车位确定方法及装置
CN113570872A (zh) * 2021-08-13 2021-10-29 深圳市捷顺科技实业股份有限公司 一种遮挡停车位事件的处理方法及装置
CN113781823A (zh) * 2020-06-09 2021-12-10 恒景科技股份有限公司 环境光估算系统
TWI750673B (zh) * 2020-05-26 2021-12-21 恆景科技股份有限公司 環境光估算系統
CN114282623A (zh) * 2021-12-29 2022-04-05 北京商海文天科技发展有限公司 一种基于客运车辆违规上下客的分析方法

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150086071A1 (en) * 2013-09-20 2015-03-26 Xerox Corporation Methods and systems for efficiently monitoring parking occupancy
CN105390021A (zh) * 2015-11-16 2016-03-09 北京蓝卡科技股份有限公司 车位状态的检测方法及装置
CN105894529A (zh) * 2016-06-03 2016-08-24 北京精英智通科技股份有限公司 车位状态检测方法和装置及系统
CN106023594A (zh) * 2016-06-13 2016-10-12 北京精英智通科技股份有限公司 一种车位遮挡的判定方法、装置及车辆管理系统

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150086071A1 (en) * 2013-09-20 2015-03-26 Xerox Corporation Methods and systems for efficiently monitoring parking occupancy
CN105390021A (zh) * 2015-11-16 2016-03-09 北京蓝卡科技股份有限公司 车位状态的检测方法及装置
CN105894529A (zh) * 2016-06-03 2016-08-24 北京精英智通科技股份有限公司 车位状态检测方法和装置及系统
CN106023594A (zh) * 2016-06-13 2016-10-12 北京精英智通科技股份有限公司 一种车位遮挡的判定方法、装置及车辆管理系统

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111428547A (zh) * 2019-06-24 2020-07-17 杭州海康威视数字技术股份有限公司 车位确定方法及装置
CN111428547B (zh) * 2019-06-24 2024-03-01 杭州海康威视数字技术股份有限公司 车位确定方法及装置
TWI750673B (zh) * 2020-05-26 2021-12-21 恆景科技股份有限公司 環境光估算系統
CN113781823A (zh) * 2020-06-09 2021-12-10 恒景科技股份有限公司 环境光估算系统
CN113570872A (zh) * 2021-08-13 2021-10-29 深圳市捷顺科技实业股份有限公司 一种遮挡停车位事件的处理方法及装置
CN113570872B (zh) * 2021-08-13 2022-10-14 深圳市捷顺科技实业股份有限公司 一种遮挡停车位事件的处理方法及装置
CN114282623A (zh) * 2021-12-29 2022-04-05 北京商海文天科技发展有限公司 一种基于客运车辆违规上下客的分析方法

Similar Documents

Publication Publication Date Title
WO2018076281A1 (zh) 停车位状态的检测方法、检测装置和电子设备
US8798314B2 (en) Detection of vehicles in images of a night time scene
US10373024B2 (en) Image processing device, object detection device, image processing method
US12026904B2 (en) Depth acquisition device and depth acquisition method
WO2020258077A1 (zh) 一种行人检测方法及装置
JP2018010634A (ja) 駐車スペース状態検出方法、検出装置及び電子機器
CN109784487B (zh) 用于事件检测的深度学习网络、该网络的训练装置及方法
US11017552B2 (en) Measurement method and apparatus
JP7185419B2 (ja) 車両のための、対象物を分類するための方法および装置
US20220207750A1 (en) Object detection with image background subtracted
CN110298302B (zh) 一种人体目标检测方法及相关设备
US9798940B2 (en) Vehicular image processing apparatus
Santos et al. Car recognition based on back lights and rear view features
CN104008518A (zh) 对象检测设备
US20210089818A1 (en) Deposit detection device and deposit detection method
CN112784642B (zh) 车辆检测方法及装置
CN112287905A (zh) 车辆损伤识别方法、装置、设备及存储介质
CN112017065A (zh) 车辆定损理赔方法、装置及计算机可读存储介质
CN112308061B (zh) 一种车牌字符识别方法及装置
CN115240163A (zh) 一种基于一阶段检测网络的交通标志检测方法及系统
CN114298987A (zh) 一种反光条检测方法及装置
JPH1166490A (ja) 車両検出方法
CN113869292B (zh) 用于自动驾驶的目标检测方法、装置及设备
EP3611655A1 (en) Object detection based on analysis of a sequence of images
CN111597959B (zh) 行为检测方法、装置及电子设备

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16920302

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16920302

Country of ref document: EP

Kind code of ref document: A1