CN111814560A - Parking space state identification method, system, medium and equipment - Google Patents


Info

Publication number: CN111814560A
Application number: CN202010523649.7A
Authority: CN (China)
Prior art keywords: frame, image frame, parking space, state, difference
Legal status: Granted; Active
Other languages: Chinese (zh)
Other versions: CN111814560B (granted publication)
Inventor: 张孟贺
Current and original assignee: Henan Guanchao Intelligent Technology Co., Ltd.

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00: Scenes; scene-specific elements
    • G06V 20/50: Context or environment of the image
    • G06V 20/56: Context or environment of the image exterior to a vehicle, by using sensors mounted on the vehicle
    • G06V 20/58: Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V 20/586: Recognition of parking space
    • G08: SIGNALLING
    • G08G: TRAFFIC CONTROL SYSTEMS
    • G08G 1/00: Traffic control systems for road vehicles
    • G08G 1/123: Indicating the position of vehicles, e.g. scheduled vehicles; managing passenger vehicles circulating according to a fixed timetable, e.g. buses, trains, trams
    • G08G 1/127: Indicating the position of vehicles to a central station; indicators in a central station
    • G08G 1/14: Indicating individual free spaces in parking areas

Abstract

The disclosure relates to a parking space state identification method, system, medium and device. The method comprises the following steps: acquiring a monitoring video shot of a roadside parking space; calculating the adjacent frame differences and the average frame difference of all image frames; selecting the first image frame and the second image frame corresponding to the largest of all adjacent frame differences, and a third image frame whose frame difference with its previous image frame is closest to the average frame difference; respectively calculating a first frame difference between the first image frame and the third image frame, and a second frame difference between the third image frame and the second image frame; identifying a first occupancy state of the roadside parking space in the third image frame; and determining a second occupancy state in the first image frame and/or a third occupancy state in the second image frame. In the scheme provided by the disclosure, the marked difference between the occupied and idle states of a parking space is reflected in the frame differences of the image frames, so the parking space states can be identified by taking the parking space state in the third key image frame as a reference.

Description

Parking space state identification method, system, medium and equipment
Technical Field
The present disclosure relates to the field of parking space management technologies, and in particular, to a parking space state identification method, system, medium, and device.
Background
In the related art, the management of roadside parking spaces usually relies on machine vision to recognize the parking state of each space. The roadside parking spaces must be recognized continuously, which consumes considerable computing resources, places high demands on model accuracy, and results in high equipment cost.
Disclosure of Invention
To overcome the problems in the related art, the present disclosure provides a parking space state identification method, system, medium, and device.
According to a first aspect of the embodiments of the present disclosure, a parking space state identification method is provided, including:
acquiring a monitoring video of the roadside parking space shot by a camera during a preset time period before the current moment;
calculating adjacent frame differences and average frame differences of all image frames in the monitoring video;
selecting, as three candidate key frames, a first image frame and a second image frame corresponding to the maximum frame difference among all adjacent frame differences, and a third image frame whose frame difference with its previous image frame is closest to the average frame difference;
respectively calculating a first frame difference between the first image frame and the third image frame and a second frame difference between the third image frame and the second image frame;
identifying a first occupation state of a roadside parking space in the third image frame through a target detection algorithm;
and judging a second occupation state of the roadside parking space in the first image frame and/or a third occupation state in the second image frame according to the first frame difference, the second frame difference and the first occupation state.
According to a second aspect of the embodiments of the present disclosure, a parking space state recognition system is provided, including:
the acquisition module is used for acquiring a monitoring video of the roadside parking space shot by a camera during a preset time period before the current moment;
the first calculation module is used for calculating the adjacent frame difference and the average frame difference of all the image frames in the monitoring video;
the selection module is used for selecting, as three candidate key frames, a first image frame and a second image frame corresponding to the maximum frame difference among all adjacent frame differences, and a third image frame whose frame difference with its previous image frame is closest to the average frame difference;
the second calculation module is used for calculating a first frame difference between the first image frame and a third image frame and a second frame difference between the third image frame and a second image frame;
the identification module is used for identifying a first occupation state of the roadside parking space in the third image frame through a target detection algorithm;
and the first judging module is used for judging a second occupation state of the roadside parking space in the first image frame and/or a third occupation state in the second image frame according to the first frame difference, the second frame difference and the first occupation state.
According to a third aspect of the embodiments of the present disclosure, there is provided a terminal device, including:
a processor; and
a memory having executable code stored thereon, which when executed by the processor, causes the processor to perform the method as described above.
According to a fourth aspect of embodiments of the present disclosure, there is provided a non-transitory machine-readable storage medium having stored thereon executable code, which when executed by a processor of an electronic device, causes the processor to perform the method as described above.
The technical scheme provided by the embodiments of the disclosure can have the following beneficial effects: a change between the occupied and idle states of a parking space produces a marked difference between video frames, so the two key image frames in which the occupation state changes can be located by the largest adjacent frame difference, and vehicles in the parking spaces do not need to be continuously identified; in addition, the parking space states in the other two key image frames can be identified by taking the parking space state in the third key image frame as a reference.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The above and other objects, features and advantages of the present disclosure will become more apparent by describing in greater detail exemplary embodiments thereof with reference to the attached drawings, in which like reference numerals generally represent like parts throughout.
Fig. 1 is a schematic flow chart illustrating a parking space status identification method according to an exemplary embodiment of the present disclosure;
fig. 2 is a schematic structural diagram illustrating a parking space status recognition system according to an exemplary embodiment of the present disclosure;
FIG. 3 is a schematic diagram illustrating a computing device according to an exemplary embodiment of the present disclosure.
Detailed Description
Preferred embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While the preferred embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
The terminology used in the present disclosure is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. As used in this disclosure and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It should be understood that although the terms "first," "second," "third," etc. may be used in this disclosure to describe various information, these information should not be limited to these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present disclosure. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the present disclosure, "a plurality" means two or more unless specifically limited otherwise.
Technical solutions of embodiments of the present disclosure are described in detail below with reference to the accompanying drawings.
Fig. 1 is a schematic flow chart illustrating a parking space state identification method according to an exemplary embodiment of the present disclosure.
Referring to fig. 1, the method includes:
s11, acquiring a monitoring video shot by a camera in a preset time period before the current time for the roadside parking space;
s12, calculating the adjacent frame difference and the average frame difference of all the image frames in the monitoring video;
s13, selecting a first image frame and a second image frame corresponding to the maximum frame difference in all adjacent frame differences, and a third image frame with the frame difference closest to the average frame difference with the previous image frame as three candidate key frames;
s14, respectively calculating a first frame difference between the first image frame and a third image frame and a second frame difference between the third image frame and a second image frame;
s15, identifying a first occupation state of the roadside parking space in the third image frame through a target detection algorithm;
and S16, judging a second occupation state of the roadside parking space in the first image frame and/or a third occupation state in the second image frame according to the first frame difference, the second frame difference and the first occupation state.
According to the technical scheme, the marked difference between the occupied and idle states of a parking space in the video is used to locate the two key image frames in which the occupation state changes, so vehicles in the parking spaces do not need to be continuously identified; in addition, the parking space states in the other two key image frames can be identified by taking the parking space state in the third key image frame as a reference.
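As an illustrative sketch (not part of the patent text), the frame-difference computation and key-frame selection of steps S12 and S13 can be written in Python. The mean-absolute-pixel-difference metric and the grayscale list-of-lists frame format are assumptions of this sketch; the patent does not prescribe a particular frame-difference measure:

```python
def adjacent_frame_diffs(frames):
    """Frame difference between each pair of consecutive frames,
    measured here as the mean absolute pixel difference."""
    diffs = []
    for prev, cur in zip(frames, frames[1:]):
        total = sum(abs(a - b)
                    for row_p, row_c in zip(prev, cur)
                    for a, b in zip(row_p, row_c))
        diffs.append(total / (len(prev) * len(prev[0])))
    return diffs

def select_key_frames(frames):
    """Return the indices of the three candidate key frames and the average diff.

    first/second: the frames around the largest adjacent frame difference,
    i.e. the frames between which the occupation state changed;
    third: the frame whose difference with its previous frame is closest
    to the average, i.e. a frame with typical, stable motion.
    """
    diffs = adjacent_frame_diffs(frames)
    avg = sum(diffs) / len(diffs)
    k = max(range(len(diffs)), key=lambda i: diffs[i])
    third = min(range(len(diffs)), key=lambda i: abs(diffs[i] - avg)) + 1
    return k, k + 1, third, avg
```

On a clip in which the occupation state flips once, the largest adjacent difference straddles the flip, so `first` and `second` bracket the event while `third` serves as the stable reference frame passed to the target detection algorithm.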
Wherein, step S16 specifically includes:
when the first frame difference is within a set range, the second frame difference is outside the set range, and the first occupation state is occupation, the second occupation state is occupation, and the third occupation state is idle;
when the first frame difference is within a set range, the second frame difference is outside the set range, and the first occupation state is idle, the second occupation state is idle, and the third occupation state is occupied;
when the first frame difference is out of the set range, the second frame difference is in the set range, and the first occupation state is occupation, the second occupation state is idle, and the third occupation state is occupation;
and when the first frame difference is out of the set range, the second frame difference is within the set range, and the first occupation state is idle, the second occupation state is occupied, and the third occupation state is idle.
Specifically, when the first frame difference is within the setting range and the second frame difference is outside the setting range, it indicates that the occupation states of the first image frame and the third image frame are the same, whereas when the first frame difference is outside the setting range and the second frame difference is within the setting range, it indicates that the occupation states of the second image frame and the third image frame are the same, so that the occupation states in the first image frame and the second image frame can be determined accordingly.
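The four cases form a small decision table: whichever of the two frame differences falls inside the set range ties the third (reference) frame to one neighbour, and the other neighbour takes the opposite state. A minimal sketch, assuming the set range is an interval [lo, hi] and states are the strings 'occupied' and 'idle' (both representations are choices made here, not taken from the patent):

```python
def infer_states(first_diff, second_diff, first_state, lo, hi):
    """Infer the occupation states of the first and second key frames from the
    reference state identified in the third frame.  A frame difference inside
    [lo, hi] indicates that the two frames it compares share the same state."""
    in_range = lambda d: lo <= d <= hi
    other = 'idle' if first_state == 'occupied' else 'occupied'
    if in_range(first_diff) and not in_range(second_diff):
        # first and third image frames share a state; the second frame flipped
        return first_state, other
    if not in_range(first_diff) and in_range(second_diff):
        # third and second image frames share a state; the first frame flipped
        return other, first_state
    return None  # neither pattern matched; a new third frame would be selected
```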
In the above method, further comprising:
s17, identifying and marking the outline of the object in the image frame, wherein the object at least comprises a vehicle;
before step S16, the method further includes:
s18, placing the contour of the vehicle occupying the same parking space in the last image frame of the third image frame at the same position in the third image frame, judging whether the ratio of the overlapping part of the contour of the vehicle occupying the same parking space in the last image frame and the contour of other objects in the third image frame to the contour of the vehicle occupying the same parking space in the last image frame is lower than a preset ratio, if so, judging the second occupation state of the roadside parking space in the first image frame or the second image frame according to the first frame difference, the second frame difference and the first occupation state, otherwise, not executing, and selecting a fourth image frame which is close to the average frame difference in the frame difference of the previous image frame to replace the third image frame until the ratio is lower than the preset ratio.
Besides the vehicles in the parking spaces, other objects in the video, such as pedestrians and passing vehicles, can interfere with the accuracy of the occupation state judgment. By comparing the overlap ratio of the contours described above, a small ratio indicates that interfering objects occlude the parking space only slightly, so the frame can be used for judging the occupation state; otherwise an image frame needs to be selected again.
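The occlusion check of step S18 can be sketched as follows. Contours are approximated here by axis-aligned bounding boxes rasterized into pixel sets; this simplification, and the function names, are assumptions of this sketch (the patent operates on the marked object contours themselves):

```python
def box_pixels(box):
    """Rasterize an axis-aligned box (x1, y1, x2, y2) into a set of pixels."""
    x1, y1, x2, y2 = box
    return {(x, y) for x in range(x1, x2) for y in range(y1, y2)}

def occlusion_ratio(vehicle_box, other_boxes):
    """Fraction of the previous frame's vehicle contour that is covered by
    other objects' contours when placed at the same position in the third frame."""
    vehicle = box_pixels(vehicle_box)
    covered = set().union(*[box_pixels(b) for b in other_boxes])
    return len(vehicle & covered) / len(vehicle)

def third_frame_usable(vehicle_box, other_boxes, preset_ratio):
    # Usable only when occluders cover less than the preset ratio of the contour.
    return occlusion_ratio(vehicle_box, other_boxes) < preset_ratio
```

If `third_frame_usable` returns False, a fourth image frame whose frame difference with its previous frame is close to the average would be selected and the check repeated.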
In the above method, further comprising:
s19, intercepting image frames of a pre-defined virtual parking space area from the monitoring video;
step S12, specifically including:
and calculating adjacent frame differences and average frame differences of the image frames of all the virtual parking space areas.
Specifically, in order to reduce the amount of calculation, only the image frames of the pre-defined virtual parking space area may be cut out to perform the frame difference calculation.
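A minimal sketch of that cropping step, assuming row-major grayscale frames and a rectangular virtual parking space area given as (x1, y1, x2, y2):

```python
def crop_parking_area(frame, roi):
    """Keep only the virtual parking-space rectangle (x1, y1, x2, y2) of a frame,
    so the later frame-difference computations touch far fewer pixels."""
    x1, y1, x2, y2 = roi
    return [row[x1:x2] for row in frame[y1:y2]]
```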
Corresponding to the embodiments of the method described above, the present disclosure further provides a parking space state identification system, a terminal device, and corresponding embodiments.
Fig. 2 is a schematic structural diagram of a parking space state identification system according to an exemplary embodiment of the present disclosure.
Referring to fig. 2, the system includes:
the acquisition module is used for acquiring a monitoring video of the roadside parking space shot by a camera during a preset time period before the current moment;
the first calculation module is used for calculating the adjacent frame difference and the average frame difference of all the image frames in the monitoring video;
the selection module is used for selecting, as three candidate key frames, a first image frame and a second image frame corresponding to the maximum frame difference among all adjacent frame differences, and a third image frame whose frame difference with its previous image frame is closest to the average frame difference;
the second calculation module is used for calculating a first frame difference between the first image frame and a third image frame and a second frame difference between the third image frame and a second image frame;
the identification module is used for identifying a first occupation state of the roadside parking space in the third image frame through a target detection algorithm;
and the first judging module is used for judging a second occupation state of the roadside parking space in the first image frame and/or a third occupation state in the second image frame according to the first frame difference, the second frame difference and the first occupation state.
In the above system, the first determining module is specifically configured to:
when the first frame difference is within a set range, the second frame difference is outside the set range, and the first occupation state is occupation, the second occupation state is occupation, and the third occupation state is idle;
when the first frame difference is within a set range, the second frame difference is outside the set range, and the first occupation state is idle, the second occupation state is idle, and the third occupation state is occupied;
when the first frame difference is out of the set range, the second frame difference is in the set range, and the first occupation state is occupation, the second occupation state is idle, and the third occupation state is occupation;
and when the first frame difference is out of the set range, the second frame difference is within the set range, and the first occupation state is idle, the second occupation state is occupied, and the third occupation state is idle.
In the above system, further comprising:
the identification marking module is used for identifying and marking the outline of an object in the image frame, wherein the object at least comprises a vehicle;
a second judging module, configured to, before the first judging module performs its judgment: place the contour of the vehicle occupying the parking space in the image frame immediately preceding the third image frame at the same position in the third image frame; judge whether the ratio of the part of this vehicle contour that overlaps the contours of other objects in the third image frame to the whole vehicle contour is lower than a preset ratio; if so, call the first judging module; otherwise, not call it, and instead select a fourth image frame whose frame difference with its previous image frame is close to the average frame difference to replace the third image frame, until the ratio is lower than the preset ratio.
In the above system, further comprising:
the intercepting module is used for intercepting image frames of a pre-defined virtual parking space area from the monitoring video;
the first calculation module is specifically configured to:
and calculating the adjacent frame differences and the average frame difference of the image frames of all the virtual parking space areas.

With regard to the system in the above embodiment, the specific manner in which each module performs its operations has been described in detail in the embodiments of the method and will not be elaborated here.
FIG. 3 is a schematic diagram illustrating a computing device according to an exemplary embodiment of the present disclosure.
Referring to fig. 3, computing device 300 includes memory 310 and processor 320.
The processor 320 may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 310 may include various types of storage units, such as system memory, read-only memory (ROM), and permanent storage. The ROM may store static data or instructions needed by the processor 320 or other modules of the computer. The permanent storage may be a readable and writable, non-volatile storage device that does not lose its stored instructions and data even after the computer is powered off. In some embodiments, a mass storage device (e.g., a magnetic or optical disk, or flash memory) is used as the permanent storage. In other embodiments, the permanent storage may be a removable storage device (e.g., a floppy disk or an optical drive). The system memory may be a readable and writable memory device or a volatile readable and writable memory device, such as dynamic random access memory, and may store instructions and data needed by some or all of the processors at runtime. Furthermore, the memory 310 may comprise any combination of computer-readable storage media, including various types of semiconductor memory chips (DRAM, SRAM, SDRAM, flash memory, programmable read-only memory) and magnetic and/or optical disks. In some embodiments, the memory 310 may include a readable and/or writable removable storage device, such as a compact disc (CD), a read-only digital versatile disc (e.g., DVD-ROM or dual-layer DVD-ROM), a read-only Blu-ray disc, an ultra-density optical disc, a flash memory card (e.g., an SD card, a mini SD card, or a Micro-SD card), or a magnetic floppy disk. Computer-readable storage media do not contain carrier waves or transitory electronic signals transmitted by wireless or wired means.
The memory 310 has stored thereon executable code that, when processed by the processor 320, may cause the processor 320 to perform some or all of the methods described above.
The aspects of the present disclosure have been described in detail above with reference to the accompanying drawings. In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments. Those skilled in the art should also appreciate that the acts and modules referred to in the specification are not necessarily required by the invention. In addition, it can be understood that steps in the method of the embodiment of the present disclosure may be sequentially adjusted, combined, and deleted according to actual needs, and modules in the device of the embodiment of the present disclosure may be combined, divided, and deleted according to actual needs.
Furthermore, the method according to the present disclosure may also be implemented as a computer program or computer program product comprising computer program code instructions for performing some or all of the steps of the above-described method of the present disclosure.
Alternatively, the present disclosure may also be embodied as a non-transitory machine-readable storage medium (or computer-readable storage medium, or machine-readable storage medium) having stored thereon executable code (or a computer program, or computer instruction code) that, when executed by a processor of an electronic device (or computing device, server, or the like), causes the processor to perform some or all of the various steps of the above-described method according to the present disclosure.
Those of skill would further appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the disclosure herein may be implemented as electronic hardware, computer software, or combinations of both.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems and methods according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Having described embodiments of the present disclosure, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein is chosen in order to best explain the principles of the embodiments, the practical application, or improvements made to the technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (10)

1. A parking space state identification method is characterized by comprising the following steps:
acquiring a monitoring video of the roadside parking space shot by a camera during a preset time period before the current moment;
calculating adjacent frame differences and average frame differences of all image frames in the monitoring video;
selecting, as three candidate key frames, a first image frame and a second image frame corresponding to the maximum frame difference among all adjacent frame differences, and a third image frame whose frame difference with its previous image frame is closest to the average frame difference;
respectively calculating a first frame difference between the first image frame and the third image frame and a second frame difference between the third image frame and the second image frame;
identifying a first occupation state of a roadside parking space in the third image frame through a target detection algorithm;
and judging a second occupation state of the roadside parking space in the first image frame and/or a third occupation state in the second image frame according to the first frame difference, the second frame difference and the first occupation state.
2. The method according to claim 1, wherein the judging a second occupation state of the roadside parking space in the first image frame and/or a third occupation state in the second image frame according to the first frame difference, the second frame difference and the first occupation state specifically comprises:
when the first frame difference is within a set range, the second frame difference is outside the set range, and the first occupation state is occupation, the second occupation state is occupation, and the third occupation state is idle;
when the first frame difference is within a set range, the second frame difference is outside the set range, and the first occupation state is idle, the second occupation state is idle, and the third occupation state is occupied;
when the first frame difference is out of the set range, the second frame difference is in the set range, and the first occupation state is occupation, the second occupation state is idle, and the third occupation state is occupation;
and when the first frame difference is out of the set range, the second frame difference is within the set range, and the first occupation state is idle, the second occupation state is occupied, and the third occupation state is idle.
3. The parking space state identification method according to claim 1, further comprising: identifying and marking a contour of an object in an image frame, the object including at least a vehicle;
before judging a second occupation state of the roadside parking space in the first image frame and/or a third occupation state in the second image frame according to the first frame difference, the second frame difference and the first occupation state, the method further comprises:
placing the contour of the vehicle occupying the parking space in the image frame immediately preceding the third image frame at the same position in the third image frame; judging whether the ratio of the part of this vehicle contour that overlaps the contours of other objects in the third image frame to the whole vehicle contour is lower than a preset ratio; if so, judging the second occupation state of the roadside parking space in the first image frame or the second image frame according to the first frame difference, the second frame difference and the first occupation state; otherwise, not performing the judgment, and instead selecting a fourth image frame whose frame difference with its previous image frame is close to the average frame difference to replace the third image frame, until the ratio is lower than the preset ratio.
4. The parking space state identification method according to any one of claims 1 to 3, further comprising:
cropping image frames of a pre-defined virtual parking space area from the monitoring video;
the calculating of the adjacent frame differences and the average frame difference of all the image frames in the monitoring video specifically includes:
calculating the adjacent frame differences and the average frame difference of the image frames of all the virtual parking space areas.
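Under the assumption that frames are grayscale images stored as lists of pixel rows, the cropping of the virtual parking space area and the adjacent/average frame-difference computation of claim 4 might look like the following sketch (all function names are illustrative):

```python
def crop(frame, x0, y0, x1, y1):
    """Crop the pre-defined virtual parking space rectangle from a frame."""
    return [row[x0:x1] for row in frame[y0:y1]]

def frame_diff(a, b):
    """Mean absolute pixel difference between two equally sized crops."""
    total = sum(abs(pa - pb) for ra, rb in zip(a, b) for pa, pb in zip(ra, rb))
    return total / (len(a) * len(a[0]))

def adjacent_and_average_diffs(frames, roi):
    """Adjacent frame differences over the cropped region, plus their mean."""
    crops = [crop(f, *roi) for f in frames]
    diffs = [frame_diff(crops[i], crops[i + 1]) for i in range(len(crops) - 1)]
    return diffs, sum(diffs) / len(diffs)
```

Restricting the computation to the cropped region keeps the frame difference sensitive to the parking space itself rather than to passing traffic elsewhere in the scene.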
5. A parking space state identification system, characterized by comprising:
the acquisition module is used for acquiring a monitoring video shot by a camera in a preset time period before the current time for the roadside parking space;
the first calculation module is used for calculating the adjacent frame difference and the average frame difference of all the image frames in the monitoring video;
the selection module is used for selecting, as three candidate key frames, the first image frame and the second image frame corresponding to the maximum of all the adjacent frame differences, and a third image frame whose frame difference from the previous image frame is closest to the average frame difference;
the second calculation module is used for calculating a first frame difference between the first image frame and the third image frame and a second frame difference between the third image frame and the second image frame;
the identification module is used for identifying a first occupation state of the roadside parking space in the third image frame through a target detection algorithm;
and the first judging module is used for judging a second occupation state in the first image frame and/or a third occupation state in the second image frame of the roadside parking space according to the first frame difference, the second frame difference and the first occupation state.
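The selection module's choice of the three candidate key frames can be sketched directly from the list of adjacent frame differences; `select_key_frames` is an assumed helper name, not a term from the patent:

```python
def select_key_frames(diffs):
    """diffs[i] is the frame difference between frame i and frame i+1.
    Returns indices (first, second, third) of the three candidate key frames:
    the adjacent pair with the largest difference, plus the frame whose
    difference from its predecessor is closest to the average difference."""
    k = max(range(len(diffs)), key=lambda i: diffs[i])
    avg = sum(diffs) / len(diffs)
    j = min(range(len(diffs)), key=lambda i: abs(diffs[i] - avg))
    return k, k + 1, j + 1  # third frame: the later frame of the j-th pair
```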
6. The parking space state identification system according to claim 5, wherein the first determination module is specifically configured to:
when the first frame difference is within a set range, the second frame difference is outside the set range, and the first occupation state is occupation, the second occupation state is occupation, and the third occupation state is idle;
when the first frame difference is within a set range, the second frame difference is outside the set range, and the first occupation state is idle, the second occupation state is idle, and the third occupation state is occupied;
when the first frame difference is out of the set range, the second frame difference is in the set range, and the first occupation state is occupation, the second occupation state is idle, and the third occupation state is occupation;
and when the first frame difference is out of the set range, the second frame difference is within the set range, and the first occupation state is idle, the second occupation state is occupied, and the third occupation state is idle.
7. The parking space state identification system according to claim 5, further comprising:
the identification marking module is used for identifying and marking the outline of an object in the image frame, wherein the object at least comprises a vehicle;
a second judging module, configured to, before the first judging module judges the second occupation state of the roadside parking space in the first image frame and/or the third occupation state in the second image frame according to the first frame difference, the second frame difference and the first occupation state, place the contour of the vehicle occupying the same parking space in the image frame preceding the third image frame at the same position in the third image frame, and judge whether the ratio of the part of that contour overlapped by the contours of other objects in the third image frame to the whole of that contour is lower than a preset ratio; if so, the first judging module is called; otherwise, it is not called, and a fourth image frame, whose frame difference from its preceding image frame is next closest to the average frame difference, is selected to replace the third image frame, until the ratio is lower than the preset ratio.
8. The parking space state identification system according to any one of claims 5 to 7, further comprising:
a cropping module, used for cropping image frames of a pre-defined virtual parking space area from the monitoring video;
the first calculation module is specifically configured to:
calculate the adjacent frame differences and the average frame difference of the image frames of all the virtual parking space areas.
9. A terminal device, comprising:
a processor; and
a memory having executable code stored thereon, which when executed by the processor, causes the processor to perform the method of any one of claims 1-4.
10. A non-transitory machine-readable storage medium having stored thereon executable code, which when executed by a processor of an electronic device, causes the processor to perform the method of any one of claims 1-4.
CN202010523649.7A 2020-06-10 2020-06-10 Parking space state identification method, system, medium and equipment Active CN111814560B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010523649.7A CN111814560B (en) 2020-06-10 2020-06-10 Parking space state identification method, system, medium and equipment

Publications (2)

Publication Number Publication Date
CN111814560A true CN111814560A (en) 2020-10-23
CN111814560B CN111814560B (en) 2023-12-26

Family

ID=72845694

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010523649.7A Active CN111814560B (en) 2020-06-10 2020-06-10 Parking space state identification method, system, medium and equipment

Country Status (1)

Country Link
CN (1) CN111814560B (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101656023A (en) * 2009-08-26 2010-02-24 西安理工大学 Management method of indoor car park in video monitor mode
CN101978696A (en) * 2008-03-18 2011-02-16 英特尔公司 Capturing event information using a digital video camera
KR101031995B1 (en) * 2010-08-16 2011-05-02 (유)티에스산업개발 Guiding system for traffic safety in school zone
US20120133766A1 (en) * 2010-11-26 2012-05-31 Hon Hai Precision Industry Co., Ltd. Vehicle rearview back-up system and method
CN104236866A (en) * 2014-09-01 2014-12-24 南京林业大学 Automobile headlamp test data error correction method based on driving direction
JP2015185135A (en) * 2014-03-26 2015-10-22 株式会社Jvcケンウッド Parking recognition device, parking recognition method and program
JP2017001638A (en) * 2015-06-16 2017-01-05 西日本旅客鉄道株式会社 Train position detection system using image processing, and train position and environmental change detection system using image processing
CN106504580A (en) * 2016-12-07 2017-03-15 深圳市捷顺科技实业股份有限公司 A kind of method for detecting parking stalls and device
CN106677094A (en) * 2017-03-27 2017-05-17 深圳市捷顺科技实业股份有限公司 Barrier gate control method and device
CN107665599A (en) * 2016-07-28 2018-02-06 北海和思科技有限公司 The parking position automatic identifying method of view-based access control model detection

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
刘光蓉, 管庶安, 周红: "Vehicle contour extraction based on digital image processing technology", Computer & Digital Engineering, no. 04 *
屈俞岐; 曹佳乐; 杨彭晨: "Parking guidance system in an Internet-of-Things mode", Internet of Things Technologies, no. 12, pages 43 - 45 *
杨泷迪; 姜月秋; 高宏伟: "Research on vision-based intelligent parking space recognition", Journal of Shenyang Ligong University, no. 01 *
颜江峰; 毛恩荣: "Machine-vision-based detection of stopped vehicles on expressways", Computer Engineering and Design, no. 16 *

Similar Documents

Publication Publication Date Title
CN113822223A (en) Method and device for detecting shielding movement of camera
CN111930874A (en) Data acquisition method and electronic equipment
CN110689134A (en) Method, apparatus, device and storage medium for performing machine learning process
CN111562973B (en) Map data task execution method and electronic equipment
CN115100654A (en) Water level identification method and device based on computer vision algorithm
CN114882145A (en) Lane line fitting method, lane line fitting apparatus, and computer-readable storage medium
CN111814560B (en) Parking space state identification method, system, medium and equipment
CN117197796A (en) Vehicle shielding recognition method and related device
CN110555344B (en) Lane line recognition method, lane line recognition device, electronic device, and storage medium
CN113160572A (en) Method and device for managing car rental violation and computing equipment
CN111563425A (en) Traffic incident identification method and electronic equipment
CN115170851A (en) Image clustering method and device
CN109784238A (en) A kind of method and device of determining object to be identified
CN114397671A (en) Course angle smoothing method and device of target and computer readable storage medium
CN112749677A (en) Method and device for identifying mobile phone playing behaviors and electronic equipment
CN111046820A (en) Statistical method and device for vehicles in automobile roll-on-roll-off ship and intelligent terminal
CN116092039B (en) Display control method and device of automatic driving simulation system
CN111930464B (en) Resource allocation method and device of map engine and electronic equipment
CN113313770A (en) Calibration method and device of automobile data recorder
CN112288908B (en) Time generation method and device of automobile data recorder
CN113538546B (en) Target detection method, device and equipment for automatic driving
CN113821734B (en) Method, device, equipment and medium for identifying double drivers based on track data
CN115112051A (en) Drilling machine angle detection method and device based on computer vision
CN113901926A (en) Target detection method, device, equipment and storage medium for automatic driving
CN113705735A (en) Label classification method and system based on mass information

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant