CN116844097B - Intelligent man-vehicle association analysis method and system - Google Patents


Info

Publication number
CN116844097B
CN116844097B (application CN202310810084.4A)
Authority
CN
China
Prior art keywords
target
detection information
area
sub
associated sub
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310810084.4A
Other languages
Chinese (zh)
Other versions
CN116844097A (en)
Inventor
陶智敏
汪志锋
刘全君
沈韬
王青旺
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Anlu International Technology Co ltd
Original Assignee
Beijing Anlu International Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Anlu International Technology Co ltd
Priority to CN202310810084.4A
Publication of CN116844097A
Application granted
Publication of CN116844097B
Legal status: Active
Anticipated expiration

Landscapes

  • Traffic Control Systems (AREA)

Abstract

The invention relates to the field of computers and provides an intelligent human-vehicle association analysis method and system. The method comprises: locating a first sub-area within a first area, the first sub-area being a single-vehicle parking space; summarizing first target entry/exit detection information, including a first time, for a plurality of associated sub-areas, where each associated sub-area comprises two adjacent first sub-areas, and acquiring first monitoring images covering the first sub-areas; and judging whether a moving target corresponding to the first target entry/exit detection information appears in the first monitoring images. With the technical scheme of the embodiments of the application, accurate human-vehicle association analysis can be performed even when a person deliberately evades monitoring, providing a direct control basis and analysis reference for personnel of the analysis center.

Description

Intelligent man-vehicle association analysis method and system
Technical Field
The invention belongs to the field of computers, and particularly relates to an intelligent human-vehicle association analysis method and system.
Background
In vehicle parking areas, particularly areas where multiple vehicles are parked close together, association analysis between people and vehicles is required to ensure the safety of parked vehicles and reduce property loss.
Association analysis for densely parked vehicles is currently performed mainly by manual review of surveillance footage. However, when visual blind spots exist between vehicles, or when a person deliberately evades monitoring, the prior art cannot meet the requirements of such association analysis.
Disclosure of Invention
The embodiments of the invention aim to provide an intelligent human-vehicle association analysis method and system to solve the problems described in the background section.
The embodiments of the invention are realized as follows. In one aspect, the intelligent human-vehicle association analysis method comprises the following steps:
locating a first sub-area among the sub-areas of a first area, the first sub-area being a single-vehicle parking space;
summarizing first target entry/exit detection information, including a first time, for a plurality of associated sub-areas, and acquiring first monitoring images covering the first sub-areas, wherein each associated sub-area comprises two adjacent first sub-areas;
judging whether a moving target corresponding to the first target entry/exit detection information appears in the first monitoring image;
if not, acquiring second target entry/exit detection information including a second time, and locating the first associated sub-area corresponding to the second target entry/exit detection information, wherein the second time is later than the first time, and the plurality of associated sub-areas include the first associated sub-area;
identifying the entry/exit direction of the moving target from the second target entry/exit detection information, and determining a capture zone according to the first associated sub-area and the entry/exit direction, wherein the capture zone comprises a capture area and/or a capture route; collecting target monitoring images covering the capture zone, and transmitting the target monitoring images to an analysis center.
As a further aspect of the invention, locating the first sub-area among the sub-areas of the first area specifically includes:
reading vehicle occupancy information for the sub-areas;
determining the valid vehicle occupancies included in the vehicle occupancy information according to vehicle entry/exit statistics;
and taking each sub-area occupied by a valid vehicle as a first sub-area.
As a still further aspect of the invention, summarizing the first target entry/exit detection information, including the first time, for the plurality of associated sub-areas specifically includes:
acquiring target detection information reported by first detection devices, wherein the first detection devices are arranged within a preset adjacent range of each associated sub-area, a preset side range of each associated sub-area, and a preset middle range of each first sub-area;
identifying the activity range of the moving target within the preset adjacent range according to the target detection information and the position distribution of the first detection devices;
judging whether the activity range covers the set non-edge positions at the head and tail of the first sub-area;
if so, taking the associated sub-area whose set non-edge positions at the head and tail are covered as the target associated sub-area, and taking the target detection information of the target associated sub-area as the first target entry/exit detection information.
As a still further aspect of the invention, summarizing the first target entry/exit detection information, including the first time, for the plurality of associated sub-areas further includes:
when the activity range does not cover the set non-edge positions at the head and tail of the first sub-area, judging whether the activity range reaches the section between those set non-edge positions;
if so, judging whether the duration of the target detection information corresponding to reaching that section reaches a first duration;
and if so, taking the associated sub-area whose duration reaches the first duration as the target associated sub-area, and taking the target detection information of the target associated sub-area as the first target entry/exit detection information.
As a further aspect of the invention, judging whether the moving target corresponding to the first target entry/exit detection information appears in the first monitoring image includes:
extracting the duration of the first target entry/exit detection information;
and judging whether a moving target appears in the first monitoring image within that duration.
As a further aspect of the present invention, the method further includes:
when it is detected that a moving target corresponding to the first target entry/exit detection information appears in the first monitoring image, directly associating the first monitoring image corresponding to the moving target with the associated sub-area;
and sending the associated first monitoring image to the analysis center.
As a further aspect of the invention, identifying the entry/exit direction of the moving target from the second target entry/exit detection information and determining the capture zone according to the first associated sub-area and the entry/exit direction specifically includes:
when the direction of the moving target in the second target entry/exit detection information is detected to be toward the associated sub-area, determining a rectangular area containing the first associated sub-area, and taking the rectangular area as the capture area.
As a further aspect of the invention, identifying the entry/exit direction of the moving target from the second target entry/exit detection information and determining the capture zone according to the first associated sub-area and the entry/exit direction further includes:
when the direction of the moving target in the second target entry/exit detection information is away from the associated sub-area, identifying the first routes between the position where the moving target moved away and all the exits of the first area, and determining a capture route comprising the first routes.
As a further aspect of the invention, identifying the entry/exit direction of the moving target from the second target entry/exit detection information and determining the capture zone according to the first associated sub-area and the entry/exit direction further includes:
when the direction of the moving target in the second target entry/exit detection information passes through a preset middle range, identifying the first sub-area corresponding to that preset middle range, and judging that the moving target has crossed under the vehicle in the corresponding first sub-area;
and determining, according to the direction in which the moving target passes through the preset middle range, a plurality of sub-areas adjacent to the corresponding first sub-area as capture areas.
In another aspect, an intelligent human-vehicle association analysis system comprises:
a sub-area locating module for locating a first sub-area among the sub-areas of a first area, the first sub-area being a single-vehicle parking space;
a detection information summarizing module for summarizing first target entry/exit detection information, including a first time, for a plurality of associated sub-areas, and acquiring first monitoring images covering the first sub-areas, wherein each associated sub-area comprises two adjacent first sub-areas;
a judging module for judging whether a moving target corresponding to the first target entry/exit detection information appears in the first monitoring image;
a first associated sub-area locating module for acquiring, if no moving target appears, second target entry/exit detection information including a second time, and locating the first associated sub-area corresponding to the second target entry/exit detection information, wherein the second time is later than the first time, and the plurality of associated sub-areas include the first associated sub-area;
and a capture and transmission module for identifying the entry/exit direction of the moving target from the second target entry/exit detection information, determining a capture zone according to the first associated sub-area and the entry/exit direction, the capture zone comprising a capture area and/or a capture route, collecting target monitoring images covering the capture zone, and transmitting the target monitoring images to the analysis center.
According to the intelligent human-vehicle association analysis method and system provided by the embodiments of the invention, the first associated sub-area corresponding to the second target entry/exit detection information is located, the second time being later than the first time and the plurality of associated sub-areas including the first associated sub-area; the entry/exit direction of the moving target is identified from the second target entry/exit detection information; a capture zone comprising a capture area and/or a capture route is determined according to the first associated sub-area and the entry/exit direction; and target monitoring images covering the capture zone are collected and transmitted to the analysis center. By analyzing the first sub-area that the moving target has entered, accurate person-vehicle association analysis can be performed even when the person may be evading monitoring; the first associated sub-area where property loss may occur can be tracked; and the capture zone obtained through analysis allows target monitoring images to be collected that provide a direct control basis and analysis reference for personnel of the analysis center.
Drawings
FIG. 1 is the main flowchart of an intelligent human-vehicle association analysis method.
FIG. 2 is a flowchart of locating a first sub-area of a first area in the intelligent human-vehicle association analysis method.
FIG. 3 is a first flowchart of summarizing first target entry/exit detection information, including a first time, for a plurality of associated sub-areas in the intelligent human-vehicle association analysis method.
FIG. 4 is a second flowchart of summarizing first target entry/exit detection information, including a first time, for a plurality of associated sub-areas in the intelligent human-vehicle association analysis method.
FIG. 5 is a flowchart of determining a capture zone based on the first associated sub-area and the entry/exit direction in the intelligent human-vehicle association analysis method.
FIG. 6 is the main structure diagram of an intelligent human-vehicle association analysis system.
Detailed Description
The present invention will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present invention more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
Specific implementations of the invention are described in detail below in connection with specific embodiments.
The intelligent human-vehicle association analysis method and system provided by the invention solve the technical problems in the background technology.
It should be noted that, in the present application, the first monitoring image may be obtained by a panoramic camera and/or a detail camera, generally a panoramic camera, while the target monitoring image is generally obtained by a detail camera; whichever has the larger applicable coverage may be adopted.
As shown in fig. 1, a main flow chart of an intelligent human-vehicle association analysis method according to an embodiment of the present invention is provided, where the intelligent human-vehicle association analysis method includes:
step S10: locating a first sub-zone of a first zone, the first sub-zone being a single vehicle parking zone; the first area comprises a plurality of subareas, the first subarea is a plurality of subareas, vehicles are parked in the first subarea, and the first subarea is used for indicating that the first area possibly relates to the relevant analysis and detection of the plurality of vehicles;
step S11: summarizing first target access detection information of a plurality of associated subareas, wherein the first target access detection information comprises first time, and acquiring first monitoring images acting on the first subareas, wherein each associated subarea comprises every two adjacent first subareas; each associated sub-area comprises first sub-areas adjacent to each other, and because vehicles are necessarily arranged in the first sub-areas, the positions among the first sub-areas of the associated sub-areas possibly form a monitoring blind area, the target access condition of each associated sub-area can be directly detected through the acquisition of target access detection information, wherein the target access detection information of the target access exists at the initial moment is the first target access detection information, and when the target is detected in the target access detection information at a certain moment, the moment is marked as the first moment, and the first moment generally comprises the earliest moment; a presence must be entered, thus indicating at a first time that there is at least an active target entering the associated sub-region; the first monitoring image directly acts on the first area and is provided with at least one;
step S12: judging whether a movable target corresponding to the first target in-out detection information appears in the first monitoring image; because the positions among the first subareas of the associated subareas possibly form a monitoring blind area, a moving target corresponding to the first target in-out detection information cannot exist in the first monitoring image, or a scene that the moving target enters the associated subareas does not exist in the first monitoring image; this situation indicates that the active target is most likely to have a behavior that is intended to evade monitoring (e.g., bending the waist into an associated sub-area) or that the height cause is not being monitored, etc.; the moving target corresponding to the first target in-out detection information comprises at least one of a plurality of targets in the first target in-out detection information detected at the same time;
step S13: if not, acquiring second target access detection information comprising second time, and positioning a first associated sub-area corresponding to the second target access detection information, wherein the second time is a time after the first time, and the plurality of associated sub-areas comprise the first associated sub-area; when the movable target corresponding to the first target access detection information is not detected, the movable target has the possibility of being camouflaged to enter the associated sub-area, analysis is continuously performed based on the collected target access detection information, and when the corresponding second target access detection information is detected after the first moment, the second target access detection information corresponds to the same associated sub-area as the first target access detection information; marking the associated sub-area corresponding to the second target in-out detection information as a first associated sub-area;
step S14: identifying the entering and exiting direction of the movable target in the second target entering and exiting detection information, determining a capturing zone according to the first associated subarea and the entering and exiting direction, wherein the capturing zone comprises a capturing zone and/or a capturing route, collecting a target monitoring image covering the capturing zone, and transmitting the target monitoring image to an analysis center. Regarding the in-out direction of the movable target in the second target in-out detection information, since the movable target in-out detection information of the second target may go deep into the associated sub-area after the first moment, or go out from the other side of the associated sub-area, the target monitoring image covering the capturing zone is collected at this time, and the movable target with suspicious behavior can be further and directly analyzed based on the target monitoring image, so as to determine that the situation that the vehicle in the first associated sub-area is stolen or damaged may be caused, so that the related personnel in the analysis center can directly view and analyze based on the target monitoring image, and can take corresponding measures such as field viewing and tracking of the movable target if necessary.
When this embodiment is applied: the first sub-area among the sub-areas of the first area is located; first target entry/exit detection information, including a first time, is summarized for a plurality of associated sub-areas, and first monitoring images covering the first sub-areas are acquired, each associated sub-area comprising two adjacent first sub-areas; whether a moving target corresponding to the first target entry/exit detection information appears in the first monitoring image is judged; if not, second target entry/exit detection information including a second time is acquired, and the first associated sub-area corresponding to it is located, the second time being later than the first time; and the entry/exit direction of the moving target is identified from the second target entry/exit detection information, a capture zone comprising a capture area and/or a capture route is determined according to the first associated sub-area and the entry/exit direction, and target monitoring images covering the capture zone are collected and transmitted to the analysis center. In this way, the first sub-area that the moving target has entered can be analyzed, accurate person-vehicle association analysis can be performed even when the person may be evading monitoring, the first associated sub-area where property loss may occur can be tracked, and the collected target monitoring images provide a direct control basis and analysis reference for personnel of the analysis center.
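The flow of steps S10-S14 can be sketched as follows. This is a minimal illustration under assumptions of our own, not the patented implementation; all class, function, and field names (`DetectionEvent`, `analyze`, `monitor_shows_target`, the direction labels) are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class DetectionEvent:
    subarea_id: int   # associated sub-area that reported the event
    time: float       # detection time (first time / second time)
    direction: str    # "approach", "leave", or "cross"

def analyze(first_event, monitor_shows_target, later_events):
    """Sketch of steps S12-S14: if the moving target never appears on
    camera at the first time, fall back to the next detection event
    from the same associated sub-area and derive a capture zone."""
    if monitor_shows_target(first_event):
        return ("associate_and_send", first_event.subarea_id)  # steps S20/S21
    # Step S13: second detection info, strictly after the first time,
    # from the same associated sub-area.
    second = next((e for e in later_events
                   if e.time > first_event.time
                   and e.subarea_id == first_event.subarea_id), None)
    if second is None:
        return ("keep_watching", first_event.subarea_id)
    # Step S14: the kind of capture zone depends on the entry/exit direction.
    kind = {"approach": "capture_area",
            "leave": "capture_route",
            "cross": "adjacent_capture_areas"}[second.direction]
    return (kind, second.subarea_id)
```

The three direction labels correspond to the three branches described later (steps S1411 onward).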
As shown in fig. 2, as a preferred embodiment of the present invention, locating the first sub-area among the sub-areas of the first area specifically includes:
Step S101: reading vehicle occupancy information for the sub-areas. The vehicle occupancy information indicates which positions vehicles actually occupy; the vehicles here may include illegally parked vehicles that have not passed entry/exit statistics.
Step S102: determining the valid vehicle occupancies included in the vehicle occupancy information according to vehicle entry/exit statistics. Vehicles recorded in the entry/exit statistics are valid vehicles; vehicles not recorded are regarded as invalid.
Step S103: taking each sub-area occupied by a valid vehicle as a first sub-area. Using the sub-areas occupied by valid vehicles as first sub-areas ensures that non-compliant vehicles, such as illegally parked ones, are detected and excluded.
It can be understood that by excluding non-compliant vehicles and taking the sub-areas occupied by valid vehicles as first sub-areas, the first sub-areas where compliant vehicles are parked can be detected effectively, the validity of the detection data is ensured, and the amount of data processing is reduced.
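Steps S101-S103 amount to intersecting the physically occupied spaces with the spaces recorded in the entry/exit statistics. A minimal sketch; the data shapes (a dict of occupancy flags, a set of logged sub-area ids) are assumptions for illustration, not specified by the patent:

```python
def locate_first_subareas(occupancy, entry_exit_log):
    """occupancy: sub-area id -> True if a vehicle physically occupies it.
    entry_exit_log: set of sub-area ids whose vehicle passed the gate
    statistics. Only occupancies backed by the statistics are 'valid'
    (step S102); their sub-areas become the first sub-areas (step S103)."""
    occupied = {sid for sid, taken in occupancy.items() if taken}
    return sorted(occupied & entry_exit_log)  # illegally parked vehicles drop out
```

For example, a vehicle occupying space 1 without any gate record is excluded, which is exactly the filtering that reduces the downstream processing load.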
As shown in fig. 3, as a preferred embodiment of the present invention, summarizing the first target entry/exit detection information, including the first time, for the plurality of associated sub-areas specifically includes:
Step S1111: acquiring target detection information reported by first detection devices, the first detection devices being arranged within a preset adjacent range of each associated sub-area, a preset side range of each associated sub-area, and a preset middle range of each first sub-area. A first detection device comprises one or more of an infrared sensor, a camera, a LiDAR, a laser sensor, and the like; it may be installed temporarily or permanently, and is generally placed at ground level where it is not easily noticed. The first detection devices of each associated sub-area are numbered, and the coordinate position of each device is known. The preset adjacent range is the range between the middle parts of the two first sub-areas of an associated sub-area; the preset side ranges are the ranges on the two sides of the associated sub-area; and the first detection devices in the preset middle range are generally used to detect a moving target crossing the first sub-area. Each range has a set width and length, which can be configured according to actual requirements. The moving target generally refers to someone other than the driver.
Step S1112: identifying the activity range of the moving target within the preset adjacent range according to the target detection information and the position distribution of the first detection devices. When the first detection device at a given position reports target detection information, the moving target has passed that position, so the activity range of the moving target within the preset adjacent range can be identified from the target detection information and the positions of the first detection devices.
Step S1113: judging whether the activity range covers the set non-edge positions at the head and tail of the first sub-area. This judges whether the moving target has entered, from at least the head or tail side of the first sub-area, the set non-edge positions of the associated sub-area. The set non-edge positions comprise a head set non-edge position and a tail set non-edge position; a head set non-edge position comprises the parts along the two sides of the first sub-area close to the middle, such as the parts near the front and rear doors of a vehicle, since only proximity to these parts can lead to loss of property inside the vehicle.
Step S1114: if so, taking the associated sub-area whose set non-edge positions at the head and tail are covered as the target associated sub-area, and taking the target detection information of the target associated sub-area as the first target entry/exit detection information. When at least one of the head set non-edge position and the tail set non-edge position is covered, the set non-edge positions of the first sub-area are judged to be covered by the activity range. Taking the target detection information of the target associated sub-area as the first target entry/exit detection information indicates that a moving target has entered the associated sub-area and has the opportunity for illicit operation.
As shown in fig. 4, summarizing the first target entry/exit detection information, including the first time, for the plurality of associated sub-areas further includes:
Step S1121: when the activity range does not cover the set non-edge positions at the head and tail of the first sub-area, judging whether the activity range reaches the section between those positions. If the activity range does not cover the set non-edge positions, the moving target has not entered the preset adjacent range from the head or tail side of the associated sub-area, but may have crossed in from the two sides of the first sub-area, that is, entered the section between the set non-edge positions by passing under the vehicle from one side to the door position on the other side. This case evades monitoring particularly well: for example, crossing under the vehicle from the side away from the cameras to the door position on the other side, where adjacent vehicles provide cover and a blind spot may form.
Step S1122: if so, judging whether the duration of the target detection information corresponding to reaching that section reaches a first duration. Considering that small animals may also cross from the two sides of the first sub-area, the duration of the target detection information at the section is used as an exclusion condition: if a small animal crosses, its stay is very unlikely to reach the first duration, whereas a moving target that crosses from either side of the first sub-area and then performs an illicit operation on the vehicle interior through the door, window, and so on necessarily takes time.
Step S1123: if so, taking the associated sub-area whose duration reaches the first duration as the target associated sub-area, and taking the target detection information of the target associated sub-area as the first target entry/exit detection information. An associated sub-area whose duration reaches the first duration may still be a false positive, but the probability is comparatively small.
It should be understood that by judging whether the activity range covers the set non-edge positions at the head and tail of the first sub-area, and further distinguishing the first target entry/exit detection information under the different conditions, the case analysis covers the various situations more comprehensively, avoids missing entry/exit detection information, and provides a sufficient basis for the subsequent capture of target monitoring images.
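The duration condition of steps S1122-S1123 can be expressed as a dwell-time filter over timestamped detections in the mid-section. A sketch under our own assumptions (timestamps in seconds; the `gap` threshold that splits separate stays is a made-up parameter):

```python
def reached_first_duration(section_detection_times, first_duration, gap=1.5):
    """section_detection_times: sorted timestamps (seconds) at which a
    target was sensed in the section between the head/tail set non-edge
    positions. Consecutive detections closer than `gap` count as one
    continuous stay; a small animal passing through produces a short
    stay that fails the first-duration test (step S1122)."""
    if not section_detection_times:
        return False
    stay_start = prev = section_detection_times[0]
    for t in section_detection_times[1:]:
        if t - prev > gap:          # stay interrupted -> restart the clock
            stay_start = t
        prev = t
        if prev - stay_start >= first_duration:
            return True
    return prev - stay_start >= first_duration
```

A brief pass through the section (say two detections one second apart) is excluded, while a sustained presence long enough to tamper with a door or window passes the filter.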
As a preferred embodiment of the present invention, judging whether the moving target corresponding to the first target entry/exit detection information appears in the first monitoring image includes:
Step S121: extracting the duration of the first target entry/exit detection information. The duration may be continuous or intermittent.
Step S122: judging whether a moving target appears in the first monitoring image within that duration. The duration must at least include the first time of the first target entry/exit detection information.
It should be noted that the duration indicates the period over which the moving target is continuously detected in the first target entry/exit detection information. Within the corresponding duration, whether a moving target appears in the first monitoring image of the first sub-area is judged, that is, whether a readily visible moving target appears in the first sub-area. By matching the same place and the same time in this way, whether the moving target appears can be identified, which provides the basis for deciding whether to acquire the second target entry/exit detection information.
As a preferred embodiment of the present invention, the method further comprises:
step S20: when detecting that a moving target corresponding to the first target in-out detection information appears in the first monitoring image, directly associating the first monitoring image corresponding to the moving target with an associated sub-area;
step S21: and sending the correlated first monitoring image to an analysis center.
It can be understood that, in this embodiment, as a complementary embodiment, considering that the scene of the moving object directly appears in the first monitoring image, at this time, the associated sub-area where the moving object is located is associated with the corresponding first monitoring image, and after the associated first monitoring image is sent to the analysis center, the most direct analysis basis can be provided for the analyst of the analysis center.
As a preferred embodiment of the present invention, the identifying the entry and exit direction of the moving target in the second target entry and exit detection information and determining the capturing zone according to the first associated sub-area and the entry and exit direction specifically includes:
step S1411: when the direction of the moving target in the second target entry and exit detection information is detected to be approaching the associated sub-area, determining a rectangular area that includes the first associated sub-area, and taking the rectangular area as the capturing area. This considers the situation in which the moving target keeps going deeper into the associated sub-area: when the target is approaching, a rectangular area is defined directly, and since its range covers the first associated sub-area, the moving target is very likely to be captured at the edge of the first associated sub-area. The method of this embodiment therefore also provides the most direct basis for target investigation based on the target monitoring image.
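Treating sub-areas as axis-aligned boxes, the capturing area of step S1411 can be sketched as the bounding rectangle of the first associated sub-area expanded by a margin; the margin value and coordinate convention are assumptions for illustration:

```python
def rectangular_capture_area(first_associated_sub_area, margin=2.0):
    """first_associated_sub_area -- ((x0, y0), (x1, y1)) opposite corners.
    Returns (xmin, ymin, xmax, ymax) of a rectangle whose range covers
    the whole sub-area, so the target is likely captured at its edges."""
    (x0, y0), (x1, y1) = first_associated_sub_area
    return (min(x0, x1) - margin, min(y0, y1) - margin,
            max(x0, x1) + margin, max(y0, y1) + margin)
```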
Specifically, the identifying the direction of the moving target in the second target entry and exit detection information and determining the capturing zone according to the first associated sub-area and the entry and exit direction specifically includes:
step S1421: when the direction of the moving target in the second target entry and exit detection information is away from the associated sub-area, identifying a first route between the position from which the moving target moved away and all the exits of the first area, and determining a capturing route that includes the first route. The method of this embodiment provides the most direct, and also the most probable, basis for subsequently capturing target monitoring images along the route.
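With the first area modelled as a grid, the first routes of step S1421 can be sketched as breadth-first shortest paths from the position where the target moved away to every exit; the grid encoding and all names are illustrative assumptions:

```python
from collections import deque

def first_routes(grid, start, exits):
    """grid -- 2-D list, 0 = passable, 1 = blocked; start -- (row, col)
    where the moving target was last detected moving away; exits -- list
    of (row, col) exits of the first area.  Returns {exit: path} for each
    reachable exit; the union of the paths forms the capturing route."""
    rows, cols = len(grid), len(grid[0])
    parent = {start: None}
    queue = deque([start])
    while queue:                              # plain BFS flood fill
        r, c = queue.popleft()
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in parent):
                parent[(nr, nc)] = (r, c)
                queue.append((nr, nc))
    routes = {}
    for ex in exits:
        if ex in parent:                      # walk parents back to start
            path, node = [], ex
            while node is not None:
                path.append(node)
                node = parent[node]
            routes[ex] = path[::-1]
    return routes
```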
Specifically, the two embodiments above may be implemented in parallel or separately. As shown in fig. 5, the identifying the direction of the moving target in the second target entry and exit detection information and determining the capturing zone according to the first associated sub-area and the entry and exit direction specifically includes:
step S1431: when the direction of the moving target in the second target entry and exit detection information passes through the preset middle range, identifying the first sub-area corresponding to the preset middle range and judging that the moving target has crossed the vehicle bottom of the corresponding first sub-area. When the moving target in the second target entry and exit detection information passes through the preset middle range, it is very likely to be crossing the first sub-area, that is, passing under the bottom of a vehicle in some first sub-area. In this case the moving target does not need to enter the associated sub-area from the head side or the tail side, which makes it more concealed;
step S1432: determining, according to the direction in which the moving target passes through the preset middle range, several sub-areas adjacent to the corresponding first sub-area as the capturing areas. The passing direction generally includes the direction toward which the travel of the moving target points, for example the direction leading from first sub-area A to first sub-area B; in that case the several sub-areas adjacent to first sub-area B should be taken as the capturing areas. When the capturing areas are subsequently covered, the target monitoring image may cover both non-first sub-areas and first sub-areas among them;
It should be noted that the crossing in the above statement includes perpendicular crossing, in which case the several sub-areas adjacent to first sub-area B are the side-by-side sub-areas, and oblique crossing, in which case the sub-areas located diagonally from first sub-area B along the oblique crossing direction are the capturing areas. The method of this embodiment can thus take into account hard-to-identify scenes that are very likely to exist in practice and better cover unusual capturing areas, enriching the content of the target monitoring image.
As shown in fig. 6, as another preferred embodiment of the present invention, in another aspect, an intelligent human-vehicle association analysis system includes:
a sub-region positioning module 100 for positioning a first sub-region of a first region, the first sub-region being a single vehicle parking region;
the detection information summarizing module 200 is used for summarizing first target in-out detection information including a first moment of a plurality of associated subareas, and obtaining a first monitoring image acting on the first subareas, wherein each associated subarea comprises every two adjacent first subareas;
the judging module 300 is configured to judge whether a moving target corresponding to the first target in-out detection information appears in the first monitored image;
the first associated sub-area positioning module 400 is configured to obtain second target entry and exit detection information including a second moment if the active target does not appear, and position a first associated sub-area corresponding to the second target entry and exit detection information, where the second moment is a moment after the first moment, and the plurality of associated sub-areas include the first associated sub-area;
the capturing and sending module 500 is configured to identify an entry and exit direction of an active target in the second target entry and exit detection information, determine a capturing zone according to the first associated sub-area and the entry and exit direction, wherein the capturing zone includes a capturing area and/or a capturing route, collect a target monitoring image covering the capturing zone, and transmit the target monitoring image to an analysis center.
It should be noted that, for the specific implementation of the system, reference may be made to the description of the intelligent human-vehicle association analysis method in the foregoing embodiments; the system corresponds to that method completely and is not described again here.
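The wiring of modules 100-500 above can be sketched as a small pipeline. Every component here is an illustrative stub standing in for the corresponding module, not the patented implementation:

```python
class HumanVehicleAssociationSystem:
    """Chains the five modules: locate sub-areas -> summarize detection
    info -> judge appearance -> locate first associated sub-area ->
    capture and send."""

    def __init__(self, locate_sub_areas, summarize, judge,
                 locate_first_associated, capture_and_send):
        self.locate_sub_areas = locate_sub_areas                # module 100
        self.summarize = summarize                              # module 200
        self.judge = judge                                      # module 300
        self.locate_first_associated = locate_first_associated  # module 400
        self.capture_and_send = capture_and_send                # module 500

    def run(self):
        first_subs = self.locate_sub_areas()
        info, image = self.summarize(first_subs)
        if self.judge(info, image):
            # moving target visible: associate the image directly
            return ("direct", image)
        first_associated = self.locate_first_associated(info)
        return ("capture", self.capture_and_send(first_associated))
```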
The embodiment of the invention provides an intelligent human-vehicle association analysis method and, based on it, an intelligent human-vehicle association analysis system. The system locates a first sub-area among the sub-areas of a first area; summarizes first target entry and exit detection information, including a first moment, of a plurality of associated sub-areas, and acquires first monitoring images acting on the first sub-areas, wherein each associated sub-area includes every two adjacent first sub-areas; judges whether a moving target corresponding to the first target entry and exit detection information appears in the first monitored image; if not, acquires second target entry and exit detection information including a second moment and locates the first associated sub-area corresponding to it, wherein the second moment is a moment after the first moment and the plurality of associated sub-areas include the first associated sub-area; and identifies the entry and exit direction of the moving target in the second target entry and exit detection information, determines a capturing zone according to the first associated sub-area and that direction, wherein the capturing zone includes a capturing area and/or a capturing route, collects target monitoring images covering the capturing zone, and transmits them to an analysis center. By analyzing the first sub-area that the moving target enters, accurate human-vehicle association analysis can be performed even when personnel deliberately avoid monitoring, the first associated sub-area where property loss may occur can be tracked, and the capturing zone obtained by analysis can then be covered by the collected target monitoring images, providing a direct control basis and analysis reference for the relevant personnel of the analysis center.
In order for the method and system described above to be loaded and to function properly, the system may, in addition to the modules described above, include more or fewer components than those described, combine some components, or use different components; for example, it may include input and output devices, network access devices, buses, a processor, a memory, and the like.
The processor may be a Central Processing Unit (CPU), another general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The general purpose processor may be a microprocessor, or the processor may be any conventional processor; it is the control center of the above system, connecting its various parts through various interfaces and lines.
The memory may be used to store the computer program and/or modules, and the processor implements the various functions described above by running or executing the computer program and/or modules stored in the memory and invoking the data stored in the memory. The memory may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required for at least one function (such as an information acquisition template presentation function, a product information distribution function, etc.), and the like, and the data storage area may store data created according to the use of the system (e.g., product information acquisition templates corresponding to different product types, product information to be released by different product providers, etc.). In addition, the memory may include high-speed random access memory, and may also include non-volatile memory, such as a hard disk, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, a flash card, at least one magnetic disk storage device, a flash memory device, or other non-volatile solid state storage device.
It should be understood that, although the steps in the flowcharts of the embodiments of the present invention are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, the execution order of the steps is not strictly limited, and they may be performed in other orders. Moreover, at least some of the steps in the various embodiments may include multiple sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different moments, and whose execution order is not necessarily sequential; they may be performed in turn or alternately with at least a portion of the sub-steps or stages of other steps.
The technical features of the above-described embodiments may be combined arbitrarily. For brevity of description, not all possible combinations of these technical features are described; however, as long as there is no contradiction in a combination, it should be considered within the scope of this description.
The foregoing examples merely illustrate several embodiments of the invention, and although their description is detailed, it is not to be construed as limiting the scope of the invention. It should be noted that several variations and modifications can be made by those skilled in the art without departing from the concept of the invention, and these all fall within the protection scope of the invention. Accordingly, the scope of protection of the present invention shall be determined by the appended claims.
The foregoing description of the preferred embodiments of the invention is not intended to be limiting, but rather is intended to cover all modifications, equivalents, and alternatives falling within the spirit and principles of the invention.

Claims (8)

1. An intelligent human-vehicle association analysis method, which is characterized by comprising the following steps:
locating a first sub-zone of a first zone, the first sub-zone being a single vehicle parking zone;
summarizing target access detection information of a plurality of associated sub-areas, acquiring first target access detection information including first moment in the target access detection information, and acquiring first monitoring images acting on the first sub-areas, wherein each associated sub-area comprises every two adjacent first sub-areas;
judging whether a movable target corresponding to the first target in-out detection information appears in the first monitoring image;
if not, acquiring second target access detection information comprising second time, and positioning a first associated sub-area corresponding to the second target access detection information, wherein the second time is a time after the first time, and the plurality of associated sub-areas comprise the first associated sub-area;
identifying the entering and exiting direction of a movable target in second target entering and exiting detection information, determining a capturing zone according to the first associated subarea and the entering and exiting direction, wherein the capturing zone comprises a capturing zone and/or a capturing route, acquiring a target monitoring image covering the capturing zone, and transmitting the target monitoring image to an analysis center;
the summarizing first target in-out detection information including a first moment of the plurality of associated sub-areas specifically comprises: acquiring target detection information reported by first detection equipment, wherein the arrangement range of the first detection equipment comprises a preset adjacent range of an associated sub-area, a preset side range of the associated sub-area and a preset middle range of the first sub-area; identifying the active range of the moving target in the preset adjacent range according to the target detection information and the position distribution of the first detection equipment; judging whether the active range covers the head and tail set non-edge bits of the first sub-area; and if so, taking the associated sub-area whose head and tail set non-edge bits are covered as a target associated sub-area, and taking the target detection information of the target associated sub-area as the first target in-out detection information;
the summarizing first target in-out detection information including a first moment of the plurality of associated sub-areas specifically further comprises: when the active range does not cover the head and tail set non-edge bits of the first sub-area, judging whether the active range reaches the section between the head and tail set non-edge bits; if so, judging whether the duration of the target detection information corresponding to reaching the section reaches a first duration; and if so, taking the associated sub-area whose duration reaches the first duration as a target associated sub-area, and taking the target detection information of the target associated sub-area as the first target in-out detection information.
2. The intelligent human-vehicle association analysis method according to claim 1, wherein the locating a first sub-region of the sub-regions of the first region specifically comprises:
reading the vehicle occupation information of the subareas;
judging the effective vehicle occupation included in the vehicle occupation information according to the vehicle access statistical information;
and taking the subarea where each effective vehicle occupation is as a first subarea.
3. The intelligent human-vehicle association analysis method according to claim 1, wherein the determining whether the moving object corresponding to the first object entry and exit detection information appears in the first monitored image comprises:
extracting the duration time of the first target in-out detection information;
and judging whether a moving target appears in the first monitoring image within the duration time.
4. A method of intelligent human-vehicle association analysis according to claim 1 or 3, further comprising:
when detecting that a moving target corresponding to the first target in-out detection information appears in the first monitoring image, directly associating the first monitoring image corresponding to the moving target with an associated sub-area;
and sending the correlated first monitoring image to an analysis center.
5. The intelligent human-vehicle association analysis method according to claim 1, wherein the identifying the entering and exiting direction of the moving object in the second object entering and exiting detection information, determining the capturing zone according to the first association sub-area and the entering and exiting direction, specifically comprises:
and when the direction of the moving object in the second object entering and exiting detection information is detected to be close to the associated sub-area, determining a rectangular area comprising the first associated sub-area, and taking the rectangular area as a capturing area.
6. The intelligent human-vehicle association analysis method according to claim 1 or 5, wherein the identifying the entering and exiting direction of the moving object in the second object entering and exiting detection information, determining the capturing zone according to the first association sub-area and the entering and exiting direction, specifically further comprises:
when the direction of the movable target in the second target access detection information is far away from the associated subarea, a first route between the far-away position of the movable target and all the exits of the first area is identified, and a capturing route comprising the first route is determined.
7. The intelligent human-vehicle association analysis method according to claim 1, wherein the identifying the entering and exiting direction of the moving object in the second object entering and exiting detection information, determining the capturing zone according to the first association sub-area and the entering and exiting direction, specifically further comprises:
when the direction of the movable target in the second target in and out detection information passes through the preset middle range, identifying a first subarea corresponding to the preset middle range, and judging that the vehicle bottom of the corresponding first subarea is crossed by the movable target;
and determining a plurality of subareas adjacent to the corresponding first subareas as capturing areas according to the direction of the moving target passing through the preset intermediate range.
8. An intelligent human-vehicle association analysis system, the system comprising:
the sub-area positioning module is used for positioning a first sub-area in the sub-areas of the first area, wherein the first sub-area is a single vehicle parking area;
the detection information summarizing module is used for summarizing first target in-out detection information including first time of a plurality of associated subareas, acquiring first monitoring images acting on the first subareas, wherein each associated subarea comprises every two adjacent first subareas;
the judging module is used for judging whether a movable target corresponding to the first target in-out detection information appears in the first monitoring image;
the first associated sub-area positioning module is used for acquiring second target access detection information including second moment if the movable target does not appear, positioning a first associated sub-area corresponding to the second target access detection information, wherein the second moment is a moment after the first moment, and the plurality of associated sub-areas comprise the first associated sub-area;
the capturing and sending module is used for identifying the entering and exiting direction of the movable target in the second target entering and exiting detection information, determining a capturing zone according to the first associated subarea and the entering and exiting direction, wherein the capturing zone comprises a capturing zone and/or a capturing route, collecting a target monitoring image covering the capturing zone, and transmitting the target monitoring image to an analysis center;
the summarizing first target in-out detection information including a first moment of the plurality of associated sub-areas specifically comprises: acquiring target detection information reported by first detection equipment, wherein the arrangement range of the first detection equipment comprises a preset adjacent range of an associated sub-area, a preset side range of the associated sub-area and a preset middle range of the first sub-area; identifying the active range of the moving target in the preset adjacent range according to the target detection information and the position distribution of the first detection equipment; judging whether the active range covers the head and tail set non-edge bits of the first sub-area; and if so, taking the associated sub-area whose head and tail set non-edge bits are covered as a target associated sub-area, and taking the target detection information of the target associated sub-area as the first target in-out detection information;
the summarizing first target in-out detection information including a first moment of the plurality of associated sub-areas specifically further comprises: when the active range does not cover the head and tail set non-edge bits of the first sub-area, judging whether the active range reaches the section between the head and tail set non-edge bits; if so, judging whether the duration of the target detection information corresponding to reaching the section reaches a first duration; and if so, taking the associated sub-area whose duration reaches the first duration as a target associated sub-area, and taking the target detection information of the target associated sub-area as the first target in-out detection information.
CN202310810084.4A 2023-07-04 2023-07-04 Intelligent man-vehicle association analysis method and system Active CN116844097B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310810084.4A CN116844097B (en) 2023-07-04 2023-07-04 Intelligent man-vehicle association analysis method and system


Publications (2)

Publication Number Publication Date
CN116844097A CN116844097A (en) 2023-10-03
CN116844097B true CN116844097B (en) 2024-01-23

Family

ID=88166589

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310810084.4A Active CN116844097B (en) 2023-07-04 2023-07-04 Intelligent man-vehicle association analysis method and system

Country Status (1)

Country Link
CN (1) CN116844097B (en)

Citations (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100862398B1 (en) * 2008-07-18 2008-10-13 한국비전기술(주) Automatic police enforcement method of illegal-stopping and parking vehicle having cctv for preventing crime using multiple camera and system thereof
KR101049758B1 (en) * 2011-01-18 2011-07-19 한국비전기술(주) Method for monitoring total of school zone and system thereof
CN102129785A (en) * 2011-03-18 2011-07-20 沈诗文 Intelligent management system for large-scene parking lot
KR101570485B1 (en) * 2014-07-30 2015-11-23 주식회사 다이나맥스 System for monitoring illegal parking of camera blind spot
KR101698026B1 (en) * 2016-06-17 2017-01-19 주식회사 파킹패스 Police enfoforcement system of illegal stopping and parking vehicle by moving vehicle tracking
KR20170052286A (en) * 2015-11-04 2017-05-12 김창석 Intelligent camera and System for controlling going in and out of vehicle using Intelligent camera
CN107886757A (en) * 2017-10-19 2018-04-06 深圳市元征软件开发有限公司 Vehicle positioning method and parking management equipment
CN208094685U (en) * 2018-01-01 2018-11-13 智慧互通科技有限公司 A kind of system that Roadside Parking is managed based on polyphaser
CN111932901A (en) * 2019-05-13 2020-11-13 阿里巴巴集团控股有限公司 Road vehicle tracking detection apparatus, method and storage medium
CN112509270A (en) * 2020-11-19 2021-03-16 北京城市轨道交通咨询有限公司 Fire monitoring linkage method, device and system for train compartment
CN113435429A (en) * 2021-08-27 2021-09-24 广东电网有限责任公司中山供电局 Multi-target detection and tracking system based on field operation monitoring video
CN114662864A (en) * 2022-03-03 2022-06-24 国网新疆电力有限公司信息通信公司 Team work intelligent management and control method and system based on artificial intelligence
CN114915761A (en) * 2022-05-07 2022-08-16 中电科电科院科技有限公司 Linkage monitoring method and monitoring linkage device
CN116052275A (en) * 2023-01-28 2023-05-02 北京安录国际技术有限公司 Abnormal behavior detection method and system based on big data
CN116049262A (en) * 2023-03-29 2023-05-02 北京安录国际技术有限公司 Correlation analysis system and method based on big data
CN116125997A (en) * 2023-04-14 2023-05-16 北京安录国际技术有限公司 Intelligent inspection control method and system for robot
CN116127401A (en) * 2023-04-20 2023-05-16 西南石油大学 Data authority management and control method and system

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE10007501A1 (en) * 2000-02-18 2001-09-13 Daimler Chrysler Ag Road traffic monitoring method for automobile detects road lane, velocity and/or relative spacing of each preceding vehicle


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Research on an intelligent people-counting system for enclosed areas based on social applications; Wu Li; Cui Bohan; Wang Kejian; Zhao Hongtao; Gu Aihua; Digital World (04); 323-324 *
Research on line monitoring technical schemes for the integrated video surveillance system of passenger dedicated lines; Ling Li; Railway Signalling & Communication (09); 52-55 *
A brief discussion of problems in residential-community surveillance systems and preventive measures; Li Hongwei; Practical Electronics (09); 261 *


Similar Documents

Publication Publication Date Title
US9875405B2 (en) Video monitoring method, video monitoring system and computer program product
US9685079B2 (en) Short-time stopping detection from red light camera evidentiary photos
EP2835763B1 (en) A hybrid method and system of video and vision based access control for parking stall occupancy determination
DE102015200589B4 (en) Improved video-based system for automated detection of double parking violations
US9940633B2 (en) System and method for video-based detection of drive-arounds in a retail setting
CN112907982B (en) Method, device and medium for detecting vehicle illegal parking behavior
US20130266190A1 (en) System and method for street-parking-vehicle identification through license plate capturing
EP2093699A1 (en) Movable object status determination
US9858486B2 (en) Device and method for detecting circumventing behavior and device and method for processing cause of circumvention
KR101742490B1 (en) System for inspecting vehicle in violation by intervention and the method thereof
EP2858057A1 (en) System for traffic behaviour surveillance
CN116434148B (en) Data processing system and processing method based on Internet of things
KR102162130B1 (en) Enforcement system of illegal parking using single camera
CN116844097B (en) Intelligent man-vehicle association analysis method and system
US8983129B2 (en) Detecting and classifying persons in a prescribed area
KR102434154B1 (en) Method for tracking multi target in traffic image-monitoring-system
US8971579B2 (en) Windshield localization for occupancy detection
CN112907796B (en) Gate channel system, method and device for detecting passing behavior and storage medium
CN112185103A (en) Traffic monitoring method and device and electronic equipment
CN115294712A (en) Intrusion early warning method, early warning management system, electronic device and storage medium
JPH11353581A (en) Method and device for discriminating vehicle kind in the daytime
KR20140088630A (en) System and method for vehicle monitoring
CN107255470B (en) Obstacle detection device
WO2022267266A1 (en) Vehicle control method based on visual recognition, and device
CN112016423B (en) Method, device and equipment for identifying vehicle door state and computer storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant