CN112036338B - Target behavior identification method, device and system - Google Patents

Target behavior identification method, device and system

Info

Publication number
CN112036338B
CN112036338B (application CN202010917711.0A)
Authority
CN
China
Prior art keywords
image
person
compared
preset area
personnel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010917711.0A
Other languages
Chinese (zh)
Other versions
CN112036338A (en)
Inventor
郜莉洁
苏恒钰
丁亚博
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Hikvision System Technology Co Ltd
Original Assignee
Hangzhou Hikvision System Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Hikvision System Technology Co Ltd filed Critical Hangzhou Hikvision System Technology Co Ltd
Priority to CN202010917711.0A priority Critical patent/CN112036338B/en
Publication of CN112036338A publication Critical patent/CN112036338A/en
Application granted granted Critical
Publication of CN112036338B publication Critical patent/CN112036338B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20: Movements or behaviour, e.g. gesture recognition
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/50: Context or environment of the image
    • G06V20/52: Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V20/54: Surveillance or monitoring of activities, e.g. for recognising suspicious objects, of traffic, e.g. cars on the road, trains or boats

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Human Computer Interaction (AREA)
  • Image Analysis (AREA)

Abstract

The embodiments of the present application provide a target behavior identification method, device, and system, wherein the method comprises: capturing images of a first preset area and of a second preset area, and determining, by analyzing the captured images, whether the same person leaves a vehicle in the first preset area, engages in interaction behavior in the second preset area, and appears accompanied by other people in the first preset area; if so, the person exhibits the target behavior that occurs during illegal passenger carrying. With this scheme, illegal passenger carrying is identified automatically, related personnel no longer need to stake out the scene, and labor consumption is reduced.

Description

Target behavior identification method, device and system
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to a method, an apparatus, and a system for identifying a target behavior.
Background
At transportation hubs such as railway stations, airports, and bus stops, illegal passenger carrying often occurs. For example, a driver parks a vehicle in an unloading area and leaves it, goes to a crowded spot to solicit passengers among strangers, then brings a passenger back to the unloading area where the vehicle is parked and drives the passenger away. Such passenger-carrying behavior affects urban transport capacity and creates potential safety hazards, so the relevant authorities need to manage it.
At present, identifying such behavior generally relies on the relevant personnel staking out the various transportation hubs and manually observing whether drivers are illegally carrying passengers. This requires a great deal of manpower, so a scheme for automatically identifying the target behaviors that occur during illegal passenger carrying is needed, to make it easier to identify illegal passenger carrying.
Disclosure of Invention
The embodiments of the present application aim to provide a target behavior identification method, device, and system, so as to reduce labor consumption.
In order to achieve the above objective, an embodiment of the present application provides a method for identifying target behavior, including:
acquiring a first image captured of a first preset area, and if it is determined that a person has separated from a vehicle in the first image, determining that person as a person to be compared;
acquiring a second image captured of a second preset area;
and if it is determined that the person to be compared has engaged in interaction behavior with other people in the second image, and that the person to be compared appears accompanied by other people in the first image, determining that the person to be compared exhibits the target behavior.
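The decision logic of the claimed steps can be sketched as a minimal check; the function name and event labels below are hypothetical, and the underlying image-detection steps are abstracted into the conditions they produce:

```python
def find_target_persons(events):
    """Flag persons for whom all three claimed conditions hold.

    `events` maps a person ID to the set of conditions observed for that
    person across the two preset areas (labels are illustrative):
      'left_vehicle' - separated from a parked vehicle in the first area
      'interaction'  - interaction behavior with others in the second area
      'accompanied'  - reappeared accompanied by others in the first area
    """
    required = {"left_vehicle", "interaction", "accompanied"}
    return [pid for pid, observed in events.items() if required <= observed]
```

Only a person satisfying all three conditions is flagged; a driver who merely leaves a parked car, for instance, triggers nothing on its own.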
In order to achieve the above object, an embodiment of the present application further provides an apparatus for identifying a target behavior, including:
the first acquisition module, configured to acquire a first image captured of a first preset area;
the first determining module, configured to determine a person as a person to be compared if it is determined that the person has separated from a vehicle in the first image;
the second acquisition module, configured to acquire a second image captured of a second preset area;
and the second determining module, configured to determine that the person to be compared exhibits the target behavior if the person to be compared has engaged in a preset type of interaction behavior with other people in the second image and the person to be compared appears accompanied by other people in the first image.
To achieve the above object, an embodiment of the present application further provides an electronic device, including a processor and a memory;
the memory is configured to store a computer program;
and the processor is configured to implement any of the above target behavior identification methods when executing the program stored in the memory.
To achieve the above object, an embodiment of the present application further provides a target behavior identification system, including a first acquisition device, a second acquisition device, and a detection server, wherein:
the first acquisition device is configured to capture images of a first preset area to obtain first images, detect whether a vehicle parks in the first images, and, if so, report parking detection information to the detection server;
the detection server is configured to acquire, according to the parking detection information, at least part of the first images, the acquired first images containing the parked vehicle; to detect whether a person separates from the vehicle in the acquired first images; and, if so, to determine that person as the person to be compared and send control information for placing the person to be compared under surveillance to the first acquisition device and the second acquisition device;
the second acquisition device is configured to capture images of a second preset area to obtain second images; to detect, based on the control information, whether the person to be compared appears in the second images; and, if so, to report person detection information to the detection server;
the first acquisition device is further configured to detect, based on the control information, whether the person to be compared appears accompanied by other people in the captured first images, obtain a detection result, and send the detection result to the detection server;
the detection server is further configured to acquire, based on the person detection information reported by the second acquisition device, at least part of the second images, the acquired second images containing the person to be compared; to detect whether the person to be compared engages in a preset type of interaction behavior with other people in the acquired second images; and, if the person to be compared engages in a preset type of interaction behavior with other people and the detection result received from the first acquisition device indicates that the person to be compared appears accompanied by other people in the first images, to determine that the person to be compared exhibits the target behavior.
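The device interaction described above can be sketched as a minimal event-driven server; all class and method names are hypothetical, and each image-analysis step is reduced to the report it produces:

```python
from dataclasses import dataclass, field

@dataclass
class DetectionServer:
    """Accumulates per-person evidence reported by the two acquisition devices."""
    watchlist: set = field(default_factory=set)
    interacted: set = field(default_factory=set)
    accompanied: set = field(default_factory=set)

    def on_parking_report(self, person_id):
        # After a parking report, the server inspects the stored first images;
        # here we assume the person-vehicle separation check succeeded, so the
        # person is placed under surveillance.
        self.watchlist.add(person_id)
        return {"control": person_id}  # control information pushed to both devices

    def on_interaction_confirmed(self, person_id):
        # Preset-type interaction detected in the second images.
        if person_id in self.watchlist:
            self.interacted.add(person_id)

    def on_companion_result(self, person_id, is_accompanied):
        # Detection result from the first acquisition device.
        if is_accompanied and person_id in self.watchlist:
            self.accompanied.add(person_id)

    def flagged(self):
        # Target behavior requires all three pieces of evidence for one person.
        return self.watchlist & self.interacted & self.accompanied
```

A usage sketch: only a person who triggers all three callbacks ends up in `flagged()`.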
To achieve the above object, an embodiment of the present application further provides a target behavior identification system, including a first acquisition device, a second acquisition device, and a detection server, wherein the first acquisition device and the second acquisition device are each in communication connection with the detection server;
the first acquisition equipment is used for acquiring images aiming at a first preset area to obtain a first image;
the second acquisition device is used for acquiring images aiming at a second preset area to obtain a second image;
The detection server is used for executing the identification method of any one of the target behaviors.
By applying the embodiments of the present application, images are captured of a first preset area and of a second preset area, and by analyzing the captured images it is determined whether the same person leaves a vehicle in the first preset area, engages in interaction behavior in the second preset area, and appears accompanied by other people in the first preset area; if so, the person exhibits the target behavior that occurs during illegal passenger carrying. With this scheme, illegal passenger carrying is identified automatically, related personnel no longer need to stake out the scene, and labor consumption is reduced.
Of course, it is not necessary for any product or method implementing the present application to achieve all of the advantages described above at the same time.
Drawings
To describe the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings required by the embodiments or the description of the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application, and a person skilled in the art may obtain other drawings from them without inventive effort.
Fig. 1 is a flow chart of a method for identifying target behavior according to an embodiment of the present application;
fig. 2 is a schematic view of an application scenario provided in an embodiment of the present application;
fig. 3 is a schematic device interaction diagram provided in an embodiment of the present application;
fig. 4 is a schematic structural diagram of an identification device for target behavior according to an embodiment of the present application;
FIG. 5 is a schematic diagram of a first architecture of a target behavior recognition system according to an embodiment of the present application;
FIG. 6 is a second schematic structural diagram of a target behavior recognition system according to an embodiment of the present application;
FIG. 7 is a third schematic structural diagram of a target behavior recognition system according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the present disclosure without inventive effort fall within the protection scope of the present disclosure.
To achieve the above objective, the embodiments of the present application provide a target behavior identification method, apparatus, and system. The method and the apparatus may be applied to various electronic devices, and the steps of the method may be performed by different devices. The target behavior identification method provided by the embodiments of the present application is first described in detail below.
The steps in the method embodiments provided in the present application may be performed in any logically valid order; the step numbers and the order in which the steps are introduced do not limit the order in which they are performed.
Fig. 1 is a flow chart of a method for identifying target behavior according to an embodiment of the present application, including:
s101: a first image acquired for a first preset area is acquired.
The image in the embodiment of the present application may be a snap shot picture, a video image, or the like, and the image type is not limited. For distinguishing the description, an image acquired for the first preset area is referred to as a first image.
The first preset area in the embodiments of the present application may be understood as the area where the driver of a vehicle illegally carrying passengers parks the vehicle. For example, at railway stations, airports, bus stops, and similar scenes, there are usually dedicated waiting areas where drivers park their vehicles to wait for passengers to board. However, drivers of vehicles illegally carrying passengers usually do not park in these dedicated waiting areas, but instead park in non-dedicated areas such as the roadside or the road opposite the railway station; such areas are the first preset areas. In the embodiments of the present application, the areas where drivers of vehicles illegally carrying passengers usually park may be determined in advance, that is, a first preset area is preset, and an image capturing device is installed in the first preset area. For ease of distinction, the image capturing device installed in the first preset area is referred to as the first acquisition device.
The vehicle in the embodiment of the application may be an operating vehicle such as a taxi, etc., and the specific vehicle type is not limited.
S102: if it is determined that there is a person separated from the vehicle in the first image, the person is determined as the person to be compared.
The situation in S102 where a person separates from a vehicle means that the person driving the vehicle parks it and then moves away from it. In one implementation, if it is determined that parking behavior exists in the first image and that a person in the parked vehicle separates from it, that person is determined as the person to be compared. For example, the driver separates from the parked vehicle to illegally solicit passengers; alternatively, a non-driver separates from the parked vehicle to illegally solicit passengers.
In one case, the first acquisition device detects, in the acquired first image, whether there is a parking behavior and a situation in which a person in a parked vehicle is separated from the parked vehicle.
Or in another case, the first acquisition device sends the acquired first image to the back-end device, and the back-end device detects whether the parking behavior exists and the person in the parked vehicle is separated from the parked vehicle in the first image. For example, the backend device may be a detection server or the like, which is not particularly limited.
Or, in still another case, the first acquisition device detects whether a parked vehicle exists in the first image; if so, it sends the first images containing the parked vehicle to the back-end device, and the back-end device detects whether parking behavior exists in these first images and whether a person in the parked vehicle separates from it. Compared with having the first acquisition device transmit all captured images to the back-end device, this saves network bandwidth.
Or, in yet another case, the first acquisition device detects whether a vehicle parks in the first image; if so, it reports parking detection information to the back-end device, the parking detection information including the parking detection time at which the parked vehicle was detected. According to the parking detection time, the back-end device acquires the first images from a period of time before and after that time from a device storing the first images, such as a cloud storage device, and detects whether a person in the parked vehicle separates from it in the acquired first images.
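Retrieving the images from "a period of time before and after" the parking detection time amounts to a time-window query over stored frames. The list-based storage below is a stand-in for the cloud storage device, whose actual retrieval interface the patent does not specify:

```python
from datetime import datetime, timedelta

def frames_around(stored, detect_time, window_s=30):
    """Return frames whose timestamps fall within +/- window_s seconds of
    the reported parking detection time. `stored` is a list of
    (timestamp, frame) pairs; the window length is illustrative."""
    lo = detect_time - timedelta(seconds=window_s)
    hi = detect_time + timedelta(seconds=window_s)
    return [frame for ts, frame in stored if lo <= ts <= hi]
```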
For example, a person-vehicle separation model may be established in advance and the first image matched against it; if the match succeeds, a person has separated from a vehicle in the first image. Alternatively, other image recognition algorithms may be used to detect whether a person separates from a vehicle in the first image, or a neural network model trained in advance may be used. The specific detection method is not limited.
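As one hedged illustration of what such a detector might compute (not the patent's actual model), a bounding-box overlap heuristic can flag a person whose box initially overlapped the parked vehicle and later does not:

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    if inter == 0:
        return 0.0
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def separated_from_vehicle(person_track, vehicle_box, thresh=0.05):
    """True if the tracked person's first box overlapped the parked vehicle
    and the most recent box has moved clear of it (threshold illustrative)."""
    return (iou(person_track[0], vehicle_box) > thresh
            and iou(person_track[-1], vehicle_box) <= thresh)
```

In practice the detection would run on detector/tracker output per frame; this sketch only shows the separation criterion itself.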
As described above, in one case the first acquisition device itself detects whether a person separates from a vehicle in the captured first image; in this case, the first acquisition device may extract the features of the person to be compared from the first image and send them to the back-end device.
As described above, in the other cases the back-end device detects whether a person separates from a vehicle in the first image; if so, the back-end device may extract the features of the person to be compared from the first image.
Based on these features, the back-end device can place the person to be compared under surveillance.
S103: and acquiring a second image acquired for a second preset area.
The second preset area in the embodiments of the present application may be understood as the area where the driver of a vehicle illegally carrying passengers solicits passengers. Generally, at railway stations, airports, bus stops, and similar scenes, such drivers go to areas where people gather (such as the station exit) and solicit passengers among strangers. In the embodiments of the present application, the areas where such drivers usually solicit passengers may be determined in advance, that is, a second preset area is preset, and an image capturing device is installed in the second preset area. For ease of distinction, the image capturing device installed in the second preset area is referred to as the second acquisition device, and the images it captures are referred to as second images.
In the embodiments of the present application, the numbers of first acquisition devices and second acquisition devices are not limited; multiple first acquisition devices and multiple second acquisition devices may be installed. For example, drivers of vehicles illegally carrying passengers may park in different areas, that is, there may be multiple first preset areas, and a first acquisition device may be installed for each of them. Alternatively, a first preset area may be large and require multiple first acquisition devices to cover it.
As another example, drivers of vehicles illegally carrying passengers may solicit passengers in multiple different areas, that is, there may be multiple second preset areas, and a second acquisition device may be installed for each of them. Alternatively, a second preset area may be large and require multiple second acquisition devices to cover it.
The second images in the embodiments of the present application may be captured by different second acquisition devices, whereas the first images are usually captured by the same first acquisition device. This is because a driver illegally carrying passengers parks the vehicle at a certain location and later returns to it, so the same first acquisition device can capture the driver both times; while soliciting passengers, however, the driver may move around and be captured by different second acquisition devices.
S104: if it is determined that the person to be compared engages in interaction behavior with other people in the second image, and that the person to be compared appears accompanied by other people in the first image, it is determined that the person to be compared exhibits the target behavior.
As described above, the back-end device obtains the features of the person to be compared and can place that person under surveillance based on those features.
In one case, the second acquisition device sends the captured second images to the back-end device, and the back-end device detects, based on the features of the person to be compared, whether the person appears in the second images and whether the person engages in a preset type of interaction behavior.
Or, in another case, the back-end device sends the features of the person to be compared to the second acquisition device, and the second acquisition device detects, based on those features, whether the person appears in the second images and whether the person engages in a preset type of interaction behavior.
Or, in yet another case, the back-end device sends the features of the person to be compared to the second acquisition device, and the second acquisition device detects, based on those features, whether the person appears in the second images. If so, the second acquisition device sends the second images containing the person to be compared to the back-end device, and the back-end device detects whether the person engages in a preset type of interaction behavior in the received second images.
Or, in still another case, the back-end device sends the features of the person to be compared to the second acquisition device, and the second acquisition device detects, based on those features, whether the person appears in the second images. If so, the second acquisition device reports person detection information to the back-end device, the information including the time at which the person to be compared was detected. According to that detection time, the back-end device acquires the second images from a period of time before and after it from a device storing the second images, such as a cloud storage device, and detects whether the person to be compared engages in a preset type of interaction behavior in the acquired second images.
For example, the feature of the person to be compared may be a facial feature, and whether the person appears in the second image may be determined by face comparison. Alternatively, the feature may be a body feature such as height, clothing color, or whether the person carries a backpack, and whether the person appears in the second image may be determined by body-feature comparison.
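Whether a face or body feature "matches" is typically decided by thresholding a similarity score between feature vectors. The cosine-similarity sketch below is a common choice for such comparisons, offered as an illustration rather than the patent's actual method:

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def person_present(query_feature, frame_features, thresh=0.8):
    """True if any feature extracted from a second image is close enough to
    the watched person's feature vector (threshold value illustrative)."""
    return any(cosine_similarity(query_feature, f) >= thresh
               for f in frame_features)
```

Real systems would use high-dimensional embeddings from a face or re-identification network; the comparison step is the same shape.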
For example, the preset type of interaction behavior may include a behavior sequence in which the person to be compared approaches other people, talks after approaching, and separates after talking.
In one implementation, the preset type of interaction behavior may be that the number of occurrences of this approach-talk-separate behavior sequence satisfies a preset interaction condition. For example, people sometimes drive to a railway station, airport, or bus stop to pick up relatives and friends, or to pick up partners they have never met; in these cases the behaviors of approaching other people and talking after approaching also occur, yet no illegal passenger carrying is involved.
In this implementation, the person to be compared is not determined to have a preset type of interaction behavior merely upon detecting behaviors such as approaching other people and talking after approaching; instead, it is detected whether the person repeatedly exhibits the approach-talk-separate behavior sequence with different people, and only when the number of occurrences of the sequence satisfies the preset interaction condition is the person determined to have a preset type of interaction behavior. In this way, picking up relatives, friends, or partners is not misidentified as illegal passenger carrying, which improves identification accuracy.
In one case, it is detected whether the person to be compared exhibits the approach-talk-separate behavior sequence; if so, the number of occurrences is recorded, and if the recorded number reaches a preset threshold, the person to be compared is determined to have a preset type of interaction behavior.
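The counting logic described above can be sketched as matching the approach-talk-separate sequence in a per-person event stream; the event labels and threshold value are illustrative:

```python
def count_behavior_sequences(events, pattern=("approach", "talk", "separate")):
    """Count non-overlapping occurrences of the behavior sequence in a
    chronological list of behavior labels for one person."""
    count, i, n = 0, 0, len(pattern)
    while i + n <= len(events):
        if tuple(events[i:i + n]) == pattern:
            count += 1
            i += n  # skip past the matched sequence
        else:
            i += 1
    return count

def has_preset_interaction(events, threshold=3):
    """The preset interaction condition: the sequence recurs at least
    `threshold` times."""
    return count_behavior_sequences(events) >= threshold
```

A single approach-talk-separate round, such as greeting an arriving friend, stays below the threshold and is not flagged.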
Or, in one implementation, if the number of occurrences is determined to satisfy the preset interaction condition, it is further detected in the second image whether the person to be compared leaves the second preset area together with other people; if so, the person to be compared is determined to have a preset type of interaction behavior.
For example, if the person to be compared repeatedly approaches other people, talks after approaching, and separates after talking, but never successfully picks anyone up, that is, the person is never detected leaving the second preset area together with other people, the person is determined not to have a preset type of interaction behavior. In this implementation, the subsequent steps are executed only when the person to be compared successfully picks up a passenger, which improves identification accuracy.
Or, in another implementation, the preset type of interaction behavior includes not only the above behavior sequence but also the situation, in the second image, where the person to be compared leaves the second preset area together with other people. Or, the preset type of interaction behavior includes the behavior sequence, the number of occurrences of the behavior sequence satisfying the preset interaction condition, and the situation where the person to be compared leaves the second preset area together with other people in the second image.
The situation where the person to be compared leaves the second preset area together with other people in the second image may be understood as the driver of the vehicle illegally carrying passengers leaving together with the passengers who are about to ride in the vehicle.
In one implementation, the preset type of interaction behavior includes any one or more of the following behaviors: the person to be compared approaching other people, talking after approaching, and separating after talking.
Or, in another implementation, the preset type of interaction behavior includes any one or more of the above behaviors, with the number of occurrences of each such behavior reaching its corresponding preset threshold.
In this implementation, a preset threshold corresponding to each of the above behaviors may be set in advance. For example, the number of times the person to be compared approaches other people, the number of times of talking after approaching, and the number of times of separating after talking are counted separately; whether each count reaches its corresponding threshold is determined; and if all three counts reach their thresholds, the person to be compared is determined to have a preset type of interaction behavior.
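The per-behavior counting in this implementation reduces to comparing each behavior's count against its own threshold; labels and threshold values below are illustrative:

```python
from collections import Counter

def meets_all_thresholds(events, thresholds):
    """`events` is a list of observed behavior labels for one person;
    `thresholds` maps a behavior label to the minimum number of times it
    must be observed. Every listed behavior must reach its own threshold
    for the preset interaction condition to hold."""
    counts = Counter(events)
    return all(counts[label] >= needed for label, needed in thresholds.items())
```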
Or, in another implementation, the preset type of interaction behavior includes not only any one or more of the above behaviors but also the situation, in the second image, where the person to be compared leaves the second preset area together with other people. Or, the preset type of interaction behavior includes any one or more of the above behaviors, the number of occurrences of each reaching its corresponding preset threshold, together with the situation where the person to be compared leaves the second preset area together with other people in the second image.
In the case described above where the back-end device performs the detection, the back-end device may detect, based on the features of the person to be compared, whether the person appears in the second image and whether the person engages in a preset type of interaction behavior, and may then detect, in the first images sent by the first acquisition device, whether the person to be compared appears accompanied by other people.
In the other case described above, where the second acquisition device detects whether the person to be compared appears in the second image and whether the person engages in a preset type of interaction behavior, the second acquisition device may send prompt information to the back-end device indicating that the person to be compared has engaged in a preset type of interaction behavior. After receiving the prompt information, the back-end device may send the features of the person to be compared to the first acquisition device, and the first acquisition device detects whether the person to be compared appears accompanied by other people in the first images it captures.
In one embodiment, after it is detected in the second image that the person to be compared exhibits the preset type of interaction behavior, the people who interact with the person to be compared are identified as interacting people. In this embodiment, it may then be detected whether the person to be compared appears accompanied by the interacting people in the first image.
Continuing the case above, in which the second acquisition device detects that the person to be compared exhibits the preset type of interaction behavior: the second acquisition device can identify the interacting people, extract their features, and send those features to the back-end device. The back-end device can send the features of the person to be compared and of the interacting people to the first acquisition device, and the first acquisition device detects whether the person to be compared appears accompanied by the interacting people in the first image it acquires.
Continuing the embodiment above, if the number of occurrences of the behavior sequence in which the person to be compared approaches, talks after approaching, and separates after talking satisfies the preset interaction condition, and the person to be compared leaves the second preset area together with other people, then the people who leave together with the person to be compared can be determined as the interacting people and their features extracted. Then, based on the features of the person to be compared and of the interacting people, it is detected in the first image whether the person to be compared appears accompanied by the interacting people.
In one case, an image acquisition device may further be provided in a third preset area located between the first preset area and the second preset area; this device is referred to as the third acquisition device, and the image it acquires as the third image.
In one embodiment, before it is determined that the person to be compared appears accompanied by other people in the first image, a third image acquired for the third preset area may be obtained, and it is determined that the person to be compared appears accompanied by other people in the third image.
That is, in this embodiment, the person to be compared is determined to exhibit the target behavior when all of the following conditions are satisfied: the person to be compared exhibits the preset type of interaction behavior with other people in the second image, the person to be compared appears accompanied by other people in the third image, and the person to be compared appears accompanied by other people in the first image.
By applying this embodiment, when a person parks a vehicle in the first preset area, solicits passengers in the second preset area, passes through the third preset area accompanied by other people, and returns to the first preset area with those people, the person is determined to exhibit the target behavior of illegal passenger carrying, which improves the accuracy of behavior identification.
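The three-image decision rule of this embodiment can be expressed as a simple conjunction of three observations; the function and parameter names below are hypothetical stand-ins for whatever signals a real pipeline would produce.

```python
# Illustrative sketch: a person is flagged only when all three per-area
# observations hold. Parameter names are assumptions, not the patent's APIs.
def has_target_behavior(interaction_in_second_image: bool,
                        accompanied_in_third_image: bool,
                        accompanied_in_first_image: bool) -> bool:
    """True only if the preset-type interaction is seen in the second image
    AND accompanied appearance is seen in both the third and first images."""
    return (interaction_in_second_image
            and accompanied_in_third_image
            and accompanied_in_first_image)

print(has_target_behavior(True, True, True))   # flagged
print(has_target_behavior(True, False, True))  # not flagged: no third-image evidence
```

The conjunction is what reduces false positives: a person merely walking with friends near the parking area fails the second-image condition.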
Alternatively, in another implementation, if it is not determined in the second image that the person to be compared has the preset type of interaction behavior with other people, but the person to be compared appears accompanied by other people in the third image and also appears accompanied by other people in the first image, it is determined that the person to be compared exhibits the target behavior.
For example, the person to be compared may solicit passengers in the second preset area yet fail to be identified in the second image because of occlusion or a poor acquisition angle, which would cause a missed detection. By applying this embodiment, if accompaniment of the person to be compared by other people is detected in both the third image and the first image, the person to be compared is determined to exhibit the target behavior, thereby reducing missed detections.
In one case, the third acquisition device may send the acquired third image to the back-end device, and the back-end device detects whether the person to be compared appears accompanied by other people in the third image. In another case, the back-end device may send the features of the person to be compared to the third acquisition device, and the third acquisition device detects whether the person to be compared appears accompanied by other people in the third image.
In one embodiment, after it is determined that the person to be compared appears accompanied by other people in the first image, and before it is determined that the person to be compared exhibits the target behavior, it may further be determined that the first image shows the person to be compared entering the vehicle together with the accompanying people and driving the vehicle away.
In this embodiment, after the person to be compared is detected appearing accompanied by other people in the first image, the first image continues to be used to detect whether the person to be compared and the accompanying people enter the vehicle together and drive the vehicle away; if so, it is determined that the person to be compared exhibits the target behavior.
The target behavior can be understood as the behavior involved in illegal passenger carrying; by identifying the target behavior, illegal passenger carrying can be identified.
In the embodiment above, the interacting people are identified and the first image is checked for the person to be compared appearing accompanied by the interacting people. In this embodiment, if such accompaniment is detected, it may further be detected whether the person to be compared and the interacting people enter the vehicle together; if so, it is determined that the person to be compared exhibits the target behavior.
In one embodiment, when separation of a person from a vehicle is detected in the first image, the vehicle features of the vehicle driven by the person to be compared may be extracted and stored as the first vehicle features. Then, when the person to be compared is detected entering a vehicle together with the accompanying people, the features of that vehicle can be extracted as the second vehicle features. The second vehicle features are matched against the stored first vehicle features; if the match succeeds, it is determined that the person to be compared exhibits the target behavior.
In this embodiment, the person to be compared returning with other people to the vehicle parked in the first preset area means that the person has carried out the full sequence of illegal passenger-carrying actions: parking in the first preset area, soliciting passengers in the second preset area, and returning to the first preset area.
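The first-vs-second vehicle-feature match above could be sketched as follows. Comparing a tuple of plate, color, and model is an illustrative assumption; a real system would more likely compare learned appearance embeddings.

```python
# Hedged sketch of vehicle re-identification: features stored when the
# person left the vehicle are matched against features extracted when the
# group boards. Field names ("plate", "color", "model") are assumptions.
def extract_vehicle_feature(vehicle: dict) -> tuple:
    return (vehicle["plate"], vehicle["color"], vehicle["model"])

def is_same_vehicle(first_feature: tuple, second_feature: tuple) -> bool:
    return first_feature == second_feature

parked = {"plate": "A12345", "color": "white", "model": "sedan"}
boarded = {"plate": "A12345", "color": "white", "model": "sedan"}
first = extract_vehicle_feature(parked)    # stored at person-vehicle separation
second = extract_vehicle_feature(boarded)  # extracted when the group boards
print(is_same_vehicle(first, second))      # True: same parked vehicle
```

A successful match here is the final confirmation that the person returned to the very vehicle they originally parked.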
In one embodiment, the following images may be output: a first image showing the person to be compared separating from the vehicle, a second image showing the person to be compared exhibiting the preset type of interaction behavior, a first image showing the person to be compared appearing accompanied by other people, and a first image showing the person to be compared entering the vehicle together with the accompanying people.
The images output in this embodiment can be understood as evidence of the above series of illegal passenger-carrying behaviors; outputting them amounts to outputting evidence of the person's illegal passenger carrying, which further saves labor for the relevant personnel. In addition, these images form an evidence chain covering the series of behaviors of parking in the first preset area, soliciting passengers in the second preset area, and returning to the first preset area to carry passengers, which is more conducive to the management of illegal passenger-carrying behavior by the relevant personnel.
In one case described above, the back-end device detects the separation of the person from the vehicle, the interaction behavior, the person to be compared appearing accompanied by other people, and the person to be compared entering the vehicle together with the accompanying people; in that case, the evidence above may be output by the back-end device.
Alternatively, in another case, when the first acquisition device detects that the person to be compared enters the vehicle together with the accompanying people, the first acquisition device may output the first image showing the person to be compared separating from the vehicle, the first image showing the person to be compared appearing accompanied by other people, and the first image showing the person to be compared entering the vehicle together with the accompanying people. The first acquisition device notifies the second acquisition device through the back-end device, and the second acquisition device outputs the second image showing the preset type of interaction behavior of the person to be compared.
Alternatively, in another embodiment, a first image showing the person to be compared separating from the vehicle, a second image showing the person to be compared exhibiting the preset type of interaction behavior, and a first image showing the person to be compared appearing accompanied by other people may be output.
Which images are output as evidence of illegal passenger carrying can be set according to actual requirements; the specific images output are not limited here.
For example, in some cases a first image showing the person to be compared entering the vehicle with the accompanying people cannot be acquired because of occlusion or a poor acquisition angle; in such cases this embodiment may be adopted to output the remaining images as evidence of the person's illegal passenger carrying.
In one embodiment, the vehicle features of the vehicle driven by the person to be compared may be extracted from the first image, and the registration information of the vehicle obtained based on those vehicle features.
For example, the vehicle features may be the color, model, or license plate number of the vehicle, etc.; the specific feature types are not limited. In general, a relevant department (e.g., a vehicle administration department) registers vehicles, recording, for example, the license plate number, the vehicle model, and information about the drivers registered to the vehicle. By matching the vehicle features against the registration information stored by the relevant department, the registration information of the offending vehicle can be obtained, helping the relevant personnel learn more detailed information about the violation.
In one embodiment, driver features may be extracted from the registration information of the vehicle, and the driver features matched against the features of the person to be compared to obtain a matching result.
As described above, the registration information of a vehicle may include information about the drivers registered to drive it, such as a face image, a body image, age, gender, and so on, without particular limitation. The driver features can be extracted from the driver's face image or body image. Matching the driver features against the features of the person to be compared extracted above amounts to determining whether the person to be compared and a registered driver of the vehicle are the same person.
For example, in some cases the same vehicle has several registered drivers, or several drivers drive it in shifts. The registration information then includes information about multiple drivers; by applying this embodiment, features are extracted for each driver and matched against the features of the person to be compared, so as to determine which driver carried out the illegal passenger-carrying behavior.
Alternatively, some illegal passenger carrying is carried out with a borrowed vehicle, or even with a stolen one. In these cases, by applying this embodiment, the features of the person to be compared are matched against the driver features extracted from the vehicle's registration information; if the match fails, it is determined that the registered driver of the vehicle did not carry out the illegal passenger-carrying behavior. This makes it easier for the relevant personnel to learn the actual situation.
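The multi-driver matching step above can be sketched with a nearest-match search over feature vectors; cosine similarity on toy vectors is an assumption standing in for a real face or body re-identification model, and the threshold value is illustrative.

```python
# Hedged sketch: match the person to be compared against the feature vectors
# of all drivers in the vehicle's registration record. Returning None covers
# the borrowed/stolen-vehicle case, where no registered driver matches.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def match_driver(person_feature, registered_driver_features, threshold=0.9):
    """Return the index of the best-matching registered driver, or None if
    no similarity reaches the threshold."""
    best_idx, best_sim = None, threshold
    for i, feature in enumerate(registered_driver_features):
        sim = cosine(person_feature, feature)
        if sim >= best_sim:
            best_idx, best_sim = i, sim
    return best_idx

drivers = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]          # two registered drivers
print(match_driver([0.99, 0.05, 0.0], drivers))        # 0: matches driver 0
print(match_driver([0.0, 0.0, 1.0], drivers))          # None: no driver matches
```

A `None` result corresponds to the embodiment's conclusion that the registered driver did not carry out the violation.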
According to the embodiments of the application: first, illegal passenger carrying is identified automatically, without on-site stakeouts by the relevant personnel, reducing labor consumption. Second, in one implementation, the behavior sequence in which the person to be compared approaches, talks after approaching, and separates after talking is detected, and only after the detected number of occurrences satisfies the preset interaction condition is the person judged to exhibit the preset type of interaction behavior; this prevents ordinary encounters between friends or companions from being misidentified as illegal passenger carrying and improves identification accuracy. Third, in one embodiment, evidence of the person's illegal passenger carrying can be output, further saving labor for the relevant personnel. Fourth, the scheme facilitates the management of illegal passenger-carrying behavior by the relevant personnel, helps maintain order in public places, and helps reduce potential safety hazards.
In some related schemes, illegal passenger-carrying behavior is identified by mounting a camera and a positioning device on the vehicle. In such schemes, however, the driver usually turns off the camera and the positioning device while carrying passengers illegally in order to maximize profit, so effective identification of illegal passenger carrying is not achieved.
With the method, apparatus, and system of this application, no cooperation from the driver is needed: images are acquired of the first preset area and the second preset area, and illegal passenger carrying is identified automatically through image detection, achieving effective identification of illegal passenger carrying.
A specific embodiment is described below with reference to fig. 2 and 3:
fig. 2 can be understood as an application scenario diagram. In a scenario such as a train station, an airport, or a bus stop, a first preset area and a second preset area are determined. The first preset area can be understood as the area where a driver engaged in illegal passenger carrying parks the vehicle, and the second preset area as the area where that driver solicits passengers. A first acquisition device is set up in the first preset area and a second acquisition device in the second preset area; for clarity, the image acquired by the first acquisition device is called the first image and the image acquired by the second acquisition device the second image.
In this embodiment, the numbers of first and second acquisition devices are not limited; multiple first acquisition devices and multiple second acquisition devices may be deployed. For example, drivers engaged in illegal passenger carrying may park in different areas, that is, there may be multiple first preset areas, and a first acquisition device may be provided for each. Or the first preset area may be large enough that multiple first acquisition devices are needed to cover it.
As another example, such a driver may solicit passengers in several different areas, that is, there may be multiple second preset areas, and a second acquisition device may be set up for each. Or the second preset area may be large enough that multiple second acquisition devices are needed to cover it.
The second images in the embodiments of this application may be acquired by different second acquisition devices, whereas the first images are generally acquired by the same first acquisition device: because the driver parks the vehicle at a certain position and later returns to it, the driver is captured by the same first acquisition device, but while soliciting passengers the driver walks around and may be captured by different second acquisition devices.
The interaction between the first acquisition device, the second acquisition device, and the detection server may be as shown in fig. 3, where the detection server is the back-end device referred to above and is communicatively connected to the first and second acquisition devices.
The first acquisition device detects whether a parked vehicle is present in the first image and, if so, reports parking detection information to the detection server.
In one case, the parking detection information may include the first image showing the parked vehicle. The detection server then detects whether any person separates from the vehicle in the received first image.
In another case, the first acquisition device may send the video images it acquires in real time to a video storage device, capture single frames from the video, and detect whether a parked vehicle appears in a captured frame. The parking detection information then includes the time at which the first acquisition device detected the parked vehicle. After receiving this time information, the detection server obtains from the video storage device the first images associated with it, which include the parked vehicle, and detects whether any person separates from the vehicle in those images.
For example, if the first acquisition device detects a parked vehicle in the first image acquired at time t, the parking detection information it reports to the detection server includes t; the detection server then obtains from the video storage device the video images acquired for a period around t, for example the 20 seconds before and after t, and detects whether any person separates from the vehicle in those video images (first images).
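The timestamp-window retrieval in this example can be sketched as below; the in-memory frame store and the symmetric 20-second window follow the example above, but the data structure itself is an illustrative assumption (a real deployment would query a video storage service).

```python
# Illustrative sketch: given the reported detection time t, pull all stored
# frames whose timestamps fall within [t - window, t + window].
def frames_in_window(frame_store, t, window=20.0):
    """frame_store: list of (timestamp_seconds, frame) pairs.
    Returns the frames inside the symmetric window around t."""
    return [frame for ts, frame in frame_store if t - window <= ts <= t + window]

# Toy store: one frame every 10 seconds from t=0 to t=90.
store = [(float(ts), f"frame{ts}") for ts in range(0, 100, 10)]
print(frames_in_window(store, 50.0))  # frames at 30..70 seconds
```

Sending only the timestamp instead of the image keeps the parking detection information small; the server fetches the surrounding frames on demand.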
If the detection server detects a person separating from the vehicle in the obtained first images, it determines that person as the person to be compared and deploys surveillance for that person on the first acquisition device and the second acquisition device. Deployment here means sending deployment information describing the person to be compared to the first and second acquisition devices.
Based on the deployment information, the second acquisition device can detect whether the person to be compared is present in the second images it acquires and, if so, report person detection information to the detection server.
For example, the detection server may extract the features of the person to be compared from the obtained first images, and the deployment information may include those features. The second acquisition device can then detect, based on the features, whether the person to be compared is present in the second image; if so, it reports person detection information to the detection server.
Based on the person detection information, the detection server detects whether the person to be compared has the preset type of interaction behavior with other people in the second image. In one case, the person detection information may include the second image in which the person to be compared appears, and the detection server performs the detection on the received second image.
In another case, the second acquisition device may send the video images it acquires in real time to the video storage device, capture single frames, and detect whether the person to be compared appears in a captured frame. The person detection information then includes the time at which the second acquisition device detected the person to be compared. After receiving this time information, the detection server obtains from the video storage device the second images associated with it, which include the person to be compared, and detects whether that person has the preset type of interaction behavior with other people in those images.
The preset type of interaction behavior may be the behavior sequence in which the person to be compared approaches, talks after approaching, and separates after talking, or may be other behaviors; details are not repeated here.
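Counting completed occurrences of the approach-talk-separate sequence can be sketched as a small state machine; the event labels and the rule that the sequence advances only on the next expected step are illustrative assumptions about how a detector might emit events, not the patent's actual implementation.

```python
# Hypothetical sketch: count complete "approach -> talk -> separate"
# sequences in a stream of behavior events for the person to be compared.
SEQUENCE = ["approach", "talk", "separate"]

def count_sequences(events):
    """Advance one step whenever the next expected behavior is observed;
    each full pass through SEQUENCE counts as one completed sequence."""
    step, completed = 0, 0
    for event in events:
        if event == SEQUENCE[step]:
            step += 1
            if step == len(SEQUENCE):
                completed += 1
                step = 0
    return completed

events = ["approach", "talk", "separate", "approach", "approach", "talk", "separate"]
print(count_sequences(events))  # 2 complete sequences
```

The resulting count is what gets compared against the preset interaction condition described above.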
Based on the deployment information above, the first acquisition device may detect whether the person to be compared appears accompanied by other people in the first images it acquires, obtain a detection result, and send the detection result to the detection server.
As described above, the deployment information may include the features of the person to be compared. Based on those features, the first acquisition device may detect whether, in the first images it acquires, the person to be compared appears accompanied by other people, enters the vehicle together with them, and drives the vehicle away, obtain a detection result, and send it to the detection server.
If the detection result received from the first acquisition device indicates that the person to be compared appears accompanied by other people in the first image, and the detection server has detected in the obtained second images that the person to be compared has the preset type of interaction behavior with other people, the detection server determines that the person to be compared exhibits the target behavior.
In one embodiment, the detection server may deploy surveillance on the first acquisition device and the second acquisition device at the same time, that is, send the deployment information to both simultaneously.
In this embodiment, the detection server determines whether the person to be compared referred to in the detection result from the first acquisition device and the person to be compared detected in the second image as exhibiting the interaction behavior are the same person; if so, it determines that the person exhibits the target behavior.
In another embodiment, the detection server deploys surveillance on the second acquisition device first and on the first acquisition device later: after determining the person to be compared, it sends the deployment information to the second acquisition device, and only once the person to be compared is found in the second image to have the preset type of interaction behavior with other people does it send the deployment information to the first acquisition device.
In this embodiment, in one case the detection server may still determine whether the person referred to in the first acquisition device's detection result and the person detected in the second image as exhibiting the interaction behavior are the same person; in another case it may skip this judgment and directly determine that the person to be compared exhibits the target behavior.
After determining that the person to be compared exhibits the target behavior, the detection server can output the following images: a first image showing the person to be compared separating from the vehicle, a second image showing the person to be compared exhibiting the preset type of interaction behavior, a first image showing the person to be compared appearing accompanied by other people, and a first image showing the person to be compared entering the vehicle together with the accompanying people and driving the vehicle away. These images form an evidence chain covering the series of behaviors of parking in the first preset area, soliciting passengers in the second preset area, and returning to the first preset area to carry passengers, which facilitates the management of illegal passenger-carrying behavior by the relevant personnel.
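Assembling the four-image evidence chain can be sketched as follows; the stage names are paraphrases of the images listed above, and the dictionary interface is an illustrative assumption.

```python
# Hedged sketch: collect one image per evidence stage and emit the ordered
# chain only when every stage is covered. Stage names are paraphrases, not
# real API identifiers.
REQUIRED_STAGES = [
    "separated_from_vehicle",   # first image: person leaves parked vehicle
    "preset_interaction",       # second image: solicitation behavior
    "accompanied_appearance",   # first image: returns with other people
    "boarded_and_left",         # first image: group boards, vehicle departs
]

def build_evidence_chain(captured):
    """captured: dict mapping stage name -> image reference.
    Returns the ordered chain if every stage has evidence, else None."""
    if all(stage in captured for stage in REQUIRED_STAGES):
        return [captured[stage] for stage in REQUIRED_STAGES]
    return None

captured = {stage: f"img_{i}" for i, stage in enumerate(REQUIRED_STAGES)}
print(build_evidence_chain(captured))  # full four-image chain
```

An incomplete dictionary yields `None`, mirroring the embodiments above in which fewer images may be output when, for example, the boarding image cannot be acquired.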
It can thus be seen that: first, illegal passenger carrying is identified automatically, without on-site stakeouts by the relevant personnel, reducing labor consumption. Second, the detection server detects whether the person to be compared exhibits the behavior sequence of approaching, talking after approaching, and separating after talking; only if the sequence recurs several times and the person to be compared finally leaves the second preset area together with one or more people is the person judged to exhibit the preset type of interaction behavior, preventing ordinary encounters between friends or companions from being misidentified as illegal passenger carrying and improving identification accuracy. Third, evidence of the person's illegal passenger carrying is output, further saving labor for the relevant personnel. Fourth, the scheme facilitates the management of illegal passenger-carrying behavior by the relevant personnel, helps maintain order in public places, and helps reduce potential safety hazards.
Corresponding to the above method embodiment, the embodiment of the present application further provides an apparatus for identifying a target behavior, as shown in fig. 4, including:
a first acquiring module 401, configured to acquire a first image acquired for a first preset area;
a first determining module 402, configured to determine, if it is determined that a person separates from a vehicle in the first image, that person as the person to be compared;
a second acquiring module 403, configured to acquire a second image acquired for a second preset area;
and a second determining module 404, configured to determine that the person to be compared exhibits the target behavior if it is determined that, in the second image, the person to be compared has the preset type of interaction behavior with other people, and that, in the first image, the person to be compared appears accompanied by other people.
In one embodiment, the first determining module 402 is specifically configured to: if it is determined that a parking behavior exists in the first image and that a person in the parked vehicle separates from the parked vehicle, determine that person as the person to be compared.
In one embodiment, the preset type of interaction behavior includes: the behavior sequence in which the person to be compared approaches other people, talks after approaching, and separates after talking;
or, the preset type of interaction behavior includes: the behavior sequence in which the person to be compared approaches other people, talks after approaching, and separates after talking, where the number of occurrences of the behavior sequence satisfies a preset interaction condition.
In one embodiment, the preset type of interaction behavior includes: any one or more of the behaviors of the person to be compared approaching other people, talking after approaching, and separating after talking;
or, the preset type of interaction behavior includes: any one or more of those behaviors, where the number of occurrences of each of the one or more behaviors reaches its corresponding preset threshold.
In one embodiment, the preset type of interaction behavior further includes: in the second image, the situation in which the person to be compared leaves the second preset area together with other people.
In one embodiment, the apparatus further comprises:
the third acquisition module is used for acquiring a third image acquired for a third preset area; wherein the third preset area is located between the first preset area and the second preset area;
and a third determining module, configured to determine that the person to be compared appears accompanied by other people in the third image.
In one embodiment, the apparatus further comprises:
the third acquisition module is used for acquiring a third image acquired for a third preset area; wherein the third preset area is located between the first preset area and the second preset area;
a fourth determining module, configured to: if it is not determined in the second image that the person to be compared has the preset type of interaction behavior with other people, but the person to be compared appears accompanied by other people in the third image and also appears accompanied by other people in the first image, determine that the person to be compared exhibits the target behavior.
In one embodiment, the second determining module 404 is further configured to: after determining that the person to be compared appears accompanied by other people in the first image, and before determining that the person to be compared exhibits the target behavior, determine that the first image shows the person to be compared entering the vehicle together with the accompanying people and driving the vehicle away.
According to the embodiments of the present application, in a first aspect, illegal passenger carrying is recognized automatically, so that no on-site stakeout by relevant personnel is required, which reduces labor consumption. In a second aspect, in one implementation, a behavior sequence of the person to be compared (approaching another person, talking after approaching, and separating after talking) is detected, and the person to be compared is judged to exhibit the preset type of interaction behavior only after the number of detections meets a preset interaction condition; companions such as friends walking together are therefore not misidentified as engaging in illegal passenger carrying, which improves recognition accuracy. In a third aspect, in one embodiment, evidence of a person's illegal passenger carrying can be output, further saving the labor of relevant personnel. In a fourth aspect, the embodiments facilitate the management of illegal passenger-carrying behavior by relevant personnel, help maintain order in public places, and help reduce potential safety hazards.
Corresponding to the above method embodiment, the embodiment of the present application further provides a target behavior recognition system, as shown in fig. 5, including: a first acquisition device 510, a second acquisition device 520, and a detection server 530, wherein,
the first acquisition device 510 is configured to acquire an image of a first preset area to obtain a first image, detect whether a parked vehicle exists in the first image, and if so, report parking detection information to the detection server 530;
the detection server 530 is configured to acquire at least a part of the first image according to the parking detection information, where the acquired first image includes the parked vehicle; detect whether a person separates from the vehicle in the acquired first image; and if so, determine that person as the person to be compared and send control information about the person to be compared to the first acquisition device 510 and the second acquisition device 520;
the second acquisition device 520 is configured to acquire an image of a second preset area to obtain a second image; detect, based on the control information, whether the person to be compared exists in the second image; and if so, report person detection information to the detection server 530;
the first acquisition device 510 is further configured to detect, based on the control information, whether the person to be compared appears accompanied by other persons in the acquired first image, obtain a detection result, and send the detection result to the detection server 530;
the detection server 530 is further configured to acquire at least a part of the second image based on the person detection information reported by the second acquisition device, where the acquired second image includes the person to be compared accompanied by other persons; detect whether a preset type of interaction behavior between the person to be compared and other persons exists in the acquired second image; and if such interaction behavior exists and the detection result received from the first acquisition device indicates that the person to be compared appears accompanied by other persons in the first image, determine that the person to be compared has the target behavior.
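The reporting flow above can be illustrated with stub classes. This sketch is an assumption-laden simplification: the class name, the frame dictionaries, and the report methods are invented for this example and do not reflect the actual device firmware or server software.

```python
# Illustrative stub of the reporting flow in the first system; class names,
# message formats, and frame dictionaries are invented for this example.
class DetectionServer:
    def __init__(self):
        self.person_to_compare = None

    def on_parking_report(self, first_frames):
        # Acquire the frames containing the parked vehicle and look for a
        # person separating from it; if found, that person becomes the
        # person to be compared (control info would go to both devices).
        for frame in first_frames:
            person = frame.get("person_leaving_vehicle")
            if person:
                self.person_to_compare = person
                return person
        return None

    def on_person_report(self, second_frames, first_device_result):
        # Combine the second-area interaction check with the accompaniment
        # result reported back by the first acquisition device.
        interacted = any(f.get("preset_interaction") for f in second_frames)
        return bool(interacted and first_device_result)

server = DetectionServer()
person = server.on_parking_report([{"vehicle": "V1"},
                                   {"person_leaving_vehicle": "P1"}])
decision = server.on_person_report([{"preset_interaction": True}], True)
```

The point of the split is that the acquisition devices only report lightweight events, while the server acquires the relevant image portions and makes the final target-behavior decision.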
Corresponding to the above method embodiment, the embodiment of the present application further provides another target behavior recognition system, as shown in fig. 6, including: a first acquisition device 610, a second acquisition device 620, and an electronic device 630, wherein,
the first acquiring device 610 is configured to acquire an image of a first preset area, obtain a first image, and send the first image to the electronic device 630;
the second acquisition device 620 is configured to acquire an image of a second preset area to obtain a second image, and send the second image to the electronic device 630;
the electronic device 630 is capable of performing any of the target behavior recognition methods described above.
Corresponding to the above method embodiment, the present application further provides a system for identifying a target behavior, as shown in fig. 7, including: a first acquisition device 710, a second acquisition device 720, and a detection server 730, wherein,
the first acquisition device 710 is configured to acquire an image of a first preset area to obtain a first image; detect whether a person separates from a vehicle in the first image; if so, determine that person as the person to be compared and extract features of the person to be compared from the first image; and send the features of the person to be compared to the detection server 730;
the detection server 730 is configured to send the features of the person to be compared to the second acquisition device 720;
the second acquisition device 720 is configured to acquire an image of a second preset area to obtain a second image; detect, based on the features of the person to be compared, whether the person to be compared exists in the second image and whether the person to be compared exhibits a preset type of interaction behavior; and if such interaction behavior exists, send a prompt message to the detection server 730, where the prompt message indicates that the person to be compared exhibits the preset type of interaction behavior;
the detection server 730 is further configured to send a detection instruction to the first acquisition device 710 after receiving the prompt message;
the first acquisition device 710 is further configured to detect, after receiving the detection instruction and based on the features of the person to be compared, whether the person to be compared appears accompanied by other persons in the acquired first image; and if so, determine that the person to be compared engages in illegal passenger carrying.
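In this variant the acquisition devices match detected persons locally against the features received from the detection server. A common way to compare such feature vectors is cosine similarity; the sketch below assumes fixed-length numeric feature vectors and an illustrative matching threshold of 0.8, neither of which is specified by the document.

```python
# Hypothetical matching step on the acquisition device: the feature-vector
# format and the 0.8 threshold are illustrative assumptions, not values
# taken from the patent.
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length numeric feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def is_person_to_compare(candidate_feature, reference_feature, threshold=0.8):
    """True if a person detected in the second image matches the features of
    the person to be compared extracted from the first image."""
    return cosine_similarity(candidate_feature, reference_feature) >= threshold
```

Pushing the matching to the edge devices in this way keeps raw images local and reduces the data that must flow through the detection server.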
In the above three systems, the method steps of the method embodiments are performed by different devices; the embodiments of the present application do not limit which device performs which step.
The present embodiment also provides an electronic device, as shown in fig. 8, comprising a processor 801 and a memory 802,
a memory 802 for storing a computer program;
the processor 801 is configured to implement any one of the above target behavior recognition methods when executing the program stored in the memory 802.
The memory mentioned in the electronic device may include a random access memory (RAM), or may include a non-volatile memory (NVM), such as at least one magnetic disk memory. Optionally, the memory may also be at least one storage device located remotely from the aforementioned processor.
The processor may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), and the like; or a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
In yet another embodiment provided herein, a computer readable storage medium is provided, in which a computer program is stored, which when executed by a processor implements the method for identifying any one of the target behaviors described above.
In yet another embodiment provided herein, there is also provided a computer program product containing instructions that, when run on a computer, cause the computer to perform the method of identifying any one of the target behaviors described above.
The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, they may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the flows or functions according to the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another, for example, by wire (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wirelessly (e.g., infrared, radio, microwave). The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, hard disk, or magnetic tape), an optical medium (e.g., a DVD), or a semiconductor medium (e.g., a solid state disk (SSD)), among others.
It is noted that relational terms such as first and second are used herein solely to distinguish one entity or action from another, and do not necessarily require or imply any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," and any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
In this specification, the embodiments are described in a related manner; for identical or similar parts, the embodiments may refer to one another, and each embodiment focuses on its differences from the others. In particular, the descriptions of the apparatus, system, electronic device, computer-readable storage medium, and computer program product embodiments are relatively brief because they are substantially similar to the method embodiments; for relevant details, refer to the description of the method embodiments.
The foregoing description is only of the preferred embodiments of the present application and is not intended to limit the scope of the present application. Any modifications, equivalent substitutions, improvements, etc. that are within the spirit and principles of the present application are intended to be included within the scope of the present application.

Claims (14)

1. A method for identifying a target behavior, comprising:
acquiring a first image acquired for a first preset area, and if a person is separated from a vehicle in the first image, determining that person as a person to be compared;
acquiring a second image acquired for a second preset area;
if it is determined that a preset type of interaction behavior between the person to be compared and other persons exists in the second image, and it is determined that the person to be compared appears accompanied by other persons in another first image acquired for the first preset area, determining that the person to be compared has a target behavior; wherein the first image, the second image, and the other first image are acquired sequentially in chronological order.
2. The method of claim 1, wherein the determining that a person is separated from a vehicle in the first image comprises:
determining that a parking behavior exists in the first image and that a person in a parked vehicle separates from the parked vehicle.
3. The method of claim 1, wherein the preset type of interaction behavior comprises: a behavior sequence in which the person to be compared and another person walk close to each other, talk after approaching, and separate after talking;
or, the preset type of interaction behavior comprises: the behavior sequence in which the person to be compared approaches another person, talks after approaching, and separates after talking, wherein the number of occurrences of the behavior sequence meets a preset interaction condition.
4. The method of claim 1, wherein the preset type of interaction behavior comprises any one or more of the following actions: the person to be compared approaching another person, talking after approaching, and separating after talking;
or, the preset type of interaction behavior comprises: any one or more of the foregoing actions, wherein the number of occurrences of each of the one or more actions reaches its corresponding preset threshold.
5. The method according to claim 3 or 4, wherein the interaction behavior of the preset type further comprises:
the person to be compared leaving the second preset area together with another person in the second image.
6. The method of claim 1, wherein before the determining that the person to be compared appears accompanied by other persons in another first image acquired for the first preset area, the method further comprises:
acquiring a third image acquired for a third preset area, and determining that the person to be compared appears accompanied by other persons in the third image; wherein the third preset area is located between the first preset area and the second preset area, and the first image, the second image, the third image, and the other first image are acquired sequentially in chronological order.
7. The method according to claim 1, wherein the method further comprises:
acquiring a third image acquired for a third preset area; wherein the third preset area is located between the first preset area and the second preset area;
if no preset type of interaction behavior between the person to be compared and other persons is determined in the second image, the method further comprises:
determining that the person to be compared appears accompanied by other persons in the third image, determining that the person to be compared appears accompanied by other persons in another first image acquired for the first preset area, and determining that the person to be compared has the target behavior; wherein the first image, the second image, the third image, and the other first image are acquired sequentially in chronological order.
8. The method of claim 1, wherein after the determining that the person to be compared appears accompanied by other persons in the other first image acquired for the first preset area, and before the determining that the person to be compared has the target behavior, the method further comprises:
determining that, in a further first image acquired for the first preset area, the person to be compared enters a vehicle together with the accompanying person and drives the vehicle away; wherein the first image, the second image, the other first image, and the further first image are acquired sequentially in chronological order.
9. An apparatus for identifying a target behavior, comprising:
the first acquisition module is used for acquiring a first image acquired for a first preset area;
the first determining module is configured to determine, if a person is separated from a vehicle in the first image, that person as a person to be compared;
the second acquisition module is used for acquiring a second image acquired for a second preset area;
the second determining module is configured to determine that the person to be compared has a target behavior if it is determined that a preset type of interaction behavior between the person to be compared and other persons exists in the second image, and it is determined that the person to be compared appears accompanied by other persons in another first image acquired for the first preset area; wherein the first image, the second image, and the other first image are acquired sequentially in chronological order.
10. The apparatus of claim 9, wherein the preset type of interaction behavior comprises: a behavior sequence in which the person to be compared and another person walk close to each other, talk after approaching, and separate after talking;
or, the preset type of interaction behavior comprises: the behavior sequence in which the person to be compared approaches another person, talks after approaching, and separates after talking, wherein the number of occurrences of the behavior sequence meets a preset interaction condition.
11. The apparatus of claim 9, wherein the preset type of interaction behavior comprises any one or more of the following actions: the person to be compared approaching another person, talking after approaching, and separating after talking;
or, the preset type of interaction behavior comprises: any one or more of the foregoing actions, wherein the number of occurrences of each of the one or more actions reaches its corresponding preset threshold.
12. An electronic device comprising a processor and a memory;
a memory for storing a computer program;
a processor for implementing the method of any one of claims 1 to 8 when executing a program stored on a memory.
13. A system for identifying a target behavior, comprising: the first acquisition device, the second acquisition device and the detection server, wherein,
the first acquisition device is configured to acquire an image of a first preset area to obtain a first image, detect whether a parked vehicle exists in the first image, and if so, report parking detection information to the detection server;
the detection server is configured to acquire at least a part of the first image according to the parking detection information, wherein the acquired first image includes the parked vehicle; detect whether a person separates from the vehicle in the acquired first image; and if so, determine that person as a person to be compared and send control information about the person to be compared to the first acquisition device and the second acquisition device;
the second acquisition device is configured to acquire an image of a second preset area to obtain a second image; detect, based on the control information, whether the person to be compared exists in the second image; and if so, report person detection information to the detection server;
the first acquisition device is further configured to detect, based on the control information, whether the person to be compared appears accompanied by other persons in another first image, obtain a detection result, and send the detection result to the detection server;
the detection server is further configured to acquire at least a part of the second image based on the person detection information reported by the second acquisition device, wherein the acquired second image includes the person to be compared accompanied by other persons; detect whether a preset type of interaction behavior between the person to be compared and other persons exists in the acquired second image; and if such interaction behavior exists and the received detection result from the first acquisition device indicates that the person to be compared appears accompanied by other persons in the other first image, determine that the person to be compared has a target behavior;
wherein the first image, the second image, and the other first image are acquired sequentially in chronological order.
14. A system for identifying a target behavior, comprising: the first acquisition device, the second acquisition device, and the electronic device of claim 12, wherein,
the first acquisition equipment is used for acquiring images aiming at a first preset area to obtain a first image;
The second acquisition device is used for acquiring images aiming at a second preset area to obtain a second image.
CN202010917711.0A 2020-09-03 2020-09-03 Target behavior identification method, device and system Active CN112036338B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010917711.0A CN112036338B (en) 2020-09-03 2020-09-03 Target behavior identification method, device and system

Publications (2)

Publication Number Publication Date
CN112036338A CN112036338A (en) 2020-12-04
CN112036338B true CN112036338B (en) 2024-02-02

Family

ID=73592335


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109147338A (en) * 2018-08-15 2019-01-04 杭州海康威视系统技术有限公司 A kind of recognition methods of illegal parking, device and server
CN110428604A (en) * 2019-07-30 2019-11-08 山东交通学院 It is a kind of based on the taxi illegal parking of GPS track data and map datum monitoring and method for early warning
KR102041871B1 (en) * 2019-04-23 2019-11-27 주식회사 시큐원 Parking patrol and management system, and method of parking patrol and management using the same

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant