CN111222373B - Personnel behavior analysis method and device and electronic equipment - Google Patents


Info

Publication number: CN111222373B
Authority: CN (China)
Prior art keywords: target, information, area, target area, monitoring
Legal status: Active (granted)
Application number: CN201811420072.6A
Other languages: Chinese (zh)
Other versions: CN111222373A (en)
Inventors: 刘璐, 尹义
Current assignee: Shenzhen Intellifusion Technologies Co Ltd
Original assignee: Shenzhen Intellifusion Technologies Co Ltd
Application filed by Shenzhen Intellifusion Technologies Co Ltd
Priority to CN201811420072.6A
Publication of application CN111222373A and of grant CN111222373B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The embodiment of the invention provides a personnel behavior analysis method and apparatus, and an electronic device, wherein the method comprises the following steps: acquiring a face image to be retrieved of a target person; determining a target area in a map interface; determining, according to the target area, a monitoring database for retrieving the face image to be retrieved, wherein the monitoring database comprises face images of all monitored objects in the target area; retrieving the face image to be retrieved in the monitoring database to obtain target face information of the target person; and obtaining behavior information of the target person in the target area according to the target face information. Because the face image to be retrieved is searched only in the monitoring database corresponding to the target area, the retrieval workload is reduced, the retrieval speed is increased, the consumption of hardware resources is reduced, and the accuracy of the acquired behavior information of the target person is improved.

Description

Personnel behavior analysis method and device and electronic equipment
Technical Field
The present invention relates to the field of image processing, and in particular, to a method and apparatus for analyzing personnel behavior, and an electronic device.
Background
Image retrieval is one of the techniques in common use at present, for example face image retrieval or the retrieval of other images. Current face image retrieval mainly works as follows: a large number of face images are collected and stored in an image library, and when a face image needs to be retrieved, the similarity between the face image to be retrieved and each face image stored in the image library is calculated, so that the image with the highest similarity to the image to be retrieved, or whose similarity meets a preset condition, is obtained as the retrieval result. In this technical scheme, the number of face images in the image library is too large, so the accuracy of behavior analysis for a target person is low.
Disclosure of Invention
The embodiment of the invention provides a personnel behavior analysis method and apparatus, and an electronic device, which can improve the accuracy of behavior analysis for a target person.
In a first aspect, an embodiment of the present invention provides a method for analyzing personnel behavior, including:
acquiring a face image to be retrieved of a target person;
determining a target area in a map interface;
determining, according to the target area, a monitoring database for retrieving the face image to be retrieved, wherein the monitoring database comprises face images of all monitored objects in the target area;
retrieving the face image to be retrieved in the monitoring database to obtain target face information of the target person;
and obtaining behavior information of the target person in the target area according to the target face information.
Optionally, the determining the target area in the map interface includes:
receiving a frame-selection activation request for the map interface, and activating a frame-selection function in response to the frame-selection activation request;
receiving frame-selection information in the map interface, wherein the frame-selection information comprises an area in which the frame-selected image pickup devices are located;
and determining the area in which the frame-selected image pickup devices are located as the target area.
Optionally, the determining the target area in the map interface includes:
receiving a selection request for a scene button preset in the map interface, and determining the scene area corresponding to the selected scene button as the target area, wherein each scene button corresponds to one scene area, and at least one image pickup device is disposed in each scene area.
Optionally, the determining, according to the target area, a monitoring database for retrieving the face image to be retrieved includes:
acquiring monitoring data of each image pickup device in the target area, wherein the monitoring data comprises face images of monitored objects;
and integrating the monitoring data of each image pickup device in the target area to form the monitoring database.
Optionally, the target face information includes statistical information and/or identity information;
the obtaining behavior information of the target person in the target area according to the target face information includes:
analyzing the statistical information and/or the identity information to obtain the behavior information of the target person in the target area.
In a second aspect, an embodiment of the present invention provides a personnel behavior analysis apparatus, including:
an acquisition module, configured to acquire a face image to be retrieved of a target person;
an area determining module, configured to determine a target area in a map interface;
a database determining module, configured to determine, according to the target area, a monitoring database for retrieving the face image to be retrieved, wherein the monitoring database comprises face images of all monitored objects in the target area;
a matching module, configured to retrieve the face image to be retrieved in the monitoring database to obtain target face information of the target person;
and an analysis module, configured to obtain behavior information of the target person in the target area according to the target face information.
In a third aspect, an embodiment of the present invention provides an electronic device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the personnel behavior analysis method provided by the embodiments of the present invention.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the personnel behavior analysis method provided by the embodiments of the present invention.
In the embodiment of the invention, a face image to be retrieved of a target person is acquired; a target area is determined in a map interface; a monitoring database for retrieving the face image to be retrieved is determined according to the target area, wherein the monitoring database comprises face images of all monitored objects in the target area; the face image to be retrieved is retrieved in the monitoring database to obtain target face information of the target person; and behavior information of the target person in the target area is obtained according to the target face information. Because the face image to be retrieved is searched only in the monitoring database corresponding to the target area, the retrieval workload is reduced, the retrieval speed is increased, the consumption of hardware resources is reduced, and the accuracy of the acquired behavior information of the target person is improved.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings required for the embodiments or the description of the prior art are briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the invention, and that other drawings can be obtained from them by a person skilled in the art without inventive effort.
FIG. 1 is a schematic diagram of an alternative network architecture provided by an embodiment of the present invention;
FIG. 2 is a schematic diagram of a map interface according to an embodiment of the present invention;
FIG. 3 is a schematic illustration of another map interface provided by an embodiment of the present invention;
FIG. 4 is a schematic illustration of another map interface provided by an embodiment of the present invention;
FIG. 5 is a schematic illustration of another map interface provided by an embodiment of the present invention;
FIG. 6 is a schematic flow chart of a method for analyzing personnel behaviors according to an embodiment of the present invention;
FIG. 7 is a flow chart of another method for analyzing personnel behavior according to an embodiment of the present invention;
FIG. 8 is a flow chart of another method for analyzing personnel behavior according to an embodiment of the present invention;
FIG. 9 is a schematic structural diagram of a personnel behavior analysis device according to an embodiment of the present invention;
FIG. 10 is a schematic diagram of another device for analyzing personnel behavior according to an embodiment of the present invention;
FIG. 11 is a schematic diagram of another device for analyzing personnel behavior according to an embodiment of the present invention;
FIG. 12 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. It is apparent that the described embodiments are only some, rather than all, of the embodiments of the present invention. All other embodiments obtained by a person skilled in the art based on the embodiments of the invention without inventive effort fall within the scope of the invention.
The network architecture to which the solutions of the embodiments of the present application may be applied is first described by way of example with reference to the accompanying drawings.
Referring to FIG. 1, FIG. 1 is a schematic diagram of an optional network architecture according to an embodiment of the present invention. As shown in FIG. 1, the network architecture includes a plurality of image pickup devices 101 disposed at preset places, and a monitoring terminal 102.
The image pickup devices 101 disposed at the preset places are used for capturing portrait information of the monitored objects.
The preset places may be locations planned in advance in the area to be monitored for installing image pickup devices, such as a subway station, a supermarket, a public security bureau, or a customs checkpoint; these installation locations are known, specific scenes. Alternatively, the preset places may be unknown positions in the area to be monitored, such as a roadside, a riverside, or a grassland in the suburbs, in which case each image pickup device 101 may communicate with the monitoring terminal 102 to report its position information. Alternatively, the monitoring terminal 102 may locate each image pickup device based on satellite positioning technology. It should be noted that the number of image pickup devices may be one or more, the number of preset places may be one or more, and the number of monitored objects may be one or more.
The monitoring terminal 102 communicates with the plurality of image pickup devices 101 in the monitoring area and is used for recognizing and analyzing the face information of each monitored object, where the face information includes a face image. The face image may be recognized and analyzed by a face image recognition engine disposed at the monitoring terminal to obtain the identity information of each monitored object. Of course, the face image recognition engine may also be disposed elsewhere; for example, it may be disposed on the image pickup device, in which case the image pickup device recognizes and analyzes the face image and compares it against the face images in an identity card library to obtain the identity information of the monitored object. Optionally, an information processor may be disposed on the image pickup device to generate structured information from the face image and identity information of the monitored object and send it to the monitoring terminal. Face recognition and analysis may also be performed by a face engine of a background server; in this case, the image pickup device sends the collected face information to the background server, whose face recognition engine extracts, recognizes, and analyzes the face image in the face information and obtains the identity information of the monitored object by comparison against the face images in the identity card library. Optionally, the background server may likewise generate structured information from the face image and identity information of the monitored object and send it to the monitoring terminal; this information constitutes the monitoring information. The identity information may be a name, an identity card number, an occupation, and the like.
Optionally, the monitoring terminal 102 may be any one of a computer, a notebook computer, a tablet computer, a mobile phone, and the like. An application for displaying map information and monitored-object information is disposed in the monitoring terminal. When a search instruction for a target object is detected, the application may display, through a map interface, the position information of all the image pickup devices that have captured the target object, where the target object may be one or more monitored objects and the search instruction includes the feature information of the target object. The feature information may be a name, an identity card number, an occupation, face features, a time period, a similarity, a gender, an age group, a hat, glasses, race, picture quality, and the like.
It should be noted that, after the monitoring terminal 102 collects the portrait information of one or more monitored objects through the image pickup devices disposed at the preset places, the map interface may by default display the position information of all currently disposed image pickup devices. The position information of an image pickup device may be longitude and latitude coordinates based on its geographic position, so the longitude and latitude coordinates of each image pickup device may be added to the map information and displayed at the corresponding coordinates in the map interface. The longitude and latitude coordinates may be obtained based on satellite positioning technology. The portrait information of the monitored objects collected by the image pickup devices may be displayed in a preset information field on the interface. For example, FIG. 2 is a schematic diagram of an optional monitoring interface according to an embodiment of the present invention. As shown in FIG. 2, the disposed image pickup devices are shown on a map, and the portrait information of the monitored objects they collect may be shown in an information field on the left side of the map. Alternatively, only the face image of each monitored object may be displayed in the information field. Alternatively, the user may click the "view acquisition" button on the interface to view all snapshot head portraits in the information field, as shown in FIG. 2; click the "area select" button to view all snapshot face images in a selected area, as shown in FIG. 3; retrieve a target person by uploading a target face image in an upload box, as shown in FIG. 4; or input the identity information of a target person through a search box on the interface, as shown in FIG. 5.
In addition, the monitoring terminal 102 is further provided with a monitoring database for storing monitoring information, where the monitoring information may be structured face images comprising identity information and face images.
It should be noted that the above network architecture is only a preferred network architecture provided by the embodiments of the present invention, and should not be construed as limiting the embodiments of the present invention. In addition, the portrait information in the present application may be referred to as image information, the map interface in the present application may be referred to as a map mode, and the face image in the present application may be referred to as a portrait, a head portrait, or a person image.
Referring to FIG. 6, FIG. 6 is a flowchart of a personnel behavior analysis method according to an embodiment of the present invention. As shown in FIG. 6, the method includes the following steps:
201. Acquiring a face image to be retrieved of the target person.
The face image to be retrieved may be uploaded by the user through an upload box on the interface. The upload may be performed by selecting an image storage path, by dragging the image from an image collection into the upload box, by scanning (for example, scanning a photo or image of the target person with a scanner), or by photographing and uploading (for example, photographing the target person with an image pickup device during a border inspection and then uploading the image). The uploaded face image to be retrieved is displayed in the upload box.
202. A target area is determined in the map interface.
The target area is an area selected by the user on the map interface. As shown in FIG. 3, the selection may be performed through the "area selection" function of the map interface; in this case, the map interface may be provided with scene buttons corresponding to specific scene areas, for example a "public security" button corresponding to a public security bureau scene, a "supermarket" button corresponding to a supermarket scene, a "market" button corresponding to a market scene, a "customs" button corresponding to a customs scene, a "subway" button corresponding to a subway scene, and the like. The target area may also be determined by frame selection on the map interface through the "frame-select camera" function, for example by issuing a frame-selection instruction on the map interface. The frame-selection instruction may be issued with a mouse or with a touch device.
In step 202, the target area may also be referred to as a search area or a screening area, and there may be one or more target areas.
203. Determining, according to the target area, a monitoring database for retrieving the face image to be retrieved, wherein the monitoring database comprises face images of all monitored objects in the target area.
In this step, the monitoring database may be a local database or a remote database, and the database may refer to data stored on one or more storage devices. The monitoring database may store structured face images of monitored objects, where a structured face image includes a face image, identity information, time information, position information, and feature information. The face image may be extracted by a face image recognition engine from the face information collected by the image pickup devices. The identity information may be obtained by comparison against the corresponding face images in an identity card library of the public security system, and may include a name, an identity card number, a gender, and the like. The time information may be the time at which the image pickup device captured the image, and may be a time point, a time period, etc. The position information may be the location of the image pickup device that captured the target person, and may be a domain name, longitude and latitude coordinates, etc. The feature information includes the feature information of all monitored objects, such as face features, time period, similarity, gender, age group, hat, glasses, race, and picture quality; it may be extracted from the monitoring information of the monitored objects collected by the image pickup devices, and further, may be extracted by the face image recognition engine from the images of the monitored objects captured by the image pickup devices.
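As a non-limiting illustration of what one such structured record might look like, a minimal Python sketch follows; the class and field names are assumptions introduced for illustration, not part of the disclosure:

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class StructuredFaceRecord:
    """Illustrative structured face image record as described above."""
    face_image_path: str       # face image extracted by the recognition engine
    camera_id: str             # image pickup device that captured the frame
    captured_at: datetime      # time information (a time point)
    longitude: float           # position information of the capturing device
    latitude: float
    identity: dict = field(default_factory=dict)  # e.g. {"name": ..., "id_number": ..., "gender": ...}
    features: dict = field(default_factory=dict)  # e.g. {"age_group": ..., "glasses": ..., "picture_quality": ...}
```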
It should be noted that, when there are multiple target areas, there may correspondingly be multiple monitoring databases. In addition, the monitoring database may be disposed in the monitoring terminal, with the monitoring terminal requesting data directly; or it may be disposed in the image pickup devices and connected to the monitoring terminal remotely. When the monitoring database is disposed in the image pickup devices, the monitoring terminal may, after determining the target area, obtain the corresponding monitoring data from the monitoring databases in the image pickup devices over the network to form a temporary monitoring database.
204. Retrieving the face image to be retrieved in the monitoring database to obtain the target face information of the target person.
In step 204, retrieving the face image to be retrieved in the monitoring database may mean comparing the face image to be retrieved with the face images in the monitoring database to obtain their similarities, and, according to a preset similarity threshold, selecting the face images in the monitoring database whose similarity exceeds the threshold as the target face images of the target person and obtaining the face information corresponding to them. That is, by acquiring the similarity between the face image to be retrieved and the face images of all monitored objects in the monitoring database, the face images with a similarity greater than the preset threshold are selected as the target face images. The similarity may be obtained through a face recognition engine: the face image to be retrieved and the face images of all monitored objects in the monitoring database are fed into the face recognition engine for traversal comparison, yielding the similarity between the face image to be retrieved and each face image in the monitoring database; the higher the similarity, the higher the probability that the two images show the same person. In some possible embodiments, the face images in the monitoring database may also be partitioned by monitored object, for example: all captured face images of monitored object A are stored in one partition of the monitoring database, and one of them is selected as an index image. All face images of monitored object A can then be obtained through the index image, so only the similarity between the face image to be retrieved and the index image needs to be computed, which avoids a large amount of retrieval work.
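A minimal sketch of the threshold-based comparison described in step 204, assuming face images have already been mapped to embedding vectors by a face recognition engine; the function name and the cosine-similarity choice are illustrative assumptions:

```python
import numpy as np

def retrieve_target_faces(query_vec, monitoring_db, threshold=0.8):
    """Traverse the monitoring database and keep the records whose
    similarity to the query exceeds the preset threshold (step 204)."""
    hits = []
    for record, vec in monitoring_db:  # (record, embedding) pairs
        sim = float(np.dot(query_vec, vec)
                    / (np.linalg.norm(query_vec) * np.linalg.norm(vec)))
        if sim > threshold:
            hits.append((sim, record))
    # higher similarity first: more likely to be the same person
    return sorted(hits, key=lambda h: h[0], reverse=True)
```

Under the index-image optimization described above, `monitoring_db` would hold only one index embedding per monitored object's partition, which shrinks the traversal.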
The target face information includes the face images, statistical information, and identity information of the target person in the monitoring database. The statistical information may include the number of times the target person was captured by the image pickup devices, the capture time period, the weather at capture time, and the like; the identity information may include the name, identity card number, gender, occupation, and the like of the target person.
In addition, when there are multiple target areas, there may be multiple monitoring databases; the face image to be retrieved may be compared with the face images in each monitoring database to obtain the similarities, and the face images whose similarity exceeds the similarity threshold are selected as target face images of the target person and assembled into a set.
The target face image of the target person may be a structured face image; when the face information of the target person is a structured face image, structured information such as the statistical information and identity information of the target person is obtained at the same time as the face image is retrieved. Unstructured or semi-structured face information may also be used. In the embodiments of the present application, the face image of the target person is preferably structured face information, and the monitoring database is preferably a structured monitoring database.
In step 204, the target face information of the target person may contain one or more face images, and the target face information may also be referred to as the retrieval result.
205. Obtaining behavior information of the target person in the target area according to the target face information.
The target face information is the target face information of the target person obtained in step 204; the target face image is preferably a structured face image, which includes the face image, statistical information, identity information, and other structured information of the target person. The behavior information may include appearances, stays, abnormal behavior, companions, and the like. An appearance may be a captured appearance. A stay may be judged by whether the time the target person spends in the field of view of a certain image pickup device exceeds a preset stay-time threshold; if so, a stay is determined, and the dwell time can be computed by a video tracking algorithm, which is not repeated here. Abnormal behavior may be detected by analyzing the period-over-period or year-over-year appearance counts of the target person, for example: the target person appeared in supermarkets 4 times per month in the first 5 months of the year but 50 times in the fifth month, which may be regarded as abnormal behavior. It may also be detected by analysis against feature information such as name, identity card number, occupation, face features, time period, similarity, gender, age group, hat, glasses, race, and picture quality, for example: the target person is an IT worker but is repeatedly captured in the suburbs during working hours on weekdays, or the target person is a full-time stock trader but is repeatedly captured on the rooftop of a building. Companions may be determined according to the number of times the image pickup devices capture other monitored objects in the field of view at the same time as the target person, for example: the target person is captured 100 times in area A in a month, and a certain monitored object appears in 88 of those images; the two may then be considered to have a companion relationship.
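For illustration, minimal sketches of two of the judgments just described, the stay judgment and the companion judgment; the thresholds, names, and data shapes are assumptions:

```python
from collections import Counter
from datetime import datetime

def is_stay(entered: datetime, left: datetime, stay_threshold_s: float = 60.0) -> bool:
    """Stay judgment: time spent in one device's field of view exceeds the threshold."""
    return (left - entered).total_seconds() > stay_threshold_s

def likely_companions(frames_with_target, min_ratio: float = 0.8):
    """Companion judgment: count how often each other monitored object
    appears in the same frame as the target person."""
    counts = Counter(pid for frame in frames_with_target for pid in frame)
    total = len(frames_with_target)
    return [pid for pid, n in counts.items() if n / total >= min_ratio]
```

With the example above (100 captures in area A, one monitored object present in 88 of them), `likely_companions` would return that object at the default ratio, since 88/100 exceeds 0.8.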
The structured face image may be displayed on the map interface: for example, the face image of the target person in the target area may be displayed, and the statistical information and identity information may be displayed as well. Furthermore, the face image, statistical information, and identity information may be displayed together in the information field of the map interface, so that the map and the information field are separated and the user can view the map more easily. The face image may also be displayed on the map itself, so that the user can intuitively see where the target person appeared; and the face images of the monitored objects captured by a certain image pickup device may be obtained by clicking that device, in which case a link relationship needs to be established between the image pickup device on the map and the corresponding monitoring database.
The behavior information may also be referred to as a behavior rule or behavior attribute, and the field of view of an image pickup device may also be referred to as its visual field or imaging range.
In the above steps, the face image to be retrieved of the target person is retrieved within the target area, which reduces the computation required for retrieval and saves hardware resources. In addition, because the target person is retrieved within the target area, the retrieval result is specific to that area, the analysis is more targeted, and the accuracy of the acquired behavior information of the target person is improved.
It should be noted that the personnel behavior analysis method provided in the embodiments of the present invention may be applied to a personnel behavior analysis device, for example a computer, a server, a mobile phone, or another device capable of performing personnel behavior analysis, which may also be referred to as a monitoring terminal or terminal device.
In the embodiment of the invention, a face image to be retrieved of a target person is acquired; a target area is determined in a map interface; a monitoring database for retrieving the face image to be retrieved is determined according to the target area, wherein the monitoring database comprises face images of all monitored objects in the target area; the face image to be retrieved is retrieved in the monitoring database to obtain the target face image of the target person, the target face image comprising statistical information and identity information; and behavior information of the target person in the target area is obtained according to the target face image. Because the face image to be retrieved is searched only in the monitoring database corresponding to the target area, the retrieval workload is reduced, the retrieval speed is increased, the consumption of hardware resources is reduced, and the accuracy of the acquired behavior information of the target person is improved.
Referring to FIG. 7, FIG. 7 is a flowchart of another personnel behavior analysis method according to an embodiment of the present invention. As shown in FIG. 7, the method includes the following steps:
301. Acquiring a face image to be retrieved of the target person.
302. Receiving a frame-selection activation request for the map interface, and activating the frame-selection function in response to the frame-selection activation request.
303. Receiving frame-selection information in the map interface, wherein the frame-selection information comprises the area in which the frame-selected image pickup devices are located.
304. Determining the area in which the frame-selected image pickup devices are located as the target area.
305. Determining, according to the target area, a monitoring database for retrieving the face image to be retrieved, wherein the monitoring database comprises face images of all monitored objects in the target area.
306. Retrieving the face image to be retrieved in the monitoring database to obtain the target face information of the target person.
307. Obtaining behavior information of the target person in the target area according to the target face information.
In step 302, referring to FIG. 3, the frame-selection activation request may be sent to the system when the user triggers (for example, clicks or selects) the "frame-select camera" control. Activating the frame-selection function can be understood as follows: when the system receives the frame-selection activation request, it issues a frame-selection activation instruction to activate the frame-selection function, so that the user can perform frame selection on the map interface. The triggering may be a mouse click, a press of a mouse button held for a certain time (such as 1 second or 2 seconds), a keyboard shortcut, a touch action on a touch device (such as a tap, the number of touch points, the touch duration, or a gesture track), or a combination of these, such as mouse plus keyboard, or touch point plus touch duration.
In step 303, the frame-selection information may be obtained when the user performs frame selection on the map interface after the frame-selection function is activated in step 302. The frame-selection information includes a frame-selection boundary, which determines a frame-selection area. A frame-selected image pickup device can be understood as an image pickup device identifier located inside the frame-selection area on the map interface, where the identifier corresponds to the position of the device in the real scene: if image pickup device 01 is installed at site A in the real scene, its identifier is placed at the corresponding site A in the map interface. In addition, the identifier may be associated with the device in the real scene through a link relationship, so that clicking the identifier of image pickup device 01 in the map interface returns the information of the corresponding device. The identifier may be a selectable pixel block in the map interface, shaped like a camera or carrying a number. The frame-selection area can also be adjusted by adjusting the frame-selection boundary; for example, adding a smaller square boundary inside a square frame-selection area turns the area into a ring-shaped region, or into the smaller square region. In some possible cases, a selectable pixel block may fall exactly on the frame-selection boundary; the boundary may then be treated as either a selected or a non-selected state, configured according to actual needs.
In step 304, the determined target area may be the frame-selection area formed by the frame-selection boundary, or the effective monitoring area of the image pickup devices within the frame-selection area. For example: if the target area is the frame-selection area and the boundary is square, the target area is the square region enclosed by the boundary; if the target area is the effective monitoring area of the devices within the frame-selection area, the boundary is again square, and there is only one image pickup device in the area whose effective monitoring area is circular, then the target area is that circular region. When the target area is the effective monitoring area of a device and this area is larger than the frame-selection area formed by the boundary, either the effective monitoring area or the frame-selection area may be used as the target area.
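A minimal sketch of deciding which image pickup device identifiers fall inside a rectangular frame-selection boundary; the coordinate convention and names are assumptions, and a real map interface may work in projected or pixel coordinates instead:

```python
def frame_selected_cameras(cameras, west, south, east, north):
    """Return the ids of camera identifiers whose longitude/latitude
    falls inside the rectangular frame-selection boundary."""
    return [cam_id
            for cam_id, (lon, lat) in cameras.items()
            if west <= lon <= east and south <= lat <= north]

# Example: two of three devices fall inside the boundary.
cams = {"cam01": (114.05, 22.55), "cam02": (114.10, 22.60), "cam03": (113.90, 22.40)}
print(frame_selected_cameras(cams, 114.0, 22.5, 114.2, 22.7))  # ['cam01', 'cam02']
```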
In this embodiment, the target area is determined by frame selection on the map interface, so the user can select the target area flexibly; this improves the selectivity and pertinence of the target area and thus further improves the accuracy of behavior analysis for the target person.
It should be noted that steps 302, 303, and 304 in this embodiment may be regarded as an optional implementation of step 202 in the embodiment of FIG. 6.
Optionally, the determining the target area in the map interface includes:
receiving a selection instruction for a scene button preset in the map interface, and determining, in response to the selection instruction, the scene area corresponding to the selected scene button as the target area, wherein each scene button corresponds to one scene area, and at least one image pickup device is disposed in each scene area.
The preset scene buttons may be scene buttons corresponding to scene areas, and the scene areas for which buttons are set may be areas commonly used for analysis and retrieval, such as a "public security" button for a public security bureau scene, a "supermarket" button for a supermarket scene, a "market" button for a market scene, a "customs" button for a customs scene, or a "subway" button for a subway scene. The selection instruction may be a preset instruction corresponding to a scene button, obtained when the user triggers (for example, clicks or selects) the button; for example, when the user triggers the "supermarket" button, an instruction that the button has been triggered is sent to and received by the system. The triggering may be performed by a mouse click, a keyboard shortcut, a touch action on a touch device, and so on. In this embodiment, the correspondence between a scene button and a scene area can be understood as a link relationship: clicking the scene button selects the corresponding scene area on the map interface. It can also be understood as follows: the scene area on the map interface is a selectable pixel block, the scene button links to that pixel block, and the scene area may be the area containing one or more image pickup device identifiers. In addition, after a scene button is selected, the corresponding scene areas may be further screened. For example, in some possible scenes the "subway" button corresponds to multiple subway scenes, which can be screened by clicking image pickup device identifiers in the corresponding scene areas to remove or select them, or by frame-selecting identifiers in the corresponding scene areas to remove or select them; alternatively, a selection area may first be frame-selected in the map interface and the corresponding scene area then selected within it through the scene button.
It should be noted that this embodiment may be regarded as an alternative to step 202 in the embodiment of FIG. 6, and also as an alternative to steps 302, 303, and 304 in the embodiment of FIG. 7.
Optionally, the determining, according to the target area, a monitoring database for retrieving the face image to be retrieved includes:
acquiring monitoring data of each image pickup device in the target area, wherein the monitoring data comprises face images of monitored objects;
and integrating the monitoring data of each image pickup device in the target area to form the monitoring database.
There may be one or more target areas, and one or more image pickup devices in a target area. When the target area is obtained by frame selection, after the target area is determined, the monitoring data of all image pickup devices in the target area may be integrated into a temporary monitoring database, which is then used as the monitoring database for retrieving the face image to be retrieved. For example: suppose the frame-selected target area contains four image pickup devices A, B, C, and D, whose collected monitoring data are a, b, c, and d respectively; a temporary monitoring database can be established and the monitoring data a, b, c, and d written into it, so that it contains the monitoring data of the target area and the face image to be retrieved can be retrieved within the target area by searching this temporary database. The integration may be performed in capture-time order, writing the monitoring data of each device into the temporary database chronologically; or it may be performed by device, writing the monitoring data of each image pickup device into a separate partition of the temporary database, one partition per device. Retrieving the monitoring data of the target area's devices to form a temporary monitoring database in this way increases the flexibility of the database and the accuracy of its data. When the target area is obtained through a scene button, the monitoring data of the image pickup devices in the corresponding scene area may likewise be integrated into a temporary monitoring database, following the integration method described above for frame selection. In addition, since a scene button corresponds to a scene area, a monitoring database corresponding to the scene area may be established in advance, storing the monitoring data collected by all image pickup devices in that area. For example: suppose the public security scene area contains three image pickup devices A, B, and C, whose collected monitoring data are a, b, and c respectively; a monitoring database is built in advance for this scene area, storing the monitoring data a, b, and c, updated in real time by the three devices, and when the "public security" button is triggered, this database is used as the monitoring database for retrieving the face image to be retrieved. That is, the monitoring database has an index relationship with the corresponding scene button, and the corresponding database can be obtained by indexing through the button.
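A minimal sketch of the time-ordered integration strategy, using an in-memory SQLite table as the temporary monitoring database; the schema and field names are assumptions:

```python
import sqlite3

def build_temporary_database(cameras_in_area):
    """Integrate the monitoring data of every device in the target area
    into a temporary monitoring database, ordered by snapshot time."""
    db = sqlite3.connect(":memory:")  # temporary store, discarded after the retrieval
    db.execute("CREATE TABLE monitoring (camera_id TEXT, captured_at TEXT, face_image TEXT)")
    rows = [(cam_id, rec["captured_at"], rec["face_image"])
            for cam_id, records in cameras_in_area.items()
            for rec in records]
    rows.sort(key=lambda r: r[1])  # time-ordered integration (ISO timestamps assumed)
    db.executemany("INSERT INTO monitoring VALUES (?, ?, ?)", rows)
    return db
```

The per-device partitioning strategy described above would instead write each device's rows into its own table or partition.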
It should be noted that, in other possible embodiments, the retrieval result for the target person may first be obtained from one large database and then screened by scene to obtain the target face information for the target area, as shown in FIG. 8.
In this embodiment, integrating a monitoring database corresponding to the target area increases the pertinence of the monitoring data it contains; because all the data were collected by the target area's image pickup devices, the retrieval range for the face image to be retrieved is reduced and the retrieval speed is improved.
In some possible embodiments, after step 306, the obtained target face information may further be displayed sorted according to a preset sorting rule. The target face information obtained in step 306 may be structured face images, each including the face image, statistical information, identity information, and other structured information of the target person. The preset sorting rule may sort by time, by image pickup device, by appearance frequency, by identity information, and so on. For example: sorting by time orders the results by capture time (that is, the time information in the structured face information), with the latest capture placed either first or last. Sorting by image pickup device orders the results by the number of the capturing device: the monitoring information of device No. 1 is placed first, that of device No. 2 after it, and within each device the monitoring information may additionally be sorted, for example chronologically. Sorting by appearance frequency orders the results by how many times each device captured the target person, for example: device No. 1 captured the target person 46 times and device No. 2 captured the target person 57 times, so the retrieval results of device No. 2 are placed first, on the higher-count-first principle. Sorting may also be by captures per day, for example: the target person was captured 8 times on August 2, 12 times on August 3, and 2 times on August 4, so the retrieval results of all devices for August 3 are placed first, again on the higher-count-first principle.
It should be noted that the above sorting rules may be used alone or in combination, for example combining date and time, and there may be one or more target persons to be retrieved. For example: when there is one target person, the retrieval results may be displayed sorted by time, so that the user can directly obtain the latest monitoring information of the target person, which helps the user analyze the person's latest behavior, such as the place where the person has appeared frequently of late. The retrieval results may also be displayed sorted by image pickup device, so that the user can intuitively see which device captured the target person, which helps the user analyze where the person appeared and the person's movement track, such as how the person moved from the place of first appearance to the place of last appearance. The retrieval results may also be displayed sorted by appearance frequency to derive behavior information, so that the user can more intuitively see in which time period or place the target person appeared frequently, which helps the user analyze the person's behavior rules, such as whether the person was at work or off work. In this way the user can intuitively understand the movements of the target person.
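A minimal sketch of the preset sorting rules, applied to the retrieval results as a list of record dictionaries; the rule names and dictionary keys are assumptions:

```python
from collections import Counter

def sort_results(results, rule="time"):
    """Order retrieval results by one of the preset sorting rules."""
    if rule == "time":        # latest snapshot first
        return sorted(results, key=lambda r: r["captured_at"], reverse=True)
    if rule == "camera":      # group by device number, then by time within a device
        return sorted(results, key=lambda r: (r["camera_id"], r["captured_at"]))
    if rule == "frequency":   # results of the device with the most captures first
        freq = Counter(r["camera_id"] for r in results)
        return sorted(results, key=lambda r: freq[r["camera_id"]], reverse=True)
    return results
```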
Optionally, the target face information includes statistical information and/or identity information;
the obtaining behavior information of the target person in the target area according to the target face information includes:
analyzing the statistical information and/or the identity information to obtain the behavior information of the target person in the target area.
The statistical information may be computed by the monitoring terminal. For example, if 500 target face images of the target person are obtained, the target person was captured by the image pickup devices 500 times. If on a certain day the first capture of the target person was at 9:30 and the last at 17:30, the snapshot time period can be counted as 9:30-17:30. If the target person was captured 200 times on rainy days and 300 times on sunny days, the weather may be determined from the local weather forecast, or recognized outdoors through image recognition, with the recognition result taken as authoritative. The identity information may be the structured information in a structured face image and may include the name, age, identity card number, occupation, criminal record, and the like. The behavior information may be obtained by analyzing the statistical information of the target person: for example, if the target person is counted as appearing in the subway during 17:00-19:30, the behavior information may be that the person is commuting home from work; if two target persons are captured together 20 times in one day, they may be determined to be companions. The behavior information may also be obtained by analyzing the identity information of the target person: for example, if the target person's occupation is driver and the person is captured 5 times in one day at the traffic police administration, it can be concluded that the person is handling traffic violations; if the target person is female, is captured 5 times in one day at a gynecological hospital, was never captured there before, and the identity information does not indicate that she is a gynecologist, it can be concluded that she is seeing a doctor.
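As a toy illustration of such analysis, a minimal sketch follows; every rule, threshold, and key below is an assumption made up for this sketch, not a rule from the disclosure:

```python
def infer_behavior(stats, identity):
    """Derive behavior information from statistical and identity information."""
    findings = []
    # statistical rule: regular subway appearances in the evening
    if stats.get("subway_period") == ("17:00", "19:30"):
        findings.append("commuting home from work")
    # identity rule: a driver repeatedly captured at the traffic police administration
    if identity.get("occupation") == "driver" and stats.get("traffic_police_captures", 0) >= 5:
        findings.append("handling traffic violations")
    return findings or ["no rule matched"]
```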
Optionally, the statistical information comprises at least one of a snapshot time period and a snapshot frequency, and the identity information comprises at least one of an occupation and a criminal record;
the analyzing the statistical information and/or the identity information to obtain the behavior information of the target person in the target area includes:
analyzing at least one of the snapshot time period, the snapshot frequency, the occupation, and the criminal record to obtain the behavior information of the target person in the target area.
The statistical information may be the structured information in a structured face image. The snapshot time period may be the span of time in a preset period whose start point is the first capture of the target person and whose end point is the last, for example: in September, if the target person was first captured on September 5 and last captured on September 10, the snapshot time period can be taken as September 5 to September 10. The preset period may be a date selection made by the user through the map interface; for example, a calendar may be provided on the map interface so that the user can select September as the preset period. An input box accepting a time period may also be provided on the map interface for the user to enter the preset period.
Analyzing the snapshot time period to obtain the behavior information of the target person in the target area can be understood as follows: the number of appearances of the target person in a certain time period within the target area is counted, and the behavior information is obtained from the statistics. The behavior information may be an appearance period, understood as the period during which the image pickup devices captured the monitored object. Counting the appearances may mean counting the effective appearances of a monitored object in the period, where one effective appearance spans from the object entering the field of view of an image pickup device to the object leaving it; each such span can be clipped as one segment of surveillance video related to the monitored object, so the number of such video segments is the number of effective appearances. In addition, images can be extracted from the surveillance video of the monitored object for face recognition, yielding one or more face images of the object and forming structured face images. That is, the surveillance video is the original image source: face recognition is performed on the original images to obtain the face images, the structured information of the monitored object is obtained through the face recognition engine, and the face images and structured information are combined into structured face images. As a possible embodiment, in each segment of surveillance video, the image with the best quality may be selected through an image quality assessment (IQA) algorithm for face recognition, giving the face image of the monitored object; the number of appearances of the object can then be counted from the number of retrieved target face images. For example: monitored object A is captured by the image pickup devices in 25 segments of surveillance video during September, and one best-quality image is extracted from each segment for face recognition, so the 25 segments yield 25 face images of the object. When object A is the target of a retrieval, 25 target face images can be retrieved, so the number of effective appearances is 25, and the times of the first and last appearance can be derived from them, giving the appearance period of the target person.
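A minimal sketch of deriving the effective appearance count and appearance period from the retrieved target face records, one record per surveillance video segment as described above; the function and field names are assumptions:

```python
def appearance_statistics(target_records):
    """Count effective appearances and derive the appearance period."""
    times = sorted(r["captured_at"] for r in target_records)  # assumes >= 1 record
    return {
        "effective_appearances": len(times),  # e.g. 25 in the example above
        "first_seen": times[0],
        "last_seen": times[-1],
    }
```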
According to the appearance time period of the target person, it can be analyzed whether the target person is commuting to work, getting off work, or strolling. For example, if the target area is a supermarket and the target person appears around 17:30, the target person can be considered to be off duty; if the target area is a subway station and the target person appears around 6:00, the target person can be considered to be on the way to work; if the target area is a square and the target person appears around 20:30, the target person may be considered to be strolling.
The above analysis of the snapshot frequency information can be understood as: counting the number of times the target person is captured in the target area within a certain period, and deriving the snapshot frequency information from the counting result; the behavior information here may be the occurrence frequency, which may be the number of effective occurrences of a monitored object within that period, counted as described above and not repeated here. Obtaining the behavior information of the target person in the target area according to the snapshot frequency information within the snapshot time period can be understood as: analyzing the behavior pattern of the target person in the target area by counting the snapshots of the target person, for example: if the target person appears in a restaurant 8 times between 10:00 and 13:30, the target person can be judged to be a takeaway courier; or, if the target person is not a bank employee yet appears at the entrance of a bank 10 times in one day, the target person can be judged to be loitering near the bank.
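The time-period rules above and the frequency rules just described can be combined into a simple rule table. The thresholds, scene names, and labels below are illustrative assumptions, not values taken from the patent.

```python
from datetime import time

def infer_behavior(scene: str, appear_at: time, count_in_window: int = 0) -> str:
    """Toy rule table mapping (scene, appearance time, frequency) to a behavior label."""
    if scene == "restaurant" and time(10, 0) <= appear_at <= time(13, 30) \
            and count_in_window >= 8:
        return "likely takeaway courier"
    if scene == "subway" and appear_at <= time(7, 0):
        return "likely commuting to work"
    if scene == "supermarket" and appear_at >= time(17, 0):
        return "likely off duty"
    if scene == "square" and appear_at >= time(20, 0):
        return "likely strolling"
    return "no rule matched"

print(infer_behavior("restaurant", time(11, 15), count_in_window=8))
```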
It should be noted that the snapshot frequency differs from the occurrence frequency: the snapshot frequency can be understood as the number of times the image pickup device captures the monitored object, while the occurrence frequency is the number of times the target person actually appears; during a single appearance, the target person may be captured many times by the image pickup device. For example, the target person may be captured 10 times by the image pickup device within 10 seconds, but has actually appeared only once.
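A sketch of collapsing repeated snapshots into appearances follows: captures separated by less than a chosen gap are treated as one occurrence. The 30-second gap is an assumption; the patent does not fix a value.

```python
from datetime import datetime, timedelta
from typing import List

def occurrence_count(capture_times: List[datetime],
                     gap: timedelta = timedelta(seconds=30)) -> int:
    """Count appearances, merging snapshots that are closer together than `gap`."""
    if not capture_times:
        return 0
    ordered = sorted(capture_times)
    occurrences = 1
    for prev, cur in zip(ordered, ordered[1:]):
        if cur - prev > gap:  # a new appearance starts after a long-enough gap
            occurrences += 1
    return occurrences

# Ten snapshots within 10 seconds count as a single occurrence.
base = datetime(2018, 9, 5, 8, 30, 0)
print(occurrence_count([base + timedelta(seconds=i) for i in range(10)]))  # -> 1
```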
The identity information may be structured information in a structured face image, and the occupation information of the target person may be structured information within the identity information; the structured information in the identity information may further include the name, identity card number, residential address, contact details, and the like. Obtaining the behavior information of the target person in the target area according to the occupation information can be understood as: analyzing the behavior pattern of the target person according to the degree of association between the occupation and the target area. For example: if the target person is found to be an ordinary finance worker yet appears in a target area such as a high-end hotel 10 times per month, the behavior of the target person can be judged to be abnormal. The degree of association between an occupation and an area can be understood as the probability that people of that occupation appear in that specific area; for example, civil servants are likely to appear in an administrative center, so the association between their occupation and the administrative center is high, and their frequent appearance there can be regarded as normal.
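The association-degree check can be sketched as a lookup of prior probabilities keyed by (occupation, area type), where a low prior combined with a high observed frequency is flagged as abnormal. The table values and the threshold below are illustrative assumptions.

```python
# Prior probability that people of a given occupation appear in a given area type.
ASSOCIATION = {
    ("civil servant", "administrative center"): 0.9,
    ("finance clerk", "luxury hotel"): 0.1,
}

def is_abnormal(occupation: str, area: str,
                monthly_visits: int, threshold: int = 5) -> bool:
    """Flag a low-association pairing that nevertheless occurs frequently."""
    prior = ASSOCIATION.get((occupation, area), 0.5)  # unknown pairs stay neutral
    return prior < 0.3 and monthly_visits >= threshold

print(is_abnormal("finance clerk", "luxury hotel", monthly_visits=10))  # -> True
```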
The identity information may further include the crime record of the target person, which may be obtained from publicly disclosed archives of the public security authority; if the target person has no crime record, this field may be marked as none or simply not displayed. According to the crime record of the target person, a target area to be searched can be chosen in a targeted manner. For example, if the target person has a record of bank robbery, a bank or a gold store can be taken as the target area for retrieval, and it can be further analyzed whether the target person is suspected of casing the location; for instance, if the target person is frequently captured at a gold store, the target person can be considered suspected of casing it.
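A small sketch of this targeted-search idea follows: the crime record selects which scene types to search first, and frequent captures at such a scene raise a casing suspicion. The mapping and threshold are assumptions of the sketch.

```python
from typing import Dict, List

# Hypothetical mapping from a crime record to the scene types worth searching first.
CRIME_TO_SCENES = {"bank robbery": ["bank", "gold store"]}

def suspect_casing(crime_record: str,
                   captures_by_scene: Dict[str, int],
                   threshold: int = 5) -> List[str]:
    """Return the scenes where the person is captured often enough to suspect casing."""
    scenes = CRIME_TO_SCENES.get(crime_record, [])
    return [s for s in scenes if captures_by_scene.get(s, 0) >= threshold]

print(suspect_casing("bank robbery", {"gold store": 7}))  # -> ['gold store']
```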
In this embodiment, once the face image is retrieved and the structured information of the face image is obtained, additional behavior information of the target person can be derived, which improves the depth and accuracy of the behavior analysis of the target person.
In this embodiment, a plurality of alternative implementations are added on the basis of the embodiment shown in fig. 6, and the accuracy of acquiring the target behavior information can be further improved.
Referring to fig. 9, fig. 9 is a schematic structural diagram of a personnel behavior analysis device according to an embodiment of the present invention, as shown in fig. 9, including:
an acquisition module 401, configured to acquire a face image to be retrieved of a target person;
an area determining module 402, configured to determine a target area in a map interface;
a database determining module 403, configured to determine a monitoring database for retrieving the face image to be retrieved according to the target area, where the monitoring database includes face images of all people in the target area;
the matching module 404 is configured to search the face image to be searched in the monitoring database to obtain target face information of the target person, where the target face information includes statistical information and identity information;
and the analysis module 405 is configured to obtain behavior information of the target person in the target area according to the target face information.
Optionally, as shown in fig. 10, the area determining module 402 includes:
a frame selection unit 4021, configured to receive a frame selection activation request for the map interface, and activate a frame selection function in response to the frame selection activation request;
a receiving unit 4022 configured to receive frame selection information in the map interface, where the frame selection information includes an area where the frame-selected image capturing apparatus is located;
a determination unit 4023 configured to determine, as a target area, an area in which the framed image pickup apparatus is located.
Optionally, as shown in fig. 9, the area determining module 402 is configured to receive a selected instruction of a preset scene button in a map interface, and determine, in response to the selected instruction, a scene area corresponding to the selected scene button as the target area, where each scene button corresponds to one scene area, and at least one image capturing device is disposed in each scene area.
Optionally, as shown in fig. 11, the database determining module 403 includes:
an acquisition unit 4031 for acquiring monitoring data of each image pickup device in the target area, the monitoring data including a face image of a monitored object;
An integrating unit 4032, configured to integrate the monitoring data of each camera device in the target area to form the monitoring database.
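A minimal structural sketch of the acquisition and integrating units follows; the data classes and the merge step are assumptions used only to show how per-camera monitoring data could be merged into one searchable database.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Camera:
    cam_id: str
    face_images: List[str]  # face images extracted from this camera's footage

@dataclass
class MonitoringDatabase:
    face_images: List[str] = field(default_factory=list)

def build_database(cameras_in_target_area: List[Camera]) -> MonitoringDatabase:
    """Merge the monitoring data of every camera in the target area into one database."""
    db = MonitoringDatabase()
    for cam in cameras_in_target_area:           # acquisition unit 4031
        db.face_images.extend(cam.face_images)   # integrating unit 4032
    return db
```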
Optionally, as shown in fig. 9, the target face information includes statistical information and/or identity information;
the analysis module 405 is configured to analyze the statistical information and/or the identity information to obtain behavior information of the target person in the target area.
Optionally, as shown in fig. 9, the statistical information includes at least one of a snapshot time period and a snapshot frequency, and the identity information includes at least one of occupation and crime records;
the analysis module 405 is configured to analyze at least one of the snapshot time period, the snapshot frequency, the occupation, and the crime records to obtain behavior information of the target person in the target area.
It should be noted that the above-described apparatus may be applied as a personnel behavior analysis device, for example: a computer, a server, a mobile phone, or another device capable of performing personnel behavior analysis.
The personnel behavior analysis device provided by the embodiment of the present invention can implement each implementation manner and corresponding beneficial effects in the method embodiments of fig. 6 and fig. 7, and in order to avoid repetition, the description is omitted here.
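For orientation, the five modules of figs. 9 to 11 can be pictured as a pipeline. The class below is a structural sketch only, with all names assumed; the patent specifies module responsibilities, not an implementation.

```python
class PersonBehaviorAnalyzer:
    """Wires the five modules into one retrieval-and-analysis pipeline."""

    def __init__(self, acquire, determine_area, build_db, match, analyze):
        self.acquire = acquire                # acquisition module 401
        self.determine_area = determine_area  # area determining module 402
        self.build_db = build_db              # database determining module 403
        self.match = match                    # matching module 404
        self.analyze = analyze                # analysis module 405

    def run(self, query):
        face = self.acquire(query)            # face image to be retrieved
        area = self.determine_area()          # target area from the map interface
        db = self.build_db(area)              # monitoring database for that area
        info = self.match(face, db)           # target face information
        return self.analyze(info, area)       # behavior information
```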
Referring to fig. 12, fig. 12 is a schematic structural diagram of an electronic device according to an embodiment of the present invention, as shown in fig. 12, including: a memory 602, a processor 601 and a computer program stored on the memory 602 and executable on the processor 601, wherein:
the processor 601 is configured to call a computer program stored in the memory 602, and perform the following steps:
acquiring a face image to be searched of a target person;
determining a target area in a map interface;
according to the target area, a monitoring database for retrieving the face images to be retrieved is determined, wherein the monitoring database comprises face images of all monitoring objects in the target area;
searching the face image to be searched in the monitoring database to obtain target face information of the target person;
and obtaining behavior information of the target person in the target area according to the target face information.
Optionally, the determining the target area in the map interface performed by the processor 601 includes:
receiving a frame selection activation request of the map interface, and responding to the frame selection activation request to activate a frame selection function;
receiving frame selection information in the map interface, wherein the frame selection information comprises an area where a frame-selected image pickup device is located;
and determining the area where the framed image pickup device is located as the target area.
Optionally, the determining the target area in the map interface performed by the processor 601 includes:
receiving a selected instruction of a scene button preset in the map interface, and determining, in response to the selected instruction, the scene area corresponding to the selected scene button as the target area, where each scene button corresponds to one scene area and at least one image pickup device is arranged in each scene area.
Optionally, the determining, by the processor 601, a monitoring database for retrieving the face image to be retrieved according to the target area includes:
acquiring monitoring data of each camera device in the target area, wherein the monitoring data comprises face images of monitoring objects;
and integrating the monitoring data of each camera device in the target area to form the monitoring database.
Optionally, the target face information includes statistical information and/or identity information;
the obtaining, by the processor 601, the behavior information of the target person in the target area according to the target face information includes:
analyzing the statistical information and/or the identity information to obtain the behavior information of the target person in the target area.
Optionally, the statistical information comprises at least one of a snapshot time period and a snapshot frequency, and the identity information comprises at least one of occupation and crime records;
the analyzing the statistical information and/or the identity information performed by the processor 601, to obtain the behavior information of the target person in the target area includes:
analyzing at least one of the snapshot time period, the snapshot frequency, the occupation, and the crime records to obtain the behavior information of the target person in the target area.
It should be noted that the electronic device may be a personnel behavior analysis device, for example: a computer, a server, a mobile phone, or another device capable of performing personnel behavior analysis.
The electronic device provided by the embodiment of the present invention can implement each implementation manner and the corresponding beneficial effects of the method embodiments of fig. 6 and fig. 7; to avoid repetition, the details are not described here again.
The embodiment of the invention also provides a computer readable storage medium on which a computer program is stored; when executed by a processor, the computer program implements each process of the personnel behavior analysis method embodiments provided by the embodiments of the invention and can achieve the same technical effects; to avoid repetition, the details are not described here again.
Those skilled in the art will appreciate that all or part of the methods of the above embodiments may be implemented by a computer program stored on a computer readable storage medium, and the program, when executed, may include the steps of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), a random access memory (RAM), or the like.
The foregoing disclosure is illustrative of the present invention and is not to be construed as limiting the scope of the invention, which is defined by the appended claims.

Claims (9)

1. A method of human behavior analysis, comprising:
acquiring a face image to be searched of a target person;
determining a target area in a map interface;
according to the target area, a monitoring database for retrieving the face images to be retrieved is determined, wherein the monitoring database comprises face images of all monitoring objects in the target area;
searching the face image to be searched in the monitoring database to obtain target face information of the target person;
obtaining behavior information of the target person in the target area according to the target face information;
the determining the target area in the map interface comprises:
receiving a selected instruction of a scene button preset in a map interface, and determining, in response to the selected instruction, a scene area corresponding to the selected scene button as the target area, wherein each scene button corresponds to one scene area, at least one image pickup device is arranged in each scene area, each scene area on the map interface is a selectable pixel block, and a link exists between the scene button and the selectable pixel block so that the scene button is connected to the corresponding selectable pixel block through the link.
2. The method of claim 1, wherein the determining a target area in a map interface comprises:
receiving a frame selection activation request of the map interface, and responding to the frame selection activation request to activate a frame selection function;
receiving frame selection information in the map interface, wherein the frame selection information comprises an area where a frame-selected image pickup device is located;
and determining the area where the framed image pickup device is located as a target area.
3. The method according to any one of claims 1 to 2, wherein the determining a monitoring database for retrieving the face image to be retrieved according to the target area comprises:
acquiring monitoring data of each camera device in the target area, wherein the monitoring data comprises face images of monitoring objects;
and integrating the monitoring data of each camera device in the target area to form the monitoring database.
4. The method according to any one of claims 1 to 2, wherein the target face information comprises statistical information and/or identity information;
the obtaining the behavior information of the target person in the target area according to the target face information comprises the following steps:
analyzing the statistical information and/or the identity information to obtain the behavior information of the target person in the target area.
5. The method of claim 4, wherein the statistical information comprises at least one of a snapshot time period and a snapshot frequency, and the identity information comprises at least one of occupation and crime records;
the step of analyzing the statistical information and/or the identity information to obtain the behavior information of the target person in the target area includes:
analyzing at least one of the snapshot time period, the snapshot frequency, the occupation, and the crime records to obtain the behavior information of the target person in the target area.
6. A person behavior analysis apparatus, comprising:
the acquisition module is used for acquiring the face image to be searched of the target person;
the area determining module is used for determining a target area in the map interface;
the database determining module is used for determining a monitoring database for retrieving the face images to be retrieved according to the target area, wherein the monitoring database comprises face images of all people in the target area;
the matching module is used for searching the face image to be searched in the monitoring database to obtain target face information of the target person;
the analysis module is used for obtaining the behavior information of the target person in the target area according to the target face information;
the region determination module is further configured to:
receiving a selected instruction of a scene button preset in a map interface, and determining, in response to the selected instruction, a scene area corresponding to the selected scene button as the target area, wherein each scene button corresponds to one scene area, at least one image pickup device is arranged in each scene area, each scene area on the map interface is a selectable pixel block, and a link exists between the scene button and the selectable pixel block so that the scene button is connected to the corresponding selectable pixel block through the link.
7. The apparatus of claim 6, wherein the region determination module comprises:
the frame selection sub-module is used for receiving a frame selection activation request of the map interface and responding to the frame selection activation request to activate a frame selection function;
the receiving sub-module is used for receiving frame selection information in the map interface, wherein the frame selection information comprises an area where a frame-selected image pickup device is located;
and the determining submodule is used for determining the area where the framed image pickup device is located as a target area.
8. An electronic device, comprising: memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the steps in the human behavior analysis method according to any one of claims 1 to 5 when the computer program is executed.
9. A computer readable storage medium, characterized in that the computer readable storage medium has stored thereon a computer program which, when executed by a processor, implements the steps in the person behavior analysis method according to any one of claims 1 to 5.
CN201811420072.6A 2018-11-26 2018-11-26 Personnel behavior analysis method and device and electronic equipment Active CN111222373B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811420072.6A CN111222373B (en) 2018-11-26 2018-11-26 Personnel behavior analysis method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811420072.6A CN111222373B (en) 2018-11-26 2018-11-26 Personnel behavior analysis method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN111222373A CN111222373A (en) 2020-06-02
CN111222373B true CN111222373B (en) 2024-01-05

Family

ID=70828770

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811420072.6A Active CN111222373B (en) 2018-11-26 2018-11-26 Personnel behavior analysis method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN111222373B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021102760A1 (en) * 2019-11-27 2021-06-03 深圳云天励飞技术有限公司 Method and apparatus for analyzing behavior of person, and electronic device
CN111813979A (en) * 2020-07-14 2020-10-23 杭州海康威视数字技术股份有限公司 Information retrieval method and device and electronic equipment
CN111737292B (en) * 2020-07-16 2021-01-05 腾讯科技(深圳)有限公司 Data retrieval method and related device
CN112052733A (en) * 2020-07-31 2020-12-08 中国建设银行股份有限公司 Database construction method, face recognition device and electronic equipment
CN112560958A (en) * 2020-12-17 2021-03-26 北京赢识科技有限公司 Person reception method and device based on portrait recognition and electronic equipment
CN113064935A (en) * 2021-03-30 2021-07-02 重庆中科云从科技有限公司 Data analysis method, apparatus and medium
CN114863364B (en) * 2022-05-20 2023-03-07 碧桂园生活服务集团股份有限公司 Security detection method and system based on intelligent video monitoring

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102724482A (en) * 2012-06-18 2012-10-10 西安电子科技大学 Intelligent visual sensor network moving target relay tracking system based on GPS (global positioning system) and GIS (geographic information system)
CN103841374A (en) * 2012-11-27 2014-06-04 华为技术有限公司 Display method and system for video monitoring image
CN104519319A (en) * 2014-12-11 2015-04-15 北京中星微电子有限公司 Method and device for surveillance video display of electronic map
CN108197565A (en) * 2017-12-29 2018-06-22 深圳英飞拓科技股份有限公司 Target based on recognition of face seeks track method and system

Also Published As

Publication number Publication date
CN111222373A (en) 2020-06-02

Similar Documents

Publication Publication Date Title
CN111222373B (en) Personnel behavior analysis method and device and electronic equipment
US11527071B2 (en) Person search system and person search method
CN109886078B (en) Retrieval positioning method and device for target object
US9141184B2 (en) Person detection system
CN111131902B (en) Method for determining target object information and video playing equipment
CN112183353B (en) Image data processing method and device and related equipment
CN101778260B (en) Method and system for monitoring and managing videos on basis of structured description
US20220092881A1 (en) Method and apparatus for behavior analysis, electronic apparatus, storage medium, and computer program
US20210297580A1 (en) Person monitoring system and person monitoring method
TWI747341B (en) File application method, device and storage medium
WO2020259099A1 (en) Information processing method and device, and storage medium
CN110263613A (en) Monitor video processing method and processing device
KR20080075091A (en) Storage of video analysis data for real-time alerting and forensic analysis
JP6013923B2 (en) System and method for browsing and searching for video episodes
US20210337133A1 (en) Method, apparatus and computer program for generating and displaying a heatmap based on video surveillance data
WO2020103293A1 (en) Method, device, and electronic device for presenting individual search information
US11657623B2 (en) Traffic information providing method and device, and computer program stored in medium in order to execute method
US20180150683A1 (en) Systems, methods, and devices for information sharing and matching
JP2020047069A (en) Information processing system, and method and program for controlling information processing system
CN108073858A (en) Crowd massing monitoring identifying system based on depth camera
BR112021011817A2 (en) INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD AND PROGRAM
CN115830076B (en) Personnel track video intelligent analysis system
JP4976036B2 (en) Image search support device
WO2021102760A1 (en) Method and apparatus for analyzing behavior of person, and electronic device
JP7235612B2 (en) Person search system and person search method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant