WO2021102760A1 - A personnel behavior analysis method, apparatus, and electronic device - Google Patents

A personnel behavior analysis method, apparatus, and electronic device

Info

Publication number
WO2021102760A1
Authority
WO
WIPO (PCT)
Prior art keywords
target
information
area
target area
target person
Prior art date
Application number
PCT/CN2019/121363
Other languages
English (en)
French (fr)
Inventor
刘璐
尹义
Original Assignee
深圳云天励飞技术有限公司
Priority date
Filing date
Publication date
Application filed by 深圳云天励飞技术有限公司
Priority to PCT/CN2019/121363
Publication of WO2021102760A1


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition

Definitions

  • The present invention relates to the field of image processing, and in particular to a personnel behavior analysis method, apparatus, and electronic device.
  • Image retrieval, such as face image retrieval or the retrieval of other images, is one of the most commonly used technologies today.
  • The main approach to face image retrieval is to collect a large number of face images and store them in an image library.
  • When face image retrieval is needed, the similarity between the face image to be retrieved and each face image stored in the image library is computed; the image with the highest similarity to the image to be retrieved, or whose similarity meets a preset condition, is returned as the retrieval result.
  • Because the number of face images in the image library is too large, the accuracy of behavior analysis for the target person is low.
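For illustration only, here is a minimal sketch of the library-wide retrieval just described, assuming face images have already been encoded into fixed-length feature vectors by some face recognition engine; the 0.8 threshold and all names are hypothetical, not taken from the patent:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def retrieve(query_vec: np.ndarray, library: dict, threshold: float = 0.8) -> list:
    """Compare the query against every face vector in the library and
    return (face_id, similarity) pairs meeting the preset condition,
    best match first."""
    scored = [(face_id, cosine_similarity(query_vec, vec))
              for face_id, vec in library.items()]
    # With a very large image library this full scan is exactly the
    # workload the patent's target-area filtering tries to avoid.
    return sorted((s for s in scored if s[1] >= threshold),
                  key=lambda s: s[1], reverse=True)
```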
  • To address this, embodiments of the present invention provide a personnel behavior analysis method, apparatus, and electronic device that can improve the accuracy of behavior analysis for a target person.
  • In a first aspect, an embodiment of the present invention provides a personnel behavior analysis method, including: acquiring a face image of a target person to be retrieved; determining a target area in a map interface; determining, according to the target area, a monitoring database for retrieving the face image to be retrieved, the monitoring database including the face images of all persons in the target area; retrieving the face image to be retrieved in the monitoring database to obtain target face information of the target person; and obtaining, according to the target face information, behavior information of the target person in the target area.
  • Optionally, determining the target area in the map interface includes: receiving a frame selection activation request for the map interface, and activating a frame selection function in response; receiving frame selection information in the map interface, the frame selection information including the area where the frame-selected camera devices are located; and determining the area where the frame-selected camera devices are located as the target area.
  • Optionally, determining the target area in the map interface includes: receiving a selection request for a scene button preset in the map interface, and determining the scene area corresponding to the selected scene button as the target area, where each scene button corresponds to one scene area and each scene area is provided with at least one camera device.
  • Optionally, determining, according to the target area, the monitoring database for retrieving the face image to be retrieved includes: acquiring the monitoring data of each camera device in the target area, the monitoring data including the face images of monitored objects; and integrating the monitoring data of each camera device in the target area to form the monitoring database.
  • Optionally, the target face information includes statistical information and/or identity information, and obtaining the behavior information of the target person in the target area according to the target face information includes: analyzing the statistical information and/or the identity information to obtain the behavior information of the target person in the target area.
  • In a second aspect, an embodiment of the present invention provides an image retrieval apparatus, including:
  • an acquisition module, used to acquire the face image of the target person to be retrieved;
  • an area determination module, used to determine the target area in the map interface;
  • a database determination module, configured to determine, according to the target area, a monitoring database for retrieving the face image to be retrieved, the monitoring database including the face images of all persons in the target area;
  • a matching module, used to retrieve the face image to be retrieved in the monitoring database to obtain the target face information of the target person; and
  • an analysis module, used to obtain, according to the target face information, the behavior information of the target person in the target area.
  • In a third aspect, an embodiment of the present invention provides an electronic device, including a memory, a processor, and a computer program stored on the memory and runnable on the processor; when the processor executes the computer program, the steps of the personnel behavior analysis method provided by the embodiments of the present invention are implemented.
  • In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the steps of the personnel behavior analysis method provided by the embodiments of the present invention are implemented.
  • In the embodiments of the present invention, the face image of the target person to be retrieved is acquired; a target area is determined in the map interface; a monitoring database for retrieving the face image to be retrieved is determined according to the target area, the monitoring database including the face images of all monitored objects in the target area.
  • The face image to be retrieved is retrieved in the monitoring database to obtain the target face information of the target person, and the behavior information of the target person in the target area is obtained according to the target face information. Because the face image of the target person is retrieved only in the monitoring database of the corresponding target area, the retrieval workload is reduced and retrieval is faster, hardware resource consumption is reduced, and the accuracy of the acquired behavior information of the target person is improved.
  • FIG. 1 is a schematic diagram of an optional network architecture provided by an embodiment of the present invention.
  • FIG. 2 is a schematic diagram of a map interface provided by an embodiment of the present invention.
  • FIG. 3 is a schematic diagram of another map interface provided by an embodiment of the present invention.
  • FIG. 4 is a schematic diagram of another map interface provided by an embodiment of the present invention.
  • FIG. 5 is a schematic diagram of another map interface provided by an embodiment of the present invention.
  • FIG. 6 is a schematic flowchart of a personnel behavior analysis method provided by an embodiment of the present invention.
  • FIG. 7 is a schematic flowchart of another personnel behavior analysis method provided by an embodiment of the present invention.
  • FIG. 8 is a schematic flowchart of another personnel behavior analysis method provided by an embodiment of the present invention.
  • FIG. 9 is a schematic structural diagram of a personnel behavior analysis apparatus provided by an embodiment of the present invention.
  • FIG. 10 is a schematic structural diagram of another personnel behavior analysis apparatus provided by an embodiment of the present invention.
  • FIG. 11 is a schematic structural diagram of another personnel behavior analysis apparatus provided by an embodiment of the present invention.
  • FIG. 12 is a schematic structural diagram of an electronic device provided by an embodiment of the present invention.
  • FIG. 1 is a schematic diagram of an optional network architecture according to an embodiment of the present invention.
  • The network architecture includes multiple camera devices 101 deployed at preset locations and a monitoring terminal 102.
  • The multiple camera devices 101 deployed at multiple preset locations are used to collect the portrait information of monitored objects.
  • The preset locations may be pre-planned camera installation locations in the area to be monitored, such as subways, supermarkets, public security offices, and customs; these installation locations are known, specific scenes. Optionally, the multiple preset locations may be unknown locations in the area to be monitored, such as a roadside, a riverside, or a patch of grass in the suburbs.
  • Each camera device 101 can communicate with the monitoring terminal 102 to report its own location information.
  • Optionally, the monitoring terminal 102 may locate the position of each camera device based on satellite positioning technology.
  • There may be one or more camera devices, one or more preset locations, and one or more monitored objects.
  • The monitoring terminal 102 communicates with the multiple camera devices 101 in the monitoring area and is used to recognize and analyze the portrait information of each monitored object.
  • The portrait information includes face images, which can be recognized and analyzed by a face image recognition engine set in the monitoring terminal to obtain the identity information of each monitored object.
  • The face recognition engine may also be a face image recognition engine set elsewhere, for example on a camera device; in that case, the camera device recognizes and analyzes the face image and compares it against the face images of persons in the ID card library to obtain the identity information of the monitored object.
  • The camera device may also be equipped with an information processor used to generate structured information from the face image and the identity information of the monitored object and send it to the monitoring terminal.
  • Face recognition and analysis can also be performed through the face recognition engine of a back-end server: the camera device sends the collected portrait information to the back-end server, whose face recognition engine extracts, recognizes, and analyzes the face image in the portrait information to obtain the identity information of the monitored object.
  • The back-end server can also send the structured face image and identity information of the monitored object to the monitoring terminal, where this information forms the monitoring information.
  • The identity information can be a name, ID number, occupation, or other identity information.
  • The monitoring terminal 102 may be any terminal such as a desktop computer, a notebook computer, a tablet computer, or a mobile phone.
  • The monitoring terminal is provided with an application for displaying map information and monitored-object information.
  • When a retrieval instruction for a target object is detected, the application can display, on the map interface, all the camera devices that captured the target object.
  • The target object can be one or more monitored objects, and the retrieval instruction contains the feature information of the target object.
  • The feature information may include name, ID number, occupation, facial features, time period, similarity, gender, age group, hat, glasses, race, picture quality, and the like.
  • The monitoring terminal 102 collects the portrait information of one or more monitored objects through the multiple camera devices deployed at the multiple preset locations, and all currently deployed camera devices can be displayed on the map interface.
  • The position information of a camera device can be based on the latitude and longitude coordinates of its geographic location: the coordinates of each camera device are added to the map information, and the device's position information is displayed at the corresponding latitude and longitude coordinates on the map interface. The latitude and longitude coordinates can be obtained based on satellite positioning technology.
  • The portrait information of the monitored objects collected by the multiple camera devices can also be displayed in a preset information column on the interface. For example, FIG. 2 is a schematic diagram of an optional monitoring interface according to an embodiment of the present invention.
  • In FIG. 2, the deployed camera devices are displayed on the map, and the currently deployed cameras can be listed in the information bar on the left side of the map.
  • Alternatively, only the face image of each monitored object may be displayed in the information bar.
  • The user can click the "View Collection" button on the interface to view all captured face images in the information bar, as shown in FIG. 2.
  • The user can view all the face images captured in a selected area in the information bar by clicking the "area selection" button on the interface, as shown in FIG. 3.
  • The user can retrieve a target person by uploading the target face image in the "upload box", as shown in FIG. 4.
  • The user can also enter the identity information of the target person in the search box on the interface to retrieve the target person, as shown in FIG. 5.
  • The monitoring terminal 102 is also provided with a monitoring database for storing monitoring information; the monitoring information may be structured face images that include both identity information and face images.
  • The above network architecture is only a preferred network architecture provided by an embodiment of the present invention and should not be regarded as limiting the embodiments of the present invention.
  • The portrait information in this application may also be called image information; the map interface may also be called map mode; and the face image may also be called a portrait, avatar, or person image.
  • FIG. 6 is a schematic flowchart of a personnel behavior analysis method according to an embodiment of the present invention. As shown in FIG. 6, the method includes the following steps:
  • 201. Acquire the face image of the target person to be retrieved.
  • The face image to be retrieved can be obtained by the user uploading it in the "upload box" on the interface.
  • The upload can be done by selecting the image storage path, or by dragging the image from an image collection into the "upload box".
  • It can also be uploaded by scanning, such as scanning a photo or image of the target person with a scanner, or by photographing, for example capturing an image of the target person with a camera device during border inspection and then uploading it.
  • After uploading, the face image to be retrieved is displayed in the "upload box".
  • 202. Determine a target area in the map interface.
  • The target area is the area selected by the user on the map interface; as shown in FIG. 3, it can be selected through the "area selection" function of the map interface.
  • The map interface can also be provided with scene buttons corresponding to specific scene areas, for example a "Public Security" button corresponding to the public security bureau scene, a "Supermarket" button corresponding to the supermarket scene, a "Mall" button corresponding to the shopping mall scene, a "Customs" button corresponding to the customs scene, and a "Metro" button corresponding to the subway scene.
  • The target area can also be determined by frame selection on the map interface through "frame selection camera", for example using a frame selection command to frame an area on the map interface.
  • If no camera device falls within the framed area, the user can be prompted that the selection is invalid.
  • The frame selection instruction can be issued by mouse or by a touch device.
  • The target area may also be referred to as a retrieval area or a screening area, and there may be one or more target areas.
  • 203. Determine, according to the target area, a monitoring database for retrieving the face image to be retrieved, the monitoring database including the face images of all monitored objects in the target area.
  • The monitoring database may be a local database or a remote database; further, the database may refer to data stored on one or more storage devices.
  • The monitoring database can store the structured face images of the monitored objects, where a structured face image includes the face image, identity information, time information, location information, feature information, and other information.
  • The face image is extracted by the face image recognition engine from the portrait information collected by the camera devices. The identity information can be obtained by comparing the face image against the ID database of the public security system and can include name, ID number, gender, and so on; the time information can be the time at which the image was captured by the camera, such as a time point or time period; and the location information can be the location of the camera device that captured the target person, such as an area name or latitude and longitude coordinates.
  • The feature information includes the feature information of all monitored objects, such as the collected facial features, time period, similarity, gender, age, hat, glasses, race, and picture quality. It can be obtained by extracting the monitoring information of all monitored objects collected by the camera devices; further, it can be extracted by the face image recognition engine from the images captured by the camera devices.
  • There may also be multiple monitoring databases.
  • The monitoring database can be set in the monitoring terminal, which then makes data requests directly, or it can be set in the camera devices and connected remotely to the monitoring terminal. When the monitoring database is set in the camera devices, after the target area is determined, the monitoring terminal can obtain the corresponding monitoring data from the monitoring databases in the camera devices over the network to form a temporary monitoring database.
  • 204. Retrieve the face image to be retrieved in the monitoring database to obtain the target face information of the target person.
  • Retrieving the face image to be retrieved in the monitoring database may mean comparing the face image to be retrieved with the face images in the monitoring database to obtain their similarity; according to a preset similarity threshold, the face images in the monitoring database whose similarity exceeds the threshold are selected as the target face images of the target person, and the face information corresponding to those target face images is obtained.
  • The similarity can be obtained through the face recognition engine: the face image to be retrieved and the face images of all monitored objects in the monitoring database are fed into the face recognition engine for traversal comparison, giving the similarity between the face image to be retrieved and each face image in the monitoring database. The higher the similarity, the higher the probability that the two images show the same person.
  • The monitoring database can also be partitioned per monitored object. For example, one partition stores only the face images of monitored object A, and one face image is selected as the index image; all face images of monitored object A can then be obtained through the index image. The face image to be retrieved is compared with the index image to obtain their similarity, which can greatly reduce the retrieval work.
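A minimal sketch of the index-image idea, reusing the hypothetical cosine_similarity helper from the earlier sketch; each partition holds one person's face images plus a chosen index vector, and all names here are hypothetical:

```python
def retrieve_with_index(query_vec, partitions, threshold=0.8):
    """Two-stage retrieval over a per-person partitioned database.
    partitions: person_id -> {"index_vec": vector, "faces": [images]}.
    The query is compared only against each partition's index image;
    on a match, all of that person's face images are returned."""
    hits = {}
    for person_id, part in partitions.items():
        sim = cosine_similarity(query_vec, part["index_vec"])
        if sim >= threshold:
            # One comparison per monitored person instead of one per
            # stored image, which is the saving described above.
            hits[person_id] = {"similarity": sim, "faces": part["faces"]}
    return hits
```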
  • The target face information includes the face image, statistical information, and identity information of the target person in the monitoring database.
  • The statistical information may include the number of times the target person was captured by the camera devices, the time periods of capture, the weather at the time of capture, and so on; the identity information may include the target person's name, ID number, gender, occupation, and so on.
  • When there are multiple monitoring databases, the face image to be retrieved can be compared with the face images in each of them; the face images whose similarity exceeds the similarity threshold are selected as the target face images of the target person, and these target face images form a set.
  • The target face image of the target person may be a structured face image; when the face information of the target person is a structured face image, structured information such as the target person's statistical information and identity information can be obtained at the same time. The face information can also be unstructured or semi-structured; however, the face image of the target person is preferably structured face information, and the monitoring database is preferably a structured monitoring database.
  • One or more target face images of the target person may be obtained, and the target face information of the target person may also be referred to as the retrieval result.
  • 205. Obtain, according to the target face information, the behavior information of the target person in the target area.
  • The target face information here is the target face information of the target person obtained in step 204; the target face image is preferably a structured face image, which includes the target person's face image, statistical information, identity information, and other structured information.
  • The behavior information may include behaviors such as appearance, stay, abnormal behavior, and accompanying persons ("peers").
  • An appearance can be recorded whenever the target person is captured by a camera. A stay can be determined by whether the target person's time in the field of view of a camera device exceeds a preset stay-time threshold: if it does, the behavior is judged as a stay. The time spent in the field of view can be calculated by a video tracking algorithm, which is not repeated here.
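A minimal sketch of the stay judgment described above, assuming a tracking component already reports the entry and exit times of the target in one camera's field of view; the threshold value and all names are hypothetical:

```python
DWELL_THRESHOLD_S = 120  # hypothetical preset stay-time threshold, in seconds

def is_dwelling(track_intervals, threshold_s=DWELL_THRESHOLD_S):
    """track_intervals: (enter_time, leave_time) pairs, in seconds, produced
    by a video tracking algorithm for one camera's field of view. The target
    is judged to be staying if any single interval exceeds the threshold."""
    return any(leave - enter > threshold_s for enter, leave in track_intervals)
```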
  • Abnormal behavior can be determined by analyzing the target person's number of appearances, for example year on year: if the target person appeared in a supermarket 4 times in May last year but 50 times in May this year, this can be considered abnormal behavior. Abnormal behavior can also be analyzed from feature information such as name, ID number, occupation, facial features, time period, similarity, gender, age group, hat, glasses, race, and image quality; for example, the target person is an IT worker who is repeatedly captured in the suburbs during weekday working hours, or the target person is a full-time stock trader who is repeatedly captured on the rooftop of a certain building.
  • Peers can be determined from the number of times other monitored objects appear at the same time in the same camera captures as the target person. For example, if the target person Li Si was captured 100 times in area A in a certain month, and in 88 of those captures Zhang San appeared in the same picture, it can be determined that Li Si and Zhang San have a peer relationship.
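As an illustration of the peer judgment, here is a minimal sketch that counts co-occurrences in capture records, assuming each record lists the set of persons identified in one snapshot; the 0.8 ratio and all names are hypothetical:

```python
from collections import Counter

def find_peers(captures, target_id, min_ratio=0.8):
    """captures: list of records like {"time": ..., "persons": {"Li Si",
    "Zhang San"}}, where "persons" is the set of people identified in one
    snapshot. Returns the co-occurrence ratio of every person who appears
    alongside the target at least min_ratio of the time (e.g. 88/100)."""
    target_caps = [c for c in captures if target_id in c["persons"]]
    co_counts = Counter()
    for cap in target_caps:
        for person in cap["persons"] - {target_id}:
            co_counts[person] += 1
    total = len(target_caps)
    return {p: n / total for p, n in co_counts.items() if n / total >= min_ratio}
```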
  • The structured face image can be displayed on the map interface: for example, the face image of the target person in the target area, the statistical information, and the identity information can all be displayed on the map interface. Further, the face image, statistical information, and identity information can be displayed together in the information bar of the map interface; in this way, the map is separated from the information bar, so that the user can observe the map more easily.
  • Alternatively, the face image can be displayed directly on the map so that the user can intuitively see the target person's location, or the user can click a camera device to obtain the face images of the monitored objects captured by that device; of course, this requires establishing a link relationship between the camera devices on the map and the corresponding monitoring databases.
  • The behavior information may also be referred to as a behavior rule or behavior attribute, and the field of view of a camera device may also be referred to as its visual field or shooting range.
  • Because the face image of the target person to be retrieved is retrieved within the target area, the amount of retrieval computation is reduced and retrieval hardware resources are saved; and because the retrieval results concern the target person within the target area, the analysis is more targeted, which improves the accuracy of the obtained behavior information.
  • The personnel behavior analysis method can be applied to personnel behavior analysis devices such as computers, servers, mobile phones, and other devices capable of personnel behavior analysis; such a device may also be called a monitoring terminal or terminal device.
  • In this embodiment, the face image of the target person to be retrieved is acquired; a target area is determined in the map interface; a monitoring database for retrieving the face image to be retrieved is determined according to the target area, the monitoring database including the face images of all monitored objects in the target area; the face image to be retrieved is retrieved in the monitoring database to obtain the target face image of the target person, which includes statistical information and identity information; and the behavior information of the target person in the target area is obtained according to the target face image. Because the face image of the target person is retrieved in the monitoring database of the corresponding target area, the retrieval workload is reduced and retrieval is faster, hardware resource consumption is reduced, and the accuracy of the acquired behavior information is improved.
  • FIG. 7 is a schematic flowchart of another personnel behavior analysis method according to an embodiment of the present invention. As shown in FIG. 7, the method includes the following steps:
  • In step 302, referring to FIG. 3, the frame selection activation request can be triggered by the user clicking or selecting "frame selection camera", which sends a frame selection activation request to the system. Activating the frame selection function can be understood as follows: when the system receives the frame selection activation request, it issues a frame selection activation instruction to activate the frame selection function so that the user can perform frame selection on the map interface.
  • The triggering of "frame selection camera" can be done by clicking the mouse, by pressing a mouse button for a certain duration (such as 1 second or 2 seconds), by a configured keyboard shortcut, or by a touch action on a touch device (touch, number of touch points, touch duration, stroke track, and so on); it can also be triggered by a combination of these methods, such as mouse plus keyboard, or number of touch points plus touch duration.
  • The frame selection information can be obtained from the frame selection the user performs on the map interface after the frame selection function is activated in step 302. The frame selection information includes the frame selection boundary, and the frame selection area is determined by that boundary.
  • A frame-selected camera device can be understood as a camera device identifier located within the frame selection area on the map interface.
  • A camera device identifier corresponds to the location of a camera device in the real scene; that is, if location A in the real scene is equipped with camera device 01, the identifier of camera device 01 appears at the corresponding location A in the map interface. In other words, the camera device identifier in the map interface is associated, that is, linked, with the location of the camera device in the real scene.
  • The camera device identifier can be a selectable pixel block in the map interface, and the selectable pixel block can be camera-shaped or a numeric label.
  • The frame selection area can also be adjusted by adjusting the frame selection boundary; for example, adding a smaller "口"-shaped (square) frame selection boundary inside a "口"-shaped frame selection area turns it into a "回"-shaped (ring) frame selection area or a smaller "口"-shaped frame selection area.
  • A selectable pixel block may fall exactly on the frame selection boundary, in which case it can be treated as selected or as not selected, as actual needs dictate.
  • The determined target area can be the frame selection area formed by the frame selection boundary, or the effective monitoring area of the camera devices within the frame selection area.
  • For example, if the target area is the frame selection area and the frame selection boundary is square, the target area is the square area formed by that boundary. If instead the target area is the effective monitoring area of the camera devices in the frame selection area, then, again assuming a square frame selection boundary and a single camera device in the frame selection area whose effective monitoring area is circular, the target area is that circular area.
  • That is, either the effective monitoring area or the framed area can serve as the target area.
  • Through frame selection, the user can select the target area flexibly, which improves the selectivity and pertinence of the target area and further improves the accuracy of target person behavior analysis.
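As a rough illustration of how frame-selected camera devices might be resolved from the frame selection boundary, here is a minimal sketch assuming an axis-aligned rectangular boundary and camera identifiers placed by longitude and latitude; all names are hypothetical:

```python
def cameras_in_box(cameras, box):
    """cameras: camera id -> (longitude, latitude) of its map identifier.
    box: (min_lon, min_lat, max_lon, max_lat) from the frame selection
    boundary. Returns the identifiers of the frame-selected camera devices,
    i.e. those whose map position falls inside the framed area."""
    min_lon, min_lat, max_lon, max_lat = box
    return [cid for cid, (lon, lat) in cameras.items()
            if min_lon <= lon <= max_lon and min_lat <= lat <= max_lat]
```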
  • Step 302, step 303, and step 304 in this embodiment can be regarded as an optional implementation of step 202 in the embodiment of FIG. 6.
  • Optionally, determining the target area in the map interface includes: receiving a selection instruction for a scene button preset in the map interface and, in response, determining the scene area corresponding to the selected scene button as the target area, where each scene button corresponds to one scene area and at least one camera device is provided in each scene area.
  • A preset scene button is a scene button corresponding to a scene area; the scene areas for which buttons are set can be areas commonly used for analysis and retrieval, such as a "Public Security" button corresponding to the public security bureau scene or a "Supermarket" button corresponding to the supermarket scene.
  • The selection instruction is obtained after the button is triggered (for example, clicked or selected): when the "Supermarket" button is triggered, the system receives the instruction that the "Supermarket" button has been triggered.
  • As with the triggering of "frame selection camera" described above, the trigger can be a mouse click, a configured keyboard shortcut, or a touch action on a touch device.
  • A scene button corresponding to a scene area can be understood as the scene button having a link relationship with the corresponding scene area, so that clicking the scene button selects the corresponding scene area on the map interface. It can also be understood as the scene area on the map interface being a selectable pixel block to which the scene button is linked.
  • A scene area may be the area where one or more camera device identifiers are located; for example, the "Metro" button may correspond to multiple subway scenes.
  • The scene can additionally be filtered, the filtering being the removal or selection of the camera device identifiers in the corresponding scene area in the map interface.
  • This embodiment can be regarded as an optional implementation of step 202 in the embodiment of FIG. 6, and also as an optional replacement for step 302, step 303, and step 304 in the embodiment of FIG. 7.
  • Optionally, determining the monitoring database for retrieving the face image to be retrieved according to the target area includes: acquiring the monitoring data of each camera device in the target area, and integrating the monitoring data of each camera device in the target area to form the monitoring database.
  • There may be one or more target areas, and one or more camera devices in each target area.
  • The monitoring data of all camera devices in the target area can be integrated to form a temporary monitoring database, which is then used as the monitoring database for retrieving the face image to be retrieved. For example, if the target area determined by frame selection contains four camera devices A, B, C, and D whose collected monitoring data are a, b, c, and d respectively, a temporary monitoring database can be established and the monitoring data a, b, c, and d written into it. This temporary monitoring database then contains the monitoring data of the target area, so retrieving the face image to be retrieved in this temporary database realizes retrieval within the target area.
  • The integration can be sorted by capture time, writing the monitoring data of each camera device into the temporary monitoring database in chronological order; it can also be sorted by camera device, dividing the temporary monitoring database into areas, each of which stores the monitoring data of one camera device.
  • Forming a temporary monitoring database from the monitoring data of the camera devices in the target area increases the flexibility of the database and the accuracy of the data in it.
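A minimal sketch of the time-ordered integration described above, assuming each camera's monitoring data is a list of records carrying a capture time; the field and function names are hypothetical:

```python
def build_temp_database(camera_records):
    """camera_records: camera id -> list of monitoring records, each with a
    'time' field. Merges the monitoring data of every camera device in the
    target area into one temporary database ordered by capture time."""
    temp_db = [rec for records in camera_records.values() for rec in records]
    temp_db.sort(key=lambda rec: rec["time"])
    return temp_db

# e.g. build_temp_database({"A": a, "B": b, "C": c, "D": d}) for the four
# cameras A, B, C, D in the frame-selected target area described above
```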
  • When the target area is selected through a scene button, the monitoring data of the camera devices in the scene area corresponding to that button can likewise be integrated to form a temporary monitoring database; the specific integration can follow the integration method described above for the frame-selected target area.
  • In addition, since a scene button corresponds to a scene area, a monitoring database corresponding to the scene area can be preset, storing the monitoring data collected by all camera devices in that scene area. For example, a monitoring database is established in advance for the public security scene area, storing the monitoring data a, b, and c of the three camera devices A, B, and C; this database can update the data collected from the three camera devices in real time, and it can respond to the triggering of the "Public Security" button as the monitoring database used to retrieve the face image to be retrieved. That is, this type of monitoring database has an index relationship with the corresponding scene button, through which the corresponding monitoring database can be indexed.
  • Alternatively, the retrieval result for the target person may first be obtained in one large database and then filtered by scene to obtain the target face information of the target area, as shown in FIG. 8.
  • These approaches increase the pertinence of the monitoring data in the monitoring database: because the monitoring data is collected by the camera devices in the target area, the search range of the face image to be retrieved is reduced, which improves retrieval speed.
  • Optionally, the obtained target face information may be displayed sorted according to a preset sorting rule.
  • The obtained target face information may be a structured face image, which includes structured information such as the face image, statistical information, and identity information of the target person.
  • The preset sorting rules can sort by time, by camera device, by frequency of appearance, by identity information, and so on. Sorting by time orders results by capture time (the time information in the structured face information), placing the latest snapshot either first or last.
  • Sorting by camera device orders results by the number of the capturing camera device; for example, the monitoring information of camera device No. 1 is placed first and that of camera device No. 2 after it. Within one camera device, the monitoring information can additionally be sorted, for example by time.
  • Sorting by frequency of appearance orders results by the number of times a camera device captured the target person. For example, if camera device No. 1 captured the target person 46 times and camera device No. 2 captured the target person 57 times, the retrieval results of camera device No. 2 can be placed first on a highest-first principle. Results can also be sorted by the number of times the target person was captured in a certain period: if all camera devices together captured the target person 8 times on August 2, 12 times on August 3, and 2 times on August 4, the retrieval results of August 3 can be placed first on the same highest-first principle.
  • These sorting rules can be used alone or in combination, such as combining date and time.
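A minimal sketch of these sorting rules, assuming the retrieval results are structured records with hypothetical 'time', 'camera', and 'capture_count' fields:

```python
def sort_results(results, rule="time", newest_first=True):
    """results: structured face records with hypothetical 'time', 'camera',
    and 'capture_count' fields. Applies one preset sorting rule."""
    if rule == "time":
        return sorted(results, key=lambda r: r["time"], reverse=newest_first)
    if rule == "camera":
        # Order by camera number, then by capture time within each camera.
        return sorted(results, key=lambda r: (r["camera"], r["time"]))
    if rule == "frequency":
        # Highest capture count first (the highest-first principle above).
        return sorted(results, key=lambda r: r["capture_count"], reverse=True)
    raise ValueError(f"unknown sorting rule: {rule}")
```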
  • There may be one or more retrieved target persons, and the sorting can use the sorting rules of step 305. For example, when there is one target person to be retrieved, the retrieval results for that person can be displayed in chronological order, so that the user directly obtains the latest monitoring information of the target person, which helps the user analyze the person's recent behavior, for example that the target person has recently appeared frequently in one place.
  • Optionally, the target face information includes statistical information and/or identity information, and obtaining the behavior information of the target person in the target area according to the target face information includes analyzing the statistical information and/or the identity information.
  • The statistical information can be counted by the monitoring terminal. For example, if there are 500 face images of the target person, the number of times the target person was captured by the camera devices can be counted as 500. If on a certain day the target person was first captured at 9:30 and last captured at 17:30, the target person's capture time period can be counted as 9:30-17:30. If the target person was captured 200 times on rainy days and 300 times on sunny days, those counts are recorded accordingly.
  • The weather can be defined according to the local weather forecast, or it can be recognized from outdoor images through image recognition, in which case the weather recognition result prevails.
  • The identity information may be structured information in the structured face image and may include information such as name, age, ID number, occupation, and historical criminal record.
  • The behavior information can be obtained by analyzing the statistical information of the target person. For example, from the statistic that the target person appears in the subway between 17:00 and 19:30, the behavior information "getting off work" can be obtained; or, from the statistic that two target persons were captured together 20 times in one day, a peer relationship between the two target persons can be obtained.
  • The behavior information can also be obtained by analyzing the identity information of the target person. For example, if the target person's occupation is driver and the person was captured 5 times in one day at the traffic police administration, it can be concluded that the target person was dealing with traffic violations; or, if the target person is female and was captured 5 times in one day at a gynecological hospital, but was not captured there before and has no identity information indicating that she is a gynecologist, it can be concluded that the target person is seeing a doctor.
  • Optionally, the statistical information includes at least one of a capture time period and a capture frequency, and the identity information includes at least one of an occupation and a criminal record; analyzing the statistical information and/or the identity information to obtain the behavior information of the target person in the target area then includes analyzing at least one of the capture time period, the capture frequency, the occupation, and the criminal record.
  • The statistical information can be structured information in the structured face image.
  • The capture time period can be obtained, within a preset time period, by taking the time the target person was first captured as the starting point and the time the target person was last captured as the end point. For example, if within September the target person was first captured on September 5 and last captured on September 10, the capture period of the target person can be considered to run from September 5 to September 10.
  • The preset time period may come from a date selection performed by the user through the map interface; for example, a calendar can be set on the map interface so that the user can select a time period such as September, or an input box can be provided on the map interface for the user to enter the preset time period.
  • Analyzing the capture time period to obtain the behavior information of the target person in the target area can be understood as counting the number of times the target person appears in the target area within a certain period and deriving behavior information from the statistics. The behavior information can be the appearance time period, that is, the time period during which the camera devices capture the monitored object.
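A minimal sketch of the capture-period computation, assuming capture timestamps are comparable values and the window comes from the user's date selection; all names are hypothetical:

```python
def capture_period(timestamps, window_start, window_end):
    """timestamps: all capture times of the target person; the window is
    the user-selected preset time period (e.g. picked from the calendar on
    the map interface). Returns (first_capture, last_capture) inside the
    window, or None if the target was never captured in it."""
    inside = [t for t in timestamps if window_start <= t <= window_end]
    return (min(inside), max(inside)) if inside else None
```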
  • Counting the number of appearances may mean counting the effective number of occurrences of a monitored object within the time period, where one effective occurrence lasts from the time the monitored object enters the field of view of a camera device until it leaves that field of view.
  • The footage from the monitored object entering the camera's field of view to leaving it can be edited into one surveillance video about that object, and the number of surveillance videos of the monitored object is its effective number of occurrences.
  • Images in the surveillance videos of the monitored object can be extracted for face recognition to obtain one or more face images of the monitored object and form structured face images. That is, the surveillance video is the original image source: face recognition is performed on the original images to obtain face images, the structured information of the monitored object is obtained through the face recognition engine, and the face images and structured information are combined into structured face images.
  • Optionally, an image quality assessment (IQA) algorithm is used to obtain the image with the best quality in each surveillance video for face recognition to produce the face image of the monitored object. In this way, the number of retrieved target face images can be used to count the number of occurrences of the monitored object. For example, if monitored object A was captured in 25 surveillance videos by the cameras in September, and from each video the best-quality image is extracted for face recognition, the 25 surveillance videos correspond to 25 face images of the monitored object; 25 target face images can then be retrieved, and the effective number of appearances of the target person is 25.
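A minimal sketch of the per-video best-frame selection, assuming hypothetical assess_quality (an IQA scorer) and recognize_face callables, with each clip represented as a list of frames:

```python
def effective_occurrences(clips, assess_quality, recognize_face):
    """clips: one surveillance clip (a list of frames) per continuous stay
    in a camera's field of view. From each clip the best-quality frame is
    sent to face recognition, so the number of clips equals the effective
    number of appearances (e.g. 25 clips -> 25 face images -> 25)."""
    faces = [recognize_face(max(frames, key=assess_quality)) for frames in clips]
    return faces, len(faces)
```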
  • Within the preset time period, the first appearance time and the last appearance time of the target person can be obtained, and thus the appearance period of the target person.
  • From the target person's appearance period, it can be analyzed whether the target person is going to work, getting off work, or out for leisure. For example, if the target area is a supermarket and the target person appears around 17:30, the target person can be considered to be off work; if the target area is the subway and the target person appears around 6:00, the target person can be considered to be on the way to work; and if the target area is a square and the target person appears around 20:30, the target person can be considered to be out for a stroll.
  • Analyzing the capture frequency information can be understood as counting the number of times the target person is captured in the target area within a certain period and deriving the capture frequency information from the statistics. The behavior information can be the frequency of appearance, which counts the effective number of occurrences of a monitored object within the time period; the effective number of occurrences is counted as described above and is not repeated here.
  • Obtaining the behavior information of the target person in the target area from the capture frequency can be understood as analyzing the target person's behavior pattern in the target area by counting the capture frequency. For example, if the target person appears in a restaurant 8 times between 10:00 and 13:30, it can be judged that the target person is a food delivery worker; or if the target person is not a bank employee and appears at a bank entrance 10 times in one day, it can be judged that the target person is loitering near the bank.
  • Note that the capture frequency is not the same as the appearance frequency. The capture frequency is the number of times the camera device captures the monitored object, while the appearance frequency is the number of times the target person actually appears; one appearance can be captured multiple times, for example 10 captures within 10 seconds while the target person actually appears only once.
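A minimal sketch of collapsing captures into appearances, assuming a hypothetical minimum gap between distinct appearances:

```python
def appearance_count(capture_times, min_gap_s=10):
    """Collapses bursts of captures into appearances: captures closer
    together than min_gap_s seconds count as one appearance, so ten
    snapshots within 10 seconds count as a single appearance."""
    times = sorted(capture_times)
    if not times:
        return 0
    count = 1
    for prev, cur in zip(times, times[1:]):
        if cur - prev >= min_gap_s:
            count += 1
    return count
```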
  • The identity information may be structured information in the structured face image; the occupation information of the target person may be structured information within the identity information, which may also include name, ID number, residential address, contact information, and other information.
  • Obtaining the behavior information of the target person in the target area based on the occupation information can be understood as analyzing the target person's behavior pattern according to the degree of association between the occupation and the target area. For example, if the retrieved target person is an ordinary finance worker whose appearance frequency at a high-end hotel in the target area is 10 times a month, it can be judged that the target person's behavior is abnormal.
  • The association between occupations and areas can be understood as the probability of certain people appearing in a specific area; for example, public officials have a high probability of appearing in the administrative center, so their occupation is strongly associated with the administrative center, and their frequent appearance there can be considered normal behavior.
  • The identity information may also include the target person's record of illegal or criminal activity, which can be obtained from the public archives of the public security organ; if the target person has no such record, it can be recorded as none or simply not displayed.
  • Based on the criminal record, the target area for retrieval can be chosen for the target person. For example, if the target person has a record of robbing a bank, the target person can be retrieved with banks or gold shops as the target area, and it can then be analyzed whether the target person is suspected of scouting a location: if the target person is captured frequently at a gold shop, the target person can be suspected of scouting it.
  • FIG. 9 is a schematic structural diagram of a personnel behavior analysis apparatus provided by an embodiment of the present invention. As shown in FIG. 9, the apparatus includes:
  • an acquisition module 401, used to acquire the face image of the target person to be retrieved;
  • an area determination module 402, used to determine the target area in the map interface;
  • a database determination module 403, configured to determine, according to the target area, a monitoring database for retrieving the face image to be retrieved, the monitoring database including the face images of all persons in the target area;
  • a matching module 404, configured to retrieve the face image to be retrieved in the monitoring database to obtain the target face image of the target person, the target face image including statistical information and identity information; and
  • an analysis module 405, configured to obtain, according to the target face image, the behavior information of the target person in the target area.
  • Optionally, the area determination module 402 includes:
  • a frame selection unit 4021, configured to receive a frame selection activation request for the map interface and activate the frame selection function in response;
  • a receiving unit 4022, configured to receive frame selection information in the map interface, the frame selection information including the area where the frame-selected camera devices are located; and
  • a determination unit 4023, configured to determine the area where the frame-selected camera devices are located as the target area.
  • Optionally, the area determination module 402 is configured to receive a selection instruction for a scene button preset in the map interface and, in response, determine the scene area corresponding to the selected scene button as the target area, where each scene button corresponds to one scene area and each scene area is provided with at least one camera device.
  • Optionally, the database determination module 403 includes:
  • an acquisition unit 4031, configured to acquire the monitoring data of each camera device in the target area, the monitoring data including the face images of the monitored objects; and
  • an integration unit 4032, configured to integrate the monitoring data of each camera device in the target area to form the monitoring database.
  • Optionally, the target face information includes statistical information and/or identity information, and the analysis module 405 is configured to analyze the statistical information and/or the identity information to obtain the behavior information of the target person in the target area.
  • Optionally, the statistical information includes at least one of a capture time period and a capture frequency, and the identity information includes at least one of an occupation and a criminal record; the analysis module 405 is configured to analyze at least one of the capture time period, the capture frequency, the occupation, and the criminal record to obtain the behavior information of the target person in the target area.
  • The above apparatus can be applied to personnel behavior analysis devices such as computers, servers, mobile phones, and other devices capable of personnel behavior analysis.
  • The personnel behavior analysis apparatus provided by this embodiment of the present invention can implement each implementation manner in the method embodiments of FIG. 6 and FIG. 7 with the corresponding beneficial effects; to avoid repetition, details are not described here again.
  • FIG. 12 is a schematic structural diagram of an electronic device provided by an embodiment of the present invention. As shown in FIG. 12, it includes: a memory 602, a processor 601, and a memory 602 stored in the memory 602 and available in the processor The computer program running on 601, of which:
  • the processor 601 is configured to call a computer program stored in the memory 602, and execute the following steps:
  • a monitoring database for retrieving the face image to be retrieved, the monitoring database including the face images of all monitored objects in the target area;
  • the behavior information of the target person in the target area is obtained.
  • the determining the target area in the map interface executed by the processor 601 includes:
  • the area where the framed camera device is located is determined as the target area.
  • the determining the target area in the map interface executed by the processor 601 includes:
  • each scene button corresponds to a scene area
  • At least one camera device is provided in each scene area.
  • the determining, based on the target area, performed by the processor 601 to retrieve the monitoring database for the face image to be retrieved includes:
  • the monitoring data of each camera device in the target area is integrated to form the monitoring database.
  • the target face information includes statistical information and/or identity information
  • the obtaining, by the processor 601, the behavior information of the target person in the target area according to the target face information includes:
  • the statistical information includes at least one of a capture time period and a capture frequency
  • the identity information includes at least one of an occupation and a criminal record
  • the analysis of the statistical information and/or the identity information performed by the processor 601 to obtain the behavior information of the target person in the target area includes:
  • It should be noted that the above electronic device may be a personnel behavior analysis device, for example a device capable of personnel behavior analysis such as a computer, a server or a mobile phone.
  • The electronic device provided by the embodiment of the present invention can implement each implementation manner in the method embodiments of FIG. 6 and FIG. 7 and the corresponding beneficial effects; to avoid repetition, details are not repeated here.
  • An embodiment of the present invention further provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, each process of the embodiments of the personnel behavior analysis method provided by the embodiments of the present invention is implemented with the same technical effects; to avoid repetition, details are not repeated here.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Image Analysis (AREA)
  • Alarm Systems (AREA)

Abstract

A personnel behavior analysis method, apparatus and electronic device. The method comprises: acquiring a face image of a target person to be retrieved (201); determining a target area in a map interface (202); determining, according to the target area, a monitoring database for retrieving the face image to be retrieved (203), the monitoring database comprising the face images of all monitored objects in the target area; retrieving the face image to be retrieved in the monitoring database to obtain target face information of the target person (204); and obtaining, according to the target face information, behavior information of the target person in the target area (205). Since the face image of the target person to be retrieved is searched in the monitoring database of the corresponding target area, the retrieval workload is reduced and the retrieval speed is increased, the consumption of hardware resources is reduced, and, at the same time, the accuracy of the obtained behavior information of the target is improved.

Description

Personnel behavior analysis method, apparatus and electronic device
Technical Field
The present invention relates to the field of image processing, and in particular to a personnel behavior analysis method, apparatus and electronic device.
This application claims priority to the Chinese patent application No. 201811420072.6, entitled "Personnel behavior analysis method, apparatus and electronic device", filed with the Chinese Patent Office on November 26, 2018, the entire contents of which are incorporated herein by reference.
Background Art
Image retrieval is one of the commonly used technologies at present, for example face image retrieval or retrieval of other images. The main approach currently used for face image retrieval is to collect a large number of face images and store them in an image library; when face image retrieval is required, the similarity between the face image to be retrieved and the face images stored in the image library is calculated, so that the image with the highest similarity to the image to be retrieved, or whose similarity satisfies a preset condition, is obtained as the retrieval result of the face image. In this technical solution, the number of face images in the image library is excessively large, resulting in low accuracy of the behavior analysis of the target person.
Summary of the Invention
Technical Problem
Solution to the Problem
Technical Solution
Embodiments of the present invention provide a personnel behavior analysis method, apparatus and electronic device, which can improve the accuracy of the behavior analysis of a target person.
In a first aspect, an embodiment of the present invention provides a personnel behavior analysis method, comprising:
acquiring a face image of a target person to be retrieved;
determining a target area in a map interface;
determining, according to the target area, a monitoring database for retrieving the face image to be retrieved, the monitoring database comprising the face images of all persons in the target area;
retrieving the face image to be retrieved in the monitoring database to obtain target face information of the target person; and
obtaining, according to the target face information, behavior information of the target person in the target area.
Optionally, the determining a target area in a map interface comprises:
receiving a frame-selection activation request for the map interface, and activating a frame-selection function in response to the frame-selection activation request;
receiving frame-selection information in the map interface, the frame-selection information comprising the area where the frame-selected camera devices are located; and
determining the area where the frame-selected camera devices are located as the target area.
Optionally, the determining a target area in a map interface comprises:
receiving a selection request for a scene button preset in the map interface, and determining the scene area corresponding to the selected scene button as the target area, wherein each scene button corresponds to one scene area, and each scene area is provided with at least one camera device.
Optionally, the determining, according to the target area, a monitoring database for retrieving the face image to be retrieved comprises:
acquiring monitoring data of each camera device in the target area, the monitoring data comprising face images of monitored objects; and
integrating the monitoring data of each camera device in the target area to form the monitoring database.
Optionally, the target face information comprises statistical information and/or identity information;
the obtaining, according to the target face information, behavior information of the target person in the target area comprises:
analyzing the statistical information and/or the identity information to obtain the behavior information of the target person in the target area.
In a second aspect, an embodiment of the present invention provides an image retrieval apparatus, comprising:
an acquisition module, configured to acquire a face image of a target person to be retrieved;
an area determining module, configured to determine a target area in a map interface;
a database determining module, configured to determine, according to the target area, a monitoring database for retrieving the face image to be retrieved, the monitoring database comprising the face images of all persons in the target area;
a matching module, configured to retrieve the face image to be retrieved in the monitoring database to obtain target face information of the target person; and
an analysis module, configured to obtain, according to the target face information, behavior information of the target person in the target area.
In a third aspect, an embodiment of the present invention provides an electronic device, comprising: a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the personnel behavior analysis method provided by the embodiments of the present invention.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the steps of the personnel behavior analysis method provided by the embodiments of the present invention.
In the embodiments of the present invention, a face image of a target person to be retrieved is acquired; a target area is determined in a map interface; a monitoring database for retrieving the face image to be retrieved is determined according to the target area, the monitoring database comprising the face images of all monitored objects in the target area; the face image to be retrieved is retrieved in the monitoring database to obtain target face information of the target person; and behavior information of the target person in the target area is obtained according to the target face information. Since the face image of the target person to be retrieved is searched in the monitoring database of the corresponding target area, the retrieval workload is reduced and the retrieval speed is increased, the consumption of hardware resources is reduced, and, at the same time, the accuracy of the obtained behavior information of the target is improved.
Beneficial Effects of the Invention
Brief Description of the Drawings
In order to explain the technical solutions in the embodiments of the present invention or in the prior art more clearly, the drawings required for the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present invention, and a person of ordinary skill in the art can obtain other drawings from these drawings without creative effort.
FIG. 1 is a schematic diagram of an optional network architecture provided by an embodiment of the present invention;
FIG. 2 is a schematic diagram of a map interface provided by an embodiment of the present invention;
FIG. 3 is a schematic diagram of another map interface provided by an embodiment of the present invention;
FIG. 4 is a schematic diagram of another map interface provided by an embodiment of the present invention;
FIG. 5 is a schematic diagram of another map interface provided by an embodiment of the present invention;
FIG. 6 is a schematic flowchart of a personnel behavior analysis method provided by an embodiment of the present invention;
FIG. 7 is a schematic flowchart of another personnel behavior analysis method provided by an embodiment of the present invention;
FIG. 8 is a schematic flowchart of another personnel behavior analysis method provided by an embodiment of the present invention;
FIG. 9 is a schematic structural diagram of a personnel behavior analysis apparatus provided by an embodiment of the present invention;
FIG. 10 is a schematic structural diagram of another personnel behavior analysis apparatus provided by an embodiment of the present invention;
FIG. 11 is a schematic structural diagram of another personnel behavior analysis apparatus provided by an embodiment of the present invention;
FIG. 12 is a schematic structural diagram of an electronic device provided by an embodiment of the present invention.
Detailed Description of the Embodiments
Referring to FIG. 1, FIG. 1 is a schematic diagram of an optional network architecture according to an embodiment of the present invention. As shown in FIG. 1, the network architecture comprises: a plurality of camera devices 101 deployed at preset locations, and a monitoring terminal 102.
The plurality of camera devices 101 deployed at a plurality of preset locations are configured to collect portrait information of monitored objects.
The preset locations may be positions pre-planned in the area to be monitored for installing camera devices, for example subways, supermarkets, public security offices and customs; these positions for installing camera devices are known specific scenes. Optionally, the plurality of preset locations may be unknown positions in the area to be monitored, such as a roadside, a riverside or a meadow in the suburbs; in this case, each camera device 101 may communicate with the monitoring terminal 102 and report its own position information. Optionally, the monitoring terminal 102 may locate the position of each camera device based on satellite positioning technology. It should be noted that there may be one or more camera devices, one or more preset locations, and one or more monitored objects.
The monitoring terminal 102 communicates with the plurality of camera devices 101 in the monitored area and is configured to recognize and analyze the portrait information of each monitored object. The portrait information includes face images, which may be recognized and analyzed by a face image recognition engine provided in the monitoring terminal to obtain the identity information of each monitored object. Of course, the face recognition engine may also be a face image recognition engine provided elsewhere, for example on a camera device; in that case, the camera device recognizes and analyzes the face image and compares it with the face images of persons in an identity card library to obtain the identity information of the monitored object. Optionally, an information processor may further be provided on the camera device, configured to generate structured information from the face image and the identity information of the monitored object and send it to the monitoring terminal. The recognition and analysis of faces may also be performed by a face engine of a background server; in that case, the camera device sends the collected portrait information to the background server, the face recognition engine of the background server extracts the face image from the portrait information and recognizes and analyzes it, and the identity information of the monitored object can be obtained by comparison with the face images in the identity card library. Optionally, the background server may further generate structured information from the face image and the identity information of the monitored object, send it to the monitoring terminal, and form monitoring information from the above information. The identity information may be identity information such as name, identity card number and occupation.
Optionally, the monitoring terminal 102 may be any terminal such as a computer, a laptop, a tablet or a mobile phone. The monitoring terminal is provided with an application for displaying map information and monitored-object information. When a retrieval instruction for a target object is detected, the application can display, in a map interface, the position information of all the camera devices that have captured the target object, where the target object may be one or more monitored objects and the retrieval instruction contains feature information of the target object. The feature information may be feature information such as name, identity card number, occupation, facial features, time period, similarity, gender, age group, hat, glasses, ethnicity and picture quality.
It should be noted that, after the monitoring terminal 102 collects the portrait information of one or more monitored objects through the plurality of camera devices deployed at the plurality of preset locations, by default the position information of all the currently deployed camera devices can be displayed in the map interface. The position information of a camera device may be latitude-and-longitude coordinates based on its geographic position; in this way, the latitude-and-longitude coordinates of the camera devices can be added to the map information, and the camera device corresponding to each coordinate is displayed at the corresponding coordinate in the map interface. The latitude-and-longitude coordinates may be obtained based on satellite positioning technology. The portrait information of the monitored objects collected by the plurality of camera devices may also be displayed in a preset information bar on the interface. For example, FIG. 2 is a schematic diagram of an optional monitoring interface according to an embodiment of the present invention; as shown in FIG. 2, the plurality of deployed camera devices are displayed on the map, and the portrait information of one or more monitored objects collected by the currently deployed camera devices can be displayed in the information bar on the left side of the map. Optionally, only the face image of each monitored object may be displayed in the information bar. Optionally, the user may click the "View Captures" button on the interface to view all the captured portraits in the information bar, as shown in FIG. 2. Optionally, the user may click the "Area Selection" button on the interface to view, in the information bar, all the face images captured in the selected area, as shown in FIG. 3. Optionally, the user may upload a target face image in the "upload box" to retrieve a target person, as shown in FIG. 4. Optionally, the user may enter identity information of a target person in the search box on the interface to retrieve the target person, as shown in FIG. 5.
In addition, the monitoring terminal 102 is further provided with a monitoring database for storing monitoring information, and the monitoring information may be structured face images including identity information and face images.
It should be noted that the above network architecture is only a preferred network architecture provided by an embodiment of the present invention and should not be regarded as limiting the embodiments of the present invention. In addition, the portrait information in this application may also be referred to as image information, the map interface in this application may also be referred to as a map mode, and the face image in this application may also be referred to as a portrait, a head shot or a person image.
Referring to FIG. 6, FIG. 6 is a schematic flowchart of a personnel behavior analysis method provided by an embodiment of the present invention. As shown in FIG. 6, the method comprises the following steps:
201. Acquire a face image of a target person to be retrieved.
The face image to be retrieved may be obtained through the user's upload in the "upload box" on the interface. The upload may be done by selecting the storage path of the image, or by dragging the image from an image collection into the "upload box"; it may also be done by scanning, for example scanning a photo or picture of the target person with a scanner, or by photographing, for example capturing an image of the target person with a camera device during border inspection and then uploading it. The uploaded face image to be retrieved is displayed in the "upload box".
202. Determine a target area in a map interface.
The target area is an area selected by the user on the map interface. As shown in FIG. 3, it may be selected through "Area Selection" on the map interface; in this case, the map interface may be provided with scene buttons corresponding to specific scene areas, for example a "Public Security" button corresponding to a public security bureau scene, a "Supermarket" button corresponding to a supermarket scene, a "Mall" button corresponding to a shopping mall scene, a "Customs" button corresponding to a customs scene, and a "Subway" button corresponding to a subway scene. The target area may also be determined by frame selection on the map interface through "Frame-select Cameras", for example frame-selecting a region on the map interface with a frame-selection instruction; of course, if there is no camera device in the frame-selected region, the user may be prompted that the selection is invalid. The frame-selection instruction may be issued with a mouse or with a touch device.
Regarding step 202, it should be noted that the target area may also be referred to as a retrieval area or a screening area, and there may be one or more target areas.
203. Determine, according to the target area, a monitoring database for retrieving the face image to be retrieved, the monitoring database comprising the face images of all monitored objects in the target area.
In this step, the monitoring database may be a local database or a remote database, and the database may further refer to data stored on one or more storage devices. The monitoring database may store structured face images of monitored objects, where a structured face image includes information such as the face image, identity information, time information, position information and feature information. The face image may be extracted by a face image recognition engine from the portrait information collected by the camera devices; the identity information may be obtained by comparing the corresponding face image with the identity card library of the public security system, and may include name, identity card number, gender and the like; the time information may be the time at which the camera device captured the image, and may be a time point, a time period or the like; the position information may be the location of the camera device that captured the target person, and may be an area name, latitude-and-longitude coordinates or the like; the feature information includes the feature information of all monitored objects, which may be feature information such as facial features, time period, similarity, gender, age group, hat, glasses, ethnicity and picture quality of all the monitored objects collected. The feature information of all monitored objects may be extracted from the monitoring information of all monitored objects collected by the camera devices; further, it may be extracted by a face image recognition engine from the images of all monitored objects captured by the camera devices.
It should be noted that when there are multiple target areas, there may correspondingly be multiple monitoring databases. In addition, the monitoring database may be provided in the monitoring terminal, with the monitoring terminal making data requests directly; or it may be provided in the camera devices and connected remotely to the monitoring terminal. When the monitoring database is provided in the camera devices, after the target area is determined, the monitoring terminal may request the corresponding monitoring data from the monitoring databases in the camera devices over the network to form a temporary monitoring database.
204. Retrieve the face image to be retrieved in the monitoring database to obtain target face information of the target person.
In step 204, retrieving the face image to be retrieved in the monitoring database may be comparing the face image to be retrieved with the face images in the monitoring database to obtain the similarity between them, and, according to a preset similarity threshold, selecting the face images in the monitoring database whose similarity is greater than the similarity threshold as the target face images of the target person, and acquiring the face information corresponding to the target face images. When retrieving the face image to be retrieved in the monitoring database, the similarity between the face image to be retrieved and the face images of all monitored objects in the monitoring database may be obtained, and the face images whose similarity is greater than the preset similarity threshold are selected to obtain the target face images of the target person. The similarity may be obtained through a face recognition engine: the face image to be retrieved and the face images of all monitored objects in the monitoring database are fed into the face recognition engine for traversal comparison, so as to obtain the similarity between the face image to be retrieved and each face image in the monitoring database; the higher the similarity, the higher the probability that the two are the same person. In some possible embodiments, the face images of all monitored objects in the monitoring database may also be classified by monitored object. For example, all the face images captured of monitored object A are stored in one partition of the monitoring database, which stores only the face images of monitored object A, and one face image is selected as an index image; through the index image, all the face images of monitored object A can be obtained. In this way, the face image to be retrieved only needs to be compared with the index images, obtaining the similarity between the face image and each index image, which can reduce a large amount of retrieval work.
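The following is a minimal sketch, in Python, of the threshold retrieval and index-image comparison described above; the embed and cosine_similarity helpers stand in for whatever face recognition engine is actually deployed, and both they and the 0.8 threshold are illustrative assumptions, not part of the original disclosure.

```python
import numpy as np

def embed(image) -> np.ndarray:
    # Placeholder for the face recognition engine's feature extractor;
    # assumed to return a fixed-length feature vector for one face image.
    raise NotImplementedError

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Higher similarity means a higher probability that the two faces match.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def retrieve(query_image, index_images: dict, threshold: float = 0.8) -> list:
    # Compare the query face against one index image per monitored object,
    # so each partition of the database is checked only once.
    query_vec = embed(query_image)
    hits = []
    for object_id, index_image in index_images.items():
        score = cosine_similarity(query_vec, embed(index_image))
        if score > threshold:
            hits.append((object_id, score))
    # Most similar candidates first.
    return sorted(hits, key=lambda h: h[1], reverse=True)
```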
The target face information includes the face images of the target person in the monitoring database, statistical information and identity information. The statistical information may be statistics such as the number of times the target person was captured by the camera devices, the time periods of the captures and the weather at the time of capture; the identity information may be the target person's name, identity card number, gender, occupation and the like.
In addition, when there are multiple target areas, there may correspondingly be multiple monitoring databases. The face image to be retrieved may be compared with the face images in each of the monitoring databases to obtain the similarities between the face image to be retrieved and the face images in the monitoring databases, the face images whose similarity is greater than the similarity threshold are selected as the target face images of the target person, and the target face images of the target person are assembled into one set.
The target face images of the target person may be structured face images. When the face information of the target person is a structured face image, structured information such as the statistical information and identity information of the target person can be obtained at the same time as the face images are retrieved; the face information may also be unstructured face information, semi-structured face information, and so on. In the embodiments of this application, the face images of the target person are preferably structured face information, and the monitoring database is preferably a structured monitoring database.
It should be noted that, in step 204, there may be one or more face images in the obtained target face information of the target person, and the target face information of the target person may also be referred to as the retrieval result.
205. Obtain, according to the target face information, behavior information of the target person in the target area.
The target face information is the target face information of the target person obtained in step 204; the target face image is preferably a structured face image, which includes structured information such as the face images, statistical information and identity information of the target person. The behavior information may be behaviors such as appearance, staying, abnormal behavior and accompanying. Appearance may simply mean being captured. Staying may be determined by whether the time the target person remains within the field of view of a certain camera device exceeds a preset stay-time threshold; if the target person remains within the field of view of that camera device for longer than the stay-time threshold, a stay is determined, and the duration of presence can be computed by a video tracking algorithm, which is not described in detail here. Abnormal behavior may be obtained by analyzing the month-on-month or year-on-year number of appearances of the target person; for example, if the target person appeared in a supermarket 4 times in May last year but 50 times in May this year, the behavior can be considered abnormal. It may also be analyzed from feature information such as name, identity card number, occupation, facial features, time period, similarity, gender, age group, hat, glasses, ethnicity and picture quality; for example, the target person is an IT worker but is repeatedly captured in the suburbs during weekday overtime hours, or the target person is a full-time stock trader but is repeatedly captured on the roof of a certain building. Accompanying may be determined by the number of times other monitored objects are captured appearing in the field of view simultaneously with the target person; for example, if the target person Li Si was captured 100 times in area A in a certain month, and Zhang San appeared in the captured images 88 of those times, Li Si and Zhang San can be considered to have an accompanying relationship.
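As a rough illustration of the accompanying-relationship judgment, the sketch below counts how often other person IDs co-occur in captures containing the target; the per-frame ID sets and the 0.8 co-occurrence ratio are illustrative assumptions rather than values from the disclosure.

```python
from collections import Counter

def companions(captures: list[set[str]], target: str, ratio: float = 0.8) -> list[str]:
    # captures: one set of recognized person IDs per captured image.
    # A person who appears in at least `ratio` of the target's captures
    # (e.g. 88 out of 100) is reported as a likely companion.
    target_frames = [ids for ids in captures if target in ids]
    total = len(target_frames)
    co_counts = Counter(pid for ids in target_frames for pid in ids if pid != target)
    return [pid for pid, n in co_counts.items() if total and n / total >= ratio]
```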
The structured face images may be displayed on the map interface, for example displaying the face images of the target person in the target area, displaying the statistical information on the map interface, and displaying the identity information on the map interface. Further, the face images, statistical information and identity information may all be displayed in the information bar of the map interface; in this way, the map and the information bar are separated so that the user can observe the map more easily. Face images can be displayed on the map itself, so that the user can intuitively know the positions where the target person has appeared; the user may also click a camera device to obtain the face images of monitored objects captured by that camera device, although this requires establishing a link between the camera devices on the map and the corresponding monitoring databases.
It should be noted that the behavior information may also be referred to as behavior patterns or behavior attributes; in addition, the field of view of a camera device may also be referred to as its visual range or shooting range.
In the above steps, since the face image of the target person to be retrieved is searched within the target area, the computation of the retrieval can be reduced and the hardware resources for retrieval can be saved; in addition, retrieving the target person within the target area yields a retrieval result of the target person in the target area, making the analysis more targeted and improving the precision of the obtained behavior information of the target.
It should be noted that the personnel behavior analysis method provided by the embodiments of the present invention can be applied to personnel behavior analysis devices, for example devices capable of personnel behavior analysis such as computers, servers and mobile phones; such a personnel behavior analysis device may also be referred to as a monitoring terminal or a terminal device.
In the embodiments of the present invention, a face image of a target person to be retrieved is acquired; a target area is determined in a map interface; a monitoring database for retrieving the face image to be retrieved is determined according to the target area, the monitoring database comprising the face images of all monitored objects in the target area; the face image to be retrieved is retrieved in the monitoring database to obtain target face images of the target person, the target face images comprising statistical information and identity information; and behavior information of the target person in the target area is obtained according to the target face images. Since the face image of the target person to be retrieved is searched in the monitoring database of the corresponding target area, the retrieval workload is reduced and the retrieval speed is increased, the consumption of hardware resources is reduced, and, at the same time, the accuracy of the obtained behavior information of the target is improved.
Referring to FIG. 7, FIG. 7 is a schematic flowchart of another personnel behavior analysis method provided by an embodiment of the present invention. As shown in FIG. 7, the method comprises the following steps:
301. Acquire a face image of a target person to be retrieved.
302. Receive a frame-selection activation request for the map interface, and activate a frame-selection function in response to the frame-selection activation request.
303. Receive frame-selection information in the map interface, the frame-selection information comprising the area where the frame-selected camera devices are located.
304. Determine the area where the frame-selected camera devices are located as the target area.
305. Determine, according to the target area, a monitoring database for retrieving the face image to be retrieved, the monitoring database comprising the face images of all monitored objects in the target area.
306. Retrieve the face image to be retrieved in the monitoring database to obtain target face information of the target person.
307. Obtain, according to the target face information, behavior information of the target person in the target area.
In step 302, referring to FIG. 3, the frame-selection activation request may be a preset request sent to the system after the user triggers (e.g., clicks or selects) "Frame-select Cameras". Activating the frame-selection function can be understood as follows: after receiving the frame-selection activation request, the system issues a frame-selection activation instruction for activating the frame-selection function, so that the user can perform frame selection on the map interface. Triggering "Frame-select Cameras" may be done by clicking with a mouse, by pressing and holding a mouse button for a certain time (e.g., 1 second or 2 seconds), by a preset keyboard shortcut, or by a touch action on a touch device (a tap, the number of touch points, the touch duration, a swipe trajectory, etc.); of course, a combination of the above triggering methods may also be used, such as a combination of mouse and keyboard, or a combination of touch points and touch duration.
In step 303, the frame-selection information may be obtained from the user's frame selection on the map interface after the frame-selection function is activated in step 302. The frame-selection information includes a frame-selection boundary, through which the frame-selected region is determined. The frame-selected camera devices can be understood as the camera device markers located within the frame-selected region on the map interface, where a camera device marker on the map interface corresponds to the position of a camera device in the real scene: if camera device 01 is installed at location A in the real scene, the marker of camera device 01 appears at the corresponding location A in the map interface. In addition, the camera device markers on the map interface may be associated with the camera devices in the real scene, i.e., a link relationship exists, and the marker of camera device 01 in the map interface can be clicked to obtain the information of camera device 01. A camera device marker may be a selectable pixel block in the map interface, which may be camera-shaped or a numeric label. The frame-selected region can also be adjusted by adjusting the frame-selection boundary; for example, a smaller rectangular boundary can be added inside a rectangular frame-selected region so that the region becomes annular, or becomes a smaller rectangular region. In addition, in some possible cases, a selectable pixel block may fall exactly on the frame-selection boundary; in this case, it may be regarded either as selected or as not selected, which can be set according to actual needs.
In step 304, the determined target area may be the frame-selected region formed by the frame-selection boundary, or the effective monitoring areas of the camera devices within the frame-selected region. For example, if the target area is the frame-selected region and the frame-selection boundary is a square, the target area is the square region formed by the boundary; if the target area is the effective monitoring areas of the camera devices within the frame-selected region, again assuming a square boundary and only one camera device within the region whose effective monitoring area is circular, the target area is a circular region. When the target area is the effective monitoring areas of the camera devices within the frame-selected region, if a camera device's effective monitoring area is larger than the frame-selected region obtained from the boundary, either the effective monitoring area or the frame-selected region may be taken as the target area.
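One plausible realization of the frame selection is a point-in-rectangle test over the camera markers' coordinates; the (id, longitude, latitude) tuple layout below is assumed for illustration only.

```python
from typing import NamedTuple

class Camera(NamedTuple):
    cam_id: str
    lon: float
    lat: float

def cameras_in_box(cameras: list[Camera],
                   lon_min: float, lat_min: float,
                   lon_max: float, lat_max: float) -> list[Camera]:
    # Markers lying exactly on the frame-selection boundary are counted
    # as selected here; as noted above, either convention can be chosen.
    return [c for c in cameras
            if lon_min <= c.lon <= lon_max and lat_min <= c.lat <= lat_max]
```

If the returned list is empty, the selection can be reported to the user as invalid, matching the prompt described for frame-selecting a region without cameras.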
In this embodiment, frame-selecting the target area on the map interface allows the user to select the target area flexibly, which improves the selectability of and the targeting of the target area, thereby further improving the accuracy of the behavior analysis of the target person.
It should be noted that steps 302, 303 and 304 in this embodiment can be regarded as an optional implementation of step 202 in the embodiment of FIG. 6.
Optionally, the determining a target area in a map interface comprises:
receiving a selection instruction for a scene button preset in the map interface, and determining, in response to the selection instruction, the scene area corresponding to the selected scene button as the target area, wherein each scene button corresponds to one scene area, and each scene area is provided with at least one camera device.
The preset scene buttons may be scene buttons corresponding to scene areas, and the scene areas provided with scene buttons may be areas commonly used for analysis and retrieval, for example a "Public Security" button corresponding to a public security bureau scene, a "Supermarket" button corresponding to a supermarket scene, a "Mall" button corresponding to a shopping mall scene, a "Customs" button corresponding to a customs scene, and a "Subway" button corresponding to a subway scene. The selection instruction may be a preset instruction corresponding to a scene button, obtained after the user triggers (e.g., clicks or selects) the scene button; for example, if the user triggers the "Supermarket" button, an instruction that the "Supermarket" button has been triggered is sent to the system, and the system receives that instruction. The triggering may be done by clicking with a mouse, by a preset keyboard shortcut, or by a touch action on a touch device. In this implementation, the correspondence between a scene button and a scene area can be understood as a link relationship between the scene button and the corresponding scene area: by clicking the scene button, the corresponding scene area can be selected on the map interface. It can also be understood as follows: the scene area on the map interface is a selectable pixel block, there is a link between the scene button and the selectable pixel block, and the scene button is linked to the corresponding block; the scene area may be the area where one or more camera device markers are located. In addition, after a scene button is selected, the locations belonging to that scene may be further screened within the corresponding scene areas. For example, in some possible scenes the "Subway" button corresponds to multiple subway scenes; these can be screened by clicking camera device markers in the corresponding scene areas in the map interface to remove or select them, or by frame-selecting camera device markers in the corresponding scene areas to remove or select them. Alternatively, a selection region can first be frame-selected in the map interface, and then the corresponding scene area is selected within that region through the scene button.
It should be noted that this implementation can be regarded as an optional implementation of step 202 in the embodiment of FIG. 6, and also as an optional replacement for steps 302, 303 and 304 in the embodiment of FIG. 7.
Optionally, the determining, according to the target area, a monitoring database for retrieving the face image to be retrieved comprises:
acquiring monitoring data of each camera device in the target area, the monitoring data comprising face images of monitored objects; and
integrating the monitoring data of each camera device in the target area to form the monitoring database.
There may be one or more target areas, and one or more camera devices in a target area. In the case where the target area is obtained by frame selection, after the target area is determined, the monitoring data of all the camera devices in the target area can be integrated to form a temporary monitoring database, which serves as the monitoring database for retrieving the face image to be retrieved. For example, suppose there are four camera devices A, B, C and D in the frame-selected target area, whose collected monitoring data are a, b, c and d respectively; a temporary monitoring database can be created and the monitoring data a, b, c and d written into it, so that the temporary monitoring database contains the monitoring data of the target area, and retrieving the face image to be retrieved in this temporary monitoring database achieves retrieval of the face image within the target area. The integration may be ordered by capture time, with the monitoring data of each camera device written into the temporary monitoring database in chronological order; the integration may also be ordered by camera device, with the monitoring data of each camera device written into different partitions of the temporary monitoring database, each partition storing the monitoring data of one camera device. In this way, forming a temporary monitoring database by pulling the monitoring data of the camera devices in the target area can increase the flexibility of the database and the precision of the data in it. In the case where the target area is obtained through a scene button, the monitoring data of the camera devices in the scene area corresponding to the scene button can be integrated into a temporary monitoring database, and the specific integration method can refer to the above integration based on frame selection. In addition, since a scene button corresponds to a scene area, a monitoring database corresponding to the scene area can be set up in advance, storing the monitoring data collected by all camera devices in that scene area. For example, suppose three camera devices A, B and C are set up in a public security scene area, with collected monitoring data a, b and c respectively; a monitoring database is created in advance for the public security scene area, storing the monitoring data a, b and c of the three camera devices A, B and C. This monitoring database can be updated in real time with the monitoring data collected by the three camera devices, and can be adopted as the monitoring database for retrieving the face image to be retrieved in response to the triggering of the "Public Security" button; that is, such a monitoring database has an index relationship with the corresponding scene button, and the scene button indexes one such database to serve as the monitoring database.
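A temporary monitoring database of this kind can be approximated with an in-memory merge; the record layout below (capture time, camera ID, face payload) is an assumption made for illustration, not a schema mandated by the disclosure.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Record:
    captured_at: datetime
    camera_id: str
    face: bytes  # encoded face image or feature vector

def build_temp_database(per_camera: dict[str, list[Record]]) -> list[Record]:
    # Chronological variant of the integration: every camera's monitoring
    # data is flattened into one store ordered by capture time. The
    # per-camera-partition variant would instead keep `per_camera` as-is.
    merged = [rec for records in per_camera.values() for rec in records]
    merged.sort(key=lambda r: r.captured_at)
    return merged
```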
It should be noted that, in some other possible embodiments, the retrieval result of the target person may also be obtained from one large database and then screened by scene to obtain the target face information of the target area, as shown in FIG. 8.
In this implementation, integrating the monitoring database corresponding to the target area increases the targeting of the monitoring data in the database; since all the monitoring data is collected by the camera devices in the target area, the retrieval range of the face image to be retrieved is reduced and the retrieval speed is improved.
In some possible implementations, after step 306, the obtained target face information may further be sorted and displayed according to a preset sorting rule. In step 306, the obtained target face information may be structured face images, which include structured information such as the face images, statistical information and identity information of the target person. The preset sorting rule may be sorting by time, by camera device, by appearance frequency, by identity information, and so on. For example, sorting by time may be based on the capture time (i.e., the time information in the structured face information), with the newest capture placed first or the newest capture placed last. Sorting by camera device may be based on the number of the capturing camera device: the monitoring information of camera device 1 is placed first and that of camera device 2 after it; within this ordering, the monitoring information of each camera device may in turn be sorted, for example the monitoring information of camera device 1 may itself be sorted by time. Sorting by appearance frequency may be by the number of times each camera device captured the target person: for example, if camera device 1 captured the target person 46 times and camera device 2 captured the target person 57 times, the retrieval results of camera device 2 can be placed first on a highest-first basis. It may also be by the number of captures of the target person in a certain time period: for example, if all camera devices captured the target person 8 times in total on August 2, 12 times in total on August 3 and 2 times in total on August 4, the retrieval results of August 3 can be placed first on a highest-first basis.
It should be noted that the above sorting rules may be used individually or in combination, for example the combination of date and count above. In step 306, one or more target persons may be retrieved, and the sorting may follow the above sorting rules. For example, when a single target person is retrieved, the retrieval results of that target person can be sorted and displayed by time, so that the user can directly obtain the latest monitoring information of the target person, which helps the user analyze the person's recent behavior, for example that the target person has recently been appearing frequently in one place. The retrieval results may also be sorted and displayed by camera device, so that the user can intuitively know by which camera device the target person was captured, which helps the user analyze where the target person appeared and the target person's movement trajectory, for example how the target person moved from the first place of appearance to the last. The retrieval results may also be sorted and displayed by appearance frequency to obtain behavior information, so that the user can know more intuitively in which time period or at which place the target person often appears, which helps the user analyze the target person's behavior patterns, for example whether the target person is going to or getting off work. In this way, the user can intuitively know the target person's movements.
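The sorting rules can be expressed as interchangeable key functions; this sketch reuses the hypothetical `Record` layout from the merge example above and is illustrative only.

```python
from collections import Counter

def sort_results(records: list[Record], rule: str = "time") -> list[Record]:
    # Order retrieval results by one of the preset sorting rules.
    if rule == "time":        # newest capture first
        return sorted(records, key=lambda r: r.captured_at, reverse=True)
    if rule == "camera":      # group by camera number, then by time within it
        return sorted(records, key=lambda r: (r.camera_id, r.captured_at))
    if rule == "frequency":   # cameras with the most captures first
        counts = Counter(r.camera_id for r in records)
        return sorted(records, key=lambda r: counts[r.camera_id], reverse=True)
    raise ValueError(f"unknown sorting rule: {rule}")
```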
Optionally, the target face information comprises statistical information and/or identity information;
the obtaining, according to the target face information, behavior information of the target person in the target area comprises:
analyzing the statistical information and/or the identity information to obtain the behavior information of the target person in the target area.
The statistical information may be compiled by the monitoring terminal. For example, if 500 target face images of the target person are obtained, it can be counted that the target person was captured 500 times by the camera devices; if the first capture of the target person on a given day was at 9:30 and the last capture that day was at 17:30, the capture time period of the target person can be counted as 9:30-17:30; if the target person was captured 200 times on rainy days and 300 times on sunny days, it can be counted that the target person was captured 200 times in rain and 300 times in sunshine, where the weather may be defined by the local weather forecast, or recognized outdoors by image recognition with the recognition result prevailing. The identity information may be the structured information in the structured face images and may include name, age, identity card number, occupation, historical criminal record and other information. The behavior information can be obtained by analyzing the statistical information of the target person; for example, if the target person is counted as appearing in the subway between 17:00 and 19:30, the behavior information can be that the target person is getting off work, and if two target persons are captured together 20 times in one day, it can be concluded that the two target persons are accompanying each other. The behavior information can also be obtained by analyzing the identity information of the target person; for example, if the target person's occupation is driver and the person is captured 5 times in one day at the traffic police administration, it can be concluded that the target person is handling a traffic violation; if the target person is female and is captured 5 times in one day at a gynecological hospital, while the person has not been captured there before and no identity information indicates that the person is a gynecologist, it can be concluded that the target person is seeing a doctor.
Optionally, the statistical information comprises at least one of a capture time period and a capture frequency, and the identity information comprises at least one of an occupation and a criminal record;
the analyzing the statistical information and/or the identity information to obtain the behavior information of the target person in the target area comprises:
analyzing at least one of the capture time period, the capture frequency, the occupation and the criminal record to obtain the behavior information of the target person in the target area.
The statistical information may be the structured information in the structured face images. The capture time period may be the span of time, within a preset time period, from the first capture of the target person to the last capture of the target person; for example, in September, if the target person was first captured on September 5 and last captured on September 10, the capture time period of the target person can be considered to be from September 5 to September 10. The preset time period may be a date selection made by the user through the map interface, for example a calendar provided on the map interface allowing the user to select September as the preset time period; in addition, an input box for entering a time period may also be provided on the map interface so that the user can enter the preset time period.
Analyzing the capture time period to obtain the behavior information of the target person in the target area can be understood as: counting the number of appearances of the target person in the target area within a certain time period, and obtaining the behavior information from the statistics. The behavior information may be an appearance time period, where the appearance time period can be understood as the time period in which the camera devices captured the monitored object. Counting the number of appearances may be counting the effective appearances of a monitored object within the time period, where one effective appearance spans from the monitored object entering the field of view of a camera device to leaving it; the footage from entering to leaving the field of view can be clipped into one surveillance video segment of that monitored object, and the number of such video segments is the number of effective appearances. In addition, images in the surveillance video of the monitored object can be extracted for face recognition to obtain one or more face images of the monitored object and form structured face images; that is, the surveillance video is the source of the original images, face recognition is performed on the original images to obtain face images, the structured information of the monitored object is obtained by the face recognition engine, and the face images and structured information are combined to form structured face images. As a possible embodiment, in each surveillance video segment, an Image Quality Assessment (IQA) algorithm is used to select the image with the best quality in the segment for face recognition, obtaining the face image of the monitored object; in this way, the number of appearances of the monitored object can be counted from the number of retrieved target face images. For example, monitored object A was captured in 25 surveillance video segments in September; from each of these 25 segments, the image with the best quality is extracted for face recognition, yielding 25 face images of the monitored object. When the target person is monitored object A, 25 target face images can be retrieved, so the number of effective appearances of the target person is 25, and from these 25 appearances the first and last appearance times of the target person can be obtained, giving the target person's appearance time period. From the appearance time period, it can be analyzed whether the target person is going to work, getting off work or strolling; for example, if the target area is a supermarket and the appearance time is 17:30, the target person can be considered to be off work; if the target area is the subway and the appearance time is 6:00, the target person can be considered to be on the way to work; if the target area is a square and the appearance time is 20:30, the target person can be considered to be strolling.
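The appearance time period itself reduces to a first-to-last computation over the effective-appearance timestamps; the sketch below assumes one timestamp per IQA-selected best frame, an assumption made for illustration only.

```python
from datetime import datetime

def appearance_window(timestamps: list[datetime]) -> tuple[datetime, datetime]:
    # One timestamp per effective appearance (e.g. per best-quality frame
    # selected by IQA); the window bounds the appearance time period.
    if not timestamps:
        raise ValueError("no effective appearances in the selected period")
    return min(timestamps), max(timestamps)
```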
Analyzing the capture frequency information can be understood as: counting the number of times the target person was captured in the target area within a certain time period, and obtaining the capture frequency information from the statistics. The behavior information may be an appearance frequency, where the appearance frequency may be the count of effective appearances of a monitored object within the time period; the effective appearances can be counted as described above and are not repeated here. Obtaining the behavior information of the target person in the target area from the capture frequency information within the capture time period can be understood as: analyzing the target person's behavior patterns in the target area by counting the target person's capture frequency. For example, if the target person appeared in restaurants 8 times between 10:00 and 13:30, it can be judged that the target person is a food-delivery worker; or if the target person is not a bank employee but appeared at the entrance of a bank 10 times within one day, it can be judged that the target person is loitering near the bank.
It should be noted that the capture frequency information is not the same as the appearance frequency: the capture frequency can be understood as the number of times the camera devices captured the monitored object, while the appearance frequency is the number of times the target person appeared, and one appearance of the target person can be captured multiple times by the camera devices. For example, within 10 seconds, the target person may be captured 10 times by a camera device while actually appearing only once.
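One plausible way to collapse raw captures into appearances is to merge captures separated by no more than a short gap; the 30-second gap below is an illustrative parameter, not a value from the disclosure.

```python
from datetime import datetime, timedelta

def appearance_count(capture_times: list[datetime],
                     max_gap: timedelta = timedelta(seconds=30)) -> int:
    # Bursts of captures (e.g. 10 captures within 10 seconds) are merged
    # into a single effective appearance.
    if not capture_times:
        return 0
    times = sorted(capture_times)
    appearances = 1
    for prev, cur in zip(times, times[1:]):
        if cur - prev > max_gap:
            appearances += 1
    return appearances
```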
The identity information may be the structured information in the structured face images, and the occupation information of the target person may be structured information within the identity information; the structured information in the identity information may also include name, identity card number, residential address, contact information and other information. Obtaining the behavior information of the target person in the target area from the occupation information can be understood as: analyzing the target person's behavior patterns according to the correlation between the occupation and the target area. For example, if the retrieved target person is an ordinary finance worker and appears at a certain luxury hotel 10 times a month, the target person's behavior can be judged abnormal. The correlation between an occupation and an area can be understood as the probability of certain persons appearing in a specific area; for example, public officials are likely to appear in an administrative center, so their occupation is highly correlated with the administrative center, and frequent appearances of a public official in the administrative center can be considered normal behavior.
The identity information may further include the target person's criminal record, which can be obtained from the publicly available archives of the public security organs; if the target person has no criminal record, it can be recorded as none or not displayed. According to the target person's criminal record, the target area for retrieval can be determined in a targeted manner; for example, if the target person has a record of bank robbery, banks or gold shops can be taken as the target area for retrieving the target person, so as to analyze whether the target person is suspected of scouting a location. For example, if the target person is captured appearing frequently at a gold shop, the target person can be considered suspected of scouting the location.
In this implementation, when the face images are retrieved and the structured information of the face images is obtained, more behavior information of the target person can be obtained, which can improve the depth and accuracy of the behavior analysis of the target person.
In this embodiment, a variety of optional implementations are added on the basis of the embodiment shown in FIG. 6, and the accuracy of the obtained behavior information of the target can be further improved.
Referring to FIG. 9, FIG. 9 is a schematic structural diagram of a personnel behavior analysis apparatus provided by an embodiment of the present invention. As shown in FIG. 9, the apparatus comprises:
an acquisition module 401, configured to acquire a face image of a target person to be retrieved;
an area determining module 402, configured to determine a target area in a map interface;
a database determining module 403, configured to determine, according to the target area, a monitoring database for retrieving the face image to be retrieved, the monitoring database comprising the face images of all persons in the target area;
a matching module 404, configured to retrieve the face image to be retrieved in the monitoring database to obtain target face images of the target person, the target face images comprising statistical information and identity information; and
an analysis module 405, configured to obtain, according to the target face images, behavior information of the target person in the target area.
Optionally, as shown in FIG. 10, the area determining module 402 comprises:
a frame-selection unit 4021, configured to receive a frame-selection activation request for the map interface and activate a frame-selection function in response to the frame-selection activation request;
a receiving unit 4022, configured to receive frame-selection information in the map interface, the frame-selection information comprising the area where the frame-selected camera devices are located; and
a determining unit 4023, configured to determine the area where the frame-selected camera devices are located as the target area.
Optionally, as shown in FIG. 9, the area determining module 402 is configured to receive a selection instruction for a scene button preset in the map interface and determine, in response to the selection instruction, the scene area corresponding to the selected scene button as the target area, wherein each scene button corresponds to one scene area, and each scene area is provided with at least one camera device.
Optionally, as shown in FIG. 11, the database determining module 403 comprises:
an acquiring unit 4031, configured to acquire monitoring data of each camera device in the target area, the monitoring data comprising face images of monitored objects; and
an integration unit 4032, configured to integrate the monitoring data of each camera device in the target area to form the monitoring database.
Optionally, as shown in FIG. 9, the target face information comprises statistical information and/or identity information;
the analysis module 405 is configured to analyze the statistical information and/or the identity information to obtain the behavior information of the target person in the target area.
Optionally, as shown in FIG. 9, the statistical information comprises at least one of a capture time period and a capture frequency, and the identity information comprises at least one of an occupation and a criminal record;
the analysis module 405 is configured to analyze at least one of the capture time period, the capture frequency, the occupation and the criminal record to obtain the behavior information of the target person in the target area.
It should be noted that the above apparatus can be applied to personnel behavior analysis devices, for example devices capable of personnel behavior analysis such as computers, servers and mobile phones.
The personnel behavior analysis apparatus provided by the embodiment of the present invention can implement each implementation manner in the method embodiments of FIG. 6 and FIG. 7 and the corresponding beneficial effects; to avoid repetition, details are not repeated here.
Referring to FIG. 12, FIG. 12 is a schematic structural diagram of an electronic device provided by an embodiment of the present invention. As shown in FIG. 12, the electronic device comprises: a memory 602, a processor 601, and a computer program stored in the memory 602 and executable on the processor 601, wherein:
the processor 601 is configured to call the computer program stored in the memory 602 and perform the following steps:
acquiring a face image of a target person to be retrieved;
determining a target area in a map interface;
determining, according to the target area, a monitoring database for retrieving the face image to be retrieved, the monitoring database comprising the face images of all monitored objects in the target area;
retrieving the face image to be retrieved in the monitoring database to obtain target face information of the target person; and
obtaining, according to the target face information, behavior information of the target person in the target area.
Optionally, the determining a target area in a map interface performed by the processor 601 comprises:
receiving a frame-selection activation request for the map interface, and activating a frame-selection function in response to the frame-selection activation request;
receiving frame-selection information in the map interface, the frame-selection information comprising the area where the frame-selected camera devices are located; and
determining the area where the frame-selected camera devices are located as the target area.
Optionally, the determining a target area in a map interface performed by the processor 601 comprises:
receiving a selection instruction for a scene button preset in the map interface, and determining, in response to the selection instruction, the scene area corresponding to the selected scene button as the target area, wherein each scene button corresponds to one scene area, and each scene area is provided with at least one camera device.
Optionally, the determining, according to the target area, of the monitoring database for retrieving the face image to be retrieved performed by the processor 601 comprises:
acquiring monitoring data of each camera device in the target area, the monitoring data comprising face images of monitored objects; and
integrating the monitoring data of each camera device in the target area to form the monitoring database.
Optionally, the target face information comprises statistical information and/or identity information;
the obtaining, according to the target face information, of the behavior information of the target person in the target area performed by the processor 601 comprises:
analyzing the statistical information and/or the identity information to obtain the behavior information of the target person in the target area.
Optionally, the statistical information comprises at least one of a capture time period and a capture frequency, and the identity information comprises at least one of an occupation and a criminal record;
the analyzing of the statistical information and/or the identity information performed by the processor 601 to obtain the behavior information of the target person in the target area comprises:
analyzing at least one of the capture time period, the capture frequency, the occupation and the criminal record to obtain the behavior information of the target person in the target area.
It should be noted that the above electronic device may be a personnel behavior analysis device, for example a device capable of personnel behavior analysis such as a computer, a server or a mobile phone.
The electronic device provided by the embodiment of the present invention can implement each implementation manner in the method embodiments of FIG. 6 and FIG. 7 and the corresponding beneficial effects; to avoid repetition, details are not repeated here.
An embodiment of the present invention further provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, each process of the embodiments of the personnel behavior analysis method provided by the embodiments of the present invention is implemented with the same technical effects; to avoid repetition, details are not repeated here.

Claims (13)

  1. A personnel behavior analysis method, characterized by comprising:
    acquiring a face image of a target person to be retrieved;
    determining a target area in a map interface;
    determining, according to the target area, a monitoring database for retrieving the face image to be retrieved, the monitoring database comprising the face images of all monitored objects in the target area;
    retrieving the face image to be retrieved in the monitoring database to obtain target face information of the target person; and
    obtaining, according to the target face information, behavior information of the target person in the target area.
  2. The method according to claim 1, characterized in that the determining a target area in a map interface comprises:
    receiving a frame-selection activation request for the map interface, and activating a frame-selection function in response to the frame-selection activation request;
    receiving frame-selection information in the map interface, the frame-selection information comprising the area where the frame-selected camera devices are located; and
    determining the area where the frame-selected camera devices are located as the target area.
  3. The method according to claim 1, characterized in that the determining a target area in a map interface comprises:
    receiving a selection instruction for a scene button preset in the map interface, and determining, in response to the selection instruction, the scene area corresponding to the selected scene button as the target area, wherein each scene button corresponds to one scene area, and each scene area is provided with at least one camera device.
  4. The method according to any one of claims 1 to 3, characterized in that the determining, according to the target area, a monitoring database for retrieving the face image to be retrieved comprises:
    acquiring monitoring data of each camera device in the target area, the monitoring data comprising face images of monitored objects; and
    integrating the monitoring data of each camera device in the target area to form the monitoring database.
  5. The method according to any one of claims 1 to 3, characterized in that the target face information comprises statistical information and/or identity information;
    the obtaining, according to the target face information, behavior information of the target person in the target area comprises:
    analyzing the statistical information and/or the identity information to obtain the behavior information of the target person in the target area.
  6. The method according to claim 5, characterized in that the statistical information comprises at least one of a capture time period and a capture frequency, and the identity information comprises at least one of an occupation and a criminal record;
    the analyzing the statistical information and/or the identity information to obtain the behavior information of the target person in the target area comprises:
    analyzing at least one of the capture time period, the capture frequency, the occupation and the criminal record to obtain the behavior information of the target person in the target area.
  7. A personnel behavior analysis apparatus, characterized by comprising:
    an acquisition module, configured to acquire a face image of a target person to be retrieved;
    an area determining module, configured to determine a target area in a map interface;
    a database determining module, configured to determine, according to the target area, a monitoring database for retrieving the face image to be retrieved, the monitoring database comprising the face images of all persons in the target area;
    a matching module, configured to retrieve the face image to be retrieved in the monitoring database to obtain target face information of the target person; and
    an analysis module, configured to obtain, according to the target face information, behavior information of the target person in the target area.
  8. The apparatus according to claim 7, characterized in that the area determining module comprises:
    a frame-selection submodule, configured to receive a frame-selection activation request for the map interface and activate a frame-selection function in response to the frame-selection activation request;
    a receiving submodule, configured to receive frame-selection information in the map interface, the frame-selection information comprising the area where the frame-selected camera devices are located; and
    a determining submodule, configured to determine the area where the frame-selected camera devices are located as the target area.
  9. The apparatus according to claim 7, characterized in that the database determining module comprises:
    an acquiring unit, configured to acquire monitoring data of each camera device in the target area, the monitoring data comprising face images of monitored objects; and
    an integration unit, configured to integrate the monitoring data of each camera device in the target area to form the monitoring database.
  10. The apparatus according to any one of claims 7 to 9, characterized in that the target face information comprises statistical information and/or identity information;
    the analysis module is further configured to analyze the statistical information and/or the identity information to obtain the behavior information of the target person in the target area.
  11. The apparatus according to claim 10, characterized in that the statistical information comprises at least one of a capture time period and a capture frequency, and the identity information comprises at least one of an occupation and a criminal record;
    the analysis module is further configured to analyze at least one of the capture time period, the capture frequency, the occupation and the criminal record to obtain the behavior information of the target person in the target area.
  12. An electronic device, characterized by comprising: a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the personnel behavior analysis method according to any one of claims 1 to 6.
  13. A computer-readable storage medium, characterized in that a computer program is stored on the computer-readable storage medium, and the computer program, when executed by a processor, implements the steps of the personnel behavior analysis method according to any one of claims 1 to 6.
PCT/CN2019/121363 2019-11-27 2019-11-27 Personnel behavior analysis method, apparatus and electronic device WO2021102760A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/121363 WO2021102760A1 (zh) 2019-11-27 2019-11-27 Personnel behavior analysis method, apparatus and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/121363 2019-11-27 2019-11-27 Personnel behavior analysis method, apparatus and electronic device

Publications (1)

Publication Number Publication Date
WO2021102760A1 true WO2021102760A1 (zh) 2021-06-03

Family

ID=76128720

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/121363 WO2021102760A1 (zh) 2019-11-27 2019-11-27 Personnel behavior analysis method, apparatus and electronic device

Country Status (1)

Country Link
WO (1) WO2021102760A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113537130A (zh) * 2021-07-29 2021-10-22 佳都科技集团股份有限公司 Data processing method, apparatus, device and storage medium
CN118397255A (zh) * 2024-06-26 2024-07-26 杭州海康威视数字技术股份有限公司 Method, apparatus and device for analysis area determination and intelligent analysis

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103699677A (zh) * 2013-12-31 2014-04-02 河海大学常州校区 Criminal whereabouts mapping system and method based on face recognition technology
CN108073577A (zh) * 2016-11-08 2018-05-25 中国电信股份有限公司 Alarm method and system based on face recognition
CN108206932A (zh) * 2016-12-16 2018-06-26 北京迪科达科技有限公司 Campus intelligent monitoring and management system
CN111222373A (zh) * 2018-11-26 2020-06-02 深圳云天励飞技术有限公司 Personnel behavior analysis method, apparatus and electronic device


Similar Documents

Publication Publication Date Title
CN111222373 (zh) Personnel behavior analysis method, apparatus and electronic device
CN107292240 (zh) Person-finding method and system based on face and human body recognition
CN109886078 (zh) Target object retrieval and positioning method and apparatus
US9805064B2 (en) System, apparatus, method, program and recording medium for processing image
CN105210048 (zh) Social-media-based content recognition method
JP5438436B2 (ja) Image retrieval apparatus
CN110084103 (zh) Companion analysis method and system based on face recognition technology
US20210357678A1 (en) Information processing method and apparatus, and storage medium
CN112183353 (zh) Image data processing method and apparatus, and related device
WO2020259099A1 (zh) Information processing method and apparatus, and storage medium
JP2020047110 (ja) Person search system and person search method
CN109033440 (zh) Multi-dimensional trajectory analysis method for video investigation
TW202115648 (zh) Behavior analysis method, electronic device and computer storage medium
CN101778260 (zh) Structured-description-based video surveillance management method and system
CN101300588 (zh) Method of determining a particular person in an image collection
WO2020103293 (zh) Method, apparatus and electronic device for presenting person retrieval information
CN109492604 (zh) Face model feature statistical analysis system
JPWO2016162963 (ja) Image retrieval apparatus, system and method
CN115830076 (zh) Intelligent analysis system for personnel trajectory videos
WO2021102760A1 (zh) Personnel behavior analysis method, apparatus and electronic device
WO2014201891 (zh) Multi-target tracking method and system, and computer storage medium
CN114078277 (zh) One-person-one-file face clustering method and apparatus, computer device and storage medium
CN113190700 (zh) Method and system for face capture, screening and storage for real estate transactions
JP5289211 (ja) Image retrieval system, image retrieval program and server apparatus
CN111259789 (zh) Intelligent face-recognition security monitoring management method and system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19954580

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19954580

Country of ref document: EP

Kind code of ref document: A1