WO2021114985A1 - Peer object recognition method, device, server and system - Google Patents

Peer object recognition method, device, server and system

Info

Publication number
WO2021114985A1
WO2021114985A1 (PCT/CN2020/127500, CN2020127500W)
Authority
WO (WIPO, PCT)
Prior art keywords
objects, peer, capture, time interval, time
Application number
PCT/CN2020/127500
Other languages
English (en), French (fr)
Inventors
陈钦, 丁玲德, 蒋庆萍, 周伟军
Original Assignee
杭州海康威视数字技术股份有限公司 (Hangzhou Hikvision Digital Technology Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by 杭州海康威视数字技术股份有限公司
Publication of WO2021114985A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/50: Context or environment of the image
    • G06V20/52: Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V20/53: Recognition of crowd images, e.g. recognition of crowd congestion
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands

Definitions

  • This application relates to the field of security monitoring technology, in particular to a peer object recognition method, device, server and system.
  • Peer object recognition is a key technology in the security field. It identifies whether two objects have a peer (co-traveling) relationship. On the one hand, it can clarify the social relationship between objects, so that potential dangers can be discovered and prevented in time. On the other hand, when a danger occurs, the object information and the time and location of the danger can be obtained promptly, providing important clues for tracing the incident.
  • In the related art, the specific object in each image is first identified, and then it is judged whether other objects appear in the same images as the specific object consecutively. If an object appears in the same images as the specific object consecutively, it is determined that the object and the specific object have a peer object relationship.
  • The purpose of the embodiments of the present application is to provide a peer object recognition method, device, server and system to meet increasingly complex object analysis requirements. The specific technical solutions are as follows.
  • In a first aspect, an embodiment of the present application provides a peer object recognition method, which includes: acquiring an object image collected by a monitoring device; analyzing the object image to determine the capture time and capture location of each object captured in the object image; and, if the distance between the capture locations of at least two objects is within a preset range and the time interval between the capture times of the at least two objects is within a first preset time interval, generating a peer object record between the at least two objects.
  • In a second aspect, an embodiment of the present application provides a peer object recognition method, which includes: acquiring a peer object query request, where the peer object query request includes a target object to be queried; searching a database for all peer object records in which the target object appears, and counting the number of occurrences of each object that is a peer object of the target object in all the peer object records of the target object, where the database stores peer object records between at least two objects whose capture locations, in the object images collected by monitoring devices, are within a preset range of each other and whose capture times are within the first preset time interval; and outputting, among all the peer object records of the target object, the peer object records of the objects whose number of occurrences is greater than a second preset threshold.
  • In a third aspect, an embodiment of the present application provides a peer object recognition device, which includes:
  • an acquisition module, configured to acquire an object image collected by a monitoring device;
  • an analysis module, configured to analyze the object image and determine the capture time and capture location of each object captured in the object image;
  • a recognition and recording module, configured to generate a peer object record between at least two objects if the distance between the capture locations of the at least two objects is within a preset range and the time interval between the capture times of the at least two objects is within the first preset time interval.
  • In a fourth aspect, an embodiment of the present application provides a peer object recognition device, which includes:
  • an obtaining module, configured to obtain a peer object query request, where the peer object query request includes a target object to be queried;
  • a search module, configured to search a database for all peer object records in which the target object appears, and count the number of occurrences of each object that is a peer object of the target object in all the peer object records of the target object, where the database stores peer object records between at least two objects whose capture locations, in the object images collected by monitoring devices, are within a preset range of each other and whose capture times are within the first preset time interval;
  • an output module, configured to output, among all the peer object records of the target object, the peer object records of the objects whose number of occurrences is greater than the second preset threshold.
  • In a fifth aspect, an embodiment of the present application provides a server, including a processor and a memory, where:
  • the memory is configured to store a computer program;
  • the processor is configured to implement the method provided in the first aspect or the method provided in the second aspect of the embodiments of the present application when executing the computer program stored in the memory.
  • In a sixth aspect, the embodiments of the present application provide a non-transitory storage medium storing a computer program. When the computer program is executed by a processor, the method provided in the first aspect or the method provided in the second aspect of the embodiments of the present application is implemented.
  • In a seventh aspect, an embodiment of the present application provides an application program which, when run, executes the method provided in the first aspect or the method provided in the second aspect of the embodiments of the present application.
  • In an eighth aspect, an embodiment of the present application provides a peer object recognition system, which includes multiple monitoring devices and a server;
  • the monitoring devices are configured to collect object images;
  • the server is configured to acquire the object images collected by the monitoring devices; analyze the object images to determine the capture time and capture location of each object captured in the object images; and, if the distance between the capture locations of at least two objects is within a preset range and the time interval between the capture times of the at least two objects is within the first preset time interval, generate a peer object record between the at least two objects.
  • With the peer object recognition method, device, server and system provided by the embodiments of the present application, an object image collected by a monitoring device is acquired and analyzed to determine the capture time and capture location of each object captured in the object image; if the distance between the capture locations of at least two objects is within a preset range and the time interval between the capture times of the at least two objects is within the first preset time interval, a peer object record between the at least two objects is generated.
  • By analyzing the object image, the capture time and capture location of each object captured in the object image can be obtained, and at least two objects whose capture locations are within the preset range of each other and whose capture times are within the first preset time interval are finally determined to be peer objects; a peer object record is generated, and the peer object records between objects are recorded locally to provide a query basis for query personnel.
  • The locally recorded peer object records of every object can be output, rather than only the peer object records of a specific object, so increasingly complex object analysis requirements can be met.
  • FIG. 1 is a schematic flowchart of a peer object recognition method according to an embodiment of this application
  • FIG. 2 is a schematic flowchart of a peer object recognition method according to another embodiment of this application.
  • FIG. 3 is a schematic flowchart of a peer object recognition method according to another embodiment of this application.
  • FIG. 4 is a schematic flowchart of a peer object recognition method according to still another embodiment of this application.
  • FIG. 5 is a schematic flowchart of a peer object recognition method according to another embodiment of this application.
  • FIG. 6 is a schematic diagram of a scene in which peer objects are captured in a single picture according to an embodiment of the application;
  • FIG. 7 is a schematic diagram of a scene in which one monitoring device collects multiple pictures capturing peer objects according to an embodiment of the application;
  • FIG. 8 is a schematic diagram of a scene in which multiple monitoring devices collect multiple pictures capturing peer objects according to an embodiment of the application;
  • FIG. 9 is a schematic diagram of a capture scene in which repeated peer object records are filtered according to an embodiment of the application;
  • FIG. 10 is a schematic diagram of a scene of the principle of peer object matching in four scenarios in an embodiment of the application.
  • FIG. 11 is a schematic flowchart of a peer object recognition method according to still another embodiment of this application.
  • FIG. 12 is a schematic structural diagram of a peer object recognition device according to an embodiment of the application.
  • FIG. 13 is a schematic structural diagram of a peer object recognition device according to another embodiment of the application.
  • FIG. 14 is a schematic structural diagram of a server according to an embodiment of the application.
  • FIG. 15 is a schematic structural diagram of a peer object recognition system according to an embodiment of the application.
  • In some scenarios, the behavior of certain specific objects needs to be monitored, so it is generally necessary to know the peer object information of those specific objects.
  • In the corresponding peer object recognition method, after multiple object images collected by the monitoring equipment are acquired, the specific object in each object image is first identified, and then it is judged whether other objects appear in the same object images as the specific object consecutively. If an object appears in the same object images as the specific object consecutively, it can be determined that the object and the specific object have a peer object relationship. In this way, when there is a query request, the user enters the object information of the specific object and can learn of other objects that have a peer object relationship with the specific object, which provides an analysis basis for monitoring the behavior of the specific object.
  • In order to meet increasingly complex object analysis requirements, the embodiments of the present application provide a peer object recognition method, device, server and system.
  • The peer object recognition method provided by the embodiments of the present application is introduced first below.
  • The execution subject of the peer object recognition method provided by the embodiments of the present application may be a server with core processing capabilities, and the method may be implemented by at least one of software, a hardware circuit and a logic circuit provided in the execution subject.
  • As shown in FIG. 1, the peer object recognition method provided by an embodiment of the present application may include the following steps.
  • S101 Acquire an object image collected by a monitoring device.
  • In application scenarios such as city surveillance, traffic surveillance and building surveillance, one or more monitoring devices are usually deployed for real-time monitoring; when an object enters the monitored area, object images can be collected.
  • An object image is an image that includes an object; the object here is the target to be monitored, for example a vehicle, an animal or a pedestrian.
  • The monitoring device may be a capture device such as a video camera or a still camera.
  • Once an object passes a monitoring device, the monitoring device automatically recognizes the object target, captures the object, and uploads the captured object image to the server used for peer object recognition.
  • Monitoring devices can be set up in areas with a relatively large flow of objects. When setting up a monitoring device, conditions such as angle, height and brightness need to be ensured so that the requirements for capturing objects are met.
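  • As a minimal illustration of the capture-and-upload step described above, the sketch below shows a hypothetical HTTP ingestion endpoint on the server side. The application does not specify the upload protocol, the field names (device_id, capture_time, latitude, longitude) or the use of Flask; all of these are assumptions made for the example.

```python
# Hypothetical ingestion endpoint: the real upload protocol and field names
# are not specified by the application; Flask and this schema are assumptions.
from pathlib import Path
from flask import Flask, request, jsonify

app = Flask(__name__)
UPLOAD_DIR = Path("captures")
UPLOAD_DIR.mkdir(exist_ok=True)

@app.route("/captures", methods=["POST"])
def receive_capture():
    # The monitoring device is assumed to POST the captured image plus
    # its own metadata (device id, timestamp, installation lat/long).
    image = request.files["image"]
    meta = {
        "device_id": request.form.get("device_id"),
        "capture_time": request.form.get("capture_time"),   # e.g. Unix seconds
        "latitude": float(request.form.get("latitude", 0.0)),
        "longitude": float(request.form.get("longitude", 0.0)),
    }
    image.save(UPLOAD_DIR / f"{meta['device_id']}_{meta['capture_time']}.jpg")
    # In the described system the image would now be passed on to the
    # intelligent analysis service (feature extraction, step S102).
    return jsonify({"status": "ok", "meta": meta})

if __name__ == "__main__":
    app.run(port=8080)
```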
  • S102 Analyze the object image, and determine the time and location of each object captured in the object image.
  • The server used for peer object recognition provides an intelligent analysis service. After the object image is obtained, it can be analyzed; the analysis process extracts the capture time and capture location of each object in the object image, and the objects are distinguished by their object features. Object features are attribute features of an object, for example one or more of facial features, clothing color, height or body shape.
  • The capture time is the time at which the object target is captured, and can usually be read from the image attributes.
  • The capture location is the position of the object target when it is captured, and can usually be represented by the latitude and longitude of the monitoring device or by its installation position.
  • For monitoring devices equipped with a positioning module, such as a GPS or BeiDou module, the latitude and longitude collected by the positioning module, i.e. the latitude and longitude of the monitoring device, can directly represent the capture location; for monitoring devices without a positioning module, the installation position of the monitoring device can represent the capture location.
  • The capture location can also be the actual latitude and longitude of the object target.
  • For example, the coordinates of the object in the image can be obtained, and then the coordinates of the object in the world coordinate system can be obtained according to a pre-established correspondence between image coordinates and world coordinates, so as to obtain the actual latitude and longitude of the object target.
  • For the method of establishing the correspondence between image coordinates and world coordinates, reference may be made to camera extrinsic calibration methods in the related art, which are not repeated here.
  • The capture time and capture location of one object can also form one piece of capture data, and multiple pieces of capture data can be obtained by analyzing one object image or multiple object images.
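  • The sketch below illustrates one possible shape of such a piece of capture data, together with the image-to-world mapping mentioned above. The CaptureRecord fields and the 3x3 homography H (which would come from a prior camera calibration) are assumptions made for illustration, not structures defined by the application.

```python
# A minimal sketch of one piece of capture data and of mapping an image
# coordinate to a world coordinate via a pre-calibrated homography.
# The field names and the homography values are illustrative assumptions.
from dataclasses import dataclass
import numpy as np

@dataclass
class CaptureRecord:
    object_id: str        # identity label assigned after feature comparison
    capture_time: float   # Unix timestamp read from the image attributes
    latitude: float       # capture location (device position or object position)
    longitude: float

def image_to_world(H: np.ndarray, u: float, v: float) -> tuple[float, float]:
    """Map pixel (u, v) to world coordinates with a 3x3 homography H
    obtained from a prior extrinsic calibration (assumed available)."""
    p = H @ np.array([u, v, 1.0])
    return float(p[0] / p[2]), float(p[1] / p[2])

# Example usage with a made-up calibration matrix:
H = np.array([[0.01, 0.0, 30.25],
              [0.0, 0.01, 120.15],
              [0.0, 0.0, 1.0]])
x, y = image_to_world(H, 640.0, 360.0)
rec = CaptureRecord("obj-1", 1575950400.0, latitude=y, longitude=x)
```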
  • S103 If the distance between the capture locations of at least two objects is within a preset range and the time interval between the capture times of the at least two objects is within a first preset time interval, generate a peer object record between the at least two objects.
  • A peer object record between the at least two objects indicates that the at least two objects are peer objects.
  • The peer object record between the at least two objects may include the identity identifiers of the objects, such as IDs, and may also include the captured images or image features of the objects, where the image features of an object may be features extracted by a deep learning network.
  • The embodiment of this application provides a rule for judging peer objects: if different objects pass through a certain area (the distance between their capture locations is within a preset range) one after another (a few seconds apart, captured in different pictures) or at the same time (captured in the same picture), they are regarded as peer objects, and a peer object record between the objects is generated.
  • That is to say, if the distance between the capture locations of at least two objects is within a preset range (for example, the same building, the four directions of an intersection, the same square, etc.) and the time interval between the capture times of the at least two objects is within the first preset time interval (i.e. a few seconds apart, or at the same time), the at least two objects are identified as peer objects, and a peer object record between the at least two objects is generated.
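  • A minimal sketch of this judgment rule is given below. The 200 m preset range and 10 s first preset time interval are placeholder values, and the haversine formula is only one way to measure the distance between capture locations; none of these specifics are prescribed by the application.

```python
# Sketch of the peer-object judgment rule: two captures count as one
# co-travel event when their locations are within a preset range and
# their capture times are within the first preset time interval.
# The thresholds below are placeholder values, not values from the application.
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/long points, in meters."""
    r = 6_371_000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def is_peer(rec_a, rec_b, preset_range_m=200.0, first_interval_s=10.0):
    """rec_a / rec_b are (object_id, capture_time, lat, lon) tuples."""
    id_a, t_a, lat_a, lon_a = rec_a
    id_b, t_b, lat_b, lon_b = rec_b
    if id_a == id_b:                      # same object, no peer record
        return False
    close_enough = haversine_m(lat_a, lon_a, lat_b, lon_b) <= preset_range_m
    near_in_time = abs(t_a - t_b) <= first_interval_s
    return close_enough and near_in_time

# Example: two objects captured 3 s apart and roughly 45 m from each other are peers.
a = ("obj-1", 1000.0, 30.2500, 120.1500)
b = ("obj-2", 1003.0, 30.2504, 120.1501)
assert is_peer(a, b)
```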
  • The locally recorded peer object records of every object can be output, rather than only the peer object records of a specific object, so the obtained peer object records are more comprehensive and increasingly complex object analysis requirements can be met.
  • Moreover, every time at least two objects are identified as peer objects, a peer object record between the at least two objects is generated, so there will be many peer object records for different objects. In this way, when peer object records are output, the output is based on the number of generated peer object records, which supports the accuracy of the output peer object records.
  • S103 can be specifically implemented through the following steps:
  • A filter condition is added: if the same at least two objects are recognized as peer objects multiple times within a short period of time, only one peer object record within a second preset time interval is retained, and the other peer object records of the at least two objects within the second preset time interval are deleted. That is, in this case, no matter how many times the objects are recognized as peers within the interval, only one peer object record is kept.
  • In this way, the peer object records of each object target can be obtained.
  • According to the peer object rule, if the number of times two or more objects are consecutively identified as peer objects is greater than or equal to a first preset threshold (for example, they are identified as peers 20 times in one day), it can be determined that these objects are peers of each other, and only one peer object record between these objects is recorded.
  • Specifically, the last peer object record within the second preset time interval can be retained; a peer object record generated at another time within the interval can also be the one retained, which is not specifically limited here.
  • In this way, the amount of peer object record data that needs to be stored is reduced, which effectively saves storage resources. Moreover, because of the limitation of the judgment rule, a single recognition may be a misjudgment (for example, objects that merely happen to brush past each other once may be misjudged as peers); by recording objects that have a peer relationship over multiple recognitions as one peer object record, misjudgments of peers can be reduced.
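  • The sketch below illustrates one way such filtering could work: co-travel events for the same pair of objects are collapsed so that at most one record is kept per second preset time interval, and a pair is only treated as peers once its number of retained records reaches the first preset threshold. The ten-minute window and the threshold of 20 are placeholder values, and keeping the first event of each window is one of the allowed retention choices described above.

```python
# Sketch of the two filter conditions described above: collapse repeated
# recognitions of the same pair within a short window, then require a
# minimum number of recognitions before treating the pair as peers.
# Window size and threshold are placeholder values.
from collections import defaultdict

def dedupe_pair_events(events, second_interval_s=600.0):
    """events: list of (object_a, object_b, capture_time).
    Keeps at most one event per pair per second preset time interval."""
    kept, last_kept = [], {}
    for a, b, t in sorted(events, key=lambda e: e[2]):
        pair = tuple(sorted((a, b)))
        if pair not in last_kept or t - last_kept[pair] > second_interval_s:
            kept.append((pair, t))
            last_kept[pair] = t
    return kept

def confirmed_peers(kept_events, first_threshold=20):
    """Pairs whose number of retained co-travel records reaches the threshold."""
    counts = defaultdict(int)
    for pair, _ in kept_events:
        counts[pair] += 1
    return {pair for pair, n in counts.items() if n >= first_threshold}

# Example: three recognitions of the same pair within ten minutes collapse to one.
events = [("obj-1", "obj-2", 0.0), ("obj-2", "obj-1", 120.0), ("obj-1", "obj-2", 300.0)]
assert len(dedupe_pair_events(events)) == 1
```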
  • In one embodiment, the step of obtaining the object image collected by the monitoring device may specifically be: obtaining at least one object image collected by one monitoring device; or obtaining multiple object images collected by multiple monitoring devices.
  • Correspondingly, the step of analyzing the object image to determine the capture time and capture location of each object captured in the object image may specifically be: analyzing one object image collected by one monitoring device to determine the capture time and capture location of each object captured in the object image, where the capture time is the timestamp of the object image collected by the monitoring device and the capture location is the installation location of the monitoring device; or analyzing multiple object images collected by one monitoring device to determine the capture time and capture location of each object captured in each object image, where the capture time is the timestamp of each object image collected by the monitoring device and the capture location is the installation location of the monitoring device; or analyzing multiple object images collected by multiple monitoring devices to determine the capture time and capture location of each object captured in each object image, where the capture time is the timestamp of each object image collected by each monitoring device and the capture location is the installation location of the monitoring device that collected each object image.
  • Correspondingly, the recording step may specifically be: if at least two objects are captured in one object image collected by one monitoring device, generating a peer object record between the at least two objects; or, if the time interval between the capture times of at least two objects in multiple object images collected by one monitoring device is within the first preset time interval, generating a peer object record between the at least two objects; or, if the distance between the capture locations of at least two objects in multiple object images collected by multiple monitoring devices is within a preset range and the time interval between the capture times of the at least two objects is within the first preset time interval, generating a peer object record between the at least two objects.
  • One monitoring device can monitor a specified area, and multiple monitoring devices can also monitor the specified area at the same time.
  • Whether multiple monitoring devices monitor the same specified area can be determined from the installation locations of the monitoring devices (such as the installed latitude and longitude). Because of the needs of the scene, multiple monitoring devices may be arranged at the same point, with complementary capture capabilities; or the installation locations of multiple monitoring devices may have different geographical latitudes and longitudes, but their collaborative work can make up for blind spots in monitoring.
  • The first scenario in which peer objects are captured is the scenario in which peer objects are captured in a single picture.
  • One monitoring device monitors a monitoring area; the object image collected by the monitoring device at a certain moment is analyzed, the timestamp of the object image collected by the monitoring device is used as the capture time, and the installation location of the monitoring device is used as the capture location. In this case, when at least two objects are captured in the object image, they have the same capture time (i.e. the timestamp) and the same capture location (i.e. the installation location), so a peer object record between the at least two objects is generated.
  • The second scenario in which peer objects are captured is the scenario in which one monitoring device captures peer objects in multiple pictures.
  • One monitoring device monitors a monitoring area; multiple object images collected by the monitoring device within a certain period of time are analyzed, the timestamp of each object image collected by the monitoring device is used as the capture time of each object in that object image, and the installation location of the monitoring device is used as the capture location of each object in each object image. In this case, when at least two objects are captured across the object images, they have the same capture location (i.e. the installation location), so it is only necessary to determine whether the time interval between the capture times (i.e. the timestamps) of the at least two objects is within the first preset time interval; if so, a peer object record between the at least two objects is generated.
  • The third scenario in which peer objects are captured is the scenario in which peer objects are captured in multiple pictures collected by different monitoring devices. In this case, both the distance between the capture locations and the time interval between the capture times of the at least two objects need to be judged, as described above.
  • the embodiment of the present application also provides a peer object recognition method. As shown in FIG. 2, the method may include the following steps.
  • S201 Obtain an object image collected by a monitoring device.
  • S202 Analyze the object image, and determine the time and location of each object captured in the object image.
  • S201 and S202 in the embodiment shown in FIG. 2 are the same as S101 and S102 in the embodiment shown in FIG. 1, and will not be repeated here.
  • S203 Identify the object characteristics of each object, and cluster each object based on the object characteristics of each object.
  • the server used for peer object recognition also provides an object comparison service for object feature comparison.
  • The comparison process can be a comparison between the object features of the objects, or a comparison between an acquired object and a known object feature model.
  • The purpose of the comparison is to cluster the objects, grouping objects of the same class together; after clustering, objects of different classes can be distinguished by marking each object with an identity label.
  • In one embodiment, the specific way of clustering the objects may be: comparing the object features of the objects to obtain the object feature similarity between the objects; marking objects whose object feature similarity is greater than or equal to a preset similarity threshold with the same identity label; and marking objects whose object feature similarity is less than the preset similarity threshold with different identity labels.
  • The way of comparing the object features may be to compare the object features of the objects pairwise to obtain the object feature similarity between the objects, for example comparing object features such as clothing color, hair length and height to obtain the object feature similarity between two objects. The higher the similarity, the greater the probability that the two objects are the same object. Therefore, in the embodiments of the present application, a preset similarity threshold is set: if the object feature similarity is greater than or equal to the preset similarity threshold, the two objects can basically be regarded as the same object and are marked with the same identity label; if the object feature similarity is less than the preset similarity threshold, the two objects can basically be regarded as different objects and are marked with different identity labels.
  • In this way, object targets can be clustered: objects whose object feature similarity is greater than or equal to the preset similarity threshold are marked with the same identity label, and objects of the same class are grouped into the same class.
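  • A minimal sketch of this labeling step, assuming cosine similarity over feature vectors and a single greedy pass over the captures, is given below. The similarity measure, the 0.8 threshold and the greedy assignment strategy are illustrative assumptions rather than choices prescribed by the application.

```python
# Sketch of assigning identity labels by feature similarity: a capture whose
# feature is similar enough to an existing cluster gets that cluster's label,
# otherwise it starts a new label. Threshold and similarity measure are assumptions.
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def assign_identity_labels(features, similarity_threshold=0.8):
    """features: list of 1-D feature vectors; returns one label per capture."""
    labels, prototypes = [], []          # prototypes[i] is a representative vector
    for feat in features:
        best_label, best_sim = None, -1.0
        for label, proto in enumerate(prototypes):
            sim = cosine(feat, proto)
            if sim > best_sim:
                best_label, best_sim = label, sim
        if best_sim >= similarity_threshold:
            labels.append(best_label)    # same object as an existing cluster
        else:
            prototypes.append(feat)      # new identity label
            labels.append(len(prototypes) - 1)
    return labels

# Example: two near-identical features share a label, a dissimilar one does not.
f = [np.array([1.0, 0.0]), np.array([0.99, 0.05]), np.array([0.0, 1.0])]
assert assign_identity_labels(f) == [0, 0, 1]
```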
  • The comparison of object features can also be a comparison between the object features of each object and a known object feature model. The object feature model is a model of a known object established based on experience; through this comparison, the identity of the object can be known more clearly.
  • After clustering, objects belonging to the same class are regarded as the same object. If the distance between the capture locations of at least two objects belonging to different classes is within the preset range, and the time interval between the capture times of the at least two objects of different classes is within the first preset time interval, a peer object record between the at least two objects of different classes is generated. After the objects are clustered, the objects are divided more accurately: objects belonging to the same class will not be recorded in a peer object record, and only objects belonging to different classes will be recorded in a peer object record.
  • the embodiment of the present application also provides a peer object recognition method. As shown in FIG. 3, the method may include the following steps.
  • S301 Obtain an object image collected by a monitoring device.
  • S302 Analyze the object image, and determine the time and location of each object captured in the object image.
  • S301 and S302 in the embodiment shown in FIG. 3 are the same as S101 and S102 in the embodiment shown in FIG. 1, and will not be repeated here.
  • S303 Store the capture time and capture location of each object in the first database.
  • the capture time and location of each object can be stored in the first database to achieve the purpose of big data storage and provide a big data basis for subsequent peer object judgment.
  • S304 Extract the capture time and capture location of at least two objects from the first database.
  • S305 If the distance between the capture locations of the at least two objects is within a preset range, and the time interval between the capture times of the at least two objects is within the first preset time interval, generate a peer object record between the at least two objects.
  • the first database and the second database may be the same database or different databases, and both are within the protection scope of this application. In one embodiment, the first database and the second database are different databases.
  • In one embodiment, the first database is used to store the captured images of the objects, the object features, and the capture times and capture locations of the objects, while the second database is used to store the generated peer object records, which facilitates the classified management of the data.
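  • The sketch below shows one possible storage layout along these lines, using two SQLite databases, one for capture data and one for peer object records. The file names, table names and column names are assumptions made for illustration; the application does not specify a particular database or schema.

```python
# Sketch of the two-database layout: the first database holds capture data,
# the second holds peer object records. Schema and file names are assumptions.
import sqlite3

capture_db = sqlite3.connect("captures.db")        # "first database"
capture_db.execute("""
    CREATE TABLE IF NOT EXISTS captures (
        object_id    TEXT,      -- identity label assigned by feature comparison
        capture_time REAL,      -- timestamp of the object image
        latitude     REAL,
        longitude    REAL,
        image_path   TEXT       -- reference to the captured image
    )""")

peer_db = sqlite3.connect("peer_records.db")       # "second database"
peer_db.execute("""
    CREATE TABLE IF NOT EXISTS peer_records (
        object_a     TEXT,
        object_b     TEXT,
        capture_time REAL,      -- time of the co-travel event
        latitude     REAL,
        longitude    REAL
    )""")

def store_capture(object_id, t, lat, lon, image_path):
    capture_db.execute("INSERT INTO captures VALUES (?, ?, ?, ?, ?)",
                       (object_id, t, lat, lon, image_path))
    capture_db.commit()

def store_peer_record(object_a, object_b, t, lat, lon):
    peer_db.execute("INSERT INTO peer_records VALUES (?, ?, ?, ?, ?)",
                    (object_a, object_b, t, lat, lon))
    peer_db.commit()
```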
  • the embodiment of the present application also provides a peer object recognition method. As shown in FIG. 4, the method may include the following steps.
  • S401 Obtain an object image collected by a monitoring device.
  • S402 Analyze the object image, and determine the capture time and capture location of each object captured in the object image.
  • S403 If the distance between the capture locations of at least two objects is within a preset range, and the time interval between the capture times of the at least two objects is within the first preset time interval, generate a peer object record between the at least two objects.
  • S401 to S403 in the embodiment shown in FIG. 4 are the same as S101 to S103 in the embodiment shown in FIG. 1, and will not be repeated here.
  • S404 Acquire a peer object query request, where the peer object query request includes the object information of the target object to be queried.
  • When there is a query requirement, a peer object query request is sent to the server through a query client (usually a computer).
  • The peer object query request includes the object information of the target object to be queried; for example, the object information of the target object can include one or more of the target object's name, ID, features and other information.
  • S405 According to the object information of the target object, search for all the peer object records where the target object appears, and count the appearance times of each object that is the same as the target object in all the peer object records of the target object.
  • After the server receives the peer object query request, it searches the recorded peer object records, based on the target object's object information, for all the peer object records in which the target object appears. These peer object records include the other objects that have a peer object relationship with the target object; by counting the appearances of these other objects, the number of occurrences of each object that is a peer object of the target object is obtained.
  • S406 Output the peer object records of the object whose appearance times are greater than the second preset threshold among all the peer object records of the target object.
  • Outputting the peer object records whose number of occurrences is greater than the second preset threshold means selecting, from all the peer object records of the target object, the peer object records of the objects whose number of occurrences is greater than the second preset threshold, and outputting them.
  • An object's number of occurrences being greater than the second preset threshold means that the number of times the object has traveled together with the target object is greater than the second preset threshold. Since the inquirer generally pays attention to the objects that frequently have a peer relationship with the target object, and such objects have a close relationship with the target object, when giving feedback to the inquirer, the peer object records of the objects whose number of co-travels with the target object is greater than the second preset threshold can be reported. The second preset threshold can be set based on experience, and each output peer object record of the target object contains the target object and an object whose number of co-travels with the target object is greater than the second preset threshold.
  • In this way, the peer object records of the target object are searched out from the recorded peer object records, and, based on the number of times each object has traveled together with the target object, the peer object records of the objects whose co-travel count with the target object is greater than the second preset threshold are output. According to the real-time needs of the query personnel, accurate query results of peer object records can be output to the query personnel in a targeted manner, which meets the query requirements of the query personnel.
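  • A minimal sketch of this query step over an in-memory list of peer object records is given below; the record layout, the example data and the threshold of 2 are illustrative assumptions.

```python
# Sketch of the query step: find all peer object records containing the target,
# count how often each companion appears, and return the records of companions
# whose count exceeds the second preset threshold. Data layout is an assumption.
from collections import Counter

def query_peer_records(peer_records, target_id, second_threshold=2):
    """peer_records: list of (object_a, object_b, capture_time) tuples."""
    target_records = [r for r in peer_records if target_id in (r[0], r[1])]
    companions = Counter(r[0] if r[1] == target_id else r[1] for r in target_records)
    frequent = {obj for obj, n in companions.items() if n > second_threshold}
    return [r for r in target_records
            if (r[0] if r[1] == target_id else r[1]) in frequent]

# Example: obj-2 co-travels with obj-1 three times, obj-3 only once,
# so only obj-2's records are returned for a threshold of 2.
records = [("obj-1", "obj-2", 10.0), ("obj-2", "obj-1", 20.0),
           ("obj-1", "obj-2", 30.0), ("obj-1", "obj-3", 40.0)]
assert len(query_peer_records(records, "obj-1", second_threshold=2)) == 3
```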
  • FIG. 5 is a schematic flowchart of a peer object recognition method provided by an embodiment of the application.
  • After the monitoring equipment collects an object image, it reports the object image to the server that provides the central service.
  • The intelligent analysis service in the server analyzes the object image and determines the capture time and capture location of each object captured in the object image; the object application calls the capture data (the capture time and capture location of each captured object) and the extracted object features.
  • The comparison service compares the object features to determine which captures belong to the same object, so that identity identifiers can be assigned, and returns the comparison result to the object application; the capture data and the comparison result are sent to the data warehouse (the first database) for storage, and historical capture data is extracted from the data warehouse when peer object recognition is performed.
  • The specific calculation process is as described in the above method embodiments and is not repeated here; the calculation result is finally output to the database (the second database) for storage.
  • As shown in FIG. 6, it is a schematic diagram of a scene in which peer objects are captured in a single picture.
  • At time t1, monitoring device 1 captures an image in which object 1 and object 2 appear.
  • The collected object image is uploaded to the server, and the intelligent analysis service analyzes the object image to obtain object feature 1 and object feature 2; the two object features are compared through the comparison service to determine whether they belong to the same object.
  • After comparison, the object represented by object feature 1 is bound to identity ID 1, and the object represented by object feature 2 is bound to identity ID 2.
  • Through the analysis, the position of object 1 (the object bound to ID 1) in the image and the position of object 2 (the object bound to ID 2) in the image can be obtained, so that the capture record of object 1 at time t1 (including the capture time and capture location) and the capture record of object 2 at time t1 can be obtained, and the capture records are saved in the data warehouse.
  • According to the peer object rule (different objects that pass through a certain area one after another, a few seconds apart and captured in different pictures, or at the same time and captured in the same picture, are regarded as peer objects), it can be determined that object 1 and object 2 are peer objects, and a peer object record is generated.
  • As shown in FIG. 7, it is a schematic diagram of a scene in which one monitoring device collects multiple pictures capturing peer objects.
  • Monitoring device 1 captures images at times t1 and t2, in which object 1 and object 2 appear respectively; the two collected object images are uploaded to the server, and the intelligent analysis service analyzes the object images to obtain object feature 1 and object feature 2; the two object features are compared through the comparison service to determine whether they belong to the same object.
  • After comparison, the object represented by object feature 1 is bound to identity ID 1, and the object represented by object feature 2 is bound to identity ID 2.
  • Through the analysis, the position of object 1 (the object bound to ID 1) in its image and the position of object 2 (the object bound to ID 2) in its image can be obtained, so that the capture record of object 1 at time t1 and the capture record of object 2 at time t2 are obtained and saved to the data warehouse. According to the peer object rule (different objects that pass through a certain area one after another, a few seconds apart and captured in different pictures, or at the same time and captured in the same picture, are regarded as peer objects), since the difference between t1 and t2 is less than the first preset time interval, it can be determined that object 1 and object 2 are peer objects, and a peer object record is generated.
  • As shown in FIG. 8, it is a schematic diagram of a scene in which multiple monitoring devices collect multiple pictures capturing peer objects.
  • Object 1 and object 2 are captured by monitoring device 1 and monitoring device 2 in the same monitoring area 1 at times t1 and t2, respectively.
  • The two collected object images are uploaded to the server, and the intelligent analysis service analyzes the object images to obtain object feature 1 and object feature 2; the two object features are compared through the comparison service to determine whether they belong to the same object.
  • After comparison, the object represented by object feature 1 is bound to identity ID 1, and the object represented by object feature 2 is bound to identity ID 2.
  • Through the analysis, the position of object 1 (the object bound to ID 1) in its image and the position of object 2 (the object bound to ID 2) in its image can be obtained, so that the capture record of object 1 at time t1 and the capture record of object 2 at time t2 are obtained and saved to the data warehouse. According to the peer object rule (different objects that pass through a certain area one after another, a few seconds apart and captured in different pictures, or at the same time and captured in the same picture, are regarded as peer objects), since the distance between the capture locations is within the preset range and the difference between t1 and t2 is less than the first preset time interval, it can be determined that object 1 and object 2 are peer objects, and a peer object record is generated.
  • The following takes captures by the same camera as an example to describe in detail an implementation of filtering repeated peer object records to increase the effective peer rate, as shown in FIG. 9.
  • Monitoring device 1 captures images in which object 1 and object 2 both appear, at times t1, t2 and t3.
  • The collected object images are uploaded to the server, the object images are analyzed by the intelligent analysis service to obtain object feature 1 and object feature 2, and the two object features are compared through the comparison service.
  • The capture records of object 1 and object 2 at times t1, t2 and t3 are obtained and saved in the data warehouse.
  • Since both the time interval between t2 and t1 and the time interval between t3 and t1 are less than the second preset time interval, it is considered that repeated peer recognition of the same group of objects has occurred within a short period of time, so the result is retained only once and recorded as one peer object record.
  • In this way, the peer object records of each object can be obtained, and the records are then processed according to the peer object counting rule (within a period of time, which can be a day, a week or a month, two or more objects that are recognized as peers multiple times are regarded as peers of each other): the peer object records of pairs that reach the co-travel count threshold (such as five records in one day) are filtered out, and the corresponding objects are fed back to the inquirer as peer objects.
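  • The sketch below illustrates this kind of period-based counting: co-travel records are grouped per pair and per day, and a pair is fed back as peer objects once its daily record count reaches a threshold. The daily grouping and the threshold of five are placeholder choices taken from the example above.

```python
# Sketch of period-based peer aggregation: count co-travel records per pair per day
# and report pairs that reach the threshold (five records per day in the example).
from collections import defaultdict
from datetime import datetime, timezone

def daily_peer_pairs(peer_records, threshold_per_day=5):
    """peer_records: list of (object_a, object_b, unix_time); returns
    {(pair, day): count} entries that reach the threshold."""
    counts = defaultdict(int)
    for a, b, t in peer_records:
        pair = tuple(sorted((a, b)))
        day = datetime.fromtimestamp(t, tz=timezone.utc).date().isoformat()
        counts[(pair, day)] += 1
    return {key: n for key, n in counts.items() if n >= threshold_per_day}

# Example: five records of the same pair on one day reach the threshold.
records = [("obj-1", "obj-2", 1575950400.0 + i * 3600) for i in range(5)]
assert daily_peer_pairs(records)
```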
  • In another embodiment of the present application, a peer object recognition method is provided whose execution subject is a system composed of a server and a database. As shown in FIG. 11, the method may include the following steps.
  • S1101 Acquire a peer object query request, where the peer object query request includes the target object to be queried.
  • the target object to be queried refers to the object that the user wants to query.
  • For example, the target object may be the object whose name is Zhang San, or the object whose ID is 111111, and so on.
  • In order to query the peer objects of the target object, the peer object query request includes the target object to be queried; specifically, the peer object query request includes the identity identifier of the target object to be queried.
  • S1102 Search the database for all peer object records in which the target object appears, and count the number of occurrences of each object that is a peer object of the target object in all the peer object records of the target object, where the database stores peer object records between at least two objects whose capture locations, in the object images collected by the monitoring devices, are within a preset range of each other and whose capture times are within the first preset time interval.
  • S1103 Output, among all the peer object records of the target object, the peer object records of the objects whose number of occurrences is greater than the second preset threshold.
  • In this embodiment, the peer object records in which the target object appears are searched from the database in a targeted manner, and, based on the number of occurrences of each object that is a peer object of the target object, the peer object records of the objects whose number of occurrences is greater than the second preset threshold among all the peer object records of the target object are output.
  • The database provides big data storage; in the peer object query scenario, it provides a big data basis for the query, which ensures the integrity of the data and provides a guarantee for the accuracy of the query results.
  • an embodiment of the present application provides a peer object recognition device.
  • the device may include:
  • the obtaining module 1210 is used to obtain the object image collected by the monitoring device;
  • the analysis module 1220 is used to analyze the object image and determine the capture time and capture location of each object captured in the object image;
  • the recognition and recording module 1230 is configured to generate a peer object record between at least two objects if the distance between the capture locations of the at least two objects is within a preset range and the time interval between the capture times of the at least two objects is within the first preset time interval.
  • the device may also include:
  • the clustering module is used to identify the object characteristics of each object; based on the object characteristics of each object, cluster each object;
  • the identification record module 1230 can be specifically used for:
  • if the distance between the capture locations of at least two objects belonging to different classes is within a preset range and the time interval between the capture times of the at least two objects of different classes is within the first preset time interval, generating a peer object record between the at least two objects of different classes.
  • In one embodiment, the obtaining module 1210 can be specifically used for: obtaining at least one object image collected by one monitoring device; or obtaining multiple object images collected by multiple monitoring devices.
  • The analysis module 1220 can be specifically used for: analyzing one object image collected by one monitoring device to determine the capture time and capture location of each object captured in the object image, where the capture time is the timestamp of the object image collected by the monitoring device and the capture location is the installation location of the monitoring device; or analyzing multiple object images collected by one monitoring device to determine the capture time and capture location of each object captured in each object image, where the capture time is the timestamp of each object image collected by the monitoring device and the capture location is the installation location of the monitoring device; or analyzing multiple object images collected by multiple monitoring devices to determine the capture time and capture location of each object captured in each object image, where the capture time is the timestamp of each object image collected by each monitoring device and the capture location is the installation location of the monitoring device that collected each object image.
  • The recognition and recording module 1230 can be specifically used for: if at least two objects are captured in one object image collected by one monitoring device, generating a peer object record between the at least two objects; or, if the time interval between the capture times of at least two objects in multiple object images collected by one monitoring device is within the first preset time interval, generating a peer object record between the at least two objects; or, if the distance between the capture locations of at least two objects in multiple object images collected by multiple monitoring devices is within a preset range and the time interval between the capture times of the at least two objects is within the first preset time interval, generating a peer object record between the at least two objects.
  • the device may also include:
  • a storage module, used to store the capture time and capture location of each object in the first database;
  • correspondingly, the recognition and recording module 1230 can be specifically used for: extracting the capture time and capture location of at least two objects from the first database; and, if the distance between the capture locations of the at least two objects is within a preset range and the time interval between the capture times of the at least two objects is within the first preset time interval, generating a peer object record between the at least two objects.
  • the obtaining module 1210 can also be used to:
  • the peer object query request includes the object information of the target object to be queried
  • the device may also include:
  • a search module, used to find, according to the object information of the target object, all the peer object records in which the target object appears, and count the number of occurrences of each object that is a peer object of the target object in all the peer object records of the target object;
  • an output module, used to output, among all the peer object records of the target object, the peer object records of the objects whose number of occurrences is greater than the second preset threshold.
  • With this device, the locally recorded peer object records of every object can be output, rather than only the peer object records of a specific object, so the obtained peer object records are more comprehensive and increasingly complex object analysis requirements can be met.
  • Moreover, every time at least two objects are identified as peer objects, a peer object record between the at least two objects is generated, so there will be many peer object records for different objects; when peer object records are output, the output is based on the number of generated peer object records, which supports the accuracy of the output peer object records.
  • the embodiment of the present application also provides a peer object recognition device. As shown in FIG. 13, the device may include:
  • the obtaining module 1310 is configured to obtain a peer object query request, where the peer object query request includes the target object to be queried;
  • the search module 1320 is used to search for all peer object records where the target object appears in the database, and count the number of occurrences of each object that is the same object as the target object in all the peer object records of the target object.
  • The database stores peer object records between at least two objects whose capture locations, in the object images collected by the monitoring devices, are within a preset range of each other and whose capture times are within the first preset time interval;
  • the output module 1330 is configured to output the peer records of the objects whose appearance times are greater than the second preset threshold among all the peer object records of the target object.
  • With this device, the peer object records in which the target object appears are searched from the database in a targeted manner, and, based on the number of times each object travels together with the target object, the peer object records of the objects whose number of occurrences is greater than the second preset threshold are output.
  • the database provides big data storage. In the peer object query scenario, it provides the basis for big data query to ensure the integrity of the data and provide a guarantee for the accuracy of the query results.
  • An embodiment of the present application also provides a server, as shown in FIG. 14, including a processor 1401 and a memory 1402, where:
  • the memory 1402 is used to store computer programs
  • the processor 1401 is configured to execute any of the peer object recognition methods provided in the embodiment of the present application when executing the computer program stored in the memory 1402.
  • The foregoing memory may include a RAM (Random Access Memory), and may also include an NVM (Non-Volatile Memory), such as at least one disk storage. Optionally, the memory may also be at least one storage device located away from the foregoing processor.
  • The above-mentioned processor may be a general-purpose processor, including a CPU (Central Processing Unit), an NP (Network Processor), etc.; it may also be a DSP (Digital Signal Processor), an ASIC (Application Specific Integrated Circuit), an FPGA (Field-Programmable Gate Array) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
  • The processor reads the computer program stored in the memory and runs it, so as to: obtain the object image collected by the monitoring equipment, analyze the object image, and determine the capture time and capture location of each object captured in the object image; if the distance between the capture locations of at least two objects is within a preset range and the time interval between the capture times of the at least two objects is within the first preset time interval, generate a peer object record between the at least two objects.
  • Through the analysis of the object image, the capture time and capture location of each object captured in the object image can be obtained, and at least two objects whose capture locations are within the preset range of each other and whose capture times are within the first preset time interval are finally determined to be peer objects; a peer object record is generated, and the peer object records between the objects are recorded locally to provide a query basis for query personnel.
  • The locally recorded peer object records of every object can be output, rather than only the peer object records of a specific object, so the obtained peer object records are more comprehensive and increasingly complex object analysis requirements can be met.
  • Moreover, every time at least two objects are identified as peer objects, a peer object record between the at least two objects is generated, so there will be many peer object records for different objects; when peer object records are output, the output is based on the number of generated peer object records, which supports the accuracy of the output peer object records.
  • In addition, the embodiments of the present application provide a non-transitory storage medium that stores a computer program. When the computer program is executed by a processor, any of the peer object recognition methods provided in the embodiments of the present application is implemented.
  • The non-transitory storage medium stores a computer program that, when run, executes the peer object recognition method provided by the embodiments of this application, so it can achieve the following: obtain the object image collected by the monitoring device, analyze the object image, and determine the capture time and capture location of each object captured in the object image; if the distance between the capture locations of at least two objects is within the preset range and the time interval between the capture times of the at least two objects is within the first preset time interval, generate a peer object record between the at least two objects.
  • Through the analysis of the object image, the capture time and capture location of each object captured in the object image can be obtained, and at least two objects whose capture locations are within the preset range of each other and whose capture times are within the first preset time interval are finally determined to be peer objects; a peer object record is generated, and the peer object records between the objects are recorded locally to provide a query basis for query personnel.
  • The locally recorded peer object records of every object can be output, rather than only the peer object records of a specific object, so the obtained peer object records are more comprehensive and increasingly complex object analysis requirements can be met.
  • Moreover, every time at least two objects are identified as peer objects, a peer object record between the at least two objects is generated, so there will be many peer object records for different objects; when peer object records are output, the output is based on the number of generated peer object records, which supports the accuracy of the output peer object records.
  • the embodiment of the present application also provides an application program for executing at runtime: any of the peer object recognition methods provided in the embodiment of the present application.
  • the embodiment of the present application provides a peer object recognition system.
  • the system includes one or more monitoring devices 1510 and a server 1520;
  • the monitoring devices 1510 are used to collect object images;
  • the server 1520 is used to obtain the object image collected by the monitoring equipment; analyze the object image to determine the time and location of each object captured in the object image; if the distance between the capture locations of at least two objects is within a preset range, And the time interval of the capture time of the at least two objects is within the first preset time interval, then a peer object record between the at least two objects is generated once.
  • the system may also include a first database and a second database;
  • the first database is used to store the capture time and capture location of each object obtained by the server after analyzing the object image;
  • the second database is used to store peer object records generated by the server.
  • the system also includes a client;
  • the server is also used to obtain a peer object query request, where the peer object query request includes the object information of the target object to be queried; according to the object information of the target object, find all the peer object records where the target object appears, and count all the target objects The number of occurrences of each object that is the same as the target object in the peer object record;
  • the client terminal is used to display the peer object records of the object whose appearance times are greater than the second preset threshold among all the peer object records of the target object.
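  • To make the division of work in such a system concrete, the sketch below wires the pieces described above (ingesting capture data, applying the peer rule, keeping peer records, and answering a client's query) into one small in-memory server object. The class name, method names and threshold values are illustrative assumptions, and the distance/time test reuses the simplified rule sketched earlier.

```python
# End-to-end sketch of the described system: the server ingests capture data,
# generates peer object records under the preset-range / first-interval rule,
# and answers peer queries with a second-threshold occurrence filter.
# All names and threshold values are illustrative assumptions.
import math
from collections import Counter

class PeerObjectServer:
    def __init__(self, preset_range_m=200.0, first_interval_s=10.0):
        self.preset_range_m = preset_range_m
        self.first_interval_s = first_interval_s
        self.captures = []       # stands in for the first database
        self.peer_records = []   # stands in for the second database

    @staticmethod
    def _distance_m(a, b):
        # Small-area approximation of the distance between two lat/long points.
        dy = (a[0] - b[0]) * 111_320.0
        dx = (a[1] - b[1]) * 111_320.0 * math.cos(math.radians(a[0]))
        return math.hypot(dx, dy)

    def ingest(self, object_id, t, lat, lon):
        """Store a capture and generate peer records against earlier captures."""
        for other_id, other_t, other_lat, other_lon in self.captures:
            if other_id == object_id:
                continue
            if (abs(t - other_t) <= self.first_interval_s and
                    self._distance_m((lat, lon), (other_lat, other_lon)) <= self.preset_range_m):
                self.peer_records.append((object_id, other_id, t))
        self.captures.append((object_id, t, lat, lon))

    def query(self, target_id, second_threshold=1):
        """Return records of companions seen with the target more often than the threshold."""
        mine = [r for r in self.peer_records if target_id in (r[0], r[1])]
        counts = Counter(r[0] if r[1] == target_id else r[1] for r in mine)
        keep = {o for o, n in counts.items() if n > second_threshold}
        return [r for r in mine if (r[0] if r[1] == target_id else r[1]) in keep]

server = PeerObjectServer()
for t in (0.0, 3600.0, 7200.0):               # three co-travel events
    server.ingest("obj-1", t, 30.2500, 120.1500)
    server.ingest("obj-2", t + 2.0, 30.2501, 120.1501)
assert len(server.query("obj-1", second_threshold=1)) == 3
```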
  • With the peer object recognition system provided by the embodiment of the present application, the server obtains the object image collected by the monitoring device, analyzes the object image, and determines the capture time and capture location of each object captured in the object image; if the distance between the capture locations of at least two objects is within a preset range and the time interval between the capture times of the at least two objects is within the first preset time interval, a peer object record between the at least two objects is generated.
  • Through the analysis of the object image, the capture time and capture location of each object captured in the object image can be obtained, and at least two objects whose capture locations are within the preset range of each other and whose capture times are within the first preset time interval are finally determined to be peer objects; a peer object record is generated, and the peer object records between the objects are recorded locally to provide a query basis for query personnel.
  • The locally recorded peer object records of every object can be output, rather than only the peer object records of a specific object, so the obtained peer object records are more comprehensive and increasingly complex object analysis requirements can be met.
  • Moreover, every time at least two objects are identified as peer objects, a peer object record between the at least two objects is generated, so there will be many peer object records for different objects; when peer object records are output, the output is based on the number of generated peer object records, which supports the accuracy of the output peer object records.

Abstract

A peer object recognition method, device, server and system. By analyzing an acquired object image, the capture time and capture location of each object captured in the object image are obtained, and at least two objects whose capture locations are within a preset range of each other and whose capture times are within a first preset time interval are determined to be peer objects; a peer object record is generated, and the peer object records between objects are recorded locally to provide a query basis for query personnel. The locally recorded peer object records of every object can be output, rather than only the peer object records of a specific object, which can meet increasingly complex object analysis requirements.

Description

Peer object recognition method, device, server and system
This application claims priority to the Chinese patent application filed with the Chinese Patent Office on December 10, 2019, with application number 201911259493.X and entitled "一种同行人识别方法、装置、服务器及系统" (a method, device, server and system for identifying co-traveling persons), the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the technical field of security surveillance, and in particular to a peer object recognition method, device, server and system.
Background
Peer object recognition is a key technology in the security field, which identifies whether two objects have a co-traveling relationship. On the one hand, it can clarify the social relationship between objects, so that potential dangers can be discovered and prevented in time; on the other hand, when a danger occurs, the object information and the time and location of the danger can be obtained in time, providing important clues for tracing the incident.
In the prior art, the specific object in each image is first identified, and then it is judged whether other objects appear in the same image as the specific object consecutively. If an object appears in the same images as the specific object consecutively, it is determined that the object and the specific object have a peer object relationship. With the continuous development of big data, the scenarios of peer object analysis are becoming more and more complex; what users need to view is not limited to the peer objects related to a specific object, and since only the peer objects of the specific object are recorded, the increasingly complex object analysis requirements cannot be met.
Summary
The purpose of the embodiments of this application is to provide a peer object recognition method, device, server and system, so as to meet increasingly complex object analysis requirements. The specific technical solutions are as follows.
In a first aspect, an embodiment of this application provides a peer object recognition method, which includes:
acquiring an object image collected by a monitoring device;
analyzing the object image to determine the capture time and capture location of each object captured in the object image;
if the distance between the capture locations of at least two objects is within a preset range and the time interval between the capture times of the at least two objects is within a first preset time interval, generating a peer object record between the at least two objects.
In a second aspect, an embodiment of this application provides a peer object recognition method, which includes:
acquiring a peer object query request, where the peer object query request includes a target object to be queried;
searching a database for all peer object records in which the target object appears, and counting the number of occurrences of each object that is a peer object of the target object in all the peer object records of the target object, where the database stores peer object records between at least two objects whose capture locations, in the object images collected by monitoring devices, are within a preset range of each other and whose capture times are within a first preset time interval;
outputting, among all the peer object records of the target object, the peer object records of the objects whose number of occurrences is greater than a second preset threshold.
In a third aspect, an embodiment of this application provides a peer object recognition device, which includes:
an acquisition module, configured to acquire an object image collected by a monitoring device;
an analysis module, configured to analyze the object image and determine the capture time and capture location of each object captured in the object image;
a recognition and recording module, configured to generate a peer object record between at least two objects if the distance between the capture locations of the at least two objects is within a preset range and the time interval between the capture times of the at least two objects is within a first preset time interval.
In a fourth aspect, an embodiment of this application provides a peer object recognition device, which includes:
an obtaining module, configured to obtain a peer object query request, where the peer object query request includes a target object to be queried;
a search module, configured to search a database for all peer object records in which the target object appears, and count the number of occurrences of each object that is a peer object of the target object in all the peer object records of the target object, where the database stores peer object records between at least two objects whose capture locations, in the object images collected by monitoring devices, are within a preset range of each other and whose capture times are within a first preset time interval;
an output module, configured to output, among all the peer object records of the target object, the peer object records of the objects whose number of occurrences is greater than a second preset threshold.
In a fifth aspect, an embodiment of this application provides a server, including a processor and a memory, where:
the memory is configured to store a computer program;
the processor is configured to implement the method provided in the first aspect or the method provided in the second aspect of the embodiments of this application when executing the computer program stored in the memory.
In a sixth aspect, an embodiment of this application provides a non-transitory storage medium storing a computer program; when the computer program is executed by a processor, the method provided in the first aspect or the method provided in the second aspect of the embodiments of this application is implemented.
In a seventh aspect, an embodiment of this application provides an application program which, when run, executes the method provided in the first aspect or the method provided in the second aspect of the embodiments of this application.
第八方面,本申请实施例提供了一种同行对象识别系统,该系统包括多个监控设备及服务器;
监控设备,用于采集对象图像;
服务器,用于获取监控设备采集的对象图像;对对象图像进行分析,确定对象图像中抓拍到各对象的抓拍时间及抓拍地点;若至少两个对象的抓拍地点的间距在预设范围内、且该至少两个对象的抓拍时间的时间间隔在第一预设时间间隔内,则生成一次该至少两个对象之间的同行对象记录。
本申请实施例提供的一种同行对象识别方法、装置、服务器及系统,获取监控设备采集的对象图像,对对象图像进行分析,确定对象图像中抓拍到各对象的抓拍时间及抓拍地点,若至少两个对象的抓拍地点的间距在预设范围内、且该至少两个对象的抓拍时间的时间间隔在第一预设时间间隔内,则生成一次该至少两个对象之间的同行对象记录。通过对对象图像进行分析,可以获知对象图像中抓拍到各对象的抓拍时间和抓拍地点,最终确定抓拍地点的间距在预设范围内、且抓拍时间的时间间隔在第一预设时间间隔内的至少两个对象为同行对象,生成一次同行对象记录,本地记录下对象之间的同行对象记录,为查询人员提供查询依据,本地记录的各对象的同行对象记录都可以进行输出,而不是只输出特定对象的同行对象记录,从而能够满足日益复杂的对象分析需求。
附图说明
为了更清楚地说明本申请实施例和现有技术的技术方案,下面对实施例和现有技术中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本申请的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
图1为本申请一实施例的同行对象识别方法的流程示意图;
图2为本申请另一实施例的同行对象识别方法的流程示意图;
图3为本申请再一实施例的同行对象识别方法的流程示意图;
图4为本申请再一实施例的同行对象识别方法的流程示意图;
图5为本申请再一实施例的同行对象识别方法的流程示意图;
图6为本申请实施例的单一画面抓拍同行场景示意图;
图7为本申请实施例的一个监控设备采集到多画面抓拍同行场景示意图;
图8为本申请实施例的多个监控设备采集到多画面抓拍同行场景示意图;
图9为本申请实施例的对重复同行进行过滤的抓拍同行场景示意图;
图10为本申请实施例的四种场景下同行对象匹配原理的场景示意图;
图11为本申请再一实施例的同行对象识别方法的流程示意图;
图12为本申请一实施例的同行对象识别装置的结构示意图;
图13为本申请另一实施例的同行对象识别装置的结构示意图;
图14为本申请实施例的服务器的结构示意图;
图15为本申请实施例的同行对象识别系统的结构示意图。
具体实施方式
为使本申请的目的、技术方案、及优点更加清楚明白,以下参照附图并举实施例,对本申请进一步详细说明。显然,所描述的实施例仅仅是本申请一部分实施例,而不是全部的实施例。基于本申请中的实施例,本领域普通技术人员在没有作出创造性劳动前提下所获得的所有其他实施例,都属于本申请保护的范围。
由于在一些场景下,需要对一些特定对象进行行为监控,因此,一般需要获知特定对象的同行对象信息。在相应的同行对象识别方法中,获取到监控设备采集的多张对象图像之后,首先识别出各对象图像中的特定对象,再对其他对象是否与特定对象连续出现在同一张对象图像中进行判断,如果某一对象连续与特定对象出现在同一张对象图像中,则可以确定该对象与特定对象是同行对象关系。这样,在有查询请求时,用户输入特定对象的对象信息,即可获知到与该特定对象有同行对象关系的其他对象,为对特定对象进行行为监控提供分析依据。
然而,随着大数据的不断发展,同行对象分析的场景越来越复杂,用户需要查看的不仅限于与特定对象相关的同行对象,而由于相关的同行对象识别方法中仅记录有特定对象的同行对象,导致无法满足日益复杂的对象分析需求。
为了满足日益复杂的对象分析需求,本申请实施例提供了一种同行对象识别方法、装置、服务器及系统。下面,首先对本申请实施例所提供的同行对象识别方法进行介绍。
本申请实施例所提供的同行对象识别方法的执行主体可以为具有核心处理能力的服务器,实现本申请实施例所提供的同行对象识别方法的方式可以 为设置于执行主体中的软件、硬件电路和逻辑电路中的至少一种。
如图1所示,本申请实施例所提供的一种同行对象识别方法,可以包括如下步骤:
S101,获取监控设备采集的对象图像。
在城市监控、交通监控、楼宇监控等应用场景下,通常布局有一台或者多台监控设备进行实时监控,当有对象进入监控区域时,可以采集到对象图像。其中,对象图像是指包括对象的图像;此处的对象为需要监控的目标,例如可以为车辆、动物或行人等,监控设备可以为摄像机、照相机等抓拍设备。对于监控设备,一旦有对象经过该监控设备,该监控设备就会自动识别对象目标,对对象进行抓拍,并将抓拍到的对象图像上传至用于进行同行对象识别的服务器。监控设备可以架设在对象流动较大的地区,在架设监控设备时,需要保证角度、高度和亮度等条件,以满足对对象进行抓拍的要求。
S102,对对象图像进行分析,确定对象图像中抓拍到各对象的抓拍时间及抓拍地点。
用于进行同行对象识别的服务器提供智能分析服务,在获取到对象图像后,可以对对象图像进行分析,分析的过程就是提取对象图像中各对象的抓拍时间及抓拍地点,各对象之间可以用对象特征作以区分,对象特征是指对象目标的属性特征,例如,面部特征、衣服颜色、身高或体型等特征中的一个或多个。抓拍时间是指抓拍到对象目标的时间,通常可以从图像属性中读取到,抓拍地址是指抓拍到对象目标时对象目标所处位置,通常可以用监控设备的经纬度信息或者架设位置信息来代表抓拍地址。针对一些安装有定位模块,例如GPS模块或北斗模块的监控设备,可以直接通过定位模块采集到的经纬度信息,即监控设备的经纬度信息来代表抓拍地址;针对未安装有定位模块的监控设备,可以通过监控设备架设位置信息来代表抓拍地址。当然抓拍地址也可以为对象目标的实际经纬度,例如,可以获取对象目标在图像中的坐标,然后根据预先建立的图像坐标与世界坐标的对应关系,获取对象目标的在世界坐标系中的坐标,从而得到对象目标的实际经纬度。图像坐标与世界坐标的对应关系的建立方法可以参见相关技术中的相机外参标定方法,此处不再赘述。一个对象的抓拍时间和抓拍地点还可以组成一条抓拍数据,对一张对象图像或者多张对象图像进行分析,可以得到多条抓拍数据。
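To make the notion of a capture record more concrete, the following Python sketch shows one possible way to package the analysis result and to map an object's pixel position to world coordinates through a pre-established homography. The `CaptureRecord` fields, the homography and the function names are illustrative assumptions, not structures defined by this application.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class CaptureRecord:
    object_id: str        # identity label assigned after feature comparison
    capture_time: float   # capture time read from the image attributes (Unix timestamp)
    lat: float            # capture location: latitude
    lon: float            # capture location: longitude

def pixel_to_world(u: float, v: float, homography: np.ndarray) -> tuple:
    """Map an image coordinate to a world coordinate using a pre-established
    3x3 homography (obtained from camera extrinsic calibration)."""
    p = homography @ np.array([u, v, 1.0])
    return float(p[0] / p[2]), float(p[1] / p[2])
```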
S103，若至少两个对象的抓拍地点的间距在预设范围内、且该至少两个对象的抓拍时间的时间间隔在第一预设时间间隔内，则生成一次该至少两个对象之间的同行对象记录。
该至少两个对象之间的同行对象记录表示该至少两个对象为同行对象。该至少两个对象之间的同行对象记录中,可以包括这两个对象的身份标识,例如ID等,还可以包括这两个对象的抓拍图像或图像特征等,其中对象的图像特征可以为深度学习网络提取的对象的特征。
本申请实施例提供了一种同行对象判定规则,即不同的对象先后(数秒间隔,不同画面抓拍)或者同时(同一画面抓拍)经过一定的区域(抓拍地点的间距在预设范围内),则视为同行对象,生成一次对象之间的同行对象记录。也就是说,如果至少两个对象的抓拍地点的间距在预设范围(例如同一座大楼、一个十字路口的四个方向、同一个广场等)内、且该至少两个对象的抓拍时间的时间间隔在第一预设时间间隔内(即间隔数秒或同时),则将该至少两个对象识别为同行对象,生成一次该至少两个对象之间的同行对象记录。
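A minimal sketch of this judgment rule, assuming each capture record carries a latitude/longitude capture location and a Unix-timestamp capture time as in the earlier sketch; the preset range (in metres) and the first preset time interval (in seconds) are illustrative values only.

```python
import math

def distance_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two capture locations, in metres."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def is_peer(rec_a, rec_b, preset_range_m=50.0, first_interval_s=5.0):
    """The two records count as one peer-object occurrence when the capture
    locations are within the preset range and the capture times are within
    the first preset time interval (simultaneously or a few seconds apart)."""
    close_in_space = distance_m(rec_a.lat, rec_a.lon, rec_b.lat, rec_b.lon) <= preset_range_m
    close_in_time = abs(rec_a.capture_time - rec_b.capture_time) <= first_interval_s
    return close_in_space and close_in_time
```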
应用本申请实施例,获取监控设备采集的对象图像,对对象图像进行分析,确定对象图像中抓拍到各对象的抓拍时间及抓拍地点,若至少两个对象的抓拍地点的间距在预设范围内、且该至少两个对象的抓拍时间的时间间隔在第一预设时间间隔内,则生成一次该至少两个对象之间的同行对象记录。通过对对象图像进行分析,可以获知对象图像中抓拍到各对象的抓拍时间和抓拍地点,最终确定抓拍地点的间距在预设范围内、且抓拍时间的时间间隔在第一预设时间间隔内的至少两个对象为同行对象,生成一次同行对象记录,本地记录下对象之间的同行对象记录,为查询人员提供查询依据,本地记录的各对象的同行对象记录都可以进行输出,而不是只输出特定对象的同行对象记录,所获得的同行对象记录更为全面,从而能够满足日益复杂的对象分析需求。并且,每识别到至少两个对象为同行对象,就会生成一次该至少两个对象之间的同行对象记录,针对不同的对象会有很多次同行对象记录的结果,这样,在输出同行对象记录时,依据生成的同行对象记录的次数进行输出,为输出同行对象记录的准确性提供了支撑。
可选的,S103具体可以通过如下步骤实现:
识别抓拍地点的间距在预设范围内、且抓拍时间的时间间隔在第一预设时间间隔内的至少两个对象;统计在第二预设时间间隔内连续生成该至少两个对象之间的同行对象记录的次数;若次数大于或等于第一预设阈值,则保留第二预设时间间隔内生成的一次同行对象记录。
在同行对象识别时,在原有同行对象规则的基础上,增加过滤条件:相同的至少两个对象短时间内被多次识别为同行对象,只保留第二预设时间间隔内一次同行对象记录,删除第二预设时间间隔内该至少两个对象的其他的同行对象记录。即这种情况下,不管识别为几次同行对象,都只记为1次同行对象记录。
基于上述的同行对象规则,对不同的场景下的对象做出同行规则匹配后,可以得到各对象目标的同行对象记录,根据同行对象规则:第二预设时间间隔内两个或以上对象连续被识别为同行对象的次数大于或等于第一预设阈值(例如一天识别到20次同行),则可确定这些对象之间互为同行对象,且仅记录为一次这些对象之间的同行对象记录。在保留同行对象记录时,可以保留第二预设时间间隔内最后一次的同行对象记录,当然,也可以保留其他时刻记录的同行对象记录,这里不做具体限定。需要存储的同行对象记录的数据量得以减少,有效地节省了存储资源,并且,由于判断规则的限制,可能会存在单次误判的情况(例如,仅是因偶然擦肩而过被误判为同行),而经过多次的识别,将多次出现同行关系的对象记为一次同行对象记录,可以减少对同行对象的误判。
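The filtering rule above can be sketched as a simple grouping pass over the generated records. The window handling here is a simplification (a fixed window anchored at the first record of each group), and the record layout `(object_a, object_b, capture_time)` and threshold values are assumptions for illustration.

```python
from collections import defaultdict

def filter_repeated_records(peer_records, second_interval_s=3600.0, first_threshold=2):
    """If the same pair of objects is recognised as peer objects at least
    `first_threshold` times within one second preset time interval, only one
    peer-object record (here, the last one) is kept for that interval."""
    by_pair = defaultdict(list)
    for a, b, t in peer_records:
        by_pair[tuple(sorted((a, b)))].append(t)

    kept = []
    for (a, b), times in by_pair.items():
        times.sort()
        window = [times[0]]
        for t in times[1:] + [float("inf")]:        # the sentinel flushes the last window
            if t - window[0] <= second_interval_s:
                window.append(t)
                continue
            if len(window) >= first_threshold:
                kept.append((a, b, window[-1]))      # keep one record for the whole window
            else:
                kept.extend((a, b, x) for x in window)
            window = [t]
    return kept
```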
本申请实施例的一种实现方式中,获取监控设备采集的对象图像的步骤,具体可以为:获取一个监控设备采集的至少一张对象图像;或者,获取多个监控设备采集的多张对象图像。
相应的,对对象图像进行分析,确定对象图像中抓拍到各对象的抓拍时间及抓拍地点的步骤,具体可以为:对一个监控设备采集的一张对象图像进行分析,确定该张对象图像中抓拍到各对象的抓拍时间及抓拍地点,其中,抓拍时间为该监控设备采集该张对象图像的时间戳,抓拍地点为该监控设备的安装位置;或者,对一个监控设备采集的多张对象图像进行分析,确定各张对象图像中抓拍到各对象的抓拍时间及抓拍地点,其中,抓拍时间为该监控设备采集各张对象图像的时间戳,抓拍地点为该监控设备的安装位置;或者,对多个监控设备采集的多张对象图像进行分析,确定各张对象图像中抓拍到各对象的抓拍时间及抓拍地点,其中,抓拍时间为各监控设备采集各张对象图像的时间戳,抓拍地点为采集各张对象图像的各监控设备的安装位置。
若至少两个对象的抓拍地点的间距在预设范围内、且该至少两个对象的抓拍时间的时间间隔在第一预设时间间隔内,则生成一次该至少两个对象之间的同行对象记录的步骤,具体可以为:针对一个监控设备采集的一张对象 图像中的至少两个对象,生成一次该至少两个对象之间的同行对象记录;或者,若一个监控设备采集的多张对象图像中至少两个对象的抓拍时间的时间间隔在第一预设时间间隔内,则生成一次该至少两个对象之间的同行对象记录;或者,若多个监控设备采集的多张对象图像中至少两个对象的抓拍地点的间距在预设范围内、且该至少两个对象的抓拍时间的时间间隔在第一预设时间间隔内,则生成一次该至少两个对象之间的同行对象记录。
一台监控设备可以对一个指定区域范围进行监控,多台监控设备也可以同时对指定区域范围进行监控,可以利用监控设备的安装位置(例如安装的经纬度)来判断多台监控设备是否监控的是指定区域范围。因为场景需要,可能在同一个点位布置多台监控设备,这些监控设备之间互补抓拍能力,或者,多个监控设备安装位置在地理上经纬度有差异,但协同工作可以弥补监控盲区。
这样,实际存在三种抓拍同行对象场景。第一个抓拍同行对象场景为单一画面抓拍同行对象场景,一台监控设备监控一个监控区域,对该监控设备在某一个时刻采集的对象图像进行分析,将监控设备采集该对象图像的时间戳作为抓拍时间、将该监控设备的安装位置作为抓拍地点,则在该张对象图像中抓拍到至少两个对象时,由于有相同的抓拍时间(即时间戳)及抓拍地点(即安装位置),因此,将该张对象图像中的至少两个对象之间记为一次同行对象记录。
第二个抓拍同行对象场景为同一监控设备多画面抓拍同行对象场景,一台监控设备监控一个监控区域,对该监控设备在某一个时间段内采集的多张对象图像进行分析,将该监控设备采集各对象图像的时间戳作为各对象图像中各对象的抓拍时间、将该监控设备的安装位置作为各对象图像中各对象的抓拍地点,则在各张对象图像中抓拍到至少两个对象时,由于有相同的抓拍地点(即安装位置),仅需要判断至少两个对象的抓拍时间(即时间戳)的时间间隔是否在第一预设时间间隔内,如果是,则生成一次该至少两个对象之间的同行对象记录。
第三个抓拍同行对象场景为不同监控设备多画面抓拍同行对象场景,在实际应用场景下,布置有多台监控设备,如果这些监控设备在某一个时间段内采集了多张对象图像,对这些对象图像进行分析,将各监控设备采集各对象图像的时间戳作为各对象图像中各对象的抓拍时间、将各监控设备的安装位置作为各对象图像中各对象的抓拍地点,则在各张对象图像中抓拍到至少 两个对象时,需要判断至少两个对象的抓拍地点(即安装位置)的间距是否在预设范围内,以及该至少两个对象的抓拍时间(即时间戳)的时间间隔是否在第一预设时间间隔内,如果是,则生成一次该至少两个对象之间的同行对象记录。
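Taken together, the three scenarios reduce to the same rule with progressively fewer checks, as the following sketch illustrates; `distance_m` is the great-circle helper from the earlier sketch, and the `device_id`/`image_id` fields are assumptions used only to distinguish the scenarios.

```python
def peers_in_scenario(rec_a, rec_b, preset_range_m=50.0, first_interval_s=5.0):
    same_device = rec_a.device_id == rec_b.device_id
    same_frame = same_device and rec_a.image_id == rec_b.image_id
    close_in_time = abs(rec_a.capture_time - rec_b.capture_time) <= first_interval_s
    close_in_space = distance_m(rec_a.lat, rec_a.lon, rec_b.lat, rec_b.lon) <= preset_range_m

    if same_frame:       # scenario 1: one device, one frame -> shared timestamp and
        return True      #             install location, so the objects are peers directly
    if same_device:      # scenario 2: one device, several frames -> the install location
        return close_in_time            # is shared, only the capture-time interval is checked
    return close_in_space and close_in_time   # scenario 3: several devices -> both checks
```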
基于图1所示实施例,本申请实施例还提供了一种同行对象识别方法,如图2所示,该方法可以包括如下步骤。
S201,获取监控设备采集的对象图像。
S202,对对象图像进行分析,确定对象图像中抓拍到各对象的抓拍时间及抓拍地点。
图2所示实施例中的S201、S202与图1所示实施例中的S101、S102相同,这里不再赘述。
S203,识别各对象的对象特征,并基于各对象的对象特征,对各对象进行聚类。
用于进行同行对象识别的服务器还提供对象比对服务,用于进行对象特征比对,比对的过程可以是各对象的对象特征之间进行比对,也可以是获取到的对象与已知对象特征模型进行比对,比对的目的是对对象进行聚类,同一类型的对象聚为一类。在进行聚类后,可以通过标注各对象的身份标签,区分出不同类型的对象。
本申请实施例的一种实现方式中,对对象进行聚类的具体方式可以为:对各对象的对象特征进行比对,得到各对象之间的对象特征相似度;对对象特征相似度大于或等于预设相似度阈值的对象标注相同的身份标签;对对象特征相似度小于预设相似度阈值的对象标注不同的身份标签。
本申请实施例中,对对象特征进行比对的方式,可以是各对象的对象特征之间进行比对,得到各对象之间的对象特征相似度,例如,分别比对对象的衣服颜色、头发长短、身高等对象特征,比对出来对象两两之间的对象特征相似度,相似度越高则说明这两个对象为同一个对象的可能性越大,因此,在本申请实施例中,设置有一预设相似度阈值,如果对象特征相似度大于或等于预设相似度阈值,则两个对象基本可以认定为是同一个对象,标注相同的身份标签,如果对象特征相似度小于预设相似度阈值,则两个对象基本可以认定是不同行对象,标注不同的身份标签。
通过身份标签的标注,可以对对象目标进行聚类,通过对对象特征相似度大于或等于预设相似度阈值的对象标注相同的身份标签,将同一类别的对 象划分为同一类。
当然,对象特征的比对过程,还可以是各对象的对象特征分别与已知的对象特征模型进行比对,对象特征模型是基于经验建立的已知对象的模型,通过比对,可以更为明确的知道对象的身份。
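A rough sketch of the similarity-based labelling, assuming the object features are vectors extracted by a deep network and cosine similarity is used as the comparison; the greedy prototype scheme and the threshold value are illustrative simplifications.

```python
import numpy as np

def assign_identity_labels(features, similarity_threshold=0.8):
    """Objects whose feature similarity reaches the preset similarity threshold
    receive the same identity label; otherwise a new identity label is created."""
    labels, prototypes = [], []              # one prototype feature per identity label
    for f in features:
        f = np.asarray(f, dtype=float)
        f = f / (np.linalg.norm(f) + 1e-12)  # L2-normalise so the dot product is cosine similarity
        sims = [float(p @ f) for p in prototypes]
        if sims and max(sims) >= similarity_threshold:
            labels.append(int(np.argmax(sims)))      # same object as an existing cluster
        else:
            prototypes.append(f)
            labels.append(len(prototypes) - 1)       # a new object / new cluster
    return labels
```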
S204,若至少两个对象的抓拍地点的间距在预设范围内、且该至少两个对象的抓拍时间的时间间隔在第一预设时间间隔内,则生成一次该至少两个对象之间的同行对象记录。
根据S203的聚类结果,将属于同一类的对象视为同一对象。若至少两个不同类的对象的抓拍地点的间距在预设范围内、且该至少两个不同类的对象的抓拍时间的时间间隔在第一预设时间间隔内,则生成一次该至少两个不同类的对象之间的同行对象记录。在对各对象进行聚类之后,对对象进行了更为精准的划分,属于同一类的对象不会记为一次同行对象记录,而是将属于不同类的对象之间记为一次同行对象记录。
基于图1所示实施例,本申请实施例还提供了一种同行对象识别方法,如图3所示,该方法可以包括如下步骤。
S301,获取监控设备采集的对象图像。
S302,对对象图像进行分析,确定对象图像中抓拍到各对象的抓拍时间及抓拍地点。
图3所示实施例中的S301、S302与图1所示实施例中的S101、S102相同,这里不再赘述。
S303,将各对象的抓拍时间及抓拍地点存储至第一数据库。
在分析得到各对象的抓拍时间和抓拍地点后,可以将各对象的抓拍时间和抓拍地点存储至第一数据库中,以实现大数据存储的目的,为后续进行同行对象判断提供大数据基础。
S304,从第一数据库中,提取出至少两个对象的抓拍时间及抓拍地点。
S305,若该至少两个对象的抓拍地点的间距在预设范围内、且该至少两个对象的抓拍时间的时间间隔在第一预设时间间隔内,则生成一次该至少两个对象之间的同行对象记录。
在进行同行对象识别时,可以直接从第一数据库中提取出至少两个对象的抓拍时间和抓拍地点,并判断提取出的至少两个对象的抓拍地点的间距是否在预设范围内、且这些对象的抓拍时间的时间间隔是否在第一预设时间间隔内,如果是,则将这些对象之间记为一次同行对象记录。
S306,存储同行对象记录至第二数据库。
利用第二数据库对同行对象记录进行保存，后续如果接收到同行对象查询请求，则根据同行对象查询请求，从第二数据库中快速地查找出包含有需要查询的目标对象的同行对象记录进行反馈，或者直接反馈存在同行对象关系的对象，反馈的信息可以按照同行次数进行排序。第一数据库与第二数据库可以为相同的数据库也可以为不同的数据库，均在本申请的保护范围内。一种实施方式中，第一数据库与第二数据库为不同的数据库，第一数据库用于存储对象的抓拍图像、对象的特征、对象的抓拍时间及抓拍地点；第二数据库用于记录同行对象记录，从而方便数据的分类管理。
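One possible arrangement of the two stores is sketched below with SQLite; the table and column names are assumptions, and in practice the first and second databases could just as well be separate systems (for example a data warehouse and a results database).

```python
import sqlite3

def init_stores(path=":memory:"):
    """First database: raw capture data (capture time and capture location per object).
    Second database: the generated peer-object records."""
    conn = sqlite3.connect(path)
    conn.executescript("""
        CREATE TABLE IF NOT EXISTS captures (        -- first database
            object_id    TEXT,
            capture_time REAL,
            lat          REAL,
            lon          REAL
        );
        CREATE TABLE IF NOT EXISTS peer_records (    -- second database
            object_a     TEXT,
            object_b     TEXT,
            capture_time REAL
        );
    """)
    return conn
```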
基于图1所示实施例,本申请实施例还提供了一种同行对象识别方法,如图4所示,该方法可以包括如下步骤。
S401,获取监控设备采集的对象图像。
S402,对对象图像进行分析,确定对象图像中抓拍到各对象的抓拍时间及抓拍地点。
S403,若至少两个对象的抓拍地点的间距在预设范围内、且该至少两个对象的抓拍时间的时间间隔在第一预设时间间隔内,则生成一次该至少两个对象之间的同行对象记录。
图4所示实施例中的S401至S403与图1所示实施例中的S101至S103相同,这里不再赘述。
S404,获取同行对象查询请求,其中,同行对象查询请求包括待查询的目标对象的对象信息。
在用户有同行对象查询需求时,会通过查询客户端(一般为一台计算机)向服务器发送一个同行对象查询请求,该同行对象查询请求中包括待查询的目标对象的对象信息,例如,目标对象的对象信息可以包括目标对象的姓名、ID、特征等信息中的一种或多种。
S405,根据目标对象的对象信息,查找出现目标对象的所有同行对象记录,并统计目标对象的所有同行对象记录中与目标对象为同行对象的各对象出现次数。
服务器在收到同行对象查询请求后，基于目标对象的对象信息，从记录的同行对象记录中查找出现目标对象的所有同行对象记录，这些同行对象记录包括了与目标对象存在同行对象关系的其他对象的对象信息，对其他各对象出现的次数进行统计，分别得到与目标对象为同行对象的每个对象的出现次数。
S406,输出目标对象的所有同行对象记录中出现次数大于第二预设阈值的对象的同行对象记录。
输出出现次数大于第二预设阈值的同行对象记录,即,在目标对象的所有同行对象记录中,选取并输出出现次数大于第二预设阈值的对象的同行对象记录。对象的出现次数大于第二预设阈值,表示该对象与目标对象的同行次数大于第二预设阈值。由于查询人员一般关注的是经常与目标对象为同行对象关系的对象,这些对象与目标对象之间存在密切的联系,因此,在向查询人员进行反馈时,可以反馈与目标对象的同行次数大于第二预设阈值的对象的同行对象记录,其中,第二预设阈值可以根据经验设置,输出的目标对象的各条同行对象记录中含有与目标对象的同行次数大于第二预设阈值的对象和目标对象。
本申请实施例中,在获取到同行对象查询请求后,针对性地从记录的同行对象记录中查找出目标对象的同行对象记录,并且基于各对象与目标对象同行的次数,输出与目标对象的同行次数大于第二预设阈值的对象的同行对象记录。能够依据查询人员的实时需求,针对性地向查询人员输出同行对象记录的准确查询结果,满足了查询人员的查询需求。
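The query flow of S404 to S406 can be sketched in a few lines over an in-memory list of records; the record layout and the threshold are assumptions consistent with the earlier sketches.

```python
from collections import Counter

def query_peer_records(peer_records, target_id, second_threshold=5):
    """Find every peer-object record in which the target object appears, count how
    many times each other object co-occurs with it, and return the records of the
    objects whose count is greater than the second preset threshold."""
    target_records = [r for r in peer_records if target_id in (r[0], r[1])]
    counts = Counter(b if a == target_id else a for a, b, _t in target_records)
    frequent = {obj for obj, n in counts.items() if n > second_threshold}
    return [r for r in target_records if r[0] in frequent or r[1] in frequent]
```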
为了便于理解,下面结合具体实例,对本申请实施例所提供的同行对象识别方法进行详细介绍。
如图5所示为本申请实施例所提供的同行对象识别方法的流程示意图。
监控设备在采集到对象图像后，将对象图像上报给提供中心服务的服务器，该服务器中的智能分析服务对对象图像进行分析，确定对象图像中抓拍到各对象的抓拍时间及抓拍地点。通过对象应用程序对抓拍数据（抓拍到的各对象的抓拍时间及抓拍地点）进行调用，并调用出对象特征，由比对服务进行对象特征的比对，从而确定出相同的对象，以方便分配身份标识，并将比对结果返回给对象应用程序，将抓拍数据和比对结果发送至数据仓库（第一数据库）进行存储，在进行同行对象识别时，从数据仓库中提取历史抓拍数据，进行大数据计算，具体的计算过程如上述方法实施例所述，这里不再赘述，最终输出计算结果至数据库（第二数据库）中保存。
下面分别从三个抓拍同行原理场景,对同行对象的识别过程进行介绍。
如图6所示,为单一画面抓拍同行对象场景示意图,当对象1和对象2近乎同时经过监控区域1下的监控设备1时,被监控设备1抓拍,画面中有 对象1和对象2,抓拍到的对象图像上传至服务器,由智能分析服务分析对象图像得到对象特征1和对象特征2,将两个对象特征通过比对服务进行比对,确定出相同的对象。为对象特征1表示的对象绑定身份标识1,为对象特征2表示的对象的绑定身份标识2。这样就可以得到对象1(身份标识1绑定的对象)在各图像中的位置及对象2(身份标识2绑定的对象)在各图像中的位置,从而可以得到对象1在t1时间的抓拍记录(包括抓拍时间和抓拍地点)和对象2在t1时间的抓拍记录,将抓拍记录保存至数据仓库中,根据同行对象规则:不同的对象先后(数秒间隔,不同画面抓拍)或者同时(同一画面抓拍)经过某一区域,则视为同行对象,可以确定对象1和对象2为同行对象,生成一次同行对象记录。
如图7所示,为一个监控设备采集到多画面抓拍同行对象场景示意图,当对象1和对象2先后经过监控区域1下的监控设备1时,被监控设备1抓拍,画面中有对象1和对象2,抓拍到的两张对象图像上传至服务器,由智能分析服务分析对象图像得到对象特征1和对象特征2,将两个对象特征通过比对服务进行比对,确定出相同的对象。为对象特征1表示的对象绑定身份标识1,为对象特征2表示的对象的绑定身份标识2。这样就可以得到对象1(身份标识1绑定的对象)在图像中的位置及对象2(身份标识2绑定的对象)在图像中的位置,从而可以得到对象1在t1时间的抓拍记录和对象2在t2时间的抓拍记录,将抓拍记录保存至数据仓库中,根据同行对象规则:不同的对象先后(数秒间隔,不同画面抓拍)或者同时(同一画面抓拍)经过某一区域,则视为同行对象,由于t1与t2的差值小于第一预设时间间隔,可以确定对象1和对象2为同行对象,生成一次同行对象记录。
如图8所示,为多个监控设备采集到多画面抓拍同行对象场景示意图,当对象1和对象2在时间t1和t2分别被同一监控区域1下的监控设备1和监控设备2抓拍到,抓拍到的两张对象图像上传至服务器,由智能分析服务分析对象图像得到对象特征1和对象特征2,将两个对象特征通过比对服务进行比对,确定出相同的对象。为对象特征1表示的对象绑定身份标识1,为对象特征2表示的对象的绑定身份标识2。这样就可以得到对象1(身份标识1绑定的对象)在图像中的位置及对象2(身份标识2绑定的对象)在图像中的位置,从而可以得到对象1在t1时间的抓拍记录和对象2在t2时间的抓拍记录,将抓拍记录保存至数据仓库中,根据同行对象规则:不同的对象先后(数秒间隔,不同画面抓拍)或者同时(同一画面抓拍)经过某一区域,则视为同 行对象,由于t1与t2的差值小于第一预设时间间隔,可以确定对象1和对象2为同行对象,生成一次同行对象记录。
下面以同一相机抓拍为例,对重复同行对象记录过滤,提升有效同行率的实施方式进行详细说明,如图9所示,当对象1和对象2近乎同时经过监控区域1下的监控设备1,被监控设备1抓拍,画面中有对象1和对象2,抓拍到的对象图像上传至服务器,由智能分析服务分析对象图像得到对象特征1和对象特征2,将两个对象特征通过比对服务进行比对,得到对象1在t1时间的抓拍记录和对象2在t1时间的抓拍记录,将抓拍记录保存至数据仓库中。
由于等红绿灯、原地休息等原因,导致对象1和对象2在监控设备前长时间逗留,而在此监控区域下的监控设备1再次抓拍,抓拍到的对象图像上传至服务器,由智能分析服务分析对象图像得到对象特征1和对象特征2,将两个对象特征通过比对服务进行比对,得到对象1在t2时间的抓拍记录和对象2在t3时间的抓拍记录,将抓拍记录保存至数据仓库中。
由于不论是t2和t1的时间间隔还是t3和t1的时间间隔都小于第二预设时间间隔,认为短时间内发生了同一组对象的同行对象识别,则只保留一次,记为一次同行对象记录。
基于上述的对象目标同行规则,对不同的场景下的数据做出同行规则匹配后(例如图10所示的4种场景下的匹配),可以得到各对象的同行对象记录,根据同行对象计算规则(一段时间内(可以是日、周或月)两个或以上对象多次同行可视其互为同行对象)对同行对象记录做过滤,达到规定同行次数阈值(如一天五次记录)的各对象,作为同行对象,向查询人员进行反馈。
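The period-level rule (for example, at least five records in one day) can be sketched as a per-day aggregation over the retained records; the day granularity and the threshold value are examples taken from the text above, while the record layout is an assumption.

```python
from collections import Counter
from datetime import datetime

def confirmed_peer_pairs(peer_records, day_threshold=5):
    """A pair of objects is reported as confirmed peer objects once it accumulates
    at least `day_threshold` peer-object records on the same day."""
    per_day = Counter()
    for a, b, t in peer_records:
        day = datetime.fromtimestamp(t).date()
        per_day[(tuple(sorted((a, b))), day)] += 1
    return sorted({pair for (pair, _day), n in per_day.items() if n >= day_threshold})
```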
在本申请实施例所提供的另一种同行对象识别方法中,执行主体是服务器、数据库组成的系统,如图11所示,该方法可以包括如下步骤。
S1101,获取同行对象查询请求,其中,同行对象查询请求包括待查询的目标对象。
待查询的目标对象是指用户希望查询的对象。例如,身份标识为张三的对象,或身份标识为111111的对象等。同行对象查询请求包括待查询的目标对象,具体可以为目标同行对象查询请求包括待查询的目标对象的身份标识。
S1102，从数据库中查找出现目标对象的所有同行对象记录，并统计目标对象的所有同行对象记录中与目标对象为同行对象的各对象出现次数，其中，数据库存储有将监控设备采集的对象图像中抓拍地点的间距在预设范围内、且抓拍时间的时间间隔在第一预设时间间隔内的至少两个对象之间的同行对象记录。
S1103,输出目标对象的所有同行对象记录中出现次数大于预设第二阈值的对象的同行记录。
本申请实施例中,在获取到同行对象查询请求后,针对性地从数据库中查找出现目标对象的同行对象记录,并且基于与目标对象为同行对象的各对象出现次数,输出目标对象的所有同行对象记录中出现次数大于预设第二阈值的对象的同行记录。由于与目标对象为同行对象的各对象出现次数可以反映各对象与目标对象的同行次数,本申请实施例相当于是基于各对象与目标对象同行的次数,输出与目标对象的同行次数大于第二预设阈值的对象的同行对象记录。数据库提供了大数据存储,在同行对象查询场景下,提供了大数据查询依据,保证了数据的完整性,能够为查询结果的准确性提供保证。
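If the peer-object records live in a relational store such as the `peer_records` table sketched earlier, the counting step can be pushed into the database itself; the SQL below is an illustrative query, not an interface defined by this application.

```python
def query_peer_records_sql(conn, target_id, second_threshold=5):
    """Count, inside the second database, how many peer-object records each object
    shares with the target object, and return those above the second preset threshold."""
    sql = """
        SELECT CASE WHEN object_a = ? THEN object_b ELSE object_a END AS peer,
               COUNT(*) AS times
        FROM peer_records
        WHERE object_a = ? OR object_b = ?
        GROUP BY peer
        ORDER BY times DESC
    """
    rows = conn.execute(sql, (target_id, target_id, target_id)).fetchall()
    return [(peer, times) for peer, times in rows if times > second_threshold]
```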
相应于上述方法实施例,本申请实施例提供了一种同行对象识别装置,如图12所示,该装置可以包括:
获取模块1210,用于获取监控设备采集的对象图像;
分析模块1220,用于对对象图像进行分析,得到对象图像中抓拍到各对象的抓拍时间及抓拍地点;
识别记录模块1230,用于若至少两个对象的抓拍地点的间距在预设范围内、且该至少两个对象的抓拍时间的时间间隔在第一预设时间间隔内,则生成一次该至少两个对象之间的同行对象记录。
可选的,该装置还可以包括:
聚类模块,用于识别各对象的对象特征;基于各对象的对象特征,对各对象进行聚类;
识别记录模块1230,具体可以用于:
针对属于不同类的至少两个对象,若该至少两个对象的抓拍地点的间距在预设范围内、且该至少两个对象的抓拍时间的时间间隔在第一预设时间间隔内,则生成一次该至少两个对象之间的同行对象记录。
可选的,获取模块1210,具体可以用于:
获取一个监控设备采集的至少一张对象图像;或者,获取多个监控设备采集的多张对象图像;
分析模块1220,具体可以用于:
对一个监控设备采集的一张对象图像进行分析,确定该张对象图像中抓拍到各对象的抓拍时间及抓拍地点,其中,抓拍时间为该监控设备采集该张对象图像的时间戳,抓拍地点为该监控设备的安装位置;
或者,
对一个监控设备采集的多张对象图像进行分析,确定各张对象图像中抓拍到各对象的抓拍时间及抓拍地点,其中,抓拍时间为该监控设备采集各张对象图像的时间戳,抓拍地点为该监控设备的安装位置;
或者,
对多个监控设备采集的多张对象图像进行分析,确定各张对象图像中抓拍到各对象的抓拍时间及抓拍地点,其中,抓拍时间为各监控设备采集各张对象图像的时间戳,抓拍地点为采集各张对象图像的各监控设备的安装位置;
识别记录模块1230,具体可以用于:
针对一个监控设备采集的一张对象图像中的至少两个对象,生成一次该至少两个对象之间的同行对象记录;
或者,
若一个监控设备采集的多张对象图像中至少两个对象的抓拍时间的时间间隔在第一预设时间间隔内,则生成一次该至少两个对象之间的同行对象记录;
或者,
若多个监控设备采集的多张对象图像中至少两个对象的抓拍地点的间距在预设范围内、且该至少两个对象的抓拍时间的时间间隔在第一预设时间间隔内,则生成一次该至少两个对象之间的同行对象记录。
可选的,该装置还可以包括:
存储模块,用于将各对象的抓拍时间及抓拍地点存储至第一数据库;
识别记录模块1230,具体可以用于:
从第一数据库中,提取出至少两个对象的抓拍时间及抓拍地点;
若该至少两个对象的抓拍地点的间距在预设范围内、且该至少两个对象的抓拍时间的时间间隔在第一预设时间间隔内,则生成一次该至少两个对象之间的同行对象记录;
存储同行对象记录至第二数据库。
可选的,识别记录模块1230,具体可以用于:
识别抓拍地点的间距在预设范围内、且抓拍时间的时间间隔在第一预设时间间隔内的至少两个对象；
统计在第二预设时间间隔内连续生成该至少两个对象之间的同行对象记录的次数;
若该次数大于或等于第一预设阈值,则保留第二预设时间间隔内生成的一次同行对象记录。
可选的,获取模块1210,还可以用于:
获取同行对象查询请求,其中,同行对象查询请求包括待查询的目标对象的对象信息;
该装置还可以包括:
查找模块,用于根据目标对象的对象信息,查找出现目标对象的所有同行对象记录,并统计目标对象的所有同行对象记录中与目标对象为同行对象的各对象出现次数;
输出模块,用于输出目标对象的所有同行对象记录中出现次数大于第二预设阈值的对象的同行对象记录。
应用本申请实施例,获取监控设备采集的对象图像,对对象图像进行分析,确定对象图像中抓拍到各对象的抓拍时间及抓拍地点,若至少两个对象的抓拍地点的间距在预设范围内、且该至少两个对象的抓拍时间的时间间隔在第一预设时间间隔内,则生成一次该至少两个对象之间的同行对象记录。通过对对象图像进行分析,可以获知对象图像中抓拍到各对象的抓拍时间和抓拍地点,最终确定抓拍地点的间距在预设范围内、且抓拍时间的时间间隔在第一预设时间间隔内的至少两个对象为同行对象,生成一次同行对象记录,本地记录下对象之间的同行对象记录,为查询人员提供查询依据,本地记录的各对象的同行对象记录都可以进行输出,而不是只输出特定对象的同行对象记录,所获得的同行对象记录更为全面,从而能够满足日益复杂的对象分析需求。并且,每识别到至少两个对象为同行对象,就会生成一次该至少两个对象之间的同行对象记录,针对不同的对象会有很多次同行对象记录的结果,这样,在输出同行对象记录时,依据生成的同行对象记录的次数进行输出,为输出同行对象记录的准确性提供了支撑。
本申请实施例还提供了一种同行对象识别装置,如图13所示,该装置可以包括:
获取模块1310,用于获取同行对象查询请求,其中,同行对象查询请求包括待查询的目标对象;
查找模块1320,用于从数据库中查找出现目标对象的所有同行对象记录,并统计目标对象的所有同行对象记录中与目标对象为同行对象的各对象出现次数,其中,数据库存储有将监控设备采集的对象图像中抓拍地点的间距在预设范围内、且抓拍时间的时间间隔在第一预设时间间隔内的至少两个对象之间的同行对象记录;
输出模块1330,用于输出目标对象的所有同行对象记录中出现次数大于第二预设阈值的对象的同行记录。
应用本申请实施例,在获取到同行对象查询请求后,针对性地从数据库中查找出现目标对象的同行对象记录,并且基于各对象与目标对象同行的次数,输出出现次数大于第二预设阈值的对象的同行对象记录。数据库提供了大数据存储,在同行对象查询场景下,提供了大数据查询依据,保证了数据的完整性,能够为查询结果的准确性提供保证。
本申请实施例还提供了一种服务器,如图14所示,包括处理器1401和存储器1402,其中,
存储器1402,用于存放计算机程序;
处理器1401,用于执行存储器1402上所存放的计算机程序时,实现本申请实施例所提供的任一同行对象识别方法。
上述存储器可以包括RAM(Random Access Memory,随机存取存储器),也可以包括NVM(Non-Volatile Memory,非易失性存储器),例如至少一个磁盘存储器。可选的,存储器还可以是至少一个位于远离前述处理器的存储装置。
上述处理器可以是通用处理器,包括CPU(Central Processing Unit,中央处理器)、NP(Network Processor,网络处理器)等;还可以是DSP(Digital Signal Processing,数字信号处理器)、ASIC(Application Specific Integrated Circuit,专用集成电路)、FPGA(Field-Programmable Gate Array,现场可编程门阵列)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件。
本实施例中,处理器通过读取存储器中存储的计算机程序,并通过运行该计算机程序,能够实现:获取监控设备采集的对象图像,对对象图像进行分析,确定对象图像中抓拍到各对象的抓拍时间及抓拍地点,若至少两个对象的抓拍地点的间距在预设范围内、且该至少两个对象的抓拍时间的时间间隔在第一预设时间间隔内,则生成一次该至少两个对象之间的同行对象记录。 通过对对象图像进行分析,可以获知对象图像中抓拍到各对象的抓拍时间和抓拍地点,最终确定抓拍地点的间距在预设范围内、且抓拍时间的时间间隔在第一预设时间间隔内的至少两个对象为同行对象,生成一次同行对象记录,本地记录下对象之间的同行对象记录,为查询人员提供查询依据,本地记录的各对象的同行对象记录都可以进行输出,而不是只输出特定对象的同行对象记录,所获得的同行对象记录更为全面,从而能够满足日益复杂的对象分析需求。并且,每识别到至少两个对象为同行对象,就会生成一次该至少两个对象之间的同行对象记录,针对不同的对象会有很多次同行对象记录的结果,这样,在输出同行对象记录时,依据生成的同行对象记录的次数进行输出,为输出同行对象记录的准确性提供了支撑。
另外,本申请实施例提供了一种非临时性存储介质,非临时性存储介质内存储有计算机程序,计算机程序被处理器执行时,实现本申请实施例所提供的任一同行对象识别方法。
本实施例中,非临时性存储介质存储有在运行时执行本申请实施例所提供的同行对象识别方法的计算机程序,因此能够实现:获取监控设备采集的对象图像,对对象图像进行分析,确定对象图像中抓拍到各对象的抓拍时间及抓拍地点,若至少两个对象的抓拍地点的间距在预设范围内、且该至少两个对象的抓拍时间的时间间隔在第一预设时间间隔内,则生成一次该至少两个对象之间的同行对象记录。通过对对象图像进行分析,可以获知对象图像中抓拍到各对象的抓拍时间和抓拍地点,最终确定抓拍地点的间距在预设范围内、且抓拍时间的时间间隔在第一预设时间间隔内的至少两个对象为同行对象,生成一次同行对象记录,本地记录下对象之间的同行对象记录,为查询人员提供查询依据,本地记录的各对象的同行对象记录都可以进行输出,而不是只输出特定对象的同行对象记录,所获得的同行对象记录更为全面,从而能够满足日益复杂的对象分析需求。并且,每识别到至少两个对象为同行对象,就会生成一次该至少两个对象之间的同行对象记录,针对不同的对象会有很多次同行对象记录的结果,这样,在输出同行对象记录时,依据生成的同行对象记录的次数进行输出,为输出同行对象记录的准确性提供了支撑。
本申请实施例还提供了一种应用程序,用于在运行时执行:本申请实施例所提供的任一同行对象识别方法。
相应于上述实施例,本申请实施例提供了一种同行对象识别系统,如图15所示,该系统包括一个或多个监控设备1510及服务器1520;
监控设备1510,用于采集对象图像;
服务器1520,用于获取监控设备采集的对象图像;对对象图像进行分析,确定对象图像中抓拍到各对象的抓拍时间及抓拍地点;若至少两个对象的抓拍地点的间距在预设范围内、且该至少两个对象的抓拍时间的时间间隔在第一预设时间间隔内,则生成一次该至少两个对象之间的同行对象记录。
可选的,该系统还可以包括第一数据库及第二数据库;
第一数据库,用于存储服务器对对象图像进行分析得到的抓拍到各对象的抓拍时间及抓拍地点;
第二数据库,用于存储服务器生成的同行对象记录。
可选的,该系统还包括客户端;
服务器,还用于获取同行对象查询请求,其中,同行对象查询请求包括待查询的目标对象的对象信息;根据目标对象的对象信息,查找出现目标对象的所有同行对象记录,并统计目标对象的所有同行对象记录中与目标对象为同行对象的各对象出现次数;
客户端,用于显示目标对象的所有同行对象记录中出现次数大于第二预设阈值的对象的同行对象记录。
应用本申请实施例,服务器获取监控设备采集的对象图像,对对象图像进行分析,确定对象图像中抓拍到各对象的抓拍时间及抓拍地点,若至少两个对象的抓拍地点的间距在预设范围内、且该至少两个对象的抓拍时间的时间间隔在第一预设时间间隔内,则生成一次该至少两个对象之间的同行对象记录。通过对对象图像进行分析,可以获知对象图像中抓拍到各对象的抓拍时间和抓拍地点,最终确定抓拍地点的间距在预设范围内、且抓拍时间的时间间隔在第一预设时间间隔内的至少两个对象为同行对象,生成一次同行对象记录,本地记录下对象之间的同行对象记录,为查询人员提供查询依据,本地记录的各对象的同行对象记录都可以进行输出,而不是只输出特定对象的同行对象记录,所获得的同行对象记录更为全面,从而能够满足日益复杂的对象分析需求。并且,每识别到至少两个对象为同行对象,就会生成一次该至少两个对象之间的同行对象记录,针对不同的对象会有很多次同行对象记录的结果,这样,在输出同行对象记录时,依据生成的同行对象记录的次数进行输出,为输出同行对象记录的准确性提供了支撑。
对于服务器、非临时性存储介质、应用程序及系统实施例而言,由于其所涉及的方法内容基本相似于前述的方法实施例,所以描述的比较简单,相关之处参见方法实施例的部分说明即可。
需要说明的是,在本文中,诸如第一和第二等之类的关系术语仅仅用来将一个实体或者操作与另一个实体或操作区分开来,而不一定要求或者暗示这些实体或操作之间存在任何这种实际的关系或者顺序。而且,术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、物品或者设备不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法、物品或者设备所固有的要素。在没有更多限制的情况下,由语句“包括一个……”限定的要素,并不排除在包括所述要素的过程、方法、物品或者设备中还存在另外的相同要素。
本说明书中的各个实施例均采用相关的方式描述,各个实施例之间相同相似的部分互相参见即可,每个实施例重点说明的都是与其他实施例的不同之处。尤其,对于装置、电子设备、机器可读存储介质、应用程序及系统实施例而言,由于其基本相似于方法实施例,所以描述的比较简单,相关之处参见方法实施例的部分说明即可。
以上所述仅为本申请的较佳实施例,并不用以限制本申请,凡在本申请的精神和原则之内,所做的任何修改、等同替换、改进等,均应包含在本申请保护的范围之内。

Claims (20)

  1. 一种同行对象识别方法,其特征在于,所述方法包括:
    获取监控设备采集的对象图像;
    对所述对象图像进行分析,确定所述对象图像中抓拍到各对象的抓拍时间及抓拍地点;
    若至少两个对象的抓拍地点的间距在预设范围内、且所述至少两个对象的抓拍时间的时间间隔在第一预设时间间隔内,则生成一次所述至少两个对象之间的同行对象记录。
  2. 根据权利要求1所述的方法,其特征在于,在所述若至少两个对象的抓拍地点的间距在预设范围内、且所述至少两个对象的抓拍时间的时间间隔在第一预设时间间隔内,则生成一次所述至少两个对象之间的同行对象记录之前,所述方法还包括:
    识别所述各对象的对象特征;
    基于所述各对象的对象特征,对所述各对象进行聚类;
    所述若至少两个对象的抓拍地点的间距在预设范围内、且所述至少两个对象的抓拍时间的时间间隔在第一预设时间间隔内,则生成一次所述至少两个对象之间的同行对象记录,包括:
    针对属于不同类的至少两个对象,若所述至少两个对象的抓拍地点的间距在预设范围内、且所述至少两个对象的抓拍时间的时间间隔在第一预设时间间隔内,则生成一次所述至少两个对象之间的同行对象记录。
  3. 根据权利要求1所述的方法,其特征在于,所述获取监控设备采集的对象图像,包括:
    获取一个监控设备采集的至少一张对象图像;或者,获取多个监控设备采集的多张对象图像;
    所述对所述对象图像进行分析,确定所述对象图像中抓拍到各对象的抓拍时间及抓拍地点,包括:
    对一个监控设备采集的一张对象图像进行分析,确定该张对象图像中抓拍到各对象的抓拍时间及抓拍地点,其中,所述抓拍时间为该监控设备采集该张对象图像的时间戳,所述抓拍地点为该监控设备的安装位置;
    或者,
    对一个监控设备采集的多张对象图像进行分析，确定各张对象图像中抓拍到各对象的抓拍时间及抓拍地点，其中，所述抓拍时间为该监控设备采集各张对象图像的时间戳，所述抓拍地点为该监控设备的安装位置；
    或者,
    对多个监控设备采集的多张对象图像进行分析,确定各张对象图像中抓拍到各对象的抓拍时间及抓拍地点,其中,所述抓拍时间为各监控设备采集各张对象图像的时间戳,所述抓拍地点为采集各张对象图像的各监控设备的安装位置;
    所述若至少两个对象的抓拍地点的间距在预设范围内、且所述至少两个对象的抓拍时间的时间间隔在第一预设时间间隔内,则生成一次所述至少两个对象之间的同行对象记录,包括:
    针对一个监控设备采集的一张对象图像中的至少两个对象,生成一次所述至少两个对象之间的同行对象记录;
    或者,
    若一个监控设备采集的多张对象图像中至少两个对象的抓拍时间的时间间隔在第一预设时间间隔内,则生成一次所述至少两个对象之间的同行对象记录;
    或者,
    若多个监控设备采集的多张对象图像中至少两个对象的抓拍地点的间距在预设范围内、且所述至少两个对象的抓拍时间的时间间隔在第一预设时间间隔内,则生成一次所述至少两个对象之间的同行对象记录。
  4. 根据权利要求1所述的方法,其特征在于,在所述对所述对象图像进行分析,确定所述对象图像中抓拍到各对象的抓拍时间及抓拍地点之后,所述方法还包括:
    将所述各对象的抓拍时间及抓拍地点存储至第一数据库;
    所述若至少两个对象的抓拍地点的间距在预设范围内、且所述至少两个对象的抓拍时间的时间间隔在第一预设时间间隔内,则生成一次所述至少两个对象之间的同行对象记录,包括:
    从所述第一数据库中,提取出至少两个对象的抓拍时间及抓拍地点;
    若所述至少两个对象的抓拍地点的间距在预设范围内、且所述至少两个对象的抓拍时间的时间间隔在第一预设时间间隔内,则生成一次所述至少两个对象之间的同行对象记录;
    存储所述同行对象记录至第二数据库。
  5. 根据权利要求1所述的方法，其特征在于，所述若至少两个对象的抓拍地点的间距在预设范围内、且所述至少两个对象的抓拍时间的时间间隔在第一预设时间间隔内，则生成一次所述至少两个对象之间的同行对象记录，包括：
    识别抓拍地点的间距在预设范围内、且抓拍时间的时间间隔在第一预设时间间隔内的至少两个对象;
    统计在第二预设时间间隔内连续生成所述至少两个对象之间的同行对象记录的次数;
    若所述次数大于或等于第一预设阈值,则保留所述第二预设时间间隔内生成的一次同行对象记录。
  6. 根据权利要求1-5任一项所述的方法,其特征在于,在所述若至少两个对象的抓拍地点的间距在预设范围内、且所述至少两个对象的抓拍时间的时间间隔在第一预设时间间隔内,则生成一次所述至少两个对象之间的同行对象记录之后,所述方法还包括:
    获取同行对象查询请求,所述同行对象查询请求包括待查询的目标对象的对象信息;
    根据所述目标对象的对象信息,查找出现所述目标对象的所有同行对象记录,并统计目标对象的所有同行对象记录中与所述目标对象为同行对象的各对象出现次数;
    输出所述目标对象的所有同行对象记录中出现次数大于第二预设阈值的对象的同行对象记录。
  7. 一种同行对象识别方法,其特征在于,所述方法包括:
    获取同行对象查询请求,所述同行对象查询请求包括待查询的目标对象;
    从数据库中查找出现所述目标对象的所有同行对象记录，并统计目标对象的所有同行对象记录中与所述目标对象为同行对象的各对象出现次数，所述数据库存储有将监控设备采集的对象图像中抓拍地点的间距在预设范围内、且抓拍时间的时间间隔在第一预设时间间隔内的至少两个对象之间的同行对象记录；
    输出所述目标对象的所有同行对象记录中出现次数大于第二预设阈值的对象的同行记录。
  8. 一种同行对象识别装置,其特征在于,所述装置包括:
    获取模块,用于获取监控设备采集的对象图像;
    分析模块，用于对所述对象图像进行分析，确定所述对象图像中抓拍到各对象的抓拍时间及抓拍地点；
    识别记录模块,用于若至少两个对象的抓拍地点的间距在预设范围内、且所述至少两个对象的抓拍时间的时间间隔在第一预设时间间隔内,则生成一次所述至少两个对象之间的同行对象记录。
  9. 根据权利要求8所述的装置,其特征在于,所述装置还包括:
    聚类模块,用于识别所述各对象的对象特征;基于所述各对象的对象特征,对所述各对象进行聚类;
    所述识别记录模块,具体用于:
    针对属于不同类的至少两个对象,若所述至少两个对象的抓拍地点的间距在预设范围内、且所述至少两个对象的抓拍时间的时间间隔在第一预设时间间隔内,则生成一次所述至少两个对象之间的同行对象记录。
  10. 根据权利要求8所述的装置,其特征在于,所述获取模块,具体用于:
    获取一个监控设备采集的至少一张对象图像;或者,获取多个监控设备采集的多张对象图像;
    所述分析模块,具体用于:
    对一个监控设备采集的一张对象图像进行分析,确定该张对象图像中抓拍到各对象的抓拍时间及抓拍地点,其中,所述抓拍时间为该监控设备采集该张对象图像的时间戳,所述抓拍地点为该监控设备的安装位置;
    或者,
    对一个监控设备采集的多张对象图像进行分析,确定各张对象图像中抓拍到各对象的抓拍时间及抓拍地点,其中,所述抓拍时间为该监控设备采集各张对象图像的时间戳,所述抓拍地点为该监控设备的安装位置;
    或者,
    对多个监控设备采集的多张对象图像进行分析,确定各张对象图像中抓拍到各对象的抓拍时间及抓拍地点,其中,所述抓拍时间为各监控设备采集各张对象图像的时间戳,所述抓拍地点为采集各张对象图像的各监控设备的安装位置;
    所述识别记录模块,具体用于:
    针对一个监控设备采集的一张对象图像中的至少两个对象,生成一次所述至少两个对象之间的同行对象记录;
    或者,
    若一个监控设备采集的多张对象图像中至少两个对象的抓拍时间的时间间隔在第一预设时间间隔内,则生成一次所述至少两个对象之间的同行对象记录;
    或者,
    若多个监控设备采集的多张对象图像中至少两个对象的抓拍地点的间距在预设范围内、且所述至少两个对象的抓拍时间的时间间隔在第一预设时间间隔内,则生成一次所述至少两个对象之间的同行对象记录。
  11. 根据权利要求8所述的装置,其特征在于,所述装置还包括:
    存储模块,用于将所述各对象的抓拍时间及抓拍地点存储至第一数据库;
    所述识别记录模块,具体用于:
    从所述第一数据库中,提取出至少两个对象的抓拍时间及抓拍地点;
    若所述至少两个对象的抓拍地点的间距在预设范围内、且所述至少两个对象的抓拍时间的时间间隔在第一预设时间间隔内,则生成一次所述至少两个对象之间的同行对象记录;
    存储所述同行对象记录至第二数据库。
  12. 根据权利要求8所述的装置,其特征在于,所述识别记录模块,具体用于:
    识别抓拍地点的间距在预设范围内、且抓拍时间的时间间隔在第一预设时间间隔内的至少两个对象;
    统计在第二预设时间间隔内连续生成所述至少两个对象之间的同行对象记录的次数;
    若所述次数大于或等于第一预设阈值,则保留所述第二预设时间间隔内生成的一次同行对象记录。
  13. 根据权利要求8-12任一项所述的装置,其特征在于,所述获取模块,还用于:
    获取同行对象查询请求,所述同行对象查询请求包括待查询的目标对象的对象信息;
    所述装置还包括:
    查找模块,用于根据所述目标对象的对象信息,查找出现所述目标对象的所有同行对象记录,并统计目标对象的所有同行对象记录中与所述目标对象为同行对象的各对象出现次数;
    输出模块，用于输出所述目标对象的所有同行对象记录中出现次数大于第二预设阈值的对象的同行对象记录。
  14. 一种同行对象识别装置,其特征在于,所述装置包括:
    获取模块,用于获取同行对象查询请求,所述同行对象查询请求包括待查询的目标对象;
    查找模块,用于从数据库中查找出现所述目标对象的所有同行对象记录,并统计目标对象的所有同行对象记录中与所述目标对象为同行对象的各对象出现次数,所述数据库存储有将监控设备采集的对象图像中抓拍地点的间距在预设范围内、且抓拍时间的时间间隔在第一预设时间间隔内的至少两个对象之间的同行对象记录;
    输出模块,用于输出所述目标对象的所有同行对象记录中出现次数大于第二预设阈值的对象的同行记录。
  15. 一种服务器,其特征在于,包括处理器和存储器,其中,
    所述存储器,用于存放计算机程序;
    所述处理器,用于执行所述存储器上所存放的计算机程序时,实现权利要求1-6或7任一项所述的方法。
  16. 一种非临时性存储介质,其特征在于,所述非临时性存储介质内存储有计算机程序,所述计算机程序被处理器执行时,实现权利要求1-6或7任一项所述的方法。
  17. 一种应用程序,其特征在于,用于在运行时执行:权利要求1-6或7任一项所述的方法。
  18. 一种同行对象识别系统,其特征在于,所述系统包括一个或多个监控设备及服务器;
    所述监控设备,用于采集对象图像;
    所述服务器,用于获取所述监控设备采集的所述对象图像;对所述对象图像进行分析,确定所述对象图像中抓拍到各对象的抓拍时间及抓拍地点;若至少两个对象的抓拍地点的间距在预设范围内、且所述至少两个对象的抓拍时间的时间间隔在第一预设时间间隔内,则生成一次所述至少两个对象之间的同行对象记录。
  19. 根据权利要求18所述的系统,其特征在于,所述系统还包括第一数据库及第二数据库;
    所述第一数据库,用于存储所述服务器对所述对象图像进行分析得到的抓拍到各对象的抓拍时间及抓拍地点;
    所述第二数据库,用于存储所述服务器生成的所述同行对象记录。
  20. 根据权利要求18所述的系统,其特征在于,所述系统还包括客户端;
    所述服务器,还用于获取同行对象查询请求,所述同行对象查询请求包括待查询的目标对象的对象信息;根据所述目标对象的对象信息,查找出现所述目标对象的所有同行对象记录,并统计目标对象的所有同行对象记录中与所述目标对象为同行对象的各对象出现次数;
    所述客户端,用于显示所述目标对象的所有同行对象记录中出现次数大于第二预设阈值的对象的同行对象记录。
PCT/CN2020/127500 2019-12-10 2020-11-09 一种同行对象识别方法、装置、服务器及系统 WO2021114985A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911259493.X 2019-12-10
CN201911259493.XA CN111435435A (zh) 2019-12-10 2019-12-10 一种同行人识别方法、装置、服务器及系统

Publications (1)

Publication Number Publication Date
WO2021114985A1 true WO2021114985A1 (zh) 2021-06-17

Family

ID=71580953

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/127500 WO2021114985A1 (zh) 2019-12-10 2020-11-09 一种同行对象识别方法、装置、服务器及系统

Country Status (2)

Country Link
CN (1) CN111435435A (zh)
WO (1) WO2021114985A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114449144A (zh) * 2022-01-04 2022-05-06 航天科工智慧产业发展有限公司 多路相机的抓拍联动装置及方法

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111435435A (zh) * 2019-12-10 2020-07-21 杭州海康威视数字技术股份有限公司 一种同行人识别方法、装置、服务器及系统
CN112651335A (zh) * 2020-12-25 2021-04-13 深圳集智数字科技有限公司 一种同行人识别方法、系统、设备及存储介质

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108229335A (zh) * 2017-12-12 2018-06-29 深圳市商汤科技有限公司 关联人脸识别方法和装置、电子设备、存储介质、程序
CN108733819A (zh) * 2018-05-22 2018-11-02 深圳云天励飞技术有限公司 一种人员档案建立方法和装置
CN110334231A (zh) * 2019-06-28 2019-10-15 深圳市商汤科技有限公司 一种信息处理方法及装置、存储介质
CN110348347A (zh) * 2019-06-28 2019-10-18 深圳市商汤科技有限公司 一种信息处理方法及装置、存储介质
CN111435435A (zh) * 2019-12-10 2020-07-21 杭州海康威视数字技术股份有限公司 一种同行人识别方法、装置、服务器及系统

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108959667A (zh) * 2017-05-18 2018-12-07 株式会社日立制作所 行人跟随行为仿真方法及行人跟随行为仿真装置
CN109889773A (zh) * 2017-12-06 2019-06-14 中国移动通信集团四川有限公司 评标室人员的监控的方法、装置、设备和介质
CN108288025A (zh) * 2017-12-22 2018-07-17 深圳云天励飞技术有限公司 一种车载视频监控方法、装置及设备
CN109117714B (zh) * 2018-06-27 2021-02-26 北京旷视科技有限公司 一种同行人员识别方法、装置、系统及计算机存储介质
CN109299683B (zh) * 2018-09-13 2019-12-10 嘉应学院 一种基于人脸识别和行为大数据的安防评估系统
CN109934176B (zh) * 2019-03-15 2021-09-10 艾特城信息科技有限公司 行人识别系统、识别方法及计算机可读存储介质
CN110084103A (zh) * 2019-03-15 2019-08-02 深圳英飞拓科技股份有限公司 一种基于人脸识别技术的同行人分析方法及系统
CN110276272A (zh) * 2019-05-30 2019-09-24 罗普特科技集团股份有限公司 确认标签人员的同行人员关系的方法、装置、存储介质
CN110532929A (zh) * 2019-08-23 2019-12-03 深圳市驱动新媒体有限公司 一种同行人分析方法和装置以及设备

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108229335A (zh) * 2017-12-12 2018-06-29 深圳市商汤科技有限公司 关联人脸识别方法和装置、电子设备、存储介质、程序
CN108733819A (zh) * 2018-05-22 2018-11-02 深圳云天励飞技术有限公司 一种人员档案建立方法和装置
CN110334231A (zh) * 2019-06-28 2019-10-15 深圳市商汤科技有限公司 一种信息处理方法及装置、存储介质
CN110348347A (zh) * 2019-06-28 2019-10-18 深圳市商汤科技有限公司 一种信息处理方法及装置、存储介质
CN111435435A (zh) * 2019-12-10 2020-07-21 杭州海康威视数字技术股份有限公司 一种同行人识别方法、装置、服务器及系统

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114449144A (zh) * 2022-01-04 2022-05-06 航天科工智慧产业发展有限公司 多路相机的抓拍联动装置及方法
CN114449144B (zh) * 2022-01-04 2024-03-05 航天科工智慧产业发展有限公司 多路相机的抓拍联动装置及方法

Also Published As

Publication number Publication date
CN111435435A (zh) 2020-07-21

Similar Documents

Publication Publication Date Title
WO2021114985A1 (zh) 一种同行对象识别方法、装置、服务器及系统
US10970333B2 (en) Distributed video storage and search with edge computing
EP3253042B1 (en) Intelligent processing method and system for video data
Gauen et al. Comparison of visual datasets for machine learning
CN105336077B (zh) 数据处理设备和操作其的方法
RU2632473C1 (ru) Способ обмена данными между ip видеокамерой и сервером (варианты)
US20210357624A1 (en) Information processing method and device, and storage medium
CN103942811A (zh) 分布式并行确定特征目标运动轨迹的方法与系统
CN102422286A (zh) 利用图像获取参数和元数据自动和半自动的图像分类、注释和标签
CN111222373B (zh) 一种人员行为分析方法、装置和电子设备
JP2022518459A (ja) 情報処理方法および装置、記憶媒体
CN109522421B (zh) 一种网络设备的产品属性识别方法
CN111209776A (zh) 同行人识别方法、装置、处理服务器、存储介质及系统
Xu et al. Video analytics with zero-streaming cameras
CN106534784A (zh) 一种用于视频分析数据结果集的采集分析存储统计系统
CN112770265B (zh) 一种行人身份信息获取方法、系统、服务器和存储介质
CN109299307B (zh) 一种基于结构分析的商标检索预警方法及装置
CN112256682B (zh) 一种多维异构数据的数据质量检测方法及装置
CN110717358B (zh) 访客人数统计方法、装置、电子设备及存储介质
WO2023236514A1 (zh) 一种跨摄像头的多目标追踪方法、装置、设备及介质
WO2021184628A1 (zh) 一种图像处理方法及装置
CN111767432A (zh) 共现对象的查找方法和装置
CN114003672B (zh) 一种道路动态事件的处理方法、装置、设备和介质
WO2021104513A1 (zh) 对象展示方法、装置、电子设备及存储介质
CN111382281B (zh) 基于媒体对象的内容的推荐方法、装置、设备和存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20897883

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20897883

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 23.05.2023)
