CN111767432A - Method and device for searching co-occurrence object

Info

Publication number
CN111767432A
Authority
CN
China
Prior art keywords
space
time
target
objects
occurrence
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010616296.5A
Other languages
Chinese (zh)
Other versions
CN111767432B (en)
Inventor
谢奕
张阳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202010616296.5A priority Critical patent/CN111767432B/en
Publication of CN111767432A publication Critical patent/CN111767432A/en
Application granted granted Critical
Publication of CN111767432B publication Critical patent/CN111767432B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/78Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/783Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/75Clustering; Classification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/78Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/783Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • G06F16/7837Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using objects detected or recognised in the video content
    • G06F16/784Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using objects detected or recognised in the video content the detected or recognised objects being people
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/78Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/787Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using geographical or spatial information, e.g. location

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Library & Information Science (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Alarm Systems (AREA)

Abstract

The disclosure provides a method for searching for a co-occurrence object, and relates to the field of space-time big data in artificial intelligence and big data technology. The method comprises the following steps: determining a target object among a plurality of objects, wherein the target object has target feature information; determining, based on the target feature information, at least one space-time point associated with the target object among a plurality of space-time points, wherein at least some of the plurality of space-time points are associated with different ones of the plurality of objects based on different feature information; and finding, based on the objects associated with the at least one space-time point, a first co-occurrence object among the plurality of objects that occurs in the same space-time as the target object. The disclosure also discloses an apparatus for searching for a co-occurrence object, an electronic device, and a computer-readable storage medium.

Description

Method and device for searching co-occurrence object
Technical Field
The present disclosure relates to the field of artificial intelligence, and in particular to the field of spatiotemporal big data in big data technology. Specifically, the present disclosure provides a method and an apparatus for finding a co-occurrence object.
Background
In many everyday scenarios, co-occurring people, mobile phones, vehicles, and the like need to be found from monitoring data, so as to enrich the clues surrounding a related event.
At present, existing co-occurrence search schemes either rely on a single kind of data and can therefore discover co-occurrence relationships only within same-source data (for example, finding people who have a co-occurrence relationship within mobile-phone monitoring data, based solely on users' mobile-phone information), or they must rely on multiple kinds of data to discover co-occurrence relationships across multi-source data (for example, finding people with a co-occurrence relationship based on users' mobile-phone information while also finding vehicles with a co-occurrence relationship based on other personal information of the users).
However, the inventors found that existing co-occurrence search schemes cover few scenarios and place high demands on the data collection rate of the monitoring devices, or cannot directly search for co-occurrence relationships across multi-source data based on a single kind of data, and their identification accuracy is low.
Disclosure of Invention
In view of the above, the present disclosure provides a method and an apparatus for searching for a co-occurrence object, which improve search accuracy by using space-time big data technology.
One aspect of the present disclosure provides a method for searching for a co-occurrence object, comprising: determining a target object among a plurality of objects, wherein the target object has target feature information; determining, based on the target feature information, at least one space-time point associated with the target object among a plurality of space-time points, wherein at least some of the plurality of space-time points are associated with different ones of the plurality of objects based on different feature information; and searching, based on the objects associated with the at least one space-time point, for a first co-occurrence object among the plurality of objects that occurs in the same space-time as the target object.
According to an embodiment of the present disclosure, the determining, based on the target feature information, at least one space-time point associated with the target object among a plurality of space-time points comprises: determining a target time range; and taking, as the at least one space-time point, all the space-time points at which the target feature information was collected within the target time range.
According to an embodiment of the present disclosure, the taking, as the at least one space-time point, all the space-time points at which the target feature information was collected within the target time range comprises: acquiring a data map, wherein the data map comprises a set of collector nodes, a set of collected nodes, and edges connecting collector nodes to corresponding collected nodes; each collector node represents a space-time point, each collected node represents an object, and each edge describes the association between the collector node and the collected node it connects through corresponding edge attribute information, the edge attribute information comprising the collected data and the collection time; searching, by using the data map, for target edges whose edge attribute information satisfies preset conditions, the preset conditions comprising that the collection time falls within the target time range and that the collected data is the target feature information; and taking the space-time points represented by the collector nodes connected to the target edges as the at least one space-time point.
According to an embodiment of the present disclosure, each of the plurality of space-time points corresponds to a plurality of collection devices, which are respectively used to collect data for specific types of feature information.
According to an embodiment of the present disclosure, the method further comprises: for a collector node connected to M edges in the data map, determining the space-time point represented by the collector node, wherein M ≥ M0, M0 denotes a preset value, and M and M0 are both integers; dividing the space-time point represented by the collector node into a plurality of sub-space-time points in the time dimension and/or the space dimension; and modifying the data map based on the plurality of sub-space-time points.
According to an embodiment of the present disclosure, the method further comprises: for a collector node connected to M edges in the data map, determining the space-time point represented by the collector node, wherein M ≥ M0, M0 denotes a preset value, and M and M0 are both integers; classifying the M edges based on their edge attribute information; dividing the space-time point represented by the collector node into a plurality of sub-space-time points based on the edge classification result; and modifying the data map based on the plurality of sub-space-time points.
According to an embodiment of the present disclosure, the finding, based on the objects associated with the at least one space-time point, a first co-occurrence object among the plurality of objects that occurs in the same space-time as the target object comprises: taking all objects associated with the at least one space-time point as the first co-occurrence objects; or taking at least one first object among the objects associated with the at least one space-time point as the first co-occurrence object; or taking at least one second object among the objects associated with the at least one space-time point as the first co-occurrence object; wherein each first object is associated with a corresponding space-time point of the at least one space-time point through feature information of the same type as the target feature information, and each second object is associated with a corresponding space-time point of the at least one space-time point through feature information of a different type from the target feature information.
According to an embodiment of the present disclosure, the method further comprises: after the first co-occurrence object is found, determining other space-time points, among the plurality of space-time points, that are associated with the at least one space-time point; and searching, based on the other space-time points, for a second co-occurrence object among the plurality of objects that occurs in the same space-time as the target object; wherein the other space-time points include any of the following: space-time points adjacent to the at least one space-time point in the time domain and identical to it in the space domain; space-time points identical to the at least one space-time point in the time domain and adjacent to it in the space domain; and space-time points adjacent to the at least one space-time point in both the time domain and the space domain.
According to an embodiment of the present disclosure, the method further comprises: after the first co-occurrence object is found, screening, from the first co-occurrence objects, a third co-occurrence object whose co-occurrence count is greater than or equal to N, wherein N is an integer and N ≥ 2.
According to an embodiment of the present disclosure, the method further comprises: filtering out, from the third co-occurrence objects, objects whose multiple co-occurrences all fall within a plurality of spatially adjacent areas.
According to an embodiment of the present disclosure, the method further comprises: extracting, from the third co-occurrence objects, a fourth co-occurrence object whose movement track coincides with that of the target object.
According to an embodiment of the present disclosure, the different feature information includes different types of feature information or different feature information of the same type.
Another aspect of the present disclosure provides an apparatus for searching for a co-occurrence object, comprising: an object determination module configured to determine a target object among a plurality of objects, wherein the target object has target feature information; a space-time point determination module configured to determine, based on the target feature information, at least one space-time point associated with the target object among a plurality of space-time points, wherein at least some of the plurality of space-time points are associated with different ones of the plurality of objects based on different feature information; and a search module configured to search, based on the objects associated with the at least one space-time point, for a first co-occurrence object among the plurality of objects that occurs in the same space-time as the target object.
Another aspect of the present disclosure provides an electronic device including: one or more processors; and a memory for storing one or more programs, wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the methods of embodiments of the present disclosure.
Another aspect of the present disclosure provides a computer-readable storage medium storing computer-executable instructions that, when executed, implement the method of embodiments of the present disclosure.
Another aspect of the present disclosure provides a computer program comprising computer executable instructions that when executed perform the method of embodiments of the present disclosure.
According to the embodiments of the present disclosure, space-time points are introduced, and each space-time point can be defined as a virtual body with a specific time attribute (e.g., corresponding to a time range) and a specific space attribute (e.g., corresponding to an area range). Each virtual body can collect data, through the various monitoring devices (e.g., cameras, base stations, etc.) arranged within its area range, on the objects (e.g., people, electronic devices, vehicles, etc.) appearing in that area range during its time range, so that different objects establish various associations with the virtual body based on the collected data (e.g., associations based on mobile-phone information, associations based on vehicle information, etc.). As a result, the embodiments of the present disclosure can search for co-occurring objects directly across multi-source data (via the space-time points) on the basis of only a single kind of data, and can identify co-occurring objects more accurately.
Drawings
The above and other objects, features and advantages of the present disclosure will become more apparent from the following description of embodiments of the present disclosure with reference to the accompanying drawings, in which:
FIG. 1 schematically illustrates an exemplary system architecture to which the method and apparatus for finding co-occurring objects may be applied, according to an embodiment of the disclosure;
FIGS. 2A-2C schematically illustrate exemplary application scenarios in which the method and apparatus for finding co-occurring objects may be applied, according to embodiments of the present disclosure;
FIG. 3 schematically illustrates a flow chart of a method of finding co-occurring objects according to an embodiment of the present disclosure;
FIGS. 4A-4C schematically illustrate the setting of space-time points according to an embodiment of the present disclosure;
FIG. 5 schematically illustrates determining the space-time points associated with a specific object based on a data map, according to an embodiment of the disclosure;
FIGS. 6A and 6B schematically illustrate diagrams of an optimized search range according to an embodiment of the present disclosure;
FIGS. 7A-7C schematically illustrate optimizing a large space-time point according to an embodiment of the present disclosure;
FIG. 8 schematically illustrates a block diagram of a finding apparatus for co-occurring objects according to an embodiment of the present disclosure; and
FIG. 9 schematically illustrates a block diagram of an electronic device suitable for implementing the method and apparatus for finding co-occurring objects according to an embodiment of the present disclosure.
Detailed Description
Hereinafter, embodiments of the present disclosure will be described with reference to the accompanying drawings. It should be understood that the description is illustrative only and is not intended to limit the scope of the present disclosure. In the following detailed description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the disclosure. It may be evident, however, that one or more embodiments may be practiced without these specific details. Moreover, in the following description, descriptions of well-known structures and techniques are omitted so as to not unnecessarily obscure the concepts of the present disclosure.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. The terms "comprises," "comprising," and the like, as used herein, specify the presence of stated features, steps, operations, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, or components.
All terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art unless otherwise defined. It is noted that the terms used herein should be interpreted as having a meaning that is consistent with the context of this specification and should not be interpreted in an idealized or overly formal sense.
Where a convention analogous to "at least one of A, B, and C, etc." is used, such a construction is generally intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B, and C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.).
The embodiments of the present disclosure provide a method for searching for a co-occurrence object and an apparatus to which the method can be applied. The method may include, for example, first determining a target object among a plurality of objects, wherein the target object has target feature information; then determining, based on the target feature information, at least one space-time point associated with the target object among a plurality of space-time points, wherein at least some of the plurality of space-time points are associated with different ones of the plurality of objects based on different feature information; and further finding, based on the objects associated with the at least one space-time point, a first co-occurrence object among the plurality of objects that occurs in the same space-time as the target object.
The disclosure will be described in detail below with reference to the drawings and exemplary embodiments.
Fig. 1 schematically illustrates an exemplary system architecture to which a lookup method of co-occurring objects may be applied, according to an embodiment of the present disclosure. It should be noted that fig. 1 is only an example of a system architecture to which the embodiments of the present disclosure may be applied to help those skilled in the art understand the technical content of the present disclosure, and does not mean that the embodiments of the present disclosure may not be applied to other devices, systems, environments or scenarios.
As shown in fig. 1, the system architecture 100 according to this embodiment may include terminal devices 101, 102, 103, a network 104 and a server 105. The network 104 serves as a medium for providing communication links between the terminal devices 101, 102, 103 and the server 105. Network 104 may include various connection types, such as wired and/or wireless communication links, and so forth.
The user may use the terminal devices 101, 102, 103 to interact with the server 105 via the network 104, to receive or send messages and the like. Various client applications may be installed on the terminal devices 101, 102, 103, such as, by way of example only, a web browser application, a search application, an instant messaging tool, a mailbox client, and/or social platform software.
The terminal devices 101, 102, 103 may be various electronic devices having a display screen and supporting web browsing, including but not limited to smart phones, tablet computers, laptop portable computers, desktop computers, and the like.
The server 105 may be a server providing various services, such as a background management server (for example only) providing big data support for content queried by users using the terminal devices 101, 102, 103. The background management server may analyze and perform other processing on the received data such as the user request, and feed back a processing result (e.g., data obtained according to the user request, etc.) to the terminal device.
It should be noted that the finding method of the co-occurrence object provided by the embodiment of the present disclosure may be generally performed by the server 105. Accordingly, the finding apparatus of the co-occurrence object provided by the embodiment of the present disclosure may be generally disposed in the server 105. The finding method of the co-occurrence object provided by the embodiment of the present disclosure may also be performed by a server or a server cluster different from the server 105 and capable of communicating with the terminal devices 101, 102, 103 and/or the server 105. Accordingly, the finding device of the co-occurrence object provided by the embodiment of the present disclosure may also be disposed in a server or a server cluster different from the server 105 and capable of communicating with the terminal devices 101, 102, 103 and/or the server 105. Alternatively, the finding method of the co-occurrence object provided by the embodiment of the present disclosure may also be executed by the terminal device 101, 102, or 103, or may also be executed by another terminal device different from the terminal device 101, 102, or 103. Correspondingly, the finding device of the co-occurrence object provided by the embodiment of the present disclosure may also be disposed in the terminal device 101, 102, or 103, or in another terminal device different from the terminal device 101, 102, or 103.
For example, a data graph constructed based on multi-source monitoring data may be stored in any one of the terminal apparatuses 101, 102, or 103 (e.g., the terminal apparatus 101, but not limited thereto), or stored on an external storage apparatus and may be imported into the terminal apparatus 101. Then, the terminal device 101 may locally perform the method for finding a co-occurrence object provided by the embodiment of the present disclosure, or send the data map to another terminal device, a server, or a server cluster, and perform the method for finding a co-occurrence object provided by the embodiment of the present disclosure by another terminal device, a server, or a server cluster receiving the data map.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
It should be noted that the co-occurrence object searching scheme provided by the embodiment of the present disclosure may be used for searching co-occurrence of people, mobile phones, vehicles, and the like in various application scenarios.
Fig. 2A to 2C schematically illustrate exemplary application scenarios in which the finding method and apparatus of co-occurrence objects may be applied according to an embodiment of the present disclosure. It should be noted that fig. 2A to 2C are only examples of application scenarios to which the embodiments of the present disclosure may be applied to help those skilled in the art understand the technical content of the embodiments of the present disclosure, but do not mean that the embodiments of the present disclosure may not be applied to other scenarios.
As shown in FIG. 2A, for example, in a public security scenario, after a crime suspect Zhang San is caught, other persons and/or vehicles related to the case may be found based on the time and place of the crime and on a large amount of monitoring data (such as a data map).
As shown in FIG. 2B, for example, in an infectious-disease contact-tracing scenario, after Li Si is diagnosed as a case, direct or indirect close contacts of Li Si may be found based on Li Si's activity trace over the previous one to two weeks and on a large amount of monitoring data (such as a data map).
As shown in FIG. 2C, for example, in a traffic accident scenario, after the victim Wang Wu is identified, witnesses of the accident and/or the offending driver and vehicle may be found based on the time and place of the accident and on a large amount of monitoring data (such as a data map).
In addition, the embodiments of the present disclosure can provide an effective means of mining co-occurring objects in other "co-occurrence" determination scenarios that fuse multiple kinds of data, which are not enumerated here one by one.
It should be noted that, in the course of implementing the inventive concept of the present disclosure, the inventors found the following. In the related art, a scheme based on a single kind of data (such as an object's mobile-phone information) can find co-occurring objects only within same-source data (the mobile-phone information of other objects), not across multi-source data; such a scheme covers few scenarios and places high demands on the data collection rate of the monitoring devices (such as base stations, routers, etc.). Furthermore, in the related art, to search for co-occurring objects across multi-source data (such as mobile-phone information and vehicle information), the search must be performed separately on each kind of monitoring data (for example, separately on the mobile-phone information and on the vehicle information), followed by an aggregation of the co-occurrence behaviors, before the desired co-occurring objects can finally be found. With that approach, once the selected time information and space information are not accurate enough, the accuracy of the search result suffers.
Based on this, one of the inventive concepts of the embodiments of the present disclosure is to provide a processing method that can directly search for co-occurring objects across multi-source data based on any single kind of monitoring data; the method is not sensitive to the selection of time and space information and can find co-occurring objects more accurately than the prior art.
Fig. 3 schematically shows a flow chart of a method of finding co-occurring objects according to an embodiment of the present disclosure.
As shown in FIG. 3, the method may include, for example, operations S302, S304, and S306.
In operation S302, a target object among a plurality of objects may be determined. The target object has target characteristic information, and in addition, the target object may have other characteristic information.
In operation S304, at least one of a plurality of spatiotemporal points associated with the target object may be determined based on the target feature information. Wherein at least some of the plurality of spatiotemporal points (e.g., each of the plurality of spatiotemporal points, or some of the plurality of spatiotemporal points) may be associated with different ones of the plurality of objects based on different characteristic information.
In operation S306, a first co-occurrence object occurring in the same space-time as the target object among the plurality of objects may be searched based on the object associated with the at least one space-time point.
The method shown in FIG. 3 will be described in detail below with reference to FIGS. 4A to 4C in conjunction with specific embodiments.
In the embodiments of the present disclosure, space-time points may be set with reference to the schematic diagrams of FIGS. 4A to 4C. Specifically, a plurality of space-time points may be set in advance for a designated area. It should be understood that any area may be designated, and the embodiments of the present disclosure are not limited in this regard. For example, the whole globe may be designated, or one or several continents (e.g., Asia), or one or several countries (or cities, towns, streets, etc.).
For example, as shown in FIG. 4A, the designated area 400 may be divided into any N regions, region 1 to region N, where N is any integer greater than or equal to 2. Each of regions 1 to N may represent a real space-time point. A real space-time point can be expressed as [region, T1-T2], where "region" represents the location attribute information of the real space-time point and "T1-T2" represents its time attribute information; that is, the real space-time point collects monitoring data, through the various monitoring devices (such as cameras, base stations, WiFi devices, etc.) arranged within the "region", on the objects (such as people, electronic devices, vehicles, etc.) that appear in the "region" during the time range "T1-T2". For example, assuming region 1 represents the "area around Garden Bridge", the real space-time point represented by region 1 may monitor, through the cameras, WiFi devices, etc. deployed there, the people, vehicles, and so on that have appeared in the "area around Garden Bridge" within the last day.
Since a real space-time point usually collects monitoring data at a high frequency, the monitoring data it accumulates can reach a staggering volume even over a very short time range, which may cause one real space-time point to be associated with an enormous number of objects and thereby make it difficult to find co-occurring objects.
Based on this, a real space-time point can be further divided into a plurality of virtual space-time points in the time dimension, so that the many objects are associated, in a dispersed manner, with different virtual space-time points according to the collection times of their monitoring data, which facilitates searching for co-occurring objects.
For example, as shown in FIG. 4B, the real space-time point X represented by any one of region 1 to region N may be further divided into virtual space-time point 1 to virtual space-time point n shown in the figure, where X is any integer from 1 to N and n is any integer greater than or equal to 2. The time ranges corresponding to virtual space-time point 1 to virtual space-time point n do not overlap one another, and their union equals the time range corresponding to real space-time point X: the start of the time range of virtual space-time point 1 coincides with the start of the time range of real space-time point X, and the end of the time range of virtual space-time point n coincides with its end. The location attribute information of virtual space-time point 1 to virtual space-time point n is identical, namely region X.
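To make the time-dimension split concrete, the following is a minimal Python sketch (illustrative only, not part of the patent text; all names are assumptions) of dividing a real space-time point [region, T1-T2] into n virtual space-time points with contiguous, non-overlapping time slices:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import List

@dataclass
class SpaceTimePoint:
    region: str       # location attribute, e.g. "region 1"
    start: datetime   # start of the covered time range (T1)
    end: datetime     # end of the covered time range (T2)

def split_into_virtual_points(real: SpaceTimePoint, n: int) -> List[SpaceTimePoint]:
    """Split a real space-time point into n virtual space-time points whose
    time slices do not overlap and whose union equals [T1, T2]; the region
    is left unchanged."""
    slice_len = (real.end - real.start) / n
    return [SpaceTimePoint(real.region,
                           real.start + i * slice_len,
                           real.start + (i + 1) * slice_len)
            for i in range(n)]

# Example: one day at "region 1" split into 24 one-hour virtual points.
day = SpaceTimePoint("region 1",
                     datetime(2020, 6, 24, 0, 0),
                     datetime(2020, 6, 25, 0, 0))
virtual_points = split_into_virtual_points(day, 24)
```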
It should be noted that the "space-time points" mentioned in operations S302, S304, and S306 above, as well as those mentioned in the other embodiments described below, all denote "virtual space-time points" unless otherwise stated.
In the embodiments of the present disclosure, each of the "plurality of space-time points" mentioned in operation S304 may correspond to a plurality of collection devices (also referred to as monitoring devices), which are respectively used to collect data for specific types of feature information.
For example, as shown in FIG. 4C, each of virtual space-time points 1 to n corresponds to the camera, base station, and WiFi device deployed in "region X": images of the people and vehicles that appear can be captured by the camera, while electronic devices that appear, such as mobile phones, can be scanned by the WiFi device and the base station to acquire device information. In this way, each person can be associated with a virtual space-time point through the captured face image, the captured device information, and the corresponding collection times. Likewise, each vehicle can be associated with a virtual space-time point through the captured vehicle snapshot and the corresponding collection time.
In the embodiments of the present disclosure, to facilitate searching for co-occurring objects, after the monitoring data of any object is collected, each virtual space-time point may establish an association with the collected object based on the collected monitoring data and the collection time. In this way, each virtual space-time point may establish associations with objects of different types based on monitoring data collected by different types of monitoring devices, and may likewise establish associations with different objects of the same type based on monitoring data collected by the same type of monitoring device.
Therefore, in operations S302, S304, and S306 above, when searching for co-occurring objects of a target object, the target object may first be determined, from among the plurality of objects whose monitoring data has been collected, based on the object information (such as an object ID, object name, etc.) entered by the user for the target object. Then, based on one or more pieces of specific feature information of the target object (i.e., the target feature information, such as mobile-phone information, a face image, etc.), at least one space-time point at which relevant monitoring data has been collected for that feature information (e.g., all space-time points at which monitoring data has been collected for the target object) is determined from among the configured virtual space-time points. Next, all objects for which monitoring data has been collected at the at least one space-time point are determined, and the co-occurring object (the first co-occurrence object) that appears in the same space-time as the target object is searched for among them.
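These three operations can be illustrated with a small, self-contained Python sketch over toy in-memory records; the dictionary layout and all identifiers below are assumptions made for illustration, not the patent's storage format:

```python
# records[point_id] = list of (object_id, feature_info) collected there.
records = {
    "virtual_1": [("wang_wu", "phone:138xxxx"), ("zhang_san", "phone:139xxxx"),
                  ("hu_B1", "plate:Hu B1")],
    "virtual_2": [("wang_wu", "phone:138xxxx"), ("hu_B1", "plate:Hu B1")],
    "virtual_3": [("xiao_ming", "face:img_07")],
}

def find_first_co_occurrence(target_object, target_feature):
    # S304: all space-time points that collected the target feature information.
    points = [p for p, recs in records.items()
              if (target_object, target_feature) in recs]
    # S306: every other object collected at those points co-occurs with the
    # target object in the same space-time.
    co_occurring = {obj for p in points for obj, _ in records[p]
                    if obj != target_object}
    return points, co_occurring

points, first_co = find_first_co_occurrence("wang_wu", "phone:138xxxx")
# points -> ["virtual_1", "virtual_2"]; first_co -> {"zhang_san", "hu_B1"}
```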
In an embodiment of the present disclosure, when determining the objects for which monitoring data has been collected at the at least one space-time point, all objects for which the at least one space-time point has collected monitoring data on different feature information may be included. It should be understood that the different feature information here may include, for example, feature information of different types (such as mobile-phone information and vehicle information) or different feature information of the same type (such as the mobile-phone information of a first object and the mobile-phone information of a second object).
Also, in one embodiment of the present disclosure, the target object may be one specific type of object, or may be any one of a plurality of specific types of objects. Illustratively, the target object may be one of a person, an electronic device, an appliance, a vehicle, and the like, for example.
Also, in one embodiment of the present disclosure, an object associated with a virtual space-time point may have a variety of characteristic information. For example, when the object is a person, one or more of an identity ID, mobile phone information (such as a mobile phone number, a mobile phone ID, and the like), a face image, and the like may be used as the feature information. And any of these pieces of feature information may be the target feature information described in the embodiments of the present disclosure. Or, for example, when the object is a vehicle, one or more of a vehicle ID (such as a license plate number), a vehicle color and a vehicle logo, a vehicle type, a vehicle image, and the like may be used as the characteristic information. And any of these pieces of feature information may also be the target feature information described in the embodiments of the present disclosure.
Through the embodiments of the present disclosure, because virtual space-time points are introduced and each virtual space-time point can collect monitoring data for different types of feature information through multiple different types of monitoring devices, the co-occurrence search scheme provided herein can search for co-occurring objects across multi-source monitoring data (such as mobile-phone information and vehicle information) directly on the basis of a single kind of monitoring data (such as mobile-phone information), and can identify co-occurring objects more accurately.
Further, for operation S306, in an embodiment of the present disclosure, all objects associated with the at least one space-time point may be taken as the first co-occurrence objects. It will be appreciated that, in a public security scenario, this embodiment may be used when the case handler is concerned with accomplices, crime tools, and the like.
Alternatively, in another embodiment, at least one first object among the objects associated with the at least one space-time point may be taken as the first co-occurrence object. Each of the at least one first object is an object associated with a corresponding space-time point of the at least one space-time point through feature information of the same type as the target feature information. For example, if the target object is associated with space-time point 1 and space-time point 2 through mobile-phone information, only the objects (at least one first object) associated with one or both of space-time point 1 and space-time point 2 through mobile-phone information may be listed as first co-occurrence objects. It will be appreciated that, in a public security scenario, this embodiment may be used when the case handler is concerned only with accomplices.
Alternatively, in another embodiment, at least one second object among the objects associated with the at least one space-time point may be taken as the first co-occurrence object. Each of the at least one second object is associated with a corresponding space-time point of the at least one space-time point through feature information of a different type from the target feature information. For example, if the target object is associated with space-time point 1 and space-time point 2 through mobile-phone information, only the objects (at least one second object) associated with one or both of space-time point 1 and space-time point 2 through vehicle information may be listed as first co-occurrence objects. It will be appreciated that, in a public security scenario, this embodiment may be used when the case handler is concerned only with crime tools and the like.
Through the embodiments of the present disclosure, the co-occurring objects that meet the search requirement can be located still more precisely, based on the object characteristics that the searcher cares about.
For example, in a public security application scenario, if the case handler is concerned only with accomplices, only the co-occurring objects associated through mobile-phone information may be searched for. If the case handler is also concerned with crime tools (such as vehicles), the co-occurring objects associated through mobile-phone information and those associated through vehicle information may be searched for simultaneously. And if the case handler is concerned only with crime tools, only the co-occurring objects associated through vehicle information may be searched for.
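A hedged sketch of the three alternatives above, assuming each collection record carries a feature type such as "phone" or "plate" (the record layout and mode names are illustrative assumptions):

```python
# records[point_id] = list of (object_id, feature_type, feature_value).
records = {
    "virtual_1": [("wang_wu", "phone", "138xxxx"),
                  ("zhang_san", "phone", "139xxxx"),
                  ("hu_B1", "plate", "Hu B1")],
    "virtual_2": [("wang_wu", "phone", "138xxxx"),
                  ("hu_B1", "plate", "Hu B1")],
}

def first_co_occurrence(target_obj, target_type, mode="all"):
    # Space-time points associated with the target through the target feature type.
    points = [p for p, recs in records.items()
              if any(o == target_obj and t == target_type for o, t, _ in recs)]
    result = set()
    for p in points:
        for obj, ftype, _ in records[p]:
            if obj == target_obj:
                continue
            if (mode == "all"                                          # every object
                    or (mode == "same_type" and ftype == target_type)   # "first objects"
                    or (mode == "other_type" and ftype != target_type)):  # "second objects"
                result.add(obj)
    return result

first_co_occurrence("wang_wu", "phone", mode="same_type")   # {"zhang_san"}
first_co_occurrence("wang_wu", "phone", mode="other_type")  # {"hu_B1"}
```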
In the embodiments of the present disclosure, if in operation S304 the at least one space-time point associated with the target object is determined based only on one or more pieces of target feature information of the target object, an overly long time span may cause many co-occurring objects unrelated to the current event (such as the criminal case at hand) to be returned, so that the search result contains a large amount of redundant data, which is unfavorable for quickly and accurately locating the co-occurring objects that currently deserve attention.
Based on this, in operation S304, a corresponding target time range may further be determined from the time information entered by the user, so that the at least one space-time point is determined based on the monitoring data collected within the target time range. Specifically, all space-time points at which the target feature information was collected within the target time range may be determined and taken as the at least one space-time point.
Illustratively, suppose there are X space-time points associated with an object A as determined from the object's mobile-phone information, where X is an integer greater than or equal to 1. If, among these X space-time points, only the collection times corresponding to space-time point 1 (denoted [Garden Bridge, June 24, 2020, 11:00-12:00]), space-time point 2 (denoted [Garden Bridge, June 24, 2020, 13:00-14:00]), and space-time point 3 (denoted [Zizhu Bridge, June 24, 2020, 12:00-13:00]) fall on June 24, 2020, then when "June 24, 2020" is specified as the query time (the target time range), space-time points 1 to 3 can be locked from among the X space-time points; based on all objects associated with space-time points 1 to 3, the objects that appeared together with object A at "Garden Bridge" and "Zizhu Bridge" on June 24, 2020 can then be queried.
By means of the embodiments of the present disclosure, using the target feature information of the target object as the primary query condition and the specified time range as the auxiliary query condition when searching over the collected monitoring data narrows the query range for co-occurring objects, and thus prevents the large amount of redundant data that an unreasonable query range would cause.
In order to visually display the association between each object and each space-time point and to speed up the search for co-occurring objects, a data map may be created based on the collected monitoring data and their collection times. Virtual space-time points are introduced into the data map, and the multi-source monitoring data are mapped into relation data from collected objects to collectors (virtual space-time points), thereby establishing associations among the multi-source monitoring data. Then, for any collected object, querying the data map can quickly locate a local subgraph and reveal what the other monitoring devices at the relevant space-time points have collected, so that subsequent co-occurrence object identification can be performed.
It should be noted that, in the embodiments of the present disclosure, the constituent elements of the data map may include a set of collector nodes, a set of collected nodes, and edges connecting collector nodes to corresponding collected nodes. Each collector node in the data map represents a virtual space-time point, each collected node represents a collected object, and each edge describes the association between the collector node and the collected node it connects through corresponding edge attribute information; the edge attribute information comprises the collected data and the collection time, and can be marked on the association edge from the collected object to the virtual space-time point.
For example, as shown in FIG. 5, "Wang Wu", "Zhang San", "Xiao Ming", "Hu B1", "Hu B2", "Hu B3", etc. are all collected nodes, each representing an object; "Virtual 1", "Virtual 2", "Virtual 3", etc. are all collector nodes, each representing a virtual space-time point, with "Virtual 1", "Virtual 2", and "Virtual 3" labeling "virtual space-time point 1", "virtual space-time point 2", and "virtual space-time point 3", respectively. A line connecting an object and a space-time point represents the corresponding association and stores the monitoring data collected at that space-time point for that object, together with the collection time. The area covered by virtual space-time point 1 is the area in which Wang Wu's mobile-phone information was collected, and the time at which virtual space-time point 1 collected Wang Wu's mobile-phone information falls within the time range covered by virtual space-time point 1. In addition, "wifi" marked on an edge in the figure indicates that the monitoring data was acquired by WiFi scanning, while "shot" marked on an edge indicates that the monitoring data was acquired by capturing an image.
It should be noted that, in the embodiments of the present disclosure, in addition to the collected data and the collection time, the area information associated with collecting the monitoring data for the object may optionally also be stored in the edge attribute information.
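The data map just described can be sketched as follows; the field names (collector, collected, data, collected_at, mode, area) mirror the edge attribute information above but are illustrative assumptions, not the patent's schema:

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Optional

@dataclass
class Edge:
    collector: str              # collector node id, e.g. "Virtual 1"
    collected: str              # collected node id, e.g. "Wang Wu"
    data: str                   # collected data, e.g. "phone:138xxxx"
    collected_at: datetime      # collection time
    mode: str = "wifi"          # how it was collected: "wifi", "shot", ...
    area: Optional[str] = None  # optional area info (see note above)

@dataclass
class DataMap:
    collectors: set = field(default_factory=set)  # virtual space-time points
    objects: set = field(default_factory=set)     # collected objects
    edges: List[Edge] = field(default_factory=list)

    def add(self, edge: Edge) -> None:
        # Register both endpoints and the attributed edge between them.
        self.collectors.add(edge.collector)
        self.objects.add(edge.collected)
        self.edges.append(edge)
```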
Specifically, determining all the space-time points at which the target feature information is acquired within the target time range as at least one space-time point may include, for example, the following operations.
First, a pre-constructed data map is acquired from cloud storage or another persistent storage space.
Next, the data map is used to search for target edges whose edge attribute information satisfies the preset conditions, namely that the collection time falls within the target time range and that the collected data is the target feature information.
Finally, the space-time points represented by the collector nodes connected to the target edges are taken as the at least one space-time point.
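These three operations admit a compact, self-contained sketch; the flat edge records below are a simplified stand-in for the data map, and the identifiers and times are invented for illustration:

```python
from datetime import datetime

# Each edge as (collector node, collected node, collected data, collection time).
edges = [
    ("Virtual 1", "Wang Wu", "phone:138xxxx", datetime(2020, 6, 24, 11, 5)),
    ("Virtual 2", "Wang Wu", "phone:138xxxx", datetime(2020, 6, 24, 11, 15)),
    ("Virtual 3", "Xiao Ming", "face:img_07", datetime(2020, 6, 24, 11, 25)),
]

def find_points(edges, target_feature, t_start, t_end):
    """Target edges: the collected data equals the target feature information
    and the collection time falls within [t_start, t_end]; return the
    collector nodes those edges connect to."""
    return {collector for collector, _, data, t in edges
            if data == target_feature and t_start <= t <= t_end}

points = find_points(edges, "phone:138xxxx",
                     datetime(2020, 6, 24, 10, 0),
                     datetime(2020, 6, 24, 15, 0))
# points -> {"Virtual 1", "Virtual 2"}
```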
Illustratively, as shown in FIG. 5, "Wang Wu" is associated with "Virtual 1" and "Virtual 2" through mobile-phone information, where [region 1, 11:00-11:10] represents "Virtual 1", [region 2, 11:10-11:20] represents "Virtual 2", and [region 3, 11:20-11:30] represents "Virtual 3". For example, in a public security scenario, after the suspect "Wang Wu" is caught and the case handler wants to find other related persons and/or vehicles, entering "Wang Wu" with a query time of 10:00-15:00 locates the local subgraph shown in FIG. 5, from which it can be determined that both "Virtual 1" and "Virtual 2" are associated with "Wang Wu" through mobile-phone information; "Virtual 1" and "Virtual 2" are therefore both determined to be space-time points associated with "Wang Wu".
Further, as shown in FIG. 5, "Zhang San" and "Hu B1" are both associated with "Virtual 1" and "Virtual 2", and "Hu B3" is associated with "Virtual 1". Therefore, if the case handler is concerned only with accomplices, "Zhang San" can be taken as the "co-occurrence object". If the case handler is concerned only with vehicles involved in the case, both "Hu B1" and "Hu B3" can be taken as "co-occurrence objects". If the case handler is concerned with both related persons and related vehicles, "Zhang San", "Hu B1", and "Hu B3" can all be taken as "co-occurrence objects".
Obviously, through the embodiment of the disclosure, the co-occurrence object having a co-occurrence relationship with the specific object can be quickly found based on the data map.
It should be noted that, in the embodiments of the present disclosure, co-occurring objects may be queried directly in the cloud storage or other persistent storage space where the data map is stored, or queried in a cache space. Specifically, frequently queried objects, objects of major interest, or objects appearing in a specific area may be cached in the corresponding cache space in advance to facilitate quick queries. In addition, for objects of major interest, previous query results may be cached in the corresponding cache space so that subsequent queries can find the results directly in the cache. Alternatively, the data map may be updated into the corresponding cache as it is constructed.
In the embodiments described above, in order to find the space-time points where co-occurring objects are more likely to appear, a target time range is specified as an auxiliary query condition. If the target time range is not specified properly, however, some critical co-occurring objects may be missed during the search.
In order to avoid missing relatively critical co-occurring objects when searching, the following operations may also be performed after the first co-occurrence object is found.
Other spatiotemporal points of the plurality of spatiotemporal points associated with at least one spatiotemporal point are determined.
Based on the other spatiotemporal points, a second co-occurrence object of the plurality of objects that occurs in the same spatiotemporal as the target object is found.
The other space-time points include any of the following. First, space-time points adjacent to the at least one space-time point in the time domain and identical to it in the space domain: for example, if a space-time point used in determining the first co-occurrence object is [Zizhu Bridge, 11:05-11:10], the space-time points [Zizhu Bridge, 11:00-11:05] and [Zizhu Bridge, 11:10-11:15] can be listed as associated other space-time points when determining the second co-occurrence object. Second, space-time points identical to the at least one space-time point in the time domain and adjacent to it in the space domain: for example, for the space-time point [Zizhu Bridge, 11:05-11:10], the space-time point [Garden Bridge, 11:05-11:10] can be listed as an associated other space-time point. Third, space-time points adjacent to the at least one space-time point in both the time domain and the space domain: for example, for the space-time point [Zizhu Bridge, 11:05-11:10], the space-time point [Garden Bridge, 11:10-11:15] can be listed as an associated other space-time point.
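A minimal sketch of enumerating the three kinds of other space-time points, assuming a virtual point is keyed by (region, time-slot index) and that spatial adjacency between regions is available as a lookup table (both modeling assumptions):

```python
# Hypothetical spatial adjacency between monitored regions.
region_neighbors = {
    "Zizhu Bridge": {"Garden Bridge", "Temple"},
    "Garden Bridge": {"Zizhu Bridge"},
    "Temple": {"Zizhu Bridge"},
}

def other_points(region, slot):
    """Neighbors of the point (region, slot) in time, in space, and in both."""
    adjacent_time_same_region = {(region, slot - 1), (region, slot + 1)}
    same_time_adjacent_region = {(r, slot)
                                 for r in region_neighbors.get(region, ())}
    adjacent_time_adjacent_region = {(r, s)
                                     for r in region_neighbors.get(region, ())
                                     for s in (slot - 1, slot + 1)}
    return (adjacent_time_same_region
            | same_time_adjacent_region
            | adjacent_time_adjacent_region)

# other_points("Zizhu Bridge", 5) yields the neighboring points to scan
# when searching for second co-occurrence objects.
```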
It should be noted that a co-occurring object with a larger co-occurrence count is more likely to be genuinely traveling with the target. Therefore, in the embodiments of the present disclosure, in order to find the co-occurring objects most likely to be companions, the following may further be done: after the first co-occurrence objects are found, screening, from the first co-occurrence objects, third co-occurrence objects whose co-occurrence count is greater than or equal to N, where N is an integer and N ≥ 2.
As shown in FIG. 5, "Zhang San" and "Hu B1" both co-occur with "Wang Wu" at "Virtual 1" and "Virtual 2", while "Hu B3" co-occurs with "Wang Wu" only at "Virtual 1". Therefore, "Zhang San" and "Hu B1" can be taken as the objects suspected of traveling together (the third co-occurrence objects).
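The frequency screening reduces to a one-function sketch; the input layout, mapping each first co-occurrence object to the space-time points it shares with the target, is an assumption:

```python
def screen_by_frequency(co_points_per_object, n=2):
    """Keep first co-occurrence objects sharing at least n space-time
    points with the target object."""
    return {obj for obj, pts in co_points_per_object.items() if len(pts) >= n}

third = screen_by_frequency({
    "Zhang San": ["Virtual 1", "Virtual 2"],
    "Hu B1":     ["Virtual 1", "Virtual 2"],
    "Hu B3":     ["Virtual 1"],
}, n=2)
# third -> {"Zhang San", "Hu B1"}; "Hu B3" co-occurred only once.
```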
Further, the method of the embodiments of the present disclosure may also include: filtering out, from the third co-occurrence objects, objects whose multiple co-occurrences all fall within spatially adjacent areas.
Specifically, after the space-time points of the local subgraph are obtained, the other objects determined to have a co-occurrence relationship with the input object may come from multiple data sources. The truly relevant co-occurring objects can then be precisely locked through a filtering strategy. For example, the filtering strategy may require that a co-occurring object share two or more co-occurrence records with the input object, and that the distance between the areas of two successive co-occurrences exceed a certain threshold. This eliminates objects that merely happen to co-occur multiple times in the same or nearby areas, which may be chance encounters.
As shown in FIG. 6A, if two objects (such as Wang Wu and Zhang San) co-occur at three adjacent space-time points from "Temple" to "Zizhu Bridge" to "Garden Bridge", the probability that this is a chance encounter is high, and such co-occurring objects can therefore be filtered out. However, as shown in FIG. 6B, if two objects co-occur at the space-time points "Temple" and "Xizhuang", such co-occurring objects may be retained, because the space-time points "Temple" and "Xizhuang" are far apart.
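A sketch of this filtering strategy; the region coordinates and the distance threshold are hypothetical values introduced only to make the example run:

```python
from itertools import combinations
from math import dist

region_coord = {  # hypothetical region centroids on a km grid
    "Temple": (0.0, 0.0),
    "Zizhu Bridge": (0.8, 0.2),
    "Garden Bridge": (1.5, 0.3),
    "Xizhuang": (9.0, 4.0),
}

def keep_co_occurrence(regions, min_records=2, min_distance_km=3.0):
    """Keep an object only if it has >= min_records co-occurrences and some
    pair of its co-occurrence areas lies farther apart than the threshold."""
    if len(regions) < min_records:
        return False
    return any(dist(region_coord[a], region_coord[b]) > min_distance_km
               for a, b in combinations(regions, 2))

keep_co_occurrence(["Temple", "Zizhu Bridge", "Garden Bridge"])  # False: all nearby
keep_co_occurrence(["Temple", "Xizhuang"])                       # True: far apart
```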
Through the embodiment of the present disclosure, the set of co-occurrence objects can be refined and chance encounters can be eliminated, so that the objects under stronger co-occurrence suspicion can be accurately locked.
Further, the method of the embodiment of the present disclosure may further include: extracting, from the third co-occurrence objects, the co-occurrence objects whose movement trajectories coincide with that of the target object, so as to find the objects traveling together with the target object as far as possible and thereby accurately lock the key co-occurrence objects. In particular, trajectory fitting may be performed over the co-occurring virtual space-time points to determine the objects most likely to be traveling together throughout.
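The disclosure does not fix a particular trajectory-fitting algorithm; as a hedged stand-in, the sketch below scores each candidate by the fraction of the target's space-time points its trajectory shares and keeps those above a threshold. The 0.8 threshold and the track representation are assumptions for the example.

def trajectory_overlap(target_track, candidate_track):
    """Fraction of the target's trajectory (a time-ordered list of
    space-time points) that the candidate's trajectory also contains."""
    if not target_track:
        return 0.0
    return len(set(target_track) & set(candidate_track)) / len(target_track)

def extract_coinciding(target_track, candidate_tracks, threshold=0.8):
    """Keep candidates whose trajectories largely coincide with the target's."""
    return [obj for obj, track in candidate_tracks.items()
            if trajectory_overlap(target_track, track) >= threshold]

target = [("Temple", "11:00"), ("Purple Bamboo Bridge", "11:05"), ("Garden Bridge", "11:10")]
candidates = {"Hu B1": list(target), "Hu B3": [("Temple", "11:00")]}
print(extract_coinciding(target, candidates))  # ['Hu B1']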
Further, the method may further include, for example: aiming at an acquirer node connected with M edges in the data map, wherein M is larger than or equal to M0, M0 represents a preset value, and M0 are integers, the following operations are executed.
And determining the space-time point represented by the collector node.
The space-time point is divided into a plurality of sub-space-time points in a time dimension and/or in a space dimension.
The data map is modified based on the plurality of sub-spatiotemporal points.
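A minimal sketch of the time-dimension variant of this split, under the assumption that the data graph is stored as a dictionary from collector nodes to their edges and that timestamps are plain numbers:

def split_node_in_time(graph, node, num_slices):
    """Replace a heavily connected collector node (region, t_start, t_end)
    with num_slices sub-nodes over equal time segments; each edge carries an
    acquisition timestamp and is reattached to the segment containing it."""
    region, t_start, t_end = node
    width = (t_end - t_start) / num_slices
    for obj, ts in graph.pop(node):
        index = min(int((ts - t_start) / width), num_slices - 1)
        sub_node = (region, t_start + index * width, t_start + (index + 1) * width)
        graph.setdefault(sub_node, []).append((obj, ts))
    return graph

graph = {("Region 1", 0.0, 60.0): [("Zhang San", 5.0), ("Wang Wu", 42.0)]}
print(split_node_in_time(graph, ("Region 1", 0.0, 60.0), num_slices=4))
# ("Region 1", 0.0, 15.0) keeps Zhang San; ("Region 1", 30.0, 45.0) keeps Wang Wu.

In a production graph store, the same split would naturally be expressed as node and edge rewrites rather than dictionary updates.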
Alternatively, the method may further include, for example, performing the following operations for a collector node connected with M edges in the data graph, where M is greater than or equal to M0, M0 represents a preset value, and M and M0 are both integers.

Determining the space-time point represented by the collector node.

Classifying the M edges based on the edge attribute information.

Dividing the space-time point into a plurality of sub-space-time points based on the edge classification result.

Modifying the data graph based on the plurality of sub-space-time points.
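Similarly, a minimal sketch of the edge-classification variant, with the same assumed dictionary representation and an assumed edge_type attribute on each edge:

def split_node_by_edge_type(graph, node):
    """Split a collector node into one sub-node per edge type, so that a
    space-time point collecting both phone and vehicle data becomes two
    sub-space-time points, one per acquisition type."""
    for obj, edge_type in graph.pop(node):
        sub_node = (node, edge_type)
        graph.setdefault(sub_node, []).append(obj)
    return graph

graph = {("Region 1", "T1-T2"): [("Zhang San", "phone"), ("Vehicle 1", "vehicle")]}
print(split_node_by_edge_type(graph, ("Region 1", "T1-T2")))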
For a monitoring area with heavy pedestrian flow or traffic flow, or a monitoring area equipped with high-frequency monitoring data acquisition devices, a virtual space-time point may be associated with a very large number of edges, which may have a large influence on subsequent queries. For example, too many co-occurrence objects may be found, many of which are not actually associated with the target object (e.g., incidental passers-by), so that the co-occurrence objects of real interest cannot be accurately locked.
Based on this, a large-node optimization strategy can be adopted to divide a large node into a plurality of small nodes. Specifically, the area range may be kept unchanged and the space-time point divided only in the time dimension into a plurality of space-time points with smaller time segments. For example, as shown in FIG. 7A, a large space-time point [Region 1, T1~T2] may be divided into a plurality of small space-time points over the time ranges shown in the figure (such as t1~t2, t2~t3, ..., tx~tn). Alternatively, the time range may be kept unchanged and the space-time point divided only in the space dimension into a plurality of space-time points with smaller areas. For example, as shown in FIG. 7B, a large space-time point [Region 1, T1~T2] may be divided into a plurality of small space-time points, such as Region 11 to Region 14 as shown. Alternatively, without changing either the time range or the area range, the edges may be classified by type so as to split off space-time points with fewer monitoring devices. For example, as shown in FIG. 7C, a large space-time point [Region 1, T1~T2] may be divided into a plurality of small space-time points as shown; for instance, a large space-time point at which both mobile phone information and vehicle information can be collected is divided into two small space-time points, one collecting only mobile phone information and the other collecting only vehicle information.
Through the embodiments of the present disclosure, excessively large virtual space-time points can be avoided, so that the co-occurrence objects of interest that truly have an association relationship with the target object can be queried.
Fig. 8 schematically shows a block diagram of a finding apparatus of co-occurring objects according to an embodiment of the present disclosure.
As shown in fig. 8, the finding apparatus 800 of the co-occurrence object may include, for example, an object determination module 802, a space-time point determination module 804, and a finding module 806.
An object determination module 802 for determining a target object of the plurality of objects, wherein the target object has target characteristic information.
A space-time point determination module 804, configured to determine at least one space-time point associated with the target object from among a plurality of space-time points based on the target feature information, where at least some of the space-time points are associated with different objects from among the plurality of objects based on different feature information.
A finding module 806, configured to find, based on the objects associated with the at least one space-time point, a first co-occurrence object of the plurality of objects that occurs in the same space-time as the target object.
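For illustration only, the three modules of apparatus 800 might be organized as follows in Python; the data-graph representation and method signatures are assumptions, and only the division of responsibilities follows the text above.

class CoOccurrenceFindingApparatus:
    """Sketch of apparatus 800 with its three modules as methods."""

    def __init__(self, data_graph):
        # data_graph: space-time point -> list of (object_id, feature_info)
        self.data_graph = data_graph

    # object determination module 802
    def determine_target_object(self, objects, target_feature):
        return next((obj for obj, feat in objects if feat == target_feature), None)

    # space-time point determination module 804
    def determine_space_time_points(self, target_feature):
        return [point for point, records in self.data_graph.items()
                if any(feat == target_feature for _, feat in records)]

    # finding module 806
    def find_first_co_occurrence(self, target_object, target_feature):
        points = self.determine_space_time_points(target_feature)
        return {obj for point in points
                for obj, _ in self.data_graph[point] if obj != target_object}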
It should be noted that, the implementation of the apparatus portion in the embodiment of the present disclosure is the same as or similar to the implementation of the method portion in the embodiment of the present disclosure, and for the description of the apparatus portion embodiment, reference is specifically made to the description of the method portion embodiment, which is not described herein again.
Any of the modules according to embodiments of the present disclosure, or at least part of the functionality of any of them, may be implemented in one module. Any one or more of the modules according to the embodiments of the present disclosure may be implemented by being split into a plurality of modules. Any one or more of the modules, sub-modules, units, sub-units according to embodiments of the present disclosure may be implemented at least in part as a hardware circuit, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system on a chip, a system on a substrate, a system on a package, an Application Specific Integrated Circuit (ASIC), or may be implemented in any other reasonable manner of hardware or firmware by integrating or packaging a circuit, or in any one of or a suitable combination of software, hardware, and firmware implementations. Alternatively, one or more of the modules according to embodiments of the disclosure may be implemented at least partly as computer program modules which, when executed, may perform corresponding functions.
For example, any number of the object determination module 802, the space-time point determination module 804, and the finding module 806 may be combined and implemented in one module/unit/sub-unit, or any one of the modules/units/sub-units may be split into multiple modules/units/sub-units. Alternatively, at least part of the functionality of one or more of these modules/units/sub-units may be combined with at least part of the functionality of other modules/units/sub-units and implemented in one module/unit/sub-unit. According to an embodiment of the present disclosure, at least one of the object determination module 802, the space-time point determination module 804, and the finding module 806 may be implemented at least in part as a hardware circuit, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system-on-a-chip, a system-on-a-substrate, a system-on-a-package, or an Application Specific Integrated Circuit (ASIC), or may be implemented in hardware or firmware in any other reasonable manner of integrating or packaging a circuit, or in any one of the three implementations of software, hardware, and firmware, or in any suitable combination thereof. Alternatively, at least one of the object determination module 802, the space-time point determination module 804, and the finding module 806 may be at least partially implemented as a computer program module that, when executed, may perform corresponding functions.
Fig. 9 schematically illustrates a block diagram of an electronic device suitable for implementing a co-occurrence object finding method and apparatus according to an embodiment of the present disclosure. The computer system illustrated in FIG. 9 is only one example and should not impose any limitations on the scope of use or functionality of embodiments of the disclosure.
As shown in fig. 9, a computer system 900 according to an embodiment of the present disclosure includes a processor 901 which can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 902 or a program loaded from a storage section 908 into a Random Access Memory (RAM) 903. The processor 901 may comprise, for example, a general purpose microprocessor (e.g., a CPU), an instruction set processor and/or associated chipset, and/or a special purpose microprocessor (e.g., an Application Specific Integrated Circuit (ASIC)), among others. The processor 901 may also include on-board memory for caching purposes. The processor 901 may comprise a single processing unit or a plurality of processing units for performing the different actions of the method flows according to embodiments of the present disclosure.
In the RAM 903, various programs and data necessary for the operation of the system 900 are stored. The processor 901, the ROM 902, and the RAM 903 are connected to each other through a bus 904. The processor 901 performs various operations of the method flows according to the embodiments of the present disclosure by executing programs in the ROM 902 and/or the RAM 903. Note that the programs may also be stored in one or more memories other than the ROM 902 and the RAM 903. The processor 901 may also perform various operations of the method flows according to embodiments of the present disclosure by executing programs stored in the one or more memories.
According to an embodiment of the present disclosure, the system 900 may also include an input/output (I/O) interface 905, the input/output (I/O) interface 905 also being connected to the bus 904. The system 900 may also include one or more of the following components connected to the I/O interface 905: an input section 906 including a keyboard, a mouse, and the like; an output section 907 including a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and the like, and a speaker; a storage section 908 including a hard disk and the like; and a communication section 909 including a network interface card such as a LAN card or a modem. The communication section 909 performs communication processing via a network such as the Internet. A drive 910 is also connected to the I/O interface 905 as necessary. A removable medium 911, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 910 as necessary, so that a computer program read out therefrom is installed into the storage section 908 as necessary.
According to embodiments of the present disclosure, method flows according to embodiments of the present disclosure may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable storage medium, the computer program containing program code for performing the method illustrated by the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 909, and/or installed from the removable medium 911. The computer program, when executed by the processor 901, performs the above-described functions defined in the system of the embodiment of the present disclosure. The systems, devices, apparatuses, modules, units, etc. described above may be implemented by computer program modules according to embodiments of the present disclosure.
The present disclosure also provides a computer-readable storage medium, which may be contained in the apparatus/device/system described in the above embodiments; or may exist separately and not be assembled into the device/apparatus/system. The computer-readable storage medium carries one or more programs which, when executed, implement the method according to an embodiment of the disclosure.
According to an embodiment of the present disclosure, the computer-readable storage medium may be a non-volatile computer-readable storage medium. Examples may include, but are not limited to: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
For example, according to embodiments of the present disclosure, a computer-readable storage medium may include the ROM 902 and/or the RAM 903 described above and/or one or more memories other than the ROM 902 and the RAM 903.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Those skilled in the art will appreciate that the features recited in the various embodiments and/or claims of the present disclosure may be combined or incorporated in various ways, even if such combinations or incorporations are not expressly recited in the present disclosure. In particular, the features recited in the various embodiments and/or claims of the present disclosure may be combined or incorporated in various ways without departing from the spirit or teaching of the present disclosure. All such combinations and/or incorporations fall within the scope of the present disclosure.
The embodiments of the present disclosure have been described above. However, these examples are for illustrative purposes only and are not intended to limit the scope of the present disclosure. Although the embodiments are described separately above, this does not mean that the measures in the embodiments cannot be used in advantageous combination. The scope of the disclosure is defined by the appended claims and equivalents thereof. Various alternatives and modifications can be devised by those skilled in the art without departing from the scope of the present disclosure, and such alternatives and modifications are intended to be within the scope of the present disclosure.

Claims (15)

1. A method for finding co-occurrence objects comprises the following steps:
determining a target object of a plurality of objects, wherein the target object has target characteristic information;
determining, based on the target characteristic information, at least one space-time point associated with the target object from among a plurality of space-time points, wherein at least some of the plurality of space-time points are associated with different ones of the plurality of objects based on different characteristic information; and
based on the objects associated with the at least one space-time point, finding a first co-occurrence object of the plurality of objects that occurs in the same space-time as the target object.
2. The method of claim 1, wherein the determining, based on the target characteristic information, at least one space-time point associated with the target object from among a plurality of space-time points comprises:
determining a target time range; and
taking, as the at least one space-time point, all space-time points at which the target characteristic information is acquired within the target time range.
3. The method of claim 2, wherein the taking, as the at least one space-time point, all space-time points at which the target characteristic information is acquired within the target time range comprises:
obtaining a data graph, wherein the data graph comprises a set of acquirer nodes, a set of acquired nodes, and edges connected between acquirer nodes and corresponding acquired nodes,
each acquirer node represents a space-time point, each acquired node represents an object, each edge describes the association relationship between the connected acquirer node and the acquired node through corresponding edge attribute information, and the edge attribute information comprises acquired data and acquisition time;
searching, by using the data graph, for a target edge whose edge attribute information satisfies preset conditions, wherein the preset conditions include that the acquisition time falls within the target time range and the acquired data is the target characteristic information; and
taking the space-time point represented by the acquirer node connected with the target edge as the at least one space-time point.
4. The method of claim 3, wherein each space-time point of the plurality of space-time points corresponds to a plurality of acquisition devices, each for data acquisition of a particular type of characteristic information.
5. The method of claim 3, further comprising: for an acquirer node connected with M edges in the data graph, determining the space-time point represented by the acquirer node, wherein M is greater than or equal to M0, M0 denotes a preset value, and M and M0 are both integers;
dividing the space-time point represented by the acquirer node into a plurality of sub-space-time points in a time dimension and/or a space dimension; and
modifying the data graph based on the plurality of sub-space-time points.
6. The method of claim 3, further comprising: for an acquirer node connected with M edges in the data graph, determining the space-time point represented by the acquirer node, wherein M is greater than or equal to M0, M0 denotes a preset value, and M and M0 are both integers;
classifying the M edges based on edge attribute information;
dividing the space-time point represented by the acquirer node into a plurality of sub-space-time points based on the edge classification result; and
modifying the data graph based on the plurality of sub-space-time points.
7. The method of claim 1, wherein the finding, based on the objects associated with the at least one space-time point, a first co-occurrence object of the plurality of objects that occurs in the same space-time as the target object comprises:
taking all objects associated with the at least one space-time point as the first co-occurrence object; or
determining at least one first object of the objects associated with the at least one space-time point as the first co-occurrence object; or
determining at least one second object of the objects associated with the at least one space-time point as the first co-occurrence object;
wherein:
each first object is associated with a corresponding space-time point in the at least one space-time point through the characteristic information which is the same as the target characteristic information;
each second object is associated with a corresponding space-time point of the at least one space-time point through characteristic information of a different class from the target characteristic information.
8. The method of claim 1, further comprising: after the first co-occurrence object is located,
determining other ones of the plurality of space-time points associated with the at least one space-time point; and
based on the other space-time points, finding a second co-occurrence object of the plurality of objects that occurs in the same space-time as the target object;
wherein the other space-time points include any one of:
a space-time point adjacent to the at least one space-time point in the time domain and identical thereto in the space domain;
a space-time point identical to the at least one space-time point in the time domain and adjacent thereto in the space domain;
a space-time point adjacent to the at least one space-time point both in the time domain and in the space domain.
9. The method of claim 1, further comprising: after the first co-occurrence object is located,
screening out, from the first co-occurrence objects, a third co-occurrence object whose number of co-occurrences is greater than or equal to N, wherein N is an integer and N is greater than or equal to 2.
10. The method of claim 9, further comprising:
filtering out, from the third co-occurrence objects, objects that co-occur multiple times within multiple adjacent regions of the spatial domain.
11. The method of claim 9, further comprising:
extracting, from the third co-occurrence objects, a fourth co-occurrence object whose movement trajectory coincides with that of the target object.
12. The method of claim 1, wherein the different characteristic information comprises characteristic information of different types or different characteristic information of the same type.
13. A co-occurrence object finding apparatus, comprising:
an object determination module for determining a target object of a plurality of objects, wherein the target object has target characteristic information;
a space-time point determination module configured to determine, based on the target characteristic information, at least one space-time point associated with the target object from among a plurality of space-time points, wherein at least some of the plurality of space-time points are associated with different ones of the plurality of objects based on different characteristic information; and
a finding module configured to find, based on the objects associated with the at least one space-time point, a first co-occurrence object of the plurality of objects that occurs in the same space-time as the target object.
14. An electronic device, comprising:
one or more processors; and
a memory for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any of claims 1-12.
15. A computer readable storage medium having stored thereon executable instructions which, when executed by a processor, cause the processor to carry out the method of any one of claims 1 to 12.
CN202010616296.5A 2020-06-30 2020-06-30 Co-occurrence object searching method and device Active CN111767432B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010616296.5A CN111767432B (en) 2020-06-30 2020-06-30 Co-occurrence object searching method and device

Publications (2)

Publication Number Publication Date
CN111767432A true CN111767432A (en) 2020-10-13
CN111767432B CN111767432B (en) 2024-04-02

Family

ID=72724302

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010616296.5A Active CN111767432B (en) 2020-06-30 2020-06-30 Co-occurrence object searching method and device

Country Status (1)

Country Link
CN (1) CN111767432B (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060184572A1 (en) * 2005-02-11 2006-08-17 Microsoft Corporation Sampling method for estimating co-occurrence counts
US20120315920A1 (en) * 2011-06-10 2012-12-13 International Business Machines Corporation Systems and methods for analyzing spatiotemporally ambiguous events
US20170053013A1 (en) * 2015-08-18 2017-02-23 Facebook, Inc. Systems and methods for identifying and grouping related content labels
CN108256032A * 2018-01-11 2018-07-06 天津大学 Method and device for visualizing co-occurrence patterns of spatio-temporal data
CN109241912A * 2018-09-08 2019-01-18 河南大学 Brain-inspired cross-media intelligent target recognition method for unmanned autonomous systems
CN110059668A (en) * 2019-04-29 2019-07-26 中国民用航空总局第二研究所 Behavior prediction processing method, device and electronic equipment
CN111190939A (en) * 2019-12-27 2020-05-22 深圳市优必选科技股份有限公司 User portrait construction method and device
CN111324643A (en) * 2020-03-30 2020-06-23 北京百度网讯科技有限公司 Knowledge graph generation method, relation mining method, device, equipment and medium

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
LIU Jie; ZHANG Jian: "Discussion on Patterns in Spatio-temporal Data Mining", Modern Surveying and Mapping, no. 03, 25 May 2017 (2017-05-25) *
XU Meidan; ZHANG Hang: "Knowledge Graph Analysis of Core Competencies Based on CiteSpace", Higher Education of Sciences, no. 02, 20 April 2018 (2018-04-20) *
WEN Na; ZHANG Yingzhuo; CHEN Da: "Composition and Interactive Construction of Attribute Association Relationships of Multi-granularity Spatio-temporal Objects", Geomatics World, no. 02, 25 April 2018 (2018-04-25) *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112328658A (en) * 2020-11-03 2021-02-05 北京百度网讯科技有限公司 User profile data processing method, device, equipment and storage medium
CN112328658B (en) * 2020-11-03 2023-08-08 北京百度网讯科技有限公司 User profile data processing method, device, equipment and storage medium
CN113286267A (en) * 2021-07-23 2021-08-20 深圳知帮办信息技术开发有限公司 Stream modulation method, system and storage medium for internet communication in high-speed state
CN114092868A * 2021-09-24 2022-02-25 山东高速建设管理集团有限公司 Person and vehicle traceability monitoring and management system and method

Also Published As

Publication number Publication date
CN111767432B (en) 2024-04-02

Similar Documents

Publication Publication Date Title
TWI743987B (en) Behavioral analysis methods, electronic devices and computer storage medium
CN110826594B (en) Track clustering method, equipment and storage medium
CN111767432B (en) Co-occurrence object searching method and device
US20190171668A1 (en) Distributed video storage and search with edge computing
CN102843547B (en) Intelligent tracking method and system for suspected target
US9280833B2 (en) Topology determination for non-overlapping camera network
CN108091140B (en) Method and device for determining fake-licensed vehicle
US20200042657A1 (en) Multi-dimensional event model generation
US20140343984A1 (en) Spatial crowdsourcing with trustworthy query answering
CN110866642A (en) Security monitoring method and device, electronic equipment and computer readable storage medium
CN105246033A (en) Terminal location-based crowd status monitoring method and monitoring device
CN111477007A (en) Vehicle checking, controlling, analyzing and managing system and method
Chen et al. Discovering urban traffic congestion propagation patterns with taxi trajectory data
Anedda et al. A social smart city for public and private mobility: A real case study
US20180150683A1 (en) Systems, methods, and devices for information sharing and matching
CN101901340A (en) Suspect tracking method and system
US11416542B1 (en) System and method for uploading still images of matching plates in response to an alert hit list using distributed LPR acquisition
Wei et al. Enhancing local live tweet stream to detect news
CN111274283A (en) Track display method and device
US10341617B2 (en) Public safety camera identification and monitoring system and method
US10506201B2 (en) Public safety camera identification and monitoring system and method
Cecaj et al. Data fusion for city life event detection
Kwee et al. Traffic-cascade: Mining and visualizing lifecycles of traffic congestion events using public bus trajectories
Wei et al. Delle: Detecting latest local events from geotagged tweets
CN113065016A (en) Offline store information processing method, device, equipment and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant