CN111767432B - Co-occurrence object searching method and device - Google Patents

Info

Publication number: CN111767432B
Application number: CN202010616296.5A
Authority: CN (China)
Prior art keywords: space, time, spatiotemporal, target, objects
Legal status: Active
Other languages: Chinese (zh)
Other versions: CN111767432A
Inventors: 谢奕, 张阳
Current and original assignee: Beijing Baidu Netcom Science and Technology Co Ltd
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd; priority to CN202010616296.5A; published as CN111767432A, granted and published as CN111767432B

Classifications

    (All under G06F — electric digital data processing; G06F16/00 — information retrieval, database structures therefor; G06F16/70 — of video data.)
    • G06F16/783 — Retrieval characterised by using metadata automatically derived from the content
    • G06F16/75 — Clustering; Classification
    • G06F16/7837 — Retrieval using metadata automatically derived from the content, using objects detected or recognised in the video content
    • G06F16/784 — Retrieval using objects detected or recognised in the video content, the detected or recognised objects being people
    • G06F16/787 — Retrieval characterised by using geographical or spatial information, e.g. location


Abstract

The disclosure provides a method for searching for co-occurrence objects, relating to the field of spatiotemporal big data in artificial intelligence and big data technology. The method comprises the following steps: determining a target object among a plurality of objects, wherein the target object has target feature information; determining, based on the target feature information, at least one spatiotemporal point associated with the target object from among a plurality of spatiotemporal points, wherein respective spatiotemporal points of the plurality of spatiotemporal points are associated with different objects of the plurality of objects based on different feature information; and searching, based on the objects associated with the at least one spatiotemporal point, for a first co-occurrence object among the plurality of objects that appears in the same space-time as the target object. The disclosure also provides a search apparatus for co-occurrence objects, an electronic device, and a computer-readable storage medium.

Description

Co-occurrence object searching method and device
Technical Field
The present disclosure relates to the field of artificial intelligence, and in particular to the field of spatiotemporal big data in big data technology. Specifically, the disclosure provides a method and a device for searching co-occurrence objects.
Background
In some everyday situations, it is often necessary to search for co-occurring people, mobile phones, vehicles, and the like based on monitoring data, and thereby enrich the clues about a relevant event.
Currently, existing co-occurrence object searching schemes either find co-occurrence relationships only within homologous (single-source) data based on a single kind of data — for example, finding people with a co-occurrence relationship only within mobile-phone monitoring data based on a user's mobile-phone information — or must search for co-occurrence relationships under multi-source data based on multiple kinds of data, for example finding co-occurring people based on a user's mobile-phone information while also finding co-occurring vehicles based on other personal information of the user.
However, the inventors found that existing co-occurrence object searching schemes cover few scenarios, impose high data-acquisition-rate requirements on the monitoring equipment, or cannot directly find co-occurrence relationships under multi-source data based on a single kind of data, resulting in low recognition accuracy.
Disclosure of Invention
In view of this, the present disclosure provides a method and apparatus for searching co-occurrence objects that improves the accuracy of searching by using a spatio-temporal big data technique.
One aspect of the present disclosure provides a method for searching for co-occurrence objects, including: determining a target object in a plurality of objects, wherein the target object has target characteristic information; determining at least one spatiotemporal point associated with the target object from a plurality of spatiotemporal points based on the target feature information, wherein at least some spatiotemporal points of the plurality of spatiotemporal points are associated with different objects from the plurality of objects based on different feature information; and searching for a first co-occurrence object of the plurality of objects that occurs in the same space-time as the target object based on the object associated with the at least one space-time point.
According to an embodiment of the present disclosure, determining, based on the target feature information, at least one spatiotemporal point associated with the target object from among a plurality of spatiotemporal points includes: determining a target time range; and taking, as the at least one spatiotemporal point, all spatiotemporal points at which the target feature information is acquired within the target time range.
According to an embodiment of the present disclosure, taking all spatiotemporal points at which the target feature information is acquired within the target time range as the at least one spatiotemporal point includes: acquiring a data map, wherein the data map comprises a set of collector nodes, a set of collected-object nodes, and edges connecting collector nodes with their corresponding collected-object nodes; each collector node represents a spatiotemporal point, each collected-object node represents an object, and each edge describes the association between the two connected nodes through corresponding edge attribute information, the edge attribute information comprising the collected data and the collection time; searching the data map for target edges whose edge attribute information satisfies a preset condition, the preset condition being that the collection time is within the target time range and the collected data is the target feature information; and taking the spatiotemporal points represented by the collector nodes connected to the target edges as the at least one spatiotemporal point.
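The data-map lookup described above can be sketched as follows. This is a minimal illustration under assumed names and types (the `Edge` layout and the string identifiers are hypothetical); the patent does not specify an implementation:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Edge:
    """An edge connecting a collector node (spatiotemporal point) to a collected-object node."""
    st_point: str        # spatiotemporal point id, e.g. "region1@08:00-08:10"
    obj: str             # object id, e.g. a person/phone/vehicle identifier
    collected_data: str  # the feature information captured at this point
    collect_time: float  # acquisition timestamp

def find_spatiotemporal_points(edges, target_feature, t_start, t_end):
    """Return spatiotemporal points whose edge attributes satisfy the preset
    condition: collection time within [t_start, t_end] and collected data
    equal to the target feature information."""
    return {e.st_point for e in edges
            if t_start <= e.collect_time <= t_end
            and e.collected_data == target_feature}

def find_first_cooccurrence(edges, st_points, target_obj):
    """All objects associated with any matched spatiotemporal point,
    excluding the target object itself."""
    return {e.obj for e in edges if e.st_point in st_points} - {target_obj}
```

For example, a target identified by `"phone:123"` within a time window first yields the matching spatiotemporal points, and the remaining objects attached to those points are the first co-occurrence objects.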
According to an embodiment of the present disclosure, each of the plurality of spatiotemporal points corresponds to a plurality of acquisition devices, each acquisition device collecting data for a specific type of feature information.
According to an embodiment of the present disclosure, the method further comprises: for a collector node connected with M edges in the data map, where M ≥ M₀ and M₀ is a preset integer value, determining the spatiotemporal point represented by that collector node; dividing the spatiotemporal point represented by the collector node into a plurality of sub-spatiotemporal points in the time dimension and/or the space dimension; and modifying the data map based on the plurality of sub-spatiotemporal points.
According to an embodiment of the present disclosure, the method further comprises: for a collector node connected with M edges in the data map, where M ≥ M₀ and M₀ is a preset integer value, determining the spatiotemporal point represented by that collector node; classifying the M edges based on their edge attribute information; dividing the spatiotemporal point represented by the collector node into a plurality of sub-spatiotemporal points based on the edge classification result; and modifying the data map based on the plurality of sub-spatiotemporal points.
According to an embodiment of the present disclosure, searching, based on the objects associated with the at least one spatiotemporal point, for a first co-occurrence object among the plurality of objects that appears in the same space-time as the target object includes one of the following: taking all objects associated with the at least one spatiotemporal point as the first co-occurrence objects; taking at least one first object among the objects associated with the at least one spatiotemporal point as the first co-occurrence object; or taking at least one second object among the objects associated with the at least one spatiotemporal point as the first co-occurrence object; wherein each first object is associated with a corresponding spatiotemporal point through feature information of the same class as the target feature information, and each second object is associated with a corresponding spatiotemporal point through feature information of a different class from the target feature information.
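The three selection modes above ("all", same-class "first objects", different-class "second objects") can be sketched as follows. This is a minimal sketch; the tuple layout `(st_point, obj, feature_class)` is an assumed representation of the edge attributes:

```python
def select_cooccurrence(edges, st_points, target_obj, target_class, mode="all"):
    """Pick first co-occurrence objects at the matched spatiotemporal points.

    mode="all"       -> every associated object
    mode="same"      -> objects linked via feature info of the same class
                        as the target ("first objects")
    mode="different" -> objects linked via a different feature class
                        ("second objects")
    Each edge is a tuple (st_point, obj, feature_class).
    """
    result = set()
    for st_point, obj, feature_class in edges:
        if st_point not in st_points or obj == target_obj:
            continue
        if (mode == "all"
                or (mode == "same" and feature_class == target_class)
                or (mode == "different" and feature_class != target_class)):
            result.add(obj)
    return result
```

The "different" mode is what lets a single kind of query data (say, phone information) surface co-occurring objects captured through other kinds of data (say, vehicle plates) at the same spatiotemporal point.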
According to an embodiment of the present disclosure, the method further comprises: after the first co-occurrence object is found, determining other spatiotemporal points, among the plurality of spatiotemporal points, that are associated with the at least one spatiotemporal point; and searching, based on those other spatiotemporal points, for a second co-occurrence object among the plurality of objects that appears in the same space-time as the target object; wherein the other spatiotemporal points include any of the following: spatiotemporal points adjacent to the at least one spatiotemporal point in the time domain and identical in the space domain; spatiotemporal points identical to the at least one spatiotemporal point in the time domain and adjacent in the space domain; and spatiotemporal points adjacent to the at least one spatiotemporal point in both the time domain and the space domain.
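The neighborhood expansion just described can be sketched as follows. This is a minimal sketch; keying a virtual spatiotemporal point as `(region, time_slot)` and storing spatial adjacency in a `region_adjacency` map are assumptions, not details from the patent:

```python
def neighbor_points(st_point, region_adjacency, max_slot):
    """Expand one virtual spatiotemporal point (region, time_slot) to its
    neighbors: same region with an adjacent time slot, an adjacent region
    with the same time slot, or adjacent in both dimensions."""
    region, slot = st_point
    time_neighbors = [s for s in (slot - 1, slot + 1) if 0 <= s <= max_slot]
    space_neighbors = region_adjacency.get(region, [])
    out = set()
    out.update((region, s) for s in time_neighbors)   # adjacent time, same region
    out.update((r, slot) for r in space_neighbors)    # same time, adjacent region
    out.update((r, s) for r in space_neighbors
               for s in time_neighbors)               # adjacent in both
    return out
```

Running the same co-occurrence lookup over these neighbor points yields the second co-occurrence objects, which tolerates imprecise time or location information in the query.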
According to an embodiment of the present disclosure, the method further comprises: after the first co-occurrence objects are found, screening, from the first co-occurrence objects, third co-occurrence objects whose co-occurrence count is greater than or equal to N, where N is an integer and N ≥ 2.
According to an embodiment of the present disclosure, further comprising: and filtering out the objects which co-occur in a plurality of adjacent regions in the spatial domain in the third co-occurrence object.
According to an embodiment of the present disclosure, further comprising: and extracting a fourth co-occurrence object with the movement track coincident from the third co-occurrence object.
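The count-based screening of third co-occurrence objects can be sketched as follows. This is a minimal sketch; representing the results as a mapping from each matched spatiotemporal point to the set of objects seen there is an assumption:

```python
from collections import Counter

def screen_by_count(cooccurrences, n=2):
    """Keep objects that co-occur with the target at n (n >= 2) or more
    distinct spatiotemporal points ("third co-occurrence objects").
    `cooccurrences` maps each matched spatiotemporal point to the set of
    objects observed there."""
    counts = Counter(obj for objs in cooccurrences.values() for obj in set(objs))
    return {obj for obj, c in counts.items() if c >= n}
```

The adjacent-region filtering and movement-track matching described above would then run over this reduced set, which is typically far smaller than the raw first co-occurrence set.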
According to an embodiment of the present disclosure, the different feature information includes different types of feature information or different feature information of the same type.
Another aspect of the present disclosure provides a search apparatus for co-occurrence objects, including: an object determining module for determining a target object among a plurality of objects, wherein the target object has target feature information; a spatiotemporal point determining module for determining, based on the target feature information, at least one spatiotemporal point associated with the target object from among a plurality of spatiotemporal points, wherein each spatiotemporal point of the plurality of spatiotemporal points is associated with objects of the plurality of objects based on different feature information; and a search module for searching, based on the objects associated with the at least one spatiotemporal point, for a first co-occurrence object among the plurality of objects that appears in the same space-time as the target object.
Another aspect of the present disclosure provides an electronic device, comprising: one or more processors; and a memory for storing one or more programs, wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the methods of embodiments of the present disclosure.
Another aspect of the present disclosure provides a computer-readable storage medium storing computer-executable instructions that, when executed, are configured to implement a method of an embodiment of the present disclosure.
Another aspect of the present disclosure provides a computer program product comprising a computer program for implementing the above-described method of the embodiments of the present disclosure when being executed by a processor.
According to the embodiments of the present disclosure, spatiotemporal points are introduced, and each spatiotemporal point may be defined as a virtual body having a specific time attribute (e.g., corresponding to a time range) and a specific spatial attribute (e.g., corresponding to an area range). Each virtual body may collect data on the objects (e.g., people, electronic devices, vehicles, etc.) appearing in its area range within its time range through the various monitoring devices (e.g., cameras, base stations, etc.) deployed in that area, so that different objects form various associations with the virtual body based on the collected data (e.g., associations based on mobile-phone information, associations based on vehicle information, etc.). Thereby, the embodiments of the present disclosure can find co-occurrence objects directly under multi-source data (i.e., at the spatiotemporal points) based on only a single kind of data, and can recognize co-occurrence objects more accurately.
Drawings
The above and other objects, features and advantages of the present disclosure will become more apparent from the following description of embodiments thereof with reference to the accompanying drawings in which:
FIG. 1 schematically illustrates an exemplary system architecture to which a method and apparatus for finding co-occurrence objects may be applied, according to an embodiment of the present disclosure;
FIGS. 2A-2C schematically illustrate exemplary application scenarios in which methods and apparatus for finding co-occurrence objects may be applied according to embodiments of the present disclosure;
FIG. 3 schematically illustrates a flow chart of a method of finding co-occurrence objects according to an embodiment of the disclosure;
FIGS. 4A-4C schematically illustrate schematic diagrams of setting spatiotemporal points according to embodiments of the present disclosure;
FIG. 5 schematically illustrates a schematic diagram of determining spatiotemporal points associated with a particular object based on a data map in accordance with an embodiment of the disclosure;
FIGS. 6A and 6B schematically illustrate diagrams of optimizing a search range according to embodiments of the present disclosure;
FIGS. 7A-7C schematically illustrate schematic diagrams of optimizing large spatiotemporal points according to embodiments of the present disclosure;
FIG. 8 schematically illustrates a block diagram of a lookup apparatus of co-occurrence objects according to an embodiment of the disclosure; and
fig. 9 schematically illustrates a block diagram of an electronic device adapted to implement a method and apparatus for finding co-occurrence objects according to an embodiment of the disclosure.
Detailed Description
Hereinafter, embodiments of the present disclosure will be described with reference to the accompanying drawings. It should be understood that the description is only exemplary and is not intended to limit the scope of the present disclosure. In the following detailed description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the present disclosure. It may be evident, however, that one or more embodiments may be practiced without these specific details. In addition, in the following description, descriptions of well-known structures and techniques are omitted so as not to unnecessarily obscure the concepts of the present disclosure.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. The terms "comprises," "comprising," and/or the like, as used herein, specify the presence of stated features, steps, operations, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, or components.
All terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art unless otherwise defined. It should be noted that the terms used herein should be construed to have meanings consistent with the context of the present specification and should not be construed in an idealized or overly formal manner.
Where an expression like "at least one of A, B, and C" is used, it should generally be interpreted according to its meaning as commonly understood by those skilled in the art (e.g., "a system having at least one of A, B, and C" shall include, but not be limited to, systems having A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.).
The embodiments of the present disclosure provide a method for searching for co-occurrence objects and a search apparatus to which the method may be applied. The method may include, for example, first determining a target object among a plurality of objects, wherein the target object has target feature information; then determining, based on the target feature information, at least one spatiotemporal point associated with the target object from among a plurality of spatiotemporal points, wherein respective spatiotemporal points of the plurality of spatiotemporal points are associated with different objects of the plurality of objects based on different feature information; and further searching, based on the objects associated with the at least one spatiotemporal point, for a first co-occurrence object among the plurality of objects that appears in the same space-time as the target object.
The disclosure will be described in detail below with reference to the attached drawings and exemplary embodiments.
FIG. 1 schematically illustrates an exemplary system architecture to which a co-occurrence object lookup method may be applied, according to an embodiment of the present disclosure. It should be noted that fig. 1 is only an example of a system architecture to which embodiments of the present disclosure may be applied to assist those skilled in the art in understanding the technical content of the present disclosure, but does not mean that embodiments of the present disclosure may not be used in other devices, systems, environments, or scenarios.
As shown in fig. 1, a system architecture 100 according to this embodiment may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 is used as a medium to provide communication links between the terminal devices 101, 102, 103 and the server 105. The network 104 may include various connection types, such as wired and/or wireless communication links, and the like.
The user may interact with the server 105 via the network 104 using the terminal devices 101, 102, 103 to receive or send messages or the like. Various communication client applications, such as a web browser application, a search class application, an instant messaging tool, a mailbox client and/or social platform software, etc., may be installed on the terminal devices 101, 102, 103, as just examples.
The terminal devices 101, 102, 103 may be a variety of electronic devices having a display screen and supporting web browsing, including but not limited to smartphones, tablets, laptop and desktop computers, and the like.
The server 105 may be a server providing various services, such as a background management server (by way of example only) providing big data support for content queried by the user using the terminal devices 101, 102, 103. The background management server may analyze and process the received data such as the user request, and feed back the processing result (for example, the data obtained according to the user request) to the terminal device.
It should be noted that, the method for searching for co-occurrence objects provided by the embodiments of the present disclosure may be generally performed by the server 105. Accordingly, the lookup device of the co-occurrence object provided by the embodiments of the present disclosure may be generally disposed in the server 105. The method for searching for co-occurrence objects provided by the embodiments of the present disclosure may also be performed by a server or a cluster of servers that are different from the server 105 and that are capable of communicating with the terminal devices 101, 102, 103 and/or the server 105. Accordingly, the search apparatus for co-occurrence objects provided by the embodiments of the present disclosure may also be provided in a server or a server cluster that is different from the server 105 and is capable of communicating with the terminal devices 101, 102, 103 and/or the server 105. Alternatively, the method for searching for co-occurrence objects provided by the embodiments of the present disclosure may be performed by the terminal device 101, 102, or 103, or may be performed by another terminal device different from the terminal device 101, 102, or 103. Accordingly, the search apparatus for co-occurrence objects provided by the embodiments of the present disclosure may also be provided in the terminal device 101, 102, or 103, or in another terminal device different from the terminal device 101, 102, or 103.
For example, a data map constructed based on the multi-source monitoring data may be stored in any one of the terminal devices 101, 102, or 103 (for example, but not limited to, the terminal device 101), or stored on an external storage device and imported into the terminal device 101. The terminal device 101 may then locally perform the co-occurrence object searching method provided by the embodiments of the present disclosure, or send the data map to other terminal devices, servers, or server clusters, with the receiving terminal device, server, or server cluster performing the method.
It should be understood that the number of terminal devices, networks and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
It should be noted that, the co-occurrence object searching scheme provided by the embodiment of the present disclosure may be used for searching for co-occurrence of personnel/mobile phone/vehicle and the like in various application scenarios.
Fig. 2A to 2C schematically illustrate exemplary application scenarios in which a search method and apparatus of co-occurrence objects can be applied according to an embodiment of the present disclosure. It should be noted that fig. 2A to 2C are only examples of application scenarios to which the embodiments of the present disclosure may be applied, to help those skilled in the art understand the technical content of the embodiments of the present disclosure, but are not meant to imply that the embodiments of the present disclosure may not be used in other scenarios.
As shown in fig. 2A, for example, in a public-security scenario, after a criminal suspect is caught, other involved persons and/or involved vehicles in the case may be found based on the time and place of the suspect's crime and on a large amount of monitoring data (such as a data map).
As shown in fig. 2B, for example, in an infectious-disease contact-tracing scenario, after learning of a diagnosed case such as Li Si, the direct or indirect close contacts of Li Si may be found based on Li Si's activity track over the preceding one to two weeks and on a large amount of monitoring data (such as a data map).
As shown in fig. 2C, for example, in a traffic-accident scenario, after learning of the victim Wang, the witnesses of the accident and/or the offending driver and vehicle may be found based on the time and place of the accident and on a large amount of monitoring data (such as a data map).
In addition, the embodiments of the present disclosure may also provide an effective co-occurrence object mining approach in a "co-occurrence" decision scenario for multiple data fusion, which is not illustrated herein.
In the process of implementing the inventive concept of the present disclosure, the inventors found that, in the related art, co-occurrence objects can be searched for only under homologous data (e.g., the mobile-phone information of other objects) based on a single kind of data (e.g., the mobile-phone information of a certain object); co-occurrence objects cannot be found under multi-source data based on a single kind of data. Such co-occurrence searching schemes cover few scenarios and impose high data-acquisition-rate requirements on the monitoring equipment (such as base stations and routers). Furthermore, the inventors found that if co-occurrence objects are to be found under multi-source data (such as mobile-phone information and vehicle information), they must be searched for separately under each kind of monitoring data (e.g., once based on mobile-phone information and once based on vehicle information), and the separate results must then be aggregated to finally obtain the desired co-occurrence objects. With such search methods, once the selected time information and spatial information are not accurate enough, the accuracy of the search results suffers.
Based on this, one of the inventive concepts of the embodiments of the present disclosure is to provide a processing method that can directly find co-occurrence objects under multi-source data based on any single kind of monitoring data. This method is not affected by the selection of time information and spatial information, and can find co-occurrence objects more accurately than the prior art.
Fig. 3 schematically illustrates a flow chart of a method of finding co-occurrence objects according to an embodiment of the disclosure.
As shown in FIG. 3, the method may include, for example, operations S302, S304, and S306.
In operation S302, a target object of a plurality of objects may be determined. The target object has target feature information, and may have other feature information.
At least one spatiotemporal point of the plurality of spatiotemporal points associated with the target object may be determined based on the target feature information in operation S304. Wherein at least some of the plurality of spatiotemporal points (e.g., each spatiotemporal point, or a portion of the spatiotemporal points) may be associated with different ones of the plurality of objects based on different characteristic information.
In operation S306, a first co-occurrence object of the plurality of objects that appears in the same space-time as the target object may be found based on the objects associated with the at least one spatiotemporal point.
The method shown in fig. 3 is described in detail below with reference to fig. 4A to 4C in conjunction with specific embodiments.
In embodiments of the present disclosure, the spatiotemporal points may be set with reference to the schematic of fig. 4A-4C. Specifically, a plurality of spatiotemporal points may be preset for a specified region. It should be understood that the area may be arbitrarily specified, and the embodiments of the present disclosure are not limited herein. For example, a global area may be specified, or one or several continents (such as asia) may be specified, or one or several countries (or cities, urban areas, streets/towns, etc.) may be specified.
For example, as shown in fig. 4A, the designated area 400 may be divided into N regions, region 1 through region N, where N is any integer equal to or greater than 2. Each of regions 1 through N may represent one real spatiotemporal point. A real spatiotemporal point can be represented by the expression [region, T1-T2], where "region" denotes the position attribute information of the real spatiotemporal point and "T1-T2" denotes its time attribute information. That is, the real spatiotemporal point collects monitoring data, through several monitoring devices (such as cameras, base stations, and WiFi devices) placed in the "region", on the objects (e.g., people, electronic devices, vehicles, etc.) appearing in the "region" within the time range "T1-T2". For example, assuming region 1 represents the "area around Garden Bridge", the real spatiotemporal point represented by region 1 may collect monitoring data on the people, vehicles, and so on appearing in that area over the last day through the cameras, WiFi devices, and the like deployed there.
Since a real spatiotemporal point generally collects monitoring data at a high frequency, it can accumulate a large amount of monitoring data within a very short time range. As a result, one real spatiotemporal point may become associated with a great many objects, which makes co-occurrence objects difficult to find.
Based on this, a real spatiotemporal point may be further divided into a plurality of virtual spatiotemporal points in the time dimension, so that the many objects are distributed across different virtual spatiotemporal points according to the time at which their monitoring data was acquired, which facilitates the search for co-occurrence objects.
For example, as shown in fig. 4B, the real spatiotemporal point X represented by any one of regions 1 through N may be further divided into virtual spatiotemporal points 1 through n as shown in the figure, where X is any integer from 1 to N and n is any integer greater than or equal to 2. The time ranges corresponding to virtual spatiotemporal points 1 through n do not overlap one another, and their union equals the time range corresponding to real spatiotemporal point X. The starting point of the time range of virtual spatiotemporal point 1 is the same as that of real spatiotemporal point X, and the end point of the time range of virtual spatiotemporal point n is the same as that of real spatiotemporal point X. The position attribute information of virtual spatiotemporal points 1 through n is identical, namely region X in every case.
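As a concrete sketch of this time-dimension split (the class name, integer-minute time encoding, and rounding scheme are illustrative assumptions, not part of the patent):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class STPoint:
    region: str  # position attribute information
    start: int   # start of time range, e.g. minutes since midnight
    end: int     # end of time range (exclusive)

def split_real_point(real: STPoint, n: int) -> list:
    """Divide a real spatiotemporal point into n virtual spatiotemporal
    points: same region, non-overlapping contiguous time ranges whose
    union equals the real point's time range."""
    if n < 2:
        raise ValueError("a real point is split into at least 2 virtual points")
    step = (real.end - real.start) / n
    return [STPoint(real.region,
                    real.start + round(i * step),
                    real.start + round((i + 1) * step))
            for i in range(n)]
```

The rounding keeps the sub-ranges contiguous even when the real point's duration is not an exact multiple of n.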
Note that the "spatiotemporal points" mentioned in operations S302, S304, and S306, and in the other embodiments described below, each denote a "virtual spatiotemporal point" unless otherwise specified.
In an embodiment of the present disclosure, each of the "plurality of spatiotemporal points" mentioned in operation S304 may correspond to a plurality of acquisition devices (also referred to as monitoring devices) for data acquisition for a specific type of characteristic information, respectively.
For example, as shown in fig. 4C, each of the virtual spatiotemporal points 1 through n corresponds to a camera, a base station, and a WiFi device disposed within "region X". The camera disposed within "region X" can capture images of the persons and vehicles that appear there, while the WiFi device and base station disposed within "region X" can scan for appearing electronic devices, such as cell phones, to acquire device information. In this way, each person may be associated with a virtual spatiotemporal point through the acquired face image and device information together with the corresponding acquisition time. Likewise, each vehicle may be associated with a virtual spatiotemporal point through the acquired vehicle snapshot image and the corresponding acquisition time.
In the embodiment of the disclosure, in order to facilitate searching for co-occurrence objects, after monitoring data of any object is collected, each virtual space-time point may establish an association relationship with the collected object based on the collected monitoring data and the collection time. In this way, each virtual space-time point can establish an association relationship with different types of objects based on the monitoring data collected by different types of monitoring devices. And each virtual space-time point can also establish an association relationship with different objects of the same type based on the monitoring data collected by the monitoring equipment of the same type.
Therefore, in the above-described operations S302, S304, and S306, when searching for co-occurrence objects of the target object, the target object may first be determined from the plurality of objects for which monitoring data has been acquired, based on object information (e.g., an object ID or the name of the object) input by the user. Then, based on one or more pieces of specific characteristic information of the target object (i.e., target characteristic information, such as mobile phone information or a face image), at least one spatiotemporal point at which relevant monitoring data has been acquired for that target characteristic information (for example, all spatiotemporal points at which monitoring data has been acquired for the target object) is determined from the plurality of preset virtual spatiotemporal points. Finally, all objects for which monitoring data has been acquired by the at least one spatiotemporal point are determined, and a co-occurrence object (the first co-occurrence object) that appears in the same space-time as the target object is searched for among those objects.
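The three-operation lookup flow described above might be sketched as follows; the flat record layout and all identifiers are hypothetical stand-ins for the monitoring data:

```python
# Each record: (object_id, feature_type, virtual spatiotemporal point label).
RECORDS = [
    ("wang_wu",   "phone",   "virtual_1"),
    ("zhang_san", "phone",   "virtual_1"),
    ("plate_b1",  "vehicle", "virtual_1"),
    ("wang_wu",   "phone",   "virtual_2"),
    ("li_si",     "phone",   "virtual_3"),
]

def find_first_co_occurrence(records, target, target_feature):
    # S304: spatiotemporal points where the target's feature was captured
    points = {p for obj, feat, p in records if obj == target and feat == target_feature}
    # S306: every other object associated with at least one of those points
    return {obj for obj, _feat, p in records if p in points and obj != target}
```

Note that the result can cross data sources: an object captured by a camera can co-occur with a target located via phone data, as long as they share a virtual spatiotemporal point.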
In one embodiment of the present disclosure, when determining the objects for which monitoring data has been acquired by the at least one spatiotemporal point, all objects for which the at least one spatiotemporal point has acquired monitoring data based on different characteristic information may be included. It should be understood that the different characteristic information may include, for example, characteristic information of different types (e.g., mobile phone information and vehicle information) or different characteristic information of the same type (e.g., the mobile phone information of a first object and the mobile phone information of a second object).
Also, in one embodiment of the present disclosure, the target object may be one specific type of object, or may be any one of a plurality of specific types of objects. The target object may be, for example, one of a person, an electronic device, an instrument, a vehicle, and the like.
Also, in one embodiment of the present disclosure, the objects associated with the virtual spatiotemporal points may have a variety of characteristic information. For example, when the object is a person, one or more of an identity ID, mobile phone information (e.g., a mobile phone number or mobile phone ID), a face image, and the like may serve as its characteristic information, and any one of these may be the target characteristic information described in the embodiments of the present disclosure. Alternatively, when the object is a vehicle, one or more of a vehicle ID (such as a license plate number), the vehicle's color and logo information, the vehicle type, a vehicle image, and the like may serve as its characteristic information, and any one of these may likewise be the target characteristic information.
According to the embodiments of the present disclosure, because virtual spatiotemporal points are introduced, and each virtual spatiotemporal point can acquire monitoring data for different types of characteristic information through a plurality of different types of monitoring devices, the co-occurrence object searching scheme provided herein can, starting from a single kind of monitoring data (such as mobile phone information), directly search for co-occurrence objects across multi-source monitoring data (such as mobile phone information and vehicle information), and can thus identify co-occurrence objects accurately.
Further to operation S306, in one embodiment of the present disclosure, all objects associated with the at least one spatiotemporal point may be taken as the first co-occurrence object described above. It will be appreciated that, in a public security scenario, this embodiment may be used where the case handler is concerned with both co-involved persons and crime tools, etc.
Alternatively, in another embodiment, at least one first object among the objects associated with the at least one spatiotemporal point may be taken as the first co-occurrence object, where each of the at least one first object is associated with a corresponding one of the at least one spatiotemporal point through characteristic information of the same type as the target characteristic information. For example, if the target object is associated with spatiotemporal point 1 and spatiotemporal point 2 through cell phone information, all objects (the at least one first object) associated with one or both of spatiotemporal point 1 and spatiotemporal point 2 through cell phone information may be listed as the first co-occurrence object. It will be appreciated that, in a public security scenario, this embodiment may be used where the case handler is only concerned with co-involved persons.
Alternatively, in another embodiment, at least one second object among the objects associated with the at least one spatiotemporal point may be taken as the first co-occurrence object, where each of the at least one second object is associated with a corresponding one of the at least one spatiotemporal point through characteristic information of a different type from the target characteristic information. For example, if the target object is associated with spatiotemporal point 1 and spatiotemporal point 2 through cell phone information, only the objects (the at least one second object) associated with one or both of spatiotemporal point 1 and spatiotemporal point 2 through vehicle information may be listed as the first co-occurrence object. It will be appreciated that, in a public security scenario, this embodiment may be used where the case handler is only concerned with crime tools and the like.
Through the embodiments of the present disclosure, the co-occurrence objects that satisfy the search requirement can be located still more accurately, based on the object features that the co-occurrence object searcher cares about.
For example, in a public security application scenario, if the case handler only pays attention to co-involved persons, only the co-occurrence objects associated through cell phone information can be searched. Alternatively, if the case handler pays attention to a crime tool (such as a vehicle) in addition to co-involved persons, the co-occurrence objects associated through cell phone information and those associated through vehicle information may be searched simultaneously. Alternatively, if the case handler only pays attention to the crime tool, only the co-occurrence objects associated through vehicle information may be searched.
In the embodiment of the present disclosure, if the at least one spatiotemporal point associated with the target object is determined in operation S304 based only on one or more pieces of target characteristic information of the target object, then, because the time span is too long, a large number of co-occurrence objects unrelated to the current event (such as the criminal case currently being investigated) will be found. The search result then contains a large amount of redundant data, which hinders quickly and accurately locating the co-occurrence objects that actually deserve attention.
Based on this, in operation S304 a corresponding target time range may further be determined according to time information input by the user, so that the at least one spatiotemporal point is determined based on the monitoring data collected within the target time range. Specifically, all spatiotemporal points at which the target characteristic information was acquired within the target time range may be determined and taken as the at least one spatiotemporal point.
For example, suppose there are X spatiotemporal points associated with an object A, determined based on the object's cell phone information, where X is an integer greater than or equal to 1, and that among these X spatiotemporal points only the monitoring-data collection times of spatiotemporal point 1 (denoted [Garden Bridge, 11:00-12:00 on day 24, 2020]), spatiotemporal point 2 (denoted [Garden Bridge, 13:00-14:00 on day 24, 2020]), and spatiotemporal point 3 (denoted [Purple Bamboo Bridge, 12:00-13:00 on day 24, 2020]) fall within day 24 of 2020. Then, when "day 24, 2020" is designated as the query time (target time range), spatiotemporal points 1 through 3 can be locked from the X spatiotemporal points, and the objects that appeared at "Garden Bridge" and "Purple Bamboo Bridge" together with object A can be queried based on all objects associated with spatiotemporal points 1 through 3.
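A minimal sketch of this time-range lock, assuming each spatiotemporal point's collection time range is encoded as integer minutes relative to midnight of the query day (an illustrative encoding, not the patent's):

```python
def lock_points(point_times, target_range):
    """point_times maps a spatiotemporal point label to its monitoring-data
    collection time range (start, end); keep the points whose collection
    range falls entirely within the target time range."""
    lo, hi = target_range
    return sorted(p for p, (s, e) in point_times.items() if lo <= s and e <= hi)

POINT_TIMES = {
    "st_1": (660, 720),    # Garden Bridge, 11:00-12:00 on the query day
    "st_2": (780, 840),    # Garden Bridge, 13:00-14:00 on the query day
    "st_3": (720, 780),    # Purple Bamboo Bridge, 12:00-13:00 on the query day
    "st_4": (-720, -660),  # a point on the previous day (negative offset)
}
```

Querying with the whole day (0 to 1440 minutes) as the target range locks only the three points whose collection times fall on that day.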
According to the embodiment of the disclosure, searching the collected monitoring data with the target characteristic information of the target object as the primary query condition and the designated time range as an auxiliary query condition narrows the query range for co-occurrence objects, and prevents the large amount of redundant data that an unreasonable query range would otherwise produce.
In order to visually show the associations between the objects and the spatiotemporal points, and to speed up the search for co-occurrence objects, a data map may be created based on the collected monitoring data and the times at which the monitoring data was collected. Virtual spatiotemporal points are introduced into the data map, and the multi-source monitoring data is mapped into relation data from the collected (the collected objects) to the collectors (the virtual spatiotemporal points), thereby building association relationships among the multi-source monitoring data. Then, for any collected object, the data map can be queried to quickly locate a local subgraph and discover what the other monitoring devices at the relevant spatiotemporal points captured, for subsequent co-occurrence object identification.
It should be noted that, in the embodiment of the present disclosure, the constituent elements of the data map may include a set of collector nodes, a set of collected nodes, and edges connecting collector nodes to the corresponding collected nodes. Each collector node in the data map represents a virtual spatiotemporal point, and each collected node represents a collected object. Each edge describes the association between the collector node and the collected node it connects through corresponding edge attribute information; the edge attribute information includes the collected data and the collection time, and may be marked on the association edge leading from the collected object to the virtual spatiotemporal point.
For example, as shown in fig. 5, "Wang Wu", "Zhang San", "Xiao Ming", "Shanghai B1", "Shanghai B2", and "Shanghai B3" are all collected nodes, each representing a collected object; "Virtual1", "Virtual2", "Virtual3", etc. are collector nodes, representing virtual spatiotemporal points: "Virtual1", "Virtual2", and "Virtual3" are the labels of "virtual spatiotemporal point 1", "virtual spatiotemporal point 2", and "virtual spatiotemporal point 3", respectively. A connection line between an object and a spatiotemporal point represents the corresponding association and stores the monitoring data collected for the object at that spatiotemporal point together with the collection time of that monitoring data. The area covered by virtual spatiotemporal point 1 is the area in which the cell phone information of "Wang Wu" was collected, and the time at which that cell phone information was collected falls within the time range covered by virtual spatiotemporal point 1. In addition, "wifi" marked on an edge in the figure indicates that the monitoring data was acquired by WiFi scanning, while "shot" indicates that it was acquired by capturing an image.
In addition to the collected data and the collection time, in the embodiment of the present disclosure the edge attribute information may also store the area information in which the monitoring data was collected for the object.
Specifically, determining all spatiotemporal points at which the target feature information is acquired within the target time range as at least one spatiotemporal point may include, for example, the following operations.
A pre-constructed data map is obtained from cloud storage or other persistent storage space.
The data map is used to search for target edges whose edge attribute information meets a preset condition, the preset condition being that the collection time falls within the target time range and the collected data is the target characteristic information.
The spatiotemporal points represented by the collector nodes connected to the target edges are taken as the at least one spatiotemporal point.
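These three operations could be sketched as below; the `Edge` tuple and all labels are assumed stand-ins for the data map's actual storage format:

```python
from collections import namedtuple

# Edge of the data map: collected node -> collector node, with edge
# attribute information (collected data, collection time in minutes).
Edge = namedtuple("Edge", "collected collector data time")

DATA_MAP = [
    Edge("Wang Wu",     "Virtual1", "phone:wang_wu",      665),
    Edge("Wang Wu",     "Virtual2", "phone:wang_wu",      675),
    Edge("Zhang San",   "Virtual1", "phone:zhang_san",    666),
    Edge("Shanghai B1", "Virtual1", "vehicle:shanghai_b1", 667),
]

def target_points(edges, target_data, time_range):
    """Find the target edges whose attribute information meets the preset
    condition (collection time in the target range, collected data equal
    to the target characteristic information), and return the spatiotemporal
    points represented by the connected collector nodes."""
    lo, hi = time_range
    return {e.collector for e in edges
            if e.data == target_data and lo <= e.time <= hi}
```

The query is a single scan over the edges of the located subgraph, so once the local subgraph is found, the lock is cheap.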
Illustratively, as shown in FIG. 5, "Wang Wu" is associated with "Virtual1" and "Virtual2" through cell phone information, where [region 1, 11:00-11:10] denotes "Virtual1", [region 2, 11:10-11:20] denotes "Virtual2", and [region 3, 11:20-11:30] denotes "Virtual3". For example, in public security, after the suspect "Wang Wu" is caught, a case handler who wants to find other involved persons and/or involved vehicles can input "Wang Wu" and the query time 10:00-15:00 on the interface that displays the data map. A local subgraph as shown in fig. 5 can then be located, from which it can be determined that "Virtual1" and "Virtual2" are both associated with "Wang Wu" through cell phone information; therefore "Virtual1" and "Virtual2" constitute all the spatiotemporal points associated with "Wang Wu".
Further, as shown in fig. 5, "Zhang San" and "Shanghai B1" are also associated with "Virtual1" and "Virtual2", and "Shanghai B3" is associated with "Virtual1". Therefore, if the case handler only pays attention to co-involved persons, only "Zhang San" may be regarded as the "co-occurrence object". If the case handler only pays attention to involved vehicles, both "Shanghai B1" and "Shanghai B3" may be regarded as "co-occurrence objects". If the case handler pays attention to both co-involved persons and involved vehicles, then "Zhang San", "Shanghai B1", and "Shanghai B3" may all be regarded as "co-occurrence objects".
It is apparent that, through the embodiments of the present disclosure, co-occurrence objects having a co-occurrence relationship with a specific object can be quickly found based on a data map.
It should be noted that, in the embodiment of the present disclosure, co-occurrence objects may be queried directly in the cloud storage or other persistent storage space where the data map is stored, or queried in a cache space. Specifically, objects that are queried frequently, objects of particular concern, or objects appearing in a specific area may be cached in advance in a corresponding cache space to facilitate fast queries. In addition, for objects of particular concern, previous query results may be cached so that subsequent queries can find the result directly in the cache. Alternatively, the data map may be written into the corresponding cache as it is constructed.
In the previously described embodiments, a target time range is specified as an auxiliary query condition in order to find the spatiotemporal points at which co-occurrence objects are more likely to appear. However, an unreasonably specified target time range may cause some critical co-occurrence objects to be missed during the search.
In order to prevent missing the more critical co-occurrence object when searching for the co-occurrence object, the following operations may also be performed after searching for the first co-occurrence object.
Other spatiotemporal points of the plurality of spatiotemporal points associated with the at least one spatiotemporal point are determined.
A second co-occurrence object that appears in the same space-time as the target object is then searched for among the plurality of objects based on the other spatiotemporal points.
The other spatiotemporal points include any of the following: spatiotemporal points adjacent to the at least one spatiotemporal point in the time domain and identical in the spatial domain; spatiotemporal points identical to the at least one spatiotemporal point in the time domain and adjacent in the spatial domain; and spatiotemporal points adjacent to the at least one spatiotemporal point in both the time domain and the spatial domain. For example, if [Purple Bamboo Bridge, 11:05-11:10] was used in determining the first co-occurrence object, then in determining the second co-occurrence object, [Purple Bamboo Bridge, 11:00-11:05] and [Purple Bamboo Bridge, 11:10-11:15] (adjacent in time, identical in space), the spatiotemporal points covering 11:05-11:10 in the regions adjacent to Purple Bamboo Bridge (identical in time, adjacent in space), and the spatiotemporal points covering 11:00-11:05 or 11:10-11:15 in those adjacent regions (adjacent in both domains) may all be listed as other associated spatiotemporal points in embodiments of the disclosure.
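The three kinds of other spatiotemporal points can be enumerated as in this sketch, assuming a fixed slot length per virtual point and an externally supplied region-adjacency map (both assumptions for illustration):

```python
def other_associated_points(point, neighbors, slot):
    """point: (region, start, end); neighbors: region -> adjacent regions;
    slot: length of one virtual point's time range, in minutes. Enumerates
    the three kinds of 'other spatiotemporal points': time-adjacent/same
    region, same time/space-adjacent, and adjacent in both domains."""
    region, start, end = point
    time_adj = [(region, start - slot, start), (region, end, end + slot)]
    space_adj = [(r, start, end) for r in neighbors.get(region, [])]
    both_adj = [(r, s, e) for r in neighbors.get(region, [])
                for (s, e) in ((start - slot, start), (end, end + slot))]
    return time_adj + space_adj + both_adj
```

With one adjacent region this yields two time-adjacent points, one space-adjacent point, and two points adjacent in both domains.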
It should be noted that the more times two objects co-occur, the greater the likelihood that they are genuinely together. Therefore, in the embodiment of the present disclosure, in order to find the co-occurrence objects most likely to be genuine, it is also possible, after the first co-occurrence object is found, to screen out from the first co-occurrence objects a third co-occurrence object whose number of co-occurrences is greater than or equal to N, where N is an integer and N is greater than or equal to 2.
As shown in FIG. 5, "Zhang San" and "Shanghai B1" each co-occur with "Wang Wu" at both "Virtual1" and "Virtual2", while "Shanghai B3" co-occurs with "Wang Wu" only once, at "Virtual1". Therefore, "Zhang San" and "Shanghai B1" can be regarded as the objects most likely to be genuinely together (third co-occurrence objects).
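The screening step might be sketched as follows, with the fig. 5 style labels used purely as sample data:

```python
from collections import Counter

def screen_third_co_occurrence(co_records, n=2):
    """co_records: (object, spatiotemporal point) pairs, one per
    co-occurrence with the target. Keep the objects whose number of
    co-occurrences is greater than or equal to n."""
    counts = Counter(obj for obj, _point in co_records)
    return {obj for obj, c in counts.items() if c >= n}

CO_RECORDS = [
    ("Zhang San",   "Virtual1"), ("Zhang San",   "Virtual2"),
    ("Shanghai B1", "Virtual1"), ("Shanghai B1", "Virtual2"),
    ("Shanghai B3", "Virtual1"),
]
```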
Further, the method of the embodiment of the present disclosure may further include: filtering out, from the third co-occurrence objects, the objects that co-occur only in a plurality of adjacent areas.
Specifically, after the spatiotemporal points of the local subgraph are obtained, the other objects determined to have a co-occurrence relationship with the input object may come from multiple data sources. Here, the relevant co-occurrence objects can be precisely locked through a filtering policy. For example, the filtering policy may require that a co-occurrence object have more than 2 co-occurrence records with the input object, and that the distance between two successive co-occurrence regions be greater than a threshold. This excludes a single co-occurrence in the same or a nearby area (which may have been pure chance) as well as multiple co-occurrences confined to adjacent areas.
As shown in fig. 6A, if two objects (e.g., Wang Wu and Zhang San) co-occur at three adjacent spatiotemporal points from "Wanshou Temple" to "Purple Bamboo Bridge" to "Garden Bridge", the probability that the two objects merely encountered each other by chance is relatively high, so such co-occurring objects can be filtered out. By contrast, as shown in fig. 6B, if two objects co-occur at the spatiotemporal points "Wanshou Temple" and "Xizhimen", such co-occurring objects can be preserved, because "Wanshou Temple" and "Xizhimen" are far apart.
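A sketch of such a filtering policy, assuming planar coordinates per region and illustrative thresholds (neither is specified by the patent):

```python
import math

def keep_after_distance_filter(co_regions, coords, min_records=2, min_dist=2.0):
    """co_regions: object -> ordered list of regions where it co-occurred
    with the target; coords: region -> (x, y) centre. Keep an object only
    if it has at least min_records co-occurrences and some pair of
    successive co-occurrence regions lies farther apart than min_dist,
    so that runs of co-occurrences along one adjacent stretch (likely
    chance encounters) are filtered out."""
    kept = set()
    for obj, regions in co_regions.items():
        if len(regions) < min_records:
            continue
        if any(math.dist(coords[a], coords[b]) > min_dist
               for a, b in zip(regions, regions[1:])):
            kept.add(obj)
    return kept

COORDS = {"wanshou_temple": (0, 0), "purple_bamboo_bridge": (1, 0),
          "garden_bridge": (2, 0), "xizhimen": (8, 0)}
```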
Through this embodiment of the disclosure, the set of co-occurrence objects can be refined and chance encounters eliminated, so that the objects under stronger co-occurrence suspicion can be accurately locked.
Further, the method of the embodiment of the present disclosure may further include: extracting, from the third co-occurrence objects, the co-occurrence objects whose movement tracks coincide with that of the target object, so as to find the co-occurrence objects that travelled with it throughout and thereby precisely lock the key co-occurrence objects. In particular, track fitting may be performed over the co-occurring virtual spatiotemporal points to determine the objects most likely to have been travelling together all along.
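One simple way to approximate "coincident moving tracks", assuming each object's track is the ordered list of virtual spatiotemporal points it visited (a simplification of the track fitting mentioned above, not the patent's actual fitting procedure):

```python
def tracks_coincide(target_track, other_track, min_run=3):
    """Ordered lists of visited spatiotemporal point labels. Treat the
    moving tracks as coincident if they share a contiguous run of at
    least min_run points, i.e. the objects were plausibly travelling
    together for a sustained stretch rather than crossing paths once."""
    for i in range(len(target_track) - min_run + 1):
        window = target_track[i:i + min_run]
        if any(other_track[j:j + min_run] == window
               for j in range(len(other_track) - min_run + 1)):
            return True
    return False
```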
Further, the method may further comprise, for example: aiming at collector nodes connected with M edges in a data map, wherein M is more than or equal to M0, M0 represents a preset value, M and M0 are integers, and the following operation is executed.
A spatio-temporal point represented by the collector node is determined.
The spatiotemporal point is divided into a plurality of sub-spatiotemporal points in the time dimension and/or the spatial dimension.
The data map is modified based on the plurality of sub-spatiotemporal points.
Alternatively, the method may, for example, further include executing the following operations for each collector node in the data map that is connected to M edges, where M is greater than or equal to M0, M0 represents a preset value, and M and M0 are integers.
A spatio-temporal point represented by the collector node is determined.
The M edges are classified based on their edge attribute information.
The spatiotemporal point is divided into a plurality of sub-spatiotemporal points based on the edge classification result.
The data map is modified based on the plurality of sub-spatiotemporal points.
For a monitored area with heavy traffic, or one equipped with high-frequency monitoring-data acquisition devices, one virtual spatiotemporal point may be associated with an extremely large number of edges, which can heavily affect subsequent queries. For example, too many co-occurrence objects may be found, including many that have little real relevance to the target object (e.g., chance encounters), making it impossible to accurately lock the co-occurrence objects that are genuinely of concern.
Based on this, a large-node optimization strategy can be used to divide a large node into multiple small nodes. Specifically, the region range may be kept unchanged and the spatiotemporal point divided only in the time dimension into several smaller time periods. For example, as shown in fig. 7A, a large spatiotemporal point [region 1, T1~T2] is divided into a plurality of small spatiotemporal points covering the time ranges shown (e.g., t1~t2, t2~t3, ..., tx~tn). Alternatively, the time range may be kept unchanged and the spatiotemporal point divided only in the spatial dimension into several smaller spaces. For example, as shown in fig. 7B, the large spatiotemporal point [region 1, T1~T2] is divided into a plurality of small spatiotemporal points, such as region 11 through region 14 as shown. Alternatively, both the time range and the region range may be kept unchanged, and the spatiotemporal point divided solely by edge type into spatiotemporal points each covering fewer monitoring devices. For example, as shown in fig. 7C, the large spatiotemporal point [region 1, T1~T2] is divided into a plurality of small spatiotemporal points; for instance, a large spatiotemporal point that can collect both cell phone information and vehicle information is divided into two small spatiotemporal points, one collecting only cell phone information and the other only vehicle information.
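A sketch of the edge-type variant of the large-node split (fig. 7C); the tuple layout and threshold are illustrative assumptions:

```python
def split_large_node(point, edges, m0=4):
    """point: (region, start, end); edges: list of (edge_type, time) for
    the edges attached to this collector node. If the node has at least
    m0 edges, split it by edge type: one sub-point per distinct edge
    type, keeping the region and the time range unchanged."""
    if len(edges) < m0:
        return [point + (None,)]  # small node: keep as a single point
    region, start, end = point
    return [(region, start, end, etype)
            for etype in sorted({t for t, _ in edges})]
```

After the split, each sub-point carries only the edges of one acquisition type, so a query for, say, cell phone co-occurrences touches far fewer edges.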
By this embodiment of the disclosure, the formation of over-large virtual spatiotemporal points can be avoided, facilitating the query of the co-occurrence objects that are genuinely associated with the target object and of concern.
Fig. 8 schematically illustrates a block diagram of a lookup apparatus of co-occurrence objects according to an embodiment of the disclosure.
As shown in fig. 8, the lookup device 800 of co-occurrence objects may include, for example, an object determination module 802, a spatiotemporal point determination module 804, and a lookup module 806.
The object determining module 802 is configured to determine a target object of the plurality of objects, where the target object has target feature information.
The spatiotemporal point determining module 804 is configured to determine at least one spatiotemporal point associated with the target object from the plurality of spatiotemporal points based on the target feature information, wherein at least some spatiotemporal points of the plurality of spatiotemporal points are associated with different objects from the plurality of objects based on different feature information.
A search module 806 is configured to search for a first co-occurrence object of the plurality of objects that occurs in the same space-time as the target object based on the objects associated with the at least one space-time point.
It should be noted that, the implementation manner of the device portion in the embodiment of the present disclosure is the same as or similar to the implementation manner of the method portion in the embodiment of the present disclosure, and the description of the embodiment of the device portion specifically refers to the description of the embodiment of the method portion, which is not repeated herein.
Any number of the modules, or at least some of the functionality of any number, according to embodiments of the present disclosure may be implemented in one module. Any one or more of the modules according to embodiments of the present disclosure may be implemented as split into multiple modules. Any one or more of the modules, sub-modules, units, sub-units according to embodiments of the present disclosure may be implemented at least in part as a hardware circuit, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system-on-chip, a system-on-substrate, a system-on-package, an Application Specific Integrated Circuit (ASIC), or in any other reasonable manner of hardware or firmware that integrates or encapsulates the circuit, or in any one of or a suitable combination of three of software, hardware, and firmware. Alternatively, one or more of the modules according to embodiments of the present disclosure may be at least partially implemented as computer program modules, which when executed, may perform the corresponding functions.
For example, any number of the object determination module 802, the spatiotemporal point determination module 804, and the lookup module 806 may be combined in one module/unit/sub-unit or any one of them may be split into multiple modules/units/sub-units. Alternatively, at least some of the functionality of one or more of these modules/units/sub-units may be combined with at least some of the functionality of other modules/units/sub-units and implemented in one module/unit/sub-unit. According to embodiments of the present disclosure, at least one of the object determination module 802, the space-time point determination module 804, and the lookup module 806 may be implemented at least in part as a hardware circuit, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system-on-chip, a system-on-substrate, a system-on-package, an application-specific integrated circuit (ASIC), or may be implemented in hardware or firmware, such as any other reasonable way of integrating or packaging the circuits, or in any one of or a suitable combination of any of the three. Alternatively, at least one of the object determination module 802, the spatio-temporal point determination module 804, and the lookup module 806 may be at least partially implemented as computer program modules that, when executed, perform the corresponding functions.
Fig. 9 schematically illustrates a block diagram of an electronic device adapted to implement a method and apparatus for finding co-occurrence objects according to an embodiment of the disclosure. The electronic device shown in fig. 9 is merely an example, and should not impose any limitations on the functionality and scope of use of embodiments of the present disclosure.
As shown in fig. 9, an electronic device 900 according to an embodiment of the present disclosure includes a processor 901 that can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 902 or a program loaded from a storage portion 908 into a Random Access Memory (RAM) 903. The processor 901 may include, for example, a general purpose microprocessor (e.g., a CPU), an instruction set processor and/or an associated chipset and/or a special purpose microprocessor (e.g., an Application Specific Integrated Circuit (ASIC)), or the like. Processor 901 may also include on-board memory for caching purposes. Processor 901 may include a single processing unit or multiple processing units for performing the different actions of the method flows according to embodiments of the present disclosure.
The RAM 903 stores various programs and data necessary for the operation of the electronic device 900. The processor 901, the ROM 902, and the RAM 903 are connected to each other by a bus 904. The processor 901 performs various operations of the method flow according to the embodiments of the present disclosure by executing programs in the ROM 902 and/or the RAM 903. Note that the programs may also be stored in one or more memories other than the ROM 902 and the RAM 903. The processor 901 may also perform various operations of the method flow according to embodiments of the present disclosure by executing programs stored in the one or more memories.
According to an embodiment of the disclosure, the electronic device 900 may also include an input/output (I/O) interface 905, which is also connected to the bus 904. The electronic device 900 may also include one or more of the following components connected to the I/O interface 905: an input section 906 including a keyboard, a mouse, and the like; an output section 907 including a display such as a Cathode Ray Tube (CRT) or a Liquid Crystal Display (LCD), a speaker, and the like; a storage section 908 including a hard disk or the like; and a communication section 909 including a network interface card such as a LAN card or a modem. The communication section 909 performs communication processing via a network such as the Internet. A drive 910 is also connected to the I/O interface 905 as needed. A removable medium 911, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 910 as needed, so that a computer program read therefrom can be installed into the storage section 908.
According to embodiments of the present disclosure, the method flow according to embodiments of the present disclosure may be implemented as a computer software program. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer-readable storage medium, the computer program containing program code for performing the method shown in the flowcharts. The program code, when executed by a processor, may implement the method of any of the embodiments described above. In such an embodiment, the computer program may be downloaded and installed from a network via the communication section 909 and/or installed from the removable medium 911. The above-described functions defined in the system of the embodiments of the present disclosure are performed when the computer program is executed by the processor 901. The systems, devices, apparatuses, modules, units, and the like described above may be implemented by computer program modules according to embodiments of the disclosure.
The present disclosure also provides a computer-readable storage medium that may be embodied in the apparatus/device/system described in the above embodiments; or may exist alone without being assembled into the apparatus/device/system. The computer-readable storage medium carries one or more programs which, when executed, implement methods in accordance with embodiments of the present disclosure.
According to embodiments of the present disclosure, the computer-readable storage medium may be a non-volatile computer-readable storage medium. Examples may include, but are not limited to: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this disclosure, a computer-readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
For example, according to embodiments of the present disclosure, the computer-readable storage medium may include the ROM 902 and/or the RAM 903 and/or one or more memories other than the ROM 902 and the RAM 903 described above.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Those skilled in the art will appreciate that the features recited in the various embodiments of the disclosure and/or in the claims may be combined and/or coupled in various ways, even if such combinations or couplings are not explicitly recited in the disclosure. In particular, the features recited in the various embodiments of the present disclosure and/or the claims may be combined and/or coupled in various ways without departing from the spirit and teachings of the present disclosure. All such combinations and/or couplings fall within the scope of the present disclosure.
The embodiments of the present disclosure are described above. However, these examples are for illustrative purposes only and are not intended to limit the scope of the present disclosure. Although the embodiments are described above separately, this does not mean that the measures in the embodiments cannot be used advantageously in combination. The scope of the disclosure is defined by the appended claims and equivalents thereof. Various alternatives and modifications can be made by those skilled in the art without departing from the scope of the disclosure, and such alternatives and modifications are intended to fall within the scope of the disclosure.

Claims (15)

1. A method of finding co-occurrence objects, comprising:
determining a target object in a plurality of objects, wherein the target object has target feature information;
determining, based on the target feature information, at least one spatiotemporal point associated with the target object from a plurality of spatiotemporal points, wherein at least some of the plurality of spatiotemporal points are associated with different objects of the plurality of objects based on different feature information, and the time ranges corresponding to the spatiotemporal points do not overlap with each other; and
searching for a first co-occurrence object of the plurality of objects that occurs in the same space-time as the target object based on the object associated with the at least one space-time point;
wherein the determining at least one spatiotemporal point of a plurality of spatiotemporal points associated with the target object based on the target feature information comprises:
determining a target time range; and
taking all the space-time points at which the target characteristic information is acquired within the target time range as the at least one space-time point;
wherein the taking all the space-time points acquired in the target time range as the at least one space-time point includes:
acquiring a data map, wherein the data map comprises a set of collector nodes, a set of collected-object nodes, and edges connecting collector nodes to corresponding collected-object nodes, each edge describing the association between the connected collector node and collected-object node through corresponding edge attribute information, the edge attribute information comprising collected data and a collection time;
searching a target side with side attribute information meeting preset conditions by using the data map, wherein the preset conditions comprise acquisition time falling within the target time range and acquired data being the target characteristic information; and
taking the space-time point represented by the collector node connected to the target edge as the at least one space-time point.
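For illustration only (not part of the claims), the edge-filtering lookup of claim 1 can be sketched as follows; all identifiers and the graph encoding are hypothetical, not taken from the patent:

```python
from dataclasses import dataclass

# Hypothetical sketch of the claim-1 lookup. The data map is modelled as a
# bipartite graph: each edge links a collector node (a space-time point) to a
# collected-object node, and carries the collected data and collection time.

@dataclass(frozen=True)
class Edge:
    st_point: str   # collector node, e.g. "camera-1 / 09:00-09:05"
    obj: str        # collected-object node
    data: str       # collected feature information
    time: float     # collection time

def find_first_cooccurrence(edges, target_feature, t_start, t_end):
    # Target edges: edge attribute information meets the preset condition
    target_edges = [e for e in edges
                    if e.data == target_feature and t_start <= e.time <= t_end]
    # Space-time points represented by the collector nodes of the target edges
    st_points = {e.st_point for e in target_edges}
    target_objs = {e.obj for e in target_edges}
    # Every other object associated with those space-time points co-occurs
    return {e.obj for e in edges if e.st_point in st_points} - target_objs

edges = [
    Edge("cam1@t0", "A", "feat-A", 1.0),
    Edge("cam1@t0", "B", "feat-B", 2.0),
    Edge("cam2@t1", "C", "feat-C", 9.0),
]
print(find_first_cooccurrence(edges, "feat-A", 0.0, 5.0))  # {'B'}
```

The filter over edge attributes corresponds to the "preset condition" of claim 1, and the set difference removes the target object itself from the result.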
2. The method of claim 1, wherein each collector node represents a spatio-temporal point and each collected-object node represents an object.
3. The method of claim 1, wherein the plurality of spatiotemporal points respectively correspond to a plurality of acquisition devices, each performing data acquisition for a particular type of characteristic information.
4. The method of claim 1, further comprising: for a collector node connected with M edges in the data map, determining the space-time point represented by the collector node, wherein M ≥ M0, M0 represents a preset value, and M and M0 are integers;
dividing the space-time point represented by the collector node into a plurality of sub-space-time points in the time dimension and/or the space dimension; and
modifying the data map based on the plurality of sub-space-time points.
5. The method of claim 1, further comprising: for a collector node connected with M edges in the data map, determining the space-time point represented by the collector node, wherein M ≥ M0, M0 represents a preset value, and M and M0 are integers;
classifying the M edges based on edge attribute information;
dividing the space-time point represented by the collector node into a plurality of sub-space-time points based on the edge classification result; and
modifying the data map based on the plurality of sub-space-time points.
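As an illustrative sketch of the node splitting in claims 4-5 (function names and the time-bucket classification are assumptions): a collector node whose edge count M reaches a preset threshold M0 is replaced by finer sub-space-time points, here one per collection-time bucket.

```python
from collections import defaultdict

# Hypothetical sketch: split a "hot" space-time point into sub-space-time
# points by classifying its edges on collection time, then rewrite the map.

def split_hot_node(edges_by_node, node, m0, bucket_seconds=60):
    edges = edges_by_node[node]          # edges: (object, data, time) tuples
    if len(edges) < m0:
        return edges_by_node             # below the preset value: leave as-is
    new_map = dict(edges_by_node)
    del new_map[node]
    buckets = defaultdict(list)
    for obj, data, t in edges:           # classify edges by time bucket
        buckets[int(t // bucket_seconds)].append((obj, data, t))
    for b, es in buckets.items():        # one sub-space-time point per bucket
        new_map[f"{node}#sub{b}"] = es
    return new_map

m = {"cam1": [("A", "f1", 10.0), ("B", "f2", 70.0), ("C", "f3", 75.0)]}
m2 = split_hot_node(m, "cam1", m0=3)
print(sorted(m2))  # ['cam1#sub0', 'cam1#sub1']
```

Splitting by spatial sub-region instead of time would follow the same shape, with a different bucketing key.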
6. The method of claim 1, wherein the looking up a first co-occurrence object of the plurality of objects that occurs in the same space-time as the target object based on the objects associated with the at least one space-time point comprises:
taking all objects associated with the at least one space-time point as the first co-occurrence object; or
taking at least one first object among the objects associated with the at least one space-time point as the first co-occurrence object; or
taking at least one second object among the objects associated with the at least one space-time point as the first co-occurrence object;
wherein:
each first object is associated with a corresponding space-time point in the at least one space-time point through feature information of the same class as the target feature information;
each second object is associated with a corresponding spatiotemporal point of the at least one spatiotemporal point by feature information of a different class from the target feature information.
7. The method of claim 1, further comprising: after the first co-occurrence object is found,
determining other spatiotemporal points of the plurality of spatiotemporal points associated with the at least one spatiotemporal point; and
searching, based on the other spatiotemporal points, for a second co-occurrence object of the plurality of objects that occurs in the same space-time as the target object;
wherein the other spatiotemporal points include any one of:
a spatiotemporal point adjacent to the at least one spatiotemporal point in the temporal domain and identical in the spatial domain;
a spatiotemporal point identical to the at least one spatiotemporal point in the temporal domain and adjacent in the spatial domain; and
a spatiotemporal point adjacent to the at least one spatiotemporal point in both the temporal domain and the spatial domain.
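The three adjacency cases of claim 7 can be sketched as follows (illustrative only; modelling a space-time point as a (camera, time-slice) pair and supplying spatial neighbours via a lookup table are assumptions of this sketch, not the patent):

```python
# Hypothetical sketch of claim-7 expansion: enumerate space-time points that
# are time-adjacent, space-adjacent, or both, relative to a given point.

def adjacent_points(point, neighbours):
    cam, t = point
    same_cam_adjacent_time = [(cam, t - 1), (cam, t + 1)]           # case 1
    same_time_adjacent_cam = [(c, t) for c in neighbours.get(cam, [])]  # case 2
    adjacent_both = [(c, t + d)                                      # case 3
                     for c in neighbours.get(cam, []) for d in (-1, 1)]
    return same_cam_adjacent_time + same_time_adjacent_cam + adjacent_both

print(adjacent_points(("cam1", 5), {"cam1": ["cam2"]}))
# [('cam1', 4), ('cam1', 6), ('cam2', 5), ('cam2', 4), ('cam2', 6)]
```

Running the claim-1 lookup again over these expanded points yields the second co-occurrence objects.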
8. The method of claim 1, further comprising: after the first co-occurrence object is found,
screening, from the first co-occurrence objects, a third co-occurrence object whose co-occurrence count is greater than or equal to N, wherein N is an integer and N ≥ 2.
9. The method of claim 8, further comprising:
filtering out, from the third co-occurrence objects, objects whose multiple co-occurrences all fall within a plurality of spatially adjacent areas.
10. The method of claim 8, further comprising:
extracting, from the third co-occurrence objects, a fourth co-occurrence object whose movement track coincides with that of the target object.
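A hedged sketch of the screening chain in claims 8-10 (helper names and the track-coincidence criterion are assumptions of this sketch):

```python
from collections import Counter

# Hypothetical sketch: keep objects that co-occur with the target at N or
# more distinct space-time points, then test track coincidence.

def screen_by_count(cooccurrences, n):
    # cooccurrences: (object, space-time point) pairs for first co-occurrence objects
    counts = Counter(obj for obj, _ in set(cooccurrences))
    return {obj for obj, c in counts.items() if c >= n}   # third co-occurrence objects

def tracks_coincide(track_a, track_b, min_shared=2):
    # crude criterion: the tracks share at least `min_shared` space-time points
    return len(set(track_a) & set(track_b)) >= min_shared

pairs = [("B", "p1"), ("B", "p2"), ("C", "p1")]
print(screen_by_count(pairs, n=2))  # {'B'}
print(tracks_coincide(["p1", "p2", "p3"], ["p2", "p3"]))  # True
```

The claim-9 spatial filter would slot in between these two steps, discarding objects whose repeated co-occurrences are confined to adjacent areas.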
11. The method of claim 1, wherein the different characteristic information comprises different types of characteristic information or different characteristic information of the same type.
12. A co-occurrence object lookup apparatus, comprising:
an object determining module for determining a target object of a plurality of objects, wherein the target object has target feature information;
a spatiotemporal point determination module, configured to determine, based on the target feature information, at least one spatiotemporal point associated with the target object from a plurality of spatiotemporal points, wherein at least some of the plurality of spatiotemporal points are associated with different objects of the plurality of objects based on different feature information, and the time ranges corresponding to the spatiotemporal points do not overlap with each other; and
a search module for searching for a first co-occurrence object of the plurality of objects that occurs in the same space-time as the target object based on the object associated with the at least one space-time point;
wherein the spatiotemporal point determination module determines at least one spatiotemporal point of a plurality of spatiotemporal points associated with the target object based on the target feature information, comprising:
determining a target time range; and
taking all the space-time points at which the target characteristic information is acquired within the target time range as the at least one space-time point;
wherein the taking all the space-time points acquired in the target time range as the at least one space-time point includes:
acquiring a data map, wherein the data map comprises a set of collector nodes, a set of collected-object nodes, and edges connecting collector nodes to corresponding collected-object nodes, each edge describing the association between the connected collector node and collected-object node through corresponding edge attribute information, the edge attribute information comprising collected data and a collection time;
searching a target side with side attribute information meeting preset conditions by using the data map, wherein the preset conditions comprise acquisition time falling within the target time range and acquired data being the target characteristic information; and
taking the space-time point represented by the collector node connected to the target edge as the at least one space-time point.
13. An electronic device, comprising:
one or more processors; and
a memory for storing one or more programs,
Wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any of claims 1 to 11.
14. A computer readable storage medium having stored thereon executable instructions which, when executed by a processor, cause the processor to implement the method of any of claims 1 to 11.
15. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any one of claims 1 to 11.
CN202010616296.5A 2020-06-30 2020-06-30 Co-occurrence object searching method and device Active CN111767432B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010616296.5A CN111767432B (en) 2020-06-30 2020-06-30 Co-occurrence object searching method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010616296.5A CN111767432B (en) 2020-06-30 2020-06-30 Co-occurrence object searching method and device

Publications (2)

Publication Number Publication Date
CN111767432A CN111767432A (en) 2020-10-13
CN111767432B true CN111767432B (en) 2024-04-02

Family

ID=72724302

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010616296.5A Active CN111767432B (en) 2020-06-30 2020-06-30 Co-occurrence object searching method and device

Country Status (1)

Country Link
CN (1) CN111767432B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112328658B (en) * 2020-11-03 2023-08-08 北京百度网讯科技有限公司 User profile data processing method, device, equipment and storage medium
CN113286267B (en) * 2021-07-23 2021-10-26 深圳知帮办信息技术开发有限公司 Stream modulation method, system and storage medium for internet communication in high-speed state
CN114092868B (en) * 2021-09-24 2023-07-21 山东高速建设管理集团有限公司 Human-vehicle traceability monitoring management system and method

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108256032A (en) * 2018-01-11 2018-07-06 天津大学 A kind of co-occurrence pattern to space-time data carries out visualization method and device
CN109241912A (en) * 2018-09-08 2019-01-18 河南大学 The target identification method based on class brain across media intelligent towards unmanned autonomous system
CN110059668A (en) * 2019-04-29 2019-07-26 中国民用航空总局第二研究所 Behavior prediction processing method, device and electronic equipment
CN111190939A (en) * 2019-12-27 2020-05-22 深圳市优必选科技股份有限公司 User portrait construction method and device
CN111324643A (en) * 2020-03-30 2020-06-23 北京百度网讯科技有限公司 Knowledge graph generation method, relation mining method, device, equipment and medium

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7430550B2 (en) * 2005-02-11 2008-09-30 Microsoft Corporation Sampling method for estimating co-occurrence counts
US8670782B2 (en) * 2011-06-10 2014-03-11 International Business Machines Corporation Systems and methods for analyzing spatiotemporally ambiguous events
US10296634B2 (en) * 2015-08-18 2019-05-21 Facebook, Inc. Systems and methods for identifying and grouping related content labels


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Knowledge map analysis of core literacy based on CiteSpace; Xu Meidan, Zhang Hang; Higher Education of Sciences; 2018-04-20 (02); full text *
Composition and interactive construction method of attribute association relationships of multi-granularity spatiotemporal objects; Wen Na, Zhang Yingzhuo, Chen Da; Geomatics World; 2018-04-25 (02); full text *
Discussion on patterns in spatiotemporal data mining; Liu Jie, Zhang Jian; Modern Surveying and Mapping; 2017-05-25 (03); full text *

Also Published As

Publication number Publication date
CN111767432A (en) 2020-10-13

Similar Documents

Publication Publication Date Title
CN111767432B (en) Co-occurrence object searching method and device
US11328163B2 (en) Methods and apparatus for automated surveillance systems
US20210103616A1 (en) Short-term and long-term memory on an edge device
CN110826594B (en) Track clustering method, equipment and storage medium
US20220092881A1 (en) Method and apparatus for behavior analysis, electronic apparatus, storage medium, and computer program
CN102843547B (en) Intelligent tracking method and system for suspected target
US20210382933A1 (en) Method and device for archive application, and storage medium
CN110866642A (en) Security monitoring method and device, electronic equipment and computer readable storage medium
CN110659391A (en) Video detection method and device
CN104317918A (en) Composite big-data GIS (geographic information system) based abnormal behavior analysis and alarm system
EP2735984A1 (en) Video query method, device and system
CN111477007A (en) Vehicle checking, controlling, analyzing and managing system and method
CN111222373A (en) Personnel behavior analysis method and device and electronic equipment
Chen et al. Discovering urban traffic congestion propagation patterns with taxi trajectory data
CN105760548A (en) Vehicle first appearance analysis method and system based on big data cross-domain comparison
Aved et al. A general framework for managing and processing live video data with privacy protection
Xu et al. Sttr: A system for tracking all vehicles all the time at the edge of the network
US10341617B2 (en) Public safety camera identification and monitoring system and method
Franchi et al. Detecting disparities in police deployments using dashcam data
US10506201B2 (en) Public safety camera identification and monitoring system and method
CN112383751A (en) Monitoring video data processing method and device, terminal equipment and storage medium
CN116610849A (en) Method, device, equipment and storage medium for acquiring moving objects with similar tracks
CN115687249A (en) Image gathering method and device, terminal and computer readable storage medium
Xu et al. Detecting pedestrian crossing events in large video data from traffic monitoring cameras
CN112885106A (en) Vehicle big data-based regional prohibition detection system and method and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant