CN114549873A - Image archive association method and device, electronic equipment and storage medium - Google Patents
Image archive association method and device, electronic equipment and storage medium
- Publication number
- CN114549873A (application CN202210186152.XA)
- Authority
- CN
- China
- Prior art keywords
- track
- information
- target
- candidate
- monitoring
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/50—Information retrieval; Database structures therefor; File system structures therefor of still image data
- G06F16/58—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06F16/583—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Physics & Mathematics (AREA)
- Library & Information Science (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Evolutionary Computation (AREA)
- Evolutionary Biology (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Bioinformatics & Computational Biology (AREA)
- Artificial Intelligence (AREA)
- Life Sciences & Earth Sciences (AREA)
- Databases & Information Systems (AREA)
- Traffic Control Systems (AREA)
Abstract
The embodiment of the application provides an image archive association method and device, electronic equipment and a storage medium, relating to the technical field of data analysis. In the method, based on the coincidence degree between the monitoring point identifiers of each piece of candidate track information and those of the actual track information within a specified time range, the track similarity between the corresponding candidate track information and the actual track information is obtained; target track information meeting a preset similarity condition is then selected, and the identification information of the target object is associated with the target image archive corresponding to the target track information. With the method and device, the identification information of the target object is associated with the target image archive corresponding to the target track information according to the track similarity between the candidate track information and the actual track information, so that the collected track information of the target object can be accurately associated with the corresponding image archive.
Description
Technical Field
The present application relates to the field of data analysis technologies, and in particular, to an image archive association method and apparatus, an electronic device, and a storage medium.
Background
With the continuous development of public security construction and Internet of Things technology, the deployment of image acquisition devices has gradually improved and the amount of collected track information has grown, so that image acquisition devices can be used to collect the track information of a target object.
For example, to obtain the track information of a target object, an image gathering method first collects all image acquisition data containing the same target object and generates a corresponding image acquisition data set in chronological order; then, each item of image acquisition data in the set is traversed, and it is checked whether the interval time between temporally adjacent items of image acquisition data satisfies a preset time threshold condition and whether the interval distance between them satisfies a preset distance threshold condition; finally, when two adjacent items of image acquisition data are detected to satisfy both the preset time threshold condition and the preset distance threshold condition, the two adjacent items of image acquisition data are taken as track information of the target object.
Further, after the track information of the target object is obtained from the spatiotemporal characteristics of the image acquisition data containing the same target object, the track information of the target object collected by the image acquisition device can be attributed to the image archive of the target object according to the characteristic information of the target object.
However, with this image gathering method, when the image acquisition device captures the track information of the target object under weak lighting, the obtained characteristic information of the target object is relatively blurred, so the collected track information of the target object cannot be accurately attributed to the image archive of the target object, and other track information of the target object in that image archive cannot be obtained.
Therefore, the acquired trajectory information of the target object cannot be accurately associated with the corresponding image archive in the above manner.
Disclosure of Invention
The application provides an image archive association method, an image archive association device, electronic equipment and a storage medium, which are used for accurately associating acquired track information of a target object to a corresponding image archive.
In a first aspect, an embodiment of the present application provides an image archive association method, where the method includes:
acquiring actual track information of a target object and respective candidate track information of each candidate object within a specified time range; wherein, the actual track information includes: at least one monitoring point and its respective monitoring point identification; each candidate trajectory information includes: at least one monitoring point and its respective monitoring point identification.
And respectively obtaining the track similarity of the corresponding candidate track information and the actual track information based on the overlap ratio of the monitoring point identifier of each candidate track information and the actual track information.
And selecting target track information meeting the preset similarity condition from the candidate track information based on the obtained similarity of each track.
And determining a target image archive corresponding to the target track information, and associating the identification information of the target object with the target image archive.
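The track information handled by these steps can be pictured with the following minimal Python sketch; the class and field names are illustrative assumptions rather than the patent's own data layout, which only requires that each piece of track information carry at least one monitoring point and its monitoring point identification.

```python
# Illustrative data layout (assumed names, not from the patent): each piece of
# track information carries at least one monitoring point record, and each
# record holds the monitoring point identifier and the target monitoring time.
from dataclasses import dataclass
from typing import List

@dataclass
class MonitoringRecord:
    point_id: str       # monitoring point identifier, e.g. a Geohash of its position
    monitor_time: str   # target monitoring time at this monitoring point

@dataclass
class TrackInfo:
    object_id: str                   # e.g. a MAC address collected at the monitoring points
    records: List[MonitoringRecord]  # at least one monitoring point and its identifier

actual = TrackInfo("mac:aa-bb-cc-11-22-33",
                   [MonitoringRecord("wx4g0ec1", "2022-01-10 12:02")])
```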
In a second aspect, an embodiment of the present application further provides an apparatus for associating an image archive, where the apparatus includes:
the acquisition module is used for acquiring actual track information of the target object and candidate track information of each candidate object in a specified time range; wherein, the actual track information includes: at least one monitoring point and its respective monitoring point identification; each candidate trajectory information includes: at least one monitoring point and its respective monitoring point identification.
And the processing module is used for respectively obtaining the track similarity of the corresponding candidate track information and the actual track information based on the contact ratio of the monitoring point identifier of each candidate track information and the actual track information.
And the selecting module is used for selecting target track information meeting the preset similarity condition from the candidate track information based on the obtained similarity of each track.
And the association module is used for determining a target image archive corresponding to the target track information and associating the identification information of the target object with the target image archive.
In an optional embodiment, before acquiring the actual trajectory information of the target object and the candidate trajectory information of each candidate object in the specified time range, the acquiring module is further configured to:
and acquiring the position information corresponding to each monitoring point.
And respectively carrying out geographic hash coding on the position information corresponding to each monitoring point to obtain corresponding coding results.
And respectively using the obtained coding results as monitoring point identifiers of corresponding monitoring points.
In an optional embodiment, when the track similarity between the corresponding candidate track information and the actual track information is respectively obtained based on the overlap ratio of the monitoring point identifier of each candidate track information and the actual track information, the processing module is specifically configured to:
for at least one candidate track information, respectively performing the following operations:
and respectively acquiring the number of target road sections of which the same monitoring point identifier corresponds to each candidate track road section contained in one candidate track information and the corresponding actual track road section in the actual track information.
And determining the contact ratio of the monitoring point identifiers of the candidate track information and the actual track information based on the obtained at least one target road section number and the candidate road section number corresponding to the candidate track information.
And obtaining the track similarity of the candidate track information and the actual track information based on the contact ratio of the monitoring point identifiers.
In an optional embodiment, when obtaining each candidate track segment included in one candidate track information and the number of target segments corresponding to the same monitoring point identifier as the corresponding actual track segment in the actual track information, the processing module is specifically configured to:
and acquiring at least one actual track section from the actual track information based on the monitoring point identification of each monitoring point, acquiring at least one candidate track section from one candidate track information, and recording the number of the candidate sections of the candidate track information.
For at least one target track segment, the following operations are respectively performed:
and acquiring a monitoring point identifier corresponding to a target track section and monitoring point identifiers corresponding to at least one candidate track section.
And determining the number of target road sections corresponding to the same monitoring point identifier in each candidate track road section with one actual track road section.
In an optional embodiment, when obtaining at least one actual track segment from the actual track information based on the monitoring point identifier of each monitoring point, the processing module is specifically configured to:
acquiring target monitoring information which is acquired by each monitoring point in actual track information; wherein each target monitoring information at least comprises: and monitoring the target monitoring time of the target object by the corresponding monitoring point.
Aiming at two pieces of target monitoring information obtained by every two adjacent monitoring points in each piece of obtained target monitoring information, the following operations are respectively executed:
and determining the monitoring time interval of the two target monitoring information obtained by the two adjacent monitoring points.
And selecting the target road section division rule from a preset candidate road section division rule set aiming at the original track road sections corresponding to the two target monitoring information based on the time interval to which the monitoring time interval belongs.
And dividing the original track road section based on the target road section division rule to obtain at least one actual track road section.
In an optional embodiment, when the original track segment is divided based on the target segment division rule to obtain at least one actual track segment, the processing module is specifically configured to:
and if the monitoring time interval is greater than a preset time interval threshold, dividing the original track sections according to a set first time interval to obtain two corresponding actual track sections.
If the monitoring time interval is not greater than the time interval threshold value and the monitoring point identifications of two adjacent monitoring points are different, dividing the original track section according to a set second time interval to obtain two corresponding actual track sections; wherein the first time interval is greater than the second time interval.
And if the monitoring time interval is not larger than the time interval threshold value and the monitoring point identifications of two adjacent monitoring points are the same, directly taking the original track section as an actual track section.
In an optional embodiment, in the process of dividing the original track segment based on the target segment division rule to obtain at least one corresponding actual track segment, the processing module is further configured to:
and if the original track road section is divided based on the target road section division rule to obtain two actual track road sections, respectively allocating monitoring point identifications corresponding to two adjacent monitoring points to the corresponding actual track road sections in the two actual track road sections according to the time sequence of the target object in the two actual track road sections.
In a third aspect, the present application provides an electronic device comprising:
a memory for storing a computer program;
the processor is used for realizing the steps of the image file association method when executing the computer program stored in the memory.
In a fourth aspect, the present application provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the above-mentioned method steps of associating an image archive.
In a fifth aspect, a computer program product is provided, which, when invoked by a computer, causes the computer to perform the method steps of associating an image archive as described in the first aspect.
According to the image archive association method provided by the embodiment of the application, based on the coincidence degree between the monitoring point identifiers of each piece of candidate track information and those of the actual track information within the specified time range, the track similarity between the corresponding candidate track information and the actual track information is obtained, so that target track information meeting a preset similarity condition is selected and the identification information of the target object is associated with the target image archive corresponding to the target track information. In this way, the identification information of the target object is associated with the target image archive corresponding to the target track information according to the track similarity between the candidate track information and the actual track information. This overcomes the technical defect that, when the light during image acquisition is weak and the captured characteristic information of the target object is therefore blurred, the collected track information of the target object cannot be accurately attributed to the image archive of the target object, so the collected track information of the target object can be accurately associated with the corresponding image archive.
Drawings
Fig. 1 schematically illustrates an application scenario to which an embodiment of the present application is applied;
fig. 2 schematically illustrates a flowchart of a method for obtaining an identifier of a monitoring point according to an embodiment of the present application;
fig. 3 is a schematic diagram illustrating an encoding principle provided by an embodiment of the present application;
FIG. 4 is a schematic diagram illustrating a logic diagram based on FIG. 2 according to an embodiment of the present application;
FIG. 5 is a flow chart illustrating a method for associating an image archive according to an embodiment of the present application;
FIG. 6 is a schematic diagram illustrating a logic for acquiring actual trajectory information and candidate trajectory information according to an embodiment of the present disclosure;
fig. 7 exemplarily illustrates a flowchart of a method for obtaining a track similarity according to an embodiment of the present application;
fig. 8 is a schematic flowchart illustrating a method for obtaining an actual track segment according to an embodiment of the present application;
fig. 9 is a logic diagram illustrating an example of selecting a target road segment division rule according to an embodiment of the present application;
FIG. 10 is a schematic diagram illustrating a logic diagram based on FIG. 5 according to an embodiment of the present application;
FIG. 11 is a schematic structural diagram illustrating an apparatus for associating image files according to an embodiment of the present disclosure;
fig. 12 illustrates a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order to accurately associate the acquired track information of the target object with the corresponding image file, in the embodiment of the application, based on the coincidence degree of each candidate track information and the monitoring point identifier of the actual track information within a specified time range, the track similarity between the corresponding candidate track information and the actual track information is respectively obtained, so that the target track information meeting the preset similarity condition is selected, and the identification information of the target object is associated with the target image file corresponding to the target track information.
For a better understanding of the embodiments of the present application, technical terms referred to in the embodiments of the present application will be described first below.
(1) Media access control address: also known as a MAC address, an address used to identify a network device and to uniquely identify a network card in the network; a device may have one or more network cards, and each network card needs a unique MAC address.
(2) Geographic hash coding: that is, Geohash coding, a string encoding that converts latitude and longitude coordinates into a string that can be sorted and compared. The Geohash coding rule is as follows: taking longitude [-180, 180] and latitude [-90, 90] as the ranges, division is performed with the equator and the prime meridian as boundaries. The latitude range [-90, 0) is represented by binary 0 and (0, 90] by binary 1; the longitude range [-180, 0) is represented by binary 0 and (0, 180] by binary 1; the halving is then applied recursively in turn.
Further, the longitude and latitude are converted into binary codes according to the Geohash coding rule; the binary codes of longitude and latitude are then interleaved, with the longitude bits occupying the even positions and the latitude bits the odd positions; finally, the combined binary code is converted into a Geohash code according to the Base32 coding table.
For example, take the longitude and latitude coordinates [116.390705, 39.923201] of monitoring point 1. The latitude 39.923201 belongs to (0, 90], so the first bit is 1; (0, 90] is divided into the two intervals (0, 45] and (45, 90], and 39.923201 lies in (0, 45], so the next bit is 0; (0, 45] is divided into (0, 22.5] and (22.5, 45], and 39.923201 lies in (22.5, 45], so the next bit is 1; and so on. With this coding mode, the binary code of the latitude of monitoring point 1 is 10111000110001111001, and similarly the binary code of its longitude is 11010010110001000100. The two binary codes are then interleaved, with the longitude bits in the even positions and the latitude bits in the odd positions, giving the mixed binary code 1110011101001000111100000011010101100001. If the Base32 coding table uses the 32 characters 0-9 and B-Z (minus A, I, L, O), the Base32 code corresponding to the mixed binary code is WX4G0EC1.
(3) Base32 encoding: encoding binary data into a visible character string. The encoding rule is as follows: the given binary data is segmented into groups of 5 bits, and each group is encoded as 1 visible character. For ease of understanding, the Base32 encoding table herein uses the 32 characters 0-9 and B-Z (minus A, I, L, O).
(4) Radio Frequency Identification (RFID): an automatic identification technology that performs contactless bidirectional data communication by radio frequency and reads and writes a recording medium (an electronic tag or a radio frequency card) by radio frequency, thereby identifying targets and exchanging data.
It should be noted that the naming manner of the technical terms described above is only an example, and the embodiment of the present application does not limit the naming manner of the technical terms described above.
The technical solutions in the embodiments of the present invention will be described clearly and completely with reference to the accompanying drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that "a plurality" is understood as "at least two" in the description of the present application. "And/or" describes an association relationship of associated objects and means that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. "A is connected with B" may mean: A and B are directly connected, or A and B are connected through C. In addition, in the description of the present application, the terms "first", "second", and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or order.
Fig. 1 exemplarily shows an application scenario diagram applied to the embodiment of the present application, and as shown in fig. 1, the application scenario diagram includes: a server 101, a terminal device 102, image capturing devices (103a, 103b, 103c), and a public road 104. Among them, each image acquisition device in the image acquisition devices (103a, 103b, 103c) is arranged on the public road 104 according to a certain spacing distance, and can send the obtained monitoring information to the terminal device 102; in addition, the server 101 and the terminal device 102 may exchange information in a wireless communication manner or a wired communication manner.
Illustratively, the server 101 may communicate with the terminal device 102 by accessing a network via a cellular Mobile communication technology, such as, for example, including a 5th Generation Mobile Networks (5G) technology.
Optionally, the server 101 may access the network via short-range Wireless communication, for example, including Wireless Fidelity (Wi-Fi) technology, to communicate with the terminal device 102.
In the embodiment of the present application, the number of the servers and the other devices is not limited, and fig. 1 only describes one server as an example.
The server 101 is configured to obtain actual trajectory information of the target object and respective candidate trajectory information of each candidate object within a specified time range; then, respectively obtaining the track similarity of the corresponding candidate track information and the actual track information based on the contact ratio of the monitoring point identifier of each candidate track information and the actual track information; further, based on the obtained similarity of each track, target track information meeting a preset similarity condition is selected from the candidate track information; and finally, determining a target image archive corresponding to the target track information, and associating the identification information of the target object with the target image archive.
The terminal device 102 is a device capable of providing voice and/or data connectivity to a user, and includes a handheld terminal device, a vehicle-mounted terminal device, and the like having a wireless connection function.
Illustratively, the terminal device may be: the Mobile terminal Device comprises a Mobile phone, a tablet computer, a notebook computer, a palm computer, a Mobile Internet Device (MID), a wearable Device, a Virtual Reality (VR) Device, an Augmented Reality (AR) Device, a wireless terminal Device in industrial control, a wireless terminal Device in unmanned driving, a wireless terminal Device in a smart grid, a wireless terminal Device in transportation safety, a wireless terminal Device in a smart city, a wireless terminal Device in a smart home, and the like.
It should be noted that the terminal device 102 may sum up the monitoring information obtained by each of the image capturing devices (103a, 103b, 103c) so as to obtain the actual trajectory information of the target object.
The image acquisition equipment (103a, 103b, 103c) is an equipment for acquiring images or recording videos, and comprises a handheld image acquisition equipment with a wireless connection function, a head-mounted image acquisition equipment, a fixed image acquisition equipment and the like.
Illustratively, the image acquisition device may be: a camera, a video camera, a digital camera (DSC), a single lens reflex camera (SLRC), another image acquisition device with a photographing function (a mobile phone, a tablet computer, etc.), a video capture card, and the like. In the embodiment of the present application, the image acquisition device of a monitoring point is described by taking a bayonet (checkpoint) capture device as an example. The bayonet capture device collects the track information sources of the tracks formed by a plurality of traffic objects (including the target object) traveling from a departure area to an arrival area; the track information sources include, but are not limited to: MAC collection, RFID, vehicle information, and the like.
For convenience of description, MAC collection is taken as an example herein: the bayonet capture device may acquire the MAC address of a mobile communication device through a base station or a local area network gateway communicating with that device, and then locate the mobile communication device through the MAC address, so as to acquire the target track information of the target object corresponding to the mobile communication device.
Further, based on the application scene schematic diagram, a monitoring point identifier corresponding to each monitoring point is obtained, as shown in fig. 2, in the embodiment of the present application, a method flow for obtaining the monitoring point identifier includes the following specific steps:
s201: and acquiring the position information corresponding to each monitoring point.
Specifically, when step S201 is executed, the server may obtain, according to a data feature extraction algorithm of the location information, location information corresponding to the corresponding monitoring point from respective feature data sets of each monitoring point included in the original database.
Illustratively, the original database includes a respective feature data set for each bayonet capture device, and each feature data set at least includes the position information of the corresponding bayonet capture device. The server can obtain the position information of the corresponding bayonet capture device from each feature data set based on a feature extraction algorithm for position information, and use it as the position information of the corresponding monitoring point.
S202: and respectively carrying out geographic hash coding on the position information corresponding to each monitoring point to obtain corresponding coding results.
Specifically, when step S202 is executed, after the server obtains the position information corresponding to each monitoring point, the server determines the longitude and latitude coordinates of the corresponding monitoring point according to the data type of the longitude and latitude, so as to obtain the coding result of the corresponding monitoring point according to the Geohash coding.
Illustratively, the server may perform Geohash coding on the position information of the bayonet capture device, encoding the two-dimensional longitude and latitude data into a character string with a specified number of bits. As shown in fig. 3, the basic principle of Geohash is as follows: the earth is treated as a two-dimensional plane, which is recursively decomposed into smaller sub-blocks, where every position within a certain latitude and longitude range of a sub-block has the same coding value. Similar coded character strings indicate that the corresponding bayonet capture devices are close to each other; in most cases, the longer the common prefix of two character strings, the closer the distance.
It is worth noting that the position information of a bayonet capture device is fixed, so by encoding this position information the server can accelerate the subsequent calculation of the track similarity and other related parameters.
S203: and respectively using the obtained coding results as monitoring point identifiers of corresponding monitoring points.
Specifically, when step S203 is executed, the server may directly use each encoding result as the monitoring point identifier corresponding to each monitoring point after obtaining the encoding result corresponding to each monitoring point.
For example, referring to fig. 4, the server obtains, based on a feature extraction algorithm of the location information, location information corresponding to each monitoring point from a feature data set of each monitoring point included in the original database; secondly, respectively acquiring longitude and latitude coordinates of corresponding monitoring points from the acquired position information based on the data type of the longitude and latitude; further, performing Geohash coding on each longitude and latitude coordinate to respectively obtain respective corresponding coding results of corresponding monitoring points; and finally, respectively using the obtained coding results as the monitoring point identifiers of the corresponding monitoring points.
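As a concrete illustration of steps S201 to S203, the following is a minimal Python sketch of the Geohash encoding described above: the recursive binary subdivisions of longitude and latitude are interleaved and 5-bit groups are mapped onto the Base32 alphabet. Python and all names are assumptions rather than code from the patent; the printed result is consistent with the worked example for monitoring point 1.

```python
BASE32 = "0123456789bcdefghjkmnpqrstuvwxyz"  # Base32 alphabet without a, i, l, o

def geohash_encode(lat: float, lon: float, precision: int = 8) -> str:
    lat_range, lon_range = [-90.0, 90.0], [-180.0, 180.0]
    bits = []
    use_lon = True  # longitude bits occupy the even positions, latitude the odd ones
    while len(bits) < precision * 5:
        rng, value = (lon_range, lon) if use_lon else (lat_range, lat)
        mid = (rng[0] + rng[1]) / 2
        if value >= mid:
            bits.append(1)   # value lies in the upper half of the current range
            rng[0] = mid
        else:
            bits.append(0)   # value lies in the lower half of the current range
            rng[1] = mid
        use_lon = not use_lon
    chars = []
    for i in range(0, len(bits), 5):
        index = int("".join(map(str, bits[i:i + 5])), 2)
        chars.append(BASE32[index])
    return "".join(chars)

# Monitoring point 1 from the terminology example: [116.390705, 39.923201]
print(geohash_encode(39.923201, 116.390705))  # "wx4g0ec1" (written WX4G0EC1 above)
```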
Further, based on the foregoing pre-operation processing, after the server obtains the monitoring point identifiers corresponding to the respective monitoring points, referring to fig. 5, in this embodiment of the present application, a flow of an association method for an image archive for actual trajectory information of a target object includes the following specific steps:
s501: and acquiring actual track information of the target object and respective candidate track information of each candidate object in a specified time range.
Specifically, in step S501, the server screens out, from the original database, the respective candidate trajectory information of each candidate object and the actual trajectory information of the target object, which satisfy the specified time range and are obtained by each monitoring point, respectively, based on the specified time range. Wherein, the actual track information includes: at least one monitoring point and its respective monitoring point identification; each candidate trajectory information includes: at least one monitoring point and its respective monitoring point identification.
For example, assume that the specified time range is 2022.01.07-2022.01.10, 12:00-14:00. Referring to fig. 6, the server may select, from the original track information contained in the original database, candidate track information whose time information satisfies the specified time range, and obtain the actual track information of the target object in this range, for example 2022.01.10, 12:02-13:58. It should be noted that the specified time range is set according to the time range corresponding to the actual track information of the target object.
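A minimal sketch of the screening in S501, under the assumption that each original track record carries a timestamp; the record layout and function name are illustrative, not from the patent.

```python
from datetime import datetime, date, time

def in_specified_range(ts: datetime) -> bool:
    # Specified time range from the example: 2022.01.07-2022.01.10, 12:00-14:00
    return (date(2022, 1, 7) <= ts.date() <= date(2022, 1, 10)
            and time(12, 0) <= ts.time() <= time(14, 0))

records = [datetime(2022, 1, 10, 12, 2),   # kept: matches the example actual track
           datetime(2022, 1, 10, 13, 58),  # kept
           datetime(2022, 1, 9, 18, 30)]   # filtered out: outside 12:00-14:00
print([ts.isoformat() for ts in records if in_specified_range(ts)])
```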
S502: and respectively obtaining the track similarity of the corresponding candidate track information and the actual track information based on the overlap ratio of the monitoring point identifier of each candidate track information and the actual track information.
In a possible implementation manner, when step S502 is executed, after obtaining each candidate track information and actual track information, for one candidate track information, according to each candidate track segment included in the candidate track information and the number of target segments corresponding to the same monitoring point identifier as that of the corresponding actual track segment in the actual track information, the server determines the similarity of the monitoring point identifiers of the candidate track information and the actual track information, and further obtains the track similarity of the candidate track information and the actual track information, as shown in fig. 7, the specific steps are as follows:
s701: and respectively acquiring the number of target road sections of which the same monitoring point identifier corresponds to each candidate track road section contained in one candidate track information and the corresponding actual track road section in the actual track information.
Specifically, when step S701 is executed, the server obtains at least one actual track segment from the actual track information based on the monitoring point identifier of each monitoring point, obtains at least one candidate track segment from one candidate track information, and records the number of candidate segments of one candidate track information.
Further, for at least one target track segment, the following operations are respectively performed: the method comprises the steps of obtaining a monitoring point identifier corresponding to a target track road section and monitoring point identifiers corresponding to at least one candidate track road section, and further determining the number of target road sections corresponding to the same monitoring point identifier with an actual track road section in each candidate track road section.
For example, assume that the monitoring point identifier corresponding to the current actual track segment is: Geohash.A, the number of candidate road sections of the candidate track information is 6, wherein the monitoring point identifiers corresponding to the candidate track road sections are as follows in sequence: geohash.c, geohash.a, geohash.b, geohash.a, geohash.e, geohash.d. After the server compares the monitoring point identifiers corresponding to the current track road section with the monitoring point identifiers corresponding to the 6 candidate track road sections contained in the candidate track information, it can be known that the number of target road sections corresponding to the same monitoring point identifiers is 2 in the current actual track road section and the 6 candidate track road sections contained in the candidate track information.
Optionally, after obtaining each actual track road section and each candidate track road section, the server may respectively obtain the total number of target road sections corresponding to the same monitoring point identifier in the actual track information and each candidate track information in the set statistical period in the following manner. Assuming that the time threshold of the set statistical period is T2, the actual trajectory information tra.1 and the candidate trajectory information tra.2 are compared one by one for the trajectory segments within the set statistical period in a traversal manner.
Specifically, it is assumed that the set of actual track segments corresponding to the actual track information Tra.1 in the set statistical period is {A_k1, ..., A_k2}, and the set of candidate track segments corresponding to the candidate track information Tra.2 is {B_k3, ..., B_k4}. The total number of target road sections corresponding to the same monitoring point identifier in the actual track information Tra.1 and the candidate track information Tra.2 obtained by MAC collection is then calculated as follows:
a1 += 1, if geohash(A_i) = geohash(B_j), with k1 <= i <= k2 and k3 <= j <= k4
wherein a1 is the total number of target road sections corresponding to the same monitoring point identifier in the actual track information Tra.1 and the candidate track information Tra.2; geohash(A_i) is the monitoring point identifier corresponding to the i-th actual track segment; and geohash(B_j) is the monitoring point identifier corresponding to the j-th candidate track segment.
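A minimal Python sketch of this counting rule (the function name is an assumption; the patent only gives the accumulation formula above): every pair of an actual track segment and a candidate track segment with the same Geohash identifier contributes 1 to a1.

```python
def count_matching_segments(actual_ids, candidate_ids):
    """Total number of pairs (i, j) with geohash(A_i) == geohash(B_j)."""
    return sum(1 for a in actual_ids for b in candidate_ids if a == b)

actual_ids = ["Geohash.A"]                                  # one actual track segment
candidate_ids = ["Geohash.C", "Geohash.A", "Geohash.B",
                 "Geohash.A", "Geohash.E", "Geohash.D"]     # m = 6 candidate segments
print(count_matching_segments(actual_ids, candidate_ids))   # 2, as in the example above
```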
In a possible implementation manner, referring to fig. 8, in the embodiment of the present application, based on the monitoring point identifier of each monitoring point, at least one actual track section is obtained from actual track information, and the specific steps are as follows:
s801: and acquiring target monitoring information which is acquired by each monitoring point in the actual track information.
Specifically, when step S801 is executed, the server may extract respective target monitoring information of each monitoring point from the actual track information, and further obtain target monitoring time when the corresponding monitoring point monitors the target object from each target monitoring information based on the data type of the monitoring time.
Further, after the server respectively obtains the monitoring time of the corresponding monitoring points for monitoring the target object from the actual track information, the server respectively executes the following operations for two pieces of target monitoring information obtained by every two adjacent monitoring points in each obtained target monitoring information:
s802: and determining the monitoring time interval of the two target monitoring information obtained by the two adjacent monitoring points.
Specifically, when step S802 is executed, a calculation formula of the monitoring time interval of two pieces of target monitoring information obtained by two adjacent monitoring points is as follows:
ΔT=T2-T1
where ΔT is the monitoring time interval, and T1 and T2 are the two target monitoring times, with T1 earlier than T2.
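For clarity, a small sketch of this computation; the datetime values are illustrative assumptions, chosen to match the 28-minute example used in S803 and S804 below.

```python
from datetime import datetime

t1 = datetime(2022, 1, 10, 12, 2)   # earlier target monitoring time T1
t2 = datetime(2022, 1, 10, 12, 30)  # later target monitoring time T2
delta_minutes = (t2 - t1).total_seconds() / 60  # monitoring time interval, delta T = T2 - T1
print(delta_minutes)  # 28.0 -> falls into the second interval zone described in S803
```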
S803: and selecting the target road section division rule from a preset candidate road section division rule set aiming at the original track road sections corresponding to the two target monitoring information based on the time interval to which the monitoring time interval belongs.
Specifically, referring to fig. 9, when step S803 is executed, after obtaining the monitoring time interval, the server selects the target road segment division rule from the preset candidate road segment division rule set based on the time interval section to which the monitoring time interval belongs and the correspondence between time interval sections and candidate road segment division rules. The time intervals can be divided, according to the first time threshold and the second time threshold in sequence, into a first interval, a second interval, and a third interval, where the first time threshold is less than the second time threshold, and the first time threshold characterizes whether the monitoring point identifiers corresponding to the two adjacent monitoring points are the same.
Exemplarily, assuming that the first time threshold is 5 minutes and the second time threshold is 30 minutes, if the monitoring time interval is less than 5 minutes, the monitoring time interval belongs to the first interval, and it is known that the monitoring point identifiers of two adjacent monitoring points are the same; if the monitoring time interval is greater than or equal to 5 minutes and less than 30 minutes, the monitoring time interval belongs to the second interval zone, and the monitoring point identifications of two adjacent monitoring points are known to be different; if the monitoring time interval is greater than 30 minutes, the monitoring time interval belongs to a third interval, and the monitoring point identifications of two adjacent monitoring points are different.
S804: and dividing the original track road section based on the target road section division rule to obtain at least one actual track road section.
Specifically, when step S804 is executed, after the server selects the target road segment division rule corresponding to the monitoring time interval, the server divides the original track road segments corresponding to the two target monitoring information based on the target division rule to obtain one or two actual track road segments. If the monitoring time interval is larger than a preset time interval threshold, dividing the original track sections according to a set first time interval to obtain two corresponding actual track sections; if the monitoring time interval is not greater than the time interval threshold value and the monitoring point identifications of two adjacent monitoring points are different, dividing the original track section according to a set second time interval to obtain two corresponding actual track sections; wherein the first time interval is greater than the second time interval; and if the monitoring time interval is not greater than the time interval threshold value and the monitoring point identifications of two adjacent monitoring points are the same, directly taking the original track section as an actual track section.
Illustratively, still taking the first time threshold as 5 minutes and the second time threshold as 30 minutes as an example, if the monitoring time interval is 52 minutes and it is easy to know that the monitoring time interval belongs to the third interval, dividing the original track road segments according to the first time interval, i.e. 26 minutes, to obtain two corresponding actual track road segments; if the monitoring time interval is 28 minutes and the monitoring time interval belongs to the second interval zone, dividing the original track road section according to the second time interval, namely 14 minutes, and obtaining two corresponding actual track road sections; if the monitoring time interval is 3 minutes, and the monitoring time interval is easy to be known to belong to the first interval, the original track section is directly used as an actual track section.
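The rules of S803 and S804 with these example values can be sketched as follows; the function name, the use of minutes, and the concrete split intervals (26 and 14 minutes) come from the example above, and everything else is an illustrative assumption rather than the patent's implementation.

```python
def divide_original_segment(delta_minutes, same_identifier,
                            interval_threshold=30, first_split=26, second_split=14):
    """Return (number of resulting actual track segments, split interval in minutes)."""
    if delta_minutes > interval_threshold:
        return 2, first_split    # third interval zone: split at the first time interval
    if not same_identifier:
        return 2, second_split   # second interval zone: split at the second time interval
    return 1, None               # first interval zone: keep the original segment as is

print(divide_original_segment(52, False))  # (2, 26)
print(divide_original_segment(28, False))  # (2, 14)
print(divide_original_segment(3, True))    # (1, None)
```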
Optionally, in the process that the server divides the original track road segment based on the target road segment division rule to obtain at least one corresponding actual track road segment, if the original track road segment is divided based on the target road segment division rule to obtain two actual track road segments, the server allocates the monitoring point identifiers corresponding to the two adjacent monitoring points to the corresponding actual track road segments in the two actual track road segments according to the time sequence of the target object appearing in the two actual track road segments.
For example, it is assumed that the monitoring point identifiers corresponding to two adjacent monitoring points are respectively: the method comprises the following steps that Geohash.A and Geohash.B, monitoring points corresponding to Geohash.A detect a target object before monitoring points corresponding to Geohash.B, and if the monitoring time interval between two adjacent monitoring points is 28 minutes, an original track section is divided according to a second time interval, namely 14 minutes, so that two corresponding actual track sections are obtained: and S.G.1 and S.G.2, wherein the target monitoring time corresponding to the actual track section S.G.1 is earlier than the target monitoring time corresponding to the actual track section S.G.2, and further, the monitoring point identifier Geohash.A is distributed to the actual track section S.G.1, and the monitoring point identifier Geohash.B is distributed to the actual track section S.G.2.
Optionally, in the process that the server divides the original track road section based on the target road section division rule to obtain at least one corresponding actual track road section, if the original track road section is divided based on the target road section division rule to obtain one actual track road section, the same monitoring point identifiers corresponding to two adjacent monitoring points are directly allocated to the actual track road section.
For example, it is assumed that the monitoring point identifiers corresponding to two adjacent monitoring points are: and Geohash.C, if the monitoring time interval is 3 minutes, directly taking the original track section as an actual track section S.G.3, and further distributing the monitoring point identifier Geohash.C to the actual track section S.G.3.
Based on the method steps similar to S801 to S804, the server may obtain at least one candidate track segment from one candidate track information based on the monitoring point identifier of each monitoring point, and record the number of candidate segments of one candidate track information, so that the description is omitted.
S702: and determining the contact ratio of the monitoring point identifiers of the candidate track information and the actual track information based on the obtained at least one target road section number and the candidate road section number corresponding to the candidate track information.
Specifically, when step S702 is executed, after the server obtains at least one target road segment number, it sums these numbers to obtain the total number of target road segments a1, and then determines the coincidence degree of the monitoring point identifiers of the candidate track information and the actual track information in combination with the number of candidate road segments m corresponding to the candidate track information, where the calculation formula is as follows:
α = a1/m
wherein α is the coincidence degree of the monitoring point identifiers and can be used as a reference index for subsequently judging the association between the identification information of the target object and the corresponding target image archive; a1 is the total number of target road segments corresponding to the same monitoring point identifier in the actual track information and the candidate track information; and m is the number of candidate road segments corresponding to the candidate track information.
S703: and obtaining the track similarity of the candidate track information and the actual track information based on the contact ratio of the monitoring point identifiers.
For example, when step S703 is executed, after obtaining the overlap ratio of the monitoring point identifiers, the server may obtain the corresponding track similarity based on a conversion formula of the overlap ratio of the monitoring point identifiers, where the conversion formula is specifically as follows:
β=γ×α
where β is the track similarity, α is the coincidence degree of the monitoring point identifiers, and γ is a track similarity conversion factor that can be set according to actual conditions.
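Combining S702 and S703, a minimal sketch of the similarity computation, assuming α = a1/m and β = γ × α as described above; the function name is an assumption.

```python
def track_similarity(a1: int, m: int, gamma: float = 1.0) -> float:
    alpha = a1 / m if m else 0.0   # coincidence degree of the monitoring point identifiers
    return gamma * alpha           # beta = gamma * alpha, the track similarity

print(track_similarity(a1=2, m=6))  # ~0.333 for the worked example in S701
```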
S503: and selecting target track information meeting a preset similarity condition from the candidate track information based on the obtained similarity of each track.
Specifically, when step S503 is executed, after the server obtains each trajectory similarity, at least one candidate similarity is screened out based on the obtained each trajectory similarity and a preset similarity threshold, and then each candidate similarity is sorted, and a target trajectory similarity satisfying a similarity condition is screened out, so that candidate trajectory information corresponding to the target trajectory similarity is determined and is used as the target trajectory information.
Exemplarily, assuming that a preset similarity threshold is t, when α is greater than t, the server may preliminarily consider that corresponding candidate trajectory information is similar to actual trajectory information, where t is a similarity threshold set according to an actual service scenario; and then, at least one candidate similarity which is larger than the similarity threshold is arranged in a descending order according to the sequence of the similarity from large to small, and the maximum track similarity is screened out, so that candidate track information corresponding to the maximum track similarity is determined and is used as target track information.
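A hedged sketch of this selection step (names and values are illustrative): keep the candidates whose track similarity exceeds the threshold t and take the one with the largest similarity as the target track information.

```python
def select_target_track(similarities, t):
    """similarities: dict mapping candidate track id -> track similarity."""
    qualified = {cid: s for cid, s in similarities.items() if s > t}
    if not qualified:
        return None                            # no candidate passes the threshold
    return max(qualified, key=qualified.get)   # candidate with the largest track similarity

print(select_target_track({"Tra.2": 0.33, "Tra.3": 0.83}, t=0.5))  # "Tra.3"
```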
S504: and determining a target image archive corresponding to the target track information, and associating the identification information of the target object with the target image archive.
Specifically, when step S504 is executed, after determining the target trajectory information, the server determines a target image archive corresponding to the target trajectory information from a preset image archive set, and associates the identification information of the target object with the target image archive, where each image archive includes feature information and all trajectory information of a corresponding candidate object.
Illustratively, after determining a target image file corresponding to the target track information, the server associates an MAC ID associated with actual track information obtained by performing MAC acquisition on the mobile communication device carried by the target object with the feature information of the target image file, and further may obtain the feature information and all track information of the target object in the target image file associated with the MAC ID.
Based on the above method steps, referring to fig. 10, the server obtains the track similarity between the corresponding candidate track information and the actual track information based on the actual track information, the candidate track information, and the coincidence degree of the monitoring point identifier of each candidate track information and the actual track information within the specified time range, so as to select the candidate track information meeting the preset similarity condition, and use the candidate track information as the target track information, thereby associating the identifier information of the target object with the target image file corresponding to the target track information.
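Putting the pieces together, the following compact, self-contained sketch mirrors the flow of fig. 10 under simplifying assumptions (Geohash identifiers per segment, one candidate track per image archive, illustrative data and names throughout):

```python
actual_segments = ["wx4g0ec1", "wx4g0ec4", "wx4g0efh"]   # actual track of the target object
candidate_tracks = {                                     # one candidate track per image archive
    "archive_A": ["wx4g0ec1", "wx4g09zz", "wx4g0efh"],
    "archive_B": ["wx4g0b00", "wx4g0b01"],
}
gamma, t = 1.0, 0.5   # track similarity conversion factor and similarity threshold

scores = {}
for archive_id, segments in candidate_tracks.items():
    a1 = sum(1 for a in actual_segments for b in segments if a == b)
    alpha = a1 / len(segments)       # coincidence degree of the monitoring point identifiers
    scores[archive_id] = gamma * alpha

best = max(scores, key=scores.get)
if scores[best] > t:
    print(f"associate the target object's identification (e.g. MAC ID) with {best}")
```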
According to the image archive association method provided by the embodiment of the application, based on the coincidence degree between the monitoring point identifiers of each piece of candidate track information and those of the actual track information within the specified time range, the track similarity between the corresponding candidate track information and the actual track information is obtained, so that target track information meeting a preset similarity condition is selected and the identification information of the target object is associated with the target image archive corresponding to the target track information. In this way, the identification information of the target object is associated with the target image archive corresponding to the target track information according to the track similarity between the candidate track information and the actual track information. This overcomes the technical defect that, when the light during image acquisition is weak and the captured characteristic information of the target object is therefore blurred, the collected track information of the target object cannot be accurately attributed to the image archive of the target object, so the collected track information of the target object can be accurately associated with the corresponding image archive.
Based on the same technical concept, the embodiment of the application also provides a device for associating the image file, and the device for associating the image file can realize the method flow of the embodiment of the application. As shown in fig. 11, the image archive association device includes: an obtaining module 1101, a processing module 1102, a selecting module 1103 and an associating module 1104, wherein:
an obtaining module 1101, configured to obtain actual trajectory information of a target object and respective candidate trajectory information of each candidate object within a specified time range; wherein, the actual track information includes: at least one monitoring point and its respective monitoring point identification; each candidate trajectory information includes: at least one monitoring point and its respective monitoring point identification.
The processing module 1102 is configured to obtain track similarities between the corresponding candidate track information and the actual track information respectively based on respective overlap ratios of the candidate track information and the monitoring point identifiers of the actual track information.
A selecting module 1103, configured to select, from the candidate trajectory information, target trajectory information that meets a preset similarity condition based on the obtained similarity of each trajectory.
And the associating module 1104 is configured to determine a target image archive corresponding to the target track information, and associate the identification information of the target object with the target image archive.
In an alternative embodiment, before acquiring the actual trajectory information of the target object and the candidate trajectory information of each candidate object in the specified time range, the acquiring module 1101 is further configured to:
and acquiring the position information corresponding to each monitoring point.
And respectively carrying out geographic hash coding on the position information corresponding to each monitoring point to obtain corresponding coding results.
And respectively using the obtained coding results as monitoring point identifiers of corresponding monitoring points.
In an optional embodiment, when obtaining the track similarity between each candidate track information and the actual track information based on the overlap ratio of the monitoring point identifier of each candidate track information and the actual track information, respectively, the processing module 1102 is specifically configured to:
for at least one candidate track information, respectively performing the following operations:
and respectively acquiring the number of target road sections corresponding to the same monitoring point identifier with the corresponding actual track road section in the actual track information, wherein each candidate track road section contained in one candidate track information is obtained.
And determining the contact ratio of the monitoring point identifiers of the candidate track information and the actual track information based on the obtained at least one target road section number and the candidate road section number corresponding to the candidate track information.
And obtaining the track similarity of the candidate track information and the actual track information based on the contact ratio of the monitoring point identifiers.
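A rough numerical reading of the two steps above is sketched below; the normalisation (matched segments divided by the number of candidate segments) and the choice of using the coincidence degree directly as the track similarity are assumptions, since the embodiment only states that the similarity is derived from the coincidence degree.

```python
# Illustrative computation of the coincidence degree between the monitoring
# point identifiers of one candidate track and the actual track. Both the
# normalisation and the identity "similarity = coincidence degree" are assumed.

def coincidence_degree(actual_segment_ids, candidate_segment_ids):
    if not candidate_segment_ids:
        return 0.0
    actual = set(actual_segment_ids)
    matched = sum(1 for mid in candidate_segment_ids if mid in actual)
    return matched / len(candidate_segment_ids)

def track_similarity(actual_segment_ids, candidate_segment_ids):
    # Here the similarity is simply taken to be the coincidence degree.
    return coincidence_degree(actual_segment_ids, candidate_segment_ids)

# Example: 3 of the 4 candidate segments reuse an identifier from the actual
# track, so the similarity is 0.75.
print(track_similarity(["wtw3s", "wtw3t", "wtw3u"],
                       ["wtw3s", "wtw3t", "wtw3u", "wtw9q"]))  # prints 0.75
```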
In an alternative embodiment, when acquiring, for each candidate track segment included in one piece of candidate track information, the number of target segments corresponding to the same monitoring point identifier as the corresponding actual track segment in the actual track information, the processing module 1102 is specifically configured to:
and acquiring at least one actual track section from the actual track information based on the monitoring point identification of each monitoring point, acquiring at least one candidate track section from one candidate track information, and recording the number of the candidate sections of the candidate track information.
For at least one target track segment, the following operations are respectively performed:
and acquiring a monitoring point identifier corresponding to a target track section and monitoring point identifiers corresponding to at least one candidate track section.
And determining the number of target road sections corresponding to the same monitoring point identifier in each candidate track road section with one actual track road section.
In an optional embodiment, when obtaining at least one actual track segment from the actual track information based on the monitoring point identifier of each monitoring point, the processing module 1102 is specifically configured to:
acquiring the target monitoring information collected by each monitoring point in the actual track information; wherein each piece of target monitoring information at least includes: the target monitoring time at which the corresponding monitoring point monitored the target object.
For the two pieces of target monitoring information obtained by every two adjacent monitoring points among the obtained target monitoring information, the following operations are respectively performed:
and determining the monitoring time interval of the two target monitoring information obtained by the two adjacent monitoring points.
selecting, based on the time interval to which the monitoring time interval belongs, a target road section division rule from a preset candidate road section division rule set for the original track road section corresponding to the two pieces of target monitoring information.
And dividing the original track road section based on the target road section division rule to obtain at least one actual track road section.
In an alternative embodiment, when the original track segment is divided based on the target segment division rule to obtain at least one actual track segment, the processing module 1102 is specifically configured to:
if the monitoring time interval is larger than a preset time interval threshold, dividing the original track sections according to a set first time interval to obtain two corresponding actual track sections.
If the monitoring time interval is not greater than the time interval threshold value and the monitoring point identifications of two adjacent monitoring points are different, dividing the original track section according to a set second time interval to obtain two corresponding actual track sections; wherein the first time interval is greater than the second time interval.
And if the monitoring time interval is not larger than the time interval threshold value and the monitoring point identifications of two adjacent monitoring points are the same, directly taking the original track section as an actual track section.
In an optional embodiment, in the process of dividing the original track segment based on the target segment division rule to obtain at least one actual track segment, the processing module 1102 is further configured to:
if two actual track road sections are obtained by dividing the original track road section based on the target road section division rule, the monitoring point identifiers corresponding to the two adjacent monitoring points are respectively allocated to the corresponding ones of the two actual track road sections according to the time sequence in which the target object passed through them.
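To make the three-branch division rule and the identifier allocation above easier to follow, here is a schematic sketch; the numeric values (a 600 s interval threshold, 120 s and 60 s split intervals) and the way the split point is chosen are assumptions for illustration only.

```python
# Schematic version of the original-track-segment division rule. The numeric
# values (600 s threshold, 120 s / 60 s split intervals) and the choice of the
# split point are illustrative assumptions; only the three-branch structure and
# the time-ordered identifier allocation mirror the embodiment.

def divide_original_segment(info_a, info_b,
                            interval_threshold=600,
                            first_interval=120,
                            second_interval=60):
    """info_a, info_b: (monitoring_point_id, monitoring_time_in_seconds) for two
    adjacent monitoring points, in time order. Returns actual track segments as
    (assigned_monitoring_point_id, start_time, end_time) tuples."""
    (id_a, t_a), (id_b, t_b) = info_a, info_b
    gap = t_b - t_a
    if gap > interval_threshold:
        step = first_interval        # long gap: split using the first time interval
    elif id_a != id_b:
        step = second_interval       # short gap, different identifiers: finer split
    else:
        return [(id_a, t_a, t_b)]    # short gap, same identifier: keep one segment
    split_point = t_a + step
    # Identifiers are allocated to the two segments in the time order in which
    # the target object passed the two adjacent monitoring points.
    return [(id_a, t_a, split_point), (id_b, split_point, t_b)]

# e.g. divide_original_segment(("wtw3s", 0), ("wtw3t", 700))
#  -> [("wtw3s", 0, 120), ("wtw3t", 120, 700)]
```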
Based on the same technical concept, an embodiment of the application further provides an electronic device capable of implementing the method flows provided by the foregoing embodiments. In one embodiment, the electronic device may be a server, a terminal device, or another electronic device. As shown in fig. 12, the electronic device may include:
at least one processor 1201 and a memory 1202 connected to the at least one processor 1201. In this embodiment of the application, the specific connection medium between the processor 1201 and the memory 1202 is not limited; fig. 12 takes the case where the processor 1201 and the memory 1202 are connected by a bus 1200 as an example. The bus 1200 is shown by a thick line in fig. 12, and this way of depicting the connections between components is merely illustrative and not limiting. The bus 1200 may be divided into an address bus, a data bus, a control bus and the like; for ease of illustration, only one thick line is shown in fig. 12, which does not mean that there is only one bus or only one type of bus. Alternatively, the processor 1201 may also be referred to as a controller, and its name is not limited here.
In the embodiment of the present application, the memory 1202 stores instructions executable by the at least one processor 1201, and the at least one processor 1201 can execute the instructions stored in the memory 1202 to perform a method for associating an image file as discussed above. The processor 1201 may implement the functions of the respective modules in the apparatus shown in fig. 11.
The processor 1201 is a control center of the apparatus, and may connect various parts of the entire control device by using various interfaces and lines, and perform various functions and process data of the apparatus by operating or executing instructions stored in the memory 1202 and calling data stored in the memory 1202, thereby performing overall monitoring of the apparatus.
In one possible design, the processor 1201 may include one or more processing units, and the processor 1201 may integrate an application processor, which primarily handles operating systems, user interfaces, application programs, and the like, and a modem processor, which primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 1201. In some embodiments, the processor 1201 and the memory 1202 may be implemented on the same chip, or in some embodiments, they may be implemented separately on separate chips.
The processor 1201 may be a general-purpose processor, such as a CPU (Central Processing Unit), a digital signal processor, an application-specific integrated circuit, a field-programmable gate array or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components, and may implement or perform the methods, steps, and logical blocks disclosed in the embodiments of the present application. A general-purpose processor may be a microprocessor or any conventional processor. The steps of the image archive association method disclosed in the embodiments of the present application may be directly performed by a hardware processor, or performed by a combination of hardware and software modules in the processor.
By programming the processor 1201, the code corresponding to the method for associating an image archive described in the foregoing embodiment may be solidified in the chip, so that the chip can execute the steps of the method for associating an image archive of the embodiment shown in fig. 5 when running. How the processor 1201 is programmed is well known to those skilled in the art and will not be described in detail herein.
Based on the same inventive concept, the present application further provides a storage medium storing computer instructions, which when executed on a computer, cause the computer to perform the method for associating an image archive as discussed above.
In some possible embodiments, aspects of the image archive association method provided by the present application may also be implemented in the form of a program product comprising program code which, when the program product is run on an apparatus, causes a control device to carry out the steps of the image archive association method according to the various exemplary embodiments of the present application described above in this specification.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Claims (17)
1. A method for associating an image archive, comprising:
acquiring actual track information of a target object and respective candidate track information of each candidate object within a specified time range; wherein the actual track information includes: at least one monitoring point and its respective monitoring point identification; each candidate track information includes: at least one monitoring point and its respective monitoring point identification;
respectively obtaining the track similarity of the corresponding candidate track information and the actual track information based on the coincidence degree of the monitoring point identifiers of each candidate track information and the actual track information;
based on the obtained similarity of each track, selecting target track information meeting a preset similarity condition from the candidate track information;
and determining a target image archive corresponding to the target track information, and associating the identification information of the target object with the target image archive.
2. The method of claim 1, wherein before the acquiring of the actual track information of the target object and the respective candidate track information of each candidate object within the specified time range, the method further comprises:
acquiring position information corresponding to each monitoring point;
respectively carrying out geographic hash coding on the position information corresponding to each monitoring point to obtain corresponding coding results;
and respectively using the obtained coding results as monitoring point identifiers of corresponding monitoring points.
3. The method of claim 1, wherein the obtaining the track similarity between the corresponding candidate track information and the actual track information based on the coincidence degree of the monitoring point identifier of each candidate track information and the actual track information respectively comprises:
for the at least one candidate track information, respectively performing the following operations:
respectively acquiring, for each candidate track road section contained in one piece of candidate track information, the number of target road sections corresponding to the same monitoring point identifier as the corresponding actual track road section in the actual track information;
determining the coincidence degree of the monitoring point identifiers of the candidate track information and the actual track information based on the obtained number of at least one target road section and the number of candidate road sections corresponding to the candidate track information;
and obtaining the track similarity of the candidate track information and the actual track information based on the coincidence degree of the monitoring point identifiers.
4. The method as claimed in claim 3, wherein the obtaining, for each candidate track road section included in the one candidate track information, of the number of target road sections corresponding to the same monitoring point identifier as the corresponding actual track road section in the actual track information comprises:
acquiring at least one actual track section from the actual track information based on the monitoring point identification of each monitoring point, acquiring at least one candidate track section from the candidate track information, and recording the number of the candidate sections of the candidate track information;
for the at least one target track segment, respectively performing the following operations:
acquiring a monitoring point identifier corresponding to a target track section and monitoring point identifiers corresponding to the at least one candidate track section;
and determining the number of target road sections corresponding to the same monitoring point identifier in each candidate track road section with the actual track road section.
5. The method of claim 4, wherein obtaining at least one actual track segment from the actual track information based on the respective monitoring point identification for each monitoring point comprises:
acquiring target monitoring information which is acquired by each monitoring point in the actual track information; wherein each target monitoring information at least comprises: monitoring the target monitoring time of the target object by the corresponding monitoring point;
aiming at two pieces of target monitoring information obtained by every two adjacent monitoring points in each piece of obtained target monitoring information, the following operations are respectively executed:
determining the monitoring time interval of two target monitoring information obtained by two adjacent monitoring points;
selecting a target road section division rule from a preset candidate road section division rule set aiming at the original track road sections corresponding to the two target monitoring information based on the time interval to which the monitoring time interval belongs;
and dividing the original track road section based on the target road section division rule to obtain at least one actual track road section.
6. The method of claim 5, wherein the dividing the original track segment based on the target segment division rule to obtain at least one actual track segment comprises:
if the monitoring time interval is larger than a preset time interval threshold, dividing the original track section according to a set first time interval to obtain two corresponding actual track sections;
if the monitoring time interval is not greater than the time interval threshold value and the monitoring point identifications of the two adjacent monitoring points are different, dividing the original track section according to a set second time interval to obtain two corresponding actual track sections; wherein the first time interval is greater than the second time interval;
and if the monitoring time interval is not larger than the time interval threshold value and the monitoring point identifications of the two adjacent monitoring points are the same, directly taking the original track section as an actual track section.
7. The method as claimed in claim 5 or 6, wherein the step of dividing the original track segment based on the target segment division rule to obtain at least one actual track segment further comprises:
and if the original track road section is divided based on the target road section division rule to obtain two actual track road sections, respectively allocating monitoring point identifications corresponding to the two adjacent monitoring points to the corresponding actual track road sections in the two actual track road sections according to the time sequence of the target object in the two actual track road sections.
8. An apparatus for associating an image archive, comprising:
The acquisition module is used for acquiring actual track information of the target object and candidate track information of each candidate object in a specified time range; wherein the actual trajectory information includes: at least one monitoring point and its respective monitoring point identification; each candidate trajectory information includes: at least one monitoring point and its respective monitoring point identification;
the processing module is used for respectively obtaining the track similarity of the corresponding candidate track information and the actual track information based on the coincidence degree of the monitoring point identifiers of each candidate track information and the actual track information;
the selection module is used for selecting target track information meeting a preset similarity condition from the candidate track information based on the obtained track similarity;
and the association module is used for determining a target image archive corresponding to the target track information and associating the identification information of the target object with the target image archive.
9. The apparatus of claim 8, wherein before the obtaining of the actual track information of the target object and the respective candidate track information of each candidate object within the specified time range, the acquisition module is further configured to:
acquiring position information corresponding to each monitoring point;
respectively carrying out geographical hash coding on the position information corresponding to each monitoring point to obtain corresponding coding results;
and respectively using the obtained coding results as monitoring point identifiers of corresponding monitoring points.
10. The apparatus according to claim 8, wherein when the track similarity between the corresponding candidate track information and the actual track information is obtained based on a coincidence degree of the monitoring point identifier of each candidate track information and the actual track information, respectively, the processing module is specifically configured to:
for the at least one candidate track information, respectively performing the following operations:
respectively acquiring, for each candidate track road section contained in one piece of candidate track information, the number of target road sections corresponding to the same monitoring point identifier as the corresponding actual track road section in the actual track information;
determining the coincidence degree of the monitoring point identifiers of the candidate track information and the actual track information based on the obtained number of the at least one target road section and the number of the candidate road sections corresponding to the candidate track information;
and obtaining the track similarity of the candidate track information and the actual track information based on the coincidence degree of the monitoring point identifiers.
11. The apparatus according to claim 10, wherein when the number of target road segments corresponding to the same waypoint identifier as that of a corresponding actual track road segment in the actual track information is obtained for each candidate track road segment included in the one candidate track information, the processing module is specifically configured to:
acquiring at least one actual track section from the actual track information based on the monitoring point identification of each monitoring point, acquiring at least one candidate track section from the candidate track information, and recording the number of the candidate sections of the candidate track information;
for the at least one target track segment, respectively performing the following operations:
acquiring a monitoring point identifier corresponding to a target track section and monitoring point identifiers corresponding to the at least one candidate track section;
and determining the number of target road sections corresponding to the same monitoring point identifier in each candidate track road section with the actual track road section.
12. The apparatus according to claim 11, wherein, when obtaining at least one actual track segment from the actual track information based on the monitoring point identifier of each monitoring point, the processing module is specifically configured to:
acquiring target monitoring information which is acquired by each monitoring point in the actual track information; wherein each target monitoring information at least comprises: monitoring the target monitoring time of the target object by the corresponding monitoring point;
aiming at two pieces of target monitoring information obtained by every two adjacent monitoring points in each piece of obtained target monitoring information, the following operations are respectively executed:
determining the monitoring time interval of two target monitoring information obtained by two adjacent monitoring points;
selecting a target road section division rule from a preset candidate road section division rule set aiming at the original track road sections corresponding to the two target monitoring information based on the time interval to which the monitoring time interval belongs;
and dividing the original track road section based on the target road section division rule to obtain at least one actual track road section.
13. The apparatus according to claim 12, wherein, when the original track segment is divided based on the target segment division rule to obtain at least one actual track segment, the processing module is specifically configured to:
if the monitoring time interval is larger than a preset time interval threshold, dividing the original track road sections according to a set first time interval to obtain two corresponding actual track road sections;
if the monitoring time interval is not greater than the time interval threshold value and the monitoring point identifications of the two adjacent monitoring points are different, dividing the original track section according to a set second time interval to obtain two corresponding actual track sections; wherein the first time interval is greater than the second time interval;
and if the monitoring time interval is not larger than the time interval threshold value and the monitoring point identifications of the two adjacent monitoring points are the same, directly taking the original track section as an actual track section.
14. The apparatus according to claim 12 or 13, wherein in the process of dividing the original track segment based on the target segment division rule to obtain the corresponding at least one actual track segment, the processing module is further configured to:
and if the original track road section is divided based on the target road section division rule to obtain two actual track road sections, respectively allocating monitoring point identifications corresponding to the two adjacent monitoring points to the corresponding actual track road sections in the two actual track road sections according to the time sequence of the target object in the two actual track road sections.
15. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the method according to any of claims 1-7 when executing the computer program.
16. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 7.
17. A computer program product which, when run on a computer, causes the computer to perform the method of any one of claims 1-7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210186152.XA CN114549873A (en) | 2022-02-28 | 2022-02-28 | Image archive association method and device, electronic equipment and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210186152.XA CN114549873A (en) | 2022-02-28 | 2022-02-28 | Image archive association method and device, electronic equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114549873A true CN114549873A (en) | 2022-05-27 |
Family
ID=81680025
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210186152.XA Pending CN114549873A (en) | 2022-02-28 | 2022-02-28 | Image archive association method and device, electronic equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114549873A (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114926795A (en) * | 2022-07-19 | 2022-08-19 | 深圳前海中电慧安科技有限公司 | Method, device, equipment and medium for determining information relevance |
CN116543356A (en) * | 2023-07-05 | 2023-08-04 | 青岛国际机场集团有限公司 | Track determination method, track determination equipment and track determination medium |
CN116543356B (en) * | 2023-07-05 | 2023-10-27 | 青岛国际机场集团有限公司 | Track determination method, track determination equipment and track determination medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||