CN113570635A - Target motion trajectory restoration method and device, electronic equipment and storage medium - Google Patents

Target motion trajectory restoration method and device, electronic equipment and storage medium Download PDF

Info

Publication number
CN113570635A
Authority
CN
China
Prior art keywords
target
probability
clustering
clustering result
original
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110621363.7A
Other languages
Chinese (zh)
Other versions
CN113570635B (en)
Inventor
王凯垚
周明伟
陈立力
何林强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd filed Critical Zhejiang Dahua Technology Co Ltd
Priority to CN202110621363.7A priority Critical patent/CN113570635B/en
Publication of CN113570635A publication Critical patent/CN113570635A/en
Application granted granted Critical
Publication of CN113570635B publication Critical patent/CN113570635B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/207 Analysis of motion for motion estimation over a hierarchy of resolutions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/23 Clustering techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30241 Trajectory

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a target motion trajectory restoration method and device, an electronic device, and a storage medium. The target motion trajectory restoration method comprises the following steps: obtaining state transition parameters based on the historical motion trajectory of a target, and obtaining an original clustering result of the target and the current motion trajectory corresponding to the original clustering result; judging, according to the state transition parameters, whether a clustering omission exists in the current motion trajectory; and, in response to a clustering omission existing in the current motion trajectory, clustering the images corresponding to the missed positions based on the original clustering result to obtain a clustering result, and obtaining the corrected motion trajectory of the target according to the clustering result. The method can obtain a complete motion trajectory and remedies the incomplete trajectory information caused by missed records, that is, captures of the target that fail to be clustered into its archive.

Description

Target motion trajectory restoration method and device, electronic equipment and storage medium
Technical Field
The invention relates to the technical field of target identification, and in particular to a target motion trajectory restoration method and device, an electronic device, and a storage medium.
Background
With the continuous progress of society, identity recognition technology has become one of the key means used by public security agencies to solve cases. Based on the snapshot pictures captured by the cameras in an area over a period of time and on archive data, deep learning technology can be used for identity recognition: all snapshot pictures of the same target within a specified period can be gathered and displayed to form trajectory information that helps solve the case. The prior art mostly relies directly on deep learning to extract portrait features from the pictures and then performs portrait clustering and identity archiving. However, because of factors such as the capture angle and image definition under different conditions, the similarity of images of the same portrait is sometimes slightly lower than the clustering threshold, so that the images cannot be successfully clustered. This causes missed records, and the trajectory information obtained is therefore incomplete.
Disclosure of Invention
The invention provides a target motion trajectory restoration method and device, an electronic device, and a storage medium. The method can obtain a complete motion trajectory and remedies the incomplete trajectory information caused by missed records.
In order to solve the above technical problem, a first technical solution provided by the present invention is a target motion trajectory restoration method comprising the following steps: obtaining state transition parameters based on the historical motion trajectory of a target, and obtaining an original clustering result of the target and the current motion trajectory corresponding to the original clustering result; judging, according to the state transition parameters, whether a clustering omission exists in the current motion trajectory; and, in response to a clustering omission existing in the current motion trajectory, clustering the images corresponding to the missed positions based on the original clustering result to obtain a clustering result, and obtaining the corrected motion trajectory of the target according to the clustering result.
The step of judging, according to the state transition parameters, whether a clustering omission exists in the current motion trajectory includes: acquiring a first probability from the state transition parameters, wherein the first probability is the probability that the target goes from a first position to a second position; and judging whether a clustering omission exists in the current motion trajectory based on the first probability.
The step of judging whether a clustering omission exists in the current motion trajectory based on the first probability includes: comparing the first probability with a preset threshold; and judging that a clustering omission exists in the current motion trajectory in response to the first probability being smaller than the preset threshold.
The step of clustering the images corresponding to the missed positions based on the original clustering result and obtaining the corrected motion trajectory of the target according to the clustering result includes: determining, based on the state transition parameters, the snapshot time period during which the target appears at the missed position, and acquiring the snapshot images of that time period; and clustering the snapshot images of the snapshot time period based on the original clustering result, and obtaining the corrected motion trajectory of the target according to the clustering result.
The step of determining, based on the state transition parameters, the snapshot time period during which the target appears at the missed position includes: acquiring a second probability and a third probability from the state transition parameters, wherein the second probability is the probability that the target goes from the first position to a third position, and the third probability is the probability that the target goes from the third position to the second position; and determining the snapshot time period during which the target appears at the missed position based on the second probability and the third probability.
The step of determining the snapshot time period during which the target appears at the missed position based on the second probability and the third probability includes: comparing the second probability and the third probability with a preset threshold; and, in response to the second probability and the third probability both being greater than the preset threshold, determining that the snapshot time period runs from the snapshot time corresponding to the first position to the snapshot time corresponding to the second position.
The step of obtaining the original clustering result of the target and the current motion trajectory corresponding to the original clustering result includes: clustering images captured by a shooting device to obtain an original clustering result set of a plurality of detection targets, and obtaining the original clustering result of the target from the original clustering result set; and sorting the images in the original clustering result according to their shooting times to obtain the current motion trajectory.
The step of sorting the images in the original clustering result according to their shooting times to obtain the current motion trajectory includes: sorting the image data in the original clustering result according to the shooting time of the images, and encoding the position information corresponding to each image data to obtain the current motion trajectory.
The step of encoding the position information corresponding to each image data to obtain the current motion trajectory includes: encoding the position information corresponding to each image data to obtain coding blocks; and deduplicating the coding blocks to obtain the current motion trajectory.
The step of encoding the position information corresponding to each image data includes: encoding the position information corresponding to each image data by using the geohash encoding method, wherein the position information corresponding to an image is the longitude and latitude position information of the shooting device that captured it.
In order to solve the above technical problem, a second technical solution provided by the present invention is a target motion trajectory restoration device comprising: an acquisition module, configured to obtain state transition parameters based on the historical motion trajectory of a target and to obtain an original clustering result of the target and the current motion trajectory corresponding to the original clustering result; a judging module, configured to judge, according to the state transition parameters, whether a clustering omission exists in the current motion trajectory; and a clustering module, configured to, in response to a clustering omission existing in the current motion trajectory, cluster the images corresponding to the missed positions based on the original clustering result to obtain a clustering result, and obtain the corrected motion trajectory of the target according to the clustering result.
In order to solve the above technical problem, a third technical solution provided by the present invention is an electronic device comprising a memory and a processor, wherein the memory stores program instructions and the processor retrieves the program instructions from the memory to perform any of the above methods.
In order to solve the above technical problem, a fourth technical solution provided by the present invention is a computer-readable storage medium storing a program file executable to implement any of the above methods.
The beneficial effects of the invention are as follows. Unlike the prior art, the method obtains state transition parameters based on the historical motion trajectory of the target, obtains an original clustering result of the target and the current motion trajectory corresponding to the original clustering result, judges according to the state transition parameters whether a clustering omission exists in the current motion trajectory, and, in response to a clustering omission existing in the current motion trajectory, clusters the images corresponding to the missed positions based on the original clustering result and obtains the corrected motion trajectory of the target according to the clustering result. The method can obtain a complete motion trajectory and remedies the incomplete trajectory information caused by missed records.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed for describing the embodiments are briefly introduced below. The drawings described below are only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without inventive effort:
FIG. 1 is a flowchart illustrating a target motion trajectory restoration method according to an embodiment of the present invention;
FIG. 2 is a schematic view of a current motion trajectory of the present invention;
FIG. 3 is a schematic structural diagram of an embodiment of a target motion trajectory restoration device according to the present invention;
FIG. 4 is a schematic structural diagram of an electronic device according to an embodiment of the invention;
FIG. 5 is a schematic structural diagram of an embodiment of a computer-readable storage medium according to the present invention.
Detailed Description
The invention provides a target motion trajectory restoration method, which can obtain a complete motion trajectory and remedies the incomplete trajectory information caused by missed records. The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings in the embodiments of the present application. The described embodiments are obviously only a part of the embodiments of the present application, not all of them. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present application.
Referring to fig. 1, a schematic flow chart of a first embodiment of the target motion trajectory restoration method of the present invention specifically includes:
Step S11: obtain state transition parameters based on the historical motion trajectory of the target, and obtain an original clustering result of the target and the current motion trajectory corresponding to the original clustering result.
Specifically, historical trip information of the target can be collected, and the state transition parameter can be constructed based on the historical motion trajectory formed by the locations the target passed through. Specifically, the state transition parameter is a Markov state transition matrix. The longitude and latitude information of each location where the target appears, namely the longitude and latitude of the shooting device that captured the target at that location, is encoded with the geohash encoding method to obtain a coding block. Further, the coding blocks may be deduplicated: if three or more identical coding blocks, that is, identical locations, occur consecutively, only the first and last snapshot records of that location are retained. For example, if coding blocks H_j = H_j+1 = H_j+2 occur consecutively in the historical motion trajectory of target A, only the snapshot data corresponding to H_j and the snapshot data corresponding to H_j+2 are retained.
And obtaining a state transition matrix of the target A based on the historical motion trail of the target A. For example, the same code block as Hj appears n times in the motion trail of the target A, wherein n is total1Second-time snapshot address HjThe next snapshot address is Hk1In total, n2Second-time snapshot address HjThe next snapshot address is Hk2Wherein n is1+n2N, then object a is currently at snapshot address HjAnd the next snapshot address is Hk1Has a probability of P (H)k1/Hj)=n1V, the target A is currently at the snapshot address HjAnd the next snapshot address is Hk2Has a probability of P (H)k2/Hj)=n2/. The state transition matrix obtained by the method is as follows:
P =
[ P(H_1|H_1)  P(H_2|H_1)  …  P(H_N|H_1) ]
[ P(H_1|H_2)  P(H_2|H_2)  …  P(H_N|H_2) ]
[     …           …       …       …     ]
[ P(H_1|H_N)  P(H_2|H_N)  …  P(H_N|H_N) ]
where the entry in row j and column k is P(H_k|H_j), the probability that the target, currently at coding block H_j, is next captured at coding block H_k, and N is the number of distinct coding blocks.
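For illustration, the following Python sketch shows one way such a state transition matrix could be estimated from a deduplicated sequence of historical coding blocks. The function and variable names (build_transition_matrix, historical_blocks) are illustrative assumptions rather than part of the patent, and the matrix is kept as a nested dictionary of probabilities P(H_k | H_j).

```python
from collections import defaultdict

def build_transition_matrix(historical_blocks):
    """Estimate P(next block | current block) from a deduplicated
    sequence of geohash coding blocks, e.g. ["wx4g0e", "wx4g0s", ...]."""
    counts = defaultdict(lambda: defaultdict(int))
    for current, nxt in zip(historical_blocks, historical_blocks[1:]):
        counts[current][nxt] += 1          # count each observed transition

    transition = {}
    for current, nxt_counts in counts.items():
        total = sum(nxt_counts.values())   # n occurrences of H_j as current block
        transition[current] = {
            nxt: cnt / total               # P(H_k | H_j) = n_k / n
            for nxt, cnt in nxt_counts.items()
        }
    return transition

# Example: "H_j" is followed twice by "H_k1" and once by "H_k2",
# so P(H_k1 | H_j) = 2/3 and P(H_k2 | H_j) = 1/3.
matrix = build_transition_matrix(["H_j", "H_k1", "H_j", "H_k1", "H_j", "H_k2"])
```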
The original clustering result of the target and the current motion trajectory corresponding to the original clustering result are then acquired. Specifically, clustering is performed on the images captured by the shooting devices to obtain an original clustering result set of a plurality of detection targets, and the original clustering result of the target is obtained from this set; the images in the original clustering result are then sorted according to their shooting times to obtain the current motion trajectory.
In a specific embodiment, images captured over the last N days by all checkpoint shooting devices in the city that require target clustering can be selected, and all targets appearing in those N days are clustered using these images to obtain an original clustering result set of all targets. The original clustering result set contains the original clustering results of a plurality of detection targets, and the original clustering result corresponding to the target whose trajectory is to be restored is selected from it. It should be noted that each image carries its shooting time, and the images in the original clustering result are sorted according to the shooting time of each image so as to obtain the current motion trajectory of the target. It should also be noted that the position information of the shooting device corresponding to each image, namely the longitude and latitude coordinates of that shooting device, is further obtained; by sorting the position information of the shooting devices together with the images according to shooting time, the motion trajectory of the target can be obtained. For example:
Trajectory(A) = {(A_1, T_1, G_1), (A_2, T_2, G_2), …, (A_m, T_m, G_m), …}
where A_m is the m-th snapshot picture of target A, T_m is the snapshot time corresponding to the m-th snapshot picture, and G_m is the longitude and latitude of the shooting device corresponding to the m-th snapshot picture.
In a specific embodiment, the position information corresponding to the shooting device of each image may be further obtained, and after the image data in the original clustering result is sorted according to the shooting time of the image, the position information corresponding to each image data is encoded, so as to obtain the current motion trajectory. For example, in an embodiment, the location information corresponding to each image data may be encoded by using a geohash encoding method; the position information corresponding to the image data is longitude and latitude position information of the shooting device corresponding to the image data.
The geohash coding method is commonly used to convert two-dimensional longitude and latitude into a character string and comprises two steps: the first step is binary encoding of the latitude and longitude, and the second step is base-32 transcoding. Specifically, the current motion trajectory obtained after encoding is:
Trajectory(A) = {(A_1, T_1, H_1), (A_2, T_2, H_2), …, (A_m, T_m, H_m), …}, where H_m is the coding block obtained by geohash-encoding the position G_m.
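As a concrete illustration of the two-step geohash encoding just described (bit interleaving of longitude and latitude followed by base-32 transcoding) and of how sorted snapshot records can be turned into the encoded trajectory above, here is a minimal Python sketch. The precision of 7 characters and the tuple layout (image_id, capture_time, (lat, lon)) are assumptions for illustration.

```python
_BASE32 = "0123456789bcdefghjkmnpqrstuvwxyz"

def geohash_encode(lat, lon, precision=7):
    """Minimal geohash: interleave longitude/latitude bits, then base-32 encode."""
    lat_range, lon_range = [-90.0, 90.0], [-180.0, 180.0]
    bits, code = [], []
    even = True                          # geohash starts with a longitude bit
    while len(code) < precision:
        if even:
            mid = (lon_range[0] + lon_range[1]) / 2
            if lon >= mid:
                bits.append(1); lon_range[0] = mid
            else:
                bits.append(0); lon_range[1] = mid
        else:
            mid = (lat_range[0] + lat_range[1]) / 2
            if lat >= mid:
                bits.append(1); lat_range[0] = mid
            else:
                bits.append(0); lat_range[1] = mid
        even = not even
        if len(bits) == 5:               # every 5 bits map to one base-32 character
            code.append(_BASE32[int("".join(map(str, bits)), 2)])
            bits = []
    return "".join(code)

def encode_trajectory(snapshots):
    """snapshots: list of (image_id, capture_time, (lat, lon)) tuples.
    Returns the current trajectory sorted by capture time, with each
    camera position replaced by its geohash coding block H_m."""
    ordered = sorted(snapshots, key=lambda s: s[1])
    return [(img, t, geohash_encode(*pos)) for img, t, pos in ordered]
```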
In a particular embodiment, the coding blocks also need to be deduplicated. Specifically, if three or more identical coding blocks, that is, identical locations, occur consecutively, only the first and last snapshot records of that location are retained. For example, if coding blocks H_j = H_j+1 = H_j+2 occur consecutively in the motion trajectory of target A, only the snapshot data corresponding to H_j and the snapshot data corresponding to H_j+2 are retained. The current motion trajectory of the target is obtained after deduplication.
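A minimal sketch of the deduplication rule described above, keeping only the first and last snapshot records of any run of three or more identical coding blocks, might look as follows; the tuple layout is the same illustrative assumption as in the previous sketch.

```python
def deduplicate_blocks(trajectory):
    """Collapse runs of three or more identical coding blocks so that only
    the first and last snapshot records of that location are retained.
    trajectory: list of (image_id, capture_time, coding_block) tuples."""
    result, i = [], 0
    while i < len(trajectory):
        j = i
        while j < len(trajectory) and trajectory[j][2] == trajectory[i][2]:
            j += 1                              # j is one past the end of the run
        run = trajectory[i:j]
        if len(run) >= 3:
            result.extend([run[0], run[-1]])    # keep first and last records only
        else:
            result.extend(run)
        i = j
    return result
```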
Step S12: judge, according to the state transition parameters, whether a clustering omission exists in the current motion trajectory.
Specifically, the state transition parameter indicates the probability that, given that the current snapshot address of the target is position a, its next snapshot position is position b, and whether a clustering omission exists in the current motion trajectory can be judged based on this probability. Specifically, a first probability is obtained from the state transition parameters, the first probability being the probability that the target goes from a first position to a second position, and whether a clustering omission exists in the current motion trajectory is judged based on the first probability. In one embodiment, the first probability is compared with a preset threshold, and in response to the first probability being smaller than the preset threshold, it is judged that a clustering omission exists in the current motion trajectory. It can be understood that if the first probability is greater than the preset threshold, no clustering omission exists in the current motion trajectory.
Specifically, as shown in fig. 2, assume that the current motion trajectory of the target obtained from the original clustering result is D-E-F-G-H-I. The probability that the current snapshot position of target A is D and the next snapshot address is E, namely P(H_E|H_D), can be obtained from the state transition parameters and compared with the preset threshold: if P(H_E|H_D) is smaller than the preset threshold, a clustering omission exists between position D and position E; if P(H_E|H_D) is larger than the preset threshold, no clustering omission exists between position D and position E. Further, the probability that the current snapshot position of target A is E and the next snapshot address is F, namely P(H_F|H_E), is obtained from the state transition parameters and compared with the preset threshold: if P(H_F|H_E) is smaller than the preset threshold, a clustering omission exists between position E and position F; if P(H_F|H_E) is larger than the preset threshold, no clustering omission exists between position E and position F. In the same way, the probability that the current snapshot position of target A is F and the next snapshot address is G, namely P(H_G|H_F), is obtained from the state transition parameters, and so on. As shown in fig. 2, the calculation reveals that a clustering omission exists between snapshot address E and snapshot address F, where the target also appears at address J, and that a clustering omission exists between snapshot address G and snapshot address H, where the target also appears at address K.
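Step S12 can be illustrated with the following sketch, which walks over consecutive coding blocks of the current trajectory and flags every pair whose transition probability falls below the preset threshold. The threshold value of 0.2 and the function name are illustrative assumptions, and `transition` is the nested dictionary produced by the earlier sketch.

```python
def find_omissions(trajectory_blocks, transition, threshold=0.2):
    """Flag consecutive position pairs whose transition probability falls
    below the threshold, indicating a possible clustering omission.
    trajectory_blocks: coding blocks of the current trajectory,
    e.g. ["D", "E", "F", "G", "H", "I"]."""
    gaps = []
    for cur, nxt in zip(trajectory_blocks, trajectory_blocks[1:]):
        p = transition.get(cur, {}).get(nxt, 0.0)   # P(H_nxt | H_cur)
        if p < threshold:
            gaps.append((cur, nxt, p))              # likely missed record between cur and nxt
    return gaps
```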
Step S13: in response to a clustering omission existing in the current motion trajectory, cluster the images corresponding to the missed positions based on the original clustering result to obtain a clustering result, and obtain the corrected motion trajectory of the target according to the clustering result.
Specifically, if a clustering omission exists in the current motion trajectory, the images corresponding to the missed positions are clustered based on the original clustering result to obtain a clustering result, and the corrected motion trajectory of the target is obtained according to the clustering result.
Specifically, the images of target A at position J and position K may be obtained and clustered again based on the original clustering result, so as to obtain the corrected motion trajectory. As shown in FIG. 2, the corrected motion trajectory is D-E-J-F-G-K-H-I.
In one embodiment, in order to improve the correction precision, a snapshot time period in which the target appears at the missing position is further determined based on the state transition parameters, and a snapshot image in the snapshot time period is acquired; and clustering the snapshot images in the snapshot time period based on the original clustering result, and obtaining the corrected motion track of the target according to the clustering result. By the method, the time period corresponding to the clustering omission phenomenon is determined, all images in the time period can be obtained, and the images in the time period are clustered again based on the original clustering result, so that the correction precision of the motion track can be improved.
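The patent does not detail how the snapshot images of the missed position are re-clustered against the original clustering result. One plausible sketch, assuming each snapshot image has a portrait feature vector and that membership in the target's archive is decided by cosine similarity to the mean feature of the original cluster, is the following; the similarity threshold of 0.75 is an assumption.

```python
import numpy as np

def recluster_window(candidate_feats, cluster_feats, sim_threshold=0.75):
    """Assign candidate snapshot images (captured in the inferred time window)
    to the target's original cluster when their cosine similarity to the
    cluster's mean feature exceeds the threshold."""
    centroid = np.mean(np.asarray(cluster_feats, dtype=float), axis=0)
    centroid = centroid / np.linalg.norm(centroid)          # unit-length mean feature
    accepted = []
    for idx, feat in enumerate(candidate_feats):
        feat = np.asarray(feat, dtype=float)
        sim = float(np.dot(feat, centroid) / np.linalg.norm(feat))
        if sim >= sim_threshold:
            accepted.append(idx)   # these snapshots are added to the target's archive
    return accepted
```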
In a specific embodiment, a second probability and a third probability are obtained from the state transition parameter, the second probability is a probability that the target goes from the first position to the third position, and the third probability is a probability that the target goes from the third position to the second position; and determining the snapshot time period of the target at the missing position based on the second probability and the third probability. Specifically, the second probability and the third probability are compared with a preset threshold; and in response to the second probability and the third probability both being greater than a preset threshold, determining that the snapshot time period is from the snapshot time corresponding to the first position to the snapshot time corresponding to the second position.
Referring to fig. 2, a clustering omission occurs between address E and address F. Taking the candidate third positions to be G, H, and I, these positions are traversed to determine whether each is the missed address. For example, if the third position is G, the second probability P(H_G|H_E) that the current snapshot address is E and the next snapshot address is G is obtained from the state transition parameters, and the third probability P(H_F|H_G) that the current snapshot address is G and the next snapshot address is F is obtained from the state transition parameters. If the second probability P(H_G|H_E) and the third probability P(H_F|H_G) are both greater than the preset threshold, the snapshot time period is determined to be the period from the snapshot time corresponding to snapshot address E to the snapshot time corresponding to snapshot address F.
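The traversal of candidate third positions can be sketched as follows; the candidate list, threshold value, and function name are illustrative assumptions. The returned positions are those for which both the second and third probabilities exceed the threshold, so the snapshot window from the capture time at the first position to that at the second position should be re-examined.

```python
def locate_missed_window(first, second, candidates, transition, threshold=0.2):
    """For a gap between `first` (e.g. E) and `second` (e.g. F), traverse the
    candidate third positions; if P(third | first) and P(second | third) both
    exceed the threshold, the target plausibly passed through `third`."""
    plausible = []
    for third in candidates:
        p2 = transition.get(first, {}).get(third, 0.0)    # second probability
        p3 = transition.get(third, {}).get(second, 0.0)   # third probability
        if p2 > threshold and p3 > threshold:
            plausible.append(third)
    return plausible
```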
Specifically, in an embodiment of the present application, the user may specify the identity of the target and the time period of its movement, and the corresponding activity trajectory is shown on the map based on the clustering result of the target in that time period, for example as shown in fig. 2. Starting from the starting point D, the target passes through a plurality of checkpoints to the end point I. Based on the method of the present application, the two positions J and K missed by the clustering result can be detected and filled in to form the corrected motion trajectory. Specifically, the user can perform an image-based search using images from the missed positions or within the designated time period, retrieve the snapshot images of the target, and add them to the original clustering result, thereby obtaining the corrected motion trajectory.
The trajectory restoration method completes the motion trajectory of the target by means of the Markov state transition matrix. It geohash-encodes the positions of the snapshot devices and deduplicates the trajectory data of the specified target, which improves computational efficiency while improving the accuracy of the Markov state transition matrix of the target's motion trajectory. Based on the motion trajectory of the target, the time-space domain of the records missed during archive clustering is inferred, and the missed positions are clustered again, so that a complete motion trajectory can be obtained and the incomplete trajectory information caused by missed records is remedied. The method provided by the invention improves the accuracy of identity attribution while enhancing the user's human-machine interaction, and is more flexible to use.
Fig. 3 is a schematic structural diagram of a target motion trajectory restoration device according to an embodiment of the present invention. Specifically, the target motion trajectory restoration device includes an obtaining module 31, a judging module 32, and a clustering module 33.
The obtaining module 31 is configured to obtain a state transition parameter based on a historical motion trajectory of the target, and obtain an original clustering result of the target and a current motion trajectory corresponding to the original clustering result.
Specifically, the obtaining module 31 may collect historical trip information of the target and construct the state transition parameter based on the historical motion trajectory formed by the locations the target passed through. Specifically, the state transition parameter is a state transition matrix. The longitude and latitude information of each location where the target appears, namely the longitude and latitude of the shooting device that captured the target at that location, is encoded with the geohash encoding method to obtain a coding block. Further, the coding blocks may be deduplicated: if three or more identical coding blocks, that is, identical locations, occur consecutively, only the first and last snapshot records of that location are retained. For example, if coding blocks H_j = H_j+1 = H_j+2 occur consecutively in the historical motion trajectory of target A, only the snapshot data corresponding to H_j and the snapshot data corresponding to H_j+2 are retained.
And obtaining a state transition matrix of the target A based on the historical motion trail of the target A. For example, the same code block as Hj appears n times in the motion trail of the target A, wherein n is total1Second-time snapshot address HjThe next snapshot address is Hk1In total, n2Second-time snapshot address HjThe next snapshot address is Hk2Wherein n is1+n2N, then object a is currently at snapshot address HjAnd the next snapshot address is Hk1Has a probability of P (H)k1/Hj)=n1V, the target A is currently at the snapshot address HjAnd the next snapshot address is Hk2Has a probability of P (H)k2/Hj)=n2/. The state transition matrix obtained by the method is as follows:
P =
[ P(H_1|H_1)  P(H_2|H_1)  …  P(H_N|H_1) ]
[ P(H_1|H_2)  P(H_2|H_2)  …  P(H_N|H_2) ]
[     …           …       …       …     ]
[ P(H_1|H_N)  P(H_2|H_N)  …  P(H_N|H_N) ]
where the entry in row j and column k is P(H_k|H_j), the probability that the target, currently at coding block H_j, is next captured at coding block H_k, and N is the number of distinct coding blocks.
The original clustering result of the target and the current motion trajectory corresponding to the original clustering result are then acquired. Specifically, clustering is performed on the images captured by the shooting devices to obtain an original clustering result set of a plurality of detection targets, and the original clustering result of the target is obtained from this set; the images in the original clustering result are then sorted according to their shooting times to obtain the current motion trajectory.
In a specific embodiment, images captured over the last N days by all checkpoint shooting devices in the city that require target clustering can be selected, and all targets appearing in those N days are clustered using these images to obtain an original clustering result set of all targets. The original clustering result set contains the original clustering results of a plurality of detection targets, and the original clustering result corresponding to the target whose trajectory is to be restored is selected from it. It should be noted that each image carries its shooting time, and the images in the original clustering result are sorted according to the shooting time of each image so as to obtain the current motion trajectory of the target. It should also be noted that the position information of the shooting device corresponding to each image, namely the longitude and latitude coordinates of that shooting device, is further obtained; by sorting the position information of the shooting devices together with the images according to shooting time, the motion trajectory of the target can be obtained. For example:
Trajectory(A) = {(A_1, T_1, G_1), (A_2, T_2, G_2), …, (A_m, T_m, G_m), …}
where A_m is the m-th snapshot picture of target A, T_m is the snapshot time corresponding to the m-th snapshot picture, and G_m is the longitude and latitude of the shooting device corresponding to the m-th snapshot picture.
In a specific embodiment, the position information corresponding to the shooting device of each image may be further obtained, and after the image data in the original clustering result is sorted according to the shooting time of the image, the position information corresponding to each image data is encoded, so as to obtain the current motion trajectory. For example, in an embodiment, the location information corresponding to each image data may be encoded by using a geohash encoding method; the position information corresponding to the image data is longitude and latitude position information of the shooting device corresponding to the image data.
The geohash coding method is commonly used to convert two-dimensional longitude and latitude into a character string and comprises two steps: the first step is binary encoding of the latitude and longitude, and the second step is base-32 transcoding. Specifically, the current motion trajectory obtained after encoding is:
Trajectory(A) = {(A_1, T_1, H_1), (A_2, T_2, H_2), …, (A_m, T_m, H_m), …}, where H_m is the coding block obtained by geohash-encoding the position G_m.
In a particular embodiment, the coding blocks also need to be deduplicated. Specifically, if three or more identical coding blocks, that is, identical locations, occur consecutively, only the first and last snapshot records of that location are retained. For example, if coding blocks H_j = H_j+1 = H_j+2 occur consecutively in the motion trajectory of target A, only the snapshot data corresponding to H_j and the snapshot data corresponding to H_j+2 are retained. The current motion trajectory of the target is obtained after deduplication.
The judging module 32 is used for judging whether the current motion trajectory has a clustering omission phenomenon according to the state transition parameters.
Specifically, the state transition parameter indicates the probability that the target goes from its current position to the next position, that is, the probability that, given that the current snapshot address of the target is position a, its next snapshot position is position b. The judging module 32 may judge whether a clustering omission exists in the current motion trajectory based on this probability. Specifically, a first probability is obtained from the state transition parameters, the first probability being the probability that the target goes from a first position to a second position, and whether a clustering omission exists in the current motion trajectory is judged based on the first probability. In one embodiment, the first probability is compared with a preset threshold, and in response to the first probability being smaller than the preset threshold, it is judged that a clustering omission exists in the current motion trajectory. It can be understood that if the first probability is greater than the preset threshold, no clustering omission exists in the current motion trajectory.
Specifically, as shown in fig. 2, assume that the current motion trajectory of the target obtained from the original clustering result is D-E-F-G-H-I. The probability that the current snapshot position of target A is D and the next snapshot address is E, namely P(H_E|H_D), can be obtained from the state transition parameters and compared with the preset threshold: if P(H_E|H_D) is smaller than the preset threshold, a clustering omission exists between position D and position E; if P(H_E|H_D) is larger than the preset threshold, no clustering omission exists between position D and position E. Further, the probability that the current snapshot position of target A is E and the next snapshot address is F, namely P(H_F|H_E), is obtained from the state transition parameters and compared with the preset threshold: if P(H_F|H_E) is smaller than the preset threshold, a clustering omission exists between position E and position F; if P(H_F|H_E) is larger than the preset threshold, no clustering omission exists between position E and position F. In the same way, the probability that the current snapshot position of target A is F and the next snapshot address is G, namely P(H_G|H_F), is obtained from the state transition parameters, and so on. As shown in fig. 2, the calculation reveals that a clustering omission exists between snapshot address E and snapshot address F, where the target also appears at address J, and that a clustering omission exists between snapshot address G and snapshot address H, where the target also appears at address K.
The clustering module 33 is configured to, in response to a clustering omission existing in the current motion trajectory, cluster the images corresponding to the missed positions based on the original clustering result, and obtain the corrected motion trajectory of the target according to the clustering result.
Specifically, if a clustering omission exists in the current motion trajectory, the clustering module 33 clusters the images corresponding to the missed positions based on the original clustering result and obtains the corrected motion trajectory of the target according to the clustering result.
Specifically, the images of target A at position J and position K may be obtained and clustered again based on the original clustering result, so as to obtain the corrected motion trajectory. As shown in FIG. 2, the corrected motion trajectory is D-E-J-F-G-K-H-I.
In one embodiment, in order to improve the correction precision, a snapshot time period in which the target appears at the missing position is further determined based on the state transition parameters, and a snapshot image in the snapshot time period is acquired; and clustering the snapshot images in the snapshot time period based on the original clustering result, and obtaining the corrected motion track of the target according to the clustering result. By the method, the time period corresponding to the clustering omission phenomenon is determined, all images in the time period can be obtained, and the images in the time period are clustered again based on the original clustering result, so that the correction precision of the motion track can be improved.
In a specific embodiment, a second probability and a third probability are obtained from the state transition parameters, the second probability is the probability that the target goes from the first position to the third position, and the third probability is the probability that the target goes from the third position to the second position; and determining the snapshot time period of the target at the missing position based on the second probability and the third probability. Specifically, the second probability and the third probability are compared with a preset threshold; and in response to the second probability and the third probability both being greater than a preset threshold, determining that the snapshot time period is from the snapshot time corresponding to the first position to the snapshot time corresponding to the second position.
Referring to fig. 2, a clustering omission occurs between address E and address F. Taking the candidate third positions to be G, H, and I, these positions are traversed to determine whether each is the missed address. For example, if the third position is G, the second probability P(H_G|H_E) that the current snapshot address is E and the next snapshot address is G is obtained from the state transition parameters, and the third probability P(H_F|H_G) that the current snapshot address is G and the next snapshot address is F is obtained from the state transition parameters. If the second probability P(H_G|H_E) and the third probability P(H_F|H_G) are both greater than the preset threshold, the snapshot time period is determined to be the period from the snapshot time corresponding to snapshot address E to the snapshot time corresponding to snapshot address F.
The trajectory restoration device completes the motion trajectory of the target by means of the Markov state transition matrix. It geohash-encodes the positions of the snapshot devices and deduplicates the trajectory data of the specified target, which improves computational efficiency while improving the accuracy of the Markov state transition matrix of the target's motion trajectory. Based on the motion trajectory of the target, the time-space domain of the records missed during archive clustering is inferred, and the missed positions are clustered again, so that a complete motion trajectory can be obtained and the incomplete trajectory information caused by missed records is remedied. The device provided by the invention improves the accuracy of identity attribution while enhancing the user's human-machine interaction, and is more flexible to use.
Referring to fig. 4, a schematic structural diagram of an electronic device according to an embodiment of the present invention is shown, where the electronic device includes a memory 202 and a processor 201 that are connected to each other.
The memory 202 is used to store program instructions implementing the method of any of the above.
The processor 201 is used to execute program instructions stored by the memory 202.
The processor 201 may also be referred to as a Central Processing Unit (CPU). The processor 201 may be an integrated circuit chip having signal processing capabilities. The processor 201 may also be a general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 202 may be a memory bank, a TF card, or the like, and can store all information in the electronic device, including the input raw data, computer programs, intermediate operation results, and final operation results, storing and retrieving information at locations specified by the controller. The memory ensures the normal operation of the electronic device. The memories of electronic devices are divided by purpose into main memory (internal memory) and auxiliary memory (external memory). External memory is usually a magnetic medium, an optical disc, or the like, and can store information for long periods. Internal memory refers to the storage components on the mainboard, which hold the data and programs currently being executed; it stores them only temporarily, and the data is lost when the power is turned off.
In the several embodiments provided in the present application, it should be understood that the disclosed method and apparatus may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, a division of a module or a unit is merely a logical division, and an actual implementation may have another division, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, in essence, or the part that contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a system server, a network device, or the like) or a processor to execute all or part of the steps of the methods of the embodiments of the present application.
Please refer to fig. 5, which is a schematic structural diagram of a computer-readable storage medium according to the present invention. The storage medium of the present application stores a program file 203 capable of implementing all of the methods described above. The program file 203 may be stored in the storage medium in the form of a software product and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) or a processor to execute all or part of the steps of the methods of the embodiments of the present application. The aforementioned storage medium includes media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc, as well as terminal devices such as a computer, a server, a mobile phone, or a tablet.
The above description is only an embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (13)

1. A target motion trail restoration method is characterized by comprising the following steps:
acquiring state transition parameters based on the historical motion track of a target, and acquiring an original clustering result of the target and a current motion track corresponding to the original clustering result;
judging whether the current motion track has a clustering omission phenomenon or not according to the state transition parameters;
and in response to the current motion track having a clustering omission phenomenon, clustering the images corresponding to the omission positions based on the original clustering result to obtain a clustering result, and obtaining a modified motion track of the target according to the clustering result.
2. The method according to claim 1, wherein the step of determining whether there is a cluster omission phenomenon in the current motion trajectory according to the state transition parameter comprises:
obtaining a first probability from the state transition parameters, the first probability being a probability that the target goes from a first location to a second location;
and judging whether the current motion track has a clustering omission phenomenon or not based on the first probability.
3. The method of claim 2, wherein the step of determining whether there is a cluster omission phenomenon in the current motion trajectory based on the first probability comprises:
comparing the first probability with a preset threshold;
and judging that the current motion track has a clustering omission phenomenon in response to the first probability being smaller than a preset threshold value.
4. The method according to claim 1, wherein the step of clustering the images corresponding to the missing positions based on the original clustering result and obtaining the modified motion trajectory of the target according to the clustering result comprises:
determining a snapshot time period when the target appears at the missing position based on the state transition parameters, and acquiring a snapshot image of the snapshot time period;
and clustering the snapshot images in the snapshot time period based on the original clustering result, and obtaining the corrected motion track of the target according to the clustering result.
5. The method of claim 4, wherein the step of determining a snapshot time period for the target to appear at the missing location based on the state transition parameters comprises:
obtaining a second probability and a third probability from the state transition parameter, wherein the second probability is the probability that the target goes from the first position to the third position, and the third probability is the probability that the target goes from the third position to the second position;
determining a snapshot time period for the target to appear at the miss location based on the second probability and the third probability.
6. The method of claim 5, wherein the step of determining a snapshot time period for the target to appear at the miss location based on the second probability and the third probability comprises:
comparing the second probability and the third probability with a preset threshold;
and in response to that the second probability and the third probability are both greater than the preset threshold, determining that the snapshot time period is from the snapshot time corresponding to the first position to the snapshot time corresponding to the second position.
7. The method according to any one of claims 1 to 6, wherein the step of obtaining the original clustering result of the target and the current motion trajectory corresponding to the original clustering result comprises:
clustering is carried out on the basis of images shot by a shooting device, so that an original clustering result set of a plurality of detection targets is obtained, and original clustering results of the targets are obtained from the original clustering result set;
and sequencing the images in the original clustering result according to the shooting time of the images so as to obtain the current motion track.
8. The method according to claim 7, wherein the step of sorting the images in the original clustering result according to the shooting time of the images to obtain the current motion trajectory comprises:
sorting the image data in the original clustering result according to the shooting time of the images,
and coding the position information corresponding to each image data to further obtain the current motion trail.
9. The method according to claim 8, wherein the step of encoding the position information corresponding to each image data to obtain the current motion trajectory comprises:
coding the position information corresponding to each image data to obtain a coding block;
and removing the duplication of the coding block to further obtain the current motion trail.
10. The method of claim 9, wherein the step of encoding the position information corresponding to each image data comprises:
encoding the position information corresponding to each image data by using a geohash encoding method;
and the position information corresponding to the image data is longitude and latitude position information of a shooting device corresponding to the image data.
11. A target motion trail restoration device is characterized by comprising:
the acquisition module is used for acquiring state transition parameters based on the historical motion track of the target and acquiring an original clustering result of the target and a current motion track corresponding to the original clustering result;
the judging module is used for judging whether the current motion track has a clustering omission phenomenon according to the state transition parameters;
and the clustering module is used for responding to the clustering omission phenomenon in the current motion track, clustering the images corresponding to the omission positions based on the original clustering result to obtain a clustering result, and obtaining the motion track after the target is corrected according to the clustering result.
12. An electronic device, comprising: a memory and a processor, wherein the memory stores program instructions which the processor retrieves from the memory to perform the target motion trajectory restoration method of any one of claims 1-10.
13. A computer-readable storage medium, characterized in that a program file is stored, which is executable to implement the target motion trajectory restoration method according to any one of claims 1 to 10.
CN202110621363.7A 2021-06-03 2021-06-03 Target motion trail restoration method and device, electronic equipment and storage medium Active CN113570635B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110621363.7A CN113570635B (en) 2021-06-03 2021-06-03 Target motion trail restoration method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110621363.7A CN113570635B (en) 2021-06-03 2021-06-03 Target motion trail restoration method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113570635A true CN113570635A (en) 2021-10-29
CN113570635B CN113570635B (en) 2024-06-21

Family

ID=78161049

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110621363.7A Active CN113570635B (en) 2021-06-03 2021-06-03 Target motion trail restoration method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113570635B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115037517A (en) * 2022-05-06 2022-09-09 全球能源互联网研究院有限公司南京分公司 Intelligent Internet of things terminal safety state acquisition method and device and electronic equipment
WO2023174304A1 (en) * 2022-03-18 2023-09-21 Zhejiang Dahua Technology Co., Ltd. Systems, methods, and storage devices for data clustering
CN117975071A (en) * 2024-03-28 2024-05-03 浙江大华技术股份有限公司 Image clustering method, computer device and storage medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106600960A (en) * 2016-12-22 2017-04-26 西南交通大学 Traffic travel origin and destination identification method based on space-time clustering analysis algorithm
US20170286781A1 (en) * 2016-04-05 2017-10-05 Omni Al, Inc. Trajectory cluster model for learning trajectory patterns in videos data
CN109034454A (en) * 2018-06-25 2018-12-18 腾讯大地通途(北京)科技有限公司 Route method for digging, device, computer readable storage medium and computer equipment
CN109813318A (en) * 2019-02-12 2019-05-28 北京百度网讯科技有限公司 Coordinates compensation method and device, equipment and storage medium
CN110895879A (en) * 2019-11-26 2020-03-20 浙江大华技术股份有限公司 Method and device for detecting co-running vehicle, storage medium and electronic device
CN111243277A (en) * 2020-03-09 2020-06-05 山东大学 Commuting vehicle space-time trajectory reconstruction method and system based on license plate recognition data
CN111524164A (en) * 2020-04-21 2020-08-11 北京爱笔科技有限公司 Target tracking method and device and electronic equipment
CN112818149A (en) * 2021-01-21 2021-05-18 浙江大华技术股份有限公司 Face clustering method and device based on space-time trajectory data and storage medium

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170286781A1 (en) * 2016-04-05 2017-10-05 Omni Al, Inc. Trajectory cluster model for learning trajectory patterns in videos data
CN106600960A (en) * 2016-12-22 2017-04-26 西南交通大学 Traffic travel origin and destination identification method based on space-time clustering analysis algorithm
CN109034454A (en) * 2018-06-25 2018-12-18 腾讯大地通途(北京)科技有限公司 Route method for digging, device, computer readable storage medium and computer equipment
CN109813318A (en) * 2019-02-12 2019-05-28 北京百度网讯科技有限公司 Coordinates compensation method and device, equipment and storage medium
CN110895879A (en) * 2019-11-26 2020-03-20 浙江大华技术股份有限公司 Method and device for detecting co-running vehicle, storage medium and electronic device
CN111243277A (en) * 2020-03-09 2020-06-05 山东大学 Commuting vehicle space-time trajectory reconstruction method and system based on license plate recognition data
CN111524164A (en) * 2020-04-21 2020-08-11 北京爱笔科技有限公司 Target tracking method and device and electronic equipment
CN112818149A (en) * 2021-01-21 2021-05-18 浙江大华技术股份有限公司 Face clustering method and device based on space-time trajectory data and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
蔡娟娟;: "基于目标跟踪的数据挖掘研究", 长沙大学学报, no. 02 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023174304A1 (en) * 2022-03-18 2023-09-21 Zhejiang Dahua Technology Co., Ltd. Systems, methods, and storage devices for data clustering
CN115037517A (en) * 2022-05-06 2022-09-09 全球能源互联网研究院有限公司南京分公司 Intelligent Internet of things terminal safety state acquisition method and device and electronic equipment
CN115037517B (en) * 2022-05-06 2023-11-17 全球能源互联网研究院有限公司南京分公司 Intelligent Internet of things terminal safety state acquisition method and device and electronic equipment
CN117975071A (en) * 2024-03-28 2024-05-03 浙江大华技术股份有限公司 Image clustering method, computer device and storage medium

Also Published As

Publication number Publication date
CN113570635B (en) 2024-06-21

Similar Documents

Publication Publication Date Title
CN113570635A (en) Target motion trajectory reduction method and device, electronic equipment and storage medium
US10402627B2 (en) Method and apparatus for determining identity identifier of face in face image, and terminal
CN110060276B (en) Object tracking method, tracking processing method, corresponding device and electronic equipment
CN110046266B (en) Intelligent management method and device for photos
CN110765860A (en) Tumble determination method, tumble determination device, computer apparatus, and storage medium
CN112329888A (en) Image processing method, image processing apparatus, electronic device, and storage medium
CN110555428B (en) Pedestrian re-identification method, device, server and storage medium
JP2022550195A (en) Text recognition method, device, equipment, storage medium and computer program
CN111753826B (en) Vehicle and license plate association method, device and electronic system
CN114708304B (en) Cross-camera multi-target tracking method, device, equipment and medium
CN111400550A (en) Target motion trajectory construction method and device and computer storage medium
CN112084939A (en) Image feature data management method and device, computer equipment and storage medium
CN116189162A (en) Ship plate detection and identification method and device, electronic equipment and storage medium
CN113610967B (en) Three-dimensional point detection method, three-dimensional point detection device, electronic equipment and storage medium
CN111148045A (en) User behavior cycle extraction method and device
JP2017022690A (en) Method and device for use when reassembling fragmented jpeg image
CN111539435A (en) Semantic segmentation model construction method, image segmentation equipment and storage medium
CN112446361A (en) Method and equipment for cleaning training data
CN109598190A (en) Method, apparatus, computer equipment and storage medium for action recognition
CN115797291A (en) Circuit terminal identification method and device, computer equipment and storage medium
CN114724128A (en) License plate recognition method, device, equipment and medium
CN114329023A (en) File processing method and device, electronic equipment and computer storage medium
CN115004245A (en) Target detection method, target detection device, electronic equipment and computer storage medium
CN114219938A (en) Region-of-interest acquisition method
CN112257666A (en) Target image content aggregation method, device, equipment and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant