CN108876817B - Cross track analysis method and device, electronic equipment and storage medium

Info

Publication number: CN108876817B
Authority: CN (China)
Prior art keywords: image, target object, target, analyzed, determining
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Application number: CN201810558528.9A
Other languages: Chinese (zh)
Other versions: CN108876817A
Inventors: 王晶, 张广程, 李蔚琳, 朱斌
Current assignee: Shenzhen Sensetime Technology Co Ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original assignee: Shenzhen Sensetime Technology Co Ltd
Priority date: 2018-06-01 (the priority date is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed)
Filing date: 2018-06-01
Publication date: 2021-08-20
Application filed by Shenzhen Sensetime Technology Co Ltd
Priority to CN201810558528.9A
Publication of application CN108876817A; application granted; publication of grant CN108876817B

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/20: Analysis of motion
    • G06T7/246: Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10016: Video; image sequence
    • G06T2207/30: Subject of image; context of image processing
    • G06T2207/30196: Human being; person
    • G06T2207/30201: Face
    • G06T2207/30232: Surveillance
    • G06T2207/30241: Trajectory

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The present disclosure relates to a cross track analysis method and apparatus, an electronic device, and a storage medium. The method includes: determining a target image of a target object in an image to be analyzed, where the image to be analyzed carries time information and place information; determining the time information and place information of the target image according to those of the image to be analyzed; obtaining the track of the target object according to the time information and place information of the target image; and obtaining the cross tracks of a plurality of target objects according to their tracks. By building tracks from the target images of each target object, the disclosed embodiments obtain cross tracks with high precision and accurate positioning.

Description

Cross track analysis method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of information technologies, and in particular, to a method and an apparatus for analyzing a cross trajectory, an electronic device, and a storage medium.
Background
With the development of information technology, the need to determine cross tracks between different objects keeps growing. For example, public security and other security-management departments use the cross tracks of different suspects to monitor them, which can improve the success rate of apprehension. A conventional cross track analysis method tracks a suspect through the wireless communication device the suspect carries and then determines the cross tracks between different suspects; this approach is limited by the coverage quality of wireless network signals and by suspects deliberately avoiding wireless communication devices, so the accuracy of the resulting cross tracks is low.
Disclosure of Invention
In view of this, the present disclosure provides a cross-track analysis technical solution.
According to an aspect of the present disclosure, there is provided a cross-track analysis method, the method including:
determining a target image of a target object in an image to be analyzed, wherein the image to be analyzed comprises time information and place information;
determining the time information and the place information of the target image according to the time information and the place information of the image to be analyzed;
obtaining the track of the target object according to the time information and the place information of the target image;
and obtaining the cross tracks of the target objects according to the tracks of the target objects.
In one possible implementation, determining a target image of a target object in an image to be analyzed includes:
detecting a target object in the image to be analyzed;
when the target object is detected in the image to be analyzed, determining a target image of the target object according to the image to be analyzed.
In one possible implementation, determining a target image of a target object in an image to be analyzed includes:
detecting a candidate target object in an original image;
intercepting an image comprising each candidate target object in the original image to generate an image to be analyzed;
and determining a target image of a target object in the image to be analyzed.
In one possible implementation, determining a target image of a target object in the image to be analyzed includes:
and determining a target image of the target object in the image to be analyzed according to the similarity between the target object and the candidate target object.
In one possible implementation, determining a target image of a target object in an image to be analyzed includes:
determining a target object in any reference image for the target object;
and determining the target image of the target object in each reference image in the image to be analyzed according to the target object in each reference image.
In a possible implementation manner, obtaining the trajectory of the target object according to the time information and the location information of the target image includes:
determining a track to be merged corresponding to the target object in each reference image according to the time information and the place information of the target image of the target object in each reference image;
and combining the tracks to be combined corresponding to the target object in each reference image to obtain the track of the target object.
In a possible implementation manner, merging the to-be-merged tracks corresponding to the target object in each reference image to obtain the track of the target object, including:
merging the tracks to be merged corresponding to the target object in each reference image to obtain merged tracks;
and removing the abnormal time information and/or the abnormal place information in the combined track to obtain the track of the target object.
According to an aspect of the present disclosure, there is provided a cross-trajectory analysis device, the device including:
the target image determining module is used for determining a target image of a target object in an image to be analyzed, and the image to be analyzed comprises time information and place information;
the information determining module is used for determining the time information and the place information of the target image;
the target track determining module is used for obtaining the track of the target object according to the time information and the place information of the target image;
and the cross track determining module is used for obtaining the cross tracks of the target objects according to the tracks of the target objects.
In one possible implementation, the target image determination module includes:
the detection submodule is used for detecting a target object in the image to be analyzed;
the first target image determining submodule is used for determining a target image of the target object according to the image to be analyzed when the target object is detected in the image to be analyzed.
In one possible implementation, the target image determination module includes:
a candidate target object detection sub-module for detecting a candidate target object in the original image;
the image to be analyzed generating submodule is used for intercepting images comprising the candidate target objects in the original image to generate an image to be analyzed;
and the second target image determining submodule is used for determining a target image of a target object in the image to be analyzed.
In one possible implementation, the second target image determination sub-module includes:
and the target image determining unit is used for determining a target image of the target object in the image to be analyzed according to the similarity between the target object and the candidate target object.
In one possible implementation, the target image determination module includes:
a target object determination sub-module for determining a target object in any one of the reference images for the target object;
and the third target image determining submodule is used for determining a target image of the target object in each reference image in the image to be analyzed according to the target object in each reference image.
In one possible implementation, the target trajectory determination module includes:
the track to be merged determining submodule is used for determining a track to be merged corresponding to the target object in each reference image according to the time information and the place information of the target image of the target object in each reference image;
and the first target track determining submodule is used for combining the tracks to be combined corresponding to the target object in each reference image to obtain the track of the target object.
In one possible implementation manner, the first target trajectory determination sub-module includes:
the merging track determining submodule is used for merging the tracks to be merged corresponding to the target object in each reference image to obtain merging tracks;
and the second target track determining submodule is used for removing the abnormal time information and/or the abnormal place information in the combined track to obtain the track of the target object.
According to an aspect of the present disclosure, there is provided an electronic device including:
a processor;
a memory for storing processor-executable instructions;
wherein the processor calls the executable instructions, directly or indirectly, to perform the above cross track analysis method.
According to an aspect of the present disclosure, there is provided a computer-readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the above cross track analysis method.
According to the embodiment of the disclosure, the target image of the target object can be determined in the image to be analyzed, the track of each target object is obtained according to the target image of each target object, and the cross track of a plurality of target objects is obtained by using the tracks of the plurality of target objects. The cross track obtained by using the target image of each target object has high precision and accurate positioning.
Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate exemplary embodiments, features, and aspects of the disclosure and, together with the description, serve to explain the principles of the disclosure.
FIG. 1 is a flow diagram illustrating a method of cross-track analysis in accordance with an exemplary embodiment;
FIG. 2 is a schematic diagram illustrating a trajectory in a cross-trajectory analysis method in accordance with an exemplary embodiment;
FIG. 3 is a flow chart illustrating a method of cross-track analysis in accordance with an exemplary embodiment;
FIG. 4 is a flowchart illustrating a step S10 of a cross-track analysis method in accordance with an exemplary embodiment;
FIG. 5 is a flowchart illustrating a step S10 of a cross-track analysis method in accordance with an exemplary embodiment;
FIG. 6 is a flowchart illustrating a step S10 of a cross-track analysis method in accordance with an exemplary embodiment;
FIG. 7 is a flowchart illustrating a step S30 of a cross-track analysis method in accordance with an exemplary embodiment;
FIG. 8 is a block diagram illustrating a cross-track analysis device in accordance with an exemplary embodiment;
FIG. 9 is a block diagram illustrating a cross-track analysis apparatus in accordance with an exemplary embodiment;
FIG. 10 is a block diagram illustrating an electronic device in accordance with an exemplary embodiment.
Detailed Description
Various exemplary embodiments, features and aspects of the present disclosure will be described in detail below with reference to the accompanying drawings. In the drawings, like reference numbers can indicate functionally identical or similar elements. While the various aspects of the embodiments are presented in drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used herein to mean "serving as an example, embodiment, or illustration." Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
Furthermore, in the following detailed description, numerous specific details are set forth in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, elements and circuits that are well known to those skilled in the art have not been described in detail so as not to obscure the present disclosure.
Fig. 1 is a flowchart illustrating a cross-track analysis method according to an exemplary embodiment, as shown in fig. 1, the cross-track analysis method including:
step S10, a target image of a target object is determined in an image to be analyzed, which includes time information and location information.
In one possible implementation, the target object may include various types of objects such as a person, an animal, a plant, or a building. The type of the target object may be determined according to requirements, and the target object may include one or more types of objects. For example, the target object may be person A, or a combination of person A and animal B.
The image to be analyzed may comprise a single image or may comprise a plurality of images. The image to be analyzed may be a static image or a plurality of frame images in a video stream. The image to be analyzed may include images of a plurality of objects, or may include an image of a single object.
The image to be analyzed carries the time information and place information of when and where it was captured. For example, for an image to be analyzed captured by a surveillance camera, the time information may be determined from the shooting time, and the place information may be determined from the installation location of the camera. The place information may include longitude and latitude information, and may also include postal address information.
The target image of the target object may be determined in the image to be analyzed according to information such as an image or a name of the target object.
An image to be analyzed that includes the target object may itself be determined as a target image of the target object. Alternatively, an image of the target object may be cut out of the image to be analyzed and used as the target image.
When the image to be analyzed is a frame image in a video stream, a plurality of target images of the target object may be determined in the image to be analyzed.
Step S20, determining the time information and the location information of the target image according to the time information and the location information of the image to be analyzed.
In one possible implementation, when the image to be analyzed including the target object is determined as the target image of the target object, time information and location information of the image to be analyzed may be determined as the time information and the location information of the target image.
When an image including the target object is cut out of an image to be analyzed and used as the target image, the time information and place information of the source image to be analyzed may be determined as the time information and place information of the target image.
And step S30, obtaining the track of the target object according to the time information and the place information of the target image.
In one possible implementation, a plurality of target images of the target object may be determined in the image to be analyzed. According to the time information of each target image, the target images can be connected in chronological order to obtain the association among the place information of the target images, and this association is determined as the track of the target object.

The target images can also be marked on a map according to their place information and then connected in chronological order according to their time information, giving a linear track of the target object on the map. A track displayed on a map is more intuitive.
Fig. 2 is a schematic diagram of a track in a cross track analysis method according to an exemplary embodiment. As shown in Fig. 2, the five points in the figure correspond to five target images of a target object; connecting them in time order gives the linear track of the target object on a map.
When there is only one target image of the target object, the track of the target object degenerates to a single place with associated time information.
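As an illustration of steps S20 and S30, the following Python sketch orders target images by their inherited timestamps to form a track. The TargetImage structure, the build_track function, and the sample coordinates are illustrative assumptions, not part of the disclosure.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import List, Tuple

@dataclass
class TargetImage:
    """A target image with the time/place metadata inherited
    from its source image to be analyzed (step S20)."""
    time: datetime
    place: Tuple[float, float]  # (longitude, latitude)

def build_track(target_images: List[TargetImage]) -> List[TargetImage]:
    """Step S30: order the target images chronologically; connecting
    consecutive entries yields the linear track of the target object."""
    return sorted(target_images, key=lambda img: img.time)

# Example: three sightings give a three-point track, ordered by time.
track = build_track([
    TargetImage(datetime(2018, 6, 1, 9, 30), (114.06, 22.54)),
    TargetImage(datetime(2018, 6, 1, 8, 0), (114.02, 22.53)),
    TargetImage(datetime(2018, 6, 1, 12, 15), (114.11, 22.55)),
])
```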
Step S40, obtaining intersecting trajectories of the plurality of target objects according to the trajectories of the plurality of target objects.
In one possible implementation, when there are multiple target objects, the overlapping part of their tracks may be determined as the cross track of the multiple target objects.

The cross track may be determined from target images whose time information and place information both overlap among all target images of the plurality of target objects. Overlapping time information may mean that the time difference computed from the time information falls within a time-difference threshold, and overlapping place information may mean that the distance computed from the place information falls within a distance threshold. For example, the cross track of target object 1 and target object 2 is the track along which target object 1 and target object 2 appeared at the same place at the same time.

The cross track may also be determined from target images whose place information overlaps, regardless of time. For example, the cross track of target object 3 and target object 4 is the track along which target object 3 and target object 4 appeared at the same place at the same or different times.

The cross track may also be determined from the geometric intersections of the tracks of the plurality of target objects. For example, the track of target object 5 and the track of target object 6 coincide in neither time nor place, but the two tracks include one or more intersection points, from which the cross track of target object 5 and target object 6 can be determined. Here an intersection point is a place where the two tracks cross geometrically, not a recorded track point of target object 5 or target object 6.
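The first, "same time, same place" case can be sketched as a threshold test over pairs of track points. Everything below (the TrackPoint structure, the haversine distance helper, and the threshold values) is an illustrative assumption; the disclosure does not prescribe a particular distance formula or thresholds.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from math import radians, sin, cos, asin, sqrt
from typing import List, Tuple

@dataclass
class TrackPoint:
    time: datetime
    place: Tuple[float, float]  # (longitude, latitude)

def haversine_m(p: Tuple[float, float], q: Tuple[float, float]) -> float:
    """Great-circle distance in metres between two (lon, lat) points."""
    lon1, lat1, lon2, lat2 = map(radians, (*p, *q))
    dlon, dlat = lon2 - lon1, lat2 - lat1
    h = sin(dlat / 2) ** 2 + cos(lat1) * cos(lat2) * sin(dlon / 2) ** 2
    return 2 * 6_371_000 * asin(sqrt(h))

def crossing_points(track_a: List[TrackPoint], track_b: List[TrackPoint],
                    max_dt: timedelta, max_dist_m: float
                    ) -> List[Tuple[TrackPoint, TrackPoint]]:
    """Pairs of points whose time difference and distance both fall
    within the thresholds: the 'same time, same place' crossing case."""
    return [(a, b) for a in track_a for b in track_b
            if abs(a.time - b.time) <= max_dt
            and haversine_m(a.place, b.place) <= max_dist_m]
```

Dropping the time test gives the place-only overlap of the second case; the geometric-intersection case would instead intersect the line segments connecting consecutive points.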
In this embodiment, a target image of a target object may be determined in an image to be analyzed, a trajectory of each target object may be obtained according to the target image of each target object, and a cross trajectory of a plurality of target objects may be obtained by using the trajectories of the plurality of target objects. The cross track obtained by using the target image of each target object has high precision and accurate positioning.
Fig. 3 is a flowchart illustrating a cross track analysis method according to an exemplary embodiment. As shown in Fig. 3, step S10 of the cross track analysis method includes:
step S11, detecting a target object in the image to be analyzed.
In one possible implementation, the target object may be detected in the image to be analyzed based on an image of the target object, for example a photograph or portrait of the target object. A target image of the target object may be determined in the image to be analyzed using techniques such as image recognition. Alternatively, the image of the target object may be input into a neural network, and the target image determined in the image to be analyzed according to the output of the network.

The target object may also be detected in the image to be analyzed according to information such as the name of the target object. For example, if the name of the target object is "cat", cats may be detected in the image to be analyzed. A target image may then be determined using techniques such as image recognition, or the name of the target object may be input into a neural network and the target image determined according to the output of the network.
Step S12, when the target object is detected in the image to be analyzed, determining a target image of the target object according to the image to be analyzed.
In one possible implementation, the image to be analyzed may include only an image of the target object. For example, image A to be analyzed includes only object 1, and image B to be analyzed includes only object 2. When the target object is object 1, object 1 may be detected in image A, and image A may be determined as the target image of object 1.
The image to be analyzed may also include images of the target object and other objects at the same time. For example, image C to be analyzed includes three objects: object 1, object 2, and object 3. If the target object is object 1, object 1 may be recognized in image C according to an image of object 1, and image C may be determined as the target image of object 1. Alternatively, the image of object 1 may be cut out of image C and used as the target image of the target object.
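A minimal sketch of steps S11 and S12: every image to be analyzed in which the target object is detected is kept as a target image. The contains_target predicate is a hypothetical stand-in for the recognition step, for example a face-recognition network compared against a photograph of the target object.

```python
from typing import Callable, Iterable, List, TypeVar

ImageT = TypeVar("ImageT")

def select_target_images(images_to_analyze: Iterable[ImageT],
                         contains_target: Callable[[ImageT], bool]) -> List[ImageT]:
    """Steps S11-S12: keep every image to be analyzed in which the
    target object is detected; each kept image is a target image."""
    return [img for img in images_to_analyze if contains_target(img)]

# Usage sketch: in practice contains_target would wrap a recognition
# model; here a trivial predicate stands in for it.
target_images = select_target_images(["img_A", "img_B"], lambda img: img == "img_A")
```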
In the present embodiment, when a target object is detected in an image to be analyzed, a target image of the target object is determined from the image to be analyzed. The accurate target image can be obtained according to the detection result of the image to be analyzed, so that the determination result of the cross track is more accurate and reliable.
Fig. 4 is a flowchart illustrating step S10 of a cross track analysis method according to an exemplary embodiment. As shown in Fig. 4, step S10 of the cross track analysis method includes:
in step S13, a candidate target object is detected in the original image.
In one possible implementation, the original image may comprise an unprocessed image. For example, the original image may be a frame image in a video stream captured by a monitoring camera. A plurality of candidate target objects may be included in the original image. For example, in an original image taken by a monitoring camera in a person-dense place, images of a plurality of persons may be included.
The candidate target object may be detected in the original image using techniques such as face recognition. The original image can be input into a face recognition neural network, and each candidate target object in the original image is detected according to the output result of the face recognition neural network. Each candidate target object may be identified in the original image according to the detection result, for example, a detection frame of each candidate target object may be identified in the original image.
Step S14, capturing an image including each candidate target object from the original image, and generating an image to be analyzed.
In one possible implementation manner, an image including each candidate target object may be cut out from an original image according to a detection result of the candidate target object. For example, three candidate target objects, namely candidate target object 1, candidate target object 2, and candidate target object 3, are detected in the original image D, and images corresponding to candidate target object 1, candidate target object 2, and candidate target object 3 may be respectively captured in the original image D, so as to obtain three images to be analyzed.
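A minimal sketch of step S14, assuming the original image is an H x W x C array and each detection result is an axis-aligned box; the Box alias and the crop_candidates function are hypothetical names, not part of the disclosure.

```python
import numpy as np
from typing import List, Tuple

Box = Tuple[int, int, int, int]  # (x1, y1, x2, y2), a candidate's detection frame

def crop_candidates(original: np.ndarray, boxes: List[Box]) -> List[np.ndarray]:
    """Step S14: cut one image to be analyzed out of the original image
    for each detected candidate target object."""
    return [original[y1:y2, x1:x2].copy() for (x1, y1, x2, y2) in boxes]

# Example: a frame with three detection boxes yields three images to be
# analyzed, as with the original image D described above.
frame = np.zeros((1080, 1920, 3), dtype=np.uint8)
to_analyze = crop_candidates(frame, [(10, 20, 110, 220),
                                     (300, 40, 420, 260),
                                     (700, 30, 800, 240)])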
Step S15, determining a target image of a target object in the image to be analyzed.
In one possible implementation, images to be analyzed, each including one of the candidate target objects, may be generated from an original image, and a target image of the target object may then be determined among these images to be analyzed.
As required, the images to be analyzed may be generated from original images taken at a selected time and/or a selected place, and collected into a library of images to be analyzed. Determining the target image of the target object within such a library makes it possible to determine the track of the target object, and to obtain the cross tracks of multiple target objects, in a more targeted manner.
In this embodiment, after candidate target objects are detected in an original image, an image including each candidate target object is cut out to generate the images to be analyzed, and the target image of the target object is then determined among them.
Fig. 5 is a flowchart illustrating step S10 of a cross track analysis method according to an exemplary embodiment. As shown in Fig. 5, step S15 of the cross track analysis method includes:
step S151, determining a target image of the target object in the image to be analyzed according to the similarity between the target object and the candidate target object.
In one possible implementation, the target object may be detected in the image to be analyzed from an image of the target object. When the similarity between the target object and the candidate target object in the image to be analyzed is greater than the similarity threshold, the image to be analyzed may be determined as the target image of the target object.
Specified features may be set as required; the specified features of the target object are compared with those of the candidate target object, and the similarity between the two is determined from the comparison result. For example, if the target object is a person, the comparison may use features of the nose and hair: when the target object has an aquiline nose and curly hair, an image to be analyzed whose candidate target object also has an aquiline nose and curly hair is a target image of the target object.
The feature value of the target object and the feature value of the candidate target object can be extracted according to requirements, and the similarity between the target object and the candidate target object is determined by calculating the difference value between the feature value of the target object and the feature value of the candidate target object.
Alternatively, a first similarity between the target object and a third object and a second similarity between the candidate target object and the same third object may be acquired, and the similarity between the target object and the candidate target object determined from the first similarity and the second similarity.
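One common way to realize the similarity test of step S151 is a cosine similarity over feature vectors, as sketched below; the 0.8 threshold and the feature-extraction step (not shown) are assumptions rather than values given in the disclosure.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity of two feature vectors, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def is_target_image(target_feature: np.ndarray,
                    candidate_feature: np.ndarray,
                    threshold: float = 0.8) -> bool:
    """Step S151: the candidate's image to be analyzed is a target image
    when the similarity exceeds the threshold (0.8 is an assumed value)."""
    return cosine_similarity(target_feature, candidate_feature) > threshold
```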
In this embodiment, a target image determined according to the similarity between the target object and the candidate target object is more accurate. The correspondence between similarity and target image can be adjusted as required, giving this embodiment a wider range of application.
Fig. 6 is a flowchart illustrating step S10 of a cross track analysis method according to an exemplary embodiment. As shown in Fig. 6, step S10 of the cross track analysis method includes:
in step S16, the target object is determined in any of the reference images for the target object.
In one possible implementation, the reference image of the target object may be a photograph or a portrait or the like that includes the target object. Due to the difference of the shooting angle and the shooting environment, the definition and the characteristics of the target object in the image are different, and the target object in a single image may not fully reflect all the characteristics of the target object. The target object may be determined in multiple reference images.
The target object identified in each reference image may exhibit different features of the target object. For example, if the target object is a person, the face detected in reference image 1 may be a side view, while in reference image 2 the person may be looking down. Reference image 1 or reference image 2 alone cannot fully reflect all features of the target object, so the most accurate target images cannot be obtained from a single reference image.
Step S17, determining a target image of the target object in each reference image in the image to be analyzed according to the target object in each reference image.
In one possible implementation, according to the target object determined in a reference image, the target image corresponding to that reference image may be determined in the image to be analyzed. In this way, the target images corresponding to the target object in each reference image are determined in the image to be analyzed.
The track corresponding to each reference image can be obtained according to the time information and place information of the target images corresponding to that reference image, and the track of the target object can then be obtained from the tracks corresponding to the reference images.
The track of the target object can also be directly obtained according to the time information and the location information of the target image corresponding to all the reference images.
In this embodiment, the target object is determined in each reference image of the target object, and the target images are then determined in the image to be analyzed according to the target object in each reference image. Using different reference images yields more comprehensive features of each target object, more accurate tracks of each target object, and hence more accurate cross tracks.
Fig. 7 is a flowchart illustrating step S30 of a cross track analysis method according to an exemplary embodiment. As shown in Fig. 7, step S30 of the cross track analysis method includes:
step S31, determining the trajectory to be merged corresponding to the target object in each reference image according to the time information and the location information of the target image of the target object in each reference image.
In one possible implementation, according to the target images determined for each reference image, the track of the target object corresponding to each reference image can be determined in the image to be analyzed. For example, target object A has three reference images: reference image 1, reference image 2, and reference image 3. According to target object A as determined in reference image 1, track 1 to be merged of target object A is determined in the image to be analyzed; according to target object A as determined in reference image 2, track 2 to be merged is determined; and according to target object A as determined in reference image 3, track 3 to be merged is determined.
And step S32, merging the tracks to be merged corresponding to the target object in each reference image to obtain the track of the target object.
In one possible implementation, the tracks to be merged corresponding to the reference images may be merged as required to obtain the track of the target object, for example by merging only the tracks to be merged within a particular time period and/or a particular geographic range.
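A minimal sketch of step S32 under the assumptions above: the tracks to be merged are pooled, optionally restricted to a particular time period, de-duplicated, and sorted by time. The TrackPoint structure and merge_tracks function are illustrative names, not part of the disclosure.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import List, Optional, Tuple

@dataclass(frozen=True)  # frozen makes points hashable for de-duplication
class TrackPoint:
    time: datetime
    place: Tuple[float, float]  # (longitude, latitude)

def merge_tracks(tracks_to_merge: List[List[TrackPoint]],
                 period: Optional[Tuple[datetime, datetime]] = None) -> List[TrackPoint]:
    """Step S32: pool the tracks to be merged from all reference images,
    optionally keep only a particular time period, drop duplicate points,
    and sort chronologically to obtain the track of the target object."""
    points = {p for track in tracks_to_merge for p in track}
    if period is not None:
        start, end = period
        points = {p for p in points if start <= p.time <= end}
    return sorted(points, key=lambda p: p.time)
```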
In this embodiment, after the to-be-merged track corresponding to each reference image is obtained according to the target image corresponding to each reference image, the track of the target object may be determined according to the to-be-merged track of each reference image. According to the tracks to be merged of the reference images, the tracks of the target objects can be more accurate and reliable.
In a possible implementation manner, step S32 of the cross track analysis method includes:
and merging the tracks to be merged corresponding to the target object in each reference image to obtain merged tracks.
And removing the abnormal time information and/or the abnormal place information in the combined track to obtain the track of the target object.
In one possible implementation, the merged track may be obtained by merging the tracks to be merged of all reference images, or of only some of them. For example, when the track to be merged of one reference image deviates significantly from those of the other reference images, that deviating track may be excluded and the tracks to be merged of the remaining reference images merged to obtain the merged track.
In one possible implementation, abnormal time information and/or abnormal place information may be removed from the merged track as required. According to the time information, the average time of the target images can be computed and the point belonging to the target image whose time differs most from that average removed, yielding the track of the target object. According to the place information, points belonging to target images outside a set geographic range can be removed, likewise yielding the track of the target object.
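The two removal rules just described can be sketched as follows, again with hypothetical names and structures: one helper drops the point whose timestamp deviates most from the average time, the other drops points outside a set geographic range.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import List, Tuple

@dataclass
class TrackPoint:
    time: datetime
    place: Tuple[float, float]  # (longitude, latitude)

def drop_time_outlier(track: List[TrackPoint]) -> List[TrackPoint]:
    """Remove the point whose timestamp deviates most from the mean time."""
    if len(track) < 3:  # too few points to call any of them abnormal
        return track
    mean_ts = sum(p.time.timestamp() for p in track) / len(track)
    worst = max(track, key=lambda p: abs(p.time.timestamp() - mean_ts))
    return [p for p in track if p is not worst]

def drop_out_of_range(track: List[TrackPoint],
                      lon_range: Tuple[float, float],
                      lat_range: Tuple[float, float]) -> List[TrackPoint]:
    """Remove points whose place falls outside the set geographic range."""
    return [p for p in track
            if lon_range[0] <= p.place[0] <= lon_range[1]
            and lat_range[0] <= p.place[1] <= lat_range[1]]
```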
In this embodiment, the track of the target object may be obtained by removing the abnormal time information and the abnormal location information in the combined track according to the requirement. The track of each target object can be more accurate, and the cross track of each target object can be more accurate.
Fig. 8 is a block diagram illustrating a cross track analysis apparatus according to an exemplary embodiment. As shown in Fig. 8, the apparatus includes:
the target image determining module 10 is used for determining a target image of a target object in an image to be analyzed, wherein the image to be analyzed comprises time information and place information.
And an information determining module 20, configured to determine time information and location information of the target image.
And a target track determining module 30, configured to obtain a track of the target object according to the time information and the location information of the target image.
And the crossing track determining module 40 is configured to obtain crossing tracks of the multiple target objects according to the tracks of the multiple target objects.
Fig. 9 is a block diagram illustrating a cross-track analysis apparatus according to an exemplary embodiment, as shown in fig. 9, in one possible implementation, the target image determination module 10 includes:
the detection submodule 11 is configured to detect a target object in the image to be analyzed;
the first target image determining sub-module 12 is configured to determine, when the target object is detected in the image to be analyzed, a target image of the target object according to the image to be analyzed.
In one possible implementation, the target image determination module 10 includes:
a candidate target object detection sub-module 13 for detecting a candidate target object in the original image;
an image to be analyzed generation submodule 14, configured to intercept an image including each candidate target object from the original image, and generate an image to be analyzed;
and the second target image determining submodule 15 is configured to determine a target image of a target object in the image to be analyzed.
In one possible implementation, the second target image determination sub-module 15 includes:
and the target image determining unit is used for determining a target image of the target object in the image to be analyzed according to the similarity between the target object and the candidate target object.
In one possible implementation, the target image determination module 10 includes:
a target object determination sub-module 16 for determining a target object in any one of the reference images for the target object;
and a third target image determining submodule 17, configured to determine, according to the target object in each reference image, a target image of the target object in each reference image in the image to be analyzed.
In one possible implementation, the target trajectory determination module 30 includes:
a track to be merged determining submodule 31, configured to determine a track to be merged corresponding to the target object in each reference image according to the time information and the location information of the target image of the target object in each reference image;
the first target track determining submodule 32 is configured to combine the to-be-combined tracks corresponding to the target object in each reference image, so as to obtain the track of the target object.
In one possible implementation, the first target trajectory determination submodule 32 includes:
a merging track determining unit, configured to merge tracks to be merged corresponding to the target object in each reference image to obtain a merged track;
and the second target track determining unit is used for removing the abnormal time information and/or the abnormal place information in the combined track to obtain the track of the target object.
FIG. 10 is a block diagram illustrating an electronic device in accordance with an exemplary embodiment. For example, the electronic device may be provided as a terminal, a server, or another form of device, and serves as the cross track analysis apparatus. Referring to fig. 10, the device 1900 includes a processing component 1922, which further includes one or more processors, and memory resources, represented by a memory 1932, for storing instructions executable by the processing component 1922, such as application programs. The application programs stored in the memory 1932 may include one or more modules each corresponding to a set of instructions. Further, the processing component 1922 is configured to execute the instructions to perform the above-described method.
The device 1900 may also include a power component 1926 configured to perform power management of the device 1900, a wired or wireless network interface 1950 configured to connect the device 1900 to a network, and an input/output (I/O) interface 1958. The device 1900 may operate based on an operating system stored in the memory 1932, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
In an exemplary embodiment, a computer-readable storage medium, such as the memory 1932, is also provided that includes computer program instructions executable by the processing component 1922 of the device 1900 to perform the above-described methods.
The present disclosure may be systems, methods, and/or computer program products. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied thereon for causing a processor to implement various aspects of the present disclosure.
The computer readable storage medium may be a tangible device that can hold and store the instructions for use by the instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical coding device, such as punch cards or in-groove projection structures having instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media as used herein is not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission medium (e.g., optical pulses through a fiber optic cable), or electrical signals transmitted through electrical wires.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device via a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
The computer program instructions for carrying out operations of the present disclosure may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, or source or object code written in any combination of one or more programming languages, including an object-oriented programming language such as Smalltalk or C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the latter case, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, electronic circuitry, such as a programmable logic circuit, a field-programmable gate array (FPGA), or a programmable logic array (PLA), can be personalized by utilizing state information of the computer-readable program instructions, and this electronic circuitry can execute the computer-readable program instructions to implement aspects of the present disclosure.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Having described embodiments of the present disclosure, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terms used herein were chosen in order to best explain the principles of the embodiments, the practical application, or technical improvements to the techniques in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (12)

1. A method of cross-track analysis, the method comprising:
determining a target image of a target object in an image to be analyzed, wherein the image to be analyzed comprises time information and location information, and the method comprises the following steps: determining a target object in any reference image for the target object; determining a target image of a target object in each reference image in an image to be analyzed according to the target object in each reference image, wherein the target object in each reference image comprises different characteristics of the target object, and the reference image comprises a photo or a portrait containing the target object;
determining the time information and the place information of the target image according to the time information and the place information of the image to be analyzed;
obtaining the track of the target object according to the time information and the location information of the target image, wherein the track obtaining method comprises the following steps: determining a track to be merged corresponding to the target object in each reference image according to the time information and the place information of the target image of the target object in each reference image; merging the tracks to be merged corresponding to the target object in each reference image to obtain the track of the target object;
and obtaining the cross tracks of the target objects according to the tracks of the target objects.
2. The method of claim 1, wherein determining a target image of a target object in an image to be analyzed comprises:
detecting a target object in the image to be analyzed;
when the target object is detected in the image to be analyzed, determining a target image of the target object according to the image to be analyzed.
3. The method of claim 1, wherein determining a target image of a target object in an image to be analyzed comprises:
detecting a candidate target object in an original image;
intercepting an image comprising each candidate target object in the original image to generate an image to be analyzed;
and determining a target image of a target object in the image to be analyzed.
4. The method of claim 3, wherein determining a target image of a target object in the image to be analyzed comprises:
and determining a target image of the target object in the image to be analyzed according to the similarity between the target object and the candidate target object.
5. The method according to claim 1, wherein merging the to-be-merged trajectories corresponding to the target object in each reference image to obtain the trajectory of the target object comprises:
merging the tracks to be merged corresponding to the target object in each reference image to obtain merged tracks;
and removing the abnormal time information and/or the abnormal place information in the combined track to obtain the track of the target object.
6. A cross-track analysis apparatus, the apparatus comprising:
the target image determining module is used for determining a target image of a target object in an image to be analyzed, wherein the image to be analyzed comprises time information and place information, and the target image determining module comprises the following steps: a target object determination sub-module for determining a target object in any one of the reference images for the target object; a third target image determining submodule, configured to determine, according to a target object in each reference image, a target image of the target object in each reference image in an image to be analyzed, where the target object in each reference image includes different features of the target object, and the reference image includes a photograph or an portrait that includes the target object;
the information determining module is used for determining the time information and the place information of the target image;
a target track determining module, configured to obtain a track of the target object according to the time information and the location information of the target image, including: the track to be merged determining submodule is used for determining a track to be merged corresponding to the target object in each reference image according to the time information and the place information of the target image of the target object in each reference image; the first target track determining submodule is used for merging the tracks to be merged corresponding to the target object in each reference image to obtain the track of the target object;
and the cross track determining module is used for obtaining the cross tracks of the target objects according to the tracks of the target objects.
7. The apparatus of claim 6, wherein the target image determination module comprises:
the detection submodule is used for detecting a target object in the image to be analyzed;
the first target image determining submodule is used for determining a target image of the target object according to the image to be analyzed when the target object is detected in the image to be analyzed.
8. The apparatus of claim 6, wherein the target image determination module comprises:
a candidate target object detection sub-module for detecting a candidate target object in the original image;
the image to be analyzed generating submodule is used for intercepting images comprising the candidate target objects in the original image to generate an image to be analyzed;
and the second target image determining submodule is used for determining a target image of a target object in the image to be analyzed.
9. The apparatus of claim 8, wherein the second target image determination sub-module comprises:
and the target image determining unit is used for determining a target image of the target object in the image to be analyzed according to the similarity between the target object and the candidate target object.
10. The apparatus of claim 6, wherein the first target trajectory determination submodule comprises:
a merging track determining unit, configured to merge tracks to be merged corresponding to the target object in each reference image to obtain a merged track;
and the second target track determining unit is used for removing the abnormal time information and/or the abnormal place information in the combined track to obtain the track of the target object.
11. An electronic device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to: performing the method of any one of claims 1 to 5.
12. A computer readable storage medium having computer program instructions stored thereon, which when executed by a processor implement the method of any one of claims 1 to 5.
CN201810558528.9A 2018-06-01 2018-06-01 Cross track analysis method and device, electronic equipment and storage medium Active CN108876817B (en)

Priority Applications (1)

Application Number: CN201810558528.9A
Priority Date: 2018-06-01
Filing Date: 2018-06-01
Title: Cross track analysis method and device, electronic equipment and storage medium

Publications (2)

Publication Number: CN108876817A, Publication Date: 2018-11-23
Publication Number: CN108876817B, Publication Date: 2021-08-20

Family

ID=64335395

Family Applications (1)

Application Number: CN201810558528.9A
Title: Cross track analysis method and device, electronic equipment and storage medium
Priority Date: 2018-06-01
Filing Date: 2018-06-01
Status: Active (granted as CN108876817B)

Country Status (1)

Country: CN
Publication: CN108876817B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111524160A (en) * 2019-02-01 2020-08-11 深圳市商汤科技有限公司 Track information acquisition method and device, electronic equipment and storage medium
CN110502651B (en) * 2019-08-15 2022-08-02 深圳市商汤科技有限公司 Image processing method and device, electronic equipment and storage medium
CN111126807B (en) * 2019-12-12 2023-10-10 浙江大华技术股份有限公司 Stroke segmentation method and device, storage medium and electronic device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108363997A (en) * 2018-03-20 2018-08-03 南京云思创智信息科技有限公司 It is a kind of in video to the method for real time tracking of particular person

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2790123A1 (en) * 2013-04-08 2014-10-15 BlackBerry Limited Generating A Data Audit Trail For Cross Perimeter Data Transfer
CN107818573A (en) * 2016-09-12 2018-03-20 杭州海康威视数字技术股份有限公司 A kind of method for tracking target and device
CN107644204A (en) * 2017-09-12 2018-01-30 南京凌深信息科技有限公司 A kind of human bioequivalence and tracking for safety-protection system

Also Published As

Publication Number: CN108876817A (en), Publication Date: 2018-11-23

Legal Events

Code  Description
PB01  Publication
SE01  Entry into force of request for substantive examination
GR01  Patent grant