CN115631212B - Person accompanying track determining method and device, electronic equipment and readable medium - Google Patents


Info

Publication number
CN115631212B
Authority
CN
China
Prior art keywords
track
information
person
position information
sequence
Prior art date
Legal status
Active
Application number
CN202211661068.5A
Other languages
Chinese (zh)
Other versions
CN115631212A (en)
Inventor
常海峰
陈海峰
李娜
王星
任志鹏
Current Assignee
Zhongguancun Smart City Co Ltd
Original Assignee
Zhongguancun Smart City Co Ltd
Priority date
Filing date
Publication date
Application filed by Zhongguancun Smart City Co Ltd filed Critical Zhongguancun Smart City Co Ltd
Priority to CN202211661068.5A
Publication of CN115631212A
Application granted
Publication of CN115631212B


Classifications

    • G PHYSICS › G06 COMPUTING; CALCULATING OR COUNTING › G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL › G06T 7/00 Image analysis › G06T 7/20 Analysis of motion
    • G PHYSICS › G06 COMPUTING; CALCULATING OR COUNTING › G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL › G06T 7/00 Image analysis › G06T 7/70 Determining position or orientation of objects or cameras
    • G PHYSICS › G06 COMPUTING; CALCULATING OR COUNTING › G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL › G06T 2207/00 Indexing scheme for image analysis or image enhancement › G06T 2207/10 Image acquisition modality › G06T 2207/10016 Video; Image sequence
    • G PHYSICS › G06 COMPUTING; CALCULATING OR COUNTING › G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL › G06T 2207/00 Indexing scheme for image analysis or image enhancement › G06T 2207/30 Subject of image; Context of image processing › G06T 2207/30241 Trajectory

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The embodiments of the disclosure disclose a person accompanying trajectory determination method and apparatus, an electronic device, and a readable medium. One embodiment of the method comprises: acquiring a position information sequence of a target person within a preset time period; performing person trajectory fitting according to the position coordinates included in the position information; determining a target video according to the position location times and position coordinates included in the position information; performing person identification on the target video to generate candidate person movement trajectory information; for each piece of candidate person movement trajectory information in the candidate person movement trajectory information sequence, determining the trajectory accompanying degree of the person trajectory corresponding to that candidate person movement trajectory information and the person trajectory corresponding to the target person movement trajectory information; and screening, from the candidate person movement trajectory information sequence, the candidate person movement trajectory information whose corresponding trajectory accompanying degree satisfies a trajectory screening condition. This embodiment improves the accuracy and efficiency of accompanying trajectory determination.

Description

Person accompanying track determining method and device, electronic equipment and readable medium
Technical Field
The embodiment of the disclosure relates to the technical field of computers, in particular to a person accompanying track determining method, a person accompanying track determining device, an electronic device and a readable medium.
Background
Trajectory determination refers to a technology for determining a movement trajectory from a person's positioning data. Building on it, determining the accompanying trajectories of multiple persons has great research value in fields such as epidemic tracing and suspect tracking. Currently, when determining an accompanying trajectory, the following method is generally adopted: accompanying persons are identified by manual judgment so as to determine the accompanying trajectory.
However, the inventors have found that when the above-described manner is adopted, there are often technical problems as follows:
firstly, for a large amount of positioning data, it is difficult to efficiently determine the trajectory and the accompanying relationship in a manual manner, so that rapid and accurate accompanying trajectory determination cannot be performed in a short time;
secondly, in practical situations, the interaction relationship of the tracks corresponding to multiple personnel is complex, so that it is difficult to accurately determine the accompanying relationship between the tracks, and the wrong accompanying relationship may affect the subsequent applications in the fields of epidemic disease tracing, suspect tracing, and the like, thereby causing problems such as tracing error and tracing failure.
The above information disclosed in this background section is only for enhancement of understanding of the background of the inventive concept and, therefore, it may contain information that does not form the prior art that is already known to a person of ordinary skill in the art in this country.
Disclosure of Invention
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
Some embodiments of the present disclosure propose a person-along-track determination method, apparatus, electronic device, and readable medium to address one or more of the technical problems noted in the background section above.
In a first aspect, some embodiments of the present disclosure provide a person accompanying trajectory determination method, the method including: acquiring a position information sequence of a target person within a preset time period, wherein position information in the position information sequence comprises: position location time and position coordinates; performing person trajectory fitting according to the position coordinates included in the position information in the position information sequence to generate target person movement trajectory information corresponding to the target person; determining a target video according to the position location times and position coordinates included in the position information in the position information sequence, wherein the target video is a video containing the target person; performing person identification on the target video to generate candidate person movement trajectory information, obtaining a candidate person movement trajectory information sequence; for each piece of candidate person movement trajectory information in the candidate person movement trajectory information sequence, determining the trajectory accompanying degree of the person trajectory corresponding to the candidate person movement trajectory information and the person trajectory corresponding to the target person movement trajectory information; and screening, from the candidate person movement trajectory information sequence, candidate person movement trajectory information whose corresponding trajectory accompanying degree satisfies a trajectory screening condition as the accompanying trajectory information corresponding to the target person, obtaining an accompanying trajectory information set.
In a second aspect, some embodiments of the present disclosure provide a person accompanying trajectory determination apparatus, the apparatus comprising: an acquisition unit configured to acquire a position information sequence of a target person within a preset time period, wherein position information in the position information sequence comprises: position location time and position coordinates; a person trajectory fitting unit configured to perform person trajectory fitting according to the position coordinates included in the position information in the position information sequence to generate target person movement trajectory information corresponding to the target person; a first determining unit configured to determine a target video according to the position location times and position coordinates included in the position information in the position information sequence, wherein the target video is a video containing the target person; a person identification unit configured to perform person identification on the target video to generate candidate person movement trajectory information and obtain a candidate person movement trajectory information sequence; a second determining unit configured to determine, for each piece of candidate person movement trajectory information in the candidate person movement trajectory information sequence, the trajectory accompanying degree of the person trajectory corresponding to that candidate person movement trajectory information and the person trajectory corresponding to the target person movement trajectory information; and a screening unit configured to screen, from the candidate person movement trajectory information sequence, the candidate person movement trajectory information whose corresponding trajectory accompanying degree satisfies a trajectory screening condition as the accompanying trajectory information corresponding to the target person, and obtain an accompanying trajectory information set.
In a third aspect, some embodiments of the present disclosure provide an electronic device, comprising: one or more processors; a storage device having one or more programs stored thereon, which when executed by one or more processors, cause the one or more processors to implement the method described in any of the implementations of the first aspect.
In a fourth aspect, some embodiments of the present disclosure provide a computer readable medium on which a computer program is stored, wherein the program, when executed by a processor, implements the method described in any of the implementations of the first aspect.
The above embodiments of the present disclosure have the following advantages: the accuracy and efficiency of accompanying trajectory determination are improved by the person accompanying trajectory determination method of some embodiments of the present disclosure. Specifically, the reason for the low accuracy and efficiency of accompanying trajectory determination is that, for a large amount of positioning data, it is difficult to efficiently determine trajectories and accompanying relationships manually, so rapid and accurate accompanying trajectory determination cannot be performed in a short time. Based on this, in the person accompanying trajectory determination method of some embodiments of the present disclosure, first, a position information sequence of a target person within a preset time period is obtained, where position information in the position information sequence includes: position location time and position coordinates. Then, person trajectory fitting is performed according to the position coordinates included in the position information in the position information sequence to generate target person movement trajectory information corresponding to the target person, thereby generating the movement trajectory of the target person. Next, a target video is determined according to the position location times and position coordinates included in the position information in the position information sequence, where the target video is a video containing the target person. In practical situations, persons having an accompanying relationship with the target person may appear in the target video; the target video is acquired for the subsequent determination of accompanying persons. Then, person identification is performed on the target video to generate candidate person movement trajectory information, obtaining a candidate person movement trajectory information sequence.
Persons other than the target person contained in the target video are identified by performing person identification on the target video. Further, for each piece of candidate person movement trajectory information in the candidate person movement trajectory information sequence, the trajectory accompanying degree of the person trajectory corresponding to that candidate person movement trajectory information and the person trajectory corresponding to the target person movement trajectory information is determined. In practical situations, the target video often contains multiple persons, some of whom may have an accompanying relationship with the target person, so it is necessary to determine whether such a relationship exists. Finally, the candidate person movement trajectory information whose corresponding trajectory accompanying degree satisfies the trajectory screening condition is screened out from the candidate person movement trajectory information sequence and taken as the accompanying trajectory information corresponding to the target person, so as to obtain an accompanying trajectory information set. Thus, automatic determination of the accompanying trajectory is realized, and compared with manual judgment, the accuracy and efficiency of accompanying trajectory determination are greatly improved.
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that elements are not necessarily drawn to scale.
FIG. 1 is a flow diagram of some embodiments of a person companion trajectory determination method according to the present disclosure;
FIG. 2 is a schematic block diagram of some embodiments of a person companion trajectory determination device according to the present disclosure;
FIG. 3 is a schematic block diagram of an electronic device suitable for use in implementing some embodiments of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings. The embodiments and features of the embodiments in the present disclosure may be combined with each other without conflict.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence of the functions performed by the devices, modules or units.
It is noted that references to "a" or "an" in this disclosure are illustrative rather than limiting, and those skilled in the art will appreciate that they should be read as "one or more" unless the context clearly indicates otherwise.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Referring to fig. 1, a flow 100 of some embodiments of a person companion trajectory determination method according to the present disclosure is shown. The method for determining the accompanying trajectory of the person comprises the following steps:
step 101, acquiring a position information sequence of a target person in a preset time period.
In some embodiments, the execution subject (e.g., a computing device) of the person accompanying trajectory determination method may obtain the position information sequence of the target person within the preset time period through a wired or wireless connection. The position information in the position information sequence includes: position location time and position coordinates. The target person is the person whose accompanying trajectory is to be determined. The preset time period may be set in advance as needed. The position information may characterize the location of the target person within the time period. The position location time may be the time at which the position of the target person is determined. The position coordinates may represent the position of the target person at the position location time. In practice, the position coordinates included in the position information can be determined by three-point positioning (trilateration) with respect to communication base stations.
It should be noted that the above-mentioned wireless connection means may include, but is not limited to, 3G/4G/5G connections, Wi-Fi connections, Bluetooth connections, WiMAX connections, ZigBee connections, UWB (ultra-wideband) connections, and other now known or later developed wireless connection means.
The computing device may be hardware or software. When the computing device is hardware, it may be implemented as a distributed cluster composed of multiple servers or terminal devices, or as a single server or terminal device. When the computing device is software, it may be installed in the hardware devices enumerated above. It may be implemented, for example, as multiple pieces of software or software modules providing distributed services, or as a single piece of software or software module, and is not particularly limited herein. It should be understood that there may be any number of computing devices, as required by the implementation.
And 102, performing person trajectory fitting according to the position coordinates included in the position information in the position information sequence to generate target person movement trajectory information corresponding to the target person.
In some embodiments, the executing body may perform person trajectory fitting according to position coordinates included in the position information sequence to generate target person movement trajectory information corresponding to the target person. And the target person movement track information represents the movement track corresponding to the target person. For example, the target person movement trajectory information may be a trajectory equation of a movement trajectory corresponding to the target person. For another example, the target person movement track information may be a track point set of a movement track corresponding to the target person. In practice, the executing body may perform a person trajectory fitting by a least square method according to the position coordinates included in the position information sequence to generate the target person movement trajectory information.
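As an illustrative, non-limiting sketch of the least-squares fitting mentioned above (the function name, the choice of a polynomial model y = f(x), and the polynomial degree are illustrative and do not appear in the original disclosure):

```python
import numpy as np

def fit_person_trajectory(position_coords, degree=2):
    """Fit a polynomial trajectory y = f(x) to (x, y) position
    coordinates by least squares and return its coefficients."""
    coords = np.asarray(position_coords, dtype=float)
    x, y = coords[:, 0], coords[:, 1]
    # np.polyfit minimizes the sum of squared residuals,
    # returning coefficients from highest to lowest degree.
    return np.polyfit(x, y, deg=degree)

# Points lying exactly on y = x**2: the fit should recover it.
coeffs = fit_person_trajectory([(0, 0), (1, 1), (2, 4), (3, 9)])
```

The returned coefficients can serve as the "trajectory equation" form of the target person movement trajectory information; the trajectory-point-set form could instead be produced by sampling this fitted polynomial.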
In some optional implementation manners of some embodiments, the executing body performs person trajectory fitting according to position coordinates included in the position information sequence to generate target person movement trajectory information corresponding to the target person, and may include the following steps:
a first step of, for each pair of position information groups in the above-mentioned position information sequence, performing the following spaced-apart position determining step:
the first substep is to determine the time interval of the position locating time included in the position information group, and obtain the time difference.
Wherein the position information group comprises two adjacent pieces of position information.
As an example, the position information group includes position information a and position information B. The position information B includes a position location time later than the position location time included in the position information a. The execution body may subtract the position locating time included in the position information a from the position locating time included in the position information B as a time difference.
A second substep of determining separation location information based on the set of location information in response to determining that the time difference is greater than a predetermined difference.
Wherein the separation position information characterizes the separation position. In practice, the execution subject may determine the separation position information according to a preset formula (rendered as an image in the original publication), whose quantities are: the separation position information to be determined; the position, in the position information sequence, of the first piece of position information in the position information group; and the position, in the position information sequence, of the second piece of position information in the position information group.
And secondly, grouping the position information sequence according to the obtained separation position information set to obtain a position information group sequence.
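As an illustrative, non-limiting sketch of the two steps above, detecting time gaps between adjacent position fixes that exceed a preset difference and splitting the sequence there (function and variable names are illustrative, not from the disclosure):

```python
def split_at_time_gaps(position_infos, max_gap):
    """position_infos: list of (fix_time, (x, y)) tuples sorted by
    time. Split the sequence wherever the gap between adjacent fix
    times exceeds max_gap, mirroring the separation-position
    grouping step above."""
    groups, current = [], [position_infos[0]]
    for prev, cur in zip(position_infos, position_infos[1:]):
        if cur[0] - prev[0] > max_gap:  # time difference > preset difference
            groups.append(current)
            current = []
        current.append(cur)
    groups.append(current)
    return groups

# A 50-unit gap between t=10 and t=60 splits the sequence in two.
fixes = [(0, (0, 0)), (10, (1, 1)), (60, (5, 5)), (70, (6, 6))]
groups = split_at_time_gaps(fixes, max_gap=30)
```

Each resulting group then becomes one position information group of the position information group sequence for the subsequent per-group trajectory fitting.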
And thirdly, performing track fitting on each position information group in the position information group sequence to generate a sub-fitting track to obtain a sub-fitting track sequence.
As an example, the execution subject described above may perform trajectory fitting by a least square method from position coordinates included in the position information group to generate a sub-fitted trajectory.
Fourthly, executing the following track fusion processing steps for each pair of sub-fitting track groups in the sub-fitting track sequence:
the first sub-step, determine the number of insertion points between the sub-fit trajectories in the set of sub-fit trajectories.
Wherein, the sub-fitting track group comprises two adjacent sub-fitting tracks.
As an example, the sub-fit trajectory group includes a sub-fit trajectory a and a sub-fit trajectory B. The execution main body may determine, as the number of insertion points, a difference between position location time included in the position information corresponding to the end point of the sub-fitting trajectory a and position location time included in the position information corresponding to the start point of the sub-fitting trajectory B.
The second sub-step, determining a set of insertion point location information.
And the number of the insertion point position information in the insertion point position information set is the same as that of the insertion points.
As an example, the number of insertion points may be 1. The sub-fitting trajectory group includes a sub-fitting trajectory A and a sub-fitting trajectory B. The execution body may determine the intersection position of connecting line A and connecting line B as the position corresponding to the insertion point position information. Connecting line A is the line connecting the end point of sub-fitting trajectory A and the start point of sub-fitting trajectory B. Connecting line B is perpendicular to connecting line A and passes through the intersection of the tangent at the end point of sub-fitting trajectory A and the tangent at the start point of sub-fitting trajectory B.
And a third substep of fitting a joining track according to the insertion point position information set.
And a fourth substep of performing track joining on the sub-fitting tracks in the sub-fitting track group through the joining track to generate a joined track.
And fifthly, performing track connection on each connected track in the obtained connected track set to generate the target person movement track information.
In actual situations, since the communication base station has a fixed position and a fixed signal receiving range, there is a case where positioning fails, so that the position information of the target person cannot be obtained. Based on this, the completion of the track is realized through the first step to the fifth step, and the continuity and the integrity of the moving track corresponding to the target person are ensured.
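As an illustrative simplification of the joining steps above, the following sketch inserts linearly interpolated points between the end of one sub-fitted track and the start of the next (linear interpolation stands in for the tangent-based construction described in the text; names are illustrative):

```python
def join_subtracks(track_a, track_b, n_insert):
    """Join two sub-fitted tracks (lists of (x, y) points) by
    linearly interpolating n_insert insertion points between the
    end of track_a and the start of track_b."""
    (x0, y0), (x1, y1) = track_a[-1], track_b[0]
    inserted = [
        (x0 + (x1 - x0) * k / (n_insert + 1),
         y0 + (y1 - y0) * k / (n_insert + 1))
        for k in range(1, n_insert + 1)
    ]
    return track_a + inserted + track_b

# One insertion point midway between (1, 1) and (3, 3).
joined = join_subtracks([(0, 0), (1, 1)], [(3, 3), (4, 4)], n_insert=1)
```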
And 103, determining the target video according to the position positioning time and the position coordinates included in the position information sequence.
In some embodiments, the execution subject may determine the target video according to a position location time and a position coordinate included in the position information sequence. In practice, the execution subject may pull, from the video storage server, a video acquired by the image acquisition device at the position positioning time corresponding to the position coordinate, as the target video. The image acquisition device can be a camera with a video recording function.
In some optional implementations of some embodiments, the determining, by the execution main body, the target video according to the position location time and the position coordinates included in the position information sequence may include:
the method comprises the following steps of firstly, determining an image acquisition device matched with the position coordinates for the position coordinates included by each piece of position information in the position information sequence so as to generate image acquisition device position information.
The image acquisition device can be a camera with a video recording function.
As an example, when the position coordinate is within the acquisition range of the image acquisition device, the position of the device corresponding to the image acquisition device may be determined as the image acquisition device information.
And secondly, performing duplicate removal processing on the obtained image acquisition device position information set to obtain a duplicate-removed image acquisition device position information set.
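As a minimal sketch of the de-duplication step above, assuming device positions are hashable coordinate tuples (names are illustrative):

```python
def dedup_camera_positions(cam_positions):
    """Remove duplicate image-capture-device positions while
    keeping the order of first occurrence."""
    seen, out = set(), []
    for pos in cam_positions:
        if pos not in seen:
            seen.add(pos)
            out.append(pos)
    return out

unique = dedup_camera_positions([(1, 2), (3, 4), (1, 2), (5, 6)])
```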
And thirdly, acquiring the sub-video acquired by the target image acquisition device at the position positioning time included in the position information for each position information in the position information sequence.
Wherein the target image acquisition device is an image acquisition device corresponding to the position information of the de-duplicated image acquisition device corresponding to the position information in the set of the position information of the de-duplicated image acquisition device. The execution main body may acquire, in a wired connection or a wireless connection manner, a sub-video acquired by the target image acquisition device at a position positioning time included in the position information.
And fourthly, performing video splicing on the sub-videos in the obtained sub-video set according to the time dimension to obtain the target video.
In practice, the execution subject may stitch the sub-videos in the sub-video set according to the sequence of the generation time of the video to generate the target video.
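As an illustrative abstraction of the time-dimension splicing above, sub-videos can be ordered by capture time and concatenated; actual video file handling is omitted and the clip representation is an assumption:

```python
def stitch_by_time(sub_videos):
    """sub_videos: list of dicts with 'start' (capture time) and
    'frames'. Concatenate frame lists in chronological order,
    mirroring the time-dimension video splicing step."""
    ordered = sorted(sub_videos, key=lambda v: v["start"])
    frames = []
    for clip in ordered:
        frames.extend(clip["frames"])
    return frames

target = stitch_by_time([
    {"start": 20, "frames": ["c", "d"]},
    {"start": 0, "frames": ["a", "b"]},
])
```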
And 104, identifying the personnel of the target video to generate candidate personnel movement track information to obtain a candidate personnel movement track information sequence.
In some embodiments, the execution subject may perform person identification on the target video to generate candidate person movement trajectory information, so as to obtain a candidate person movement trajectory information sequence. In practice, the execution subject may perform person detection through a YOLO (You Only Look Once) network and person tracking through the DeepSORT algorithm to generate the candidate person movement trajectory information.
In some embodiments, the performing step of performing person identification on the target video to generate candidate person movement track information to obtain a candidate person movement track information sequence may include the following steps:
firstly, carrying out personnel detection on the target video to generate personnel detection information and obtain a personnel detection information set.
In practice, the execution subject may perform the person detection through the FaceNet network to generate the person detection information.
And secondly, carrying out personnel information association on the personnel detection information in the personnel detection information set to generate a personnel detection information group set.
In practice, the execution subject may classify the person corresponding to the person detection information through a classification network. And then determining the person detection information of the same category as the person detection information group.
And thirdly, generating candidate personnel movement track information for each personnel detection information group in the personnel detection information group set according to each personnel detection information in the personnel detection information group.
In practice, first, the execution subject may map the face position corresponding to each piece of person detection information to a world coordinate system, so as to obtain a plurality of face position coordinates in the world coordinate system. Then, linear fitting is performed on the face position coordinates in the world coordinate system to generate the candidate person movement trajectory information.
And 105, determining, for each piece of candidate person movement trajectory information in the candidate person movement trajectory information sequence, the trajectory accompanying degree of the person trajectory corresponding to the candidate person movement trajectory information and the person trajectory corresponding to the target person movement trajectory information.
In some embodiments, the executing entity may determine, for each candidate person movement trajectory information in the candidate person movement trajectory information sequence, a trajectory accompaniment of a person trajectory corresponding to the candidate person movement trajectory information and a person trajectory corresponding to the target person movement trajectory information. And the track accompanying degree represents the track accompanying degree of the personnel track corresponding to the candidate personnel moving track information and the personnel track corresponding to the target personnel moving track information. In practice, the executing body may generate the trajectory accompanying degree by determining a trajectory distance of a person trajectory corresponding to the candidate person movement trajectory information and a person trajectory corresponding to the target person movement trajectory information.
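As one illustrative way of turning a trajectory distance into a trajectory accompanying degree, the sketch below assumes both trajectories are sampled at the same timestamps and maps a smaller mean pointwise Euclidean distance to a degree closer to 1; the 1/(1+d) mapping is an assumption of this example, not the disclosure's formula:

```python
import math

def trajectory_accompanying_degree(track_a, track_b):
    """track_a, track_b: lists of (x, y) points sampled at the
    same times. Returns a score in (0, 1]; identical trajectories
    score 1.0."""
    dists = [math.dist(p, q) for p, q in zip(track_a, track_b)]
    mean_d = sum(dists) / len(dists)
    return 1.0 / (1.0 + mean_d)

deg = trajectory_accompanying_degree([(0, 0), (1, 1)], [(0, 0), (1, 1)])
```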
In some optional implementations of some embodiments, the determining, by the execution subject, for each candidate person movement trajectory information in the candidate person movement trajectory information sequence, the trajectory accompaniment degree of the person trajectory corresponding to the candidate person movement trajectory information and the person trajectory corresponding to the target person movement trajectory information may include the following steps:
firstly, determining a trajectory equation of a person trajectory corresponding to the target person movement trajectory information as a first trajectory equation.
And secondly, determining a trajectory equation of the person trajectory corresponding to the candidate person movement trajectory information as a second trajectory equation.
And thirdly, dividing the preset time period into a plurality of sub-time periods to obtain a sub-time period set.
Wherein the sub-time periods in the sub-time period set are of equal length.
And fourthly, for each sub-time period in the sub-time period set, determining the region area value of the region enclosed by the first trajectory equation and the second trajectory equation within the sub-time period.
In practice, the execution subject may determine the region area value by definite integration. Alternatively, the region area value may be determined by counting the number of pixel points within the region.
And fifthly, in response to determining that a plurality of consecutive region area values in the obtained region area value sequence are each smaller than or equal to a preset area, determining the ratio of the number of region area values in the consecutive run to the number of sub-time periods in the sub-time period set as the trajectory accompaniment degree.
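The first through fifth steps above can be sketched as follows. Treating the two trajectory equations as callables and using midpoint-rule numeric integration of the gap between them is a hedged assumption; the original only mentions definite integration or pixel counting, and all parameter names below are illustrative.

```python
def trajectory_accompaniment(f, g, t0, t1, n_sub=10, max_area=1.0, steps=100):
    """Sketch of the accompaniment degree: split the preset period [t0, t1]
    into n_sub equal sub-periods, numerically integrate |f(t) - g(t)| over
    each one to obtain that sub-period's enclosed-area value, then return
    (longest run of consecutive area values <= max_area) / n_sub."""
    width = (t1 - t0) / n_sub
    areas = []
    for i in range(n_sub):
        a = t0 + i * width
        h = width / steps
        # midpoint-rule integration of the gap between the two trajectories
        area = sum(abs(f(a + (j + 0.5) * h) - g(a + (j + 0.5) * h)) * h
                   for j in range(steps))
        areas.append(area)
    best = run = 0
    for area in areas:
        run = run + 1 if area <= max_area else 0
        best = max(best, run)
    return best / n_sub
```

Two identical trajectories enclose zero area in every sub-period and yield an accompaniment degree of 1.0; two trajectories with a constant large offset yield 0.0 under a small area threshold.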
Step 106: screening, from the candidate person movement trajectory information sequence, candidate person movement trajectory information whose corresponding trajectory accompaniment degree satisfies a trajectory screening condition, as accompanying trajectory information corresponding to the target person, so as to obtain an accompanying trajectory information set.
In some embodiments, the execution subject may screen, from the candidate person movement trajectory information sequence, the candidate person movement trajectory information whose corresponding trajectory accompaniment degree satisfies the trajectory screening condition, as the accompanying trajectory information corresponding to the target person, so as to obtain the accompanying trajectory information set. Wherein the trajectory screening condition is that the trajectory accompaniment degree is greater than a preset trajectory accompaniment degree.
In some optional implementations of some embodiments, the method further comprises:
a first step of executing, for each accompanying track information in the above accompanying track information set, an accompanying relationship determination step of:
and a first sub-step of sampling track points with preset frequency by taking the track starting point of the first person track as a sampling starting point to obtain a first track sampling point sequence.
Wherein the first person trajectory is a person trajectory corresponding to the target person movement trajectory information.
And a second sub-step of sampling track points at the preset frequency, taking the track starting point of the second person track as the sampling starting point, to obtain a second track sampling point sequence.
Wherein the second person trajectory is a person trajectory corresponding to the accompanying trajectory information.
And a third substep, determining the track point similarity and the track point deflection of each first track sampling point in the first track sampling point sequence and the corresponding second track sampling point in the second track sampling point sequence.
The track point similarity is the distance between the first track sampling point and the corresponding second track sampling point in the second track sampling point sequence. The track point deflection is the angle between a horizontal line and the line connecting the first track sampling point with the corresponding second track sampling point in the second track sampling point sequence.
And a fourth substep of determining the accompanying relationship between the person corresponding to the accompanying trajectory information and the target person according to the obtained track point similarity sequence and track point deflection sequence to obtain accompanying relationship information.
As an example, the execution subject may perform weighted summation on the standard deviation of the track point similarity sequence and the standard deviation of the track point deflection sequence to obtain an accompanying relationship value, and then determine the accompanying relationship information according to the accompanying relationship value and a plurality of preset numerical intervals. In practice, the accompanying relationship information may indicate a parallel accompanying relationship or a trailing accompanying relationship.
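A minimal sketch of this example: per-point similarities (distances) and deflections (angles versus the horizontal) are computed for paired samples, their standard deviations are combined by weighted summation, and the result is mapped to a relationship label. The weights, the single cutoff standing in for the "plurality of preset numerical intervals", and all names are illustrative assumptions, not details from the original.

```python
import math
import statistics


def companion_relationship(track_a, track_b, w_sim=0.5, w_defl=0.5,
                           parallel_cutoff=1.0):
    """Classify the accompanying relationship between two equally sampled
    tracks (lists of (x, y) points paired by sample index)."""
    sims, defls = [], []
    for (x1, y1), (x2, y2) in zip(track_a, track_b):
        sims.append(math.hypot(x2 - x1, y2 - y1))    # point similarity: distance
        defls.append(math.atan2(y2 - y1, x2 - x1))   # deflection: angle vs horizontal
    # weighted sum of the two standard deviations, as in the example above
    value = w_sim * statistics.pstdev(sims) + w_defl * statistics.pstdev(defls)
    # a low value means the tracks keep a rigid offset; the cutoff is a
    # placeholder for the preset numerical intervals
    return "parallel" if value <= parallel_cutoff else "trailing"
```

Two tracks with a constant vertical offset produce zero-variance similarity and deflection sequences, so the weighted value is 0 and the relationship is classified as parallel.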
And secondly, sending the obtained accompanying relation information set to an information storage end for information storage.
The first step to the second step serve as an invention point of the present disclosure and solve the second technical problem mentioned in the background art: in actual situations, the trajectory interaction relationships among multiple persons are complex, so it is difficult to accurately determine the accompanying relationship between trajectories, and a wrong accompanying relationship may affect subsequent applications in fields such as epidemic tracing and suspect tracking, causing tracing errors or tracing failures. In practice, the trajectories of multiple persons are not strictly parallel, which makes the accompanying relationship among trajectories difficult to measure; on this basis, the distance relationship between trajectories is measured through the track point similarity. Furthermore, the lines connecting the position points of the trajectories at the same time are not completely parallel. Interpreted in an actual scene, this means that besides parallel accompanying, the accompanying relationship includes other accompanying modes such as trailing accompanying; therefore, the positional relationship between trajectories is measured through the track point deflection. In this way, the accompanying relationships among trajectories are quantified and accurately classified.
In some optional implementations of some embodiments, the method further comprises:
firstly, a local map area corresponding to the position information sequence is determined.
In practice, the execution subject may determine, as the local map area, the area corresponding to the administrative district in which the positions in the position information sequence are located.
And secondly, determining the region point location through which the personnel track corresponding to the target personnel movement track information passes to generate region point location information to obtain a region point location information set.
An area point location may be a location with actual geographic meaning, such as a supermarket location or a restaurant location.
And thirdly, determining the residence time of the target person in the area point location corresponding to the area point location information for each area point location information in the area point location information set.
In practice, the execution subject may determine, as the dwell time, the difference between the time when the target person leaves the area point location corresponding to the area point location information and the time when the target person enters it.
And fourthly, mapping the personnel track corresponding to the target personnel movement track information to the local map area, and displaying the residence time of the area point corresponding to the area point information in the area point information set in the local map area.
In this way, trajectory visualization is achieved, and through the display of the dwell time, the time the target person stays at different area point locations can be presented intuitively.
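The dwell-time computation of the third step above can be sketched as follows; the visit-record structure and all names are hypothetical illustrations of "leave time minus enter time".

```python
def dwell_times(visits):
    """Compute per-point dwell time as leave time minus enter time.

    `visits` maps a region point location name (e.g. a supermarket) to an
    (enter_time, leave_time) pair in seconds; this structure is an
    illustrative assumption, not from the original."""
    return {point: leave - enter for point, (enter, leave) in visits.items()}
```

For example, a visit recorded as entering at t=100 s and leaving at t=460 s yields a dwell time of 360 s for that area point location.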
The above embodiments of the present disclosure have the following advantages: the accuracy and efficiency of accompanying trajectory determination are improved by the person accompanying trajectory determination method of some embodiments of the present disclosure. Specifically, the reasons for the low accuracy and efficiency of accompanying trajectory determination are as follows: for a large amount of positioning data, it is difficult to efficiently determine trajectories and accompanying relationships manually, so the accompanying trajectory cannot be determined quickly and accurately in a short time. Based on this, the person accompanying trajectory determination method of some embodiments of the present disclosure first acquires a position information sequence of a target person within a preset time period, where position information in the position information sequence includes: position location time and position coordinates. Then, person trajectory fitting is performed according to the position coordinates included in the position information sequence to generate target person movement trajectory information corresponding to the target person, thereby realizing the generation of the movement trajectory of the target person. Next, a target video is determined according to the position location time and the position coordinates included in the position information sequence, where the target video is a video containing the target person. In practical situations, the target video may contain persons having an accompanying relationship with the target person; the target video is acquired for the subsequent determination of accompanying persons. Then, person identification is performed on the target video to generate candidate person movement trajectory information, so as to obtain a candidate person movement trajectory information sequence.
Persons other than the target person contained in the target video are identified through the person identification. Further, for each candidate person movement trajectory information in the candidate person movement trajectory information sequence, the trajectory accompaniment degree of the person trajectory corresponding to the candidate person movement trajectory information and the person trajectory corresponding to the target person movement trajectory information is determined. In practical situations, the target video often contains a plurality of persons, some of whom may have an accompanying relationship with the target person, so it is necessary to determine whether such an accompanying relationship exists. Finally, candidate person movement trajectory information whose corresponding trajectory accompaniment degree satisfies the trajectory screening condition is screened from the candidate person movement trajectory information sequence as the accompanying trajectory information corresponding to the target person, so as to obtain an accompanying trajectory information set. In this way, automatic determination of the accompanying trajectory is realized, and compared with manual judgment, the accuracy and efficiency of accompanying trajectory determination are greatly improved.
As shown in fig. 2, the person accompanying trajectory determination device 200 of some embodiments includes: an acquisition unit 201, a person trajectory fitting unit 202, a first determination unit 203, a person identification unit 204, a second determination unit 205 and a screening unit 206. The acquisition unit 201 is configured to acquire a position information sequence of a target person within a preset time period, where position information in the position information sequence includes: position location time and position coordinates; the person trajectory fitting unit 202 is configured to perform person trajectory fitting according to the position coordinates included in the position information sequence to generate target person movement trajectory information corresponding to the target person; the first determination unit 203 is configured to determine a target video according to the position location time and the position coordinates included in the position information sequence, where the target video is a video containing the target person; the person identification unit 204 is configured to perform person identification on the target video to generate candidate person movement trajectory information, so as to obtain a candidate person movement trajectory information sequence; the second determination unit 205 is configured to determine, for each candidate person movement trajectory information in the candidate person movement trajectory information sequence, the trajectory accompaniment degree of the person trajectory corresponding to the candidate person movement trajectory information and the person trajectory corresponding to the target person movement trajectory information; the screening unit 206 is configured to screen, from the candidate person movement trajectory information sequence, candidate person movement trajectory information whose corresponding trajectory accompaniment degree satisfies the trajectory screening condition, as accompanying trajectory information corresponding to the target person, so as to obtain an accompanying trajectory information set.
It will be appreciated that the units described in the person accompanying trajectory determination device correspond to the respective steps in the method described with reference to fig. 1. Therefore, the operations, features and advantages described above with respect to the method are also applicable to the person accompanying trajectory determination device and the units included therein, and are not described herein again.
Referring now to FIG. 3, shown is a schematic block diagram of an electronic device (e.g., computing device) 300 suitable for use in implementing some embodiments of the present disclosure. The electronic device shown in fig. 3 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 3, electronic device 300 may include a processing means (e.g., central processing unit, graphics processor, etc.) 301 that may perform various appropriate actions and processes in accordance with a program stored in a read-only memory 302 or a program loaded from a storage device 308 into a random access memory 303. In the random access memory 303, various programs and data necessary for the operation of the electronic apparatus 300 are also stored. The processing means 301, the read only memory 302 and the random access memory 303 are connected to each other by a bus 304. An input/output interface 305 is also connected to the bus 304.
Generally, the following devices may be connected to the I/O interface 305: input devices 306 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 307 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage devices 308 including, for example, magnetic tape, hard disk, etc.; and a communication device 309. The communication means 309 may allow the electronic device 300 to communicate wirelessly or by wire with other devices to exchange data. While fig. 3 illustrates an electronic device 300 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided. Each block shown in fig. 3 may represent one device or may represent multiple devices, as desired.
In particular, according to some embodiments of the present disclosure, the processes described above with reference to the flow diagrams may be implemented as computer software programs. For example, some embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In some such embodiments, the computer program may be downloaded and installed from a network through the communication device 309, or installed from the storage device 308, or installed from the read-only memory 302. The computer program, when executed by the processing apparatus 301, performs the above-described functions defined in the methods of some embodiments of the present disclosure.
It should be noted that the computer readable medium described in some embodiments of the present disclosure may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In some embodiments of the disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In some embodiments of the present disclosure, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. 
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, the clients and servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
The computer readable medium may be included in the electronic device, or may exist separately without being assembled into the electronic device. The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquire a position information sequence of a target person within a preset time period, where position information in the position information sequence includes: position location time and position coordinates; perform person trajectory fitting according to the position coordinates included in the position information sequence to generate target person movement trajectory information corresponding to the target person; determine a target video according to the position location time and the position coordinates included in the position information sequence, where the target video is a video containing the target person; perform person identification on the target video to generate candidate person movement trajectory information, so as to obtain a candidate person movement trajectory information sequence; for each candidate person movement trajectory information in the candidate person movement trajectory information sequence, determine the trajectory accompaniment degree of the person trajectory corresponding to the candidate person movement trajectory information and the person trajectory corresponding to the target person movement trajectory information; and screen, from the candidate person movement trajectory information sequence, candidate person movement trajectory information whose corresponding trajectory accompaniment degree satisfies the trajectory screening condition, as accompanying trajectory information corresponding to the target person, so as to obtain an accompanying trajectory information set.
Computer program code for carrying out operations of embodiments of the present disclosure may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java, Smalltalk and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in some embodiments of the present disclosure may be implemented by software, and may also be implemented by hardware. The described units may also be provided in a processor, and may be described as: a processor includes an acquisition unit, a person trajectory fitting unit, a first determination unit, a person identification unit, a second determination unit, and a screening unit. The names of the units do not form a limitation on the units themselves in some cases, for example, the person trajectory fitting unit may also be described as a "unit that performs person trajectory fitting according to the position coordinates included in the position information sequence to generate target person movement trajectory information corresponding to the target person".
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on a Chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
The foregoing description is only of the preferred embodiments of the present disclosure and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the invention in the embodiments of the present disclosure is not limited to technical solutions formed by the specific combination of the above technical features, but also covers other technical solutions formed by any combination of the above technical features or their equivalents without departing from the above inventive concept, for example, a technical solution formed by replacing the above features with (but not limited to) technical features having similar functions disclosed in the embodiments of the present disclosure.

Claims (7)

1. A person accompanying trajectory determination method, comprising:
acquiring a position information sequence of a target person in a preset time period, wherein position information in the position information sequence comprises: position location time and position coordinates;
according to the position coordinates included by the position information in the position information sequence, carrying out personnel track fitting to generate target personnel movement track information corresponding to the target personnel;
determining a target video according to the position positioning time and the position coordinates included in the position information sequence, wherein the target video is a video containing the target person;
performing person identification on the target video to generate candidate person movement track information, so as to obtain a candidate person movement track information sequence;
for each candidate person movement track information in the candidate person movement track information sequence, determining the track accompaniments of the person track corresponding to the candidate person movement track information and the person track corresponding to the target person movement track information;
screening candidate personnel moving track information of which the corresponding track degree of accompaniment meets track screening conditions from the candidate personnel moving track information sequence, and taking the candidate personnel moving track information as accompanying track information corresponding to the target personnel to obtain an accompanying track information set;
for each companion track information in the set of companion track information, performing the following companion relationship determination step:
taking a track starting point of a first person track as a sampling starting point, and sampling track points with preset frequency to obtain a first track sampling point sequence, wherein the first person track is a person track corresponding to the target person movement track information;
taking a track starting point of a second person track as a sampling starting point, and sampling track points with the preset frequency to obtain a second track sampling point sequence, wherein the second person track is a person track corresponding to the accompanying track information;
for each first track sampling point in the first track sampling point sequence, determining track point similarity and track point deflection of the first track sampling point and a corresponding second track sampling point in the second track sampling point sequence;
according to the obtained track point similarity sequence and track point deflection sequence, determining an accompanying relationship between the person corresponding to the accompanying track information and the target person to obtain accompanying relationship information, wherein the accompanying relationship information comprises: a parallel companion relationship and a trailing companion relationship;
sending the obtained accompanying relation information set to an information storage end for information storage;
wherein, for each candidate person movement track information in the candidate person movement track information sequence, determining the track accompaniments of the person track corresponding to the candidate person movement track information and the person track corresponding to the target person movement track information includes:
determining a trajectory equation of a person trajectory corresponding to the target person movement trajectory information as a first trajectory equation;
determining a trajectory equation of the person trajectory corresponding to the candidate person movement trajectory information as a second trajectory equation;
dividing the preset time interval into a plurality of sub-time intervals to obtain a sub-time interval set, wherein the time interval lengths of the sub-time intervals in the sub-time interval set are consistent;
for each sub-time period in the set of sub-time periods, determining a region area value of a region surrounded by the first trajectory equation and the second trajectory equation in the sub-time period;
and in response to the fact that a plurality of continuous region area values in the obtained region area value sequence are smaller than or equal to a preset area, determining the ratio of the number of the region area values in the plurality of continuous region area values to the number of the sub-time periods in the sub-time period set as the track accompaniment.
2. The method of claim 1, wherein the method further comprises:
determining a local map area corresponding to the position information sequence;
determining the region point location through which the personnel track corresponding to the target personnel movement track information passes to generate region point location information to obtain a region point location information set;
for each area point location information in the area point location information set, determining the residence time of the target person in the area point location corresponding to the area point location information;
and mapping the personnel track corresponding to the target personnel movement track information to the local map area, and displaying the residence time of the area point location corresponding to the area point location information in the area point location information set in the local map area.
3. The method according to claim 2, wherein the performing personnel track fitting according to the position coordinates included in the position information sequence to generate the target personnel movement track information corresponding to the target person includes:
for each position information group in the position information sequence, performing the following separation position determining steps:
determining the time interval between the position location times included in the position information of the position information group to obtain a time difference value, wherein the position information group includes two adjacent pieces of position information;
in response to determining that the time difference value is greater than a preset difference value, determining separation position information according to the position information group;
grouping the position information sequence according to the obtained separation position information set to obtain a position information group sequence;
performing track fitting on each position information group in the position information group sequence to generate a sub-fitting track, so as to obtain a sub-fitting track sequence;
for each sub-fitting track group in the sub-fitting track sequence, performing the following track fusion processing steps:
determining the number of insertion points between the sub-fitting tracks in the sub-fitting track group, wherein the sub-fitting track group includes two adjacent sub-fitting tracks;
determining an insertion point position information set, wherein the number of pieces of insertion point position information in the insertion point position information set is the same as the number of insertion points;
fitting a connection track according to the insertion point position information set;
performing track connection on the sub-fitting tracks in the sub-fitting track group through the connection track to generate a connected track;
and performing track connection on each connected track in the obtained connected track set to generate the target personnel movement track information.
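The two core steps of the claim above, splitting the position sequence at large time gaps and bridging adjacent sub-tracks with interpolated insertion points, can be sketched as follows. This is a simplified illustration using linear interpolation for the connection track (the claim does not fix the fitting method); all names and the (time, x, y) tuple shape are assumptions:

```python
def split_by_time_gap(positions, max_gap):
    """Split a non-empty, time-ordered position sequence wherever the
    location-time difference between two adjacent points exceeds max_gap
    (the separation-position step). positions: list of (t, x, y)."""
    groups, current = [], [positions[0]]
    for prev, cur in zip(positions, positions[1:]):
        if cur[0] - prev[0] > max_gap:
            groups.append(current)
            current = []
        current.append(cur)
    groups.append(current)
    return groups

def connect(sub_a, sub_b, n_points):
    """Linearly interpolate n_points insertion points between the end of
    one sub-fitting track and the start of the next, yielding the points
    of a connection track."""
    (t0, x0, y0), (t1, x1, y1) = sub_a[-1], sub_b[0]
    return [(t0 + (t1 - t0) * k / (n_points + 1),
             x0 + (x1 - x0) * k / (n_points + 1),
             y0 + (y1 - y0) * k / (n_points + 1))
            for k in range(1, n_points + 1)]

points = [(0, 0.0, 0.0), (10, 1.0, 0.0), (100, 5.0, 0.0), (110, 6.0, 0.0)]
groups = split_by_time_gap(points, max_gap=60)
print(len(groups))                       # 2
print(connect(groups[0], groups[1], 1))  # [(55.0, 3.0, 0.0)]
```

In a fuller implementation, each group would first be smoothed by a curve-fitting step before the connection tracks are spliced in.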
4. The method of claim 3, wherein the determining the target video according to the position location times and the position coordinates included in the position information sequence comprises:
for the position coordinates included in each piece of position information in the position information sequence, determining an image acquisition device matched with the position coordinates, so as to generate image acquisition device position information;
performing de-duplication processing on the obtained set of image acquisition device position information to obtain a de-duplicated image acquisition device position information set;
for each piece of position information in the position information sequence, acquiring the sub-video captured by a target image acquisition device at the position location time included in the position information, wherein the target image acquisition device is the image acquisition device identified by the entry in the de-duplicated image acquisition device position information set that corresponds to the position information;
and performing video splicing on the sub-videos in the obtained sub-video set along the time dimension to obtain the target video.
5. The method of claim 4, wherein the performing person identification on the target video to generate candidate personnel movement track information so as to obtain a candidate personnel movement track information sequence comprises:
performing personnel detection on the target video to generate personnel detection information and obtain a personnel detection information set;
performing personnel information association on the personnel detection information in the personnel detection information set to generate a personnel detection information group set;
and for each personnel detection information group in the personnel detection information group set, generating candidate personnel movement track information according to the personnel detection information in the personnel detection information group.
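The personnel-information-association step above can be illustrated with a deliberately simple greedy nearest-neighbour scheme that links per-frame detections into candidate tracks; real systems typically use appearance features or re-identification, so this is only a stand-in under stated assumptions (2-D detection centroids, a maximum per-frame displacement, and all names being illustrative):

```python
import math

def associate_detections(frames, max_jump):
    """Greedily associate per-frame person detections into candidate tracks.

    frames: list of per-frame detection lists, each detection an (x, y)
    centroid; max_jump: largest allowed displacement between frames.
    Returns a list of tracks, each a list of (frame_idx, (x, y))."""
    tracks = []
    for idx, detections in enumerate(frames):
        unclaimed = list(detections)
        for track in tracks:
            last_idx, last_pos = track[-1]
            # Only extend tracks that were seen in the previous frame.
            if last_idx != idx - 1 or not unclaimed:
                continue
            best = min(unclaimed, key=lambda d: math.dist(d, last_pos))
            if math.dist(best, last_pos) <= max_jump:
                track.append((idx, best))
                unclaimed.remove(best)
        # Detections no existing track claimed start new candidate tracks.
        for det in unclaimed:
            tracks.append([(idx, det)])
    return tracks

frames = [[(0.0, 0.0), (5.0, 5.0)], [(0.5, 0.1), (5.2, 5.1)]]
tracks = associate_detections(frames, max_jump=1.0)
print(len(tracks))  # 2
```

Each resulting detection group corresponds to one candidate person, and its ordered centroids form that person's candidate movement track.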
6. An electronic device, comprising:
one or more processors;
a storage device having one or more programs stored thereon;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any of claims 1-5.
7. A computer-readable medium on which a computer program is stored, wherein the program, when executed by a processor, implements the method according to any one of claims 1-5.
CN202211661068.5A 2022-12-23 2022-12-23 Person accompanying track determining method and device, electronic equipment and readable medium Active CN115631212B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211661068.5A CN115631212B (en) 2022-12-23 2022-12-23 Person accompanying track determining method and device, electronic equipment and readable medium

Publications (2)

Publication Number Publication Date
CN115631212A (en) 2023-01-20
CN115631212B (en) 2023-03-28

Family

ID=84910119

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117062009B (en) * 2023-10-11 2024-01-23 北京艾瑞数智科技有限公司 Method, device, equipment and storage medium for judging accompanying track
CN117495063B (en) * 2024-01-03 2024-04-12 中关村科学城城市大脑股份有限公司 Police resource scheduling method, apparatus, electronic device and computer readable medium

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11164335B2 (en) * 2018-11-06 2021-11-02 International Business Machines Corporation Passenger travel route inferencing in a subway system
CN113761964A (en) * 2020-05-18 2021-12-07 深圳云天励飞技术有限公司 Method for determining persons in the same group and related products
CN112653990A (en) * 2020-09-18 2021-04-13 武汉爱迪科技股份有限公司 Screening algorithm and system for close contact personnel

Similar Documents

Publication Publication Date Title
CN115631212B (en) Person accompanying track determining method and device, electronic equipment and readable medium
CN108427939B (en) Model generation method and device
CN110213614B (en) Method and device for extracting key frame from video file
CN111784712B (en) Image processing method, device, equipment and computer readable medium
CN108510084B (en) Method and apparatus for generating information
CN110765280B (en) Address recognition method and device
US20210264198A1 (en) Positioning method and apparatus
CN112257582A (en) Foot posture determination method, device, equipment and computer readable medium
CN117290561B (en) Service state information feedback method, device, equipment and computer readable medium
CN112464039B (en) Tree-structured data display method and device, electronic equipment and medium
CN112183388A (en) Image processing method, apparatus, device and medium
CN109816670B (en) Method and apparatus for generating image segmentation model
CN111143496A (en) Method and device for determining target objects with similar tracks
WO2022052889A1 (en) Image recognition method and apparatus, electronic device, and computer-readable medium
CN111383337B (en) Method and device for identifying objects
CN111369624B (en) Positioning method and device
CN110084298B (en) Method and device for detecting image similarity
CN113255290A (en) Electric power project information display method, equipment and computer readable medium
CN113255819A (en) Method and apparatus for identifying information
CN111835917A (en) Method, device and equipment for showing activity range and computer readable medium
CN112699289A (en) House resource information aggregation display method and device, electronic equipment and computer readable medium
CN111931044A (en) Information display method and device and electronic equipment
CN118038193B (en) Visual display method and device for underground cable, electronic equipment and computer medium
CN116384945B (en) Project management method and system
CN113239943B (en) Three-dimensional component extraction and combination method and device based on component semantic graph

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant