WO2023087860A1 - Method and apparatus for generating a target trajectory, and electronic device and medium - Google Patents

Method and apparatus for generating a target trajectory, and electronic device and medium

Info

Publication number
WO2023087860A1
Authority
WO
WIPO (PCT)
Prior art keywords
target
fusion
image
trajectory
image frame
Prior art date
Application number
PCT/CN2022/117505
Other languages
English (en)
Chinese (zh)
Inventor
宋荣
刘晓东
Original Assignee
上海高德威智能交通系统有限公司
Priority date
Filing date
Publication date
Application filed by 上海高德威智能交通系统有限公司
Publication of WO2023087860A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments

Definitions

  • the present application relates to the technical field of image processing, and in particular to a method, device, electronic equipment and medium for generating a target trajectory.
  • in an application scene, multiple cameras are set up to capture movable targets in the scene.
  • for example, multiple cameras are usually set up at intersections and in other areas to capture surveillance video of targets, including vehicles, at the intersections.
  • similarly, multiple cameras may be set up in a venue to photograph animals in the venue.
  • the action trajectories of targets such as vehicles and animals can then be generated from the captured surveillance video, and it can be determined from the action trajectories whether a vehicle has committed violations such as running a red light, whether an animal is behaving abnormally, and so on.
  • in the related art, the method of generating the action trajectory of a target is mainly: detecting the target in the surveillance video, then constructing a target re-identification model, which is used to match the starting position of a specific target to be recognized in the other cameras, and finally obtaining the target's action trajectory through forward and backward analysis of the target trajectory.
  • the purpose of the embodiments of the present application is to provide a method, device, electronic device, and medium for generating target trajectories, so as to generate action trajectories of multiple targets.
  • the embodiment of the present application provides a method for generating a target trajectory, including:
  • for the image group collected by each image acquisition device, obtain the image position of each target in each image frame in the image group;
  • the world position of each identical target in each image frame in the image frame set is fused to obtain the fusion position of the target;
  • the image frame set is an image set composed of image frames whose acquisition times are synchronized;
  • the fusion positions of the target are associated to generate the fusion trajectory of the target.
  • the number of image acquisition devices is at least 3;
  • the world position of each identical target in each image frame in the set of image frames is fused to obtain the fused position of the target, including:
  • the world positions of the same target in the set of image frames are fused to obtain a fused position, and the fused position is used as a fused position of a fused target corresponding to the same target.
  • constructing a similarity matrix of each target in the two image frames includes:
  • a similarity matrix of each object in the two image frames is constructed.
  • for each fusion target, if there are at least two targets from the same image group among the multiple identical targets corresponding to the fusion target, remove, from the at least two targets, each target whose distance to the fusion target is not the minimum distance, and update the fusion position of the fusion target.
  • the method further includes:
  • correlating the fusion position of the target to generate the fusion track of the target including:
  • the existing trajectory is the trajectory formed by the fusion positions of targets whose acquisition times precede that of the current fusion position of the fusion target;
  • if the source of the current fusion position of the fusion target does not include a position from the same single-camera trajectory as the source of the fusion positions in any existing trajectory, and the fusion target is in the preset motion state, determine that the current fusion position of the fusion target is not associated with any existing trajectory;
  • if the source of the current fusion position of the fusion target does not include a position from the same single-camera trajectory as the source of the fusion positions in any existing trajectory, and the fusion target is not in the preset motion state, construct a correlation matrix between each fusion target in the image frame corresponding to the fusion target and the existing trajectories;
  • a fusion trajectory of the fusion target is generated based on all associated fusion positions of the fusion target.
  • the method further includes:
  • the existing trajectory is updated based on the current fusion position of the fusion target.
  • before the fusing, for each set of image frames, of the world position of each identical target in each image frame in the set of image frames to obtain the fused position of the target, the method further includes:
  • for any group of matching single-camera trajectories, respectively calculate the distances from the start position and the end position of the first single-camera trajectory in the group to each position of the second single-camera trajectory; and respectively calculate the distances from the start position and the end position of the second single-camera trajectory in the group to each position of the first single-camera trajectory;
  • from the distances from the start and end positions of the first single-camera trajectory in the group to each position of the second single-camera trajectory, and the distances from the start and end positions of the second single-camera trajectory in the group to each position of the first single-camera trajectory, select the shortest distance, and determine the image frames in which the two positions corresponding to the shortest distance are respectively located as primary synchronous image frames;
  • the embodiment of the present application also provides a device for generating a target trajectory, including:
  • the position acquisition module is used to obtain the image position of each target in each image frame in the image group for the image group collected by each image acquisition device;
  • a position conversion module configured to convert the image position into a position in a world coordinate system according to a preset conversion relationship, to obtain a world position corresponding to the image position;
  • the position fusion module is used to fuse the world position of each identical target in each image frame in the image frame set for each image frame set to obtain the fusion position of the target; wherein, the image frame set is collected an image set composed of time-synchronized image frames;
  • the trajectory generation module is configured to, for each target, correlate the fusion positions of the target according to the acquisition time sequence of the image frame set corresponding to the target, and generate the fusion trajectory of the target.
  • the number of image acquisition devices is at least 3;
  • the position fusion module includes:
  • the similarity matrix determination submodule is used for constructing the similarity matrix of each target in the two image frames according to the world position of each target in the image frame set for any two image frames in the image frame set;
  • the target determination submodule is used to solve the similarity matrix according to the Hungarian algorithm, and if the calculated similarity between two targets at corresponding positions in the similarity matrix and located in different image frames is greater than a preset similarity threshold, to determine that the two targets are the same target;
  • the position fusion sub-module is configured to fuse the world positions of the same target in the image frame set to obtain a fused position, and use the fused position as the fused position of the fused target corresponding to the same target.
  • the similarity matrix determination submodule is specifically configured to determine the velocity direction of the target based on the world position of each target in the image frame set; for any two image frames in the image frame set, Based on the world position and velocity direction of each object in the two image frames, a similarity matrix of each object in the two image frames is constructed.
  • the device also includes:
  • the fused position update module is used, for each fused target, if there are at least two targets from the same image group among the multiple identical targets corresponding to the fused target, to remove, from the at least two targets, each target whose distance to the fused target is not the minimum distance, and to update the fused position of the fused target.
  • the device also includes:
  • a single-camera trajectory generating module, configured to generate, for each image group, a single-camera trajectory of each target based on multiple world positions of the target in the image group;
  • the trajectory generation module is specifically used, for each fusion target, to determine that the current fusion position of the fusion target is associated with an existing trajectory if the source of the current fusion position and the source of the fusion positions in the existing trajectory include positions from the same single-camera trajectory; wherein the existing trajectory is the trajectory formed by the fusion positions of targets whose acquisition times precede that of the current fusion position of the fusion target;
  • if the source of the current fusion position of the fusion target does not include a position from the same single-camera trajectory as the source of the fusion positions in any existing trajectory, and the fusion target is not in the preset motion state, a correlation matrix between each fusion target in the image frame corresponding to the fusion target and the existing trajectories is constructed;
  • the correlation matrix is solved, and if the correlation between a fusion target and an existing trajectory at corresponding positions in the correlation matrix is greater than the preset correlation threshold, it is determined that the current fusion position of the fusion target is associated with that existing trajectory; when all fusion positions corresponding to the fusion target have been associated, the fusion trajectory of the fusion target is generated based on all associated fusion positions of the fusion target.
  • the device also includes:
  • a track updating module configured to update the existing track based on the current fusion position of the fusion target.
  • the device also includes:
  • a synchronization time determination module, configured to: for each image group, generate a single-camera trajectory of each target based on multiple world positions of the target in the image group; match the single-camera trajectories in any two image groups; for any group of matching single-camera trajectories, respectively calculate the distances from the start position and the end position of the first single-camera trajectory in the group to each position of the second single-camera trajectory, and respectively calculate the distances from the start position and the end position of the second single-camera trajectory in the group to each position of the first single-camera trajectory; if there is a position in the second single-camera trajectory whose distance to the start position or end position of the first single-camera trajectory is less than a preset distance threshold, and there is a position in the first single-camera trajectory whose distance to the start position or end position of the second single-camera trajectory is less than the preset distance threshold, retain the group of matching single-camera trajectories, otherwise delete the group of matching single-camera trajectories; and, for each retained group of matching single-camera trajectories, select the shortest of the distances from the start and end positions of the first single-camera trajectory in the group to each position of the second single-camera trajectory and the distances from the start and end positions of the second single-camera trajectory in the group to each position of the first single-camera trajectory, and determine the image frames in which the two positions corresponding to the shortest distance are located as primary synchronous image frames.
  • an embodiment of the present application provides an electronic device, including a processor, a communication interface, a memory, and a communication bus, wherein the processor, the communication interface, and the memory complete communication with each other through the communication bus;
  • the processor is configured to implement the method steps described in any one of the above-mentioned first aspects when executing the program stored in the memory.
  • an embodiment of the present application provides a computer-readable storage medium, where a computer program is stored in the computer-readable storage medium, and when the computer program is executed by a processor, the method steps described in any one of the above first aspects are implemented.
  • an embodiment of the present application provides a computer program product containing instructions, which, when the computer program product is run on a computer, cause the computer to execute the method steps described in any one of the above first aspects.
  • for the image group collected by each image acquisition device, the image position of each target in each image frame in the image group is obtained; according to a preset conversion relationship, the image position is converted into a position in the world coordinate system to obtain the world position corresponding to the image position; for each image frame set, the world positions of each identical target in each image frame in the image frame set are fused to obtain the fusion position of the target; and, for each target, the fusion positions of the target are associated according to the acquisition time sequence of the image frame sets corresponding to the target to generate the fusion trajectory of the target.
  • the method provided in the embodiment of the present application can generate trajectories of multiple targets, and can meet the needs of generating trajectories of multiple complex targets in scenarios such as urban traffic.
  • any product or method of the present application does not necessarily need to achieve all the above-mentioned advantages at the same time.
  • FIG. 1 is a flow chart of a method for generating a target trajectory provided in an embodiment of the present application
  • Fig. 2 is a flow chart of the automatic correction method provided by the embodiment of the present application.
  • FIG. 3 is a flow chart of a position fusion method provided in an embodiment of the present application.
  • FIG. 4 is a flow chart of generating a fusion trajectory of a target provided by an embodiment of the present application
  • FIG. 5 is a schematic flow chart of a trajectory correction method provided in an embodiment of the present application.
  • FIG. 6 is a schematic structural diagram of a device for generating a target trajectory provided by an embodiment of the present application.
  • FIG. 7 is another schematic structural diagram of a device for generating a target trajectory provided by an embodiment of the present application.
  • FIG. 8 is another structural schematic diagram of a device for generating a target trajectory provided by an embodiment of the present application.
  • FIG. 9 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
  • the embodiments of the present application provide a method, device, electronic device, storage medium, and computer program product for generating target trajectories .
  • the method for generating the target trajectory provided by the embodiment of the present application will firstly be introduced below.
  • the method for generating a target trajectory provided in the embodiment of the present application can be applied to any electronic device with an image processing function, and is not specifically limited here.
  • Fig. 1 is a flow chart of the method for generating a target trajectory provided by the embodiment of the present application. As shown in Fig. 1, the method comprises:
  • the image frame set is an image set composed of image frames whose collection time is synchronized.
  • for the image group collected by each image acquisition device, the image position of each target in each image frame in the image group is obtained; according to a preset conversion relationship, the image position is converted into a position in the world coordinate system to obtain the world position corresponding to the image position; for each image frame set, the world positions of each identical target in each image frame in the image frame set are fused to obtain the fusion position of the target; and, for each target, the fusion positions of the target are associated according to the acquisition time sequence of the image frame sets corresponding to the target to generate the fusion trajectory of the target. That is to say, the method provided in the embodiment of the present application can generate trajectories of multiple targets, and can meet the trajectory generation needs of multiple complex targets in scenarios such as urban traffic.
  • each image acquisition device may be arranged in a specific application scene, each image acquisition device has a different position and angle, and each image acquisition device may collect images from different angles in the application scene.
  • Each image acquisition device can acquire multiple frames of images as an image group.
  • Each frame of image may include one or more targets in the application scene, and the targets may specifically be movable targets such as animals and vehicles in the application scene.
  • the image acquisition device may specifically be a camera, a video recorder, and the like.
  • the image position of a target in an image captured by an image acquisition device is not consistent with the corresponding position of the target in the real world, and application scenes such as urban traffic need to display the trajectory of the target on a high-precision map to achieve precise positioning of the target on the map. Therefore, it is necessary to convert the image position of each target in each image frame into a position in the world coordinate system to obtain the target's world position.
  • the conversion relationship between the image position of a target captured by an image acquisition device and its world position in the world coordinate system can be determined in advance; specifically, an image frame captured by the image acquisition device can be obtained, and a set of specified targets can be selected from it.
  • the world position coordinates of a set of specified targets in the world coordinate system can be obtained through actual distance measurement or from a high-precision map.
  • the specified target can select targets with obvious characteristics such as lane line stop points, guiding arrows, and road fixed facilities.
  • according to the world position coordinates, in the world coordinate system, of multiple specified targets corresponding to the selected image acquisition device and the image position coordinates of those specified targets in the image frame, the following formula can be used to calculate the coordinate transformation matrix corresponding to the image acquisition device:

    \begin{bmatrix} x' \\ y' \\ w' \end{bmatrix} = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix} \begin{bmatrix} u \\ v \\ 1 \end{bmatrix}

  • where a_{11}-a_{33} are the transformation parameters in the coordinate transformation matrix; x', y' and w' are the coordinates of the specified target's world position; and u and v are, respectively, the horizontal and vertical coordinates in the image position coordinates of the specified target.
  • a_{33} is 1, and w' is 1, because the height direction of the plane is normalized before and after the position coordinate transformation.
  • each pair of image coordinates and world position coordinates contributes equations for solving the 8 parameters a_{11}, a_{12}, a_{13}, a_{21}, a_{22}, a_{23}, a_{31}, a_{32}; at least 4 pairs of image coordinates and world position coordinates are required to solve them by least squares.
  • that is, the coordinate transformation matrix corresponding to the image acquisition device can be determined from this system of linear equations.
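As an illustration of this least-squares setup, the following Python sketch builds the linear system for the 8 parameters from point correspondences. It is a minimal sketch under assumed names (fit_homography, image_pts, world_pts), not code from the patent:

```python
import numpy as np

def fit_homography(image_pts, world_pts):
    """Estimate the 3x3 coordinate transformation matrix (a33 fixed to 1)
    from >= 4 pairs of image coordinates (u, v) and world coordinates
    (x', y') by least squares."""
    A, b = [], []
    for (u, v), (x, y) in zip(image_pts, world_pts):
        # x' = a11*u + a12*v + a13 - a31*u*x' - a32*v*x'
        A.append([u, v, 1, 0, 0, 0, -u * x, -v * x])
        b.append(x)
        # y' = a21*u + a22*v + a23 - a31*u*y' - a32*v*y'
        A.append([0, 0, 0, u, v, 1, -u * y, -v * y])
        b.append(y)
    params, *_ = np.linalg.lstsq(np.asarray(A, float),
                                 np.asarray(b, float), rcond=None)
    return np.append(params, 1.0).reshape(3, 3)  # a33 = 1
```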
  • since the image acquisition device may have a certain distortion, the coordinate transformation matrix needs to be corrected in combination with the parameters of the image acquisition device; specifically, the coordinate transformation matrix calculated based on a set of specified targets can be multiplied by the parameters of the image acquisition device to obtain a new coordinate transformation matrix.
  • alternatively, the coordinate transformation matrices corresponding to multiple groups of specified targets in the image frame captured by the image acquisition device can be calculated separately, and the matrix corresponding to the average of these matrices can be used as the coordinate transformation matrix of the image acquisition device. In this way, because the coordinate transformation matrix of the image acquisition device combines the information of the coordinate transformation matrices corresponding to multiple groups of specified targets, it is more complete and can more accurately represent the mapping relationship between image position coordinates and world position coordinates.
  • the coordinate transformation matrix can be used as a preset conversion relationship, and the image position coordinates of each target in the image frame collected by the image acquisition device can be Convert to world position coordinates in the world coordinate system.
  • specifically, the following formula can be used to convert an image position into a position in the world coordinate system to obtain the world position corresponding to the image position:

    \begin{bmatrix} x' \\ y' \\ w' \end{bmatrix} = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix} \begin{bmatrix} u \\ v \\ 1 \end{bmatrix}

  • where a_{11}-a_{33} are the transformation parameters in the coordinate transformation matrix; x', y' and w' are, respectively, the horizontal, vertical and normalization coordinates of the world position; and u and v are, respectively, the horizontal and vertical coordinates of the image position.
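A hypothetical helper applying this conversion could look as follows; the division by w' follows the normalization described above, and the function name is our own:

```python
import numpy as np

def image_to_world(H, u, v):
    """Multiply the image position (u, v, 1) by the coordinate transformation
    matrix H and normalize by w' to obtain the world position (x', y')."""
    x, y, w = H @ np.array([u, v, 1.0])
    return x / w, y / w
```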
  • time synchronization correction can be performed on each image acquisition device by manual correction, so as to ensure that each image acquisition device starts to acquire images at the same time, and the frequency of acquiring images is the same.
  • the set of image frames collected by different image acquisition devices at the same time can also be determined as an image frame set, so as to ensure that in one image frame set The acquisition times of the included image frames are synchronized.
  • FIG. 2 is a flow chart of the automatic correction method provided by the embodiment of the present application. As shown in FIG. 2, the correction method may include:
  • Each image group is composed of multiple image frames collected by the same image acquisition device.
  • steps A1-A2 can be used to generate the single-camera trajectory of the target:
  • Step A1: for each image group, each target in the first image frame of the image group can be used to initialize a single-camera trajectory corresponding to that target, and a single-camera trajectory identifier is generated for each single-camera trajectory.
  • for example, if the first image frame includes target 1, target 2 and target 3, corresponding single-camera trajectories can be generated for target 1, target 2 and target 3 respectively, and identifiers of the single-camera trajectories can be generated respectively, for example, track 1, track 2 and track 3.
  • Step A2: for each target in the next image frame in the image group, the minimum distance between the target and each single-camera trajectory in the previous image frame of the image group can be calculated according to the world position of the target; if the minimum distance is less than a preset threshold, it is determined that the target in the image frame is the same as the target in the corresponding single-camera trajectory, and the world position of the target is used as a track point of that single-camera trajectory to update it.
  • specifically, the minimum distance between the target and each single-camera trajectory in the previous image frame of the image group can be calculated. If the minimum distance is less than the preset threshold, it means that the world position of the target when the next image frame was acquired conforms to the motion law represented by the single-camera trajectory corresponding to the minimum distance; it can then be determined that the target in the image frame is the same as the corresponding target in that single-camera trajectory, and the world position of the target is used as a track point of the single-camera trajectory to update it.
  • the preset threshold can be set according to the actual application. For example, if the target is a vehicle, the preset threshold can be determined according to the average speed of vehicles at the intersection and the acquisition time interval of two adjacent video frames in the image group, which is not specifically limited here.
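The following Python sketch renders steps A1-A2 as a nearest-neighbour track update; it assumes 2-D world positions and integer track identifiers, and matches each target against the last recorded point of each trajectory, which is one plausible reading of the step:

```python
import numpy as np

def update_single_camera_tracks(tracks, frame_targets, dist_threshold):
    """tracks maps track_id -> list of world positions. Each target in the
    new frame extends the nearest track if the minimum distance is below
    the preset threshold; otherwise a new track is started for it."""
    next_id = max(tracks, default=-1) + 1
    for pos in frame_targets:  # pos is an (x, y) world position
        if tracks:
            # distance from the target to the last point of every track
            dists = {tid: np.linalg.norm(np.subtract(pos, pts[-1]))
                     for tid, pts in tracks.items()}
            best = min(dists, key=dists.get)
            if dists[best] < dist_threshold:
                tracks[best].append(tuple(pos))  # same target: extend track
                continue
        tracks[next_id] = [tuple(pos)]  # unmatched: start a new track
        next_id += 1
    return tracks
```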
  • single-camera trajectories of targets may also be generated using multi-target tracking algorithms such as DeepSort or FairMOT.
  • in a possible implementation, the single-camera trajectories in any two image groups can be matched according to the targets' vehicle identifiers, and the single-camera trajectories of targets with the same vehicle identifier can be regarded as a group of matched single-camera trajectories.
  • the vehicle identification is used to uniquely represent the vehicle target.
  • the matching degree between the single-camera trajectories in any two image groups can also be calculated according to the distances between the single-camera trajectories, and the pair of single-camera trajectories with the highest matching degree can be regarded as a group of matched single-camera trajectories.
  • the two image groups are image group A formed by image frames collected by image acquisition device 1 and image group B formed by image frames collected by image acquisition device 2.
  • for each single-camera trajectory in image group A, the cosine similarity based on the distances between that trajectory and each single-camera trajectory in image group B can be calculated as the matching degree; that is, the similarity between the single-camera trajectory and each single-camera trajectory in image group B is used as the matching degree. The single-camera trajectory in image group B with the highest matching degree is then selected as the preliminary matching single-camera trajectory. If the matching degree between the single-camera trajectory and its preliminary matching single-camera trajectory reaches a preset matching degree threshold, the single-camera trajectory and its preliminary matching single-camera trajectory can be determined to be a group of matched single-camera trajectories.
  • the preset matching degree threshold may be set according to an actual application scenario, which is not specifically limited here.
  • after the preset matching degree threshold is reached, it may further be judged whether the lane number of the lane where the single-camera trajectory is located is consistent with the lane number of the lane where its preliminary matching single-camera trajectory is located; if consistent, the single-camera trajectory and its preliminary matching single-camera trajectory can be determined as a group of matched single-camera trajectories.
  • a set of matched single-camera trajectories includes a first single-camera trajectory and a second single-camera trajectory from different sets of images.
  • S203: for any group of matching single-camera trajectories, respectively calculate the distances from the start position and the end position of the first single-camera trajectory in the group to each position of the second single-camera trajectory; and respectively calculate the distances from the start position and the end position of the second single-camera trajectory in the group to each position of the first single-camera trajectory.
  • single-camera trajectory A in image group 1 matches single-camera trajectory B in image group 2, wherein single-camera trajectory A is the first single-camera trajectory, and single-camera trajectory B is the second single-camera trajectory.
  • specifically, the distance between the start position of single-camera trajectory A and each position of single-camera trajectory B can be calculated, the distance between the end position of single-camera trajectory A and each position of single-camera trajectory B can be calculated, the distance between the start position of single-camera trajectory B and each position of single-camera trajectory A can be calculated, and the distance between the end position of single-camera trajectory B and each position of single-camera trajectory A can be calculated.
  • the preset distance threshold may be set to 1 meter or 2 meters.
  • for example, if the preset distance threshold is set to 1 meter, the distance between the third position in single-camera trajectory B and the start position of single-camera trajectory A is less than 1 meter, and the distance between the fourth position in single-camera trajectory A and the start position of single-camera trajectory B is less than 1 meter, then the matching single-camera trajectory A and single-camera trajectory B are retained.
  • if the distance between every position in single-camera trajectory B and the start position of single-camera trajectory A is not less than 1 meter, the distance between every position in single-camera trajectory B and the end position of single-camera trajectory A is not less than 1 meter, the distance between every position in single-camera trajectory A and the start position of single-camera trajectory B is not less than 1 meter, and the distance between every position in single-camera trajectory A and the end position of single-camera trajectory B is not less than 1 meter, then the matching single-camera trajectory A and single-camera trajectory B are deleted.
  • if the matching single-camera trajectory A and single-camera trajectory B are retained, then among the distances from the start and end positions of single-camera trajectory A to each position of single-camera trajectory B and the distances from the start and end positions of single-camera trajectory B to each position of single-camera trajectory A, if the distance from the start position of single-camera trajectory A to the third position of single-camera trajectory B is the shortest, the image frame where the start position of single-camera trajectory A is located and the image frame where the third position of single-camera trajectory B is located can be used as primary synchronous image frames.
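A minimal sketch of this endpoint-distance filtering and primary-synchronous-frame selection, under our own data layout (each trajectory as a list of (frame_index, position) pairs), might be:

```python
import numpy as np

def primary_sync_frames(traj_a, traj_b, dist_threshold=1.0):
    """Compute distances from the start/end positions of each trajectory to
    every position of the other; discard the matched pair if neither side
    has a distance below the threshold, otherwise return the frame indices
    of the two closest positions as the primary synchronous image frames."""
    def endpoint_dists(src, dst):
        out = []
        for si in (0, len(src) - 1):          # start and end of src
            for fj, pj in dst:
                d = float(np.linalg.norm(np.subtract(src[si][1], pj)))
                out.append((d, src[si][0], fj))  # (dist, src frame, dst frame)
        return out
    d_ab = endpoint_dists(traj_a, traj_b)     # A endpoints vs. B positions
    d_ba = endpoint_dists(traj_b, traj_a)     # B endpoints vs. A positions
    if min(d for d, _, _ in d_ab) >= dist_threshold or \
       min(d for d, _, _ in d_ba) >= dist_threshold:
        return None                           # delete this matched pair
    candidates = d_ab + [(d, fa, fb) for d, fb, fa in d_ba]
    d, frame_a, frame_b = min(candidates)     # overall shortest distance
    return frame_a, frame_b
```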
  • for example, image frame A1 in image group 1 and image frame B1 in image group 2 are primary synchronous image frames, and image frame A2 in image group 1 and image frame B2 in image group 2 are primary synchronous image frames, where the acquisition time of image frame A1 is t_A1, the acquisition time of image frame A2 is t_A2, the acquisition time of image frame B1 is t_B1, and the acquisition time of image frame B2 is t_B2.
  • the average time t_average = (t_A1 + t_A2 + t_B1 + t_B2)/4 can be calculated.
  • the image frame A3 whose acquisition time is t_average in image group 1 and the image frame B3 whose acquisition time is t_average in image group 2 can be used as the final synchronous image frames of the two image groups.
  • the image frame with the smallest difference between its acquisition time and t_average in an image group may be determined as the final synchronous image frame of that image group.
  • an image frame that is a certain number of frames after image frame A3 in image group 1 is synchronized in acquisition time with the image frame that is the same number of frames after image frame B3 in image group 2; that is to say, since the acquisition times of image frame A3 and image frame B3 are synchronized, the acquisition times of each pair of corresponding image frames acquired after them are also synchronized.
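The averaging step can be sketched as follows; the mapping of frame indices to acquisition times is an assumed data layout:

```python
def final_sync_frames(times_1, times_2, t_a1, t_a2, t_b1, t_b2):
    """Average the acquisition times of the primary synchronous frames, then
    pick in each image group the frame whose acquisition time is closest to
    the average; times_1/times_2 map frame index -> acquisition time."""
    t_average = (t_a1 + t_a2 + t_b1 + t_b2) / 4.0
    nearest = lambda times: min(times, key=lambda i: abs(times[i] - t_average))
    return nearest(times_1), nearest(times_2)  # final sync frames (A3, B3)
```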
  • the image frames whose acquisition time is synchronized between each image group can be automatically determined.
  • the frame numbers of time-synchronized image frames can also be calculated separately for each group of matching single-camera trajectories, and the average of the frame numbers of the time-synchronized image frames determined by multiple groups of matching single-camera trajectories is used as the time-synchronized frame.
  • for example, single-camera trajectory A in image group 1 matches single-camera trajectory B in image group 2, and based on single-camera trajectory A and single-camera trajectory B it is determined that the acquisition time of the first image frame in image group 1 is synchronized with that of the third image frame in image group 2; single-camera trajectory C in image group 1 matches single-camera trajectory D in image group 2, and based on single-camera trajectory C and single-camera trajectory D it is determined that the acquisition time of the first image frame in image group 1 is synchronized with that of the fifth image frame in image group 2. The average of the matched frame numbers can then be taken, so that finally the acquisition time of the first image frame in image group 1 is synchronized with that of the fourth image frame in image group 2.
  • the method can generate trajectories of multiple targets, and can adapt to the trajectory generation requirements of multiple complex targets in scenarios such as urban traffic.
  • the method provided by the embodiment of the present application provides more reliable spatio-temporal information for the trajectory fusion of the target by synchronizing the image frames collected by multiple image collection devices.
  • the number of image acquisition devices may be 2, 3 or more, which is not specifically limited here.
  • FIG. 3 is a flowchart of a position fusion method provided in the embodiment of the present application.
  • Said merging the world position of each same target in each image frame in the set of image frames to obtain the fused position of the target may include:
  • Steps B1-B2 can be taken as follows:
  • Step B1: based on the world position of each target in the image frame set, determine the velocity direction of the target.
  • specifically, the position difference between the world position corresponding to the target's image position in the current image frame and the world position corresponding to the target's image position in the previous image frame can be calculated, that is, the displacement of the target in the acquisition time interval between the current image frame and the previous image frame; the ratio of this position difference to the acquisition time difference of the two image frames is determined as the velocity of the target, and the sign of the velocity indicates the target's velocity direction.
  • Step B2: for any two image frames in the set of image frames, based on the world position and velocity direction of each target in the two image frames, construct a similarity matrix of the targets in the two image frames.
  • specifically, the targets captured by any two image acquisition devices can first be matched to determine which targets captured by the two image acquisition devices are the same target.
  • take any two image frames in the image frame set, for example image frame A and image frame B, where image frame A includes m targets and image frame B includes n targets.
  • according to the world position of each target in image frame A and image frame B and the velocity direction of each target in image frame A and image frame B, the velocity vector of each target in the two frames can be calculated; the cosine similarity between the velocity vectors of the m targets in image frame A and the n targets in image frame B is then calculated as the similarity, and the similarities are arranged in an m × n matrix to obtain the m × n similarity matrix.
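Steps B1-B2 can be sketched in Python as below; the function names and the 2-D velocity representation are assumptions for illustration:

```python
import numpy as np

def velocity(pos_curr, pos_prev, dt):
    """Step B1: the displacement between consecutive world positions divided
    by the acquisition time difference gives the target's velocity vector."""
    return (np.asarray(pos_curr, float) - np.asarray(pos_prev, float)) / dt

def velocity_similarity_matrix(vels_a, vels_b):
    """Step B2: entry (i, j) is the cosine similarity between the velocity
    vector of target i in frame A and target j in frame B (m x n matrix)."""
    a, b = np.asarray(vels_a, float), np.asarray(vels_b, float)
    norms = np.outer(np.linalg.norm(a, axis=1), np.linalg.norm(b, axis=1))
    return (a @ b.T) / np.maximum(norms, 1e-12)  # guard against zero velocity
```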
  • the target type can be animal, vehicle or sign, etc.
  • the appearance feature of the target can be the vehicle logo of the vehicle and the appearance feature of the animal
  • the posture feature of the target can be the world position, speed and speed direction of the target, etc.
  • the vehicle identification of the target can also be obtained, and the similarity between any two targets located in different image frames can be calculated according to the vehicle identification.
  • the Hungarian algorithm can be used to solve the similarity matrix to obtain the one-to-one matching relationships between the targets. For example, if image frame A1 in the image frame set includes 5 targets a1, a2, a3, a4 and a5, and image frame B1 includes 3 targets b1, b2 and b3, the similarity matrix of the targets in image frame A1 and image frame B1 is a 5 × 3 matrix, and the Hungarian algorithm can be used to solve it to obtain the one-to-one matching relationships: a1 matches b2, a2 matches b1, and a4 matches b3.
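Using SciPy's implementation of the Hungarian algorithm, the example above can be reproduced as follows; the similarity values and the threshold of 0.5 are invented purely for illustration:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# 5 x 3 similarity matrix for targets a1..a5 vs. b1..b3
sim = np.array([[0.1, 0.9, 0.2],
                [0.8, 0.3, 0.1],
                [0.2, 0.1, 0.3],
                [0.1, 0.2, 0.7],
                [0.3, 0.1, 0.2]])
rows, cols = linear_sum_assignment(-sim)  # negate: Hungarian minimizes cost
threshold = 0.5                           # assumed preset similarity threshold
same = [(f"a{r+1}", f"b{c+1}") for r, c in zip(rows, cols)
        if sim[r, c] > threshold]
print(same)  # [('a1', 'b2'), ('a2', 'b1'), ('a4', 'b3')]
```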
  • in addition, for each image group, the single-camera trajectory of each target can be generated based on the target's multiple world positions in the image group; specifically, the above steps A1-A2 can be used to generate the single-camera trajectory of the target, which will not be repeated here.
  • the similarity between the two targets can be directly set to 1. For example, if the image frame A1 in the image frame set includes 5 objects: a1, a2, a3, a4, and a5, and the image frame B1 includes 3 objects: b1, b2, and b3.
  • the target in the single-camera trajectory to which a1 belongs matches the target in the single-camera trajectory to which b2 belongs in the previous image frame of image frame B1 , then the similarity between target a1 and target b2 can be directly set to 1.
  • the calculation shows that a1 matches b2, a2 matches b1, and a4 matches b3; that is, a1 and b2, a2 and b1, and a4 and b3 are each at corresponding positions in the similarity matrix. It is then judged whether the similarity between a1 and b2, the similarity between a2 and b1, and the similarity between a4 and b3 are each greater than the preset similarity threshold.
  • if the similarity between a1 and b2 is greater than the preset similarity threshold, the similarity between a2 and b1 is not greater than the preset similarity threshold, and the similarity between a4 and b3 is greater than the preset similarity threshold, then a1 and b2 are determined to be the same target, a4 and b3 are determined to be the same target, and a2 and b1 are not the same target.
  • the preset similarity threshold may be set according to an actual application scenario, which is not specifically limited here.
  • the average value of the world coordinates of the world positions of the same target in the image frame set can be calculated, and the position represented by the average value can be used as the fused position, and the fused position can be used as the corresponding position of the same target The fusion position of the fusion target.
  • the matching results of any two targets located in different image frames in the image frame set are integrated to obtain the fusion positions of all fusion targets corresponding to the same target in the image frame set.
  • in order to ensure that each fusion target matches at most one target in the image frame collected by each image acquisition device, that is, that each target collected by an image acquisition device appears in only one fusion target, the method further includes: for each fusion target, if there are at least two targets from the same image group among the multiple identical targets corresponding to the fusion target, removing, from the at least two targets, each target whose distance to the fusion target is not the minimum distance, and updating the fusion position of the fusion target.
  • specifically, the distances between the at least two targets and the fusion target can be calculated; the target with the smallest distance to the fusion target is kept and the other targets are removed; then the average of the world coordinates of the world positions of the remaining identical targets is calculated and used as the updated fusion position of the fusion target.
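A minimal sketch of this per-image-group deduplication and fusion-position update, under an assumed mapping from image group to candidate world positions, could be:

```python
import numpy as np

def update_fusion_position(fused_pos, sources):
    """`sources` maps image_group_id -> list of world positions of the
    identical targets fused into this fusion target. Keep, per image group,
    only the target closest to the current fused position, then recompute
    the fused position as the average of the remaining world positions."""
    kept = [min(positions,
                key=lambda p: np.linalg.norm(np.subtract(p, fused_pos)))
            for positions in sources.values()]
    return np.mean(np.asarray(kept, float), axis=0)  # updated fusion position
```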
  • the features of each fused object can be updated.
  • the features of the fused object may include: the world coordinates of the fused position of the fused object, appearance features, object type, vehicle identification, and the like.
  • the world coordinates of the fusion target are the average of the world coordinates of the targets, in each image acquisition device, from which it is fused; the target type and vehicle identifier of the fusion target adopt those of the target closest to the fusion target among the source targets;
  • the appearance feature of the fused object is the average value of the appearance features of the objects in each image acquisition device from which it is fused.
  • the method can generate trajectories of multiple targets, and can adapt to the trajectory generation requirements of multiple complex targets in scenarios such as urban traffic.
  • the method provided by the embodiment of the present application provides more reliable spatio-temporal information for the trajectory fusion of the target by synchronizing the image frames collected by multiple image collection devices.
  • combining world coordinates, vehicle identification, movement speed, appearance and other features to achieve position association makes the association results more reliable and widely applicable, not limited to specific types of targets.
  • for each image group, a single-camera trajectory of each target in the image group is generated based on the multiple world positions of the target.
  • the above-mentioned steps A1-A2 can be used to generate the single-camera trajectory of the target, which will not be repeated here.
  • a single-camera trajectory identifier can also be generated for each single-camera trajectory.
  • Fig. 4 is a flow chart of generating the fusion trajectory of a target provided by the embodiment of the present application. As shown in Fig. 4, for each target, the steps of associating the fusion positions of the target according to the acquisition time sequence of the image frame sets corresponding to the target and generating the fusion trajectory of the target may specifically include:
  • the existing trajectories are trajectories formed by associating the fusion positions of targets whose acquisition times precede that of the current fusion position of the fusion target; that is to say, each target before the current fusion position of the fusion target is a target whose corresponding image frame was acquired before the image frame corresponding to the fusion target.
  • if the current fusion position of the fusion target is obtained by fusing the group of image frames with the earliest acquisition time among the groups of synchronous image frames, the current fusion position of the fusion target can be directly used as an existing trajectory, and each such trajectory is assigned a fusion trajectory identifier.
  • for example, the sources of the fusion positions in existing trajectory A include: positions in single-camera trajectory 1 of image acquisition device 1, in single-camera trajectory 2 of image acquisition device 2, in single-camera trajectory 3 of image acquisition device 3, and in single-camera trajectory 4 of image acquisition device 4.
  • the source of the current fusion position of fusion target a includes: positions in single-camera trajectory 1 of image acquisition device 1, in single-camera trajectory 5 of image acquisition device 2, in single-camera trajectory 6 of image acquisition device 3, and in single-camera trajectory 7 of image acquisition device 4. Because the source of existing trajectory A and the source of the current fusion position of fusion target a both include positions in single-camera trajectory 1 of image acquisition device 1, it can be directly determined that the current fusion position of the fusion target is associated with the existing trajectory, and existing trajectory A can be updated based on the current fusion position of fusion target a, that is, the current fusion position of fusion target a is used as a new trajectory point of existing trajectory A.
  • the current fusion position of the fusion target and the source of the fusion position in the existing trajectory can be determined according to the single-camera trajectory identification and the fusion trajectory identification of the single-camera trajectory.
  • the preset motion state is a state in which the motion state of the fusion target is leaving the intersection.
  • if the single-camera trajectory identifier of an existing trajectory's source is in a lost state and the existing trajectory is in the state of moving away from the intersection, the existing trajectory is prohibited from being associated with any fusion target; if the single-camera trajectory identifier of an existing trajectory's source is in a cancelled state and the existing trajectory is in the state of moving away from the intersection, the existing trajectory is directly cancelled, that is, deleted.
  • in this case, the flag of using the single-camera trajectory can be set to 0.
  • a flag value of 0 means that it is not necessary to determine the existing trajectory associated with the current fusion position of the fusion target according to the single-camera trajectory; a flag value of 1 means that the existing trajectory associated with the current fusion position of the fusion target needs to be determined according to whether the source of the fusion position and the source of the existing trajectory include the same single-camera trajectory.
  • the similarity based on the world-coordinate distance between the fusion target and the existing trajectory can be calculated, and a correlation matrix between each fusion target in the image frame corresponding to the fusion target and the existing trajectories can be constructed.
  • Local correlation processing and global correlation processing between the fusion target and the existing trajectory are performed through the correlation degree matrix between each fusion target and the existing track in the image frame corresponding to the fusion target.
  • the local association processing is specifically: based on the association matrix, the Hungarian algorithm can be used to calculate the association result of the fusion target and the existing trajectory.
  • the global association processing is specifically: based on the correlation matrix, the Hungarian algorithm can be used to calculate the association result between the fusion target and the existing trajectories whose source single-camera trajectory identifiers are in a lost state.
  • after the association, a new fusion trajectory identifier can be assigned to the matched fusion target, and its trajectory features can be updated; for an unmatched fusion target, if the unmatched fusion target meets the trajectory creation condition, a new trajectory is created for the fusion target as an existing trajectory; for an existing trajectory that does not match any fusion target, the existing trajectory is set to a lost state.
  • the unmatched fusion target is obtained by fusion of a group of image frames with the earliest acquisition time in each group of synchronous image frames, it is determined that the fusion target satisfies the trajectory creation condition.
  • the preset correlation degree threshold may be set according to actual application conditions, and is not specifically limited here.
  • the Hungarian algorithm can be used to solve the correlation matrix to obtain the one-to-one matching relationship between each fusion target in the image frame corresponding to the fusion target and the existing trajectory.
  • for example, if the image frame corresponding to the fusion target contains three fusion targets, including c1 and c2, and the existing trajectories include existing trajectory A and existing trajectory B, the correlation matrix is a 3 × 2 matrix, and the Hungarian algorithm can be used to solve the correlation matrix to obtain the one-to-one matching relationships between the fusion targets in the image frame and the existing trajectories: c1 matches existing trajectory B, and c2 matches existing trajectory A.
  • the degree of association between c1 and the existing trajectory B is greater than the preset association degree threshold, it may be determined that the current fusion position of the fusion target c1 is associated with the existing trajectory B. If the degree of association between c2 and the existing trajectory A is not greater than the preset association degree threshold, it can be determined that the current fusion position of the fusion target c2 is not associated with the existing trajectory A.
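The correlation-matrix association can be sketched as follows; the distance-based similarity form 1/(1+d) is an assumption, since the patent does not specify how the correlation is computed from the world-coordinate distance:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate_with_tracks(fusion_positions, track_positions, corr_threshold):
    """Build a correlation matrix from a distance-based similarity between
    each fusion target and each existing trajectory's latest fusion
    position, solve it with the Hungarian algorithm, and keep only pairs
    whose correlation exceeds the preset correlation threshold."""
    f = np.asarray(fusion_positions, float)   # k fusion targets
    t = np.asarray(track_positions, float)    # m existing trajectories
    dists = np.linalg.norm(f[:, None, :] - t[None, :, :], axis=2)
    corr = 1.0 / (1.0 + dists)                # one plausible similarity form
    rows, cols = linear_sum_assignment(-corr) # maximize total correlation
    return [(r, c) for r, c in zip(rows, cols) if corr[r, c] > corr_threshold]
```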
  • the existing trajectory can also be updated based on the current fusion position of the fusion target, that is, the current fusion position of the fusion target is used as a new track point of the existing trajectory, and a new fusion trajectory identifier is assigned to the updated existing trajectory.
  • the method can generate trajectories of multiple targets, and can adapt to the trajectory generation requirements of multiple complex targets in scenarios such as urban traffic.
  • the fused trajectory of the target can be generated in real time, that is, the method provided by the embodiment of the present application is applicable to scenarios with high real-time requirements such as smart intersections.
  • the method provided by the embodiment of the present application provides more reliable spatio-temporal information for the trajectory fusion of the target by synchronizing the image frames collected by multiple image collection devices.
  • the location association is realized by combining features such as world coordinates, lane numbers, movement speed, appearance, etc., making the association results more reliable and widely applicable, not limited to specific types of targets.
  • the method provided in the embodiment of the present application utilizes the single-camera trajectories for target trajectory fusion, which makes the fusion trajectory more reliable and effectively reduces identity switching of the fusion trajectory identifier.
  • the method provided in the embodiment of the present application does not need to artificially obtain the relationship matrix of the area collected by the image collection device, and has lower cost and better universality.
  • the method provided in the embodiment of the present application can be applied to a real-time scene to generate a fusion trajectory of a target in real time.
  • FIG. 5 is a schematic flow chart of the trajectory correction method provided by the embodiment of the present application. As shown in Figure 5, for application scenarios using delayed correction, the trajectory can be buffered with a delay and then smoothed, and finally the smoothed fusion trajectory is output.
  • for application scenarios not using delayed correction, the fusion trajectory can be output directly.
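A minimal sketch of the delayed-correction branch (delay cache followed by smoothing) is shown below; the moving-average window is an assumption, as the smoothing method is not specified:

```python
from collections import deque
import numpy as np

class DelayedSmoother:
    """Delay-buffer trajectory points and emit moving-average smoothed
    points, mirroring the delayed-correction branch of Fig. 5."""
    def __init__(self, window=5):
        self.buffer = deque(maxlen=window)   # trajectory delay cache
    def push(self, point):
        self.buffer.append(np.asarray(point, float))
        if len(self.buffer) == self.buffer.maxlen:
            return np.mean(self.buffer, axis=0)  # smoothed, delayed output
        return None                              # still filling the cache
```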
  • the method can generate trajectories of multiple targets, and can adapt to the trajectory generation requirements of multiple complex targets in scenarios such as urban traffic.
  • the fused trajectory of the target can be generated in real time, that is, the method provided by the embodiment of the present application is applicable to scenarios with high real-time requirements such as smart intersections.
  • the method provided by the embodiment of the present application provides more reliable spatio-temporal information for the trajectory fusion of the target by synchronizing the image frames collected by multiple image collection devices.
  • the location association is realized by combining features such as world coordinates, lane numbers, movement speed, appearance, etc., making the association results more reliable and widely applicable, not limited to specific types of targets.
  • for the fusion trajectory generated for a specific target, the method provided by the embodiment of the present application provides more reliable spatio-temporal information for trajectory fusion by synchronizing the image frames collected by multiple image acquisition devices, and combines world coordinates, lane number, motion speed, appearance and other features to achieve position association, making the association results more reliable and widely applicable. Therefore, compared with the current related technologies, the fusion trajectory generated for a specific target using the method provided in the embodiment of the present application is more accurate.
  • FIG. 6 is a schematic structural diagram of a device for generating a target trajectory provided in an embodiment of the present application. As shown in FIG. 6, the device includes:
  • the position acquisition module 601 is used for acquiring the image position of each target in each image frame in the image group for each image group collected by the image acquisition device;
  • a position conversion module 602 configured to convert the image position into a position in a world coordinate system according to a preset conversion relationship, to obtain a world position corresponding to the image position;
  • the position fusion module 603 is configured to, for each image frame set, fuse the world position of each identical target in each image frame in the image frame set to obtain the fusion position of the target; wherein, the image frame set is Collecting an image set composed of time-synchronized image frames;
  • the track generation module 604 is configured to, for each target, associate the fusion position of the target according to the acquisition time sequence of the image frame set corresponding to the target, and generate the fusion track of the target.
  • with the device provided by the embodiment of the present application, for each image group collected by an image acquisition device, the image position of each target in each image frame of the image group is obtained; the image position is converted into a position in the world coordinate system according to a preset conversion relationship to obtain the world position corresponding to the image position; for each image frame set, the world positions of each identical target in the image frames of the image frame set are fused to obtain the fusion position of the target; and, for each target, the fusion positions of the target are associated according to the acquisition time sequence of the corresponding image frame sets to generate the fusion trajectory of the target. That is, the device provided by the embodiment of the present application can generate the trajectories of multiple targets, and can meet the trajectory generation requirements of multiple complex targets in scenarios such as urban traffic. A sketch of the image-to-world conversion follows.
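The preset conversion relationship used by the position conversion module 602 is commonly realized as a homography between the image plane and the ground plane of the world coordinate system. A minimal sketch follows, assuming OpenCV and a pre-calibrated 3x3 homography `H`; the matrix values here are placeholders, and the calibration itself (e.g., cv2.findHomography over surveyed image/world point pairs) is outside the scope of this example.

```python
import numpy as np
import cv2

# Assumed: H is a pre-calibrated 3x3 homography mapping image pixels to
# ground-plane world coordinates; the values below are placeholders.
H = np.array([[2.0e-2, 1.0e-3, -15.0],
              [5.0e-4, 2.5e-2, -40.0],
              [1.0e-6, 5.0e-5, 1.0]], dtype=np.float64)

def image_to_world(points_px: np.ndarray) -> np.ndarray:
    """Convert N image positions (pixels), shape (N, 2), to world positions."""
    pts = points_px.reshape(-1, 1, 2).astype(np.float64)
    world = cv2.perspectiveTransform(pts, H)  # applies H with perspective divide
    return world.reshape(-1, 2)

# Example: bottom-centre of a detected target's bounding box.
print(image_to_world(np.array([[640.0, 512.0]])))
```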
  • the number of image acquisition devices is at least 3;
  • the position fusion module 603 includes:
  • the similarity matrix determination submodule 701 is configured to, for any two image frames in the image frame set, construct a similarity matrix of the targets in the two image frames according to the world positions of the targets in the image frame set;
  • the target determination submodule 702 is configured to solve the similarity matrix using the Hungarian algorithm; if the similarity between two targets that are matched at corresponding positions of the similarity matrix and that are located in different image frames is greater than a preset similarity threshold, the two targets are determined to be the same target;
  • the position fusion sub-module 703 is configured to fuse the world positions of the same target in the image frame set to obtain a fused position, and use the fused position as the fused position of the fused target corresponding to the same target.
  • the similarity matrix determination submodule 701 is specifically configured to determine the velocity direction of each target based on the world positions of the targets in the image frame set, and, for any two image frames in the image frame set, construct the similarity matrix of the targets in the two image frames based on the world positions and velocity directions of the targets in the two image frames. A sketch of this matching step is given below.
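A minimal sketch of the matching performed by submodules 701 and 702. The text above specifies a similarity built from world positions and velocity directions, solved with the Hungarian algorithm; the particular weighting, the distance scale and the threshold value below are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def similarity(pos_a, vel_a, pos_b, vel_b, scale=5.0):
    """Illustrative similarity in [0, 1]: nearby world positions and aligned
    velocity directions score high."""
    s_pos = np.exp(-np.linalg.norm(pos_a - pos_b) / scale)
    na, nb = np.linalg.norm(vel_a), np.linalg.norm(vel_b)
    if na == 0 or nb == 0:
        s_dir = 0.5  # no usable direction: stay neutral
    else:
        s_dir = 0.5 * (1.0 + np.dot(vel_a, vel_b) / (na * nb))
    return s_pos * s_dir

def match_targets(frame1, frame2, sim_threshold=0.6):
    """frame1/frame2: lists of (world_pos, velocity) arrays per detected target.
    Returns index pairs judged to be the same physical target."""
    S = np.array([[similarity(p1, v1, p2, v2) for (p2, v2) in frame2]
                  for (p1, v1) in frame1])
    rows, cols = linear_sum_assignment(-S)  # Hungarian: maximise total similarity
    return [(r, c) for r, c in zip(rows, cols) if S[r, c] > sim_threshold]
```

Pairs that survive the threshold are treated as the same target, whose world positions submodule 703 then fuses (for example by averaging) into the fusion position.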
  • the device further includes:
  • the fused position update module 801 is configured to, for each fusion target, if at least two of the multiple identical targets corresponding to the fusion target come from the same image group, remove, from the at least two targets, the targets whose distance to the fusion target is not the minimum, and update the fusion position of the fusion target accordingly (a sketch of this rule follows).
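A minimal sketch of this update, assuming the fusion position is the mean of the matched world positions (the fusion operator itself is not fixed by the embodiment):

```python
import numpy as np

def update_fused_position(matches):
    """matches: list of (image_group_id, world_pos) matched to one fusion target.
    Keep, per image group, only the target nearest to the current fusion
    position, then re-fuse the survivors by averaging."""
    fused = np.mean([np.asarray(p) for _, p in matches], axis=0)
    best = {}  # image_group_id -> (distance_to_fused, world_pos)
    for gid, pos in matches:
        d = float(np.linalg.norm(np.asarray(pos) - fused))
        if gid not in best or d < best[gid][0]:
            best[gid] = (d, pos)
    return np.mean([np.asarray(p) for _, p in best.values()], axis=0)
```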
  • the device further includes:
  • a single-camera trajectory generation module 802, configured to, for each image group, generate a single-camera trajectory of each target based on the multiple world positions of the target in the image group;
  • the track generation module 604 is specifically configured to, for each fusion target: if the source of the current fusion position of the fusion target and the source of a fusion position in an existing trajectory include positions in the same single-camera trajectory, associate the current fusion position of the fusion target with that existing trajectory, wherein an existing trajectory is a trajectory formed by fusion positions whose acquisition times precede that of the current fusion position of the fusion target; if the source of the current fusion position of the fusion target does not include a position in the same single-camera trajectory as the source of the fusion positions in any existing trajectory, and the current fusion position of the fusion target is in a preset motion state, determine that the current fusion position of the fusion target is not associated with any existing trajectory; and if the source of the current fusion position of the fusion target does not include a position in the same single-camera trajectory as the source of the fusion positions in any existing trajectory, and the current fusion position of the fusion target is not in the preset motion state, construct a new trajectory for the fusion target (see the sketch after this item).
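The branching above can be summarised in a short sketch. `shares_single_camera_track` and `in_preset_motion_state` are placeholder predicates standing in for the source and motion-state checks described above; how trajectory sources and motion states are represented is left open here.

```python
def associate_fusion_position(fusion_pos, existing_tracks,
                              shares_single_camera_track,
                              in_preset_motion_state):
    """Decide how the current fusion position relates to existing trajectories.

    existing_tracks: list of mutable trajectories (e.g., lists of positions).
    """
    for track in existing_tracks:
        # A shared single-camera trajectory among the sources indicates the
        # same physical target, so extend that trajectory.
        if shares_single_camera_track(fusion_pos, track):
            track.append(fusion_pos)
            return track
    if in_preset_motion_state(fusion_pos):
        return None  # mid-motion with no shared source: leave unassociated
    new_track = [fusion_pos]  # not mid-motion: start a new trajectory
    existing_tracks.append(new_track)
    return new_track
```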
  • the device further includes:
  • a track updating module 803, configured to update the existing track based on the current fusion position of the fusion target.
  • the device further includes:
  • a synchronization time determination module 804, configured to: for each image group, generate a single-camera trajectory of each target based on the multiple world positions of the target in the image group; match the single-camera trajectories of any two image groups; for each group of matched single-camera trajectories, calculate the distances between the start position and the end position of the first single-camera trajectory in the group and each position of the second single-camera trajectory, and calculate the distances between the start position and the end position of the second single-camera trajectory and each position of the first single-camera trajectory; if the second single-camera trajectory contains a position whose distance to the start position or the end position of the first single-camera trajectory is less than a preset distance threshold, and the first single-camera trajectory contains a position whose distance to the start position or the end position of the second single-camera trajectory is less than the preset distance threshold, retain the group of matched single-camera trajectories and take the image frames in which the two positions are respectively located as primary synchronous image frames; and, for any two image groups, calculate the average of the acquisition times of each pair of primary synchronous image frames in the two image groups, and take the image frames whose acquisition times equal the average time as the final synchronous image frames of the two image groups. A sketch of this step follows.
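A minimal sketch of the synchronisation step for one pair of matched single-camera trajectories. Each trajectory is assumed here to be a time-ordered list of `(timestamp, world_pos)` samples; this representation and the distance threshold are illustrative.

```python
import numpy as np

def nearest_sample(track, ref_pos):
    """Return (timestamp, distance) of the track sample closest to ref_pos."""
    return min(((t, float(np.linalg.norm(np.asarray(p) - ref_pos)))
                for t, p in track), key=lambda td: td[1])

def primary_sync_time_pairs(track_a, track_b, dist_threshold=2.0):
    """Pair the start/end samples of each trajectory with the nearest sample
    of the other; keep pairs closer than the threshold as primary sync frames."""
    pairs = []
    for src, dst in ((track_a, track_b), (track_b, track_a)):
        for t_end, p_end in (src[0], src[-1]):  # start and end samples
            t_near, d = nearest_sample(dst, np.asarray(p_end))
            if d < dist_threshold:
                pairs.append((t_end, t_near))
    return pairs

def final_sync_time(pairs):
    """Average the acquisition times over the primary sync frame pairs; the
    frames whose acquisition times equal this average are then taken as the
    final synchronous image frames of the two image groups."""
    per_pair = [(ta + tb) / 2.0 for ta, tb in pairs]
    return sum(per_pair) / len(per_pair)
```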
  • with the device, for each target, the fusion positions of the target are associated according to the acquisition time sequence of the image frame sets corresponding to the target to generate the fusion trajectory of the target. That is, the device provided by the embodiment of the present application can generate trajectories of multiple targets, and can meet the trajectory generation requirements of multiple complex targets in scenarios such as urban traffic.
  • the fused trajectory of the target can be generated in real time, that is, the device provided by the embodiment of the present application is suitable for scenarios with high real-time requirements such as smart intersections.
  • the apparatus provided in the embodiment of the present application provides more reliable spatio-temporal information for trajectory fusion of a target by synchronizing image frames collected by multiple image collection devices.
  • the location association is realized by combining features such as world coordinates, lane numbers, movement speed, appearance, etc., making the association results more reliable and widely applicable, not limited to specific types of targets.
  • the device provided in the embodiment of the present application uses single-camera trajectories to perform trajectory fusion of a target, making the fusion trajectory more reliable and effectively reducing identity switches in the fusion trajectories.
  • the device provided by the embodiment of the present application does not require manually obtaining a relationship matrix between the areas covered by the image acquisition devices, and therefore has lower cost and better universality.
  • the embodiment of the present application also provides an electronic device, as shown in FIG. 9, including a processor 901, a communication interface 902, a memory 903 and a communication bus 904, wherein the processor 901, the communication interface 902 and the memory 903 communicate with one another through the communication bus 904;
  • the memory 903 is configured to store a computer program;
  • the processor 901 is configured to implement the steps of the method for generating the target trajectory described in any one of the above-mentioned embodiments when executing the program stored in the memory 903 .
  • the communication bus mentioned above for the electronic device may be a Peripheral Component Interconnect (PCI) bus or an Extended Industry Standard Architecture (EISA) bus or the like.
  • the communication bus can be divided into an address bus, a data bus, a control bus, and the like. For ease of representation, only one thick line is used in the figure, but it does not mean that there is only one bus or one type of bus.
  • the communication interface is used for communication between the electronic device and other devices.
  • the memory may include a random access memory (Random Access Memory, RAM), and may also include a non-volatile memory (Non-Volatile Memory, NVM), such as at least one disk memory.
  • the memory may also be at least one storage device located far away from the aforementioned processor.
  • the above-mentioned processor may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP) and the like; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
  • in yet another embodiment of the present application, a computer-readable storage medium is also provided. A computer program is stored in the computer-readable storage medium, and when the computer program is executed by a processor, the steps of any of the above methods for generating a target trajectory are implemented.
  • in yet another embodiment of the present application, a computer program product including instructions is also provided, which, when run on a computer, causes the computer to execute any of the methods for generating a target trajectory in the above embodiments.
  • in the above embodiments, the implementation may be realized in whole or in part by software, hardware, firmware or any combination thereof. When software is used, the implementation may take the form, in whole or in part, of a computer program product.
  • the computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on the computer, the processes or functions according to the embodiments of the present application will be generated in whole or in part.
  • the computer can be a general purpose computer, a special purpose computer, a computer network, or other programmable devices.
  • the computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from a website, computer, server or data center to another website, computer, server or data center by wire (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wirelessly (e.g., infrared, radio, microwave, etc.).
  • the computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device, such as a server or a data center, integrating one or more available media.
  • the available medium may be a magnetic medium (for example, a floppy disk, a hard disk, or a magnetic tape), an optical medium (for example, DVD), or a semiconductor medium (for example, a Solid State Disk (SSD)).

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The embodiments of the present application relate to a method and apparatus for generating a trajectory of a target, as well as an electronic device and a medium. The method comprises: for an image group collected by each image acquisition device, acquiring an image position at which each target in each image frame of the image group is located; converting the image position into a position in a world coordinate system according to a preset conversion relationship, so as to obtain a world position corresponding to the image position; for each image frame set, fusing the world positions of each same target in each image frame of the image frame set, so as to obtain a fused position of the target; and, for each target, associating the fused positions of the target according to a collection time sequence of the image frame sets corresponding to the target, so as to generate a fused trajectory of the target. By means of the method, trajectories of a plurality of targets can be generated, and the method can adapt to the trajectory generation requirements of a plurality of complex targets in scenarios such as urban traffic.
PCT/CN2022/117505 2021-11-17 2022-09-07 Procédé et appareil pour générer une trajectoire de cible, et dispositif électronique et support WO2023087860A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111359855.X 2021-11-17
CN202111359855.XA CN114066974A (zh) 2021-11-17 2021-11-17 一种目标轨迹的生成方法、装置、电子设备及介质

Publications (1)

Publication Number Publication Date
WO2023087860A1 true WO2023087860A1 (fr) 2023-05-25

Family

ID=80273030

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/117505 WO2023087860A1 (fr) 2021-11-17 2022-09-07 Procédé et appareil pour générer une trajectoire de cible, et dispositif électronique et support

Country Status (2)

Country Link
CN (2) CN114066974A (fr)
WO (1) WO2023087860A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117435934A (zh) * 2023-12-22 2024-01-23 中国科学院自动化研究所 基于二分图的运动目标轨迹的匹配方法、装置和存储介质

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114066974A (zh) * 2021-11-17 2022-02-18 上海高德威智能交通系统有限公司 一种目标轨迹的生成方法、装置、电子设备及介质


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200364443A1 (en) * 2018-05-15 2020-11-19 Tencent Technology (Shenzhen) Company Limited Method for acquiring motion track and device thereof, storage medium, and terminal
CN111476827A (zh) * 2019-01-24 2020-07-31 曜科智能科技(上海)有限公司 目标跟踪方法、系统、电子装置及存储介质
CN112232279A (zh) * 2020-11-04 2021-01-15 杭州海康威视数字技术股份有限公司 一种人员间距检测方法和装置
CN112070807A (zh) * 2020-11-11 2020-12-11 湖北亿咖通科技有限公司 多目标跟踪方法和电子装置
CN112465866A (zh) * 2020-11-27 2021-03-09 杭州海康威视数字技术股份有限公司 多目标轨迹获取方法、装置、系统及存储介质
CN114066974A (zh) * 2021-11-17 2022-02-18 上海高德威智能交通系统有限公司 一种目标轨迹的生成方法、装置、电子设备及介质


Also Published As

Publication number Publication date
CN114066974A (zh) 2022-02-18
CN115908545A (zh) 2023-04-04

Similar Documents

Publication Publication Date Title
WO2023087860A1 (fr) Procédé et appareil pour générer une trajectoire de cible, et dispositif électronique et support
WO2021196294A1 (fr) Procédé et système de suivi d'emplacement de personne à travers des vidéos, et dispositif
WO2020134512A1 (fr) Système de détection de trafic par radar à ondes millimétriques et vidéo
WO2019233286A1 (fr) Procédé et appareil de positionnement visuel, dispositif électronique et système
JP2022509302A (ja) 地図生成方法、運転制御方法、装置、電子機器及びシステム
CN111784729B (zh) 一种对象跟踪方法、装置、电子设备及存储介质
WO2023045271A1 (fr) Procédé et appareil de génération de carte bidimensionnelle, dispositif terminal et support de stockage
CN115376109B (zh) 障碍物检测方法、障碍物检测装置以及存储介质
CN113447923A (zh) 目标检测方法、装置、系统、电子设备及存储介质
CN103198488A (zh) Ptz监控摄像机实时姿态快速估算方法
CN112016483A (zh) 目标检测的接力系统、方法、装置及设备
CN111784730B (zh) 一种对象跟踪方法、装置、电子设备及存储介质
TW201142751A (en) Video processing system generating corrected geospatial metadata for a plurality of georeferenced video feeds and related methods
CN111435435B (zh) 一种同伴识别方法、装置、服务器及系统
CN114827570A (zh) 一种基于三维场景的视频态势感知与信息融合方法及电子设备
CN109064499A (zh) 一种基于分布式解析的多层框架抗震实验高速视频测量方法
WO2024055966A1 (fr) Procédé et appareil de détection de cible à caméras multiples
CN113850837B (zh) 视频处理方法、装置、电子设备、存储介质及计算机产品
CN105894505A (zh) 一种基于多摄像机几何约束的快速行人定位方法
den Hollander et al. Automatic inference of geometric camera parameters and inter-camera topology in uncalibrated disjoint surveillance cameras
CN114782496A (zh) 一种对象的跟踪方法、装置、存储介质及电子装置
CN115144843A (zh) 一种物体位置的融合方法及装置
TW202236214A (zh) 深度圖像之生成方法、系統以及應用該方法之定位系統
CN113611112A (zh) 一种目标关联方法、装置、设备及存储介质
CN110865368A (zh) 一种基于人工智能的雷达视频数据融合方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22894400

Country of ref document: EP

Kind code of ref document: A1

WD Withdrawal of designations after international publication