CN116543356B - Track determination method, track determination equipment and track determination medium - Google Patents

Track determination method, track determination equipment and track determination medium

Info

Publication number
CN116543356B
CN116543356B
Authority
CN
China
Prior art keywords
candidate
video
video acquisition
determining
target object
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310813483.6A
Other languages
Chinese (zh)
Other versions
CN116543356A (en)
Inventor
曹伟
史世莲
马飞
王雯雯
吴蕾
卢超
路扬
于洋
马前进
张鹏飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qingdao International Airport Group Co ltd
Hisense TransTech Co Ltd
Original Assignee
Qingdao International Airport Group Co ltd
Hisense TransTech Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qingdao International Airport Group Co ltd and Hisense TransTech Co Ltd
Priority to CN202310813483.6A
Publication of CN116543356A
Application granted
Publication of CN116543356B
Legal status: Active
Anticipated expiration


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/50: Context or environment of the image
    • G06V20/52: Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/40: Extraction of image or video features
    • G06V10/62: Extraction of image or video features relating to a temporal dimension, e.g. time-based feature extraction; Pattern tracking
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74: Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75: Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/40: Scenes; Scene-specific elements in video content
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00: Television systems
    • H04N7/18: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast, for receiving images from a plurality of remote sources

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)

Abstract

The application relates to the technical field of image processing, and in particular to a track determination method, track determination device, and track determination medium, which provide a scheme for reducing the volume of video data to be processed during track determination and thereby reducing resource consumption. In an embodiment of the application, a first position of a target object at the current moment is determined, and a candidate location area where the target object will be at the next moment is predicted based on the first position; a candidate video capture device is selected from a plurality of video capture devices based on the candidate location area and the installation positions of the devices; video data captured by the candidate video capture device is acquired, and a second position of the target object at the next moment is determined based on that video data; and a movement trajectory of the target object is determined based on the first position and the second position.

Description

Track determination method, track determination equipment and track determination medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a track determining method, apparatus, and medium.
Background
With the advancement of technology, techniques for determining a target object's movement track from video have come into wide use. Currently, when performing track determination based on video, video data from all video points in the area where the target object may appear is typically stored, and the target object's movement track is determined by analyzing and recognizing each video.
However, because the volume of video data is large, this approach requires a large number of computing servers to keep the analysis and recognition algorithms running efficiently, and a large number of storage servers to hold the video data. The resulting resource consumption is excessive and exceeds the practical service value of applications such as determining the movement track of a specific object or tracing historical tracks in venues such as airport hubs.
Disclosure of Invention
The application aims to provide a track determination method, device, and medium, which offer a scheme for reducing the volume of video data that must be processed during track determination, thereby reducing resource consumption.
In a first aspect, the present application provides a track determining method, the method comprising:
determining a first position of the target object at the current moment, and predicting a candidate position area of the target object at the next moment based on the first position;
selecting a candidate video capture device from the plurality of video capture devices based on the candidate location area and the installation locations of the plurality of video capture devices;
acquiring video data acquired by candidate video acquisition equipment, and determining a second position of a target object at the next moment based on the video data;
A movement trajectory of the target object is determined based on the first position and the second position.
In one possible embodiment, if the first position belongs to a block, the candidate location area includes the target road to which the first position belongs and the associated roads that have a connectivity relationship with the target road.
In one possible implementation, selecting a candidate video capture device from the plurality of video capture devices based on the candidate location area and the installation positions of the devices comprises: determining a first travel direction and a second travel direction starting from the first position, the first travel direction pointing from the first position toward the start point of the target road and the second travel direction pointing from the first position toward the end point of the target road; and determining, based on the installation positions of the plurality of video capture devices, the candidate video capture device within the candidate location area that is closest to the first position along the first travel direction and along the second travel direction, respectively.
In one possible embodiment, determining the candidate video capture device within the candidate location area that is closest to the first position along the first travel direction and the second travel direction, respectively, includes: if a first video capture device is installed on the target road in the first travel direction, taking the first video capture device at the smallest distance from the first position as the candidate video capture device in the first travel direction, a first video capture device being one located between the first position and the start point of the target road; and if a second video capture device is installed on the target road in the second travel direction, taking the second video capture device at the smallest distance from the first position as the candidate video capture device in the second travel direction, a second video capture device being one located between the first position and the end point of the target road.
In one possible embodiment, determining the candidate video capture device within the candidate location area that is closest to the first position along the first travel direction and the second travel direction, respectively, includes: if no first video capture device exists on the target road in the first travel direction, determining the candidate video capture device closest to the first position from the associated roads of the target road in the first travel direction, a first video capture device being one located between the first position and the start point of the target road; and if no second video capture device exists on the target road in the second travel direction, determining the candidate video capture device closest to the first position from the associated roads of the target road in the second travel direction, a second video capture device being one located between the first position and the end point of the target road.
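The direction-wise nearest-device selection described in the two embodiments above can be sketched as follows. This is a minimal illustration, assuming each road is parameterised by distance from its start point; the device tuples and the `offset` field are hypothetical conveniences, not part of the patent.

```python
def nearest_device(first_offset, road_id, devices, toward_start):
    """Pick the device on `road_id` closest to the first position along one
    travel direction: toward the road's start point when `toward_start` is
    True, toward its end point otherwise. `devices` is a list of
    (device_id, road_id, offset) tuples, offset being the distance from the
    road's start point."""
    if toward_start:
        on_road = [d for d in devices if d[1] == road_id and d[2] <= first_offset]
    else:
        on_road = [d for d in devices if d[1] == road_id and d[2] >= first_offset]
    if not on_road:
        return None  # no device in this direction: caller falls back to associated roads
    return min(on_road, key=lambda d: abs(d[2] - first_offset))
```

Returning `None` mirrors the fallback case above: when no device exists on the target road in a travel direction, the search continues on the associated roads.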
In one possible implementation, predicting, based on the first position, the candidate location area where the target object will be at the next moment includes: if the first position belongs to a non-block area, determining an influence factor based on the moving speed of the target object, the capture range of each video capture device, and the recognition speed for video data, the influence factor characterizing the size of the candidate location area; and determining the candidate location area based on the first position and the influence factor.
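One plausible reading of the influence factor above is a radius around the first position: the farthest the target could travel while one round of recognition runs, padded by the devices' capture range. The formula and function names below are assumptions for illustration, not the patent's definition.

```python
import math

def influence_radius(speed_mps, recognition_delay_s, capture_range_m):
    """Hypothetical influence factor: maximum distance the target can move
    during one recognition round, padded by the capture range."""
    return speed_mps * recognition_delay_s + capture_range_m

def in_candidate_area(first_pos, point, radius):
    """Whether `point` lies within the circular candidate area centred on
    the first position."""
    return math.hypot(point[0] - first_pos[0], point[1] - first_pos[1]) <= radius
```

A device whose installation position satisfies `in_candidate_area` would then be screened in as a candidate for the non-block case.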
In one possible implementation, selecting a candidate video capture device from a plurality of video capture devices based on a candidate location area and an installation location of the plurality of video capture devices, comprises: screening video acquisition equipment positioned in the candidate position area from the plurality of video acquisition equipment based on the installation positions of the plurality of video acquisition equipment; and determining the screened video acquisition equipment as candidate video acquisition equipment.
In one possible implementation, determining, based on the video data, a second location at which the target object is located at a next time includes: matching the video data acquired by each candidate video acquisition device with the identification information of the target object respectively, wherein the identification information of different target objects is different; and determining the installation position of the candidate video acquisition equipment successfully matched as a second position of the target object at the next moment.
In a second aspect, the present application provides a trajectory determining device comprising:
the prediction module is used for determining a first position of the target object at the current moment and predicting a candidate position area of the target object at the next moment based on the first position;
a selection module for selecting a candidate video capture device from the plurality of video capture devices based on the candidate location area and the installation locations of the plurality of video capture devices;
The position determining module is used for acquiring video data acquired by the candidate video acquisition equipment and determining a second position of the target object at the next moment based on the video data;
and the track determining module is used for determining the moving track of the target object based on the first position and the second position.
In a third aspect, the present application provides a track determining apparatus, the apparatus comprising a server and a plurality of video capturing apparatuses;
the video acquisition equipment is used for acquiring video data;
the server is used for determining a first position of the target object at the current moment and predicting a candidate position area of the target object at the next moment based on the first position;
selecting a candidate video capture device from the plurality of video capture devices based on the candidate location area and the installation locations of the plurality of video capture devices;
acquiring video data acquired by candidate video acquisition equipment, and determining a second position of a target object at the next moment based on the video data;
a movement trajectory of the target object is determined based on the first position and the second position.
In a fourth aspect, the present application provides a computer-readable storage medium storing a computer program which, when executed by an electronic device, causes the electronic device to perform the track determination method of the first aspect.
In a fifth aspect, the application provides a computer program product comprising a computer program:
the computer program, when executed by a processor, implements the trajectory determination method of the first aspect as described above.
The technical scheme provided by the embodiment of the application at least has the following beneficial effects:
according to the first position of the target object at the current moment, predicting a candidate position area of the target object at the next moment, so that the area range of video data processing is reduced, and meanwhile, the accuracy of track determination is ensured; selecting candidate video acquisition equipment from the plurality of video acquisition equipment based on the candidate position area and the installation positions of the plurality of video acquisition equipment, screening the video acquisition equipment, and processing the video acquired by the screened video acquisition equipment only, so that the processing amount of video data is further reduced; determining a second position of the target object at the next moment based on the data acquired by the candidate video equipment; and the moving track of the target object is determined based on the first position and the second position, and the processing pressure of the server is reduced and the consumption of resources is reduced by reducing the number of video data.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the embodiments of the present application will be briefly described below, and it is obvious that the drawings described below are only some embodiments of the present application, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is an application scenario of an alternative track determination method according to an embodiment of the present application;
FIG. 2 is a flowchart of a track determination method according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a second position determining process according to an embodiment of the present application;
FIG. 4 is a schematic diagram of one possible trajectory determination process according to an embodiment of the present application;
FIG. 5 is a schematic diagram of a road communication relationship according to an embodiment of the present application;
FIG. 6 is a schematic diagram of a possible association relationship according to an embodiment of the present application;
FIG. 7 is a flowchart of a candidate video capture device selection process according to an embodiment of the present application;
FIG. 8 is a schematic diagram of an installation position of a video capturing apparatus according to an embodiment of the present application;
FIG. 9 is a schematic diagram of an installation position of a video capturing apparatus according to an embodiment of the present application;
FIG. 10 is a flow chart of an action trace according to an embodiment of the present application;
FIG. 11 is a schematic diagram of one possible trajectory determination process according to an embodiment of the present application;
FIG. 12 is a flow chart of a candidate location area determination procedure according to an embodiment of the application;
FIG. 13 is a schematic diagram of a track determining device according to an embodiment of the present application;
fig. 14 is a schematic diagram of a trajectory determining device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application more clear, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application. Wherein the described embodiments are some, but not all embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
Also, in the description of the embodiments of the present application, unless otherwise indicated, "/" means "or"; for example, A/B may represent A or B. The term "and/or" merely describes an association between objects and indicates that three relations may exist; for example, "A and/or B" may indicate: A alone, both A and B, or B alone. The terms "first", "second", and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features; thus a feature qualified by "first", "second", or the like may explicitly or implicitly include one or more such features. In the description of the embodiments of the present application, unless otherwise indicated, "a plurality" means two or more.
In the technical scheme of the application, the data is collected, transmitted, used and the like, and all meet the requirements of national relevant laws and regulations.
Before describing the track determining method provided by the embodiment of the present application, for convenience of understanding, the following detailed description will be given to the technical background of the embodiment of the present application.
With the advancement of technology, techniques for determining a target object's movement track from video have come into wide use. Currently, when performing track determination based on video, video data from all video points in the area where the target object may appear is typically structured in real time and all of the structured data is stored; the target object's movement track is then determined by analyzing and recognizing each piece of stored structured data.
However, because the volume of video data is large, this approach requires a large number of computing servers to keep the analysis and recognition algorithms running efficiently, and a large number of storage servers to hold the video data. The resulting resource consumption is excessive, exceeds the practical service value, and yields poor results for applications such as determining the movement track of a specific object or tracing historical tracks in venues such as airport hubs.
To address these problems, the application provides a lightweight track determination method whose main idea is: predict a candidate location area where the target object may appear at the next moment based on its current position, screen the video capture devices using that candidate area, and run recognition only on the screened devices rather than on every surrounding device where the target object might appear. This greatly reduces the amount of video data computed in each round, relieves the pressure on the server, and saves construction cost.
An application scenario of an optional track determination method provided by an embodiment of the present application is described below with reference to the accompanying drawings. As shown in fig. 1, the scene includes a plurality of video capture devices 10 installed at different locations and a server 20, wherein: a video acquisition device 10 for acquiring video data and transmitting the video data to the server 20; the server 20 is configured to determine a first location where the target object is located at a current time, and predict a candidate location area where the target object is located at a next time based on the first location; selecting a candidate video capture device from the plurality of video capture devices based on the candidate location area and the installation locations of the plurality of video capture devices; acquiring video data acquired by candidate video acquisition equipment, and determining a second position of a target object at the next moment based on the video data; a movement trajectory of the target object is determined based on the first position and the second position. Of course, the method provided by the embodiment of the present application is not limited to the application scenario shown in fig. 1, but may be used in other possible application scenarios, and the embodiment of the present application is not limited thereto.
As shown in fig. 2, a flowchart of a track determining method according to an embodiment of the present application includes the following specific steps:
step S201, determining a first position of a target object at the current moment, and predicting a candidate position area of the target object at the next moment based on the first position;
in the embodiment of the application, when the action track of the target object is determined, the position of the target object is determined based on the position of the target object determined last time, the movement track of the target object is determined based on the position of the target object determined last time and the position of the target object determined next time, and all the action tracks of the target object are determined based on a plurality of movement tracks determined in sequence; therefore, if the current time is the starting time of determining the moving track of the target object (i.e. the target object is not identified by the video data collected by the video collecting device at present), the first position of the target object at the current time is obtained by: through user input; if the current time is the starting time of the moving track of the target object (namely, the target object is identified by the video data acquired by the video acquisition device at present), the first position of the target object at the current time is the position of the video acquisition device for identifying the target object.
In some embodiments, different ways of determining the candidate location area are provided for the different scenes in which the first position may lie. Specifically, when the first position belongs to a block (i.e., a scene with fixed roads), the candidate location area is determined from the connectivity of the roads; when the first position belongs to a non-block area (i.e., a scene without fixed roads, such as an open area), the candidate location area is determined from factors such as the target object's moving speed, the capture range of each video capture device, and the speed at which the server processes video data.
Step S202, selecting candidate video acquisition equipment from the plurality of video acquisition equipment based on the candidate position area and the installation positions of the plurality of video acquisition equipment;
in some embodiments, in order to avoid larger resource consumption caused by excessive processed video data, in the embodiments of the present application, the number of collected video data is reduced by screening the video collection devices based on the installation positions corresponding to the plurality of video collection devices. In a specific implementation, the selection modes of the video acquisition device are different according to scenes where different first positions are located, and a specific selection mode user can set according to requirements.
Step S203, acquiring video data acquired by candidate video acquisition equipment, and determining a second position of a target object at the next moment based on the video data;
in some embodiments, before the video data acquired by the candidate video acquisition device is acquired, a time period range of the acquired video data may be preset, for example, ten minutes, and when the video data acquired by the candidate video acquisition device is acquired, the video data of the preset time period range acquired by the video acquisition device may be acquired.
In specific implementation, as shown in fig. 3, the above process of determining the second position of the target object at the next moment specifically includes the following steps:
step S301, matching video data acquired by each candidate video acquisition device with identification information of a target object respectively;
the identification information of the target object is a unique identification of the target object, and the identification information of different target objects is different, so that the position of the target object can be determined by matching the identification information.
And step S302, determining the installation position of the candidate video acquisition equipment successfully matched as a second position of the target object at the next moment.
In some embodiments, a video list is preconfigured to record the identifiers of the candidate video capture devices and the video data they capture. Once candidate video capture devices have been selected, they are added to the video list; the server in the track determination apparatus monitors the number and identifiers of the candidate devices in the video list in real time, and when a new candidate device appears, it recognizes that device's captured video data in the manner of steps S301-S302 above.
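Steps S301-S302 can be sketched as matching each candidate device's recognized identifiers against the target's unique identification. The dictionary shapes below are hypothetical stand-ins for the video list and the device registry.

```python
def match_second_position(video_list, target_id, install_pos):
    """`video_list` maps each candidate device id to the set of
    identification strings recognised in its footage; `install_pos` maps
    device ids to installation positions. Returns the installation position
    of the first device whose footage matches the target's unique ID, or
    None if no device matched."""
    for device_id, ids_found in video_list.items():
        if target_id in ids_found:  # unique ID matched: target seen by this device
            return install_pos[device_id]
    return None
```

The matched device's installation position is taken as the second position of the target object at the next moment, exactly as step S302 states.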
Step S204, determining the moving track of the target object based on the first position and the second position.
After the movement track of the target object has been determined from the first position and the second position, the second position is taken as the current position for the next round of track determination, and steps S201-S204 are executed again to determine the next segment of the target object's movement track.
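The iterative loop described above, where each confirmed second position becomes the first position of the next round, can be sketched as follows. Here `locate_next` is a hypothetical stand-in for the whole predict / screen / recognise pipeline of steps S201-S203.

```python
def build_trajectory(start_pos, locate_next, max_steps=100):
    """Iterates steps S201-S204: `locate_next` returns the next confirmed
    position of the target given its current position, or None when the
    target can no longer be located; the accumulated positions form the
    movement track."""
    track = [start_pos]
    pos = start_pos
    for _ in range(max_steps):
        nxt = locate_next(pos)
        if nxt is None:
            break
        track.append(nxt)
        pos = nxt
    return track
```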
According to the method above, the candidate location area where the target object will be at the next moment is predicted from the first position at the current moment, which narrows the area over which video data must be processed while preserving the accuracy of track determination; candidate video capture devices are selected from the plurality of video capture devices based on the candidate location area and their installation positions, so that only video captured by the screened devices is processed, further reducing the amount of video data handled; the second position of the target object at the next moment is determined from the data captured by the candidate video devices; and the target object's movement track is determined from the first and second positions. By reducing the amount of video data, both the processing pressure on the server and the consumption of resources are reduced.
In the embodiment of the application, separate candidate-location-area determination strategies and candidate-video-capture-device selection strategies are designed for the two scenes, block and non-block. The specific implementation of track determination in each case is introduced below.
1. The first location belongs to a block (a scene where there is a fixed road).
If the first position belongs to a block, the process of determining the target object's movement track includes the six steps shown in fig. 4, in order: determining the connectivity between roads, establishing the association between video capture devices and roads, determining the candidate location area, selecting candidate video capture devices, recognizing the target object, and generating the movement track. Of these, determining the connectivity between roads and establishing the association between video capture devices and roads are typically performed before determining the first position where the target object is currently located.
The following specifically describes the execution process of the above steps:
1. a connectivity relationship between the roads is determined.
In some embodiments, the connectivity of the roads within the target object's trajectory determination area is established before predicting the candidate location area where the target object will be at the next moment. In a specific implementation, the roads in the trajectory determination area are first identified and numbered (e.g., roads 1 to 8 in fig. 5); the start and end points of each road are then determined and their latitude and longitude recorded; for example, for road 1 in fig. 5 (circles represent start or end points, lines represent roads), the start point is A and the end point is B, and the latitude and longitude coordinates of both A and B are recorded; finally, based on the correspondence between each road and its start and end points, an association graph recording the connections between roads and points, and between the roads themselves, is built and stored.
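The association graph described above can be sketched by treating two roads as connected whenever they share a start or end point. This is a minimal illustration: the tuple shape and the use of point labels in place of recorded latitude/longitude pairs are assumptions.

```python
def build_connectivity(roads):
    """`roads` is a list of (road_id, start_point, end_point) tuples, the
    point labels standing in for recorded latitude/longitude coordinates.
    Returns a dict mapping each road id to the set of road ids that share
    a start or end point with it."""
    point_to_roads = {}
    for rid, start, end in roads:
        for p in (start, end):
            point_to_roads.setdefault(p, set()).add(rid)
    graph = {rid: set() for rid, _, _ in roads}
    for rids in point_to_roads.values():
        for r in rids:
            graph[r] |= rids - {r}  # all other roads meeting at this point
    return graph
```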
2. An association relationship between the video capture devices and the roads is established.
The installation position of each video capture device and the extent of each road are determined. When the installation position of a video capture device falls within the extent of a road, the device is determined to belong to that road, and its capture direction is recorded at the same time, e.g., one-way (either from the end point of the road toward the start point, or from the start point toward the end point) or bidirectional. The association relationship between each video capture device and its road is recorded in this manner.
Fig. 6 is a schematic diagram of a possible association relationship. As shown in fig. 6, the video capture devices a, b, and c belong to road 1; devices a and b each capture in a single direction along the road, while device c captures bidirectionally, i.e., in both directions.
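As a hedged illustration of step 2, the sketch below assigns each device to the road whose segment contains its installation position and records its capture direction. The point-on-segment tolerance test, device names, and coordinates are assumptions made for the example, not the patent's prescribed method:

```python
def on_road(pos, start, end, tol=1e-4):
    """Rough containment test: is pos within tol of the segment start-end?"""
    (x, y), (x1, y1), (x2, y2) = pos, start, end
    dx, dy = x2 - x1, y2 - y1
    seg_len2 = dx * dx + dy * dy
    t = 0.0 if seg_len2 == 0 else max(
        0.0, min(1.0, ((x - x1) * dx + (y - y1) * dy) / seg_len2))
    px, py = x1 + t * dx, y1 + t * dy  # closest point on the segment
    return (x - px) ** 2 + (y - py) ** 2 <= tol * tol

def associate(devices, roads):
    """devices: {id: (position, direction)}; roads: {id: (start, end)}.
    Returns {road_id: {device_id: capture_direction}}."""
    assoc = {rid: {} for rid in roads}
    for dev_id, (pos, direction) in devices.items():
        for rid, (start, end) in roads.items():
            if on_road(pos, start, end):
                assoc[rid][dev_id] = direction
                break
    return assoc

# Devices a and c on road 1, echoing fig. 6 (positions invented).
roads = {1: ((120.380, 36.070), (120.390, 36.070))}
devices = {
    "a": ((120.384, 36.070), "one-way"),
    "c": ((120.387, 36.070), "bidirectional"),
}
assoc = associate(devices, roads)
```

In practice the tolerance would be set from GPS accuracy and road width; a fixed `tol` is used here only to keep the sketch self-contained.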
3. Candidate location areas are determined.
In some embodiments, when determining the candidate location area, the target road to which the first location belongs is first determined based on the first location and the positions of the roads; then, based on the connectivity relationships between the roads, the associated roads having a connectivity relationship with the target road are determined. The candidate location area includes the target road to which the first location belongs and the associated roads having a connectivity relationship with it. These associated roads include both roads directly connected to the target road and roads connected to the target road through at least one intermediate road within the trajectory determination range.
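A minimal sketch of this step, assuming the connectivity relationships are stored as an adjacency mapping `{road_id: set of connected road ids}` (an assumed representation): the candidate area is the target road plus every road reachable from it, directly or through intermediate roads.

```python
from collections import deque

def candidate_roads(graph, target_road):
    """Breadth-first traversal: the target road and all roads connected to
    it, directly or indirectly, within the trajectory determination range."""
    seen = {target_road}
    queue = deque([target_road])
    while queue:
        rid = queue.popleft()
        for nxt in graph.get(rid, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

# Roads 1-3 are mutually reachable; roads 4-5 lie in a separate component.
graph = {1: {2}, 2: {1, 3}, 3: {2}, 4: {5}, 5: {4}}
area = candidate_roads(graph, 1)
```

Because the graph only covers roads inside the trajectory determination range, the traversal naturally stops at the range boundary.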
4. Candidate video capture devices are selected.
In some embodiments, as shown in fig. 7, in the neighborhood scene, a candidate video capture device is selected from a plurality of video capture devices based on a candidate location area and an installation location of the plurality of video capture devices, and specifically includes the steps of:
Step S701: determine a first traveling direction and a second traveling direction starting from the first position.
The first traveling direction is the direction pointing from the first position to the start point of the target road; the second traveling direction is the direction pointing from the first position to the end point of the target road.
Step S702: based on the installation positions of the plurality of video capture devices, determine the candidate video capture devices closest to the first position in the candidate location area along the first traveling direction and the second traveling direction, respectively.
In a specific implementation, determining the candidate video capture device closest to the first position in the candidate location area along the first traveling direction and the second traveling direction covers several cases. In fig. 8, M is the first position, A is the start point of target road 1, and B is its end point; the first traveling direction is the direction from M to A, and the second traveling direction is the direction from M to B. In fig. 9, M is the first position, D is the start point of target road 11, and C is its end point; the first traveling direction is the direction from M to D, and the second traveling direction is the direction from M to C.
Here, a first video capture device is a video capture device located between the first position and the start point of the target road (e.g., first video capture devices a and b in fig. 8), and a second video capture device is a video capture device located between the first position and the end point of the target road (e.g., second video capture device c in fig. 8).
In the first case, if a first video capture device is installed on the target road in the first traveling direction, the first video capture device with the smallest distance from the first position is used as the candidate video capture device in that direction. As shown in fig. 8, first video capture devices a and b exist in the direction from M to A; the distances from devices a and b to the first position M are calculated, and device b, which has the smallest distance from M, is taken as the candidate video capture device in this direction.
In the second case, if no first video capture device exists on the target road in the first traveling direction, the candidate video capture device closest to the first position is determined from the associated roads of the target road in the first traveling direction; an associated road in the first traveling direction is a road directly or indirectly connected to the start point of the target road.
In some embodiments, if no first video capture device exists on the target road in the first traveling direction, the target road is taken as the reference road and each adjacent road connected to the start point of the reference road is determined. For any adjacent road, the video capture devices on that road are determined, the distance between each device and the first position (the length of road lying between the device and the first position) is computed, and the device with the smallest distance is selected from the adjacent road as a candidate video capture device.
If no video capture device exists on an adjacent road, that adjacent road is taken as the reference road and the above process is repeated until a candidate video capture device is found on an adjacent road or the boundary of the trajectory determination range of the target object is reached. The trajectory determination range is an area preset for the target object, for example, a district of a certain city. In some embodiments, while repeating this process, if no adjacent road is connected to the start point of the reference road, it is determined that the boundary of the trajectory determination range has been reached; since the present application determines connectivity only between roads located within the trajectory determination area, the absence of an adjacent road for the reference road implies that the boundary of the range has been reached.
In a specific implementation, the adjacent roads of the target road in the first traveling direction are first determined; then, for any adjacent road, the video capture devices located on it are determined, the distance between each device and the first position (the length of road lying between them) is computed, and the device with the smallest distance is selected from the adjacent road as a candidate video capture device.
As shown in fig. 9, no first video capture device exists on the target road in the direction from M to D; the associated roads of the target road in the first traveling direction, i.e., roads 12 to 17 in fig. 9, are therefore determined. Candidate video capture devices are then determined from the adjacent roads (i.e., roads 12 and 17); road 12 is used here as an example of this process.
The video capture devices located on road 12, namely devices d and e, are determined, and the distances between devices d and e and the first position M (the length of road lying between each device and M) are computed: the distance between device d and M is the sum of the distance from d to D and the distance from D to M, and the distance between device e and M is the sum of the distance from e to D and the distance from D to M. Based on these distances, the device with the smallest distance (i.e., device e) is selected from the adjacent road as a candidate video capture device.
In some embodiments, if no video capture device exists on an adjacent road, the video capture device with the smallest distance from the first position (the length of road lying between the device and the first position) is determined as a candidate from each adjacent road connected to the start point of that adjacent road. Taking road 12 in fig. 9 as an example: if no video capture device exists on road 12, candidate video capture devices are determined on the adjacent roads 13 and 14 in the direction of start point E, in the same manner as for road 12, which is not repeated here. If no video capture device exists on any of the adjacent roads connected to the start point, the above process is repeated until a candidate video capture device is found or the boundary of the trajectory determination range of the target object is reached.
In the connectivity relationships between the roads within the trajectory determination range of the target object, the start point of one road is connected to the end point of another. As shown in fig. 9, for road 11, D is the start point and C is the end point, while for road 12, E is the start point and D is the end point.
In the third case, if a second video capture device is installed on the target road in the second traveling direction, the second video capture device with the smallest distance from the first position is used as the candidate video capture device in that direction. As shown in fig. 8, second video capture device c exists in the direction from M to B; since it is the only second video capture device, device c has the smallest distance from the first position M in this direction and is taken as the candidate video capture device in this direction.
In the fourth case, if no second video capture device exists on the target road in the second traveling direction, the candidate video capture device closest to the first position is determined from the associated roads of the target road in the second traveling direction. The selection is analogous to that of the second case, to which reference may be made; it is not repeated here.
It should be noted that the first and second cases are selection methods for candidate video capture devices in the first traveling direction, while the third and fourth cases are selection methods in the second traveling direction; in a specific implementation, these methods are combined according to the actual situation.
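The four cases amount to: search outward from each endpoint of the target road, accumulating along-road distance, and take the nearest device found. The sketch below models this as a shortest-path search over road endpoints, which yields the same nearest device as the road-by-road expansion when road lengths are positive; the graph format, the device representation (a device sits at a known offset from a named point), and the fig. 9-style numbers are assumptions for illustration:

```python
import heapq

def nearest_device(origin_offset, origin_point, graph, devices):
    """graph: {point: [(neighbor_point, road_length), ...]};
    devices: [(device_id, anchor_point, offset_from_anchor), ...].
    origin_offset is the along-road distance from the first position M to
    origin_point (the endpoint of the target road in the chosen direction).
    Returns (device_id, total_distance) for the nearest device, or None."""
    dist = {origin_point: origin_offset}
    heap = [(origin_offset, origin_point)]
    while heap:  # Dijkstra over road endpoints
        d, p = heapq.heappop(heap)
        if d > dist.get(p, float("inf")):
            continue
        for q, length in graph.get(p, ()):
            nd = d + length
            if nd < dist.get(q, float("inf")):
                dist[q] = nd
                heapq.heappush(heap, (nd, q))
    best = None
    for dev_id, point, offset in devices:
        if point in dist:
            total = dist[point] + offset
            if best is None or total < best[1]:
                best = (dev_id, total)
    return best

# Fig. 9-style example: M is 2 units from start point D of target road 11;
# road 12 runs E-D with length 3; device e sits 1 unit from D on road 12,
# device d sits 1 unit from E. Device e should win, as in the text.
graph = {"D": [("E", 3)], "E": [("D", 3)]}
devices = [("d", "E", 1), ("e", "D", 1)]
best = nearest_device(2, "D", graph, devices)
```

The same function is run once per traveling direction, seeded with the respective endpoint of the target road.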
5. And (5) identifying a target object.
In some embodiments, a video list is preconfigured to record the identifiers of the candidate video capture devices and the video data they collect. After the candidate video capture devices are obtained, they are added to the video list, and the server in the trajectory determination device monitors the number and identifiers of the candidate video capture devices in the video list in real time; when it determines that a new candidate video capture device exists, it acquires the video data collected by that device and identifies the target object in it. The specific identification manner is shown in steps S301-S302 above.
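The monitoring loop can be sketched as follows; `matches_target` stands in for the identification of steps S301-S302 (the actual feature matching is not reimplemented here), and the device names are invented:

```python
def identify_target(video_list, processed, matches_target):
    """video_list: {device_id: video_data}; processed: device ids already
    checked. Scans only newly added candidate devices and returns the id of
    the first device whose video identifies the target, else None."""
    for device_id, video in video_list.items():
        if device_id in processed:
            continue
        processed.add(device_id)  # each clip is examined only once
        if matches_target(video):
            return device_id
    return None

video_list = {"cam_a": "clip without target", "cam_b": "clip showing target"}
processed = set()
hit = identify_target(video_list, processed,
                      lambda clip: clip == "clip showing target")
```

Keeping a `processed` set is what lets the server poll the video list repeatedly while only spending recognition work on newly added devices.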
6. A movement track is generated.
When the target object is identified in the video data, the installation position of the candidate video capture device that collected the video data in which the target object was identified is determined as the second position of the target object at the next moment. At the same time, the collection time of that video data is recorded as the target appearance time corresponding to the second position.
And generating a moving track from the first position to the second position along the road direction based on the first position and the second position, and marking the target appearance time corresponding to each position (the first position and the second position) on the moving track.
Fig. 10 is a schematic diagram of a possible movement track, in which point M is the first position, point N is the second position, and the line connecting M and N is the generated movement track of the target object.
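Step 6 can be sketched as below; the along-road path is simplified to an ordered list of points (routing along real road geometry is out of scope), and the positions and times are illustrative:

```python
def build_track(first_pos, first_time, second_pos, second_time, via=()):
    """Movement track from the first to the second position, with the
    target appearance time marked at each recorded position."""
    points = [first_pos, *via, second_pos]
    times = {first_pos: first_time, second_pos: second_time}
    return {"points": points, "appearance_times": times}

track = build_track((120.380, 36.070), "10:00:00",
                    (120.390, 36.080), "10:02:30")
```

In the neighborhood scene the optional `via` points would carry the intermediate road endpoints so the track follows the road direction rather than a straight line.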
2. The first location belongs to a non-neighborhood (open area scene).
If the first location belongs to a non-neighborhood, the process of determining the movement track of the target object includes four steps, as shown in fig. 11, in order: determining a candidate location area, selecting candidate video capture devices, identifying the target object, and generating a movement track.
The following specifically describes the execution process of the above steps:
1. candidate location areas are determined.
As shown in fig. 12, when the first position belongs to a non-neighborhood, after the first position of the target object at the current moment is determined, the candidate location area of the target object at the next moment is predicted as follows:
Step S1201: determine an influence factor based on the moving speed of the target object, the acquisition range of each video capture device, and the recognition speed of the video data.
The influence factor identifies the size of the candidate location area.
Step S1202: determine the candidate location area based on the first position and the influence factor.
In some embodiments, the candidate location area is determined as a circle centered on the first position with the influence factor as its radius.
Specifically, the present application calculates the K value (i.e., the above-described influence factor) by the following formula:

K = 1.5 × (v / s + d)

where s is the recognition speed of the video data (i.e., the number of pieces of video data processed per unit time), d is the diameter of the acquisition range of a video capture device (each video capture device has the same circular acquisition range), and v is the moving speed of the target object.
It should be noted that the factor 1.5 in the above formula is a preset margin for determining the candidate location area, that is, for determining an area that is sufficient with room to spare, so as to reduce the possibility that the target object falls outside the candidate location area at the next moment. The value can be set by the user based on requirements, and the embodiments of the present application do not limit it.
Taking the first position as the center and the determined K value as the radius, the candidate location area where the target object will be at the next moment is determined, together with its size (for example, the area of the candidate location area).
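The radius computation can be sketched as below. The formula K = 1.5 × (v/s + d) is a reconstruction from the variable descriptions in the text (the original formula is not legible in this copy), so treat its exact form as an assumption; the example units are also illustrative:

```python
import math

def influence_factor(v, s, d, margin=1.5):
    """Reconstructed K: margin * (distance moved per recognized video plus
    the capture-range diameter). v: moving speed; s: pieces of video data
    recognized per unit time; d: diameter of a device's circular range."""
    return margin * (v / s + d)

def candidate_area(first_pos, k):
    """Circle centred on the first position with radius K.
    Returns (centre, radius, area)."""
    return first_pos, k, math.pi * k * k

k = influence_factor(v=10.0, s=2.0, d=4.0)  # e.g. m/s, clips/s, metres
centre, radius, area = candidate_area((120.380, 36.070), k)
```

The area value is what step 3's retry loop later compares against the preset threshold.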
2. Candidate video capture devices are selected.
After the candidate location area is determined, the video capture devices located within it are screened from the plurality of video capture devices based on their installation positions, and the screened devices are determined as candidate video capture devices.
3. And (5) identifying a target object.
In some embodiments, a video list is preconfigured to record the identifiers of the candidate video capture devices and the video data they collect. After the candidate video capture devices are obtained, they are added to the video list, and the server in the trajectory determination device monitors the number and identifiers of the candidate video capture devices in the video list in real time; when it determines that a new candidate video capture device exists, it acquires the video data collected by that device and identifies the target object in it. The specific identification manner is shown in steps S301-S302 above.
It should be noted that, if the target object is not identified in the video data of any of the candidate video capture devices, the candidate location area of step 1 is modified: the K value is enlarged by a preset multiple (settable based on requirements, for example, doubled), the candidate location area is redetermined (still centered on the first position of step 1), candidate video capture devices are redetermined within it, and identification of the target object is performed again, i.e., the selection of candidate video capture devices in step 2 and the identification of the target object in step 3 are repeated. If the target object is still not identified, the above process is repeated until the target object is identified or the size of the determined candidate location area exceeds a preset threshold, where the preset threshold is a value set by the user based on requirements; in some embodiments, the size of the candidate location area may be its area.
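The retry loop can be sketched as follows; `mock_find` stands in for steps 2-3 (device selection and identification), and the doubling multiple and area threshold are user-settable values, as the text notes:

```python
import math

def search_with_expansion(k, find_target, multiple=2.0, max_area=1e6):
    """Retry identification with an ever larger radius K until the target
    is found or the candidate area exceeds the preset threshold."""
    while math.pi * k * k <= max_area:
        result = find_target(k)
        if result is not None:
            return result
        k *= multiple  # enlarge K by the preset multiple and retry
    return None

attempts = []
def mock_find(radius):
    """Stand-in: succeeds once the search radius reaches 8 units."""
    attempts.append(radius)
    return "target found" if radius >= 8 else None

result = search_with_expansion(2.0, mock_find)
```

Capping on area rather than radius matches the text, which compares the candidate location area's size against the threshold.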
4. A movement track is generated.
When the target object is identified in the video data, the installation position of the candidate video capture device that collected the video data in which the target object was identified is determined as the second position of the target object at the next moment. At the same time, the collection time of that video data is recorded as the target appearance time corresponding to the second position.
A movement track is generated by connecting the first position and the second position with a straight line, and the target appearance time corresponding to each position (the first position and the second position) is marked on the movement track.
Based on the same application concept, an embodiment of the present application further provides a track determining apparatus, as shown in fig. 13, which may be specifically applied to the track determining device in the foregoing embodiment, including:
a prediction module 1301, configured to determine a first position where a target object is located at a current time, and predict, based on the first position, a candidate location area where the target object is located at a next time;
a selection module 1302 configured to select a candidate video capture device from the plurality of video capture devices based on the candidate location area and the installation locations of the plurality of video capture devices;
The position determining module 1303 is configured to obtain video data collected by the candidate video collecting device, and determine, based on the video data, a second position where the target object is located at a next moment;
a trajectory determination module 1304 is configured to determine a movement trajectory of the target object based on the first location and the second location.
Optionally, if the first location belongs to a neighborhood, the candidate location area includes the target road to which the first location belongs and the associated roads having a connectivity relationship with the target road.
Optionally, the selecting module 1302 is specifically configured to:
determining a first travel direction and a second travel direction starting from the first position; the first traveling direction is a direction from the first position to a start point of the target road; the second traveling direction is a direction pointing from the first position to an end point of the target road; and determining candidate video acquisition equipment closest to the first position in the candidate position area along the first travelling direction and the second travelling direction respectively based on the installation positions of the video acquisition equipment.
Optionally, the selecting module 1302 is specifically configured to:
If a first video acquisition device is installed on the target road in the first travelling direction, taking the first video acquisition device with the smallest distance from the first position as a candidate video acquisition device in the first travelling direction; the first video acquisition device is a video acquisition device positioned between the first position and the starting point of the target road;
if a second video capture device is installed on the target road in the second traveling direction, taking the second video capture device with the smallest distance from the first position as the candidate video capture device in the second traveling direction; the second video capture device is a video capture device located between the first position and the end point of the target road.
Optionally, the selecting module 1302 is specifically configured to:
if the first video acquisition equipment does not exist on the target road in the first travelling direction, determining candidate video acquisition equipment closest to the first position from the associated road of the target road in the first travelling direction; the first video acquisition device is a video acquisition device positioned between the first position and the starting point of the target road;
If no second video capture device exists on the target road in the second traveling direction, determining the candidate video capture device closest to the first position from the associated roads of the target road in the second traveling direction; the second video capture device is a video capture device located between the first position and the end point of the target road.
Optionally, the prediction module 1301 is specifically configured to: if the first position belongs to a non-neighborhood, determining an influence factor based on the moving speed of the target object, the acquisition range of each video acquisition device and the identification speed of video data, wherein the influence factor is used for identifying the size of the candidate position area; the candidate location area is determined based on the first location and the impact factor.
Optionally, the selecting module 1302 is specifically configured to: screening video acquisition equipment positioned in the candidate position area from a plurality of video acquisition equipment based on the installation positions of the video acquisition equipment; and determining the screened video acquisition equipment as the candidate video acquisition equipment.
Optionally, the location determining module 1303 is specifically configured to: matching the video data acquired by each candidate video acquisition device with the identification information of the target object respectively, wherein the identification information of different target objects is different; and determining the installation position of the candidate video acquisition equipment successfully matched as a second position of the target object at the next moment.
A trajectory determining device according to this embodiment of the present application, which includes a server 1400 and a plurality of video capturing devices 1404 (only one shown in the figure), is described below with reference to fig. 14. The device 1400 shown in fig. 14 is merely an example and should not be construed as limiting the functionality and scope of use of embodiments of the present application.
As shown in fig. 14, the device 1400 is embodied in the form of a general purpose device. The components of the device 1400 may include, but are not limited to: the at least one processor 1401, the at least one memory 1402, and the bus 1403 connecting the different system components (including the memory 1402 and the processor 1401), wherein the memory stores program code that, when executed by the processor, causes the processor to perform the steps of:
determining a first position of a target object at the current moment, and predicting a candidate position area of the target object at the next moment based on the first position; selecting a candidate video capture device from the plurality of video capture devices based on the candidate location area and the installation locations of the plurality of video capture devices; acquiring video data acquired by the candidate video acquisition equipment, and determining a second position of the target object at the next moment based on the video data; and determining the moving track of the target object based on the first position and the second position.
Bus 1403 represents one or more of several types of bus structures, including a memory bus or memory controller, a peripheral bus, a processor, and a local bus using any of a variety of bus architectures.
Memory 1402 may include readable media in the form of volatile memory, such as Random Access Memory (RAM) 14021 and/or cache memory 14022, and may further include Read Only Memory (ROM) 14023.
Memory 1402 may also include a program/utility 14025 having a set (at least one) of program modules 14024, such program modules 14024 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each or some combination of which may include an implementation of a network environment.
The device 1400 may communicate with the plurality of video capture devices 1404 directly and/or through any device (e.g., a router or modem) that enables the device 1400 to communicate with one or more other devices. In addition, the device 1400 may communicate with one or more devices that enable a user to interact with it. Such communication may occur through an input/output (I/O) interface 1405. The device 1400 may also communicate with one or more networks, such as a local area network (LAN), a wide area network (WAN), and/or a public network such as the Internet, via the network adapter 1406. As shown, the network adapter 1406 communicates with the other modules of the device 1400 over the bus 1403. It should be appreciated that although not shown, other hardware and/or software modules may be used in conjunction with the device 1400, including but not limited to: microcode, device drivers, redundant processors, external disk drive arrays, RAID systems, tape drives, data backup storage systems, and the like.
Optionally, if the first location belongs to a neighborhood, the candidate location area includes the target road to which the first location belongs and the associated roads having a connectivity relationship with the target road.
Optionally, the above processor is specifically configured to: determining a first travel direction and a second travel direction starting from the first position; the first traveling direction is a direction from the first position to a start point of the target road; the second traveling direction is a direction pointing from the first position to an end point of the target road; and determining candidate video acquisition equipment closest to the first position in the candidate position area along the first travelling direction and the second travelling direction respectively based on the installation positions of the video acquisition equipment.
Optionally, the above processor is specifically configured to: if a first video capture device is installed on the target road in the first traveling direction, take the first video capture device with the smallest distance from the first position as the candidate video capture device in the first traveling direction, the first video capture device being a video capture device located between the first position and the start point of the target road; and if a second video capture device is installed on the target road in the second traveling direction, take the second video capture device with the smallest distance from the first position as the candidate video capture device in the second traveling direction, the second video capture device being a video capture device located between the first position and the end point of the target road.
Optionally, the above processor is specifically configured to: if no first video capture device exists on the target road in the first traveling direction, determine the candidate video capture device closest to the first position from the associated roads of the target road in the first traveling direction, the first video capture device being a video capture device located between the first position and the start point of the target road; and if no second video capture device exists on the target road in the second traveling direction, determine the candidate video capture device closest to the first position from the associated roads of the target road in the second traveling direction, the second video capture device being a video capture device located between the first position and the end point of the target road.
Optionally, the above processor is specifically configured to: if the first position belongs to a non-neighborhood, determining an influence factor based on the moving speed of the target object, the acquisition range of each video acquisition device and the identification speed of video data, wherein the influence factor is used for identifying the size of the candidate position area; the candidate location area is determined based on the first location and the impact factor.
Optionally, the above processor is specifically configured to: screening video acquisition equipment positioned in the candidate position area from a plurality of video acquisition equipment based on the installation positions of the video acquisition equipment; and determining the screened video acquisition equipment as the candidate video acquisition equipment.
Optionally, the above processor is specifically configured to: matching the video data acquired by each candidate video acquisition device with the identification information of the target object respectively, wherein the identification information of different target objects is different; and determining the installation position of the candidate video acquisition equipment successfully matched as a second position of the target object at the next moment.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various modifications and variations can be made to the present application without departing from the spirit or scope of the application. Thus, it is intended that the present application also include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.

Claims (6)

1. A track determining method, comprising:
determining a first position of a target object at the current moment, and predicting a candidate position area of the target object at the next moment based on the first position;
selecting a candidate video capture device from the plurality of video capture devices based on the candidate location area and the installation locations of the plurality of video capture devices;
acquiring video data acquired by the candidate video acquisition devices, and matching the video data acquired by each candidate video acquisition device against the identification information of the target object, wherein different target objects have different identification information; and determining the installation position of the successfully matched candidate video acquisition device as a second position of the target object at the next moment;
determining a moving track of the target object based on the first position and the second position;
wherein predicting, based on the first location, a candidate location area where the target object is located at a next moment, includes:
if the first position belongs to a block, the candidate position area comprises a target road to which the first position belongs and an associated road with a communication relation with the target road;
if the first position belongs to a non-block area, determining an influence factor based on the moving speed of the target object, the acquisition range of each video acquisition device and the recognition speed of the video data, wherein the influence factor indicates the size of the candidate location area; and determining the candidate location area based on the first position and the influence factor;
The selecting a candidate video capture device from the plurality of video capture devices based on the candidate location area and the installation locations of the plurality of video capture devices, comprising:
determining a first travelling direction and a second travelling direction starting from the first position; the first travelling direction being the direction pointing from the first position to the start point of the target road, and the second travelling direction being the direction pointing from the first position to the end point of the target road;
and determining, based on the installation positions of the video acquisition devices, the candidate video acquisition devices closest to the first position in the candidate location area along the first travelling direction and the second travelling direction respectively.
2. The method of claim 1, wherein the determining a candidate video capture device within the candidate location area that is closest to the first location along the first direction of travel and the second direction of travel, respectively, comprises:
if a first video acquisition device is installed on the target road in the first travelling direction, taking the first video acquisition device with the smallest distance from the first position as the candidate video acquisition device in the first travelling direction, the first video acquisition device being a video acquisition device located between the first position and the start point of the target road;
if a second video acquisition device is installed on the target road in the second travelling direction, taking the second video acquisition device with the smallest distance from the first position as the candidate video acquisition device in the second travelling direction, the second video acquisition device being a video acquisition device located between the first position and the end point of the target road.
3. The method of claim 1, wherein the determining a candidate video capture device within the candidate location area that is closest to the first location along the first direction of travel and the second direction of travel, respectively, comprises:
if no first video acquisition device exists on the target road in the first travelling direction, determining the candidate video acquisition device closest to the first position from the associated roads of the target road in the first travelling direction, the first video acquisition device being a video acquisition device located between the first position and the start point of the target road;
if no second video acquisition device exists on the target road in the second travelling direction, determining the candidate video acquisition device closest to the first position from the associated roads of the target road in the second travelling direction, the second video acquisition device being a video acquisition device located between the first position and the end point of the target road.
4. The method of claim 1, wherein selecting a candidate video capture device from a plurality of video capture devices based on the candidate location area and the installation locations of the plurality of video capture devices comprises:
screening video acquisition equipment positioned in the candidate position area from a plurality of video acquisition equipment based on the installation positions of the video acquisition equipment;
and determining the screened video acquisition equipment as the candidate video acquisition equipment.
5. A trajectory determining device, characterized in that the device comprises a server and a plurality of video acquisition devices;
the video acquisition equipment is used for acquiring video data;
the server is used for determining a first position of the target object at the current moment and predicting a candidate position area of the target object at the next moment based on the first position;
selecting a candidate video capture device from the plurality of video capture devices based on the candidate location area and the installation locations of the plurality of video capture devices;
acquiring video data acquired by the candidate video acquisition devices, and matching the video data acquired by each candidate video acquisition device against the identification information of the target object, wherein different target objects have different identification information; determining the installation position of the successfully matched candidate video acquisition device as a second position of the target object at the next moment;
determining a moving track of the target object based on the first position and the second position;
wherein predicting, based on the first location, a candidate location area where the target object is located at a next moment, includes:
if the first position belongs to a block, the candidate position area comprises a target road to which the first position belongs and an associated road with a communication relation with the target road;
if the first position belongs to a non-block area, determining an influence factor based on the moving speed of the target object, the acquisition range of each video acquisition device and the recognition speed of the video data, wherein the influence factor indicates the size of the candidate location area; and determining the candidate location area based on the first position and the influence factor;
the selecting a candidate video capture device from the plurality of video capture devices based on the candidate location area and the installation locations of the plurality of video capture devices, comprising:
determining a first travelling direction and a second travelling direction starting from the first position; the first travelling direction being the direction pointing from the first position to the start point of the target road, and the second travelling direction being the direction pointing from the first position to the end point of the target road;
and determining, based on the installation positions of the video acquisition devices, the candidate video acquisition devices closest to the first position in the candidate location area along the first travelling direction and the second travelling direction respectively.
6. A computer readable storage medium having stored thereon computer program instructions, which when executed by a processor, implement the steps of the method of any of claims 1 to 4.
CN202310813483.6A 2023-07-05 2023-07-05 Track determination method, track determination equipment and track determination medium Active CN116543356B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310813483.6A CN116543356B (en) 2023-07-05 2023-07-05 Track determination method, track determination equipment and track determination medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310813483.6A CN116543356B (en) 2023-07-05 2023-07-05 Track determination method, track determination equipment and track determination medium

Publications (2)

Publication Number Publication Date
CN116543356A CN116543356A (en) 2023-08-04
CN116543356B (en) 2023-10-27

Family

ID=87454543

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310813483.6A Active CN116543356B (en) 2023-07-05 2023-07-05 Track determination method, track determination equipment and track determination medium

Country Status (1)

Country Link
CN (1) CN116543356B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111523643A (en) * 2020-04-10 2020-08-11 商汤集团有限公司 Trajectory prediction method, apparatus, device and storage medium
CN112839855A (en) * 2020-12-31 2021-05-25 华为技术有限公司 Trajectory prediction method and device
CN113624245A (en) * 2020-05-08 2021-11-09 北京京东乾石科技有限公司 Navigation method and device, computer storage medium and electronic equipment
EP3961489A1 (en) * 2020-08-28 2022-03-02 Beijing Baidu Netcom Science And Technology Co., Ltd. Method and apparatus for identifying updated road, device and computer storage medium
CN114332798A (en) * 2021-12-30 2022-04-12 南京领行科技股份有限公司 Processing method and related device for network car booking environment information
WO2022105437A1 (en) * 2020-11-19 2022-05-27 歌尔股份有限公司 Path planning method and apparatus, and electronic device
CN114549873A (en) * 2022-02-28 2022-05-27 浙江大华技术股份有限公司 Image archive association method and device, electronic equipment and storage medium
CN114581870A (en) * 2022-03-07 2022-06-03 上海人工智能创新中心 Trajectory planning method, apparatus, device and computer-readable storage medium
CN114898307A (en) * 2022-07-11 2022-08-12 浙江大华技术股份有限公司 Object tracking method and device, electronic equipment and storage medium

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111523643A (en) * 2020-04-10 2020-08-11 商汤集团有限公司 Trajectory prediction method, apparatus, device and storage medium
WO2021204092A1 (en) * 2020-04-10 2021-10-14 商汤集团有限公司 Track prediction method and apparatus, and device and storage medium
CN113624245A (en) * 2020-05-08 2021-11-09 北京京东乾石科技有限公司 Navigation method and device, computer storage medium and electronic equipment
EP3961489A1 (en) * 2020-08-28 2022-03-02 Beijing Baidu Netcom Science And Technology Co., Ltd. Method and apparatus for identifying updated road, device and computer storage medium
WO2022105437A1 (en) * 2020-11-19 2022-05-27 歌尔股份有限公司 Path planning method and apparatus, and electronic device
CN112839855A (en) * 2020-12-31 2021-05-25 华为技术有限公司 Trajectory prediction method and device
CN114332798A (en) * 2021-12-30 2022-04-12 南京领行科技股份有限公司 Processing method and related device for network car booking environment information
CN114549873A (en) * 2022-02-28 2022-05-27 浙江大华技术股份有限公司 Image archive association method and device, electronic equipment and storage medium
CN114581870A (en) * 2022-03-07 2022-06-03 上海人工智能创新中心 Trajectory planning method, apparatus, device and computer-readable storage medium
CN114898307A (en) * 2022-07-11 2022-08-12 浙江大华技术股份有限公司 Object tracking method and device, electronic equipment and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Mobile phone data trajectory matching based on hidden Markov models and dynamic programming; Chen Hao; Xu Changhui; Zhang Xiaoping; Wang Jing; Song Xianfeng; Geography and Geo-Information Science (Issue 03); full text *
Visualization and association analysis of 3D trajectories of moving objects; Guo Yang; Ma Cuixia; Teng Dongxing; Yang; Wang Hongan; Journal of Software (Issue 05); full text *

Also Published As

Publication number Publication date
CN116543356A (en) 2023-08-04

Similar Documents

Publication Publication Date Title
CN109472884B (en) Unmanned vehicle data storage method, device, equipment and storage medium
CN102595103B (en) Method based on geographic information system (GIS) map deduction intelligent video
CN104318327A (en) Predictive parsing method for track of vehicle
CN113011323B (en) Method for acquiring traffic state, related device, road side equipment and cloud control platform
CN107862868B (en) A method of track of vehicle prediction is carried out based on big data
CN111988524A (en) Unmanned aerial vehicle and camera collaborative obstacle avoidance method, server and storage medium
CN102595105A (en) Application method based on geographic information system (GIS) map lens angle information configuration
CN112435469A (en) Vehicle early warning control method and device, computer readable medium and electronic equipment
CN115294169A (en) Vehicle tracking method and device, electronic equipment and storage medium
CN116778292B (en) Method, device, equipment and storage medium for fusing space-time trajectories of multi-mode vehicles
CN114550449A (en) Vehicle track completion method and device, computer readable medium and electronic equipment
CN112085953A (en) Traffic command method, device and equipment
CN112233428A (en) Traffic flow prediction method, traffic flow prediction device, storage medium and equipment
CN116543356B (en) Track determination method, track determination equipment and track determination medium
CN112365520B (en) Pedestrian target real-time tracking system and method based on video big data resource efficiency evaluation
CN105489010A (en) System and method for monitoring and analyzing fast road travel time reliability
CN109446437B (en) Information mining method, device, server and storage medium
CN115457777B (en) Specific vehicle traceability analysis method
CN113850837B (en) Video processing method and device, electronic equipment, storage medium and computer product
CN114286086B (en) Camera detection method and related device
CN109800685A (en) The determination method and device of object in a kind of video
CN112488069B (en) Target searching method, device and equipment
CN115169588A (en) Electrographic computation space-time trajectory vehicle code correlation method, device, equipment and storage medium
CN112541457A (en) Searching method and related device for monitoring node
KR101754995B1 (en) Tracking criminals cctv apparatus

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant