CN117372469A - Target tracking system, method, device, equipment and medium - Google Patents
- Publication number
- CN117372469A (application number CN202311147868.XA)
- Authority
- CN
- China
- Prior art keywords
- target
- target tracking
- target object
- video
- edge server
- Prior art date
- Legal status
- Pending
Classifications
- G06T7/20 — Image analysis; analysis of motion
- G06F9/4881 — Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
- G06F9/5038 — Allocation of resources to a machine, considering the execution order of a plurality of tasks
- G06T11/00 — 2D [two-dimensional] image generation
- G06T3/4053 — Scaling of whole images or parts thereof based on super-resolution
- G06F2209/484 — Precedence
- G06F2209/502 — Proximity
- G06F2209/5021 — Priority
- G06T2207/10016 — Video; image sequence
- G06T2207/30232 — Surveillance
- Y02D30/70 — Reducing energy consumption in wireless communication networks
Abstract
The application discloses a target tracking system, method, device, electronic device and storage medium. A first terminal device collects first monitoring data of a preset scene, identifies a target object in the first monitoring data, and transmits the target object to a central server. The central server generates a target tracking task based on the target object and sends the task to a plurality of second terminal devices. Each second terminal device collects second monitoring data of the target object corresponding to the target tracking task and transmits it to an edge server; the second monitoring data comprise a background image and a motion video of the target object. The edge server performs fitting processing on the second monitoring data to generate a first synthesized video for tracking the target object, and transmits the first synthesized video to the central server. Compared with transmitting the background image and the motion video directly to the central server, this reduces the overall volume of transmitted data and helps improve the real-time performance of target tracking.
Description
Technical Field
The application belongs to the field of data processing, and particularly relates to a target tracking system, method, device, equipment and storage medium.
Background
Video monitoring has become an important supporting means for modern city management. Besides traditional cameras, unmanned aerial vehicles are increasingly used to assist in collecting monitoring video, and combining the two is a new trend that can effectively realize tracking of a moving target in the monitoring video.
However, the monitoring videos collected by cameras and unmanned aerial vehicles have a large data volume; uploading all of them to a server for analysis requires substantial network bandwidth and causes a high network load.
To realize real-time tracking of a moving target under current network-bandwidth limits, either the resolution of the uploaded monitoring video or its transmission speed must be reduced. Because a good balance between resolution and transmission speed is difficult to achieve, the real-time tracking effect on the moving target is poor.
Disclosure of Invention
The embodiments of the present application aim to provide a target tracking system, method, device, equipment and storage medium, which can solve the problem that the real-time tracking effect on a moving target is poor because it is difficult to strike a good balance between the resolution and the transmission speed of the monitoring video.
In a first aspect, embodiments of the present application provide a target tracking system, including:
the first terminal equipment is used for collecting first monitoring data of a preset scene, identifying a target object in the first monitoring data and transmitting the target object to the central server;
the central server is used for generating a target tracking task based on the target object and sending the target tracking task to a plurality of second terminal devices;
the second terminal device is configured to collect second monitoring data of a target object corresponding to the target tracking task, and transmit the second monitoring data to an edge server; the second monitoring data comprises a background image and a motion video of the target object;
and the edge server is used for carrying out fitting processing on the second monitoring data, generating a first synthesized video for tracking the target object, and transmitting the first synthesized video to the central server.
Optionally, the second terminal device is further configured to transmit the second monitoring data to the central server.
Optionally, the central server is further configured to perform fitting processing on the second monitoring data, generate a second composite video for tracking the target object, and store the second composite video; the second composite video has a higher precision than the first composite video.
Optionally, the central server is specifically configured to reconstruct a super-resolution image of the motion video based on the background image, and generate a second composite video that tracks the target object.
Optionally, the second terminal device includes a camera and a drone;
the camera is used for collecting the motion video of the target object corresponding to the target tracking task and transmitting the motion video to the edge server;
the unmanned aerial vehicle is used for collecting a background image of a target object corresponding to the target tracking task and transmitting the background image to the edge server.
Optionally, there are a plurality of cameras;
the cameras are specifically configured to, when there are a plurality of target tracking tasks, respectively collect, with each camera, a motion video of the target object corresponding to each target tracking task, and transmit the motion videos to an edge server.
Optionally, the central server is specifically configured to send the target tracking task and the video resolution to a plurality of second terminal devices;
the camera is specifically configured to collect a motion video of a target object corresponding to the target tracking task according to the video resolution, and transmit the motion video to an edge server.
Optionally, the central server is specifically configured to send the target tracking task and the number of image transmissions to a plurality of second terminal devices;
the unmanned aerial vehicle is specifically configured to collect background images of a target object corresponding to the target tracking task, and transmit the background images of the image transmission number to an edge server in a preset time period.
Optionally, the central server is further configured to receive network load of a base station and computing power usage information of the edge server; analyzing the network load and the computing power use information to generate a routing scheduling instruction of each second terminal device; sending the routing scheduling instruction to the second terminal equipment;
the second terminal device is specifically configured to determine a data transmission path according to the routing scheduling instruction; and transmitting the second monitoring data to an edge server based on the data transmission path.
Optionally, the central server is specifically configured to determine a plurality of alternative routes according to the network load; determining a target network node matched with the target tracking task according to the computing power use information; deleting alternative routes comprising other network nodes than the target network node; and determining a target route with the transmission speed meeting a preset condition from the rest alternative routes, and generating a route scheduling instruction of each second terminal device based on the target route.
Optionally, the central server is specifically configured to receive, in real time, a network load of a base station and computing power usage information of the edge server; and under the condition that the current network load is larger than a preset threshold value, analyzing the network load and the computing power use information, and updating the routing scheduling instruction of each second terminal device.
Optionally, the central server is specifically configured to generate a plurality of target tracking tasks based on a plurality of target objects, and determine a priority of each target tracking task; and generating a routing scheduling instruction of each second terminal device based on the priority, the network load information and the computing power use information.
Optionally, the central server is specifically configured to receive, in real time, a network load of a base station and computing power usage information of the edge server; under the condition that the current network load is larger than the preset threshold, determining a reduction value of video resolution and/or image transmission quantity corresponding to each target tracking task according to the priority, and obtaining the reduced video resolution and/or image transmission quantity;
the second terminal device is specifically configured to transmit the second monitoring data to an edge server based on the reduced video resolution and/or image transmission number.
Optionally, the edge server is configured to extract a background feature point of a background area except for the target object for an area in a target frame of the motion video and a position relationship between the background feature point and the target object; determining a target background image matched with the timestamp of the target frame; removing a target object in the target background image, and extracting image feature points of the target background image after the target object is removed; comparing the background feature points with the image feature points, and fitting a target object in the target frame to the target background image according to the position relation to obtain a first synthetic image; the first composite image is combined into a first composite video.
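The fitting steps above can be sketched as follows. This is a minimal NumPy-only illustration: the patent does not prescribe a particular feature detector, so detection and matching are abstracted into a single matched anchor point shared by the frame and the background image, and `fit_frame`, its argument layout, and the toy data are all hypothetical.

```python
# Fitting step sketch: paste the target region of a motion-video frame onto
# the timestamp-matched background image, using the stored positional
# relation between a background feature point and the target object.
import numpy as np

def fit_frame(frame, target_box, background, anchor_frame, anchor_bg):
    """Composite the target region of `frame` onto `background`.

    target_box   -- (x, y, w, h) of the target object in `frame`
    anchor_frame -- (x, y) of a background feature point in `frame`
    anchor_bg    -- (x, y) of the same feature point in `background`
    """
    x, y, w, h = target_box
    # Positional relation between the feature point and the target object.
    dx, dy = x - anchor_frame[0], y - anchor_frame[1]
    # Place the target at the corresponding position in the background image.
    bx, by = anchor_bg[0] + dx, anchor_bg[1] + dy
    out = background.copy()
    out[by:by + h, bx:bx + w] = frame[y:y + h, x:x + w]
    return out

# Toy usage: a 2x2 bright "target" pasted into an 8x8 background.
frame = np.zeros((8, 8), np.uint8)
frame[3:5, 3:5] = 255                                   # the moving target
bg = np.zeros((8, 8), np.uint8)
result = fit_frame(frame, (3, 3, 2, 2), bg, (1, 1), (2, 2))
```

A production version would extract many feature points in the frame's non-target area and in the object-removed background image, match their descriptors, and estimate a full homography rather than a single translation.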
In a second aspect, an embodiment of the present application provides a method for target tracking, applied to a central server, where the method includes:
acquiring a target object uploaded by first terminal equipment;
generating a target tracking task based on the target object, and sending the target tracking task to a plurality of second terminal devices; the second terminal equipment is enabled to acquire second monitoring data of a target object corresponding to the target tracking task, wherein the second monitoring data comprises a background image and a motion video;
acquiring a first synthesized video for tracking the target object, uploaded by an edge server; the first synthesized video is obtained by the edge server fitting the second monitoring data.
Optionally, the method further comprises:
acquiring the second monitoring data uploaded by the second terminal equipment;
fitting the second monitoring data to generate a second synthesized video for tracking the target object; the second synthesized video has higher precision than the first synthesized video;
and storing the second synthesized video.
Optionally, the fitting processing is performed on the second monitoring data to generate a second composite video for tracking the target object, including:
and reconstructing the super-resolution image of the motion video based on the background image to generate a second synthesized video for tracking the target object.
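The reference-based reconstruction above can be illustrated with a toy sketch. A real system would use a learned super-resolution model; `reference_sr`, the nearest-neighbour upsampling, and the mask-based detail reuse here are illustrative assumptions, with only the data flow (low-resolution motion frame plus high-resolution background image) taken from the text.

```python
# Toy sketch of reference-based reconstruction: the low-resolution motion
# frame is upsampled, and outside the target area the sharp background
# pixels are reused directly.
import numpy as np

def reference_sr(low_res, background, target_mask, scale=2):
    """Upsample `low_res` by `scale`; keep background pixels where mask is False."""
    up = np.kron(low_res, np.ones((scale, scale), low_res.dtype))  # NN upsample
    return np.where(target_mask, up, background)

lr = np.array([[10, 20], [30, 40]], np.uint8)          # low-res frame (2x2)
bg = np.arange(16, dtype=np.uint8).reshape(4, 4)       # sharp background (4x4)
mask = np.zeros((4, 4), bool)
mask[0:2, 0:2] = True                                  # target in the top-left
hi = reference_sr(lr, bg, mask)                        # 4x4 reconstruction
```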
Optionally, the second terminal device includes a camera;
the sending the target tracking task to the plurality of second terminal devices includes:
and sending the target tracking task and the video resolution to a plurality of second terminal devices so that the camera acquires the motion video of the target object corresponding to the target tracking task according to the video resolution.
Optionally, the second terminal device includes an unmanned aerial vehicle;
the sending the target tracking task to the plurality of second terminal devices includes:
and sending the target tracking task and the image transmission quantity to a plurality of second terminal devices, so that the unmanned aerial vehicle transmits the background images of the image transmission quantity to the edge server in a preset time period.
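The per-period image budget above can be sketched as a simple even-spacing schedule. The even-spacing policy and the function name are assumptions; the patent fixes only the number of images transmitted per preset time period, not when within the period they are sent.

```python
# Even spacing of the drone's background-image captures over the preset period.
def capture_times(period_s, n_images):
    """Return `n_images` capture timestamps spread evenly over `period_s` seconds."""
    if n_images <= 0:
        return []
    step = period_s / n_images
    return [round(i * step, 3) for i in range(n_images)]

# e.g. 5 background images over a 10-second window
print(capture_times(10, 5))
```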
Optionally, the method further comprises:
receiving network load of a base station and computing power use information of the edge server;
analyzing the network load and the computing power use information to generate a routing scheduling instruction of each second terminal device;
and sending the routing scheduling instruction to the second terminal equipment.
Optionally, the analyzing the network load and the computing power usage information generates a routing scheduling instruction of each second terminal device, including:
determining a plurality of alternative routes according to the network load;
determining a target network node matched with the target tracking task according to the computing power use information;
deleting alternative routes comprising other network nodes than the target network node;
and determining a target route with the transmission speed meeting a preset condition from the rest alternative routes, and generating a route scheduling instruction of each second terminal device based on the target route.
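The four route-selection steps above can be sketched as follows, assuming routes, base-station loads, and edge-server utilisation are available as plain dictionaries. The field names, the thresholds, and the "least-utilised edge server" rule for choosing the target network node are illustrative assumptions; only the filtering/selection sequence comes from the text.

```python
# Route selection sketch on toy data structures.
def schedule_route(routes, load, cpu_usage, max_load=0.8, min_speed=50.0):
    # Step 1: alternative routes -- drop routes through overloaded base stations.
    candidates = [r for r in routes if load[r["base_station"]] <= max_load]
    # Step 2: target network node -- here, the least-utilised edge server.
    target = min(cpu_usage, key=cpu_usage.get)
    # Step 3: delete alternatives that include any other edge node.
    candidates = [r for r in candidates if r["edge_node"] == target]
    # Step 4: among the rest, pick a route whose speed meets the preset condition.
    fast_enough = [r for r in candidates if r["speed_mbps"] >= min_speed]
    return max(fast_enough, key=lambda r: r["speed_mbps"]) if fast_enough else None

routes = [
    {"base_station": "bs1", "edge_node": "e1", "speed_mbps": 80},
    {"base_station": "bs1", "edge_node": "e2", "speed_mbps": 120},
    {"base_station": "bs2", "edge_node": "e1", "speed_mbps": 200},
]
load = {"bs1": 0.4, "bs2": 0.95}        # bs2 is overloaded
cpu_usage = {"e1": 0.3, "e2": 0.7}      # e1 has spare computing power
best = schedule_route(routes, load, cpu_usage)
```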
Optionally, the receiving network load of the base station and the computing power usage information of the edge server includes:
receiving network load of a base station and computing power use information of the edge server in real time;
the analyzing the network load and the computing power use information to generate a routing scheduling instruction of each second terminal device, including:
and under the condition that the current network load is larger than a preset threshold value, analyzing the network load and the computing power use information, and updating the routing scheduling instruction of each second terminal device.
Optionally, the analyzing the network load and the computing power usage information generates a routing scheduling instruction of each second terminal device, including:
generating a plurality of target tracking tasks based on a plurality of target objects respectively, and determining the priority of each target tracking task;
and generating a routing scheduling instruction of each second terminal device based on the priority, the network load information and the computing power use information.
Optionally, the receiving network load of the base station and the computing power usage information of the edge server includes:
receiving network load of a base station and computing power use information of the edge server in real time;
After generating the target tracking tasks based on the target objects and determining the priority of each target tracking task, the method further comprises:
and under the condition that the current network load is larger than the preset threshold, determining a reduction value of the video resolution and/or the image transmission quantity corresponding to each target tracking task according to the priority, and obtaining the reduced video resolution and/or the reduced image transmission quantity.
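The priority-driven degradation above can be sketched as follows. The patent specifies only that the reduction value for each task's video resolution and/or image-transmission quantity is determined from its priority when the network load exceeds the threshold; the inverse-priority scaling rule used here is an assumption.

```python
# Priority-driven quality reduction: when the current network load exceeds
# the preset threshold, each task's video resolution and image-transmission
# count are scaled down according to its priority (1 = highest).
def degrade(tasks, load, threshold=0.8):
    if load <= threshold:
        return tasks
    out = {}
    for tid, t in tasks.items():
        f = 1.0 / t["priority"]          # priority 1 keeps full quality
        w, h = t["resolution"]
        out[tid] = {"priority": t["priority"],
                    "resolution": (int(w * f), int(h * f)),
                    "images": max(1, int(t["images"] * f))}
    return out

tasks = {"t1": {"priority": 1, "resolution": (1920, 1080), "images": 30},
         "t2": {"priority": 2, "resolution": (1920, 1080), "images": 30}}
reduced = degrade(tasks, load=0.9)       # load above the 0.8 threshold
```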
In a third aspect, an embodiment of the present application provides a method for target tracking, applied to an edge server, where the method includes:
receiving second monitoring data of a target object corresponding to a target tracking task acquired by second terminal equipment;
fitting the second monitoring data to generate a first synthesized video for tracking the target object;
and transmitting the first synthesized video to a central server.
Optionally, the fitting processing is performed on the second monitoring data to generate a first composite video for tracking the target object, including:
extracting background feature points of a background area except the target object in a target frame of the motion video and the position relation between the background feature points and the target object;
determining a target background image matched with the timestamp of the target frame;
removing a target object in the target background image, and extracting image feature points of the target background image after the target object is removed;
comparing the background feature points with the image feature points, and fitting a target object in the target frame to the target background image according to the position relation to obtain a first synthetic image; the first composite image is combined into a first composite video.
In a fourth aspect, an embodiment of the present application provides a method for target tracking, which is applied to a second terminal device, where the method includes:
acquiring a target tracking task generated by a central server based on the target object;
collecting second monitoring data of a target object corresponding to the target tracking task;
transmitting the second monitoring data to an edge server; and fitting the second monitoring data by the edge server to generate a first synthesized video for tracking the target object.
Optionally, the method further comprises:
and transmitting the second monitoring data to the central server.
Optionally, the second terminal device is a camera or an unmanned aerial vehicle, and the method further includes:
Under the condition that the second terminal equipment is a camera, acquiring a motion video of a target object corresponding to the target tracking task, and transmitting the motion video to an edge server;
and under the condition that the second terminal equipment is an unmanned aerial vehicle, acquiring a background image of a target object corresponding to the target tracking task, and transmitting the background image to an edge server.
Optionally, there are a plurality of cameras, and the collecting the motion video of the target object corresponding to the target tracking task and transmitting the motion video to an edge server includes:
when there are a plurality of target tracking tasks, respectively collecting, with each camera, a motion video of the target object corresponding to each target tracking task, and transmitting the motion videos to an edge server.
Optionally, the acquiring the target tracking task generated by the central server based on the target object includes:
acquiring a target tracking task and video resolution which are generated by a central server based on the target object;
the step of collecting the motion video of the target object corresponding to the target tracking task and transmitting the motion video to an edge server comprises the following steps:
And acquiring a motion video of the target object corresponding to the target tracking task according to the video resolution, and transmitting the motion video to an edge server.
Optionally, the acquiring the target tracking task generated by the central server based on the target object includes:
acquiring a target tracking task and the image transmission quantity generated by a central server based on the target object;
the step of collecting the background image of the target object corresponding to the target tracking task and transmitting the background image to an edge server comprises the following steps:
and collecting background images of the target object corresponding to the target tracking task, and transmitting the background images of the image transmission quantity to an edge server in a preset time period.
Optionally, the method further comprises:
acquiring a routing scheduling instruction sent by a central server;
determining a data transmission path according to the routing scheduling instruction;
and transmitting the second monitoring data to an edge server based on the data transmission path.
In a fifth aspect, an embodiment of the present application provides an apparatus for target tracking, applied to a central server, where the apparatus includes:
the first acquisition module is used for acquiring the target object uploaded by the first terminal equipment;
the generating module is used for generating a target tracking task based on the target object and sending the target tracking task to a plurality of second terminal devices; the second terminal equipment is enabled to acquire second monitoring data of a target object corresponding to the target tracking task, wherein the second monitoring data comprises a background image and a motion video;
the second acquisition module is used for acquiring a first synthesized video which is uploaded by the edge server and is used for tracking the target object; and the first synthesized video is obtained by fitting the second monitoring data through the edge server.
In a sixth aspect, an embodiment of the present application provides an apparatus for target tracking, applied to an edge server, where the apparatus includes:
the receiving module is used for receiving second monitoring data of the target object corresponding to the target tracking task acquired by the second terminal equipment;
the fitting module is used for carrying out fitting processing on the second monitoring data to generate a first synthesized video for tracking the target object;
and the transmission module is used for transmitting the first synthesized video to a central server.
In a seventh aspect, an embodiment of the present application provides an apparatus for target tracking, which is applied to a second terminal device, where the apparatus includes:
The acquisition module is used for acquiring a target tracking task generated by the central server based on the target object;
the acquisition module is used for acquiring second monitoring data of the target object corresponding to the target tracking task;
the transmission module is used for transmitting the second monitoring data to an edge server; and fitting the second monitoring data by the edge server to generate a first synthesized video for tracking the target object.
In an eighth aspect, embodiments of the present application provide an electronic device comprising a processor and a memory storing a program or instructions executable on the processor, which when executed by the processor, implement the steps of the method as described in the first aspect.
In a ninth aspect, embodiments of the present application provide a readable storage medium having stored thereon a program or instructions which when executed by a processor implement the steps of the method according to the first aspect.
In a tenth aspect, embodiments of the present application provide a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and where the processor is configured to execute a program or instructions to implement a method according to the first aspect.
In an eleventh aspect, embodiments of the present application provide a computer program product stored in a storage medium, the program product being executable by at least one processor to implement the method according to the first aspect.
In the embodiment of the application, a first terminal device collects first monitoring data of a preset scene, identifies a target object in the first monitoring data, and transmits the target object to a central server; the central server generates a target tracking task based on the target object and sends the target tracking task to a plurality of second terminal devices; the second terminal equipment collects second monitoring data of a target object corresponding to the target tracking task and transmits the second monitoring data to the edge server; the second monitoring data comprises a background image and a motion video of the target object; and the edge server performs fitting processing on the second monitoring data to generate a first synthesized video for tracking the target object, and transmits the first synthesized video to the central server.
In this way, after the background image and the motion video of the target object are transmitted to the edge server, the edge server fits them into the first synthesized video, and only the first synthesized video is transmitted to the central server. Compared with directly transmitting the background image and the motion video to the central server, this reduces the overall volume of transmitted data and helps improve the real-time performance of target tracking.
Drawings
FIG. 1 is a flow chart of a target tracking system shown according to an exemplary embodiment;
FIG. 2 is a network diagram illustrating a target tracking method according to an example embodiment;
FIG. 3 is a schematic diagram illustrating one way of adjusting video acquisition resolution according to an exemplary embodiment;
FIG. 4 is a schematic representation of a fit of a motion video and background images, shown according to an exemplary embodiment;
FIG. 5 is a schematic representation of a fit of a motion video and background images, according to an exemplary embodiment;
FIG. 6 is a logic diagram of a target tracking system, according to an example embodiment;
FIG. 7 is a schematic diagram of a target tracking system, according to an example embodiment;
FIG. 8 is a schematic diagram illustrating a target tracking method according to an example embodiment;
FIG. 9 is a schematic diagram illustrating a target tracking method according to an example embodiment;
FIG. 10 is a schematic diagram illustrating a target tracking method according to an example embodiment;
FIG. 11 is a schematic diagram of an object tracking device, according to an example embodiment;
FIG. 12 is a schematic diagram of a target tracking device, according to an example embodiment;
FIG. 13 is a schematic diagram of a target tracking device, according to an example embodiment;
FIG. 14 is a block diagram of a target tracking electronic device, according to an example embodiment;
FIG. 15 is a block diagram illustrating an apparatus for target tracking, according to an example embodiment.
Detailed Description
Technical solutions in the embodiments of the present application will be described clearly below with reference to the drawings in the embodiments of the present application; apparently, the described embodiments are some, but not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application fall within the scope of protection of the present application.
The terms "first", "second", and the like in the description and claims are used to distinguish between similar objects and not necessarily to describe a particular sequential or chronological order. It is to be understood that data so termed may be interchanged where appropriate, so that embodiments of the present application may be implemented in sequences other than those illustrated or described herein; moreover, objects identified by "first", "second", etc. are generally of one type, and the number of objects is not limited, e.g., the first object may be one or more. Furthermore, in the description and claims, "and/or" denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects.
The following describes in detail the target tracking method provided in the embodiment of the present application through specific embodiments and application scenarios thereof with reference to the accompanying drawings.
FIG. 1 is a flow chart of a target tracking system according to an exemplary embodiment, which includes the following steps.
In step S11, the first terminal device collects first monitoring data of a preset scene, identifies a target object in the first monitoring data, and transmits the target object to the central server.
At present, real-time tracking of a moving object under limited network bandwidth requires either reducing the resolution of the uploaded monitoring video or reducing the transmission speed. Because a good balance between resolution and transmission speed is difficult to achieve, the real-time tracking effect on the moving object is poor.
The target tracking scheme provided in this application reduces the overall amount of transmitted data and improves the real-time performance of target tracking. The first terminal device may be a camera or an unmanned aerial vehicle, which collects a monitoring video of a preset scene as the first monitoring data, identifies a target object from the monitoring video, and transmits the target object to the central server.
The target object may be a moving pedestrian, a vehicle or other moving target, and is not particularly limited. The target object is transmitted to the central server, so that the data quantity transmitted to the central server can be reduced, and meanwhile, the central server can respond quickly according to the characteristics of the target object, and the target object can be tracked in time.
In step S12, the central server generates a target tracking task based on the target object, and transmits the target tracking task to the plurality of second terminal apparatuses.
In this step, the central server may generate a target tracking task based on the target object, and send the target tracking task to the plurality of second terminal devices. The second terminal device is used for processing the target tracking task, where the second terminal device may or may not include the first terminal device, and is not specifically limited. The second terminal device may comprise a plurality of different kinds, such as a camera, a drone, etc.
In one implementation, the central server is further configured to receive the network load of the base station and the computing power usage information of the edge server, analyze the network load and the computing power usage information to generate a routing scheduling instruction for each second terminal device, and send the routing scheduling instruction to the second terminal devices;
The second terminal equipment is specifically used for determining a data transmission path according to the routing scheduling instruction; the second monitoring data is transmitted to the edge server based on the data transmission path.
The routing scheduling instruction may also be called a computing power scheduling plan. Let the target tracking task be T, the computing power scheduling plan be P, the network load be N, the computing power usage information be C, and the data transmission route be R; the computing power scheduling plan P then corresponds to the data transmission route R of the target tracking task, determined over a period of time from the network load N of the base station and the computing power usage information C of the edge server.
In the present application, the front end of a route connects to a base station serving as the data gateway of a terminal, to an edge server serving as a computing node, and to a plurality of target network nodes, while the back end connects to the central server.
In particular, the central server is configured to determine a plurality of alternative routes according to the network load; determining a target network node matched with the target tracking task according to the calculation force use information; deleting alternative routes comprising other network nodes than the target network node; and determining a target route with the transmission speed meeting a preset condition from the rest alternative routes, and generating a route scheduling instruction of each second terminal device based on the target route.
That is, as shown in fig. 2, a network diagram of the target tracking method provided in the present application, a plurality of alternative routes R_1, …, R_n are first calculated according to the network load N; a set of target network nodes is then determined according to whether the computing power usage information C meets the computing requirement of the task, and the alternative routes that cannot serve the task are eliminated; among the remaining alternative routes, the best one is selected according to data transmission speed. Overall, the data transmission route is R = f_1(N) + f_2(C), and the computing power scheduling plan is P = g(R). If a plurality of target tracking tasks T_1, T_2, …, T_n are issued within a certain time, the overall computing power scheduling plan P = P_1 + P_2 + … + P_n is obtained according to the priority levels of the target tracking tasks.
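The selection logic above — compute alternative routes from the network load N, filter by the edge nodes' computing power usage C, then pick the fastest remaining route — can be sketched as follows. All names, data shapes, and thresholds here are illustrative assumptions; the patent does not specify f_1, f_2, or g concretely.

```python
def plan_route(routes, load, compute_use, load_limit, compute_limit):
    """Pick a data transmission route, sketching R = f_1(N) + f_2(C).

    routes      : {route_id: (node_list, transmission_speed)} -- R_1..R_n
    load        : {route_id: network load N along that route}
    compute_use : {node: computing-power usage C of that edge node}
    """
    # f_1(N): keep alternative routes whose network load is acceptable
    candidates = [r for r in routes if load[r] <= load_limit]
    # f_2(C): drop routes containing nodes without spare computing power
    candidates = [
        r for r in candidates
        if all(compute_use[n] <= compute_limit for n in routes[r][0])
    ]
    if not candidates:
        return None  # no route can serve the task right now
    # target route: best transmission speed among the survivors
    return max(candidates, key=lambda r: routes[r][1])
```

A computing power scheduling plan P for tasks T_1, …, T_n would then be the sequence of routes returned for each task, taken in priority order.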
Further, the central server is specifically configured to receive, in real time, the network load of the base station and the computing power usage information of the edge server; and, when the current network load is greater than a preset threshold, analyze the network load and the computing power usage information and update the routing scheduling instruction of each second terminal device.
That is, the computing power scheduling plan may need to be adjusted temporarily when the network load increases: the network load of the base station may rise because the number of users or tasks increases, or a target tracking task with a higher priority may be added.
For example, when the network load of the base station increases due to a rise in the number of users or tasks, a network load threshold N_T1 is set. The computing power scheduling plan P comprises several sub-plans, i.e. P = P_1 + P_2 + … + P_m + … + P_n. During execution of the plan P, when the network load N > N_T1, the data transmission route R' is recalculated to obtain an updated plan P' = g(R'), and execution resumes from sub-plan P_m, i.e. P' = P_m + … + P_n. When a target tracking task with a higher priority is added, the route R'' is recalculated, yielding the updated computing power scheduling plan P'' = g(R'').
In one implementation, a central server is specifically configured to generate a plurality of target tracking tasks based on a plurality of target objects, and determine a priority of each target tracking task; and generating a routing scheduling instruction of each second terminal device based on the priority, the network load information and the computing power use information.
That is, the central server may rank the priority levels of the plurality of target tracking tasks and generate a computing power scheduling plan to adjust the data transmission routes of the different second terminal devices. For example, a second terminal device performing a higher-priority target tracking task may be allocated more network bandwidth, or may be given a shorter routing path; this is not specifically limited.
In one implementation, the central server is specifically configured to receive, in real time, network load of the base station and computing power usage information of the edge server; under the condition that the current network load is larger than a preset threshold, determining a reduction value of video resolution and/or image transmission quantity corresponding to each target tracking task according to the priority, and obtaining the reduced video resolution and/or image transmission quantity;
correspondingly, the second terminal device is specifically configured to transmit the second monitoring data to the edge server based on the reduced video resolution and/or the reduced image transmission number.
For example, if the current network load is high, the data transmission amount of low-priority target tracking tasks is reduced to guarantee the real-time performance and resolution of high-priority tasks. As noted above, the computing power scheduling plan may need temporary adjustment when the base station's network load rises with the number of users or tasks, or when a higher-priority target tracking task is added.
When a plurality of target tracking tasks T_1, T_2, …, T_n are issued in a short time, a network load threshold N_T2 is set. When the network load N > N_T2, then for the tasks T_1, T_2, …, T_n arranged by priority, let the original data transmission amount be v and the reduction step be d: the data transmission amount of T_1 is v_1 = v, kept unchanged; that of T_2 is v_2 = v·(1 − d); that of T_3 is v_3 = v·(1 − 2d); and so on.
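The stepped reduction above generalizes to v_k = v·(1 − (k − 1)·d) for the task ranked k-th by priority. A minimal sketch, with an added clamp at zero that the text does not state and is assumed here:

```python
def reduced_transmission(v: float, d: float, rank: int) -> float:
    """Data transmission amount for the task ranked `rank` by priority
    (rank 1 = highest): v_1 = v, v_2 = v*(1 - d), v_3 = v*(1 - 2d), ...
    Clamped at zero so deeply down-ranked tasks are never negative."""
    return max(0.0, v * (1 - (rank - 1) * d))
```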
Therefore, the central server configures the data transmission routes of the motion video and the background picture, so that the base station bandwidth and the computing power of the edge server can be allocated according to target priority, and the real-time transmission amount of the collected video and pictures can be configured, avoiding picture delay, stuttering, and frame loss caused by network congestion and insufficient computing power.
In step S13, the second terminal device collects second monitoring data of the target object corresponding to the target tracking task, and transmits the second monitoring data to the edge server; the second monitoring data comprises a background image and a motion video of the target object.
In this step, the second terminal device responds to the target tracking task, collects a background image and a motion video of the target object, and transmits the background image and the motion video to the edge server. Wherein the background image is typically of higher resolution than the motion video.
In one implementation, the second terminal device includes a camera and an unmanned aerial vehicle;
The camera is used for collecting a motion video of a target object corresponding to the target tracking task and transmitting the motion video to the edge server;
the unmanned aerial vehicle is used for collecting a background image of a target object corresponding to the target tracking task and transmitting the background image to the edge server.
That is, the central server may send the target tracking task to a plurality of heterogeneous terminals; when the multiple camera modules of a video camera simultaneously track multiple targets, one or more unmanned aerial vehicles may assist. The video camera records the motion video of the tracked target, and the unmanned aerial vehicle assists the video camera by shooting the background picture of the tracked target.
In the application, a camera is used for recording a small-range motion video of a target object, an unmanned aerial vehicle is used for shooting a large-range high-resolution background picture of the target object, and then an edge server is used for fitting the motion video of the target object with the background picture, so that the volume of a video file can be reduced, and the real-time transmission performance of a system is improved.
In one implementation, each video camera has a plurality of camera modules;
the video camera is specifically configured such that, when a plurality of target tracking tasks exist, each camera module separately acquires the background image of the target object corresponding to one target tracking task and transmits it to the edge server.
That is, each video camera includes a plurality of camera modules, which can simultaneously track a plurality of targets within the monitoring range according to the target tracking instruction.
In one implementation, the central server is specifically configured to send a target tracking task and video resolution to a plurality of second terminal devices;
the video camera is specifically used for collecting the motion video of the target object corresponding to the target tracking task according to the video resolution ratio and transmitting the motion video to the edge server.
That is, when the camera records the motion video of the target object, the video acquisition resolution over a period of time can be flexibly adjusted according to the data transmission policy.
In one implementation, the central server is specifically configured to send the target tracking task and the number of image transmissions to the plurality of second terminal devices;
the unmanned aerial vehicle is specifically used for collecting background images of a target object corresponding to a target tracking task, and transmitting background images of the image transmission quantity to the edge server in a preset time period.
That is, when the unmanned aerial vehicle shoots the background images of the target object, the number of pictures transmitted to the edge server within a period of time can be flexibly adjusted according to the data transmission policy. When the data transmission amount of a target tracking task falls below a certain threshold, the target object may be displayed as a dot.
For example, as shown in fig. 3, the camera flexibly adjusts the video acquisition resolution within a period of time according to the data transmission policy sent by the central server, where (a) indicates moderate resolution, (b) indicates low resolution, and (c) indicates high resolution; the unmanned aerial vehicle flexibly adjusts the number of pictures transmitted to the edge server within a period of time according to the data transmission policy sent by the central server, where (d) indicates a moderate number of pictures, (e) indicates a smaller number, and (f) indicates a larger number.
In step S14, the edge server performs fitting processing on the second monitoring data, generates a first composite video for tracking the target object, and transmits the first composite video to the central server.
In this step, the edge server may fit the motion video of one or more target objects to one or more background pictures to generate a real-time first composite video, and then transmit the first composite video to the central server.
Specifically, the edge server is configured to extract background feature points of the background area other than the target object in a target frame of the motion video, together with the position relation between the background feature points and the target object; determine a target background image matching the timestamp of the target frame; remove the target object from the target background image and extract image feature points of the target background image after removal; compare the background feature points with the image feature points, and fit the target object of the target frame into the target background image according to the position relation to obtain a first composite image; and combine the first composite images into the first composite video.
That is, to generate the first composite video for tracking the target object, the edge server may first acquire a plurality of feature points of the background picture, where the feature points need to lie around the tracked target; next, an image segmentation model is applied to the background picture to remove the target object and generate a new background picture for fitting; other content that could cause confusion or interference, such as fast-moving objects like automobiles, may also be removed from the background picture; then, a plurality of background feature points of the motion video are obtained, and the position relation between the feature points and the target object is described by vectors; finally, the motion video is fitted into the background picture according to the position relation of the background feature points to generate the first composite video.
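As a minimal illustration of the final fitting step, the sketch below pastes the target patch into the background picture at the position given by the vector from one matched feature point to the target. It assumes single-channel NumPy images, an axis-aligned bounding box, and a single pre-matched feature point pair; the patent uses multiple feature points and vectors, so this is a simplification, not the claimed method.

```python
import numpy as np

def fit_frame(frame, bbox, feat_frame, feat_bg, background):
    """Fit the target object cut from a motion-video frame into a
    higher-resolution background picture.

    bbox       : (top, left, height, width) of the target in `frame`
    feat_frame : (y, x) of a background feature point in `frame`
    feat_bg    : (y, x) of the matching feature point in `background`
    """
    top, left, h, w = bbox
    patch = frame[top:top + h, left:left + w]
    # Position relation: the vector from the feature point to the target
    # is preserved when transferring the target into the background.
    dy, dx = top - feat_frame[0], left - feat_frame[1]
    out = background.copy()
    y, x = feat_bg[0] + dy, feat_bg[1] + dx
    out[y:y + h, x:x + w] = patch
    return out
```

Repeating this for every frame X_i against its matching picture yields the fitted frames X'_i that make up the real-time composite video.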
Specifically, as shown in fig. 4 and fig. 5, let the motion video acquired by the camera be X, where X comprises n frames, i.e. X = X_1 + X_2 + … + X_m + … + X_n, and let the background pictures shot by the unmanned aerial vehicle be F_1, F_2, …, F_n.
Regarding timestamps: since the motion video is shot continuously while the background pictures are shot intermittently, assume the shooting interval between two background pictures F_n and F_(n+1) spans m frames. The 1st frame of the motion video X, i.e. X_1, has the corresponding timestamp S_1; the m-th frame X_m has the timestamp S_m; and so on. Picture F_1 corresponds to timestamps S_1 to S_m, picture F_2 corresponds to S_(m+1) to S_(2m), and so on.
An n-th frame X_n of the motion video has a smaller imaging range than picture F_n: only the tracked object and its immediate surroundings are imaged. Feature points of the background around the tracked object are extracted from the 1st frame X_1; according to timestamp S_1, the corresponding picture F_1 is found; after comparing feature points, X_1 is fitted into picture F_1, generating a new frame X'_1. Frames X_2 to X_m are likewise fitted with picture F_1 according to timestamps S_2 to S_m, generating new frames X'_2 to X'_m. Feature points of the background around the tracked object are then extracted from the (m+1)-th frame X_(m+1); according to timestamp S_(m+1), the corresponding picture F_2 is found; after comparing feature points, X_(m+1) is fitted into picture F_2, generating X'_(m+1). Frames X_(m+2) to X_(2m) are likewise fitted with picture F_2 according to timestamps S_(m+2) to S_(2m), generating X'_(m+2) to X'_(2m). And so on.
Finally, the fitting yields a new video X' = X'_1 + X'_2 + … + X'_m + … + X'_n, which is the real-time monitoring video. When the camera has a plurality of tracked targets, the motion videos of the plurality of targets may be fitted into one background picture, or into a larger picture stitched from a plurality of background pictures.
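The timestamp bookkeeping above reduces to mapping frame X_i to the background picture F_k whose span S_((k−1)m+1) to S_(km) covers it. A one-line sketch, assuming 1-based frame and picture indices and m frames per picture:

```python
import math

def picture_index(frame_no: int, m: int) -> int:
    """Background picture F_k used for motion-video frame X_frame_no:
    F_1 covers S_1..S_m, F_2 covers S_(m+1)..S_(2m), and so on."""
    return math.ceil(frame_no / m)
```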
In one implementation, the second terminal device is further configured to transmit second monitoring data to the central server.
Further, the central server is also configured to perform fitting processing on the second monitoring data, generate a second composite video for tracking the target object, and store the second composite video; the second composite video is more accurate than the first composite video.
The central server is specifically configured to reconstruct a super-resolution image of the motion video based on the background image, and generate a second composite video for tracking the target object.
That is, after the target tracking task ends, all files may be transmitted to the central server for backup. By applying an image feature algorithm together with reference-image-based image super-resolution technology, the super-resolution restoration process is guided by a high-resolution image similar to the input image, and a non-real-time but more accurate monitoring video is regenerated, ensuring the high accuracy and traceability of the target tracking task. In this application scenario, the image super-resolution problem is converted from relatively difficult texture recovery and generation into relatively simple texture search and transfer, which is superior to other high-definition reconstruction technologies that rely on large amounts of training.
Fig. 6 is a schematic diagram of the logic of the present application. In the application, a central server sends out a target tracking task starting instruction, a data transmission strategy and a routing scheduling instruction, a camera records a motion video of a tracked target, an unmanned aerial vehicle assists the camera, and a background picture of the tracked target is taken. According to the data transmission strategy and the routing scheduling instruction, the video camera transmits the recorded motion video, the unmanned aerial vehicle transmits the shot background picture to the edge server, the edge server fits the motion video and the background picture into a real-time monitoring video, and the edge server transmits the real-time monitoring video to the central server.
The central server sends out a target tracking task stop instruction and designates an idle data transmission route; the video camera and the unmanned aerial vehicle stop tracking targets, the video camera transmits the recorded motion video along the designated data transmission route, and the unmanned aerial vehicle transmits the shot background pictures to the central server. The central server applies an image feature algorithm and reference-image-based image super-resolution technology, using the wide-range high-resolution pictures shot by the unmanned aerial vehicle to complete high-definition reconstruction of the monitoring video, so that the motion video and the background pictures are fitted into a more accurate monitoring video and stored.
Therefore, according to the technical scheme provided by the embodiment of the application, after the background image and the motion video of the target object are transmitted to the edge server, the edge server fits the background image and the motion video to generate the first synthesized video, and then the first synthesized video is transmitted to the central server.
In one embodiment, as shown in fig. 7, the target tracking system includes a central server 100, an edge server 200, a base station 300, a camera 400, a drone 500, and a network 600, wherein the central server 100 includes:
task scheduling module 11: receives the computing power scheduling plan from the calculation power scheduling module 12, generates a data transmission policy and routing scheduling instructions, and issues the target tracking task, data transmission policy, and routing scheduling instructions to the camera 400 and the unmanned aerial vehicle 500.
The calculation power scheduling module 12: receives the computing power load information from the edge server 200 and the network load information from the base station 300, and generates a computing power scheduling plan.
The input module 13: control information is input to the task scheduling module 11.
The output module 14: displays the targets identified by the camera 400 and the drone 500, the real-time surveillance video from the edge server 200, and the surveillance video stored in the storage module 16.
Calculation module 15: fits all the videos and pictures acquired in the task, generates a more accurate surveillance video, and saves it to the storage module 16.
The storage module 16: video and pictures from the camera 400 and the unmanned aerial vehicle 500, and monitoring video generated from the calculation modules 15 and 21 are saved.
The edge server 200 includes:
the calculation module 21: the motion video of the tracked object from the camera 400 and the background picture of the tracked object from the drone 500 are fitted, and a real-time surveillance video is generated and saved to the storage module 22.
The storage module 22: the real-time monitoring video generated from the calculation module 21 is saved.
The video camera 400 includes:
camera module 41: video of the motion of the tracked object or objects is captured and stored in the memory module 42.
Storage module 42: the motion video recorded from the camera module 41 is saved.
The identification module 43: the identified targets are provided to the central server 100 for issuing target tracking tasks.
The unmanned aerial vehicle 500 includes:
MCU module 51: receives, via the 5G module 52, the instructions sent by the task scheduling module 11 of the central server 100, and controls the 5G module 52 and the camera module 53 of the unmanned aerial vehicle to execute them.
5G module 52: receives the instructions sent by the central server 100 to the unmanned aerial vehicle 500, and transmits the background pictures stored in the storage module 54 to the edge server 200.
Camera module 53: a background picture of the tracked object is taken and stored in the storage module 54.
Storage module 54: a background picture of the tracked object taken from the camera module 53 is saved.
The identification module 55: the identified targets are provided to the central server 100 for issuing target tracking tasks.
Network 600: the target tracking task, the data transmission strategy and the routing scheduling instruction sent by the central server 100 are transmitted to the camera 400 and the unmanned aerial vehicle 500, the real-time monitoring video and the calculation load information generated by the edge server 200 are transmitted to the central server 100, the network load information of the base station 300 is transmitted to the central server 100, and the motion video recorded by the camera 400 and the background picture shot by the unmanned aerial vehicle 500 are transmitted to the central server 100 and the edge server 200.
The dynamic target tracking system based on the cooperation of edge computing power scheduling and the unmanned aerial vehicle can realize dynamic multi-target tracking through the combined action of the task scheduling module, the calculation power scheduling module, the input module, the output module, the calculation module, the storage module, the MCU module, the 5G module, the camera module, and the identification module, and prevents network delay from degrading the real-time tracking effect.
As shown in fig. 8, the embodiment of the present application further provides a target tracking method, which is applied to a central server, and the method includes:
s201: acquiring a target object uploaded by first terminal equipment;
s202: generating a target tracking task based on the target object, and sending the target tracking task to a plurality of second terminal devices; the second terminal equipment is enabled to acquire second monitoring data of a target object corresponding to the target tracking task, wherein the second monitoring data comprises a background image and a motion video;
s203: acquiring a first synthesized video uploaded by an edge server and used for tracking the target object; and the first synthesized video is obtained by fitting the second monitoring data through the edge server.
In one implementation, the method further comprises:
acquiring the second monitoring data uploaded by the second terminal equipment;
fitting the second monitoring data to generate a second synthesized video for tracking the target object; the second synthesized video has higher precision than the first synthesized video;
and storing the second synthesized video.
In one implementation manner, the fitting the second monitoring data to generate a second composite video for tracking the target object includes:
And reconstructing the super-resolution image of the motion video based on the background image to generate a second synthesized video for tracking the target object.
In one implementation, the second terminal device includes a camera;
the sending the target tracking task to the plurality of second terminal devices includes:
and sending the target tracking task and the video resolution to a plurality of second terminal devices so that the camera acquires the motion video of the target object corresponding to the target tracking task according to the video resolution.
In one implementation, the second terminal device includes an unmanned aerial vehicle;
the sending the target tracking task to the plurality of second terminal devices includes:
and sending the target tracking task and the image transmission quantity to a plurality of second terminal devices, so that the unmanned aerial vehicle transmits the background images of the image transmission quantity to the edge server in a preset time period.
In one implementation, the method further comprises:
receiving network load of a base station and computing power use information of the edge server;
analyzing the network load and the computing power use information to generate a routing scheduling instruction of each second terminal device;
And sending the routing scheduling instruction to the second terminal equipment.
In one implementation manner, the analyzing the network load and the computing power usage information generates a routing scheduling instruction of each second terminal device, including:
determining a plurality of alternative routes according to the network load;
determining a target network node matched with the target tracking task according to the computing power use information;
deleting alternative routes comprising other network nodes than the target network node;
and determining a target route with the transmission speed meeting a preset condition from the rest alternative routes, and generating a route scheduling instruction of each second terminal device based on the target route.
In one implementation, the receiving network load of the base station and the computing power usage information of the edge server includes:
receiving network load of a base station and computing power use information of the edge server in real time;
the analyzing the network load and the computing power use information to generate a routing scheduling instruction of each second terminal device, including:
and under the condition that the current network load is larger than a preset threshold value, analyzing the network load and the computing power use information, and updating the routing scheduling instruction of each second terminal device.
In one implementation manner, the analyzing the network load and the computing power usage information generates a routing scheduling instruction of each second terminal device, including:
generating a plurality of target tracking tasks based on a plurality of target objects respectively, and determining the priority of each target tracking task;
and generating a routing scheduling instruction of each second terminal device based on the priority, the network load information and the computing power use information.
In one implementation, the receiving network load of the base station and the computing power usage information of the edge server includes:
receiving network load of a base station and computing power use information of the edge server in real time;
after generating the target tracking tasks based on the target objects and determining the priority of each target tracking task, the method further comprises:
and under the condition that the current network load is larger than the preset threshold, determining a reduction value of the video resolution and/or the image transmission quantity corresponding to each target tracking task according to the priority, and obtaining the reduced video resolution and/or the reduced image transmission quantity.
Therefore, according to the technical scheme provided by the embodiment of the application, after the background image and the motion video of the target object are transmitted to the edge server, the edge server fits the background image and the motion video to generate the first synthesized video, and then the first synthesized video is transmitted to the central server.
As shown in fig. 9, the embodiment of the present application further provides a target tracking method, which is applied to an edge server, and the method includes:
s301: receiving second monitoring data of a target object corresponding to a target tracking task acquired by second terminal equipment;
s302: fitting the second monitoring data to generate a first synthesized video for tracking the target object;
s303: and transmitting the first synthesized video to a central server.
In one implementation manner, the fitting processing is performed on the second monitoring data to generate a first composite video for tracking the target object, which includes:
extracting background feature points of a background area except the target object in a target frame of the motion video and the position relation between the background feature points and the target object;
determining a target background image matched with the timestamp of the target frame;
removing a target object in the target background image, and extracting image feature points of the target background image after the target object is removed;
comparing the background feature points with the image feature points, and fitting a target object in the target frame to the target background image according to the position relation to obtain a first synthetic image; the first composite image is combined into a first composite video.
Therefore, according to the technical scheme provided by the embodiment of the application, after the background image and the motion video of the target object are transmitted to the edge server, the edge server fits the background image and the motion video to generate the first synthesized video, and then the first synthesized video is transmitted to the central server.
As shown in fig. 10, the embodiment of the present application further provides a target tracking method, which is applied to a second terminal device, and the method includes:
s401: acquiring a target tracking task generated by a central server based on the target object;
s402: collecting second monitoring data of a target object corresponding to the target tracking task;
s403: transmitting the second monitoring data to an edge server; and fitting the second monitoring data by the edge server to generate a first synthesized video for tracking the target object.
In one implementation, the method further comprises:
and transmitting the second monitoring data to the central server.
In an implementation manner, the second terminal device is a camera or an unmanned aerial vehicle, and the method further includes:
Under the condition that the second terminal equipment is a camera, acquiring a motion video of a target object corresponding to the target tracking task, and transmitting the motion video to an edge server;
and under the condition that the second terminal equipment is an unmanned aerial vehicle, acquiring a background image of a target object corresponding to the target tracking task, and transmitting the background image to an edge server.
In an implementation manner, the camera has a plurality of cameras, the capturing a motion video of a target object corresponding to the target tracking task, and transmitting the motion video to an edge server, including:
under the condition that a plurality of target tracking tasks exist, each camera is used for respectively collecting background images of target objects corresponding to each target tracking task, and the background images are transmitted to an edge server.
In one implementation, the acquiring the target tracking task generated by the central server based on the target object includes:
acquiring a target tracking task and video resolution which are generated by a central server based on the target object;
the step of collecting the motion video of the target object corresponding to the target tracking task and transmitting the motion video to an edge server comprises the following steps:
And acquiring a motion video of the target object corresponding to the target tracking task according to the video resolution, and transmitting the motion video to an edge server.
In one implementation, the acquiring the target tracking task generated by the central server based on the target object includes:
acquiring a target tracking task and the image transmission quantity generated by a central server based on the target object;
the step of collecting the background image of the target object corresponding to the target tracking task and transmitting the background image to an edge server comprises the following steps:
and collecting background images of the target object corresponding to the target tracking task, and transmitting the background images of the image transmission quantity to an edge server in a preset time period.
In one implementation, the method further comprises:
acquiring a routing scheduling instruction sent by a central server;
determining a data transmission path according to the routing scheduling instruction;
and transmitting the second monitoring data to an edge server based on the data transmission path.
Therefore, according to the technical scheme provided by the embodiment of the application, after the background image and the motion video of the target object are transmitted to the edge server, the edge server fits the background image and the motion video to generate the first synthesized video, and then the first synthesized video is transmitted to the central server.
According to the target tracking method provided by the embodiment of the application, the execution subject can be a target tracking device. In the embodiment of the application, a method for executing target tracking by using a target tracking device is taken as an example, and a device of the target tracking method provided in the embodiment of the application is described.
As shown in fig. 11, the embodiment of the present application further provides a target tracking device, which is applied to a central server, and the device includes:
a first obtaining module 501, configured to obtain a target object uploaded by a first terminal device;
the generating module 502 is configured to generate a target tracking task based on the target object, and send the target tracking task to a plurality of second terminal devices; the second terminal equipment is enabled to acquire second monitoring data of a target object corresponding to the target tracking task, wherein the second monitoring data comprises a background image and a motion video;
a second obtaining module 503, configured to obtain a first composite video that is uploaded by an edge server and that tracks the target object; and the first synthesized video is obtained by fitting the second monitoring data through the edge server.
Therefore, according to the technical scheme provided by the embodiment of the application, after the background image and the motion video of the target object are transmitted to the edge server, the edge server fits the background image and the motion video to generate the first synthesized video, and then the first synthesized video is transmitted to the central server.
As shown in fig. 12, the embodiment of the present application further provides an object tracking device, which is applied to an edge server, and the device includes:
the receiving module 601 is configured to receive second monitoring data of a target object corresponding to a target tracking task acquired by a second terminal device;
the fitting module 602 is configured to perform fitting processing on the second monitoring data, and generate a first composite video that tracks the target object;
and a transmission module 603, configured to transmit the first composite video to a central server.
Therefore, according to the technical scheme provided by the embodiment of the application, after the background image and the motion video of the target object are transmitted to the edge server, the edge server fits the background image and the motion video to generate the first synthesized video, and then the first synthesized video is transmitted to the central server.
As shown in fig. 13, the embodiment of the present application further provides an object tracking apparatus, which is applied to a second terminal device, and the apparatus includes:
an obtaining module 701, configured to obtain a target tracking task generated by a central server based on the target object;
The acquisition module 702 is configured to acquire second monitoring data of the target object corresponding to the target tracking task;
a transmission module 703, configured to transmit the second monitoring data to an edge server; and fitting the second monitoring data by the edge server to generate a first synthesized video for tracking the target object.
Therefore, according to the technical scheme provided by the embodiment of the application, after the background image and the motion video of the target object are transmitted to the edge server, the edge server fits the background image and the motion video to generate the first synthesized video, and then the first synthesized video is transmitted to the central server.
According to the target tracking method provided by the embodiment of the application, the execution subject can be a target tracking terminal. In the embodiment of the application, a method for executing target tracking by a target tracking terminal is taken as an example, and a device of the target tracking method provided in the embodiment of the application is described.
The target tracking device in the embodiment of the application may be an electronic device, or may be a component in the electronic device, for example, an integrated circuit or a chip. The electronic device may be a terminal, or may be other devices than a terminal. By way of example, the electronic device may be a mobile phone, tablet computer, notebook computer, palm computer, vehicle-mounted electronic device, mobile internet appliance (Mobile Internet Device, MID), augmented reality (augmented reality, AR)/Virtual Reality (VR) device, robot, wearable device, ultra-mobile personal computer, UMPC, netbook or personal digital assistant (personal digital assistant, PDA), etc., but may also be a server, network attached storage (Network Attached Storage, NAS), personal computer (personal computer, PC), television (TV), teller machine or self-service machine, etc., and the embodiments of the present application are not limited in particular.
The object tracking device in the embodiments of the present application may be a device having an operating system. The operating system may be an Android operating system, an ios operating system, or other possible operating systems, which are not specifically limited in the embodiments of the present application.
The target tracking apparatus provided in the embodiment of the present application can implement each process implemented by the embodiment of the method of fig. 1, and in order to avoid repetition, details are not repeated here.
Optionally, as shown in fig. 14, the embodiment of the present application further provides an electronic device 800, including a processor 801 and a memory 802, where a program or an instruction capable of being executed on the processor 801 is stored in the memory 802, and the program or the instruction implements each step of the above-mentioned embodiment of the target tracking method when being executed by the processor 801, and the steps achieve the same technical effects, so that repetition is avoided, and no further description is given here.
The electronic device in the embodiment of the application includes the mobile electronic device and the non-mobile electronic device described above.
Fig. 15 is a schematic hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 1000 includes, but is not limited to: radio frequency unit 1001, network module 1002, audio output unit 1003, input unit 1004, sensor 1005, display unit 1006, user input unit 1007, interface unit 1008, memory 1009, and processor 1010.
Those skilled in the art will appreciate that the electronic device 1000 may also include a power source (e.g., a battery) for powering the various components, which may be logically connected to the processor 1010 by a power management system to perform functions such as managing charge, discharge, and power consumption by the power management system. The electronic device structure shown in fig. 10 does not constitute a limitation of the electronic device, and the electronic device may include more or less components than shown, or may combine certain components, or may be arranged in different components, which are not described in detail herein.
From the above, it can be seen that, according to the technical scheme provided by the embodiment of the application, by using the unmanned aerial vehicle to assist in shooting the background picture of the tracked target, the background picture is fitted with the motion video of the tracked target, so as to generate a real-time monitoring video, and under the condition of ensuring that the tracked target has high frame number and high resolution, the overall transmission data size of the monitoring video is reduced by reducing the data transmission amount of the background of the tracked target, so that better real-time performance is obtained.
It should be understood that in the embodiment of the present application, the input unit 1004 may include a graphics processor (Graphics Processing Unit, GPU) 10041 and a microphone 10042, and the graphics processor 10041 processes image data of still pictures or videos obtained by an image capturing device (such as a camera) in a video capturing mode or an image capturing mode. The display unit 1006 may include a display panel 10061, and the display panel 10061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 1007 includes at least one of a touch panel 10071 and other input devices 10072. The touch panel 10071 is also referred to as a touch screen. The touch panel 10071 can include two portions, a touch detection device and a touch controller. Other input devices 10072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and so forth, which are not described in detail herein.
The memory 1009 may be used to store software programs as well as various data. The memory 1009 may mainly include a first memory area storing programs or instructions and a second memory area storing data, wherein the first memory area may store an operating system, application programs or instructions (such as a sound playing function, an image playing function, etc.) required for at least one function, and the like. Further, the memory 1009 may include volatile memory or nonvolatile memory, or the memory 1009 may include both volatile and nonvolatile memory. The nonvolatile Memory may be a Read-Only Memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an Electrically Erasable EPROM (EEPROM), or a flash Memory. The volatile memory may be random access memory (Random Access Memory, RAM), static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double Data Rate SDRAM (ddr SDRAM), enhanced SDRAM (Enhanced SDRAM), synchronous DRAM (SLDRAM), and Direct RAM (DRRAM). Memory 109 in embodiments of the present application includes, but is not limited to, these and any other suitable types of memory.
The processor 1010 may include one or more processing units; optionally, the processor 1010 integrates an application processor that primarily processes operations involving an operating system, user interface, application programs, and the like, and a modem processor that primarily processes wireless communication signals, such as a baseband processor. It will be appreciated that the modem processor described above may not be integrated into the processor 1010.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored, and when the program or the instruction is executed by a processor, the processes of the above embodiment of the target tracking method are implemented, and the same technical effects can be achieved, so that repetition is avoided, and no further description is given here.
Wherein the processor is a processor in the electronic device described in the above embodiment. The readable storage medium includes computer readable storage medium such as computer readable memory ROM, random access memory RAM, magnetic or optical disk, etc.
The embodiment of the application further provides a chip, the chip includes a processor and a communication interface, the communication interface is coupled with the processor, and the processor is used for running a program or an instruction, so as to implement each process of the above embodiment of the target tracking method, and achieve the same technical effect, so that repetition is avoided, and no redundant description is provided herein.
It should be understood that the chips referred to in the embodiments of the present application may also be referred to as system-on-chip chips, chip systems, or system-on-chip chips, etc.
The embodiments of the present application provide a computer program product, which is stored in a storage medium, and the program product is executed by at least one processor to implement the respective processes of the embodiments of the target tracking method, and achieve the same technical effects, and are not repeated herein.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Furthermore, it should be noted that the scope of the methods and apparatus in the embodiments of the present application is not limited to performing the functions in the order shown or discussed, but may also include performing the functions in a substantially simultaneous manner or in an opposite order depending on the functions involved, e.g., the described methods may be performed in an order different from that described, and various steps may also be added, omitted, or combined. Additionally, features described with reference to certain examples may be combined in other examples.
From the above description of the embodiments, it will be clear to those skilled in the art that the above-described embodiment method may be implemented by means of software plus a necessary general hardware platform, but of course may also be implemented by means of hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solutions of the present application may be embodied essentially or in a part contributing to the prior art in the form of a computer software product stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk), comprising several instructions for causing a terminal (which may be a mobile phone, a computer, a server, or a network device, etc.) to perform the methods described in the embodiments of the present application.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the above-described embodiments, which are merely illustrative and not restrictive, and many forms may be made by those of ordinary skill in the art without departing from the spirit of the present application and the scope of the claims, which are also within the protection of the present application.
Claims (38)
1. A target tracking system, comprising:
the first terminal equipment is used for collecting first monitoring data of a preset scene, identifying a target object in the first monitoring data and transmitting the target object to the central server;
the central server is used for generating a target tracking task based on the target object and sending the target tracking task to a plurality of second terminal devices;
the second terminal device is configured to collect second monitoring data of a target object corresponding to the target tracking task, and transmit the second monitoring data to an edge server; the second monitoring data comprises a background image and a motion video of the target object;
and the edge server is used for carrying out fitting processing on the second monitoring data, generating a first synthesized video for tracking the target object, and transmitting the first synthesized video to the central server.
2. The object tracking system of claim 1, wherein the target tracking system comprises a plurality of sensors,
the second terminal device is further configured to transmit the second monitoring data to the central server.
3. The object tracking system of claim 2, wherein,
The central server is further used for performing fitting processing on the second monitoring data, generating a second synthesized video for tracking the target object, and storing the second synthesized video; the second composite video has a higher precision than the first composite video.
4. The object tracking system of claim 3, wherein,
the central server is specifically configured to reconstruct a super-resolution image of the motion video based on the background image, and generate a second composite video that tracks the target object.
5. The target tracking system of claim 1, wherein the second terminal device comprises a camera and a drone;
the camera is used for collecting the motion video of the target object corresponding to the target tracking task and transmitting the motion video to the edge server;
the unmanned aerial vehicle is used for collecting a background image of a target object corresponding to the target tracking task and transmitting the background image to the edge server.
6. The object tracking system of claim 5 wherein each camera has a plurality of cameras;
The camera is specifically configured to, when there are a plurality of target tracking tasks, respectively collect, by each camera, a background image of a target object corresponding to each target tracking task, and transmit the background image to an edge server.
7. The object tracking system of claim 5, wherein the target tracking system comprises a plurality of sensors,
the central server is specifically configured to send the target tracking task and the video resolution to a plurality of second terminal devices;
the camera is specifically configured to collect a motion video of a target object corresponding to the target tracking task according to the video resolution, and transmit the motion video to an edge server.
8. The object tracking system of claim 5, wherein the target tracking system comprises a plurality of sensors,
the central server is specifically configured to send the target tracking task and the number of image transmissions to a plurality of second terminal devices;
the unmanned aerial vehicle is specifically configured to collect background images of a target object corresponding to the target tracking task, and transmit the background images of the image transmission number to an edge server in a preset time period.
9. The object tracking system of claim 1, wherein the target tracking system comprises a plurality of sensors,
The central server is also used for receiving the network load of the base station and the computing power use information of the edge server; analyzing the network load and the computing power use information to generate a routing scheduling instruction of each second terminal device; sending the routing scheduling instruction to the second terminal equipment;
the second terminal device is specifically configured to determine a data transmission path according to the routing scheduling instruction; and transmitting the second monitoring data to an edge server based on the data transmission path.
10. The object tracking system of claim 9, wherein the target tracking system comprises a plurality of sensors,
the central server is specifically configured to determine a plurality of alternative routes according to the network load; determining a target network node matched with the target tracking task according to the computing power use information; deleting alternative routes comprising other network nodes than the target network node; and determining a target route with the transmission speed meeting a preset condition from the rest alternative routes, and generating a route scheduling instruction of each second terminal device based on the target route.
11. The object tracking system of claim 9, wherein the target tracking system comprises a plurality of sensors,
The central server is specifically configured to receive network load of a base station and computing power usage information of the edge server in real time; and under the condition that the current network load is larger than a preset threshold value, analyzing the network load and the computing power use information, and updating the routing scheduling instruction of each second terminal device.
12. The object tracking system of claim 9, wherein the target tracking system comprises a plurality of sensors,
the central server is specifically configured to generate a plurality of target tracking tasks based on a plurality of target objects, and determine a priority of each target tracking task; and generating a routing scheduling instruction of each second terminal device based on the priority, the network load information and the computing power use information.
13. The object tracking system of claim 12, wherein,
the central server is specifically configured to receive network load of a base station and computing power usage information of the edge server in real time; under the condition that the current network load is larger than the preset threshold, determining a reduction value of video resolution and/or image transmission quantity corresponding to each target tracking task according to the priority, and obtaining the reduced video resolution and/or image transmission quantity;
The second terminal device is specifically configured to transmit the second monitoring data to an edge server based on the video resolution and/or the image transmission number after the clipping.
14. The object tracking system of claim 1, wherein the target tracking system comprises a plurality of sensors,
the edge server is used for extracting background feature points of a background area except the target object in a target frame of the motion video and the position relationship between the background feature points and the target object; determining a target background image matched with the timestamp of the target frame; removing a target object in the target background image, and extracting image feature points of the target background image after the target object is removed; comparing the background feature points with the image feature points, and fitting a target object in the target frame to the target background image according to the position relation to obtain a first synthetic image; the first composite image is combined into a first composite video.
15. A target tracking method, applied to a central server, comprising:
acquiring a target object uploaded by first terminal equipment;
generating a target tracking task based on the target object, and sending the target tracking task to a plurality of second terminal devices; the second terminal equipment is enabled to acquire second monitoring data of a target object corresponding to the target tracking task, wherein the second monitoring data comprises a background image and a motion video;
Acquiring a first synthesized video uploaded by an edge server and used for tracking the target object; and the first synthesized video is obtained by fitting the second monitoring data through the edge server.
16. The target tracking method of claim 15, wherein the method further comprises:
acquiring the second monitoring data uploaded by the second terminal equipment;
fitting the second monitoring data to generate a second synthesized video for tracking the target object; the second synthesized video has higher precision than the first synthesized video;
and storing the second synthesized video.
17. The method of claim 15, wherein fitting the second monitoring data to generate a second composite video that tracks the target object comprises:
and reconstructing the super-resolution image of the motion video based on the background image to generate a second synthesized video for tracking the target object.
18. The target tracking method of claim 15, wherein the second terminal device comprises a camera;
The sending the target tracking task to the plurality of second terminal devices includes:
and sending the target tracking task and the video resolution to a plurality of second terminal devices so that the camera acquires the motion video of the target object corresponding to the target tracking task according to the video resolution.
19. The target tracking method of claim 15, wherein the second terminal device comprises an unmanned aerial vehicle;
the sending the target tracking task to the plurality of second terminal devices includes:
sending the target tracking task and an image transmission quantity to the plurality of second terminal devices, so that the unmanned aerial vehicle transmits the image transmission quantity of background images to the edge server within a preset time period.
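The per-period cap on background-image transmission reads like a fixed-window rate limit. A minimal sketch, with all names (`ImageTransmitter`, `try_send`) and the drop-when-full policy assumed for illustration:

```python
# Hedged sketch of the claimed cap: the drone forwards at most `quantity`
# background images per `period` seconds; further images in the same
# window are rejected (a real drone might queue them instead).

class ImageTransmitter:
    def __init__(self, quantity, period):
        self.quantity = quantity       # images allowed per period
        self.period = period           # window length in seconds
        self.window_start = None
        self.sent_in_window = 0

    def try_send(self, now):
        """Return True if an image may be transmitted at time `now`."""
        if self.window_start is None or now - self.window_start >= self.period:
            self.window_start = now    # open a fresh window
            self.sent_in_window = 0
        if self.sent_in_window < self.quantity:
            self.sent_in_window += 1
            return True
        return False                   # cap reached within this window

tx = ImageTransmitter(quantity=2, period=10.0)
results = [tx.try_send(t) for t in (0.0, 1.0, 2.0, 11.0)]
print(results)   # third image in the first window is rejected
```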
20. The target tracking method of claim 15, wherein the method further comprises:
receiving a network load of a base station and computing power usage information of the edge server;
analyzing the network load and the computing power usage information to generate a routing scheduling instruction for each second terminal device; and
sending the routing scheduling instruction to the corresponding second terminal device.
21. The method of claim 20, wherein the analyzing the network load and the computing power usage information to generate a routing scheduling instruction for each second terminal device comprises:
determining a plurality of alternative routes according to the network load;
determining a target network node matching the target tracking task according to the computing power usage information;
deleting alternative routes that include network nodes other than the target network node; and
determining, from the remaining alternative routes, a target route whose transmission speed meets a preset condition, and generating the routing scheduling instruction for each second terminal device based on the target route.
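The claimed route selection is a filter-then-pick procedure. A small sketch under stated assumptions: each candidate route is a dict with an assumed `edge_node` and `speed_mbps`, the "preset condition" is a minimum speed, and ties go to the fastest route.

```python
# Illustrative route scheduling: drop routes that traverse any edge node
# other than the task's target node, require the speed threshold, then
# take the fastest remaining route. Data shapes are assumptions.

def schedule_route(candidates, target_node, min_speed):
    # Keep only routes whose edge-server hop is the task's target node.
    viable = [r for r in candidates if r["edge_node"] == target_node]
    # Among those, require the transmission-speed condition.
    fast = [r for r in viable if r["speed_mbps"] >= min_speed]
    if not fast:
        return None                    # no route satisfies the condition
    return max(fast, key=lambda r: r["speed_mbps"])["route_id"]

candidates = [
    {"route_id": "r1", "edge_node": "edge-A", "speed_mbps": 80},
    {"route_id": "r2", "edge_node": "edge-B", "speed_mbps": 120},
    {"route_id": "r3", "edge_node": "edge-A", "speed_mbps": 95},
]
print(schedule_route(candidates, target_node="edge-A", min_speed=90))  # "r3"
```

Note that r2 is fastest overall but is deleted because it routes through a non-target node, matching the claim's deletion step.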
22. The target tracking method of claim 20, wherein the receiving a network load of a base station and computing power usage information of the edge server comprises:
receiving the network load of the base station and the computing power usage information of the edge server in real time;
the analyzing the network load and the computing power usage information to generate a routing scheduling instruction for each second terminal device comprises:
in a case that the current network load is greater than a preset threshold, analyzing the network load and the computing power usage information, and updating the routing scheduling instruction for each second terminal device.
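The real-time trigger in this claim amounts to re-running the scheduling analysis only when a load sample crosses the preset threshold. A minimal sketch (the sample format and event naming are assumptions):

```python
# Hedged sketch: scan real-time load samples and emit a rescheduling
# event only for samples above the preset threshold, so routing
# instructions are updated lazily rather than on every sample.

def update_on_load(samples, threshold):
    """samples: list of (timestamp, load); returns rescheduling events."""
    events = []
    for ts, load in samples:
        if load > threshold:
            events.append((ts, "update_routing"))
    return events

samples = [(0, 0.4), (1, 0.9), (2, 0.5), (3, 0.95)]
print(update_on_load(samples, threshold=0.8))  # updates at t=1 and t=3
```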
23. The method of claim 20, wherein the analyzing the network load and the computing power usage information to generate a routing scheduling instruction for each second terminal device comprises:
generating a plurality of target tracking tasks based on a plurality of target objects, respectively, and determining a priority of each target tracking task; and
generating the routing scheduling instruction for each second terminal device based on the priority, the network load, and the computing power usage information.
24. The target tracking method of claim 23, wherein the receiving a network load of a base station and computing power usage information of the edge server comprises:
receiving the network load of the base station and the computing power usage information of the edge server in real time;
after the generating a plurality of target tracking tasks based on a plurality of target objects and determining a priority of each target tracking task, the method further comprises:
in a case that the current network load is greater than the preset threshold, determining, according to the priorities, a reduction value of the video resolution and/or the image transmission quantity corresponding to each target tracking task, to obtain a reduced video resolution and/or a reduced image transmission quantity.
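The claim specifies priority-dependent degradation under load but not the reduction rule. The sketch below assumes one plausible rule (quality scaled by priority relative to the highest priority); the task dict layout and function names are likewise illustrative.

```python
# Hedged sketch: when load exceeds the threshold, reduce each task's
# video resolution and image-transmission quantity, with lower-priority
# tasks reduced more. The proportional rule here is an assumption.

def degrade(tasks, load, threshold):
    """Return {task_id: (resolution, images)} after any reduction."""
    if load <= threshold:
        return {t["id"]: (t["resolution"], t["images"]) for t in tasks}
    max_prio = max(t["priority"] for t in tasks)
    out = {}
    for t in tasks:
        # Highest-priority tasks keep full quality; lower ones are scaled down.
        factor = t["priority"] / max_prio
        out[t["id"]] = (int(t["resolution"] * factor), int(t["images"] * factor))
    return out

tasks = [
    {"id": "t1", "priority": 2, "resolution": 1080, "images": 30},
    {"id": "t2", "priority": 1, "resolution": 1080, "images": 30},
]
print(degrade(tasks, load=0.95, threshold=0.8))
```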
25. A target tracking method, applied to an edge server, comprising:
receiving second monitoring data of a target object corresponding to a target tracking task, collected by a second terminal device;
performing fitting processing on the second monitoring data to generate a first composite video that tracks the target object; and
transmitting the first composite video to a central server.
26. The method of claim 25, wherein the performing fitting processing on the second monitoring data to generate a first composite video that tracks the target object comprises:
extracting background feature points of a background area other than the target object in a target frame of the motion video, and a positional relationship between the background feature points and the target object;
determining a target background image matching the timestamp of the target frame;
removing the target object from the target background image, and extracting image feature points of the target background image with the target object removed; and
comparing the background feature points with the image feature points, and fitting the target object in the target frame onto the target background image according to the positional relationship, to obtain a first composite image, wherein first composite images are combined into the first composite video.
27. A target tracking method, applied to a second terminal device, comprising:
acquiring a target tracking task generated by a central server based on a target object;
collecting second monitoring data of the target object corresponding to the target tracking task; and
transmitting the second monitoring data to an edge server, so that the edge server performs fitting processing on the second monitoring data to generate a first composite video that tracks the target object.
28. The target tracking method of claim 27, wherein the method further comprises:
and transmitting the second monitoring data to the central server.
29. The target tracking method of claim 27, wherein the second terminal device is a camera or an unmanned aerial vehicle, and the method further comprises:
in a case that the second terminal device is a camera, collecting a motion video of the target object corresponding to the target tracking task, and transmitting the motion video to the edge server; and
in a case that the second terminal device is an unmanned aerial vehicle, collecting a background image of the target object corresponding to the target tracking task, and transmitting the background image to the edge server.
30. The method of claim 29, wherein there are a plurality of cameras, and the collecting a motion video of the target object corresponding to the target tracking task and transmitting the motion video to the edge server comprises:
in a case that there are a plurality of target tracking tasks, collecting, by each camera respectively, a motion video of the target object corresponding to one of the target tracking tasks, and transmitting the motion video to the edge server.
31. The target tracking method of claim 29, wherein the acquiring a target tracking task generated by a central server based on the target object comprises:
acquiring a target tracking task and a video resolution generated by the central server based on the target object; and
the collecting a motion video of the target object corresponding to the target tracking task and transmitting the motion video to the edge server comprises:
collecting the motion video of the target object corresponding to the target tracking task at the video resolution, and transmitting the motion video to the edge server.
32. The target tracking method of claim 29, wherein the acquiring a target tracking task generated by a central server based on the target object comprises:
acquiring a target tracking task and an image transmission quantity generated by the central server based on the target object; and
the collecting a background image of the target object corresponding to the target tracking task and transmitting the background image to the edge server comprises:
collecting background images of the target object corresponding to the target tracking task, and transmitting the image transmission quantity of background images to the edge server within a preset time period.
33. The target tracking method of claim 27, wherein the method further comprises:
acquiring a routing scheduling instruction sent by a central server;
determining a data transmission path according to the routing scheduling instruction;
and transmitting the second monitoring data to an edge server based on the data transmission path.
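Claim 33's path-following step can be sketched as parsing the routing scheduling instruction into an ordered hop list and pushing data along it. The instruction format (`"hop > hop"`) and the injected `send_hop` callback are assumptions made purely for this example.

```python
# Illustrative sketch: derive a data transmission path from an assumed
# routing-scheduling instruction, then forward the monitoring data
# through each hop in order.

def parse_instruction(instruction):
    """Split an assumed 'hop > hop > ...' instruction into an ordered path."""
    return [hop.strip() for hop in instruction.split(">")]

def transmit(data, path, send_hop):
    """Push `data` through each hop; `send_hop` performs the actual I/O."""
    for hop in path:
        data = send_hop(hop, data)
    return data

log = []
def fake_send(hop, data):
    log.append(hop)          # record the hop instead of real network I/O
    return data

path = parse_instruction("base-1 > edge-A")
transmit({"frame": 1}, path, fake_send)
print(path, log)
```

Injecting the send function keeps the routing logic testable without any network stack, which is why the sketch separates path parsing from transmission.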
34. A target tracking apparatus, applied to a central server, comprising:
a first acquisition module, configured to acquire a target object uploaded by a first terminal device;
a generating module, configured to generate a target tracking task based on the target object and send the target tracking task to a plurality of second terminal devices, so that the second terminal devices collect second monitoring data of the target object corresponding to the target tracking task, wherein the second monitoring data comprise a background image and a motion video; and
a second acquisition module, configured to acquire a first composite video, uploaded by an edge server, that tracks the target object, wherein the first composite video is obtained by the edge server performing fitting processing on the second monitoring data.
35. A target tracking apparatus, applied to an edge server, comprising:
a receiving module, configured to receive second monitoring data of a target object corresponding to a target tracking task, collected by a second terminal device;
a fitting module, configured to perform fitting processing on the second monitoring data to generate a first composite video that tracks the target object; and
a transmission module, configured to transmit the first composite video to a central server.
36. A target tracking apparatus, applied to a second terminal device, comprising:
an acquisition module, configured to acquire a target tracking task generated by a central server based on a target object;
a collection module, configured to collect second monitoring data of the target object corresponding to the target tracking task; and
a transmission module, configured to transmit the second monitoring data to an edge server, so that the edge server performs fitting processing on the second monitoring data to generate a first composite video that tracks the target object.
37. An electronic device, comprising:
a processor;
a memory for storing instructions executable by the processor;
wherein the processor is configured to execute the instructions to implement the target tracking method according to any one of claims 15 to 24, claims 25 to 26, or claims 27 to 33.
38. A readable storage medium, on which a program or instructions are stored, wherein the program or instructions, when executed by a processor, implement the steps of the target tracking method according to any one of claims 15 to 24, claims 25 to 26, or claims 27 to 33.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311147868.XA CN117372469A (en) | 2023-09-06 | 2023-09-06 | Target tracking system, method, device, equipment and medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN117372469A true CN117372469A (en) | 2024-01-09 |
Family
ID=89406710
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311147868.XA Pending CN117372469A (en) | 2023-09-06 | 2023-09-06 | Target tracking system, method, device, equipment and medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117372469A (en) |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |