CN115665552A - Cross-mirror tracking method and device, electronic equipment and readable storage medium - Google Patents


Info

Publication number: CN115665552A
Application number: CN202211003302.5A
Authority: CN (China)
Other languages: Chinese (zh)
Inventor: 罗莉
Current and original assignee: Chongqing Unisinsight Technology Co Ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Prior art keywords: cameras, camera, target object, target, circle
Application filed by Chongqing Unisinsight Technology Co Ltd
Priority to CN202211003302.5A
Publication of CN115665552A

Landscapes

  • Closed-Circuit Television Systems (AREA)

Abstract

The invention provides a cross-mirror tracking method, a cross-mirror tracking device, an electronic device, and a readable storage medium, wherein the method comprises the following steps: if none of the cameras in the current camera monitoring circle acquires an image of the target object within the theoretical occurrence time period and a preset buffering time period, enlarging the current camera monitoring circle; starting a video search mechanism based on all cameras in the enlarged camera monitoring circle, and determining the target video recording time at which the target object is found; if the target video recording time is later than the last appearance time of the target object in the current camera monitoring circle, updating the current camera monitoring circle based on the camera in whose recording the target object appears; and tracking the target object based on the updated current camera monitoring circle. Once timeout detection determines that the current camera monitoring circle has failed, the target object is re-traced via the video search mechanism and the current camera monitoring circle is updated based on the camera that tracked the target object, thereby improving target tracking accuracy.

Description

Cross-mirror tracking method and device, electronic equipment and readable storage medium
Technical Field
The invention relates to the technical field of cross-mirror tracking, and in particular to a cross-mirror tracking method and device, an electronic device, and a readable storage medium.
Background
Cross-mirror tracking is a technique for effectively identifying and retrieving pedestrians or objects across cameras or scenes, i.e., re-identifying them under different camera lenses so that a target can be tracked from lens to lens. With the near-full coverage of camera equipment, improvements in the efficiency of target recognition and analysis technology, and the development and application of GIS technology, cross-mirror tracking has become widely used in the security industry.
At present, many similar cross-mirror tracking applications exist. Generally, a prevention-and-control circle is set according to path planning, target recognition is performed on the images captured by each device in the circle, the recognized features are compared against those of the tracked object, and finally the comparison result is returned to the user and a historical travel track for the target is generated.
However, owing to uncertainties in the actual field environment, when the devices in the control circle fail to capture the tracked object and the target is lost, the prior art cannot correct the current control circle by itself. Path planning is used to predict the tracked object's trajectory and to dispatch target recognition, comparison, and tracking tasks to the cameras along that trajectory; in scenes where longitude and latitude cannot reflect the device relationships, such as malls, stations, and mountain cities, path planning may therefore fail, the devices belonging to the control circle cannot be determined, tracking cannot continue, and a high level of tracking precision cannot be achieved.
Disclosure of Invention
An object of the present invention is to provide a cross-mirror tracking method and device, an electronic device, and a readable storage medium that adaptively correct the camera monitoring circle and improve tracking accuracy. Embodiments of the present invention can be implemented as follows:
In a first aspect, the present invention provides a cross-mirror tracking method, comprising: if none of the cameras in the current camera monitoring circle acquires an image of the target object within the theoretical occurrence time period and a preset buffering time period, enlarging the current camera monitoring circle; starting a video search mechanism based on all the cameras in the enlarged camera monitoring circle, and determining the target video recording time at which the target object is found; if the target video recording time is later than the last appearance time of the target object in the current camera monitoring circle, updating the current camera monitoring circle based on the camera in whose recording the target object appears; and tracking the target object based on the updated current camera monitoring circle.
In a second aspect, the present invention provides a cross-mirror tracking device, comprising: a determining module, configured to enlarge the current camera monitoring circle if none of the cameras in the current camera monitoring circle acquires an image of the target object within the theoretical occurrence time period and a preset buffering time period; a searching module, configured to start a video search mechanism based on all the cameras in the enlarged camera monitoring circle and determine the target video recording time at which the target object is found; an updating module, configured to update the current camera monitoring circle based on the camera in whose recording the target object appears, if the target video recording time is later than the last appearance time of the target object in the current camera monitoring circle; and a tracking module, configured to track the target object based on the updated current camera monitoring circle.
In a third aspect, the present invention provides an electronic device comprising a processor and a memory, the memory storing a computer program that, when executed by the processor, implements the method of the first aspect.
In a fourth aspect, the invention provides a readable storage medium having stored thereon a computer program which, when executed by a processor, performs the method of the first aspect.
The invention provides a cross-mirror tracking method, a cross-mirror tracking device, an electronic device, and a readable storage medium. In the method, if none of the cameras in the current camera monitoring circle acquires an image of the target object within the theoretical occurrence time period and a preset buffering time period, the current camera monitoring circle is enlarged; a video search mechanism is started based on all the cameras in the enlarged camera monitoring circle, and the target video recording time at which the target object is found is determined; if the target video recording time is later than the last appearance time of the target object in the current camera monitoring circle, the current camera monitoring circle is updated based on the camera in whose recording the target object appears; and the target object is tracked based on the updated current camera monitoring circle. Timeout detection thus determines whether the target object has been lost and whether the current camera monitoring circle has failed; after a failure, the target object can be re-traced via the video search mechanism, and once the target's reappearance time is later than its last appearance time in the current prevention-and-control circle, the circle is updated based on the camera that tracked the target object. The whole process corrects the camera monitoring circle and reduces the probability of losing the target.
Drawings
In order to illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings needed in the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present invention and should therefore not be considered as limiting its scope; those skilled in the art can derive other related drawings from these drawings without inventive effort.
Fig. 1 is a schematic view of a scenario provided by an embodiment of the present invention;
fig. 2 is a block diagram of an electronic device according to an embodiment of the present invention;
FIG. 3 is a schematic flow chart of a cross-mirror tracking method according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a custom camera set provided by an embodiment of the present invention;
fig. 5 is a schematic diagram of a self-defined camera set generated by a resource number selection and a map click according to an embodiment of the present invention;
FIG. 6 is an exemplary diagram of a multi-graph feature fusion provided by an embodiment of the present invention;
FIG. 7 is a schematic flow chart of another cross-mirror tracking method according to an embodiment of the present invention;
FIG. 8 is an exemplary diagram of automatically correcting a camera monitoring circle provided by an embodiment of the present invention;
FIG. 9 is a schematic flow chart of automatically correcting a camera monitoring circle provided by an embodiment of the present invention;
fig. 10 is a functional block diagram of a cross-mirror tracking device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all embodiments of the present invention. The components of embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be obtained by a person skilled in the art without inventive step based on the embodiments of the present invention, are within the scope of protection of the present invention.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
In the description of the present invention, it should be noted that terms such as "upper", "lower", "inside", and "outside", where used, indicate orientations or positional relationships based on those shown in the drawings or those in which the products of the invention are conventionally placed in use. They are used only for convenience and simplicity of description and do not indicate or imply that the devices or elements referred to must have a specific orientation or be constructed and operated in a specific orientation; they should therefore not be construed as limiting the invention.
Furthermore, the appearances of the terms "first," "second," and the like, if any, are used solely to distinguish one from another and are not to be construed as indicating or implying relative importance.
It should be noted that the features of the embodiments of the present invention may be combined with each other without conflict.
Referring to fig. 1, fig. 1 is a schematic view of a scene according to an embodiment of the present invention, which includes a terminal device 101, a plurality of cameras (e.g., camera 102-1 to camera 102-n), and a server 103, wherein the terminal device 101, the camera 102-1 to camera 102-n, and the server 103 are communicatively connected through a network 104.
The cameras 102-1 to 102-n are configured to collect images or videos within a monitoring range, send the collected images or videos to the server 103 through a network, and the server 103 stores the images in a database or performs corresponding service processing on the images or videos. In addition, the terminal device 101 may send a service request to the server 103, and after receiving the service request, the server 103 may perform corresponding service processing on the image or the video, and feed back a processing result to the terminal device 101 for display.
Taking cross-mirror tracking as an example: cameras are analyzed in real time using target recognition technology, snapshot images containing the tracked target are extracted and pushed to the user, the tracked target is checked in real time during tracking using feature recognition and comparison, and a dynamic prevention-and-control circle (i.e., the camera monitoring circle provided by the embodiment of the invention) is generated in combination with GIS path planning to predict the tracked target's direction of movement and track it continuously. The tracked target may be, but is not limited to, a person, a vehicle, an animal, and the like, which is not limited here.
For example, in the scenario of fig. 1 where the tracked target is a person: after an initial device of interest sends a captured human-body image to the server 103, the server 103 feeds the captured images containing persons back to the terminal device 101, which displays them on screen. The user selects at least one human-body image, the terminal device 101 returns the selection to the server 103, and the server 103 starts cross-mirror tracking: a control circle is generated according to path planning (in fig. 1, cameras 102-1 to 102-n belong to the same control circle), and the server 103 issues a tracking task to cameras 102-1 to 102-n. After receiving the task, the cameras feed their acquired images back to the server 103, which performs target recognition on each image, matches the recognized human-body features against those of the selected person, feeds the comparison result back to the terminal device 101, and generates the person's track.
However, owing to the uncertainty of the actual field environment, existing cross-mirror tracking has the following disadvantages:
1. Only the human body is used as the tracking object, so persons with similar builds, or a person who changes clothes midway, are easily misjudged or lost. For example, unclear human-body features can cause misjudgment when a passer-by with a similar build and clothing appears, or cause a person who changes clothes midway to be lost; likewise, a person under a lens may go unreported because of occlusion, lighting, and similar factors, so that the person is lost.
2. When the cameras fail to capture the person or the target is lost, the current prevention-and-control circle can no longer capture the target and cannot correct itself.
3. Because indoor positioning is inaccurate, or points with the same longitude and latitude may lie in the same vertical space, path planning fails, the devices in the prevention-and-control circle cannot be planned, and tracking cannot continue. For example, a person's trajectory is predicted by path planning and a prevention-and-control circle is generated for the cameras along it; in a mall, a station, or a mountain city, where longitude and latitude cannot reflect the device relationships, the dynamic prevention-and-control circle generated from path planning fails and tracking cannot continue.
To solve the above technical problems, the cross-mirror tracking method provided by the embodiments of the present invention does the following. First, to reduce the target loss rate, it updates the system's control circle by actively switching the real-time tracking camera, performs a video check on timeout, and corrects errors using the latest track, thereby reducing the probability of losing the target. Second, to improve target accuracy, it raises the accuracy of tracking-target reporting through double verification of face and human-body features, feature fusion from a manually confirmed second angle image, and similar measures. Finally, it provides a combined indoor-outdoor tracking method suitable for scenes where longitude and latitude cannot express the device relationships.
Referring to fig. 2, fig. 2 is a block diagram of an electronic device according to an embodiment of the present invention, where the electronic device may be used to execute the cross-mirror tracking method according to the embodiment of the present invention, for example, but not limited to the server 103 or the terminal device 101 in fig. 1.
As shown in fig. 2, the electronic device 200 comprises a memory 201, a processor 202 and a communication interface 203, wherein the memory 201, the processor 202 and the communication interface 203 are electrically connected to each other directly or indirectly to realize data transmission or interaction. For example, the components may be electrically connected to each other via one or more communication buses or signal lines.
The memory 201 may be used to store software programs and modules, such as the instructions/modules of the cross-mirror tracking apparatus 400 provided in the embodiment of the present invention, which may be stored in the memory 201 in the form of software or firmware or fixed in the operating system (OS) of the electronic device 200. The processor 202 executes the software programs and modules stored in the memory 201 to perform various functional applications and data processing. The communication interface 203 may be used for signaling or data communication with other node devices.
The memory 201 may be, but is not limited to, a random access memory (RAM), a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), and the like.
The processor 202 may be an integrated circuit chip having signal processing capabilities. The processor 202 may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), and the like; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
It will be appreciated that the configuration shown in fig. 2 is merely illustrative and that electronic device 200 may include more or fewer components than shown in fig. 2 or may have a different configuration than shown in fig. 2. The components shown in fig. 2 may be implemented in hardware, software, or a combination thereof.
Referring to fig. 3, fig. 3 is a schematic flowchart of a cross-mirror tracking method according to an embodiment of the present invention, where an execution subject of the method may be the server 103 in fig. 1, and the method includes the following steps:
s301, if all cameras in the current camera monitoring circle do not acquire images of the target object within the theoretical occurrence time period and the preset buffering time period, expanding the current camera monitoring circle;
s302, starting a video searching mechanism based on all cameras in the enlarged camera monitoring circle, and determining target video time for searching out a target object;
s303, if the target video recording time is longer than the last occurrence time of the target object in the current camera monitoring circle, updating the current camera monitoring circle based on the camera of the target object in the target video recording time;
s304, tracking the target object based on the updated current camera monitoring circle.
According to the cross-mirror tracking method provided by the embodiment of the present invention, if none of the cameras in the current camera monitoring circle acquires an image of the target object within the theoretical occurrence time period and a preset buffering time period, this indicates that the cameras in the prevention-and-control circle have not captured the target object and the target has been lost; that is, the current camera monitoring circle has failed and needs to be adjusted, so it is enlarged. A video search mechanism is then started based on all the cameras in the enlarged camera monitoring circle, and the target video recording time of the target object is determined. If the target video recording time is later than the last appearance time of the target object in the current camera monitoring circle, this indicates that the enlarged camera monitoring circle can track the target object; at this point, the current camera monitoring circle is updated based on the camera in whose recording the target object appears, and the target object is tracked based on the updated current camera monitoring circle.
It can be seen that, in the embodiment of the present invention, timeout detection determines whether the target object has been lost and hence whether the current camera monitoring circle has failed. After the circle has failed, the target object can be re-traced through the video search mechanism, and once the target's reappearance time is later than its last appearance time in the current circle, the circle is updated based on the camera that tracked the target object. The whole process corrects the camera monitoring circle and reduces the probability of losing the target.
Steps S301 to S304 above will now be described in detail with reference to figs. 4 to 8.
In step S301, if none of the cameras in the current camera monitoring circle acquires an image of the target object within the theoretical occurrence time period and a preset buffering time period, the current camera monitoring circle is enlarged.
In the embodiment of the invention, the theoretical occurrence time period can be determined by the following steps:
a1. Determine the maximum distance between the master camera and the several slave cameras in the current camera monitoring circle.
a2. Determine the theoretical occurrence time period based on the maximum distance and a preset moving speed.
in order to meet the tracking service, in the current camera monitoring circle, the master camera and the slave camera can be switched with each other, the master camera can be adaptively switched, for example, the master camera can be manually switched, or the master camera can be adaptively switched by the server according to an actual tracking result.
For step a1, in an alternative embodiment, the distance between the master camera and each slave camera may be calculated first, and the maximum among all these distances is then taken.
For step a2, the preset moving speed simulates a person's walking speed and can be set according to the actual scene. Dividing the maximum distance by the preset moving speed gives the theoretical occurrence time period; the buffering time period can be set according to actual requirements.
In the embodiment of the present invention, the current time may be taken as the starting point. If the target object is not found within the theoretical occurrence time period (that is, no image acquired by any camera is recognized as containing the target object), it is further determined whether the target object appears within the current camera monitoring circle during the buffering time period; if it still does not appear, the target object is considered lost within the current camera monitoring circle.
For example, suppose the current time is 9:00:00, the preset moving speed is 1 m/s, the maximum distance is 50 m, the buffering time period is 2 minutes, and the theoretical occurrence time period is therefore 50 seconds. Starting from 9:00, if the target object appears within 2 minutes 50 seconds, it is still being tracked in the current camera monitoring circle; if it does not appear within 2 minutes 50 seconds after 9:00, the current camera monitoring circle has failed and needs to be updated.
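The computation in steps a1 and a2 and the numeric example above can be written out as follows. Planar (x, y) coordinates in metres are an assumption made for this sketch; a real deployment would likely use geodesic distances between camera locations.

```python
def timeout_window_seconds(master_pos, slave_positions, speed_m_per_s, buffer_s):
    """Theoretical occurrence period = (max master-to-slave distance) / speed;
    the full timeout window additionally includes the buffering period."""
    def dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
    max_distance = max(dist(master_pos, p) for p in slave_positions)
    theoretical = max_distance / speed_m_per_s
    return theoretical, theoretical + buffer_s

# Numbers from the example: max distance 50 m, speed 1 m/s, buffer 2 minutes.
theoretical, window = timeout_window_seconds((0, 0), [(30, 40), (10, 0)], 1.0, 120)
# theoretical is 50 s; window is 170 s, i.e. 2 min 50 s after 9:00.
```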
After the current camera monitoring circle is determined to have failed, it may be enlarged; the range of the enlarged circle may be several times that of the circle before enlargement. For example, if there are 5 cameras in the current camera monitoring circle, there may be 10 cameras in the enlarged circle.
In an actual scene, when the current camera monitoring circle is enlarged, cameras near the current prevention-and-control circle can be pulled into it: other cameras near an edge camera of the circle may be pulled in, taking the edge camera as a reference; alternatively, taking the master camera as the center, cameras whose distance from the master camera falls within a preset range may be pulled into the monitoring circle. This is not limited here.
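One of the enlargement strategies just mentioned, pulling in every camera within a preset distance of the master camera, might look like the sketch below. The camera ids, positions, and the 60 m radius are all illustrative assumptions.

```python
def expand_circle(master_id, circle_ids, positions, radius_m):
    """Add every camera within `radius_m` of the master camera to the circle.
    `positions` maps camera id -> (x, y) in metres."""
    mx, my = positions[master_id]
    near = {cid for cid, (x, y) in positions.items()
            if ((x - mx) ** 2 + (y - my) ** 2) ** 0.5 <= radius_m}
    return set(circle_ids) | near

# Hypothetical layout: master "A"; "D" lies outside the 60 m radius.
positions = {"A": (0, 0), "B": (30, 0), "C": (0, 50), "D": (100, 0)}
expanded = expand_circle("A", {"A", "B"}, positions, radius_m=60)
# expanded now also contains "C" but not "D".
```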
In step S302, a video search mechanism is started based on all the cameras in the enlarged camera monitoring circle, and the target video recording time at which the target object is found is determined.
As can be seen from the above, the enlarged current camera monitoring circle contains some new cameras, all of which are near the original circle. When the target is lost in the current circle, the target object should, in theory, still be near it, so performing the video search over the enlarged circle improves the probability of tracking the target object.
In the embodiment of the invention, the video search mechanism searches for the target object in the video collected by each camera in the enlarged current camera monitoring circle until the target object is found. The video analysis tasks can be distributed evenly according to the remaining computing power of the current graphics cards.
The last appearance time of the target object in the current camera monitoring circle is determined as follows: for example, if the current time is 9:00 and the 5 cameras in the current circle acquired images of the target object at 8:00, 8:10, 8:20, 8:30, and 8:40 respectively, after which the target object did not appear again, then 8:40 is the last appearance time.
The recording period to be searched may be set to run from around the time the target object last appeared in the current camera monitoring circle up to the current time. For example, if the last appearance time of the target object is 8:50 and the current time is 9:00, the recording period may be set to 8:45 to 9:00.
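The last-appearance time and the recording search window from the two examples above can be sketched as follows. The 5-minute margin before the last appearance is inferred from the 8:50 → 8:45 example and is otherwise an assumption.

```python
from datetime import datetime, timedelta

def last_appearance(sighting_times):
    """Most recent time any camera in the circle saw the target."""
    return max(sighting_times)

def recording_window(last_seen, now, margin=timedelta(minutes=5)):
    """Recording period handed to the video search: from a little before the
    target's last appearance in the circle up to the current time."""
    return last_seen - margin, now

seen = last_appearance([datetime(2022, 8, 1, 8, 30), datetime(2022, 8, 1, 8, 50)])
start, end = recording_window(seen, datetime(2022, 8, 1, 9, 0))
# seen is 8:50; the searched window is 8:45 to 9:00.
```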
Thus, in an optional embodiment, after the video search mechanism is started, the images of the target object that satisfy the face-feature similarity and human-body-feature similarity requirements are pushed to the front-end device. Through the front-end device, the user can select "confirm target track" according to actual needs; after receiving the user's selection instruction, the device determines the video recording time corresponding to the selected image of the target object, compares it with the last appearance time (i.e., the latest track time) of the target object in the current camera monitoring circle, and decides whether to update the circle according to the comparison result.
In step S303, if the target video recording time is later than the last appearance time of the target object in the current camera monitoring circle, the current camera monitoring circle is updated based on the camera in whose recording the target object appears.
For example, if the last appearance time of the target object in the current camera monitoring circle is 8:50 and the target video recording time of the newly found target is 8:55, the camera monitoring circle needs to be updated; if the target video recording time of the newly found target is 8:45, it does not.
Therefore, steps S302 and S303 above can be understood as follows: start the video search mechanism based on all cameras in the enlarged camera monitoring circle; push the found images of the target object that satisfy the face-feature and human-body-feature similarity requirements to the front-end device; after the front-end device receives the user's selection instruction, determine the selected image of the target object and the target video recording time and camera corresponding to it; and if the target video recording time is later than the last appearance time of the target object in the current camera monitoring circle, update the current camera monitoring circle with that camera as the center camera. Otherwise, do not update the current camera monitoring circle.
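The double face/body verification used to decide which search hits are pushed to the front-end device could be filtered as below. The threshold values 0.8 and 0.75, and all ids and times, are illustrative assumptions, not values from the text.

```python
def candidate_images(detections, face_thresh=0.8, body_thresh=0.75):
    """Keep only hits that pass BOTH the face-feature and the human-body-feature
    similarity checks; each detection is a tuple of
    (camera_id, record_time, face_similarity, body_similarity)."""
    return [d for d in detections
            if d[2] >= face_thresh and d[3] >= body_thresh]

hits = candidate_images([
    ("cam-7", "08:55", 0.91, 0.88),  # passes both checks -> pushed to the user
    ("cam-9", "08:57", 0.95, 0.40),  # body similarity too low -> dropped
])
```

Only the surviving hits would be shown to the user for the "confirm target track" selection.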
It can be understood that each selection operation the user performs on the front-end device determines one target video recording time and one camera. If the condition in step S303 is satisfied, the current camera monitoring circle is updated with the determined camera as the center camera; otherwise it is left unchanged. This process enables close tracking of the target object and improves the monitoring effect of the camera monitoring circle.
Therefore, in an alternative embodiment, the step S303 can be performed as follows:
b1. If the target object is located in an outdoor environment, determine the adjacent cameras corresponding to the camera based on the camera's position and road network data, and update the current camera monitoring circle based on the adjacent cameras and the camera.
In an alternative embodiment, the moving path of the target object may be predicted from the camera's position and the road network data, and the current camera monitoring circle may be updated with the cameras located on the predicted moving path together with the camera that captured the target object at the target video recording time.
b2. If the target object is located in an indoor environment, update the current camera monitoring circle based on a pre-generated custom camera set, where the custom camera set contains the adjacent cameras corresponding to each camera.
It can be understood that in indoor and similar scenes where cameras have no longitude/latitude, or share the same longitude/latitude, the camera monitoring circle generated by the above method cannot reflect the relationships between devices. The embodiment of the present invention therefore first provides an implementation for building an indoor camera monitoring circle. Referring to fig. 4, fig. 4 is a schematic diagram of a custom camera set provided by an embodiment of the present invention; as can be seen from fig. 4, the custom camera set can express the device relationship between indoor floor 1 (1F) and floor 2 (2F).
The custom camera set can be created in three ways: resource tree selection, map click, and table import.
In the resource tree selection and map click modes, a center camera and several adjacent cameras around it are manually selected from the basic device library, establishing the spatial relationship between the cameras. As shown in fig. 5, fig. 5 is a schematic diagram of generating a custom camera set by resource tree selection and map click according to an embodiment of the present invention.
In the table import mode, data are manually filled in according to the following template and imported to generate a custom camera table; the imported devices must exist in the basic device library. The generated custom camera table is shown in Table 1:
TABLE 1
(Table 1 is reproduced as images in the original publication and is not recoverable here.)
Therefore, when the target object is tracked in an indoor environment, the camera monitoring circle can be generated from the pre-generated custom camera set. Since the custom camera set records all adjacent cameras corresponding to each center camera, several adjacent cameras can be selected (e.g., at random) from it to update the current camera monitoring circle.
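The indoor update in step b2 can be sketched as a plain adjacency lookup over the custom camera set. The dictionary layout and the names `CUSTOM_CAMERA_SET` / `update_indoor_circle` are illustrative assumptions, not identifiers from the patent.

```python
# Hypothetical in-memory form of a custom camera set: each center camera
# maps to the neighbor cameras registered for it (e.g. across floors 1F/2F).
CUSTOM_CAMERA_SET = {
    "1F_lobby": ["1F_corridor", "2F_stairs"],
    "2F_stairs": ["1F_lobby", "2F_corridor"],
}


def update_indoor_circle(center_camera, custom_set):
    """Step b2: indoors there is no usable longitude/latitude, so the circle
    is rebuilt purely from the pre-registered adjacency of the center camera."""
    neighbors = custom_set.get(center_camera)
    if neighbors is None:
        raise KeyError(f"{center_camera} is not registered in the custom camera set")
    return {center_camera, *neighbors}
```

A lookup on `"1F_lobby"` would yield a circle spanning both floors, mirroring the 1F/2F relationship shown in fig. 4.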
It can also be understood that when the target object is at the edge of the indoor environment and about to enter the outdoor environment, i.e., in the boundary area between the two, a camera monitoring circle built only from the indoor custom camera set suffers data loss and cannot be completed. To complete the monitoring circle, the camera that captures the target object in the boundary area can be taken as the center camera, and adjacent cameras corresponding to that center camera in the outdoor environment can be pulled in by combining the devices' longitude/latitude information with path planning, forming a new camera monitoring circle that combines indoor and outdoor cameras.
In step S304, the target object is tracked based on the updated current camera surveillance circle.
The embodiment of the invention considers that tracking a target using body features alone may lead to misjudgment or missed detection. Therefore, when the tracked target moves into the view of a high-definition camera, the target recognition algorithm can identify a clear face, and the association between the face and the body is established in real time. The target is confirmed only when the face comparison result and the body comparison result point to the same target, which improves target tracking accuracy.
Therefore, after the current camera monitoring circle is updated, multi-image feature fusion may be performed between the body images of the target object in the target video and the existing body images, and between the face images of the target object in the target video and the existing face images, and tracking may then be performed based on the fused face features and body features. In an optional implementation, step S304 may be performed according to the following steps:
c1, acquiring a plurality of face images to be processed and a plurality of human body images to be processed corresponding to the target object;
The body angle of the target object differs in each body image to be processed. The face images to be processed satisfy the face feature similarity condition, and the body images to be processed satisfy the body feature similarity condition. During real-time tracking, the user can select several body images of different angles and several face images from the pushed face and body images of the target object; the device then completes the target features through multi-image feature fusion based on the images selected by the user, and continues tracking with the fused features.
And c2, performing feature fusion on the plurality of human face images to be processed based on the preset human face organ weight to obtain fused human face features, and performing feature fusion on the plurality of human body images to be processed based on the preset human body angle weight to obtain fused human body features.
In the embodiment of the invention, weights can be set in advance for key facial organs (eyes, nose, forehead, ears); feature fusion is then performed on the face images to be processed, and the per-organ features are combined with these weighting coefficients to obtain the fused face features.
Similarly, body angle weights are set for the body images to be processed; for example, the weight of the front view is 0.4 and the weight of the back view is 0, and the weight for a view offset from the front is computed as: (offset angle from the front / 360°) × side weight of 0.4.
And c3, tracking the target object based on the fused human body features and the fused human face features.
For ease of understanding, please refer to fig. 6, which is an exemplary diagram of multi-image feature fusion according to an embodiment of the present invention. As shown in fig. 6, there are body images at 5 angles, where the front, front-side, and back views each have an offset angle relative to the front-side view, so the 5 body images can be feature-fused based on the weight corresponding to each body angle to obtain the fused body features. Similarly, the 18 face images in fig. 6 can be feature-fused in combination with the preset face organ weights to obtain the fused face features.
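The fusion described in steps c2–c3 can be sketched as a normalized weighted average of per-image feature vectors. The patent does not specify the fusion arithmetic, so the weighted-mean formula and the function names here are our assumptions; `body_angle_weight` merely transcribes the angle formula as stated in the text.

```python
import numpy as np


def fuse_features(features, weights):
    """Fuse per-image feature vectors into one descriptor via a normalized
    weighted average; `weights` play the role of the per-angle (body) or
    organ-driven (face) importance coefficients described in the patent."""
    features = np.asarray(features, dtype=float)   # shape (n_images, dim)
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()              # normalize to sum to 1
    return weights @ features                      # weighted mean over images


def body_angle_weight(offset_deg, side_weight=0.4):
    """Angle weight as stated in the text:
    (offset angle from the front / 360°) × side weight of 0.4."""
    return offset_deg / 360.0 * side_weight
```

For instance, two feature vectors weighted 3:1 fuse to a descriptor that is three-quarters the first vector and one-quarter the second.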
Through this implementation, the accuracy of tracking-target reporting can be improved by double verification with the fused face features and the fused body features, and by having the user manually confirm the angle images a second time before feature fusion.
In an alternative embodiment, the target object may appear after the current camera monitoring circle has been updated, but the user may not receive the reported information because of system delay. In that case, the following steps may be performed so that the camera monitoring circle is corrected:
d1, if a main camera switching instruction input by a user is received, determining the switched main camera in the current camera monitoring circle;
and d2, updating the current camera monitoring circle by taking the main camera as a center.
By actively switching the real-time tracking camera, the user lets the system adaptively update the camera monitoring circle, which reduces the probability of losing the target.
In an optional embodiment, the devices in the dynamic monitoring circle need to reach a certain number during tracking so as to cover the moving path of the tracked target as fully as possible; even outdoors there may be camera devices at the same longitude and latitude that are stacked vertically in space. The monitoring circle should therefore have an automatic strengthening mechanism. Accordingly, the embodiment of the present invention provides an implementation of automatic strengthening of the camera monitoring circle, whose main idea is to use any one or a combination of the following embodiments.
Referring to fig. 7, fig. 7 is a schematic flowchart of another cross-mirror tracking method according to an embodiment of the present invention, where the method may further include:
s305, if the total number of the cameras of the current camera monitoring circle is less than the preset number, determining a target number of cameras to be pulled so that the total number of the cameras is equal to the preset number;
s306, updating the cameras to be pulled with the target number into the current camera monitoring circle;
The cameras to be pulled are any one or a combination of the following: a custom camera; a path planning camera; a straight-line distance camera.
The custom camera is a camera belonging to the custom camera set. The path planning camera is a camera determined based on road network data, specifically: path planning is performed based on the position of the center camera to obtain a target path, then cameras on or near the target path are obtained from the road network data, yielding several path planning cameras. The straight-line distance camera is a camera close to the center camera in straight-line distance, specifically: for the cameras around the center camera, the distance from each camera to the center camera is computed, and several straight-line distance cameras are then determined in order of increasing distance.
In the embodiment of the invention, custom devices are preferentially placed into the monitoring circle; when their number is insufficient, the circle is supplemented with devices obtained by path planning, and if the required number still cannot be reached, GIS is used to compute cameras with a similar horizontal straight-line distance to complete the circle. Pulling cameras by proximity in straight-line distance also captures the relationship between devices stacked in vertical space, helps track targets with stronger anti-reconnaissance capability, and improves the chance of successful tracking. The priority is: custom camera > path planning camera > straight-line distance camera.
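The straight-line distance fallback can be sketched with a standard haversine (great-circle) distance, which is one common way GIS systems compare device coordinates; the `(name, lat, lon)` tuple layout and the function names here are illustrative assumptions.

```python
import math


def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two latitude/longitude points."""
    r = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))


def nearest_cameras(center, candidates, k):
    """Pick the k cameras closest in straight-line distance to the center
    camera; `center` and `candidates` are (name, lat, lon) tuples."""
    ranked = sorted(
        (c for c in candidates if c[0] != center[0]),
        key=lambda c: haversine_m(center[1], center[2], c[1], c[2]),
    )
    return [name for name, _, _ in ranked[:k]]
```

Because the distance is purely horizontal, two cameras at the same longitude/latitude but different heights rank identically, which matches the text's point about vertically stacked devices.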
Therefore, in an alternative embodiment, the step S305 may be performed as follows:
e1, determining whether a custom camera set exists;
e2. If it exists, determine a target number of custom cameras in the custom camera set as the cameras to be pulled;
e3, if not, determining whether road network data exists;
e4, if the route planning cameras exist, determining a plurality of route planning cameras based on the route network data, and selecting a target number of route planning cameras as the cameras to be pulled;
e5. If not, select the target number of straight-line distance cameras, in order of increasing straight-line distance from the center camera, as the cameras to be pulled.
In an alternative embodiment, while performing the foregoing steps, the number of custom cameras, path planning cameras, or straight-line distance cameras may be less than the target number. In that case, cameras from at least two of these sources may be combined to update the current camera monitoring circle, with their combined count equal to the target number. The embodiment of the invention therefore also provides the following implementation:
f1, determining whether the number of the custom cameras in the custom camera set is larger than or equal to the target number;
f2, if yes, taking the target number of the custom cameras in the custom camera set as the cameras to be pulled;
f3. If not, take all the custom cameras and a first remaining number of the path planning cameras as the cameras to be pulled, where the sum of the first remaining number and the number of custom cameras equals the target number;
f4. If the total number of path planning cameras is less than the first remaining number, take all the custom cameras, all the path planning cameras, and a second remaining number of straight-line distance cameras as the cameras to be pulled, where the sum of the second remaining number and the total number of path planning cameras equals the first remaining number.
The effect graph of the camera monitoring circle determined by the above embodiment can be seen in fig. 8, and fig. 8 is an exemplary graph of an automatic robust camera monitoring circle provided by an embodiment of the present invention.
For convenience of understanding the above steps e1 to e5 and steps f1 to f4, please refer to fig. 9, fig. 9 is a schematic flow chart of an automatic robust camera monitoring loop according to an embodiment of the present invention, where the process may include the following steps:
step 1: it is determined whether a set of custom cameras exists.
If yes, executing the step 2, otherwise, executing the step 4.
Step 2: it is determined whether the number of custom cameras in the set of custom cameras is greater than or equal to the target number.
If yes, executing the step 3 and ending; if not, executing the step 4.
And step 3: and taking the target number of the custom cameras in the custom camera set as the cameras to be pulled.
And 4, step 4: it is determined whether road network data exists.
If yes, go to step 5, if no, go to step 9 and end.
And 5: based on the road network data, a number of path planning cameras are determined.
Step 6: determine whether the total number of path planning cameras is greater than or equal to the first remaining number.
If yes, go to step 7, otherwise go to step 8.
And 7: and taking all the user-defined cameras and the first remaining number of path planning cameras in the plurality of path planning cameras as the cameras to be pulled, and ending.
And 8: and taking all the self-defined cameras, all the path planning cameras and the second remaining number of linear distance cameras as the cameras to be pulled, and ending.
Step 9: select the target number or the first remaining number of straight-line distance cameras, in order of increasing straight-line distance from the center camera, as the cameras to be pulled, and end.
It should be noted that when neither the custom camera set nor the road network data exists, step 9 selects the target number of straight-line distance cameras, in order of increasing straight-line distance from the center camera, as the cameras to be pulled and ends; when the custom camera set exists but the road network data does not, step 9 selects the first remaining number of straight-line distance cameras, in the same order, as the cameras to be pulled and ends.
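The whole fallback flow of steps 1–9 (and e1–e5 / f1–f4) reduces to filling a quota from three prioritized sources. This sketch assumes the cameras are simple name lists and that the straight-line list is already sorted by increasing distance to the center camera; those representational details are ours, not the patent's.

```python
def pick_cameras_to_pull(target_n, custom=None, path=None, straight=None):
    """Automatic strengthening of the monitoring circle: fill up to target_n
    cameras with priority custom > path-planning > straight-line distance,
    falling through to the next source whenever the previous one runs short."""
    pulled = []
    for source in (custom or [], path or [], straight or []):
        for cam in source:
            if len(pulled) == target_n:
                return pulled
            if cam not in pulled:  # a camera may appear in several sources
                pulled.append(cam)
    return pulled  # may be shorter than target_n if every source is exhausted
```

With enough custom cameras the lower-priority sources are never consulted; with one camera per source, the quota is filled in priority order, matching steps f1–f4.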
The cross-mirror tracking method provided in the embodiment of the present application may be implemented in a hardware device or in the form of a software module. When implemented as a software module, an embodiment of the present application further provides a cross-mirror tracking apparatus. Please refer to fig. 10, which is a functional block diagram of the cross-mirror tracking apparatus provided in the embodiment of the present application; the cross-mirror tracking apparatus 400 may include:
a determining module 410, configured to expand a current camera monitoring circle if all cameras in the current camera monitoring circle do not acquire images of the target object within the theoretical occurrence time period and the preset buffering time period;
the searching module 420 is configured to start a video search mechanism based on all cameras in the enlarged camera monitoring circle, and determine the target video recording time at which the target object is searched out;
an updating module 430, configured to update the current camera monitoring circle based on the camera that captured the target object at the target video recording time, if the target video recording time is later than the last occurrence time of the target object in the current camera monitoring circle;
and the tracking module 440 is configured to track the target object based on the updated current camera monitoring circle.
In alternative embodiments, the determination module 410, the search module 420, the update module 430, and the tracking module 440 may perform the various steps of fig. 3 in concert to achieve corresponding technical effects.
In an optional embodiment, the updating module 430 is specifically configured to execute steps b1 and b2, steps d1 to d2, steps e1 to e5, steps f1 to f4, fig. 7, and fig. 9 to achieve an effect of updating the camera monitoring circle.
In an alternative embodiment, the tracking module 440 is specifically configured to perform steps c1 to c3 to achieve the corresponding effect.
In an alternative embodiment, the determining module 410 is further configured to perform steps a1 to a2 to achieve the corresponding technical effect.
Embodiments of the present application further provide a readable storage medium, on which a computer program is stored, and the computer program, when executed by a processor, implements the cross-mirror tracking method according to any one of the foregoing embodiments. The computer readable storage medium may be, but is not limited to, various media that can store program codes, such as a usb disk, a removable hard disk, a ROM, a RAM, a PROM, an EPROM, an EEPROM, a magnetic disk, or an optical disk.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present invention are included in the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (11)

1. A cross-mirror tracking method, the method comprising:
if all cameras in the current camera monitoring circle do not acquire images of the target object within the theoretical occurrence time period and the preset buffering time period, expanding the current camera monitoring circle;
starting a video search mechanism based on all the cameras in the enlarged camera monitoring circle, and determining the target video recording time of the searched target object;
if the target video recording time is later than the last occurrence time of the target object in the current camera monitoring circle, updating the current camera monitoring circle based on the camera that captured the target object at the target video recording time;
and tracking the target object based on the updated current camera monitoring circle.
2. The cross-mirror tracking method of claim 1, wherein, if the target video recording time is later than the last occurrence time of the target object within the current camera monitoring circle, updating the current camera monitoring circle based on the camera that captured the target object at the target video recording time comprises:
if the target object is located in an indoor environment, updating the current camera monitoring circle based on a pre-generated custom camera set; wherein the customized camera set comprises adjacent cameras corresponding to the cameras;
and if the target object is located in an outdoor environment, determining adjacent cameras corresponding to a plurality of cameras based on the positions of the cameras and road network data, and updating the current camera monitoring circle based on the adjacent cameras and the cameras.
3. The cross-mirror tracking method of claim 1, further comprising:
if the total number of the cameras in the current camera monitoring circle is less than the preset number, determining a target number of cameras to be pulled so that the total number of the cameras is equal to the preset number;
updating the target number of cameras to be pulled into the current camera monitoring circle;
wherein, the camera to be pulled is any one of the following cameras and the combination thereof: self-defining a camera; a path planning camera; a linear distance camera;
the custom camera representation belongs to a camera in a custom camera set; the path planning camera characterizes a camera determined based on road network data; the straight-line distance camera characterizes the camera closest to the straight-line distance of the center camera.
4. The cross-mirror tracking method according to claim 3, wherein if the total number of cameras in the current camera monitoring circle is less than a preset number, determining a target number of cameras to be pulled so that the total number of cameras is equal to the preset number, comprises:
determining whether the set of custom cameras exists;
if yes, determining a target number of custom cameras in the custom camera set as the cameras to be pulled;
if not, determining whether road network data exists or not;
if yes, determining a plurality of path planning cameras based on the road network data, and selecting a target number of path planning cameras as the cameras to be pulled;
if not, selecting the target number of straight-line distance cameras, in order of increasing straight-line distance from the center camera, as the cameras to be pulled.
5. The cross-mirror tracking method of claim 4, further comprising:
determining whether a number of custom cameras in the set of custom cameras is greater than or equal to the target number;
if yes, taking the target number of the custom cameras in the custom camera set as the cameras to be pulled;
if not, taking all the user-defined cameras and the first remaining number of the path planning cameras in the path planning cameras as the cameras to be pulled; the sum of the first remaining number and the number of custom cameras is equal to the target number;
if the total number of the path planning cameras is less than the first remaining number, all the custom cameras, all the path planning cameras and a second remaining number of the linear distance cameras are used as the cameras to be pulled; the sum of the second remaining number and the total number of path planning cameras is the first remaining number.
6. The cross-mirror tracking method of claim 1, wherein tracking the target object based on the updated current camera surveillance circle comprises:
acquiring a plurality of human face images to be processed and a plurality of human body images to be processed corresponding to the target object; the human body angles of the target objects in each human body image to be processed are different;
performing feature fusion on the multiple human face images to be processed based on preset human face organ weights to obtain fused human face features, and performing feature fusion on the multiple human body images to be processed based on preset human body angle weights to obtain fused human body features;
and tracking the target object based on the fused human body features and the fused human face features.
7. The cross-mirror tracking method of claim 1, further comprising:
if a main camera switching instruction input by a user is received, determining the switched main camera in the current camera monitoring circle;
and updating the current camera monitoring circle by taking the main camera as a center.
8. The cross-mirror tracking method according to claim 1, wherein before enlarging the current camera surveillance circle if all cameras within the current camera surveillance circle do not acquire images of the target object within a theoretical occurrence time and a preset buffer time, the method comprises:
determining a maximum distance between a master camera and a plurality of slave cameras within the current camera surveillance circle;
and determining the theoretical occurrence time period based on the maximum distance and a preset moving speed.
9. A cross-mirror tracking device, comprising:
the determining module is used for expanding the current camera monitoring circle if all cameras in the current camera monitoring circle do not acquire images of the target object within a theoretical occurrence time period and a preset buffering time period;
the searching module is used for starting a video searching mechanism based on all the cameras in the enlarged camera monitoring circle and determining the target video time of the target object;
an updating module, configured to update the current camera monitoring circle based on the camera that captured the target object at the target video recording time if the target video recording time is later than the last occurrence time of the target object in the current camera monitoring circle;
and the tracking module is used for tracking the target object based on the updated current camera monitoring circle.
10. An electronic device comprising a processor and a memory, the memory storing a computer program executable by the processor, the processor being operable to execute the computer program to implement the method of any of claims 1 to 8.
11. A readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the method according to any one of claims 1 to 8.
CN202211003302.5A 2022-08-19 2022-08-19 Cross-mirror tracking method and device, electronic equipment and readable storage medium Pending CN115665552A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211003302.5A CN115665552A (en) 2022-08-19 2022-08-19 Cross-mirror tracking method and device, electronic equipment and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211003302.5A CN115665552A (en) 2022-08-19 2022-08-19 Cross-mirror tracking method and device, electronic equipment and readable storage medium

Publications (1)

Publication Number Publication Date
CN115665552A true CN115665552A (en) 2023-01-31

Family

ID=84983844

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211003302.5A Pending CN115665552A (en) 2022-08-19 2022-08-19 Cross-mirror tracking method and device, electronic equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN115665552A (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2007094802A2 (en) * 2005-03-25 2007-08-23 Intellivid Corporation Intelligent camera selection and object tracking
CN102638675A (en) * 2012-04-01 2012-08-15 安科智慧城市技术(中国)有限公司 Method and system for target tracking by using multi-view videos
CN104038729A (en) * 2014-05-05 2014-09-10 重庆大学 Cascade-type multi-camera relay tracing method and system
CN109543534A (en) * 2018-10-22 2019-03-29 中国科学院自动化研究所南京人工智能芯片创新研究院 Target loses the method and device examined again in a kind of target following
CN111935450A (en) * 2020-07-15 2020-11-13 长江大学 Intelligent suspect tracking method and system and computer readable storage medium
CN112488068A (en) * 2020-12-21 2021-03-12 重庆紫光华山智安科技有限公司 Method, device and equipment for searching monitoring target and computer storage medium
CN112507953A (en) * 2020-12-21 2021-03-16 重庆紫光华山智安科技有限公司 Target searching and tracking method, device and equipment
CN112911205A (en) * 2019-12-04 2021-06-04 上海图漾信息科技有限公司 Monitoring system and method
CN114674323A (en) * 2022-04-12 2022-06-28 北京邮电大学 Intelligent indoor navigation method based on image target detection and tracking


Similar Documents

Publication Publication Date Title
US9466107B2 (en) Bundle adjustment based on image capture intervals
CN108234927B (en) Video tracking method and system
KR102340626B1 (en) Target tracking method, apparatus, electronic device and storage medium
US8805091B1 (en) Incremental image processing pipeline for matching multiple photos based on image overlap
CN111046752B (en) Indoor positioning method, computer equipment and storage medium
CN104303193A (en) Clustering-based object classification
KR101678004B1 (en) node-link based camera network monitoring system and method of monitoring the same
US20230351794A1 (en) Pedestrian tracking method and device, and computer-readable storage medium
KR20200094444A (en) Intelligent image photographing apparatus and apparatus and method for object tracking using the same
CN110264497B (en) Method and device for determining tracking duration, storage medium and electronic device
CN115393681A (en) Target fusion method and device, electronic equipment and storage medium
CN112631333B (en) Target tracking method and device of unmanned aerial vehicle and image processing chip
CN112270748A (en) Three-dimensional reconstruction method and device based on image
KR101595334B1 (en) Method and apparatus for movement trajectory tracking of moving object on animal farm
CN115375870B (en) Loop detection optimization method, electronic equipment and computer readable storage device
CN115665552A (en) Cross-mirror tracking method and device, electronic equipment and readable storage medium
CN113033266A (en) Personnel motion trajectory tracking method, device and system and electronic equipment
CN110728249A (en) Cross-camera identification method, device and system for target pedestrian
CN113962338B (en) Indoor monitoring method and system for RFID (radio frequency identification device) auxiliary multi-camera detection tracking
CN115601738A (en) Parking information acquisition method, device, equipment, storage medium and program product
CN112333182B (en) File processing method, device, server and storage medium
Luo et al. Complete trajectory extraction for moving targets in traffic scenes that considers multi-level semantic features
CN110781797B (en) Labeling method and device and electronic equipment
CN114500873A (en) Tracking shooting system
JP6591594B2 (en) Information providing system, server device, and information providing method

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination