CN110113579B - Method and device for tracking target object - Google Patents
- Publication number: CN110113579B (application CN201910461306.XA)
- Authority
- CN
- China
- Prior art keywords
- target object
- camera
- preset
- monitoring
- target
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/695—Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
- H04N7/181—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Closed-Circuit Television Systems (AREA)
- Image Analysis (AREA)
Abstract
The application provides a method and a device for tracking a target object, which are used to improve target-tracking efficiency. The method comprises the following steps: receiving a target tracking instruction, where the target tracking instruction indicates that a target object is to be tracked and monitored; obtaining, according to pre-learned monitoring information, a first preset position set corresponding to the target object in a first time period, where the monitoring information comprises a preset position set corresponding to each target object in each time period, and the first time period comprises the current moment; controlling the camera to switch to each preset position in the first preset position set and, after each switch, searching the monitoring picture corresponding to that preset position for the target object; and, when the target object is found, tracking and monitoring the target object.
Description
Technical Field
The application relates to the field of security and surveillance technology, and in particular to a method and a device for tracking a target object.
Background
Target tracking, which comprises target discovery and continuous target tracking, is widely used in the surveillance field and can be understood as the continuous tracking and monitoring of a specific object.
The existing target tracking approach generally works as follows: feature information of the target object is stored in advance, the feature information of objects appearing in the monitoring picture within the monitoring range is matched against the stored feature information, an object is determined to be the target object once the match succeeds, and the target object is then tracked continuously. If the user stops tracking and later needs to track again, these steps must be repeated before the target object can be tracked.
In this existing approach, every tracking session requires feature comparison across the whole monitoring range before the target object is found and tracking resumes. The process is therefore complex and inefficient.
Disclosure of Invention
The embodiments of the application provide a method and a device for tracking a target object, which are used to improve target-tracking efficiency.
In a first aspect, a method for tracking a target object is provided, including:
receiving a target tracking instruction, where the target tracking instruction indicates that a target object is to be tracked and monitored;
obtaining, according to pre-learned monitoring information, a first preset position set corresponding to the target object in a first time period; the monitoring information comprises a preset position set corresponding to each target object in each time period, and the first time period comprises the current moment;
controlling a first camera to switch to each preset position in the first preset position set and, after each switch to a preset position, searching the monitoring picture corresponding to that preset position for the target object;
and, when the target object is found, tracking and monitoring the target object.
In the embodiment of the application, before tracking and monitoring a target object in response to a target tracking instruction, the controller uses what it has previously learned to search for the target object in the monitoring pictures corresponding to the preset positions where the target object is likely to appear in the current time period. Compared with the prior-art approach in which a camera rotates through positions in sequence to search for the target object, this locks onto the target object relatively quickly and improves tracking efficiency. Moreover, the number of camera rotations and searches is reduced, which lowers the camera's energy consumption and further improves tracking efficiency.
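The first-aspect steps above can be sketched roughly as follows. The `Camera.switch_to` call, the `detect` callback, and the shape of `monitoring_info` are illustrative assumptions, not the patent's actual implementation:

```python
def find_target(camera, monitoring_info, target_id, now_period, detect):
    """Search learned preset positions for a target before any fallback sweep.

    monitoring_info maps target_id -> {time period: [preset positions]};
    detect(frame) returns True when the target appears in the frame.
    """
    presets = monitoring_info.get(target_id, {}).get(now_period, [])
    for preset in presets:
        frame = camera.switch_to(preset)   # rotate to the learned preset
        if detect(frame):
            return preset                  # target locked; tracking can start
    return None                            # caller falls back to a full sweep
```

A controller would call this once per tracking instruction, then begin continuous tracking at the returned preset.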
In one possible design, controlling the first camera to switch to each preset position in the first preset position set includes:
sorting the preset positions in the first preset position set from highest to lowest target-search success rate to obtain a sorted first preset position set, where the target-search success rate of a preset position represents the probability, over a number of previously received tracking instructions, that the target object was successfully found in the monitoring picture corresponding to that preset position;
and controlling the first camera to rotate to the corresponding preset positions in the order of the sorted first preset position set.
In the embodiment of the application, the camera is first switched to the preset positions with the highest target-search success rate, which further improves search efficiency and further reduces the number of camera rotations and searches.
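The ordering by historical success rate could be sketched as below; the per-preset success and attempt counters are hypothetical bookkeeping the patent does not spell out:

```python
def order_presets(presets, success_counts, attempt_counts):
    """Sort preset positions by historical target-search success rate,
    highest first, so the camera visits the most promising presets early.

    success_counts / attempt_counts record, per preset position, how often
    past tracking instructions found the target there (assumed bookkeeping).
    """
    def rate(preset):
        attempts = attempt_counts.get(preset, 0)
        return success_counts.get(preset, 0) / attempts if attempts else 0.0
    return sorted(presets, key=rate, reverse=True)
```

Python's `sorted` is stable, so presets with equal success rates keep their original relative order.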
In one possible design, prior to receiving the target tracking instruction, the method includes:
determining a preset position set of the target object in each time period according to pre-recorded image information, where the image information comprises an image, the image capture time, and the preset position at which the first camera was located when the image was captured;
and generating the monitoring information according to the preset position set of the target object in each time period.
In the embodiment of the application, the controller derives the preset position set corresponding to the target object in each time period from the pre-recorded image information, which is a simple and fast way of obtaining the preset position sets.
In one possible design, generating the monitoring information according to the preset position set of the target object in each time period includes:
determining, according to the pre-recorded image information, a reference scene element corresponding to each preset position in the preset position set of the target object in each time period, where the reference scene element is the scene element at the smallest distance from the target object, and a scene element is an object whose position is fixed;
and establishing an association among the target object, the preset position set of the target object in each time period, and the reference scene element corresponding to each preset position in that set; this association constitutes the monitoring information.
In the embodiment of the application, when the preset position set of the target object in each time period is obtained, the reference scene element corresponding to each preset position is recorded as well, so that the target object can later be searched for in the monitoring picture starting from the reference scene element, further improving search efficiency.
In one possible design, searching the monitoring picture corresponding to the switched-to preset position for the target object after each switch includes:
after each switch to a preset position, locating the corresponding first reference scene element in the monitoring picture for that preset position;
and traversing that monitoring picture with a preset window, starting from the first reference scene element, to determine whether the target object is present in it.
In the embodiment of the application, using the reference scene element of the current preset position as the traversal starting point lets the controller find the target object quickly, improving search efficiency. In addition, anchoring the search to reference scene elements in the monitoring picture can improve the accuracy of later searches.
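A minimal sketch of a window traversal anchored at the reference scene element follows; the frame representation, the stride, and the `matches` callback (standing in for feature matching inside each window) are all illustrative assumptions:

```python
def window_search(frame_size, start, step, matches):
    """Traverse a frame with a sliding window, visiting windows in order of
    distance from the reference scene element rather than from a corner.

    frame_size: (width, height) in pixels; start: (x, y) of the reference
    scene element; step: window stride; matches(x, y): True when the target
    is found in the window anchored at (x, y)."""
    width, height = frame_size
    sx, sy = start
    cells = [(x, y) for y in range(0, height, step) for x in range(0, width, step)]
    # Closest-first ordering: windows near the reference element come first.
    cells.sort(key=lambda c: abs(c[0] - sx) + abs(c[1] - sy))
    for x, y in cells:
        if matches(x, y):
            return (x, y)
    return None
```

Because the sort is closest-first, a target that is still near its learned reference element is found after very few window evaluations.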
In one possible design, searching the monitoring picture corresponding to the switched-to preset position for the target object after each switch further includes:
when the target object is not found in the monitoring pictures of any preset position in the first preset position set, controlling the first camera to rotate step by step in a preset direction by a preset first angle and, after each rotation, searching the resulting monitoring picture for the target object;
and, when the first camera has completed a full turn in the preset direction without finding the target object, sending prompt information to a second camera, the prompt information indicating that the target object is not within the monitoring range.
In the embodiment of the application, if the target object is not found at the preset positions in the first preset position set, the camera is rotated through a full turn to search for it; if it is still not found, the target object is determined not to be within the monitoring range and the user is informed promptly, so that the user stays aware of the target object's situation.
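The fallback sweep above can be sketched as follows; the `rotate` and `notify_master` camera methods are hypothetical names for the corresponding hardware operations:

```python
def sweep(camera, first_angle, detect, full_turn=360):
    """Fallback sweep: rotate by a fixed first_angle step until the target
    is found or the camera has completed one full turn, then report failure.

    detect(frame) returns True when the target appears in the frame.
    Returns the cumulative angle at which the target was found, else None."""
    turned = 0
    while turned < full_turn:
        frame = camera.rotate(first_angle)   # rotate once, get new picture
        turned += first_angle
        if detect(frame):
            return turned
    # One full turn without a match: prompt that the target is out of range.
    camera.notify_master("target outside monitoring range")
    return None
```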
In one possible design, the target tracking instruction carries a unique identifier of the target object, and before receiving the target tracking instruction, the method includes:
receiving target object information sent by a terminal device, configuring a unique identifier for the target object, and feeding the unique identifier back to the terminal device, where the target object information includes an image of the target object; or,
receiving a unique identifier of the target object information sent by a second camera, the unique identifier having been generated by the second camera from target object information sent by the user.
In the embodiment of the application, when the terminal device sends target object information to the controller for the first time, the controller configures a corresponding unique identifier for each target object, and later interaction between the terminal device and the controller can rely on that unique identifier.
In one possible design, the tracking and monitoring of the target object includes:
when the distance between the target object's current position in the monitoring picture and the centre point of the monitoring picture is determined to be greater than a preset distance value, controlling the first camera to move in the direction in which the target object is moving within the monitoring picture, so as to keep the target object under observation;
and feeding the monitoring picture containing the target object back to the second camera or the terminal device.
In the embodiment of the present application, when the controller determines that the target object is drifting out of the monitoring picture, it can move the camera in the direction of the drift so as to keep tracking and monitoring the target object. Feeding the monitoring picture containing the target object back to the terminal device lets the user follow the monitored target in time, and because only pictures containing the target object are sent, useless image transmission is reduced, saving traffic.
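The recentring decision can be sketched as below. Real PTZ control would use continuous velocities; the discrete `(pan, tilt)` direction returned here is a deliberate simplification:

```python
def recenter_command(target_pos, frame_size, max_offset):
    """Decide how to move the camera when the tracked target drifts too far
    from the frame centre.

    Returns a (pan, tilt) direction with components in {-1, 0, 1}, or
    (0, 0) if the target is within max_offset pixels of the centre."""
    cx, cy = frame_size[0] / 2, frame_size[1] / 2
    dx, dy = target_pos[0] - cx, target_pos[1] - cy
    if (dx * dx + dy * dy) ** 0.5 <= max_offset:
        return (0, 0)                       # still close enough to the centre
    sign = lambda v: (v > 0) - (v < 0)
    return (sign(dx), sign(dy))             # follow the target's direction
```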
In one possible design, after the target object is found and tracking and monitoring of it has begun, the method includes:
sending a notification message to a second camera so that the second camera notifies the other cameras to stop searching for the target object, where the notification message carries the unique identifier of the first camera and an indication that the target object has been found.
In the embodiment of the application, after the first camera finds the target object, the other cameras are notified to stop searching, which prevents them from continuing to search for the target object and wasting monitoring resources.
In one possible design, receiving target tracking instructions includes:
receiving a target tracking instruction from the second camera.
In the embodiment of the application, the first camera can receive the target tracking instruction from the second camera, and using the second camera to control the several cameras helps reduce interaction between the terminal device and the individual cameras, further saving traffic.
In one possible design, before receiving the target tracking instruction, the method includes:
sending a connection request to the second camera; the connection request carries the camera address of the first camera;
receiving the unique identifier of the first camera fed back by the second camera, where the unique identifier of the first camera is generated by the second camera from the connection request.
In the embodiment of the application, the first camera can actively send a connection request to the second camera to establish a communication connection; the second camera assigns the first camera a corresponding unique identifier, and later interaction can use that identifier, further reducing traffic overhead.
In a second aspect, an apparatus for tracking a target object is provided, including:
the receiving and sending module is used for receiving a target tracking instruction, where the target tracking instruction indicates that a target object is to be tracked and monitored;
the processing module is used for acquiring a first preset position set corresponding to the target object in a first time period according to pre-learned monitoring information; the monitoring information comprises a preset position set corresponding to each target object in each time period, and the first time period comprises the current moment;
the processing module is further configured to control the camera to switch to each preset position in the first preset position set and, after each switch to a preset position, search the monitoring picture corresponding to that preset position for the target object;
the processing module is further configured to track and monitor the target object when the target object is found.
In one possible design, the processing module is specifically configured to:
sorting the preset positions in the first preset position set from highest to lowest target-search success rate to obtain a sorted first preset position set, where the target-search success rate of a preset position represents the probability, over a number of tracking instructions previously sent by the terminal device, that the target object was found in the monitoring picture corresponding to that preset position;
and controlling the camera to rotate to the corresponding preset positions in the order of the sorted first preset position set.
In one possible design, the processing module is further to:
before receiving a target tracking instruction, determining a preset position set of the target object in each time period according to pre-recorded image information, where the image information comprises an image, the image capture time, and the preset position at which the camera was located when the image was captured;
and generating the monitoring information according to the preset position set of the target object in each time period.
In one possible design, the processing module is specifically configured to:
determining, according to the pre-recorded image information, a reference scene element corresponding to each preset position in the preset position set of the target object in each time period, where the reference scene element is the scene element at the smallest distance from the target object, and a scene element is an object whose position is fixed;
and establishing an association among the target object, the preset position set of the target object in each time period, and the reference scene element corresponding to each preset position in that set; this association constitutes the monitoring information.
In one possible design, the processing module is specifically configured to:
after each switch to a preset position, locating the corresponding first reference scene element in the monitoring picture for that preset position;
and traversing that monitoring picture with a preset window, starting from the first reference scene element, to determine whether the target object is present in it.
In a third aspect, an apparatus for tracking a target object is provided, including:
at least one processor, and
a memory communicatively coupled to the at least one processor;
wherein the memory stores instructions executable by the at least one processor, and wherein the at least one processor implements the method of any one of the first aspect and any one of the possible designs by executing the instructions stored by the memory.
In a fourth aspect, there is provided a computer readable storage medium having stored thereon computer instructions which, when run on a computer, cause the computer to perform the method of any one of the first aspect and any one of the possible designs.
Drawings
Fig. 1 is a first application scenario diagram of a method for tracking a target object according to an embodiment of the present application;
fig. 2 is a second application scenario diagram of a method for tracking a target object according to an embodiment of the present application;
fig. 3 is a flowchart of a method for tracking a target object according to an embodiment of the present application;
fig. 4 is a schematic process diagram for editing a unique identifier of a target object according to an embodiment of the present application;
fig. 5 is a schematic diagram of a preset position set provided in an embodiment of the present application;
fig. 6 is a schematic diagram of a monitoring picture corresponding to a preset position according to an embodiment of the present disclosure;
fig. 7 is a structural diagram of a method for tracking a target object according to an embodiment of the present application;
fig. 8 is a block diagram of an apparatus for tracking a target object according to an embodiment of the present disclosure;
fig. 9 is a block diagram of an apparatus for tracking a target object according to an embodiment of the present application.
Detailed Description
In order to better understand the technical solutions provided by the embodiments of the present application, the following detailed description is made with reference to the drawings and specific embodiments.
In order to improve the efficiency of tracking a target object, embodiments of the present application provide a method for tracking a target object, and an application scenario related to the method for tracking a target object is described below.
Referring to fig. 1, the scene includes a terminal device 11 and a first camera 12. The terminal device 11 includes, but is not limited to, a mobile phone, a personal computer, or a tablet computer. The first camera 12 is a rotatable camera, such as a dome camera. The controller may be integrated in the camera or provided separately from it; fig. 1 takes the case in which the controller is integrated in the first camera 12 as an example.
In practice, a user may need to monitor a large area, and a single camera may not cover the whole home, so multiple cameras may be installed. Referring to fig. 2, fig. 2 is a schematic view of another scenario provided in the embodiment of the present application; this scenario includes a terminal device 11, a plurality of first cameras 12, and a second camera 20.
In the application scenario shown in fig. 2, the second camera 20 acts as a master camera and the first cameras 12 act as slave cameras. The second camera 20 coordinates and manages the plurality of first cameras 12; the first cameras 12 can interact with the terminal device 11, and the second camera 20 can also interact with the terminal device 11. In some cases, messages exchanged between the second camera 20 and the terminal device 11 need to be shared with the first cameras 12.
Fig. 2 shows two first cameras 12 as an example, but the number of first cameras 12 is not limited in practice.
A method for tracking a target object according to an embodiment of the present application is described below based on the application scenario in fig. 1. The method is performed by a controller in the first camera 12, which may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present application, for example one or more digital signal processors (DSPs) or one or more field-programmable gate arrays (FPGAs).
When a user needs to track and monitor a certain target object, a target tracking instruction may be sent to the camera 12 through a software application in the terminal device 11, and the controller in the camera 12 executes the method for tracking the target object according to the target tracking instruction.
For example, a user may worry that children or elderly people who are home alone could be disturbed by strangers or feel unwell; in that case, the user can send a target tracking instruction to the controller through the terminal device, instructing the controller to track and monitor the target object.
The following describes a method for tracking a target object in the embodiment of the present application with reference to fig. 3.
In step 301, the controller receives target object information sent by the terminal device 11, the target object information including an image of the target object.
Specifically, after the user has just purchased the first camera 12, or when the user wants to monitor a certain target object or certain target objects for the first time, the user may send the target object information to the controller through the terminal device 11, and the controller receives the target object information.
The target object is the object that the user needs to monitor. The target object information includes at least an image of the target object; to help the target object be found accurately, this may be a whole-body image. The target object information may also include the type of the target object, for example the class to which it belongs (such as human or animal), and the name of the target object, for example a person's name.
Step 302, the controller configures a unique identifier for the target object.
Specifically, after receiving the target object information sent by the terminal device 11, the controller configures a unique identifier for the target object. If there are multiple target objects, each target object is configured with a unique identifier.
In step 303, the controller sends the unique identifier to the terminal device 11.
Specifically, after configuring the unique identifier for the target object, the controller sends it to the terminal device 11. There may be a plurality of terminal devices 11, each sending its own target object information; after configuring a unique identifier for each piece of target object information, the controller sends the corresponding unique identifier back to each terminal device 11.
As an embodiment, after the controller sends the unique identifier to the terminal device 11, the user may send a corresponding editing instruction to the controller through the terminal device 11, and after receiving the editing instruction, the controller performs corresponding operation on each target object according to the editing instruction.
Specifically, referring to fig. 4, the editing instructions include an add instruction, a delete instruction, and an update instruction. The following describes how the controller processes each of the three editing instructions.
When a user wants to add a target object to monitor, an add instruction can be sent to the controller through the terminal device 11; the add instruction carries the information of the target object to be added. After receiving the add instruction, the controller configures a unique identifier for the target object in the instruction and then adds the target object.
When a user wants to delete a target object set before, a deletion instruction may be sent to the controller, where the deletion instruction carries a unique identifier indicating that a certain target object is to be deleted. After receiving the deletion instruction, the controller may delete the information of the corresponding target object according to the deletion instruction.
When a user wants to update a target object, the user can send an update instruction to the controller, and the update instruction carries a unique identifier indicating that a certain target object needs to be updated. After receiving the update instruction, the controller may update the information of the corresponding target object according to the update instruction.
It should be noted that the controller may configure corresponding permission levels for multiple users, and each user may only update or delete target objects within their own permission.
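The add/delete/update handling with per-user permissions described above can be sketched as follows; the class, its method names, and its internal structure are illustrative, not taken from the patent:

```python
import itertools

class TargetRegistry:
    """Sketch of the controller's editing-instruction handling: each target
    gets a unique identifier on add, and only its owning user may delete or
    update it (a simple stand-in for the permission levels in the text)."""
    def __init__(self):
        self._ids = itertools.count(1)
        self._targets = {}   # unique identifier -> (owner, target info)

    def add(self, user, info):
        uid = next(self._ids)          # configure a unique identifier
        self._targets[uid] = (user, info)
        return uid                     # fed back to the terminal device

    def delete(self, user, uid):
        owner, _ = self._targets[uid]
        if owner != user:              # users may only edit their own targets
            raise PermissionError("not permitted")
        del self._targets[uid]

    def update(self, user, uid, info):
        owner, _ = self._targets[uid]
        if owner != user:
            raise PermissionError("not permitted")
        self._targets[uid] = (owner, info)
```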
At step 304, the controller generates monitoring information.
One way to generate monitoring information is:
determining a preset position set of the target object in each time period according to the image information;
and generating the monitoring information according to the preset position set of the target object in each time period.
Specifically, a rotatable first camera 12 generally uses the preset-position technique, that is, it patrols among a plurality of preset positions. When the user has not designated a particular target object for tracking, the controller can control the first camera 12 to patrol the preset positions and capture and record corresponding image information while doing so. The image information includes, but is not limited to, the captured image (a single picture or a video) and the capture time; it may also include the preset position at which the first camera 12 was located when the image was captured and the shooting parameters of the first camera 12.
After the image information is recorded, the controller can analyze it in real time or while idle to generate the corresponding monitoring information. Idle means the controller currently has no target-tracking task. The way the controller generates monitoring information from image information is described below.
After obtaining the image information, the controller may divide one monitoring cycle into a plurality of time periods and analyze which preset positions each target object within the monitoring range occupies in each time period, thereby obtaining the preset position set of the target object in each time period.
The durations of the time periods may be the same or different. The controller may divide the periods according to the behaviour pattern of the target object, or the user may set them manually. A behaviour pattern means that the target object tends to do the same thing during a certain span of time, and the time taken by that activity can form one period. For example, if one monitoring cycle is a day, it may be divided into a first period from 7:00 to 10:00, a second from 11:00 to 12:00, a third from 13:00 to 14:00, a fourth from 15:00 to 16:00, a fifth from 17:00 to 18:00, a sixth from 19:00 to 20:00, a seventh from 20:00 to 22:00, and an eighth from 22:00 to 6:00.
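Mapping a moment to the period containing it, including the last period that wraps past midnight, can be sketched as follows (the `(start, end, label)` tuples are an assumed representation):

```python
def period_of(hour, periods):
    """Map an hour of day to the monitoring period that contains it.

    periods is a list of (start, end, label); a wrap-around period such as
    (22, 6, ...) covers the span that crosses midnight. The first matching
    period wins when boundaries overlap."""
    for start, end, label in periods:
        if start <= end:
            if start <= hour <= end:
                return label
        elif hour >= start or hour <= end:   # period wraps past midnight
            return label
    return None
```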
When the preset positions corresponding to the images are recorded in the image information, the controller can determine the preset positions corresponding to the target object in each time period directly from the image information, and so obtain the preset position set of the target object in each time period. When the controller stores a plurality of target objects, each target object is processed in the same way to obtain its preset position set in each time period.
When the image information does not record the preset position corresponding to an image, the preset position can be computed from the coordinates of the image's centre point.
Specifically, the first camera 12 prestores a reference point, for example its initial tracking point, which serves as the reference origin. Since the first camera 12 generally operates within a certain angular range along a preset direction, it can calculate the preset position corresponding to each image from the centre point of the current image, the reference point, and the rotation speed, and so obtain the preset position set of the target object in each time period.
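One possible reading of this recovery step is sketched below, under a deliberately simplified model (constant pan speed from the reference origin, preset positions identified by pan angle); the model, the parameters, and the tolerance are all assumptions, not the patent's formula:

```python
def preset_from_capture(capture_time, start_time, rotation_speed,
                        preset_angles, tolerance=2.0):
    """Recover which preset position a capture was taken from.

    rotation_speed is in degrees/second; preset_angles maps preset id ->
    pan angle relative to the reference origin. Returns the preset whose
    angle is within tolerance degrees of the inferred pan angle, else None."""
    angle = ((capture_time - start_time) * rotation_speed) % 360
    best = min(preset_angles, key=lambda p: abs(preset_angles[p] - angle))
    return best if abs(preset_angles[best] - angle) <= tolerance else None
```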
The controller can perform associated storage on the target object and the preset bit set of the target object in each time period to obtain the monitoring information. In the embodiment of the present application, the monitoring information at least includes the target object and a preset bit set of the target object in each time period.
In one possible embodiment, at a predetermined position, the monitoring view corresponding to the first camera 12 may include many other objects in addition to the target object, that is, it generally takes a certain amount of time to search for the target object in the monitoring view. In order to save the time for searching the target object later, in the embodiment of the present application, another way of generating the monitoring information is described below according to the preset bit set of the target object in each time period.
One way to generate monitoring information is:
the controller establishes an association relationship among the target object, the preset bit set of the target object in each time period and the reference scene element corresponding to each preset bit in the preset bit set of the target object in each time period, wherein the association relationship is monitoring information.
Specifically, when the controller obtains the preset position set in each time period, it further determines a reference scene element corresponding to each target object at each preset position, where the reference scene element may be understood as a scene element with the smallest distance from the target object, and the scene element generally refers to an object with a relatively fixed position, such as a couch, a table, and a television in a living room.
A specific manner of determining the reference scene element corresponding to each preset position is as follows:
The monitoring picture corresponding to the preset position is analyzed to determine the position of the target object, and then the distance between each scene element around the target object and the target object is determined, where this distance can be represented by the distance between the central pixel point of the scene element and the central pixel point of the target object. The scene element closest to the target object is taken as the reference scene element.
As an embodiment, there may be a plurality of scene elements at equal distances from the target object, in which case the scene element with the highest image feature recognition rate among them may be selected as the reference scene element. The highest image feature recognition rate means that the image features of the scene element are most easily and accurately recognized by the controller; conditions for a high image feature recognition rate include one or more of a clear outline, clear texture, a unique color, and a regular shape of the scene element.
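The nearest-element rule with the recognition-rate tie-break might be sketched as follows (the numeric `recognition` score is an assumed stand-in for the recognition-rate conditions listed above):

```python
import math

def pick_reference_element(target_center, elements):
    """Pick the reference scene element: the smallest distance to the
    target object, with ties broken by the highest feature-recognition
    score. Each element: {"name", "center": (x, y), "recognition"}."""
    def dist(e):
        return math.dist(target_center, e["center"])
    best = min(dist(e) for e in elements)
    # All elements tied for the smallest distance to the target.
    nearest = [e for e in elements if math.isclose(dist(e), best)]
    return max(nearest, key=lambda e: e["recognition"])["name"]
```

With a mural 3 px away and a table 4 px away, the mural wins regardless of scores; with two elements equidistant, the more recognizable one is chosen.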
After obtaining the reference scene element corresponding to each preset bit, the controller associates and stores the target object, the preset bit set of the target object in each time period, and the reference scene element corresponding to each preset bit in the preset bit set of the target object in each time period, so as to generate the monitoring information. That is to say, the monitoring information in the embodiment of the present application at least includes the target object, the set of preset bits of the target object in each time period, and the reference scene element corresponding to each preset bit in the set of preset bits of the target object in each time period.
In the two ways of generating the monitoring information, the monitoring information may further include a target object search success rate of each preset bit in the preset bit set of each time period.
Specifically, after the preset position set of each target object in each time period is obtained, the first camera 12 is controlled to search for the target object at each preset position in the set during the corresponding time period. The number of times the target object is successfully found at each preset position is counted, and the search success rate of each preset position is determined from it. As the total number of searches changes, the success rate corresponding to each preset position changes as well, so the success rates can be updated in real time.
For example, referring to fig. 5, the first camera 12 is located at point A and can rotate through one full circle as shown in fig. 5. The preset positions of the first camera 12 are the position points a-j in fig. 5; that is, only when the first camera 12 reaches one of the position points a-j does it stay for a corresponding time and monitor the picture corresponding to that preset position. The controller analyzes that the target object appears in the kitchen and the living room during the first time period of 7-10, which correspond to the preset positions a and b, so the controller may determine that the preset position set of the target object in the first time period is {a, b}.
At the preset position b in fig. 5, the target object corresponding to the first camera 12 appears in the living room, and the monitoring picture obtained by the first camera 12 is as shown in fig. 6. After determining the position of the target object, the controller determines that the distance from the mural 61 to the target object in the monitoring picture is L1 and the distance from the table 62 to the target object is L2. Since L1 is smaller than L2, the controller takes the mural 61 as the reference scene element corresponding to the preset position b.
The controller searches each of the preset positions a and b N times; the target object is found m times at preset position a and l times at preset position b, so the success rate of searching for the target object at preset position a is m/N and that at preset position b is l/N.
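The m/N bookkeeping above, including the real-time update as new searches complete, can be sketched as (class and method names are illustrative):

```python
class PresetStats:
    """Track per-preset search attempts and successes; the success
    rate is hits/attempts, updated each time a search completes."""
    def __init__(self):
        self.attempts = {}
        self.hits = {}

    def record(self, preset, found):
        # One more search at this preset; count it as a hit if found.
        self.attempts[preset] = self.attempts.get(preset, 0) + 1
        self.hits[preset] = self.hits.get(preset, 0) + (1 if found else 0)

    def success_rate(self, preset):
        n = self.attempts.get(preset, 0)
        return self.hits.get(preset, 0) / n if n else 0.0
```

For example, finding the target in 3 of 4 searches at preset a gives a success rate of 0.75, and the rate changes automatically as further searches are recorded.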
The controller stores, in association, each target object, the preset position set of the target object in the time period, the reference scene element of each preset position, and the success rate of searching for the target object at each preset position, so as to generate the monitoring information of the target object.
After the controller generates the monitoring information, step 205 is executed, i.e., a target tracking command is received.
Specifically, when a user needs to track and monitor a certain target object, the user may send a target tracking instruction to the first camera 12 through the terminal device 11, or through a software application in the terminal device 11. The target tracking instruction is used for indicating the tracking and monitoring of the target object. In order to save traffic overhead, the target tracking instruction may carry the unique identifier corresponding to the target object; the unique identifier may refer to the content discussed in step 303, which is not described herein again.
After receiving the target tracking instruction, the controller executes step 306, that is, according to the pre-learned monitoring information, obtains a first preset bit set corresponding to the target object in the first time period.
Specifically, after receiving the target tracking instruction, the controller determines the target object that the user needs to monitor, and since the monitoring information includes monitoring information of a plurality of target objects, the controller needs to determine a first preset bit set in a first time period to which the target object belongs at the current time according to the monitoring information obtained in step 204, where the first preset bit set includes all preset bits that have occurred in the first time period to the target object.
After the step 306, the controller executes a step 307, that is, controls the first camera 12 to switch to each preset position in the first set of preset positions, and searches whether a target object exists in the monitoring picture corresponding to the switched preset position after switching to the corresponding preset position each time.
Specifically, the controller controls the first camera 12 to switch between the preset positions in the first preset position set, and the switching sequence may be implemented in various ways, which will be described below by way of example.
Switching sequence one:
The preset positions are switched sequentially in descending order of the target-object search success rate of each preset position in the first preset position set.
Specifically, after the first preset position set is obtained, the preset positions in it may be arranged from the highest to the lowest search success rate recorded in the monitoring information, yielding the sorted first preset position set. The controller switches the preset positions according to the sorted first preset position set.
In the embodiment of the application, the preset position with the highest success rate for searching the target object is switched to first, so that the preset position comprising the target object can be found at the fastest speed.
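Switching sequence one reduces to a simple sort (the `success_rates` mapping is an assumed representation of the rates stored in the monitoring information):

```python
def order_by_success_rate(preset_set, success_rates):
    """Sort the first preset position set from highest to lowest
    target-object search success rate (switching sequence one), so the
    most promising preset is visited first."""
    return sorted(preset_set, key=lambda p: success_rates.get(p, 0.0),
                  reverse=True)

# order_by_success_rate({"a", "b", "c"}, {"a": 0.2, "b": 0.9, "c": 0.5})
# → ["b", "c", "a"]
```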
And a second switching sequence:
The preset positions are switched sequentially in ascending order of the distance from each preset position in the first preset position set to the preset position where the first camera 12 is currently located.
Specifically, the controller determines which preset position in the first preset position set is closest to the preset position where the first camera 12 is currently located and controls the first camera 12 to rotate to that position first, so that the angle the first camera 12 rotates each time is reduced, which reduces the power consumption of the first camera 12.
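Switching sequence two, sketched under the assumption that each preset position is characterized by a pan angle (the application does not fix this representation):

```python
def order_by_distance(preset_angles, preset_set, current):
    """Sort the first preset position set by angular distance from the
    camera's current preset, nearest first (switching sequence two),
    so each rotation is as small as possible."""
    cur = preset_angles[current]
    def angular_dist(p):
        # Wrap-around aware angular distance in degrees.
        d = abs(preset_angles[p] - cur) % 360
        return min(d, 360 - d)
    return sorted(preset_set, key=angular_dist)
```

For a camera at 0° with candidate presets at 90° and 350°, the 350° preset is only 10° away going the other direction, so it is visited first.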
After the preset position is switched every time, the controller searches the monitoring picture of the camera corresponding to the switched preset position, and searches whether a target object exists in the monitoring picture. There are various ways of searching, and the following examples are given.
The first search mode is as follows:
The image features corresponding to the target object are acquired, and the image features in the monitoring picture are matched against the image features of the target object; if the image features of a certain area in the monitoring picture match the image features of the target object, it is determined that the target object exists in the monitoring picture.
The image features include, but are not limited to, image contours, image textures, image grayscales, image colors, and the like.
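A toy sketch of search mode one, using plain nested lists in place of a real image library and summed absolute grayscale difference as the (assumed) matching criterion:

```python
def find_target(frame, template, max_diff=10):
    """Slide the target's feature template (a small 2D grayscale patch)
    over the monitoring frame; return the first region whose summed
    absolute difference is within max_diff, or None if the target is
    not in this monitoring picture."""
    fh, fw = len(frame), len(frame[0])
    th, tw = len(template), len(template[0])
    for y in range(fh - th + 1):
        for x in range(fw - tw + 1):
            diff = sum(abs(frame[y + i][x + j] - template[i][j])
                       for i in range(th) for j in range(tw))
            if diff <= max_diff:
                return (x, y)
    return None

# A 4x4 "monitoring picture" containing the 2x2 target patch at (1, 1).
frame = [[0, 0, 0, 0],
         [0, 9, 8, 0],
         [0, 7, 9, 0],
         [0, 0, 0, 0]]
template = [[9, 8],
            [7, 9]]
# find_target(frame, template) → (1, 1)
```

A production implementation would use contour, texture, and color features as listed above rather than raw pixel differences.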
And a second searching mode:
training a neural network model by using the sample image;
inputting the monitoring picture into the neural network model, and outputting a result indicating whether the monitoring picture includes the target object.
Specifically, the controller acquires a large number of sample images and uses them to train a constructed neural network model, which determines whether an image includes the target object. The monitoring picture to be judged is then input into the neural network model to obtain a determination result of whether the monitoring picture includes the target object.
And a third searching mode:
searching a corresponding first reference scene element in the monitoring picture corresponding to the switched preset position, traversing the monitoring picture by taking the first reference scene element as a starting point according to a preset matching window, and determining whether a target object exists in the monitoring picture.
Specifically, the controller searches for the first reference scene element from the monitoring picture, and since the position and the image feature of the first reference scene element are relatively fixed, the controller can quickly match the prestored first reference scene element from the monitoring picture, and determine whether the target object exists in the monitoring picture by using the first reference scene element as a traversal starting point. Since the target object is closest to the first reference scene element, the target object can be found as soon as possible by using the first reference scene element as a traversal starting point.
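One way to realize the traversal of search mode three: order the matching-window positions so that windows near the first reference scene element are visited first (Chebyshev ring distance is an assumed choice; any outward expansion from the starting point would fit the description above):

```python
def traversal_order(grid_w, grid_h, start):
    """Order matching-window positions so traversal starts at the
    first reference scene element's window and expands outward; since
    the target object is closest to that element, nearby windows are
    tried first."""
    sx, sy = start
    cells = [(x, y) for y in range(grid_h) for x in range(grid_w)]
    # Chebyshev (ring) distance from the starting window.
    return sorted(cells, key=lambda c: max(abs(c[0] - sx), abs(c[1] - sy)))
```

On a 3x3 window grid with the reference element at the center, the center window is matched first, then its eight neighbors.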
As an embodiment, in the three search modes, when the target object is a person, the hair color of the target object may be used as the feature matching region. Since a person's hair color is relatively stable over a period of time and the hair region is relatively concentrated, the hair region corresponding to the target object may be found from the monitoring picture first, so as to match the target object quickly.
Whether the target object exists in the monitoring picture corresponding to each preset position in the first preset position set is searched in turn; if the target object is found, it is continuously tracked and monitored until a stop-tracking instruction sent by the user is received, where the stop-tracking instruction is used for instructing the controller to stop tracking the target object.
After the target object is found at a certain preset position in the first preset position set, the search success rate corresponding to each preset position in the set can be updated, so that the success rate of each preset position is available for subsequent searches.
In a possible case, the controller may control the first camera 12 to sequentially rotate by a first angle along the preset direction in the monitoring picture corresponding to each preset position in the first preset position set, search whether a target object exists in the rotated monitoring image every time the first camera 12 rotates once, and if the target object is searched, continuously track and monitor the target object until receiving a target stop tracking instruction sent by the user.
After the target object is searched, the controller may perform continuous tracking monitoring on the target object, and a manner of performing continuous tracking monitoring on the target object is described below.
Specifically, after the controller searches for a target object in a certain monitoring picture, the target object may be in a motion state, and when the controller determines that the distance between the position of the target object in the monitoring picture and the central point of the monitoring picture is greater than a preset distance value, the first camera 12 may be controlled to move along the moving direction of the target object, so as to ensure that the target object is always in the monitoring picture, and thus, the target object is continuously monitored. The preset distance value may be a default setting of the controller or may be set by the user.
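The center-distance test for continuous tracking might be sketched as follows (frame size, the returned unit direction, and all names are illustrative assumptions):

```python
def tracking_move(target_pos, frame_size, preset_distance):
    """Continuous tracking decision: if the target has drifted farther
    than preset_distance from the frame's center point, return a unit
    direction for the camera to move along; otherwise return None."""
    cx, cy = frame_size[0] / 2, frame_size[1] / 2
    dx, dy = target_pos[0] - cx, target_pos[1] - cy
    dist = (dx * dx + dy * dy) ** 0.5
    if dist <= preset_distance:
        return None  # target still near the center; no movement needed
    return (dx / dist, dy / dist)
```

With a 640x480 picture and a preset distance of 100 px, a centered target triggers no movement, while a target at the right edge yields a move to the right.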
When controlling the first camera 12 to continuously monitor the target object, the controller may feed back the monitoring picture corresponding to the target object to the terminal device 11, so that the terminal device 11 may obtain the monitoring picture of the target object in real time.
In a possible case, the controller may control the first camera 12 to rotate by the first angle in sequence along the preset direction; if the target object is still not found after one full rotation, the controller determines that the target object is not within the monitoring range of the first camera 12 and may send a prompt message to the terminal device 11, where the prompt message is used to prompt that the target object is not within the monitoring range, so that the user can take corresponding measures in time. One full rotation means that the first camera 12 rotates in sequence through its maximum range of rotation.
In the embodiment of the present application, a method for tracking a target object in an application scenario shown in fig. 2 is described. Referring to fig. 7, fig. 7 is a flowchart of a method for tracking a target object in the scene. The method is performed by a controller in the first camera 12. The controller can refer to the content discussed above, and is not described herein again, and fig. 7 illustrates an example in which the controller is integrated in the camera. In fig. 7, the first camera 12 includes a camera B or a camera C as an example, and the second camera 20 includes a camera a as an example.
When a user sets a plurality of cameras within a monitoring range, the terminal device 11 connects the respective cameras.
Specifically, the user may scan the two-dimensional code of each camera by using the software application on the terminal device 11 to access each camera. The terminal device 11 can also search for cameras within range in real time and automatically connect to each camera.
In order to facilitate the terminal device 11 to identify the respective cameras, the terminal device 11 may configure the respective cameras with corresponding camera numbers.
After the terminal device 11 is connected to the respective cameras, the second camera 20 is determined from the plurality of cameras.
The terminal device 11 designates one of the plurality of cameras as the second camera 20, or the manufacturer configures the second camera 20 with a corresponding identifier when manufacturing the second camera, and the terminal device 11 identifies the camera as the second camera 20 after recognizing the identifier. Or the user may arbitrarily select one camera among the plurality of cameras as the second camera 20.
If the second camera 20 fails after being determined, another first camera 12 can be set as the second camera 20 instead.
In fig. 7, the camera a is taken as the second camera 20 as an example, and after the second camera 20 is determined, the camera a needs to establish a connection with another camera.
Specifically, the user can send the address of the camera a to the other first cameras 12 (i.e., the camera B and the camera C in fig. 7) through the terminal device to notify the camera B and the camera C that the connection with the camera a is established.
As an example, a plurality of first cameras 12 may also establish connections in order to facilitate interaction between the various cameras.
Specifically, camera A broadcasts the numbers of the plurality of first cameras 12, so that camera B, for example, can obtain the numbers of the other cameras.
In step 701, each first camera 12 sends a connection request to the second camera 20.
Step 701 includes step 701a and step 701b. The execution order of step 701a and step 701b may be arbitrary.
Specifically, in step 701a, the camera B sends a first connection request to the camera a, where the first connection request carries a camera address of the camera B, and the first connection request may also carry a serial number of the camera B. In step 701b, the camera C may send a second connection request to the camera a, where the second connection request carries the camera address of the camera C and may also carry the serial number of the camera C.
At step 702, camera a sends a corresponding unique identifier to each first camera 12.
Step 702 includes step 702a and step 702b, and the execution order of step 702a and step 702b may be arbitrary and is not limited herein.
In step 702a, camera A sends camera B the unique identifier of camera B. In step 702b, camera A sends camera C the unique identifier of camera C.
Specifically, the camera a feeds back the unique identifier to the camera B after receiving the first connection request sent by the camera B, and the camera a feeds back the unique identifier of the camera C to the camera C after receiving the second connection request sent by the camera C. The unique identification may be the number of the respective camera, etc.
In the embodiment of the present application, after the unique identifiers are sent to camera B and camera C, in order to ensure that camera A can normally communicate with camera B and camera C, camera A may make a tentative connection with camera B and camera C.
Specifically, camera A sends a tentative communication message to camera B according to the unique identifier of camera B; if camera A receives feedback from camera B within a preset time period, this indicates that camera B can communicate with camera A. The process continues until camera A has finished the tentative connection with all the first cameras 12, at which point the number of cameras that can normally communicate with camera A is obtained.
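The tentative-connection round can be sketched as follows; `send_probe` is a hypothetical stand-in for the real network round trip, not an API defined by this application:

```python
def probe_cameras(camera_ids, send_probe, timeout=1.0):
    """Tentatively contact each first camera by its unique identifier;
    send_probe(camera_id, timeout) -> bool reports whether feedback
    arrived within the preset time period. Returns the reachable
    cameras and their count."""
    reachable = [cid for cid in camera_ids if send_probe(cid, timeout)]
    return reachable, len(reachable)

# Example with a stub in which only camera B answers within the timeout.
# probe_cameras(["B", "C"], lambda cid, t: cid == "B") → (["B"], 1)
```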
In step 703, the terminal apparatus 11 transmits target object information to the camera a.
Specifically, in order to avoid traffic consumption caused by interaction between the terminal device 11 and all cameras, in this embodiment of the application, the terminal device 11 only needs to send target object information to the camera a, and the target object information may refer to the foregoing discussion, and is not described here again.
At step 704, camera a configures a unique identifier for the target object.
Specifically, after receiving the target object information, the camera a configures a unique identifier for the target object.
Similarly, the user may send a corresponding editing instruction to the camera a through the terminal device 11, and after receiving the editing instruction, the camera a performs corresponding operations on each target object according to the editing instruction. The editing instructions can refer to the contents discussed above, and are not described in detail here.
In step 705, camera a feeds back the unique identifier to terminal device 11.
Specifically, after camera A generates the unique identifier of the target object and configures it for the target object, camera A sends the unique identifier to the terminal device 11, so that later interaction between camera A and the terminal device 11 can be carried out according to the unique identifier of the target object, saving traffic.
At step 706, camera a sends the unique identification of the target object to camera B and camera C.
Wherein, step 706 includes step 706a and step 706b, and the execution sequence of step 706a and step 706b may be arbitrary.
Specifically, at step 706a, camera a sends camera B a unique identification of the target object. At step 706b, camera a sends camera C the unique identification of the target object.
In order to facilitate searching for the target object by the cameras B and C, in the embodiment of the present application, the camera a may send target object information to the cameras B and C.
In step 707, the terminal apparatus 11 transmits a target tracking instruction to the camera a.
Specifically, when a user needs to monitor a target object within the monitoring range, the user may send a target tracking instruction to the camera a through the terminal device 11. The target tracking instruction may refer to the foregoing discussion and will not be described in detail here.
At step 708, camera a sends target tracking instructions to camera B and camera C.
Wherein, step 708 includes step 708a and step 708b, and the execution sequence of step 708a and step 708b may be arbitrary and is not specifically limited herein.
Specifically, in step 708a, camera a sends a target tracking command to camera B. At step 708b, camera a sends a target tracking command to camera C. That is, the camera a may transmit the target tracking instruction to the plurality of first cameras 12 after receiving the target tracking instruction transmitted by the terminal device 11.
In step 709, camera B searches for a target object.
Specifically, after receiving the target tracking instruction, the camera B may search for the target object within the monitoring range of the camera B according to the target tracking instruction, and the manner in which the camera B searches for the target object may refer to the contents discussed in the foregoing step 304, step 307, and step 308, which is not described herein again.
In step 710, camera C searches for a target object.
Similarly, after receiving the target tracking instruction, the camera C may search for the target object within the monitoring range of the camera C according to the target tracking instruction, and the manner of searching for the target object by the camera C may refer to the contents discussed in the foregoing step 304, step 307, and step 308, and is not described herein again.
It should be noted that the order of step 709 and step 710 may be arbitrary, and step 709 is executed first in fig. 7, but the order of step 709 and step 710 is not limited in practice.
In the embodiment of the present application, after the second camera 20 receives the target tracking instruction, it may notify other cameras to perform target search, so as to further improve the efficiency of searching for the target object by the camera and improve the efficiency of tracking the target object.
The plurality of first cameras 12 each search for the target object in the respective monitoring ranges, and after the target object is searched for by the camera B, step 711 of sending a notification message to the camera a is performed.
Specifically, the plurality of first cameras 12 all search for the target object; after a certain first camera 12 finds the target object, that first camera 12 may send a notification message to camera A, where the notification message carries the unique identifier of that camera and the information that the target object has been found.
In step 712, camera a sends a stop search instruction to camera C after receiving the notification message sent by camera B.
Specifically, after receiving the notification message sent by the camera B, the camera a determines that the camera B has searched the target object, and may notify other cameras, so as to avoid resource waste caused by the other cameras continuing to search.
As an embodiment, the camera B and the other cameras can communicate with each other, and the camera B may directly send a notification message to the other cameras after searching for the target object.
The monitoring ranges of the cameras may overlap, that is, more than one camera may search for the target object, and the cameras that have searched for the target object may all track and monitor the target object and send the acquired video including the target object to the terminal device 11, or send the video to the camera a, and the camera a sends the video to the terminal device 11 again.
As an embodiment, after receiving some interactive messages, each camera needs to determine whether the interactive messages need to be shared, and if the interactive messages need to be shared, the interactive messages are sent to corresponding objects. If the interaction message does not need to be shared, the message is cached.
Based on the foregoing discussion of a method for tracking a target object, an embodiment of the present application provides an apparatus for tracking a target object, which corresponds to the foregoing controller, and referring to fig. 8, the apparatus includes:
the receiving and sending module 801 is configured to receive a target tracking instruction, where the target tracking instruction is used to instruct tracking and monitoring of a target object;
the processing module 802 is configured to obtain a first preset bit set corresponding to the target object in a first time period according to the pre-learned monitoring information; the monitoring information comprises a preset bit set corresponding to each target object in each time period, and the first time period comprises the current moment;
the processing module 802 is further configured to control the first camera to switch to each preset position in the first preset position set, and after each time the first camera is switched to the corresponding preset position, search whether a target object exists in a monitoring picture corresponding to the switched preset position;
the processing module 802 is further configured to perform tracking monitoring on the target object when the target object is searched.
In one possible design, the processing module 802 is specifically configured to:
arranging all preset positions in the first preset position set in descending order of the target object search success rate to obtain the sorted first preset position set; the target object search success rate represents the probability of successfully finding the target object in the monitoring picture corresponding to each preset position in the first preset position set, over the multiple tracking instructions received previously;
and controlling the first camera to sequentially rotate to the corresponding preset positions according to the sequence of the sorted first preset position set.
In one possible design, processing module 802 is further configured to:
before receiving a target search instruction, determining a preset position set of the target object in each time period according to pre-recorded image information; the image information includes an image, the image shooting time, and the preset position of the first camera at which the image was shot;
and generating monitoring information according to the preset bit set of the target object in each time period.
In one possible design, the processing module 802 is specifically configured to:
determining a reference scene element corresponding to each preset position in a preset position set of the target object in each time period according to pre-recorded image information; the reference scene element is a scene element with the minimum distance from the target object, and the scene element refers to an object with a fixed and unchangeable position;
and establishing an association relation among the target object, a preset bit set of the target object in each time period and a reference scene element corresponding to each preset bit in the preset bit set of the target object in each time period, wherein the association relation is monitoring information.
In one possible design, the processing module 802 is specifically configured to:
after each switching of the preset position, searching a corresponding first reference scene element in a monitoring picture corresponding to the switched preset position;
and according to a preset window, with the first reference scene element as a starting point, sequentially traversing the monitoring pictures corresponding to the switched preset positions, and determining whether a target object exists in the monitoring pictures corresponding to the switched preset positions.
In one possible design, processing module 802 is further configured to:
after switching to each preset position, searching whether the target object exists in the monitoring picture corresponding to the switched preset position; when the target object is not found in the monitoring pictures corresponding to all the preset positions in the first preset position set, controlling the first camera to rotate sequentially along the preset direction by a preset first angle and, after each rotation, searching whether the target object exists in the rotated monitoring picture;
and after the first camera is controlled to rotate for one turn in sequence along the preset direction, and no target object is searched, sending prompt information to the second camera, wherein the prompt information is used for prompting that the target object is not in the monitoring range.
In one possible design, the target tracking instruction carries a unique identifier of the target object, and the processing module 802 is further configured to:
receiving target object information sent by the terminal equipment, configuring a unique identifier for the target object, and feeding back the unique identifier to the terminal equipment, wherein the target object information includes an image of the target object; or, alternatively,
receiving the unique identifier of the target object information sent by the second camera, wherein the unique identifier is generated by the second camera according to the target object information submitted by the user.
In one possible design, the processing module 802 is specifically configured to:
when the distance between the current monitoring picture position of the target object and the central point of the current monitoring picture is determined to be larger than a preset distance value, controlling the first camera to move along the moving direction of the target object in the monitoring picture so as to continuously monitor the target object;
and feeding back the monitoring picture corresponding to the target object to the second camera or the terminal equipment.
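As a hedged illustration of this re-centring step (Python; the pixel-coordinate convention and the `move_camera` callback taking a unit direction vector are assumptions added here): when the target drifts farther than the preset distance from the frame centre, the camera is moved along the target's direction:

```python
import math

def track_step(target_pos, frame_center, threshold, move_camera):
    # target_pos / frame_center: (x, y) pixel positions in the picture.
    # threshold: the preset distance value from the description above.
    dx = target_pos[0] - frame_center[0]
    dy = target_pos[1] - frame_center[1]
    dist = math.hypot(dx, dy)
    if dist > threshold:
        # Pan toward the target so it stays in view.
        move_camera((dx / dist, dy / dist))
        return True
    return False  # target still near the centre; no movement needed
```

The monitoring picture would then be fed back to the second camera or terminal as described.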
In one possible design, the transceiver module 801 is further configured to:
when the target object is found and tracking monitoring of the target object has begun, sending a notification message to the second camera, so that the second camera notifies the other cameras to stop searching for the target object; the notification message carries the unique identifier of the first camera and an indication that the target object has been found.
In one possible design, the transceiver module 801 is further configured to:
before receiving a target tracking instruction, sending a connection request to a second camera; the connection request carries the camera address of the first camera;
receiving the unique identification of the first camera fed back by the second camera; wherein the unique identification of the first camera is generated by the second camera in accordance with the connection request.
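The connection handshake above can be sketched as follows (Python; the `Hub` class, its `allocate_id` method, and the identifier format are all hypothetical stand-ins for the second camera's behaviour):

```python
class Hub:
    """Minimal second-camera stand-in that issues identifiers."""
    def __init__(self):
        self._next = 0
        self.registry = {}

    def allocate_id(self, request):
        # Generate a unique identifier for the first camera from its
        # connection request and remember its address.
        self._next += 1
        uid = f"cam-{self._next}"
        self.registry[uid] = request["address"]
        return uid

def register_with_hub(hub, camera_address):
    # The first camera sends a connection request carrying its address
    # and receives back the unique identifier generated by the hub.
    request = {"type": "connect", "address": camera_address}
    return hub.allocate_id(request)
```

In practice the request and reply would travel over the network; the in-process call here only shows the shape of the exchange.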
Based on the foregoing discussion of a method for tracking a target object, an embodiment of the present application provides an apparatus for tracking a target object, which corresponds to the foregoing controller, and referring to fig. 9, the apparatus includes:
at least one processor 901, and
a memory 902 communicatively connected to the at least one processor 901;
wherein the memory 902 stores instructions executable by the at least one processor 901, and the at least one processor 901 implements the method of fig. 2 by executing the instructions stored in the memory 902.
In one embodiment, the functions of the processing module 802 in fig. 8 may be implemented by the processor 901 in fig. 9.
It should be noted that fig. 9 illustrates a single processor 901; in practice, however, the number of processors 901 is not limited.
In one embodiment, the processor 901 and the memory 902 may be coupled, or they may be separate components.
On the basis of the foregoing discussion of a method for tracking a target object, embodiments of the present application provide a computer-readable storage medium storing computer instructions that, when executed on a computer, cause the computer to perform the method as described in fig. 3 or fig. 7.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While the preferred embodiments of the present application have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all alterations and modifications as fall within the scope of the application.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.
Claims (16)
1. A method of tracking a target object, comprising:
determining a preset position set of the target object in each time period according to pre-recorded image information, and generating monitoring information according to the preset position set of the target object in each time period; the image information comprises an image, the image shooting time, and the preset position of the first camera at which the image was captured;
receiving a target tracking instruction, wherein the target tracking instruction is used for indicating to track and monitor a target object;
obtaining, according to pre-learned monitoring information, a first preset position set corresponding to the target object in a first time period; the monitoring information comprises the preset position set corresponding to each target object in each time period, and the first time period includes the current moment;
controlling a first camera to switch to each preset position in the first preset position set, and searching whether the target object exists in a monitoring picture corresponding to the switched preset position after switching to the corresponding preset position each time;
and when the target object is searched, tracking and monitoring the target object.
2. The method of claim 1, wherein controlling the first camera to switch to each preset position in the first preset position set comprises:
arranging all preset positions in the first preset position set in order from high to low according to the target object search success rate, to obtain a sorted first preset position set; the target object search success rate represents, over a plurality of previously received tracking instructions, the probability of successfully finding a target object in the monitoring picture corresponding to each preset position in the first preset position set;
and controlling the first camera to sequentially rotate to the corresponding preset positions according to the sequence of the sorted first preset position set.
3. The method of claim 1, wherein generating the monitoring information according to the preset position set of the target object in each time period comprises:
determining a reference scene element corresponding to each preset position in the preset position set of the target object in each time period according to the pre-recorded image information; the reference scene element is the scene element at the minimum distance from the target object, and a scene element is an object whose position is fixed and unchanging;
and establishing an association relation among the target object, the preset position set of the target object in each time period, and the reference scene element corresponding to each preset position in that set, wherein the association relation constitutes the monitoring information.
4. The method of claim 3, wherein searching whether the target object exists in the monitoring picture corresponding to the switched preset position after switching to the corresponding preset position each time comprises:
after each switch to a preset position, searching for the corresponding first reference scene element in the monitoring picture corresponding to the switched preset position;
and, with the first reference scene element as a starting point, sequentially traversing the monitoring picture corresponding to the switched preset position according to a preset window, to determine whether the target object exists in that monitoring picture.
5. The method according to any one of claims 1 to 4, wherein, after searching whether the target object exists in the monitoring picture corresponding to the switched preset position after each switch, the method comprises:
when the target object is not found in the monitoring picture corresponding to any preset position in the first preset position set, controlling the first camera to rotate sequentially along a preset direction by a preset first angle, and after each rotation, searching whether the target object exists in the rotated monitoring picture;
and after the first camera has completed one full turn along the preset direction without finding the target object, sending prompt information to a second camera, wherein the prompt information indicates that the target object is not within the monitoring range.
6. The method of any one of claims 1 to 4, wherein the target tracking instruction carries a unique identifier of the target object, and before receiving the target tracking instruction, the method comprises:
receiving target object information sent by terminal equipment, configuring a unique identifier for the target object, and feeding back the unique identifier to the terminal equipment, wherein the target object information includes an image of the target object; or, alternatively,
receiving the unique identifier of the target object information sent by a second camera, wherein the unique identifier is generated by the second camera according to the target object information submitted by the user.
7. The method of claim 6, wherein tracking the target object comprises:
when the distance between the current monitoring picture position of the target object and the central point of the current monitoring picture is determined to be larger than a preset distance value, controlling a first camera to move along the moving direction of the target object in the monitoring picture so as to continuously monitor the target object;
and feeding back the monitoring picture corresponding to the target object to the second camera or the terminal equipment.
8. The method of any one of claims 1-4, after tracking monitoring the target object when the target object is searched, comprising:
sending a notification message to a second camera, so that the second camera notifies the other cameras to stop searching for the target object; wherein the notification message carries the unique identifier of the first camera and an indication that the target object has been found.
9. The method of claim 8, wherein receiving target tracking instructions comprises:
receiving a target tracking instruction from the second camera.
10. The method of claim 8, prior to receiving the target tracking instruction, comprising:
sending a connection request to the second camera; the connection request carries the camera address of the first camera;
receiving the unique identification of the first camera fed back by the second camera; wherein the unique identification of the first camera is generated by the second camera from the connection request.
11. An apparatus for tracking a target object, comprising:
the processing module is used for determining a preset position set of the target object in each time period according to pre-recorded image information, and generating monitoring information according to the preset position set of the target object in each time period; the image information comprises an image, the image shooting time, and the preset position of the first camera at which the image was captured;
the receiving and sending module is used for receiving a target tracking instruction, and the target tracking instruction is used for indicating to track and monitor a target object;
the processing module is further configured to obtain, according to pre-learned monitoring information, a first preset position set corresponding to the target object in a first time period; the monitoring information comprises the preset position set corresponding to each target object in each time period, and the first time period includes the current moment;
the processing module is further configured to control the first camera to switch to each preset position in the first preset position set, and search whether the target object exists in a monitoring picture corresponding to the switched preset position after switching to the corresponding preset position each time;
the processing module is further configured to perform tracking monitoring on the target object when the target object is searched.
12. The apparatus of claim 11, wherein the processing module is specifically configured to:
arranging all preset positions in the first preset position set in order from high to low according to the target object search success rate, to obtain a sorted first preset position set; the target object search success rate represents, over a plurality of previously received tracking instructions, the probability of successfully finding a target object in the monitoring picture corresponding to each preset position in the first preset position set;
and controlling the first camera to sequentially rotate to the corresponding preset positions according to the sequence of the sorted first preset position set.
13. The apparatus of claim 11, wherein the processing module is specifically configured to:
determining a reference scene element corresponding to each preset position in the preset position set of the target object in each time period according to the pre-recorded image information; the reference scene element is the scene element at the minimum distance from the target object, and a scene element is an object whose position is fixed and unchanging;
and establishing an association relation among the target object, the preset position set of the target object in each time period, and the reference scene element corresponding to each preset position in that set, wherein the association relation constitutes the monitoring information.
14. The apparatus of claim 13, wherein the processing module is specifically configured to:
after each switch to a preset position, searching for the corresponding first reference scene element in the monitoring picture corresponding to the switched preset position;
and, with the first reference scene element as a starting point, sequentially traversing the monitoring picture corresponding to the switched preset position according to a preset window, to determine whether the target object exists in that monitoring picture.
15. An apparatus for tracking a target object, comprising:
at least one processor, and
a memory communicatively coupled to the at least one processor;
wherein the memory stores instructions executable by the at least one processor, the at least one processor implementing the method of any one of claims 1-7 by executing the instructions stored by the memory.
16. A computer-readable storage medium having stored thereon computer instructions which, when executed on a computer, cause the computer to perform the method of any one of claims 1-10.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910461306.XA CN110113579B (en) | 2019-05-30 | 2019-05-30 | Method and device for tracking target object |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110113579A CN110113579A (en) | 2019-08-09 |
CN110113579B true CN110113579B (en) | 2021-04-16 |
Family
ID=67492873
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910461306.XA Active CN110113579B (en) | 2019-05-30 | 2019-05-30 | Method and device for tracking target object |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110113579B (en) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112492261A (en) * | 2019-09-12 | 2021-03-12 | 华为技术有限公司 | Tracking shooting method and device and monitoring system |
CN111179317A (en) * | 2020-01-04 | 2020-05-19 | 阔地教育科技有限公司 | Interactive teaching system and method |
CN111581245B (en) * | 2020-03-26 | 2023-10-17 | 口口相传(北京)网络技术有限公司 | Data searching method and device |
CN111405203B (en) * | 2020-03-30 | 2022-11-04 | 杭州海康威视数字技术股份有限公司 | Method and device for determining picture switching, electronic equipment and storage medium |
CN113840073B (en) * | 2020-06-08 | 2023-08-15 | 浙江宇视科技有限公司 | Shooting equipment control method, device, equipment and medium |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101511004A (en) * | 2009-03-25 | 2009-08-19 | 北京中星微电子有限公司 | Method and apparatus for monitoring camera shot |
CN104809874A (en) * | 2015-04-15 | 2015-07-29 | 东软集团股份有限公司 | Traffic accident detection method and device |
Family Cites Families (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP5074951B2 (en) * | 2008-02-18 | 2012-11-14 | 株式会社日立国際電気 | Television camera apparatus and position correction method |
CN101699862B (en) * | 2009-11-16 | 2011-04-13 | 上海交通大学 | Acquisition method of high-resolution region-of-interest image of PTZ camera |
JP5804841B2 (en) * | 2011-08-16 | 2015-11-04 | キヤノン株式会社 | IMAGING DEVICE AND IMAGING DEVICE CONTROL METHOD |
CN102592146B (en) * | 2011-12-28 | 2013-09-18 | 浙江大学 | Face detection and camera tripod control method applied to video monitoring |
CN104427300A (en) * | 2013-08-27 | 2015-03-18 | 华为技术有限公司 | Control method and device of video monitoring device |
CN103607569B (en) * | 2013-11-22 | 2017-05-17 | 广东威创视讯科技股份有限公司 | Method and system for tracking monitored target in process of video monitoring |
WO2016002622A1 (en) * | 2014-06-30 | 2016-01-07 | 株式会社日立国際電気 | Information display system |
CN106303402A (en) * | 2015-06-11 | 2017-01-04 | 杭州海康威视系统技术有限公司 | Presetting bit method to set up, call method and the device of monopod video camera |
CN105096596A (en) * | 2015-07-03 | 2015-11-25 | 北京润光泰力科技发展有限公司 | Traffic violation detecting method and system |
JP6539253B2 (en) * | 2016-12-06 | 2019-07-03 | キヤノン株式会社 | INFORMATION PROCESSING APPARATUS, CONTROL METHOD THEREOF, AND PROGRAM |
CN107358146B (en) * | 2017-05-22 | 2018-05-22 | 深圳云天励飞技术有限公司 | Method for processing video frequency, device and storage medium |
CN109511090A (en) * | 2018-10-17 | 2019-03-22 | 陆浩洁 | A kind of interactive mode tracing and positioning anticipation system |
CN109561285A (en) * | 2018-12-10 | 2019-04-02 | 深圳市凯达尔科技实业有限公司 | A kind of video capture evidence obtaining linked system |
CN109657879B (en) * | 2019-01-07 | 2023-06-09 | 平安科技(深圳)有限公司 | Method, device, computer equipment and storage medium for obtaining predicted route |
CN109743552A (en) * | 2019-01-17 | 2019-05-10 | 宇龙计算机通信科技(深圳)有限公司 | A kind of object monitor method, apparatus, server and storage medium |
- 2019-05-30: application CN201910461306.XA filed in China; granted as CN110113579B (status: Active)
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101511004A (en) * | 2009-03-25 | 2009-08-19 | 北京中星微电子有限公司 | Method and apparatus for monitoring camera shot |
CN104809874A (en) * | 2015-04-15 | 2015-07-29 | 东软集团股份有限公司 | Traffic accident detection method and device |
Also Published As
Publication number | Publication date |
---|---|
CN110113579A (en) | 2019-08-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110113579B (en) | Method and device for tracking target object | |
KR102211014B1 (en) | Identification and control of smart devices | |
US11410415B2 (en) | Processing method for augmented reality scene, terminal device, system, and computer storage medium | |
KR101852284B1 (en) | Alarming method and device | |
RU2629469C1 (en) | Method and device for alarm | |
US11575721B2 (en) | Breakout session assignment by device affiliation | |
US20170171613A1 (en) | Method and apparatus for controlling electronic device, and storage medium | |
US20100157064A1 (en) | Object tracking system, method and smart node using active camera handoff | |
CN105139470A (en) | Checking-in method, device and system based on face recognition | |
US20170125060A1 (en) | Video playing method and device | |
CN104932456A (en) | Intelligent scene realizing method and device, intelligent terminal and controller | |
CN111866468B (en) | Object tracking distribution method, device, storage medium and electronic device | |
KR20150019230A (en) | Method and apparatus for tracking object using multiple camera | |
CN108664847B (en) | Object identification method, device and system | |
CN113268211A (en) | Image acquisition method and device, electronic equipment and storage medium | |
CN103116737A (en) | Distributed type video image identification system and image identification method thereof | |
CN109218612B (en) | Tracking shooting system and shooting method | |
CN112235510A (en) | Shooting method, shooting device, electronic equipment and medium | |
Salisbury et al. | Crowdar: augmenting live video with a real-time crowd | |
KR20210024935A (en) | Apparatus for monitoring video and apparatus for analyzing video, and on-line machine learning method | |
KR102658563B1 (en) | Apparatus for monitoring video, apparatus for analyzing video and learning methods thereof | |
US20210344875A1 (en) | Method and system for identifying a video camera of a video surveillance environment | |
CN113989706A (en) | Image processing method and device, server, electronic device and readable storage medium | |
CN109272538B (en) | Picture transmission method and device | |
KR20240107835A (en) | Smart camera, cloud server and control method thereof |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||