WO2018019135A1 - Target monitoring method, camera, controller and target monitoring system
- Publication number
- WO2018019135A1 (PCT/CN2017/092864)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- target
- camera
- monitoring screen
- grid
- monitoring
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/292—Multi-camera tracking
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
- H04N7/181—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/70—Information retrieval; Database structures therefor; File system structures therefor of video data
- G06F16/73—Querying
- G06F16/732—Query formulation
- G06F16/7335—Graphical querying, e.g. query-by-region, query-by-sketch, query-by-trajectory, GUIs for designating a person/face/object as a query predicate
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/50—Constructional details
- H04N23/51—Housings
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/695—Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20021—Dividing image into blocks, subimages or windows
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30232—Surveillance
Definitions
- the embodiments of the present invention relate to the field of monitoring technologies, and in particular, to a target monitoring method, a camera, a controller, and a target monitoring system.
- In existing systems, the entire monitoring scene is usually divided into several independent areas: each camera independently monitors one area, locates the target when it appears, and tracks and detects it, then transmits the monitoring record to a server; the server analyzes the monitoring records and uniformly schedules the cameras to monitor the target cooperatively.
- When a moving target is continuously tracked, because each camera independently monitors its own area and is uniformly scheduled by the server, each camera must use the target's feature information to run detection within its own monitoring area in order to determine the target's location. This results in a longer detection time per camera and less efficient tracking of the target.
- the embodiments of the present invention provide a target monitoring method, a camera, a controller, and a target monitoring system, which can achieve location sharing of targets and improve tracking efficiency of targets.
- An embodiment of the present invention provides a target monitoring method, where the target monitoring method is applied to a target monitoring system, the target monitoring system includes a first camera and a second camera, and the target monitoring method includes:
- the target monitoring system acquires location information of the target to be tracked in a first monitoring screen, where the first monitoring screen is captured by the first camera;
- the target monitoring system determines, according to the location information of the target in the first monitoring screen, whether the location of the target in the first monitoring screen is in an overlapping area, where the overlapping area is the range in which the field of view of the first camera overlaps the field of view of the second camera;
- if the location of the target in the first monitoring screen is in the overlapping area, the target monitoring system switches the current primary surveillance camera to the second camera.
- In this way, the primary surveillance camera switches from the first camera to the second camera, and the first camera and the second camera share the position of the target: the second camera can determine the position of the target in its own monitoring screen via the adjacent first camera and can quickly track the target according to that position, without each camera having to run its own detection to locate the target, thereby improving tracking efficiency.
- The determining, by the target monitoring system according to the location information of the target in the first monitoring screen, whether the location of the target in the first monitoring screen is in an overlapping area includes:
- the target monitoring system determines, according to the location information of the target in the first monitoring screen, the grid number of the grid in which the target falls in the first monitoring screen, where a plurality of grids are preset on the first monitoring screen, the plurality of grids have different grid numbers, and the plurality of grids correspond to different location areas in the first monitoring screen;
- the target monitoring system queries the grid correspondence table of the overlapping area according to the grid number of the target in the first monitoring screen, where the grid correspondence table includes the correspondence between the grid in which the same physical location point falls in the first monitoring screen and the grid in which it falls in the second monitoring screen, the second monitoring screen being captured by the second camera;
- if the grid number of the target in the first monitoring screen is found in the grid correspondence table, the target monitoring system determines that the location of the target in the first monitoring screen is within the overlapping area.
- In this implementation, the grid correspondence table may be pre-configured, and querying the grid correspondence table is sufficient to determine whether the position of the target in the first monitoring screen is within the overlapping area.
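The grid-based overlap test described above can be sketched as follows. The grid layout, cell size, numbering scheme, and table contents are illustrative assumptions, not values taken from the patent.

```python
# Sketch of the grid-based overlap test, under assumed parameters:
# the first monitoring screen is divided into a fixed raster of cells,
# grid numbers are assigned row-major, and the correspondence table maps
# grid numbers in the first screen to grid numbers in the second screen.

GRID_COLS, GRID_ROWS = 8, 6          # assumed grid layout of the screen
CELL_W, CELL_H = 160, 120            # assumed cell size in pixels

def grid_number(x, y):
    """Map a pixel position in the first monitoring screen to a grid number."""
    col = min(int(x // CELL_W), GRID_COLS - 1)
    row = min(int(y // CELL_H), GRID_ROWS - 1)
    return row * GRID_COLS + col

# Assumed correspondence table for the overlapping area: grid number in the
# first screen -> grid number of the same physical point in the second screen.
grid_correspondence = {7: 0, 15: 8, 23: 16}

def in_overlap_area(x, y):
    """The target is in the overlap area iff its grid number is in the table."""
    return grid_number(x, y) in grid_correspondence
```

Because the table is built once from the installed camera geometry, the per-frame overlap test reduces to one arithmetic mapping and one dictionary lookup.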
- In conjunction with the first possible implementation of the first aspect, in a second possible implementation manner of the first aspect, after the target monitoring system switches the current primary surveillance camera to the second camera, the method further includes:
- the target monitoring system queries the grid correspondence table according to the acquired grid number of the target falling into the first monitoring screen;
- if the grid number of the target in the second monitoring screen is found in the grid correspondence table, the target monitoring system determines that the target is found in the second monitoring screen;
- the target monitoring system acquires the location information of the target in the second monitoring screen according to the grid number of the target in the second monitoring screen.
- That is, the target monitoring system queries the grid correspondence table according to the acquired grid number of the target in the first monitoring screen; if the corresponding grid number of the target in the second monitoring screen is found in the grid correspondence table, the target monitoring system determines that the target is found in the second monitoring screen, and then acquires the position information of the target in the second monitoring screen according to the grid number of the target in the second monitoring screen.
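The handoff step above might look like the sketch below. It reuses an assumed correspondence table and cell geometry; the function names and the choice of a cell-center position are illustrative, not from the patent.

```python
# Sketch of the primary-camera handoff: after switching to the second camera,
# the grid number acquired in the first screen is looked up in the
# correspondence table; a hit both confirms the target is found in the second
# screen and yields its approximate position there.

CELL_W, CELL_H, GRID_COLS = 160, 120, 8   # assumed layout of the second screen

# grid number in first screen -> grid number in second screen (assumed values)
grid_correspondence = {7: 0, 15: 8, 23: 16}

def grid_center(grid_no):
    """Approximate pixel position of a grid cell's center in the second screen."""
    row, col = divmod(grid_no, GRID_COLS)
    return (col * CELL_W + CELL_W / 2, row * CELL_H + CELL_H / 2)

def handoff(first_screen_grid_no):
    """Return the target's position in the second screen, or None if not found."""
    second_grid = grid_correspondence.get(first_screen_grid_no)
    if second_grid is None:
        return None                       # target not found in second screen
    return grid_center(second_grid)
```

This is what allows the second camera to lock onto the target immediately instead of re-detecting it from scratch.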
- Before the target monitoring system acquires the location information of the target to be tracked in the first monitoring screen, the method further includes: the target monitoring system acquires the feature information of the target and detects, according to the feature information, whether the target appears in the first monitoring screen; if the target appears in the first monitoring screen, the following is triggered: the target monitoring system acquires the location information of the target to be tracked in the first monitoring screen.
- By detecting the features of the target to be tracked, the camera in whose screen the target appears can be switched to be the primary camera.
- If the target monitoring system further includes a controller, the acquiring, by the target monitoring system, of the location information of the target to be tracked in the first monitoring screen includes:
- the controller acquires the feature information of the target and detects the feature information of the target in the first monitoring screen;
- if the feature information is detected in the first monitoring screen, the controller calculates the location information of the target in the first monitoring screen and sends the location information of the target in the first monitoring screen to the first camera.
- In this way, the first camera can acquire the position information of the target in the first monitoring screen from the controller, so that the first camera, as the primary camera, can continuously track the target.
- If the target monitoring system further includes a controller and a monitoring screen, the switching of the current primary surveillance camera to the second camera is specifically:
- the controller switches the display on the monitoring screen to the second monitoring screen; or
- the controller highlights the second monitoring screen on the monitoring screen; or
- the controller displays the second monitoring screen and the first monitoring screen in series on the monitoring screen.
- In this way, the process of the target moving from the first monitoring screen to the second monitoring screen can be visually displayed on the monitoring screen, thereby realizing source tracking and destination tracking of the target.
- the target monitoring system further includes a controller and a memory. After the target monitoring system switches the current main surveillance camera to the second camera, the method further includes:
- the controller stores, into the memory, the second monitoring picture obtained by the second camera shooting the target.
- In this way, the second monitoring picture captured by the second camera as the primary camera is stored in the memory, which makes it possible to directly retrieve from the memory the video pictures in which the target appears in the second monitoring screen, easily and intuitively, without manually searching all of the monitoring pictures in which the target was not captured.
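Storing and retrieving the pictures in which the target appears might be organized as below. The in-memory store, its keys, and the string "frames" standing in for real video frames are all illustrative assumptions.

```python
# Sketch of storing the second camera's monitoring pictures while it is the
# primary camera, keyed by (camera id, timestamp), so the pictures in which
# the target appears can be retrieved directly instead of searching all
# recordings. The store layout is an illustrative assumption.

class FrameStore:
    def __init__(self):
        self._frames = []                 # list of (camera_id, timestamp, frame)

    def store(self, camera_id, timestamp, frame):
        self._frames.append((camera_id, timestamp, frame))

    def frames_of(self, camera_id):
        """Directly retrieve the pictures recorded by one camera."""
        return [(t, f) for c, t, f in self._frames if c == camera_id]

store = FrameStore()
store.store("camera1", 0.0, "frame-a")    # before the handoff
store.store("camera2", 1.0, "frame-b")    # second camera is now the primary camera
store.store("camera2", 2.0, "frame-c")
```

Keying stored pictures by the camera that was primary at the time is what makes the later retrieval direct: only the frames recorded while that camera tracked the target need to be examined.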
- An embodiment of the present invention further provides a camera, where the camera is specifically the first camera in a target monitoring system, the target monitoring system includes the first camera and a second camera, and the first camera includes:
- a first location acquiring module, configured to acquire location information of a target to be tracked in a first monitoring screen when the first camera is the current primary surveillance camera, where the first monitoring screen is captured by the first camera;
- an overlapping area determining module, configured to determine, according to the location information of the target in the first monitoring screen, whether the location of the target in the first monitoring screen is in an overlapping area, where the overlapping area is the range in which the field of view of the first camera overlaps the field of view of the second camera;
- a switching module configured to switch the current main surveillance camera to the second camera if the location of the target in the first monitoring screen is in the overlapping area.
- In this way, the primary surveillance camera switches from the first camera to the second camera, and the first camera and the second camera share the position of the target: the second camera can determine the position of the target in its own monitoring screen via the adjacent first camera and can quickly track the target according to that position, without each camera having to run its own detection to locate the target, thereby improving tracking efficiency.
- the overlapping area determining module includes:
- a grid determining module, configured to determine, according to the location information of the target in the first monitoring screen, the grid number of the grid in which the target falls in the first monitoring screen, where a plurality of grids are preset on the first monitoring screen, the plurality of grids have different grid numbers, and the plurality of grids correspond to different location areas in the first monitoring screen;
- a first grid query module, configured to query the grid correspondence table of the overlapping area according to the grid number of the target in the first monitoring screen, where the grid correspondence table includes the correspondence between the grid in which the same physical location point falls in the first monitoring screen and the grid in which it falls in the second monitoring screen, the second monitoring screen being captured by the second camera;
- an overlapping area determining module, configured to: if the grid number of the target in the first monitoring screen is found in the grid correspondence table, determine that the location of the target in the first monitoring screen is within the overlapping area.
- In this implementation, the grid correspondence table may be pre-configured, and querying the grid correspondence table is sufficient to determine whether the position of the target in the first monitoring screen is within the overlapping area.
- the first camera further includes: a feature detecting module, wherein
- the feature detecting module is configured to: before the location information of the target to be tracked in the first monitoring screen is acquired, acquire the feature information of the target and detect, according to the feature information, whether the target appears in the first monitoring screen; and if the target appears in the first monitoring screen, trigger execution of the first location acquiring module.
- By detecting the features of the target to be tracked, the camera in whose screen the target appears can be switched to be the primary camera.
- the embodiment of the present invention further provides a target monitoring system, the target monitoring system comprising: the first camera and the second camera according to any one of the preceding claims 8 to 10.
- In this way, the primary surveillance camera switches from the first camera to the second camera, and the first camera and the second camera share the position of the target: the second camera can determine the position of the target in its own monitoring screen via the adjacent first camera and can quickly track the target according to that position, without each camera having to run its own detection to locate the target, thereby improving tracking efficiency.
- the second camera includes:
- a second grid query module, configured to: after the first camera switches the current primary surveillance camera to the second camera, when the second camera becomes the primary surveillance camera, query the grid correspondence table according to the acquired grid number of the target in the first monitoring screen;
- a target locking module, configured to: if the grid number of the target in the second monitoring screen is found in the grid correspondence table, determine that the target is found in the second monitoring screen;
- a second location acquiring module, configured to acquire the location information of the target in the second monitoring screen according to the grid number of the target in the second monitoring screen.
- That is, the target monitoring system queries the grid correspondence table according to the acquired grid number of the target in the first monitoring screen; if the corresponding grid number of the target in the second monitoring screen is found in the grid correspondence table, the target monitoring system determines that the target is found in the second monitoring screen, and then acquires the position information of the target in the second monitoring screen according to the grid number of the target in the second monitoring screen.
- An embodiment of the present invention further provides a controller, where the controller is deployed in a target monitoring system, the target monitoring system includes the controller, a first camera, and a second camera, and the controller includes:
- a location acquisition module, configured to acquire location information of a target to be tracked in a first monitoring screen when the first camera is the current primary surveillance camera, where the first monitoring screen is captured by the first camera;
- an overlapping area determining module, configured to determine, according to the location information of the target in the first monitoring screen, whether the location of the target in the first monitoring screen is in an overlapping area, where the overlapping area is the range in which the field of view of the first camera overlaps the field of view of the second camera;
- a switching module configured to switch the current main surveillance camera to the second camera if the location of the target in the first monitoring screen is in the overlapping area.
- In this way, the primary surveillance camera switches from the first camera to the second camera, and the first camera and the second camera share the position of the target: the second camera can determine the position of the target in its own monitoring screen via the adjacent first camera and can quickly track the target according to that position, without each camera having to run its own detection to locate the target, thereby improving tracking efficiency.
- the overlapping area determining module includes:
- a grid determining module, configured to determine, according to the location information of the target in the first monitoring screen, the grid number of the grid in which the target falls in the first monitoring screen, where a plurality of grids are preset on the first monitoring screen, the plurality of grids have different grid numbers, and the plurality of grids correspond to different location areas in the first monitoring screen;
- a grid query module, configured to query the grid correspondence table of the overlapping area according to the grid number of the target in the first monitoring screen, where the grid correspondence table includes the correspondence between the grid in which the same physical location point falls in the first monitoring screen and the grid in which it falls in the second monitoring screen, the second monitoring screen being captured by the second camera;
- an overlapping area determining module, configured to: if the grid number of the target in the first monitoring screen is found in the grid correspondence table, determine that the location of the target in the first monitoring screen is within the overlapping area.
- In this implementation, the grid correspondence table may be pre-configured, and querying the grid correspondence table is sufficient to determine whether the position of the target in the first monitoring screen is within the overlapping area.
- the controller further includes:
- a target locking module, configured to: after the first camera switches the current primary surveillance camera to the second camera, when the second camera becomes the primary surveillance camera, query the grid correspondence table according to the acquired grid number of the target in the first monitoring screen; and if the grid number of the target in the second monitoring screen is found in the grid correspondence table, determine that the target is found in the second monitoring screen;
- the location acquisition module is further configured to acquire the location information of the target in the second monitoring screen according to the grid number of the target in the second monitoring screen.
- That is, the target monitoring system queries the grid correspondence table according to the acquired grid number of the target in the first monitoring screen; if the corresponding grid number of the target in the second monitoring screen is found in the grid correspondence table, the target monitoring system determines that the target is found in the second monitoring screen, and then acquires the position information of the target in the second monitoring screen according to the grid number of the target in the second monitoring screen.
- The location acquisition module is specifically configured to: before acquiring the location information of the target to be tracked in the first monitoring screen, detect the feature information of the target in the first monitoring screen; and if the feature information is detected in the first monitoring screen, calculate the position information of the target in the first monitoring screen and send the position information of the target in the first monitoring screen to the first camera.
- In this way, the first camera can acquire the position information of the target in the first monitoring screen from the controller, so that the first camera, as the primary camera, can continuously track the target.
- If the target monitoring system further includes a monitoring screen, the switching module is specifically configured to: switch the display on the monitoring screen to the second monitoring screen; or highlight the second monitoring screen on the monitoring screen; or display the second monitoring screen in series with the first monitoring screen on the monitoring screen.
- In this way, the process of the target moving from the first monitoring screen to the second monitoring screen can be visually displayed on the monitoring screen, thereby realizing source tracking and destination tracking of the target.
- If the target monitoring system further includes a memory, the controller further includes a storage module, configured to: after the switching module switches the current primary surveillance camera to the second camera, store, into the memory, the second monitoring picture obtained by the second camera shooting the target.
- In this way, the second monitoring picture captured by the second camera as the primary camera is stored in the memory, which makes it possible to directly retrieve from the memory the video pictures in which the target appears in the second monitoring screen, easily and intuitively, without manually searching all of the monitoring pictures in which the target was not captured.
- An embodiment of the present invention further provides a target monitoring system, the target monitoring system comprising: the controller according to any one of the implementations of the foregoing fourth aspect, the first camera, and the second camera.
- In this way, the primary surveillance camera switches from the first camera to the second camera, and the first camera and the second camera share the position of the target: the second camera can determine the position of the target in its own monitoring screen via the adjacent first camera and can quickly track the target according to that position, without each camera having to run its own detection to locate the target, thereby improving tracking efficiency.
- the target monitoring system further includes: a monitoring screen, and/or a memory.
- In this way, the second monitoring picture captured by the second camera as the primary camera is stored in the memory, which makes it possible to directly retrieve from the memory the video pictures in which the target appears in the second monitoring screen, easily and intuitively, without manually searching all of the monitoring pictures in which the target was not captured.
- FIG. 1 is a schematic flowchart of a target monitoring method according to an embodiment of the present invention.
- FIG. 2 is another schematic flowchart of a target monitoring method according to an embodiment of the present invention.
- FIG. 3 is another schematic flowchart of a target monitoring method according to an embodiment of the present invention.
- FIG. 4 is another schematic flowchart of a target monitoring method according to an embodiment of the present invention.
- FIG. 5 is another schematic flowchart of a target monitoring method according to an embodiment of the present invention.
- FIG. 6-a is a schematic structural diagram of a camera according to an embodiment of the present invention.
- FIG. 6-b is another schematic structural diagram of a camera according to an embodiment of the present invention.
- FIG. 6-c is another schematic structural diagram of a camera according to an embodiment of the present invention.
- FIG. 7-a is a schematic diagram of a target monitoring system according to an embodiment of the present invention.
- FIG. 7-b is another schematic diagram of a target monitoring system according to an embodiment of the present invention.
- FIG. 8-a is a schematic structural diagram of a controller according to an embodiment of the present invention.
- FIG. 8-b is another schematic structural diagram of a controller according to an embodiment of the present invention.
- FIG. 8-c is another schematic structural diagram of a controller according to an embodiment of the present invention.
- FIG. 8-d is another schematic structural diagram of a controller according to an embodiment of the present invention.
- FIG. 9-a is a schematic diagram of a target monitoring system according to an embodiment of the present invention.
- FIG. 9-b is another schematic diagram of a target monitoring system according to an embodiment of the present invention.
- FIG. 10 is another schematic structural diagram of a camera according to an embodiment of the present invention.
- FIG. 11 is a schematic diagram of another structure of a controller according to an embodiment of the present invention.
- In the prior art, each camera performs target detection and tracking in a separate monitoring area, that is, the monitoring areas of the cameras do not overlap; the cameras cannot share location coordinates, so collaborative tracking is less efficient.
- In the embodiments of the present invention, the plurality of cameras may include an adjacent first camera and second camera whose fields of view overlap. Below, the embodiments are described by taking the first camera and the second camera as examples of cameras that cooperatively track a target.
- In the embodiments of the present invention, switching of the primary camera is performed according to the movement of the target.
- the target monitoring method provided by the embodiment of the present invention can be applied to the target monitoring system, where the target monitoring system includes a first camera and a second camera. Referring to FIG. 1, the method of this embodiment includes:
- The target monitoring system acquires location information of the target to be tracked in the first monitoring screen, where the first monitoring screen is captured by the first camera.
- the target monitoring system includes at least a first camera and a second camera.
- the two cameras are adjacent cameras.
- In addition to the first camera and the second camera, the target monitoring system of the embodiment of the present invention may include more cameras.
- For the target monitoring method between other cameras, refer to the primary-camera switching process between the first camera and the second camera in the embodiments herein.
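For more than two cameras, the pairwise switching process above suggests keeping one correspondence table per adjacent pair of cameras with overlapping fields of view. The sketch below is an assumed organization; the camera names and table contents are illustrative, not specified by the patent.

```python
# Sketch of extending the two-camera switching process to more cameras:
# one grid correspondence table per adjacent pair with overlapping fields
# of view. All identifiers and table contents are illustrative assumptions.

# (current primary camera, neighbor) -> correspondence table for their overlap
pairwise_tables = {
    ("camera1", "camera2"): {7: 0, 15: 8},
    ("camera2", "camera3"): {5: 40, 13: 48},
}

def next_primary(current, grid_no):
    """If the target's grid lies in an overlap with a neighbor, switch to it."""
    for (src, dst), table in pairwise_tables.items():
        if src == current and grid_no in table:
            return dst, table[grid_no]    # new primary camera and grid there
    return current, grid_no               # no overlap hit: keep current primary
```

As the target moves camera to camera, repeatedly applying this rule passes the primary role along a chain of adjacent cameras without any global search.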
- the first camera captures the first monitoring screen.
- The target monitoring system acquires the location information of the target to be tracked in the first monitoring screen; for example, the location information may be acquired by the first camera, or it may be obtained by a controller in the target monitoring system. The manner of acquiring the location information of the target to be tracked in the first monitoring screen is not limited herein.
- In some embodiments of the present invention, before the target monitoring system acquires the location information of the target to be tracked in the first monitoring screen, the method further includes:
- the target monitoring system acquires feature information of the target, and detects whether the target appears in the first monitoring screen according to the feature information;
- if the target appears in the first monitoring screen, step 101 is triggered: the target monitoring system acquires the location information of the target to be tracked in the first monitoring screen.
- The feature information of the target may be pre-configured in the target monitoring system, and the target monitoring system detects, according to the feature information of the target, whether the target appears in the first monitoring screen; if the target appears in the first monitoring screen, the current primary camera is set to the first camera corresponding to the first monitoring screen, and then step 101 is performed. By detecting the features of the target to be tracked, it is possible to determine in real time in which camera's screen the target appears, and that camera can be switched to be the primary camera.
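The detection-and-trigger logic above might be sketched as follows, with a deliberately simplified "feature match" standing in for real visual feature detection; the feature string, picture representation, and camera names are all illustrative assumptions.

```python
# Sketch of selecting the primary camera: each camera's current picture is
# checked for the pre-configured feature information of the target, and the
# camera in whose picture the target appears becomes the primary camera.
# The "pictures" and the feature match are simplified stand-ins.

target_feature = "red-jacket"             # assumed pre-configured feature info

def detect(feature, picture):
    """Stand-in for feature detection; real systems match visual features."""
    return feature in picture

def select_primary(pictures):
    """pictures: camera id -> current monitoring picture (simplified as text)."""
    for camera_id, picture in pictures.items():
        if detect(target_feature, picture):
            return camera_id              # this camera becomes the primary camera
    return None                           # target not visible to any camera

pictures = {"camera1": "street red-jacket pedestrian", "camera2": "empty street"}
```

Running this selection whenever pictures update is what lets the system determine in real time which camera should be primary.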
- the target monitoring system further includes a controller, and the target monitoring system acquires the location information of the target to be tracked in the first monitoring screen, including:
- the controller acquires feature information of the target, and detects feature information of the target in the first monitoring screen;
- if the feature information is detected in the first monitoring screen, the controller calculates the location information of the target in the first monitoring screen and sends the location information of the target in the first monitoring screen to the first camera.
- A controller may be configured in the target monitoring system to detect the feature information of the target and thereby determine whether the feature information of the target appears in the first monitoring screen. If the feature information is detected in the first monitoring screen, the controller calculates the position information of the target in the first monitoring screen and sends it to the first camera; the first camera can thus acquire the position information of the target in the first monitoring screen from the controller, so that the first camera, as the primary camera, can continuously track the target.
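The position calculation performed by the controller might look like the sketch below. Representing the detection as a bounding box, choosing its bottom-center as the position, and the message format are all illustrative assumptions, not details from the patent.

```python
# Sketch of the controller's position calculation: the detected target is
# represented as a bounding box (x, y, w, h) in the first monitoring screen,
# and the position sent to the first camera is a reference point of the box.
# Box representation and reference-point choice are illustrative assumptions.

def target_position(box):
    """Return the target's position: the bounding box's bottom-center point."""
    x, y, w, h = box
    return (x + w / 2, y + h)             # horizontal center, bottom edge

def position_message(camera_id, box):
    """Message the controller might send to the first camera (assumed format)."""
    px, py = target_position(box)
    return {"camera": camera_id, "x": px, "y": py}
```

The bottom-center is a common choice for ground-moving targets because it approximates the point of contact with the ground, which maps most stably between overlapping views.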
- the target monitoring system determines, according to the position information of the target in the first monitoring screen, whether the position of the target in the first monitoring screen is in the overlapping area, where the overlapping area is the overlapping range of the viewing area of the first camera and the viewing area of the second camera.
- the fields of view of the first camera and the second camera may be configured at installation time so as to generate an overlapping area, the overlapping area being the overlapping range of the field of view of the first camera and the field of view of the second camera. The camera having an overlapping area with the first camera is not limited to the second camera; the second camera is only one achievable way.
- since the target to be tracked moves in real time, the target is movable in the first monitoring screen: the target may move into the overlapping area between the first camera and the second camera, or into the overlapping area between the first camera and a third camera, which is not limited herein.
- the target monitoring system may determine, according to the position information of the target in the first monitoring screen, whether the position of the target in the first monitoring screen is in the overlapping area; if the position of the target in the first monitoring screen is in the overlapping area, step 103 is triggered.
- the target monitoring system of step 102 determines whether the location of the target in the first monitoring screen is in the overlapping area according to the location information of the target in the first monitoring screen, including:
- the target monitoring system determines, according to the position information of the target in the first monitoring screen, the number of the grid in the first monitoring screen into which the target falls, where a plurality of grids are preset on the first monitoring screen, the plurality of grids have different grid numbers, and the plurality of grids correspond to different location areas in the first monitoring screen;
- the target monitoring system queries the grid correspondence table of the overlapping area according to the grid number of the grid in the first monitoring screen into which the target falls, where the grid correspondence table includes the correspondence, for the same physical location point, between a grid in the first monitoring screen and a grid in the second monitoring screen, and the second monitoring screen is photographed by the second camera;
- if the grid number is found in the grid correspondence table, the target monitoring system determines that the position of the target in the first monitoring screen is in the overlapping area.
- a plurality of grids may be preset on the first monitoring screen, and the plurality of grids have different grid numbers, and the plurality of grids correspond to different location areas in the first monitoring screen.
- the grid number of the target in the first monitoring screen can be obtained through the correspondence between the target position and the grid numbers; the target monitoring system then queries the grid correspondence table of the overlapping area according to the grid number of the grid in the first monitoring screen into which the target falls. The grid correspondence table includes the correspondence, for the same physical location point, between a grid in the first monitoring screen and a grid in the second monitoring screen, the second monitoring screen being photographed by the second camera. If the grid number of the grid in the first monitoring screen into which the target falls is found in the grid correspondence table, the target monitoring system determines that the position of the target in the first monitoring screen is within the overlapping area.
- a grid correspondence table may be configured for the overlapping area of the viewing areas of the first camera and the second camera, and by querying the grid correspondence table it can be determined whether the position of the target in the first monitoring screen is in the overlapping area.
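The membership check just described can be sketched in a few lines of Python; the function name, table name, and table contents below are illustrative assumptions, not part of the patent text.

```python
# Illustrative sketch of the overlap check: the grid correspondence table
# for the overlapping area maps grid numbers in the first monitoring
# screen to grid numbers in the second monitoring screen. The target is
# in the overlapping area exactly when its grid number appears as a key.
def position_in_overlap(grid_number, correspondence_table):
    """Return True if the grid the target fell into lies in the overlap."""
    return grid_number in correspondence_table

# Hypothetical table contents for an overlap covering two grids:
example_table = {5: 1, 6: 2}  # grid 5 (camera 1) maps to grid 1 (camera 2), etc.
```

If the target's grid number is absent from the table, the target is outside the overlapping area and no camera switch is triggered.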
- the target monitoring system switches the current primary surveillance camera to the second camera.
- the target monitoring system can switch the current main surveillance camera to the second camera, so that the second camera, serving as the main camera, can continue tracking the target. In the embodiment of the present invention, by configuring an overlapping area between the first camera and the second camera and detecting in real time whether the target appears in the overlapping area, the main camera can be switched in real time according to the moving position of the target, so that the target can be continuously tracked by the main camera.
- the method provided by the embodiment of the present invention further includes:
- the target monitoring system queries the grid correspondence table according to the acquired grid number of the grid in the first monitoring screen into which the target falls;
- if the grid correspondence table contains the grid number of a grid in the second monitoring screen into which the target falls, the target monitoring system determines that the target is found in the second monitoring screen;
- the target monitoring system acquires the position information of the target in the second monitoring screen according to the grid number of the grid in the second monitoring screen into which the target falls.
- in the implementation scenario in which grids are used to locate the target in steps C1 to C3, the position of the target in the second monitoring screen may also be determined by querying the grid correspondence table.
- the target monitoring system queries the grid correspondence table according to the acquired grid number of the grid in the first monitoring screen into which the target falls; if the grid correspondence table contains the grid number of a grid in the second monitoring screen into which the target falls, the target monitoring system determines that the target is found in the second monitoring screen, and acquires the position information of the target in the second monitoring screen according to that grid number.
- the target monitoring system further includes a controller and a monitoring screen.
- the target monitoring system switches the current main monitoring camera to the second camera.
- the controller switches the second monitoring screen onto the monitoring screen;
- the controller highlights the second monitoring screen on the monitoring screen;
- the controller displays the second monitoring screen and the first monitoring screen in series on the monitoring screen.
- a monitoring screen is configured, and the monitoring pictures of a plurality of cameras can be displayed on the monitoring screen.
- in one achievable manner, the controller switches the second monitoring screen onto the monitoring screen, so that the target in the second monitoring screen can be displayed through the monitoring screen.
- the controller may further highlight the second monitoring screen on the monitoring screen.
- the controller may further display the second monitoring screen and the first monitoring screen in series on the monitoring screen, so that the process of the target moving from the first monitoring screen to the second monitoring screen can be visually displayed through the monitoring screen, thereby realizing source tracking and destination tracking of the target.
- the target monitoring system further includes a controller and a memory. After the target monitoring system switches the current main monitoring camera to the second camera, the method provided by the embodiment of the present invention further includes:
- the controller stores, into the memory, the second monitoring picture obtained by the second camera photographing the target.
- the target monitoring system can also be configured with a memory.
- after the second camera is switched to the main camera, in order to record the target tracking, the second monitoring picture captured by the second camera as the main camera can be stored in the memory. Therefore, the video picture in which the target appears in the second monitoring screen can be directly retrieved from the memory, so that the video picture of the target can be obtained quickly and intuitively without manually searching all the monitoring pictures that have not captured the target.
- the foregoing embodiments of the present invention show that there is an overlapping area in the fields of view of the first camera and the second camera. When the main camera is switched from the first camera to the second camera, the position of the target can be shared between the first camera and the second camera: the second camera can determine the position of the target in its own viewing area by using the adjacent first camera, and can quickly track the target based on that position. Each camera does not need to perform detection itself to determine the position of the target, thus improving tracking efficiency.
- the cameras are arranged such that the fields of view of adjacent cameras have an overlapping area. By combining the current observation angle, height, position, focal length and other parameters of each camera, the observation angle (i.e., the field of view) of each camera is obtained, and a virtual grid is created in the view frame of each camera's observation area.
- the size and shape of the grid may be the same or different. The smaller and denser the grid, the higher the observation accuracy.
- the size and shape of the grids are not specifically limited, as long as the grids cover the view of the desired observation area; then a two-dimensional coordinate system is established, and the coordinates corresponding to each virtual grid (i.e., the grid coordinates) are recorded to form a grid coordinate list for the view frame of each camera. The grid coordinate list reflects the correspondence between the grids and the coordinates in the view frame: the grid coordinate list of each camera includes grid numbers, grid coordinates, and the correspondence between the two, where the grid coordinates refer to the coordinates included in the grid.
- the grid coordinate list of the camera can be as shown in Table 1.
- in the grid coordinates, X may represent the starting abscissa of the grid, Y may represent the starting ordinate of the grid, W may represent the width of the grid, and H may represent the height of the grid.
- the grid coordinates can also be expressed in other forms, for example, the grid coordinates are represented as a two-dimensional coordinate set included in each grid, and the specific representation of the grid coordinates is not limited herein.
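To illustrate the (X, Y, W, H) representation and the matching of a center coordinate against the grid coordinate list, here is a minimal Python sketch. All grid values are hypothetical assumptions, chosen only to be consistent with the examples used elsewhere in this document (P(23, 42) falling into grid 2, grid 4 being (18, 50, 4, 4)).

```python
# Sketch of a grid coordinate list keyed by grid number; each entry is
# (X, Y, W, H): starting abscissa, starting ordinate, width, height.
# All values here are hypothetical.
GRID_COORD_LIST = {
    1: (16, 40, 4, 4),
    2: (20, 40, 4, 4),   # this rectangle contains the point P(23, 42)
    3: (16, 46, 4, 4),
    4: (18, 50, 4, 4),
}

def grid_number_of(x0, y0, grid_coord_list):
    """Return the number of the grid whose rectangle contains (x0, y0),
    or None if the point lies outside every grid."""
    for number, (x, y, w, h) in grid_coord_list.items():
        if x <= x0 < x + w and y <= y0 < y + h:
            return number
    return None
```

Comparing the center coordinate with each (X, Y, W, H) rectangle in turn is exactly the "which grid does P(x0, y0) fall into" check the text describes.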
- a grid correspondence table is set for the overlapping region of the views of adjacent cameras, and the grid correspondence table represents the correspondence relationship between the grids of the view frames belonging to different cameras in the overlapping region.
- a plurality of physical positioning anchor points may be set in the overlapping area; the same physical positioning anchor point is simultaneously observed by the adjacent cameras, and a correspondence table is established between the grid numbers of the grids corresponding to the same physical positioning anchor point on the view frames of the different cameras. If there are multiple cameras adjacent to one camera, there is an overlapping area between the field of view of the camera and the field of view of each adjacent camera, and a grid correspondence table is established for each overlapping area.
- the created grid correspondence table can be as shown in Table 2:
- the correspondence relationship shown in Table 2 is specifically: grid 1 of the first camera corresponds to grid 2 of the second camera, grid 2 of the first camera corresponds to grid 4 of the second camera, grid 3 of the first camera corresponds to grid 3 of the second camera, and grid 4 of the first camera corresponds to grid 1 of the second camera.
- the created grid coordinate list and the grid correspondence table may be stored in the corresponding camera, or may be stored in the controller, and the controller may be a device such as a terminal or a server.
- the target monitoring method provided by the embodiment of the present invention is applied to a target monitoring system, where the target monitoring system includes a first camera and a second camera.
- another method provided in this embodiment includes:
- the second camera receives, through the controller, a first grid number sent by the first camera, where the first grid number is obtained by the first camera by searching, according to the first center coordinate of the target in the view frame of the first camera, the grid coordinate list preset for the view frame of the first camera, to find the first grid coordinate corresponding to the first center coordinate and the first grid number corresponding to the first grid coordinate.
- the first camera stores a grid coordinate list preset for the view frame of the first camera, and a grid correspondence table preset for the overlapping area of the fields of view of the first camera and the adjacent second camera.
- the second camera stores a grid coordinate list preset for the view frame of the second camera, and a grid correspondence table preset for the overlapping area of the fields of view of the second camera and the adjacent first camera.
- the grid correspondence tables stored in different cameras for the same overlapping area are the same. If there are multiple cameras adjacent to a certain camera, multiple grid correspondence tables are stored in that camera, each grid correspondence table corresponding to one overlapping area.
- the controller may first determine in which camera's field of view the target currently is; for example, upon determining that the target is currently in the field of view of the first camera, the controller may send a target tracking request to the first camera. The target tracking request may include feature information of the target, such as the color, grayscale, and the like of the target; the feature information of the target may be expressed by an image, text, or the like, or may be identified by an image recognition technology.
- the first camera receives the target tracking request sent by the controller. The first camera may detect the position of the target in its own view frame according to the feature information of the target and track the target according to that position; alternatively, the position of the target may be manually selected by the user in the view frame of the first camera, and the first camera tracks the target according to the position selected by the user.
- the first center coordinate of the target in the view frame of the first camera may be calculated, where the first center coordinate may refer to the coordinate of the center point of the target in the view frame of the first camera, for example P(x0, y0). After obtaining the first center coordinate, the first camera searches the grid coordinate list preset for the view frame of the first camera to find the first grid coordinate corresponding to the first center coordinate and the first grid number corresponding to the first grid coordinate, and the first camera sends the first grid number to the second camera through the controller.
- each grid coordinate includes the width and height of the grid; the center coordinate P(x0, y0) can be compared with each grid coordinate, and if P(x0, y0) falls within the coordinate range of a grid, it can be determined that P(x0, y0) corresponds to that grid coordinate. After the grid number corresponding to the grid coordinate is determined, it can be determined that the target has entered that grid.
- with the grid coordinate list stored in the first camera as shown in Table 1, if the first center coordinate is P(23, 42), it can be determined according to Table 1 that the target has currently entered the grid numbered 2 in the view frame of the first camera.
- the second camera searches the grid correspondence table preset for the overlapping area for a second grid number corresponding to the first grid number; if it exists, step 203 is executed; otherwise, the process returns to step 201.
- if the second camera, using the grid correspondence table preset for the overlapping area, finds that there is no second grid number corresponding to the first grid number, this indicates that the target has not entered the overlapping area of the fields of view of the first camera and the second camera, and the second camera returns to step 201 to continue waiting to receive the next grid number sent by the first camera.
- if the second camera, using the grid correspondence table preset for the overlapping area, finds that there is a second grid number corresponding to the first grid number, this indicates that the target has entered the overlapping area of the fields of view of the first camera and the second camera. With the grid correspondence table preset in the second camera as shown in Table 2, the second camera can determine, according to Table 2, that the target has entered grid 4 of its own view frame.
- the second camera searches for the second grid coordinate corresponding to the second grid number by using a grid coordinate list preset for the view frame of the second camera.
- the second camera can then cooperatively track the target at the grid coordinates (18, 50, 4, 4) corresponding to grid 4.
- the second camera cooperatively tracks the target by using the second grid coordinate.
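Putting Tables 1 and 2 together, the lookup performed in the steps above might look like the following sketch. The helper name is invented; the correspondence values are those of Table 2, and only grid 4's coordinates (18, 50, 4, 4) are quoted in the text, so the other entries are omitted.

```python
# Table 2: grid number in the first camera -> grid number in the second.
MAP_CAM1_TO_CAM2 = {1: 2, 2: 4, 3: 3, 4: 1}
# Grid coordinate list of the second camera; only grid 4's entry is
# given in the text as (18, 50, 4, 4).
CAM2_GRID_COORDS = {4: (18, 50, 4, 4)}

def handoff_coordinates(first_grid_number):
    """Map the first grid number through the grid correspondence table,
    then look up the second grid coordinate in the second camera's grid
    coordinate list. Returns None when the target is not in the overlap."""
    second_grid_number = MAP_CAM1_TO_CAM2.get(first_grid_number)
    if second_grid_number is None:
        return None  # no match: keep waiting for the next grid number
    return CAM2_GRID_COORDS.get(second_grid_number)
```

With the text's running example, grid number 2 from the first camera maps to grid 4 of the second camera, whose coordinates the second camera then uses to pick up the target.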
- the controller may further receive the feature information of the target sent by the first camera, and after the second camera finds the second grid coordinate, it may further confirm, based on the feature information of the target, whether the object at the second grid coordinate is the target to be tracked.
- when the target disappears from the view frame of the first camera, the first camera may send a target disappearance notification to the second camera through the controller. The second camera then calculates the second center coordinate of the target in the view frame of the second camera, searches the grid coordinate list preset for the view frame of the second camera to find the third grid coordinate corresponding to the second center coordinate and the third grid number corresponding to the third grid coordinate, and sends the third grid number to the adjacent cameras through the controller.
- the above process can be understood as follows: initially, when it is determined that the target is located in the view frame of the first camera, the first camera can be regarded as the main camera, and the other cameras adjacent to the first camera are slave cameras. The first camera sends the position coordinates of the target (i.e., the grid number) to the other adjacent cameras, and the other adjacent cameras can achieve coordinated tracking according to the position coordinates of the target.
- the target monitoring method provided by the embodiment of the present invention is described below from the controller side. Referring to FIG. 3, the method in this embodiment includes:
- the controller receives a first grid number sent by the first camera, where the first grid number is obtained by the first camera by searching, according to the first center coordinate of the target in the view frame of the first camera, the grid coordinate list preset for the view frame of the first camera, to find the first grid coordinate corresponding to the first center coordinate and the first grid number corresponding to the first grid coordinate.
- the controller may first determine in which camera's field of view the target currently is; for example, upon determining that the target is currently in the field of view of the first camera, the controller may send a target tracking request to the first camera. The target tracking request may include the feature information of the target, such as the color, grayscale, and the like of the target, and the feature information of the target may be represented by an image, text, or the like. The target tracking request can be initiated directly by the controller, or received by the controller from another device.
- the first camera may detect the position of the target in its own view frame according to the feature information of the target, and track the target according to that position; alternatively, the user can manually select the position of the target in the view frame of the first camera, and the first camera tracks the target according to the position selected by the user.
- after the first camera tracks the target, it searches, using the first center coordinate of the target in the view frame of the first camera, the grid coordinate list preset for the view frame of the first camera to obtain the first grid coordinate corresponding to the first center coordinate and the first grid number corresponding to the first grid coordinate, and sends the first grid number to the controller; the controller receives the first grid number sent by the first camera.
- the controller sends the first grid number to the second camera, so that the second camera cooperatively tracks the target according to the first grid number: the second camera uses the grid correspondence table preset for the overlapping area to find whether there is a second grid number corresponding to the first grid number; if it exists, the second camera searches the grid coordinate list preset for the view frame of the second camera for the second grid coordinate corresponding to the second grid number, and cooperatively tracks the target by using the second grid coordinate.
- when the target disappears from the view frame of the first camera, the first camera sends a target disappearance notification to the controller; the controller receives the target disappearance notification sent by the first camera and sends it to the second camera. After receiving the target disappearance notification, the second camera calculates the second center coordinate of the target in the view frame of the second camera, searches the grid coordinate list preset for the view frame of the second camera for the third grid coordinate corresponding to the second center coordinate and the third grid number corresponding to the third grid coordinate, and sends the third grid number to the controller; the controller sends the third grid number to the first camera, and the first camera cooperatively tracks the target according to the third grid number.
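The controller's relay role described above might be sketched as follows; the class and method names are invented for illustration and are not part of the patent.

```python
# Sketch of the controller relaying grid numbers and the target
# disappearance notification between a camera and its neighbours.
class RelayController:
    def __init__(self, cameras):
        # cameras: dict mapping a camera name to a camera object that
        # exposes on_grid_number() and on_target_disappeared().
        self.cameras = cameras

    def relay_grid_number(self, sender, grid_number):
        # Forward the main camera's grid number to every other camera.
        for name, camera in self.cameras.items():
            if name != sender:
                camera.on_grid_number(grid_number)

    def relay_target_disappeared(self, sender):
        # Forward the disappearance notification so a neighbour that
        # still sees the target can take over as the new main camera.
        for name, camera in self.cameras.items():
            if name != sender:
                camera.on_target_disappeared()
```

This matches the text's description of the controller as a pure message relay: it forwards grid numbers and notifications but does no grid computation itself in this embodiment.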
- the controller of the embodiment mainly plays the role of information transmission.
- the cameras are arranged in such a manner that the viewing areas of adjacent cameras have an overlapping area; a grid coordinate list is preset for the viewing area screen of each camera, a grid correspondence table is preset for each overlapping area of the viewing areas of each camera and the other cameras, and the grid coordinate list and the grid correspondence table are stored in the corresponding cameras.
- when the target enters the view frame of the adjacent second camera, the second camera can determine the position of the target by using the target position (i.e., the grid number) sent by the adjacent first camera together with the preset grid correspondence table and grid coordinate list, without needing to perform detection itself to determine the position of the target, thus improving the efficiency of collaborative tracking.
- Another method provided in this embodiment includes:
- the controller receives a view frame containing the target that is sent by the first camera.
- the controller stores a grid coordinate list preset for the view frame of the first camera, a grid coordinate list preset for the view frame of the second camera, and a grid correspondence table preset for the overlapping area of the fields of view of the first camera and the adjacent second camera.
- the controller may first determine in which camera's field of view the target currently is; for example, upon determining that the target is currently in the field of view of the first camera, the controller may send a target tracking request to the first camera. The target tracking request may include feature information of the target, such as the color, grayscale, and the like of the target, and the feature information of the target may be expressed by an image, text, or the like.
- the first camera receives the target tracking request sent by the controller. The first camera may detect the position of the target in its own view frame according to the feature information of the target and track the target according to that position; alternatively, the position of the target may be manually selected by the user in the view frame of the first camera, and the first camera tracks the target according to the position selected by the user. After the first camera tracks the target, the view frame containing the target is sent to the controller.
- the controller calculates the center coordinate of the target in the view frame of the first camera, and determines, by using the grid coordinate list preset for the view frame of the first camera, the first grid coordinate corresponding to the center coordinate and the first grid number corresponding to the first grid coordinate.
- the center coordinate may be a two-dimensional coordinate, and the center coordinate may refer to a coordinate of the center point of the target in the view frame of the first camera, for example, the center coordinate is P(x0, y0).
- the grid coordinate list includes grid coordinates, and each grid coordinate includes the width and height of the grid. The center coordinate P(x0, y0) can be compared with each grid coordinate; if P(x0, y0) falls within the coordinate range of a grid, it can be determined that P(x0, y0) corresponds to that grid coordinate, and after the grid number corresponding to the grid coordinate is determined, it can be determined that the target has entered that grid.
- with the grid coordinate list preset for the view frame of the first camera stored in the controller as shown in Table 1, if the first center coordinate is P(23, 42), it can be determined according to Table 1 that the target has currently entered the grid numbered 2 in the view frame of the first camera.
- the controller uses the grid correspondence table preset for the overlapping area to find whether there is a second grid number corresponding to the first grid number; if yes, the controller searches the grid coordinate list preset for the view frame of the second camera for the second grid coordinate corresponding to the second grid number.
- if the controller, using the grid correspondence table preset for the overlapping area, finds that there is no second grid number corresponding to the first grid number, this indicates that the target has not entered the overlapping area of the fields of view of the first camera and the second camera, and the controller returns to step 401 to continue receiving view frames containing the target sent by the first camera.
- if the controller, using the grid correspondence table preset for the overlapping area, finds that there is a second grid number corresponding to the first grid number, this indicates that the target has entered the overlapping area of the fields of view of the first camera and the second camera; the controller can determine, according to Table 2, that the target has entered grid 4 of the view frame of the second camera.
- the controller sends the second grid coordinate to the second camera, so that the second camera cooperatively tracks the target by using the second grid coordinate.
- the controller may send the grid coordinates (18, 50, 4, 4) corresponding to grid 4 to the second camera, so that the second camera cooperatively tracks the target at the grid coordinates (18, 50, 4, 4).
- the target monitoring method provided by the embodiment of the present invention is described below from the camera side. Referring to FIG. 5, the method in this embodiment includes:
- the second camera receives the second grid coordinate sent by the controller, where the second grid coordinate is obtained by the controller according to the view frame containing the target sent by the first camera: the controller calculates the center coordinate of the target in the view frame of the first camera, determines the first grid coordinate corresponding to the center coordinate and the first grid number corresponding to the first grid coordinate by using the grid coordinate list preset for the view frame of the first camera, finds the second grid number corresponding to the first grid number by using the grid correspondence table preset for the overlapping area, and obtains the second grid coordinate corresponding to the second grid number by searching the grid coordinate list preset for the view frame of the second camera.
- the second camera cooperatively tracks the target by using the second grid coordinate.
- the second camera also transmits the tracked view frame containing the target to the controller.
- the cameras are arranged in such a manner that the viewing areas of adjacent cameras have an overlapping area; a grid coordinate list is preset for the viewing area screen of each camera, a grid correspondence table is preset for each overlapping area of the viewing areas of each camera and the other cameras, and the grid coordinate list and the grid correspondence table are stored in the controller. When the first camera tracks the target, the view frame containing the target is transmitted to the controller; the controller calculates the position of the target in the view frame of the first camera and, upon determining that the target enters the overlapping area of the two cameras, transmits the position coordinates of the target in the view frame of the second camera to the second camera. The second camera can determine the position of the target according to the position coordinates sent by the controller, without needing to perform detection itself to determine the position of the target, thereby improving the efficiency of cooperative tracking.
- the controller may be a terminal or a server.
- the controller may receive the target tracking request initiated by the terminal, send the target tracking request to the corresponding camera, and cooperate with the camera to track the target.
- for example, the terminal stores a photo of a suspect to be tracked. If the terminal acts as the controller, the terminal can directly initiate a target tracking request to the camera and cooperate with the camera to track the target; if the server acts as the controller, the terminal can send the photo of the suspect to the server, and the server initiates a target tracking request to the camera and cooperates with the camera to track the target.
- the invention can realize positioning and tracking of a target based on visual cooperation: after the tracking target and its position information are determined by the main camera, information such as the position of the tracking target is shared with the other surrounding cameras, and the other cameras can quickly capture and track the target according to the position information. Since the main camera shares the target position and other auxiliary information, it can assist the other cameras in quickly capturing the tracking target, thereby improving the efficiency of detecting the target and the ability to continuously track it. This enables multiple cameras to efficiently capture and track specific targets, realizes automatic continuous capture and tracking of targets, and takes advantage of multi-camera collaborative tracking.
- the solution provided by the embodiment of the present invention includes the following steps:
- gridding of the picture space: the fields of view (FOV) of adjacent cameras have an overlapping area. Taking camera A's current observation angle, height, and focal length into account, a virtual grid is laid over the picture space that camera A observes at its current focal length. The size and shape of the grid cells are not limited; they only need to cover the required observation area. The virtual coordinates of each cell are recorded to form the camera's grid coordinate list GridA.
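The gridding step above can be sketched in Python. This is an illustrative sketch rather than part of the patent; the frame and cell dimensions are arbitrary assumptions, and the `(number, x, y, w, h)` cell layout mirrors the X/Y/W/H grid-coordinate form described later for Table 1.

```python
# Sketch: lay a virtual grid over a camera's view picture and record each
# cell's coordinates, forming the grid coordinate list GridA.
def build_grid_list(frame_w, frame_h, cell_w, cell_h):
    """Return a list of (number, x, y, w, h) cells covering the frame."""
    grid_list = []
    number = 1
    for y in range(0, frame_h, cell_h):
        for x in range(0, frame_w, cell_w):
            # Clip cells on the right/bottom edges so the whole frame is covered.
            w = min(cell_w, frame_w - x)
            h = min(cell_h, frame_h - y)
            grid_list.append((number, x, y, w, h))
            number += 1
    return grid_list

grid_a = build_grid_list(1920, 1080, 480, 360)
print(len(grid_a))  # 4 columns x 3 rows = 12 cells
```

Smaller cells give a denser grid and higher observation accuracy, at the cost of a longer list to store and match against.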
- constructing the grid correspondence of the overlapping region: it is determined by actual measurement that, at a given focal length and depth of field, two adjacent cameras (A, B) can establish a correspondence between the two grid cells that cover the same physical position point in their respective pictures; these correspondences are recorded in the grid correspondence table Map(A, B).
- a certain camera A detects a target to be tracked according to object features, or the target is selected manually; camera A then becomes the current main camera.
- the main camera continuously tracks the target and continuously matches the center coordinate of the target in the current picture against the grid coordinate list GridA. Once the target's center coordinate enters a certain cell, the current grid number k of the target is obtained, and the main camera sends the grid number k together with additional feature information of the target (other information about the target provided by the main camera, such as color and grayscale) to the adjacent cameras B.
- the adjacent camera B receives the target data and queries the grid correspondence table Map(A, B) with the received grid number. If a grid number corresponding to its own picture can be found, camera B quickly captures the target according to that grid number and tracks it continuously: camera B checks whether feature information such as color observed in the grid area s matches the received additional feature information. A match indicates that the tracking target has entered camera B's grid area s, so the camera immediately captures the target and starts tracking it; once the target disappears from the main camera's visible area, the adjacent camera becomes the current main camera.
- Step 1: Perform gridding on the picture space.
- the fields of view (FOV) of adjacent cameras should have an overlapping area. Taking camera A's current observation angle, height, and range into account, a virtual grid is laid over the picture space observed at camera A's current focal length. The size and shape of the cells are not limited, as long as they cover the required observation area.
- the smaller and denser the grid, the higher the observation accuracy.
- cells farther from the camera can be made smaller. The virtual coordinates of each cell are recorded to form the camera's grid coordinate list GridA; the grid coordinate list can be saved in the camera's own storage device or on the central controller (such as a video server).
- Step 2 Construct a grid correspondence relationship of overlapping regions.
- two adjacent cameras can establish a correspondence between the two grid cells that cover the same physical position point in their respective pictures and record it in the grid correspondence table Map(A, B). In particular, some anchor nodes can be placed on the ground; both cameras observe these anchor nodes through their pictures, and a correspondence between a cell in camera A and a cell in camera B is established according to the same anchor node observed in each.
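The anchor-node construction of Map(A, B) can be sketched as follows. This is an illustrative sketch, not part of the patent; the `anchor_obs` data layout and the specific anchor observations are assumptions (the four pairs happen to match the correspondences later described for Table 2).

```python
# Sketch: build the grid correspondence table Map(A, B) from anchor-node
# observations. Each anchor placed in the overlap area is observed by both
# cameras; anchor_obs maps anchor id -> (grid number in A, grid number in B).
def build_map_ab(anchor_obs):
    """Derive Map(A, B): grid number in camera A -> grid number in camera B."""
    map_ab = {}
    for anchor_id, (grid_in_a, grid_in_b) in anchor_obs.items():
        map_ab[grid_in_a] = grid_in_b
    return map_ab

# Hypothetical observations of four ground anchors in the overlap area.
observations = {"anchor1": (1, 2), "anchor2": (2, 4),
                "anchor3": (3, 3), "anchor4": (4, 1)}
map_ab = build_map_ab(observations)
print(map_ab[1])  # 2
```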
- Step 3: Query and share the target's position coordinates.
- when the center coordinate P(x0, y0) of the target falls into the coverage area of a certain cell k(Xk, Yk), it can be determined that the target has entered that cell, so the current grid number of the target is k. The main camera A then transmits the grid number k and additional feature information of the target (other information about the target provided by the main camera may include color, grayscale, etc.) to the adjacent cameras B.
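A minimal sketch of this matching-and-sharing step, assuming a `(number, x, y, w, h)` layout for grid cells; the cell values, the message format, and the feature fields are hypothetical.

```python
# Sketch: match the target's center coordinate P(x0, y0) against the grid
# coordinate list to obtain the current grid number k, then package it with
# additional feature information for the neighbouring camera.
def locate_grid(grid_list, x0, y0):
    """Return the number of the cell containing point (x0, y0), or None."""
    for number, x, y, w, h in grid_list:
        if x <= x0 < x + w and y <= y0 < y + h:
            return number
    return None

grid_a = [(1, 0, 0, 100, 100), (2, 100, 0, 100, 100)]  # hypothetical GridA
k = locate_grid(grid_a, 130, 40)
message = {"grid": k, "features": {"color": "red", "grayscale": 128}}
print(message["grid"])  # 2
```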
- Step 4. Calculate the relative position and angle.
- the adjacent camera B receives the target data and queries the grid correspondence table Map(A, B) with the received grid number. If a grid number s corresponding to its own picture can be found, step 5 is performed; otherwise, step 4 is repeated when the next target data is received.
- Step 5 Quickly capture the target based on the grid number and keep track of it.
- the adjacent camera B obtains the coordinates of the target's current cell s by looking up GridB with the grid number s obtained in step 4, and then checks the additional feature information in grid area s. If it matches the received additional feature information of the target, the tracking target has entered the camera's grid area s, and the camera can immediately capture and track the target. At the same time, once the target disappears from the main camera's visible area, the adjacent camera B automatically becomes the current main camera with the help of the central controller, and the process proceeds to step 3.
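Step 5 from camera B's point of view might be sketched as below. The message format and the exact feature-matching rule are assumptions, since the text only says that the received feature information is compared with what is observed in grid area s; the Map(A, B) values reuse the correspondences described for Table 2.

```python
# Sketch: camera B queries Map(A, B) with the received grid number, verifies
# the additional feature information in that cell, and captures the target.
def on_target_data(map_ab, local_features_by_grid, received):
    """Return the local grid s to capture in, or None if not in the overlap."""
    s = map_ab.get(received["grid"])
    if s is None:
        return None  # target has not entered the overlap area yet
    # Compare received features (e.g. colour) with what camera B observes in s.
    if local_features_by_grid.get(s) == received["features"]:
        return s  # capture and start tracking in grid s
    return None

map_ab = {1: 2, 2: 4, 3: 3, 4: 1}
observed = {4: {"color": "red"}}  # hypothetical observations per local grid
print(on_target_data(map_ab, observed, {"grid": 2, "features": {"color": "red"}}))  # 4
```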
- through the acquired position and feature information of the tracking target, the other cameras can quickly capture the target and track it continuously, shortening detection time, improving tracking efficiency, and facilitating automatic and efficient cooperation among multiple cameras.
- the invention can improve the efficiency with which multiple cameras identify and track a target, and has broad application prospects in security monitoring, traffic monitoring, and public security monitoring, especially for real-time automatic tracking of targets.
- other information captured by the cameras, such as information about stores near the tracked object, can also provide a good basis for location-based services. The solution not only saves the overhead of installing short-distance communication devices in different locations, but also improves target monitoring efficiency.
- the foregoing embodiment provides a detailed description of the target monitoring method provided by the present invention.
- the camera, the controller, and the target monitoring system provided by the embodiments of the present invention are described in detail.
- the composition of the camera can be as shown in FIG. 6-a, FIG. 6-b, and FIG. 6-c.
- the structure of the target monitoring system is shown in FIG. 7-a and FIG. 7-b.
- the structure of the controller can be as shown in FIG. 8-a, FIG. 8-b, FIG. 8-c, and FIG. 8-d; the structure of another target monitoring system is shown in FIG. 9-a and FIG. 9-b.
- the camera 600 shown in FIG. 6-a is specifically a first camera in the target monitoring system, and the target monitoring system includes the first camera and the second camera, and the first camera 600 includes:
- the first location acquiring module 601 is configured to acquire, when the first camera is the current main surveillance camera, location information of the target to be tracked in the first monitoring screen, where the first monitoring screen is captured by the first camera;
- the overlapping area determining module 602 is configured to determine, according to the location information of the target in the first monitoring screen, whether the location of the target in the first monitoring screen is in an overlapping area, where the overlapping area is the range of overlap between the field of view of the first camera and the field of view of the second camera;
- the switching module 603 is configured to switch the current main surveillance camera to the second camera if the location of the target in the first monitoring screen is within the overlapping area.
- the overlapping area determining module 602 includes:
- a grid determining module 6021 configured to determine, according to the location information of the target in the first monitoring screen, the grid number of the grid in the first monitoring screen into which the target falls, where the first monitoring screen is preset with a plurality of grids, the plurality of grids have different grid numbers, and the plurality of grids correspond to different location areas in the first monitoring screen;
- the first grid query module 6022 is configured to query the grid correspondence table of the overlapping area according to the grid number of the target in the first monitoring screen, where the grid correspondence table includes the correspondence between the grid in the first monitoring screen and the grid in the second monitoring screen for the same physical location point, the second monitoring screen being captured by the second camera;
- an overlap region determining module 6023 configured to determine, if the grid number of the target in the first monitoring screen is found in the grid correspondence table, that the location of the target in the first monitoring screen is within the overlapping area.
- the first camera 600 further includes: a feature detecting module 604, where
- the feature detecting module 604 is configured to acquire the feature information of the target before the location information of the target to be tracked in the first monitoring screen is acquired, and to detect, according to the feature information, whether the target appears in the first monitoring screen; if the target appears in the first monitoring screen, execution of the location acquiring module is triggered.
- the target monitoring system 700 includes the first camera 701 and the second camera 702 as described in any of the foregoing FIGS. 6-a, 6-b, and 6-c.
- the second camera 702 includes:
- the second grid query module 7021 is configured to: after the first camera switches the current main surveillance camera to the second camera, when the second camera is switched to be the main surveillance camera, query the grid correspondence table according to the acquired grid number of the target in the first monitoring screen;
- a target locking module 7022 configured to determine, if the grid number of the target in the second monitoring screen is found in the grid correspondence table, that the target has been found in the second monitoring screen;
- the second location acquiring module 7023 is configured to acquire location information of the target in the second monitoring screen according to the mesh number of the target falling in the second monitoring screen.
- an embodiment of the present invention provides a controller 800.
- the controller 800 is deployed in a target monitoring system, where the target monitoring system includes: the controller 800, a first camera, and a The second camera, the controller 800 includes:
- the location obtaining module 801 is configured to: when the first camera is used as the current main surveillance camera, acquire location information of the target to be tracked in the first monitoring screen, where the first monitoring screen is captured by the first camera;
- the overlapping area determining module 802 is configured to determine, according to the location information of the target in the first monitoring screen, whether the location of the target in the first monitoring screen is in an overlapping area, where the overlapping area is the range of overlap between the field of view of the first camera and the field of view of the second camera;
- the switching module 803 is configured to switch the current main surveillance camera to the second camera if the location of the target in the first monitoring screen is in the overlapping area.
- the overlapping area determining module 802 includes:
- a grid determining module 8021 configured to determine, according to the location information of the target in the first monitoring screen, the grid number of the grid in the first monitoring screen into which the target falls, where the first monitoring screen is preset with a plurality of grids having different grid numbers, and the plurality of grids correspond to different location areas in the first monitoring screen;
- the grid query module 8022 is configured to query the grid correspondence table of the overlapping area according to the grid number of the target in the first monitoring screen, where the grid correspondence table includes the correspondence between the grid in the first monitoring screen and the grid in the second monitoring screen for the same physical location point, the second monitoring screen being captured by the second camera;
- the overlapping area determining module 8023 is configured to determine, if the grid number of the target in the first monitoring screen is found in the grid correspondence table, that the location of the target in the first monitoring screen is within the overlapping area.
- the controller 800 further includes:
- a target locking module 804 configured to: after the first camera switches the current main surveillance camera to the second camera, when the second camera is switched to be the main surveillance camera, query the grid correspondence table according to the acquired grid number of the target in the first monitoring screen; and if the grid number of the target in the second monitoring screen is found in the grid correspondence table, determine that the target has been found in the second monitoring screen;
- the location obtaining module 801 is further configured to acquire location information of the target in the second monitoring screen according to the mesh number of the target falling in the second monitoring screen.
- the location obtaining module 801 is specifically configured to: acquire the feature information of the target before the location information of the target to be tracked in the first monitoring screen is acquired, and detect the feature information of the target in the first monitoring screen; if the feature information is detected in the first monitoring screen, calculate the location information of the target in the first monitoring screen and send the location information of the target in the first monitoring screen to the first camera.
- the target monitoring system further includes a monitoring screen;
- the switching module 803 is specifically configured to switch the second monitoring screen onto the monitoring screen; or highlight the second monitoring screen on the monitoring screen; or display the second monitoring screen concatenated with the first monitoring screen on the monitoring screen.
- the target monitoring system further includes a memory, and the controller 800 further includes a storage module 805, where the storage module 805 is configured to store, after the switching module 803 switches the current main surveillance camera to the second camera, the second monitoring screen captured of the target by the second camera into the memory.
- the target monitoring system 900 includes: the controller 901, as described in any one of the foregoing FIG. 8-a, FIG. 8-b, FIG. 8-c, and FIG. 8-d A camera 902 and a second camera 903.
- the target monitoring system 900 further includes: a monitoring screen 904, and/or a memory 905.
- in FIG. 9-b, the target monitoring system 900 is schematically illustrated as including both a monitoring screen 904 and a memory 905.
- the first camera 1000 includes: a memory 1001, a processor 1002, and a transceiver 1003.
- the controller 1100 includes a memory 1101, a processor 1102, and a transceiver 1103.
- the memory may include a random access memory (RAM), a non-volatile memory, a disk storage, and the like.
- the transceiver implements the functions of the receiving unit and the transmitting unit, and the transceiver can be composed of an antenna, a circuit, and the like.
- the processor may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present invention.
- the processor, the memory, and the transceiver can be connected by a bus; the processor is used to control reading and writing of the memory and the transmission and reception of the transceiver.
- the disclosed system, apparatus, and method may be implemented in other manners.
- the device embodiments described above are merely illustrative.
- the division of the units is only a logical function division; in actual implementation there may be another division manner. For example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed.
- in addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, devices, or units, and may be in electrical, mechanical, or other form.
- the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
- each functional unit in each embodiment of the present invention may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
- the above integrated unit can be implemented in the form of hardware or in the form of a software functional unit.
- the integrated unit if implemented in the form of a software functional unit and sold or used as a standalone product, may be stored in a computer readable storage medium.
- the technical solution of the present invention, in essence or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium,
- including a number of instructions that cause a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the various embodiments of the present invention.
- the foregoing storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or the like.
Abstract
The present invention discloses a target monitoring method, a camera, a controller, and a target monitoring system. The target monitoring system includes a first camera and a second camera whose fields of view have an overlapping area. The target monitoring method includes: when the first camera acts as the current main surveillance camera, the target monitoring system acquires position information of a target to be tracked in a first monitoring picture, the first monitoring picture being captured by the first camera; the target monitoring system judges, according to the position information of the target in the first monitoring picture, whether the position of the target in the first monitoring picture is within an overlapping area, the overlapping area being the range of overlap between the field of view of the first camera and the field of view of the second camera; and if the position of the target in the first monitoring picture is within the overlapping area, the target monitoring system switches the current main surveillance camera to the second camera.
Description
The embodiments of the present invention relate to the field of surveillance technology, and in particular to a target monitoring method, a camera, a controller, and a target monitoring system.
In current wide-area video surveillance systems, multiple cameras are deployed to observe a scene from multiple angles over a large area and to track moving targets. In such a surveillance system, the whole monitored scene is usually divided into several independent regions; each camera independently monitors one region, locates a target when it appears, tracks and detects it, and then sends the monitoring records to a server. The server analyzes the records and then coordinates the cameras to monitor the target cooperatively.
In the above surveillance system, when a moving target is tracked continuously, each camera independently monitors its own region and the server schedules the cameras centrally; every camera has to use the target's feature information to perform its own detection within its monitoring region to determine the target's position, which leads to a long detection time per camera and low tracking efficiency.
Summary of the Invention
In view of this, the embodiments of the present invention provide a target monitoring method, a camera, a controller, and a target monitoring system, which can realize sharing of a target's position and improve the efficiency of tracking the target.
According to a first aspect, an embodiment of the present invention provides a target monitoring method. The target monitoring method is applied to a target monitoring system, the target monitoring system includes a first camera and a second camera, and the target monitoring method includes:
when the first camera acts as the current main surveillance camera, acquiring, by the target monitoring system, position information of a target to be tracked in a first monitoring picture, the first monitoring picture being captured by the first camera;
judging, by the target monitoring system according to the position information of the target in the first monitoring picture, whether the position of the target in the first monitoring picture is within an overlapping area, the overlapping area being the range of overlap between the field of view of the first camera and the field of view of the second camera;
if the position of the target in the first monitoring picture is within the overlapping area, switching, by the target monitoring system, the current main surveillance camera to the second camera.
In the embodiments of the present invention, the field of view of the first camera and that of the second camera have an overlapping area. When the target to be tracked moves into the overlapping area in the first monitoring picture, the main camera is switched from the first camera to the second camera, so that the two cameras share the target's position. Once the target enters the overlapping area of the two cameras' fields of view, the second camera can determine the target's position in its own picture with the help of the adjacent first camera and can quickly track the target based on that position; each camera does not need to perform its own detection to locate the target, which improves tracking efficiency.
With reference to the first aspect, in a first possible implementation of the first aspect, the judging, by the target monitoring system according to the position information of the target in the first monitoring picture, whether the position of the target in the first monitoring picture is within the overlapping area includes:
determining, by the target monitoring system according to the position information of the target in the first monitoring picture, the grid number of the grid in the first monitoring picture into which the target falls, where a plurality of grids are preset on the first monitoring picture, the plurality of grids have different grid numbers, and the plurality of grids correspond to different location areas in the first monitoring picture;
querying, by the target monitoring system according to the grid number of the target in the first monitoring picture, a grid correspondence table of the overlapping area, where the grid correspondence table includes the correspondence between the grid in the first monitoring picture and the grid in a second monitoring picture for the same physical position point, the second monitoring picture being captured by the second camera;
if the grid number of the target in the first monitoring picture is found in the grid correspondence table, determining, by the target monitoring system, that the position of the target in the first monitoring picture is within the overlapping area.
In the embodiments of the present invention, grids are preconfigured on the first monitoring picture and the second monitoring picture, and a grid correspondence table can be configured for the overlapping area of the fields of view of the first camera and the second camera; by querying the grid correspondence table, it can be determined that the target's position in the first monitoring picture is within the overlapping area.
With reference to the first possible implementation of the first aspect, in a second possible implementation of the first aspect, after the target monitoring system switches the current main surveillance camera to the second camera, the method further includes:
when the second camera is switched to be the main surveillance camera, querying, by the target monitoring system, the grid correspondence table according to the acquired grid number of the target in the first monitoring picture;
if the grid number of the target in the second monitoring picture is found in the grid correspondence table, determining, by the target monitoring system, that the target has been found in the second monitoring picture;
acquiring, by the target monitoring system, the position information of the target in the second monitoring picture according to the grid number of the target in the second monitoring picture.
In the embodiments of the present invention, the target monitoring system queries the grid correspondence table according to the acquired grid number of the target in the first monitoring picture; if the grid number of the target in the second monitoring picture is found in the table, the target monitoring system determines that the target has been found in the second monitoring picture, and then obtains the target's position information in the second monitoring picture according to that grid number.
With reference to the first aspect or the first or second possible implementation of the first aspect, in a third possible implementation of the first aspect, before the target monitoring system acquires the position information of the target to be tracked in the first monitoring picture, the method further includes:
acquiring, by the target monitoring system, feature information of the target, and detecting, according to the feature information, whether the target appears in the first monitoring picture;
if the target appears in the first monitoring picture, triggering execution of the following step: acquiring, by the target monitoring system, the position information of the target to be tracked in the first monitoring picture.
In the embodiments of the present invention, by detecting the features of the target to be tracked, it can be determined in real time in which camera of the target monitoring system the target appears, and that camera can be switched to be the main camera.
With reference to the first aspect or the first, second, or third possible implementation of the first aspect, in a fourth possible implementation of the first aspect, the target monitoring system further includes a controller, and the acquiring, by the target monitoring system, of the position information of the target to be tracked in the first monitoring picture includes:
acquiring, by the controller, feature information of the target, and detecting the feature information of the target in the first monitoring picture;
if the feature information is detected in the first monitoring picture, calculating, by the controller, the position information of the target in the first monitoring picture, and sending the position information of the target in the first monitoring picture to the first camera.
In the embodiments of the present invention, through real-time detection by the controller, the first camera can obtain the target's position information in the first monitoring picture from the controller, so that the first camera, acting as the main camera, can continuously track the target.
With reference to the first aspect or the first, second, third, or fourth possible implementation of the first aspect, in a fifth possible implementation of the first aspect, the target monitoring system further includes a controller and a monitoring screen, and the switching, by the target monitoring system, of the current main surveillance camera to the second camera specifically includes:
switching, by the controller, the second monitoring picture onto the monitoring screen;
or highlighting, by the controller, the second monitoring picture on the monitoring screen;
or displaying, by the controller, the second monitoring picture concatenated with the first monitoring picture on the monitoring screen.
In the embodiments of the present invention, the monitoring screen can intuitively display the target's movement from the first monitoring picture to the second monitoring picture, thereby tracing both where the target came from and where it went.
With reference to the first aspect or the first, second, third, fourth, or fifth possible implementation of the first aspect, in a sixth possible implementation of the first aspect, the target monitoring system further includes a controller and a memory, and after the target monitoring system switches the current main surveillance camera to the second camera, the method further includes:
storing, by the controller, the second monitoring picture obtained by the second camera shooting the target into the memory.
In the embodiments of the present invention, to keep a record of target tracking, the second monitoring picture captured while the second camera acts as the main camera can be stored in the memory, so that the video images in which the target appears can be retrieved directly from the memory, quickly and intuitively, without manually searching all monitoring pictures that do not contain the target.
According to a second aspect, an embodiment of the present invention further provides a camera. The camera is specifically a first camera in a target monitoring system, the target monitoring system includes the first camera and a second camera, and the first camera includes:
a first location acquiring module, configured to acquire, when the first camera acts as the current main surveillance camera, position information of a target to be tracked in a first monitoring picture, the first monitoring picture being captured by the first camera;
an overlapping area judging module, configured to judge, according to the position information of the target in the first monitoring picture, whether the position of the target in the first monitoring picture is within an overlapping area, the overlapping area being the range of overlap between the field of view of the first camera and the field of view of the second camera;
a switching module, configured to switch the current main surveillance camera to the second camera if the position of the target in the first monitoring picture is within the overlapping area.
In the embodiments of the present invention, the field of view of the first camera and that of the second camera have an overlapping area. When the target to be tracked moves into the overlapping area in the first monitoring picture, the main camera is switched from the first camera to the second camera, so that the two cameras share the target's position. Once the target enters the overlapping area of the two cameras' fields of view, the second camera can determine the target's position in its own picture with the help of the adjacent first camera and can quickly track the target based on that position; each camera does not need to perform its own detection to locate the target, which improves tracking efficiency.
With reference to the second aspect, in a first possible implementation of the second aspect, the overlapping area judging module includes:
a grid determining module, configured to determine, according to the position information of the target in the first monitoring picture, the grid number of the grid in the first monitoring picture into which the target falls, where a plurality of grids are preset on the first monitoring picture, the plurality of grids have different grid numbers, and the plurality of grids correspond to different location areas in the first monitoring picture;
a first grid query module, configured to query a grid correspondence table of the overlapping area according to the grid number of the target in the first monitoring picture, where the grid correspondence table includes the correspondence between the grid in the first monitoring picture and the grid in a second monitoring picture for the same physical position point, the second monitoring picture being captured by the second camera;
an overlapping area determining module, configured to determine, if the grid number of the target in the first monitoring picture is found in the grid correspondence table, that the position of the target in the first monitoring picture is within the overlapping area.
In the embodiments of the present invention, grids are preconfigured on the first monitoring picture and the second monitoring picture, and a grid correspondence table can be configured for the overlapping area of the fields of view of the first camera and the second camera; by querying the grid correspondence table, it can be determined that the target's position in the first monitoring picture is within the overlapping area.
With reference to the second aspect or the first possible implementation of the second aspect, in a second possible implementation of the second aspect,
the first camera further includes a feature detecting module, where
the feature detecting module is configured to acquire feature information of the target before the first location acquiring module acquires the position information of the target to be tracked in the first monitoring picture, and to detect, according to the feature information, whether the target appears in the first monitoring picture; if the target appears in the first monitoring picture, execution of the location acquiring module is triggered.
In the embodiments of the present invention, by detecting the features of the target to be tracked, it can be determined in real time in which camera of the target monitoring system the target appears, and that camera can be switched to be the main camera.
According to a third aspect, an embodiment of the present invention further provides a target monitoring system, the target monitoring system including the first camera according to any one of the foregoing claims 8 to 10 and a second camera.
In the embodiments of the present invention, the field of view of the first camera and that of the second camera have an overlapping area. When the target to be tracked moves into the overlapping area in the first monitoring picture, the main camera is switched from the first camera to the second camera, so that the two cameras share the target's position. Once the target enters the overlapping area of the two cameras' fields of view, the second camera can determine the target's position in its own picture with the help of the adjacent first camera and can quickly track the target based on that position; each camera does not need to perform its own detection to locate the target, which improves tracking efficiency.
With reference to the third aspect, in a first possible implementation of the third aspect, the second camera includes:
a second grid query module, configured to: after the first camera switches the current main surveillance camera to the second camera, when the second camera is switched to be the main surveillance camera, query the grid correspondence table according to the acquired grid number of the target in the first monitoring picture;
a target locking module, configured to determine, if the grid number of the target in the second monitoring picture is found in the grid correspondence table, that the target has been found in the second monitoring picture;
a second location acquiring module, configured to acquire the position information of the target in the second monitoring picture according to the grid number of the target in the second monitoring picture.
In the embodiments of the present invention, the target monitoring system queries the grid correspondence table according to the acquired grid number of the target in the first monitoring picture; if the grid number of the target in the second monitoring picture is found in the table, the target monitoring system determines that the target has been found in the second monitoring picture, and then obtains the target's position information in the second monitoring picture according to that grid number.
According to a fourth aspect, an embodiment of the present invention further provides a controller. The controller is deployed in a target monitoring system, the target monitoring system includes the controller, a first camera, and a second camera, and the controller includes:
a location acquiring module, configured to acquire, when the first camera acts as the current main surveillance camera, position information of a target to be tracked in a first monitoring picture, the first monitoring picture being captured by the first camera;
an overlapping area judging module, configured to judge, according to the position information of the target in the first monitoring picture, whether the position of the target in the first monitoring picture is within an overlapping area, the overlapping area being the range of overlap between the field of view of the first camera and the field of view of the second camera;
a switching module, configured to switch the current main surveillance camera to the second camera if the position of the target in the first monitoring picture is within the overlapping area.
In the embodiments of the present invention, the field of view of the first camera and that of the second camera have an overlapping area. When the target to be tracked moves into the overlapping area in the first monitoring picture, the main camera is switched from the first camera to the second camera, so that the two cameras share the target's position. Once the target enters the overlapping area of the two cameras' fields of view, the second camera can determine the target's position in its own picture with the help of the adjacent first camera and can quickly track the target based on that position; each camera does not need to perform its own detection to locate the target, which improves tracking efficiency.
With reference to the fourth aspect, in a first possible implementation of the fourth aspect, the overlapping area judging module includes:
a grid determining module, configured to determine, according to the position information of the target in the first monitoring picture, the grid number of the grid in the first monitoring picture into which the target falls, where a plurality of grids are preset on the first monitoring picture, the plurality of grids have different grid numbers, and the plurality of grids correspond to different location areas in the first monitoring picture;
a grid query module, configured to query a grid correspondence table of the overlapping area according to the grid number of the target in the first monitoring picture, where the grid correspondence table includes the correspondence between the grid in the first monitoring picture and the grid in a second monitoring picture for the same physical position point, the second monitoring picture being captured by the second camera;
an overlapping area determining module, configured to determine, if the grid number of the target in the first monitoring picture is found in the grid correspondence table, that the position of the target in the first monitoring picture is within the overlapping area.
In the embodiments of the present invention, grids are preconfigured on the first monitoring picture and the second monitoring picture, and a grid correspondence table can be configured for the overlapping area of the fields of view of the first camera and the second camera; by querying the grid correspondence table, it can be determined that the target's position in the first monitoring picture is within the overlapping area.
With reference to the fourth aspect or the first possible implementation of the fourth aspect, in a second possible implementation of the fourth aspect, the controller further includes:
a target locking module, configured to: after the first camera switches the current main surveillance camera to the second camera, when the second camera is switched to be the main surveillance camera, query the grid correspondence table according to the acquired grid number of the target in the first monitoring picture; and if the grid number of the target in the second monitoring picture is found in the grid correspondence table, determine that the target has been found in the second monitoring picture;
the location acquiring module is further configured to acquire the position information of the target in the second monitoring picture according to the grid number of the target in the second monitoring picture.
In the embodiments of the present invention, the target monitoring system queries the grid correspondence table according to the acquired grid number of the target in the first monitoring picture; if the grid number of the target in the second monitoring picture is found in the table, the target monitoring system determines that the target has been found in the second monitoring picture, and then obtains the target's position information in the second monitoring picture according to that grid number.
With reference to the fourth aspect or the first or second possible implementation of the fourth aspect, in a third possible implementation of the fourth aspect, the location acquiring module is specifically configured to: acquire feature information of the target before the position information of the target to be tracked in the first monitoring picture is acquired, and detect the feature information of the target in the first monitoring picture; if the feature information is detected in the first monitoring picture, calculate the position information of the target in the first monitoring picture and send the position information of the target in the first monitoring picture to the first camera.
In the embodiments of the present invention, through real-time detection by the controller, the first camera can obtain the target's position information in the first monitoring picture from the controller, so that the first camera, acting as the main camera, can continuously track the target.
With reference to the fourth aspect or the first, second, or third possible implementation of the fourth aspect, in a fourth possible implementation of the fourth aspect, the target monitoring system further includes a monitoring screen, and the switching module is specifically configured to switch the second monitoring picture onto the monitoring screen; or highlight the second monitoring picture on the monitoring screen; or display the second monitoring picture concatenated with the first monitoring picture on the monitoring screen.
In the embodiments of the present invention, the monitoring screen can intuitively display the target's movement from the first monitoring picture to the second monitoring picture, thereby tracing both where the target came from and where it went.
With reference to the fourth aspect or the first, second, third, or fourth possible implementation of the fourth aspect, in a fifth possible implementation of the fourth aspect, the target monitoring system further includes a memory, and the controller further includes:
a storage module, configured to store, after the switching module switches the current main surveillance camera to the second camera, the second monitoring picture obtained by the second camera shooting the target into the memory.
In the embodiments of the present invention, to keep a record of target tracking, the second monitoring picture captured while the second camera acts as the main camera can be stored in the memory, so that the video images in which the target appears can be retrieved directly from the memory, quickly and intuitively, without manually searching all monitoring pictures that do not contain the target.
According to a fifth aspect, an embodiment of the present invention further provides a target monitoring system, the target monitoring system including the controller according to any one of the implementations of the foregoing fourth aspect, a first camera, and a second camera.
In the embodiments of the present invention, the field of view of the first camera and that of the second camera have an overlapping area. When the target to be tracked moves into the overlapping area in the first monitoring picture, the main camera is switched from the first camera to the second camera, so that the two cameras share the target's position. Once the target enters the overlapping area of the two cameras' fields of view, the second camera can determine the target's position in its own picture with the help of the adjacent first camera and can quickly track the target based on that position; each camera does not need to perform its own detection to locate the target, which improves tracking efficiency.
With reference to the fifth aspect, in a first possible implementation of the fifth aspect, the target monitoring system further includes a monitoring screen and/or a memory.
In the embodiments of the present invention, to keep a record of target tracking, the second monitoring picture captured while the second camera acts as the main camera can be stored in the memory, so that the video images in which the target appears can be retrieved directly from the memory, quickly and intuitively, without manually searching all monitoring pictures that do not contain the target.
To describe the technical solutions in the embodiments of the present invention or in the prior art more clearly, the accompanying drawings needed for describing the embodiments or the prior art are briefly introduced below. Apparently, the drawings described below show merely some embodiments of the present invention, and a person of ordinary skill in the art may derive other drawings from these drawings without creative effort.
FIG. 1 is a schematic flowchart of a target monitoring method according to an embodiment of the present invention;
FIG. 2 is another schematic flowchart of a target monitoring method according to an embodiment of the present invention;
FIG. 3 is another schematic flowchart of a target monitoring method according to an embodiment of the present invention;
FIG. 4 is another schematic flowchart of a target monitoring method according to an embodiment of the present invention;
FIG. 5 is another schematic flowchart of a target monitoring method according to an embodiment of the present invention;
FIG. 6-a is a schematic structural diagram of a camera according to an embodiment of the present invention;
FIG. 6-b is another schematic structural diagram of a camera according to an embodiment of the present invention;
FIG. 6-c is another schematic structural diagram of a camera according to an embodiment of the present invention;
FIG. 7-a is a schematic diagram of a target monitoring system according to an embodiment of the present invention;
FIG. 7-b is another schematic diagram of a target monitoring system according to an embodiment of the present invention;
FIG. 8-a is a schematic structural diagram of a controller according to an embodiment of the present invention;
FIG. 8-b is another schematic structural diagram of a controller according to an embodiment of the present invention;
FIG. 8-c is another schematic structural diagram of a controller according to an embodiment of the present invention;
FIG. 8-d is another schematic structural diagram of a controller according to an embodiment of the present invention;
FIG. 9-a is a schematic diagram of a target monitoring system according to an embodiment of the present invention;
FIG. 9-b is another schematic diagram of a target monitoring system according to an embodiment of the present invention;
FIG. 10 is another schematic structural diagram of a camera according to an embodiment of the present invention;
FIG. 11 is another schematic structural diagram of a controller according to an embodiment of the present invention.
The technical solutions in the embodiments of the present invention are described below with reference to the accompanying drawings in the embodiments of the present invention. Apparently, the described embodiments are merely some rather than all of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
As can be seen from the description of the background, the prior art cannot realize cooperative tracking of a target by multiple cameras. Usually each camera performs target detection and tracking in its own independently divided monitoring region; that is, the monitoring regions of the cameras do not overlap, the cameras cannot share position coordinates with each other, and the efficiency of cooperative tracking is low.
In the embodiments of the present invention, there are multiple cameras that cooperatively track a target. The multiple cameras may include an adjacent first camera and second camera whose fields of view have an overlapping area. The following embodiments are described by taking as an example the case in which the cameras cooperatively tracking the target include the first camera and the second camera.
First, the implementation process of switching the main camera according to the movement of the target in the embodiments of the present invention is introduced. The target monitoring method provided by the embodiments of the present invention can be applied to a target monitoring system that includes a first camera and a second camera. Referring to FIG. 1, the method of this embodiment includes:
101. When the first camera acts as the current main surveillance camera, the target monitoring system acquires position information of a target to be tracked in a first monitoring picture, the first monitoring picture being captured by the first camera.
In the embodiments of the present invention, the target monitoring system includes at least a first camera and a second camera, which are deployed adjacent to each other. Without limitation, the target monitoring system may include more cameras besides the first and second cameras; for the target monitoring method between other cameras, reference may be made to the main-camera switching process between the first and second cameras in this embodiment. The first camera captures the first monitoring picture. When the first camera acts as the current main surveillance camera, the target monitoring system acquires the position information of the target to be tracked in the first monitoring picture. For example, the position information may be acquired by the first camera itself, or by a controller in the target monitoring system, which is not limited here.
In some embodiments of the present invention, before step 101 in which the target monitoring system acquires the position information of the target to be tracked in the first monitoring picture, the method provided by the embodiments of the present invention further includes:
A1. The target monitoring system acquires feature information of the target and detects, according to the feature information, whether the target appears in the first monitoring picture;
A2. If the target appears in the first monitoring picture, execution of the following step 101 is triggered: the target monitoring system acquires the position information of the target to be tracked in the first monitoring picture.
The feature information of the target can be preconfigured in the target monitoring system. The system detects, according to the feature information, whether the target appears in the first monitoring picture; if it does, the current main camera is configured to be the first camera corresponding to the first monitoring picture, and step 101 is then performed. By detecting the features of the target to be tracked, it can be determined in real time in which camera of the target monitoring system the target appears, and that camera can be switched to be the main camera.
In some embodiments of the present invention, the target monitoring system further includes a controller, and step 101 in which the target monitoring system acquires the position information of the target to be tracked in the first monitoring picture includes:
B1. The controller acquires feature information of the target and detects the feature information of the target in the first monitoring picture;
B2. If the feature information is detected in the first monitoring picture, the controller calculates the position information of the target in the first monitoring picture and sends the position information of the target in the first monitoring picture to the first camera.
In the embodiments of the present invention, a controller can be configured in the target monitoring system. The controller can be used to detect the feature information of the target and thereby judge whether the target's feature information appears in the first monitoring picture. If the feature information is detected in the first monitoring picture, the controller calculates the position information of the target in the first monitoring picture and can send it to the first camera, so that the first camera obtains the target's position information in the first monitoring picture from the controller. Through the controller's real-time detection, the first camera can obtain the target's position information in the first monitoring picture from the controller, so that the first camera, acting as the main camera, can continuously track the target.
102. The target monitoring system judges, according to the position information of the target in the first monitoring picture, whether the position of the target in the first monitoring picture is within an overlapping area, the overlapping area being the range of overlap between the field of view of the first camera and the field of view of the second camera.
In the embodiments of the present invention, when the first and second cameras are installed, their fields of view can be configured to produce an overlapping area, which is the range of overlap between the field of view of the first camera and that of the second camera. Without limitation, the camera whose field of view overlaps with the first camera is not limited to the second camera; the second camera is merely one implementable way. The target to be tracked moves in real time and is movable within the first monitoring picture; it may move into the overlapping area between the first and second cameras, or into an overlapping area between the first camera and a third camera, which is not limited here. When the target moves within the first monitoring picture, the target monitoring system can judge, according to the target's position information in the first monitoring picture, whether the target's position in the first monitoring picture is within the overlapping area; if it is, step 103 is triggered.
In some embodiments of the present invention, step 102 in which the target monitoring system judges, according to the position information of the target in the first monitoring picture, whether the position of the target in the first monitoring picture is within the overlapping area includes:
C1. The target monitoring system determines, according to the position information of the target in the first monitoring picture, the grid number of the grid in the first monitoring picture into which the target falls, where a plurality of grids are preset on the first monitoring picture, the plurality of grids have different grid numbers, and the plurality of grids correspond to different location areas in the first monitoring picture;
C2. The target monitoring system queries a grid correspondence table of the overlapping area according to the grid number of the target in the first monitoring picture, where the grid correspondence table includes the correspondence between the grid in the first monitoring picture and the grid in a second monitoring picture for the same physical position point, the second monitoring picture being captured by the second camera;
C3. If the grid number of the target in the first monitoring picture is found in the grid correspondence table, the target monitoring system determines that the position of the target in the first monitoring picture is within the overlapping area.
In some embodiments of the present invention, a plurality of grids with different grid numbers can be preset on the first monitoring picture, corresponding to different location areas in the picture, so the grid number of the target's location in the first monitoring picture can be obtained from the correspondence between target positions and grid numbers. The target monitoring system then queries the grid correspondence table of the overlapping area according to the grid number of the target in the first monitoring picture; the table includes the correspondence between the grid in the first monitoring picture and the grid in the second monitoring picture for the same physical position point, the second monitoring picture being captured by the second camera. If the grid number of the target in the first monitoring picture is found in the table, the target monitoring system determines that the target's position in the first monitoring picture is within the overlapping area. By preconfiguring grids on the first and second monitoring pictures, a grid correspondence table can be configured for the overlapping area of the two cameras' fields of view, and querying the table determines that the target's position in the first monitoring picture is within the overlapping area.
103. If the position of the target in the first monitoring picture is within the overlapping area, the target monitoring system switches the current main surveillance camera to the second camera.
In the embodiments of the present invention, when it is judged in the foregoing step 102 that the target's position in the first monitoring picture is within the overlapping area, the target monitoring system can switch the current main surveillance camera to the second camera, so that the second camera, acting as the main camera, can continue to track the target. In the embodiments of the present invention, by configuring an overlapping area between the first and second cameras and detecting in real time whether the target appears in the overlapping area, the main camera can be switched in real time according to the target's moving position, so that the main camera continuously tracks the target.
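Steps 101 to 103 described above can be condensed into a small piece of controller logic. This is an illustrative sketch, not the patent's implementation; `in_overlap` stands in for the grid-correspondence-table query of steps C1 to C3, and all names are assumptions.

```python
# Sketch of steps 101-103: while the first camera is the main camera, check
# whether the target's position falls in the overlap area and switch the
# main surveillance camera if it does.
def monitor_step(main_camera, position, in_overlap, second_camera):
    """Return the main camera for the next monitoring step."""
    if in_overlap(position):
        return second_camera  # step 103: switch the main surveillance camera
    return main_camera        # target is still in the non-overlapping area

overlap = lambda pos: pos[0] > 80  # hypothetical overlap test
print(monitor_step("camera1", (90, 10), overlap, "camera2"))  # camera2
```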
Further, in some embodiments of the present invention, in the implementation scenario of the foregoing steps C1 to C3, after step 103 in which the target monitoring system switches the current main surveillance camera to the second camera, the method provided by the embodiments of the present invention further includes:
D1. When the second camera is switched to be the main surveillance camera, the target monitoring system queries the grid correspondence table according to the acquired grid number of the target in the first monitoring picture;
D2. If the grid number of the target in the second monitoring picture is found in the grid correspondence table, the target monitoring system determines that the target has been found in the second monitoring picture;
D3. The target monitoring system acquires the position information of the target in the second monitoring picture according to the grid number of the target in the second monitoring picture.
In the implementation scenario in which steps C1 to C3 use grids to locate the target, the target's position in the second monitoring picture can also be determined by querying the grid correspondence table. When the second camera is switched to be the main surveillance camera, the target monitoring system queries the table according to the acquired grid number of the target in the first monitoring picture; if the grid number of the target in the second monitoring picture is found, the system determines that the target has been found in the second monitoring picture and acquires its position information there according to that grid number. For specific implementation scenarios of grid numbers, see the examples in the subsequent embodiments.
In some embodiments of the present invention, the target monitoring system further includes a controller and a monitoring screen, and step 103 in which the target monitoring system switches the current main surveillance camera to the second camera specifically includes:
E1. The controller switches the second monitoring picture onto the monitoring screen;
or E2. The controller highlights the second monitoring picture on the monitoring screen;
or E3. The controller displays the second monitoring picture concatenated with the first monitoring picture on the monitoring screen.
When a monitoring screen is configured in the target monitoring system provided by the embodiments of the present invention, the monitoring pictures of multiple cameras can be displayed on it. When the second camera is switched to be the main camera, one implementable way is for the controller to switch the second monitoring picture onto the monitoring screen, so that the target in the second monitoring picture is displayed. In another implementation of the present invention, if the monitoring screen displays the pictures of multiple cameras simultaneously, the controller can highlight the second monitoring picture on the screen when the second camera becomes the main camera. In yet another implementation, the controller can display the second monitoring picture concatenated with the first monitoring picture on the screen, so that the screen intuitively shows the target's movement from the first monitoring picture to the second monitoring picture, thereby tracing both where the target came from and where it went.
In some embodiments of the present invention, the target monitoring system further includes a controller and a memory, and after step 103 in which the target monitoring system switches the current main surveillance camera to the second camera, the method provided by the embodiments of the present invention further includes:
F1. The controller stores the second monitoring picture obtained by the second camera shooting the target into the memory.
A memory can also be configured in the target monitoring system. When the second camera is switched to be the main camera, in order to keep a record of target tracking, the second monitoring picture captured while the second camera acts as the main camera can be stored in the memory, so that the video images in which the target appears can be retrieved directly from the memory, quickly and intuitively, without manually searching all monitoring pictures that do not contain the target.
As can be seen from the foregoing examples of the present invention, the fields of view of the first and second cameras have an overlapping area. When the target to be tracked moves into the overlapping area in the first monitoring picture, the main camera is switched from the first camera to the second camera, so that the two cameras share the target's position. Once the target enters the overlapping area of the two cameras' fields of view, the second camera can determine the target's position in its own picture with the help of the adjacent first camera and can quickly track the target based on that position; each camera does not need to perform its own detection to locate the target, which improves tracking efficiency.
本发明实施例中,按照相邻摄像头的视域要存在重叠区域的方式布置摄像头,布置完成后,结合每个摄像头的当前观测角度、高度、位置、焦距等参数,得到每个摄像头的观测区域(即视域),在每个摄像头观测区域的视域画面打一层虚拟网格,网格的大小、形状可以相同,也可以不同,网格越小、越密,则观测精度越高,此处对网格大小、形状不作具体限定,只要所打网格覆盖所需观测区域的视域画面即可;接下来建立二维坐标系,记录每个虚拟网格对应的坐标(即网格坐标)形成针对每个摄像头的视域画面的网格坐标列表,网格坐标列表体现视域画面内网格与坐标的对应关系,每个摄像头的网格坐标列表中包括网格编号、网格坐标及二者的对应关系,网格坐标指的是网格中所包括的坐标,在一个具体的实施例中,摄像头的网格坐标列表可如表1所示:
表1
表1中,网格坐标中的X可以代表网格的起始位置横坐标,Y可以代表网格的起始位置纵坐标,W可以代表网格的宽度,H可以代表网格的高度。当然网格坐标还可以表示成其他形式,例如网格坐标表示成每个网格包括的二维坐标集合,此处对网格坐标的具体表示形式不做限定。
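以(X,Y,W,H)形式表示网格坐标时,判断中心坐标是否落入某个网格可按下述草图实现。网格坐标列表中的数值为假设示例(仅与正文中P(23,42)落入编号为2的网格的例子相容),网格边界的开闭方式也是一种假设:

```python
def point_in_grid(x0, y0, grid):
    """判断中心坐标P(x0, y0)是否落入以(X, Y, W, H)表示的网格范围内(假设含起始边界、不含终点边界)。"""
    X, Y, W, H = grid
    return X <= x0 < X + W and Y <= y0 < Y + H

def find_grid_no(x0, y0, grid_list):
    """在网格坐标列表中逐一比对,返回P落入的网格编号;未落入任何网格时返回None。"""
    for no, grid in grid_list.items():
        if point_in_grid(x0, y0, grid):
            return no
    return None

# 假设的网格坐标列表(数值虚构,仅与文中P(23,42)落入编号2的网格的例子相容)
GRID_A = {1: (20, 35, 5, 5), 2: (20, 40, 5, 5), 3: (25, 35, 5, 5)}
print(find_grid_no(23, 42, GRID_A))  # 2:P(23,42)落入编号为2的网格
```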
接下来针对相邻摄像头的视域重叠区域设置网格对应表,网格对应表体现重叠区域内同时属于不同摄像头的视域画面的网格的对应关系。具体地,可以在重叠区域设置若干物理定位锚点,利用相邻摄像头同时观测同一物理定位锚点,根据同一物理定位锚点在不同摄像头的视域画面上所属的网格编号来建立网格对应表。如果与一个摄像头相邻的摄像头有多个,则该摄像头的视域与每个相邻的摄像头的视域都存在重叠区域,针对每个重叠区域都将建立网格对应表。在一个具体的实施例中,以相邻摄像头为第一摄像头及第二摄像头为例,所建立的网格对应表可如表2所示:
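利用物理定位锚点建立网格对应表的过程可示意如下。其中锚点标识及各摄像头的观测结果均为假设输入,仅用于说明对应关系的构建方式:

```python
def build_grid_map(anchor_obs_a, anchor_obs_b):
    """根据同一物理定位锚点在两个相邻摄像头画面中所属的网格编号建立网格对应表。
    anchor_obs_a / anchor_obs_b: 锚点标识 -> 在各自视域画面中所属的网格编号(假设输入)。"""
    grid_map = {}
    for anchor, grid_a in anchor_obs_a.items():
        if anchor in anchor_obs_b:          # 仅当两个摄像头同时观测到该锚点时建立对应
            grid_map[grid_a] = anchor_obs_b[anchor]
    return grid_map

# 假设4个锚点P1..P4在两个摄像头画面中的观测结果,可得到与表2一致的对应关系
obs_a = {"P1": 1, "P2": 2, "P3": 3, "P4": 4}
obs_b = {"P1": 2, "P2": 4, "P3": 3, "P4": 1}
print(build_grid_map(obs_a, obs_b))  # {1: 2, 2: 4, 3: 3, 4: 1}
```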
表2
表2中所表示的对应关系具体为:第一摄像头的网格1对应第二摄像头的网格2,第一摄像头的网格2对应第二摄像头的网格4,第一摄像头的网格3对应第二摄像头的网格3,第一摄像头的网
格4对应第二摄像头的网格1。
所建立的网格坐标列表及网格对应表可以存储在对应的摄像头内,也可以存储在控制器,控制器具体可以为终端、服务器等设备。
本发明实施例提供的目标监控方法应用于目标监控系统中,所述目标监控系统包括第一摄像头及第二摄像头,请参阅图2,本实施例提供的另一种方法包括:
201、第二摄像头通过控制器接收第一摄像头发送的第一网格编号,第一网格编号由第一摄像头根据目标在第一摄像头的视域画面中的第一中心坐标,利用针对第一摄像头的视域画面预置的网格坐标列表,查找第一中心坐标对应的第一网格坐标及第一网格坐标对应的第一网格编号得到。
具体实现中,第一摄像头内存储有针对第一摄像头的视域画面预置的网格坐标列表,以及针对第一摄像头与相邻的第二摄像头的视域重叠区域预置的网格对应表;第二摄像头内存储有针对第二摄像头的视域画面预置的网格坐标列表,以及针对第二摄像头与相邻的第一摄像头的视域重叠区域预置的网格对应表。不同摄像头内存储的针对同一重叠区域预置的网格对应表相同,当与某个摄像头相邻的摄像头的数量为多个时,该摄像头内存储的网格对应表也有多个,每个网格对应表针对一个重叠区域。
当需要追踪某个目标时,可由控制器先确定目标当前处于哪个摄像头的视域中,例如确定目标当前处于第一摄像头的视域中,则控制器可以向第一摄像头发送目标追踪请求,目标追踪请求中可以包括目标的特征信息,目标的特征信息例如目标的颜色、灰度等,目标的特征信息可通过图像、文本等表现,也可以通过图像识别技术来识别所述目标。第一摄像头接收控制器发送的目标追踪请求,第一摄像头可以根据目标的特征信息检测以确定目标在自身的视域画面中的位置,根据目标在自身的视域画面中的位置追踪目标;或者可以由用户在第一摄像头的视域画面中手动框选出目标的位置,第一摄像头根据用户选择的位置追踪目标。
第一摄像头追踪到目标后,可以计算目标在第一摄像头的视域画面中的第一中心坐标,第一中心坐标可以指目标的中心点在第一摄像头的视域画面中的坐标,中心坐标例如P(x0,y0),得到第一中心坐标之后,第一摄像头利用针对第一摄像头的视域画面预置的网格坐标列表,查找第一中心坐标对应的第一网格坐标及第一网格坐标对应的第一网格编号,第一摄像头将第一网格编号通过控制器发送给第二摄像头。
如表1所示,每个网格坐标包括每个网格的宽度和高度,可以将中心坐标P(x0,y0)与每个网格坐标进行比对,P(x0,y0)落入哪个网格的坐标范围内,则可以确定P(x0,y0)对应该网格坐标,确定该网格坐标对应的网格编号之后就可以确定目标进入了该网格。以第一摄像头内存储的网格坐标列表如表1所示为例,若第一中心坐标为P(23,42),则根据表1可确定目标当前进入了第一摄像头的视域画面中编号为2的网格内。
202、第二摄像头利用针对所述重叠区域预置的网格对应表查找是否存在与所述第一网格编号对应的第二网格编号;若存在,则执行步骤203,否则,返回步骤201。
若第二摄像头利用针对所述重叠区域预置的网格对应表查找的结果是不存在与所述第一网格编号对应的第二网格编号,则说明目标未进入第一摄像头与第二摄像头的视域重叠区域,第二摄像头返回执行步骤201,继续等待接收第一摄像头发送的下一个网格编号。
若第二摄像头利用针对所述重叠区域预置的网格对应表查找的结果是存在与所述第一网格编号对应的第二网格编号,则说明目标已进入第一摄像头与第二摄像头的视域重叠区域。具体在上面的例子中,若第二摄像头通过控制器接收到的第一摄像头发送的网格编号为2,第二摄像头内预置
的网格对应表如表2所示,则第二摄像头根据表2可以确定目标进入了自身视域画面的网格4。
203、第二摄像头利用针对第二摄像头的视域画面预置的网格坐标列表查找所述第二网格编号对应的第二网格坐标;
例如,第二摄像头内存储的网格坐标列表如表3所示:
表3
上面的例子中,若查找到的对应的第二网格编号为网格4,则根据表3,第二摄像头可以在网格4对应的网格坐标(18,50,4,4)处协同追踪目标。
204、所述第二摄像头利用所述第二网格坐标协同追踪所述目标。
另外,第二摄像头通过控制器接收第一摄像头发送的第一网格编号时,还可以通过控制器接收第一摄像头发送的目标的特征信息,在第二摄像头查找到第二网格坐标之后,还可以根据目标的特征信息进一步确认第二网格坐标处的目标是否为所需追踪的目标。
在第二摄像头协同追踪目标的过程中,若目标在第一摄像头的视域画面中消失,则第一摄像头可通过控制器向第二摄像头发送目标消失通知,此后,第二摄像头将计算目标在第二摄像头的视域画面中的第二中心坐标,利用针对第二摄像头的视域画面预置的网格坐标列表,查找第二中心坐标对应的第三网格坐标及第三网格坐标对应的第三网格编号;将第三网格编号通过控制器发送给相邻摄像头。
上述过程可以理解为:初始时,当确定目标位于第一摄像头的视域画面中时,可以认为第一摄像头为主摄像头,与第一摄像头相邻的其他摄像头为从摄像头,第一摄像头将目标在自身的视域画面中的位置坐标(即网格编号)共享给其他相邻摄像头,其他相邻摄像头根据目标的位置坐标可以实现协同追踪。一旦目标从第一摄像头的视域画面中消失,第一摄像头向其他相邻摄像头发送目标消失通知之后,可以认为目标当前所处的相邻摄像头为主摄像头,该相邻摄像头同样将目标的位置坐标共享给与自身相邻的其他摄像头,以使得其他摄像头根据共享的位置坐标协同追踪目标。
下面从控制器侧描述本发明实施例提供的目标监控方法,请参阅图3,本实施例的方法包括:
301、控制器接收第一摄像头发送的第一网格编号,第一网格编号由第一摄像头根据目标在第一摄像头的视域画面中的第一中心坐标,利用针对第一摄像头的视域画面预置的网格坐标列表,查找第一中心坐标对应的第一网格坐标及第一网格坐标对应的第一网格编号得到。
具体实现中,当需要追踪某个目标时,可由控制器先确定目标当前处于哪个摄像头的视域中,例如确定目标当前处于第一摄像头的视域中,则控制器可以向第一摄像头发送目标追踪请求,目标追踪请求中可以包括目标的特征信息,目标的特征信息例如目标的颜色、灰度等,目标的特征信息可通过图像、文本等表现。目标追踪请求可由控制器直接发起,目标追踪请求也可以是由控制器从其他设备处接收的。
第一摄像头接收到控制器发送的目标追踪请求之后,可以根据目标的特征信息检测以确定目标
在自身的视域画面中的位置,根据目标在自身的视域画面中的位置追踪目标;或者可以由用户在第一摄像头的视域画面中手动框选出目标的位置,第一摄像头根据用户选择的位置追踪目标。第一摄像头追踪到目标之后,利用目标在第一摄像头的视域画面中的第一中心坐标,查找针对第一摄像头的视域画面预置的网格坐标列表得到第一中心坐标对应的第一网格坐标及第一网格坐标对应的第一网格编号,将第一网格编号发送给控制器,控制器接收第一摄像头发送的第一网格编号。
302、控制器将第一网格编号发送给第二摄像头,以使得第二摄像头根据第一网格编号协同追踪目标;第二摄像头利用针对重叠区域预置的网格对应表查找是否存在与第一网格编号对应的第二网格编号,若存在,则利用针对第二摄像头的视域画面预置的网格坐标列表查找第二网格编号对应的第二网格坐标,利用第二网格坐标协同追踪目标。
当目标在第一摄像头的视域画面中消失时,第一摄像头向控制器发送目标消失通知,控制器接收第一摄像头发送的目标消失通知,将目标消失通知发送给第二摄像头,第二摄像头接收到目标消失通知之后,计算目标在第二摄像头的视域画面中的第二中心坐标,利用针对第二摄像头的视域画面预置的网格坐标列表,查找第二中心坐标对应的第三网格坐标及所述第三网格坐标对应的第三网格编号,将第三网格编号发送给控制器,控制器将第三网格编号发送给第一摄像头,第一摄像头根据第三网格编号协同追踪目标。
由于网格坐标列表及网格对应表存储在对应的摄像头内,由对应摄像头进行相应的计算处理,本实施例的控制器主要起信息传递的作用。
为描述简洁,本实施例未做详细描述的步骤可参阅上述实施例的描述。
本实施例中,按照相邻摄像头的视域画面要存在重叠区域的方式布置摄像头,针对每个摄像头的视域画面预置网格坐标列表,针对每个摄像头与其他摄像头的视域画面重叠区域预置网格对应表,将网格坐标列表及网格对应表存储在对应的摄像头内,当第一摄像头追踪到目标后,会将目标的位置(即网格编号)共享给相邻的第二摄像头,在目标进入相邻的第二摄像头的视域画面时,第二摄像头利用相邻的第一摄像头发送的网格编号及预置的网格对应表及网格坐标列表即可确定目标的位置,不需要自行检测以确定目标的位置,因而能够提高协同追踪效率。
下面介绍当将所建立的网格坐标列表及网格对应表存储在控制器时,本发明实施例提供的目标监控方法,请参阅图4,本实施例提供的另一种方法包括:
401、控制器接收第一摄像头发送的包含目标的视域画面。
具体实现中,控制器内存储有针对第一摄像头的视域画面预置的网格坐标列表,针对第二摄像头的视域画面预置的网格坐标列表,以及针对第一摄像头与相邻的第二摄像头的视域重叠区域预置的网格对应表。
当需要追踪某个目标时,可由控制器先确定目标当前处于哪个摄像头的视域中,例如确定目标当前处于第一摄像头的视域中,则控制器可以向第一摄像头发送目标追踪请求,目标追踪请求中可以包括目标的特征信息,目标的特征信息例如目标的颜色、灰度等,目标的特征信息可通过图像、文本等表现。第一摄像头接收控制器发送的目标追踪请求,第一摄像头可以根据目标的特征信息检测以确定目标在自身的视域画面中的位置,根据目标在自身的视域画面中的位置追踪目标;或者可以由用户在第一摄像头的视域画面中手动框选出目标的位置,第一摄像头根据用户选择的位置追踪目标。第一摄像头追踪到目标后,将包含目标的视域画面发送给控制器。
402、控制器计算目标在第一摄像头的视域画面中的中心坐标,利用针对第一摄像头的视域画面预置的网格坐标列表确定中心坐标对应的第一网格坐标及第一网格坐标对应的第一网格编号。
中心坐标可以是一个二维坐标,中心坐标可以指目标的中心点在第一摄像头的视域画面中的坐标,例如中心坐标为P(x0,y0)。如表1所示,网格坐标列表中包括网格坐标,每个网格坐标包括每个网格的宽度和高度,可以将中心坐标P(x0,y0)与每个网格坐标进行比对,P(x0,y0)落入哪个网格的坐标范围内,则可以确定P(x0,y0)对应该网格坐标,确定该网格坐标对应的网格编号之后就可以确定目标进入了该网格。以控制器内存储的针对第一摄像头的视域画面预置的网格坐标列表如表1所示为例,若第一中心坐标为P(23,42),则根据表1可确定目标当前进入了第一摄像头的视域画面中编号为2的网格内。
403、控制器利用针对所述重叠区域预置的网格对应表查找是否存在与第一网格编号对应的第二网格编号,若存在,则利用针对第二摄像头的视域画面预置的网格坐标列表查找第二网格编号对应的第二网格坐标。
若控制器利用针对所述重叠区域预置的网格对应表查找的结果是不存在与所述第一网格编号对应的第二网格编号,则说明目标未进入第一摄像头与第二摄像头的视域重叠区域,控制器返回执行步骤401,继续接收第一摄像头发送的包括目标的视域画面。
若控制器利用针对所述重叠区域预置的网格对应表查找的结果是存在与所述第一网格编号对应的第二网格编号,则说明目标已进入第一摄像头与第二摄像头的视域重叠区域。具体在上面的例子中,若控制器计算得到的第一网格编号为2,针对重叠区域预置的网格对应表如表2所示,则控制器根据表2可以确定目标进入了第二摄像头的视域画面的网格4。
404、控制器将第二网格坐标发送给所述第二摄像头,以使得所述第二摄像头利用所述第二网格坐标协同追踪所述目标。
若控制器内存储的针对第二摄像头的视域画面预置的网格坐标列表如表3所示,则控制器可以将网格4对应的网格坐标(18,50,4,4)发送给第二摄像头,以使得第二摄像头在网格坐标(18,50,4,4)处协同追踪目标。
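将网格坐标列表与网格对应表存储在控制器侧时,步骤402至404的查询流程可串接为如下草图。表中数值均为假设,其中第二摄像头网格4的坐标沿用正文示例(18,50,4,4):

```python
# 示意性草图:控制器侧的完整查询流程——中心坐标 -> 第一网格编号 -> 第二网格编号 -> 第二网格坐标
GRID_LIST_1 = {1: (20, 35, 5, 5), 2: (20, 40, 5, 5)}   # 第一摄像头网格坐标列表(假设值)
GRID_MAP = {1: 2, 2: 4, 3: 3, 4: 1}                    # 重叠区域网格对应表(与表2一致)
GRID_LIST_2 = {4: (18, 50, 4, 4)}                      # 第二摄像头网格坐标列表(表3节选)

def controller_lookup(x0, y0):
    """步骤402~404:由目标中心坐标逐级查出目标在第二摄像头视域画面中的网格坐标。"""
    first_no = next((no for no, (X, Y, W, H) in GRID_LIST_1.items()
                     if X <= x0 < X + W and Y <= y0 < Y + H), None)
    second_no = GRID_MAP.get(first_no)
    if second_no is None:
        return None          # 目标未进入重叠区域
    return GRID_LIST_2[second_no]

print(controller_lookup(23, 42))  # (18, 50, 4, 4):发送给第二摄像头用于协同追踪
```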
下面从摄像头侧描述本发明实施例提供的目标监控方法,请参阅图5,本实施例的方法包括:
501、第二摄像头接收控制器发送的第二网格坐标,第二网格坐标由控制器根据第一摄像头发送的包含目标的视域画面获得;控制器通过计算目标在第一摄像头的视域画面中的中心坐标,利用针对第一摄像头的视域画面预置的网格坐标列表确定所述中心坐标对应的第一网格坐标及所述第一网格坐标对应的第一网格编号,利用针对所述重叠区域预置的网格对应表查找所述第一网格编号对应的第二网格编号,通过查找针对第二摄像头的视域画面预置的网格坐标列表获得所述第二网格编号对应的所述第二网格坐标。
502、第二摄像头利用所述第二网格坐标协同追踪所述目标。
此后,第二摄像头也将追踪到的包含目标的视域画面发送给控制器。
本实施例中,按照相邻摄像头的视域画面要存在重叠区域的方式布置摄像头,针对每个摄像头的视域画面预置网格坐标列表,针对每个摄像头与其他摄像头的视域画面重叠区域预置网格对应表,将网格坐标列表及网格对应表存储在控制器,当第一摄像头追踪到目标后,会将包含目标的视域画面传给控制器,控制器计算目标在第一摄像头的视域画面中的位置,并在确定目标进入两个摄像头的视域重叠区域时,将目标在第二摄像头的视域画面中的位置坐标发送给第二摄像头,第二摄像头根据控制器发送的位置坐标即可确定目标的位置,不需要自行检测以确定目标的位置,因而能够提高协同追踪效率。
下面介绍本发明实施例提供的摄像头及控制器,控制器可以是终端或服务器。当控制器为终端
时,可直接由控制器发起目标追踪请求并协同摄像头追踪目标;当控制器为服务器时,控制器可接收终端发起的目标追踪请求,将目标追踪请求发送给对应摄像头之后,协同摄像头追踪目标。例如,终端内存储有某个要追踪的嫌疑人的照片,如果终端作为控制器,则终端可以直接向摄像头发起目标追踪请求并协同摄像头追踪目标;如果服务器作为控制器,则终端可以将嫌疑人的照片发送给服务器,由服务器向摄像头发起目标追踪请求并协同摄像头追踪目标。
本发明可以实现基于视觉协同的定位追踪目标,通过主摄像头确定追踪目标及其位置信息后,将追踪目标的位置等信息共享给周边其他摄像头,其他摄像头根据此位置信息等可快速地捕获并追踪目标。本发明实施例通过主摄像头共享目标位置和其它辅助信息,可协助其它摄像头快速捕获追踪目标,从而提高检测目标的效率和持续追踪目标的能力,可实现多摄像头高效协作捕获追踪特定目标,能实现对目标的自动连续捕获追踪,发挥了多个摄像头协作追踪的优势。本发明实施例提供的方案包括以下步骤:
首先,对画面空间进行网格化处理:相邻摄像头的视域(Field of View,FOV)要有重叠区域,某个摄像头A结合当前自身的观测角度、高度、焦距,在摄像头当前焦距下所观测到的画面空间上打上一层虚拟的网格,对网格的大小和形状不做限定,只需覆盖所需观测区域即可,并记录每个网格的虚拟坐标形成该摄像头的网格坐标列表GridA。
然后,构建重叠区域的网格对应关系:经实测确定,在一定焦距和景深下,将两个相邻摄像头(A,B)在各自画面中能对应到相同物理位置点的两个网格建立起对应关系,并记录进网格对应表Map(A,B)。
接下来,查询共享目标的位置坐标:某个摄像头A根据对象特征检测或手动框选任意待追踪目标后,将其作为当前主摄像头,主摄像头持续追踪目标,并用目标的中心坐标不断与当前画面中网格坐标列表GridA进行匹配,一旦匹配到目标的中心坐标进入到某个网格,即可获知当前目标所在的网格编号k;然后,主摄像头将该网格编号k和目标的附加特征信息(主摄像头提供的目标的其他信息包括:颜色、灰度等)发送给与其相邻的其它摄像头B。
之后,计算相对位置和角度:相邻摄像头B接收目标数据,结合所收到的该目标的网格编号,查询网格对应表Map(A,B)。
最后,如能查询到对应到自身的网格编号s,则根据网格编号快速捕获目标并持续追踪:相邻摄像头B根据前述内容获得的网格编号s,对该网格区域内的颜色等特征信息进行判断,如与所收到的附加特征信息相符,则说明追踪目标已进入该相邻摄像头的网格区域s,该摄像头即刻捕获并开始追踪目标;同时,一旦目标在主摄像头可视区域消失,则本相邻摄像头就变为当前的主摄像头。
接下来以具体的实施例对本发明中的技术方案进行详细描述。本发明实施例提供的方案包括以下步骤:
步骤一、对画面空间进行网格化处理。
相邻摄像头的视域FOV要有重叠区域,摄像头A结合当前自身的观测角度、高度和范围,在摄像头A当前焦距下所观测到的画面空间上打上一层虚拟网格,对这些格子的大小和形状不做限定,只需覆盖所需的观测区域即可,格子越小越密则观测精度越高,距离摄像头较远的格子可以设置得小一些,并记录每个网格的虚拟坐标形成该摄像头的网格坐标列表GridA,网格坐标列表可以保存在摄像头自身的存储设备中,也可保存在中央控制器上(例如视频服务器)。
步骤二、构建重叠区域的网格对应关系。
经实测确定,在一定焦距和观测景深下,将两个相邻摄像头(A,B)在各自画面中能对应到同一物理位置点的两个网格建立起对应关系,并记录进网格对应表Map(A,B),具体地,可以在地面上放置一些定位锚节点,通过两个摄像头的画面观测这些锚节点,再根据在同一网格中所观测到的相同
锚节点来建立摄像头A和摄像头B中的网格的对应关系。
步骤三、查询共享目标的位置坐标。
在摄像头A的拍摄画面中,首先根据给定特征检测或手动框选出待追踪目标后,将A作为当前主摄像头,主摄像头A持续追踪目标,并计算出目标中心点在画面中的坐标P(x0,y0),然后用P(x0,y0)去查网格坐标列表GridA,由于每个GridA中的每个网格都有宽度和高度,用P(x0,y0)逐一计算比对,一旦P(x0,y0)落入了某个网格k(Xk,Yk)的覆盖区域,即可判定目标进入了该网格,由此可获知当前目标所在的网格编号为k,然后,主摄像头A将该网格编号k和目标的附加特征信息(主摄像头提供的目标的其他信息可包括:颜色、灰度等)发送给与其相邻接的其它摄像头B。
步骤四、计算相对位置和角度。
相邻摄像头B接收目标数据,并结合所收到的目标的网格编号,查询网格对应表Map(A,B),如能查询到对应到自身的网格编号s,则转步骤五;否则留在步骤四,继续等待接收主摄像头共享的下一个网格编号。
步骤五、根据网格编号快速捕获目标并持续追踪。
相邻摄像头B根据步骤四获得的网格编号信息s,再查表GridB可获得目标当前所在网格s的坐标,然后,对该网格区域s内的附加特征信息进行判断,如符合所收到的目标的附加特征信息,则说明追踪目标已进入本摄像头的该网格区域s,本摄像头立刻能捕获到并追踪到该目标,同时,一旦目标在主摄像头可视区域消失,则在中央控制器的协助下,本相邻摄像头B就自动变为当前的主摄像头并转步骤三。
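步骤五中对网格区域s内附加特征信息的比对,可用如下草图示意。此处假设附加特征为灰度、色调等标量值,并以固定容差判断相符与否,实际的特征表示与比对方式不限于此:

```python
def confirm_target(grid_region_features, received_features, tol=10):
    """对网格区域s内观测到的附加特征与主摄像头共享的附加特征逐项比对,
    各项差值均在容差tol内则判定为所需追踪的目标(特征项与容差均为假设)。"""
    return all(abs(grid_region_features[k] - received_features[k]) <= tol
               for k in received_features)

# 假设以灰度均值和色调作为附加特征信息
print(confirm_target({"gray": 128, "hue": 30}, {"gray": 125, "hue": 28}))  # True:特征相符,捕获并追踪
print(confirm_target({"gray": 60,  "hue": 30}, {"gray": 125, "hue": 28}))  # False:特征不符,继续等待
```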
本发明通过共享目标的位置信息,其他摄像头能通过获取到的追踪目标的位置和特征信息,快速捕获到目标并进行连续追踪,缩短了检测时间,提高了追踪效率,有利于多个摄像头自动高效协作追踪特定目标;主摄像头角色的切换保证了追踪的连续性和稳定性,充分发挥了多个摄像头合作的优势。本发明能提高多个摄像头联动识别追踪目标的效率,在安全监控、交通监控、公安监控尤其是实时自动追踪目标方面有广阔的应用前景。此外,如果结合摄像头捕获的其他信息,例如被追踪对象附近的商铺信息,还可以为基于位置服务(location based service)提供很好的基础。既能节省在不同位置安装短距离通信设备的开销,又能提高目标监控效率。
前述实施例对本发明提供的目标监控方法进行了详细说明,接下来对本发明实施例提供的摄像头、控制器以及目标监控系统进行详细说明。其中摄像头的组成结构可如图6-a、图6-b、图6-c所示,目标监控系统的结构如图7-a和图7-b所示,控制器的结构可如图8-a、8-b、8-c、8-d所示,另一种目标监控系统的结构如图9-a和图9-b所示。
图6-a所示的摄像头600具体为目标监控系统中的第一摄像头,所述目标监控系统包括所述第一摄像头及第二摄像头,所述第一摄像头600包括:
第一位置获取模块601,用于当所述第一摄像头作为当前的主监控摄像头时,获取待追踪的目标在第一监控画面中的位置信息,所述第一监控画面由所述第一摄像头拍摄;
重叠区域判断模块602,用于根据所述目标在第一监控画面中的位置信息判断所述目标在所述第一监控画面中的位置是否在重叠区域内,所述重叠区域为所述第一摄像头的视域与所述第二摄像头的视域的重叠范围;
切换模块603,用于若所述目标在所述第一监控画面中的位置在所述重叠区域内,将当前的主监控摄像头切换为所述第二摄像头。
在本发明的一些实施例中,如图6-b所示,所述重叠区域判断模块602,包括:
网格确定模块6021,用于根据所述目标在所述第一监控画面中的位置信息,确定所述目标落
入所述第一监控画面中的网格编号,所述第一监控画面上预置有多个网格,所述多个网格具有不同的网格编号,所述多个网格对应于所述第一监控画面中不同的位置区域;
第一网格查询模块6022,用于根据所述目标落入所述第一监控画面中的网格编号查询重叠区域的网格对应表,所述网格对应表包括同一个物理位置点在所述第一监控画面中的网格和在第二监控画面中的网格之间的对应关系,所述第二监控画面由所述第二摄像头拍摄;
重叠区域确定模块6023,用于若在所述网格对应表中查询到所述目标落入所述第一监控画面中的网格编号,确定所述目标在所述第一监控画面中的位置在所述重叠区域内。
在本发明的一些实施例中,如图6-c所示,所述第一摄像头600还包括:特征检测模块604,其中,
所述特征检测模块604,用于所述第一位置获取模块601获取待追踪的目标在第一监控画面中的位置信息之前,获取所述目标的特征信息,并根据所述特征信息检测所述目标是否出现在所述第一监控画面中;若所述目标出现在所述第一监控画面中,触发执行所述位置获取模块。
请参阅图7-a所示,所述目标监控系统700包括:如前述图6-a、图6-b、图6-c中任一项所述的第一摄像头701和第二摄像头702。
进一步的,在图7-b中,所述第二摄像头702包括:
第二网格查询模块7021,用于所述第一摄像头将当前的主监控摄像头切换为所述第二摄像头之后,当所述第二摄像头切换为主监控摄像头时,根据获取到的所述目标落入所述第一监控画面中的网格编号查询所述网格对应表;
目标锁定模块7022,用于若在所述网格对应表中查询到所述目标落入所述第二监控画面中的网格编号,则确定在所述第二监控画面中查找到所述目标;
第二位置获取模块7023,用于根据所述目标落入所述第二监控画面中的网格编号获取到所述目标在所述第二监控画面中的位置信息。
请参阅图8-a所示,本发明实施例提供一种控制器800,所述控制器800部署在目标监控系统中,所述目标监控系统包括:所述控制器800、第一摄像头和第二摄像头,所述控制器800包括:
位置获取模块801,用于当所述第一摄像头作为当前的主监控摄像头时,获取待追踪的目标在第一监控画面中的位置信息,所述第一监控画面由所述第一摄像头拍摄;
重叠区域判断模块802,用于根据所述目标在第一监控画面中的位置信息判断所述目标在所述第一监控画面中的位置是否在重叠区域内,所述重叠区域为所述第一摄像头的视域与所述第二摄像头的视域的重叠范围;
切换模块803,用于若所述目标在所述第一监控画面中的位置在所述重叠区域内,将当前的主监控摄像头切换为所述第二摄像头。
在本发明的一些实施例中,请参阅图8-b所示,所述重叠区域判断模块802,包括:
网格确定模块8021,用于根据所述目标在所述第一监控画面中的位置信息,确定所述目标落入所述第一监控画面中的网格编号,所述第一监控画面上预置有多个网格,所述多个网格具有不同的网格编号,所述多个网格对应于所述第一监控画面中不同的位置区域;
网格查询模块8022,用于根据所述目标落入所述第一监控画面中的网格编号查询重叠区域的网格对应表,所述网格对应表包括同一个物理位置点在所述第一监控画面中的网格和在第二监控画面中的网格之间的对应关系,所述第二监控画面由所述第二摄像头拍摄;
重叠区域确定模块8023,用于若在所述网格对应表中查询到所述目标落入所述第一监控画面
中的网格编号,确定所述目标在所述第一监控画面中的位置在所述重叠区域内。
在本发明的一些实施例中,请参阅图8-c所示,所述控制器800还包括:
目标锁定模块804,用于所述第一摄像头将当前的主监控摄像头切换为所述第二摄像头之后,当所述第二摄像头切换为主监控摄像头时,根据获取到的所述目标落入所述第一监控画面中的网格编号查询所述网格对应表;若在所述网格对应表中查询到所述目标落入所述第二监控画面中的网格编号,则确定在所述第二监控画面中查找到所述目标;
所述位置获取模块801,还用于根据所述目标落入所述第二监控画面中的网格编号获取到所述目标在所述第二监控画面中的位置信息。
在本发明的一些实施例中,所述位置获取模块801,具体用于获取待追踪的目标在第一监控画面中的位置信息之前,获取所述目标的特征信息,并在第一监控画面中检测所述目标的特征信息;若在所述第一监控画面中检测到所述特征信息,计算所述目标在所述第一监控画面中的位置信息,并向所述第一摄像头发送所述目标在所述第一监控画面中的位置信息。
在本发明的一些实施例中,所述目标监控系统还包括监控屏幕,所述切换模块803,具体用于将所述第二监控画面切换到所述监控屏幕上;或者,将所述第二监控画面在所述监控屏幕上突出显示;或者,将所述第二监控画面与所述第一监控画面串接在一起显示在所述监控屏幕上。
在本发明的一些实施例中,所述目标监控系统还包括存储器,请参阅图8-d所示,所述控制器800还包括:存储模块805,用于所述切换模块803将当前的主监控摄像头切换为所述第二摄像头之后,将所述第二摄像头对所述目标拍摄得到的第二监控画面存储到所述存储器中。
请参阅图9-a所示,所述目标监控系统900包括:如前述图8-a、图8-b、图8-c、图8-d中任一项所述的控制器901、第一摄像头902和第二摄像头903。
进一步的,在图9-b中,所述目标监控系统900,还包括:监控屏幕904,和/或存储器905。
图9-b中以目标监控系统900包括监控屏幕904和存储器905进行示意说明。
对于前述实施例中所述的第一摄像头以及控制器,请分别参阅图10和图11,图10中,第一摄像头1000包括:存储器1001、处理器1002、收发器1003,图11中,控制器1100包括:存储器1101、处理器1102、收发器1103,存储器可以包含随机存取存储器(Random Access Memory,RAM)、非易失性存储器(non-volatile memory)、磁盘存储器等。收发器实现接收单元和发送单元的功能,收发器可以由天线、电路等构成。处理器可以是一个中央处理器(Central Processing Unit,CPU),或者处理器可以是专用集成电路(Application Specific Integrated Circuit,简称为ASIC),或者处理器可以是被配置成实施本发明实施例的一个或多个集成电路。处理器与存储器及收发器之间可通过总线相连,处理器用于控制存储器的读写及收发器的收发。处理器具体执行的方法步骤可参阅前述实施例中对方法部分的具体描述,此处不再赘述。
需要说明的是,本发明装置实施例未做详细描述的步骤及有益效果均可参阅对应方法实施例的描述,此处不再赘述。
所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,上述描述的系统,装置和单元的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。
在本申请所提供的几个实施例中,应该理解到,所揭露的系统,装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦
合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本发明各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。
所述集成的单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本发明的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的全部或部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)执行本发明各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(ROM,Read-Only Memory)、随机存取存储器(RAM,Random Access Memory)、磁碟或者光盘等各种可以存储程序代码的介质。
以上所述,以上实施例仅用以说明本发明的技术方案,而非对其限制;尽管参照前述实施例对本发明进行了详细的说明,本领域的普通技术人员应当理解:其依然可以对前述各实施例所记载的技术方案进行修改,或者对其中部分技术特征进行等同替换;而这些修改或者替换,并不使相应技术方案的本质脱离本发明各实施例技术方案的精神和范围。
Claims (20)
- 一种目标监控方法,其特征在于,所述目标监控方法应用于目标监控系统中,所述目标监控系统包括第一摄像头及第二摄像头,所述目标监控方法包括:当所述第一摄像头作为当前的主监控摄像头时,所述目标监控系统获取待追踪的目标在第一监控画面中的位置信息,所述第一监控画面由所述第一摄像头拍摄;所述目标监控系统根据所述目标在第一监控画面中的位置信息判断所述目标在所述第一监控画面中的位置是否在重叠区域内,所述重叠区域为所述第一摄像头的视域与所述第二摄像头的视域的重叠范围;若所述目标在所述第一监控画面中的位置在所述重叠区域内,所述目标监控系统将当前的主监控摄像头切换为所述第二摄像头。
- 根据权利要求1所述的方法,其特征在于,所述目标监控系统根据所述目标在第一监控画面中的位置信息判断所述目标在所述第一监控画面中的位置是否在重叠区域内,包括:所述目标监控系统根据所述目标在所述第一监控画面中的位置信息,确定所述目标落入所述第一监控画面中的网格编号,所述第一监控画面上预置有多个网格,所述多个网格具有不同的网格编号,所述多个网格对应于所述第一监控画面中不同的位置区域;所述目标监控系统根据所述目标落入所述第一监控画面中的网格编号查询重叠区域的网格对应表,所述网格对应表包括同一个物理位置点在所述第一监控画面中的网格和在第二监控画面中的网格之间的对应关系,所述第二监控画面由所述第二摄像头拍摄;若在所述网格对应表中查询到所述目标落入所述第一监控画面中的网格编号,所述目标监控系统确定所述目标在所述第一监控画面中的位置在所述重叠区域内。
- 根据权利要求2所述的方法,其特征在于,所述目标监控系统将当前的主监控摄像头切换为所述第二摄像头之后,所述方法还包括:当所述第二摄像头切换为主监控摄像头时,所述目标监控系统根据获取到的所述目标落入所述第一监控画面中的网格编号查询所述网格对应表;若在所述网格对应表中查询到所述目标落入所述第二监控画面中的网格编号,则所述目标监控系统确定在所述第二监控画面中查找到所述目标;所述目标监控系统根据所述目标落入所述第二监控画面中的网格编号获取到所述目标在所述第二监控画面中的位置信息。
- 根据权利要求1至3中任一项所述的方法,其特征在于,所述目标监控系统获取待追踪的目标在第一监控画面中的位置信息之前,所述方法还包括:所述目标监控系统获取所述目标的特征信息,并根据所述特征信息检测所述目标是否出现在所述第一监控画面中;若所述目标出现在所述第一监控画面中,触发执行如下步骤:所述目标监控系统获取待追踪的目标在第一监控画面中的位置信息。
- 根据权利要求1至4中任一项所述的方法,其特征在于,所述目标监控系统还包括控制器,所述目标监控系统获取待追踪的目标在第一监控画面中的位置信息,包括:所述控制器获取所述目标的特征信息,并在第一监控画面中检测所述目标的特征信息;若在所述第一监控画面中检测到所述特征信息,所述控制器计算所述目标在所述第一监控画面中的位置信息,并向所述第一摄像头发送所述目标在所述第一监控画面中的位置信息。
- 根据权利要求1至5中任一项所述的方法,其特征在于,所述目标监控系统还包括控制器和监控屏幕,所述目标监控系统将当前的主监控摄像头切换为所述第二摄像头,具体包括:所述控制器将所述第二监控画面切换到所述监控屏幕上;或者,所述控制器将所述第二监控画面在所述监控屏幕上突出显示;或者,所述控制器将所述第二监控画面与所述第一监控画面串接在一起显示在所述监控屏幕上。
- 根据权利要求1至6中任一项所述的方法,其特征在于,所述目标监控系统还包括控制器和存储器,所述目标监控系统将当前的主监控摄像头切换为所述第二摄像头之后,所述方法还包括:所述控制器将所述第二摄像头对所述目标拍摄得到的第二监控画面存储到所述存储器中。
- 一种摄像头,其特征在于,所述摄像头具体为目标监控系统中的第一摄像头,所述目标监控系统包括所述第一摄像头及第二摄像头,所述第一摄像头包括:第一位置获取模块,用于当所述第一摄像头作为当前的主监控摄像头时,获取待追踪的目标在第一监控画面中的位置信息,所述第一监控画面由所述第一摄像头拍摄;重叠区域判断模块,用于根据所述目标在第一监控画面中的位置信息判断所述目标在所述第一监控画面中的位置是否在重叠区域内,所述重叠区域为所述第一摄像头的视域与所述第二摄像头的视域的重叠范围;切换模块,用于若所述目标在所述第一监控画面中的位置在所述重叠区域内,将当前的主监控摄像头切换为所述第二摄像头。
- 根据权利要求8所述的摄像头,其特征在于,所述重叠区域判断模块,包括:网格确定模块,用于根据所述目标在所述第一监控画面中的位置信息,确定所述目标落入所述第一监控画面中的网格编号,所述第一监控画面上预置有多个网格,所述多个网格具有不同的网格编号,所述多个网格对应于所述第一监控画面中不同的位置区域;第一网格查询模块,用于根据所述目标落入所述第一监控画面中的网格编号查询重叠区域的网格对应表,所述网格对应表包括同一个物理位置点在所述第一监控画面中的网格和在第二监控画面中的网格之间的对应关系,所述第二监控画面由所述第二摄像头拍摄;重叠区域确定模块,用于若在所述网格对应表中查询到所述目标落入所述第一监控画面中的网格编号,确定所述目标在所述第一监控画面中的位置在所述重叠区域内。
- 根据权利要求8或9所述的摄像头,其特征在于,所述第一摄像头还包括:特征检测模块,其中,所述特征检测模块,用于所述第一位置获取模块获取待追踪的目标在第一监控画面中的位置信息之前,获取所述目标的特征信息,并根据所述特征信息检测所述目标是否出现在所述第一监控画面中;若所述目标出现在所述第一监控画面中,触发执行所述位置获取模块。
- 一种目标监控系统,其特征在于,所述目标监控系统包括:如前述权利要求8至10中任一项所述的第一摄像头和第二摄像头。
- 根据权利要求11所述的目标监控系统,其特征在于,所述第二摄像头包括:第二网格查询模块,用于所述第一摄像头将当前的主监控摄像头切换为所述第二摄像头之后,当所述第二摄像头切换为主监控摄像头时,根据获取到的所述目标落入所述第一监控画面中的网格编号查询所述网格对应表;目标锁定模块,用于若在所述网格对应表中查询到所述目标落入所述第二监控画面中的网格编号,则确定在所述第二监控画面中查找到所述目标;第二位置获取模块,用于根据所述目标落入所述第二监控画面中的网格编号获取到所述目标在所述第二监控画面中的位置信息。
- 一种控制器,其特征在于,所述控制器部署在目标监控系统中,所述目标监控系统包括:所述控制器、第一摄像头和第二摄像头,所述控制器包括:位置获取模块,用于当所述第一摄像头作为当前的主监控摄像头时,获取待追踪的目标在第一监控画面中的位置信息,所述第一监控画面由所述第一摄像头拍摄;重叠区域判断模块,用于根据所述目标在第一监控画面中的位置信息判断所述目标在所述第一监控画面中的位置是否在重叠区域内,所述重叠区域为所述第一摄像头的视域与所述第二摄像头的视域的重叠范围;切换模块,用于若所述目标在所述第一监控画面中的位置在所述重叠区域内,将当前的主监控摄像头切换为所述第二摄像头。
- 根据权利要求13所述的控制器,其特征在于,所述重叠区域判断模块,包括:网格确定模块,用于根据所述目标在所述第一监控画面中的位置信息,确定所述目标落入所述第一监控画面中的网格编号,所述第一监控画面上预置有多个网格,所述多个网格具有不同的网格编号,所述多个网格对应于所述第一监控画面中不同的位置区域;网格查询模块,用于根据所述目标落入所述第一监控画面中的网格编号查询重叠区域的网格对应表,所述网格对应表包括同一个物理位置点在所述第一监控画面中的网格和在第二监控画面中的网格之间的对应关系,所述第二监控画面由所述第二摄像头拍摄;重叠区域确定模块,用于若在所述网格对应表中查询到所述目标落入所述第一监控画面中的网格编号,确定所述目标在所述第一监控画面中的位置在所述重叠区域内。
- 根据权利要求13或14中任一项所述的控制器,其特征在于,所述控制器还包括:目标锁定模块,用于所述第一摄像头将当前的主监控摄像头切换为所述第二摄像头之后,当所述第二摄像头切换为主监控摄像头时,根据获取到的所述目标落入所述第一监控画面中的网格编号查询所述网格对应表;若在所述网格对应表中查询到所述目标落入所述第二监控画面中的网格编号,则确定在所述第二监控画面中查找到所述目标;所述位置获取模块,还用于根据所述目标落入所述第二监控画面中的网格编号获取到所述目标在所述第二监控画面中的位置信息。
- 根据权利要求13至15中任一项所述的控制器,其特征在于,所述位置获取模块,具体用于获取待追踪的目标在第一监控画面中的位置信息之前,获取所述目标的特征信息,并在第一监控画面中检测所述目标的特征信息;若在所述第一监控画面中检测到所述特征信息,计算所述目标在所述第一监控画面中的位置信息,并向所述第一摄像头发送所述目标在所述第一监控画面中的位置信息。
- 根据权利要求13至16中任一项所述的控制器,其特征在于,所述目标监控系统还包括监控屏幕,所述切换模块,具体用于将所述第二监控画面切换到所述监控屏幕上;或者,将所述第二监控画面在所述监控屏幕上突出显示;或者,将所述第二监控画面与所述第一监控画面串接在一起显示在所述监控屏幕上。
- 根据权利要求13至17中任一项所述的控制器,其特征在于,所述目标监控系统还包括存储器,所述控制器还包括:存储模块,用于所述切换模块将当前的主监控摄像头切换为所述第二摄像头之后,将所述第二摄像头对所述目标拍摄得到的第二监控画面存储到所述存储器中。
- 一种目标监控系统,其特征在于,所述目标监控系统包括:如前述权利要求13至18中任一项所述的控制器、第一摄像头和第二摄像头。
- 根据权利要求19所述的目标监控系统,其特征在于,所述目标监控系统,还包括:监控屏幕,和/或存储器。
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US16/321,744 US11023727B2 (en) | 2016-07-29 | 2017-07-13 | Target monitoring method, camera, controller, and target monitoring system |
EP17833438.9A EP3483837B1 (en) | 2016-07-29 | 2017-07-13 | Target monitoring method, camera, controller and target monitoring system |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610624921.4 | 2016-07-29 | ||
CN201610624921.4A CN107666590B (zh) | 2016-07-29 | 2016-07-29 | 一种目标监控方法、摄像头、控制器和目标监控系统 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2018019135A1 true WO2018019135A1 (zh) | 2018-02-01 |
Family
ID=61016858
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2017/092864 WO2018019135A1 (zh) | 2016-07-29 | 2017-07-13 | 一种目标监控方法、摄像头、控制器和目标监控系统 |
Country Status (4)
Country | Link |
---|---|
US (1) | US11023727B2 (zh) |
EP (1) | EP3483837B1 (zh) |
CN (1) | CN107666590B (zh) |
WO (1) | WO2018019135A1 (zh) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108924429A (zh) * | 2018-08-27 | 2018-11-30 | Oppo广东移动通信有限公司 | 一种预览画面显示方法、预览画面显示装置及终端设备 |
CN111093059A (zh) * | 2019-12-12 | 2020-05-01 | 深圳市大拿科技有限公司 | 监控方法及相关设备 |
WO2020153568A1 (en) | 2019-01-21 | 2020-07-30 | Samsung Electronics Co., Ltd. | Electronic apparatus and control method thereof |
CN113114938A (zh) * | 2021-04-12 | 2021-07-13 | 滁州博格韦尔电气有限公司 | 一种基于电子信息的目标精确监控系统 |
Families Citing this family (22)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111091025B (zh) * | 2018-10-23 | 2023-04-18 | 阿里巴巴集团控股有限公司 | 图像处理方法、装置和设备 |
CN111291585B (zh) * | 2018-12-06 | 2023-12-08 | 杭州海康威视数字技术股份有限公司 | 一种基于gps的目标跟踪系统、方法、装置及球机 |
KR20210149722A (ko) | 2019-04-08 | 2021-12-09 | 가부시키가이샤 유야마 세이사쿠쇼 | 약품 불출 시스템, 약품 불출 프로그램 |
CN110472551A (zh) * | 2019-08-09 | 2019-11-19 | 视云融聚(广州)科技有限公司 | 一种提高准确度的跨镜追踪方法、电子设备及存储介质 |
CN110446014B (zh) * | 2019-08-26 | 2021-07-20 | 达闼机器人有限公司 | 一种监控方法、监控设备及计算机可读存储介质 |
CN112468765B (zh) * | 2019-09-06 | 2022-04-15 | 杭州海康威视系统技术有限公司 | 跟踪目标对象的方法、装置、系统、设备及存储介质 |
US20240297962A1 (en) * | 2019-10-21 | 2024-09-05 | Sony Group Corporation | Display control device, display control method, and recording medium |
CN112911205B (zh) * | 2019-12-04 | 2023-08-29 | 上海图漾信息科技有限公司 | 监测系统和方法 |
CN113011445A (zh) * | 2019-12-19 | 2021-06-22 | 斑马智行网络(香港)有限公司 | 标定方法、识别方法、装置及设备 |
US11593951B2 (en) * | 2020-02-25 | 2023-02-28 | Qualcomm Incorporated | Multi-device object tracking and localization |
CN111405203B (zh) * | 2020-03-30 | 2022-11-04 | 杭州海康威视数字技术股份有限公司 | 一种画面切换的确定方法、装置、电子设备及存储介质 |
CN111711845B (zh) * | 2020-06-29 | 2022-07-08 | 广州视源电子科技股份有限公司 | 信号处理方法、设备、系统及存储介质 |
CN111768433B (zh) * | 2020-06-30 | 2024-05-24 | 杭州海康威视数字技术股份有限公司 | 一种移动目标追踪的实现方法、装置及电子设备 |
US11812182B1 (en) * | 2020-09-28 | 2023-11-07 | United Services Automobile Association (Usaa) | Field of view handoff for home security |
CN112380894B (zh) * | 2020-09-30 | 2024-01-19 | 北京智汇云舟科技有限公司 | 一种基于三维地理信息系统的视频重叠区域目标去重方法和系统 |
CN112637550B (zh) * | 2020-11-18 | 2022-12-16 | 合肥市卓迩无人机科技服务有限责任公司 | 多路4k准实时拼接视频的ptz动目标跟踪方法 |
WO2022198442A1 (zh) * | 2021-03-23 | 2022-09-29 | 深圳市锐明技术股份有限公司 | 一种货箱监控方法、终端设备及存储介质 |
CN113012199B (zh) * | 2021-03-23 | 2024-01-12 | 北京灵汐科技有限公司 | 运动目标追踪的系统和方法 |
CN113301273B (zh) * | 2021-05-24 | 2023-06-13 | 浙江大华技术股份有限公司 | 跟踪方式的确定方法、装置、存储介质及电子装置 |
CN114125267B (zh) * | 2021-10-19 | 2024-01-19 | 上海赛连信息科技有限公司 | 一种摄像头智能跟踪的方法和装置 |
CN115209191A (zh) * | 2022-06-14 | 2022-10-18 | 海信视像科技股份有限公司 | 显示设备、终端设备和设备间共享摄像头的方法 |
CN116600194B (zh) * | 2023-05-05 | 2024-07-23 | 长沙妙趣新媒体技术有限公司 | 一种用于多镜头的切换控制方法及系统 |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101068342A (zh) * | 2007-06-05 | 2007-11-07 | 西安理工大学 | 基于双摄像头联动结构的视频运动目标特写跟踪监视方法 |
JP2011193187A (ja) * | 2010-03-15 | 2011-09-29 | Omron Corp | 監視カメラ端末 |
CN102387345A (zh) * | 2011-09-09 | 2012-03-21 | 浙江工业大学 | 基于全方位视觉的独居老人安全监护系统 |
CN102414719A (zh) * | 2009-07-22 | 2012-04-11 | 欧姆龙株式会社 | 监视摄像机终端 |
CN104123732A (zh) * | 2014-07-14 | 2014-10-29 | 中国科学院信息工程研究所 | 一种基于多摄像头的在线目标跟踪方法及系统 |
Family Cites Families (46)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH0795598A (ja) * | 1993-09-25 | 1995-04-07 | Sony Corp | 目標追尾装置 |
US6922493B2 (en) * | 2002-03-08 | 2005-07-26 | Anzus, Inc. | Methods and arrangements to enhance gridlocking |
US20030202102A1 (en) * | 2002-03-28 | 2003-10-30 | Minolta Co., Ltd. | Monitoring system |
US6791603B2 (en) * | 2002-12-03 | 2004-09-14 | Sensormatic Electronics Corporation | Event driven video tracking system |
US7242423B2 (en) * | 2003-06-16 | 2007-07-10 | Active Eye, Inc. | Linking zones for object tracking and camera handoff |
JP4195991B2 (ja) * | 2003-06-18 | 2008-12-17 | パナソニック株式会社 | 監視映像モニタリングシステム、監視映像生成方法、および監視映像モニタリングサーバ |
MY147105A (en) * | 2003-09-03 | 2012-10-31 | Stratech Systems Ltd | Apparatus and method for locating, identifying and tracking vehicles in a parking area |
US7450735B1 (en) * | 2003-10-16 | 2008-11-11 | University Of Central Florida Research Foundation, Inc. | Tracking across multiple cameras with disjoint views |
WO2005107240A1 (ja) * | 2004-04-28 | 2005-11-10 | Chuo Electronics Co., Ltd. | 自動撮影方法および装置 |
JP4587166B2 (ja) * | 2004-09-14 | 2010-11-24 | キヤノン株式会社 | 移動体追跡システム、撮影装置及び撮影方法 |
JP4650669B2 (ja) * | 2004-11-04 | 2011-03-16 | 富士ゼロックス株式会社 | 動体認識装置 |
US20060107296A1 (en) * | 2004-11-16 | 2006-05-18 | Motorola, Inc. | Remote image tracking and methods thereof |
US10019877B2 (en) * | 2005-04-03 | 2018-07-10 | Qognify Ltd. | Apparatus and methods for the semi-automatic tracking and examining of an object or an event in a monitored site |
US8284254B2 (en) * | 2005-08-11 | 2012-10-09 | Sightlogix, Inc. | Methods and apparatus for a wide area coordinated surveillance system |
US8599267B2 (en) | 2006-03-15 | 2013-12-03 | Omron Corporation | Tracking device, tracking method, tracking device control program, and computer-readable recording medium |
DE102006012239A1 (de) * | 2006-03-16 | 2007-09-20 | Siemens Ag | Video-Überwachungssystem |
US8335345B2 (en) | 2007-03-05 | 2012-12-18 | Sportvision, Inc. | Tracking an object with multiple asynchronous cameras |
GB2452512B (en) * | 2007-09-05 | 2012-02-29 | Sony Corp | Apparatus and method of object tracking |
TWI391801B (zh) * | 2008-12-01 | 2013-04-01 | Inst Information Industry | 接手視訊監控方法與系統以及電腦裝置 |
IL201131A (en) * | 2009-09-23 | 2014-08-31 | Verint Systems Ltd | Location-based multimedia monitoring systems and methods |
IL201129A (en) * | 2009-09-23 | 2014-02-27 | Verint Systems Ltd | A system and method for automatically switching cameras according to location measurements |
CN102137251A (zh) * | 2010-01-22 | 2011-07-27 | 鸿富锦精密工业(深圳)有限公司 | 影像监控系统及方法 |
WO2011114799A1 (ja) * | 2010-03-15 | 2011-09-22 | オムロン株式会社 | 監視カメラ端末 |
JP2012065010A (ja) * | 2010-09-14 | 2012-03-29 | Hitachi Kokusai Electric Inc | 遠隔映像監視システム |
KR20120110422A (ko) * | 2011-03-29 | 2012-10-10 | 주식회사 아이티엑스시큐리티 | 방범 시스템과 연동할 수 있는 지능형 영상인식장치 및 그 연동방법 |
CN202190348U (zh) | 2011-04-01 | 2012-04-11 | 天津长城科安电子科技有限公司 | 目标自动跟踪智能摄像机 |
KR101758684B1 (ko) * | 2012-07-23 | 2017-07-14 | 한화테크윈 주식회사 | 객체 추적 장치 및 방법 |
US8698896B2 (en) * | 2012-08-06 | 2014-04-15 | Cloudparc, Inc. | Controlling vehicle use of parking spaces and parking violations within the parking spaces using multiple cameras |
KR101726692B1 (ko) * | 2012-08-24 | 2017-04-14 | 한화테크윈 주식회사 | 객체 추출 장치 및 방법 |
CN102821246B (zh) * | 2012-08-29 | 2015-04-15 | 上海天跃科技股份有限公司 | 一种摄像头联动控制方法及监控系统 |
KR20140106927A (ko) * | 2013-02-27 | 2014-09-04 | 한국전자통신연구원 | 파노라마 생성 장치 및 방법 |
CN103400371B (zh) | 2013-07-09 | 2016-11-02 | 河海大学 | 一种多摄像头协同监控设备及方法 |
CN103607569B (zh) * | 2013-11-22 | 2017-05-17 | 广东威创视讯科技股份有限公司 | 视频监控中的监控目标跟踪方法和系统 |
US9756074B2 (en) * | 2013-12-26 | 2017-09-05 | Fireeye, Inc. | System and method for IPS and VM-based detection of suspicious objects |
JP6371553B2 (ja) * | 2014-03-27 | 2018-08-08 | クラリオン株式会社 | 映像表示装置および映像表示システム |
KR101472077B1 (ko) * | 2014-04-01 | 2014-12-16 | 주식회사 베스트디지탈 | 누적된 객체 특징을 기반으로 하는 감시 시스템 및 방법 |
JP6283105B2 (ja) * | 2014-05-28 | 2018-02-21 | 京セラ株式会社 | ステレオカメラ装置、ステレオカメラ装置を設置した車両及びプログラム |
US10442355B2 (en) * | 2014-09-17 | 2019-10-15 | Intel Corporation | Object visualization in bowl-shaped imaging systems |
US10070077B2 (en) * | 2014-09-26 | 2018-09-04 | Sensormatic Electronics, LLC | System and method for automated camera guard tour operation |
WO2016074123A1 (zh) * | 2014-11-10 | 2016-05-19 | 深圳锐取信息技术股份有限公司 | 一种视频生成系统的视频生成方法及装置 |
CN104660998B (zh) * | 2015-02-16 | 2018-08-07 | 阔地教育科技有限公司 | 一种接力跟踪方法及系统 |
US20180077355A1 (en) * | 2015-03-17 | 2018-03-15 | Nec Corporation | Monitoring device, monitoring method, monitoring program, and monitoring system |
US10075651B2 (en) * | 2015-04-17 | 2018-09-11 | Light Labs Inc. | Methods and apparatus for capturing images using multiple camera modules in an efficient manner |
US10007849B2 (en) * | 2015-05-29 | 2018-06-26 | Accenture Global Solutions Limited | Predicting external events from digital video content |
JP6700752B2 (ja) * | 2015-12-01 | 2020-05-27 | キヤノン株式会社 | 位置検出装置、位置検出方法及びプログラム |
CN105763847A (zh) * | 2016-02-26 | 2016-07-13 | 努比亚技术有限公司 | 一种监控方法及监控终端 |
-
2016
- 2016-07-29 CN CN201610624921.4A patent/CN107666590B/zh active Active
-
2017
- 2017-07-13 US US16/321,744 patent/US11023727B2/en active Active
- 2017-07-13 WO PCT/CN2017/092864 patent/WO2018019135A1/zh unknown
- 2017-07-13 EP EP17833438.9A patent/EP3483837B1/en active Active
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101068342A (zh) * | 2007-06-05 | 2007-11-07 | 西安理工大学 | 基于双摄像头联动结构的视频运动目标特写跟踪监视方法 |
CN102414719A (zh) * | 2009-07-22 | 2012-04-11 | 欧姆龙株式会社 | 监视摄像机终端 |
JP2011193187A (ja) * | 2010-03-15 | 2011-09-29 | Omron Corp | 監視カメラ端末 |
CN102387345A (zh) * | 2011-09-09 | 2012-03-21 | 浙江工业大学 | 基于全方位视觉的独居老人安全监护系统 |
CN104123732A (zh) * | 2014-07-14 | 2014-10-29 | 中国科学院信息工程研究所 | 一种基于多摄像头的在线目标跟踪方法及系统 |
Non-Patent Citations (1)
Title |
---|
See also references of EP3483837A4 * |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108924429A (zh) * | 2018-08-27 | 2018-11-30 | Oppo广东移动通信有限公司 | 一种预览画面显示方法、预览画面显示装置及终端设备 |
CN108924429B (zh) * | 2018-08-27 | 2020-08-21 | Oppo广东移动通信有限公司 | 一种预览画面显示方法、预览画面显示装置及终端设备 |
WO2020153568A1 (en) | 2019-01-21 | 2020-07-30 | Samsung Electronics Co., Ltd. | Electronic apparatus and control method thereof |
EP3874453A4 (en) * | 2019-01-21 | 2022-03-23 | Samsung Electronics Co., Ltd. | ELECTRONIC DEVICE AND ITS CONTROL METHOD |
CN111093059A (zh) * | 2019-12-12 | 2020-05-01 | 深圳市大拿科技有限公司 | 监控方法及相关设备 |
CN113114938A (zh) * | 2021-04-12 | 2021-07-13 | 滁州博格韦尔电气有限公司 | 一种基于电子信息的目标精确监控系统 |
CN113114938B (zh) * | 2021-04-12 | 2022-07-19 | 滁州博格韦尔电气有限公司 | 一种基于电子信息的目标精确监控系统 |
Also Published As
Publication number | Publication date |
---|---|
EP3483837A1 (en) | 2019-05-15 |
EP3483837A4 (en) | 2019-05-15 |
US11023727B2 (en) | 2021-06-01 |
US20190163974A1 (en) | 2019-05-30 |
CN107666590B (zh) | 2020-01-17 |
CN107666590A (zh) | 2018-02-06 |
EP3483837B1 (en) | 2020-09-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2018019135A1 (zh) | 一种目标监控方法、摄像头、控制器和目标监控系统 | |
JP6696615B2 (ja) | 監視システム、監視方法、及び監視プログラムを記憶する記録媒体 | |
JP6399356B2 (ja) | 追跡支援装置、追跡支援システムおよび追跡支援方法 | |
JP6621063B2 (ja) | カメラ選択方法及び映像配信システム | |
KR102296088B1 (ko) | 보행자 추적 방법 및 전자 디바이스 | |
KR101472077B1 (ko) | 누적된 객체 특징을 기반으로 하는 감시 시스템 및 방법 | |
JP6622650B2 (ja) | 情報処理装置及びその制御方法、撮影システム | |
KR101695249B1 (ko) | 감시 영상 표시 방법 및 시스템 | |
JP2013168757A (ja) | 映像監視装置、監視システム、監視システム構築方法 | |
JP6077655B2 (ja) | 撮影システム | |
US20130090133A1 (en) | Apparatus and method for identifying point of interest in contents sharing system | |
JP2009171296A (ja) | 映像ネットワークシステム及び映像データ管理方法 | |
CN103686131A (zh) | 使用图像的3d信息的监控设备和系统以及监控方法 | |
WO2014082407A1 (zh) | 一种视频监控图像的显示方法及系统 | |
KR20150032630A (ko) | 촬상 시스템에 있어서의 제어방법, 제어장치 및 컴퓨터 판독 가능한 기억매체 | |
KR20120118790A (ko) | 카메라 간 협업을 이용한 영상 감시 시스템 및 방법 | |
US20170201723A1 (en) | Method of providing object image based on object tracking | |
KR20160078724A (ko) | 카메라 감시 영역 표시 방법 및 장치 | |
KR101990789B1 (ko) | 관심객체의 선택에 의한 영상 탐색장치 및 방법 | |
JP6396682B2 (ja) | 監視カメラシステム | |
JP2022167992A (ja) | 物体追跡装置、物体追跡方法、およびプログラム | |
US20120162412A1 (en) | Image matting apparatus using multiple cameras and method of generating alpha maps | |
KR101700651B1 (ko) | 위치정보 기반의 공유 경로데이터를 이용한 객체 트래킹 장치 | |
JP2012022561A5 (zh) | ||
KR101781158B1 (ko) | 다중 카메라를 이용한 영상 매팅 장치 및 알파맵 생성 방법 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 17833438 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
ENP | Entry into the national phase |
Ref document number: 2017833438 Country of ref document: EP Effective date: 20190211 |