WO2018228413A1 - Target object capture method and apparatus, and video monitoring device - Google Patents

Target object capture method and apparatus, and video monitoring device

Info

Publication number
WO2018228413A1
Authority
WO
WIPO (PCT)
Prior art keywords
target block
target
target object
magnification
camera
Prior art date
Application number
PCT/CN2018/090992
Other languages
English (en)
French (fr)
Inventor
申琳
童鸿翔
沈林杰
张尚迪
Original Assignee
杭州海康威视数字技术股份有限公司
Priority date
Filing date
Publication date
Application filed by 杭州海康威视数字技术股份有限公司 filed Critical 杭州海康威视数字技术股份有限公司
Priority to US16/623,229 (granted as US11107246B2)
Priority to EP18817790.1A (granted as EP3641298B1)
Publication of WO2018228413A1

Classifications

    • GPHYSICS
    • G08SIGNALLING
    • G08BSIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B13/00Burglar, theft or intruder alarms
    • G08B13/18Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
    • G08B13/189Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
    • G08B13/194Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
    • G08B13/196Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
    • G08B13/19639Details of the system layout
    • G08B13/19641Multiple cameras having overlapping views on a single scene
    • G08B13/19643Multiple cameras having overlapping views on a single scene wherein the cameras play different roles, e.g. different resolution, different camera type, master-slave camera
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/66Remote control of cameras or camera parts, e.g. by remote control devices
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/52Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • GPHYSICS
    • G08SIGNALLING
    • G08BSIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B13/00Burglar, theft or intruder alarms
    • G08B13/18Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
    • G08B13/189Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
    • G08B13/194Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
    • G08B13/196Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
    • G08B13/19602Image analysis to detect motion of the intruder, e.g. by frame subtraction
    • G08B13/19608Tracking movement of a target, e.g. by detecting an object predefined as a target, using target direction and or velocity to predict its new position
    • GPHYSICS
    • G08SIGNALLING
    • G08BSIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B13/00Burglar, theft or intruder alarms
    • G08B13/18Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
    • G08B13/189Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
    • G08B13/194Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
    • G08B13/196Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
    • G08B13/19617Surveillance camera constructional details
    • G08B13/1963Arrangements allowing camera rotation to change view, e.g. pivoting camera, pan-tilt and zoom [PTZ]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/64Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/69Control of means for changing angle of the field of view, e.g. optical zoom objectives or electronic zooming
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/695Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/90Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/183Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a single remote source
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30232Surveillance
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30244Camera pose
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/698Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture

Definitions

  • the present application relates to the field of image processing technologies, and in particular to a target object capture method and apparatus and a video monitoring device.
  • existing video monitoring devices cannot simultaneously ensure a wide monitoring range and a clear image of the target object.
  • the purpose of the embodiments of the present application is to provide a target object capture method and apparatus and a video monitoring device, so as to improve the clarity of the target object while ensuring the monitoring range.
  • the specific technical solutions are as follows:
  • an embodiment of the present application provides a method for capturing a target object, where the method includes: detecting target objects in a current panoramic video frame collected by a panoramic camera; determining first position information and a size of each target object; determining detail camera position information and a magnification corresponding to each target object; and performing block processing on the target objects to obtain at least one target block, where
  • each target block includes one or more target objects;
  • for each target block, according to the detail camera position information and the magnification corresponding to the target block, the detail camera is controlled to adjust its position and magnification, and the adjusted detail camera is controlled to capture the target block.
  • the step of determining the magnification corresponding to each target object according to the size of each target object includes:
  • determining, for each target object, a corresponding field of view according to the size of the target object; determining the magnification corresponding to the field of view according to the preset correspondence between magnification and field of view, and using the determined magnification as the magnification corresponding to the target object.
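The size-to-magnification step above can be sketched as a table lookup. This is an illustrative sketch only: the field-of-view table, the 60-degree panoramic span, and the function name are assumptions, not values from the patent.

```python
# Hypothetical preset correspondence: field-of-view angle (degrees) -> magnification,
# sorted from widest view (lowest zoom) to narrowest view (highest zoom).
FOV_TO_MAGNIFICATION = [(60.0, 1), (30.0, 2), (15.0, 4), (8.0, 8), (4.0, 16)]

def magnification_for_target(size_fraction, panorama_fov=60.0):
    """Pick the detail-camera magnification for a target occupying
    `size_fraction` (0..1) of the panoramic frame's width.

    Heuristic: the field of view needed to frame the target is its
    fraction of the panorama's field of view; choose the narrowest
    preset field of view that still covers the target.
    """
    required_fov = size_fraction * panorama_fov
    chosen = FOV_TO_MAGNIFICATION[0][1]  # fall back to the widest view
    for fov, magnification in FOV_TO_MAGNIFICATION:
        if fov >= required_fov:
            chosen = magnification  # narrower view still covers the target
    return chosen
```

A target filling half the panorama maps to 2x; a small, distant target maps to the highest preset magnification.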
  • before controlling the detail camera to adjust its position and magnification and controlling the adjusted detail camera to capture the target block,
  • the method also includes:
  • controlling the detail camera to adjust its position and magnification, and controlling the adjusted detail camera to capture the target block includes:
  • the captured number of times of the target block is updated, and the step of determining whether there is an uncaptured target block based on the captured number of times of each target block is performed again.
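The loop just described (check for uncaptured blocks, capture the highest-priority one, update its count, repeat) can be sketched as follows. The function name is illustrative, and for brevity the priorities are held fixed here, whereas the patent's flow recalculates them each round.

```python
def capture_order(priorities):
    """Return the order in which target blocks are captured.

    `priorities` maps a block id to its capture priority. Each block
    starts with a captured count of 0 and is captured exactly once,
    highest priority first, mirroring the loop described in the text.
    """
    captured_counts = {block: 0 for block in priorities}
    order = []
    # While any block is still uncaptured, capture the highest-priority one.
    while any(count == 0 for count in captured_counts.values()):
        pending = [b for b, c in captured_counts.items() if c == 0]
        best = max(pending, key=lambda b: priorities[b])
        order.append(best)          # the detail camera would capture `best` here
        captured_counts[best] += 1  # update the captured count
    return order
```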
  • the step of calculating a capture priority of each uncaptured target block includes:
  • for each uncaptured target block, calculating the capture priority of the target block according to the attribute information of each target object included in the target block and the corresponding weights, and/or the position difference between the target block and the last captured target block and the corresponding weight; the attribute information of any target object includes at least one of: a moving direction, a captured number of times, and a leaving time.
  • when the capture priority of each uncaptured target block is calculated according to the attribute information of each target object included in the target block and the corresponding weights, and the position difference between the target block and the last captured target block and the corresponding weight, and the attribute information of any target object includes a moving direction, a captured number of times, and a leaving time, then before calculating the capture priority of the target block, the method further includes:
  • detecting each target object included in the target block, and determining whether the moving direction of each target object is toward the panoramic camera; determining speed information of each target object included in the target block, and determining the departure time of each target object according to the first position information, moving direction, and speed information of each target object in the current panoramic video frame;
  • determining the position difference between the target block and the last captured target block according to the first position information of the target block in the current panoramic video frame and the first position information of the last captured target block in the current panoramic video frame.
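The departure-time and position-difference quantities above can be sketched as follows. This is a one-dimensional simplification with illustrative names; the patent does not specify the exact estimators.

```python
def departure_time(x, direction_x, speed_x, frame_width):
    """Estimated seconds until a target leaves the panoramic frame.

    `x` is the current horizontal position in pixels, `direction_x` is
    +1 (moving right) or -1 (moving left), `speed_x` is pixels per
    second. The 1-D reduction is an illustrative assumption.
    """
    if speed_x <= 0:
        return float("inf")  # stationary targets never leave
    distance_to_edge = (frame_width - x) if direction_x > 0 else x
    return distance_to_edge / speed_x

def position_difference(block_a, block_b):
    """Euclidean distance between the centers of two target blocks,
    each given as (x1, y1, x2, y2) in panoramic-frame coordinates."""
    ax, ay = (block_a[0] + block_a[2]) / 2, (block_a[1] + block_a[3]) / 2
    bx, by = (block_b[0] + block_b[2]) / 2, (block_b[1] + block_b[3]) / 2
    return ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5
```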
  • the step of calculating the capture priority of the target block includes: for any uncaptured target block, calculating the capture priority W of the target block according to the following formula, where:
  • w1 is the weight corresponding to the moving direction;
  • c is the captured number of times of the target object, and w2 is the weight corresponding to the captured number of times;
  • w3 is the weight corresponding to the leaving time;
  • d is the position difference between the target block and the last captured target block, and w4 is the weight corresponding to the position difference.
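The formula itself appears only as an image in the original publication; given the four weights, a plausible reading is a weighted sum over the listed attributes. The variable names, the weighted-sum form, and the sign conventions below are assumptions, not the patent's exact formula.

```python
def capture_priority(moving_direction, captured_count, leaving_time,
                     position_difference, w1, w2, w3, w4):
    """Assumed weighted-sum form of the capture priority W.

    moving_direction: e.g. 1 if moving toward the panoramic camera, else 0;
    captured_count: how many times the target has already been captured;
    leaving_time: estimated time until the target leaves the frame;
    position_difference: distance from the last captured target block.
    The signs of the weights encode the policy, e.g. a negative w2 makes
    already-captured targets less urgent.
    """
    return (moving_direction * w1 + captured_count * w2
            + leaving_time * w3 + position_difference * w4)
```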
  • the step of determining a magnification corresponding to the target block according to a magnification corresponding to each first target object includes:
  • the maximum of the magnifications corresponding to the first target objects is used as the magnification corresponding to the target block; alternatively, the magnification of each first target object is multiplied by a corresponding weight and the results are combined into a comprehensive magnification, which is used as the magnification corresponding to the target block.
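Both combination rules can be sketched in a few lines; the weighted variant assumes a weighted sum, which the text implies but does not state explicitly.

```python
def block_magnification(magnifications, weights=None):
    """Combine the magnifications of a block's edge (first) target objects.

    With no weights, take the maximum so every object is framed at
    sufficient zoom; with weights, form the weighted 'comprehensive
    magnification' mentioned in the text (a weighted sum is assumed).
    """
    if weights is None:
        return max(magnifications)
    return sum(m * w for m, w in zip(magnifications, weights))
```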
  • the step of detecting the target object in the current panoramic video frame collected by the panoramic camera includes:
  • a target object in the current panoramic video frame acquired by the panoramic camera and not present in the previous video frame is detected.
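One way to restrict detection to objects "not present in the previous video frame" is to match current detections against the previous frame's detections by bounding-box overlap and keep the unmatched ones. The IoU threshold and function names are illustrative; the patent does not specify the matching method.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def new_objects(current_boxes, previous_boxes, threshold=0.3):
    """Keep current detections that overlap no previous-frame detection."""
    return [c for c in current_boxes
            if all(iou(c, p) < threshold for p in previous_boxes)]
```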
  • an embodiment of the present application provides a target object capture device, where the device includes:
  • a first detecting module configured to detect a target object in a current panoramic video frame collected by the panoramic camera, and determine first position information and size of each target object in the current panoramic video frame;
  • a first determining module configured to determine, according to the first location information of each target object, and the position mapping relationship between the pre-built panoramic camera and the detail camera, the detailed camera location information corresponding to each target object, and determine according to the size of each target object The magnification corresponding to each target object;
  • a processing module configured to perform block processing on each target object according to the detailed camera position information and the magnification corresponding to each target object, to obtain at least one target block, where each target block includes one or more target objects;
  • an identification module configured to identify, in each target block, the first target objects at edge positions among the target objects included in the target block, determine detail camera position information corresponding to the target block according to the detail camera position information corresponding to each first target object, and determine a magnification corresponding to the target block according to the magnification corresponding to each first target object;
  • a control module configured to, for each target block, control the detail camera to adjust its position and magnification according to the detail camera position information and the magnification corresponding to the target block, and control the adjusted detail camera to capture the target block.
  • the first determining module includes:
  • a first determining submodule configured to determine, according to a size of the target object, a corresponding field of view for each target object
  • the second determining submodule is configured to determine a magnification corresponding to the field of view according to a preset correspondence between the magnification and the angle of view, and use the determined magnification as the magnification corresponding to the target object.
  • the device further includes:
  • a setting module configured to set the number of times of snapping of each target block to an initial value
  • the control module includes:
  • a judging sub-module configured to determine whether there is an uncaptured target block according to the number of times the target block has been captured
  • a control submodule configured to calculate a capture priority of each uncaptured target block when the determination result of the judging submodule is yes, and, for the target block with the highest priority, control the detail camera to adjust its position and magnification according to the detail camera position information and the magnification corresponding to the target block, and control the adjusted detail camera to capture the target block;
  • the update submodule is configured to update the captured number of times of the target block, and trigger the determining submodule.
  • the control submodule is specifically configured to calculate, for each uncaptured target block, the capture priority of the target block according to the attribute information of each target object included in the target block and the corresponding weights, and/or the position difference between the target block and the last captured target block and the corresponding weight; the attribute information of any target object includes at least one of the following: a moving direction, a captured number of times, and a leaving time.
  • when the control submodule calculates the capture priority of each uncaptured target block according to the attribute information of each target object included in the target block and the corresponding weights, and the position difference between the target block and the last captured target block and the corresponding weight, and the attribute information of the target object includes a moving direction, a captured number of times, and a leaving time, the device further includes:
  • a second detecting module configured to detect each target object included in the target block, and determine that a moving direction of each target object is moving toward the panoramic camera or not moving toward the panoramic camera;
  • a second determining module configured to determine speed information of each target object included in the target block, and determine the departure time of each target object according to the first position information, moving direction, and speed information of each target object included in the target block in the current panoramic video frame;
  • a third determining module configured to determine the position difference between the target block and the last captured target block according to the first position information of the target block in the current panoramic video frame and the first position information of the last captured target block in the current panoramic video frame.
  • the control submodule is specifically configured to calculate, for any uncaptured target block, the capture priority W of the target block according to the following formula, where:
  • w1 is the weight corresponding to the moving direction;
  • c is the captured number of times of the target object, and w2 is the weight corresponding to the captured number of times;
  • w3 is the weight corresponding to the leaving time;
  • d is the position difference between the target block and the last captured target block, and w4 is the weight corresponding to the position difference.
  • the first determining module is specifically configured to use the maximum of the magnifications corresponding to the first target objects as the magnification corresponding to the target block, or to multiply the magnification of each first target object by a corresponding weight to obtain a comprehensive magnification, which is used as the magnification corresponding to the target block.
  • the first detecting module is specifically configured to detect a target object in the current panoramic video frame collected by the panoramic camera and not present in the previous video frame.
  • an embodiment of the present application provides a video monitoring device, including a panoramic camera, a detail camera, and a processor;
  • the panoramic camera is configured to collect a current panoramic video frame, and send the current panoramic video frame to the processor;
  • the processor is configured to detect target objects in the current panoramic video frame, and determine first position information and a size of each target object in the current panoramic video frame; determine detail camera position information corresponding to each target object according to the first position information of each target object and the pre-built position mapping relationship between the panoramic camera and the detail camera, and determine a magnification corresponding to each target object according to the size of each target object; perform block processing on the target objects according to the detail camera position information and the magnification corresponding to each target object, to obtain at least one target block, where each target block includes one or more target objects; for each target block, identify the first target objects at edge positions among the target objects included in the target block, determine detail camera position information corresponding to the target block according to the detail camera position information corresponding to each first target object, and determine a magnification corresponding to the target block according to the magnification corresponding to each first target object; and, for each target block, send the detail camera position information and the magnification corresponding to the target block to the detail camera;
  • the detail camera is configured to adjust its own position and magnification according to the received detailed camera position information and magnification corresponding to the target block, and capture the target block.
  • the processor is specifically configured to determine, for each target object, a corresponding field of view according to the size of the target object, determine the magnification corresponding to the field of view according to the preset correspondence between magnification and field of view, and take the determined magnification as the magnification corresponding to the target object.
  • the processor is further configured to set a captured number of times of each target block as an initial value
  • the processor is specifically configured to determine, according to the captured number of times of each target block, whether there is an uncaptured target block; if yes, calculate the capture priority of each uncaptured target block, and, for the target block with the highest priority, send the detail camera position information and the magnification corresponding to the target block to the detail camera; update the captured number of times of that target block, and return to the step of determining, according to the captured number of times of each target block, whether there is an uncaptured target block;
  • the detail camera is specifically configured to adjust its own position and magnification according to the received detailed camera position information and magnification corresponding to the target block, and capture the target block.
  • the processor is specifically configured to calculate, for each uncaptured target block, the capture priority of the target block according to the attribute information of each target object included in the target block and the corresponding weights, and/or the position difference between the target block and the last captured target block and the corresponding weight; the attribute information of any target object includes at least one of the following: a moving direction, a captured number of times, and a leaving time.
  • when the processor calculates the capture priority of each uncaptured target block according to the attribute information of each target object included in the target block and the corresponding weights, and the position difference between the target block and the last captured target block and the corresponding weight, and the attribute information of any target object includes a moving direction, a captured number of times, and a leaving time,
  • the processor is further configured to detect each target object included in the target block, and determine whether the moving direction of each target object is toward the panoramic camera; determine speed information of each target object included in the target block, and determine the departure time of each target object according to the first position information, moving direction, and speed information of each target object included in the target block in the current panoramic video frame; and determine the position difference between the target block and the last captured target block according to the first position information of the target block in the current panoramic video frame and the first position information of the last captured target block in the current panoramic video frame.
  • the processor is specifically configured to calculate, for any uncaptured target block, the capture priority W of the target block according to the following formula, where:
  • w1 is the weight corresponding to the moving direction;
  • c is the captured number of times of the target object, and w2 is the weight corresponding to the captured number of times;
  • w3 is the weight corresponding to the leaving time;
  • d is the position difference between the target block and the last captured target block, and w4 is the weight corresponding to the position difference.
  • the processor is specifically configured to use the maximum of the magnifications corresponding to the first target objects as the magnification corresponding to the target block, or to multiply the magnification of each first target object by a corresponding weight to obtain a comprehensive magnification, which is used as the magnification corresponding to the target block.
  • the processor is specifically configured to detect a target object in the current panoramic video frame collected by the panoramic camera and not present in the previous video frame.
  • the present application provides a storage medium, wherein the storage medium is configured to store executable program code, and the executable program code is executed at runtime to perform the target object capture method according to the first aspect of the present application.
  • the present application provides an application, wherein the application is configured to execute a target object capture method according to the first aspect of the present application at runtime.
  • the embodiment of the present application provides a target object capture method, device, and video monitoring device.
  • the method includes: detecting target objects in a current panoramic video frame collected by a panoramic camera, and determining first position information and a size of each target object in the current panoramic video frame;
  • determining detail camera position information corresponding to each target object according to its first position information, and determining a magnification corresponding to each target object according to its size; performing block processing on the target objects according to the detail camera position information and the magnification corresponding to each target object, to obtain at least one target block, where each target block includes one or more target objects.
  • in this way, when target objects are detected in the panoramic video frame, they can be divided into a plurality of target blocks, and for each target block the position and magnification of the detail camera are adjusted according to the position information, size, and other attributes of the target objects included in the block, thereby improving the capture efficiency and clarity of the target objects while ensuring the monitoring range.
  • FIG. 1 is a flowchart of a method for capturing a target object according to an embodiment of the present application
  • FIG. 2 is a schematic diagram of a panoramic video frame according to an embodiment of the present application.
  • FIG. 3 is a schematic diagram of location information of a target object in a panoramic video frame according to an embodiment of the present application
  • FIG. 4 is a schematic diagram of a result of partitioning a target object according to an embodiment of the present application.
  • FIG. 5 is a schematic diagram of a result of determining a first target object in a target block according to an embodiment of the present application
  • FIG. 6 is another flowchart of a target object capture method according to an embodiment of the present application.
  • FIG. 7 is a schematic structural diagram of a target object capture device according to an embodiment of the present application.
  • FIG. 8 is a schematic structural diagram of a video monitoring device according to an embodiment of the present application.
  • FIG. 1 illustrates a flow of a target object capture method according to an embodiment of the present application, and the method may include the following steps:
  • the video monitoring device of the embodiment of the present application may at least include a panoramic camera, a detail camera, and a processor.
  • the panoramic camera can be a camera with a large monitoring range, such as a box camera or a fisheye camera.
  • the detail camera can be a camera capable of adjusting its capture magnification, such as a dome (PTZ) camera.
  • the position of the detail camera can also be adjusted, so that the monitoring range and the size of the target object in the acquired image can be adjusted.
  • the panoramic camera can collect the panoramic video frame.
  • the panoramic camera can periodically collect panoramic video frames at preset time intervals.
  • the panoramic camera can send the current panoramic video frame it has acquired to the processor.
  • the processor can detect the target object in the current panoramic video frame.
  • the processor may use a target detection algorithm such as DPM (Deformable Parts Model) or Faster R-CNN (Faster Region-based Convolutional Neural Network) to detect target objects in the current panoramic video frame.
  • the target object may be a person, a vehicle, or the like.
  • the target object capture method provided by the embodiment of the present application is described by taking the target object as a human.
  • the current panoramic video frame acquired by the panoramic camera includes target objects 1, 2, 3, 4, 5, 6, 7, 8, 9, and 10.
  • the processor may also determine first location information and size of each target object in the current panoramic video frame. For example, the processor may determine, for each target object, a rectangular area where the target object is located, and determine, according to a preset coordinate system, the upper left corner coordinate and the lower right corner coordinate of the rectangular area as the first position information of the target object. . Correspondingly, the processor can determine the size of the rectangular area where the target object is located as the size of the target object.
  • for target object 1, it can be determined that the rectangular area in which it is located is area 210; according to the coordinate system constructed in the figure, the first position information of target object 1 can be the coordinate information of the upper-left corner 220 and the lower-right corner 230 of area 210.
  • the size of the target object 1 may be the size of the area 210.
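The derivation of first position information and size described above can be sketched as follows. This is a minimal illustration, not text from the patent; the `(x, y, w, h)` bounding-box convention and the function name are assumptions.

```python
# Hypothetical sketch: deriving the "first position information" (upper-left
# and lower-right corner coordinates) and the size of a target object from a
# detected bounding box in the panoramic frame's pixel coordinate system.

def first_position_and_size(box):
    """box = (x, y, w, h): assumed detector output for one target object."""
    x, y, w, h = box
    first_position = ((x, y), (x + w, y + h))  # upper-left, lower-right corners
    size = (w, h)                              # size of the rectangular area
    return first_position, size

position, size = first_position_and_size((100, 50, 80, 240))
print(position, size)  # ((100, 50), (180, 290)) (80, 240)
```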
  • S102: Determine, according to the first position information of each target object and the pre-built position mapping relationship between the panoramic camera and the detail camera, the detail camera position information corresponding to each target object, and determine, according to the size of each target object, the magnification corresponding to each target object.
  • the position mapping relationship between the panoramic camera and the detail camera may be constructed in advance.
  • for example, when the position information of a target object in the panoramic video frame collected by the panoramic camera is a1, the position information of the corresponding detail camera is b1; when the position information of a target object in the panoramic video frame is a2, the position information of the corresponding detail camera is b2, and so on.
  • the position information of the detail camera may include its horizontal direction position information and vertical direction position information.
  • the detail camera position information corresponding to each target object may be determined according to the first position information of each target object and the pre-built position mapping relationship between the panoramic camera and the detail camera. That is, the position to which the detail camera should be adjusted in order to capture each target object is determined.
  • the processor may search for the first location information of the target object in the location mapping relationship between the pre-stored panoramic camera and the detail camera, and use the location information of the detail camera corresponding to the first location information as The detailed camera position information corresponding to the target object.
  • the magnification of the detail camera can be adjusted.
  • the processor may determine a magnification corresponding to each target object according to the size of each target object.
  • for example, a person may be regarded as meeting the identifiable-detail standard when the person's pixel width in the image reaches 240 pixels.
  • the processor can determine the magnifications corresponding to target objects of different sizes. For example, for a larger target object, the magnification of the detail camera can be adjusted to a smaller value to capture the complete target object; for a smaller target object, the magnification of the detail camera can be adjusted to a larger value to enlarge the target object as much as possible and improve its clarity.
  • the processor may determine, for each target object, a corresponding field of view according to the size of the target object, and then determine the magnification corresponding to that field of view according to a preset correspondence between magnification and field of view, taking the determined magnification as the magnification corresponding to the target object.
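The size-to-magnification step can be sketched as below. The 240-pixel identifiable-detail standard comes from the text; the field-of-view numbers, the lookup table, and the proportional zoom model are illustrative assumptions.

```python
# Hypothetical sketch: pick a magnification so a target object reaches the
# identifiable-detail standard (~240 px wide, per the text).

IDENTIFIABLE_WIDTH_PX = 240

# Assumed preset correspondence between field of view (degrees) and magnification.
FOV_TO_MAGNIFICATION = {60.0: 1, 30.0: 2, 15.0: 4, 7.5: 8}

def magnification_for(object_width_px, base_fov_deg=60.0):
    # A smaller target needs a narrower field of view (higher magnification).
    needed_zoom = IDENTIFIABLE_WIDTH_PX / object_width_px
    # Assume the required field of view shrinks in proportion to the zoom.
    needed_fov = base_fov_deg / needed_zoom
    # Pick the widest preset field of view that is still narrow enough.
    for fov in sorted(FOV_TO_MAGNIFICATION, reverse=True):
        if fov <= needed_fov:
            return FOV_TO_MAGNIFICATION[fov]
    return max(FOV_TO_MAGNIFICATION.values())

print(magnification_for(60))   # a 60 px person needs 4x zoom -> magnification 4
print(magnification_for(240))  # already meets the standard -> magnification 1
```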
  • S103: Perform block processing on the target objects according to the detail camera position information and the magnification corresponding to each target object, to obtain at least one target block, where each target block includes one or more target objects.
  • the processor may perform block processing on each target object according to the detailed camera position information and the magnification corresponding to each target object, to obtain at least one target block, where each target block includes one or more targets. Object.
  • the detail camera covers a different position range at each magnification. After the detail camera position information and the magnification corresponding to each target object are obtained, all target objects whose detail camera positions fall within one position range and whose magnifications are within a certain range of one another (for example, 0.5x) are grouped into one part, finally forming the different target blocks.
  • each target block obtained may be as shown in FIG. 4.
  • the target objects can be divided into four blocks: one block for target objects 7, 8, 9, and 10; one block for target objects 2, 3, and 4; one block for target objects 5 and 6; and one block for target object 1.
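The grouping step can be sketched as a simple greedy pass: a target joins an existing block when its detail camera position falls within that block's coverage range and its magnification is within 0.5x of the block's. The thresholds and the anchor-based grouping rule are illustrative assumptions, not the patent's exact procedure.

```python
# Hypothetical greedy sketch of the block processing (step S103).

def make_blocks(targets, pos_range=30.0, mag_range=0.5):
    """targets: list of dicts with 'pan', 'tilt', 'mag' per target object."""
    blocks = []
    for t in targets:
        for block in blocks:
            anchor = block[0]  # compare against the block's first member
            if (abs(t["pan"] - anchor["pan"]) <= pos_range
                    and abs(t["tilt"] - anchor["tilt"]) <= pos_range
                    and abs(t["mag"] - anchor["mag"]) <= mag_range):
                block.append(t)
                break
        else:
            blocks.append([t])  # no compatible block: start a new one
    return blocks

targets = [
    {"pan": 10, "tilt": 5, "mag": 2.0},
    {"pan": 12, "tilt": 6, "mag": 2.2},
    {"pan": 80, "tilt": 5, "mag": 2.0},
]
print(len(make_blocks(targets)))  # 2: the first two group, the third is alone
```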
  • S104: For each target block, identify the first target objects at edge positions among the target objects included in the target block, determine the detail camera position information corresponding to the target block according to the detail camera position information corresponding to each first target object, and determine the magnification corresponding to the target block according to the magnification corresponding to each first target object.
  • the processor may further determine detailed camera position information and magnification corresponding to each target block. Specifically, the processor may first identify, for each target block, a first target object at an edge location among each target object included in the target block.
  • FIG. 5 it shows a schematic diagram including a plurality of target objects in a target block.
  • the processor can recognize that the first target objects at the edge positions are the target objects 510, 520, 550, and 560, respectively.
  • the processor may determine the detail camera position information corresponding to the target block according to the detail camera position information corresponding to each first target object, and determine the magnification corresponding to the target block according to the magnification corresponding to each first target object.
  • the maximum value of the magnifications corresponding to the first target objects may be used as the magnification corresponding to the target block, or the magnification of each first target object may be multiplied by the corresponding weight to obtain a comprehensive magnification as the magnification corresponding to the target block.
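The two strategies just described for a block's magnification can be sketched as follows; the example magnifications and weights are illustrative assumptions.

```python
# Sketch of the two block-magnification strategies in the text: the maximum
# over the edge ("first") target objects, or a weighted comprehensive value.

def block_magnification(edge_mags, weights=None):
    if weights is None:
        return max(edge_mags)  # strategy 1: maximum of the edge objects
    assert len(weights) == len(edge_mags)
    # strategy 2: multiply each magnification by its weight and sum
    return sum(m * w for m, w in zip(edge_mags, weights))

print(block_magnification([2.0, 3.0, 2.5]))                   # 3.0
print(block_magnification([2.0, 3.0, 2.5], [0.2, 0.5, 0.3]))  # 2.65
```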
  • S105: For each target block, control the detail camera to adjust its position and magnification according to the detail camera position information and the magnification corresponding to the target block, and control the adjusted detail camera to capture the target block.
  • the processor may control the detail camera to adjust its position and magnification according to the detailed camera position information and the magnification corresponding to the target block for each target block, and control the adjusted The detail camera captures the target block.
  • for each target block, the processor may transmit a snap instruction including the detail camera position information and the magnification corresponding to the target block to the detail camera.
  • after receiving the snap instruction, the detail camera can adjust its position and magnification according to the detail camera position information and magnification contained in the instruction, and capture the target block.
  • in the embodiment of the present application, when target objects in the panoramic video frame are detected, the target objects can be subjected to block processing to obtain a plurality of target blocks; for each target block, the capture position and magnification of the detail camera are adjusted according to the position information, size, and the like of the target objects included in the target block, thereby improving the capture efficiency and sharpness of the target objects while ensuring the monitoring range.
  • when detecting target objects in the current panoramic video frame, the processor may detect the target objects that are present in the current panoramic video frame collected by the panoramic camera but are not present in the previous panoramic video frame.
  • in order to better capture each target block (for example, to ensure that every target block is captured and that the front side of each target object in the target block is captured), the processor can prioritize the target blocks and capture them in order of priority.
  • the processor can perform the following steps:
  • S601: Determine, according to the captured count of each target block, whether there is an uncaptured target block, and if yes, execute step S602.
  • the processor may determine whether there is an uncaptured target block according to the captured count of each target block. For example, when the captured count of at least one target block is 0, it may be determined that an uncaptured target block exists; when the captured count of every target block is non-zero, it may be determined that no uncaptured target block exists.
  • S602: Calculate the capture priority of each uncaptured target block; for the target block with the highest priority, control the detail camera to adjust its position and magnification according to the detail camera position information and the magnification corresponding to the target block, and control the adjusted detail camera to capture the target block.
  • the processor may calculate the capture priority of each uncaptured target block. For example, the processor may, for each uncaptured target block, calculate the capture priority of the target block according to the attribute information of each target object included in the target block and the corresponding weights, and/or the position difference between the target block and the last captured target block and the corresponding weight; the attribute information of any target object includes at least one of the following: the moving direction, the captured count, and the departure time.
  • the moving direction of a target object may be either moving toward the panoramic camera or not moving toward the panoramic camera.
  • the departure time is the time at which the target object is expected to leave the scene monitored by the panoramic camera.
  • the position difference between the target block and the last captured target block is the distance between the target block and the last captured target block in the panoramic video frame.
  • when the processor calculates the capture priority of a target block according to the attribute information of the target objects included in the target block and the corresponding weights, and the position difference between the target block and the last captured target block and the corresponding weight, and the attribute information of any target object includes the moving direction, the captured count, and the departure time, the processor may, before calculating the capture priority of each uncaptured target block, determine for each uncaptured target block the moving direction and departure time of each target object included in the target block, and the position difference between the target block and the last captured target block.
  • the processor may detect each target object included in the target block, and determine whether the moving direction of each target object is moving toward the panoramic camera or not moving toward the panoramic camera.
  • the processor may use a target detection type algorithm such as DPM or FRCNN to determine the moving direction of each target object.
  • the processor may determine the speed information of each target object included in the target block, and determine the departure time of each target object according to the first position information, moving direction, and speed information of each target object included in the target block in the current panoramic video frame.
  • specifically, the processor may first determine whether a target object exists in a previously collected panoramic video frame, such as the previous video frame; if so, the speed information of the target object can be determined from the multiple video frames. Further, for any target object, the processor may determine, according to the first position information and the moving direction of the target object, the distance of the target object from the edge of the monitored scene, and then calculate the departure time of the target object according to that distance and the speed information of the target object.
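The departure-time estimate above can be sketched as distance to the scene edge along the moving direction divided by speed. The straight-line, constant-velocity model and the pixel/second units are illustrative assumptions.

```python
# Hypothetical sketch: time until a target object leaves the monitored scene,
# assuming constant velocity (px/s) within a frame of size frame_w x frame_h.

def departure_time(position, velocity, frame_w, frame_h):
    x, y = position
    vx, vy = velocity
    times = []
    if vx > 0:
        times.append((frame_w - x) / vx)   # heading toward the right edge
    elif vx < 0:
        times.append(x / -vx)              # heading toward the left edge
    if vy > 0:
        times.append((frame_h - y) / vy)   # heading toward the bottom edge
    elif vy < 0:
        times.append(y / -vy)              # heading toward the top edge
    # Whichever edge is reached first; a stationary object never leaves.
    return min(times) if times else float("inf")

print(departure_time((1800, 500), (50, 0), 1920, 1080))  # 2.4 (seconds)
```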
  • the processor may determine the position difference between the target block and the last captured target block according to the first position information of the target block in the current panoramic video frame and the first position information of the last captured target block in the current panoramic video frame.
  • for the target block, the processor may calculate the capture priority W of the target block according to the following formula: W = Σ_{i=1}^{n} (f_i·w1 + c_i·w2 + t_i·w3) + d·w4, where n is the number of target objects included in the target block; f_i, c_i, and t_i are respectively the moving direction (f_i = 1 when the target object is moving toward the panoramic camera, otherwise f_i = 0), the captured count, and the departure time of the i-th target object; d is the position difference between the target block and the last captured target block; and w1 to w4 are the corresponding weights.
  • Each weight can be pre-set and saved in the processor.
  • the values of the weights can be set according to actual application requirements, which is not limited in the embodiments of the present application.
  • the processor can control the detail camera to adjust its position and magnification according to the detail camera position information and the magnification corresponding to the target block with the highest priority, and control the adjusted detail camera to capture the target block.
  • after that, the processor may update the captured count of the target block, for example updating its captured count to 1, and return to step S601 to capture the next uncaptured target block.
  • the processor may determine the capture priority of each target block, and then sequentially capture the target blocks according to the capture priority of each target block.
  • since the capture priority of each target block is calculated according to the moving direction and departure time of each target object included in the target block and the position difference between the target block and the last captured target block, a higher-priority target block is one whose target objects are moving toward the panoramic camera, have short departure times, and have not yet been captured, and which is closer to the last captured target block. This ensures that each target object in the target block is captured at high resolution, that each target object is captured as far as possible, and that the capture efficiency is high.
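The priority-ordered capture loop (steps S601/S602) can be sketched as below. The linear priority formula is a hedged reconstruction from the variable definitions in the text (per-object direction f, captured count c, departure time t, plus the block-level position difference d); the specific weight values, and the use of negative weights to penalize captured, slow-leaving, and distant blocks, are assumptions.

```python
# Hedged sketch of the priority-ordered capture loop.

def capture_priority(block, d, w1=10.0, w2=-5.0, w3=-0.1, w4=-0.01):
    """block: list of (f, c, t) tuples, one per target object.
    Negative weights (an assumption) lower the priority of already-captured
    objects, long departure times, and blocks far from the last captured one."""
    return sum(f * w1 + c * w2 + t * w3 for f, c, t in block) + d * w4

def capture_all(blocks, distance_to_last):
    captured = [0] * len(blocks)
    order = []
    while not all(captured):
        pending = [i for i in range(len(blocks)) if not captured[i]]
        best = max(pending,
                   key=lambda i: capture_priority(blocks[i], distance_to_last(i)))
        order.append(best)   # steer the detail camera and snap this block
        captured[best] = 1   # update the block's captured count
    return order

# One object moving toward the camera with a short departure time outranks
# one moving away with a long departure time.
blocks = [[(1, 0, 5.0)], [(0, 0, 30.0)]]
print(capture_all(blocks, lambda i: 0.0))  # [0, 1]
```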
  • the embodiment of the present application further provides a target object capture device.
  • the device includes:
  • a first detecting module 710 configured to detect a target object in a current panoramic video frame collected by the panoramic camera, and determine first location information and size of each target object in the current panoramic video frame;
  • the first determining module 720 is configured to determine, according to the first position information of each target object and the pre-built position mapping relationship between the panoramic camera and the detail camera, the detail camera position information corresponding to each target object, and determine, according to the size of each target object, the magnification corresponding to each target object;
  • the processing module 730 is configured to perform block processing on each target object according to the detailed camera position information and the magnification corresponding to each target object, to obtain at least one target block, where each target block includes one or more target objects;
  • the identification module 740 is configured to identify, for each target block, the first target objects at edge positions among the target objects included in the target block, determine the detail camera position information corresponding to the target block according to the detail camera position information corresponding to each first target object, and determine the magnification corresponding to the target block according to the magnification corresponding to each first target object;
  • the control module 750 is configured to, for each target block, control the detail camera to adjust its position and magnification according to the detailed camera position information and the magnification corresponding to the target block, and control the adjusted detail camera to capture the target block.
  • in the embodiment of the present application, when target objects in the panoramic video frame are detected, the target objects can be subjected to block processing to obtain a plurality of target blocks; for each target block, the capture position and magnification of the detail camera are adjusted according to the position information, size, and the like of the target objects included in the target block, thereby improving the capture efficiency and sharpness of the target objects while ensuring the monitoring range.
  • the first determining module 720 includes:
  • a first determining sub-module (not shown in the figure), configured to determine, according to the size of the target object, a corresponding field of view for each target object;
  • a second determining sub-module (not shown), configured to determine the magnification corresponding to the field of view according to a preset correspondence between magnification and field of view, and use the determined magnification as the magnification corresponding to the target object.
  • the device further includes:
  • a setting module (not shown), configured to set the captured count of each target block to an initial value;
  • the control module 750 includes:
  • a judging sub-module (not shown) for judging whether there is an uncaptured target block according to the number of times the target block has been captured;
  • a control sub-module, configured to: when the judgment result of the judging sub-module is yes, calculate the capture priority of each uncaptured target block; and, for the target block with the highest priority, control the detail camera to adjust its position and magnification according to the detail camera position information and the magnification corresponding to the target block, and control the adjusted detail camera to capture the target block;
  • An update sub-module (not shown) is used to update the captured number of times of the target block and trigger the determination sub-module.
  • the control sub-module is specifically configured to calculate, for each uncaptured target block, the capture priority of the target block according to the attribute information of each target object included in the target block and the corresponding weights, and/or the position difference between the target block and the last captured target block and the corresponding weight; the attribute information of any target object includes at least one of the following: the moving direction, the captured count, and the departure time.
  • when the control sub-module is specifically used to calculate, for each uncaptured target block, the capture priority of the target block according to the attribute information of each target object included in the target block and the corresponding weights, and the position difference between the target block and the last captured target block and the corresponding weight, and the attribute information of any target object includes the moving direction, the captured count, and the departure time, the device also includes:
  • a second detecting module, configured to detect each target object included in the target block, and determine whether the moving direction of each target object is moving toward the panoramic camera or not moving toward the panoramic camera;
  • a second determining module, configured to determine the speed information of each target object included in the target block, and determine the departure time of each target object according to the first position information, moving direction, and speed information of each target object included in the target block in the current panoramic video frame;
  • a third determining module, configured to determine the position difference between the target block and the last captured target block according to the first position information of the target block in the current panoramic video frame and the first position information of the last captured target block in the current panoramic video frame.
  • the control sub-module is specifically configured to calculate, for any uncaptured target block, the capture priority W of the target block according to the following formula:
  • W = Σ_{i=1}^{n} (f_i·w1 + c_i·w2 + t_i·w3) + d·w4
  • where n is the number of target objects included in the target block; f is the moving direction of a target object, with f = 1 when the target object is moving toward the panoramic camera and f = 0 otherwise, and w1 is the weight corresponding to the moving direction; c is the captured count of the target object, and w2 is the weight corresponding to the captured count; t is the departure time of the target object, and w3 is the weight corresponding to the departure time; d is the position difference between the target block and the last captured target block, and w4 is the weight corresponding to the position difference.
  • the first determining module is specifically configured to use the maximum value of the magnifications corresponding to the first target objects as the magnification corresponding to the target block, or to multiply the magnification of each first target object by a corresponding weight to obtain a comprehensive magnification as the magnification corresponding to the target block.
  • the first detecting module is specifically configured to detect a target object in the current panoramic video frame collected by the panoramic camera and not present in the previous video frame.
  • the embodiment of the present application further provides a video monitoring device, as shown in FIG. 8, including a panoramic camera 810, a processor 820, and a detail camera 830;
  • the panoramic camera 810 is configured to collect a current panoramic video frame, and send the current panoramic video frame to the processor 820;
  • the processor 820 is configured to: detect target objects in the current panoramic video frame, and determine the first position information and size of each target object in the current panoramic video frame; determine, according to the first position information of each target object and the pre-built position mapping relationship between the panoramic camera and the detail camera, the detail camera position information corresponding to each target object, and determine, according to the size of each target object, the magnification corresponding to each target object; perform block processing on the target objects according to the detail camera position information and the magnification corresponding to each target object to obtain at least one target block, where each target block includes one or more target objects; for each target block, identify the first target objects at edge positions among the target objects included in the target block, determine the detail camera position information corresponding to the target block according to the detail camera position information corresponding to each first target object, and determine the magnification corresponding to the target block according to the magnification corresponding to each first target object; and send the detail camera position information and the magnification corresponding to each target block to the detail camera;
  • the detail camera 830 is configured to adjust its own position and magnification according to the received detailed camera position information and magnification corresponding to the target block, and capture the target block.
  • in the embodiment of the present application, when target objects in the panoramic video frame are detected, the target objects can be subjected to block processing to obtain a plurality of target blocks; for each target block, the capture position and magnification of the detail camera are adjusted according to the position information, size, and the like of the target objects included in the target block, thereby improving the capture efficiency and sharpness of the target objects while ensuring the monitoring range.
  • the processor 820 is specifically configured to determine, for each target object, a corresponding field of view according to the size of the target object; determine, according to a preset correspondence between magnification and field of view, the magnification corresponding to the field of view; and use the determined magnification as the magnification corresponding to the target object.
  • the processor 820 is further configured to set the captured number of times of each target block as an initial value
  • the processor 820 is specifically configured to: determine, according to the captured count of each target block, whether there is an uncaptured target block; if yes, calculate the capture priority of each uncaptured target block, and, for the target block with the highest priority, send the detail camera position information and the magnification corresponding to the target block to the detail camera 830; and update the captured count of the target block and return to the step of determining, according to the captured count of each target block, whether there is an uncaptured target block;
  • the detail camera 830 is specifically configured to adjust its own position and magnification according to the received detailed camera position information and magnification corresponding to the target block, and capture the target block.
  • the processor 820 is specifically configured to calculate, for each uncaptured target block, the capture priority of the target block according to the attribute information of each target object included in the target block and the corresponding weights, and/or the position difference between the target block and the last captured target block and the corresponding weight; the attribute information of any target object includes at least one of the following: the moving direction, the captured count, and the departure time.
  • when the processor 820 calculates, for each uncaptured target block, the capture priority of the target block according to the attribute information of each target object included in the target block and the corresponding weights, and the position difference between the target block and the last captured target block and the corresponding weight, and the attribute information of any target object includes the moving direction, the captured count, and the departure time:
  • the processor 820 is further configured to: detect each target object included in the target block, and determine whether the moving direction of each target object is moving toward the panoramic camera or not moving toward the panoramic camera; determine the speed information of each target object included in the target block, and determine the departure time of each target object according to the first position information, moving direction, and speed information of each target object included in the target block in the current panoramic video frame; and determine the position difference between the target block and the last captured target block according to the first position information of the target block in the current panoramic video frame and the first position information of the last captured target block in the current panoramic video frame.
  • the processor 820 is specifically configured to calculate, for any uncaptured target block, the capture priority W of the target block according to the following formula:
  • W = Σ_{i=1}^{n} (f_i·w1 + c_i·w2 + t_i·w3) + d·w4
  • where n is the number of target objects included in the target block; f is the moving direction of a target object, with f = 1 when the target object is moving toward the panoramic camera and f = 0 otherwise, and w1 is the weight corresponding to the moving direction; c is the captured count of the target object, and w2 is the weight corresponding to the captured count; t is the departure time of the target object, and w3 is the weight corresponding to the departure time; d is the position difference between the target block and the last captured target block, and w4 is the weight corresponding to the position difference.
  • the processor 820 is specifically configured to use the maximum value of the magnifications corresponding to the first target objects as the magnification corresponding to the target block, or to multiply the magnification of each first target object by a corresponding weight to obtain a comprehensive magnification as the magnification corresponding to the target block.
  • the processor 820 is specifically configured to detect a target object in the current panoramic video frame collected by the panoramic camera and not present in the previous video frame.
  • the embodiment of the present application further provides a storage medium for storing executable program code, where the executable program code is used to execute, at runtime, the target object capture method described in the embodiment of the present application, the target object capture method comprising:
  • each target block includes one or more target objects
  • the detail camera is controlled to adjust its position and magnification, and the adjusted detail camera is controlled to capture the target block.
  • in the embodiment of the present application, when target objects in the panoramic video frame are detected, the target objects can be subjected to block processing to obtain a plurality of target blocks; for each target block, the capture position and magnification of the detail camera are adjusted according to the position information, size, and the like of the target objects included in the target block, thereby improving the capture efficiency and sharpness of the target objects while ensuring the monitoring range.
  • the embodiment of the present application further provides an application program, where the application is used to execute a target object capture method according to the embodiment of the present application at runtime, where the target object capture method includes:
  • each target block includes one or more target objects
  • the detail camera is controlled to adjust its position and magnification, and the adjusted detail camera is controlled to capture the target block.
  • in the embodiment of the present application, when target objects in the panoramic video frame are detected, the target objects can be subjected to block processing to obtain a plurality of target blocks; for each target block, the capture position and magnification of the detail camera are adjusted according to the position information, size, and the like of the target objects included in the target block, thereby improving the capture efficiency and sharpness of the target objects while ensuring the monitoring range.
  • the above descriptions are relatively simple; for relevant parts, reference may be made to the description of the method embodiment.


Abstract

The embodiments of the present application provide a target object capture method and apparatus, and a video monitoring device. The method includes: detecting target objects in a current panoramic video frame collected by a panoramic camera; determining detail camera position information corresponding to each target object, and determining a magnification corresponding to each target object; performing block processing on the target objects to obtain at least one target block; for each target block, identifying, among the target objects included in the target block, the first target objects at edge positions, and determining, according to the first target objects, the detail camera position information and the magnification corresponding to the target block; and, for each target block, controlling the detail camera to adjust its position and magnification according to the detail camera position information and the magnification corresponding to the target block, and controlling the adjusted detail camera to capture the target block. The embodiments of the present application can improve the sharpness of target objects while ensuring the monitoring range.

Description

Target object capture method and apparatus, and video monitoring device
This application claims priority to Chinese patent application No. 201710459265.1, filed with the Chinese Patent Office on June 16, 2017 and entitled "Target object capture method and apparatus, and video monitoring device", the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the technical field of image processing, and in particular to a target object capture method and apparatus, and a video monitoring device.
Background
With the continuous development of video surveillance technology, video monitoring devices have been widely applied in the security field. In a monitoring scenario, the monitoring device is usually required to monitor a large-scale scene and to capture monitoring images of high sharpness.
However, when a panoramic camera with a large monitoring range (such as a box camera) is used for monitoring, the targets in the monitoring image are usually small, leading to problems such as the details of the target objects being unclear. When a detail camera (such as a dome camera) is used for monitoring, clear target objects can usually be obtained in the monitoring image, but the monitoring range is often small. Therefore, existing video monitoring devices suffer from the problem that the monitoring range and the sharpness of target objects cannot both be achieved.
Summary
The purpose of the embodiments of the present application is to provide a target object capture method and apparatus, and a video monitoring device, so as to improve the sharpness of target objects while ensuring the monitoring range. The specific technical solutions are as follows:
第一方面,本申请实施例提供了一种目标对象抓拍方法,所述方法包括:
检测全景相机所采集的当前全景视频帧中的目标对象,确定各目标对象在所述当前全景视频帧中的第一位置信息、大小;
根据各目标对象的第一位置信息，以及预先构建的全景相机和细节相机的位置映射关系，确定各目标对象对应的细节相机位置信息，并根据各目标对象的大小确定各目标对象对应的倍率；
根据各目标对象对应的细节相机位置信息和倍率,对各目标对象进行分块处理,得到至少一个目标块,其中,各目标块中包含一个或多个目标对象;
针对每个目标块,在该目标块包含的各目标对象中识别处于边缘位置的第一目标对象,并根据各第一目标对象对应的细节相机位置信息确定该目标块对应的细节相机位置信息,根据各第一目标对象对应的倍率确定该目标块对应的倍率;
针对每个目标块,根据该目标块对应的细节相机位置信息和倍率,控制所述细节相机调整其位置和倍率,并控制调整后的细节相机抓拍该目标块。
可选的,所述根据各目标对象的大小确定各目标对象对应的倍率的步骤包括:
针对每个目标对象,根据该目标对象的大小,确定对应的视场角;
根据预设的倍率和视场角的对应关系,确定该视场角对应的倍率,并将确定的倍率作为该目标对象对应的倍率。
可选的,所述针对每个目标块,根据该目标块对应的细节相机位置信息和倍率,控制所述细节相机调整其位置和倍率,并控制调整后的细节相机抓拍该目标块之前,所述方法还包括:
设置各目标块的已抓拍次数为初始值;
所述针对每个目标块,根据该目标块对应的细节相机位置信息和倍率,控制所述细节相机调整其位置和倍率,并控制调整后的细节相机抓拍该目标块的步骤包括:
根据各目标块的已抓拍次数,判断是否存在未抓拍的目标块;
如果存在,计算各未抓拍的目标块的抓拍优先级,针对优先级最高的目标块,根据该目标块对应的细节相机位置信息和倍率,控制所述细节相机调整其位置和倍率,控制调整后的细节相机抓拍该目标块;
更新该目标块的已抓拍次数，并返回执行所述根据各目标块的已抓拍次数，确定是否存在未抓拍的目标块的步骤。
可选的,所述计算各未抓拍的目标块的抓拍优先级的步骤包括:
针对每个未抓拍的目标块,根据该目标块中包括的各目标对象的属性信息以及对应的权重,和/或该目标块与上次抓拍目标块的位置差以及对应的权重,计算该目标块的抓拍优先级;其中,任一目标对象的属性信息包括以下至少一项:移动方向、已抓拍次数、以及离开时间。
可选的,当针对每个未抓拍的目标块,根据该目标块中包括的各目标对象的属性信息以及对应的权重,和该目标块与上次抓拍目标块的位置差以及对应的权重,计算该目标块的抓拍优先级,且任一目标对象的属性信息包括:移动方向、已抓拍次数、以及离开时间时,所述针对每个未抓拍的目标块,根据该目标块中包括的各目标对象的属性信息以及对应的权重,和该目标块与上次抓拍目标块的位置差以及对应的权重,计算该目标块的抓拍优先级之前,所述方法还包括:
对该目标块中包括的各目标对象进行检测,确定各目标对象的移动方向为正朝所述全景相机移动或非正朝所述全景相机移动;
确定该目标块中包括的各目标对象的速度信息,并根据该目标块中包括的各目标对象在所述当前全景视频帧中的第一位置信息、移动方向和速度信息,确定各目标对象的离开时间;
根据该目标块在所述当前全景视频帧中的第一位置信息,以及上次抓拍目标块在所述当前全景视频帧中的第一位置信息,确定该目标块与上次抓拍目标块的位置差。
可选的,所述针对每个未抓拍的目标块,根据该目标块中包括的各目标对象的属性信息以及对应的权重,和该目标块与上次抓拍目标块的位置差以及对应的权重,计算该目标块的抓拍优先级的步骤包括:
针对任一未抓拍的目标块,根据以下公式,计算该目标块的抓拍优先级W:
W = Σ_{i=1}^{n}(f_i·w1 + c_i·w2 + t_i·w3) + d·w4
其中，所述n为该目标块中包括的目标对象个数；所述f为任一目标对象的移动方向，当该目标对象的移动方向为正朝所述全景相机移动时，f=1，当该目标对象的移动方向为非正朝所述全景相机移动时，f=0；所述w1为移动方向对应的权重；所述c为该目标对象的已抓拍次数，所述w2为已抓拍次数对应的权重；所述t为该目标对象的离开时间，所述w3为离开时间对应的权重；所述d为该目标块与上次抓拍目标块的位置差，所述w4为位置差对应的权重。
可选的,所述根据各第一目标对象对应的倍率确定该目标块对应的倍率的步骤包括:
将各第一目标对象对应的倍率中的最大值作为该目标块对应的倍率,或将各第一目标对象的倍率乘以对应的权重得到综合倍率,作为该目标块对应的倍率。
可选的,所述检测全景相机所采集的当前全景视频帧中的目标对象的步骤包括:
检测全景相机所采集的当前全景视频帧中的,且不存在于上一视频帧中的目标对象。
第二方面,本申请实施例提供了一种目标对象抓拍装置,所述装置包括:
第一检测模块,用于检测全景相机所采集的当前全景视频帧中的目标对象,确定各目标对象在所述当前全景视频帧中的第一位置信息、大小;
第一确定模块,用于根据各目标对象的第一位置信息,以及预先构建的全景相机和细节相机的位置映射关系,确定各目标对象对应的细节相机位置信息,并根据各目标对象的大小确定各目标对象对应的倍率;
处理模块,用于根据各目标对象对应的细节相机位置信息和倍率,对各目标对象进行分块处理,得到至少一个目标块,其中,各目标块中包含一个或多个目标对象;
识别模块，用于针对每个目标块，在该目标块包含的各目标对象中识别处于边缘位置的第一目标对象，并根据各第一目标对象对应的细节相机位置信息确定该目标块对应的细节相机位置信息，根据各第一目标对象对应的倍率确定该目标块对应的倍率；
控制模块,用于针对每个目标块,根据该目标块对应的细节相机位置信息和倍率,控制所述细节相机调整其位置和倍率,并控制调整后的细节相机抓拍该目标块。
可选的,所述第一确定模块包括:
第一确定子模块,用于针对每个目标对象,根据该目标对象的大小,确定对应的视场角;
第二确定子模块,用于根据预设的倍率和视场角的对应关系,确定该视场角对应的倍率,并将确定的倍率作为该目标对象对应的倍率。
可选的,所述装置还包括:
设置模块,用于设置各目标块的已抓拍次数为初始值;
所述控制模块包括:
判断子模块,用于根据各目标块的已抓拍次数,判断是否存在未抓拍的目标块;
控制子模块,用于当所述判断子模块判断结果为是时,计算各未抓拍的目标块的抓拍优先级,针对优先级最高的目标块,根据该目标块对应的细节相机位置信息和倍率,控制所述细节相机调整其位置和倍率,控制调整后的细节相机抓拍该目标块;
更新子模块,用于更新该目标块的已抓拍次数,并触发所述判断子模块。
可选的,所述控制子模块,具体用于针对每个未抓拍的目标块,根据该目标块中包括的各目标对象的属性信息以及对应的权重,和/或该目标块与上次抓拍目标块的位置差以及对应的权重,计算该目标块的抓拍优先级;其中,任一目标对象的属性信息包括以下至少一项:移动方向、已抓拍次数、以及离开时间。
可选的,当所述控制子模块,具体用于针对每个未抓拍的目标块,根据该目标块中包括的各目标对象的属性信息以及对应的权重,和该目标块与上次抓拍目标块的位置差以及对应的权重,计算该目标块的抓拍优先级,且任一目标对象的属性信息包括:移动方向、已抓拍次数、以及离开时间时,所述装置还包括:
第二检测模块,用于对该目标块中包括的各目标对象进行检测,确定各目标对象的移动方向为正朝所述全景相机移动或非正朝所述全景相机移动;
第二确定模块,用于确定该目标块中包括的各目标对象的速度信息,并根据该目标块中包括的各目标对象在所述当前全景视频帧中的第一位置信息、移动方向和速度信息,确定各目标对象的离开时间;
第三确定模块,用于根据该目标块在所述当前全景视频帧中的第一位置信息,以及上次抓拍目标块在所述当前全景视频帧中的第一位置信息,确定该目标块与上次抓拍目标块的位置差。
可选的,所述控制子模块,具体用于针对任一未抓拍的目标块,根据以下公式,计算该目标块的抓拍优先级W:
W = Σ_{i=1}^{n}(f_i·w1 + c_i·w2 + t_i·w3) + d·w4
其中，所述n为该目标块中包括的目标对象个数；所述f为任一目标对象的移动方向，当该目标对象的移动方向为正朝所述全景相机移动时，f=1，当该目标对象的移动方向为非正朝所述全景相机移动时，f=0；所述w1为移动方向对应的权重；所述c为该目标对象的已抓拍次数，所述w2为已抓拍次数对应的权重；所述t为该目标对象的离开时间，所述w3为离开时间对应的权重；所述d为该目标块与上次抓拍目标块的位置差，所述w4为位置差对应的权重。
可选的,所述第一确定模块,具体用于将各第一目标对象对应的倍率中的最大值作为该目标块对应的倍率,或将各第一目标对象的倍率乘以对应的权重得到综合倍率,作为该目标块对应的倍率。
可选的,所述第一检测模块,具体用于检测全景相机所采集的当前全景视频帧中的,且不存在于上一视频帧中的目标对象。
第三方面,本申请实施例提供了一种视频监控设备,包括全景相机、细节相机、以及处理器;
所述全景相机,用于采集当前全景视频帧,并将所述当前全景视频帧发送给所述处理器;
所述处理器,用于检测所述当前全景视频帧中的目标对象,确定各目标对象在所述当前全景视频帧中的第一位置信息、大小;根据各目标对象的第一位置信息,以及预先构建的全景相机和细节相机的位置映射关系,确定各目标对象对应的细节相机位置信息,并根据各目标对象的大小确定各目标对象对应的倍率;根据各目标对象对应的细节相机位置信息和倍率,对各目标对象进行分块处理,得到至少一个目标块,其中,各目标块中包含一个或多个目标对象;针对每个目标块,在该目标块包含的各目标对象中识别处于边缘位置的第一目标对象,并根据各第一目标对象对应的细节相机位置信息确定该目标块对应的细节相机位置信息,根据各第一目标对象对应的倍率确定该目标块对应的倍率;并针对每个目标块,将该目标块对应的细节相机位置信息和倍率发送至细节相机;
所述细节相机,用于根据接收到的该目标块对应的细节相机位置信息和倍率调整其自身的位置和倍率,并抓拍该目标块。
可选的,所述处理器,具体用于针对每个目标对象,根据该目标对象的大小,确定对应的视场角;根据预设的倍率和视场角的对应关系,确定该视场角对应的倍率,并将确定的倍率作为该目标对象对应的倍率。
可选的,所述处理器,还用于设置各目标块的已抓拍次数为初始值;
所述处理器,具体用于根据各目标块的已抓拍次数,判断是否存在未抓拍的目标块;如果存在,计算各未抓拍的目标块的抓拍优先级,针对优先级最高的目标块,将该目标块对应的细节相机位置信息和倍率发送至所述细节相机;更新该目标块的已抓拍次数,并返回执行所述根据各目标块的已抓拍次数,确定是否存在未抓拍的目标块的步骤;
所述细节相机,具体用于根据接收到的该目标块对应的细节相机位置信息和倍率调整其自身的位置和倍率,并抓拍该目标块。
可选的,所述处理器,具体用于针对每个未抓拍的目标块,根据该目标块中包括的各目标对象的属性信息以及对应的权重,和/或该目标块与上次抓拍目标块的位置差以及对应的权重,计算该目标块的抓拍优先级;其中,任一目标对象的属性信息包括以下至少一项:移动方向、已抓拍次数、以及离开时间。
可选的,当所述处理器针对每个未抓拍的目标块,根据该目标块中包括的各目标对象的属性信息以及对应的权重,和该目标块与上次抓拍目标块的位置差以及对应的权重,计算该目标块的抓拍优先级,且任一目标对象的属性信息包括:移动方向、已抓拍次数、以及离开时间时,
所述处理器,还用于对该目标块中包括的各目标对象进行检测,确定各目标对象的移动方向为正朝所述全景相机移动或非正朝所述全景相机移动;确定该目标块中包括的各目标对象的速度信息,并根据该目标块中包括的各目标对象在所述当前全景视频帧中的第一位置信息、移动方向和速度信息,确定各目标对象的离开时间;根据该目标块在所述当前全景视频帧中的第一位置信息,以及上次抓拍目标块在所述当前全景视频帧中的第一位置信息,确定该目标块与上次抓拍目标块的位置差。
可选的,所述处理器,具体用于针对任一未抓拍的目标块,根据以下公式,计算该目标块的抓拍优先级W:
W = Σ_{i=1}^{n}(f_i·w1 + c_i·w2 + t_i·w3) + d·w4
其中，所述n为该目标块中包括的目标对象个数；所述f为任一目标对象的移动方向，当该目标对象的移动方向为正朝所述全景相机移动时，f=1，当该目标对象的移动方向为非正朝所述全景相机移动时，f=0；所述w1为移动方向对应的权重；所述c为该目标对象的已抓拍次数，所述w2为已抓拍次数对应的权重；所述t为该目标对象的离开时间，所述w3为离开时间对应的权重；所述d为该目标块与上次抓拍目标块的位置差，所述w4为位置差对应的权重。
可选的,所述处理器,具体用于将各第一目标对象对应的倍率中的最大值作为该目标块对应的倍率,或将各第一目标对象的倍率乘以对应的权重得到综合倍率,作为该目标块对应的倍率。
可选的,所述处理器,具体用于检测全景相机所采集的当前全景视频帧中的,且不存在于上一视频帧中的目标对象。
第四方面,本申请提供了一种存储介质,其中,该存储介质用于存储可执行程序代码,所述可执行程序代码用于在运行时执行本申请第一方面所述的一种目标对象抓拍方法。
第五方面,本申请提供了一种应用程序,其中,该应用程序用于在运行时执行本申请第一方面所述的一种目标对象抓拍方法。
本申请实施例提供了一种目标对象抓拍方法、装置及视频监控设备,所述方法包括:检测全景相机所采集的当前全景视频帧中的目标对象,确定各目标对象在所述当前全景视频帧中的第一位置信息、大小;根据各目标对象的第一位置信息,以及预先构建的全景相机和细节相机的位置映射关系,确定各目标对象对应的细节相机位置信息,并根据各目标对象的大小确定各目标对象对应的倍率;根据各目标对象对应的细节相机位置信息和倍率,对各目标对象进行分块处理,得到至少一个目标块,其中,各目标块中包含一个或多个目标对象;针对每个目标块,在该目标块包含的各目标对象中识别处于边缘位置的第一目标对象,并根据各第一目标对象对应的细节相机位置信息确定该目标块对应的细节相机位置信息,根据各第一目标对象对应的倍率确定该目标块对应的倍率;针对每个目标块,根据该目标块对应的细节相机位置信息和倍率,控制所述细节相机调整其位置和倍率,并控制调整后的细节相机抓拍该目标块。
本申请实施例中,当检测到全景相机中的目标对象时,能够对各目标对象进行分块处理得到多个目标块,并且针对各目标块,能够根据该目标块中包括的各目标对象的位置信息、大小等,调整细节相机采用与各目标块对应的位置和倍率对其进行抓拍,从而能够在保证监控范围的前提下,提高目标 对象的抓拍效率和清晰度。
附图说明
为了更清楚地说明本申请实施例和现有技术的技术方案,下面对实施例和现有技术中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本申请的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
图1为本申请实施例的一种目标对象抓拍方法的流程图;
图2为本申请实施例的一种全景视频帧示意图;
图3为本申请实施例的一种全景视频帧中目标对象位置信息示意图;
图4为本申请实施例的对目标对象进行分块的结果示意图;
图5为本申请实施例的确定目标块中第一目标对象的结果示意图;
图6为本申请实施例的一种目标对象抓拍方法的另一流程图;
图7为本申请实施例的一种目标对象抓拍装置的结构示意图;
图8为本申请实施例的一种视频监控设备的结构示意图。
具体实施方式
下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例仅仅是本申请一部分实施例,而不是全部的实施例。基于本申请中的实施例,本领域普通技术人员在没有做出创造性劳动前提下所获得的所有其他实施例,都属于本申请保护的范围。
以下通过具体实施例,对本申请进行详细说明。
请参考图1,其示出了本申请实施例的一种目标对象抓拍方法流程,该方法可以包括以下步骤:
S101,检测全景相机所采集的当前全景视频帧中的目标对象,确定各目标对象在所述当前全景视频帧中的第一位置信息、大小。
本申请实施例提供的方法可以应用于视频监控设备。具体的,本申请实施例的视频监控设备至少可以包括全景相机、细节相机、以及处理器。其中,全景相机可以为监控范围较大的相机,例如枪机、鱼眼相机等;细节相机可以为能够调节抓拍倍率的相机,如球机等。并且,细节相机的位置也是可以调整的,从而,其监控范围和所采集图像中目标对象的大小都是可以调整的。
在本申请实施例中,全景相机可以采集全景视频帧。如,全景相机可以按照预设的时间间隔,周期性采集全景视频帧。并且,全景相机可以将其采集的当前全景视频帧发送给处理器。
处理器接收到全景相机发送的当前全景视频帧后,可以对当前全景视频帧中的目标对象进行检测。例如,处理器可以采用DPM(deformable parts model,可形变部件模型)或FRCNN(Faster Region Convolutional Neural Network,快速区域卷积神经网络)等目标检测类算法,来检测当前全景视频帧中的目标对象。其中,上述目标对象可以为人、车辆等。本申请实施例中,以目标对象为人为例,来说明本申请实施例提供的目标对象抓拍方法。
参考图2,其示出了全景相机采集的当前全景视频帧的示意图。如图2所示,全景相机采集的当前全景视频帧中包括目标对象1、2、3、4、5、6、7、8、9、10。
检测到各目标对象后,处理器还可以确定各目标对象在当前全景视频帧中的第一位置信息、大小。如,处理器可以针对每个目标对象,确定该目标对象所在的长方形区域,并根据预设的坐标系,将该长方形区域的左上角坐标和右下角坐标确定为该目标对象的第一位置信息。相应的,处理器可以将该目标对象所在长方形区域的大小确定为该目标对象的大小。
如图3所示,针对目标对象1,可以确定其所在的长方形区域为210,并且,根据图中构建的坐标系,目标对象1的第一位置信息可以为区域210的左上角220和右下角230的坐标信息。目标对象1的大小可以为区域210的大小。
S102,根据各目标对象的第一位置信息,以及预先构建的全景相机和细节相机的位置映射关系,确定各目标对象对应的细节相机位置信息,并根据各目标对象的大小确定各目标对象对应的倍率。
在本申请实施例中,可以预先构建全景相机和细节相机的位置映射关系。如,当任一目标对象在全景相机采集的全景视频帧中的位置信息为a1时,对应的细节相机的位置信息为b1;当任一目标对象在全景相机采集的全景视频帧中的位置信息为a2时,对应的细节相机的位置信息为b2等。其中,细节相机的位置信息可以包括其水平方向位置信息和垂直方向位置信息。
当处理器确定各目标对象的第一位置信息后,可以根据各目标对象的第一位置信息,以及预先构建的全景相机和细节相机的位置映射关系,确定各目标对象对应的细节相机位置信息。也就是说,确定细节相机用于抓拍各目标对象时所在的位置。
如,针对任一目标对象,处理器可以在预先保存的全景相机和细节相机的位置映射关系中,查找该目标对象的第一位置信息,并将第一位置信息对应的细节相机的位置信息作为该目标对象对应的细节相机位置信息。
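上述在位置映射关系中查找细节相机位置的过程，可用如下示意代码表示（映射表的组织方式和数值均为假设的标定结果，实际实现可采用插值等更精细的方式）：

```python
# 示意：在预先构建的位置映射关系中查找目标对象对应的细节相机位置
# 映射表：全景帧中的标定区域 (左上x, 左上y, 右下x, 右下y) -> 细节相机(水平, 垂直)位置
# 表中数值均为假设的标定结果
POSITION_MAP = {
    (0, 0, 100, 200): (10.0, 5.0),
    (100, 0, 200, 200): (20.0, 5.0),
}

def detail_camera_position(first_position):
    """first_position: 目标所在长方形区域 (左上x, 左上y, 右下x, 右下y)。
    此处简化为查找包含目标中心点的标定区域。"""
    cx = (first_position[0] + first_position[2]) / 2
    cy = (first_position[1] + first_position[3]) / 2
    for (x1, y1, x2, y2), pos in POSITION_MAP.items():
        if x1 <= cx < x2 and y1 <= cy < y2:
            return pos
    return None  # 映射表未覆盖该位置
```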
在本申请实施例中,为了能够清晰的对目标对象进行抓拍,细节相机的倍率是可以调整的。具体的,处理器可以根据各目标对象的大小,确定各目标对象对应的倍率。
通常情况下,图像中人的像素宽度达到240为可辨认的细节标准。根据该标准,处理器可以确定不同大小的目标对象对应的倍率。如,针对较大的目标对象,可以将细节相机的倍率调整为较小值,以抓拍到完整的目标对象;针对较小的目标对象,可以将细节相机的倍率调整为较大值,以获得尽可能大的目标对象,提高其清晰度。
在一种实现方式中,处理器可以针对每个目标对象,根据该目标对象的大小,确定对应的视场角,进而可以根据预设的倍率和视场角的对应关系,确定该视场角对应的倍率,并将确定的倍率作为该目标对象对应的倍率。
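上述由目标大小确定视场角、再按预设对应关系确定倍率的过程，可用如下示意代码表示（其中240像素的辨认标准取自上文，倍率-视场角对应表及比例换算方式均为假设）：

```python
# 示意：根据目标对象大小确定细节相机倍率（对应表数值均为假设）
# 预设的倍率-视场角对应关系（视场角单位：度，按视场角降序排列）
ZOOM_FOV_TABLE = [(1, 60.0), (2, 33.0), (4, 17.0), (8, 8.5), (16, 4.3)]

def fov_for_object(obj_width_px, frame_width_px, panoramic_fov_deg=90.0):
    """由目标在全景帧中的像素宽度，估算抓拍该目标所需的视场角。
    假设：目标像素宽度达到240即满足可辨认的细节标准，且视场角与像素宽度近似线性。"""
    required_px = 240.0
    # 目标当前在全景帧中占据的视场角
    obj_fov = panoramic_fov_deg * obj_width_px / frame_width_px
    # 将目标放大到 required_px 像素宽所需的视场角
    return obj_fov * frame_width_px / required_px

def magnification_for_fov(fov_deg):
    """在预设对应关系中，选取视场角不大于所需视场角的最小倍率。"""
    for zoom, table_fov in ZOOM_FOV_TABLE:
        if table_fov <= fov_deg:
            return zoom
    return ZOOM_FOV_TABLE[-1][0]  # 所需视场角过小时，取最大倍率
```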
S103,根据各目标对象对应的细节相机位置信息和倍率,对各目标对象进行分块处理,得到至少一个目标块,其中,各目标块中包含一个或多个目标对象。
本申请实施例中，处理器可以根据每个目标对象对应的细节相机位置信息和倍率，对各目标对象进行分块处理，得到至少一个目标块，其中，各目标块中包含一个或多个目标对象。
细节相机在不同的倍率下，对应不同的位置范围。当得到各目标对象对应的细节相机位置信息和倍率后，可通过搜索，将倍率相差在一定范围内（如0.5倍）且位置能被细节相机同一视场覆盖的所有目标对象划分为同一块，最终形成不同的目标块。
参考图4,对图2所示的全景视频帧中各目标对象进行分块处理后,得到的各目标块可以如图4所示。如图4所示,可以将各目标对象分为4块,分别为目标对象7、8、9、10为一块,目标对象2、3、4为一块,目标对象5、6为一块,目标对象1为一块。
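上述按倍率在一定范围内（如0.5倍）进行分块的搜索过程，可简化为如下示意代码（仅按倍率差进行贪心分组，未体现细节相机位置范围的约束，函数名为假设）：

```python
# 示意：按倍率相近程度对目标对象进行贪心分块（简化版，容差数值取自上文示例）
def split_into_blocks(objects, zoom_tolerance=0.5):
    """objects: [(object_id, magnification), ...]
    将倍率相差不超过 zoom_tolerance 的目标对象划分为同一目标块。"""
    blocks = []
    for obj_id, zoom in sorted(objects, key=lambda o: o[1]):
        # 尝试加入已有块：块内倍率跨度不超过容差（block[0] 为该块倍率最小的目标）
        for block in blocks:
            if zoom - block[0][1] <= zoom_tolerance:
                block.append((obj_id, zoom))
                break
        else:
            blocks.append([(obj_id, zoom)])  # 无法加入任何已有块时，新建一个目标块
    return blocks
```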
S104,针对每个目标块,在该目标块包含的各目标对象中识别处于边缘位置的第一目标对象,并根据各第一目标对象对应的细节相机位置信息确定该目标块对应的细节相机位置信息,根据各第一目标对象对应的倍率确定该目标块对应的倍率。
得到多个目标块后,处理器可以进一步确定各目标块对应的细节相机位置信息和倍率。具体的,处理器可以首先针对每个目标块,在该目标块包含的各目标对象中识别处于边缘位置的第一目标对象。
如图5所示,其示出了一目标块中包括多个目标对象的示意图。如图5所示,针对该目标块,处理器可以识别出处于边缘位置的第一目标对象分别为目标对象510、520、550、和560。
识别出各目标块中处于边缘位置的各第一目标对象后,处理器可以根据各第一目标对象对应的细节相机位置信息确定该目标块对应的细节相机位置信息,根据各第一目标对象对应的倍率确定该目标块对应的倍率。
如,可以将各第一目标对象对应的倍率中的最大值作为该目标块对应的倍率,或将各第一目标对象的倍率乘以对应的权重得到综合倍率,作为该目标块对应的倍率。
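上述两种确定目标块倍率的方式可示意如下（函数名及权重取值均为假设）：

```python
# 示意：由边缘位置的第一目标对象的倍率确定目标块倍率
def block_zoom_max(edge_zooms):
    """方式一：取各第一目标对象对应倍率中的最大值。"""
    return max(edge_zooms)

def block_zoom_weighted(edge_zooms, weights):
    """方式二：各第一目标对象的倍率乘以对应权重后求和，得到综合倍率。"""
    return sum(z * w for z, w in zip(edge_zooms, weights))
```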
S105,针对每个目标块,根据该目标块对应的细节相机位置信息和倍率,控制所述细节相机调整其位置和倍率,并控制调整后的细节相机抓拍该目标块。
得到各目标块对应的细节相机位置信息和倍率后,处理器可以针对每个目标块,根据该目标块对应的细节相机位置信息和倍率,控制细节相机调整其位置和倍率,并控制调整后的细节相机抓拍该目标块。
例如,处理器可以针对每个目标块,向细节相机发送包含将该目标块对应的细节相机位置信息和倍率的抓拍指令。细节相机接收到抓拍指令后,可以根据其中包含的细节相机位置信息和倍率,调整其自身的位置和倍率,并抓拍该目标块。
本申请实施例中,当检测到全景相机中的目标对象时,能够对各目标对象进行分块处理得到多个目标块,并且针对各目标块,能够根据该目标块中包括的各目标对象的位置信息、大小等,调整细节相机采用与各目标块对应的位置和倍率对其进行抓拍,从而能够在保证监控范围的前提下,提高目标对象的抓拍效率和清晰度。
作为本申请实施例的一种实施方式,为了提高目标对象抓拍效率,处理器对当前全景视频帧中的目标对象进行检测时,可以检测全景相机所采集的当前全景视频帧中的,且不存在于上一全景视频帧中的目标对象。
可以理解,相邻全景视频帧中的同一目标对象,其相似度一般是较高的。因此,针对出现在相邻全景视频帧中的同一目标对象,可以仅对其进行一次细节抓拍,从而能提高目标对象抓拍效率。
作为本申请实施例的一种实施方式,为了更好的抓拍到各目标块,如,能够将每个目标块都抓拍到,且抓拍到的为目标块中各目标对象的正面等,处理器可以对每个目标块进行优先级排序,进而根据优先级顺序对每个目标块进行抓拍。
具体的,处理器对各目标块进行抓拍之前,可以设置各目标块的已抓拍次数为初始值,如为0。在对各目标块进行抓拍时,如图6所示,处理器可以执行以下步骤:
S601,根据各目标块的已抓拍次数,判断是否存在未抓拍的目标块,如果是,执行步骤S602。
在本申请实施例中，处理器可以根据各目标块的已抓拍次数，判断是否存在未抓拍的目标块。如，当存在至少一目标块的已抓拍次数为0时，可以确定存在未抓拍的目标块，当各目标块的已抓拍次数均为非0时，可以确定不存在未抓拍的目标块。
S602,计算各未抓拍的目标块的抓拍优先级,针对优先级最高的目标块,根据该目标块对应的细节相机位置信息和倍率,控制所述细节相机调整其位置和倍率,控制调整后的细节相机抓拍该目标块。
在本申请实施例中,处理器可以计算各未抓拍的目标块的抓拍优先级。例如,处理器可以针对每个未抓拍的目标块,根据该目标块中包括的各目标对象的属性信息以及对应的权重,和/或该目标块与上次抓拍目标块的位置差以及对应的权重,计算该目标块的抓拍优先级;其中,任一目标对象的属性信息包括以下至少一项:移动方向、已抓拍次数、以及离开时间。
其中，目标对象的移动方向可以为正朝全景相机移动，或非正朝全景相机移动。离开时间为目标对象离开全景相机监控场景的时间。该目标块与上次抓拍目标块的位置差，为该目标块与上次抓拍目标块在全景视频帧中的距离。
当针对每个未抓拍的目标块,处理器根据该目标块中包括的各目标对象的属性信息以及对应的权重,和该目标块与上次抓拍目标块的位置差以及对应的权重,计算该目标块的抓拍优先级,且任一目标对象的属性信息包括:移动方向、已抓拍次数、和离开时间时,在计算各未抓拍的目标块的抓拍优先级之前,处理器可以针对每个未抓拍目标块,确定该目标块中包括的各目标对象的移动方向、离开时间、以及该目标块与上次抓拍目标块的位置差。
具体的,处理器可以对该目标块中包括的各目标对象进行检测,确定各目标对象的移动方向为正朝全景相机移动或非正朝全景相机移动。如,处理器可以采用DPM或FRCNN等目标检测类算法,确定各目标对象的移动方向。
处理器可以确定该目标块中包括的各目标对象的速度信息,并根据该目标块中包括的各目标对象在当前全景视频帧中的第一位置信息、移动方向和速度信息,确定各目标对象的离开时间。
在确定任一目标对象的速度信息时，处理器可以先确定该目标对象是否存在于之前采集的全景视频帧中，如，前一张视频帧中；如果是，可以根据多张视频帧，来确定目标对象的速度信息。进一步地，处理器可以针对任一目标对象，根据该目标对象的第一位置信息和移动方向，确定该目标对象距离监控场景边缘的距离，进而根据该距离和该目标对象的速度信息，计算该目标对象的离开时间。
处理器可以根据该目标块在当前全景视频帧中的第一位置信息,以及上次抓拍目标块在当前全景视频帧中的第一位置信息,确定该目标块与上次抓拍目标块的位置差。
获取到该目标块中包括的各目标对象的移动方向、离开时间、以及该目标块与上次抓拍目标块的位置差后,处理器可以针对该目标块,根据以下公式,计算该目标块的抓拍优先级W:
W = Σ_{i=1}^{n}(f_i·w1 + c_i·w2 + t_i·w3) + d·w4
其中，n为该目标块中包括的目标对象个数；f为任一目标对象的移动方向，当该目标对象的移动方向为正朝全景相机移动时，f=1，当该目标对象的移动方向为非正朝全景相机移动时，f=0；w1为移动方向对应的权重；c为该目标对象的已抓拍次数，w2为已抓拍次数对应的权重；t为该目标对象的离开时间，w3为离开时间对应的权重；d为该目标块与上次抓拍目标块的位置差，w4为位置差对应的权重。
各权重可以预先设定好并保存在处理器中。并且,各权重大小可以根据实际应用需要进行设置,本申请实施例对此不进行限定。
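按上述公式计算抓拍优先级W的过程可示意如下（各权重的符号与取值均为假设：此处假设w2、w3、w4取负值，以体现未抓拍、离开时间较短、距离上次抓拍目标块较近的目标块优先级较高）：

```python
# 示意：按公式计算目标块抓拍优先级 W（权重取值均为假设）
def block_priority(objs, d, w1=1.0, w2=-0.5, w3=-0.1, w4=-0.01):
    """objs: [(f, c, t), ...]，f 为移动方向(正朝全景相机=1，否则=0)，
    c 为已抓拍次数，t 为离开时间；d 为该目标块与上次抓拍目标块的位置差。
    假设 w2、w3、w4 取负值，使未抓拍、即将离开、距离较近的目标块优先。"""
    return sum(f * w1 + c * w2 + t * w3 for f, c, t in objs) + d * w4
```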
得到各未抓拍目标块的抓拍优先级后,处理器可以针对优先级最高的目标块,根据该目标块对应的细节相机位置信息和倍率,控制细节相机调整其位置和倍率,控制调整后的细节相机抓拍该目标块。
S603,更新该目标块的已抓拍次数,并返回执行步骤S601。
对优先级最高的目标块进行抓拍后,处理器可以更新该目标块的已抓拍次数,如将其已抓拍次数更新为1,并返回执行步骤S601,以对下一未抓拍目标块进行抓拍。
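S601至S603的调度流程可用如下示意代码表示（优先级计算以外部传入的函数代替，各名称均为假设）：

```python
# 示意：按优先级依次抓拍各未抓拍的目标块
def capture_blocks(blocks, priority_fn, capture_fn):
    """blocks: {block_id: block_info}；priority_fn(block_info) 计算抓拍优先级；
    capture_fn(block_id) 触发一次抓拍（即控制细节相机调整位置和倍率并抓拍）。"""
    captured_count = {bid: 0 for bid in blocks}  # 设置各目标块已抓拍次数为初始值0
    order = []
    while any(c == 0 for c in captured_count.values()):  # S601: 是否存在未抓拍的目标块
        pending = [b for b, c in captured_count.items() if c == 0]
        best = max(pending, key=lambda b: priority_fn(blocks[b]))  # S602: 取优先级最高块
        capture_fn(best)
        captured_count[best] += 1  # S603: 更新该目标块的已抓拍次数，并返回S601
        order.append(best)
    return order
```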
本实施例中，处理器可以确定各目标块的抓拍优先级，进而根据各目标块的抓拍优先级依次对各目标块进行抓拍。当根据目标块中包括的各目标对象的移动方向、离开时间、以及该目标块与上次抓拍目标块的位置差计算各目标块的优先级时，优先级较高的目标块，即为包含正朝全景相机移动的目标对象、离开时间较短的目标对象、以及未被抓拍的目标对象，且离上次抓拍目标块较近的目标块。因此，能够保证抓拍目标块中各目标对象的清晰度较高，各目标对象都尽可能被抓拍到，且抓拍效率较高。
相应的,本申请实施例还提供了一种目标对象抓拍装置,如图7所示,所述装置包括:
第一检测模块710,用于检测全景相机所采集的当前全景视频帧中的目标对象,确定各目标对象在所述当前全景视频帧中的第一位置信息、大小;
第一确定模块720,用于根据各目标对象的第一位置信息,以及预先构建的全景相机和细节相机的位置映射关系,确定各目标对象对应的细节相机位置信息,并根据各目标对象的大小确定各目标对象对应的倍率;
处理模块730,用于根据各目标对象对应的细节相机位置信息和倍率,对各目标对象进行分块处理,得到至少一个目标块,其中,各目标块中包含一个或多个目标对象;
识别模块740,用于针对每个目标块,在该目标块包含的各目标对象中识别处于边缘位置的第一目标对象,并根据各第一目标对象对应的细节相机位置信息确定该目标块对应的细节相机位置信息,根据各第一目标对象对应的倍率确定该目标块对应的倍率;
控制模块750,用于针对每个目标块,根据该目标块对应的细节相机位置信息和倍率,控制所述细节相机调整其位置和倍率,并控制调整后的细节相机抓拍该目标块。
本申请实施例中，当检测到全景相机中的目标对象时，能够对各目标对象进行分块处理得到多个目标块，并且针对各目标块，能够根据该目标块中包括的各目标对象的位置信息、大小等，调整细节相机采用与各目标块对应的位置和倍率对其进行抓拍，从而能够在保证监控范围的前提下，提高目标对象的抓拍效率和清晰度。
作为本申请实施例的一种实施方式,所述第一确定模块720包括:
第一确定子模块(图中未示出),用于针对每个目标对象,根据该目标对象的大小,确定对应的视场角;
第二确定子模块(图中未示出),用于根据预设的倍率和视场角的对应关系,确定该视场角对应的倍率,并将确定的倍率作为该目标对象对应的倍率。
作为本申请实施例的一种实施方式,所述装置还包括:
设置模块(图中未示出),用于设置各目标块的已抓拍次数为初始值;
所述控制模块750包括:
判断子模块(图中未示出),用于根据各目标块的已抓拍次数,判断是否存在未抓拍的目标块;
控制子模块(图中未示出),用于当所述判断子模块判断结果为是时,计算各未抓拍的目标块的抓拍优先级,针对优先级最高的目标块,根据该目标块对应的细节相机位置信息和倍率,控制所述细节相机调整其位置和倍率,控制调整后的细节相机抓拍该目标块;
更新子模块(图中未示出),用于更新该目标块的已抓拍次数,并触发所述判断子模块。
作为本申请实施例的一种实施方式,所述控制子模块,具体用于针对每个未抓拍的目标块,根据该目标块中包括的各目标对象的属性信息以及对应的权重,和/或该目标块与上次抓拍目标块的位置差以及对应的权重,计算该目标块的抓拍优先级;其中,任一目标对象的属性信息包括以下至少一项:移动方向、已抓拍次数、以及离开时间。
作为本申请实施例的一种实施方式，当所述控制子模块，具体用于针对每个未抓拍的目标块，根据该目标块中包括的各目标对象的属性信息以及对应的权重，和该目标块与上次抓拍目标块的位置差以及对应的权重，计算该目标块的抓拍优先级，且任一目标对象的属性信息包括：移动方向、已抓拍次数、以及离开时间时，所述装置还包括：
第二检测模块(图中未示出),用于对该目标块中包括的各目标对象进行检测,确定各目标对象的移动方向为正朝所述全景相机移动或非正朝所述全景相机移动;
第二确定模块(图中未示出),用于确定该目标块中包括的各目标对象的速度信息,并根据该目标块中包括的各目标对象在所述当前全景视频帧中的第一位置信息、移动方向和速度信息,确定各目标对象的离开时间;
第三确定模块(图中未示出),用于根据该目标块在所述当前全景视频帧中的第一位置信息,以及上次抓拍目标块在所述当前全景视频帧中的第一位置信息,确定该目标块与上次抓拍目标块的位置差。
作为本申请实施例的一种实施方式,所述控制子模块,具体用于针对任一未抓拍的目标块,根据以下公式,计算该目标块的抓拍优先级W:
W = Σ_{i=1}^{n}(f_i·w1 + c_i·w2 + t_i·w3) + d·w4
其中，所述n为该目标块中包括的目标对象个数；所述f为任一目标对象的移动方向，当该目标对象的移动方向为正朝所述全景相机移动时，f=1，当该目标对象的移动方向为非正朝所述全景相机移动时，f=0；所述w1为移动方向对应的权重；所述c为该目标对象的已抓拍次数，所述w2为已抓拍次数对应的权重；所述t为该目标对象的离开时间，所述w3为离开时间对应的权重；所述d为该目标块与上次抓拍目标块的位置差，所述w4为位置差对应的权重。
作为本申请实施例的一种实施方式,所述第一确定模块,具体用于将各第一目标对象对应的倍率中的最大值作为该目标块对应的倍率,或将各第一目标对象的倍率乘以对应的权重得到综合倍率,作为该目标块对应的倍率。
作为本申请实施例的一种实施方式,所述第一检测模块,具体用于检测全景相机所采集的当前全景视频帧中的,且不存在于上一视频帧中的目标对象。
相应的,本申请实施例还提供了一种视频监控设备,如图8所示,包括全景相机810、处理器820、细节相机830;
所述全景相机810,用于采集当前全景视频帧,并将所述当前全景视频帧发送给所述处理器820;
所述处理器820,用于检测所述当前全景视频帧中的目标对象,确定各目标对象在所述当前全景视频帧中的第一位置信息、大小;根据各目标对象的第一位置信息,以及预先构建的全景相机和细节相机的位置映射关系,确定各目标对象对应的细节相机位置信息,并根据各目标对象的大小确定各目标对象对应的倍率;根据各目标对象对应的细节相机位置信息和倍率,对各目标对象进行分块处理,得到至少一个目标块,其中,各目标块中包含一个或多个目标对象;针对每个目标块,在该目标块包含的各目标对象中识别处于边缘位置的第一目标对象,并根据各第一目标对象对应的细节相机位置信息确定该目标块对应的细节相机位置信息,根据各第一目标对象对应的倍率确定该目标块对应的倍率;并针对每个目标块,将该目标块对应的细节相机位置信息和倍率发送至细节相机;
所述细节相机830,用于根据接收到的该目标块对应的细节相机位置信息和倍率调整其自身的位置和倍率,并抓拍该目标块。
本申请实施例中,当检测到全景相机中的目标对象时,能够对各目标对象进行分块处理得到多个目标块,并且针对各目标块,能够根据该目标块中包括的各目标对象的位置信息、大小等,调整细节相机采用与各目标块对应的位置和倍率对其进行抓拍,从而能够在保证监控范围的前提下,提高目标对象的抓拍效率和清晰度。
作为本申请实施例的一种实施方式,所述处理器820,具体用于针对每个目标对象,根据该目标对象的大小,确定对应的视场角;根据预设的倍率和视场角的对应关系,确定该视场角对应的倍率,并将确定的倍率作为该目标对象对应的倍率。
作为本申请实施例的一种实施方式，所述处理器820，还用于设置各目标块的已抓拍次数为初始值；
所述处理器820,具体用于根据各目标块的已抓拍次数,判断是否存在未抓拍的目标块;如果存在,计算各未抓拍的目标块的抓拍优先级,针对优先级最高的目标块,将该目标块对应的细节相机位置信息和倍率发送至所述细节相机830;更新该目标块的已抓拍次数,并返回执行所述根据各目标块的已抓拍次数,确定是否存在未抓拍的目标块的步骤;
所述细节相机830,具体用于根据接收到的该目标块对应的细节相机位置信息和倍率调整其自身的位置和倍率,并抓拍该目标块。
作为本申请实施例的一种实施方式,所述处理器820,具体用于针对每个未抓拍的目标块,根据该目标块中包括的各目标对象的属性信息以及对应的权重,和/或该目标块与上次抓拍目标块的位置差以及对应的权重,计算该目标块的抓拍优先级;其中,任一目标对象的属性信息包括以下至少一项:移动方向、已抓拍次数、以及离开时间。
作为本申请实施例的一种实施方式,当所述处理器820针对每个未抓拍的目标块,根据该目标块中包括的各目标对象的属性信息以及对应的权重,和该目标块与上次抓拍目标块的位置差以及对应的权重,计算该目标块的抓拍优先级,且任一目标对象的属性信息包括:移动方向、已抓拍次数、以及离开时间时,
所述处理器820,还用于对该目标块中包括的各目标对象进行检测,确定各目标对象的移动方向为正朝所述全景相机移动或非正朝所述全景相机移动;确定该目标块中包括的各目标对象的速度信息,并根据该目标块中包括的各目标对象在所述当前全景视频帧中的第一位置信息、移动方向和速度信息,确定各目标对象的离开时间;根据该目标块在所述当前全景视频帧中的第一位置信息,以及上次抓拍目标块在所述当前全景视频帧中的第一位置信息,确定该目标块与上次抓拍目标块的位置差。
作为本申请实施例的一种实施方式,所述处理器820,具体用于针对任一未抓拍的目标块,根据以下公式,计算该目标块的抓拍优先级W:
W = Σ_{i=1}^{n}(f_i·w1 + c_i·w2 + t_i·w3) + d·w4
其中，所述n为该目标块中包括的目标对象个数；所述f为任一目标对象的移动方向，当该目标对象的移动方向为正朝所述全景相机移动时，f=1，当该目标对象的移动方向为非正朝所述全景相机移动时，f=0；所述w1为移动方向对应的权重；所述c为该目标对象的已抓拍次数，所述w2为已抓拍次数对应的权重；所述t为该目标对象的离开时间，所述w3为离开时间对应的权重；所述d为该目标块与上次抓拍目标块的位置差，所述w4为位置差对应的权重。
作为本申请实施例的一种实施方式,所述处理器820,具体用于将各第一目标对象对应的倍率中的最大值作为该目标块对应的倍率,或将各第一目标对象的倍率乘以对应的权重得到综合倍率,作为该目标块对应的倍率。
作为本申请实施例的一种实施方式,所述处理器820,具体用于检测全景相机所采集的当前全景视频帧中的,且不存在于上一视频帧中的目标对象。
相应地,本申请实施例还提供了一种存储介质,其中,该存储介质用于存储可执行程序代码,所述可执行程序代码用于在运行时执行本申请实施例所述的一种目标对象抓拍方法,其中,所述目标对象抓拍方法包括:
检测全景相机所采集的当前全景视频帧中的目标对象,确定各目标对象在所述当前全景视频帧中的第一位置信息、大小;
根据各目标对象的第一位置信息,以及预先构建的全景相机和细节相机的位置映射关系,确定各目标对象对应的细节相机位置信息,并根据各目标对象的大小确定各目标对象对应的倍率;
根据各目标对象对应的细节相机位置信息和倍率,对各目标对象进行分块处理,得到至少一个目标块,其中,各目标块中包含一个或多个目标对象;
针对每个目标块,在该目标块包含的各目标对象中识别处于边缘位置的第一目标对象,并根据各第一目标对象对应的细节相机位置信息确定该目标块对应的细节相机位置信息,根据各第一目标对象对应的倍率确定该目标块对应的倍率;
针对每个目标块,根据该目标块对应的细节相机位置信息和倍率,控制所述细节相机调整其位置和倍率,并控制调整后的细节相机抓拍该目标块。
本申请实施例中,当检测到全景相机中的目标对象时,能够对各目标对象进行分块处理得到多个目标块,并且针对各目标块,能够根据该目标块中包括的各目标对象的位置信息、大小等,调整细节相机采用与各目标块对应的位置和倍率对其进行抓拍,从而能够在保证监控范围的前提下,提高目标对象的抓拍效率和清晰度。
相应地,本申请实施例还提供了一种应用程序,其中,该应用程序用于在运行时执行本申请实施例所述的一种目标对象抓拍方法,其中,所述目标对象抓拍方法包括:
检测全景相机所采集的当前全景视频帧中的目标对象,确定各目标对象在所述当前全景视频帧中的第一位置信息、大小;
根据各目标对象的第一位置信息,以及预先构建的全景相机和细节相机的位置映射关系,确定各目标对象对应的细节相机位置信息,并根据各目标对象的大小确定各目标对象对应的倍率;
根据各目标对象对应的细节相机位置信息和倍率,对各目标对象进行分块处理,得到至少一个目标块,其中,各目标块中包含一个或多个目标对象;
针对每个目标块,在该目标块包含的各目标对象中识别处于边缘位置的第一目标对象,并根据各第一目标对象对应的细节相机位置信息确定该目标块对应的细节相机位置信息,根据各第一目标对象对应的倍率确定该目标块对应的倍率;
针对每个目标块,根据该目标块对应的细节相机位置信息和倍率,控制所述细节相机调整其位置和倍率,并控制调整后的细节相机抓拍该目标块。
本申请实施例中,当检测到全景相机中的目标对象时,能够对各目标对象进行分块处理得到多个目标块,并且针对各目标块,能够根据该目标块中包括的各目标对象的位置信息、大小等,调整细节相机采用与各目标块对应的位置和倍率对其进行抓拍,从而能够在保证监控范围的前提下,提高目标对象的抓拍效率和清晰度。
对于装置/视频监控设备/存储介质/应用程序实施例而言,由于其基本相似于方法实施例,所以描述的比较简单,相关之处参见方法实施例的部分说明即可。
需要说明的是,在本文中,诸如第一和第二等之类的关系术语仅仅用来将一个实体或者操作与另一个实体或操作区分开来,而不一定要求或者暗示这些实体或操作之间存在任何这种实际的关系或者顺序。而且,术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、物品或者设备不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法、物品或者设备所固有的要素。在没有更多限制的情况下,由语句“包括一个……”限定的要素,并不排除在包括所述要素的过程、方法、物品或者设备中还存在另外的相同要素。
本说明书中的各个实施例均采用相关的方式描述,各个实施例之间相同相似的部分互相参见即可,每个实施例重点说明的都是与其他实施例的不同之处。尤其,对于装置实施例而言,由于其基本相似于方法实施例,所以描述的比较简单,相关之处参见方法实施例的部分说明即可。
本领域普通技术人员可以理解实现上述方法实施方式中的全部或部分步骤是可以通过程序来指令相关的硬件来完成，所述的程序可以存储于计算机可读取存储介质中，这里所称的存储介质，如：ROM/RAM、磁碟、光盘等。
以上所述仅为本申请的较佳实施例而已,并非用于限定本申请的保护范围。凡在本申请的精神和原则之内所作的任何修改、等同替换、改进等,均包含在本申请的保护范围内。

Claims (25)

  1. 一种目标对象抓拍方法,其特征在于,所述方法包括:
    检测全景相机所采集的当前全景视频帧中的目标对象,确定各目标对象在所述当前全景视频帧中的第一位置信息、大小;
    根据各目标对象的第一位置信息,以及预先构建的全景相机和细节相机的位置映射关系,确定各目标对象对应的细节相机位置信息,并根据各目标对象的大小确定各目标对象对应的倍率;
    根据各目标对象对应的细节相机位置信息和倍率,对各目标对象进行分块处理,得到至少一个目标块,其中,各目标块中包含一个或多个目标对象;
    针对每个目标块,在该目标块包含的各目标对象中识别处于边缘位置的第一目标对象,并根据各第一目标对象对应的细节相机位置信息确定该目标块对应的细节相机位置信息,根据各第一目标对象对应的倍率确定该目标块对应的倍率;
    针对每个目标块,根据该目标块对应的细节相机位置信息和倍率,控制所述细节相机调整其位置和倍率,并控制调整后的细节相机抓拍该目标块。
  2. 根据权利要求1所述的方法,其特征在于,所述根据各目标对象的大小确定各目标对象对应的倍率的步骤包括:
    针对每个目标对象,根据该目标对象的大小,确定对应的视场角;
    根据预设的倍率和视场角的对应关系,确定该视场角对应的倍率,并将确定的倍率作为该目标对象对应的倍率。
  3. 根据权利要求1所述的方法,其特征在于,所述针对每个目标块,根据该目标块对应的细节相机位置信息和倍率,控制所述细节相机调整其位置和倍率,并控制调整后的细节相机抓拍该目标块之前,所述方法还包括:
    设置各目标块的已抓拍次数为初始值;
    所述针对每个目标块，根据该目标块对应的细节相机位置信息和倍率，控制所述细节相机调整其位置和倍率，并控制调整后的细节相机抓拍该目标块的步骤包括：
    根据各目标块的已抓拍次数,判断是否存在未抓拍的目标块;
    如果存在,计算各未抓拍的目标块的抓拍优先级,针对优先级最高的目标块,根据该目标块对应的细节相机位置信息和倍率,控制所述细节相机调整其位置和倍率,控制调整后的细节相机抓拍该目标块;
    更新该目标块的已抓拍次数,并返回执行所述根据各目标块的已抓拍次数,确定是否存在未抓拍的目标块的步骤。
  4. 根据权利要求3所述的方法,其特征在于,所述计算各未抓拍的目标块的抓拍优先级的步骤包括:
    针对每个未抓拍的目标块,根据该目标块中包括的各目标对象的属性信息以及对应的权重,和/或该目标块与上次抓拍目标块的位置差以及对应的权重,计算该目标块的抓拍优先级;其中,任一目标对象的属性信息包括以下至少一项:移动方向、已抓拍次数、以及离开时间。
  5. 根据权利要求4所述的方法,其特征在于,当针对每个未抓拍的目标块,根据该目标块中包括的各目标对象的属性信息以及对应的权重,和该目标块与上次抓拍目标块的位置差以及对应的权重,计算该目标块的抓拍优先级,且任一目标对象的属性信息包括:移动方向、已抓拍次数、以及离开时间时,所述针对每个未抓拍的目标块,根据该目标块中包括的各目标对象的属性信息以及对应的权重,和该目标块与上次抓拍目标块的位置差以及对应的权重,计算该目标块的抓拍优先级之前,所述方法还包括:
    对该目标块中包括的各目标对象进行检测,确定各目标对象的移动方向为正朝所述全景相机移动或非正朝所述全景相机移动;
    确定该目标块中包括的各目标对象的速度信息,并根据该目标块中包括的各目标对象在所述当前全景视频帧中的第一位置信息、移动方向和速度信息,确定各目标对象的离开时间;
    根据该目标块在所述当前全景视频帧中的第一位置信息，以及上次抓拍目标块在所述当前全景视频帧中的第一位置信息，确定该目标块与上次抓拍目标块的位置差。
  6. 根据权利要求5所述的方法,其特征在于,所述针对每个未抓拍的目标块,根据该目标块中包括的各目标对象的属性信息以及对应的权重,和该目标块与上次抓拍目标块的位置差以及对应的权重,计算该目标块的抓拍优先级的步骤包括:
    针对任一未抓拍的目标块,根据以下公式,计算该目标块的抓拍优先级W:
    W = Σ_{i=1}^{n}(f_i·w1 + c_i·w2 + t_i·w3) + d·w4
    其中，所述n为该目标块中包括的目标对象个数；所述f为任一目标对象的移动方向，当该目标对象的移动方向为正朝所述全景相机移动时，f=1，当该目标对象的移动方向为非正朝所述全景相机移动时，f=0；所述w1为移动方向对应的权重；所述c为该目标对象的已抓拍次数，所述w2为已抓拍次数对应的权重；所述t为该目标对象的离开时间，所述w3为离开时间对应的权重；所述d为该目标块与上次抓拍目标块的位置差，所述w4为位置差对应的权重。
  7. 根据权利要求1-6任一项所述的方法,其特征在于,所述根据各第一目标对象对应的倍率确定该目标块对应的倍率的步骤包括:
    将各第一目标对象对应的倍率中的最大值作为该目标块对应的倍率,或将各第一目标对象的倍率乘以对应的权重得到综合倍率,作为该目标块对应的倍率。
  8. 根据权利要求1-6任一项所述的方法,其特征在于,所述检测全景相机所采集的当前全景视频帧中的目标对象的步骤包括:
    检测全景相机所采集的当前全景视频帧中的,且不存在于上一视频帧中的目标对象。
  9. 一种目标对象抓拍装置,其特征在于,所述装置包括:
    第一检测模块,用于检测全景相机所采集的当前全景视频帧中的目标对象,确定各目标对象在所述当前全景视频帧中的第一位置信息、大小;
    第一确定模块,用于根据各目标对象的第一位置信息,以及预先构建的全景相机和细节相机的位置映射关系,确定各目标对象对应的细节相机位置信息,并根据各目标对象的大小确定各目标对象对应的倍率;
    处理模块,用于根据各目标对象对应的细节相机位置信息和倍率,对各目标对象进行分块处理,得到至少一个目标块,其中,各目标块中包含一个或多个目标对象;
    识别模块,用于针对每个目标块,在该目标块包含的各目标对象中识别处于边缘位置的第一目标对象,并根据各第一目标对象对应的细节相机位置信息确定该目标块对应的细节相机位置信息,根据各第一目标对象对应的倍率确定该目标块对应的倍率;
    控制模块,用于针对每个目标块,根据该目标块对应的细节相机位置信息和倍率,控制所述细节相机调整其位置和倍率,并控制调整后的细节相机抓拍该目标块。
  10. 根据权利要求9所述的装置,其特征在于,所述第一确定模块包括:
    第一确定子模块,用于针对每个目标对象,根据该目标对象的大小,确定对应的视场角;
    第二确定子模块,用于根据预设的倍率和视场角的对应关系,确定该视场角对应的倍率,并将确定的倍率作为该目标对象对应的倍率。
  11. 根据权利要求9所述的装置,其特征在于,所述装置还包括:
    设置模块,用于设置各目标块的已抓拍次数为初始值;
    所述控制模块包括:
    判断子模块,用于根据各目标块的已抓拍次数,判断是否存在未抓拍的目标块;
    控制子模块，用于当所述判断子模块判断结果为是时，计算各未抓拍的目标块的抓拍优先级，针对优先级最高的目标块，根据该目标块对应的细节相机位置信息和倍率，控制所述细节相机调整其位置和倍率，控制调整后的细节相机抓拍该目标块；
    更新子模块,用于更新该目标块的已抓拍次数,并触发所述判断子模块。
  12. 根据权利要求11所述的装置,其特征在于,所述控制子模块,具体用于针对每个未抓拍的目标块,根据该目标块中包括的各目标对象的属性信息以及对应的权重,和/或该目标块与上次抓拍目标块的位置差以及对应的权重,计算该目标块的抓拍优先级;其中,任一目标对象的属性信息包括以下至少一项:移动方向、已抓拍次数、以及离开时间。
  13. 根据权利要求12所述的装置，其特征在于，当所述控制子模块，具体用于针对每个未抓拍的目标块，根据该目标块中包括的各目标对象的属性信息以及对应的权重，和该目标块与上次抓拍目标块的位置差以及对应的权重，计算该目标块的抓拍优先级，且任一目标对象的属性信息包括：移动方向、已抓拍次数、以及离开时间时，所述装置还包括：
    第二检测模块,用于对该目标块中包括的各目标对象进行检测,确定各目标对象的移动方向为正朝所述全景相机移动或非正朝所述全景相机移动;
    第二确定模块,用于确定该目标块中包括的各目标对象的速度信息,并根据该目标块中包括的各目标对象在所述当前全景视频帧中的第一位置信息、移动方向和速度信息,确定各目标对象的离开时间;
    第三确定模块,用于根据该目标块在所述当前全景视频帧中的第一位置信息,以及上次抓拍目标块在所述当前全景视频帧中的第一位置信息,确定该目标块与上次抓拍目标块的位置差。
  14. 根据权利要求13所述的装置,其特征在于,所述控制子模块,具体用于针对任一未抓拍的目标块,根据以下公式,计算该目标块的抓拍优先级W:
    W = Σ_{i=1}^{n}(f_i·w1 + c_i·w2 + t_i·w3) + d·w4
    其中，所述n为该目标块中包括的目标对象个数；所述f为任一目标对象的移动方向，当该目标对象的移动方向为正朝所述全景相机移动时，f=1，当该目标对象的移动方向为非正朝所述全景相机移动时，f=0；所述w1为移动方向对应的权重；所述c为该目标对象的已抓拍次数，所述w2为已抓拍次数对应的权重；所述t为该目标对象的离开时间，所述w3为离开时间对应的权重；所述d为该目标块与上次抓拍目标块的位置差，所述w4为位置差对应的权重。
  15. 根据权利要求9-14任一项所述的装置,其特征在于,所述第一确定模块,具体用于将各第一目标对象对应的倍率中的最大值作为该目标块对应的倍率,或将各第一目标对象的倍率乘以对应的权重得到综合倍率,作为该目标块对应的倍率。
  16. 根据权利要求9-14任一项所述的装置,其特征在于,所述第一检测模块,具体用于检测全景相机所采集的当前全景视频帧中的,且不存在于上一视频帧中的目标对象。
  17. 一种视频监控设备,其特征在于,包括全景相机、细节相机、以及处理器;
    所述全景相机,用于采集当前全景视频帧,并将所述当前全景视频帧发送给所述处理器;
    所述处理器,用于检测所述当前全景视频帧中的目标对象,确定各目标对象在所述当前全景视频帧中的第一位置信息、大小;根据各目标对象的第一位置信息,以及预先构建的全景相机和细节相机的位置映射关系,确定各目标对象对应的细节相机位置信息,并根据各目标对象的大小确定各目标对象对应的倍率;根据各目标对象对应的细节相机位置信息和倍率,对各目标对象进行分块处理,得到至少一个目标块,其中,各目标块中包含一个或多个目标对象;针对每个目标块,在该目标块包含的各目标对象中识别处于边缘位置的第一目标对象,并根据各第一目标对象对应的细节相机位置信息确定该目标块对应的细节相机位置信息,根据各第一目标对象对应的倍率确定该目标块对应的倍率;并针对每个目标块,将该目标块对应的细节相机位置信息和倍率发送至细节相机;
    所述细节相机,用于根据接收到的该目标块对应的细节相机位置信息和倍率调整其自身的位置和倍率,并抓拍该目标块。
  18. 根据权利要求17所述的设备,其特征在于,所述处理器,具体用于针对每个目标对象,根据该目标对象的大小,确定对应的视场角;根据预设的倍率和视场角的对应关系,确定该视场角对应的倍率,并将确定的倍率作为该目标对象对应的倍率。
  19. 根据权利要求17所述的设备,其特征在于,所述处理器,还用于设置各目标块的已抓拍次数为初始值;
    所述处理器,具体用于根据各目标块的已抓拍次数,判断是否存在未抓拍的目标块;如果存在,计算各未抓拍的目标块的抓拍优先级,针对优先级最高的目标块,将该目标块对应的细节相机位置信息和倍率发送至所述细节相机;更新该目标块的已抓拍次数,并返回执行所述根据各目标块的已抓拍次数,确定是否存在未抓拍的目标块的步骤;
    所述细节相机,具体用于根据接收到的该目标块对应的细节相机位置信息和倍率调整其自身的位置和倍率,并抓拍该目标块。
  20. 根据权利要求19所述的设备,其特征在于,所述处理器,具体用于针对每个未抓拍的目标块,根据该目标块中包括的各目标对象的属性信息以及对应的权重,和/或该目标块与上次抓拍目标块的位置差以及对应的权重,计算该目标块的抓拍优先级;其中,任一目标对象的属性信息包括以下至少一项:移动方向、已抓拍次数、以及离开时间。
  21. 根据权利要求20所述的设备,其特征在于,当所述处理器针对每个未抓拍的目标块,根据该目标块中包括的各目标对象的属性信息以及对应的权重,和该目标块与上次抓拍目标块的位置差以及对应的权重,计算该目标块的抓拍优先级,且任一目标对象的属性信息包括:移动方向、已抓拍次数、以及离开时间时,
    所述处理器，还用于对该目标块中包括的各目标对象进行检测，确定各目标对象的移动方向为正朝所述全景相机移动或非正朝所述全景相机移动；确定该目标块中包括的各目标对象的速度信息，并根据该目标块中包括的各目标对象在所述当前全景视频帧中的第一位置信息、移动方向和速度信息，确定各目标对象的离开时间；根据该目标块在所述当前全景视频帧中的第一位置信息，以及上次抓拍目标块在所述当前全景视频帧中的第一位置信息，确定该目标块与上次抓拍目标块的位置差。
  22. 根据权利要求21所述的设备,其特征在于,所述处理器,具体用于针对任一未抓拍的目标块,根据以下公式,计算该目标块的抓拍优先级W:
    W = Σ_{i=1}^{n}(f_i·w1 + c_i·w2 + t_i·w3) + d·w4
    其中，所述n为该目标块中包括的目标对象个数；所述f为任一目标对象的移动方向，当该目标对象的移动方向为正朝所述全景相机移动时，f=1，当该目标对象的移动方向为非正朝所述全景相机移动时，f=0；所述w1为移动方向对应的权重；所述c为该目标对象的已抓拍次数，所述w2为已抓拍次数对应的权重；所述t为该目标对象的离开时间，所述w3为离开时间对应的权重；所述d为该目标块与上次抓拍目标块的位置差，所述w4为位置差对应的权重。
  23. 根据权利要求17-22任一项所述的设备,其特征在于,所述处理器,具体用于将各第一目标对象对应的倍率中的最大值作为该目标块对应的倍率,或将各第一目标对象的倍率乘以对应的权重得到综合倍率,作为该目标块对应的倍率。
  24. 一种存储介质,其特征在于,所述存储介质用于存储可执行程序代码,所述可执行程序代码用于在运行时执行如权利要求1-8任一项所述的一种目标对象抓拍方法。
  25. 一种应用程序,其特征在于,所述应用程序用于在运行时执行如权利要求1-8任一项所述的一种目标对象抓拍方法。
PCT/CN2018/090992 2017-06-16 2018-06-13 一种目标对象抓拍方法、装置及视频监控设备 WO2018228413A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US16/623,229 US11107246B2 (en) 2017-06-16 2018-06-13 Method and device for capturing target object and video monitoring device
EP18817790.1A EP3641298B1 (en) 2017-06-16 2018-06-13 Method and device for capturing target object and video monitoring device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710459265.1A CN109151295B (zh) 2017-06-16 2017-06-16 一种目标对象抓拍方法、装置及视频监控设备
CN201710459265.1 2017-06-16

Publications (1)

Publication Number Publication Date
WO2018228413A1 true WO2018228413A1 (zh) 2018-12-20

Family

ID=64660096

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/090992 WO2018228413A1 (zh) 2017-06-16 2018-06-13 一种目标对象抓拍方法、装置及视频监控设备

Country Status (4)

Country Link
US (1) US11107246B2 (zh)
EP (1) EP3641298B1 (zh)
CN (1) CN109151295B (zh)
WO (1) WO2018228413A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111372037A (zh) * 2018-12-25 2020-07-03 杭州海康威视数字技术股份有限公司 目标抓拍系统和方法

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111698413B (zh) * 2019-03-13 2021-05-14 杭州海康威视数字技术股份有限公司 一种对象的图像获取方法、装置及电子设备
CN110312100A (zh) * 2019-06-06 2019-10-08 西安中易建科技有限公司 安防监控方法及装置
CN110519510B (zh) * 2019-08-08 2021-02-02 浙江大华技术股份有限公司 一种抓拍方法、装置、球机及存储介质
CN111083444B (zh) * 2019-12-26 2021-10-15 浙江大华技术股份有限公司 一种抓拍方法、装置、电子设备及存储介质
CN111385476A (zh) * 2020-03-16 2020-07-07 浙江大华技术股份有限公司 一种拍照设备拍摄位置的调整方法及装置
US11908194B2 (en) * 2020-03-16 2024-02-20 New York University Tracking sparse objects and people in large scale environments
JP2022102461A (ja) * 2020-12-25 2022-07-07 株式会社リコー 動画生成装置、動画生成方法、プログラム、記憶媒体
CN113206956B (zh) * 2021-04-29 2023-04-07 维沃移动通信(杭州)有限公司 图像处理方法、装置、设备及存储介质
CN113592427A (zh) * 2021-06-29 2021-11-02 浙江大华技术股份有限公司 工时统计方法、工时统计装置及计算机可读存储介质

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1543200A (zh) * 2003-04-22 2004-11-03 松下电器产业株式会社 联合摄像机构成的监视装置
US20060056056A1 (en) * 2004-07-19 2006-03-16 Grandeye Ltd. Automatically expanding the zoom capability of a wide-angle video camera
CN102148965A (zh) * 2011-05-09 2011-08-10 上海芯启电子科技有限公司 多目标跟踪特写拍摄视频监控系统
CN102342099A (zh) * 2009-05-29 2012-02-01 (株)荣国电子 智能型监控摄像装置及采用该装置的影像监控系统

Family Cites Families (53)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH04373371A (ja) * 1991-06-24 1992-12-25 Matsushita Electric Ind Co Ltd 熱画像検出手段を有するビデオカメラシステム
US6108035A (en) * 1994-06-07 2000-08-22 Parkervision, Inc. Multi-user camera control system and method
EP0878965A3 (en) * 1997-05-14 2000-01-12 Hitachi Denshi Kabushiki Kaisha Method for tracking entering object and apparatus for tracking and monitoring entering object
US6215519B1 (en) * 1998-03-04 2001-04-10 The Trustees Of Columbia University In The City Of New York Combined wide angle and narrow angle imaging system and method for surveillance and monitoring
US20100002070A1 (en) * 2004-04-30 2010-01-07 Grandeye Ltd. Method and System of Simultaneously Displaying Multiple Views for Video Surveillance
US20050062845A1 (en) * 2003-09-12 2005-03-24 Mills Lawrence R. Video user interface system and method
US7542588B2 (en) * 2004-04-30 2009-06-02 International Business Machines Corporation System and method for assuring high resolution imaging of distinctive characteristics of a moving object
US8427538B2 (en) * 2004-04-30 2013-04-23 Oncam Grandeye Multiple view and multiple object processing in wide-angle video camera
JP4140567B2 (ja) * 2004-07-14 2008-08-27 松下電器産業株式会社 物体追跡装置および物体追跡方法
JP4140591B2 (ja) * 2004-09-13 2008-08-27 ソニー株式会社 撮像システム及び撮像方法
US20060126738A1 (en) * 2004-12-15 2006-06-15 International Business Machines Corporation Method, system and program product for a plurality of cameras to track an object using motion vector data
US20090041297A1 (en) * 2005-05-31 2009-02-12 Objectvideo, Inc. Human detection and tracking for security applications
WO2007014216A2 (en) * 2005-07-22 2007-02-01 Cernium Corporation Directed attention digital video recordation
US7681615B2 (en) * 2005-08-04 2010-03-23 The Boeing Company Tow width adaptable placement head device and method
US8471910B2 (en) * 2005-08-11 2013-06-25 Sightlogix, Inc. Methods and apparatus for providing fault tolerance in a surveillance system
US20070035628A1 (en) * 2005-08-12 2007-02-15 Kunihiko Kanai Image-capturing device having multiple optical systems
JP4188394B2 (ja) * 2005-09-20 2008-11-26 フジノン株式会社 監視カメラ装置及び監視カメラシステム
US20070291104A1 (en) * 2006-06-07 2007-12-20 Wavetronex, Inc. Systems and methods of capturing high-resolution images of objects
JP4655054B2 (ja) * 2007-02-26 2011-03-23 富士フイルム株式会社 撮像装置
NO327899B1 (no) * 2007-07-13 2009-10-19 Tandberg Telecom As Fremgangsmate og system for automatisk kamerakontroll
KR101006368B1 (ko) * 2008-07-22 2011-01-10 삼성전자주식회사 휴대 단말기의 카메라 조절 방법 및 장치
US8488001B2 (en) * 2008-12-10 2013-07-16 Honeywell International Inc. Semi-automatic relative calibration method for master slave camera control
US20100214445A1 (en) * 2009-02-20 2010-08-26 Sony Ericsson Mobile Communications Ab Image capturing method, image capturing apparatus, and computer program
US9215358B2 (en) * 2009-06-29 2015-12-15 Robert Bosch Gmbh Omni-directional intelligent autotour and situational aware dome surveillance camera system and method
KR100999056B1 (ko) * 2009-10-30 2010-12-08 (주)올라웍스 이미지 컨텐츠에 대해 트리밍을 수행하기 위한 방법, 단말기 및 컴퓨터 판독 가능한 기록 매체
US20110128385A1 (en) * 2009-12-02 2011-06-02 Honeywell International Inc. Multi camera registration for high resolution target capture
JP5538865B2 (ja) * 2009-12-21 2014-07-02 キヤノン株式会社 撮像装置およびその制御方法
BR112012019126A2 (pt) * 2010-02-01 2016-06-28 Younkook Electronics Co Ltd dispositivo de monitoramento e rastreamento e sistema de monitoramento remoto utilizando o mesmo.
AU2010201740B2 (en) * 2010-04-30 2013-03-07 Canon Kabushiki Kaisha Method, apparatus and system for performing a zoom operation
US9723260B2 (en) * 2010-05-18 2017-08-01 Polycom, Inc. Voice tracking camera with speaker identification
JP4978724B2 (ja) * 2010-09-27 2012-07-18 カシオ計算機株式会社 撮像装置、及びプログラム
CN101969548B (zh) * 2010-10-15 2012-05-23 中国人民解放军国防科学技术大学 基于双目摄像的主动视频获取方法及装置
KR101666397B1 (ko) * 2010-12-21 2016-10-14 한국전자통신연구원 객체 영상 획득 장치 및 방법
JP5834232B2 (ja) * 2011-01-17 2015-12-16 パナソニックIpマネジメント株式会社 撮像画像認識装置、撮像画像認識システム及び撮像画像認識方法
JP5791448B2 (ja) * 2011-09-28 2015-10-07 京セラ株式会社 カメラ装置および携帯端末
US9749594B2 (en) * 2011-12-22 2017-08-29 Pelco, Inc. Transformation between image and map coordinates
CN103780830B (zh) * 2012-10-17 2017-04-12 晶睿通讯股份有限公司 连动式摄影系统及其多摄影机的控制方法
US9210385B2 (en) * 2012-11-20 2015-12-08 Pelco, Inc. Method and system for metadata extraction from master-slave cameras tracking system
JP5882975B2 (ja) * 2012-12-26 2016-03-09 キヤノン株式会社 画像処理装置、撮像装置、画像処理方法、及び記録媒体
BR112015021770A2 (pt) * 2013-03-08 2017-08-22 Denso Corp Método para controlar aparelho de monitoramento
JP5866499B2 (ja) * 2014-02-24 2016-02-17 パナソニックIpマネジメント株式会社 監視カメラシステム及び監視カメラシステムの制御方法
EP2922288A1 (en) * 2014-03-18 2015-09-23 Thomson Licensing Method for processing a video sequence, corresponding device, computer program and non-transitory computer-readable medium
WO2015151095A1 (en) 2014-04-03 2015-10-08 Pixellot Ltd. Method and system for automatic television production
US20160078298A1 (en) * 2014-09-16 2016-03-17 Geovision Inc. Surveillance Method and Camera System Using the Same
US20160127695A1 (en) * 2014-10-30 2016-05-05 Motorola Solutions, Inc Method and apparatus for controlling a camera's field of view
AU2015203591A1 (en) * 2015-06-26 2017-01-19 Canon Kabushiki Kaisha System and method for object matching
US9781350B2 (en) * 2015-09-28 2017-10-03 Qualcomm Incorporated Systems and methods for performing automatic zoom
US20180070010A1 (en) * 2016-09-02 2018-03-08 Altek Semiconductor Corp. Image capturing apparatus and image zooming method thereof
US10402987B2 (en) * 2017-05-24 2019-09-03 Qualcomm Incorporated Methods and systems of determining object status for false positive removal in object tracking for video analytics
EP3419283B1 (en) * 2017-06-21 2022-02-16 Axis AB System and method for tracking moving objects in a scene
JP2019029998A (ja) * 2017-07-28 2019-02-21 Canon Inc. Imaging apparatus, control method of imaging apparatus, and control program
EP3451650B1 (en) * 2017-08-29 2020-01-08 Axis AB A method of calibrating a direction of a pan, tilt, zoom, camera with respect to a fixed camera, and a system in which such a calibration is carried out
NO344836B1 (en) * 2019-04-08 2020-05-18 Huddly As Interpolation based camera motion for transitioning between best overview frames in live video

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1543200A (zh) * 2003-04-22 2004-11-03 Matsushita Electric Industrial Co., Ltd. Surveillance apparatus composed of combined cameras
US20060056056A1 (en) * 2004-07-19 2006-03-16 Grandeye Ltd. Automatically expanding the zoom capability of a wide-angle video camera
CN102342099A (zh) * 2009-05-29 2012-02-01 Youngkook Electronics Co., Ltd. Intelligent surveillance camera apparatus and video surveillance system employing the same
CN102148965A (zh) * 2011-05-09 2011-08-10 Shanghai Xinqi Electronic Technology Co., Ltd. Multi-target tracking close-up shooting video surveillance system

Non-Patent Citations (1)

Title
See also references of EP3641298A4

Cited By (2)

Publication number Priority date Publication date Assignee Title
CN111372037A (zh) * 2018-12-25 2020-07-03 Hangzhou Hikvision Digital Technology Co., Ltd. Target capture system and method
CN111372037B (zh) * 2018-12-25 2021-11-02 Hangzhou Hikvision Digital Technology Co., Ltd. Target capture system and method

Also Published As

Publication number Publication date
US11107246B2 (en) 2021-08-31
US20200167959A1 (en) 2020-05-28
EP3641298B1 (en) 2022-10-26
EP3641298A4 (en) 2020-07-01
EP3641298A1 (en) 2020-04-22
CN109151295B (zh) 2020-04-03
CN109151295A (zh) 2019-01-04

Similar Documents

Publication Publication Date Title
WO2018228413A1 (zh) Target object capture method and apparatus, and video surveillance device
WO2018228410A1 (zh) Target object capture method and apparatus, and video surveillance device
US10445887B2 (en) Tracking processing device and tracking processing system provided with same, and tracking processing method
WO2017080399A1 (zh) Face position tracking method and apparatus, and electronic device
KR102101438B1 (ko) Multi-camera control apparatus and method for maintaining the position and size of an object in a continuous viewpoint switching service
US11196943B2 (en) Video analysis and management techniques for media capture and retention
US10474935B2 (en) Method and device for target detection
US8532337B2 (en) Object tracking method
JP5484184B2 (ja) Image processing apparatus, image processing method, and program
US20200267309A1 (en) Focusing method and device, and readable storage medium
JP7192582B2 (ja) Object tracking apparatus and object tracking method
CN109981972B (zh) Robot target tracking method, robot, and storage medium
KR20150032630A (ko) Control method in an imaging system, control apparatus, and computer-readable storage medium
US8406468B2 (en) Image capturing device and method for adjusting a position of a lens of the image capturing device
US20090043422A1 (en) Photographing apparatus and method in a robot
US10313596B2 (en) Method and apparatus for correcting tilt of subject occurred in photographing, mobile terminal, and storage medium
US9031355B2 (en) Method of system for image stabilization through image processing, and zoom camera including image stabilization function
JP6494418B2 (ja) Image analysis apparatus, image analysis method, and program
US20080226159A1 (en) Method and System For Calculating Depth Information of Object in Image
CN103607558A (zh) Video surveillance system and target matching method and apparatus thereof
TWI556651B (zh) 3D video surveillance system with automatic camera dispatching function and surveillance method thereof
JP2006259847A (ja) Automatic tracking apparatus and automatic tracking method
TWI736063B (zh) Object detection method and electronic device
CN102469247A (zh) Imaging apparatus and dynamic focusing method thereof
JP2020144607A (ja) Human detection apparatus and human detection method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 18817790; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
WWE Wipo information: entry into national phase (Ref document number: 2018817790; Country of ref document: EP)
ENP Entry into the national phase (Ref document number: 2018817790; Country of ref document: EP; Effective date: 20200116)