WO2018228410A1 - Target object capture method and apparatus, and video surveillance device - Google Patents

Target object capture method and apparatus, and video surveillance device

Info

Publication number
WO2018228410A1
Authority
WO
WIPO (PCT)
Prior art keywords
target object
target
camera
magnification
position information
Prior art date
Application number
PCT/CN2018/090987
Other languages
English (en)
French (fr)
Inventor
申琳
沈林杰
张尚迪
Original Assignee
杭州海康威视数字技术股份有限公司
Priority date
Filing date
Publication date
Application filed by 杭州海康威视数字技术股份有限公司 (Hangzhou Hikvision Digital Technology Co., Ltd.)
Priority to US16/622,568 (US11102417B2)
Priority to EP18818742.1A (EP3641304B1)
Publication of WO2018228410A1

Classifications

    • H04N 7/181: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast, for receiving images from a plurality of remote sources
    • H04N 7/18: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • G06T 7/20: Image analysis; analysis of motion
    • G06T 7/70: Image analysis; determining position or orientation of objects or cameras
    • H04N 23/54: Cameras or camera modules comprising electronic image sensors; mounting of pick-up tubes, electronic image sensors, deviation or focusing coils
    • H04N 23/61: Control of cameras or camera modules based on recognised objects
    • H04N 23/611: Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
    • H04N 23/64: Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image
    • H04N 23/69: Control of means for changing angle of the field of view, e.g. optical zoom objectives or electronic zooming
    • H04N 23/695: Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects
    • H04N 23/698: Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture
    • H04W 4/025: Services making use of location information using location based information parameters
    • H04W 4/027: Services making use of location information using movement velocity, acceleration information
    • G06T 2207/30232: Subject of image; context of image processing: surveillance
    • G06T 2207/30244: Subject of image; context of image processing: camera pose

Definitions

  • the present application relates to the field of image processing technologies, and in particular, to a target object capture method, device, and video monitoring device.
  • for a camera with a large monitoring range, the target in the surveillance image is usually small, resulting in problems such as the target object being unclear.
  • a detail camera, such as a dome camera (speed dome), can usually obtain a clear target object in the surveillance image, but its monitoring range tends to be small. Therefore, existing video monitoring equipment cannot provide both a large monitoring range and high target object capture quality.
  • the purpose of the embodiments of the present application is to provide a target object capture method, device, and video monitoring device, so as to improve the capture quality of the target object under the premise of ensuring the monitoring range.
  • the specific technical solutions are as follows:
  • an embodiment of the present application provides a method for capturing a target object, where the method includes:
  • the step of calculating the capture position information of each target object according to the first location information, the moving direction and speed information of each target object, and the preset detail camera position adjustment time includes:
  • the capture position information of each target object is determined according to the first position information of each target object and the corresponding position change information.
  • the step of determining the magnification corresponding to each target object according to the size of each target object includes:
  • the magnification corresponding to the angle of view is determined according to a preset correspondence between magnification and angle of view, and the determined magnification is used as the magnification corresponding to the target object.
  • the step of determining the tracking duration of each target object includes:
  • the preset condition includes: the sum of the tracking durations of the target objects is smaller than the departure time of any target object, the sum of the tracking durations of the target objects is the largest, and the variance of the tracking durations of the target objects is the smallest.
  • controlling the detail camera to adjust its position and magnification, and controlling the adjusted detail camera to track the target object.
  • optionally, before capturing the target object within the tracking duration corresponding to the target object, the method further includes: determining a capture priority of each target object in ascending order of departure time;
  • the step of capturing the target object then includes:
  • in descending order of capture priority, for each target object, controlling the detail camera to adjust its position and magnification according to the detailed camera position information and magnification corresponding to the target object, and controlling the adjusted detail camera to capture the target object within the tracking duration corresponding to the target object.
  • the step of determining the tracking duration of each target object includes:
  • optionally, before calculating the capture position information of each target object according to the first position information, moving direction and speed information of each target object, and the preset detail camera position adjustment time, the method further includes:
  • identifying, among the target objects, first target objects whose moving direction is toward the panoramic camera;
  • controlling the adjusted detail camera to capture the first target object within the tracking duration corresponding to the first target object.
  • the method further includes:
  • N images having the best image quality are identified and saved, wherein N is an integer greater than zero.
  • optionally, the step of determining a tracking duration of each target object and, for each target object, controlling the detail camera to adjust its position and magnification according to the detailed camera position information and magnification corresponding to the target object, and controlling the adjusted detail camera to capture the target object within the tracking duration corresponding to the target object, includes:
  • performing block processing on each target object according to the detailed camera position information and magnification corresponding to each target object, to obtain at least one target block, where each target block includes one or more target objects;
  • determining a tracking duration of each target block; for each target block, identifying second target objects at edge positions among the target objects included in the target block, determining the detailed camera position information corresponding to the target block according to the detailed camera position information corresponding to each second target object, and determining a magnification corresponding to the target block according to the magnification corresponding to each second target object;
  • controlling the detail camera to adjust its position and magnification according to the detailed camera position information and magnification corresponding to the target block, and controlling the adjusted detail camera to capture the target block within the tracking duration corresponding to the target block.
  • the step of determining the tracking duration of each target block includes:
  • the step of determining a magnification corresponding to the target block according to a magnification corresponding to each second target object includes:
  • the maximum value of the magnifications corresponding to the second target objects is used as the magnification corresponding to the target block, or the magnification of each second target object is multiplied by the corresponding weight to obtain a comprehensive magnification as the magnification corresponding to the target block.
  • the step of detecting the target object in the current panoramic video frame collected by the panoramic camera includes:
  • a target object in the current panoramic video frame acquired by the panoramic camera and not present in the previous video frame is detected.
  • an embodiment of the present application provides a target object capture device, where the device includes:
  • a detecting module configured to detect a target object in a current panoramic video frame collected by the panoramic camera, and determine first position information, size, moving direction, and speed information of each target object in the current panoramic video frame;
  • a calculation module configured to calculate snap position information of each target object according to first position information, moving direction and speed information of each target object, and preset detail camera position adjustment time;
  • a first determining module configured to determine, according to the capture position information of each target object, the position mapping relationship between the pre-constructed panoramic camera and the detail camera, the detailed camera position information corresponding to each target object, and determine each according to the size of each target object The magnification corresponding to the target object;
  • control module configured to determine a tracking duration of each target object, for each target object, according to the detailed camera position information and magnification corresponding to the target object, controlling the detail camera to adjust its position and magnification, and controlling the adjusted detail camera
  • the target object is captured within the tracking duration corresponding to the target object.
  • the computing module includes:
  • a first determining submodule configured to determine position change information of each target object according to speed information of each target object, a moving direction, and a preset detail camera position adjustment time
  • the second determining submodule is configured to determine, according to the first location information of each target object and the corresponding location change information, the capture location information of each target object.
  • the first determining module includes:
  • a third determining submodule configured to determine, according to the size of the target object, a corresponding field of view for each target object
  • a fourth determining submodule configured to determine a magnification corresponding to the field of view according to a preset correspondence between the magnification and the angle of view, and use the determined magnification as a magnification corresponding to the target object.
  • control module includes:
  • a first calculation sub-module configured to determine a distance of each target object from the edge of the monitoring scene according to the moving direction of each target object, and calculate the departure time of each target object according to the distance of each target object from the edge of the monitoring scene and the speed corresponding to each target object;
  • a second calculation sub-module configured to calculate a tracking duration of each target object according to the departure time of each target object and a preset condition; wherein the preset condition includes: the sum of the tracking durations of each target object is less than any target The departure time of the object, the sum of the tracking durations of each target object is the largest, and the variance of the tracking duration of each target object is the smallest.
  • the device further includes:
  • a second determining module configured to determine a snap priority of each target object according to an order of departure time of each target object from small to large;
  • the control module is specifically configured to, in descending order of capture priority, for each target object, control the detail camera to adjust its position and magnification according to the detailed camera position information and magnification corresponding to the target object, and control the adjusted detail camera to capture the target object within the tracking duration corresponding to the target object.
  • control module is configured to obtain a preset tracking duration, and use the acquired tracking duration as a tracking duration of each target object.
  • the device further includes:
  • An identification module configured to identify, in each target object, a first target object whose moving direction is moving toward the panoramic camera;
  • the calculating module is configured to calculate snap position information of each first target object according to the first position information, the moving direction and speed information of each first target object, and the preset detailed camera position adjustment time;
  • the first determining module is configured to determine, according to the captured position information of each of the first target objects, and the position mapping relationship between the pre-built panoramic camera and the detailed camera, the detailed camera position information corresponding to each of the first target objects, and according to each The size of the first target object determines a magnification corresponding to each of the first target objects;
  • the control module is configured to determine a tracking duration of each first target object, and for each first target object, control the detail camera to adjust its position and magnification according to the detailed camera position information and the magnification corresponding to the first target object. And controlling the adjusted detail camera to capture the first target object within the tracking duration corresponding to the first target object.
  • the device further includes:
  • An acquiring module configured to acquire, according to any target object, multiple images corresponding to the target object collected by the detail camera;
  • a storage module configured to identify and save N images with optimal image quality in the plurality of images, where N is an integer greater than 0.
  • control module includes:
  • a blocking submodule for performing block processing on each target object according to the detailed camera position information and magnification corresponding to each target object, to obtain at least one target block, wherein each target block includes one or more target objects;
  • a fifth determining submodule configured to determine a tracking duration of each target block and, for each target block, identify the second target objects at edge positions among the target objects included in the target block, determine the detailed camera position information corresponding to the target block according to the detailed camera position information corresponding to each second target object, and determine a magnification corresponding to the target block according to the magnification corresponding to each second target object;
  • a control submodule configured to, for each target block, control the detail camera to adjust its position and magnification according to the detailed camera position information and magnification corresponding to the target block, and control the adjusted detail camera to capture the target block within the tracking duration corresponding to the target block.
  • the fifth determining submodule includes:
  • a determining subunit configured to, for each target block, determine, according to the moving directions of the target objects included in the target block, the third target objects that share the same moving direction and are the most numerous, and determine the distance of the target block from the edge of the monitoring scene according to the moving direction of the third target objects;
  • a first calculating subunit configured to calculate a departure time of each target block according to the distance of each target block from the edge of the monitoring scene and the average speed of the third target objects included in each target block;
  • a second calculating subunit configured to calculate a tracking duration of each target block according to the departure time of each target block and a preset condition; wherein the preset condition includes: the sum of the tracking durations of the target blocks is smaller than the departure time of any target block, the sum of the tracking durations of the target blocks is the largest, and the variance of the tracking durations of the target blocks is the smallest.
  • the fifth determining sub-module is specifically configured to use the maximum value of the magnifications corresponding to the second target objects as the magnification corresponding to the target block, or to multiply the magnification of each second target object by a corresponding weight to obtain a comprehensive magnification as the magnification corresponding to the target block.
  • the detecting module is specifically configured to detect a target object in the current panoramic video frame collected by the panoramic camera and not present in the previous panoramic video frame.
  • an embodiment of the present application provides a video monitoring device, including a panoramic camera, a detail camera, and a processor;
  • the panoramic camera is configured to collect a current panoramic video frame, and send the current panoramic video frame to the processor;
  • the processor is configured to detect a target object in the current panoramic video frame, determine first location information, size, moving direction, and speed information of each target object in the current panoramic video frame; First position information, moving direction and speed information, and preset detail camera position adjustment time, calculating snap position information of each target object; capturing position information according to each target object, and pre-built panoramic camera and detail camera position Mapping relationship, determining detailed camera position information corresponding to each target object, determining a magnification corresponding to each target object according to the size of each target object; determining a tracking duration of each target object, and corresponding to the target object for each target object Detail camera position information and magnification are sent to the detail camera;
  • the detail camera is configured to adjust its own position and magnification according to the received detailed camera position information and magnification corresponding to the target object, and control the adjusted detail camera to capture the target object within the tracking duration corresponding to the target object.
  • the processor is specifically configured to determine location change information of each target object according to speed information, a moving direction, and a preset detail camera position adjustment time of each target object; and according to the first position of each target object The information, and the corresponding position change information, determine the capture position information of each target object.
  • the processor is specifically configured to determine, according to a size of the target object, a corresponding field of view for each target object, and determine the field of view according to a preset relationship between the magnification and the angle of view. Corresponding magnification, and the determined magnification is taken as the magnification corresponding to the target object.
  • the processor is specifically configured to determine, according to the moving direction of each target object, the distance of each target object from the edge of the monitoring scene, calculate the departure time of each target object according to that distance and the speed corresponding to each target object, and calculate the tracking duration of each target object according to the departure time of each target object and a preset condition.
  • optionally, the processor is further configured to, before controlling the adjusted detail camera to capture the target object within the tracking duration corresponding to the target object for each target object, determine a capture priority of each target object in ascending order of the departure time of each target object;
  • the processor is specifically configured to send the detailed camera position information and the magnification corresponding to the target object to the detail camera for each target object according to the order of the capture priority from high to low;
  • the detail camera is specifically configured to adjust its own position and magnification according to the received detailed camera position information and magnification corresponding to the target object, and control the adjusted detail camera to capture the target within the tracking duration corresponding to the target object. Object.
  • the processor is specifically configured to obtain a preset tracking duration, and use the acquired tracking duration as a tracking duration of each target object.
  • optionally, the processor is further configured to, before calculating the capture position information of each target object according to the first position information, moving direction and speed information of each target object, and the preset detail camera position adjustment time, identify, among the target objects, the first target objects whose moving direction is toward the panoramic camera;
  • the processor is specifically configured to calculate the capture position information of each first target object according to the first position information, moving direction and speed information of each first target object, and the preset detail camera position adjustment time; determine the detailed camera position information corresponding to each first target object according to the capture position information of each first target object and the position mapping relationship between the pre-built panoramic camera and the detail camera; determine the magnification corresponding to each first target object according to the size of each first target object; and determine a tracking duration of each first target object and, for each first target object, send the detailed camera position information and magnification corresponding to the first target object to the detail camera;
  • the detail camera is configured to adjust its own position and magnification according to the received detailed camera position information and magnification corresponding to the first target object, and control the adjusted detail camera to be within the tracking duration of the first target object. Capture the first target object.
  • optionally, the processor is further configured to acquire, for any target object, a plurality of images of the target object collected by the detail camera, and to identify and save the N images with the best image quality among the plurality of images, where N is an integer greater than zero.
  • optionally, the processor is specifically configured to perform block processing on the target objects according to the detailed camera position information and magnification corresponding to each target object, to obtain at least one target block, where each target block includes one or more target objects; determine a tracking duration of each target block; for each target block, identify the second target objects at edge positions among the target objects included in the target block, determine the detailed camera position information corresponding to the target block according to the detailed camera position information corresponding to each second target object, and determine the magnification corresponding to the target block according to the magnification corresponding to each second target object; and, for each target block, send the detailed camera position information and magnification corresponding to the target block to the detail camera;
  • the detail camera is specifically configured to adjust its own position and magnification according to the received detailed camera position information and magnification corresponding to the target block, and the adjusted detail camera captures the target block within the tracking duration corresponding to the target block.
  • optionally, the processor is specifically configured to, for each target block, determine, according to the moving directions of the target objects included in the target block, the third target objects that share the same moving direction and are the most numerous, and determine the distance of the target block from the edge of the monitoring scene according to the moving direction of the third target objects; calculate the departure time of each target block according to the distance of each target block from the edge of the monitoring scene and the average speed of the third target objects included in each target block; and calculate the tracking duration of each target block according to the departure time of each target block and a preset condition, where the preset condition includes: the sum of the tracking durations of the target blocks is smaller than the departure time of any target block, the sum of the tracking durations of the target blocks is the largest, and the variance of the tracking durations of the target blocks is the smallest.
  • optionally, the processor is specifically configured to use the maximum value of the magnifications corresponding to the second target objects as the magnification corresponding to the target block, or to multiply the magnification of each second target object by a corresponding weight to obtain a comprehensive magnification as the magnification corresponding to the target block.
  • the processor is specifically configured to detect a target object in the current panoramic video frame collected by the panoramic camera and not present in the previous video frame.
  • the present application provides a storage medium, wherein the storage medium is configured to store executable program code for executing, at runtime, the target object capture method according to the first aspect of the present application.
  • the present application provides an application, wherein the application is configured to execute a target object capture method according to the first aspect of the present application at runtime.
  • a target object capture method, device, and video monitoring device provided by the embodiment of the present application, the method includes: detecting a target object in a current panoramic video frame collected by a panoramic camera, and determining each target object in the current panoramic video frame. First position information, size, moving direction and speed information; calculating snap position information of each target object according to first position information, moving direction and speed information of each target object, and preset detail camera position adjustment time; Determining the detailed camera position information corresponding to each target object according to the capture position information of each target object and the position mapping relationship of the pre-built panoramic camera and the detail camera, and determining the magnification corresponding to each target object according to the size of each target object; Tracking duration of each target object, for each target object, controlling the detail camera to adjust its position and magnification according to the detailed camera position information and magnification corresponding to the target object, and controlling the adjusted detail camera corresponding to the target object Capture the target object within the tracking time.
  • in this way, before capture is performed, the position and magnification of the detail camera can be adjusted according to the specific position information, size, moving direction, speed information, and the like of each target object, and each target object can be captured multiple times within the tracking duration corresponding to it, so that the capture quality of the target object can be improved under the premise of ensuring the monitoring range.
  • FIG. 1 is a flowchart of a method for capturing a target object according to an embodiment of the present application
  • FIG. 2 is a schematic diagram of a panoramic video frame according to an embodiment of the present application.
  • FIG. 3 is a schematic diagram of location information of a target object in a panoramic video frame according to an embodiment of the present application
  • FIG. 4 is another flowchart of a target object capture method according to an embodiment of the present application.
  • FIG. 5 is a schematic diagram of a result of blocking a target object according to an embodiment of the present application.
  • FIG. 6 is a schematic diagram of a result of determining a second target object in a target block according to an embodiment of the present application
  • FIG. 7 is a schematic structural diagram of a target object capture device according to an embodiment of the present application.
  • FIG. 8 is a schematic structural diagram of a video monitoring device according to an embodiment of the present application.
  • FIG. 1 illustrates a flow of a target object capture method according to an embodiment of the present application, and the method may include the following steps:
  • S101 Detect a target object in a current panoramic video frame collected by the panoramic camera, and determine first location information, size, moving direction, and speed information of each target object in the current panoramic video frame.
  • the video monitoring device of the embodiment of the present application may at least include a panoramic camera, a detail camera, and a processor.
  • the panoramic camera can be a camera with a large monitoring range, such as a box (bullet) camera, a fisheye camera, etc.
  • the detail camera can be a camera capable of adjusting the capture magnification, such as a dome camera (speed dome).
  • the position of the detail camera can also be adjusted, so that the monitoring range and the size of the target object in the acquired image can be adjusted.
  • the panoramic camera can collect the panoramic video frame.
  • the panoramic camera can periodically collect panoramic video frames at preset time intervals.
  • the panoramic camera can send the acquired current panoramic video frame to the processor.
  • the processor can detect the target object in the current panoramic video frame.
  • the processor may use a target detection algorithm such as DPM (Deformable Parts Model) or Faster R-CNN (Faster Region-based Convolutional Neural Network) to detect target objects in the current panoramic video frame.
  • the target object may be a person, a vehicle, or the like.
  • the target object capture method provided by the embodiment of the present application is described by taking the target object as a human.
  • the current panoramic video frame acquired by the panoramic camera includes target objects 1, 2, 3, 4, 5, 6, 7, 8, 9, and 10.
  • the processor can also determine first position information, size, moving direction, and speed information of each target object in the current panoramic video frame. For example, for each target object, the processor may determine the rectangular area where the target object is located and, according to a preset coordinate system, take the coordinates of the upper left corner and the lower right corner of the rectangular area as the first position information of the target object. Correspondingly, the processor can determine the size of the rectangular area where the target object is located as the size of the target object.
  • taking target object 1 as an example, the rectangular area in which it is located is area 210, and according to the coordinate system constructed in the figure, the first position information of target object 1 can be the coordinate information of the upper left corner 220 and the lower right corner 230 of area 210.
  • correspondingly, the size of target object 1 may be the size of area 210.
  • for any target object, the processor may first determine whether the target object exists in a previously acquired panoramic video frame, for example in the previous video frame; if so, the moving direction and speed information of the target object can be determined from the multiple video frames.
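  • For illustration only, the following sketch shows one way such detections could be represented and how the moving direction and speed might be derived from two consecutive panoramic frames; the data structures, field names, and pixel-based units are assumptions, not taken from the patent.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class Detection:
    # First position information: upper-left and lower-right corners of the
    # rectangular area where the target object is located (pixel coordinates).
    top_left: Tuple[float, float]
    bottom_right: Tuple[float, float]

    @property
    def size(self) -> Tuple[float, float]:
        # Size of the target object = size of its rectangular area.
        return (self.bottom_right[0] - self.top_left[0],
                self.bottom_right[1] - self.top_left[1])

    @property
    def center(self) -> Tuple[float, float]:
        return ((self.top_left[0] + self.bottom_right[0]) / 2,
                (self.top_left[1] + self.bottom_right[1]) / 2)

def estimate_motion(prev: Detection, curr: Detection, frame_interval_s: float):
    """Estimate moving direction (unit vector) and speed (pixels/second) of a
    target object that appears in two consecutive panoramic video frames."""
    dx = curr.center[0] - prev.center[0]
    dy = curr.center[1] - prev.center[1]
    dist = (dx ** 2 + dy ** 2) ** 0.5
    speed = dist / frame_interval_s
    direction = (dx / dist, dy / dist) if dist > 0 else (0.0, 0.0)
    return direction, speed
```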
  • S102 Calculate snap position information of each target object according to the first position information, the moving direction and speed information of each target object, and the preset detailed camera position adjustment time.
  • the position of the detail camera is adjustable. Also, it takes a certain amount of time to adjust its position. This time can be pre-set and saved in the processor.
  • since each target object moves at its own speed, by the time the detail camera has finished adjusting, a target object may no longer be at its location in the current panoramic video frame when it is captured by the detail camera.
  • therefore, the processor may calculate, according to the first position information, moving direction and speed information of each target object, and the preset detail camera position adjustment time, the capture position information of each target object, that is, the position information in the panoramic video frame at the moment each target object is captured by the detail camera.
  • specifically, the processor may determine the position change information of each target object according to the speed information, moving direction, and preset detail camera position adjustment time of each target object. Then, based on the first position information of each target object and the corresponding position change information, the capture position information of each target object may be determined.
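  • A minimal sketch of this prediction step, assuming the position, direction, and speed are expressed in panoramic-frame pixel coordinates (the patent does not fix the units or function names):

```python
def capture_position(first_pos, direction, speed, adjust_time_s):
    """Predict where a target object will be (in panoramic-frame coordinates)
    once the detail camera has finished adjusting its position.

    first_pos     -- (x, y) position of the target in the current panoramic frame
    direction     -- unit vector of the moving direction
    speed         -- speed in pixels per second
    adjust_time_s -- preset detail camera position adjustment time, in seconds
    """
    # Position change information = speed * adjustment time along the direction.
    dx = direction[0] * speed * adjust_time_s
    dy = direction[1] * speed * adjust_time_s
    return (first_pos[0] + dx, first_pos[1] + dy)
```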
  • S103 Determine, according to the capture position information of each target object, the position mapping relationship between the pre-constructed panoramic camera and the detail camera, determine the detailed camera position information corresponding to each target object, and determine the magnification corresponding to each target object according to the size of each target object. .
  • the position mapping relationship between the panoramic camera and the detail camera may be constructed in advance.
  • for example, when the position information of a target object in the panoramic video frame collected by the panoramic camera is a1, the corresponding detail camera position information is b1; when the position information in the panoramic video frame is a2, the corresponding detail camera position information is b2, and so on.
  • the position information of the detail camera may include its horizontal direction position information and vertical direction position information.
  • the detailed camera position information corresponding to each target object may be determined according to the capture position information of each target object and the position mapping relationship between the pre-built panoramic camera and the detail camera. That is to say, the position where the detail camera is used to capture each target object is determined.
  • the processor may search for the capture position information of the target object in the position mapping relationship between the pre-stored panoramic camera and the detail camera, and use the position information of the detail camera corresponding to the capture position information as the The detailed camera position information corresponding to the target object.
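  • The lookup could be sketched as follows, assuming the pre-built mapping is stored as a small table from panoramic-frame positions to detail camera pan/tilt values and queried by nearest neighbour; the table entries and the lookup strategy are illustrative assumptions.

```python
# Pre-built position mapping relationship between the panoramic camera and the
# detail camera: panoramic-frame position -> detail camera (pan, tilt) position.
# The granularity of the table and the nearest-neighbour lookup are assumptions;
# the patent only requires that such a mapping exists.
POSITION_MAP = {
    (100, 200): (12.5, -3.0),   # example entries: (x, y) -> (pan_deg, tilt_deg)
    (400, 200): (20.0, -3.5),
    (400, 500): (21.0, -10.0),
}

def detail_camera_position(capture_pos):
    """Return the detail camera position corresponding to a capture position by
    looking up the closest entry in the pre-built mapping."""
    def dist2(p):
        return (p[0] - capture_pos[0]) ** 2 + (p[1] - capture_pos[1]) ** 2
    nearest = min(POSITION_MAP, key=dist2)
    return POSITION_MAP[nearest]
```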
  • the magnification of the detail camera can be adjusted.
  • the processor may determine a magnification corresponding to each target object according to the size of each target object.
  • for example, a person whose pixel width in the image reaches 240 pixels may be taken as the standard for identifiable detail.
  • the processor can determine the magnification corresponding to the target objects of different sizes. For example, for a larger target object, the magnification of the detail camera can be adjusted to a smaller value to capture a complete target object; for a smaller target object, the magnification of the detail camera can be adjusted to a larger value to obtain Increase the clarity of the target object as large as possible.
  • the processor may determine, according to the size of the target object, a corresponding field of view for each target object, and then determine the field of view according to a preset relationship between the magnification and the angle of view. Corresponding magnification, and the determined magnification is taken as the magnification corresponding to the target object.
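  • A hedged sketch of deriving the magnification from the target size via the angle of view, assuming a simple pinhole-style scaling and an illustrative magnification-to-field-of-view table (none of these values come from the patent):

```python
# Preset correspondence between magnification and horizontal angle of view
# (degrees). The values are illustrative, not taken from the patent.
ZOOM_TO_FOV = [(1, 60.0), (5, 12.0), (10, 6.0), (20, 3.0), (30, 2.0)]

def magnification_for_target(target_pixel_width, frame_width=1920,
                             panoramic_fov_deg=90.0, required_pixels=240):
    """Choose a detail camera magnification so that the target (e.g. a person)
    spans roughly `required_pixels` in the detail image. The scaling model is a
    simplification; the patent only states that a field of view is derived from
    the target size and then mapped to a magnification."""
    # Angle currently subtended by the target in the panoramic frame.
    target_angle = panoramic_fov_deg * target_pixel_width / frame_width
    # Field of view needed so the target fills `required_pixels` of the frame.
    needed_fov = target_angle * frame_width / required_pixels
    # Pick the magnification whose preset field of view best matches the need:
    # larger targets get a smaller zoom, smaller targets a larger zoom.
    zoom, _ = min(ZOOM_TO_FOV, key=lambda zf: abs(zf[1] - needed_fov))
    return zoom
```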
  • S104 Determine a tracking duration of each target object. For each target object, control the detail camera to adjust its position and magnification according to the detailed camera position information and magnification corresponding to the target object, and control the adjusted detail camera at the target. The target object is captured within the tracking duration of the object.
  • the processor may determine the tracking duration of each target object before capturing the target object, and may perform multiple captures on each target object within the tracking duration of each target object.
  • the processor may preset the tracking duration of each target object to be the same value, obtain a preset tracking duration before performing the target object capture, and use the acquired tracking duration as the tracking duration of each target object.
  • the processor may control the detail camera to adjust its position and magnification according to the detailed camera position information and the magnification corresponding to the target object for each target object, and control the adjusted detail camera within the tracking time corresponding to the target object. Capture the target object.
  • the processor may sequentially transmit a snap command including the detailed camera position information and the magnification corresponding to the target object to the detail camera for each target object in the detection order of each target object.
  • the detail camera can adjust its own position and magnification according to the detailed camera position information and magnification contained therein, and capture the target object within the tracking duration corresponding to the target object.
  • the detail camera can capture the target object according to a fixed capture frequency within the tracking duration corresponding to any target object.
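  • One way such a capture loop might look, assuming a minimal detail camera interface and a fixed capture frequency (the interface and parameter names are hypothetical):

```python
import time

class DetailCamera:
    """Assumed minimal interface to the detail camera; the real device API is
    not specified in the patent text."""
    def move_to(self, pan_tilt, magnification): ...
    def snapshot(self): ...

def capture_targets(camera: DetailCamera, targets, capture_hz=5.0):
    """For each target (already ordered, e.g. by detection order or capture
    priority), point and zoom the detail camera, then capture repeatedly at a
    fixed frequency within the target's tracking duration."""
    images = {}
    for t in targets:
        camera.move_to(t["detail_position"], t["magnification"])
        images[t["id"]] = []
        deadline = time.monotonic() + t["tracking_duration_s"]
        while time.monotonic() < deadline:
            images[t["id"]].append(camera.snapshot())
            time.sleep(1.0 / capture_hz)
    return images
```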
  • in this way, before capture is performed, the position and magnification of the detail camera can be adjusted according to the specific position information, size, moving direction, speed information, and the like of each target object, and each target object can be captured multiple times within the tracking duration corresponding to it, thereby improving the capture quality of the target object under the premise of ensuring the monitoring range.
  • optionally, when detecting target objects in the current panoramic video frame, the processor may detect target objects that are in the current panoramic video frame collected by the panoramic camera but not present in the previous panoramic video frame.
  • optionally, the processor may determine the distance of each target object from the edge of the monitoring scene according to the moving direction of each target object, calculate the departure time of each target object according to the distance of each target object from the edge of the monitoring scene and the speed of each target object, and then calculate the tracking duration of each target object according to the departure time of each target object and a preset condition; the preset condition includes: the sum of the tracking durations of the target objects is smaller than the departure time of any target object, the sum of the tracking durations of the target objects is the largest, and the variance of the tracking durations of the target objects is the smallest.
  • for example, the processor can determine the tracking duration T_k(i) of each target object i from the departure times by choosing the T_k(i) that maximize the sum of the tracking durations subject to that sum being smaller than the departure time T_l(n) of every target object n, while minimizing the variance of the T_k(i), where T_l(n) is the departure time of any target object.
  • in this way, when determining the tracking duration of each target object, it can be ensured that each target object is allocated a certain tracking duration, so that every target object can be captured; moreover, the total tracking duration is as long as possible while the differences between the tracking durations of the target objects are kept small, preventing a locally optimal allocation that favors some targets over others.
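  • Under the stated preset condition, one valid allocation is an (almost) equal split of the smallest departure time; the sketch below shows that allocation, with the safety margin being an assumption rather than the patent's exact formula:

```python
def allocate_tracking_durations(departure_times, margin_s=0.5):
    """Allocate a tracking duration to every target object so that
    (1) the sum of all tracking durations stays below the smallest departure
        time (so no target leaves the scene before its turn),
    (2) the total tracking time is as large as possible, and
    (3) the durations are as equal as possible (minimum variance).

    departure_times -- dict: target id -> departure time in seconds
    """
    if not departure_times:
        return {}
    # Equal split of the smallest departure time (minus a small margin) meets
    # all three conditions above.
    budget = max(min(departure_times.values()) - margin_s, 0.0)
    per_target = budget / len(departure_times)
    return {tid: per_target for tid in departure_times}
```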
  • the processor may determine the capture priority of each target object, and then may capture each target object according to the priority order.
  • the processor may determine the capture priority of each target object according to the order of departure time from small to large. That is to say, the target object with a smaller departure time has a higher priority for capturing.
  • furthermore, the processor may, in descending order of capture priority, for each target object, control the detail camera to adjust its position and magnification according to the detailed camera position information and magnification corresponding to the target object, and control the adjusted detail camera to capture the target object within the tracking duration corresponding to the target object.
  • in this way, the capture priority of each target object can be determined according to its departure time, and the target objects are captured in order of capture priority, so that each target object can be captured, as far as possible, before it reaches the edge position, which improves the capture quality of each target object.
  • optionally, the processor may identify, among the target objects, the first target objects whose moving direction is toward the panoramic camera; for example, the processor may identify the first target objects moving toward the panoramic camera according to the moving direction of each target object.
  • in this case, the processor may capture only the first target objects. Specifically, the processor may calculate the capture position information of each first target object according to the first position information, moving direction and speed information of each first target object, and the preset detail camera position adjustment time; determine, according to the capture position information of each first target object and the position mapping relationship between the pre-built panoramic camera and the detail camera, the detailed camera position information corresponding to each first target object; determine the magnification corresponding to each first target object according to the size of each first target object; determine the tracking duration of each first target object; and, for each first target object, control the detail camera to adjust its position and magnification according to the detailed camera position information and magnification corresponding to the first target object, and control the adjusted detail camera to capture the first target object within the tracking duration corresponding to the first target object.
  • optionally, after the processor controls the detail camera to capture each target object, it may also identify, for each target object, one or more higher-quality images among the corresponding captured images and save them; the saved images can then be subjected to feature extraction and the like.
  • specifically, for any target object, the processor may acquire the multiple images of the target object collected by the detail camera, and thereby identify and save the N images with the best image quality among the multiple images, where N is an integer greater than 0.
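  • The patent does not name a quality metric; as an assumption, the sketch below ranks the captured images by sharpness (variance of the Laplacian, a common measure) and keeps the best N:

```python
import cv2
import numpy as np

def best_n_images(images, n):
    """Keep the N images with the best quality among those captured for one
    target object. The quality measure here (Laplacian variance, i.e. image
    sharpness) is an assumption; the patent only says the best-quality images
    are identified and saved."""
    def sharpness(img: np.ndarray) -> float:
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        return cv2.Laplacian(gray, cv2.CV_64F).var()
    return sorted(images, key=sharpness, reverse=True)[:n]
```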
  • the processor may perform block processing on each target object in the current panoramic video frame.
  • the process of capturing a target object by the processor may include the following steps:
  • S401 Perform block processing on each target object according to the detailed camera position information and the magnification corresponding to each target object, to obtain at least one target block, where each target block includes one or more target objects.
  • the processor may perform block processing on each target object according to the detailed camera position information and the magnification corresponding to each target object, to obtain at least one target block, where each target block includes one or more targets. Object.
  • specifically, the detail camera corresponds to a different position range at each magnification. After the detailed camera position information and magnification corresponding to each target object are obtained, all target objects whose corresponding detail camera positions fall within one detail camera position range and whose magnifications are within a certain range of each other (for example, within 0.5 times) are grouped together, finally forming the different target blocks.
  • each target block obtained may be as shown in FIG. 5.
  • as shown in FIG. 5, the target objects can be divided into four blocks: target objects 7, 8, 9 and 10 form one block; target objects 2, 3 and 4 form one block; target objects 5 and 6 form one block; and target object 1 forms a block by itself.
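  • A greedy grouping along these lines could be sketched as follows; the range thresholds, field names, and the greedy strategy itself are assumptions, since the patent only requires that targets sharing a detail camera position range and similar magnifications end up in one block:

```python
def block_targets(targets, pan_range_deg=20.0, tilt_range_deg=15.0,
                  zoom_tolerance=0.5):
    """Greedily group target objects into target blocks: a target joins a block
    if its detail camera position fits in the same view as every member of the
    block and its magnification differs by no more than `zoom_tolerance`
    (e.g. 0.5x).

    targets -- list of dicts with keys 'id', 'pan', 'tilt', 'magnification'
    """
    blocks = []
    for t in targets:
        placed = False
        for block in blocks:
            if all(abs(t["pan"] - o["pan"]) <= pan_range_deg and
                   abs(t["tilt"] - o["tilt"]) <= tilt_range_deg and
                   abs(t["magnification"] - o["magnification"]) <= zoom_tolerance
                   for o in block):
                block.append(t)
                placed = True
                break
        if not placed:
            blocks.append([t])  # start a new target block
    return blocks
```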
  • S402. Determine a tracking duration of each target block. For each target block, identify a second target object at an edge position among each target object included in the target block, and according to detailed camera position information corresponding to each second target object. Determining the detailed camera position information corresponding to the target block, and determining a magnification corresponding to the target block according to a magnification corresponding to each second target object.
  • the processor can determine the tracking duration of each target block. Specifically, for any target block, the processor may first determine a third target object with the same moving direction and the largest number according to the moving direction of each target object included in the target block, and according to the moving direction of each third target object. , determine the distance of the target block from the edge of the monitoring scene.
  • for example, if five of the target objects in a block are moving toward the camera, those five target objects can be determined as the third target objects.
  • then, taking the moving direction of the third target objects as the moving direction of the target block, the processor may determine the distance of the target block from the edge of the monitoring scene.
  • next, the processor may calculate the departure time of each target block according to the distance of each target block from the edge of the monitoring scene and the average speed of the third target objects included in each target block, and then calculate the tracking duration of each target block according to the departure time of each target block and the preset condition.
  • the preset condition includes: the sum of the tracking durations of the target blocks is smaller than the departure time of any target block, the sum of the tracking durations of the target blocks is the largest, and the variance of the tracking durations of the target blocks is the smallest.
  • the process of determining the tracking duration of each target block by the processor may refer to the process of determining the tracking duration of each target object, and details are not described herein.
  • the processor may further identify, in each target block, a second target object at an edge position among each target object included in the target block.
  • FIG. 6 it shows a schematic diagram including a plurality of target objects in a target block.
  • the processor can recognize that the second target objects at the edge positions are the target objects 610, 620, 650, and 660, respectively.
  • then, the processor may determine the detailed camera position information corresponding to the target block according to the detailed camera position information corresponding to the second target objects, and determine the magnification corresponding to the target block according to the magnifications corresponding to the second target objects.
  • the maximum value of the magnifications corresponding to the second target objects may be used as the magnification corresponding to the target block, or the magnification of each second target object may be multiplied by the corresponding weight to obtain a comprehensive magnification as the magnification corresponding to the target block.
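  • The following sketch derives the block's detail camera position and magnification from the edge (second) target objects; taking the centre of the edge positions as the block position is an assumption, while the maximum-or-weighted magnification rule follows the description above:

```python
def edge_targets(block):
    """Identify the second target objects: the targets lying at the edge
    positions of the block (extreme pan/tilt values in detail camera
    coordinates)."""
    picks = [min(block, key=lambda t: t["pan"]),
             max(block, key=lambda t: t["pan"]),
             min(block, key=lambda t: t["tilt"]),
             max(block, key=lambda t: t["tilt"])]
    unique, seen = [], set()
    for t in picks:
        if t["id"] not in seen:
            seen.add(t["id"])
            unique.append(t)
    return unique

def block_position_and_magnification(block, weights=None):
    """Detail camera position for the block: centre of the edge targets'
    positions (an assumption; the patent only says it is determined from them).
    Magnification for the block: the maximum of the edge targets'
    magnifications, or their weighted combination if weights are given."""
    edges = edge_targets(block)
    pan = sum(t["pan"] for t in edges) / len(edges)
    tilt = sum(t["tilt"] for t in edges) / len(edges)
    if weights is None:
        zoom = max(t["magnification"] for t in edges)
    else:
        # weights must contain one value per edge target
        zoom = sum(w * t["magnification"] for w, t in zip(weights, edges))
    return (pan, tilt), zoom
```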
  • finally, for each target block, the processor may control the detail camera to adjust its position and magnification according to the detailed camera position information and magnification corresponding to the target block, and control the adjusted detail camera to capture the target block within the tracking duration corresponding to the target block.
  • the processor can perform block processing on each target object, and then can capture each target block to improve the capture efficiency.
  • the embodiment of the present application further provides a target object capture device.
  • the device includes:
  • the detecting module 710 is configured to detect a target object in a current panoramic video frame collected by the panoramic camera, and determine first location information, size, moving direction, and speed information of each target object in the current panoramic video frame.
  • the calculating module 720 is configured to calculate snap position information of each target object according to the first position information, the moving direction and speed information of each target object, and the preset detailed camera position adjustment time;
  • the first determining module 730 is configured to determine, according to the capture position information of each target object and the position mapping relationship between the pre-built panoramic camera and the detail camera, the detailed camera position information corresponding to each target object, and to determine the magnification corresponding to each target object according to the size of each target object.
  • the control module 740 is configured to determine a tracking duration of each target object and, for each target object, control the detail camera to adjust its position and magnification according to the detailed camera position information and magnification corresponding to the target object, and control the adjusted detail camera to capture the target object within the tracking duration corresponding to the target object.
  • with the device provided by the embodiment of the present application, the position and magnification of the detail camera can be adjusted according to the specific position information, size, moving direction, speed information, and the like of each target object before capture is performed, and each target object can be captured multiple times within the tracking duration corresponding to it, so that the capture quality of the target object can be improved under the premise of ensuring the monitoring range.
  • the calculating module 720 includes:
  • a first determining sub-module for determining position change information of each target object according to speed information, a moving direction, and a preset detail camera position adjusting time of each target object;
  • the second determining sub-module (not shown) is configured to determine the capturing position information of each target object according to the first position information of each target object and the corresponding position change information.
  • the first determining module 730 includes:
  • a third determining sub-module (not shown) for determining, for each target object, a corresponding field of view according to the size of the target object;
  • the fourth determining sub-module (not shown) is configured to determine a magnification corresponding to the viewing angle according to a preset correspondence between the magnification and the viewing angle, and use the determined magnification as the magnification corresponding to the target object.
  • control module 740 includes:
  • a first calculation sub-module (not shown) for determining a distance of each target object from the edge of the monitoring scene according to the moving direction of each target object, and calculating the departure time of each target object according to the distance of each target object from the edge of the monitoring scene and the speed corresponding to each target object;
  • a second calculation sub-module (not shown), configured to calculate a tracking duration of each target object according to a departure time of each target object and a preset condition; wherein the preset condition includes: tracking of each target object The sum of the durations is smaller than the departure time of any target object, the sum of the tracking durations of each target object is the largest, and the variance of the tracking duration of each target object is the smallest.
  • the device further includes:
  • a second determining module (not shown in the figure), configured to determine the capture priority of each target object in ascending order of the departure times of the target objects;
  • the control module 740 is specifically configured to, in descending order of capture priority and for each target object, control the detail camera to adjust its position and magnification according to the detail camera position information and magnification corresponding to the target object, and to control the adjusted detail camera to capture the target object within the tracking duration corresponding to the target object.
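A minimal illustration of the capture-priority ordering follows: targets are sorted by departure time in ascending order, so the object that will leave the scene soonest is captured first. The dictionary layout is assumed for the example only.

```python
# Targets with the smallest departure time get the highest capture priority.
targets = [
    {"id": "A", "departure_time": 20.0},
    {"id": "B", "departure_time": 12.0},
    {"id": "C", "departure_time": 25.0},
]
capture_order = sorted(targets, key=lambda t: t["departure_time"])
print([t["id"] for t in capture_order])  # ['B', 'A', 'C'] -- B leaves soonest
```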
  • control module 740 is specifically configured to obtain a preset tracking duration, and use the acquired tracking duration as the tracking duration of each target object.
  • the device further includes:
  • an identification module (not shown) for identifying, among the target objects, first target objects whose moving direction is toward the panoramic camera;
  • the calculating module 720 is configured to calculate snap position information of each first target object according to the first position information, the moving direction and speed information of each first target object, and the preset detailed camera position adjustment time;
  • the first determining module 730 is configured to determine the detail camera position information corresponding to each first target object according to the capture position information of each first target object and the pre-built position mapping relationship between the panoramic camera and the detail camera, and to determine the magnification corresponding to each first target object according to the size of each first target object;
  • the control module 740 is configured to determine the tracking duration of each first target object, and, for each first target object, to control the detail camera to adjust its position and magnification according to the detail camera position information and magnification corresponding to the first target object, and to control the adjusted detail camera to capture the first target object within the tracking duration corresponding to the first target object.
  • the device further includes:
  • an obtaining module (not shown in the figure), configured to acquire, for any target object, a plurality of images of that target object collected by the detail camera;
  • a storage module (not shown) for identifying and storing N images of optimal image quality among the plurality of images, wherein N is an integer greater than zero.
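The embodiments do not fix how optimal image quality is scored. One common proxy, assumed here purely for illustration, is sharpness measured as the variance of the image gradient magnitude; the sketch below uses NumPy and keeps the N sharpest captures.

```python
import numpy as np

def sharpness(image):
    """Sharpness proxy: variance of the image gradient magnitude."""
    gy, gx = np.gradient(image.astype(float))
    return float(np.var(np.hypot(gx, gy)))

def best_n_images(images, n):
    """Return the n images with the highest sharpness score."""
    return sorted(images, key=sharpness, reverse=True)[:n]

# Toy example: three random "captures"; keep the two sharpest.
rng = np.random.default_rng(0)
captures = [rng.integers(0, 256, size=(64, 64)) for _ in range(3)]
print(len(best_n_images(captures, 2)))  # 2
```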
  • control module 740 includes:
  • a partitioning sub-module (not shown) for performing block processing on the target objects according to the detail camera position information and magnification corresponding to each target object, to obtain at least one target block, wherein each target block includes one or more target objects;
  • a fifth determining sub-module for determining the tracking duration of each target block, and, for each target block, identifying the second target objects at edge positions among the target objects included in the target block, determining the detail camera position information corresponding to the target block according to the detail camera position information corresponding to each second target object, and determining the magnification corresponding to the target block according to the magnification corresponding to each second target object;
  • a control sub-module for controlling, for each target block, the detail camera to adjust its position and magnification according to the detail camera position information and magnification corresponding to the target block, and for controlling the adjusted detail camera to capture the target block within the tracking duration corresponding to the target block.
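Block processing is described as grouping targets whose detail camera positions can be covered together at magnifications within a limited window (the description later mentions a window on the order of 0.5x). A greedy grouping sketch under that assumption is given below; the position tolerance and the dictionary layout are illustrative placeholders, not the defined algorithm.

```python
def group_into_blocks(targets, zoom_tol=0.5, pos_tol=10.0):
    """Greedy grouping of targets into blocks.

    targets -- dicts with 'pan', 'tilt' (detail camera position, degrees) and 'zoom'.
    Two targets share a block when their magnifications differ by at most zoom_tol
    and their camera positions lie within pos_tol degrees of each other.
    """
    blocks = []
    for t in targets:
        for block in blocks:
            if all(abs(t["zoom"] - o["zoom"]) <= zoom_tol and
                   abs(t["pan"] - o["pan"]) <= pos_tol and
                   abs(t["tilt"] - o["tilt"]) <= pos_tol for o in block):
                block.append(t)
                break
        else:
            blocks.append([t])  # no existing block fits; start a new one
    return blocks

targets = [
    {"id": 1, "pan": 10.0, "tilt": 2.0, "zoom": 4.0},
    {"id": 2, "pan": 12.0, "tilt": 3.0, "zoom": 4.2},
    {"id": 3, "pan": 40.0, "tilt": 1.0, "zoom": 8.0},
]
print([[t["id"] for t in b] for b in group_into_blocks(targets)])  # [[1, 2], [3]]
```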
  • the fifth determining submodule includes:
  • Determining a subunit for determining, for each target block, a third target object having the same moving direction and the largest number according to the moving direction of each target object included in the target block, and according to each The moving direction of the three target object, determining the distance of the target block from the edge of the monitoring scene;
  • a first calculation subunit (not shown) for calculating a departure time of each target block according to a distance of each target block distance monitoring scene edge and an average speed of each third target object included in each target block ;
  • a second calculation subunit (not shown), configured to calculate a tracking duration of each target block according to an departure time of each target block and a preset condition; wherein the preset condition includes: tracking of each target block The sum of the durations is smaller than the departure time of any target block, the sum of the tracking durations of each target block is the largest, and the variance of the tracking duration of each target block is the smallest.
  • the fifth determining sub-module is specifically configured to use the maximum of the magnifications corresponding to the second target objects as the magnification corresponding to the target block, or to multiply the magnification of each second target object by a corresponding weight to obtain a comprehensive magnification as the magnification corresponding to the target block.
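The two options named above (the maximum of the edge objects' magnifications, or a weighted combination) can be sketched as follows. Whether the weights are normalized is not specified, so the example normalizes them as an assumption.

```python
def block_magnification(edge_zooms, weights=None):
    """Combine the magnifications of a block's edge ('second') target objects.

    With no weights, take the maximum; otherwise return a normalized weighted
    combination (equal weights being a natural default).
    """
    if weights is None:
        return max(edge_zooms)
    assert len(weights) == len(edge_zooms)
    return sum(z * w for z, w in zip(edge_zooms, weights)) / sum(weights)

print(block_magnification([4.0, 6.0, 5.0]))                     # 6.0  (maximum)
print(block_magnification([4.0, 6.0, 5.0], weights=[1, 2, 1]))  # 5.25 (weighted)
```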
  • the detecting module 710 is specifically configured to detect a target object in the current panoramic video frame collected by the panoramic camera and not present in the previous panoramic video frame.
  • the video monitoring device includes a panoramic camera 810, a processor 820, and a detail camera 830.
  • the panoramic camera 810 is configured to collect a current panoramic video frame, and send the current panoramic video frame to the processor 820;
  • the processor 820 is configured to detect target objects in the current panoramic video frame and determine the first position information, size, moving direction and speed information of each target object in the current panoramic video frame; to calculate the capture position information of each target object according to the first position information, moving direction and speed information of each target object and the preset detail camera position adjustment time; to determine the detail camera position information corresponding to each target object according to the capture position information of each target object and the pre-built position mapping relationship between the panoramic camera and the detail camera, and to determine the magnification corresponding to each target object according to the size of each target object; and to determine the tracking duration of each target object and, for each target object, send the detail camera position information and magnification corresponding to the target object to the detail camera;
  • the detail camera 830 is configured to adjust its own position and magnification according to the received detailed camera position information and magnification corresponding to the target object, and capture the target object within the tracking duration corresponding to the target object.
  • With this device, when target objects are detected by the panoramic camera, the detail camera can be adjusted to the position and magnification corresponding to each target object according to that object's specific position information, size, moving direction, speed information and the like, and each target object can be captured multiple times within its corresponding tracking duration, so that the capture quality of the target objects can be improved while the monitoring range is preserved.
  • the processor 820 is specifically configured to determine the position change information of each target object according to the speed information and moving direction of each target object and the preset detail camera position adjustment time, and to determine the capture position information of each target object according to the first position information of each target object and the corresponding position change information.
  • the processor 820 is specifically configured to determine, for each target object, a corresponding field of view according to the size of the target object, to determine the magnification corresponding to that field of view according to a preset correspondence between magnification and field of view, and to use the determined magnification as the magnification corresponding to the target object.
  • the processor 820 is specifically configured to determine the distance of each target object from the edge of the monitoring scene according to the moving direction of each target object, to calculate the departure time of each target object according to that distance and the speed of the target object, and to calculate the tracking duration of each target object according to the departure times of the target objects and a preset condition, wherein the preset condition includes: the sum of the tracking durations of the target objects is smaller than the departure time of any target object, the sum of the tracking durations is the largest, and the variance of the tracking durations is the smallest.
  • the processor 820 is further configured to, before the detail camera is controlled for each target object to adjust its position and magnification according to the corresponding detail camera position information and magnification and the adjusted detail camera is controlled to capture the target object within the corresponding tracking duration, determine the capture priority of each target object in ascending order of the departure times of the target objects;
  • the processor 820 is specifically configured to send, for each target object and in descending order of capture priority, the detail camera position information and magnification corresponding to the target object to the detail camera 830;
  • the detail camera 830 is specifically configured to adjust its position and magnification according to the received detail camera position information and magnification corresponding to the target object, and to capture the target object within the tracking duration corresponding to the target object.
  • the processor 820 is specifically configured to acquire a preset tracking duration, and use the acquired tracking duration as a tracking duration of each target object.
  • the processor 820 is further configured to, before the capture position information of each target object is calculated according to the first position information, moving direction and speed information of each target object and the preset detail camera position adjustment time, identify, among the target objects, first target objects whose moving direction is toward the panoramic camera;
  • the processor 820 is configured to calculate the capture position information of each first target object according to the first position information, moving direction and speed information of each first target object and the preset detail camera position adjustment time; to determine the detail camera position information corresponding to each first target object according to the capture position information of each first target object and the pre-built position mapping relationship between the panoramic camera and the detail camera, and to determine the magnification corresponding to each first target object according to the size of each first target object; and to determine the tracking duration of each first target object and, for each first target object, send the detail camera position information and magnification corresponding to the first target object to the detail camera 830;
  • the detail camera 830 is configured to adjust its own position and magnification according to the received detail camera position information and magnification corresponding to the first target object, and to capture the first target object within the tracking duration corresponding to the first target object.
  • the processor 820 is further configured to acquire, for any target object, a plurality of images of that target object collected by the detail camera, and to identify and save, among the plurality of images, the N images of best image quality, where N is an integer greater than zero.
  • the processor 820 is specifically configured to perform block processing on the target objects according to the detail camera position information and magnification corresponding to each target object, to obtain at least one target block, where each target block includes one or more target objects; to determine the tracking duration of each target block and, for each target block, identify the second target objects at edge positions among the target objects included in the target block, determine the detail camera position information corresponding to the target block according to the detail camera position information corresponding to each second target object, and determine the magnification corresponding to the target block according to the magnification corresponding to each second target object; and, for each target block, to send the detail camera position information and magnification corresponding to the target block to the detail camera 830;
  • the detail camera 830 is specifically configured to adjust its position and magnification according to the received detail camera position information and magnification corresponding to the target block, and to capture the target block within the tracking duration corresponding to the target block.
  • the processor 820 is specifically configured to determine, for each target block and according to the moving directions of the target objects included in the target block, the third target objects that share the same moving direction and are the most numerous, and to determine the distance of the target block from the edge of the monitoring scene according to the moving direction of those third target objects; to calculate the departure time of each target block according to the distance of the target block from the edge of the monitoring scene and the average speed of the third target objects included in the target block; and to calculate the tracking duration of each target block according to the departure times of the target blocks and a preset condition, wherein the preset condition includes: the sum of the tracking durations of the target blocks is smaller than the departure time of any target block, the sum of the tracking durations is the largest, and the variance of the tracking durations is the smallest.
  • the processor 820 is specifically configured to use the maximum of the magnifications corresponding to the second target objects as the magnification corresponding to the target block, or to multiply the magnification of each second target object by a corresponding weight to obtain a comprehensive magnification as the magnification corresponding to the target block.
  • the processor 820 is specifically configured to detect a target object in the current panoramic video frame collected by the panoramic camera and not present in the previous video frame.
  • the embodiment of the present application further provides a storage medium, where the storage medium is used to store executable program code, and the executable program code is used to execute, at runtime, the target object capture method described in the embodiments of the present application, the method including: detecting target objects in a current panoramic video frame collected by the panoramic camera and determining the first position information, size, moving direction and speed information of each target object; calculating the capture position information of each target object; determining the detail camera position information and magnification corresponding to each target object; and determining the tracking duration of each target object and, for each target object, controlling the detail camera to adjust its position and magnification and controlling the adjusted detail camera to capture the target object within the corresponding tracking duration.
  • With such a storage medium, when target objects are detected by the panoramic camera, the detail camera can be adjusted to the position and magnification corresponding to each target object according to that object's specific position information, size, moving direction, speed information and the like, and each target object can be captured multiple times within its corresponding tracking duration, so that the capture quality of the target objects can be improved while the monitoring range is preserved.
  • the embodiment of the present application further provides an application program, where the application program is used to execute, at runtime, the target object capture method described in the embodiments of the present application, the method including the same steps as set out above.
  • With such an application program, when target objects are detected by the panoramic camera, the detail camera can be adjusted to the position and magnification corresponding to each target object according to that object's specific position information, size, moving direction, speed information and the like, and each target object can be captured multiple times within its corresponding tracking duration, so that the capture quality of the target objects can be improved while the monitoring range is preserved.
  • As the device, video monitoring device, storage medium and application program embodiments are substantially similar to the method embodiments, their description is relatively simple, and for relevant parts reference may be made to the description of the method embodiments.

Abstract

Embodiments of the present application provide a target object capture method, a target object capture device, and a video monitoring device. The method includes: detecting target objects in a current panoramic video frame collected by a panoramic camera, and determining first position information, size, moving direction and speed information of each target object in the current panoramic video frame; calculating capture position information of each target object; determining detail camera position information corresponding to each target object, and determining a magnification corresponding to each target object according to the size of each target object; and determining a tracking duration of each target object, and, for each target object, controlling the detail camera to adjust its position and magnification according to the detail camera position information and magnification corresponding to the target object, and controlling the adjusted detail camera to capture the target object within the tracking duration corresponding to the target object. The embodiments of the present application can improve the capture quality of target objects while ensuring the monitoring range.

Description

一种目标对象抓拍方法、装置及视频监控设备
本申请要求于2017年6月16日提交中国专利局、申请号为201710459273.6发明名称为“一种目标对象抓拍方法、装置及视频监控设备”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及图像处理技术领域,特别是涉及一种目标对象抓拍方法、装置及视频监控设备。
背景技术
随着视频监控技术的不断发展,视频监控设备已广泛应用于安防领域。在监控场景中,通常要求监控设备能监控到较大范围的场景,且捕获到较高清晰度的监控图像。
然而,当使用监控范围较大的全景相机(如枪机等)进行监控时,监控图像中的目标通常会较小,从而导致看不清目标对象细节等问题。当使用细节相机(如球机等)进行监控时,监控图像中通常能获取到清晰的目标对象,但是监控范围往往会较小。因此,现有的视频监控设备,存在监控范围和目标对象抓拍质量不可兼得的问题。
发明内容
本申请实施例的目的在于提供一种目标对象抓拍方法、装置及视频监控设备,以在保证监控范围的前提下,提高目标对象的抓拍质量。具体技术方案如下:
第一方面,本申请实施例提供了一种目标对象抓拍方法,所述方法包括:
检测全景相机所采集的当前全景视频帧中的目标对象,确定各目标对象在所述当前全景视频帧中的第一位置信息、大小、移动方向和速度信息;
根据各目标对象的第一位置信息、移动方向和速度信息,以及预设的细节相机位置调整时间,计算各目标对象的抓拍位置信息;
根据各目标对象的抓拍位置信息,以及预先构建的全景相机和细节相机的位置映射关系,确定各目标对象对应的细节相机位置信息,并根据各目标对象的大小确定各目标对象对应的倍率;
确定各目标对象的跟踪时长,针对每个目标对象,根据该目标对象对应的细节相机位置信息和倍率,控制所述细节相机调整其位置和倍率,并控制调整后的细节相机在该目标对象对应的跟踪时长内抓拍该目标对象。
可选的,所述根据各目标对象的第一位置信息、移动方向和速度信息,以及预设的细节相机位置调整时间,计算各目标对象的抓拍位置信息的步骤包括:
根据各目标对象的速度信息、移动方向,以及预设的细节相机位置调整时间,确定各目标对象的位置变化信息;
根据各目标对象的第一位置信息,以及对应的位置变化信息,确定各目标对象的抓拍位置信息。
可选的,所述根据各目标对象的大小确定各目标对象对应的倍率的步骤包括:
针对每个目标对象,根据该目标对象的大小,确定对应的视场角;
根据预设的倍率和视场角的对应关系,确定该视场角对应的倍率,并将确定的倍率作为该目标对象对应的倍率。
可选的,所述确定各目标对象的跟踪时长的步骤包括:
根据各目标对象的移动方向,确定各目标对象距离监控场景边缘的距离,根据各目标对象距离监控场景边缘的距离、以及对应各目标对象的速度大小,计算各目标对象的离开时间;
根据各目标对象的离开时间,以及预设条件,计算各目标对象的跟踪时长;其中,所述预设条件包括:各目标对象的跟踪时长之和小于任一目标对象的离开时间,各目标对象的跟踪时长之和最大,各目标对象的跟踪时长的方差最小。
可选的,所述针对每个目标对象,根据该目标对象对应的细节相机位置 信息和倍率,控制所述细节相机调整其位置和倍率,并控制调整后的细节相机在该目标对象对应的跟踪时长内抓拍该目标对象之前,所述方法还包括:
根据各目标对象的离开时间从小到大的顺序,确定各目标对象的抓拍优先级;
所述针对每个目标对象,根据该目标对象对应的细节相机位置信息和倍率,控制所述细节相机调整其位置和倍率,并控制调整后的细节相机在该目标对象对应的跟踪时长内抓拍该目标对象的步骤包括:
根据抓拍优先级从高到低的顺序,针对每个目标对象,根据该目标对象对应的细节相机位置信息和倍率,控制所述细节相机调整其位置和倍率,并控制调整后的细节相机在该目标对象对应的跟踪时长内抓拍该目标对象。
可选的,所述确定各目标对象的跟踪时长的步骤包括:
获取预设的跟踪时长,并将所获取的跟踪时长作为各目标对象的跟踪时长。
可选的,所述根据各目标对象的第一位置信息、移动方向和速度信息,以及预设的细节相机位置调整时间,计算各目标对象的抓拍位置信息之前,所述方法还包括:
在各目标对象中,识别移动方向为正朝所述全景相机移动的第一目标对象;
相应的,所述根据各目标对象的第一位置信息、移动方向和速度信息,以及预设的细节相机位置调整时间,计算各目标对象对应的抓拍位置信息;根据各目标对象的抓拍位置信息,以及预先构建的全景相机和细节相机的位置映射关系,确定各目标对象对应的细节相机位置信息,并根据各目标对象的大小确定各目标对象对应的倍率;确定各目标对象的跟踪时长,针对每个目标对象,根据该目标对象对应的细节相机位置信息和倍率,控制所述细节相机调整其位置和倍率,并控制调整后的细节相机在该目标对象对应的跟踪时长内抓拍该目标对象的步骤包括:
根据各第一目标对象的第一位置信息、移动方向和速度信息,以及预设 的细节相机位置调整时间,计算各第一目标对象的抓拍位置信息;
根据各第一目标对象的抓拍位置信息,以及预先构建的全景相机和细节相机的位置映射关系,确定各第一目标对象对应的细节相机位置信息,并根据各第一目标对象的大小确定各第一目标对象对应的倍率;
确定各第一目标对象的跟踪时长,针对每个第一目标对象,根据该第一目标对象对应的细节相机位置信息和倍率,控制所述细节相机调整其位置和倍率,并控制调整后的细节相机在该第一目标对象对应的跟踪时长内抓拍该第一目标对象。
可选的,所述方法还包括:
针对任一目标对象,获取所述细节相机采集的该目标对象对应的多张图像;
在所述多张图像中,识别并保存图像质量最优的N张图像,其中,N为大于0的整数。
可选的,所述确定各目标对象的跟踪时长,针对每个目标对象,根据该目标对象对应的细节相机位置信息和倍率,控制所述细节相机调整其位置和倍率,并控制调整后的细节相机在该目标对象对应的跟踪时长内抓拍该目标对象的步骤包括:
根据每个目标对象对应的细节相机位置信息和倍率,对各目标对象进行分块处理,得到至少一个目标块,其中,各目标块中包含一个或多个目标对象;
确定每个目标块的跟踪时长,针对每个目标块,在该目标块包含的各目标对象中识别处于边缘位置的第二目标对象,并根据各第二目标对象对应的细节相机位置信息确定该目标块对应的细节相机位置信息,根据各第二目标对象对应的倍率确定该目标块对应的倍率;
针对每个目标块,根据该目标块对应的细节相机位置信息和倍率,控制所述细节相机调整其位置和倍率,并控制调整后的细节相机在该目标块对应的跟踪时长内抓拍该目标块。
可选的,所述确定每个目标块的跟踪时长的步骤包括:
针对每个目标块,根据该目标块中包括的各目标对象的移动方向,确定移动方向相同且数量最多的第三目标对象,并根据各第三目标对象的移动方向,确定该目标块距离监控场景边缘的距离;
根据各目标块距离监控场景边缘的距离,以及对应各目标块中包括的各第三目标对象的平均速度,计算各目标块的离开时间;
根据各目标块的离开时间,以及预设条件,计算各目标块的跟踪时长;其中,所述预设条件包括:各目标块的跟踪时长之和小于任一目标块的离开时间,各目标块的跟踪时长之和最大,各目标块的跟踪时长的方差最小。
可选的,所述根据各第二目标对象对应的倍率确定该目标块对应的倍率的步骤包括:
将各第二目标对象对应的倍率中的最大值作为该目标块对应的倍率,或将各第二目标对象的倍率乘以对应的权重得到综合倍率,作为该目标块对应的倍率。
可选的,所述检测全景相机所采集的当前全景视频帧中的目标对象的步骤包括:
检测全景相机所采集的当前全景视频帧中的,且不存在于上一视频帧中的目标对象。
第二方面,本申请实施例提供了一种目标对象抓拍装置,所述装置包括:
检测模块,用于检测全景相机所采集的当前全景视频帧中的目标对象,确定各目标对象在所述当前全景视频帧中的第一位置信息、大小、移动方向和速度信息;
计算模块,用于根据各目标对象的第一位置信息、移动方向和速度信息,以及预设的细节相机位置调整时间,计算各目标对象的抓拍位置信息;
第一确定模块,用于根据各目标对象的抓拍位置信息,以及预先构建的全景相机和细节相机的位置映射关系,确定各目标对象对应的细节相机位置信息,并根据各目标对象的大小确定各目标对象对应的倍率;
控制模块,用于确定各目标对象的跟踪时长,针对每个目标对象,根据该目标对象对应的细节相机位置信息和倍率,控制所述细节相机调整其位置和倍率,并控制调整后的细节相机在该目标对象对应的跟踪时长内抓拍该目标对象。
可选的,所述计算模块包括:
第一确定子模块,用于根据各目标对象的速度信息、移动方向,以及预设的细节相机位置调整时间,确定各目标对象的位置变化信息;
第二确定子模块,用于根据各目标对象的第一位置信息,以及对应的位置变化信息,确定各目标对象的抓拍位置信息。
可选的,所述第一确定模块包括:
第三确定子模块,用于针对每个目标对象,根据该目标对象的大小,确定对应的视场角;
第四确定子模块,用于根据预设的倍率和视场角的对应关系,确定该视场角对应的倍率,并将确定的倍率作为该目标对象对应的倍率。
可选的,所述控制模块包括:
第一计算子模块,用于根据各目标对象的移动方向,确定各目标对象距离监控场景边缘的距离,根据各目标对象距离监控场景边缘的距离、以及对应各目标对象的速度大小,计算各目标对象的离开时间;
第二计算子模块,用于根据各目标对象的离开时间,以及预设条件,计算各目标对象的跟踪时长;其中,所述预设条件包括:各目标对象的跟踪时长之和小于任一目标对象的离开时间,各目标对象的跟踪时长之和最大,各目标对象的跟踪时长的方差最小。
可选的,所述装置还包括:
第二确定模块,用于根据各目标对象的离开时间从小到大的顺序,确定各目标对象的抓拍优先级;
所述控制模块,具体用于根据抓拍优先级从高到低的顺序,针对每个目 标对象,根据该目标对象对应的细节相机位置信息和倍率,控制所述细节相机调整其位置和倍率,并控制调整后的细节相机在该目标对象对应的跟踪时长内抓拍该目标对象。
可选的,所述控制模块,具体用于获取预设的跟踪时长,并将所获取的跟踪时长作为各目标对象的跟踪时长。
可选的,所述装置还包括:
识别模块,用于在各目标对象中,识别移动方向为正朝所述全景相机移动的第一目标对象;
相应的,所述计算模块,用于根据各第一目标对象的第一位置信息、移动方向和速度信息,以及预设的细节相机位置调整时间,计算各第一目标对象的抓拍位置信息;
所述第一确定模块,用于根据各第一目标对象的抓拍位置信息,以及预先构建的全景相机和细节相机的位置映射关系,确定各第一目标对象对应的细节相机位置信息,并根据各第一目标对象的大小确定各第一目标对象对应的倍率;
所述控制模块,用于确定各第一目标对象的跟踪时长,针对每个第一目标对象,根据该第一目标对象对应的细节相机位置信息和倍率,控制所述细节相机调整其位置和倍率,并控制调整后的细节相机在该第一目标对象对应的跟踪时长内抓拍该第一目标对象。
可选的,所述装置还包括:
获取模块,用于针对任一目标对象,获取所述细节相机采集的该目标对象对应的多张图像;
存储模块,用于在所述多张图像中,识别并保存图像质量最优的N张图像,其中,N为大于0的整数。
可选的,所述控制模块包括:
切分子模块,用于根据每个目标对象对应的细节相机位置信息和倍率,对各目标对象进行分块处理,得到至少一个目标块,其中,各目标块中包含 一个或多个目标对象;
第五确定子模块,用于确定每个目标块的跟踪时长,针对每个目标块,在该目标块包含的各目标对象中识别处于边缘位置的第二目标对象,并根据各第二目标对象对应的细节相机位置信息确定该目标块对应的细节相机位置信息,根据各第二目标对象对应的倍率确定该目标块对应的倍率;
控制子模块,用于针对每个目标块,根据该目标块对应的细节相机位置信息和倍率,控制所述细节相机调整其位置和倍率,并控制调整后的细节相机在该目标块对应的跟踪时长内抓拍该目标块。
可选的,所述第五确定子模块包括:
确定子单元,用于针对每个目标块,根据该目标块中包括的各目标对象的移动方向,确定移动方向相同且数量最多的第三目标对象,并根据各第三目标对象的移动方向,确定该目标块距离监控场景边缘的距离;
第一计算子单元,用于根据各目标块距离监控场景边缘的距离,以及对应各目标块中包括的各第三目标对象的平均速度,计算各目标块的离开时间;
第二计算子单元,用于根据各目标块的离开时间,以及预设条件,计算各目标块的跟踪时长;其中,所述预设条件包括:各目标块的跟踪时长之和小于任一目标块的离开时间,各目标块的跟踪时长之和最大,各目标块的跟踪时长的方差最小。
可选的,所述第五确定子模块,具体用于将各第二目标对象对应的倍率中的最大值作为该目标块对应的倍率,或将各第二目标对象的倍率乘以对应的权重得到综合倍率,作为该目标块对应的倍率。
可选的,所述检测模块,具体用于检测全景相机所采集的当前全景视频帧中的,且不存在于上一全景视频帧中的目标对象。
第三方面,本申请实施例提供了一种视频监控设备,包括全景相机、细节相机、以及处理器;
所述全景相机,用于采集当前全景视频帧,并将所述当前全景视频帧发送给所述处理器;
所述处理器,用于检测所述当前全景视频帧中的目标对象,确定各目标对象在所述当前全景视频帧中的第一位置信息、大小、移动方向和速度信息;根据各目标对象的第一位置信息、移动方向和速度信息,以及预设的细节相机位置调整时间,计算各目标对象的抓拍位置信息;根据各目标对象的抓拍位置信息,以及预先构建的全景相机和细节相机的位置映射关系,确定各目标对象对应的细节相机位置信息,并根据各目标对象的大小确定各目标对象对应的倍率;确定各目标对象的跟踪时长,并针对每个目标对象,将该目标对象对应的细节相机位置信息和倍率发送至细节相机;
所述细节相机,用于根据接收到的该目标对象对应的细节相机位置信息和倍率调整其自身的位置和倍率,并控制调整后的细节相机在该目标对象对应的跟踪时长内抓拍该目标对象。
可选的,所述处理器,具体用于根据各目标对象的速度信息、移动方向,以及预设的细节相机位置调整时间,确定各目标对象的位置变化信息;根据各目标对象的第一位置信息,以及对应的位置变化信息,确定各目标对象的抓拍位置信息。
可选的,所述处理器,具体用于针对每个目标对象,根据该目标对象的大小,确定对应的视场角;根据预设的倍率和视场角的对应关系,确定该视场角对应的倍率,并将确定的倍率作为该目标对象对应的倍率。
可选的,所述处理器,具体用于根据各目标对象的移动方向,确定各目标对象距离监控场景边缘的距离,根据各目标对象距离监控场景边缘的距离、以及对应各目标对象的速度大小,计算各目标对象的离开时间;根据各目标对象的离开时间,以及预设条件,计算各目标对象的跟踪时长;其中,所述预设条件包括:各目标对象的跟踪时长之和小于任一目标对象的离开时间,各目标对象的跟踪时长之和最大,各目标对象的跟踪时长的方差最小。
可选的,所述处理器,还用于在针对每个目标对象,根据该目标对象对应的细节相机位置信息和倍率,控制所述细节相机调整其位置和倍率,并控制调整后的细节相机在该目标对象对应的跟踪时长内抓拍该目标对象之前,根据各目标对象的离开时间从小到大的顺序,确定各目标对象的抓拍优先级;
所述处理器,具体用于根据抓拍优先级从高到低的顺序,针对每个目标对象,将该目标对象对应的细节相机位置信息和倍率发送至细节相机;
所述细节相机,具体用于根据接收到的该目标对象对应的细节相机位置信息和倍率调整其自身的位置和倍率,并控制调整后的细节相机在该目标对象对应的跟踪时长内抓拍该目标对象。
可选的,所述处理器,具体用于获取预设的跟踪时长,并将所获取的跟踪时长作为各目标对象的跟踪时长。
可选的,所述处理器,还用于在根据各目标对象的第一位置信息、移动方向和速度信息,以及预设的细节相机位置调整时间,计算各目标对象的抓拍位置信息之前,在各目标对象中,识别移动方向为正朝所述全景相机移动的第一目标对象;
所述处理器,具体用于根据各第一目标对象的第一位置信息、移动方向和速度信息,以及预设的细节相机位置调整时间,计算各第一目标对象的抓拍位置信息;根据各第一目标对象的抓拍位置信息,以及预先构建的全景相机和细节相机的位置映射关系,确定各第一目标对象对应的细节相机位置信息,并根据各第一目标对象的大小确定各第一目标对象对应的倍率;确定各第一目标对象的跟踪时长,针对每个第一目标对象,将该第一目标对象对应的细节相机位置信息和倍率发送至细节相机;
所述细节相机,用于根据接收到的该第一目标对象对应的细节相机位置信息和倍率调整其自身的位置和倍率,并控制调整后的细节相机在该第一目标对象对应的跟踪时长内抓拍该第一目标对象。
可选的,所述处理器,还用于针对任一目标对象,获取所述细节相机采集的该目标对象对应的多张图像;在所述多张图像中,识别并保存图像质量最优的N张图像,其中,N为大于0的整数。
可选的,所述处理器,具体用于根据每个目标对象对应的细节相机位置信息和倍率,对各目标对象进行分块处理,得到至少一个目标块,其中,各目标块中包含一个或多个目标对象;确定每个目标块的跟踪时长,针对每个目标块,在该目标块包含的各目标对象中识别处于边缘位置的第二目标对象, 并根据各第二目标对象对应的细节相机位置信息确定该目标块对应的细节相机位置信息,根据各第二目标对象对应的倍率确定该目标块对应的倍率;针对每个目标块,将该目标块对应的细节相机位置信息和倍率发送至细节相机;
所述细节相机,具体用于根据接收到的该目标块对应的细节相机位置信息和倍率调整其自身的位置和倍率,并控制调整后的细节相机在该目标块对应的跟踪时长内抓拍该目标块。
可选的,所述处理器,具体用于针对每个目标块,根据该目标块中包括的各目标对象的移动方向,确定移动方向相同且数量最多的第三目标对象,并根据各第三目标对象的移动方向,确定该目标块距离监控场景边缘的距离;根据各目标块距离监控场景边缘的距离,以及对应各目标块中包括的各第三目标对象的平均速度,计算各目标块的离开时间;根据各目标块的离开时间,以及预设条件,计算各目标块的跟踪时长;其中,所述预设条件包括:各目标块的跟踪时长之和小于任一目标块的离开时间,各目标块的跟踪时长之和最大,各目标块的跟踪时长的方差最小。
可选的,所述处理器,具体用于将各第二目标对象对应的倍率中的最大值作为该目标块对应的倍率,或将各第二目标对象的倍率乘以对应的权重得到综合倍率,作为该目标块对应的倍率。
可选的,所述处理器,具体用于检测全景相机所采集的当前全景视频帧中的,且不存在于上一视频帧中的目标对象。
第四方面,本申请提供了一种存储介质,其中,该存储介质用于存储可执行程序代码,所述可执行程序代码用于在运行时执行本申请第一方面所述的一种目标对象抓拍方法。
第五方面,本申请提供了一种应用程序,其中,该应用程序用于在运行时执行本申请第一方面所述的一种目标对象抓拍方法。
本申请实施例提供的一种目标对象抓拍方法、装置及视频监控设备,所述方法包括:检测全景相机所采集的当前全景视频帧中的目标对象,确定各目标对象在所述当前全景视频帧中的第一位置信息、大小、移动方向和速度信息;根据各目标对象的第一位置信息、移动方向和速度信息,以及预设的 细节相机位置调整时间,计算各目标对象的抓拍位置信息;根据各目标对象的抓拍位置信息,以及预先构建的全景相机和细节相机的位置映射关系,确定各目标对象对应的细节相机位置信息,并根据各目标对象的大小确定各目标对象对应的倍率;确定各目标对象的跟踪时长,针对每个目标对象,根据该目标对象对应的细节相机位置信息和倍率,控制所述细节相机调整其位置和倍率,并控制调整后的细节相机在该目标对象对应的跟踪时长内抓拍该目标对象。
本申请实施例中,当检测到全景相机中的目标对象时,能够根据各目标对象的具体位置信息、大小、移动方向和速度信息等,调整细节相机采用与各目标对象对应的位置和倍率对其进行抓拍,并且能够在各目标对象对应的抓拍时长内对各目标对象进行多次抓拍,从而能够在保证监控范围的前提下,提高目标对象的抓拍质量。
附图说明
为了更清楚地说明本申请实施例和现有技术的技术方案,下面对实施例和现有技术中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本申请的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
图1为本申请实施例的一种目标对象抓拍方法的流程图;
图2为本申请实施例的一种全景视频帧示意图;
图3为本申请实施例的一种全景视频帧中目标对象位置信息示意图;
图4为本申请实施例的一种目标对象抓拍方法的另一流程图;
图5为本申请实施例的对目标对象进行分块的结果示意图;
图6为本申请实施例的确定目标块中第二目标对象的结果示意图;
图7为本申请实施例的一种目标对象抓拍装置的结构示意图;
图8为本申请实施例的一种视频监控设备的结构示意图。
具体实施方式
下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例仅仅是本申请一部分实施例,而不是全部的实施例。基于本申请中的实施例,本领域普通技术人员在没有做出创造性劳动前提下所获得的所有其他实施例,都属于本申请保护的范围。
以下通过具体实施例,对本申请进行详细说明。
请参考图1,其示出了本申请实施例的一种目标对象抓拍方法流程,该方法可以包括以下步骤:
S101,检测全景相机所采集的当前全景视频帧中的目标对象,确定各目标对象在所述当前全景视频帧中的第一位置信息、大小、移动方向和速度信息。
本申请实施例提供的方法可以应用于视频监控设备。具体的,本申请实施例的视频监控设备至少可以包括全景相机、细节相机、以及处理器。其中,全景相机可以为监控范围较大的相机,例如枪机、鱼眼相机等;细节相机可以为能够调节抓拍倍率的相机,如球机等。并且,细节相机的位置也是可以调整的,从而,其监控范围和所采集图像中目标对象的大小都是可以调整的。
在本申请实施例中,全景相机可以采集全景视频帧。如,全景相机可以按照预设的时间间隔,周期性采集全景视频帧。并且,全景相机可以将采集的当前全景视频帧发送给处理器。
处理器接收到全景相机发送的当前全景视频帧后,可以对当前全景视频帧中的目标对象进行检测。例如,处理器可以采用DPM(deformable parts model,可形变部件模型)或FRCNN(Faster Region Convolutional Neural Network,快速区域卷积神经网络)等目标检测类算法,来检测当前全景视频帧中的目标对象。其中,上述目标对象可以为人、车辆等。本申请实施例中,以目标对象为人为例,来说明本申请实施例提供的目标对象抓拍方法。
参考图2,其示出了全景相机采集的当前全景视频帧的示意图。如图2所示,全景相机采集的当前全景视频帧中包括目标对象1、2、3、4、5、6、7、8、9、10。
检测到各目标对象后,处理器还可以确定各目标对象在当前全景视频帧中的第一位置信息、大小、移动方向和速度信息。如,处理器可以针对每个目标对象,确定该目标对象所在的长方形区域,并根据预设的坐标系,将该长方形区域的左上角坐标和右下角坐标确定为该目标对象的第一位置信息。相应的,处理器可以将该目标对象所在长方形区域的大小确定为该目标对象的大小。
如图3所示,针对目标对象1,可以确定其所在的长方形区域为210,并且,根据图中构建的坐标系,目标对象1的第一位置信息可以为区域210的左上角220和右下角230的坐标信息。目标对象1的大小可以为区域210的大小。
在确定任一目标对象的移动方向和速度信息时,处理器可以先确定该目标对象是否存在于之前采集的全景视频帧中,如,前一张视频帧中;如果是,可以根据多张视频帧,来确定目标对象的移动方向和速度信息。
S102,根据各目标对象的第一位置信息、移动方向和速度信息,以及预设的细节相机位置调整时间,计算各目标对象的抓拍位置信息。
在本申请实施例中,细节相机的位置是可以调整的。并且,调整其位置需要花费一定的时间。该时间可以预先设定好并保存在处理器中。
可以理解,由于细节相机位置调整需要一定的时间,且各目标对象有其对应的速度。因此,针对任一目标对象,其在当前全景视频帧中的位置可能并不是其被细节相机抓拍到时所在的位置。
处理器确定各目标对象的第一位置信息、移动方向和速度信息后,可以根据各目标对象的第一位置信息、移动方向和速度信息,以及预设的细节相机位置调整时间,计算各目标对象的抓拍位置信息,也就是各目标对象被细节相机抓拍到时所在全景视频帧中的位置信息。
例如,处理器可以根据各目标对象的速度信息、移动方向,以及预设的全景相机位置调整时间,确定各目标对象的位置变化信息。之后可以根据各目标对象的第一位置信息,以及对应的位置变化信息,确定各目标对象的抓拍位置信息。
S103,根据各目标对象的抓拍位置信息,以及预先构建的全景相机和细 节相机的位置映射关系,确定各目标对象对应的细节相机位置信息,并根据各目标对象的大小确定各目标对象对应的倍率。
在本申请实施例中,可以预先构建全景相机和细节相机的位置映射关系。如,当任一目标对象在全景相机采集的全景视频帧中的位置信息为a1时,对应的细节相机的位置信息为b1;当任一目标对象在全景相机采集的全景视频帧中的位置信息为a2时,对应的细节相机的位置信息为b2等。其中,细节相机的位置信息可以包括其水平方向位置信息和垂直方向位置信息。
当处理器确定各目标对象的抓拍位置信息后,可以根据各目标对象的抓拍位置信息,以及预先构建的全景相机和细节相机的位置映射关系,确定各目标对象对应的细节相机位置信息。也就是说,确定细节相机用于抓拍各目标对象时所在的位置。
如,针对任一目标对象,处理器可以在预先保存的全景相机和细节相机的位置映射关系中,查找该目标对象的抓拍位置信息,并将该抓拍位置信息对应的细节相机的位置信息作为该目标对象对应的细节相机位置信息。
在本申请实施例中,为了能够清晰的对目标对象进行抓拍,细节相机的倍率是可以调整的。具体的,处理器可以根据各目标对象的大小,确定各目标对象对应的倍率。
通常情况下,图像中人的像素宽度达到240为可辨认的细节标准。根据该标准,处理器可以确定不同大小的目标对象对应的倍率。如,针对较大的目标对象,可以将细节相机的倍率调整为较小值,以抓拍到完整的目标对象;针对较小的目标对象,可以将细节相机的倍率调整为较大值,以获得尽可能大的目标对象,提高其清晰度。
在一种实现方式中,处理器可以针对每个目标对象,根据该目标对象的大小,确定对应的视场角,进而可以根据预设的倍率和视场角的对应关系,确定该视场角对应的倍率,并将确定的倍率作为该目标对象对应的倍率。
S104,确定各目标对象的跟踪时长,针对每个目标对象,根据该目标对象对应的细节相机位置信息和倍率,控制所述细节相机调整其位置和倍率,并控制调整后的细节相机在该目标对象对应的跟踪时长内抓拍该目标对象。
在本申请实施例中,为了提高目标对象的抓拍质量,可以针对任一目标对象,对其进行多次抓拍。具体的,处理器可以在对各目标对象进行抓拍之前,确定各目标对象的跟踪时长,进而可以在各目标对象的跟踪时长内对各目标对象进行多次抓拍。
例如,处理器可以预先设定各目标对象的跟踪时长为相同值,在进行目标对象抓拍之前,获取预设的跟踪时长,并将所获取的跟踪时长作为各目标对象的跟踪时长。
确定各目标对象的跟踪时长,且得到各目标对象的细节相机位置信息和倍率后,即可对各目标对象进行细节抓拍。具体的,处理器可以针对每个目标对象,根据该目标对象对应的细节相机位置信息和倍率,控制细节相机调整其位置和倍率,并控制调整后的细节相机在该目标对象对应的跟踪时长内抓拍该目标对象。
例如,处理器可以按照各目标对象的检测顺序,依次针对每个目标对象,向细节相机发送包含将该目标对象对应的细节相机位置信息和倍率的抓拍指令。细节相机接收到抓拍指令后,可以根据其中包含的细节相机位置信息和倍率,调整其自身的位置和倍率,并在该目标对象对应的跟踪时长内抓拍该目标对象。如,针对任一目标对象,细节相机可以在任一目标对象对应的跟踪时长内按照一固定的抓拍频率,对该目标对象进行抓拍。
本申请实施例中,当检测到全景相机中的目标对象时,能够根据各目标对象的具体位置信息、大小、移动方向和速度信息等,调整细节相机采用与各目标对象对应的位置和倍率对其进行抓拍,并且能够在各目标对象对应的抓拍时长内对各目标对象进行多次抓拍,从而能够在保证监控范围的前提下,提高目标对象的抓拍质量
作为本申请实施例的一种实施方式,为了提高目标对象抓拍效率,处理器对当前全景视频帧中的目标对象进行检测时,可以检测全景相机所采集的当前全景视频帧中的,且不存在于上一全景视频帧中的目标对象。
可以理解,相邻全景视频帧中的同一目标对象,其相似度一般是较高的。因此,针对出现在相邻全景视频帧中的同一目标对象,可以仅对其进行一次 细节抓拍,从而能提高目标对象抓拍效率。
作为本申请实施例的一种实施方式,处理器在确定各目标对象的跟踪时长时,可以根据各目标对象的移动方向,确定各目标对象距离监控场景边缘的距离,根据各目标对象距离监控场景边缘的距离、以及对应各目标对象的速度大小,计算各目标对象的离开时间,进而可以根据各目标对象的离开时间,以及预设条件,计算各目标对象的跟踪时长;其中,上述预设条件包括:各目标对象的跟踪时长之和小于任一目标对象的离开时间,各目标对象的跟踪时长之和最大,各目标对象的跟踪时长的方差最小。
具体的，处理器可以通过以下公式，计算各目标对象 $i$ 的跟踪时长 $T_k(i)$：

$$\sum_{i} T_k(i) < T_l(n)$$

其中，$T_l(n)$ 为任一目标对象 $n$ 的离开时间，求和遍历当前全景视频帧中的所有目标对象。

并且，要满足以下两个条件：

$$\sum_{i} T_k(i)$$

达到最大，且

$$\frac{1}{N}\sum_{i}\Big(T_k(i)-\frac{1}{N}\sum_{j}T_k(j)\Big)^2$$

达到最小，其中 $N$ 为目标对象的个数。
也就是说,在确定各目标对象的跟踪时长时,能够保证每个目标对象都能分配到一定的跟踪时长,从而保证每个目标对象都能被抓拍到。并且,各目标对象的跟踪时长尽可能长,且各目标对象的跟踪时长相差较小,防止局部最优。
在本申请实施例中,为了进一步提高各目标对象的抓拍质量,处理器可以确定各目标对象的抓拍优先级,进而可以根据优先级顺序,对各目标对象进行抓拍。
具体的,处理器可以根据离开时间从小到大的顺序,确定各目标对象的抓拍优先级。也就是说,离开时间越小的目标对象,其抓拍优先级越大。
相应的,对目标对象进行抓拍时,处理器可以根据抓拍优先级从高到低的顺序,针对每个目标对象,根据该目标对象对应的细节相机位置信息和倍率,控制所述细节相机调整其位置和倍率,并控制调整后的细节相机在该目 标对象对应的跟踪时长内抓拍该目标对象。
本申请实施例中,能够根据各目标对象的离开时间,确定各目标对象的抓拍优先级,并根据抓拍优先级顺序,对各目标对象进行抓拍,从而能够尽可能保证对各目标对象进行抓拍时,各目标对象不会位于边缘位置,提高各目标对象的抓拍质量。
可以理解,对目标对象进行抓拍时,面朝全景相机移动的目标对象,可以抓拍到质量比较好的图像。因此,作为本申请实施例的一种实施方式,为了提高抓拍质量和抓拍效率,在确定各目标对象对应的抓拍位置信息之前,处理器可以在各目标对象中,识别移动方向为正朝全景相机移动的第一目标对象。例如,处理器可以根据各目标对象的移动方向,来识别正朝全景相机移动的第一目标对象。
相应的,识别出第一目标对象后,处理器可以仅对各第一目标对象进行抓拍。具体的,处理器可以根据各第一目标对象的第一位置信息、移动方向和速度信息,以及预设的细节相机位置调整时间,计算各第一目标对象的抓拍位置信息;然后根据各第一目标对象的抓拍位置信息,以及预先构建的全景相机和细节相机的位置映射关系,确定各第一目标对象对应的细节相机位置信息,并根据各第一目标对象的大小确定各第一目标对象对应的倍率;之后确定各第一目标对象的跟踪时长,针对每个第一目标对象,根据该第一目标对象对应的细节相机位置信息和倍率,控制所述细节相机调整其位置和倍率,并控制调整后的细节相机在该第一目标对象对应的跟踪时长内抓拍该第一目标对象。
作为本申请实施例的一种实施方式,处理器控制细节相机对各目标对象进行抓拍后,其还可以针对每个目标对象,在其对应的图像中,识别质量较高的一张或多张图像进行保存,进而可以对保存的图像进行特征提取等操作。
具体的,处理器可以针对任一目标对象,获取细节相机采集的该目标对象对应的多张图像,进而在该多张图像中,识别并保存图像质量最优的N张图像,其中,N为大于0的整数。
作为本申请实施例的一种实施方式,为了提高目标对象的抓拍效率,处理器可以对当前全景视频帧中的各个目标对象进行分块处理。如图4所示,处理器对各目标对象进行抓拍的过程可以包括以下步骤:
S401,根据每个目标对象对应的细节相机位置信息和倍率,对各目标对象进行分块处理,得到至少一个目标块,其中,各目标块中包含一个或多个目标对象。
本申请实施例中,处理器可以根据每个目标对象对应的细节相机位置信息和倍率,对各目标对象进行分块处理,得到至少一个目标块,其中,各目标块中包含一个或多个目标对象。
细节相机在不同的倍率下,对应不同的位置范围。当得到各目标对象对应的细节相机位置信息和倍率后,通过搜索寻找倍率在一定范围内(如0.5倍)能满足细节相机位置范围的所有目标分为一块,最终形成不同的目标块。
参考图5,对图2所示的全景视频帧中各目标对象进行分块处理后,得到的各目标块可以如图5所示。如图5所示,可以将各目标对象分为4块,分别为目标对象7、8、9、10为一块,目标对象2、3、4为一块,目标对象5、6为一块,目标对象1为一块。
S402,确定每个目标块的跟踪时长,针对每个目标块,在该目标块包含的各目标对象中识别处于边缘位置的第二目标对象,并根据各第二目标对象对应的细节相机位置信息确定该目标块对应的细节相机位置信息,根据各第二目标对象对应的倍率确定该目标块对应的倍率。
得到目标块后,处理器可以确定每个目标块的跟踪时长。具体的,针对任一目标块,处理器可以首先根据该目标块中包括的各目标对象的移动方向,确定移动方向相同且数量最多的第三目标对象,并根据各第三目标对象的移动方向,确定该目标块距离监控场景边缘的距离。
如,当任一目标块中包括6个目标对象,其中5个目标对象的移动方向为正朝全景相机移动,另一个目标对象的移动方向为背向全景相机移动时,可以将正朝全景相机移动的5个目标对象确定为第三目标对象。进而,处理器可 以确定当该目标块的移动方向为第三目标对象的移动方向时,该目标块距离监控场景边缘的距离。
进一步地,处理器可以根据各目标块距离监控场景边缘的距离,以及对应各目标块中包括的各第三目标对象的平均速度,计算各目标块的离开时间;并且,可以根据各目标块的离开时间,以及预设条件,计算各目标块的跟踪时长;其中,该预设条件包括:各目标块的跟踪时长之和小于任一目标块的离开时间,各目标块的跟踪时长之和最大,各目标块的跟踪时长的方差最小。
处理器确定各目标块的跟踪时长的过程,可以参考确定各目标对象的跟踪时长的过程,在此不做赘述。
在本实施例中,处理器还可以针对每个目标块,在该目标块包含的各目标对象中识别处于边缘位置的第二目标对象。
如图6所示,其示出了一目标块中包括多个目标对象的示意图。如图6所示,针对该目标块,处理器可以识别出处于边缘位置的第二目标对象分别为目标对象610、620、650、和660。
识别出各目标块中处于边缘位置的各第二目标对象后,处理器可以根据各第二目标对象对应的细节相机位置信息确定该目标块对应的细节相机位置信息,根据各第二目标对象对应的倍率确定该目标块对应的倍率。
如,可以将各第二目标对象对应的倍率中的最大值作为该目标块对应的倍率,或将各第二目标对象的倍率乘以对应的权重得到综合倍率,作为该目标块对应的倍率。
S403,针对每个目标块,根据该目标块对应的细节相机位置信息和倍率,控制所述细节相机调整其位置和倍率,并控制调整后的细节相机在该目标块对应的跟踪时长内抓拍该目标块。
得到各目标块对应的细节相机位置信息和倍率,以及各目标块对应的跟踪时长后,处理器可以针对每个目标块,根据该目标块对应的细节相机位置信息和倍率,控制细节相机调整其位置和倍率,并控制调整后的细节相机在该目标块对应的跟踪时长内抓拍该目标块。
本实施例中,处理器可以对各目标对象进行分块处理,进而可以对各个目标块进行抓拍,提高抓拍效率。
相应的,本申请实施例还提供了一种目标对象抓拍装置,如图7所示,所述装置包括:
检测模块710,用于检测全景相机所采集的当前全景视频帧中的目标对象,确定各目标对象在所述当前全景视频帧中的第一位置信息、大小、移动方向和速度信息;
计算模块720,用于根据各目标对象的第一位置信息、移动方向和速度信息,以及预设的细节相机位置调整时间,计算各目标对象的抓拍位置信息;
第一确定模块730,用于根据各目标对象的抓拍位置信息,以及预先构建的全景相机和细节相机的位置映射关系,确定各目标对象对应的细节相机位置信息,并根据各目标对象的大小确定各目标对象对应的倍率;
控制模块740,用于确定各目标对象的跟踪时长,针对每个目标对象,根据该目标对象对应的细节相机位置信息和倍率,控制所述细节相机调整其位置和倍率,并控制调整后的细节相机在该目标对象对应的跟踪时长内抓拍该目标对象。
本申请实施例中,当检测到全景相机中的目标对象时,能够根据各目标对象的具体位置信息、大小、移动方向和速度信息等,调整细节相机采用与各目标对象对应的位置和倍率对其进行抓拍,并且能够在各目标对象对应的抓拍时长内对各目标对象进行多次抓拍,从而能够在保证监控范围的前提下,提高目标对象的抓拍质量。
作为本申请实施例的一种实施方式,所述计算模块720包括:
第一确定子模块(图中未示出),用于根据各目标对象的速度信息、移动方向,以及预设的细节相机位置调整时间,确定各目标对象的位置变化信息;
第二确定子模块(图中未示出),用于根据各目标对象的第一位置信息,以及对应的位置变化信息,确定各目标对象的抓拍位置信息。
作为本申请实施例的一种实施方式,所述第一确定模块730包括:
第三确定子模块(图中未示出),用于针对每个目标对象,根据该目标对象的大小,确定对应的视场角;
第四确定子模块(图中未示出),用于根据预设的倍率和视场角的对应关系,确定该视场角对应的倍率,并将确定的倍率作为该目标对象对应的倍率。
作为本申请实施例的一种实施方式,所述控制模块740包括:
第一计算子模块(图中未示出),用于根据各目标对象的移动方向,确定各目标对象距离监控场景边缘的距离,根据各目标对象距离监控场景边缘的距离、以及对应各目标对象的速度大小,计算各目标对象的离开时间;
第二计算子模块(图中未示出),用于根据各目标对象的离开时间,以及预设条件,计算各目标对象的跟踪时长;其中,所述预设条件包括:各目标对象的跟踪时长之和小于任一目标对象的离开时间,各目标对象的跟踪时长之和最大,各目标对象的跟踪时长的方差最小。
作为本申请实施例的一种实施方式,所述装置还包括:
第二确定模块(图中未示出),用于根据各目标对象的离开时间从小到大的顺序,确定各目标对象的抓拍优先级;
所述控制模块740,具体用于根据抓拍优先级从高到低的顺序,针对每个目标对象,根据该目标对象对应的细节相机位置信息和倍率,控制所述细节相机调整其位置和倍率,并控制调整后的细节相机在该目标对象对应的跟踪时长内抓拍该目标对象。
作为本申请实施例的一种实施方式，所述控制模块740，具体用于获取预设的跟踪时长，并将所获取的跟踪时长作为各目标对象的跟踪时长。
作为本申请实施例的一种实施方式,所述装置还包括:
识别模块(图中未示出),用于在各目标对象中,识别移动方向为正朝所述全景相机移动的第一目标对象;
相应的,所述计算模块720,用于根据各第一目标对象的第一位置信息、 移动方向和速度信息,以及预设的细节相机位置调整时间,计算各第一目标对象的抓拍位置信息;
所述第一确定模块730,用于根据各第一目标对象的抓拍位置信息,以及预先构建的全景相机和细节相机的位置映射关系,确定各第一目标对象对应的细节相机位置信息,并根据各第一目标对象的大小确定各第一目标对象对应的倍率;
所述控制模块740,用于确定各第一目标对象的跟踪时长,针对每个第一目标对象,根据该第一目标对象对应的细节相机位置信息和倍率,控制所述细节相机调整其位置和倍率,并控制调整后的细节相机在该第一目标对象对应的跟踪时长内抓拍该第一目标对象。
作为本申请实施例的一种实施方式,所述装置还包括:
获取模块(图中未示出),用于针对任一目标对象,获取所述细节相机采集的该目标对象对应的多张图像;
存储模块(图中未示出),用于在所述多张图像中,识别并保存图像质量最优的N张图像,其中,N为大于0的整数。
作为本申请实施例的一种实施方式,所述控制模块740包括:
切分子模块(图中未示出),用于根据每个目标对象对应的细节相机位置信息和倍率,对各目标对象进行分块处理,得到至少一个目标块,其中,各目标块中包含一个或多个目标对象;
第五确定子模块(图中未示出),用于确定每个目标块的跟踪时长,针对每个目标块,在该目标块包含的各目标对象中识别处于边缘位置的第二目标对象,并根据各第二目标对象对应的细节相机位置信息确定该目标块对应的细节相机位置信息,根据各第二目标对象对应的倍率确定该目标块对应的倍率;
控制子模块(图中未示出),用于针对每个目标块,根据该目标块对应的细节相机位置信息和倍率,控制所述细节相机调整其位置和倍率,并控制调整后的细节相机在该目标块对应的跟踪时长内抓拍该目标块。
作为本申请实施例的一种实施方式,所述第五确定子模块包括:
确定子单元(图中未示出),用于针对每个目标块,根据该目标块中包括的各目标对象的移动方向,确定移动方向相同且数量最多的第三目标对象,并根据各第三目标对象的移动方向,确定该目标块距离监控场景边缘的距离;
第一计算子单元(图中未示出),用于根据各目标块距离监控场景边缘的距离,以及对应各目标块中包括的各第三目标对象的平均速度,计算各目标块的离开时间;
第二计算子单元(图中未示出),用于根据各目标块的离开时间,以及预设条件,计算各目标块的跟踪时长;其中,所述预设条件包括:各目标块的跟踪时长之和小于任一目标块的离开时间,各目标块的跟踪时长之和最大,各目标块的跟踪时长的方差最小。
作为本申请实施例的一种实施方式,所述第五确定子模块,具体用于将各第二目标对象对应的倍率中的最大值作为该目标块对应的倍率,或将各第二目标对象的倍率乘以对应的权重得到综合倍率,作为该目标块对应的倍率。
作为本申请实施例的一种实施方式,所述检测模块710,具体用于检测全景相机所采集的当前全景视频帧中的,且不存在于上一全景视频帧中的目标对象。
相应的,本申请实施例还提供了一种视频监控设备,如图8所示,所述视频监控设备包括全景相机810、处理器820、以及细节相机830;
所述全景相机810,用于采集当前全景视频帧,并将所述当前全景视频帧发送给所述处理器820;
所述处理器820,用于检测所述当前全景视频帧中的目标对象,确定各目标对象在所述当前全景视频帧中的第一位置信息、大小、移动方向和速度信息;根据各目标对象的第一位置信息、移动方向和速度信息,以及预设的细节相机位置调整时间,计算各目标对象的抓拍位置信息;根据各目标对象的抓拍位置信息,以及预先构建的全景相机和细节相机的位置映射关系,确定各目标对象对应的细节相机位置信息,并根据各目标对象的大小确定各目标对象对应的倍率;确定各目标对象的跟踪时长,并针对每个目标对象,将该 目标对象对应的细节相机位置信息和倍率发送至细节相机;
所述细节相机830,用于根据接收到的该目标对象对应的细节相机位置信息和倍率调整其自身的位置和倍率,并在该目标对象对应的跟踪时长内抓拍该目标对象。
本申请实施例中,当检测到全景相机中的目标对象时,能够根据各目标对象的具体位置信息、大小、移动方向和速度信息等,调整细节相机采用与各目标对象对应的位置和倍率对其进行抓拍,并且能够在各目标对象对应的抓拍时长内对各目标对象进行多次抓拍,从而能够在保证监控范围的前提下,提高目标对象的抓拍质量。
作为本申请实施例的一种实施方式,所述处理器820,具体用于根据各目标对象的速度信息、移动方向,以及预设的细节相机位置调整时间,确定各目标对象的位置变化信息;根据各目标对象的第一位置信息,以及对应的位置变化信息,确定各目标对象对应的抓拍位置信息。
作为本申请实施例的一种实施方式,所述处理器820,具体用于针对每个目标对象,根据该目标对象的大小,确定对应的视场角;根据预设的倍率和视场角的对应关系,确定该视场角对应的倍率,并将确定的倍率作为该目标对象对应的倍率。
作为本申请实施例的一种实施方式,所述处理器820,具体用于根据各目标对象的移动方向,确定各目标对象距离监控场景边缘的距离,根据各目标对象距离监控场景边缘的距离、以及对应各目标对象的速度大小,计算各目标对象的离开时间;根据各目标对象的离开时间,以及预设条件,计算各目标对象的跟踪时长;其中,所述预设条件包括:各目标对象的跟踪时长之和小于任一目标对象的离开时间,各目标对象的跟踪时长之和最大,各目标对象的跟踪时长的方差最小。
作为本申请实施例的一种实施方式,所述处理器820,还用于在针对每个目标对象,根据该目标对象对应的细节相机位置信息和倍率,控制所述细节相机调整其位置和倍率,并控制调整后的细节相机在该目标对象对应的跟踪时长内抓拍该目标对象之前,根据各目标对象的离开时间从小到大的顺序, 确定各目标对象的抓拍优先级;
所述处理器820,具体用于根据抓拍优先级从高到低的顺序,针对每个目标对象,将该目标对象对应的细节相机位置信息和倍率发送至细节相机830;
所述细节相机830,具体用于根据接收到的该目标对象对应的细节相机位置信息和倍率调整其自身的位置和倍率,并控制调整后的细节相机在该目标对象对应的跟踪时长内抓拍该目标对象。
作为本申请实施例的一种实施方式,所述处理器820,具体用于获取预设的跟踪时长,并将所获取的跟踪时长作为各目标对象的跟踪时长。
作为本申请实施例的一种实施方式,所述处理器820,还用于在根据各目标对象的第一位置信息、移动方向和速度信息,以及预设的细节相机位置调整时间,计算各目标对象的抓拍位置信息之前,在各目标对象中,识别移动方向为正朝所述全景相机移动的第一目标对象;
所述处理器820,具体用于根据各第一目标对象的第一位置信息、移动方向和速度信息,以及预设的细节相机位置调整时间,计算各第一目标对象的抓拍位置信息;根据各第一目标对象的抓拍位置信息,以及预先构建的全景相机和细节相机的位置映射关系,确定各第一目标对象对应的细节相机位置信息,并根据各第一目标对象的大小确定各第一目标对象对应的倍率;确定各第一目标对象的跟踪时长,针对每个第一目标对象,将该第一目标对象对应的细节相机位置信息和倍率发送至细节相机830;
所述细节相机830,用于根据接收到的该第一目标对象对应的细节相机位置信息和倍率调整其自身的位置和倍率,并控制调整后的细节相机在该第一目标对象对应的跟踪时长内抓拍该第一目标对象。
作为本申请实施例的一种实施方式,所述处理器820,还用于针对任一目标对象,获取所述细节相机采集的该目标对象对应的多张图像;在所述多张图像中,识别并保存图像质量最优的N张图像,其中,N为大于0的整数。
作为本申请实施例的一种实施方式,所述处理器820,具体用于根据每个目标对象对应的细节相机位置信息和倍率,对各目标对象进行分块处理,得到至少一个目标块,其中,各目标块中包含一个或多个目标对象;确定每个 目标块的跟踪时长,针对每个目标块,在该目标块包含的各目标对象中识别处于边缘位置的第二目标对象,并根据各第二目标对象对应的细节相机位置信息确定该目标块对应的细节相机位置信息,根据各第二目标对象对应的倍率确定该目标块对应的倍率;针对每个目标块,将该目标块对应的细节相机位置信息和倍率发送至细节相机830;
所述细节相机830,具体用于根据接收到的该目标块对应的细节相机位置信息和倍率调整其自身的位置和倍率,并控制调整后的细节相机在该目标块对应的跟踪时长内抓拍该目标块。
作为本申请实施例的一种实施方式,所述处理器820,具体用于针对每个目标块,根据该目标块中包括的各目标对象的移动方向,确定移动方向相同且数量最多的第三目标对象,并根据各第三目标对象的移动方向,确定该目标块距离监控场景边缘的距离;根据各目标块距离监控场景边缘的距离,以及对应各目标块中包括的各第三目标对象的平均速度,计算各目标块的离开时间;根据各目标块的离开时间,以及预设条件,计算各目标块的跟踪时长;其中,所述预设条件包括:各目标块的跟踪时长之和小于任一目标块的离开时间,各目标块的跟踪时长之和最大,各目标块的跟踪时长的方差最小。
作为本申请实施例的一种实施方式,所述处理器820,具体用于将各第二目标对象对应的倍率中的最大值作为该目标块对应的倍率,或将各第二目标对象的倍率乘以对应的权重得到综合倍率,作为该目标块对应的倍率。
作为本申请实施例的一种实施方式,所述处理器820,具体用于检测全景相机所采集的当前全景视频帧中的,且不存在于上一视频帧中的目标对象。
相应地,本申请实施例还提供了一种存储介质,其中,该存储介质用于存储可执行程序代码,所述可执行程序代码用于在运行时执行本申请实施例所述的一种目标对象抓拍方法,其中,所述目标对象抓拍方法包括:
检测全景相机所采集的当前全景视频帧中的目标对象,确定各目标对象在所述当前全景视频帧中的第一位置信息、大小、移动方向和速度信息;
根据各目标对象的第一位置信息、移动方向和速度信息,以及预设的细节相机位置调整时间,计算各目标对象的抓拍位置信息;
根据各目标对象的抓拍位置信息,以及预先构建的全景相机和细节相机的位置映射关系,确定各目标对象对应的细节相机位置信息,并根据各目标对象的大小确定各目标对象对应的倍率;
确定各目标对象的跟踪时长,针对每个目标对象,根据该目标对象对应的细节相机位置信息和倍率,控制所述细节相机调整其位置和倍率,并控制调整后的细节相机在该目标对象对应的跟踪时长内抓拍该目标对象。
本申请实施例中,当检测到全景相机中的目标对象时,能够根据各目标对象的具体位置信息、大小、移动方向和速度信息等,调整细节相机采用与各目标对象对应的位置和倍率对其进行抓拍,并且能够在各目标对象对应的抓拍时长内对各目标对象进行多次抓拍,从而能够在保证监控范围的前提下,提高目标对象的抓拍质量。
相应地,本申请实施例还提供了一种应用程序,其中,该应用程序用于在运行时执行本申请实施例所述的一种目标对象抓拍方法,其中,所述目标对象抓拍方法包括:
检测全景相机所采集的当前全景视频帧中的目标对象,确定各目标对象在所述当前全景视频帧中的第一位置信息、大小、移动方向和速度信息;
根据各目标对象的第一位置信息、移动方向和速度信息,以及预设的细节相机位置调整时间,计算各目标对象的抓拍位置信息;
根据各目标对象的抓拍位置信息,以及预先构建的全景相机和细节相机的位置映射关系,确定各目标对象对应的细节相机位置信息,并根据各目标对象的大小确定各目标对象对应的倍率;
确定各目标对象的跟踪时长,针对每个目标对象,根据该目标对象对应的细节相机位置信息和倍率,控制所述细节相机调整其位置和倍率,并控制调整后的细节相机在该目标对象对应的跟踪时长内抓拍该目标对象。
本申请实施例中,当检测到全景相机中的目标对象时,能够根据各目标对象的具体位置信息、大小、移动方向和速度信息等,调整细节相机采用与各目标对象对应的位置和倍率对其进行抓拍,并且能够在各目标对象对应的抓拍时长内对各目标对象进行多次抓拍,从而能够在保证监控范围的前提下, 提高目标对象的抓拍质量。
对于装置/视频监控设备/存储介质/应用程序实施例而言,由于其基本相似于方法实施例,所以描述的比较简单,相关之处参见方法实施例的部分说明即可。
需要说明的是,在本文中,诸如第一和第二等之类的关系术语仅仅用来将一个实体或者操作与另一个实体或操作区分开来,而不一定要求或者暗示这些实体或操作之间存在任何这种实际的关系或者顺序。而且,术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、物品或者设备不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法、物品或者设备所固有的要素。在没有更多限制的情况下,由语句“包括一个……”限定的要素,并不排除在包括所述要素的过程、方法、物品或者设备中还存在另外的相同要素。
本说明书中的各个实施例均采用相关的方式描述,各个实施例之间相同相似的部分互相参见即可,每个实施例重点说明的都是与其他实施例的不同之处。尤其,对于装置实施例而言,由于其基本相似于方法实施例,所以描述的比较简单,相关之处参见方法实施例的部分说明即可。
本领域普通技术人员可以理解实现上述方法实施方式中的全部或部分步骤是可以通过程序来指令相关的硬件来完成,所述的程序可以存储于计算机可读取存储介质中,这里所称得的存储介质,如:ROM/RAM、磁碟、光盘等。
以上所述仅为本申请的较佳实施例而已,并非用于限定本申请的保护范围。凡在本申请的精神和原则之内所作的任何修改、等同替换、改进等,均包含在本申请的保护范围内。

Claims (35)

  1. 一种目标对象抓拍方法,其特征在于,所述方法包括:
    检测全景相机所采集的当前全景视频帧中的目标对象,确定各目标对象在所述当前全景视频帧中的第一位置信息、大小、移动方向和速度信息;
    根据各目标对象的第一位置信息、移动方向和速度信息,以及预设的细节相机位置调整时间,计算各目标对象的抓拍位置信息;
    根据各目标对象的抓拍位置信息,以及预先构建的全景相机和细节相机的位置映射关系,确定各目标对象对应的细节相机位置信息,并根据各目标对象的大小确定各目标对象对应的倍率;
    确定各目标对象的跟踪时长,针对每个目标对象,根据该目标对象对应的细节相机位置信息和倍率,控制所述细节相机调整其位置和倍率,并控制调整后的细节相机在该目标对象对应的跟踪时长内抓拍该目标对象。
  2. 根据权利要求1所述的方法,其特征在于,所述根据各目标对象的第一位置信息、移动方向和速度信息,以及预设的细节相机位置调整时间,计算各目标对象的抓拍位置信息的步骤包括:
    根据各目标对象的速度信息、移动方向,以及预设的细节相机位置调整时间,确定各目标对象的位置变化信息;
    根据各目标对象的第一位置信息,以及对应的位置变化信息,确定各目标对象的抓拍位置信息。
  3. 根据权利要求1所述的方法,其特征在于,所述根据各目标对象的大小确定各目标对象对应的倍率的步骤包括:
    针对每个目标对象,根据该目标对象的大小,确定对应的视场角;
    根据预设的倍率和视场角的对应关系,确定该视场角对应的倍率,并将确定的倍率作为该目标对象对应的倍率。
  4. 根据权利要求1所述的方法,其特征在于,所述确定各目标对象的跟踪时长的步骤包括:
    根据各目标对象的移动方向,确定各目标对象距离监控场景边缘的距离,根据各目标对象距离监控场景边缘的距离、以及对应各目标对象的速度大小,计算各目标对象的离开时间;
    根据各目标对象的离开时间,以及预设条件,计算各目标对象的跟踪时长;其中,所述预设条件包括:各目标对象的跟踪时长之和小于任一目标对象的离开时间,各目标对象的跟踪时长之和最大,各目标对象的跟踪时长的方差最小。
  5. 根据权利要求4所述的方法,其特征在于,所述针对每个目标对象,根据该目标对象对应的细节相机位置信息和倍率,控制所述细节相机调整其位置和倍率,并控制调整后的细节相机在该目标对象对应的跟踪时长内抓拍该目标对象之前,所述方法还包括:
    根据各目标对象的离开时间从小到大的顺序,确定各目标对象的抓拍优先级;
    所述针对每个目标对象,根据该目标对象对应的细节相机位置信息和倍率,控制所述细节相机调整其位置和倍率,并控制调整后的细节相机在该目标对象对应的跟踪时长内抓拍该目标对象的步骤包括:
    根据抓拍优先级从高到低的顺序,针对每个目标对象,根据该目标对象对应的细节相机位置信息和倍率,控制所述细节相机调整其位置和倍率,并控制调整后的细节相机在该目标对象对应的跟踪时长内抓拍该目标对象。
  6. 根据权利要求1所述的方法,其特征在于,所述确定各目标对象的跟踪时长的步骤包括:
    获取预设的跟踪时长,并将所获取的跟踪时长作为各目标对象的跟踪时长。
  7. 根据权利要求1-6任一项所述的方法,其特征在于,所述根据各目标对象的第一位置信息、移动方向和速度信息,以及预设的细节相机位置调整时间,计算各目标对象的抓拍位置信息之前,所述方法还包括:
    在各目标对象中,识别移动方向为正朝所述全景相机移动的第一目标对 象;
    相应的,所述根据各目标对象的第一位置信息、移动方向和速度信息,以及预设的细节相机位置调整时间,计算各目标对象的抓拍位置信息;根据各目标对象的抓拍位置信息,以及预先构建的全景相机和细节相机的位置映射关系,确定各目标对象对应的细节相机位置信息,并根据各目标对象的大小确定各目标对象对应的倍率;确定各目标对象的跟踪时长,针对每个目标对象,根据该目标对象对应的细节相机位置信息和倍率,控制所述细节相机调整其位置和倍率,并控制调整后的细节相机在该目标对象对应的跟踪时长内抓拍该目标对象的步骤包括:
    根据各第一目标对象的第一位置信息、移动方向和速度信息,以及预设的细节相机位置调整时间,计算各第一目标对象的抓拍位置信息;
    根据各第一目标对象的抓拍位置信息,以及预先构建的全景相机和细节相机的位置映射关系,确定各第一目标对象对应的细节相机位置信息,并根据各第一目标对象的大小确定各第一目标对象对应的倍率;
    确定各第一目标对象的跟踪时长,针对每个第一目标对象,根据该第一目标对象对应的细节相机位置信息和倍率,控制所述细节相机调整其位置和倍率,并控制调整后的细节相机在该第一目标对象对应的跟踪时长内抓拍该第一目标对象。
  8. 根据权利要求1-6任一项所述的方法,其特征在于,所述方法还包括:
    针对任一目标对象,获取所述细节相机采集的该目标对象对应的多张图像;
    在所述多张图像中,识别并保存图像质量最优的N张图像,其中,N为大于0的整数。
  9. 根据权利要求1-6任一项所述的方法,其特征在于,所述确定各目标对象的跟踪时长,针对每个目标对象,根据该目标对象对应的细节相机位置信息和倍率,控制所述细节相机调整其位置和倍率,并控制调整后的细节相机在该目标对象对应的跟踪时长内抓拍该目标对象的步骤包括:
    根据每个目标对象对应的细节相机位置信息和倍率,对各目标对象进行分块处理,得到至少一个目标块,其中,各目标块中包含一个或多个目标对象;
    确定每个目标块的跟踪时长,针对每个目标块,在该目标块包含的各目标对象中识别处于边缘位置的第二目标对象,并根据各第二目标对象对应的细节相机位置信息确定该目标块对应的细节相机位置信息,根据各第二目标对象对应的倍率确定该目标块对应的倍率;
    针对每个目标块,根据该目标块对应的细节相机位置信息和倍率,控制所述细节相机调整其位置和倍率,并控制调整后的细节相机在该目标块对应的跟踪时长内抓拍该目标块。
  10. 根据权利要求9所述的方法,其特征在于,所述确定每个目标块的跟踪时长的步骤包括:
    针对每个目标块,根据该目标块中包括的各目标对象的移动方向,确定移动方向相同且数量最多的第三目标对象,并根据各第三目标对象的移动方向,确定该目标块距离监控场景边缘的距离;
    根据各目标块距离监控场景边缘的距离,以及对应各目标块中包括的各第三目标对象的平均速度,计算各目标块的离开时间;
    根据各目标块的离开时间,以及预设条件,计算各目标块的跟踪时长;其中,所述预设条件包括:各目标块的跟踪时长之和小于任一目标块的离开时间,各目标块的跟踪时长之和最大,各目标块的跟踪时长的方差最小。
  11. 根据权利要求9所述的方法,其特征在于,所述根据各第二目标对象对应的倍率确定该目标块对应的倍率的步骤包括:
    将各第二目标对象对应的倍率中的最大值作为该目标块对应的倍率,或将各第二目标对象的倍率乘以对应的权重得到综合倍率,作为该目标块对应的倍率。
  12. 根据权利要求1-6任一项所述的方法,其特征在于,所述检测全景相机所采集的当前全景视频帧中的目标对象的步骤包括:
    检测全景相机所采集的当前全景视频帧中的,且不存在于上一全景视频帧中的目标对象。
  13. 一种目标对象抓拍装置,其特征在于,所述装置包括:
    检测模块,用于检测全景相机所采集的当前全景视频帧中的目标对象,确定各目标对象在所述当前全景视频帧中的第一位置信息、大小、移动方向和速度信息;
    计算模块,用于根据各目标对象的第一位置信息、移动方向和速度信息,以及预设的细节相机位置调整时间,计算各目标对象的抓拍位置信息;
    第一确定模块,用于根据各目标对象的抓拍位置信息,以及预先构建的全景相机和细节相机的位置映射关系,确定各目标对象对应的细节相机位置信息,并根据各目标对象的大小确定各目标对象对应的倍率;
    控制模块,用于确定各目标对象的跟踪时长,针对每个目标对象,根据该目标对象对应的细节相机位置信息和倍率,控制所述细节相机调整其位置和倍率,并控制调整后的细节相机在该目标对象对应的跟踪时长内抓拍该目标对象。
  14. 根据权利要求13所述的装置,其特征在于,所述计算模块包括:
    第一确定子模块,用于根据各目标对象的速度信息、移动方向,以及预设的细节相机位置调整时间,确定各目标对象的位置变化信息;
    第二确定子模块,用于根据各目标对象的第一位置信息,以及对应的位置变化信息,确定各目标对象的抓拍位置信息。
  15. 根据权利要求13所述的装置,其特征在于,所述第一确定模块包括:
    第三确定子模块,用于针对每个目标对象,根据该目标对象的大小,确定对应的视场角;
    第四确定子模块,用于根据预设的倍率和视场角的对应关系,确定该视场角对应的倍率,并将确定的倍率作为该目标对象对应的倍率。
  16. 根据权利要求13所述的装置,其特征在于,所述控制模块包括:
    第一计算子模块,用于根据各目标对象的移动方向,确定各目标对象距离监控场景边缘的距离,根据各目标对象距离监控场景边缘的距离、以及对应各目标对象的速度大小,计算各目标对象的离开时间;
    第二计算子模块,用于根据各目标对象的离开时间,以及预设条件,计算各目标对象的跟踪时长;其中,所述预设条件包括:各目标对象的跟踪时长之和小于任一目标对象的离开时间,各目标对象的跟踪时长之和最大,各目标对象的跟踪时长的方差最小。
  17. 根据权利要求16所述的装置,其特征在于,所述装置还包括:
    第二确定模块,用于根据各目标对象的离开时间从小到大的顺序,确定各目标对象的抓拍优先级;
    所述控制模块,具体用于根据抓拍优先级从高到低的顺序,针对每个目标对象,根据该目标对象对应的细节相机位置信息和倍率,控制所述细节相机调整其位置和倍率,并控制调整后的细节相机在该目标对象对应的跟踪时长内抓拍该目标对象。
  18. 根据权利要求13所述的装置,其特征在于,所述控制模块,具体用于获取预设的跟踪时长,并将所获取的跟踪时长作为各目标对象的跟踪时长。
  19. 根据权利要求13-18任一项所述的装置,其特征在于,所述装置还包括:
    识别模块,用于在各目标对象中,识别移动方向为正朝所述全景相机移动的第一目标对象;
    相应的,所述计算模块,用于根据各第一目标对象的第一位置信息、移动方向和速度信息,以及预设的细节相机位置调整时间,计算各第一目标对象的抓拍位置信息;
    所述第一确定模块,用于根据各第一目标对象的抓拍位置信息,以及预先构建的全景相机和细节相机的位置映射关系,确定各第一目标对象对应的细节相机位置信息,并根据各第一目标对象的大小确定各第一目标对象对应的倍率;
    所述控制模块,用于确定各第一目标对象的跟踪时长,针对每个第一目标对象,根据该第一目标对象对应的细节相机位置信息和倍率,控制所述细节相机调整其位置和倍率,并控制调整后的细节相机在该第一目标对象对应的跟踪时长内抓拍该第一目标对象。
  20. 根据权利要求13-18任一项所述的装置,其特征在于,所述装置还包括:
    获取模块,用于针对任一目标对象,获取所述细节相机采集的该目标对象对应的多张图像;
    存储模块,用于在所述多张图像中,识别并保存图像质量最优的N张图像,其中,N为大于0的整数。
  21. 根据权利要求13-18任一项所述的装置,其特征在于,所述控制模块包括:
    切分子模块,用于根据每个目标对象对应的细节相机位置信息和倍率,对各目标对象进行分块处理,得到至少一个目标块,其中,各目标块中包含一个或多个目标对象;
    第五确定子模块,用于确定每个目标块的跟踪时长,针对每个目标块,在该目标块包含的各目标对象中识别处于边缘位置的第二目标对象,并根据各第二目标对象对应的细节相机位置信息确定该目标块对应的细节相机位置信息,根据各第二目标对象对应的倍率确定该目标块对应的倍率;
    控制子模块,用于针对每个目标块,根据该目标块对应的细节相机位置信息和倍率,控制所述细节相机调整其位置和倍率,并控制调整后的细节相机在该目标块对应的跟踪时长内抓拍该目标块。
  22. 根据权利要求21所述的装置,其特征在于,所述第五确定子模块包括:
    确定子单元,用于针对每个目标块,根据该目标块中包括的各目标对象的移动方向,确定移动方向相同且数量最多的第三目标对象,并根据各第三目标对象的移动方向,确定该目标块距离监控场景边缘的距离;
    第一计算子单元,用于根据各目标块距离监控场景边缘的距离,以及对应各目标块中包括的各第三目标对象的平均速度,计算各目标块的离开时间;
    第二计算子单元,用于根据各目标块的离开时间,以及预设条件,计算各目标块的跟踪时长;其中,所述预设条件包括:各目标块的跟踪时长之和小于任一目标块的离开时间,各目标块的跟踪时长之和最大,各目标块的跟踪时长的方差最小。
  23. 根据权利要求21所述的装置,其特征在于,所述第五确定子模块,具体用于将各第二目标对象对应的倍率中的最大值作为该目标块对应的倍率,或将各第二目标对象的倍率乘以对应的权重得到综合倍率,作为该目标块对应的倍率。
  24. 根据权利要求13-19任一项所述的装置,其特征在于,所述检测模块,具体用于检测全景相机所采集的当前全景视频帧中的,且不存在于上一全景视频帧中的目标对象。
  25. 一种视频监控设备,其特征在于,包括全景相机、细节相机、以及处理器;
    所述全景相机,用于采集当前全景视频帧,并将所述当前全景视频帧发送给所述处理器;
    所述处理器,用于检测所述当前全景视频帧中的目标对象,确定各目标对象在所述当前全景视频帧中的第一位置信息、大小、移动方向和速度信息;根据各目标对象的第一位置信息、移动方向和速度信息,以及预设的细节相机位置调整时间,计算各目标对象的抓拍位置信息;根据各目标对象的抓拍位置信息,以及预先构建的全景相机和细节相机的位置映射关系,确定各目标对象对应的细节相机位置信息,并根据各目标对象的大小确定各目标对象对应的倍率;确定各目标对象的跟踪时长,并针对每个目标对象,将该目标对象对应的细节相机位置信息和倍率发送至细节相机;
    所述细节相机,用于根据接收到的该目标对象对应的细节相机位置信息和倍率调整其自身的位置和倍率,并在该目标对象对应的跟踪时长内抓拍该目标对象。
  26. 根据权利要求25所述的设备,其特征在于,所述处理器,具体用于根据各目标对象的速度信息、移动方向,以及预设的细节相机位置调整时间,确定各目标对象的位置变化信息;根据各目标对象的第一位置信息,以及对应的位置变化信息,确定各目标对象的抓拍位置信息。
  27. 根据权利要求25所述的设备,其特征在于,所述处理器,具体用于针对每个目标对象,根据该目标对象的大小,确定对应的视场角;根据预设的倍率和视场角的对应关系,确定该视场角对应的倍率,并将确定的倍率作为该目标对象对应的倍率。
  28. 根据权利要求25所述的设备,其特征在于,所述处理器,具体用于根据各目标对象的移动方向,确定各目标对象距离监控场景边缘的距离,根据各目标对象距离监控场景边缘的距离、以及对应各目标对象的速度大小,计算各目标对象的离开时间;根据各目标对象的离开时间,以及预设条件,计算各目标对象的跟踪时长;其中,所述预设条件包括:各目标对象的跟踪时长之和小于任一目标对象的离开时间,各目标对象的跟踪时长之和最大,各目标对象的跟踪时长的方差最小。
  29. 根据权利要求28所述的设备,其特征在于,所述处理器,还用于在针对每个目标对象,根据该目标对象对应的细节相机位置信息和倍率,控制所述细节相机调整其位置和倍率,并控制调整后的细节相机在该目标对象对应的跟踪时长内抓拍该目标对象之前,根据各目标对象的离开时间从小到大的顺序,确定各目标对象的抓拍优先级;
    所述处理器,具体用于根据抓拍优先级从高到低的顺序,针对每个目标对象,将该目标对象对应的细节相机位置信息和倍率发送至细节相机;
    所述细节相机,具体用于根据接收到的该目标对象对应的细节相机位置信息和倍率调整其自身的位置和倍率,并控制调整后的细节相机在该目标对象对应的跟踪时长内抓拍该目标对象。
  30. 根据权利要求25-29任一项所述的设备,其特征在于,所述处理器,还用于在根据各目标对象的第一位置信息、移动方向和速度信息,以及预设的细节相机位置调整时间,计算各目标对象的抓拍位置信息之前,在各目标 对象中,识别移动方向为正朝所述全景相机移动的第一目标对象;
    所述处理器,具体用于根据各第一目标对象的第一位置信息、移动方向和速度信息,以及预设的细节相机位置调整时间,计算各第一目标对象的抓拍位置信息;根据各第一目标对象的抓拍位置信息,以及预先构建的全景相机和细节相机的位置映射关系,确定各第一目标对象对应的细节相机位置信息,并根据各第一目标对象的大小确定各第一目标对象对应的倍率;确定各第一目标对象的跟踪时长,针对每个第一目标对象,将该第一目标对象对应的细节相机位置信息和倍率发送至细节相机;
    所述细节相机,用于根据接收到的该第一目标对象对应的细节相机位置信息和倍率调整其自身的位置和倍率,并控制调整后的细节相机在该第一目标对象对应的跟踪时长内抓拍该第一目标对象。
  31. 根据权利要求25-29任一项所述的设备,其特征在于,所述处理器,具体用于根据每个目标对象对应的细节相机位置信息和倍率,对各目标对象进行分块处理,得到至少一个目标块,其中,各目标块中包含一个或多个目标对象;确定每个目标块的跟踪时长,针对每个目标块,在该目标块包含的各目标对象中识别处于边缘位置的第二目标对象,并根据各第二目标对象对应的细节相机位置信息确定该目标块对应的细节相机位置信息,根据各第二目标对象对应的倍率确定该目标块对应的倍率;针对每个目标块,将该目标块对应的细节相机位置信息和倍率发送至细节相机;
    所述细节相机,具体用于根据接收到的该目标块对应的细节相机位置信息和倍率调整其自身的位置和倍率,并控制调整后的细节相机在该目标块对应的跟踪时长内抓拍该目标块。
  32. 根据权利要求31所述的设备,其特征在于,所述处理器,具体用于针对每个目标块,根据该目标块中包括的各目标对象的移动方向,确定移动方向相同且数量最多的第三目标对象,并根据各第三目标对象的移动方向,确定该目标块距离监控场景边缘的距离;根据各目标块距离监控场景边缘的距离,以及对应各目标块中包括的各第三目标对象的平均速度,计算各目标块的离开时间;根据各目标块的离开时间,以及预设条件,计算各目标块的跟踪时长;其中,所述预设条件包括:各目标块的跟踪时长之和小于任一目 标块的离开时间,各目标块的跟踪时长之和最大,各目标块的跟踪时长的方差最小。
  33. 根据权利要求31所述的设备,其特征在于,所述处理器,具体用于将各第二目标对象对应的倍率中的最大值作为该目标块对应的倍率,或将各第二目标对象的倍率乘以对应的权重得到综合倍率,作为该目标块对应的倍率。
  34. 一种存储介质,其特征在于,所述存储介质用于存储可执行程序代码,所述可执行程序代码用于在运行时执行如权利要求1-12任一项所述的一种目标对象抓拍方法。
  35. 一种应用程序,其特征在于,所述应用程序用于在运行时执行如权利要求1-12任一项所述的一种目标对象抓拍方法。
PCT/CN2018/090987 2017-06-16 2018-06-13 一种目标对象抓拍方法、装置及视频监控设备 WO2018228410A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US16/622,568 US11102417B2 (en) 2017-06-16 2018-06-13 Target object capturing method and device, and video monitoring device
EP18818742.1A EP3641304B1 (en) 2017-06-16 2018-06-13 Target object capturing method and device, and video monitoring device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710459273.6A CN109151375B (zh) 2017-06-16 2017-06-16 一种目标对象抓拍方法、装置及视频监控设备
CN201710459273.6 2017-06-16

Publications (1)

Publication Number Publication Date
WO2018228410A1 true WO2018228410A1 (zh) 2018-12-20

Family

ID=64659429

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/090987 WO2018228410A1 (zh) 2017-06-16 2018-06-13 一种目标对象抓拍方法、装置及视频监控设备

Country Status (4)

Country Link
US (1) US11102417B2 (zh)
EP (1) EP3641304B1 (zh)
CN (1) CN109151375B (zh)
WO (1) WO2018228410A1 (zh)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109886995A (zh) * 2019-01-15 2019-06-14 深圳职业技术学院 一种复杂环境下多目标跟踪方法
CN111340857A (zh) * 2020-02-20 2020-06-26 浙江大华技术股份有限公司 一种摄像机跟踪控制方法及装置
CN112616019A (zh) * 2020-12-16 2021-04-06 重庆紫光华山智安科技有限公司 目标跟踪方法、装置、云台及存储介质
CN112954274A (zh) * 2021-02-04 2021-06-11 三亚海兰寰宇海洋信息科技有限公司 一种用于船舶的视频抓拍方法及系统

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109756682A (zh) * 2019-02-01 2019-05-14 孙斌 动态影像捕捉、跟踪、对焦装置及方法
CN111698413B (zh) * 2019-03-13 2021-05-14 杭州海康威视数字技术股份有限公司 一种对象的图像获取方法、装置及电子设备
CN111814517B (zh) * 2019-04-11 2021-10-22 深圳市家家分类科技有限公司 垃圾投递检测方法及相关产品
CN110221602B (zh) * 2019-05-06 2022-04-26 上海秒针网络科技有限公司 目标对象捕捉方法和装置、存储介质及电子装置
CN110519510B (zh) * 2019-08-08 2021-02-02 浙江大华技术股份有限公司 一种抓拍方法、装置、球机及存储介质
CN112954188B (zh) * 2019-12-10 2021-10-29 李思成 一种仿人眼感知的目标主动抓拍方法和装置
CN110933318A (zh) * 2019-12-12 2020-03-27 天地伟业技术有限公司 一种运动目标的抓拍方法
US11265478B2 (en) * 2019-12-20 2022-03-01 Canon Kabushiki Kaisha Tracking apparatus and control method thereof, image capturing apparatus, and storage medium
CN111263118A (zh) * 2020-02-18 2020-06-09 浙江大华技术股份有限公司 图像的获取方法、装置、存储介质及电子装置
CN113849687B (zh) * 2020-11-23 2022-10-28 阿里巴巴集团控股有限公司 视频处理方法以及装置
CN112822396B (zh) * 2020-12-31 2023-05-02 上海米哈游天命科技有限公司 一种拍摄参数的确定方法、装置、设备及存储介质
CN112887531B (zh) * 2021-01-14 2023-07-25 浙江大华技术股份有限公司 摄像机视频处理方法、装置、系统和计算机设备
CN113452903B (zh) * 2021-06-17 2023-07-11 浙江大华技术股份有限公司 一种抓拍设备、抓拍方法及主控芯片
CN113592427A (zh) * 2021-06-29 2021-11-02 浙江大华技术股份有限公司 工时统计方法、工时统计装置及计算机可读存储介质
CN114071013B (zh) * 2021-10-13 2023-06-20 浙江大华技术股份有限公司 一种用于车载摄像机的目标抓拍与跟踪方法及装置
CN114071015B (zh) * 2021-11-11 2024-02-20 浙江宇视科技有限公司 一种联动抓拍路径的确定方法、装置、介质及设备

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100716306B1 (ko) * 2006-11-14 2007-05-08 주식회사 지.피 코리아 산불감시시스템
CN101969548A (zh) * 2010-10-15 2011-02-09 中国人民解放军国防科学技术大学 基于双目摄像的主动视频获取方法及装置
CN103105858A (zh) * 2012-12-29 2013-05-15 上海安维尔信息科技有限公司 在固定相机和云台相机间进行目标放大、主从跟踪的方法
CN103297696A (zh) * 2013-05-24 2013-09-11 北京小米科技有限责任公司 拍摄方法、装置和终端

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2006040687A2 (en) * 2004-07-19 2006-04-20 Grandeye, Ltd. Automatically expanding the zoom capability of a wide-angle video camera
US7796154B2 (en) * 2005-03-07 2010-09-14 International Business Machines Corporation Automatic multiscale image acquisition from a steerable camera
US20110310219A1 (en) * 2009-05-29 2011-12-22 Youngkook Electronics, Co., Ltd. Intelligent monitoring camera apparatus and image monitoring system implementing same
WO2014043975A1 (zh) * 2012-09-24 2014-03-27 天津市亚安科技股份有限公司 多方向监控区域预警定位监控装置
WO2014043976A1 (zh) * 2012-09-24 2014-03-27 天津市亚安科技股份有限公司 多方向监控区域预警定位自动跟踪监控装置
US9210385B2 (en) * 2012-11-20 2015-12-08 Pelco, Inc. Method and system for metadata extraction from master-slave cameras tracking system
CN104125433A (zh) * 2014-07-30 2014-10-29 西安冉科信息技术有限公司 基于多球机联动结构的视频运动目标监控方法
CN106791715A (zh) * 2017-02-24 2017-05-31 深圳英飞拓科技股份有限公司 分级联控智能监控方法及系统

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100716306B1 (ko) * 2006-11-14 2007-05-08 주식회사 지.피 코리아 산불감시시스템
CN101969548A (zh) * 2010-10-15 2011-02-09 中国人民解放军国防科学技术大学 基于双目摄像的主动视频获取方法及装置
CN103105858A (zh) * 2012-12-29 2013-05-15 上海安维尔信息科技有限公司 在固定相机和云台相机间进行目标放大、主从跟踪的方法
CN103297696A (zh) * 2013-05-24 2013-09-11 北京小米科技有限责任公司 拍摄方法、装置和终端

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3641304A4

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109886995A (zh) * 2019-01-15 2019-06-14 深圳职业技术学院 一种复杂环境下多目标跟踪方法
CN109886995B (zh) * 2019-01-15 2023-05-23 深圳职业技术学院 一种复杂环境下多目标跟踪方法
CN111340857A (zh) * 2020-02-20 2020-06-26 浙江大华技术股份有限公司 一种摄像机跟踪控制方法及装置
CN111340857B (zh) * 2020-02-20 2023-09-19 浙江大华技术股份有限公司 一种摄像机跟踪控制方法及装置
CN112616019A (zh) * 2020-12-16 2021-04-06 重庆紫光华山智安科技有限公司 目标跟踪方法、装置、云台及存储介质
CN112954274A (zh) * 2021-02-04 2021-06-11 三亚海兰寰宇海洋信息科技有限公司 一种用于船舶的视频抓拍方法及系统

Also Published As

Publication number Publication date
CN109151375A (zh) 2019-01-04
US20200228720A1 (en) 2020-07-16
EP3641304B1 (en) 2023-03-01
EP3641304A1 (en) 2020-04-22
CN109151375B (zh) 2020-07-24
US11102417B2 (en) 2021-08-24
EP3641304A4 (en) 2020-04-22

Similar Documents

Publication Publication Date Title
WO2018228410A1 (zh) 一种目标对象抓拍方法、装置及视频监控设备
WO2018228413A1 (zh) 一种目标对象抓拍方法、装置及视频监控设备
CN109922250B (zh) 一种目标对象抓拍方法、装置及视频监控设备
WO2017080399A1 (zh) 一种人脸位置跟踪方法、装置和电子设备
US10445887B2 (en) Tracking processing device and tracking processing system provided with same, and tracking processing method
CN109981972B (zh) 一种机器人的目标跟踪方法、机器人及存储介质
US11196943B2 (en) Video analysis and management techniques for media capture and retention
JP5484184B2 (ja) 画像処理装置、画像処理方法及びプログラム
US8406468B2 (en) Image capturing device and method for adjusting a position of a lens of the image capturing device
US10701281B2 (en) Image processing apparatus, solid-state imaging device, and electronic apparatus
US8532337B2 (en) Object tracking method
US10474935B2 (en) Method and device for target detection
KR20150032630A (ko) 촬상 시스템에 있어서의 제어방법, 제어장치 및 컴퓨터 판독 가능한 기억매체
CN110287907B (zh) 一种对象检测方法和装置
CN109905641B (zh) 一种目标监控方法、装置、设备及系统
JP2020149111A (ja) 物体追跡装置および物体追跡方法
WO2017101292A1 (zh) 自动对焦的方法、装置和系统
CN103607558A (zh) 一种视频监控系统及其目标匹配方法和装置
WO2013023474A1 (zh) 智能跟踪球机及其跟踪方法
KR20080079506A (ko) 촬영장치 및 이의 대상 추적방법
JP7338174B2 (ja) 物体検出装置および物体検出方法
CN102469247A (zh) 摄像装置及其动态对焦方法
US20110267463A1 (en) Image capturing device and method for controlling image capturing device
CN102457657A (zh) 摄影机装置及利用其进行人型侦测的方法
CN113470082A (zh) 一种基于动态机器视觉的营业厅客流计数方法及设备

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18818742

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

WWE Wipo information: entry into national phase

Ref document number: 2018818742

Country of ref document: EP

ENP Entry into the national phase

Ref document number: 2018818742

Country of ref document: EP

Effective date: 20200116