WO2021128747A1 - Monitoring method, device, system, electronic equipment, and storage medium - Google Patents

Monitoring method, device, system, electronic equipment, and storage medium

Info

Publication number
WO2021128747A1
WO2021128747A1 PCT/CN2020/095112 CN2020095112W WO2021128747A1 WO 2021128747 A1 WO2021128747 A1 WO 2021128747A1 CN 2020095112 W CN2020095112 W CN 2020095112W WO 2021128747 A1 WO2021128747 A1 WO 2021128747A1
Authority
WO
WIPO (PCT)
Prior art keywords
area
target
candidate
shooting
preset
Prior art date
Application number
PCT/CN2020/095112
Other languages
English (en)
French (fr)
Inventor
王云刚
Original Assignee
深圳市鸿合创新信息技术有限责任公司
Priority date
Filing date
Publication date
Application filed by 深圳市鸿合创新信息技术有限责任公司 filed Critical 深圳市鸿合创新信息技术有限责任公司
Priority to EP20905705.8A (published as EP4068763A4)
Priority to US17/785,940 (published as US11983898B2)
Publication of WO2021128747A1

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/188Capturing isolated or intermittent images triggered by the occurrence of a predetermined event, e.g. an object reaching a predetermined position
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/67Focus control based on electronic image sensor signals
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • G06T7/62Analysis of geometric attributes of area, perimeter, diameter or volume
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/25Determination of region of interest [ROI] or a volume of interest [VOI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/761Proximity, similarity or dissimilarity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/52Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/61Control of cameras or camera modules based on recognised objects
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/69Control of means for changing angle of the field of view, e.g. optical zoom objectives or electronic zooming
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/695Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10141Special mode during image acquisition
    • G06T2207/10148Varying focus
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07Target detection

Definitions

  • This application relates to the field of monitoring technology, in particular to a monitoring method, device, system, electronic equipment, and storage medium.
  • surveillance scenes are generally monitored with an image acquisition device fitted with a wide-angle lens or a zoom lens.
  • the wide-angle lens has a wide angle of view and a short focal length and can capture every object in the scene, but objects farther from the lens appear less sharp;
  • the zoom lens has a narrow angle of view and an adjustable focal length, so the sharpness of an object can be increased by adjusting the focal length, but its field of view is limited and it cannot capture every object in the scene.
  • the purpose of this application is to propose a monitoring method, device, system, electronic equipment, and storage medium, so that any object in the monitoring scene can be captured and the captured image is sharp.
  • the present application provides a monitoring method, which may include: determining the target area to be monitored from the acquired image of the monitoring scene; determining the target shooting posture and the target shooting focal length according to the target area; and controlling the pan-tilt camera to shoot according to the target shooting posture and the target shooting focal length.
  • the foregoing determining of the target shooting posture and the target shooting focal length based on the target area may include: determining the preset area in which the target area is located as the target preset area, where the preset areas are obtained by dividing the image of the monitoring scene and number at least two; determining the corresponding candidate shooting posture and candidate shooting focal length according to the target preset area, where the candidate shooting posture and candidate shooting focal length corresponding to a preset area are obtained from the range in the monitoring scene that the pan-tilt camera captures for that preset area; and determining the target shooting posture and the target shooting focal length according to the determined candidate shooting posture and candidate shooting focal length.
  • determining the target shooting posture and the target shooting focal length based on the determined candidate shooting posture and candidate shooting focal length may include: selecting a preset area from the image of the monitoring scene as a first reference preset area, the first reference preset area being different from the target preset area; determining a posture change ratio parameter according to the distance between the target preset area and the first reference preset area, the candidate shooting posture corresponding to the target preset area, and the candidate shooting posture corresponding to the first reference preset area; and generating the target shooting posture according to the distance between the target area and the target preset area, the candidate shooting posture corresponding to the target preset area, and the posture change ratio parameter.
  • the aforementioned target shooting posture includes a horizontal rotation angle and a vertical rotation angle. For the horizontal rotation angle, the distance between the target preset area and the first reference preset area is their horizontal distance, and the distance between the target area and the target preset area is their horizontal distance, where the horizontal distance between the target preset area and the first reference preset area is non-zero. For the vertical rotation angle, the corresponding distances are the vertical distances, where the vertical distance between the target preset area and the first reference preset area is non-zero.
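  • the posture-interpolation scheme described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the image-space area centers, the calibrated candidate poses, and the linear degrees-per-pixel model are all assumptions.

```python
# Hypothetical sketch of the posture change ratio parameter. Preset areas are
# modeled as dicts with an image-space 'center' (x, y) in pixels and a
# calibrated candidate 'pose' (pan, tilt) in degrees -- illustrative names only.

def target_posture(target_center, tgt_preset, ref_preset):
    tx, ty = tgt_preset["center"]
    rx, ry = ref_preset["center"]
    # Posture change ratio parameters: degrees of pan per pixel of horizontal
    # distance and degrees of tilt per pixel of vertical distance between the
    # target preset area and the first reference preset area. As the text
    # notes, these distances must be non-zero.
    pan_ratio = (tgt_preset["pose"][0] - ref_preset["pose"][0]) / (tx - rx)
    tilt_ratio = (tgt_preset["pose"][1] - ref_preset["pose"][1]) / (ty - ry)
    # Extrapolate from the target preset area's candidate pose by the target
    # area's horizontal / vertical offset from that preset area.
    dx = target_center[0] - tx
    dy = target_center[1] - ty
    return (tgt_preset["pose"][0] + pan_ratio * dx,
            tgt_preset["pose"][1] + tilt_ratio * dy)
```

The pan and tilt components are computed independently, mirroring the text's separate horizontal and vertical distance conditions.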
  • determining the target shooting posture and the target shooting focal length based on the determined candidate shooting posture and candidate shooting focal length may include: determining a second reference preset area from the image of the monitoring scene, where either the second reference preset area contains the target preset area and is larger than it, or the target preset area contains the second reference preset area and is larger than it; obtaining a focal-length change ratio parameter according to the area ratio between the target preset area and the second reference preset area, the candidate shooting focal length corresponding to the target preset area, and the candidate shooting focal length of the second reference preset area; and generating the target shooting focal length according to the area ratio between the target area and the target preset area, the candidate shooting focal length corresponding to the target preset area, and the focal-length change ratio parameter.
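  • the focal-length step can be sketched in the same spirit. The patent does not specify the exact mapping between area ratio and focal length, so the linear model below (and all names) is an assumption for illustration only.

```python
# Hypothetical sketch of the focal-length change ratio parameter. Preset areas
# are modeled as dicts with an 'area' and a calibrated candidate 'focal' length.

def focal_ratio_param(tgt_preset, ref_preset):
    # Focal-length change per unit change of area ratio, calibrated from the
    # target preset area and the second reference preset area. The two areas
    # differ in size, so the ratio is never exactly 1.
    area_ratio = tgt_preset["area"] / ref_preset["area"]
    return (tgt_preset["focal"] - ref_preset["focal"]) / (area_ratio - 1.0)

def target_focal(target_area, tgt_preset, ref_preset):
    # Apply the same linear model to the target area / target preset area ratio.
    k = focal_ratio_param(tgt_preset, ref_preset)
    area_ratio = target_area / tgt_preset["area"]
    return tgt_preset["focal"] + k * (area_ratio - 1.0)
```

A smaller target area yields a longer focal length under this model, which matches the intuition of zooming in on a small region.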
  • determining the target area to be monitored from the acquired image of the surveillance scene may include: determining at least one candidate target area from the image; judging whether the number of candidate target areas is greater than the number of pan-tilt cameras; if so, merging the determined candidate target areas so that the number of candidate target areas after merging is not greater than the number of pan-tilt cameras; and taking the merged candidate target areas as the target areas.
  • the above-mentioned merging of the determined candidate target areas may include: pre-merging every two candidate target areas to obtain pre-merged candidate target areas; determining the area of each pre-merged candidate target area; selecting the pre-merged candidate target area with the smallest area; and merging the two candidate target areas corresponding to that smallest pre-merged candidate target area.
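  • a minimal sketch of this pre-merge-and-select step, assuming candidate areas are axis-aligned boxes and a "merge" is the smallest bounding box covering both (the patent does not fix the box representation):

```python
from itertools import combinations

# Candidate target areas as axis-aligned boxes (x1, y1, x2, y2).

def union_box(a, b):
    # Smallest box covering both regions: the "pre-merged" candidate area.
    return (min(a[0], b[0]), min(a[1], b[1]), max(a[2], b[2]), max(a[3], b[3]))

def box_area(r):
    return (r[2] - r[0]) * (r[3] - r[1])

def merge_smallest_pair(regions):
    """Pre-merge every pair of candidate areas, then actually merge the pair
    whose pre-merged bounding box has the smallest area."""
    i, j = min(combinations(range(len(regions)), 2),
               key=lambda p: box_area(union_box(regions[p[0]], regions[p[1]])))
    merged = union_box(regions[i], regions[j])
    return [r for k, r in enumerate(regions) if k not in (i, j)] + [merged]
```

Minimizing the merged area keeps the pan-tilt camera's required field of view as tight as possible after each merge.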
  • the above-mentioned merging of at least two candidate target areas may include: determining the area of each candidate target area; selecting the candidate target area with the smallest area; determining the first distance between that smallest candidate target area and each of the other candidate target areas; selecting the candidate target area at the closest first distance; pre-merging the smallest candidate target area with the candidate target area at the closest first distance to obtain a pre-merged candidate target area; determining the area of the pre-merged candidate target area and comparing it with a preset area threshold; and, if the area of the pre-merged candidate target area is less than or equal to the area threshold, merging the candidate target area with the smallest area and the candidate target area at the closest first distance.
  • the above-mentioned merging of at least two candidate target areas may include: if the area of the pre-merged candidate target area is greater than the area threshold, pre-merging the candidate target area with the smallest area with the candidate target areas other than the one at the closest first distance, in order of first distance from near to far; and, when the area of any pre-merged candidate target area is less than or equal to the area threshold, merging the two candidate target areas of that pre-merge.
  • the above-mentioned merging of at least two candidate target areas may further include: when the areas of all pre-merged candidate target areas are greater than the area threshold, selecting candidate target areas other than the smallest one in order of area from small to large; determining the second distance between the selected candidate target area and each of the other candidate target areas; pre-merging the selected candidate target area with each of the other candidate target areas in order of second distance from near to far; and, when the area of any pre-merged candidate target area is less than or equal to the area threshold, merging the two candidate target areas of that pre-merge.
  • the above-mentioned merging of at least two candidate target areas may include: determining the distance between every two candidate target areas; and merging candidate target areas in order of distance from smallest to largest, until the number of candidate target areas after merging is not greater than the number of pan-tilt cameras.
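  • the distance-ordered variant above can be sketched as a greedy loop. As before, boxes, center-to-center distance, and bounding-box merging are illustrative assumptions, not taken from the patent:

```python
from itertools import combinations

def union_box(a, b):
    # Smallest (x1, y1, x2, y2) box covering both candidate areas.
    return (min(a[0], b[0]), min(a[1], b[1]), max(a[2], b[2]), max(a[3], b[3]))

def center_dist(a, b):
    # Euclidean distance between box centers.
    ax, ay = (a[0] + a[2]) / 2, (a[1] + a[3]) / 2
    bx, by = (b[0] + b[2]) / 2, (b[1] + b[3]) / 2
    return ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5

def merge_until(regions, num_cameras):
    """Repeatedly merge the two closest candidate areas until the count does
    not exceed the number of pan-tilt cameras."""
    regions = list(regions)
    while len(regions) > num_cameras:
        i, j = min(combinations(range(len(regions)), 2),
                   key=lambda p: center_dist(regions[p[0]], regions[p[1]]))
        merged = union_box(regions[i], regions[j])
        regions = [r for k, r in enumerate(regions) if k not in (i, j)]
        regions.append(merged)
    return regions
```

Each iteration removes one region, so the loop always terminates with at most one region per camera.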
  • determining the target area to be monitored from the acquired images of the surveillance scene may include: determining the moving object and the area where the moving object is located according to consecutive frames of images; and determining the target area according to the area where the moving object is located.
  • determining the target area based on the area where the moving object is located may include: determining whether the number of areas where moving objects are located is greater than the number of pan-tilt cameras; if so, determining the motion amplitude of each moving object; ranking the areas where the moving objects are located from high to low priority according to motion amplitude; and selecting, in order of priority from high to low, as many areas as there are pan-tilt cameras to serve as the target areas.
  • alternatively: determining the motion amplitude of each moving object; determining, according to motion amplitude, the areas of candidate moving objects whose motion amplitude is greater than a preset motion threshold; determining whether the number of areas where the candidate moving objects are located is greater than the number of pan-tilt cameras; if so, merging those areas so that their number after merging is the same as the number of pan-tilt cameras; and taking the areas where the merged candidate moving objects are located as the target areas.
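  • the priority-ranking branch above reduces to a short selection routine. The amplitude values and region handles are placeholders; how motion amplitude is measured is left open by the text:

```python
def pick_by_motion(regions, amplitudes, num_cameras):
    """Rank the areas where moving objects are located by motion amplitude,
    from high to low, and keep at most one area per available camera."""
    ranked = sorted(zip(amplitudes, regions), key=lambda t: t[0], reverse=True)
    return [region for _, region in ranked[:num_cameras]]
```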
  • a wide-angle camera is installed in the above-mentioned surveillance scene; determining the target area to be monitored from the acquired image of the surveillance scene may include: dividing the image into a first area and a second area according to the sharpness of the image captured by the wide-angle camera, where in the first area the sharpness of the image reaches a preset sharpness threshold; and taking the second area as the target area.
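  • one way to sketch this sharpness split is per-block scoring. The patent only requires sharpness reaching a preset threshold; the variance-of-Laplacian score, block size, and threshold below are illustrative assumptions:

```python
import numpy as np

def split_by_sharpness(gray, block=32, thresh=100.0):
    """Score each block of the wide-angle image with the variance of a
    4-neighbour Laplacian response; blocks at or above the threshold form the
    first (sharp) area, and the remaining blocks form the second area that is
    handed to the pan-tilt camera as the target area."""
    h, w = gray.shape
    sharp, blurry = [], []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            b = gray[y:y + block, x:x + block].astype(np.float64)
            # 4-neighbour Laplacian via shifted differences (no SciPy needed).
            lap = (-4 * b[1:-1, 1:-1] + b[:-2, 1:-1] + b[2:, 1:-1]
                   + b[1:-1, :-2] + b[1:-1, 2:])
            (sharp if lap.var() >= thresh else blurry).append((x, y))
    return sharp, blurry
```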
  • This application also provides a monitoring device, which may include:
  • the area determination module is configured to determine the target area to be monitored from the acquired image of the monitoring scene
  • the parameter determination module is configured to determine the target shooting posture and the target shooting focal length according to the target area
  • the control module is configured to control the pan-tilt camera to shoot according to the target shooting posture and the target shooting focal length.
  • the present application also provides an electronic device, which may include a memory, a processor, and a computer program stored on the memory and capable of running on the processor, and the processor implements the foregoing monitoring method when the program is executed.
  • the present application also provides a non-transitory computer-readable storage medium that can store computer instructions, and the computer instructions are used to make the computer execute the above-mentioned monitoring method.
  • This application also provides a monitoring system, which may include: a wide-angle camera, an electronic device, and a pan-tilt camera; wherein:
  • the wide-angle camera is configured to collect images of the surveillance scene;
  • the electronic device is configured to determine the target area to be monitored from the acquired image of the monitoring scene; determine the target shooting posture and the target shooting focal length according to the target area; and control the pan-tilt camera to follow the target shooting posture and The target shooting focal length is used for shooting.
  • This application also provides another monitoring system, which may include: electronic equipment and a pan-tilt camera; wherein: the pan-tilt camera is configured to collect images of a monitoring scene;
  • the electronic device is configured to determine the target area to be monitored from the acquired image of the monitoring scene; determine the target shooting posture and the target shooting focal length according to the target area; and control the pan-tilt camera to follow the target shooting posture and The target shooting focal length is used for shooting.
  • the monitoring method, device, system, electronic equipment, and storage medium of the present application determine the target area to be monitored from the acquired image of the monitoring scene, determine the target shooting posture and target shooting focal length according to the target area, and control the pan-tilt camera to shoot according to the target shooting posture and target shooting focal length.
  • This application utilizes the characteristics of adjustable shooting posture and shooting focal length of the pan-tilt camera, which can realize the monitoring of any object in the monitoring scene and guarantee the shooting effect.
  • Fig. 1 is a schematic diagram of application scenarios involved in the monitoring method according to the present application
  • Fig. 2 is a schematic flowchart of an implementation of the monitoring method according to the present application.
  • Fig. 3 is a schematic diagram of an embodiment of dividing a preset area according to the present application.
  • FIG. 4 is a schematic diagram of an implementation manner of dividing a preset area according to another embodiment of the present application.
  • FIG. 5 is a schematic diagram of padding a rectangular area into a square area according to an embodiment of the present application.
  • FIG. 6 is a schematic flowchart of an embodiment of determining a target shooting posture and a target shooting focal length according to the present application
  • FIGS. 7A-7C are schematic diagrams of implementation manners of the positional relationship between the target area and the target preset area according to the present application.
  • FIGS. 8A-8B are schematic flowcharts of an implementation manner of generating a target shooting posture and a target shooting focal length according to the present application;
  • FIG. 9 is a schematic flowchart of an implementation manner of determining a target area according to the present application.
  • FIG. 10 is a schematic diagram of the positional relationship of candidate target regions to be merged according to the present application.
  • Fig. 11 is a schematic structural diagram of an embodiment of a monitoring device according to the present application.
  • the monitoring of objects in the monitoring scene is implemented by using image acquisition equipment.
  • image acquisition equipment can be installed in the classroom, and images of students in the classroom can be acquired through the image acquisition equipment for subsequent monitoring and processing.
  • the zoom lens can change the shooting range by zooming, and the picture within the shooting range is clear, but the shooting range is limited, and the entire classroom cannot be captured.
  • the pan-tilt camera can shoot any position in the classroom by adjusting the shooting posture, and adjust the shooting clarity by adjusting the shooting focus.
  • however, the pan-tilt camera cannot by itself know the location to be shot, nor determine the shooting posture and shooting focal length for a specific location.
  • this application provides a monitoring method, device, system, electronic equipment, and storage medium, which can determine the target area to be monitored from the image of the monitoring scene, determine the target shooting posture and target shooting focal length corresponding to the target area, and control the pan-tilt camera to shoot according to that target shooting posture and target shooting focal length.
  • the present application can use the pan-tilt camera to shoot any object in the monitored scene while ensuring a good shooting effect.
  • FIG. 2 is a schematic flowchart of an implementation manner of the monitoring method provided by this application.
  • the monitoring method 200 includes:
  • S201 Determine the target area to be monitored from the acquired image of the monitoring scene
  • S202 Determine the target shooting posture and the target shooting focal length according to the target area
  • S203 Control the pan-tilt camera to shoot according to the target shooting posture and the target shooting focal length.
  • the monitoring scene can be any scene that needs to be monitored, such as a school, an office building, a parking lot, a factory building, and so on.
  • an image acquisition device is installed in the surveillance scene; the image acquisition device is then used to collect video of the surveillance scene, and images are further extracted from the video for subsequent processing.
  • the target area to be monitored in the image is the area where the moving object is located.
  • S201 may include: acquiring a video collected by an image acquisition device; extracting consecutive frames of images from the video; using any moving-object detection algorithm to determine the area of the moving object in the image based on those consecutive frames; and determining the target area where the moving object is located.
  • the monitoring scene is a classroom
  • the moving objects are active students
  • the area where the active students are located in the classroom image is the target area.
  • the monitoring scene is a parking lot
  • the moving object is a driving vehicle
  • the area where the driving vehicle is located in the parking lot image is the target area.
  • the background-difference method for motion segmentation against a static scene can also be used to determine the area where the moving object is located.
  • the background image of the static scene is obtained first; the currently acquired image frame and the background image then undergo a difference operation to obtain a grayscale image of the moving target area; the grayscale image is then thresholded to extract the region where the moving object is located. Further, to avoid the influence of changes in ambient lighting, the background image may be updated according to the currently acquired image frame.
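  • the difference, threshold, and background-update steps above can be sketched with NumPy. The function name, threshold value, and running-average update rate are illustrative choices, not specified by the patent:

```python
import numpy as np

def moving_mask(frame, background, thresh=30, alpha=0.05):
    """One background-difference step: absolute grayscale difference against
    the background image, thresholding into a binary motion mask, and a
    running-average background update to absorb ambient-lighting changes."""
    # Work in a signed type so the subtraction cannot wrap around.
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    mask = (diff > thresh).astype(np.uint8)
    # Blend the current frame into the background (running average).
    new_background = ((1 - alpha) * background + alpha * frame).astype(frame.dtype)
    return mask, new_background
```

Connected regions of the mask would then be grouped into candidate target areas by any connected-component step.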
  • the target area to be monitored in the image is the area where the preset monitoring object is located.
  • S201 may include: acquiring a video collected by an image capture device; extracting an image from the video; using an image recognition algorithm to identify the monitored object in the extracted image; and determining the target area based on the area of the monitored object in the image.
  • the monitoring objects are specific people, animals, plants, license plates, etc., which are not specifically limited.
  • the image acquisition device may be any device with an image acquisition function.
  • the image acquisition device may be a camera equipped with a wide-angle lens, a zoom lens, a standard lens, or a telephoto lens.
  • the specific selection of image acquisition equipment can be configured according to actual monitoring needs.
  • the surveillance scene is a classroom, and a camera with a wide-angle lens (wide-angle camera) is installed in the classroom.
  • the wide-angle camera collects video that includes all the students in the classroom; consecutive panoramic image frames are then extracted from the video; based on these consecutive panoramic frames, the areas where the active students are located in the panoramic image are determined; and the area where an active student is located is the target area to be monitored.
  • an active student may be one performing actions such as raising a hand, turning around, or talking.
  • this solution can monitor any active student in the classroom and can be used to realize functions such as unsupervised exams and classroom activity evaluation.
  • the surveillance scene is a classroom, where at least one pan-tilt camera is installed.
  • the at least one pan-tilt camera described above collects video including all the students in the classroom; consecutive panoramic image frames are then extracted from the video; based on these consecutive panoramic frames, the areas where the active students are located in the panoramic image are determined, and the area where an active student is located is the target area to be monitored.
  • an active student may be one performing actions such as raising a hand, turning around, or speaking.
  • this solution can monitor any active student in the classroom and can be used to realize functions such as unsupervised exams and classroom activity evaluation.
  • in step S202, in order to determine the target shooting posture and the target shooting focal length according to the target area, it is first necessary to divide the image of the monitoring scene into at least two preset areas.
  • a wide-angle camera or pan-tilt camera can first be used to collect video of the surveillance scene; a panoramic image of the surveillance scene is then extracted from the collected video, and the panoramic image is divided into at least two preset areas.
  • the candidate shooting posture and the candidate shooting focal length corresponding to each preset area can be used to determine the aforementioned target shooting posture and target shooting focal length according to the relationship between the target area and each preset area.
  • the specific determination method will be described in detail later, and will not be repeated here.
  • the pan-tilt camera is then controlled to shoot according to the above-mentioned target shooting posture and target shooting focal length.
  • making full use of the pan-tilt camera's adjustable shooting posture and shooting focal length in this way enables monitoring of any object in the monitoring scene, solving the problem that the pan-tilt camera cannot by itself know the location to be shot, nor determine the shooting posture and shooting focal length for a specific location.
  • the pan-tilt camera is provided with an electronic device that executes the monitoring method of the present application.
  • the electronic device determines the target area to be monitored from the acquired image of the monitoring scene and determines the target shooting posture and the target shooting focal length according to the target area. After that, the electronic device controls the pan-tilt camera to shoot according to the target shooting posture and the target shooting focal length.
  • the pan-tilt camera includes a pan-tilt, a lens, and electronic equipment.
  • after the electronic device determines the target shooting posture and the target shooting focal length, it controls the pan-tilt to move to the shooting position according to the target shooting posture and controls the lens to adjust to the target shooting focal length, so that the pan-tilt camera shoots at that position with the adjusted focal length.
  • the pan-tilt camera can be used to shoot any object in the monitored scene, and the shooting effect is guaranteed.
  • the terminal is provided with an electronic device that executes the monitoring method of the present application, and a data connection is established between the terminal and the pan-tilt camera.
  • the electronic device determines the target area to be monitored from the acquired image of the monitoring scene, and determines the target shooting posture and the target shooting focal length according to the target area.
  • the electronic device sends a control instruction including the target shooting posture and the target shooting focal length to the pan-tilt camera.
  • the pan/tilt camera receives the control instruction, analyzes the control instruction to obtain the target shooting posture and the target shooting focal length, and then shoots according to the target shooting posture and the target shooting focal length.
  • the terminal can be a mobile terminal or a fixed terminal.
  • the mobile terminal may be a terminal with data processing functions, such as a smartphone, a tablet computer, or an iPad.
  • the fixed terminal may be a terminal with data processing functions, such as an electronic whiteboard, a desktop computer, or a server, which is not specifically limited.
  • the terminal and the PTZ camera can be connected in any data connection mode, such as wired connection, wireless connection, etc.
  • by controlling the pan-tilt camera to shoot according to the target shooting posture and the target shooting focal length, the adjustable shooting posture and shooting focal length of the pan-tilt camera can be fully utilized to monitor any object in the monitoring scene, thereby solving the problem that a PTZ camera cannot know the location to be shot and cannot determine the shooting posture and shooting focal length for a specific location.
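As a rough illustration of this control flow, the following Python sketch models the electronic device building a control instruction from a computed target shooting posture and target shooting focal length, and a pan-tilt camera applying it. All names (`PtzCommand`, `PtzCamera`, `monitor_step`) and numeric values are hypothetical stand-ins, not part of the patent.

```python
from dataclasses import dataclass

@dataclass
class PtzCommand:
    pan_deg: float    # target horizontal rotation angle
    tilt_deg: float   # target vertical rotation angle
    zoom: float       # target zoom factor (shooting focal length)

class PtzCamera:
    """Minimal stand-in for a pan-tilt camera that accepts a control instruction."""
    def __init__(self):
        self.pan_deg = 0.0
        self.tilt_deg = 0.0
        self.zoom = 1.0

    def apply(self, cmd: PtzCommand) -> None:
        # A real device would rotate the pan-tilt head and drive the lens motor here.
        self.pan_deg = cmd.pan_deg
        self.tilt_deg = cmd.tilt_deg
        self.zoom = cmd.zoom

def monitor_step(camera: PtzCamera, pan_deg: float, tilt_deg: float, zoom: float) -> PtzCommand:
    """Build the control instruction from the computed target pose and send it."""
    cmd = PtzCommand(pan_deg, tilt_deg, zoom)
    camera.apply(cmd)
    return cmd

cam = PtzCamera()
monitor_step(cam, pan_deg=30.0, tilt_deg=-10.0, zoom=2.5)
```

In the networked variant described above, `apply` would instead serialize the command and send it to the camera over the established data connection.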
  • a wide-angle camera may be used to collect video of the surveillance scene.
  • a video of a surveillance scene can be obtained from a wide-angle camera, and a panoramic image of the surveillance scene can be extracted from the video.
  • the panoramic image is divided into a number of preset areas.
  • the divided preset area may be an area of any shape such as a square area, a rectangular area, and a circular area.
  • the panoramic image is divided into a number of preset areas, and each preset area is a square area. Specifically, a number of boundary points of the image are first determined according to the panoramic image (points 1-20 in the figure); then the panoramic image is divided into a number of square areas (square areas A1-A24 in the figure) according to the boundary points.
  • the panoramic image can be divided into at least two levels of preset areas according to the size.
  • the panoramic image can be divided into a number of first-level preset regions.
  • the first-level preset area is a single square area, and the panoramic image is divided into 24 first-level preset areas A1-A24; the panoramic image can also be divided into several second-level preset areas.
  • each second-level preset area is a 2×2 block of square areas.
  • for example, the square area composed of A1, A2, A5, and A6 is a second-level preset area; the panoramic image can also be divided into several third-level preset areas, each third-level preset area being a 4×4 block of square areas.
  • the preset area can also be divided according to the seating plan.
  • the division may be incomplete.
  • the divided preset areas A21'-A24' are rectangular areas. In this case, the preset areas A21'-A24' are still regarded as square areas, and subsequent processing is also performed in the manner of a square area, as described later.
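The square division described above (a 720×1080 panorama cut into 180-pixel level-1 squares A1-A24, with 2×2 blocks forming level-2 areas) can be sketched as follows. The lower-left-origin coordinate convention matches the worked example later in the text; the function names and the handling of partial rows are illustrative assumptions.

```python
def divide_into_squares(width, height, cell):
    """Divide a panoramic image into level-1 square preset areas.

    Returns a list of (x0, y0, x1, y1) boxes, left-to-right then top-to-bottom,
    with the origin at the lower-left corner as in the patent's coordinate system.
    If width/height are not exact multiples of `cell`, the last column/row is a
    narrower rectangle that is still treated as a square area downstream.
    """
    xs = list(range(0, width, cell))
    ys = list(range(height, 0, -cell))  # top row first
    areas = []
    for y in ys:
        for x in xs:
            areas.append((x, max(y - cell, 0), min(x + cell, width), y))
    return areas

def group_level(areas, cols, k):
    """Group level-1 squares into k×k higher-level preset areas (e.g. k=2 → 2×2)."""
    rows = len(areas) // cols
    grouped = []
    for r in range(0, rows - k + 1, k):
        for c in range(0, cols - k + 1, k):
            block = [areas[(r + dr) * cols + (c + dc)] for dr in range(k) for dc in range(k)]
            x0 = min(b[0] for b in block); y0 = min(b[1] for b in block)
            x1 = max(b[2] for b in block); y1 = max(b[3] for b in block)
            grouped.append((x0, y0, x1, y1))
    return grouped

level1 = divide_into_squares(720, 1080, 180)   # 4 columns × 6 rows = 24 areas A1..A24
level2 = group_level(level1, cols=4, k=2)      # 2×2 blocks, e.g. A1+A2+A5+A6
```

With these parameters, `level1[0]` corresponds to A1 with vertices (0,1080), (180,1080), (0,900), (180,900), matching the example given later.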
  • a pan-tilt camera can be used to sequentially shoot the range in the monitored scene corresponding to each preset area, and obtain the shooting posture and shooting focal length of the range corresponding to each preset area;
  • the shooting posture and the shooting focal length of the range corresponding to each preset area are used as the candidate shooting posture and the candidate shooting focal length corresponding to each preset area.
  • the pan-tilt camera shoots the preset areas in a left-to-right, top-to-bottom order: it first shoots the range in the surveillance scene corresponding to the first-level preset area A1 and obtains the shooting posture and shooting focal length of that range; based on these, the candidate shooting posture and candidate shooting focal length of the first-level preset area A1 are determined.
  • next, the range within the surveillance scene corresponding to the first-level preset area A2 is captured, and the shooting posture and shooting focal length of that range are obtained; based on them, the candidate shooting posture and candidate shooting focal length of the first-level preset area A2 are determined. After the candidate shooting posture and candidate shooting focal length of the first-level preset area A4 are determined, the range corresponding to the first-level preset area A8 is shot, and its candidate shooting posture and candidate shooting focal length are determined; proceeding in this order, the candidate shooting posture and candidate shooting focal length of every first-level preset area are finally determined.
  • the parameters of the four corners of the monitoring scene can be determined first, and then the parameters of each preset area can be roughly determined by averaging.
  • the pan-tilt camera can be manually adjusted so that the pan-tilt camera shoots the range of the monitored scene corresponding to each preset area; and then records the corresponding shooting posture and shooting focal length respectively, In this way, candidate shooting poses and candidate shooting focal lengths corresponding to each preset area are obtained.
  • the shooting control module can be used to control the pan-tilt camera to sequentially shoot the range in the surveillance scene corresponding to each preset area in a set sequence; then the corresponding shooting posture and shooting focal length can be recorded and stored separately, and then The candidate shooting posture and candidate shooting focal length corresponding to each preset area are obtained.
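The calibration sweep performed by the shooting control module might look like the following sketch, where `shoot_area` stands in for driving the real pan-tilt camera over the range of one preset area and reading back its pose. The `fake_shoot` stand-in and all names are hypothetical.

```python
def calibrate_preset_areas(areas, shoot_area):
    """Sweep the preset areas in the set order and record the candidate shooting
    posture and candidate shooting focal length for each.

    `shoot_area(box)` is a stand-in for driving the pan-tilt camera to frame the
    range corresponding to `box` and reading back (pan_deg, tilt_deg, zoom).
    """
    candidates = {}
    for idx, box in enumerate(areas, start=1):
        pan, tilt, zoom = shoot_area(box)
        candidates[f"A{idx}"] = {"pan": pan, "tilt": tilt, "zoom": zoom}
    return candidates

# Hypothetical stand-in: derive a pose from the box centre instead of real hardware.
def fake_shoot(box):
    cx = (box[0] + box[2]) / 2
    cy = (box[1] + box[3]) / 2
    return (cx / 10.0, cy / 20.0, 3.0)

areas = [(0, 900, 180, 1080), (180, 900, 360, 1080)]  # e.g. A1 and A2
table = calibrate_preset_areas(areas, fake_shoot)
```

The resulting table plays the role of the stored candidate shooting postures and candidate shooting focal lengths that later steps look up by preset area.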
  • the rectangular areas are first padded into square areas; the PTZ camera is then used to photograph the range of the surveillance scene corresponding to each padded square area, obtaining the shooting posture and shooting focal length of that range; based on these, the candidate shooting postures and candidate shooting focal lengths of the preset areas A21'-A24' are determined.
  • FIG. 6 shows a schematic flowchart of an implementation manner of determining the target shooting posture and the target shooting focal length according to the target area described in the present application. As shown in Figure 6, the method may include:
  • S601: Determine the preset area where the target area is located as the target preset area.
  • S602: Determine the candidate shooting posture and candidate shooting focal length corresponding to the target preset area.
  • S603: Determine the target shooting posture and the target shooting focal length according to the determined candidate shooting posture and candidate shooting focal length.
  • the candidate shooting posture and the candidate shooting focal length of each preset area determined in advance may be used; then, the target shooting posture and the target shooting focal length are calculated and generated according to the positional relationship between the target area and the target preset area.
  • the shooting attitude and shooting focal length are shooting parameters of the pan-tilt camera.
  • the shooting posture of the pan/tilt camera includes the horizontal rotation angle and the vertical rotation angle.
  • the horizontal rotation angle is the rotation angle of the pan/tilt camera in the horizontal direction
  • the vertical rotation angle is the rotation angle of the pan/tilt camera in the vertical direction.
  • the horizontal rotation angle and the vertical rotation angle together determine the shooting position of the PTZ camera.
  • the shooting focal length of the pan-tilt camera may be a zoom factor.
  • the PTZ camera can adjust the focal length according to the zoom factor to adjust the clarity of the captured image.
  • in step S601, from the acquired panoramic image of the surveillance scene, the preset area where the target area is located can be determined according to the position of the target area in the panoramic image, and that preset area is taken as the target preset area.
  • the foregoing steps have divided the panoramic image into at least two levels of preset areas of different sizes.
  • a target area of any size and located at any position will fall into a certain preset area.
  • for example, the target area Z falls within the first-level preset area A1, so the preset area where the target area Z is located is the first-level preset area A1.
  • or, the target area Z falls within the second-level preset area C1, so the preset area where the target area Z is located is the second-level preset area C1; as shown in Figure 7C, the target area Z falls within the fourth-level preset area D1, so the preset area where the target area Z is located is the fourth-level preset area D1.
  • the panoramic image is divided into at least two levels of preset areas, and the candidate shooting posture and candidate shooting focal length corresponding to each preset area are determined in advance. In this way, no matter how many target areas there are or where they lie, each target area falls into some preset area, and the candidate shooting posture and candidate shooting focal length corresponding to that preset area can be used to quickly generate the shooting posture and shooting focal length of the target area, improving processing speed.
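Locating the preset area a target falls into can be sketched as a containment search from the finest level upward, so the smallest enclosing preset area wins. The rectangle representation and function names are assumptions for illustration.

```python
def contains(area, target):
    """True if rectangle `area` fully contains rectangle `target` (x0, y0, x1, y1)."""
    return (area[0] <= target[0] and area[1] <= target[1]
            and area[2] >= target[2] and area[3] >= target[3])

def find_target_preset_area(levels, target):
    """Search the preset-area levels from smallest to largest and return the
    first (smallest) preset area the target area falls into, as (level, index).

    `levels` is a list of area lists ordered from finest (level 1) to coarsest.
    """
    for level_no, areas in enumerate(levels, start=1):
        for idx, area in enumerate(areas):
            if contains(area, target):
                return level_no, idx
    return None  # target spans more than any division covers

level1 = [(0, 900, 180, 1080), (180, 900, 360, 1080)]
level2 = [(0, 720, 360, 1080)]
# A small target inside A1 resolves at level 1; a wider one falls to level 2.
hit_small = find_target_preset_area([level1, level2], (45, 948, 135, 1020))
hit_wide = find_target_preset_area([level1, level2], (45, 948, 300, 1020))
```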
  • since the candidate shooting posture and candidate shooting focal length corresponding to each preset area have been determined in advance, in step S602, once the target preset area where the target area is located is determined, the candidate shooting posture and candidate shooting focal length corresponding to that area can be determined.
  • FIG. 8A shows a schematic flowchart of an implementation manner of determining the foregoing target shooting posture according to the determined candidate shooting posture corresponding to the determined target preset area in the foregoing step S603 in the present application.
  • FIG. 8B shows a schematic flowchart of an implementation manner of determining the foregoing target shooting focal length according to the determined candidate shooting focal length corresponding to the determined target preset area in the foregoing step S603 in the present application.
  • the determination of the aforementioned target shooting posture according to the candidate shooting posture corresponding to the target preset area according to the embodiment of the present application may include:
  • S811 Select a preset area from the image of the monitoring scene as the first reference preset area; wherein, the first reference preset area is different from the target preset area;
  • S812 Determine a posture change ratio parameter according to the distance between the target preset area and the first reference preset area, the candidate shooting posture corresponding to the target preset area, and the candidate shooting posture corresponding to the first reference preset area;
  • S813 Generate the target shooting posture according to the distance between the target area and the target preset area, the candidate shooting posture corresponding to the target preset area, and the posture change ratio parameter.
  • the shooting attitude of the pan/tilt camera includes a horizontal rotation angle and a vertical rotation angle
  • the positional relationship between the first reference preset area and the target preset area depends on whether a shooting parameter in the horizontal direction or in the vertical direction is being determined. For example, if the shooting parameter in the horizontal direction (the horizontal rotation angle) is being determined, the horizontal displacement between the first reference preset area and the target preset area cannot be zero. Similarly, if the shooting parameter in the vertical direction (the vertical rotation angle) is being determined, the vertical displacement between the first reference preset area and the target preset area cannot be zero.
  • the distance between the target preset area and the first reference preset area is the horizontal distance between the target preset area and the first reference preset area ;
  • the distance between the target area and the target preset area is the horizontal distance between the target area and the target preset area.
  • the horizontal distance between the target preset area and the first reference preset area cannot be zero.
  • the distance between the target preset area and the first reference preset area is the vertical distance between the target preset area and the first reference preset area;
  • the distance between the target area and the target preset area is the vertical distance between the target area and the target preset area. Wherein, the vertical distance between the target preset area and the first reference preset area cannot be zero.
  • a preset area adjacent to the foregoing target preset area may be selected as the foregoing first reference preset area.
  • the preset areas adjacent to the target preset area refer to one of the following combinations: the right-side and lower-side preset areas of the target preset area; the right-side and upper-side preset areas; the left-side and lower-side preset areas; or the left-side and upper-side preset areas.
  • the adjacent preset areas can be uniformly selected on the right side and the lower side preset areas.
  • the adjacent preset areas can be selected from the preset area on the left and the preset area on the lower side.
  • once the adjacent preset areas are selected, the candidate shooting posture and candidate shooting focal length of those candidate preset areas are determined.
  • if the preset area where the target area is located is one of the rightmost preset areas, the adjacent preset area can be selected on its left instead.
  • one of the two alternative schemes above can likewise be used to determine the adjacent preset areas.
  • the target shooting posture may be calculated and generated according to the following formulas (1)-(2).
  • the shooting posture of the target includes the horizontal rotation angle of the target and the vertical rotation angle of the target.
  • the position information of each vertex of the target preset area needs to be used.
  • a rectangular coordinate system is introduced.
  • the lower left corner of the image of the monitoring scene can be used as the origin of the coordinate system, and the X-axis and Y-axis can be set according to the direction of the preset area division.
  • the lower left vertex (boundary point 16) of the preset area A21 can be used as the origin of the rectangular coordinate system, and the edges defined by the boundary points 17, 18, 19 and 20 can be used as the X axis of the rectangular coordinate system.
  • the edges defined by the boundary points 14, 12, 10, 8, 6, and 1 are taken as the Y axis of the rectangular coordinate system.
  • the position of each vertex of each preset area can be expressed by coordinates. For example, if it is determined that the resolution of the image information of the monitoring scene is 720*1080, the coordinates of the four vertices of the preset area A1 can be expressed as (0,1080), (180,1080), (0,900), and (180,900) in sequence.
  • the coordinates of the boundary point 5 can be expressed as (720, 1080), the coordinate point of the boundary point 16 is (0, 0), the coordinate point of the boundary point 20 is (720, 0), and so on.
  • T 1 represents the first vertex of the target area
  • T 2 represents the second vertex of the target area
  • T 3 represents the third vertex of the target area.
  • the abscissas of T 1 and T 2 are not the same, denoted as X T1 and X T2 , respectively.
  • the ordinates of T 1 and T 2 may be the same or different.
  • the ordinates of T 1 and T 3 are not the same, and are denoted as Y T1 and Y T3 , respectively.
  • the abscissas of T 1 and T 3 may be the same or different.
  • D 1 represents the first vertex of the target preset area
  • D 2 represents the second vertex of the target preset area
  • D 3 represents the third vertex of the target preset area.
  • the abscissas of D 1 and D 2 are not the same, denoted as X D1 and X D2 , respectively.
  • the ordinates of D 1 and D 2 may be the same or different.
  • the ordinates of D 1 and D 3 are not the same, and are denoted as Y D1 and Y D3 , respectively.
  • the abscissas of D 1 and D 3 may be the same or different.
  • P D be the horizontal rotation angle of the target preset area D
  • P DI be the horizontal rotation angle corresponding to the preset area adjacent to the target preset area D in the horizontal direction (in the X-axis direction), that is, the preset area DI is located to the right or left of the preset area D
  • T D be the vertical rotation angle of the target preset area D
  • T DJ be the vertical rotation angle corresponding to the preset area adjacent to the target preset area D in the vertical direction (in the Y-axis direction), that is, the preset area DJ is located on the upper or lower side of the preset area D.
  • the target horizontal rotation angle P T of the pan-tilt camera when shooting the target area can be calculated by the following formula (1):
  • P T = P DI − (X T1 + X T2 )/(X D1 + X D2 ) × (P DI − P D ) (1)
  • (X T1 +X T2 )/2 represents the abscissa of the midpoint of the target area; (X D1 +X D2 )/2 represents the abscissa of the midpoint of the target preset area. Therefore, ( X T1 +X T2 )/(X D1 +X D2 ) represents the change ratio of the horizontal rotation angle of the pan/tilt camera from the target area to the target preset area, that is, the aforementioned change ratio parameter. (P DI- P D ) represents the total amount of change of the target horizontal rotation angle of the pan/tilt camera from the target preset area D to the preset area DI.
  • (X T1 +X T2 )/(X D1 +X D2 ) ⁇ (P DI -P D ) represents the change amount of the target horizontal rotation angle of the pan/tilt camera from the target area T to the preset area DI. Therefore, through the above formula (1), the target horizontal rotation angle P T of the pan-tilt camera when shooting the target area can be obtained.
  • the vertical rotation angle T T of the target when the pan-tilt camera shoots the target area can be calculated by the following formula (2):
  • T T = T DJ + (Y T1 + Y T3 )/(Y D1 + Y D3 ) × (T D − T DJ ) (2)
  • (Y T1 +Y T3 )/2 represents the ordinate of the midpoint of the target area; (Y D1 +Y D3 )/2 represents the ordinate of the midpoint of the target preset area. Therefore, ( Y T1 +Y T3 )/(Y D1 +Y D3 ) represents the change ratio of the vertical rotation angle of the pan/tilt camera from the target area to the target preset area, that is, the aforementioned change ratio parameter. (T D -T DJ ) represents the total amount of change of the target vertical rotation angle of the pan/tilt camera from the preset area DJ to the target preset area D.
  • (Y T1 +Y T3 )/(Y D1 +Y D3 ) ⁇ (T D -T DJ ) represents the change amount of the target vertical rotation angle of the pan/tilt camera from the preset area DJ to the target area T. Therefore, the target vertical rotation angle T T of the pan-tilt camera when shooting the target area can be obtained by the above formula (2).
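Formulas (1) and (2) can be combined into one small routine. The vertical formula is taken directly from the text; the horizontal formula is reconstructed from the surrounding explanation (the equation itself is not legible in this copy), so treat it as an assumption. The candidate angles used in the call are hypothetical; the vertex coordinates are those of target area Z and preset area A1 from the worked example.

```python
def target_pose(xt1, xt2, yt1, yt3, xd1, xd2, yd1, yd3, p_d, p_di, t_d, t_dj):
    """Interpolate the target shooting posture from the candidate postures.

    p_d/t_d:   candidate pan/tilt of the target preset area D
    p_di/t_dj: candidate pan/tilt of the horizontally/vertically adjacent areas
    """
    # Reconstructed formula (1): ratio of midpoint abscissas scales the pan change.
    p_t = p_di - (xt1 + xt2) / (xd1 + xd2) * (p_di - p_d)
    # Formula (2) as given: ratio of midpoint ordinates scales the tilt change.
    t_t = t_dj + (yt1 + yt3) / (yd1 + yd3) * (t_d - t_dj)
    return p_t, t_t

# Target area Z vertices (45,1020), (135,1020), (45,948) inside preset area A1
# with vertices at x=0/180 and y=1080/900; candidate angles are hypothetical.
p_t, t_t = target_pose(45, 135, 1020, 948, 0, 180, 1080, 900,
                       p_d=10.0, p_di=20.0, t_d=5.0, t_dj=15.0)
```

Note the sanity property: when the target area's midpoint coincides with the preset area's midpoint (both ratios equal 1), the interpolation returns exactly the candidate posture of the preset area.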
  • the method for determining the above-mentioned target shooting focal length according to the candidate shooting focal length corresponding to the target preset area may include:
  • S821 Determine the second reference preset area from the image of the monitoring scene; wherein the second reference preset area includes the target preset area and the area of the second reference preset area is larger than the target preset area, or the target preset area Including a second reference preset area and the area of the target preset area is larger than the second reference preset area;
  • S822 Obtain a focal length change ratio parameter according to the area ratio between the target preset area and the second reference preset area, the candidate shooting focal length corresponding to the target preset area, and the candidate shooting focal length of the second reference preset area;
  • S823 Generate the target shooting focal length according to the area ratio between the target area and the target preset area, the candidate shooting focal length corresponding to the target preset area, and the focal length change ratio parameter.
  • the panoramic image may be divided into at least two levels of preset areas by size; a preset area one or more levels away from the preset area where the target area is located is determined as the second reference preset area. The candidate shooting focal length corresponding to the second reference preset area is then determined, and the shooting focal length corresponding to the target area is calculated from the candidate shooting focal length corresponding to the target preset area and the candidate shooting focal length corresponding to the second reference preset area.
  • the target shooting focal length may be calculated according to formula (3); wherein, the target shooting focal length may be a zoom factor.
  • the target area is represented by T
  • the target preset area is represented by D
  • Z D is the zoom factor corresponding to the target preset area
  • Z M is the zoom factor corresponding to the second reference preset area.
  • the target zoom factor Z T can be calculated by the following formula (3):
  • Z T = Z D − Z a3 × (Z M − Z D ) (3)
  • where Z a1 is the change ratio parameter in the horizontal direction, Z a2 is the change ratio parameter in the vertical direction, and the value of Z a3 is the larger of Z a1 and Z a2 .
  • the above-mentioned Z a3 can be regarded as the above-mentioned focal length change ratio parameter; (Z M -Z D ) is the total amount of focal length change of the pan/tilt camera from the target preset area D to the second reference preset area; Z a3 ⁇ (Z M -Z D ) is the change in focal length of the pan/tilt camera from the target area T to the target preset area D. Therefore, the target shooting focal length of the pan-tilt camera when shooting the target area can be obtained by the above formula (3).
  • 20 boundary points of the panoramic image are determined according to the panoramic image, and the panoramic image is divided into 24 first-level preset regions A1-A24.
  • Use the pan-tilt camera to shoot the range in the surveillance scene corresponding to each level of preset area; then, based on the shooting posture and shooting focal length of the range corresponding to each level of preset area, determine the range of each level of preset area Candidate shooting pose and candidate shooting focal length.
  • the coordinates of each boundary point can be determined.
  • the resolution of the panoramic image information is 720*1080
  • the coordinates of boundary point 1 are (0,1080)
  • the coordinates of boundary point 5 are (720,1080)
  • the coordinates of boundary point 16 are ( 0,0)
  • the coordinate point of the boundary point 20 is (720,0);
  • the coordinates of its four vertices are A 11 (0,1080), A 12 (180,1080), A 13 (0,900), A 14 (180,900);
  • the coordinates of the three vertices are Z 11 (45,1020), Z 12 (135,1020), Z 13 (45,948).
  • the maximum horizontal rotation angle, the maximum vertical rotation angle, and the maximum zoom factor of the pan/tilt camera are known conditions.
  • after the horizontal rotation angle, vertical rotation angle, and zoom factor of the target area are calculated, the pan/tilt camera can be controlled to shoot according to them, so that the pan/tilt camera captures the range of the monitoring scene corresponding to the target area.
  • in the process of determining the target area from the acquired image of the surveillance scene, the number of initially determined target areas may be greater than the number of pan-tilt cameras.
  • the preliminarily determined target areas are merged to determine the final target area to be monitored, so that the number of finally determined target areas to be monitored is less than or equal to the number of pan-tilt cameras.
  • FIG. 9 shows a schematic flowchart of an implementation manner of determining a target area to be monitored from an image of a monitoring scene acquired according to the present application.
  • determining the target area to be monitored from the acquired image of the monitoring scene may include:
  • S901 Determine at least one candidate target area from the image of the surveillance scene.
  • S902 Determine whether the number of candidate target areas is greater than the number of pan-tilt cameras, and when the number of candidate target areas is greater than the number of pan-tilt cameras, execute S903; otherwise, execute S905.
  • the number of pan-tilt cameras set is at least one.
  • S903 Perform merging processing on at least two candidate target areas, so that the number of candidate target areas after merging processing is the same as the number of pan-tilt cameras.
  • S904 Use the candidate target area after the merging process as the target area.
  • the aforementioned candidate target area is an area that needs attention determined according to an image of a surveillance scene, that is, an area that needs to be monitored by a pan-tilt camera.
  • image recognition processing technology can be used to determine multiple candidate target areas that need to be monitored by the pan-tilt camera through the various methods described above.
  • a separate pan/tilt camera is required to monitor each candidate target area. Therefore, when the number of pan/tilt cameras is limited, it may be necessary to merge some candidate target areas from multiple candidate target areas. To determine the target area to be monitored by each PTZ camera.
  • the candidate target area can be used as the target area, and the corresponding relationship between the target area and the pan-tilt camera can be established. Then, for each target area, Determine the target shooting posture and target shooting focal length respectively, and control the corresponding PTZ camera to shoot according to the above-mentioned target shooting posture and target shooting focal length.
  • another situation is that, if the number of candidate target areas is greater than the number of PTZ cameras, at least two of the candidate target areas need to be merged so that the number of candidate target areas after merging equals the number of pan-tilt cameras; the merged candidate target areas are then used as target areas, and the correspondence between target areas and PTZ cameras is established; finally, for each target area, the target shooting posture and target shooting focal length are determined, and the corresponding PTZ camera is controlled to shoot according to them.
  • At least two candidate target regions may be merged based on the distance between the candidate target regions, and S903 includes:
  • the candidate target areas are merged in the order of the above distance from small to large, until the number of candidate target areas after merging is the same as the number of pan-tilt cameras.
  • the distances between the two candidate target areas are determined respectively; then, the two candidate target areas with the closest distance are selected from them, and the two are merged into one candidate target area.
  • a situation that may arise is that, after the pairwise distances between candidate target areas are determined, multiple groups of candidate target areas are equally closest. That is, there are at least two groups of candidate target areas, each group containing two candidate target areas to be merged, and the distances between the two candidate target areas to be merged in each group are equal. In that case, the group of candidate target areas to merge is selected according to the merge priority. Wherein, the merge priority is:
  • the first priority: the sizes of the two candidate target areas to be merged are both smaller than the preset area threshold;
  • the second priority: the size of one candidate target area to be merged is greater than the area threshold, and the size of the other candidate target area to be merged is smaller than the area threshold;
  • the third priority: the sizes of the two candidate target areas to be merged are both greater than the area threshold.
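The merge-priority rule for tied pairs can be sketched as follows; the rectangle encoding and function names are illustrative assumptions.

```python
def rect_area(r):
    """Area of a candidate target area encoded as (x0, y0, x1, y1)."""
    x0, y0, x1, y1 = r
    return (x1 - x0) * (y1 - y0)

def merge_priority(a, b, area_threshold):
    """Rank a pair of candidate target areas for merging (0 = highest priority).

    First priority:  both areas are smaller than the preset area threshold.
    Second priority: exactly one area is smaller than the threshold.
    Third priority:  neither area is smaller than the threshold.
    """
    below = sum(1 for r in (a, b) if rect_area(r) < area_threshold)
    return 2 - below

def pick_pair(tied_pairs, area_threshold):
    """Among pairs tied at the minimum distance, pick the highest-priority pair."""
    return min(tied_pairs, key=lambda p: merge_priority(p[0], p[1], area_threshold))

# Two pairs tied at the same (minimum) distance; the small-small pair wins.
tied = [((0, 0, 40, 40), (50, 0, 90, 40)),    # both areas 1600 >= threshold
        ((0, 50, 10, 60), (20, 50, 30, 60))]  # both areas 100 < threshold
best = pick_pair(tied, area_threshold=500)
```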
  • for example, the candidate target areas Z1, Z2, Z3, and Z4 are determined from the image. If the number of pan-tilt cameras is three, the candidate target areas need to be merged so that three candidate target areas remain. First, the pairwise distances between candidate target areas are determined: the distance between Z1 and Z2 and the distance between Z3 and Z4 are both d0, the distance between Z2 and Z4 is d1, the distance between Z2 and Z3 is d2, and the distance between Z1 and Z4 is d3; among d0, d1, d2, and d3, d0 is the minimum. Since the pairs (Z1, Z2) and (Z3, Z4) are tied at distance d0, the sizes of the candidate target areas are further examined: the sizes of Z1 and Z2 are both smaller than the set area threshold, so according to the merge priority, Z1 and Z2 are merged into one candidate target area.
  • the distance between two candidate target areas can be the distance between their closest edges, the distance between their farthest edges, or the distance between their farthest vertices, etc.; it is not specifically limited, as long as the basis for calculating the distance is consistent.
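One of the admissible distance bases, the nearest-edge distance, can be computed like this; the function name and rectangle encoding are assumptions for illustration.

```python
import math

def nearest_edge_distance(a, b):
    """Distance between the closest edges of two axis-aligned candidate target
    areas given as (x0, y0, x1, y1); zero when the areas touch or overlap.
    Any basis works as long as it is applied consistently -- this sketch uses
    the nearest-edge convention.
    """
    dx = max(a[0] - b[2], b[0] - a[2], 0)  # horizontal gap, if any
    dy = max(a[1] - b[3], b[1] - a[3], 0)  # vertical gap, if any
    return math.hypot(dx, dy)

d_apart = nearest_edge_distance((0, 0, 10, 10), (20, 0, 30, 10))   # gap of 10 in x
d_overlap = nearest_edge_distance((0, 0, 10, 10), (5, 5, 15, 15))  # overlapping
```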
  • At least two candidate target areas may be merged based on the area of the candidate target area
  • S903 may specifically include: pre-merging any two of the candidate target areas to obtain pre-merged candidate target areas; determining the area of each pre-merged candidate target area; selecting the pre-merged candidate target area with the smallest area; and merging the two candidate target areas corresponding to the pre-merged candidate target area with the smallest area.
  • the determined candidate target regions may be merged based on the area of the candidate target regions and the distance between the candidate target regions.
  • S903 may specifically include: determining the area of each candidate target area; selecting the candidate target area with the smallest area; determining the first distance between the candidate target area with the smallest area and each other candidate target area; selecting the candidate target area with the closest first distance; and merging the candidate target area with the smallest area with the candidate target area at the closest first distance.
  • at least two candidate target areas may be merged based on the area of the candidate target areas, the distance between candidate target areas, and a preset area threshold
  • S903 may specifically include: determining the area of each candidate target area; selecting the candidate target area with the smallest area; determining the first distance between it and each of the other candidate target areas; selecting the candidate target area with the closest first distance; pre-merging the candidate target area with the smallest area and the candidate target area at the closest first distance to obtain a pre-merged candidate target area; determining the area of the pre-merged candidate target area and comparing it with a preset area threshold; if the area of the pre-merged candidate target area is less than or equal to the area threshold, merging the candidate target area with the smallest area and the candidate target area at the closest first distance; if the area of the pre-merged candidate target area is greater than the area threshold, selecting, in order of area from small to large, the candidate target areas other than the one with the smallest area; for each selected candidate target area, determining the second distance between it and each of the other candidate target areas, and pre-merging it with the others in order of second distance from near to far; when the area of any pre-merged candidate target area is less than or equal to the area threshold, merging the two candidate target areas of that pre-merge.
  • S101 may include:
  • Determine the motion amplitude of the moving objects; sort the priorities of the areas where the moving objects are located in descending order of motion amplitude; and, according to the priorities and the number of PTZ cameras, select areas where moving objects are located as the target areas; or
  • determine the motion amplitude of the moving objects; according to the motion amplitudes, determine the areas of candidate moving objects whose motion amplitude is greater than a preset motion threshold; judge whether the number of candidate moving objects is greater than the number of pan-tilt cameras; if so, merge the areas where the candidate moving objects are located so that the number of merged areas equals the number of pan-tilt cameras; and take the merged areas where the candidate moving objects are located as the target areas.
  • the monitored object is a moving object
  • the area of the moving object in the panoramic image is the target area.
  • One way is to first rank the priorities of the areas where the moving objects are located in descending order of motion amplitude, and then, according to the number of pan-tilt cameras, select a corresponding number of areas where moving objects are located as the target areas.
  • Another way is to set a motion threshold. First, the areas of candidate moving objects whose motion amplitude is greater than the motion threshold are determined; if the number of such areas is less than or equal to the number of pan-tilt cameras, those areas are taken as the target areas; if the number is greater than the number of pan-tilt cameras, the areas of at least two candidate moving objects are merged so that the number of merged areas equals the number of pan-tilt cameras.
  • the method of merging processing can refer to the method of merging processing described in the foregoing embodiment, which will not be repeated here.
  • a wide-angle camera can be used to monitor the scene.
  • S101 includes:
  • the wide-angle camera and PTZ camera can be used to monitor the monitoring scene at the same time.
  • the image is divided into a first area and a second area; the first area is determined according to the clear shooting range of the wide-angle camera; the second area is taken as the target area; and the pan-tilt camera is used to shoot the range of the surveillance scene corresponding to the target area.
  • the clear shooting range of the wide-angle camera refers to: within the clear shooting range, the sharpness of the image shot by the wide-angle camera can reach a preset sharpness threshold.
  • a wide-angle camera is installed above the blackboard in the front of the classroom.
  • the wide-angle camera is assigned to shoot the front area of the classroom (such as the first three rows of desks), and the pan-tilt camera is used to shoot the rear area of the classroom.
  • the wide-angle camera and the pan-tilt camera can cooperate to achieve full coverage and good image clarity.
  • FIG. 11 is a schematic diagram of the device structure of an embodiment of the application, and the monitoring device includes:
  • the area determining module 1102 is configured to determine the target area to be monitored from the acquired image of the monitoring scene;
  • the parameter determination module 1104 is configured to determine the target shooting posture and the target shooting focal length according to the target area;
  • the control module 1106 is configured to control the pan-tilt camera to shoot according to the target shooting posture and the target shooting focal length.
  • the area determination module 1102 includes:
  • the acquiring unit is configured to acquire a panoramic image of the surveillance scene
  • the determining unit is configured to determine the target area according to the panoramic image.
  • the parameter determination module 1104 includes:
  • the dividing unit is configured to divide the panoramic image into at least two preset areas
  • the candidate parameter determining unit is configured to determine the candidate shooting posture and the candidate shooting focal length corresponding to each of the preset regions;
  • the target preset area determining unit is configured to determine the preset area where the target area is located, as the target preset area;
  • the first parameter determining unit is configured to determine the first candidate shooting posture and the first candidate shooting focal length corresponding to the target preset area
  • the parameter generation unit is configured to generate the target shooting posture and the target shooting focal length according to the first candidate shooting posture and the first candidate shooting focal length.
  • the parameter generation unit includes:
  • the first reference area determining subunit is configured to select a preset area from the image of the monitoring scene as a first reference preset area; the first reference preset area is different from the target preset area;
  • the posture change ratio parameter determination subunit is configured to determine the posture change ratio parameter based on the distance between the target preset area and the first reference preset area, the candidate shooting posture corresponding to the target preset area, and the candidate shooting posture corresponding to the first reference preset area;
  • the first calculation subunit is configured to generate the target shooting posture based on the distance between the target area and the target preset area, the candidate shooting posture corresponding to the target preset area, and the posture change ratio parameter.
  • the parameter generation unit further includes:
  • the second reference area determining subunit is configured to determine a second reference preset area from the image of the monitoring scene; either the second reference preset area includes the target preset area and its area is larger than that of the target preset area, or the target preset area includes the second reference preset area and its area is larger than that of the second reference preset area;
  • the focal length change ratio determining subunit is configured to obtain the focal length change ratio parameter based on the area ratio between the target preset area and the second reference preset area, the candidate shooting focal length corresponding to the target preset area, and the candidate shooting focal length of the second reference preset area;
  • the second calculation subunit is configured to generate the target shooting focal length according to the area ratio between the target area and the target preset area, the candidate shooting focal length corresponding to the target preset area, and the focal length change ratio parameter.
  • the area determining module 1102 further includes:
  • the area quantity determining unit is configured to determine at least one candidate target area from the image
  • the judging unit is configured to judge whether the number of candidate target areas is greater than the number of pan-tilt cameras
  • the merging unit is configured to, if the judgment result of the judging unit is yes, perform merging processing on at least two candidate target areas, so that the number of candidate target areas after merging is the same as the number of pan-tilt cameras;
  • the target area determining unit is configured to use the candidate target area after the merging process as the target area.
  • the merging unit includes:
  • the pre-merging module is configured to pre-merge any two of the candidate target areas to obtain pre-merged candidate target areas;
  • An area determining module configured to determine the area of the candidate target area after the pre-merging
  • the selection module is configured to select the candidate target area after the pre-merging with the smallest area
  • the merging module is configured to merge the two candidate target regions corresponding to the pre-merged candidate target region with the smallest area.
  • the merging unit includes:
  • the target area area determination module is configured to determine the area of each candidate target area
  • the selection module is configured to select the candidate target area with the smallest area
  • a distance determining module configured to determine the first distance between the candidate target area with the smallest area and each of the other candidate target areas
  • the selection module is configured to select the candidate target area with the closest first distance
  • the merging module is configured to merge the candidate target area with the smallest area and the candidate target area with the closest first distance.
  • the merging unit includes:
  • the target area area determination module is configured to determine the area of each candidate target area
  • the selection module is configured to select the candidate target area with the smallest area
  • a distance determining module configured to determine the first distance between the candidate target area with the smallest area and each of the other candidate target areas
  • the selection module is configured to select the candidate target area with the closest first distance
  • a pre-merging module configured to perform pre-merging processing on the candidate target area with the smallest area and the candidate target area with the closest first distance to obtain a candidate target area after pre-merging processing
  • An area determining module configured to determine the area of the candidate target area after the pre-merging process
  • a comparison module configured to compare the area of the candidate target area after the pre-merging process with a preset area threshold
  • the merging module is configured to: if the area of the pre-merged candidate target area is less than or equal to the area threshold, merge the candidate target area with the smallest area and the candidate target area with the closest first distance; and, if the area of the pre-merged candidate target area is greater than the area threshold, pre-merge the candidate target area with the smallest area with the other candidate target areas (excluding the one at the closest first distance) in order of first distance from near to far, and, when the area of any pre-merged candidate target area is less than or equal to the area threshold, merge the two candidate target areas of that pre-merge.
  • the merging unit includes:
  • the distance determining module is configured to determine the distance between two candidate target regions
  • the merging module is configured to sequentially merge the candidate target regions in the descending order of the distance until the number of candidate target regions after the merging process is not greater than the number of the pan-tilt cameras.
  • the determining unit includes:
  • the motion area determining subunit is configured to determine the area where the moving object is located according to the panoramic images of multiple consecutive frames
  • the target area determining subunit is configured to determine the target area based on the area where the moving object is located.
  • the target area determining subunit is configured to:
  • Determine the motion amplitude of the moving objects; sort the priorities of the areas where the moving objects are located in descending order of motion amplitude; and, according to the priorities and the number of PTZ cameras, select areas where moving objects are located as the target areas; or
  • determine the motion amplitude of the moving objects; according to the motion amplitudes, determine the areas of candidate moving objects whose motion amplitude is greater than a preset motion threshold; judge whether the number of candidate moving objects is greater than the number of pan-tilt cameras; if so, merge the areas where the candidate moving objects are located so that the number of merged areas equals the number of pan-tilt cameras; and take the merged areas where the candidate moving objects are located as the target areas.
  • the area determining module 1102 further includes:
  • Threshold judging unit configured to judge whether the number of candidate target regions is greater than a set threshold for the number of regions
  • the area dividing unit is configured to, when the judgment result of the threshold judging unit is yes, divide the image into a first area and a second area according to the sharpness of the images taken by the wide-angle camera; wherein, in the first area, the sharpness of the images taken by the wide-angle camera reaches the preset sharpness threshold;
  • the wide-angle control unit is configured to control the wide-angle camera to shoot
  • the target area determining unit is configured to use the second area as the target area.
  • the device in the foregoing embodiment is configured to implement the corresponding method in the foregoing embodiment, and has the beneficial effects of the corresponding method embodiment, which will not be repeated here.
  • an electronic device including a memory, a processor, and a computer program stored in the memory and capable of running on the processor, and the processor implements any of the foregoing monitoring methods when the program is executed.
  • a non-transitory computer-readable storage medium stores computer instructions, and the computer instructions are used to make a computer execute any of the foregoing monitoring methods.
  • a monitoring system including a wide-angle camera, an electronic device, and a pan-tilt camera; wherein:
  • Wide-angle camera configured to collect images of the surveillance scene
  • the electronic device is configured to determine the target area to be monitored from the acquired image of the monitoring scene; determine the shooting posture and the shooting focal length according to the target area; and control the pan-tilt camera to shoot according to the shooting posture and the shooting focal length.
  • a monitoring system including electronic equipment and a pan-tilt camera; wherein:
  • PTZ camera configured to collect images of the surveillance scene
  • the electronic device is configured to determine the target area to be monitored from the acquired image of the monitoring scene; determine the shooting posture and the shooting focal length according to the target area; and control the pan-tilt camera to shoot according to the shooting posture and the shooting focal length.


Abstract

This application discloses a monitoring method, apparatus, system, electronic device, and storage medium. The method includes: determining a target area to be monitored from an acquired image of a monitoring scene; determining a target shooting posture and a target shooting focal length according to the target area; and controlling a pan-tilt camera to shoot according to the target shooting posture and the target shooting focal length. With this application, a pan-tilt camera can be used to monitor any object within the monitoring scene with a good shooting effect.

Description

Monitoring method, apparatus, system, electronic device, and storage medium
Cross-reference to related applications
This application claims priority to Chinese patent application No. 201911340725.4, filed on December 23, 2019 and entitled "Monitoring method, apparatus, system, electronic device, and storage medium", the entire content of which is incorporated herein by reference.
Technical field
This application relates to the field of monitoring technology, and in particular to a monitoring method, apparatus, system, electronic device, and storage medium.
Background
At present, a monitoring scene is generally monitored using an image acquisition device with a wide-angle lens or a zoom lens. A wide-angle lens has a wide shooting angle and a short focal length and can capture all objects in the scene, but objects far from the lens are rendered with low sharpness. A zoom lens has a narrow shooting angle and an adjustable focal length, so the sharpness of objects within its shooting range can be improved by adjusting the focal length, but its limited shooting angle cannot cover all objects in the scene.
Summary
In view of this, the purpose of this application is to provide a monitoring method, apparatus, system, electronic device, and storage medium capable of capturing any object in the monitoring scene with a clear image.
Based on the above purpose, this application provides a monitoring method, which may include: determining a target area to be monitored from an acquired image of a monitoring scene; determining a target shooting posture and a target shooting focal length according to the target area; and controlling a pan-tilt camera to shoot according to the target shooting posture and the target shooting focal length.
In an embodiment of this application, determining the target shooting posture and the target shooting focal length according to the target area may include: determining the preset area in which the target area is located as a target preset area, where the preset areas are divided from the image of the monitoring scene and number at least two; determining the candidate shooting posture and candidate shooting focal length corresponding to the target preset area, where the candidate shooting posture and candidate shooting focal length corresponding to a preset area are obtained by the pan-tilt camera shooting the range of the monitoring scene corresponding to that preset area; and determining the target shooting posture and the target shooting focal length according to the determined candidate shooting posture and candidate shooting focal length.
In an embodiment of this application, determining the target shooting posture and the target shooting focal length according to the determined candidate shooting posture and candidate shooting focal length may include: selecting one preset area from the image of the monitoring scene as a first reference preset area, the first reference preset area being different from the target preset area; determining a posture change ratio parameter according to the distance between the target preset area and the first reference preset area, the candidate shooting posture corresponding to the target preset area, and the candidate shooting posture corresponding to the first reference preset area; and generating the target shooting posture according to the distance between the target area and the target preset area, the candidate shooting posture corresponding to the target preset area, and the posture change ratio parameter.
In an embodiment of this application, the target shooting posture includes a horizontal rotation angle and a vertical rotation angle. For the horizontal rotation angle, the distance between the target preset area and the first reference preset area is the horizontal distance between them, and the distance between the target area and the target preset area is the horizontal distance between them, where the horizontal distance between the target preset area and the first reference preset area is non-zero. For the vertical rotation angle, the distance between the target preset area and the first reference preset area is the vertical distance between them, and the distance between the target area and the target preset area is the vertical distance between them, where the vertical distance between the target preset area and the first reference preset area is non-zero.
In an embodiment of this application, determining the target shooting posture and the target shooting focal length according to the determined candidate shooting posture and candidate shooting focal length may include: determining a second reference preset area from the image of the monitoring scene, where either the second reference preset area contains the target preset area and has a larger area than it, or the target preset area contains the second reference preset area and has a larger area than it; obtaining a focal length change ratio parameter according to the area ratio between the target preset area and the second reference preset area, the candidate shooting focal length corresponding to the target preset area, and the candidate shooting focal length of the second reference preset area; and generating the target shooting focal length according to the area ratio between the target area and the target preset area, the candidate shooting focal length corresponding to the target preset area, and the focal length change ratio parameter.
In an embodiment of this application, determining the target area to be monitored from the acquired image of the monitoring scene may include: determining at least one candidate target area from the image; judging whether the number of candidate target areas is greater than the number of pan-tilt cameras; if so, merging the determined candidate target areas so that the number of merged candidate target areas does not exceed the number of pan-tilt cameras; and taking the merged candidate target areas as the target areas.
In an embodiment of this application, merging the determined candidate target areas may include: pre-merging every pair of candidate target areas to obtain pre-merged candidate target areas; determining the area of each pre-merged candidate target area; selecting the pre-merged candidate target area with the smallest area; and merging the two candidate target areas corresponding to the pre-merged candidate target area with the smallest area.
In an embodiment of this application, merging at least two candidate target areas may include: determining the area of each candidate target area; selecting the candidate target area with the smallest area; determining the first distance between the smallest candidate target area and each of the other candidate target areas; selecting the candidate target area at the closest first distance; pre-merging the smallest candidate target area with that closest candidate target area to obtain a pre-merged candidate target area; determining the area of the pre-merged candidate target area; comparing that area with a preset area threshold; and, if the area of the pre-merged candidate target area is less than or equal to the area threshold, merging the smallest candidate target area with the closest candidate target area.
In an embodiment of this application, merging at least two candidate target areas may include: if the area of the pre-merged candidate target area is greater than the area threshold, pre-merging the smallest candidate target area with each of the other candidate target areas (excluding the one at the closest first distance) in order of first distance from near to far; and, when the area of any pre-merged candidate target area is less than or equal to the area threshold, merging the two candidate target areas of that pre-merge.
In an embodiment of this application, merging at least two candidate target areas may further include: when the areas of all pre-merged candidate target areas are greater than the area threshold, selecting the candidate target areas other than the smallest one in ascending order of area; determining the second distance between each selected candidate target area and each of the other candidate target areas; pre-merging the selected candidate target area with each of the other candidate target areas in order of second distance from near to far; and, when the area of any pre-merged candidate target area is less than or equal to the area threshold, merging the two candidate target areas of that pre-merge.
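One pass of the threshold-guarded merging cascade described above can be sketched as follows. The bounding-box representation, the center-distance metric, and all function names are illustrative assumptions, not part of the disclosure:

```python
def box_area(b):
    x1, y1, x2, y2 = b
    return (x2 - x1) * (y2 - y1)

def center_dist(a, b):
    ax, ay = (a[0] + a[2]) / 2, (a[1] + a[3]) / 2
    bx, by = (b[0] + b[2]) / 2, (b[1] + b[3]) / 2
    return ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5

def pre_merge(a, b):
    # pre-merge = bounding box enclosing both candidate areas
    return (min(a[0], b[0]), min(a[1], b[1]), max(a[2], b[2]), max(a[3], b[3]))

def merge_step(boxes, area_threshold):
    """One pass of the cascade: try the smallest box against the others in
    order of distance; fall back to the next-smallest box (ascending area)
    whenever every pre-merge exceeds the area threshold."""
    order = sorted(range(len(boxes)), key=lambda i: box_area(boxes[i]))
    for i in order:
        others = sorted((j for j in range(len(boxes)) if j != i),
                        key=lambda j: center_dist(boxes[i], boxes[j]))
        for j in others:
            merged = pre_merge(boxes[i], boxes[j])
            if box_area(merged) <= area_threshold:
                rest = [boxes[k] for k in range(len(boxes)) if k not in (i, j)]
                return rest + [merged]
    return boxes  # no pair satisfies the threshold; leave the areas unmerged
```

Repeated calls to such a step would reduce the number of candidate areas one merge at a time; the threshold keeps any merged area small enough for a single pan-tilt camera to frame sharply.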
In an embodiment of this application, merging at least two candidate target areas may include: determining the distance between every two candidate target areas; and merging candidate target areas in ascending order of distance until the number of merged candidate target areas does not exceed the number of pan-tilt cameras.
In an embodiment of this application, determining the target area to be monitored from the acquired image of the monitoring scene may include: determining moving objects and the areas where the moving objects are located according to multiple consecutive frames of the image; and determining the target area based on the areas where the moving objects are located.
In an embodiment of this application, determining the target area based on the areas where the moving objects are located may include: judging whether the number of areas where moving objects are located is greater than the number of pan-tilt cameras; if so, determining the motion amplitude of each moving object; sorting the priorities of those areas in descending order of motion amplitude; and selecting, in descending order of priority and according to the number of pan-tilt cameras, areas where moving objects are located as the target areas; or, determining the motion amplitude of each moving object; determining, according to the motion amplitudes, the areas of candidate moving objects whose motion amplitude is greater than a preset motion threshold; judging whether the number of areas of candidate moving objects is greater than the number of pan-tilt cameras; if so, merging the areas of the candidate moving objects so that the number of merged areas equals the number of pan-tilt cameras; and taking the merged areas of the candidate moving objects as the target areas.
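The priority-based selection described above reduces to ranking areas by motion amplitude and keeping at most one per camera. A minimal sketch (function and parameter names are assumptions for illustration):

```python
def select_target_areas(moving_areas, amplitudes, num_cameras):
    """Rank areas by motion amplitude (descending priority) and keep
    at most one area per available pan-tilt camera."""
    ranked = sorted(zip(moving_areas, amplitudes),
                    key=lambda pair: pair[1], reverse=True)
    return [area for area, _ in ranked[:num_cameras]]
```

For the thresholded variant, one would first filter `amplitudes` against the motion threshold and fall back to merging when the survivors still outnumber the cameras.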
In an embodiment of this application, a wide-angle camera is arranged in the monitoring scene, and determining the target area to be monitored from the acquired image of the monitoring scene may include: dividing the image into a first area and a second area according to the sharpness of the images taken by the wide-angle camera, where within the first area the sharpness of the images taken by the wide-angle camera reaches a preset sharpness threshold; and taking the second area as the target area.
This application further provides a monitoring apparatus, which may include:
an area determination module configured to determine a target area to be monitored from an acquired image of a monitoring scene;
a parameter determination module configured to determine a target shooting posture and a target shooting focal length according to the target area; and
a control module configured to control a pan-tilt camera to shoot according to the target shooting posture and the target shooting focal length.
This application further provides an electronic device, which may include a memory, a processor, and a computer program stored in the memory and runnable on the processor, where the processor implements the above monitoring method when executing the program.
This application further provides a non-transitory computer-readable storage medium, which may store computer instructions used to cause the computer to execute the above monitoring method.
This application further provides a monitoring system, which may include a wide-angle camera, an electronic device, and a pan-tilt camera, where:
the wide-angle camera is configured to capture images of the monitoring scene; and
the electronic device is configured to determine a target area to be monitored from the acquired image of the monitoring scene, determine a target shooting posture and a target shooting focal length according to the target area, and control the pan-tilt camera to shoot according to the target shooting posture and the target shooting focal length.
This application further provides another monitoring system, which may include an electronic device and a pan-tilt camera, where: the pan-tilt camera is configured to capture images of the monitoring scene; and
the electronic device is configured to determine a target area to be monitored from the acquired image of the monitoring scene, determine a target shooting posture and a target shooting focal length according to the target area, and control the pan-tilt camera to shoot according to the target shooting posture and the target shooting focal length.
As can be seen from the above, with the monitoring method, apparatus, system, electronic device, and storage medium provided by this application, a target area to be monitored is determined from an acquired image of a monitoring scene, a target shooting posture and a target shooting focal length are determined according to the target area, and a pan-tilt camera is controlled to shoot according to them. By exploiting the adjustable shooting posture and focal length of the pan-tilt camera, this application can monitor any object within the monitoring scene while guaranteeing the shooting effect.
Brief description of the drawings
In order to explain the technical solutions in the embodiments of this application or in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of this application; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
FIG. 1 is a schematic diagram of an application scenario involved in the monitoring method according to this application;
FIG. 2 is a schematic flowchart of an embodiment of the monitoring method according to this application;
FIG. 3 is a schematic diagram of an embodiment of dividing preset areas according to this application;
FIG. 4 is a schematic diagram of an embodiment of dividing preset areas according to another embodiment of this application;
FIG. 5 is a schematic diagram of supplementing a rectangular area into a square area according to an embodiment of this application;
FIG. 6 is a schematic flowchart of an embodiment of determining the target shooting posture and target shooting focal length according to this application;
FIGS. 7A-7C are schematic diagrams of embodiments of the positional relationship between the target area and the target preset area according to this application;
FIGS. 8A-8B are schematic flowcharts of embodiments of generating the target shooting posture and target shooting focal length according to this application;
FIG. 9 is a schematic flowchart of an embodiment of determining the target area according to this application;
FIG. 10 is a schematic diagram of the positional relationship of candidate target areas to be merged according to this application;
FIG. 11 is a schematic structural diagram of an embodiment of the monitoring apparatus according to this application.
Detailed description
To make the purpose, technical solutions, and advantages of this application clearer, this application is further described in detail below with reference to specific embodiments and the accompanying drawings.
It should be noted that all uses of "first" and "second" in the embodiments of this application are intended to distinguish two non-identical entities or parameters with the same name; thus, "first" and "second" are used only for convenience of expression and should not be construed as limiting the embodiments of this application, and subsequent embodiments will not repeat this explanation.
In one implementation, objects in a monitoring scene are monitored by means of an image acquisition device. As shown in FIG. 1, in a school application scenario, an image acquisition device can be installed in a classroom to capture images of the students for subsequent monitoring processing. However, choosing a suitable image acquisition device is a problem. This is because a wide-angle lens covers a wide range but renders objects far from the lens with low sharpness, while a zoom lens can change its shooting range by zooming and renders its range sharply, but the range is limited and cannot cover the whole classroom. A pan-tilt camera can shoot any position in the classroom by adjusting its shooting posture and can adjust image sharpness by adjusting its shooting focal length; however, the pan-tilt camera cannot know by itself which position to shoot, nor can it determine the shooting posture and shooting focal length for a specific position.
To solve the above problems, this application provides a monitoring method, apparatus, system, electronic device, and storage medium that can determine a target area to be monitored from the image of a monitoring scene, determine the target shooting posture and target shooting focal length corresponding to the target area, and control a pan-tilt camera to shoot according to them. In this way, this application can use a pan-tilt camera to shoot any object within the monitoring scene while guaranteeing a good shooting effect.
For ease of understanding, the monitoring method of this application is described in detail below with reference to the drawings.
FIG. 2 is a schematic flowchart of an embodiment of the monitoring method provided by this application. Referring to FIG. 2, the monitoring method 200 includes:
S201: determining a target area to be monitored from an acquired image of a monitoring scene;
S202: determining a target shooting posture and a target shooting focal length according to the target area; and
S203: controlling a pan-tilt camera to shoot according to the target shooting posture and the target shooting focal length.
In the embodiments of this application, the monitoring scene may be any scene requiring monitoring, such as a school, an office building, a parking lot, or a factory. To monitor the scene, an image acquisition device is first deployed in it; the device then captures video of the monitoring scene, from which images are further extracted for subsequent processing.
In some embodiments, in S201, the target area to be monitored in the image is the area where a moving object is located. For example, S201 may include: acquiring the video captured by the image acquisition device; extracting multiple consecutive frames from the video; determining the area of the moving object in the images from the consecutive frames using any moving-object detection algorithm; and determining the target area based on the area of the moving object. For example, if the monitoring scene is a classroom and the moving objects are active students, the areas of the active students in the classroom image are the target areas. As another example, if the monitoring scene is a parking lot and the moving objects are moving vehicles, the areas of the moving vehicles in the parking-lot image are the target areas. In some embodiments, a background-difference method that performs motion segmentation on a static scene can also be used to determine the area where a moving object is located. In some specific embodiments, a background image of the static scene is first obtained; the currently acquired frame is then differenced against the background image to obtain a grayscale map of the moving region; and the grayscale map is thresholded to extract the area of the moving object. Further, to avoid the influence of ambient lighting changes, the background image can be updated based on the currently acquired frame.
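The background-difference steps above (difference, threshold, running background update) can be sketched in a few lines. Frames are modeled as plain 2D lists of grayscale values; the threshold and update rate are illustrative assumptions:

```python
def moving_object_mask(frame, background, diff_threshold=25):
    """Background-difference method: per-pixel absolute difference against
    the static background, then thresholding to a binary motion mask."""
    return [[abs(f - b) >= diff_threshold for f, b in zip(frow, brow)]
            for frow, brow in zip(frame, background)]

def update_background(background, frame, alpha=0.05):
    """Running-average update so the background slowly absorbs ambient
    lighting changes without adopting transient moving objects."""
    return [[(1 - alpha) * b + alpha * f for f, b in zip(frow, brow)]
            for frow, brow in zip(frame, background)]
```

The connected pixels of the mask would then be grouped (e.g., by bounding boxes) into the areas where moving objects are located.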
In other embodiments, in S201, the target area to be monitored in the image is the area where a preset monitored object is located. In some examples, S201 may include: acquiring the video captured by the image acquisition device; extracting images from the video; recognizing the monitored object in the extracted images using an image recognition algorithm; and determining the target area based on the area of the monitored object in the image. The monitored object may be, without limitation, a specific person, animal, plant, license plate, and so on.
In some embodiments of this application, the image acquisition device may be any device with an image acquisition function. For example, classified by lens focal length, the image acquisition device may be a camera equipped with a wide-angle lens, a zoom lens, a standard lens, or a telephoto lens. The specific selection of the image acquisition device can be configured according to actual monitoring needs.
In one practical application, the monitoring scene is a classroom in which a camera with a wide-angle lens (a wide-angle camera) is installed. The wide-angle camera first captures video covering all students in the classroom; multiple consecutive panoramic frames are then extracted from the video; the areas of active students in the panoramic images are determined from the consecutive frames; and the areas of the active students are taken as the target areas to be monitored. For example, active students may be students raising their hands, turning their heads, talking, and so on. With the method of this solution, any active student in the classroom can be monitored, enabling functions such as unattended exam proctoring and classroom-engagement assessment.
In another practical application, the monitoring scene is a classroom in which at least one pan-tilt camera is installed. The at least one pan-tilt camera captures video covering all students in the classroom; multiple consecutive panoramic frames are then extracted from the video; the areas of active students in the panoramic images are determined from the consecutive frames, and those areas are taken as the target areas to be monitored. For example, active students may be students raising their hands, turning their heads, talking, and so on. With the method of this solution, any active student in the classroom can be monitored, enabling functions such as unattended exam proctoring and classroom-engagement assessment.
In the embodiments of this application, in step S202, in order to determine the target shooting posture and target shooting focal length according to the target area, the image of the monitoring scene first needs to be divided into at least two preset areas. In some specific examples, a wide-angle camera or pan-tilt camera first captures video of the monitoring scene; a panoramic image of the scene is then extracted from the captured video and divided into at least two preset areas. Then, for each preset area, the pan-tilt camera shoots in turn the range of the monitoring scene corresponding to that preset area, and the shooting posture and shooting focal length with which the pan-tilt camera shoots the range corresponding to each preset area are determined and taken respectively as the candidate shooting posture and candidate shooting focal length corresponding to that preset area. The methods of dividing preset areas in the image of the monitoring scene and of determining the candidate shooting posture and candidate shooting focal length corresponding to each preset area are described in detail later.
In the embodiments of this application, after the target area is determined, the target shooting posture and target shooting focal length can be determined using the candidate shooting postures and candidate shooting focal lengths corresponding to the preset areas, according to the relationship between the target area and the preset areas; the specific determination method is described in detail later.
In the embodiments of this application, after the target shooting posture and target shooting focal length are determined, in S203 the pan-tilt camera is controlled to shoot according to them.
Since the target shooting posture and target shooting focal length are determined from, and correspond to, the target area to be monitored, controlling the pan-tilt camera to shoot according to them makes full use of the camera's adjustable shooting posture and focal length, enabling monitoring of any object in the monitoring scene and solving the problem that a pan-tilt camera can neither know which position to shoot nor determine the shooting posture and focal length for a specific position.
In a possible embodiment, the pan-tilt camera contains an electronic device that executes the monitoring method of this application. The electronic device determines the target area to be monitored from the acquired image of the monitoring scene and determines the target shooting posture and target shooting focal length according to the target area. The electronic device then controls the pan-tilt camera to shoot according to them. Specifically, the pan-tilt camera includes a gimbal, a lens, and the electronic device; after determining the target shooting posture and focal length, the electronic device controls the gimbal to move to the shooting position according to the target shooting posture and controls the lens to adjust to the target shooting focal length, so that the pan-tilt camera shoots at the shooting position with the adjusted focal length. In this way, any object in the monitoring scene can be shot with a guaranteed shooting effect.
Alternatively, a terminal contains an electronic device that executes the monitoring method of this application, and a data connection is established between the terminal and the pan-tilt camera. The electronic device determines the target area to be monitored from the acquired image of the monitoring scene and determines the target shooting posture and target shooting focal length according to the target area. The electronic device then sends the pan-tilt camera a control instruction containing the target shooting posture and target shooting focal length. The pan-tilt camera receives and parses the control instruction to obtain the target shooting posture and target shooting focal length, and then shoots accordingly.
The terminal may be a mobile terminal or a fixed terminal; a mobile terminal is, for example, a smartphone, tablet computer, iPad, or other terminal with data processing capability, and a fixed terminal is, for example, an interactive whiteboard, desktop computer, server, or other terminal with data processing capability, without limitation. The terminal and the pan-tilt camera may be connected by any data connection, such as a wired or wireless connection.
In the embodiments of this application, since the target shooting posture and target shooting focal length are determined from, and correspond to, the target area to be monitored, controlling the pan-tilt camera to shoot according to them makes full use of the camera's adjustable shooting posture and focal length, enabling monitoring of any object in the monitoring scene and solving the problem that a pan-tilt camera can neither know which position to shoot nor determine the shooting posture and focal length for a specific position.
The method of dividing preset areas in the image of the monitoring scene is described in detail below with specific examples.
In some possible embodiments, a wide-angle camera can be used to capture video of the monitoring scene.
In the embodiments of this application, video of the monitoring scene can be obtained from the wide-angle camera, and a panoramic image of the monitoring scene can be extracted from the video.
Based on the extracted panoramic image, the panoramic image is divided into several preset areas. For example, the divided preset areas may be areas of any shape, such as square, rectangular, or circular areas.
As shown in FIG. 3, in one embodiment the panoramic image is divided into several preset areas, each of which is a square area. Specifically, several boundary points of the image (points 1-20 in the figure) are first determined from the panoramic image; the panoramic image is then divided into several square areas (square areas A1-A24 in the figure) according to the boundary points.
Considering that the target area may be of any size and located anywhere in the panoramic image, the panoramic image can further be divided by size into at least two levels of preset areas. In some embodiments, the panoramic image can be divided into several level-1 preset areas. As shown in FIG. 3, a level-1 preset area is a single square area, and the panoramic image is divided into 24 level-1 preset areas A1-A24. The panoramic image can also be divided into several level-2 preset areas, each consisting of 2×2 square areas; for example, the square area composed of A1, A2, A5, and A6 is a level-2 preset area. It can also be divided into several level-3 preset areas, each consisting of 4×4 square areas; for example, the square area composed of A1-A16 is a level-3 preset area. By analogy, the panoramic image can be divided into level-n preset areas of different sizes, each comprising N×N square areas, where N = 2^(n-1) and n is an integer greater than or equal to 1.
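The multi-level division can be sketched as enumerating N×N blocks of unit squares for each level n, with N = 2^(n-1). The patent does not fix how blocks at one level overlap, so tiling with stride N is an assumption here:

```python
def level_regions(rows, cols, n):
    """Enumerate level-n preset areas over a rows x cols grid of unit
    squares. Each region spans N x N unit squares, N = 2**(n-1).
    Regions are returned as (top, left, bottom, right) in grid units;
    non-overlapping tiling with stride N is an illustrative assumption."""
    N = 2 ** (n - 1)
    return [(r, c, r + N, c + N)
            for r in range(0, rows - N + 1, N)
            for c in range(0, cols - N + 1, N)]
```

On the 6-row by 4-column grid of FIG. 3, level 1 yields the 24 unit squares A1-A24, and level 3 yields the single 4×4 block corresponding to A1-A16.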
In another possible implementation, the preset areas can also be divided according to a seating chart.
As shown in FIG. 4, when the panoramic image is divided into square areas, the division may be incomplete; as shown in FIG. 4, the divided preset areas A21'-A24' are rectangular. In this case, the preset areas A21'-A24' are still treated as square areas, and subsequent processing also proceeds as for square areas, as will be described later.
The method of determining the candidate shooting posture and candidate shooting focal length corresponding to each preset area is described in detail below with specific examples.
In the embodiments of this application, the pan-tilt camera can shoot in turn the range of the monitoring scene corresponding to each preset area, obtaining the shooting posture and shooting focal length for shooting the range corresponding to each preset area; the obtained shooting posture and shooting focal length for each preset area are taken as the candidate shooting posture and candidate shooting focal length corresponding to that preset area.
Continuing the embodiment shown in FIG. 3, in one possible implementation, to determine the candidate shooting postures and candidate shooting focal lengths of all level-1 preset areas, the pan-tilt camera shoots, from left to right and from top to bottom, the ranges of the monitoring scene corresponding to the level-1 preset areas A1-A24 in turn. That is, the range corresponding to level-1 preset area A1 is shot to obtain the shooting posture and shooting focal length for that range, and the candidate shooting posture and candidate shooting focal length of A1 are determined from them. Next, the range corresponding to level-1 preset area A2 is shot to obtain the shooting posture and shooting focal length for that range, and the candidate shooting posture and candidate shooting focal length of A2 are determined likewise. After the candidate shooting posture and focal length of level-1 preset area A4 are determined, the range corresponding to level-1 preset area A8 is shot and the candidate shooting posture and candidate shooting focal length of A8 are determined. In this order, the candidate shooting posture and candidate shooting focal length corresponding to each level-1 preset area are finally determined.
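The calibration sweep above amounts to visiting each preset area once and recording the pose that frames it. A minimal sketch, in which `FakePTZ` and its `point_at` call stand in for a real pan-tilt controller (real PTZ APIs differ):

```python
class FakePTZ:
    """Stand-in for a real PTZ controller; point_at would drive the gimbal
    to frame the region and report the resulting (pan, tilt, zoom)."""
    def point_at(self, region):
        top, left, bottom, right = region
        # placeholder pose: aim at the region's center at a fixed zoom
        return ((left + right) / 2, (top + bottom) / 2, 1.0)

def calibrate_presets(regions, ptz):
    """Sweep the preset areas in the given (reading) order and record the
    pose/zoom that frames each one, building the candidate-parameter table."""
    return {region: ptz.point_at(region) for region in regions}
```

The resulting table plays the role of the per-area candidate shooting posture and candidate shooting focal length used by the later interpolation formulas.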
In another possible implementation, the parameters of the four corners of the monitoring scene can be determined first, and the parameters of each preset area can then be roughly determined by even division.
It should be noted that in some embodiments the pan-tilt camera can be adjusted manually so that it shoots the range of the monitoring scene corresponding to each preset area, and the corresponding shooting postures and shooting focal lengths are recorded to obtain the candidate shooting posture and candidate shooting focal length of each preset area. In other embodiments, a shooting control module can control the pan-tilt camera to shoot the ranges of the monitoring scene corresponding to the preset areas in a set order; the corresponding shooting postures and shooting focal lengths are then recorded and stored to obtain the candidate shooting posture and candidate shooting focal length of each preset area.
As shown in FIGS. 4 and 5, for preset areas A21'-A24', the rectangular areas are supplemented into square areas; the pan-tilt camera then shoots the ranges of the monitoring scene corresponding to the supplemented square areas to obtain the shooting postures and shooting focal lengths for those ranges, and the candidate shooting postures and candidate shooting focal lengths of preset areas A21'-A24' are determined from them.
The specific implementation of determining the target shooting posture and target shooting focal length described in step S202 is explained in detail below with specific examples. FIG. 6 shows a schematic flowchart of an embodiment of determining the target shooting posture and target shooting focal length according to the target area as described in this application. As shown in FIG. 6, the method may include:
S601: determining the preset area in which the target area is located as the target preset area;
S602: determining the candidate shooting posture and candidate shooting focal length corresponding to the target preset area; and
S603: determining the target shooting posture and target shooting focal length according to the determined candidate shooting posture and candidate shooting focal length.
In the embodiments of this application, the candidate shooting postures and candidate shooting focal lengths of the preset areas determined in advance can be used, and the target shooting posture and target shooting focal length are then computed from the positional relationship between the target area and the target preset area.
The shooting posture and shooting focal length are shooting parameters of the pan-tilt camera. The shooting posture of a pan-tilt camera includes a horizontal rotation angle and a vertical rotation angle, i.e., the camera's rotation angles in the horizontal and vertical directions; together they determine the camera's shooting position. In some embodiments, the shooting focal length of the pan-tilt camera may be a zoom factor; the camera can adjust its focal length according to the zoom factor to adjust the sharpness of the captured image.
In the embodiments of this application, in step S601, the preset area in which the target area is located can be determined from the position of the target area in the acquired panoramic image of the monitoring scene, and that preset area is taken as the target preset area.
As shown in FIG. 3, the preceding steps have divided the panoramic image into at least two levels of preset areas of different sizes; on this basis, a target area of any size and at any position falls within some preset area. As shown in FIG. 7A, target area Z falls within level-1 preset area A1, so the preset area of Z is level-1 preset area A1; as shown in FIG. 7B, Z falls within level-2 preset area C1, so the preset area of Z is level-2 preset area C1; as shown in FIG. 7C, Z falls within level-4 preset area D1, so the preset area of Z is level-4 preset area D1.
In the embodiments of this application, by dividing the panoramic image into at least two preset areas and determining in advance the candidate shooting posture and candidate shooting focal length corresponding to each preset area, it is ensured that, regardless of the number of target areas, each target area falls within some preset area, so the shooting posture and shooting focal length of the target area can be quickly generated from the pre-determined candidate parameters, improving processing speed.
In the embodiments of this application, since the candidate shooting posture and candidate shooting focal length corresponding to each preset area are determined in advance, in step S602, once the target preset area of the target area is determined, the corresponding candidate shooting posture and candidate shooting focal length can be determined.
FIG. 8A shows a schematic flowchart of an embodiment of determining the target shooting posture according to the candidate shooting posture corresponding to the determined target preset area, as described in step S603. FIG. 8B shows a schematic flowchart of an embodiment of determining the target shooting focal length according to the candidate shooting focal length corresponding to the determined target preset area, as described in step S603.
As shown in FIG. 8A, determining the target shooting posture according to the candidate shooting posture corresponding to the target preset area in the embodiments of this application may include:
S811: selecting one preset area from the image of the monitoring scene as the first reference preset area, where the first reference preset area is different from the target preset area;
S812: determining the posture change ratio parameter according to the distance between the target preset area and the first reference preset area, the candidate shooting posture corresponding to the target preset area, and the candidate shooting posture corresponding to the first reference preset area; and
S813: generating the target shooting posture according to the distance between the target area and the target preset area, the candidate shooting posture corresponding to the target preset area, and the posture change ratio parameter.
In the embodiments of this application, as mentioned above, since the shooting posture of the pan-tilt camera includes a horizontal rotation angle and a vertical rotation angle, the required positional relationship between the first reference preset area and the target preset area when selecting the first reference preset area depends on whether the shooting parameter in the horizontal or the vertical direction is being determined. For example, when determining the horizontal shooting parameter, i.e., the horizontal rotation angle, the horizontal displacement between the first reference preset area and the target preset area must not be zero. Similarly, when determining the vertical shooting parameter, i.e., the vertical rotation angle, the vertical displacement between the first reference preset area and the target preset area must not be zero.
Specifically, in the embodiment of determining the target horizontal rotation angle of the pan-tilt camera, the distance between the target preset area and the first reference preset area is the horizontal distance between them, and the distance between the target area and the target preset area is the horizontal distance between them, where the horizontal distance between the target preset area and the first reference preset area must not be zero. In the embodiment of determining the target vertical rotation angle, the distance between the target preset area and the first reference preset area is the vertical distance between them, and the distance between the target area and the target preset area is the vertical distance between them, where the vertical distance between the target preset area and the first reference preset area must not be zero.
Optionally, in some embodiments of this application, a preset area adjacent to the target preset area can be selected as the first reference preset area.
The preset areas adjacent to the target preset area refer to one of the following: the preset areas to the right of and below the target preset area; or the preset areas to the right of and above it; or the preset areas to the left of and below it; or the preset areas to the left of and above it.
To simplify the implementation, for preset areas at other positions in the panoramic image, the adjacent preset areas can be uniformly chosen as the preset area to the right and the preset area below. In this case, for the rightmost column of preset areas in the panoramic image, the adjacent preset areas can be chosen as the preset area to the left and the preset area below. Alternatively, in the step of dividing the panoramic image into preset areas, a column of supplementary preset areas can be added at the far right of the panoramic image, and the pan-tilt camera shoots the ranges of the monitoring scene corresponding to the supplementary preset areas; the candidate shooting postures and candidate shooting focal lengths of the supplementary preset areas are determined from the shooting postures and shooting focal lengths with which the camera shoots those ranges. When the preset area of the target area is one of the rightmost preset areas, the adjacent preset area can be the supplementary preset area to its right. Likewise, for the bottom row of preset areas in the panoramic image, one of the above two options can be used to determine the adjacent preset areas.
In some possible embodiments, when the first reference preset area and the target preset area are adjacent preset areas, the target shooting posture can be computed according to formulas (1)-(2) below, where the target shooting posture includes the target horizontal rotation angle and the target vertical rotation angle.
First, in the embodiments of this application, the position information of the vertices of the target preset area is needed to determine the target horizontal rotation angle and target vertical rotation angle of the pan-tilt camera. To measure the positions of the vertices of the target preset area conveniently, a rectangular coordinate system is introduced. The lower-left corner of the image of the monitoring scene can usually be taken as the origin of the coordinate system, with the X and Y axes set along the directions of the preset-area division. As shown in FIG. 3, the lower-left vertex of preset area A21 (boundary point 16) can be taken as the origin; the edge defined by boundary points 17, 18, 19, and 20 as the X axis; and the edge defined by boundary points 14, 12, 10, 8, 6, and 1 as the Y axis. In this way, the position of each vertex of each preset area can be expressed as coordinates. For example, if the resolution of the image of the monitoring scene is 720*1080, the coordinates of the four vertices of preset area A1 can be expressed in turn as (0, 1080), (180, 1080), (0, 900), and (180, 900). As further examples, boundary point 5 has coordinates (720, 1080), boundary point 16 has coordinates (0, 0), boundary point 20 has coordinates (720, 0), and so on.
Next, the target horizontal rotation angle and target vertical rotation angle are determined from the positional relationship between the target area and the target preset area.
Let the target area be denoted by T, with T_1, T_2, and T_3 denoting its first, second, and third vertices. The abscissas of T_1 and T_2 differ and are denoted X_T1 and X_T2, respectively; the ordinates of T_1 and T_2 may or may not be equal. The ordinates of T_1 and T_3 differ and are denoted Y_T1 and Y_T3, respectively; the abscissas of T_1 and T_3 may or may not be equal.
Let the target preset area be denoted by D, with D_1, D_2, and D_3 denoting its first, second, and third vertices. The abscissas of D_1 and D_2 differ and are denoted X_D1 and X_D2, respectively; the ordinates of D_1 and D_2 may or may not be equal. The ordinates of D_1 and D_3 differ and are denoted Y_D1 and Y_D3, respectively; the abscissas of D_1 and D_3 may or may not be equal.
In addition, let P_D be the horizontal rotation angle of the target preset area D, and P_DI the horizontal rotation angle corresponding to the preset area adjacent to D in the horizontal direction (the X-axis direction), i.e., preset area DI lies to the right or left of preset area D. Let T_D be the vertical rotation angle of the target preset area D, and T_DJ the vertical rotation angle corresponding to the preset area adjacent to D in the vertical direction (the Y-axis direction), i.e., preset area DJ lies above or below preset area D.
Under the above assumptions, the target horizontal rotation angle P_T of the pan-tilt camera when shooting the target area can be computed by formula (1):
P_T = P_DI − (X_T1 + X_T2)/(X_D1 + X_D2) × (P_DI − P_D)  (1)
Here, it can be understood that (X_T1 + X_T2)/2 is the abscissa of the midpoint of the target area and (X_D1 + X_D2)/2 is the abscissa of the midpoint of the target preset area; hence (X_T1 + X_T2)/(X_D1 + X_D2) expresses the change ratio of the camera's horizontal rotation angle from the target area to the target preset area, i.e., the above change ratio parameter. (P_DI − P_D) is the total change of the camera's target horizontal rotation angle from the target preset area D to the preset area DI. Thus, (X_T1 + X_T2)/(X_D1 + X_D2) × (P_DI − P_D) is the change of the camera's target horizontal rotation angle from the target area T to the preset area DI. Therefore, the target horizontal rotation angle P_T of the pan-tilt camera when shooting the target area can be obtained from formula (1).
Similarly, under the above assumptions, the target vertical rotation angle T_T of the pan-tilt camera when shooting the target area can be computed by formula (2):
T_T = T_DJ + (Y_T1 + Y_T3)/(Y_D1 + Y_D3) × (T_D − T_DJ)  (2)
Here, it can be understood that (Y_T1 + Y_T3)/2 is the ordinate of the midpoint of the target area and (Y_D1 + Y_D3)/2 is the ordinate of the midpoint of the target preset area; hence (Y_T1 + Y_T3)/(Y_D1 + Y_D3) expresses the change ratio of the camera's vertical rotation angle from the target area to the target preset area, i.e., the above change ratio parameter. (T_D − T_DJ) is the total change of the camera's target vertical rotation angle from the preset area DJ to the target preset area D. Thus, (Y_T1 + Y_T3)/(Y_D1 + Y_D3) × (T_D − T_DJ) is the change of the camera's target vertical rotation angle from the preset area DJ to the target area T. Therefore, the target vertical rotation angle T_T of the pan-tilt camera when shooting the target area can be obtained from formula (2).
As shown in FIG. 8B, the method of determining the target shooting focal length according to the candidate shooting focal length corresponding to the target preset area in the embodiments of this application may include:
S821: determining a second reference preset area from the image of the monitoring scene, where either the second reference preset area contains the target preset area and has a larger area than it, or the target preset area contains the second reference preset area and has a larger area than it;
S822: obtaining the focal length change ratio parameter according to the area ratio between the target preset area and the second reference preset area, the candidate shooting focal length corresponding to the target preset area, and the candidate shooting focal length of the second reference preset area; and
S823: generating the target shooting focal length according to the area ratio between the target area and the target preset area, the candidate shooting focal length corresponding to the target preset area, and the focal length change ratio parameter.
Optionally, in some embodiments of this application, the panoramic image can be divided by size into at least two levels of preset areas; according to the preset area of the target area, a preset area differing from it by one or more levels is determined and taken as the second reference preset area; the candidate shooting focal length corresponding to the second reference preset area is then determined; and the shooting focal length corresponding to the target area is computed from the candidate shooting focal length corresponding to the target preset area and the candidate shooting focal length corresponding to the second reference preset area.
In some possible embodiments, the target shooting focal length can be computed according to formula (3), where the target shooting focal length may be a zoom factor.
Let the target area be denoted by T and the target preset area by D; let Z_D be the zoom factor corresponding to the target preset area and Z_M the zoom factor corresponding to the second reference preset area.
In the embodiments of this application, the target zoom factor Z_T can be computed by the following formula:
Z_T = Z_D − Z_a3 × (Z_M − Z_D)  (3)
where Z_a1 = |X_T1 + X_T2| / |X_D1 + X_D2|, Z_a2 = |Y_T1 + Y_T3| / |Y_D1 + Y_D3|, and Z_a3 takes the larger of Z_a1 and Z_a2. It can be seen that Z_a3 can be regarded as the above focal length change ratio parameter; (Z_M − Z_D) is the total focal length change of the pan-tilt camera from the target preset area D to the second reference preset area; and Z_a3 × (Z_M − Z_D) is the focal length change of the pan-tilt camera from the target area T to the target preset area D. Therefore, the target shooting focal length of the pan-tilt camera when shooting the target area can be obtained from formula (3).
With reference to FIGS. 3 and 7A, in an optional embodiment, 20 boundary points of the panoramic image are determined from the panoramic image, and the panoramic image is divided into 24 level-1 preset areas A1-A24. The pan-tilt camera shoots the range of the monitoring scene corresponding to each level-1 preset area; the candidate shooting posture and candidate shooting focal length of each level-1 preset area are then determined from the shooting posture and shooting focal length for shooting the range corresponding to that area.
From the panoramic image, target area Z is determined; from the size of Z and its position in the panoramic image, Z is determined to fall within level-1 preset area A1, so the preset area of the target area is level-1 preset area A1. The horizontal rotation angle P_A1, vertical rotation angle T_A1, and zoom factor Z_A1 corresponding to A1 are then determined, e.g., (P_A1, T_A1, Z_A1) = (0, 0.8, 0.8). From level-1 preset area A1, the adjacent level-1 preset areas A2 and A5 are determined. From A2, its horizontal rotation angle P_A2, vertical rotation angle T_A2, and zoom factor Z_A2 are determined, e.g., (P_A2, T_A2, Z_A2) = (0.25, 0.8, 0.8); from A5, its horizontal rotation angle P_A5, vertical rotation angle T_A5, and zoom factor Z_A5 are determined, e.g., (P_A5, T_A5, Z_A5) = (0, 0.67, 0.8).
此外,根据全景图像的分辨率,可确定各边界点的坐标。本实施例中,确定全景图像的分辨率为720*1080,则边界点1的坐标为(0, 1080),边界点5的坐标为(720, 1080),边界点16的坐标为(0, 0),边界点20的坐标为(720, 0);对于一级预设区域A1,其四个顶点的坐标分别为A_11(0, 1080)、A_12(180, 1080)、A_13(0, 900)、A_14(180, 900);对于目标区域Z,其三个顶点的坐标分别为Z_11(45, 1020)、Z_12(135, 1020)、Z_13(45, 948)。
另外,云台摄像机的水平旋转角度最大值、垂直旋转角度最大值、变焦倍数最大值为已知条件,本实施例中,设定(P_max, T_max, Z_max)=(1, 1, 1),则Z_M=1。
则,根据确定的各项参数,利用公式(1)-(5)计算目标区域Z的拍摄姿态和拍摄焦距,得到:
P_Z = 0.25 - ((45+135)/(0+180)) × (0.25-0) = 0  (6)
T_Z = 0.67 + ((1020+948)/(1080+900)) × (0.8-0.67) = 0.799  (7)
Z_Z = 1 - 1 × (1-0.8) = 0.8  (8)
经计算得到目标区域Z的水平旋转角度P_Z=0,垂直旋转角度T_Z=0.799,变焦倍数Z_Z=0.8,即(P_Z, T_Z, Z_Z)=(0, 0.799, 0.8)。
计算生成目标区域的水平旋转角度、垂直旋转角度、变焦倍数之后,可控制云台摄像机按照生成的水平旋转角度、垂直旋转角度、变焦倍数进行拍摄,使得云台摄像机能够拍摄目标区域所对应的监控场景内的范围,拍摄该范围之内的对象,实现云台摄像机拍摄监控场景内任意对象的目的。
在本申请的实施例中,从获取的监控场景的图像确定目标区域的过程中可能会存在初步确定的目标区域的数量大于云台摄像机的数量的情况,此时,为了有效地进行监控,需要对初步确定的目标区域进行合并,以确定最终待监控的目标区域,使得最终确定的待监控的目标区域的数量小于或者等于云台摄像机的数量。
图9示出了本申请所述的从获取的监控场景的图像中确定待监控的目标区域的实施方式的示意性流程图。如图9所示,从获取的监控场景的图像中确定待监控的目标区域可以包括:
S901:从所述监控场景的图像中,确定至少一个候选目标区域。
S902:判断候选目标区域的数量是否大于云台摄像机的数量,在候选目标区域的数量大于云台摄像机的数量时,执行S903;否则,执行S905。
其中,在本申请的实施例中,为了进行有效监控,所设置的云台摄像机的数量至少为1台。
S903:对至少两个候选目标区域进行合并处理,以使得合并处理后的候选目标区域的数量与云台摄像机的数量相同。
S904:将合并处理后的候选目标区域作为目标区域。
S905:将上述候选目标区域作为目标区域。
在本申请的实施例中,上述候选目标区域为根据监控场景的图像确定的需要关注的区域,也就是需要云台摄像机进行监控的区域。具体地,可以根据获取的监控场景的图像,利用图像识别处理技术,通过前文所述的多种方法确定出需要云台摄像机监控的多个候选目标区域。通常,对于每个候选目标区域都需要单独一台云台摄像机进行监控,因此,在云台摄像机的数量有限的情况下,可能需要对多个候选目标区域中的部分候选目标区域进行合并处理,以确定出每台云台摄像机待监控的目标区域。
一种情况是,候选目标区域的数量少于或等于云台摄像机的数量,则可以将候选目标区域作为目标区域,并建立目标区域与云台摄像机的对应关系,然后,针对各个目标区域,分别确定目标拍摄姿态和目标拍摄焦距,控制对应的云台摄像机按照上述目标拍摄姿态和目标拍摄焦距进行拍摄。
另一种情况是,若候选目标区域的数量大于云台摄像机的数量,则需要对上述候选目标区域中的至少两个候选目标区域进行合并处理,使得合并处理后的候选目标区域的数量与云台摄像机的数量相同;然后,将合并处理后的候选目标区域作为目标区域,并建立目标区域与云台摄像机的对应关系;再然后,针对各个目标区域,确定上述目标拍摄姿态和目标拍摄焦距,控制云台摄像机按照目标拍摄姿态和目标拍摄焦距进行拍摄。
下面将结合具体的示例详细说明对候选目标区域进行合并的方法。
在本申请一种可能的实施例中,可以基于候选目标区域之间的距离对至少两个候选目标区域进行合并处理,S903包括:
确定两两候选目标区域的距离;
按照上述距离从小到大的顺序依次将候选目标区域进行合并,直至合并后候选目标区域的数量和云台摄像机的数量相同。
即,当判断候选目标区域的数量大于云台摄像机的数量时,分别确定两两候选目标区域之间的距离;之后,从中选取出距离最近的两候选目标区域,将二者合并成一个候选目标区域。合并处理之后,判断候选目标区域的数量是否和云台摄像机的数量相同;若相同,则将合并处理后的候选目标区域作为目标区域;若不相同,继续进行合并处理,直至候选目标区域的数量与云台摄像机的数量相同为止。
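上述按距离从小到大逐步合并的过程,可以用如下Python草图表示(以轴对齐矩形(x1, y1, x2, y2)表示候选目标区域,距离取两矩形边界间的最近间距,合并结果取两矩形的外接矩形;这些表示方式均为便于说明的假设):

```python
from itertools import combinations


def merge_by_distance(regions, num_cameras):
    """不断合并距离最近的两个候选目标区域,直至区域数量不超过云台摄像机数量。"""
    def gap(a, b):
        # 两轴对齐矩形边界间的最近间距(相交时为0)
        dx = max(a[0] - b[2], b[0] - a[2], 0)
        dy = max(a[1] - b[3], b[1] - a[3], 0)
        return (dx * dx + dy * dy) ** 0.5

    regions = list(regions)
    while len(regions) > num_cameras:
        i, j = min(combinations(range(len(regions)), 2),
                   key=lambda p: gap(regions[p[0]], regions[p[1]]))
        a, b = regions[i], regions[j]
        merged = (min(a[0], b[0]), min(a[1], b[1]),
                  max(a[2], b[2]), max(a[3], b[3]))  # 外接矩形作为合并结果
        regions = [r for k, r in enumerate(regions) if k not in (i, j)]
        regions.append(merged)
    return regions
```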
在上述合并处理过程中,可能出现的情况是,分别确定两两候选目标区域之间的距离之后,距离最近的候选目标区域有多组。即,存在至少两组候选目标区域,每组候选目标区域包括两个待合并的候选目标区域,各组中的两个待合并的候选目标区域的距离相等,则,按照合并优先级对至少两组候选目标区域进行合并。其中,所述合并优先级是:
第一优先级,两个待合并的候选目标区域的大小均小于预设的面积阈值;
第二优先级,其中一个待合并的候选目标区域的大小大于面积阈值,另一个待合并的候选目标区域的大小小于面积阈值;
第三优先级,两个待合并的候选目标区域的大小均大于面积阈值。
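当存在多组距离相等的待合并区域时,上述合并优先级可以用一个简单的函数示意(以区域面积与面积阈值作为输入,命名为假设):

```python
def merge_priority(area_a, area_b, area_threshold):
    # 返回值越小,优先级越高:
    # 1:两区域面积均小于阈值;2:一小一大;3:均不小于阈值
    small_count = (area_a < area_threshold) + (area_b < area_threshold)
    return {2: 1, 1: 2, 0: 3}[small_count]
```

实际选取时,可先找出距离最小的各组候选目标区域,再在其中按该优先级从高到低决定先合并哪一组。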
结合图10所示,从图像中确定出候选目标区域Z1、Z2、Z3、Z4,假如云台摄像机的数量为三部,则需要对候选目标区域Z1、Z2、Z3、Z4进行合并处理,以将候选目标区域合并为三个。首先确定两两候选目标区域之间的距离,得到候选目标区域Z1与候选目标区域Z2之间的距离、候选目标区域Z1与候选目标区域Z3之间的距离、候选目标区域Z3与候选目标区域Z4之间的距离均为d0,候选目标区域Z2与候选目标区域Z4之间的距离为d1,候选目标区域Z2与候选目标区域Z3之间的距离为d2,候选目标区域Z1与候选目标区域Z4之间的距离为d3。且判定d0、d1、d2和d3中,d0为距离最小值。则,进一步确定候选目标区域Z1、Z2、Z3、Z4的大小,得到候选目标区域Z1与候选目标区域Z2的大小均小于设定面积阈值,按照合并优先级,确定将候选目标区域Z1与候选目标区域Z2合并成一个候选目标区域。
其中,确定两两候选目标区域之间的距离,可以是两个候选目标区域最近边之间的距离,或是两个候选目标区域最远边之间的距离,或是两个候选目标区域最远顶点之间的距离等等,具体不做限定,只要计算距离的依据相同即可。
在本申请另一种可能的实施例中,还可以基于候选目标区域的面积对至少两个候选目标区域进行合并处理,则S903具体可以包括:将任意两个所述候选目标区域进行预合并,得到预合并后的候选目标区域;确定所述预合并后的候选目标区域的面积;选取面积最小的所述预合并后的候选目标区域;以及对所述面积最小的所述预合并后的候选目标区域所对应的两个候选目标区域进行合并处理。
在本申请再一种可能的实施例中,为了提高合并效率,可以基于候选目标区域的面积以及候选目标区域之间的距离对确定出的候选目标区域进行合并,S903具体可以包括:确定每个候选目标区域的面积;选取面积最小的候选目标区域;确定面积最小的候选目标区域与其他每个候选目标区域的第一距离;选取第一距离最近的候选目标区域;将面积最小的所述候选目标区域与第一距离最近的候选目标区域进行合并处理。
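该"最小面积+最近距离"的一次合并步骤可以示意如下(其中以矩形中心点间距近似第一距离,属于示意性假设,实际可按任一统一的距离定义实现):

```python
def merge_smallest_with_nearest(regions):
    """选面积最小的候选区域,与其第一距离最近的区域合并,返回合并后的区域列表。"""
    def area(r):
        return (r[2] - r[0]) * (r[3] - r[1])

    def center_dist(a, b):
        ax, ay = (a[0] + a[2]) / 2, (a[1] + a[3]) / 2
        bx, by = (b[0] + b[2]) / 2, (b[1] + b[3]) / 2
        return ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5

    i = min(range(len(regions)), key=lambda k: area(regions[k]))
    j = min((k for k in range(len(regions)) if k != i),
            key=lambda k: center_dist(regions[i], regions[k]))
    merged = (min(regions[i][0], regions[j][0]), min(regions[i][1], regions[j][1]),
              max(regions[i][2], regions[j][2]), max(regions[i][3], regions[j][3]))
    return [r for k, r in enumerate(regions) if k not in (i, j)] + [merged]
```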
在本申请又一种可能的实施例中,还可以基于候选目标区域的面积以及候选目标区域之间的距离对至少两个候选目标区域进行合并处理,则S903具体可以包括:确定每个所述候选目标区域的面积;选取面积最小的所述候选目标区域;确定所述面积最小的所述候选目标区域与其他每个所述候选目标区域的第一距离;选取第一距离最近的所述候选目标区域;将所述面积最小的所述候选目标区域与所述第一距离最近的所述候选目标区域进行预合并处理,得到预合并处理后的候选目标区域;确定所述预合并处理后的候选目标区域的面积;将所述预合并处理后的候选目标区域的面积与预设的面积阈值进行比较;若所述预合并处理后的候选目标区域的面积小于等于所述面积阈值,将所述面积最小的所述候选目标区域与所述第一距离最近的所述候选目标区域进行合并处理;若所述预合并处理后的候选目标区域的面积大于所述面积阈值,则:将所述面积最小的所述候选目标区域与除所述第一距离最近的所述候选目标区域之外其他的所述候选目标区域,按照所述第一距离从近到远依次进行预合并处理;当任一预合并处理后的候选目标区域的面积小于等于所述面积阈值时,将预合并处理的两个候选目标区域进行合并处理。
当所有预合并处理后的候选目标区域的面积均大于所述面积阈值时,从除所述面积最小的所述候选目标区域之外其他的所述候选目标区域,按照面积从小到大依次选取所述候选目标区域;确定选取出的所述候选目标区域与其他每个所述候选目标区域的第二距离;将选取出的所述候选目标区域与其他每个所述候选目标区域,按照所述第二距离从近到远依次进行预合并处理;当任一预合并处理后的候选目标区域的面积小于等于所述面积阈值时,将预合并处理的两个候选目标区域进行合并处理。
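上述带面积阈值约束的预合并流程可以概括为如下Python草图(同样以矩形中心距近似距离、以外接矩形作为预合并结果,均为示意性假设;为简洁起见,这里将"按面积从小到大、按距离从近到远"的两层遍历写在一个函数中,返回一次成功合并后的区域列表,全部超出阈值时返回None):

```python
def merge_with_area_threshold(regions, area_threshold):
    """尝试一次合并:按面积从小到大遍历候选区域,对每个候选按中心距从近到远预合并,
    提交第一个预合并面积不超过阈值的组合;全部超过阈值则返回None。"""
    def area(r):
        return (r[2] - r[0]) * (r[3] - r[1])

    def dist(a, b):
        ax, ay = (a[0] + a[2]) / 2, (a[1] + a[3]) / 2
        bx, by = (b[0] + b[2]) / 2, (b[1] + b[3]) / 2
        return ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5

    by_area = sorted(range(len(regions)), key=lambda k: area(regions[k]))
    for i in by_area:
        others = sorted((k for k in range(len(regions)) if k != i),
                        key=lambda k: dist(regions[i], regions[k]))
        for j in others:
            merged = (min(regions[i][0], regions[j][0]),
                      min(regions[i][1], regions[j][1]),
                      max(regions[i][2], regions[j][2]),
                      max(regions[i][3], regions[j][3]))
            if area(merged) <= area_threshold:  # 预合并结果满足面积约束,提交合并
                return [r for k, r in enumerate(regions)
                        if k not in (i, j)] + [merged]
    return None  # 所有预合并结果均超过面积阈值
```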
除了上述对候选目标区域进行合并的方法之外,在本申请的另一些可能的实施例中,也可以基于运动对象所在的区域以及云台摄像机的数量确定上述目标区域,则S101可以包括:
判断运动对象所在区域的数量是否大于云台摄像机的数量;
若是,则:
确定运动对象的运动幅度;根据运动对象的运动幅度从大到小对运动对象所在区域的优先级进行高低排列;按照优先级从高到低,根据云台摄像机的数量选取运动对象所在区域作为目标区域;
或,确定运动对象的运动幅度;根据运动对象的运动幅度,确定运动幅度大于预设的运动阈值的候选运动对象所在区域;判断候选运动对象所在区域的数量是否大于云台摄像机的数量;若是,对候选运动对象所在区域进行合并处理,以使得合并处理后的候选运动对象所在区域的数量与云台摄像机的数量相同;将合并处理后的候选运动对象所在区域作为目标区域。
本实施例中,所监控的对象为运动对象,全景图像中运动对象所在区域为目标区域,当确定出全景图像中存在多个运动对象所在区域,而云台摄像机的数量有限时,可按照运动对象的运动幅度,确定待监控的目标区域。
一种方式是,首先根据运动对象的运动幅度从大到小对运动对象所在区域的优先级进行高低排列;然后根据云台摄像机的数量,从中选取相应数量的运动对象所在区域作为目标区域。
另一种方式是,设置运动阈值。首先,确定运动幅度大于运动阈值的候选运动对象所在区域;若确定出的候选运动对象所在区域的数量小于等于云台摄像机的数量,则将候选运动对象所在区域作为目标区域;若确定出的候选运动对象所在区域的数量大于云台摄像机的数量,则对至少两个候选运动对象所在区域进行合并处理,以使合并处理后的候选运动对象所在区域的数量与云台摄像机的数量相同。其中合并处理的方法可参照前述实施例所述的合并处理方法,在此不再赘述。
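其中按运动幅度从大到小选取目标区域的方式,可以用一个简单的排序示意(regions与amplitudes一一对应,命名为假设):

```python
def select_by_motion(regions, amplitudes, num_cameras):
    # 按运动幅度从大到小对运动对象所在区域排序,取前 num_cameras 个作为目标区域
    order = sorted(range(len(regions)),
                   key=lambda i: amplitudes[i], reverse=True)
    return [regions[i] for i in order[:num_cameras]]
```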
在一些可能的实施例中,若从监控场景的图像中,确定出多个候选目标区域,且候选目标区域的数量大于设定的区域数阈值,为保证监控效果,可配合广角摄像机对监控场景进行监控。
则,S101包括:
从所述图像中,确定至少两个候选目标区域;
判断候选目标区域的数量是否大于设定的区域数阈值;
若是,根据广角摄像机所拍摄影像的清晰度,将所述图像划分为第一区域和第二区域;其中,在第一区域内,广角摄像机所拍摄影像的清晰度达到预设的清晰度阈值;
在第一区域内,控制广角摄像机进行拍摄;
将第二区域作为目标区域。
本实施例中,从获取的图像中,确定出至少两个候选目标区域,当判断候选目标区域的数量大于设定的区域数阈值时,由于候选目标区域数量多且分布在图像中的不同位置,为实现全面覆盖的监控效果,可利用广角摄像机和云台摄像机同时对监控场景进行监控。此种情况下,将图像划分为第一区域和第二区域,根据广角摄像机的清晰拍摄范围确定第一区域,将第二区域作为目标区域,利用云台摄像机拍摄目标区域所对应的监控场景的范围。其中,广角摄像机的清晰拍摄范围是指:在清晰拍摄范围之内,广角摄像机拍摄的影像清晰度能够达到预设的清晰度阈值。
实际应用中,在教室前方黑板的上方安装广角摄像机,根据广角摄像机的清晰拍摄范围,确定广角摄像机拍摄教室的前区(例如前三排课桌),利用云台摄像机拍摄教室的后区。当教室内需要监控的学生人数较多且分布在教室的不同位置时,通过广角摄像机和云台摄像机的配合,能够达到全覆盖且影像清晰度佳的监控效果。
图11为本申请实施例的装置结构示意图,监控装置包括:
区域确定模块1102,被配置为从获取的监控场景的图像中,确定待监控的目标区域;
参数确定模块1104,被配置为根据所述目标区域,确定目标拍摄姿态和目标拍摄焦距;
控制模块1106,被配置为控制云台摄像机按照所述目标拍摄姿态和所述目标拍摄焦距进行拍摄。
一种实施方式中,所述区域确定模块1102包括:
获取单元,被配置为获取监控场景的全景图像;
确定单元,被配置为根据全景图像,确定目标区域。
一种实施方式中,所述参数确定模块1104包括:
划分单元,被配置为将全景图像划分为至少两个预设区域;
候选参数确定单元,被配置为确定每个所述预设区域所对应的候选拍摄姿态和候选拍摄焦距;
目标预设区域确定单元,被配置为确定目标区域所在预设区域,作为目标预设区域;
第一参数确定单元,被配置为确定目标预设区域对应的第一候选拍摄姿态和第一候选拍摄焦距;
参数生成单元,被配置为根据第一候选拍摄姿态和第一候选拍摄焦距,生成目标拍摄姿态和目标拍摄焦距。
一种实施方式中,所述参数生成单元包括:
第一参考区域确定子单元,被配置为从所述监控场景的图像中选取一个预设区域作为第一参考预设区域;所述第一参考预设区域不同于所述目标预设区域;
姿态变化比例参数确定子单元,被配置为根据所述目标预设区域与所述第一参考预设区域之间的距离、所述目标预设区域对应的候选拍摄姿态及所述第一参考预设区域对应的候选拍摄姿态,确定姿态变化比例参数;
第一计算子单元,被配置为根据所述目标区域与所述目标预设区域之间的距离、所述目标预设区域对应的候选拍摄姿态及所述姿态变化比例参数,生成所述目标拍摄姿态。
所述参数生成单元还包括:
第二参考区域确定子单元,被配置为从所述监控场景的图像中确定第二参考预设区域;所述第二参考预设区域包括所述目标预设区域且所述第二参考预设区域的面积大于所述目标预设区域,或者,所述目标预设区域包括所述第二参考预设区域且所述目标预设区域的面积大于所述第二参考预设区域;
焦距变化比例确定子单元,被配置为根据所述目标预设区域和所述第二参考预设区域之间的面积比、所述目标预设区域对应的候选拍摄焦距和所述第二参考预设区域的候选拍摄焦距,得到焦距变化比例参数;
第二计算子单元,被配置为根据所述目标区域与所述目标预设区域之间的面积比、所述目标预设区域对应的候选拍摄焦距及所述焦距变化比例参数,生成所述目标拍摄焦距。
一种实施方式中,所述区域确定模块1102还包括:
区域数量确定单元,被配置为从图像中,确定至少一个候选目标区域;
判断单元,被配置为判断候选目标区域的数量是否大于云台摄像机的数量;
合并单元,被配置为在判断单元的判断结果为是时,对至少两个候选目标区域进行合并处理,以使得合并处理后的候选目标区域的数量与云台摄像机的数量相同;
目标区域确定单元,被配置为将合并处理后的候选目标区域作为目标区域。
一种实施方式中,所述合并单元包括:
预合并模块,被配置为将任意两个所述候选目标区域进行预合并,得到预合并后的候选目标区域;
面积确定模块,被配置为确定所述预合并后的候选目标区域的面积;
选择模块,被配置为选取面积最小的所述预合并后的候选目标区域;
合并模块,被配置为对所述面积最小的所述预合并后的候选目标区域所对应的两个候选目标区域进行合并处理。
一种实施方式中,所述合并单元包括:
目标区域面积确定模块,被配置为确定每个所述候选目标区域的面积;
选择模块,被配置为选取面积最小的所述候选目标区域;
距离确定模块,被配置为确定所述面积最小的所述候选目标区域与其他每个所述候选目标区域的第一距离;
选取模块,被配置为选取第一距离最近的所述候选目标区域;
合并模块,被配置为将所述面积最小的所述候选目标区域与所述第一距离最近的所述候选目标区域进行合并处理。
一种实施方式中,合并单元包括:
目标区域面积确定模块,被配置为确定每个所述候选目标区域的面积;
选择模块,被配置为选取面积最小的所述候选目标区域;
距离确定模块,被配置为确定所述面积最小的所述候选目标区域与其他每个所述候选目标区域的第一距离;
选取模块,被配置为选取第一距离最近的所述候选目标区域;
预合并模块,被配置为将所述面积最小的所述候选目标区域与所述第一距离最近的所述候选目标区域进行预合并处理,得到预合并处理后的候选目标区域;
面积确定模块,被配置为确定所述预合并处理后的候选目标区域的面积;
比较模块,被配置为将所述预合并处理后的候选目标区域的面积与预设的面积阈值进行比较;
合并模块,被配置为若所述预合并处理后的候选目标区域的面积小于等于所述面积阈值,将所述面积最小的所述候选目标区域与所述第一距离最近的所述候选目标区域进行合并处理,若所述预合并处理后的候选目标区域的面积大于所述面积阈值,则:将所述面积最小的所述候选目标区域与除所述第一距离最近的所述候选目标区域之外其他的所述候选目标区域,按照所述第一距离从近到远依次进行预合并处理;当任一预合并处理后的候选目标区域的面积小于或等于所述面积阈值时,将预合并处理的两个候选目标区域进行合并处理。
一种实施方式中,所述合并单元包括:
距离确定模块,被配置为确定两两候选目标区域之间的距离;
合并模块,被配置为按照所述距离从小到大的顺序,依次将候选目标区域进行合并,直至合并处理后的候选目标区域的数量不大于所述云台摄像机的数量。
一些实施例中,所述确定单元包括:
运动区域确定子单元,被配置为根据连续多帧的全景图像,确定运动对象所在区域;
目标区域确定子单元,被配置为基于运动对象所在区域,确定目标区域。
一些实施例中,所述目标区域确定子单元被配置为:
判断运动对象所在区域的数量是否大于云台摄像机的数量;
若是,则:
确定运动对象的运动幅度;根据运动对象的运动幅度从大到小对运动对象所在区域的优先级进行高低排列;按照优先级从高到低,根据云台摄像机的数量选取运动对象所在区域作为目标区域;
或,确定运动对象的运动幅度;根据运动对象的运动幅度,确定运动幅度大于预设的运动阈值的候选运动对象所在区域;判断候选运动对象所在区域的数量是否大于云台摄像机的数量;若是,对候选运动对象所在区域进行合并处理,以使得合并处理后的候选运动对象所在区域的数量与云台摄像机的数量相同;将合并处理后的候选运动对象所在区域作为目标区域。
一些实施例中,所述区域确定模块1102还包括:
阈值判断单元,被配置为判断候选目标区域的数量是否大于设定的区域数阈值;
区域划分单元,被配置为在阈值判断单元判断为是时,根据广角摄像机所拍摄影像的清晰度,将图像划分为第一区域和第二区域;其中,在所述第一区域内,广角摄像机所拍摄影像的清晰度达到预设的清晰度阈值;
广角控制单元,被配置为控制广角摄像机进行拍摄;
目标区域确定单元,被配置为将第二区域作为目标区域。
上述实施例的装置被配置为实现前述实施例中相应的方法,且具有相应的方法实施例的有益效果,在此不再赘述。
本申请实施例中,还提供一种电子设备,包括存储器、处理器及存储在存储器上并可在处理器上运行的计算机程序,处理器执行程序时实现上述任一监控方法。
本申请实施例中,还提供一种非暂态计算机可读存储介质,非暂态计算机可读存储介质存储计算机指令,计算机指令用于使计算机执行上述任一监控方法。
本申请实施例中,还提供一种监控系统,包括广角摄像机、电子设备及云台摄像机;其中:
广角摄像机,被配置为采集监控场景的图像;
电子设备,被配置为从获取的监控场景的图像中,确定待监控的目标区域;根据目标区域,确定拍摄姿态和拍摄焦距;以及,控制云台摄像机按照拍摄姿态和拍摄焦距进行拍摄。
本申请实施例中,还提供一种监控系统,包括电子设备及云台摄像机;其中:
云台摄像机,被配置为采集监控场景的图像;
电子设备,被配置为从获取的监控场景的图像中,确定待监控的目标区域;根据目标区域,确定拍摄姿态和拍摄焦距;以及,控制云台摄像机按照拍摄姿态和拍摄焦距进行拍摄。
所属领域的普通技术人员应当理解:以上任何实施例的讨论仅为示例性的,并非旨在暗示本公开的范围(包括权利要求)被限于这些例子;在本申请的思路下,以上实施例或者不同实施例中的技术特征之间也可以进行组合,步骤可以以任意顺序实现,并存在如上所述的本申请的不同方面的许多其它变化,为了简明它们没有在细节中提供。
另外,为简化说明和讨论,并且为了不会使本申请难以理解,在所提供的附图中可以示出或可以不示出与集成电路(IC)芯片和其它部件的公知的电源/接地连接。此外,可以以框图的形式示出装置,以便避免使本申请难以理解,并且这也考虑了以下事实,即关于这些框图装置的实施方式的细节是高度取决于将要实施本申请的平台的(即,这些细节应当完全处于本领域技术人员的理解范围内)。在阐述了具体细节(例如,电路)以描述本申请的示例性实施例的情况下,对本领域技术人员来说显而易见的是,可以在没有这些具体细节的情况下或者这些具体细节有变化的情况下实施本申请。因此,这些描述应被认为是说明性的而不是限制性的。
尽管已经结合了本申请的具体实施例对本申请进行了描述,但是根据前面的描述,这些实施例的很多替换、修改和变型对本领域普通技术人员来说将是显而易见的。例如,其它存储器架构(例如,动态RAM(DRAM))可以使用所讨论的实施例。
本申请的实施例旨在涵盖落入所附权利要求的宽泛范围之内的所有这样的替换、修改和变型。因此,凡在本申请的精神和原则之内,所做的任何省略、修改、等同替换、改进等,均应包含在本申请的保护范围之内。

Claims (19)

  1. 一种监控方法,包括:
    从获取的监控场景的图像中,确定待监控的目标区域;
    根据所述目标区域,确定目标拍摄姿态和目标拍摄焦距;
    控制云台摄像机按照所述目标拍摄姿态和所述目标拍摄焦距进行拍摄。
  2. 根据权利要求1所述的方法,其中,所述根据所述目标区域,确定目标拍摄姿态和目标拍摄焦距,包括:
    确定所述目标区域所在的预设区域作为目标预设区域,其中,所述预设区域是基于所述监控场景的图像划分出的,所述预设区域的数量为至少两个;
    根据所述目标预设区域,确定对应的候选拍摄姿态和候选拍摄焦距;所述预设区域对应的候选拍摄姿态和候选拍摄焦距是根据所述云台摄像机拍摄所述预设区域所对应的监控场景内的范围得到的;
    根据确定出的候选拍摄姿态和候选拍摄焦距,确定所述目标拍摄姿态和所述目标拍摄焦距。
  3. 根据权利要求2所述的方法,其中,所述根据确定出的候选拍摄姿态和候选拍摄焦距,确定所述目标拍摄姿态和所述目标拍摄焦距,包括:
    从所述监控场景的图像中选取一个预设区域作为第一参考预设区域;所述第一参考预设区域不同于所述目标预设区域;
    根据所述目标预设区域与所述第一参考预设区域之间的距离、所述目标预设区域对应的候选拍摄姿态及所述第一参考预设区域对应的候选拍摄姿态,确定姿态变化比例参数;
    根据所述目标区域与所述目标预设区域之间的距离、所述目标预设区域对应的候选拍摄姿态及所述姿态变化比例参数,生成所述目标拍摄姿态。
  4. 根据权利要求3所述的方法,其中,所述目标拍摄姿态包括水平旋转角度和垂直旋转角度;
    针对所述水平旋转角度,所述目标预设区域与所述第一参考预设区域之间的距离为所述目标预设区域与所述第一参考预设区域之间的水平距离; 所述目标区域与所述目标预设区域之间的距离为所述目标区域与所述目标预设区域之间的水平距离;其中,所述目标预设区域与所述第一参考预设区域之间的水平距离不为零;以及
    针对所述垂直旋转角度,所述目标预设区域与所述第一参考预设区域之间的距离为所述目标预设区域与所述第一参考预设区域之间的垂直距离;所述目标区域与所述目标预设区域之间的距离为所述目标区域与所述目标预设区域之间的垂直距离;其中,所述目标预设区域与所述第一参考预设区域之间的垂直距离不为零。
  5. 根据权利要求2或3所述的方法,其中,所述根据确定出的候选拍摄姿态和候选拍摄焦距,确定所述目标拍摄姿态和所述目标拍摄焦距,包括:
    从所述监控场景的图像中确定第二参考预设区域;所述第二参考预设区域包括所述目标预设区域且所述第二参考预设区域的面积大于所述目标预设区域,或者,所述目标预设区域包括所述第二参考预设区域且所述目标预设区域的面积大于所述第二参考预设区域;
    根据所述目标预设区域和所述第二参考预设区域之间的面积比、所述目标预设区域对应的候选拍摄焦距和所述第二参考预设区域的候选拍摄焦距,得到焦距变化比例参数;
    根据所述目标区域与所述目标预设区域之间的面积比、所述目标预设区域对应的候选拍摄焦距及所述焦距变化比例参数,生成所述目标拍摄焦距。
  6. 根据权利要求1所述的方法,其中,所述从获取的监控场景的图像中,确定待监控的目标区域,包括:
    从所述图像中,确定至少一个候选目标区域;
    判断所述候选目标区域的数量是否大于所述云台摄像机的数量;
    若是,则对确定出的候选目标区域进行合并处理,以使得合并处理后的候选目标区域的数量不大于所述云台摄像机的数量;
    将合并处理后的候选目标区域作为所述目标区域。
  7. 根据权利要求6所述的方法,其中,所述对确定出的候选目标区域进行合并处理,包括:
    将任意两个所述候选目标区域进行预合并,得到预合并后的候选目标区域;
    确定各个所述预合并后的候选目标区域的面积;
    选取面积最小的所述预合并后的候选目标区域;
    对所述面积最小的所述预合并后的候选目标区域所对应的两个候选目标区域进行合并处理。
  8. 根据权利要求6所述的方法,其中,所述对确定出的候选目标区域进行合并处理,包括:
    确定每个所述候选目标区域的面积;
    选取面积最小的所述候选目标区域;
    确定所述面积最小的所述候选目标区域与其他每个所述候选目标区域的第一距离;
    选取第一距离最近的所述候选目标区域;
    将所述面积最小的所述候选目标区域与所述第一距离最近的所述候选目标区域进行合并处理。
  9. 根据权利要求6所述的方法,其中,所述对确定出的候选目标区域进行合并处理,包括:
    确定每个所述候选目标区域的面积;
    选取面积最小的所述候选目标区域;
    确定所述面积最小的所述候选目标区域与其他每个所述候选目标区域的第一距离;
    选取第一距离最近的所述候选目标区域;
    将所述面积最小的所述候选目标区域与所述第一距离最近的所述候选目标区域进行预合并处理,得到预合并处理后的候选目标区域;
    确定所述预合并处理后的候选目标区域的面积;
    将所述预合并处理后的候选目标区域的面积与预设的面积阈值进行比较;
    若所述预合并处理后的候选目标区域的面积小于或等于所述面积阈值,将所述面积最小的所述候选目标区域与所述第一距离最近的所述候选目标区域进行合并处理;
    若所述预合并处理后的候选目标区域的面积大于所述面积阈值,则:
    将所述面积最小的所述候选目标区域与除所述第一距离最近的所述候选目标区域之外其他的所述候选目标区域,按照所述第一距离从近到远依次进行预合并处理;当任一预合并处理后的候选目标区域的面积小于或等于所述面积阈值时,将预合并处理的两个候选目标区域进行合并处理。
  10. 根据权利要求9所述的方法,其中,所述对确定出的候选目标区域进行合并处理,包括:
    当所有预合并处理后的候选目标区域的面积均大于所述面积阈值时,从除所述面积最小的所述候选目标区域之外其他的所述候选目标区域,按照面积从小到大依次选取所述候选目标区域;
    确定选取出的所述候选目标区域与其他每个所述候选目标区域的第二距离;
    将选取出的所述候选目标区域与其他每个所述候选目标区域,按照所述第二距离从近到远依次进行预合并处理;当任一预合并处理后的候选目标区域的面积小于等于所述面积阈值时,将预合并处理的两个候选目标区域进行合并处理。
  11. 根据权利要求6所述的方法,其中,所述对确定出的候选目标区域进行合并处理,包括:
    确定两两候选目标区域之间的距离;
    按照所述距离从小到大的顺序,依次将候选目标区域进行合并,直至合并处理后的候选目标区域的数量不大于所述云台摄像机的数量。
  12. 根据权利要求1所述的方法,其中,所述从获取的监控场景的图像中,确定待监控的目标区域,包括:
    根据连续多帧的所述图像,确定运动对象及所述运动对象所在区域;
    基于所述运动对象所在区域,确定所述目标区域。
  13. 根据权利要求12所述的方法,其中,所述基于所述运动对象所在区域,确定所述目标区域,包括:
    判断所述运动对象所在区域的数量是否大于所述云台摄像机的数量;
    若是,则:
    确定所述运动对象的运动幅度;根据所述运动对象的运动幅度从大到小对所述运动对象所在区域的优先级进行高低排列;按照优先级从高到低,根据所述云台摄像机的数量选取所述运动对象所在区域作为所述目标区域;
    或,
    确定所述运动对象的运动幅度;根据所述运动对象的运动幅度,确定运动幅度大于预设的运动阈值的候选运动对象所在区域;判断所述候选运动对象所在区域的数量是否大于所述云台摄像机的数量;若是,对所述候选运动对象所在区域进行合并处理,以使得合并处理后的所述候选运动对象所在区域的数量与所述云台摄像机的数量相同;将合并处理后的所述候选运动对象所在区域作为所述目标区域。
  14. 根据权利要求1所述的方法,其中,所述监控场景内设置有广角摄像机;所述从获取的监控场景的图像中,确定待监控的目标区域,包括:
    根据广角摄像机所拍摄影像的清晰度,将所述图像划分为第一区域和第二区域;其中,在所述第一区域内,广角摄像机所拍摄影像的清晰度达到预设的清晰度阈值;以及
    将所述第二区域作为所述目标区域。
  15. 一种监控装置,其中,包括:
    区域确定模块,被配置为从获取的监控场景的图像中,确定待监控的目标区域;
    参数确定模块,被配置为根据所述目标区域,确定目标拍摄姿态和目标拍摄焦距;
    控制模块,被配置为控制云台摄像机按照所述目标拍摄姿态和所述目标拍摄焦距进行拍摄。
  16. 一种电子设备,包括存储器、处理器及存储在存储器上并可在处理器上运行的计算机程序,其中,所述处理器执行所述程序时实现如权利要求1至14任意一项所述的方法。
  17. 一种非暂态计算机可读存储介质,其中,所述非暂态计算机可读存储介质存储计算机指令,所述计算机指令用于使所述计算机执行权利要求1至14任一项所述方法。
  18. 一种监控系统,其中,包括:广角摄像机、电子设备及云台摄像机;其中:
    广角摄像机,被配置为采集监控场景的图像;
    电子设备,被配置为从获取的监控场景的图像中,确定待监控的目标区域;根据所述目标区域,确定目标拍摄姿态和目标拍摄焦距;以及,控制云台摄像机按照所述目标拍摄姿态和所述目标拍摄焦距进行拍摄。
  19. 一种监控系统,其中,包括:电子设备及云台摄像机;其中:
    云台摄像机,被配置为采集监控场景的图像;
    电子设备,被配置为从获取的监控场景的图像中,确定待监控的目标区域;根据所述目标区域,确定目标拍摄姿态和目标拍摄焦距;以及,控制云台摄像机按照所述目标拍摄姿态和所述目标拍摄焦距进行拍摄。
PCT/CN2020/095112 2019-12-23 2020-06-09 监控方法、装置、系统、电子设备及存储介质 WO2021128747A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP20905705.8A EP4068763A4 (en) 2019-12-23 2020-06-09 MONITORING METHOD, APPARATUS AND SYSTEM, ELECTRONIC DEVICE AND STORAGE MEDIUM
US17/785,940 US11983898B2 (en) 2019-12-23 2020-06-09 Monitoring method, electronic device and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911340725.4 2019-12-23
CN201911340725.4A CN111355884B (zh) 2019-12-23 2019-12-23 监控方法、装置、系统、电子设备及存储介质

Publications (1)

Publication Number Publication Date
WO2021128747A1

Family

ID=71197010

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/095112 WO2021128747A1 (zh) 2019-12-23 2020-06-09 监控方法、装置、系统、电子设备及存储介质

Country Status (4)

Country Link
US (1) US11983898B2 (zh)
EP (1) EP4068763A4 (zh)
CN (1) CN111355884B (zh)
WO (1) WO2021128747A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115272483A (zh) * 2022-07-22 2022-11-01 北京城市网邻信息技术有限公司 一种图像生成方法、装置、电子设备及存储介质

Families Citing this family (8)

Publication number Priority date Publication date Assignee Title
CN112033372B (zh) * 2020-07-20 2022-11-04 河北汉光重工有限责任公司 无雷达引导的固定屏占比稳定自动跟踪方法
CN112040128A (zh) * 2020-09-03 2020-12-04 浙江大华技术股份有限公司 工作参数的确定方法及装置、存储介质、电子装置
CN113141518B (zh) * 2021-04-20 2022-09-06 北京安博盛赢教育科技有限责任公司 直播课堂中视频帧图像的控制方法、控制装置
CN113489948A (zh) * 2021-06-21 2021-10-08 浙江大华技术股份有限公司 一种摄像监控设备、方法、装置及存储介质
CN113591703B (zh) * 2021-07-30 2023-11-28 山东建筑大学 一种教室内人员定位方法及教室综合管理系统
CN113610027A (zh) * 2021-08-13 2021-11-05 青岛海信网络科技股份有限公司 监控方法、装置、电子设备及计算机可读存储介质
CN116567385A (zh) * 2023-06-14 2023-08-08 深圳市宗匠科技有限公司 图像采集方法及图像采集装置
CN117237879B (zh) * 2023-11-06 2024-04-26 浙江大学 一种轨迹追踪方法和系统

Citations (8)

Publication number Priority date Publication date Assignee Title
CN101106700A (zh) * 2007-08-01 2008-01-16 大连海事大学 视频监控系统中的智能化目标细节捕获装置及方法
CN101969548A (zh) * 2010-10-15 2011-02-09 中国人民解放军国防科学技术大学 基于双目摄像的主动视频获取方法及装置
US20120075467A1 (en) * 2010-09-29 2012-03-29 Hon Hai Precision Industry Co., Ltd. Image capture device and method for tracking moving object using the same
CN103716594A (zh) * 2014-01-08 2014-04-09 深圳英飞拓科技股份有限公司 基于运动目标检测的全景拼接联动方法及装置
CN104639908A (zh) * 2015-02-05 2015-05-20 华中科技大学 一种监控球机的控制方法
CN104822045A (zh) * 2015-04-15 2015-08-05 中国民用航空总局第二研究所 采用预置位实现观察画面分布式联动显示的方法及装置
CN107438154A (zh) * 2016-05-25 2017-12-05 中国民用航空总局第二研究所 一种基于全景视频的高低位联动监视方法及系统
CN108416285A (zh) * 2018-03-02 2018-08-17 深圳市佳信捷技术股份有限公司 枪球联动监控方法、装置及计算机可读存储介质

Family Cites Families (15)

Publication number Priority date Publication date Assignee Title
US7990422B2 (en) * 2004-07-19 2011-08-02 Grandeye, Ltd. Automatically expanding the zoom capability of a wide-angle video camera
JP4147427B2 (ja) * 2004-08-10 2008-09-10 フジノン株式会社 フォーカスコントロール装置
JP3902222B2 (ja) * 2005-06-07 2007-04-04 松下電器産業株式会社 監視システム、監視方法及びカメラ端末
US7636105B2 (en) * 2006-04-05 2009-12-22 Etreppid Technologies Llc Method and apparatus for providing motion control signals between a fixed camera and a PTZ camera
CN101198030B (zh) * 2007-12-18 2010-06-23 北京中星微电子有限公司 一种视频监控系统的摄像机定位方法及定位装置
CN101291428A (zh) * 2008-05-30 2008-10-22 上海天卫通信科技有限公司 自动视角配置的全景视频监控系统和方法
EP2437496A4 (en) 2009-05-29 2013-05-22 Youngkook Electronics Co Ltd INTELLIGENT SURVEILLANCE CAMERA APPARATUS AND IMAGE MONITORING SYSTEM USING THE SAME
US9215358B2 (en) 2009-06-29 2015-12-15 Robert Bosch Gmbh Omni-directional intelligent autotour and situational aware dome surveillance camera system and method
CN103986871B (zh) * 2014-05-23 2017-04-19 华中科技大学 一种智能变焦视频监控方法及系统
KR102575271B1 (ko) * 2016-10-17 2023-09-06 한화비전 주식회사 Pos 기기와 연동된 감시 카메라 및 이를 이용한 감시 방법
CN107016367B (zh) * 2017-04-06 2021-02-26 北京精英路通科技有限公司 一种跟踪控制方法及跟踪控制系统
EP3419283B1 (en) 2017-06-21 2022-02-16 Axis AB System and method for tracking moving objects in a scene
CN108632574A (zh) * 2018-05-02 2018-10-09 山东浪潮通软信息科技有限公司 一种监控图像信息展示方法、装置及系统
CN109120904B (zh) * 2018-10-19 2022-04-01 宁波星巡智能科技有限公司 双目摄像头监控方法、装置及计算机可读存储介质
CN110083180A (zh) * 2019-05-22 2019-08-02 深圳市道通智能航空技术有限公司 云台控制方法、装置、控制终端及飞行器系统

Patent Citations (8)

Publication number Priority date Publication date Assignee Title
CN101106700A (zh) * 2007-08-01 2008-01-16 大连海事大学 视频监控系统中的智能化目标细节捕获装置及方法
US20120075467A1 (en) * 2010-09-29 2012-03-29 Hon Hai Precision Industry Co., Ltd. Image capture device and method for tracking moving object using the same
CN101969548A (zh) * 2010-10-15 2011-02-09 中国人民解放军国防科学技术大学 基于双目摄像的主动视频获取方法及装置
CN103716594A (zh) * 2014-01-08 2014-04-09 深圳英飞拓科技股份有限公司 基于运动目标检测的全景拼接联动方法及装置
CN104639908A (zh) * 2015-02-05 2015-05-20 华中科技大学 一种监控球机的控制方法
CN104822045A (zh) * 2015-04-15 2015-08-05 中国民用航空总局第二研究所 采用预置位实现观察画面分布式联动显示的方法及装置
CN107438154A (zh) * 2016-05-25 2017-12-05 中国民用航空总局第二研究所 一种基于全景视频的高低位联动监视方法及系统
CN108416285A (zh) * 2018-03-02 2018-08-17 深圳市佳信捷技术股份有限公司 枪球联动监控方法、装置及计算机可读存储介质

Non-Patent Citations (1)

Title
See also references of EP4068763A4 *

Cited By (2)

Publication number Priority date Publication date Assignee Title
CN115272483A (zh) * 2022-07-22 2022-11-01 北京城市网邻信息技术有限公司 一种图像生成方法、装置、电子设备及存储介质
CN115272483B (zh) * 2022-07-22 2023-07-07 北京城市网邻信息技术有限公司 一种图像生成方法、装置、电子设备及存储介质

Also Published As

Publication number Publication date
EP4068763A4 (en) 2023-11-29
US20230021863A1 (en) 2023-01-26
CN111355884A (zh) 2020-06-30
US11983898B2 (en) 2024-05-14
EP4068763A1 (en) 2022-10-05
CN111355884B (zh) 2021-11-02

Similar Documents

Publication Publication Date Title
WO2021128747A1 (zh) 监控方法、装置、系统、电子设备及存储介质
US10339386B2 (en) Unusual event detection in wide-angle video (based on moving object trajectories)
US10893251B2 (en) Three-dimensional model generating device and three-dimensional model generating method
US9686461B2 (en) Image capturing device and automatic focusing method thereof
CN107507243A (zh) 一种摄像机参数调整方法、导播摄像机及系统
WO2017045326A1 (zh) 一种无人飞行器的摄像处理方法
WO2014034556A1 (ja) 画像処理装置及び画像表示装置
CN108198199B (zh) 运动物体跟踪方法、运动物体跟踪装置和电子设备
US20160217326A1 (en) Fall detection device, fall detection method, fall detection camera and computer program
CN112714287B (zh) 一种云台目标转换控制方法、装置、设备及存储介质
CN103327250A (zh) 基于模式识别镜头控制方法
CN101621619A (zh) 对多张脸部同时对焦的拍摄方法及其数字取像装置
CN113391644B (zh) 一种基于图像信息熵的无人机拍摄距离半自动寻优方法
CN110555377B (zh) 一种基于鱼眼相机俯视拍摄的行人检测与跟踪方法
US20200036895A1 (en) Image processing apparatus, control method thereof, and image capture apparatus
US9031355B2 (en) Method of system for image stabilization through image processing, and zoom camera including image stabilization function
CN112489077A (zh) 目标跟踪方法、装置及计算机系统
WO2021217403A1 (zh) 可移动平台的控制方法、装置、设备及存储介质
CN111325790A (zh) 目标追踪方法、设备及系统
JP6483661B2 (ja) 撮像制御装置、撮像制御方法およびプログラム
CN112702513B (zh) 一种双光云台协同控制方法、装置、设备及存储介质
WO2022040988A1 (zh) 图像处理方法、装置及可移动平台
KR102450466B1 (ko) 영상 내의 카메라 움직임 제거 시스템 및 방법
CN112378409B (zh) 动态环境下基于几何与运动约束的机器人rgb-d slam方法
CN111259825B (zh) 基于人脸识别的ptz扫描路径生成方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20905705

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2020905705

Country of ref document: EP

Effective date: 20220628

NENP Non-entry into the national phase

Ref country code: DE