WO2016074123A1 - Video generation method and apparatus of a video generation system - Google Patents

Video generation method and apparatus of a video generation system

Info

Publication number
WO2016074123A1
WO2016074123A1 (PCT/CN2014/090703)
Authority
WO
WIPO (PCT)
Prior art keywords
camera
image
target object
tracking camera
target
Prior art date
Application number
PCT/CN2014/090703
Other languages
English (en)
French (fr)
Inventor
瞿新
廖海
孙兴磊
袁洁
徐崇
钱震
Original Assignee
深圳锐取信息技术股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳锐取信息技术股份有限公司 filed Critical 深圳锐取信息技术股份有限公司
Priority to CN201480018777.9A priority Critical patent/CN105830426B/zh
Priority to US14/888,627 priority patent/US9838595B2/en
Priority to PCT/CN2014/090703 priority patent/WO2016074123A1/zh
Publication of WO2016074123A1 publication Critical patent/WO2016074123A1/zh

Links

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/61Control of cameras or camera modules based on recognised objects
    • H04N23/611Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/254Analysis of motion involving subtraction of images
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/45Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from two or more image sensors being of different type or operating in different modes, e.g. with a CMOS sensor for moving images in combination with a charge-coupled device [CCD] for still images
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/61Control of cameras or camera modules based on recognised objects
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/667Camera operation mode switching, e.g. between still and video, sport and normal or high- and low-resolution modes
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/69Control of means for changing angle of the field of view, e.g. optical zoom objectives or electronic zooming
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/90Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/268Signal distribution or switching
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person
    • G06T2207/30201Face
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions

Definitions

  • the invention belongs to the field of video shooting, and in particular relates to a video generation method and device for a video generation system.
  • the existing camera system usually controls the long-range and close-range switching of a single camera, and the switching between different cameras, manually. Therefore, at least one person is required to be on duty during the video generation process, and operating errors may be introduced during manual switching.
  • An object of the embodiments of the present invention is to provide a video generation method and device for a video generation system, which aims to solve the problem that the prior art cannot realize intelligent recording and switching between different cameras in the process of tracking a target and generating a video.
  • a first aspect of the present invention provides a video generation method of a video generation system, the system comprising a target positioning camera, a target tracking camera, and a panoramic camera, the method comprising:
  • controlling, according to the coordinate position, the pan/tilt of the target tracking camera to move, so that the tracking camera tracks the target, and acquiring an image captured by the tracking camera;
  • the tracking camera is switched to be a video generation source, and a video is generated according to the image captured by the tracking camera;
  • the panoramic camera is switched to be a video generation source, and a video is generated according to the image acquired by the panoramic camera.
  • a second aspect of the present invention provides a video generating apparatus for a video generating system, the apparatus comprising:
  • An acquiring unit configured to acquire an image captured by a positioning camera
  • a first determining unit configured to determine whether the target object is included in the collected image
  • a position determining unit configured to determine a coordinate position of the target object in a current shooting space when the target object is included in the captured image
  • a tracking unit configured to control, according to the coordinate position, the pan/tilt of the target tracking camera to move, so that the tracking camera tracks the target, and acquires an image captured by the tracking camera;
  • a second determining unit configured to determine whether the target object included in the image captured by the tracking camera is a person
  • a first switching unit configured to: when the target object included in the image captured by the tracking camera is a person and the person is located within the lens range of the tracking camera, switch the tracking camera to be the video generation source, and generate a video according to the image captured by the tracking camera;
  • a second switching unit configured to: when the target object included in the image captured by the tracking camera is a person and the person is located outside the lens range of the tracking camera, or when it is determined that the target object included in the image captured by the tracking camera is non-human, switch the panoramic camera to be the video generation source, and generate a video according to the image acquired by the panoramic camera.
  • in the embodiments of the present invention, the target object is determined from the image captured by the positioning camera, and the tracking camera further determines whether the target object is a person. When the target object included in the image captured by the tracking camera is determined to be a person and the person is within the lens range of the tracking camera, the tracking camera is automatically switched to be the video generation source and a video is generated according to the image captured by the tracking camera; when the target object included in the image captured by the tracking camera is a person but the person is located outside the lens range of the tracking camera, or when it is determined that the target object is non-human, the panoramic camera is automatically switched to be the video generation source and a video is generated according to the image captured by the panoramic camera. Different cameras are thus switched automatically during video recording, yielding a smoothly switched video, so that after installation and debugging no operation or manual intervention is required.
  • FIG. 1 is a schematic diagram of a video generation system in a classroom scenario of an embodiment of the present invention
  • FIG. 2 is a flowchart showing an implementation of a video generation method of a video generation system according to Embodiment 1 of the present invention
  • FIG. 3 is a flowchart showing an implementation of a video generation method of a video generation system according to Embodiment 2 of the present invention.
  • FIG. 4 is a structural diagram of a video generating apparatus of a video generating system according to Embodiment 3 of the present invention.
  • FIG. 5 is a structural diagram of a video generating apparatus of a video generating system according to Embodiment 4 of the present invention.
  • the application scenario of the embodiments of the present invention is a scenario in which a plurality of cameras are used to track a target and generate a video, for example a school teaching scene or a training site of a training institution.
  • FIG. 1 shows a schematic diagram of a video generation system in a classroom scene according to an embodiment of the present invention. The video generation system includes a positioning camera, a tracking camera, and a panoramic camera, wherein the positioning camera can be installed on the ceiling at the center of the classroom with its lens facing the lectern, and the tracking camera and the panoramic camera are respectively positioned at the back of the classroom.
  • before video data are generated, the following data are determined: the lens parameters of the positioning camera; the relationship between the lens pixels of the positioning camera and the viewing angle of the lens, that is, the viewing angle represented by the distance between pixels in the horizontal direction of the positioning camera; the height of the positioning camera and the distance between the positioning camera and the lectern; and the height of the tracking camera and the distance between the tracking camera and the lectern.
  • the video generation system combines target detection, triangulation, and lens switching to analyze the images captured by the positioning camera, the tracking camera, and the panoramic camera, thereby generating an intelligently tracked, smoothly switched video in an unattended situation.
  • Embodiment 1:
  • FIG. 2 is a flowchart showing an implementation of a video generation method of a video generation system according to Embodiment 1 of the present invention, where the system comprises a target positioning camera, a target tracking camera, and a panoramic camera.
  • the positioning camera can be used to find a suspected target
  • images are collected by the positioning camera and the tracking camera
  • the images are analyzed to determine whether the suspected target is a person, so as to eliminate interference generated by other objects, and image acquisition is automatically switched between different cameras to generate a video, so that the target is tracked precisely and switched accurately.
  • the method is detailed as follows:
  • by further analyzing the image acquired by the positioning camera, it may be determined whether the image contains a target object; in order to accurately confirm that the target object located by the positioning camera is a person, the tracking camera further tracks the target object to determine whether it is a person. The specific process is as follows:
  • S204 Control, according to the position of the target in the current space, the pan/tilt of the target tracking camera to move, so that the tracking camera tracks the target, and acquires an image captured by the tracking camera;
  • the tracking camera is switched to be a video generation source, and a video is generated according to the image captured by the tracking camera;
  • the panoramic camera is switched to be a video generation source, and a video is generated according to the image acquired by the panoramic camera.
  • in this embodiment of the present invention, the target object is determined from the image captured by the positioning camera, and the tracking camera further determines whether the target object is a person. When the target object included in the image captured by the tracking camera is determined to be a person and the person is within the lens range of the tracking camera, the tracking camera is automatically switched to be the video generation source and a video is generated according to the image captured by the tracking camera; when the target object included in the image captured by the tracking camera is a person but the person is located outside the lens range of the tracking camera, or when it is determined that the target object is non-human, the panoramic camera is automatically switched to be the video generation source and a video is generated according to the image captured by the panoramic camera. Different cameras are thus switched automatically during video recording, yielding a smoothly switched video, so that after installation and debugging no operation or manual intervention is required.
  • Embodiment 2:
  • FIG. 3 is a flowchart showing an implementation of a video generation method of a video generation system according to Embodiment 2 of the present invention.
  • the system comprises a target positioning camera, a target tracking camera, and a panoramic camera, and the method is as follows:
  • the background frame is an image frame that may contain foreground objects; preferably, the initial frame of the image captured by the positioning camera is processed and then used as the background frame, so that it is closer to a real background.
  • the image frame currently acquired by the positioning camera is compared with the background frame; when, compared with the background image, the number of changed pixel points in the currently acquired image exceeds a predetermined threshold, the current image contains a foreground object; when the number of changed pixel points in the currently acquired image does not exceed the predetermined threshold, it is determined that the current image is still a background image.
  • in practice the background is also constantly changing, so the background can be refreshed at a predetermined time interval.
  • the background refresh method directly affects the accuracy of target object detection; false refreshes are particularly likely in scenes with frequent target motion, for example a classroom scene in which the teacher frequently moves.
  • when the foreground object stays at a fixed position longer than the predetermined time, it is a background object newly added to the background, so it is refreshed into the background frame, thereby avoiding misidentifying a background object as the target object.
  • the relationship between the pixel point of the lens of the positioning camera and the viewing angle is searched according to the position of the pixel point where the target object is located in the positioning lens, and the viewing angle of the target object is determined.
  • the spatial parameter may include length, width, height, and the like of the current space.
  • the positioning camera may directly face the shooting plane, and the positioning camera's center point is the foot of the perpendicular from the camera to the shooting plane; the horizontal angle between the target and the center point is obtained by comparing the viewing angle of the camera lens with the pixel position.
  • the tracking camera faces the shooting plane, and the pan/tilt is controlled so that the tracking camera moves to the direction of that horizontal angle. There may be a certain fixed offset between the pan/tilt and the positioning angle, which can be manually corrected by an engineer during the first commissioning.
  • the image captured by the tracking camera includes the target object.
  • the target object can be further analyzed to determine whether the target object is a human.
  • face contour recognition and/or skin color detection may be used to determine whether the currently captured target object is a person, so as to identify whether what is currently being tracked is actually a person and to avoid false tracking and false triggering caused by other interfering targets; this may specifically be as follows:
  • the extracted lines are generally located at the junction of light and dark or the edge of the object.
  • the lines can be divided into different types according to angle, for example, all lines are divided into 0-degree, 45-degree, 90-degree, and 135-degree lines. First, all 0-degree lines are traversed; then, taking each as the origin, the other lines within a certain range around it are traversed, and it is determined whether the pattern composed of the traversed lines conforms to the preset face model. If it conforms, the current target object is determined to be a person.
  • the tracking camera is switched to be a video generation source, and a video is generated according to the image captured by the tracking camera.
  • the panoramic camera is switched to be a video generation source, and a video is generated according to the image captured by the panoramic camera.
  • images are acquired by the positioning camera and the tracking camera, and image analysis is performed to determine whether the suspected target is a person, so as to eliminate interference generated by other objects; image acquisition is automatically switched between different cameras to generate the video, thereby achieving accurate tracking and accurate switching of the target and automating the switching of the video generation process, so that after installation and debugging no operation or manual intervention is required.
  • Embodiment 3:
  • FIG. 4 is a structural diagram of a video generating apparatus of a video generating system according to Embodiment 3 of the present invention. For convenience of description, only parts related to the embodiment of the present invention are shown.
  • the video generation device of the video generation system includes an acquisition unit 41, a first determination unit 42, a position determination unit 43, a tracking unit 44, a second determination unit 45, a first switching unit 46, and a second switching unit 47.
  • An acquiring unit 41 configured to acquire an image captured by a positioning camera
  • a first determining unit 42 configured to determine whether the target object is included in the collected image
  • the position determining unit 43 is configured to determine, when the target object is included in the acquired image, a coordinate position where the target object is located in a current shooting space;
  • the tracking unit 44 is configured to control, according to the coordinate position, the pan/tilt of the target tracking camera to move, so that the tracking camera tracks the target, and acquires an image captured by the tracking camera;
  • a second determining unit 45 configured to determine whether the target object included in the image captured by the tracking camera is a person
  • a first switching unit 46, configured to: when the target object included in the image captured by the tracking camera is a person and the person is located within the lens range of the tracking camera, switch the tracking camera to be the video generation source and generate a video according to the image captured by the tracking camera;
  • a second switching unit 47, configured to: when the target object included in the image captured by the tracking camera is a person and the person is located outside the lens range of the tracking camera, or when it is determined that the target object included in the image captured by the tracking camera is non-human, switch the panoramic camera to be the video generation source and generate a video according to the image captured by the panoramic camera.
  • the video generating device of the video generating system provided by this embodiment of the present invention corresponds to the foregoing method Embodiment 2; for details, refer to the description of Embodiment 2 above, which is not repeated here.
  • Embodiment 4:
  • FIG. 5 is a structural diagram of a video generating apparatus of a video generating system according to Embodiment 4 of the present invention. For convenience of description, only parts related to the embodiment of the present invention are shown.
  • the video generating device of the video generating system includes: an obtaining unit 51, a first determining unit 52, a position determining unit 53, a tracking unit 54, a second determining unit 55, a first switching unit 56, a second switching unit 57, and a refreshing unit 58.
  • the first determining unit 52 includes:
  • the first judging module 521 is configured to compare the image frame currently acquired by the positioning camera with the background frame, and determine that the current image includes the foreground when the number of changed pixel points in the currently acquired image exceeds a predetermined threshold.
  • the second determining module 522 is configured to determine that the foreground object is background when a pixel point of the foreground object stays at a fixed position for longer than a predetermined time, and to determine that the foreground object is the target object when the number of connected pixels of the foreground object is greater than a predetermined threshold.
  • the device also includes:
  • the refreshing unit 58 is configured to refresh the background into the background frame when the foreground is determined to be the background.
  • the location determining unit 53 includes:
  • the angle determining module 531 is configured to determine, according to a relationship between a pixel point of the lens of the positioning camera and a viewing angle, a viewing angle at which the target object is located;
  • the coordinate determining module 532 is configured to determine a coordinate position of the target in the current space according to the viewing angle, the current spatial parameter, and the trigonometric function relationship of the target object.
  • the second determining unit 55 includes:
  • An extraction module 551, configured to extract a line of the target object in the captured image of the tracking camera
  • a classification module 552 configured to divide the lines into different types
  • a combination module 553 configured to sequentially traverse different types of lines in order, and combine different types of lines after traversal;
  • the comparison module 554 is configured to determine that the currently captured target object is a person when it is determined that the combined pattern conforms to the preset face model.
  • the video generating device of the video generating system provided by this embodiment of the present invention corresponds to the foregoing method Embodiment 2; for details, refer to the description of Embodiment 2 above, which is not repeated here.
  • the units included are divided only according to functional logic, but the division is not limited to the above, as long as the corresponding functions can be implemented; the specific names of the functional units are also only for convenience of distinguishing them from each other and are not intended to limit the scope of the present invention.

Abstract

The present invention belongs to the field of video shooting and provides a video generation method and apparatus of a video generation system. The system comprises a target positioning camera, a target tracking camera and a panoramic camera. The method comprises: when it is determined that the target object contained in an image captured by the tracking camera is a person and the person is within the lens range of the tracking camera, switching the tracking camera to be the video generation source and generating a video from the images captured by the tracking camera; when it is determined that the target object contained in the image captured by the tracking camera is a person but the person is outside the lens range of the tracking camera, or that the target object contained in the image captured by the tracking camera is not a person, switching the panoramic camera to be the video generation source and generating a video from the images captured by the panoramic camera. Image acquisition is switched automatically between different cameras to generate the video, so the target is tracked precisely and the switching is accurate; switching during video generation is automated, and after installation and commissioning no operation or manual intervention is required.

Description

Video generation method and apparatus of a video generation system
Technical Field
The present invention belongs to the field of video shooting, and in particular relates to a video generation method and apparatus of a video generation system.
Background
With the development of technology and the progress of society, it has become very common for camera systems to track a target and generate video. Existing camera systems usually rely on manual control for switching a single camera between long shots and close-ups and for switching between different cameras, so at least one operator must be on duty during video generation, and manual switching may introduce operating errors.
In summary, in the prior art, intelligent recorded-broadcast switching between different cameras cannot be achieved while a camera tracks a target and generates video.
Technical Problem
An object of the embodiments of the present invention is to provide a video generation method and apparatus of a video generation system, which aims to solve the problem that the prior art cannot achieve intelligent recorded-broadcast switching between different cameras while a camera tracks a target and generates video.
Technical Solution
A first aspect of the present invention provides a video generation method of a video generation system, the system comprising a target positioning camera, a target tracking camera and a panoramic camera, and the method comprising:
acquiring an image captured by the positioning camera;
determining whether the captured image contains a target object;
when the captured image contains a target object, determining the coordinate position of the target object in the current shooting space;
controlling, according to the coordinate position, the pan/tilt of the target tracking camera to move, so that the tracking camera tracks the target, and simultaneously acquiring an image captured by the tracking camera;
determining whether the target object contained in the image captured by the tracking camera is a person;
when it is determined that the target object contained in the image captured by the tracking camera is a person and the person is within the lens range of the tracking camera, switching the tracking camera to be the video generation source and generating a video from the images captured by the tracking camera;
when it is determined that the target object contained in the image captured by the tracking camera is a person but the person is outside the lens range of the tracking camera, or that the target object contained in the image captured by the tracking camera is not a person, switching the panoramic camera to be the video generation source and generating a video from the images captured by the panoramic camera.
A second aspect of the present invention provides a video generation apparatus of a video generation system, the apparatus comprising:
an acquiring unit, configured to acquire an image captured by the positioning camera;
a first determining unit, configured to determine whether the captured image contains a target object;
a position determining unit, configured to determine, when the captured image contains a target object, the coordinate position of the target object in the current shooting space;
a tracking unit, configured to control, according to the coordinate position, the pan/tilt of the target tracking camera to move, so that the tracking camera tracks the target, and simultaneously acquire an image captured by the tracking camera;
a second determining unit, configured to determine whether the target object contained in the image captured by the tracking camera is a person;
a first switching unit, configured to switch the tracking camera to be the video generation source and generate a video from the images captured by the tracking camera when the target object contained in the image captured by the tracking camera is a person and the person is within the lens range of the tracking camera;
a second switching unit, configured to switch the panoramic camera to be the video generation source and generate a video from the images captured by the panoramic camera when the target object contained in the image captured by the tracking camera is a person but the person is outside the lens range of the tracking camera, or when it is determined that the target object contained in the image captured by the tracking camera is not a person.
Beneficial Effects
In the embodiments of the present invention, the target object is determined from the images captured by the positioning camera, and the tracking camera is used to further determine whether the target object is a person. When the target object contained in the image captured by the tracking camera is determined to be a person and the person is within the lens range of the tracking camera, the tracking camera is automatically switched to be the video generation source and a video is generated from its images; when the target object contained in the image captured by the tracking camera is a person but the person is outside the lens range of the tracking camera, or the target object is determined not to be a person, the panoramic camera is automatically switched to be the video generation source and a video is generated from its images. Different cameras are thus switched automatically during recording, yielding a smoothly switched video, so that after installation and commissioning no operation or manual intervention is required.
Brief Description of the Drawings
To describe the technical solutions of the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention, and persons of ordinary skill in the art may derive other drawings from them without creative effort.
FIG. 1 is a schematic diagram of a video generation system in a classroom scene according to an embodiment of the present invention;
FIG. 2 is a flowchart of a video generation method of a video generation system according to Embodiment 1 of the present invention;
FIG. 3 is a flowchart of a video generation method of a video generation system according to Embodiment 2 of the present invention;
FIG. 4 is a structural diagram of a video generation apparatus of a video generation system according to Embodiment 3 of the present invention;
FIG. 5 is a structural diagram of a video generation apparatus of a video generation system according to Embodiment 4 of the present invention.
Embodiments of the Invention
To make the objects, technical solutions and advantages of the present invention clearer, the present invention is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only intended to explain the present invention and are not intended to limit it.
The application scenario of the embodiments of the present invention is one in which multiple cameras are used to track a target and generate a video, for example a school teaching scene or a training site of a training institution. FIG. 1 is a schematic diagram of a video generation system in a classroom scene according to an embodiment of the present invention. The video generation system comprises a positioning camera, a tracking camera and a panoramic camera. The positioning camera may be mounted on the ceiling at the center of the classroom with its lens facing the lectern, while the tracking camera and the panoramic camera are respectively positioned at the back of the classroom. In addition, the following data are determined before video data are generated: the lens parameters of the positioning camera; the relationship between the lens pixels of the positioning camera and the viewing angle of the lens, that is, the viewing angle represented by the distance between pixels in the horizontal direction of the positioning camera; the height of the positioning camera and the distance between the positioning camera and the lectern; and the height of the tracking camera and the distance between the tracking camera and the lectern. By combining target detection, triangulation and lens switching, the video generation system analyzes the images captured by the positioning camera, the tracking camera and the panoramic camera, and thus generates an intelligently tracked, smoothly switched video without an attendant.
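The calibration quantities listed above lend themselves to a small configuration record. The following is a minimal sketch in Python; the field names are illustrative assumptions, not values or identifiers taken from the patent.

```python
from dataclasses import dataclass

@dataclass
class CalibrationData:
    """Quantities fixed during installation and commissioning (illustrative names)."""
    deg_per_pixel: float            # horizontal viewing angle represented by one pixel step
    positioning_height_m: float     # mounting height of the positioning camera
    positioning_to_podium_m: float  # distance from the positioning camera to the lectern
    tracking_height_m: float        # mounting height of the tracking camera
    tracking_to_podium_m: float     # distance from the tracking camera to the lectern
    pan_offset_deg: float = 0.0     # fixed pan/tilt offset, corrected manually at first commissioning
```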
The implementation of the present invention is described in detail below with reference to specific embodiments:
Embodiment 1:
FIG. 2 is a flowchart of a video generation method of a video generation system according to Embodiment 1 of the present invention. The system comprises a target positioning camera, a target tracking camera and a panoramic camera. The positioning camera may be used to find a suspected target; images are captured by the positioning camera and the tracking camera and analyzed to determine whether the suspected target is a person, so as to eliminate interference from other objects; and image acquisition is switched automatically between different cameras to generate the video, so that the target is tracked precisely and switched accurately. The method is detailed as follows:
S201: acquire an image captured by the positioning camera;
S202: determine whether the captured image contains a target object;
By further analyzing the image captured by the positioning camera, it can be determined whether the image contains a target object. To confirm reliably that the target object located by the positioning camera is a person, the tracking camera further tracks the target object to determine whether it is a person. The specific process is as follows:
S203: when the image currently captured by the positioning camera contains a target object, determine the position of the target object in the current shooting space;
S204: according to the position of the target in the current space, control the pan/tilt of the target tracking camera to move, so that the tracking camera tracks the target, and simultaneously acquire an image captured by the tracking camera;
S205: determine whether the target object contained in the image captured by the tracking camera is a person;
S206: when it is determined that the target object contained in the image captured by the tracking camera is a person and the person is within the lens range of the tracking camera, switch the tracking camera to be the video generation source and generate a video from the images captured by the tracking camera;
S207: when it is determined that the target object contained in the image captured by the tracking camera is a person but the person is outside the lens range of the tracking camera, or that the target object contained in the image captured by the tracking camera is not a person, switch the panoramic camera to be the video generation source and generate a video from the images captured by the panoramic camera.
In this embodiment of the present invention, the target object is determined from the images captured by the positioning camera, and the tracking camera is used to further determine whether the target object is a person. When the target object contained in the image captured by the tracking camera is determined to be a person and the person is within the lens range of the tracking camera, the tracking camera is automatically switched to be the video generation source and a video is generated from its images; when the target object contained in the image captured by the tracking camera is a person but the person is outside the lens range of the tracking camera, or the target object is determined not to be a person, the panoramic camera is automatically switched to be the video generation source and a video is generated from its images. Different cameras are thus switched automatically during recording, yielding a smoothly switched video, so that after installation and commissioning no operation or manual intervention is required.
Embodiment 2:
FIG. 3 is a flowchart of a video generation method of a video generation system according to Embodiment 2 of the present invention. The system comprises a target positioning camera, a target tracking camera and a panoramic camera. The method is detailed as follows:
S301: acquire an image captured by the positioning camera;
S302: compare the image frame currently acquired by the positioning camera with a background frame, and when the number of changed pixels in the currently acquired image exceeds a predetermined threshold, determine that the current image contains a foreground object, where the background frame is the initial frame of the images captured by the positioning camera;
The background frame is an image frame that may contain foreground objects; here, preferably, the initial frame of the images captured by the positioning camera can be processed and then used as the initial frame, so that the initial frame is closer to the real background.
In this embodiment, the image frame currently acquired by the positioning camera is compared with the background frame. When, compared with the background image, the number of changed pixels in the currently acquired image exceeds the predetermined threshold, the current image contains a foreground object; when the number of changed pixels in the currently acquired image does not exceed the predetermined threshold, the current image is determined to still be a background image.
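The comparison in S302 amounts to per-pixel background subtraction followed by a count threshold. A minimal sketch of this step, assuming grayscale frames stored as NumPy arrays; the two threshold values are illustrative, not values from this embodiment.

```python
from typing import Optional
import numpy as np

def detect_foreground(current: np.ndarray,
                      background: np.ndarray,
                      pixel_diff_thresh: int = 30,
                      changed_count_thresh: int = 500) -> Optional[np.ndarray]:
    """Compare the current frame with the background frame (S302).

    Returns a boolean mask of changed pixels when their count exceeds the
    predetermined threshold (the frame contains a foreground object);
    returns None when the frame is still considered a background image.
    """
    diff = np.abs(current.astype(np.int16) - background.astype(np.int16))
    changed = diff > pixel_diff_thresh          # per-pixel "changed" decision
    if np.count_nonzero(changed) > changed_count_thresh:
        return changed                           # foreground present
    return None                                  # still background
```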
S303: when the pixels of the foreground object stay at a fixed position for longer than a predetermined time, determine that the foreground object is background and go to S304;
S305: when the number of connected pixels of the foreground object is greater than a predetermined threshold, determine that the foreground object is the target object and go to S306.
S304: refresh the background object into the background frame;
In practice, the background also changes constantly, so the background may be refreshed at predetermined time intervals. Moreover, the way the background is refreshed directly affects the accuracy of target-object detection; false refreshes are particularly likely in scenes where the target moves frequently, for example a classroom scene in which the teacher moves about frequently. In this embodiment, when the number of connected pixels of the foreground object is greater than the predetermined threshold, the foreground object is large enough that it may be a person, so it is determined to be the target object; when the foreground object stays at a fixed position for longer than the predetermined time, it is a background object newly added to the background, so it is refreshed into the background frame, which avoids misidentifying a background object as the target object.
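S303 to S305 can be read as two checks on the foreground mask: a per-pixel dwell counter that decides when a stationary object should be folded back into the background, and a connected-region size test that promotes a sufficiently large region to target object. The sketch below assumes OpenCV is available; `dwell_limit` and `size_thresh` are illustrative, not values from this embodiment.

```python
import cv2
import numpy as np

def classify_foreground(mask: np.ndarray,
                        dwell: np.ndarray,
                        dwell_limit: int = 150,
                        size_thresh: int = 800):
    """Apply the S303/S305 rules to a boolean foreground mask.

    `dwell` counts, per pixel, how many consecutive frames the pixel has been
    foreground. Pixels that stay foreground at a fixed position beyond
    `dwell_limit` are treated as newly added background and should be
    refreshed into the background frame (S304); a connected foreground
    region larger than `size_thresh` pixels is taken as the target object.
    """
    dwell = np.where(mask, dwell + 1, 0)
    refresh_mask = dwell >= dwell_limit   # fold these pixels into the background frame

    num, labels, stats, _ = cv2.connectedComponentsWithStats(mask.astype(np.uint8))
    target_labels = [i for i in range(1, num)
                     if stats[i, cv2.CC_STAT_AREA] > size_thresh]
    return refresh_mask, target_labels, dwell
```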
S306: determine, according to the relationship between the pixels of the lens of the positioning camera and the viewing angle, the viewing angle at which the target object is located;
Specifically, according to the position of the pixel at which the target object is located within the positioning lens, the relationship between the pixels of the lens of the positioning camera and the viewing angle is looked up, and the viewing angle at which the target object is located is determined.
S307: determine the coordinate position of the target in the current space according to the viewing angle at which the target object is located, the parameters of the current space, and trigonometric relations.
In this embodiment, the space parameters may include the length, width and height of the current space.
During position determination, the positioning camera may directly face the plane to be shot, and the center point of the positioning camera is the point where the perpendicular from the camera meets the shooting plane; the horizontal angle between the target and the center point is obtained by comparing the viewing angle of the positioning camera's lens with the pixel position. The tracking camera faces the shooting plane, and the pan/tilt is controlled so that the tracking camera moves to the direction of that horizontal angle. There may be a fixed offset between the pan/tilt and the positioning angle, which can be corrected manually by an engineer during the first commissioning.
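Under the geometry described above (the positioning camera facing the shooting plane, its center point being the foot of the perpendicular onto that plane), the pixel-to-angle relationship and the pan command reduce to a few trigonometric lines. A minimal sketch with assumed parameter names; the fixed pan offset stands in for the manual correction made at first commissioning.

```python
import math

def horizontal_angle_deg(pixel_x: float, center_x: float, deg_per_pixel: float) -> float:
    """Horizontal angle between the target and the positioning camera's center
    point, from the calibrated pixel-to-viewing-angle relationship (S306)."""
    return (pixel_x - center_x) * deg_per_pixel

def target_offset_on_plane(angle_deg: float, camera_to_plane_m: float) -> float:
    """Horizontal offset of the target on the shooting plane, by trigonometry (S307)."""
    return camera_to_plane_m * math.tan(math.radians(angle_deg))

def tracking_pan_deg(offset_m: float, tracking_to_plane_m: float,
                     pan_offset_deg: float = 0.0) -> float:
    """Pan angle commanded to the tracking camera's pan/tilt so that it points
    at the target; `pan_offset_deg` is the fixed mechanical offset."""
    return math.degrees(math.atan2(offset_m, tracking_to_plane_m)) + pan_offset_deg
```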
S308: according to the coordinate position of the target in the current space, control the pan/tilt of the target tracking camera to move, so that the tracking camera tracks the target, and simultaneously acquire an image captured by the tracking camera;
In this embodiment, the image captured by the tracking camera contains the target object. By tracking the target object, it can be analyzed further to determine whether it is a person.
S309: determine whether the target object contained in the image captured by the tracking camera is a person;
In this embodiment, face-contour recognition and/or skin-color detection may be used to determine whether the currently captured target object is a person. Face-contour and skin-color detection are used to identify whether what is currently being tracked is actually a person, so as to avoid false tracking and false triggering caused by other interfering targets. Specifically, this may include:
a. extracting the lines of the target object in the image captured by the tracking camera;
The extracted lines generally lie at light/dark boundaries or at the edges of objects.
b. dividing the lines into different types;
c. traversing the different types of lines in order, and combining the traversed lines of different types;
d. when the combined pattern is determined to conform to the preset face model, determining that the currently captured target object is a person.
In this embodiment, the lines may be divided into different types by angle, for example all lines may be divided into 0-degree, 45-degree, 90-degree and 135-degree lines. First, all 0-degree lines are traversed; then, taking each of them as the origin, the other lines within a certain range around it are traversed, and it is determined whether the pattern formed by the traversed lines conforms to the preset face model. If it does, the current target object is determined to be a person.
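The embodiment does not fix a particular line extractor or face model, so the sketch below only illustrates the shape of steps a to d: edges are extracted (here with Canny plus a probabilistic Hough transform, one possible choice and an assumption), bucketed into the 0/45/90/135-degree types, and the neighborhood of each 0-degree line is tested against a face model supplied by the caller; `face_model.matches` is a placeholder for whatever preset model the system is configured with.

```python
import math
import cv2
import numpy as np

ANGLE_TYPES = (0, 45, 90, 135)   # line types named in this embodiment

def lines_by_angle(gray_roi: np.ndarray):
    """Steps a-b: extract lines of the target object and divide them by angle."""
    edges = cv2.Canny(gray_roi, 50, 150)
    segs = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=30,
                           minLineLength=10, maxLineGap=3)
    buckets = {a: [] for a in ANGLE_TYPES}
    if segs is None:
        return buckets
    for x1, y1, x2, y2 in segs[:, 0]:
        ang = math.degrees(math.atan2(y2 - y1, x2 - x1)) % 180
        nearest = min(ANGLE_TYPES, key=lambda a: min(abs(ang - a), 180 - abs(ang - a)))
        buckets[nearest].append((x1, y1, x2, y2))
    return buckets

def is_person(buckets, face_model) -> bool:
    """Steps c-d: traverse the 0-degree lines, gather the other line types
    around each one, and test the combination against the preset face model."""
    others = [s for a in ANGLE_TYPES if a != 0 for s in buckets[a]]
    return any(face_model.matches(origin, others) for origin in buckets[0])
```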
S310: when it is determined that the target object contained in the image captured by the tracking camera is a person and the person is within the lens range of the tracking camera, switch the tracking camera to be the video generation source and generate a video from the images captured by the tracking camera;
S311: when it is determined that the target object contained in the image captured by the tracking camera is a person but the person is outside the lens range of the tracking camera, or that the target object contained in the image captured by the tracking camera is not a person, switch the panoramic camera to be the video generation source and generate a video from the images captured by the panoramic camera.
In this embodiment, images are captured by the positioning camera and the tracking camera and analyzed to determine whether the suspected target is a person, so as to eliminate interference from other objects, and image acquisition is switched automatically between different cameras to generate the video, so that the target is tracked precisely and switched accurately; switching during video generation is automated, and after installation and commissioning no operation or manual intervention is required.
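The switching rule of S310/S311 itself is a two-input decision; a minimal sketch (names are illustrative):

```python
from enum import Enum

class VideoSource(Enum):
    TRACKING = "tracking camera"
    PANORAMIC = "panoramic camera"

def select_source(target_is_person: bool, person_in_lens_range: bool) -> VideoSource:
    """Use the tracking camera only when the tracked target is a person and
    that person is within the tracking camera's lens range (S310); in every
    other case fall back to the panoramic camera (S311)."""
    if target_is_person and person_in_lens_range:
        return VideoSource.TRACKING
    return VideoSource.PANORAMIC
```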
Embodiment 3:
FIG. 4 is a structural diagram of a video generation apparatus of a video generation system according to Embodiment 3 of the present invention. For ease of description, only the parts related to this embodiment of the present invention are shown.
The video generation apparatus of the video generation system comprises: an acquiring unit 41, a first determining unit 42, a position determining unit 43, a tracking unit 44, a second determining unit 45, a first switching unit 46 and a second switching unit 47.
The acquiring unit 41 is configured to acquire an image captured by the positioning camera;
the first determining unit 42 is configured to determine whether the captured image contains a target object;
the position determining unit 43 is configured to determine, when the captured image contains a target object, the coordinate position of the target object in the current shooting space;
the tracking unit 44 is configured to control, according to the coordinate position, the pan/tilt of the target tracking camera to move, so that the tracking camera tracks the target, and simultaneously acquire an image captured by the tracking camera;
the second determining unit 45 is configured to determine whether the target object contained in the image captured by the tracking camera is a person;
the first switching unit 46 is configured to switch the tracking camera to be the video generation source and generate a video from the images captured by the tracking camera when the target object contained in the image captured by the tracking camera is a person and the person is within the lens range of the tracking camera;
the second switching unit 47 is configured to switch the panoramic camera to be the video generation source and generate a video from the images captured by the panoramic camera when the target object contained in the image captured by the tracking camera is a person but the person is outside the lens range of the tracking camera, or when it is determined that the target object contained in the image captured by the tracking camera is not a person.
This embodiment of the present invention achieves automatic switching between different video sources and can obtain a smoothly switched video, so that after installation and commissioning no operation or manual intervention is required.
The video generation apparatus of the video generation system provided by this embodiment of the present invention corresponds to the foregoing method Embodiment 2; for details, see the description of Embodiment 2 above, which is not repeated here.
Embodiment 4:
FIG. 5 is a structural diagram of a video generation apparatus of a video generation system according to Embodiment 4 of the present invention. For ease of description, only the parts related to this embodiment of the present invention are shown.
The video generation apparatus of the video generation system comprises: an acquiring unit 51, a first determining unit 52, a position determining unit 53, a tracking unit 54, a second determining unit 55, a first switching unit 56, a second switching unit 57 and a refreshing unit 58.
The first determining unit 52 comprises:
a first determining module 521, configured to compare the image frame currently acquired by the positioning camera with the background frame and, when the number of changed pixels in the currently acquired image exceeds a predetermined threshold, determine that the current image contains a foreground object, where the background frame is the initial frame of the images captured by the positioning camera;
a second determining module 522, configured to determine that the foreground object is background when the pixels of the foreground object stay at a fixed position for longer than a predetermined time, and to determine that the foreground object is the target object when the number of connected pixels of the foreground object is greater than a predetermined threshold.
The apparatus further comprises:
a refreshing unit 58, configured to refresh the background into the background frame when the foreground is determined to be background.
The position determining unit 53 comprises:
an angle determining module 531, configured to determine, according to the relationship between the pixels of the lens of the positioning camera and the viewing angle, the viewing angle at which the target object is located;
a coordinate determining module 532, configured to determine the coordinate position of the target in the current space according to the viewing angle at which the target object is located, the parameters of the current space, and trigonometric relations.
The second determining unit 55 comprises:
an extraction module 551, configured to extract the lines of the target object in the image captured by the tracking camera;
a classification module 552, configured to divide the lines into different types;
a combination module 553, configured to traverse the different types of lines in order and combine the traversed lines of different types;
a comparison module 554, configured to determine that the currently captured target object is a person when the combined pattern is determined to conform to the preset face model.
The video generation apparatus of the video generation system provided by this embodiment of the present invention corresponds to the foregoing method Embodiment 2; for details, see the description of Embodiment 2 above, which is not repeated here.
It should be noted that, in the above apparatus and system embodiments, the units included are divided only according to functional logic, but the division is not limited to the above as long as the corresponding functions can be achieved; in addition, the specific names of the functional units are only for ease of mutual distinction and are not intended to limit the protection scope of the present invention.
In addition, persons of ordinary skill in the art may understand that all or part of the steps in the methods of the above embodiments may be implemented by a program instructing relevant hardware, and the corresponding program may be stored in a computer-readable storage medium such as a ROM/RAM, a magnetic disk or an optical disc.
The above are only preferred embodiments of the present invention and are not intended to limit the present invention. Any modifications, equivalent replacements and improvements made within the spirit and principles of the present invention shall be included within the protection scope of the present invention.

Claims (10)

  1. A video generation method of a video generation system, wherein the system comprises a target positioning camera, a target tracking camera and a panoramic camera, and the method comprises:
    acquiring an image captured by the positioning camera;
    determining whether the captured image contains a target object;
    when the captured image contains a target object, determining the coordinate position of the target object in the current shooting space;
    controlling, according to the coordinate position, the pan/tilt of the target tracking camera to move, so that the tracking camera tracks the target, and simultaneously acquiring an image captured by the tracking camera;
    determining whether the target object contained in the image captured by the tracking camera is a person;
    when it is determined that the target object contained in the image captured by the tracking camera is a person and the person is within the lens range of the tracking camera, switching the tracking camera to be the video generation source and generating a video from the images captured by the tracking camera;
    when it is determined that the target object contained in the image captured by the tracking camera is a person but the person is outside the lens range of the tracking camera, or that the target object contained in the image captured by the tracking camera is not a person, switching the panoramic camera to be the video generation source and generating a video from the images captured by the panoramic camera.
  2. The method according to claim 1, wherein determining whether the captured image contains a target object comprises:
    comparing the image frame currently acquired by the positioning camera with a background frame and, when the number of changed pixels in the currently acquired image exceeds a predetermined threshold, determining that the current image contains a foreground object, where the background frame is the initial frame of the images captured by the positioning camera;
    when the pixels of the foreground object stay at a fixed position for longer than a predetermined time, determining that the foreground object is background; when the number of connected pixels of the foreground object is greater than a predetermined threshold, determining that the foreground object is the target object.
  3. The method according to claim 2, wherein the method further comprises:
    when the foreground is determined to be background, refreshing the background into the background frame.
  4. The method according to claim 1, wherein determining the coordinate position of the target object in the current shooting space comprises:
    determining, according to the relationship between the pixels of the lens of the positioning camera and the viewing angle, the viewing angle at which the target object is located;
    determining the coordinate position of the target in the current space according to the viewing angle at which the target object is located, the parameters of the current space, and trigonometric relations.
  5. The method according to claim 1, wherein determining whether the target object contained in the image captured by the tracking camera is a person comprises:
    extracting the lines of the target object in the image captured by the tracking camera;
    dividing the lines into different types;
    traversing the different types of lines in order, and combining the traversed lines of different types;
    when the combined pattern is determined to conform to a preset face model, determining that the currently captured target object is a person.
  6. A video generation apparatus of a video generation system, wherein the apparatus comprises:
    an acquiring unit, configured to acquire an image captured by the positioning camera;
    a first determining unit, configured to determine whether the captured image contains a target object;
    a position determining unit, configured to determine, when the captured image contains a target object, the coordinate position of the target object in the current shooting space;
    a tracking unit, configured to control, according to the coordinate position, the pan/tilt of the target tracking camera to move, so that the tracking camera tracks the target, and simultaneously acquire an image captured by the tracking camera;
    a second determining unit, configured to determine whether the target object contained in the image captured by the tracking camera is a person;
    a first switching unit, configured to switch the tracking camera to be the video generation source and generate a video from the images captured by the tracking camera when the target object contained in the image captured by the tracking camera is a person and the person is within the lens range of the tracking camera;
    a second switching unit, configured to switch the panoramic camera to be the video generation source and generate a video from the images captured by the panoramic camera when the target object contained in the image captured by the tracking camera is a person but the person is outside the lens range of the tracking camera, or when it is determined that the target object contained in the image captured by the tracking camera is not a person.
  7. The apparatus according to claim 6, wherein the first determining unit comprises:
    a first determining module, configured to compare the image frame currently acquired by the positioning camera with the background frame and, when the number of changed pixels in the currently acquired image exceeds a predetermined threshold, determine that the current image contains a foreground object, where the background frame is the initial frame of the images captured by the positioning camera;
    a second determining module, configured to determine that the foreground object is background when the pixels of the foreground object stay at a fixed position for longer than a predetermined time, and to determine that the foreground object is the target object when the number of connected pixels of the foreground object is greater than a predetermined threshold.
  8. The apparatus according to claim 7, wherein the apparatus further comprises:
    a refreshing unit, configured to refresh the background into the background frame when the foreground is determined to be background.
  9. The apparatus according to claim 6, wherein the position determining unit comprises:
    an angle determining module, configured to determine, according to the relationship between the pixels of the lens of the positioning camera and the viewing angle, the viewing angle at which the target object is located;
    a coordinate determining module, configured to determine the coordinate position of the target in the current space according to the viewing angle at which the target object is located, the parameters of the current space, and trigonometric relations.
  10. The apparatus according to claim 6, wherein the second determining unit comprises:
    an extraction module, configured to extract the lines of the target object in the image captured by the tracking camera;
    a classification module, configured to divide the lines into different types;
    a combination module, configured to traverse the different types of lines in order and combine the traversed lines of different types;
    a comparison module, configured to determine that the currently captured target object is a person when the combined pattern is determined to conform to the preset face model.
PCT/CN2014/090703 2014-11-10 2014-11-10 Video generation method and apparatus of a video generation system WO2016074123A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN201480018777.9A CN105830426B (zh) 2014-11-10 2014-11-10 一种视频生成系统的视频生成方法及装置
US14/888,627 US9838595B2 (en) 2014-11-10 2014-11-10 Video generating method and apparatus of video generating system
PCT/CN2014/090703 WO2016074123A1 (zh) 2014-11-10 2014-11-10 一种视频生成系统的视频生成方法及装置

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2014/090703 WO2016074123A1 (zh) 2014-11-10 2014-11-10 一种视频生成系统的视频生成方法及装置

Publications (1)

Publication Number Publication Date
WO2016074123A1 true WO2016074123A1 (zh) 2016-05-19

Family

ID=55953532

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2014/090703 WO2016074123A1 (zh) 2014-11-10 2014-11-10 Video generation method and apparatus of a video generation system

Country Status (3)

Country Link
US (1) US9838595B2 (zh)
CN (1) CN105830426B (zh)
WO (1) WO2016074123A1 (zh)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106131491A (zh) * 2016-07-19 2016-11-16 科盾科技股份有限公司 一种用于抓捕目标的装置
CN106603912A (zh) * 2016-12-05 2017-04-26 科大讯飞股份有限公司 一种视频直播控制方法及装置
CN110876036A (zh) * 2018-08-31 2020-03-10 腾讯数码(天津)有限公司 一种视频生成的方法以及相关装置

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107666590B (zh) * 2016-07-29 2020-01-17 华为终端有限公司 一种目标监控方法、摄像头、控制器和目标监控系统
CN112449113B (zh) * 2017-05-09 2022-04-15 浙江凡后科技有限公司 一种物体位置捕捉系统及物体运动轨迹捕捉方法
CN109215055A (zh) * 2017-06-30 2019-01-15 杭州海康威视数字技术股份有限公司 一种目标特征提取方法、装置及应用系统
CN108090147A (zh) * 2017-12-08 2018-05-29 四川金英科技有限责任公司 一种视频目标智能追踪方法
TWI714318B (zh) * 2019-10-25 2020-12-21 緯創資通股份有限公司 人臉辨識方法及裝置
US11553162B2 (en) * 2019-12-23 2023-01-10 Evolon Technology, Inc. Image processing system for extending a range for image analytics
CN113269011B (zh) * 2020-02-17 2022-10-14 浙江宇视科技有限公司 车辆检测方法、装置、设备及存储介质
CN111402304A (zh) * 2020-03-23 2020-07-10 浙江大华技术股份有限公司 目标对象的跟踪方法、装置及网络视频录像设备
CN111866437B (zh) * 2020-06-30 2022-04-08 厦门亿联网络技术股份有限公司 一种用于视频会议双摄像头的自动切换方法、装置、终端设备以及存储介质
CN113687715A (zh) * 2021-07-20 2021-11-23 温州大学 基于计算机视觉的人机交互系统及交互方法
CN116320322B (zh) * 2023-05-12 2023-08-22 美宜佳控股有限公司 门店云值守视频推流方法、装置、系统、设备及介质

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN2896731Y (zh) * 2006-03-20 2007-05-02 江军 同步摄像跟踪系统
CN103164991A (zh) * 2013-03-01 2013-06-19 广州市信和电信发展有限公司 一种网络互动教学教研应用系统
CN103777643A (zh) * 2012-10-23 2014-05-07 北京网动网络科技股份有限公司 一种基于图像定位的摄像机自动跟踪系统及跟踪方法

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9740922B2 (en) * 2008-04-24 2017-08-22 Oblong Industries, Inc. Adaptive tracking system for spatial input devices
CN102256065B (zh) * 2011-07-25 2012-12-12 中国科学院自动化研究所 基于视频监控网络的视频自动浓缩方法

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN2896731Y (zh) * 2006-03-20 2007-05-02 江军 同步摄像跟踪系统
CN103777643A (zh) * 2012-10-23 2014-05-07 北京网动网络科技股份有限公司 一种基于图像定位的摄像机自动跟踪系统及跟踪方法
CN103164991A (zh) * 2013-03-01 2013-06-19 广州市信和电信发展有限公司 一种网络互动教学教研应用系统

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106131491A (zh) * 2016-07-19 2016-11-16 科盾科技股份有限公司 一种用于抓捕目标的装置
CN106603912A (zh) * 2016-12-05 2017-04-26 科大讯飞股份有限公司 一种视频直播控制方法及装置
CN110876036A (zh) * 2018-08-31 2020-03-10 腾讯数码(天津)有限公司 一种视频生成的方法以及相关装置
CN110876036B (zh) * 2018-08-31 2022-08-02 腾讯数码(天津)有限公司 一种视频生成的方法以及相关装置

Also Published As

Publication number Publication date
CN105830426A (zh) 2016-08-03
CN105830426B (zh) 2019-01-01
US20160344928A1 (en) 2016-11-24
US9838595B2 (en) 2017-12-05

Similar Documents

Publication Publication Date Title
WO2016074123A1 (zh) Video generation method and apparatus of a video generation system
EP3639243A1 (en) Camera pose determination and tracking
WO2020085881A1 (en) Method and apparatus for image segmentation using an event sensor
WO2012005387A1 (ko) 다중 카메라와 물체 추적 알고리즘을 이용한 광범위한 지역에서의 물체 이동 감시 방법 및 그 시스템
WO2018098915A1 (zh) 清洁机器人的控制方法及清洁机器人
WO2021075772A1 (ko) 복수 영역 검출을 이용한 객체 탐지 방법 및 그 장치
WO2017183915A2 (ko) 영상취득 장치 및 그 방법
WO2016072625A1 (ko) 영상방식을 이용한 주차장의 차량 위치 확인 시스템 및 그 제어방법
WO2017034177A1 (ko) 이종 카메라로부터의 영상을 이용하여 불법 주정차 단속을 수행하는 단속 시스템 및 이를 포함하는 관제 시스템
WO2019054593A1 (ko) 기계학습과 이미지 프로세싱을 이용한 지도 제작 장치
WO2022145626A1 (ko) 공항의 교통관제 지원정보 생성장치 및 이를 포함하는 공항의 교통관제 지원장치
WO2021002722A1 (ko) 이벤트 태깅 기반 상황인지 방법 및 그 시스템
WO2019127049A1 (zh) 一种图像匹配方法、装置及存储介质
WO2024019342A1 (ko) 인공지능 기반 유해 가스 누출 탐지 시스템 및 이의 동작 방법
WO2016064107A1 (ko) 팬틸트줌 카메라 기반의 영상 재생방법 및 장치
WO2015196878A1 (zh) 一种电视虚拟触控方法及系统
WO2021235682A1 (en) Method and device for performing behavior prediction by using explainable self-focused attention
WO2021201569A1 (ko) 강화학습 기반 신호 제어 장치 및 신호 제어 방법
WO2020071573A1 (ko) 딥러닝을 이용한 위치정보 시스템 및 그 제공방법
WO2012015156A2 (ko) 열적외선을 이용한 환경독립형 교통검지시스템
WO2020130209A1 (ko) 영상 처리를 이용한 차량 속도 측정 방법 및 장치
WO2020027512A1 (ko) 압축영상에 대한 신택스 기반의 ptz 카메라의 객체 추적 제어 방법
WO2023149603A1 (ko) 다수의 카메라를 이용한 열화상 감시 시스템
WO2019103208A1 (ko) 다중 분산 영상 데이터 분석 장치
WO2017007047A1 (ko) 불규칙 비교를 이용하는 공간적 깊이 불균일성 보상 방법 및 장치

Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase

Ref document number: 14888627

Country of ref document: US

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14906098

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 27/09/2017)

122 Ep: pct application non-entry in european phase

Ref document number: 14906098

Country of ref document: EP

Kind code of ref document: A1