CN106251334A - Camera parameter adjustment method, directing camera, and system - Google Patents
Camera parameter adjustment method, directing camera, and system Download PDF Info
- Publication number
- CN106251334A CN106251334A CN201610562671.6A CN201610562671A CN106251334A CN 106251334 A CN106251334 A CN 106251334A CN 201610562671 A CN201610562671 A CN 201610562671A CN 106251334 A CN106251334 A CN 106251334A
- Authority
- CN
- China
- Prior art keywords
- camera
- video
- directing
- video camera
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30204—Marker
Landscapes
- Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Studio Devices (AREA)
Abstract
The embodiments of the invention disclose a camera parameter adjustment method, a directing camera, and a system. The method comprises: determining a target video object to be shot; selecting, according to a preset directing strategy, a target camera for shooting the target video object from the cameras of the directing camera system to which the directing camera belongs; obtaining a first three-dimensional coordinate of the target video object, the first three-dimensional coordinate being the coordinate of the target video object in a first coordinate system corresponding to the target camera; and adjusting the camera parameters of the target camera to the camera parameters corresponding to the first three-dimensional coordinate, and outputting the video image captured after the adjustment. This scheme improves the efficiency of camera parameter adjustment and enhances the camera shooting effect.
Description
Technical field
The present invention relates to the technical field of image processing, and in particular to a camera parameter adjustment method, a directing camera, and a system.
Background art
With the development of image processing technology and the Internet, video conferencing is used in more and more scenarios, and greatly facilitates remote communication between users. At present, a video conference usually requires deploying multiple cameras so that a frontal image of each participant can be obtained. For example, referring to Fig. 1, a schematic diagram of a video conference scene: the meeting room uses an elongated oval conference table, the participants are seated around it, and participants A and B sit facing each other, with cameras C0 and C1 deployed on either side of the projection screen in front of them. Only camera C0 can capture a frontal image of A, while camera C1 cannot; conversely, only camera C1 can capture a frontal image of B, while camera C0 cannot. It can be seen that multiple cameras are needed to realize such a video conference.
When multiple cameras are deployed for conference shooting, the camera parameters are usually adjusted manually, by remote control or other means, to obtain a good shooting effect. However, manual adjustment requires the operator to have some professional camera knowledge, and the operating process is tedious, which makes the adjustment inefficient and unable to guarantee a good shooting effect in time. Alternatively, sound source localization can be used to select the shooting camera and adjust the shooting effect: the speaking participant (the "speaker") is located and tracked, one camera captures the speaker's face, and the camera lens is adjusted by PTZ (Pan-Tilt-Zoom) operations while tracking the face position, so that the speaker's face stays in the middle region of the image. However, this approach only centers the speaker's face in the image and does not consider the quality of the captured image, so it also cannot guarantee a good shooting effect.
Summary of the invention
The embodiments of the present invention provide a camera parameter adjustment method, a directing camera, and a system, which can improve the efficiency of camera parameter adjustment and enhance the camera shooting effect.
In a first aspect, an embodiment of the present invention provides a camera parameter adjustment method applied to a directing camera, comprising:
determining a target video object to be shot;
selecting, according to a preset directing strategy, a target camera for shooting the target video object from the cameras of the directing camera system to which the directing camera belongs;
obtaining a first three-dimensional coordinate of the target video object, the first three-dimensional coordinate being the coordinate of the target video object in a first coordinate system corresponding to the target camera;
adjusting the camera parameters of the target camera to the camera parameters corresponding to the first three-dimensional coordinate, and outputting the video image captured after the adjustment.
Here, the first three-dimensional coordinate is the coordinate of the target video object in the first coordinate system corresponding to the target camera. The target camera may be a directing camera or an ordinary PTZ camera, and its first coordinate system may be a three-dimensional coordinate system whose origin is the optical center of the target camera, or a three-dimensional coordinate system established with any other reference as its origin; the embodiments of the present invention do not limit this.
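As an illustration of how a camera parameter can correspond to a three-dimensional coordinate, the following is a minimal sketch (not taken from the patent itself) of computing the pan and tilt angles that would center a target located at camera-frame coordinates (x, y, z), assuming the common convention of x pointing right, y down, and z along the optical axis:

```python
import math

def pan_tilt_for_target(x, y, z):
    """Pan/tilt angles (degrees) that center a target at camera-frame (x, y, z).

    Convention (an assumption, not fixed by the patent): x right, y down,
    z along the optical axis, origin at the camera optical center.
    """
    if z <= 0:
        raise ValueError("target must lie in front of the camera (z > 0)")
    pan = math.degrees(math.atan2(x, z))                    # rotate left/right
    tilt = math.degrees(math.atan2(-y, math.hypot(x, z)))   # rotate up/down
    return pan, tilt
```

A real PTZ camera would additionally map these angles onto its motor range and choose a zoom factor from the target's distance and desired framing.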
The target video object may be any one or more of the video objects in the scene covered by the directing camera system to which the directing camera belongs.
In some embodiments, obtaining the first three-dimensional coordinate of the target video object comprises:
obtaining a second three-dimensional coordinate sent by a binocular camera connected to the directing camera, the second three-dimensional coordinate being the coordinate of the target video object in a second coordinate system corresponding to the binocular camera;
converting the second three-dimensional coordinate into the first three-dimensional coordinate according to the pre-calibrated positional relationship between the binocular camera and the target camera.
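A minimal sketch of such a coordinate conversion, assuming the pre-calibrated positional relationship is expressed as a rotation matrix R and translation vector t mapping the binocular-camera frame into the target-camera frame (the names are illustrative, not from the patent):

```python
import numpy as np

def transform_point(p_src, R, t):
    """Map a 3-D point from the source camera frame to the destination frame.

    R (3x3 rotation) and t (length-3 translation) come from the pre-calibrated
    positional relationship between the two cameras: p_dst = R @ p_src + t.
    """
    p_src = np.asarray(p_src, dtype=float)
    return R @ p_src + t
```

Chaining two such transforms (binocular frame to directing-camera frame, then to another camera's frame) gives the "third" coordinate used later for cross-camera matching.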
In some embodiments, the second three-dimensional coordinate may be calculated by the binocular camera from the two-dimensional coordinates of the video object in its left and right views and from the extrinsic parameters of the binocular camera.
The second coordinate system corresponding to the binocular camera may be a three-dimensional coordinate system whose origin is the optical center of the binocular camera, or one established with any other reference as its origin. The two-dimensional coordinates may specifically be the pixel coordinates of the target video object in the left and right views of the binocular camera.
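For a rectified stereo pair, the calculation described above reduces to standard triangulation from the left/right pixel coordinates. A sketch under pinhole-model assumptions (focal lengths fx, fy, principal point (cx, cy), baseline — all illustrative parameter names, not taken from the patent):

```python
def triangulate(u_left, u_right, v, fx, fy, cx, cy, baseline):
    """3-D position of a point from its pixel coordinates in a rectified pair.

    Assumes a pinhole model with the left camera as origin; baseline is the
    distance between the two optical centers in the same units as the result.
    """
    disparity = u_left - u_right
    if disparity <= 0:
        raise ValueError("disparity must be positive for a rectified pair")
    z = fx * baseline / disparity        # depth from disparity
    x = (u_left - cx) * z / fx           # back-project through the left view
    y = (v - cy) * z / fy
    return x, y, z
```

In practice the rectification itself comes from the binocular calibration (intrinsic and extrinsic parameters) that the patent mentions separately.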
In some embodiments, determining the target video object to be shot comprises:
obtaining a captured image sent by the binocular camera, the captured image containing at least one video object;
building a video object model containing the at least one video object, and determining the target video object from the at least one video object.
Selecting, according to the preset directing strategy, the target camera for shooting the target video object from the cameras of the directing camera system to which the directing camera belongs comprises:
locating the target video object in the captured images obtained from each camera of the directing camera system, and obtaining the shooting effect parameters of the target video object at each camera;
determining the camera whose shooting effect parameters satisfy the preset directing strategy as the target camera for shooting the target video object.
In some embodiments, locating the target video object in the captured image of a camera comprises:
converting the second three-dimensional coordinate into a third three-dimensional coordinate according to the pre-calibrated positional relationship between the binocular camera and the current camera;
judging whether the overlap area between the region of the target video object under the third three-dimensional coordinate and the region of a video object detected by the current camera under that object's three-dimensional coordinate exceeds a preset area threshold;
if so, determining that video object to be the target video object.
Here, the current camera is any camera in the directing camera system, other than the binocular camera, whose positional relationship with the directing camera has been calibrated, and the third three-dimensional coordinate is the coordinate of the target video object in the coordinate system corresponding to the current camera.
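The overlap test can be illustrated with axis-aligned bounding boxes; the region representation and threshold semantics below are assumptions for the sketch, since the patent does not fix them:

```python
def box_overlap_area(a, b):
    """Overlap area of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    w = min(a[2], b[2]) - max(a[0], b[0])
    h = min(a[3], b[3]) - max(a[1], b[1])
    return max(0, w) * max(0, h)

def is_same_object(region_a, region_b, area_threshold):
    """Match decision: the detected object is taken to be the target video
    object when the overlap area exceeds the preset threshold."""
    return box_overlap_area(region_a, region_b) > area_threshold
```

A ratio-based criterion such as intersection-over-union would work the same way; the patent only requires that the overlap exceed a preset area threshold.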
In some embodiments, the shooting effect parameters include any one or more of an eye-contact effect parameter, an occlusion relationship parameter, and a scene object parameter of the shooting area, all evaluated for the target video object in the coordinate system corresponding to the current camera, where the current camera is any camera in the directing camera system other than the binocular camera.
The eye-contact effect parameter may include the rotation angle of the target video object relative to the coordinate system of the current camera, determined from the rotation angle of the target video object in the second coordinate system and the pre-calibrated positional relationship between the binocular camera and the current camera. The smaller the rotation angle, the better the eye-contact effect.
The occlusion relationship parameter and the scene object parameter may be determined by re-projecting the regions of the scene objects detected by the current camera onto the imaging plane of the current camera, according to the pre-calibrated positional relationship between the binocular camera and the current camera. The output image is best when there is no occlusion (the smaller the occlusion relationship parameter, the better), and the smaller the area and the fewer the number of scene objects, the better the output image; otherwise, the worse.
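One way such a directing strategy could combine these parameters is sketched below; the rejection threshold and the preference for the smallest rotation angle are illustrative assumptions, not values given by the patent:

```python
def select_target_camera(effect_params, max_angle_deg=30.0):
    """Pick the camera whose shooting-effect parameters best satisfy a simple
    directing strategy: reject cameras seeing the target occluded or at too
    oblique an angle, then prefer the smallest rotation angle (best eye
    contact). effect_params maps camera id -> parameter dict.
    """
    candidates = [
        (p["rotation_angle"], cam_id)
        for cam_id, p in effect_params.items()
        if p["occlusion"] == 0 and p["rotation_angle"] <= max_angle_deg
    ]
    if not candidates:
        return None  # no camera satisfies the strategy
    return min(candidates)[1]
```

A fuller strategy could also weight the scene object parameter (area and count of distracting objects in frame) into a single score per camera.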
In a second aspect, an embodiment of the present invention further provides a directing camera, comprising a memory and a processor connected to the memory, wherein the memory is used to store driver software, and the processor reads the driver software from the memory and, under its control, performs some or all of the steps of the camera parameter adjustment method of the first aspect.
In a third aspect, an embodiment of the present invention further provides a parameter adjustment apparatus comprising an object determination unit, a selection unit, an acquisition unit, and a parameter adjustment unit, through which the apparatus implements some or all of the steps of the camera parameter adjustment method of the first aspect.
In a fourth aspect, an embodiment of the present invention further provides a computer storage medium storing a program which, when executed, performs some or all of the steps of the camera parameter adjustment method of the first aspect.
In a fifth aspect, an embodiment of the present invention further provides a directing camera system comprising a first camera and at least one second camera, wherein the first camera includes a directing camera and a binocular camera, and the directing camera and the binocular camera, as well as the first camera and the second camera, are connected through wired or wireless interfaces. In this system:
the directing camera determines the target video object to be shot, and selects, according to the preset directing strategy, the target camera for shooting the target video object from the cameras of the directing camera system;
the binocular camera obtains the second three-dimensional coordinate of the target video object and transmits it to the directing camera, the second three-dimensional coordinate being the coordinate of the target video object in the second coordinate system corresponding to the binocular camera;
the directing camera receives the second three-dimensional coordinate sent by the binocular camera; converts it into the first three-dimensional coordinate according to the pre-calibrated positional relationship between the binocular camera and the target camera; adjusts the camera parameters of the target camera to the camera parameters corresponding to the first three-dimensional coordinate; and outputs the video image captured after the adjustment, the first three-dimensional coordinate being the coordinate of the target video object in the first coordinate system corresponding to the target camera.
In some embodiments, the second camera may also include a directing camera and a binocular camera, in which case the target camera may be any directing camera in the directing camera system; alternatively, the second camera may be an ordinary PTZ camera, in which case the target camera may be either the directing camera or an ordinary PTZ camera.
In some embodiments, the binocular camera may be mounted on a preset directing bracket and connected to the directing camera through the bracket.
Implementing the embodiments of the present invention provides the following beneficial effects:
after the target video object to be shot is determined, the target camera with the best shooting effect for that object is selected from the cameras of the directing camera system according to the preset directing strategy; the three-dimensional coordinate of the target video object in the coordinate system corresponding to the target camera is obtained; the target camera is controlled to adjust its camera parameters according to that three-dimensional coordinate; and the video image captured after the adjustment is output. The directing camera system can thus operate on three-dimensional coordinate detection and the preset directing strategy, improving the precision of video object detection and tracking, improving the efficiency of camera parameter adjustment, and effectively enhancing the camera shooting effect.
Accompanying drawing explanation
In order to be illustrated more clearly that the embodiment of the present invention or technical scheme of the prior art, below will be to embodiment or existing
In having technology to describe, the required accompanying drawing used is briefly described, it should be apparent that, the accompanying drawing in describing below is only this
Some embodiments of invention, for those of ordinary skill in the art, on the premise of not paying creative work, it is also possible to
Other accompanying drawing is obtained according to these accompanying drawings.
Fig. 1 is a schematic diagram of a video conference scene;
Fig. 2 is a schematic flowchart of a camera parameter adjustment method provided by an embodiment of the present invention;
Fig. 3a is a schematic diagram of a camera imaging model provided by an embodiment of the present invention;
Fig. 3b is a schematic diagram of a multi-camera calibration scene provided by an embodiment of the present invention;
Fig. 3c is a schematic diagram of three-dimensional localization with a binocular camera provided by an embodiment of the present invention;
Fig. 3d is a schematic diagram of a PTZ camera rotation model provided by an embodiment of the present invention;
Fig. 4a is a schematic diagram of a video object matching scene provided by an embodiment of the present invention;
Fig. 4b is a group of video object images from Fig. 4a;
Fig. 5 is a schematic structural diagram of a parameter adjustment apparatus provided by an embodiment of the present invention;
Fig. 6 is a schematic structural diagram of a directing camera system provided by an embodiment of the present invention;
Fig. 7 is a schematic structural diagram of a first camera provided by an embodiment of the present invention;
Fig. 8 is a schematic networking diagram of a directing camera system provided by an embodiment of the present invention;
Fig. 9 is a schematic structural diagram of a directing camera provided by an embodiment of the present invention.
Detailed description of the invention
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art from the embodiments of the present invention without creative effort fall within the scope of protection of the present invention.
It should be understood that "first", "second", and "third" in the embodiments of the present invention are used to distinguish different objects, not to describe a particular order. In addition, the term "comprise" and any variants thereof are intended to cover non-exclusive inclusion: a process, method, system, product, or device that contains a series of steps or units is not limited to the listed steps or units, but may also include steps or units that are not listed, or steps or units that are inherent to the process, method, product, or device.
It should be understood that the directing camera in the embodiments of the present invention may be a PTZ camera specifically designed to perform the technical solutions of the embodiments. It can be connected to a binocular camera, can be applied to scenarios such as meetings and training, and the positions and number of directing cameras can be chosen according to the scenario.
In some embodiments, the binocular camera may be mounted on a directing bracket; that is, the directing camera may be connected to the binocular camera through the directing bracket (referred to simply as the "bracket"). The directing camera is used for directed shooting and tracking. In addition, a microphone may be mounted on the bracket and used to realize functions such as sound source localization and sound source identification. The directing camera and the bracket may be separate or integrated, and may communicate through a control interface such as a serial interface.
In some embodiments, the binocular camera can be used for video capture, video preprocessing, motion detection, face detection, human-shape detection, scene object detection, feature detection/matching, binocular camera calibration, multi-camera calibration, and so on; the microphone can be used for audio capture, audio preprocessing, sound source localization, sound source activity recognition, and so on; and the directing camera can be used for audio-video (AV) object 3D localization, AV object modeling, AV object tracking, action/gesture recognition, directing control, video switching/composition, and so on. Specifically:
Video capture includes synchronously capturing the video streams of the binocular camera and the directing camera. Video preprocessing includes preprocessing the input binocular images, such as noise reduction and changes of resolution and frame rate. Motion detection includes detecting the moving objects in the scene, separating them from the static background, and obtaining the regions of the moving objects. Face detection includes detecting the face targets in the scene and outputting face detection information such as position, region, and orientation. Human-shape detection includes detecting head-and-shoulder regions in the scene and outputting the detection information. Scene object detection includes detecting objects in the scene other than people, such as light tubes, windows, and conference tables. Feature detection/matching includes performing feature detection and matching on the detected moving object regions: characteristic objects (such as feature points) are detected in one image and matched in the other, and the matched feature object information is output. Binocular camera calibration includes calibrating the binocular camera to obtain its intrinsic and extrinsic parameters, which are used to calculate the three-dimensional coordinates of the video objects in the video images. Multi-camera calibration includes calibrating the relative positional relationships of multiple directing cameras to obtain their relative extrinsic parameters, which are used to locate video objects in the coordinate systems of the multiple cameras.
Further, audio capture includes synchronously capturing the multi-channel audio data of the microphone. Audio preprocessing includes performing 3A processing on the input multi-channel audio data, where 3A processing includes automatic exposure control (AE), automatic focus control (AF), and automatic white balance control (AWB). Sound source localization includes detecting the input multi-channel audio data to find the two-dimensional position of the sounding object. Sound source activity recognition includes detecting and counting the speech activity of the video objects in the scene.
Further, AV object 3D localization includes obtaining the depth information of object features in the images from the intrinsic and extrinsic parameters of the binocular camera and the disparity information obtained by feature detection/matching and, combined with the audio localization result, obtaining the three-dimensional positions of the object features in the coordinate system of a single directing camera; from the feature positions in a single directing camera's coordinate system and the relative positional relationships of the multiple directing cameras, the feature positions in the coordinate systems of the other directing cameras can be obtained. AV object modeling includes building an AV object model by combining information such as sound source localization, face information, feature objects, and scene objects. AV object tracking includes tracking the multiple AV objects in the scene and updating the state information of the objects. Action/gesture recognition includes recognizing the actions and postures of the AV objects, for example the stance and gestures of an object. Directing control includes determining the directing strategy by combining the results of action/gesture recognition and sound source activity recognition; the directing camera control outputs the control instructions corresponding to the directing strategy, the video object and scene feature information, and the video output strategy. The camera control instructions can be used to control a PTZ camera to perform PTZ operations, i.e. pan, tilt, and zoom; the video object and scene feature information can be used for information sharing between multiple directing cameras; and the video output strategy can be used to control the output policy of the video streams of one or more directing cameras.
The embodiments of the present invention provide a camera parameter adjustment method, a directing camera, and a system, which can improve the efficiency of camera parameter adjustment and enhance the camera shooting effect. They are described in detail below.
Referring to Fig. 2, a schematic flowchart of a camera parameter adjustment method provided by an embodiment of the present invention: specifically, the method of this embodiment may be applied to the directing camera described above. As shown in Fig. 2, the camera parameter adjustment method may comprise the following steps:
101. Determine the target video object to be shot.
102. Select, according to a preset directing strategy, a target camera for shooting the target video object from the cameras of the directing camera system to which the directing camera belongs.
Optionally, determining the target video object to be shot may specifically be: obtaining the captured image sent by the binocular camera, the captured image containing at least one video object; building a video object model containing the at least one video object; and determining the target video object from the at least one video object. Further, selecting the target camera for shooting the target video object from the cameras of the directing camera system according to the preset directing strategy may specifically be: locating the target video object in the captured images obtained from each camera of the directing camera system, obtaining the shooting effect parameters of the target video object at each camera, and determining the camera whose shooting effect parameters satisfy the preset directing strategy as the target camera for shooting the target video object. One or more directing cameras may be deployed in the directing camera system; that is, the system may be deployed as directing camera + directing camera, or as directing camera + ordinary camera (such as an ordinary PTZ camera). Specifically, the video object model may contain all the video objects in the scene covered by the directing camera system. If the other cameras in the system also include directing cameras, the captured images sent by the binocular cameras connected to those directing cameras may also be received and used to update the video object model, so that the model covers all the video objects in the scene. The target video object may be any one or more of the video objects in the scene.
Optionally, the shooting effect parameters may include any one or more of the eye-contact effect parameter, the occlusion relationship parameter, and the scene object parameter of the shooting area, evaluated for the target video object in the coordinate system corresponding to the current camera. The current camera is any camera in the directing camera system other than the binocular camera, that is, any directing camera or ordinary PTZ camera in the system.
Wherein, described eye can include the seat that described target video object is corresponding relative to current camera to eye efficacy parameter
The anglec of rotation of mark system, the described anglec of rotation can be in the anglec of rotation of described second coordinate system according to described target video object
The position relationship of degree and the described binocular camera demarcated in advance and described current camera is determined.Concrete, this mesh
It is corresponding that the anglec of rotation of the coordinate system that mark object video is corresponding relative to current camera may refer to this target video object
Face or humanoid subject are relative to the optical axis angle of this current camera (instructor in broadcasting's video camera or common Pan/Tilt/Zoom camera).This angle
The least, can represent that face more presents in positive face mode, namely eye is the best to eye effect, output image effect is the best.
Wherein, described hiding relation parameter and described scenario objects parameter can be to take the photograph according to the described binocular demarcated in advance
Camera and the position relationship of described current camera, institute is heavily thrown in the region of the scenario objects detected by described current camera
State what the imaging plane of current camera was determined.Concrete, the region such as two object videos is overlapping, then may utilize degree of depth letter
Breath determines the hiding relation between these two objects, and the nearer object video of this distance binocular camera can block video pair farther out
As.There is no to export image effect during hiding relation (hiding relation parameter is the least) the best.The scene of this scenario objects parameter instruction
Object can include fluorescent tube, window, desk etc., and the area of this scenario objects is the least, number is the least, then output image effect is the best;
Otherwise, then output image effect is the poorest.
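The selection step described above (prefer a frontal face, little occlusion, few scene objects) can be illustrated with a simple weighted lower-is-better score. This is only a sketch: the field names and weights are assumptions for illustration, not part of the patented method.

```python
def select_target_camera(effect_params, angle_w=1.0, occlusion_w=1.0, scene_w=0.5):
    """Pick the camera whose shooting-effect parameters best satisfy a
    simple directing strategy: small face/optical-axis angle, little
    occlusion, and small scene-object area (all lower-is-better)."""
    def score(p):
        return (angle_w * p["rotation_angle_deg"]
                + occlusion_w * p["occlusion"]
                + scene_w * p["scene_object_area"])
    # effect_params maps camera_id -> parameter dict; take the minimum score
    return min(effect_params, key=lambda cam: score(effect_params[cam]))

params = {
    "PTZ0": {"rotation_angle_deg": 40.0, "occlusion": 0.2, "scene_object_area": 0.1},
    "PTZ1": {"rotation_angle_deg": 5.0,  "occlusion": 0.0, "scene_object_area": 0.3},
}
best = select_target_camera(params)  # PTZ1: near-frontal face, no occlusion
```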
103. Obtain the first three-dimensional coordinate of the target video object.
The first three-dimensional coordinate may be the three-dimensional coordinate of the target video object under the first coordinate system corresponding to the target camera. The target camera may be one of the above directing cameras or an ordinary PTZ camera; the first coordinate system corresponding to the target camera may be the three-dimensional coordinate system established with the optical center of the target camera as origin, or with any other reference object as origin, which is not limited in this embodiment of the present invention.
Optionally, the directing camera may be connected to a preset binocular camera. Obtaining the first three-dimensional coordinate of the target video object may then specifically be: obtaining the second three-dimensional coordinate transmitted by the binocular camera connected to the directing camera, and converting the second three-dimensional coordinate into the first three-dimensional coordinate according to the pre-calibrated position relationship between the binocular camera and the target camera. Further optionally, the second three-dimensional coordinate may be calculated from the two-dimensional coordinates of the target video object in the left view and right view of the binocular camera, together with the obtained intrinsic and extrinsic parameters of the binocular camera. The second three-dimensional coordinate is the three-dimensional coordinate of the target video object under the second coordinate system corresponding to the binocular camera; this second coordinate system may be the three-dimensional coordinate system established with the optical center of the binocular camera as origin, or with any other reference object as origin. The two-dimensional coordinates may specifically be the pixel coordinates of the target video object in the left view and the right view of the binocular camera.
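The conversion from the second coordinate system to the first is a rigid transform by the calibrated rotation and translation between the two cameras. A minimal numpy sketch (the R and t below are made-up example values, not calibration results):

```python
import numpy as np

def to_target_frame(p_binocular, R, t):
    """Convert a 3-D point from the binocular camera's coordinate system
    (second coordinate system) into the target camera's coordinate system
    (first coordinate system): X1 = R @ X2 + t."""
    return R @ np.asarray(p_binocular, dtype=float) + t

# Example: target camera rotated 90 degrees about Z, shifted 0.5 m along X.
R = np.array([[0., -1., 0.],
              [1.,  0., 0.],
              [0.,  0., 1.]])
t = np.array([0.5, 0., 0.])
p1 = to_target_frame([1.0, 2.0, 3.0], R, t)  # -> [-1.5, 1.0, 3.0]
```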
In a specific embodiment, the position relationship between the two cameras of the binocular camera, the position relationship between the directing camera and the binocular camera, and the position relationships between the cameras of the multiple positions in the directing camera system may all be calibrated in advance. The parameters obtained by calibrating the binocular camera can be used to calculate the three-dimensional coordinate of a video object under the coordinate system corresponding to the binocular camera; calibrating the position relationship between the directing camera and the binocular camera makes it possible to calculate the three-dimensional coordinate of a video object under the coordinate system of the directing camera; and the parameters obtained by calibrating the position relationships between the cameras of multiple positions can be used, in a multi-position deployment, to calculate the three-dimensional coordinate of a video object under the camera coordinate system of each position, for coordinate conversion. The multi-position deployment may be directing camera + directing camera as above, or directing camera + ordinary PTZ camera. Each directing camera may be called a position; when multiple directing cameras shoot in coordination, one may be designated the master position and the rest slave positions. A directing camera at a slave position can register information such as its IP address with the master position, so that the master position can manage the multiple slave positions. Specifically, the calibration process is briefly described below. The binocular camera includes a left camera and a right camera; the image obtained by the left camera is called the left view, and the image obtained by the right camera the right view. The imaging (projection) model of a single camera can be described by the following equation:
X=PX=K [R | t] X
As shown in Figure 3 a, x is that certain point in scene (i.e. object video, concretely object video characteristic of correspondence point) exists
Pixel coordinate under image coordinate system, it is two-dimensional coordinate;X is certain some position coordinates under world coordinate system in scene;P
It it is the projection matrix of 3 × 4.PX refers to P × X.Wherein, K is the video camera internal reference matrix of 3 × 3, can be expressed as:
Wherein, fx,fyFor the equivalent focal length in x and y direction, cx,cyFor the image coordinate of photocentre, s is skew deformation coefficient
(sensor and optical axis out of plumb cause, the least, negligible in calibration process).
Additionally, R and t is to join outside video camera, it is expressed as the spin matrix of 3 × 3 and the translation vector of 3 × 1, following institute
Show:
R=[r1 r2 r3]
T=[t1 t2 t3]T
Wherein, r1,r2,r3For the column vector of 3 × 1 in spin matrix.
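The pinhole projection model above can be exercised directly. A small numpy sketch (the intrinsic values are illustrative, not from any real calibration):

```python
import numpy as np

def project(K, R, t, X):
    """Project a 3-D world point X to pixel coordinates via x = K [R|t] X."""
    Xc = R @ np.asarray(X, dtype=float) + t   # world -> camera frame
    x_h = K @ Xc                              # camera frame -> homogeneous pixels
    return x_h[:2] / x_h[2]                   # dehomogenize

K = np.array([[800.,   0., 320.],   # fx, s, cx
              [  0., 800., 240.],   # fy, cy
              [  0.,   0.,   1.]])
R, t = np.eye(3), np.zeros(3)
uv = project(K, R, t, [0.0, 0.0, 2.0])  # a point on the optical axis lands at (cx, cy)
```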
In addition, owing to factors such as the optical characteristics of the lens and the manufacture and mounting of the image sensor, the image actually captured by the camera is not ideal: distortion exists. The image distortion can therefore be modeled in order to recover an ideal image. Specifically, the camera image distortion model can be described by the following equations (with r² = xd² + yd²):
xp = xd (1 + k1 r² + k2 r⁴ + k3 r⁶) + 2 p1 xd yd + p2 (r² + 2 xd²)
yp = yd (1 + k1 r² + k2 r⁴ + k3 r⁶) + p1 (r² + 2 yd²) + 2 p2 xd yd
where (xp, yp) is the pixel position after correction, (xd, yd) the pixel position before correction, k1, k2, k3 the radial distortion coefficients, and p1, p2 the tangential distortion coefficients.
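As a sketch, the Brown-style radial-plus-tangential model can be applied to normalized image coordinates as follows; note this evaluates the model in one direction only, and inverting it for actual correction is usually done iteratively:

```python
def brown_distort(x, y, k1, k2, k3, p1, p2):
    """Brown distortion model on normalized image coordinates:
    radial terms k1..k3 plus tangential terms p1, p2."""
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    xd = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    yd = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return xd, yd

# With all coefficients zero the mapping is the identity.
assert brown_distort(0.1, 0.2, 0, 0, 0, 0, 0) == (0.1, 0.2)
```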
Based on the above imaging model of the monocular camera, when the rotation matrices R1 and R2 and translation vectors t1 and t2 from the world coordinate system to the left camera coordinate system and to the right camera coordinate system are known, the relative extrinsic parameters between the two cameras of the binocular camera, namely the rotation matrix R and translation vector T, can be obtained as:
R = R2 R1^T
T = t2 − R2 R1^T t1
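Composing the two world-to-camera poses into one relative pose can be checked numerically; a numpy sketch under the convention Xc = R Xw + t (the example poses are invented):

```python
import numpy as np

def relative_extrinsics(R1, t1, R2, t2):
    """Relative pose of the right camera with respect to the left one,
    given world->left (R1, t1) and world->right (R2, t2) extrinsics,
    so that Xr = R @ Xl + T."""
    R = R2 @ R1.T
    T = t2 - R @ t1
    return R, T

# Sanity check: transforming a world point directly or via the left frame agrees.
R1, t1 = np.eye(3), np.array([0., 0., 0.])
R2, t2 = np.eye(3), np.array([-0.1, 0., 0.])  # right camera 10 cm to the side
R, T = relative_extrinsics(R1, t1, R2, t2)
Xw = np.array([1., 2., 5.])
assert np.allclose(R2 @ Xw + t2, R @ (R1 @ Xw + t1) + T)
```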
It should be understood that in this embodiment of the present invention, the position relationship between the two cameras of the binocular camera, and the position relationship between the directing camera (e.g. a PTZ camera) and the binocular camera, are fixed; both calibrations can therefore be completed before the devices leave the factory, i.e. the intrinsic and extrinsic parameters obtained by these two calibrations are fixed. Optionally, in this embodiment of the present invention, various camera-calibration schemes may be used, for example Zhang's planar calibration method (also known as "Zhang's calibration"), with the distortion parameters computed by Brown's method; these are not repeated here.
Further, it can be seen from the above binocular calibration principle that calibrating the position relationships of the cameras of multiple positions (e.g. of multiple directing cameras) essentially means finding the relative extrinsic parameters between adjacent directing cameras, then computing the extrinsics between any two directing cameras from those pairwise extrinsics, thereby obtaining the position relationship between any two directing cameras. When multiple directing cameras are deployed, a large shooting overlap is required between each pair of directing cameras, and the multiple positions form something similar to a surround multi-camera system. The rotation matrix and translation vector of the i-th camera relative to the j-th camera are:
R_{i,j} = R_{i,i-1} R_{i-1,i-2} ... R_{j+1,j}
T_{i,j} = R_{i,i-1} ... R_{j+2,j+1} T_j + R_{i,i-1} ... R_{j+3,j+2} T_{j+1} + ... + R_{i,i-1} T_{i-2} + T_{i-1}
where R_{i,i-1} R_{i-1,i-2} ... R_{j+1,j} denotes the matrix product R_{i,i-1} × R_{i-1,i-2} × ... × R_{j+1,j}. Because the positions of the mounts holding the directing cameras change with the actual deployment scene, the position relationships between multiple directing cameras cannot be pre-calibrated before the equipment leaves the factory; field calibration can instead be performed when the directing cameras are deployed.
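The chained formulas above amount to composing adjacent-pair poses one step at a time. A numpy sketch of that composition, under the step convention X_{k+1} = R X_k + T:

```python
import numpy as np

def chain_extrinsics(pairwise):
    """Compose adjacent-camera poses into one overall pose.
    pairwise = [(R_{j+1,j}, T_j), ..., (R_{i,i-1}, T_{i-1})], each step
    mapping X_k -> R @ X_k + T.  Returns (R_{i,j}, T_{i,j})."""
    R_total = np.eye(3)
    T_total = np.zeros(3)
    for R, T in pairwise:
        R_total = R @ R_total          # prepend the new rotation
        T_total = R @ T_total + T      # carry the translation through it
    return R_total, T_total

# Two identical 0.2 m shifts along X compose to a 0.4 m shift.
step = (np.eye(3), np.array([0.2, 0.0, 0.0]))
R_ij, T_ij = chain_extrinsics([step, step])
assert np.allclose(T_ij, [0.4, 0.0, 0.0])
```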
Further, suppose that the cameras in this directing camera system are all directing cameras and that each directing camera is connected to a binocular camera. Refer to Fig. 3b, a schematic diagram of a calibration scene for multiple directing cameras provided by an embodiment of the present invention. As shown in Fig. 3b, during calibration, two adjacent directing cameras, or a binocular camera and a directing camera, may communicate over a local area network (Local Area Network, "LAN") or a Wireless Fidelity (Wireless Fidelity, "Wi-Fi") network, including transmitting calibration-template images and calibration parameters; the transfer protocol may be any of several network protocols, for example the HyperText Transfer Protocol (HyperText Transfer Protocol, "HTTP"). Specifically, a global ID number may be assigned in advance to each directing camera and to the cameras (binocular cameras) connected to the directing cameras. For example, a certain camera may be selected as the starting position, say the leftmost or rightmost camera, and the ID numbers of the other cameras increase counterclockwise or clockwise. Among all cameras, a group of cameras is chosen to participate in calibration; the selection principle may be to maximize the overlap region between adjacent cameras. As shown in Fig. 3b, suppose the current shooting scene deploys three positions D1, D2 and D3, each position including a directing camera (denoted PTZ0, PTZ1 and PTZ2, respectively) and a binocular camera (each binocular camera comprising a left and a right camera, denoted C0, C1, C2, C3, C4, C5). Suppose the cameras with ID numbers C0, C2 and C4 are selected for calibration, and one of the directing cameras is selected as the calibration computing device, i.e. the directing camera of the master position described above. Further, pairwise extrinsic calibration can proceed from left to right or from right to left, obtaining the relative extrinsics between each pair of cameras. Optionally, each directing camera may maintain a camera relative-position relationship table, as shown in Table 1 below. Each calibration adds or updates one entry, and each entry is uniquely identified by two camera IDs.
Table 1
| Entry | Camera ID1 | Camera ID2 | Relative extrinsics |
| 1 | C0 | C2 | R02, T02 |
| 2 | C2 | C4 | R24, T24 |
| ... | ... | ... | ... |
After calibration completes, the directing camera serving as the calibration computing device can send this position-relationship table over the network to all other directing cameras for storage. Further, from this position-relationship table and the extrinsics of each binocular camera (which may have been calibrated before leaving the factory), the pairwise position relationships between any two cameras in the calibration scene (between binocular cameras, between a binocular camera and a PTZ camera, and between PTZ cameras) can be calculated.
As an example, suppose the directing camera D3 in Fig. 3b is the calibration computing device and the directing cameras D1 and D2 are the cameras whose position relationship is to be calibrated. Directing camera D3 may be set as the master position for calibration, the other cameras as slave positions, and the calibration is initiated by directing camera D3. Before calibration, it must be ensured that the cameras are interconnected by the network, that the cameras to be calibrated can both photograph the overlap region, and that a calibration template (e.g. a checkerboard pattern) is placed in the overlap region. During calibration, directing camera D3 starts the calibration process and sends an image acquisition command to directing camera D1; the command contains the ID number of directing camera D1 (i.e. D1) and the ID number of the binocular camera whose image is to be acquired (C4 or C5). On receiving the command, directing camera D1 acquires the calibration-template image and transfers the acquired image data to directing camera D3. Similarly, directing camera D3 acquires the calibration-template image shot by the binocular camera on directing camera D2. If a binocular camera to be calibrated is located on directing camera D3 itself, D3 can obtain that binocular camera's calibration-template image directly. After obtaining the calibration-template images of the cameras to be calibrated, directing camera D3 can perform checkerboard corner detection on the two images: if all checkerboard corners can be detected in both images, the acquisition is successful; otherwise both images are discarded and re-acquired. Further, by changing the position of the calibration template, several calibration-template images of the two cameras to be calibrated can be acquired in a loop and saved on directing camera D3. Once the required number of calibration-template images is met, directing camera D3 can perform the camera calibration; the intrinsics of each camera have been calibrated before leaving the factory and can therefore serve as initial input values. When calibration completes, the relative extrinsics R and T between the two cameras are obtained, and the reprojection error is compared with a preset threshold: if the reprojection error exceeds the threshold, the calibration failed; otherwise it succeeded. After calibration, directing camera D3 can update the position-relationship table with the computed camera relative positions and send it to the other directing cameras.
Further, after the calibration of the position relationships between the two cameras of the binocular camera, between the directing camera and the binocular camera, and between the multiple directing cameras has been achieved, the video objects within the shooting range of the directing camera can be located and their three-dimensional position information obtained, so as to determine a suitable directing-camera position from the obtained three-dimensional position information, adjust the parameters of the directing camera according to the directing strategy corresponding to that three-dimensional position information, and control the directing camera to point at the proper position to shoot the video object. The locating of a video object includes three-dimensional locating by the binocular camera, locating by a single directing camera such as a PTZ camera, and three-dimensional locating between the cameras of multiple positions.
Specifically, in the three-dimensional locating by the binocular camera, the stereo image pair shot by the binocular camera can be used to calculate the depth of an observed point of the scene in the camera coordinate system, and thus to determine the three-dimensional position information of that point. The principle is the same as that by which human eyes perceive depth, and is called binocular camera ranging. Fig. 3c provides a schematic diagram of three-dimensional locating by a binocular camera; the ranging principle of the binocular camera system is briefly introduced below. P is an observed point under the world coordinate system, imaged by both the left and right cameras. The position of P in the physical coordinate system of the left camera is (XL, YL, ZL), and its imaging pixel coordinate in the left view is (xl, yl); its position in the physical coordinate system of the right camera is (XR, YR, ZR), and its imaging pixel coordinate in the right view is (xr, yr). Suppose the relative extrinsics of the two cameras are R, T, and their focal lengths are fl and fr, respectively. According to the binocular camera model, the imaging models of the two cameras and the relationship between their physical coordinates are:
xl = fl · XL / ZL,  yl = fl · YL / ZL
xr = fr · XR / ZR,  yr = fr · YR / ZR
[XR, YR, ZR]^T = R [XL, YL, ZL]^T + T
From the above formulas it can be derived that, since the values of xl, yl, xr, yr can be obtained by image matching and fl, fr, R, T by binocular camera calibration, the values of XL, YL, ZL and XR, YR, ZR can be calculated, thereby determining the three-dimensional coordinate of the observed point under the coordinate system corresponding to the binocular camera.
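In the common rectified special case (parallel optical axes, baseline B, shared focal length f — an assumption beyond what the text states), the derivation above reduces to depth from disparity. A minimal sketch:

```python
def depth_from_disparity(xl, xr, f, baseline):
    """Rectified stereo: cameras side by side with parallel optical axes.
    Disparity d = xl - xr (pixels) gives depth Z = f * B / d."""
    d = xl - xr
    if d <= 0:
        raise ValueError("non-positive disparity: point at infinity or bad match")
    return f * baseline / d

# f = 800 px, baseline = 0.1 m, disparity = 8 px  ->  Z = 10 m
Z = depth_from_disparity(408.0, 400.0, f=800.0, baseline=0.1)
```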
Further, in the three-dimensional locating by a directing camera such as a PTZ camera, the basic purpose of PTZ locating is: given the known physical coordinate of a target in the PTZ camera coordinate system, how to bring a certain point of the target to a specific pixel coordinate position in the image by rotating the PTZ camera. The physical coordinate of the target in the PTZ camera coordinate system can be calculated from the target's three-dimensional position in the binocular camera coordinate system and the calibrated position relationship between the binocular camera and the PTZ camera. Refer to Fig. 3d, a schematic diagram of a PTZ camera rotation model provided by an embodiment of the present invention. As shown in Fig. 3d, suppose the target point P, whose physical coordinate is (X, Y, Z), is desired to appear at pixel position (x0, y0), and its current pixel coordinate on the imaging plane is (xc, yc). The camera can then be rotated about the X axis and the Y axis so that the pixel position of P coincides with the target position; the rotation angles of the Pan and Tilt operations can be modeled, for example, as:
Δp = arctan((xc − cx)/fx) − arctan((x0 − cx)/fx)
Δt = arctan((yc − cy)/fy) − arctan((y0 − cy)/fy)
Since the PTZ camera is a zoom camera, the functional relationship between the zoom magnification Z and intrinsics such as focal length and distortion coefficients also needs to be obtained. For example, polynomial fitting can relate the zoom magnification Z to the focal lengths fx, fy:
fx = a0 + a1 Z + a2 Z² + ... + an Zⁿ
fy = b0 + b1 Z + b2 Z² + ... + bn Zⁿ
Specifically, the camera intrinsics are calibrated under different values of Z, the corresponding fx, fy and distortion coefficients are computed, and the polynomial coefficients are fitted by least squares. Other intrinsics such as the distortion coefficients can be handled in a similar way. Once the camera intrinsics under different Z values are obtained, the values of Δp and Δt can be calculated from the Pan/Tilt model formula.
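The least-squares fit of focal length against zoom magnification can be sketched with numpy's polynomial tools; the calibration samples below are synthetic, chosen only to exercise the fit:

```python
import numpy as np

def fit_focal_vs_zoom(zooms, focals, degree=2):
    """Least-squares polynomial fit f(Z) = a0 + a1*Z + ... + an*Z^n.
    Returns coefficients in ascending order of power."""
    return np.polynomial.polynomial.polyfit(zooms, focals, degree)

# Synthetic calibration samples lying exactly on f = 500 + 300*Z.
zooms = np.array([1.0, 2.0, 3.0, 4.0])
focals = 500.0 + 300.0 * zooms
coeffs = fit_focal_vs_zoom(zooms, focals, degree=1)
f_at_2_5 = np.polynomial.polynomial.polyval(2.5, coeffs)  # interpolated focal length
```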
Further optionally, in a multi-directing-camera scene, the captured images sent by the binocular cameras connected to the other directing cameras may also be obtained, and the video object model updated after video object matching. Determining the target video object in the captured image obtained from a camera may then specifically be: converting the second three-dimensional coordinate into a third three-dimensional coordinate according to the pre-calibrated position relationship between the binocular camera and the current camera; judging whether the overlap between the region of the target video object under the third three-dimensional coordinate and the region, under its own three-dimensional coordinate, of a video object detected by the current camera exceeds a preset area threshold; and if so, determining that video object as the target video object, i.e. the video objects match successfully. The third three-dimensional coordinate is the three-dimensional coordinate of the target video object under the third coordinate system corresponding to the current camera, the current camera being any camera in the directing camera system other than the binocular camera. For example, in a multi-directing-camera scene, the current camera may be any binocular camera other than the binocular camera of the master position.
In a specific embodiment, the purpose of multi-position video object three-dimensional locating is to calculate, from the three-dimensional coordinate of a video object in the binocular coordinate system of one directing camera, its three-dimensional coordinate in the coordinate system of another directing camera's binocular camera or of a certain PTZ camera. Given the coordinate vector X1 of an observed point (i.e. a video object, specifically a certain feature point of the video object) in camera D1, and the extrinsics R21, t21 of camera D2 relative to camera D1 (obtained by binocular camera calibration), the coordinate vector X2 of the point in camera D2 can be calculated as:
X2 = R21 X1 + t21
Specifically, refer to Fig. 4a, a schematic diagram of an object matching scene provided by an embodiment of the present invention. Multi-position video object three-dimensional locating can be used to determine the correspondence between multiple video objects. As shown in Fig. 4a, three positions with cameras D1, D2 and D3 are deployed in the scene, which contains three participants O1, O2 and O3. Further, as shown in Fig. 4b, which is one group of video object images from Fig. 4a, participant O1 is imaged in directing cameras D1, D2 and D3 at different viewing angles. For participant O1, the binocular camera at position D1 detects a video object VO11 using an algorithm such as face detection, then obtains its three-dimensional position in the D1 binocular coordinate system by binocular three-dimensional locating. Likewise, D2 and D3 detect video objects VO12 and VO13 and calculate their three-dimensional coordinates under the D2 and D3 binocular coordinate systems. During multi-position video object three-dimensional locating, the calibrated position relationships between D1, D2 and D3 can be used to transform the three-dimensional position of VO11 from the D1 coordinate system into the D2 and D3 coordinate systems, and the overlap regions are then detected. If the overlap of the transformed three-dimensional position of VO11 with the positions of VO12 and VO13 exceeds a certain area threshold, VO11, VO12 and VO13 can be considered the same video object: the video object matching succeeds. Further, if there are multiple video objects close to one another in the image, determining the correspondence purely by position overlap may cause matching errors. Therefore the image information of the video objects can additionally be combined, and the accuracy of the determined correspondence improved by a matching algorithm, for example a template matching algorithm: the two-dimensional image of a video object detected by the binocular camera of one position, e.g. the master position, serves as the known template and is matched one by one against the video objects detected by the binocular cameras of the other positions; algorithms such as squared-difference matching or correlation matching find the object that best matches this video object, thereby establishing the correspondence between objects.
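The overlap test described above can be sketched with axis-aligned rectangles standing in for the projected object regions; the 0.5 threshold and the normalization by the smaller area are illustrative choices, not prescribed by the text:

```python
def rect_overlap_ratio(a, b):
    """Overlap area between two axis-aligned rectangles (x, y, w, h),
    normalized by the smaller rectangle's area."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    ix = max(0.0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0.0, min(ay + ah, by + bh) - max(ay, by))
    return (ix * iy) / min(aw * ah, bw * bh)

def match_objects(target_rect, candidates, threshold=0.5):
    """IDs of candidates whose region overlap with the transformed
    target region exceeds the threshold (i.e. match successfully)."""
    return [oid for oid, rect in candidates.items()
            if rect_overlap_ratio(target_rect, rect) > threshold]

# VO11 transformed into D2's frame overlaps VO12 but not a distant object.
cands = {"VO12": (10.0, 10.0, 4.0, 4.0), "VO_other": (50.0, 50.0, 4.0, 4.0)}
matched = match_objects((11.0, 11.0, 4.0, 4.0), cands)
```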
Further, after the binocular three-dimensional locating and the PTZ three-dimensional locating have been determined, video object detection/tracking and scene modeling can be performed. The purpose of video object detection/tracking is to build descriptions of the video objects present in the scene and to track and identify those objects. Video objects include participant objects and scene objects such as lamp tubes, windows and conference tables. The system needs to process cyclically, including performing face detection and matching, human-figure detection and matching, moving-object detection and matching, and scene-object detection and matching on the input image data of the binocular cameras, establishing models for the video objects and updating the model parameters, and thereby modeling the whole shooting scene from the object models obtained by detection. The scene model obtained by this modeling can be used in subsequent object identification and directing-strategy processing. Face detection can be used to detect video objects at close range, such as nearby participants; for farther regions, where the face area is too small to be detected well, human-figure or moving-object detection can be used instead. Face detection can acquire the parameters of a face video object, including the two-dimensional coordinates of the face's bounding rectangle, the coordinate of its center point, the rotation angles of the rectangular region about the face's coordinate axes (representing the yaw, pitch and roll of the face), and the positions of organs such as eyes, nose and mouth within the face.
Further, after a video object is detected in each frame, the video object also needs to be tracked across the video frame sequence, so as to establish the temporal correspondence of the video object. Widely used traditional video object tracking algorithms include gray-level template matching, MeanShift, CamShift, Kalman filtering, and so on. Video object matching can also be applied within the binocular camera: using the region of a video object detected in one camera image of the binocular pair, the corresponding video image region is found in the other camera image, so that feature matching and three-dimensional coordinate calculation can be carried out in the matched region of the video object. The matching algorithms for video objects are similar to the tracking algorithms; gray-level template matching, MeanShift and similar algorithms can be used.
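Gray-level template matching, mentioned for both tracking and left/right matching, can be sketched as an exhaustive sum-of-squared-differences search; real implementations use correlation in the frequency domain or library routines, so this is illustration only:

```python
import numpy as np

def template_match_ssd(image, template):
    """Exhaustive gray-level template matching by sum of squared
    differences; returns the (row, col) of the best-matching window."""
    ih, iw = image.shape
    th, tw = template.shape
    best, best_pos = None, (0, 0)
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            window = image[r:r + th, c:c + tw]
            ssd = float(np.sum((window - template) ** 2))
            if best is None or ssd < best:
                best, best_pos = ssd, (r, c)
    return best_pos

# Plant a distinctive 2x2 patch at (3, 5) and recover its position.
img = np.zeros((8, 10))
patch = np.array([[9.0, 8.0], [7.0, 6.0]])
img[3:5, 5:7] = patch
pos = template_match_ssd(img, patch)
```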
In the embodiment of the present invention, a video object can be represented by its features; commonly used features include feature points, image texture and histogram information. Feature detection and matching can be carried out within the detected video object region, so that the three-dimensional position of the video object, i.e. its three-dimensional coordinates, can be computed from the feature point information, and the video object can be tracked using the texture and histogram information. Feature points are the main feature type; feature point detection algorithms include Harris corner detection, SIFT feature point detection and the like. Further, feature matching is used to establish the correspondence between the features of the same video object in the two views of the binocular camera: feature points can be matched with algorithms such as FLANN or the KLT optical flow method, image texture can be matched with gray-scale template matching, and histograms can be matched with histogram matching algorithms. In summary, from the feature correspondences obtained by matching, together with the binocular camera three-dimensional positioning described above, the three-dimensional coordinates of the video object features in the three-dimensional coordinate system of a single director camera can be computed, so that a particular video object can be located and tracked in three-dimensional space.
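As an illustration of the histogram matching named above, the following sketch compares normalized gray-level histograms of two candidate regions with the Bhattacharyya coefficient, one common histogram similarity measure; the bin count and the sample pixel data are assumptions for the example.

```python
# Hedged sketch: histogram matching of two candidate video object
# regions via the Bhattacharyya coefficient. Bin count and pixel data
# are illustrative assumptions.
import math

def histogram(pixels, bins=4, max_val=256):
    """Normalized gray-level histogram of an iterable of gray values."""
    counts = [0] * bins
    for p in pixels:
        counts[p * bins // max_val] += 1
    total = float(len(pixels))
    return [c / total for c in counts]

def bhattacharyya(h1, h2):
    """Similarity in [0, 1]; 1.0 means identical distributions."""
    return sum(math.sqrt(a * b) for a, b in zip(h1, h2))

left_region  = [10, 20, 200, 210, 30, 220]   # object in the left view
right_region = [12, 25, 205, 215, 28, 218]   # same object, right view
other_region = [250, 250, 250, 250, 250, 250]  # a different object

h_l, h_r, h_o = (histogram(r) for r in (left_region, right_region, other_region))
assert bhattacharyya(h_l, h_r) > bhattacharyya(h_l, h_o)
```

The same comparison can serve both binocular matching (left view vs. right view) and temporal tracking (previous frame vs. current frame).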
Further, from the results of the video object detection and matching, the feature detection and matching, and the video object three-dimensional position computation, models of the multiple video objects can be established in the coordinate system of a single director camera, and the model data can be updated by the face, human-shape and motion detection and tracking algorithms. Specifically, each video object model can be assigned a unique ID, and the data in the model represent the attributes of that video object. For example, for a moving-object model, the data in the model may include attributes such as the object ID, the two-dimensional coordinates of the bounding rectangle, the three-dimensional coordinates of the object feature points, the texture data of the moving region, and histogram data. When the position of the moving object changes, its attributes can be refreshed from the output of the above detection and matching algorithms, while the object ID remains unchanged. Face and human-shape object models are established similarly to the moving-object model and are not described again here.
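The object model described above, a unique ID plus refreshable attributes, can be sketched as follows; the class and field names are invented for illustration and simply mirror the attributes the text lists.

```python
# Illustrative sketch of a video object model: unique ID, refreshable
# attributes. All names are assumptions, not the patent's data layout.
import itertools

_next_id = itertools.count(1)

class VideoObjectModel:
    def __init__(self, kind, bbox_2d, feature_points_3d):
        self.object_id = next(_next_id)   # unique, never changes
        self.kind = kind                  # "face" / "human" / "moving"
        self.bbox_2d = bbox_2d            # bounding rectangle (x, y, w, h)
        self.feature_points_3d = feature_points_3d
        self.texture = None               # moving-region texture data
        self.histogram = None             # histogram data

    def refresh(self, bbox_2d=None, feature_points_3d=None,
                texture=None, histogram=None):
        """Update attributes from detection/matching output; keep the ID."""
        if bbox_2d is not None:
            self.bbox_2d = bbox_2d
        if feature_points_3d is not None:
            self.feature_points_3d = feature_points_3d
        if texture is not None:
            self.texture = texture
        if histogram is not None:
            self.histogram = histogram

obj = VideoObjectModel("moving", (10, 20, 50, 80), [(0.5, 0.2, 2.0)])
old_id = obj.object_id
obj.refresh(bbox_2d=(12, 22, 50, 80))   # the object moved
assert obj.object_id == old_id          # ID is stable across updates
```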
It should be understood that, in a multi-position application scenario, the multiple director cameras can exchange video object model data through network communication. After a single director camera has obtained the video object model data of the other director cameras, it can establish the correspondence between the video object models using the multi-director-camera three-dimensional positioning and video object matching algorithms described above, and thereby obtain a directing strategy for the whole scene. The network communication protocol may be a standard protocol such as HTTP, or a custom protocol; the video object model data are formatted, packed and transmitted according to a certain format, such as the eXtensible Markup Language ("XML") format. By matching and merging the video object models of the multiple director cameras, a single director camera can build a model of the whole shooting scene. The scene model contains the models of the multiple video objects, and reflects the features of the video objects and their distribution in three-dimensional space. The director camera needs to maintain the scene model, including adding and deleting object models and updating object model attributes. For example, when a new participant appears in the scene and the binocular camera detects a new face or human-shape object, an object model is established and added to the object model set; when a participant leaves the scene, the model of that object is deleted; when the position of a participant changes, the parameters of the corresponding object model are updated. The directing strategy is then formulated from the up-to-date video object models, and the camera at the optimal position is selected for shooting.
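The XML packing of video object model data mentioned above might look like the following round-trip sketch; the element and attribute names are assumptions for illustration, not a format defined by the patent.

```python
# Hedged sketch: serializing video object models to XML for exchange
# between director cameras. Element/attribute names are assumed.
import xml.etree.ElementTree as ET

def models_to_xml(models):
    root = ET.Element("scene_model")
    for m in models:
        obj = ET.SubElement(root, "video_object", id=str(m["id"]),
                            kind=m["kind"])
        x, y, w, h = m["bbox"]
        ET.SubElement(obj, "bbox", x=str(x), y=str(y), w=str(w), h=str(h))
    return ET.tostring(root, encoding="unicode")

def xml_to_models(text):
    models = []
    for obj in ET.fromstring(text).findall("video_object"):
        b = obj.find("bbox")
        models.append({"id": int(obj.get("id")), "kind": obj.get("kind"),
                       "bbox": tuple(int(b.get(k)) for k in "xywh")})
    return models

sent = [{"id": 7, "kind": "face", "bbox": (10, 20, 30, 40)}]
assert xml_to_models(models_to_xml(sent)) == sent   # lossless round trip
```

A receiving director camera would merge the parsed models into its own scene model by the matching algorithms described above.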
104. The camera parameters of the target camera are adjusted to the camera parameters corresponding to the first three-dimensional coordinates, and the video image after the camera parameter adjustment is output.
In a specific embodiment, after the video object models including all the video objects have been established (or updated) and the target video object has been determined, one or more director camera positions with the best shooting effect can be selected according to the preset directing strategy, for example by determining the cameras with the better shooting effect according to the eye-contact effect parameter, the occlusion relationship parameter, and the scene object parameter of the shooting area. Specifically, the eye-contact effect is determined by the angle between the face/human-shape object and the optical axis of the PTZ camera: the smaller the angle, the more frontally the face is presented and the better the eye-contact effect. Specifically, the face/human-shape detection algorithm can obtain the rotation angles (yaw, pitch and roll) of the three-dimensional coordinate frame centered on the face/human shape relative to the binocular camera coordinate system, and the coordinate system conversion formulas between the cameras described above can be used to convert the rotation angles of the face/human shape relative to the binocular camera into rotation angles relative to the PTZ camera. The conversion uses the previously calibrated extrinsic parameters between the binocular camera and the PTZ camera, and between the binocular cameras at different positions, to determine the eye-contact effect parameter. Further, for each video object a PTZ camera priority queue of the eye-contact effect can be established, in which cameras with a better eye-contact effect have higher priority.
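The per-object priority queue for the eye-contact effect can be illustrated with a small heap-based sketch; the camera IDs and deviation angles are invented sample data.

```python
# Sketch of the eye-contact priority queue: smaller angle between the
# face direction and a camera's optical axis -> higher priority.
# Camera names and angles are illustrative assumptions.
import heapq

def eye_contact_queue(angles_by_camera):
    """Return camera IDs ordered from best (smallest angle) to worst."""
    heap = [(angle, cam) for cam, angle in angles_by_camera.items()]
    heapq.heapify(heap)
    return [heapq.heappop(heap)[1] for _ in range(len(heap))]

# yaw/pitch deviation of the face from each camera's optical axis (degrees)
angles = {"ptz_1": 35.0, "ptz_2": 5.0, "ptz_3": 18.0}
print(eye_contact_queue(angles))  # -> ['ptz_2', 'ptz_3', 'ptz_1']
```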
Further, when obtaining the occlusion relationship of the video objects, the region (for example, the bounding rectangle) of a video object detected by a certain director camera is known. According to the camera projection equations, and using the calibrated extrinsic parameters between the binocular camera and the PTZ camera of a single director camera and between the binocular cameras at different positions, this region can be re-projected onto the imaging plane of the PTZ camera at each position. If the regions of two video objects overlap, their depth information can be used to determine the occlusion relationship between the two objects, i.e. the video object closer to the binocular camera occludes the farther one. Thus, a PTZ camera priority queue of the occlusion relationship can be established for each video object, in which unoccluded cameras have higher priority.
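The occlusion test described above can be sketched as follows, under the assumption that both object regions have already been re-projected onto the same PTZ camera's image plane; the rectangles and depths are sample values.

```python
# Hedged sketch of the occlusion test: two re-projected regions on one
# PTZ camera's image plane overlap -> the object with the smaller depth
# occludes the other. Rectangles are (x, y, w, h) in pixels.

def overlap(r1, r2):
    x1, y1, w1, h1 = r1
    x2, y2, w2, h2 = r2
    return (x1 < x2 + w2 and x2 < x1 + w1 and
            y1 < y2 + h2 and y2 < y1 + h1)

def occludes(region_a, depth_a, region_b, depth_b):
    """True if object A blocks object B in this camera's view."""
    return overlap(region_a, region_b) and depth_a < depth_b

near = ((100, 100, 80, 120), 1.5)   # 1.5 m from the camera
far  = ((140, 120, 80, 120), 3.0)   # behind it, regions overlap
assert occludes(*near, *far)
assert not occludes(*far, *near)
```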
Further, in addition to person-based video object detection, the system also detects other video objects of interest in the scene (scene objects), such as lamps, windows and conference tables. These objects can be detected using algorithms based on image color and edge features. For example, to detect a lamp, the edges of the lamp can first be extracted with the Canny operator to obtain its long straight-line features, and the adjacent area is then checked for overexposed pixel regions (light-emitting regions); from these two features the lamp object can be detected and the coordinates of its bounding rectangle obtained. Window detection is similar to lamp detection: quadrilateral features are obtained by edge detection, and whether the quadrilateral is a window is then judged according to whether an overexposed pixel region of a certain area exists inside it. A conference table can also be detected using the edge features in the image. When obtaining the scene object parameters, the region of a scene object detected by a certain director camera is known; according to the camera projection equations, and using the calibrated extrinsic parameters between the binocular camera and the PTZ camera of a single director camera and between the binocular cameras at different positions, this region is re-projected onto the imaging plane of the PTZ camera at each position. Objects such as lamps and windows usually produce large overexposed regions, which degrade the automatic exposure of the camera and darken the scene, while scene objects such as desks may have large red or yellow color regions, which cause color cast in the camera's automatic white balance; such scene objects should be kept out of the picture as far as possible. Thus, a PTZ camera priority queue can be established according to whether a camera can capture scene objects that are unfavorable to the image effect, in which cameras less likely to capture such scene objects have higher priority.
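The overexposure check used in the lamp/window detection above might be sketched as follows; the saturation threshold and the minimum fraction are assumed example values.

```python
# Illustrative sketch: decide whether a candidate region next to an
# extracted edge is a light-emitting (overexposed) region. Threshold
# values are assumptions, not the patent's.

def overexposed_fraction(region, threshold=250):
    """Fraction of pixels at or above the saturation threshold."""
    pixels = [p for row in region for p in row]
    return sum(p >= threshold for p in pixels) / float(len(pixels))

def looks_like_light_source(region, min_fraction=0.5):
    return overexposed_fraction(region) >= min_fraction

lamp_region = [[255, 254, 253],
               [255, 251, 120]]
wall_region = [[90, 95, 100],
               [88, 92, 255]]
assert looks_like_light_source(lamp_region)
assert not looks_like_light_source(wall_region)
```

Combined with the long straight-line (lamp) or quadrilateral (window) edge features, this check yields the scene object's bounding rectangle for re-projection.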
Further, a director camera such as the director camera at the master position establishes a priority queue over the PTZ cameras at each position according to the obtained image effect parameters and the preset directing strategy, and can thereby determine the camera to be selected. Specifically, one or more video objects to be shot, i.e. the target video objects, can be determined in advance: for example the speaking video object determined from a sound source localization result, for which a close-up is to be shot; or, when an AutoFrame strategy is used and all video objects are taken as the target video objects, the Pan/Tilt can be adjusted so that all video objects in the scene fall within the shooting range, and the Zoom adjusted so that the objects have a suitable size, and so on. For the target shooting object, a comprehensive PTZ camera priority queue can be determined according to a certain directing strategy from the PTZ camera priority queues of the eye-contact effect parameter, the occlusion relationship parameter and the scene object parameter. The directing strategy may be computed automatically by the system or set in advance by the user; the embodiment of the present invention does not limit this.
As an example, suppose that priority is given to the PTZ cameras with the best eye-contact effect and no occlusion; if multiple cameras satisfy these conditions, the PTZ camera with the best scene object parameters, i.e. the best image effect, is then selected as the target camera for shooting. After the PTZ camera selection is completed, the master position can adjust the PTZ parameters of the selected PTZ camera according to the three-dimensional coordinates of the target video object, to obtain the best possible image effect. For example, during voice tracking, objects such as lamps and windows that affect the image brightness are avoided when shooting a close-up of a participant; during AutoFrame, capturing a large desk area that would affect the white balance of the image is avoided when adjusting the Zoom size, and so on.
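One possible way to merge the three priority queues into the comprehensive queue is a simple rank-sum rule, sketched below; this is an illustrative strategy of the example's own devising, not the one the patent prescribes.

```python
# Hedged sketch: combine the eye-contact, occlusion and scene-object
# priority queues into one comprehensive ranking via rank sums.
# Queue contents are invented sample data.

def combined_ranking(*queues):
    """Each queue lists camera IDs best-first; lower total rank wins."""
    cams = set().union(*map(set, queues))
    score = {c: sum(q.index(c) for q in queues) for c in cams}
    return sorted(cams, key=lambda c: (score[c], c))

eye_q     = ["ptz_2", "ptz_3", "ptz_1"]   # best eye-contact first
occlude_q = ["ptz_2", "ptz_1", "ptz_3"]   # least occluded first
scene_q   = ["ptz_3", "ptz_2", "ptz_1"]   # fewest bad scene objects first

target = combined_ranking(eye_q, occlude_q, scene_q)[0]
print(target)  # -> ptz_2  (rank sum 0+0+1 = 1, the lowest)
```

A weighted sum, or the lexicographic rule from the example above (eye contact first, then occlusion, then scene objects), would be equally valid strategies.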
Further, the master position (the director camera at the master position) can output the video image or the ID of the selected PTZ camera. Optionally, in a multi-director-camera system that supports video cascading, the master position can directly output the image of the selected camera; in a multi-director-camera system that outputs through a video matrix, the master position can output the ID of the selected PTZ camera to the video matrix through a communication interface (such as a serial port or a network port), and the video matrix completes the switching of the camera image.
In the embodiment of the present invention, after the target video object to be shot is determined, the target camera that shoots this target video object with the best effect is selected from the cameras of the director camera system according to the preset directing strategy, and the three-dimensional coordinates of the target video object in the coordinate system corresponding to the target camera are acquired, so that the target camera can adjust its camera parameters according to the three-dimensional coordinates of the target video object and output the video image after the camera parameter adjustment. The director camera system can thus operate based on three-dimensional coordinate detection and the preset directing strategy, which improves the precision of video object detection and tracking, improves the efficiency of camera parameter adjustment, and effectively improves the shooting effect of the camera.
Refer to Fig. 5, which is a schematic structural diagram of a parameter adjustment apparatus provided by an embodiment of the present invention. Specifically, the apparatus of the embodiment of the present invention may be arranged in the director camera described above. As shown in Fig. 5, the parameter adjustment apparatus of the embodiment of the present invention may include an object determining unit 10, a selection unit 20, an acquiring unit 30 and a parameter adjustment unit 40. Wherein,
the object determining unit 10 is configured to determine the target video object to be shot;
the selection unit 20 is configured to select, according to a preset directing strategy, the target camera for shooting the target video object from the cameras of the director camera system in which the director camera is located.
Optionally, the shooting effect parameters may include any one or more of the eye-contact effect parameter, the occlusion relationship parameter, and the scene object parameter of the shooting area of the target video object in the coordinate system corresponding to the current camera, where the current camera is any camera in the director camera system other than the binocular camera.
The eye-contact effect parameter may include the rotation angles of the target video object relative to the coordinate system corresponding to the current camera; the rotation angles may be determined from the rotation angles of the target video object in the second coordinate system and the pre-calibrated positional relationship between the binocular camera and the current camera.
The occlusion relationship parameter and the scene object parameter may be determined by re-projecting the region of the scene object detected by the current camera onto the imaging plane of the current camera according to the pre-calibrated positional relationship between the binocular camera and the current camera.
The acquiring unit 30 is configured to obtain the first three-dimensional coordinates of the target video object.
The first three-dimensional coordinates may be the three-dimensional coordinates of the target video object in the first coordinate system corresponding to the target camera. The target camera may be the director camera described above or an ordinary PTZ camera, and the first coordinate system corresponding to the target camera may be a three-dimensional coordinate system established with the optical center of the target camera as the origin, or with any other reference as the origin; the embodiment of the present invention does not limit this.
The parameter adjustment unit 40 is configured to adjust the camera parameters of the target camera to the camera parameters corresponding to the first three-dimensional coordinates, and to output the video image after the camera parameter adjustment.
Optionally, the acquiring unit 30 may be specifically configured to:
obtain the second three-dimensional coordinates transmitted by the binocular camera connected to the director camera, the second three-dimensional coordinates being the three-dimensional coordinates of the target video object in the second coordinate system corresponding to the binocular camera; and
convert the second three-dimensional coordinates into the first three-dimensional coordinates according to the pre-calibrated positional relationship between the binocular camera and the target camera.
Further optionally, the second three-dimensional coordinates may be computed by the binocular camera from the two-dimensional coordinates of the target video object acquired separately in the left view and the right view of the binocular camera, together with the acquired intrinsic and extrinsic parameters of the binocular camera. Here, the second three-dimensional coordinates are the three-dimensional coordinates of the target video object in the second coordinate system corresponding to the binocular camera; this second coordinate system may be a three-dimensional coordinate system established with the optical center of the binocular camera as the origin, or with any other reference as the origin. The two-dimensional coordinates may specifically be the pixel coordinates of the target video object in the left view and the right view of the binocular camera.
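The computation of the second three-dimensional coordinates from the left/right pixel coordinates can be sketched under the simplifying assumption of an ideal rectified binocular rig; the focal length, principal point and baseline below are invented sample intrinsics, not calibration values from the patent.

```python
# Minimal triangulation sketch for an ideal rectified stereo pair:
# depth Z = f * B / disparity. Intrinsics are illustrative assumptions.

def triangulate(u_left, v_left, u_right, f=800.0, cx=640.0, cy=360.0,
                baseline=0.10):
    """Left/right pixel coords -> (X, Y, Z) in the left-camera frame
    (the binocular camera's "second coordinate system")."""
    disparity = u_left - u_right
    if disparity <= 0:
        raise ValueError("point must have positive disparity")
    z = f * baseline / disparity                 # depth in meters
    x = (u_left - cx) * z / f
    y = (v_left - cy) * z / f
    return (x, y, z)

# A point with 40 px of disparity: depth = 800 * 0.10 / 40 = 2.0 m
x, y, z = triangulate(u_left=720.0, v_left=400.0, u_right=680.0)
print(round(z, 2))  # -> 2.0
```

A real binocular camera would first rectify the views with the calibrated intrinsic and extrinsic parameters before applying this relation.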
Optionally, the object determining unit 10 may be specifically configured to:
obtain the shot image transmitted by the binocular camera, the shot image including at least one video object; and
establish a video object model including the at least one video object, and determine the target video object from the at least one video object.
The selection unit 20 may be specifically configured to:
determine the target video object in the shot images obtained from each camera of the director camera system, and obtain the shooting effect parameters of the target video object at each camera; and
determine the camera whose shooting effect parameters satisfy the preset directing strategy as the target camera for shooting the target video object.
Further optionally, the specific manner in which the selection unit 20 determines the target video object in the shot image obtained from a camera may be:
converting the second three-dimensional coordinates into third three-dimensional coordinates according to the pre-calibrated positional relationship between the binocular camera and the current camera, where the current camera is any camera in the director camera system other than the binocular camera, and the third three-dimensional coordinates are the three-dimensional coordinates of the target video object in the third coordinate system corresponding to the current camera;
judging whether the overlap area between the region of the target video object under the third three-dimensional coordinates and the region, under its own three-dimensional coordinates, of a video object detected by the current camera exceeds a preset area threshold; and
if it does, determining that video object as the target video object.
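The overlap-area test performed by the selection unit might be sketched as follows; the 0.5 area-ratio threshold is an assumed example value.

```python
# Hedged sketch: match the target object's region (re-projected into
# the current camera) against a detected object region by overlap area.
# Rectangles are (x, y, w, h); the threshold is an assumption.

def overlap_area(r1, r2):
    x1, y1, w1, h1 = r1
    x2, y2, w2, h2 = r2
    w = min(x1 + w1, x2 + w2) - max(x1, x2)
    h = min(y1 + h1, y2 + h2) - max(y1, y2)
    return max(w, 0) * max(h, 0)

def is_target(projected, detected, area_ratio_threshold=0.5):
    """Match if the overlap covers enough of the detected region."""
    area = float(detected[2] * detected[3])
    return overlap_area(projected, detected) / area > area_ratio_threshold

projected = (100, 100, 100, 100)   # target region in the current camera
detected  = (120, 110, 100, 100)   # candidate detected by that camera
assert is_target(projected, detected)
assert not is_target((400, 400, 50, 50), detected)
```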
In the embodiment of the present invention, after the target video object to be shot is determined, the target camera that shoots this target video object with the best effect is selected from the cameras of the director camera system according to the preset directing strategy, and the three-dimensional coordinates of the target video object in the coordinate system corresponding to the target camera are acquired, so that the target camera can adjust its camera parameters according to the three-dimensional coordinates of the target video object and output the video image after the camera parameter adjustment. The director camera system can thus operate based on three-dimensional coordinate detection and the preset directing strategy, which improves the precision of video object detection and tracking, improves the efficiency of camera parameter adjustment, and effectively improves the shooting effect of the camera.
Refer to Fig. 6, which is a schematic structural diagram of a director camera system provided by an embodiment of the present invention. Specifically, the director camera system of the embodiment of the present invention may include a first camera 1 and at least one second camera 2, the first camera 1 including a director camera 11 and a binocular camera 12; the director camera 11 and the binocular camera 12, and the first camera 1 and the second camera 2, may be connected through wired interfaces or wireless interfaces. Wherein,
the director camera 11 is configured to determine the target video object to be shot, and to select, according to a preset directing strategy, the target camera for shooting the target video object from the cameras of the director camera system;
the binocular camera 12 is configured to obtain the second three-dimensional coordinates of the target video object and transmit the second three-dimensional coordinates to the director camera 11, where the second three-dimensional coordinates are the three-dimensional coordinates of the target video object in the second coordinate system corresponding to the binocular camera 12;
the director camera 11 is configured to receive the second three-dimensional coordinates transmitted by the binocular camera 12; convert the second three-dimensional coordinates into first three-dimensional coordinates according to the pre-calibrated positional relationship between the binocular camera 12 and the target camera; adjust the camera parameters of the target camera to the camera parameters corresponding to the first three-dimensional coordinates, and output the video image after the camera parameter adjustment; where the first three-dimensional coordinates are the three-dimensional coordinates of the target video object in the first coordinate system corresponding to the target camera.
Optionally, the second camera 2 may also include a director camera and a binocular camera, in which case the target camera may be any director camera in the director camera system; or, the second camera 2 may be an ordinary PTZ camera, in which case the target camera may be either a director camera or an ordinary PTZ camera. Further optionally, the binocular camera 12 may be arranged on a preset director bracket and connected to the director camera 11 through the director bracket.
Specifically, Fig. 7 is a schematic structural diagram of a first camera provided by an embodiment of the present invention. The first camera includes a binocular camera and one or more director cameras. In the embodiment of the present invention it is assumed that the first camera is fitted with two director cameras for directed shooting and tracking, which can be connected, wired or wirelessly, to the binocular camera through a director bracket ("bracket" for short). The binocular camera is mounted on the bracket. In addition, microphones may also be installed on the bracket; the installed microphones may be in an array format, and such a microphone array can be used to realize functions such as sound source localization and sound source recognition, and may specifically include a horizontal-array microphone and an orthogonal-array microphone. Further, the director camera and the bracket may be separate or integrated together, and may communicate through a control interface such as a serial interface. In some embodiments, the director camera and the director bracket (including the binocular camera, the microphones, etc.) may also be integrated into one directing device; the embodiment of the present invention does not limit the connection form of the devices in the director camera system.
Further, refer to Fig. 8, which is a networking schematic diagram of a director camera system provided by an embodiment of the present invention. As shown in Fig. 8, multiple camera positions can be networked. Multi-position networking modes include: networking between multiple positions each fitted with a director camera; networking between a position fitted with a director camera and director bracket and multiple ordinary PTZ cameras; networking between a position fitted with a director camera and director bracket and a position without a PTZ camera (i.e. with only a director bracket); and networking between a position without a PTZ camera and multiple ordinary PTZ cameras (i.e. without a director bracket). The cameras at the positions can interconnect via LAN or Wi-Fi to transmit control messages, such as camera switching messages, and audio/video data, such as video object model data. Further optionally, the control messages can be transmitted over the Internet Protocol ("IP"), for example using an IP camera protocol stack. The binocular cameras at every two positions are required to have an overlapping shooting region. When a director camera needs to perform multi-channel video output, it may be connected to the video matrix of the networked system in which it resides, and the video matrix performs the switching of the output. Optionally, the switching policy of the video matrix may be controlled by any director camera in the scene designated as the master-position director camera, or by a third-party device; the embodiment of the present invention does not limit this. After the video image output by the video matrix is encoded by an encoding/decoding device, it can be transmitted to the far end, so as to realize a video conference. Specifically, if the number of cameras in the network is small, the video data can be processed in cascade (the director bracket supports video cascading); if the number is large, the video of the multiple cameras is all output to the video matrix for processing, and the video matrix switches or composes one or more camera video sources. Further, the bracket can externally provide video input/output interfaces, a LAN/Wi-Fi network interface, a serial interface, and so on. The video input interface is used to input the video of other external cameras; the video output interface is used to connect devices such as a terminal or a video matrix to output the video image; the serial interface provides a control and debugging interface for the bracket; and the LAN/Wi-Fi network interface is used for cascading multiple camera positions and can transmit audio/video data, control data, and so on.
Further, in a multi-director-camera networking scenario where the multiple director cameras all have video object detection capability and PTZ camera functionality, one of the director cameras can serve as the master position responsible for output position selection and PTZ camera control, while the other cameras serve as slave positions. In the scenario of a position with a director camera and director bracket plus multiple ordinary PTZ cameras, only one director camera has video object detection capability and is responsible for output position selection and PTZ camera control, while the ordinary cameras are used only as PTZ cameras; since only the director camera has video object detection capability, in this scenario the video object model data of the slave positions are not obtained over the network and no multi-camera video object model matching is performed.
Specifically, for the director camera and the binocular camera in the embodiment of the present invention, reference may be made to the related descriptions of the embodiments corresponding to Figs. 1-6 above, which are not repeated here.
Refer to Fig. 9, which is a schematic structural diagram of a director camera provided by an embodiment of the present invention, for performing the camera parameter adjustment method described above. Specifically, as shown in Fig. 9, the director camera of the embodiment of the present invention includes a communication interface 300, a memory 200 and a processor 100, the processor 100 being connected to the communication interface 300 and the memory 200 respectively. The memory 200 may be a high-speed RAM memory, or a non-volatile memory, for example at least one disk memory. The communication interface 300, the memory 200 and the processor 100 may be data-connected through a bus, or in other manners; a bus connection is used as the example in this embodiment. The device structure shown in Fig. 9 does not constitute a limitation of the embodiment of the present invention; the device may include more or fewer components than illustrated, combine some components, or have a different arrangement of components. Wherein:
The processor 100 is the control center of the device. It uses various interfaces and lines to connect the various parts of the whole device, and performs the various functions of the device and processes data by running or executing the programs and/or units stored in the memory 200 and by calling the driver software stored in the memory 200. The processor 100 may consist of integrated circuits ("ICs"), for example a single packaged IC, or multiple packaged ICs of the same or different functions connected together. For example, the processor 100 may include only a central processing unit ("CPU"), or may be a combination of a CPU, a digital signal processor ("DSP"), a graphics processing unit ("GPU") and various control chips. In the embodiment of the present invention, the CPU may have a single arithmetic core or may include multiple arithmetic cores.
The communication interface 300 may include a wired interface, a wireless interface, and the like.
The memory 200 may be configured to store the driver software (or software program) and units. By invoking the driver software and units stored in the memory 200, the processor 100 and the communication interface 300 execute the various functional applications of the device and perform data processing. The memory 200 mainly includes a program storage area and a data storage area, where the program storage area may store the driver software required by at least one function, and the data storage area may store data produced during the parameter adjustment process, such as the three-dimensional coordinate information described above.
Specifically, the processor 100 reads the driver software from the memory 200 and, under the action of the driver software, performs the following:
determining a target video object that needs to be shot;
screening out, according to a preset broadcast-directing policy, a target camera for shooting the target video object from the cameras of the broadcast-directing camera system in which the broadcast-directing camera is located;
obtaining a first three-dimensional coordinate of the target video object, where the first three-dimensional coordinate is a three-dimensional coordinate of the target video object in a first coordinate system corresponding to the target camera; and
adjusting a camera parameter of the target camera to a camera parameter corresponding to the first three-dimensional coordinate, and outputting the video image captured after the camera parameter is adjusted.
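The last step maps the target's 3D coordinate, expressed in the target camera's own frame, to concrete camera parameters; the embodiment does not spell out this mapping numerically. As a minimal illustrative sketch only (not the patented method), assuming an idealized pan-tilt camera whose frame has x to the right, y down, and z along the optical axis, the pan and tilt angles that center the target could be computed as:

```python
import math

def pan_tilt_for(point):
    """Pan/tilt angles (degrees) that aim the optical axis at a 3D point
    expressed in the camera's own frame (x right, y down, z forward).

    Hypothetical helper for illustration; the patent does not define
    this mapping or the axis convention assumed here.
    """
    x, y, z = point
    pan = math.degrees(math.atan2(x, z))                    # rotation about the vertical axis
    tilt = math.degrees(math.atan2(-y, math.hypot(x, z)))   # elevation above the horizontal
    return pan, tilt

print(pan_tilt_for((0.0, 0.0, 5.0)))  # target straight ahead: both angles are zero
```

A real pan-tilt-zoom head would additionally account for the mechanical zero of the mount and choose a zoom value from the target's distance; those details are installation-specific and omitted here.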
Optionally, when the processor 100 reads the driver software from the memory 200 and, under the action of the driver software, performs the step of obtaining the first three-dimensional coordinate of the target video object, it specifically performs the following steps:
obtaining, through the communication interface 300, a second three-dimensional coordinate transmitted by a binocular camera connected to the broadcast-directing camera, where the second three-dimensional coordinate is a three-dimensional coordinate of the target video object in a second coordinate system corresponding to the binocular camera; and
converting the second three-dimensional coordinate into the first three-dimensional coordinate according to a pre-calibrated position relationship between the binocular camera and the target camera.
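The conversion between the binocular camera's coordinate system and the target camera's coordinate system is a standard rigid-body transform once the extrinsic calibration between the two cameras is known. A minimal sketch, assuming the pre-calibrated position relationship takes the form of a rotation matrix R and a translation vector t (the patent only states that the relationship is calibrated in advance, without fixing its representation):

```python
import numpy as np

def to_first_frame(p_second, R, t):
    """Map a point from the binocular camera's frame (the 'second'
    coordinate system) into the target camera's frame (the 'first'
    coordinate system), given pre-calibrated extrinsics: a 3x3
    rotation R and a 3-vector translation t."""
    return R @ np.asarray(p_second, dtype=float) + t

# With an identity calibration the two frames coincide, so the point is unchanged.
R = np.eye(3)
t = np.zeros(3)
print(to_first_frame([1.0, 2.0, 3.0], R, t))  # [1. 2. 3.]
```

In practice R and t would come from a one-time stereo/extrinsic calibration step performed when the cameras are installed.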
Optionally, when the processor 100 reads the driver software from the memory 200 and, under the action of the driver software, performs the step of determining the target video object that needs to be shot, it specifically performs the following steps:
obtaining a shot image transmitted by the binocular camera, where the shot image includes at least one video object; and
establishing a video object model that includes the at least one video object, and determining the target video object from the at least one video object.
When the processor 100 reads the driver software from the memory 200 and, under the action of the driver software, performs the step of screening out, according to the preset broadcast-directing policy, the target camera for shooting the target video object from the cameras of the broadcast-directing camera system in which the broadcast-directing camera is located, it specifically performs the following steps:
determining the target video object in the shot images respectively obtained from the cameras of the broadcast-directing camera system, and obtaining a shooting effect parameter of the target video object at each camera; and
determining a camera whose shooting effect parameter meets the preset broadcast-directing policy as the target camera for shooting the target video object.
Optionally, when the processor 100 reads the driver software from the memory 200 and, under the action of the driver software, performs the step of determining the target video object in a shot image obtained from a camera, it specifically performs the following steps:
converting the second three-dimensional coordinate into a third three-dimensional coordinate according to a pre-calibrated position relationship between the binocular camera and a current camera, where the current camera is any camera in the broadcast-directing camera system other than the binocular camera, and the third three-dimensional coordinate is a three-dimensional coordinate of the target video object in a third coordinate system corresponding to the current camera;
determining whether the overlapping area between the region of the target video object under the third three-dimensional coordinate and the region, under its own three-dimensional coordinate, of a video object detected by the current camera exceeds a preset area threshold; and
if it does, determining that video object as the target video object.
Optionally, the shooting effect parameter may include any one or more of: a frontal-face effect parameter of the target video object in the coordinate system corresponding to the current camera, an occlusion relationship parameter, and a scene object parameter of the shooting area, where the current camera is any camera in the broadcast-directing camera system other than the binocular camera.
The frontal-face effect parameter may include a rotation angle of the target video object relative to the coordinate system corresponding to the current camera, where that rotation angle is determined according to the rotation angle of the target video object in the second coordinate system and the pre-calibrated position relationship between the binocular camera and the current camera.
The occlusion relationship parameter and the scene object parameter may be determined by re-projecting, according to the pre-calibrated position relationship between the binocular camera and the current camera, the region of a scene object detected by the current camera onto the imaging plane of the current camera.
In the foregoing embodiments, the description of each embodiment has its own emphasis. For a part that is not described in detail in one embodiment, reference may be made to the related descriptions of the other embodiments.
In the several embodiments provided by the present invention, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative. For example, the division into units is merely a logical function division; there may be other division manners in actual implementation. For example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the displayed or discussed mutual couplings, direct couplings, or communication connections may be indirect couplings or communication connections through some interfaces, apparatuses, or units, and may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separate, and components displayed as units may or may not be physical units; that is, they may be located in one place, or they may be distributed over multiple network elements. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware, or in the form of hardware plus a software functional unit.
The integrated unit implemented in the form of a software functional unit may be stored in a computer-readable storage medium. The software functional unit is stored in a storage medium and includes several instructions for enabling a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to perform some of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
Those skilled in the art can clearly understand that, for convenience and brevity of description, only the division into the foregoing functional units is used as an example for illustration. In practical applications, the foregoing functions may be allocated to different functional units as required; that is, the internal structure of the apparatus may be divided into different functional units to complete all or some of the functions described above. For the specific working process of the apparatus described above, reference may be made to the corresponding process in the foregoing method embodiments; details are not repeated here.
Finally, it should be noted that the foregoing embodiments are merely intended to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions described in the foregoing embodiments, or make equivalent replacements to some or all of the technical features therein; and such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present invention.
Claims (15)
1. A camera parameter adjustment method, the method being applied to a broadcast-directing camera, characterized by comprising:
determining a target video object that needs to be shot;
screening out, according to a preset broadcast-directing policy, a target camera for shooting the target video object from the cameras of a broadcast-directing camera system in which the broadcast-directing camera is located;
obtaining a first three-dimensional coordinate of the target video object, wherein the first three-dimensional coordinate is a three-dimensional coordinate of the target video object in a first coordinate system corresponding to the target camera; and
adjusting a camera parameter of the target camera to a camera parameter corresponding to the first three-dimensional coordinate, and outputting the video image captured after the camera parameter is adjusted.
2. The method according to claim 1, characterized in that the obtaining a first three-dimensional coordinate of the target video object comprises:
obtaining a second three-dimensional coordinate transmitted by a binocular camera connected to the broadcast-directing camera, wherein the second three-dimensional coordinate is a three-dimensional coordinate of the target video object in a second coordinate system corresponding to the binocular camera; and
converting the second three-dimensional coordinate into the first three-dimensional coordinate according to a pre-calibrated position relationship between the binocular camera and the target camera.
3. The method according to claim 2, characterized in that the determining a target video object that needs to be shot comprises:
obtaining a shot image transmitted by the binocular camera, wherein the shot image includes at least one video object; and
establishing a video object model including the at least one video object, and determining the target video object from the at least one video object;
and the screening out, according to a preset broadcast-directing policy, a target camera for shooting the target video object from the cameras of the broadcast-directing camera system in which the broadcast-directing camera is located comprises:
determining the target video object in the shot images respectively obtained from the cameras of the broadcast-directing camera system, and obtaining a shooting effect parameter of the target video object at each camera; and
determining a camera whose shooting effect parameter meets the preset broadcast-directing policy as the target camera for shooting the target video object.
4. The method according to claim 3, characterized in that the determining the target video object in a shot image obtained from a camera comprises:
converting the second three-dimensional coordinate into a third three-dimensional coordinate according to a pre-calibrated position relationship between the binocular camera and a current camera, wherein the current camera is any camera in the broadcast-directing camera system other than the binocular camera, and the third three-dimensional coordinate is a three-dimensional coordinate of the target video object in a third coordinate system corresponding to the current camera;
determining whether the overlapping area between the region of the target video object under the third three-dimensional coordinate and the region, under its own three-dimensional coordinate, of a video object detected by the current camera exceeds a preset area threshold; and
if it does, determining that video object as the target video object.
5. The method according to claim 3, characterized in that the shooting effect parameter includes any one or more of: a frontal-face effect parameter of the target video object in the coordinate system corresponding to a current camera, an occlusion relationship parameter, and a scene object parameter of the shooting area, wherein the current camera is any camera in the broadcast-directing camera system other than the binocular camera.
6. The method according to claim 5, characterized in that the frontal-face effect parameter includes a rotation angle of the target video object relative to the coordinate system corresponding to the current camera, wherein the rotation angle is determined according to the rotation angle of the target video object in the second coordinate system and the pre-calibrated position relationship between the binocular camera and the current camera.
7. The method according to claim 5, characterized in that the occlusion relationship parameter and the scene object parameter are determined by re-projecting, according to the pre-calibrated position relationship between the binocular camera and the current camera, the region of a scene object detected by the current camera onto the imaging plane of the current camera.
8. A broadcast-directing camera, characterized by comprising a memory and a processor, the processor being connected to the memory; wherein:
the memory is configured to store driver software; and
the processor reads the driver software from the memory and, under the action of the driver software, performs:
determining a target video object that needs to be shot;
screening out, according to a preset broadcast-directing policy, a target camera for shooting the target video object from the cameras of a broadcast-directing camera system in which the broadcast-directing camera is located;
obtaining a first three-dimensional coordinate of the target video object, wherein the first three-dimensional coordinate is a three-dimensional coordinate of the target video object in a first coordinate system corresponding to the target camera; and
adjusting a camera parameter of the target camera to a camera parameter corresponding to the first three-dimensional coordinate, and outputting the video image captured after the camera parameter is adjusted.
9. The broadcast-directing camera according to claim 8, characterized in that the broadcast-directing camera further comprises a communication interface connected to the processor; and when the processor reads the driver software from the memory and, under the action of the driver software, performs the obtaining a first three-dimensional coordinate of the target video object, it specifically performs the following steps:
obtaining, through the communication interface, a second three-dimensional coordinate transmitted by a binocular camera connected to the broadcast-directing camera, wherein the second three-dimensional coordinate is a three-dimensional coordinate of the target video object in a second coordinate system corresponding to the binocular camera; and
converting the second three-dimensional coordinate into the first three-dimensional coordinate according to a pre-calibrated position relationship between the binocular camera and the target camera.
10. The broadcast-directing camera according to claim 9, characterized in that when the processor reads the driver software from the memory and, under the action of the driver software, performs the determining a target video object that needs to be shot, it specifically performs the following steps:
obtaining a shot image transmitted by the binocular camera, wherein the shot image includes at least one video object; and
establishing a video object model including the at least one video object, and determining the target video object from the at least one video object;
and when the processor reads the driver software from the memory and, under the action of the driver software, performs the screening out, according to the preset broadcast-directing policy, the target camera for shooting the target video object from the cameras of the broadcast-directing camera system in which the broadcast-directing camera is located, it specifically performs the following steps:
determining the target video object in the shot images respectively obtained from the cameras of the broadcast-directing camera system, and obtaining a shooting effect parameter of the target video object at each camera; and
determining a camera whose shooting effect parameter meets the preset broadcast-directing policy as the target camera for shooting the target video object.
11. The broadcast-directing camera according to claim 10, characterized in that when the processor reads the driver software from the memory and, under the action of the driver software, performs the determining the target video object in a shot image obtained from a camera, it specifically performs the following steps:
converting the second three-dimensional coordinate into a third three-dimensional coordinate according to a pre-calibrated position relationship between the binocular camera and a current camera, wherein the current camera is any camera in the broadcast-directing camera system other than the binocular camera, and the third three-dimensional coordinate is a three-dimensional coordinate of the target video object in a third coordinate system corresponding to the current camera;
determining whether the overlapping area between the region of the target video object under the third three-dimensional coordinate and the region, under its own three-dimensional coordinate, of a video object detected by the current camera exceeds a preset area threshold; and
if it does, determining that video object as the target video object.
12. The broadcast-directing camera according to claim 10, characterized in that the shooting effect parameter includes any one or more of: a frontal-face effect parameter of the target video object in the coordinate system corresponding to a current camera, an occlusion relationship parameter, and a scene object parameter of the shooting area, wherein the current camera is any camera in the broadcast-directing camera system other than the binocular camera.
13. The broadcast-directing camera according to claim 12, characterized in that the frontal-face effect parameter includes a rotation angle of the target video object relative to the coordinate system corresponding to the current camera, wherein the rotation angle is determined according to the rotation angle of the target video object in the second coordinate system and the pre-calibrated position relationship between the binocular camera and the current camera.
14. The broadcast-directing camera according to claim 12, characterized in that the occlusion relationship parameter and the scene object parameter are determined by re-projecting, according to the pre-calibrated position relationship between the binocular camera and the current camera, the region of a scene object detected by the current camera onto the imaging plane of the current camera.
15. A broadcast-directing camera system, characterized by comprising a first camera and at least one second camera, the first camera including a broadcast-directing camera and a binocular camera, wherein the broadcast-directing camera and the binocular camera, as well as the first camera and the second camera, are connected through wired interfaces or wireless interfaces; and wherein:
the broadcast-directing camera is configured to determine a target video object that needs to be shot, and to screen out, according to a preset broadcast-directing policy, a target camera for shooting the target video object from the cameras of the broadcast-directing camera system;
the binocular camera is configured to obtain a second three-dimensional coordinate of the target video object and transmit the second three-dimensional coordinate to the broadcast-directing camera, wherein the second three-dimensional coordinate is a three-dimensional coordinate of the target video object in a second coordinate system corresponding to the binocular camera; and
the broadcast-directing camera is configured to: receive the second three-dimensional coordinate transmitted by the binocular camera; convert the second three-dimensional coordinate into a first three-dimensional coordinate according to a pre-calibrated position relationship between the binocular camera and the target camera; adjust a camera parameter of the target camera to a camera parameter corresponding to the first three-dimensional coordinate; and output the video image captured after the camera parameter is adjusted, wherein the first three-dimensional coordinate is a three-dimensional coordinate of the target video object in a first coordinate system corresponding to the target camera.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610562671.6A CN106251334B (en) | 2016-07-18 | 2016-07-18 | Camera parameter adjustment method, broadcast-directing camera, and system |
PCT/CN2017/091863 WO2018014730A1 (en) | 2016-07-18 | 2017-07-05 | Method for adjusting parameters of camera, broadcast-directing camera, and broadcast-directing filming system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610562671.6A CN106251334B (en) | 2016-07-18 | 2016-07-18 | Camera parameter adjustment method, broadcast-directing camera, and system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106251334A true CN106251334A (en) | 2016-12-21 |
CN106251334B CN106251334B (en) | 2019-03-01 |
Family
ID=57613157
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610562671.6A Active CN106251334B (en) | 2016-07-18 | 2016-07-18 | Camera parameter adjustment method, broadcast-directing camera, and system |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN106251334B (en) |
WO (1) | WO2018014730A1 (en) |
Cited By (25)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2018014730A1 (en) * | 2016-07-18 | 2018-01-25 | 华为技术有限公司 | Method for adjusting parameters of camera, broadcast-directing camera, and broadcast-directing filming system |
CN108900860A (en) * | 2018-08-23 | 2018-11-27 | 佛山龙眼传媒科技有限公司 | A kind of instructor in broadcasting's control method and device |
CN109031201A (en) * | 2018-06-01 | 2018-12-18 | 深圳市鹰硕技术有限公司 | The voice localization method and device of Behavior-based control identification |
CN109218651A (en) * | 2017-06-30 | 2019-01-15 | 宝利通公司 | Optimal view selection method in video conference |
CN109413359A (en) * | 2017-08-16 | 2019-03-01 | 华为技术有限公司 | Camera tracking method, device and equipment |
CN109712188A (en) * | 2018-12-28 | 2019-05-03 | 科大讯飞股份有限公司 | A kind of method for tracking target and device |
CN109922251A (en) * | 2017-12-12 | 2019-06-21 | 华为技术有限公司 | The method, apparatus and system quickly captured |
WO2019206247A1 (en) * | 2018-04-27 | 2019-10-31 | Shanghai Truthvision Information Technology Co., Ltd | System and method for camera calibration |
CN110456829A (en) * | 2019-08-07 | 2019-11-15 | 深圳市维海德技术股份有限公司 | Positioning and tracing method, device and computer readable storage medium |
CN110737798A (en) * | 2019-09-26 | 2020-01-31 | 万翼科技有限公司 | Indoor inspection method and related product |
CN110910460A (en) * | 2018-12-27 | 2020-03-24 | 北京爱笔科技有限公司 | Method and device for acquiring position information and calibration equipment |
CN111080698A (en) * | 2019-11-27 | 2020-04-28 | 上海新时达机器人有限公司 | Long plate position calibration method and system and storage device |
CN111131697A (en) * | 2019-12-23 | 2020-05-08 | 北京中广上洋科技股份有限公司 | Multi-camera intelligent tracking shooting method, system, equipment and storage medium |
CN111353368A (en) * | 2019-08-19 | 2020-06-30 | 深圳市鸿合创新信息技术有限责任公司 | Pan-tilt camera, face feature processing method and device and electronic equipment |
CN111698467A (en) * | 2020-05-08 | 2020-09-22 | 北京中广上洋科技股份有限公司 | Intelligent tracking method and system based on multiple cameras |
CN111787243A (en) * | 2019-07-31 | 2020-10-16 | 北京沃东天骏信息技术有限公司 | Broadcasting guide method, device and computer readable storage medium |
CN111800590A (en) * | 2020-07-06 | 2020-10-20 | 深圳博为教育科技有限公司 | Method, device and system for controlling director and control host |
CN112468680A (en) * | 2019-09-09 | 2021-03-09 | 上海御正文化传播有限公司 | Processing method of advertisement shooting site synthesis processing system |
CN112802058A (en) * | 2021-01-21 | 2021-05-14 | 北京首都机场航空安保有限公司 | Method and device for tracking illegal moving target |
CN112887653A (en) * | 2021-01-25 | 2021-06-01 | 联想(北京)有限公司 | Information processing method and information processing device |
CN113271482A (en) * | 2021-05-17 | 2021-08-17 | 广东彼雍德云教育科技有限公司 | Portable full-width image scratching blackboard |
CN113453021A (en) * | 2021-03-24 | 2021-09-28 | 北京国际云转播科技有限公司 | Artificial intelligence broadcasting guide method, system, server and computer readable storage medium |
CN113516717A (en) * | 2020-04-10 | 2021-10-19 | 富华科精密工业(深圳)有限公司 | Camera device external parameter calibration method, electronic equipment and storage medium |
CN113808199A (en) * | 2020-06-17 | 2021-12-17 | 华为技术有限公司 | Positioning method, electronic equipment and positioning system |
CN116389660A (en) * | 2021-12-22 | 2023-07-04 | 广州开得联智能科技有限公司 | Recorded broadcast guiding method, recorded broadcast guiding device, recorded broadcast guiding equipment and storage medium |
Families Citing this family (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110969662B (en) * | 2018-09-28 | 2023-09-26 | 杭州海康威视数字技术股份有限公司 | Method and device for calibrating internal parameters of fish-eye camera, calibration device controller and system |
CN111243029B (en) * | 2018-11-28 | 2023-06-23 | 驭势(上海)汽车科技有限公司 | Calibration method and device of vision sensor |
CN111325790B (en) * | 2019-07-09 | 2024-02-20 | 杭州海康威视系统技术有限公司 | Target tracking method, device and system |
CN111080679B (en) * | 2020-01-02 | 2023-04-18 | 东南大学 | Method for dynamically tracking and positioning indoor personnel in large-scale place |
CN112819770B (en) * | 2021-01-26 | 2022-11-22 | 中国人民解放军陆军军医大学第一附属医院 | Iodine contrast agent allergy monitoring method and system |
CN113129376A (en) * | 2021-04-22 | 2021-07-16 | 青岛联合创智科技有限公司 | Checkerboard-based camera real-time positioning method |
CN113587895B (en) * | 2021-07-30 | 2023-06-30 | 杭州三坛医疗科技有限公司 | Binocular distance measuring method and device |
CN113610932B (en) * | 2021-08-20 | 2024-06-04 | 苏州智加科技有限公司 | Binocular camera external parameter calibration method and device |
CN113838146A (en) * | 2021-09-26 | 2021-12-24 | 昆山丘钛光电科技有限公司 | Method and device for verifying calibration precision of camera module and method and device for testing camera module |
CN114025107B (en) * | 2021-12-01 | 2023-12-01 | 北京七维视觉科技有限公司 | Image ghost shooting method, device, storage medium and fusion processor |
CN117523431A (en) * | 2023-11-17 | 2024-02-06 | 中国科学技术大学 | Firework detection method and device, electronic equipment and storage medium |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20090303329A1 (en) * | 2008-06-06 | 2009-12-10 | Mitsunori Morisaki | Object image displaying system |
CN101630406A (en) * | 2008-07-14 | 2010-01-20 | 深圳华为通信技术有限公司 | Camera calibration method and camera calibration device |
CN102843540B (en) * | 2011-06-20 | 2015-07-29 | 宝利通公司 | Automatic camera selection for video conferencing |
CN104869365A (en) * | 2015-06-02 | 2015-08-26 | 阔地教育科技有限公司 | Mouse tracking method and device based on a live recording-and-broadcasting system |
CN105049764A (en) * | 2015-06-17 | 2015-11-11 | 武汉智亿方科技有限公司 | Image tracking method and system for teaching based on multiple positioning cameras |
CN105718862A (en) * | 2016-01-15 | 2016-06-29 | 北京市博汇科技股份有限公司 | Method, device and recording-broadcasting system for automatically tracking teacher via single camera |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8537195B2 (en) * | 2011-02-09 | 2013-09-17 | Polycom, Inc. | Automatic video layouts for multi-stream multi-site telepresence conferencing system |
CN106251334B (en) * | 2016-07-18 | 2019-03-01 | 华为技术有限公司 | Camera parameter adjustment method, broadcast-directing camera, and system |
- 2016
  - 2016-07-18 CN CN201610562671.6A patent/CN106251334B/en active Active
- 2017
  - 2017-07-05 WO PCT/CN2017/091863 patent/WO2018014730A1/en active Application Filing
Cited By (37)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2018014730A1 (en) * | 2016-07-18 | 2018-01-25 | 华为技术有限公司 | Method for adjusting parameters of camera, broadcast-directing camera, and broadcast-directing filming system |
CN109218651B (en) * | 2017-06-30 | 2019-10-22 | 宝利通公司 | Optimal view selection method in video conference |
CN109218651A (en) * | 2017-06-30 | 2019-01-15 | 宝利通公司 | Optimal view selection method in video conference |
CN109413359B (en) * | 2017-08-16 | 2020-07-28 | 华为技术有限公司 | Camera tracking method, device and equipment |
CN109413359A (en) * | 2017-08-16 | 2019-03-01 | 华为技术有限公司 | Camera tracking method, device and equipment |
US10873666B2 (en) | 2017-08-16 | 2020-12-22 | Huawei Technologies Co., Ltd. | Camera tracking method and director device |
CN109922251A (en) * | 2017-12-12 | 2019-06-21 | 华为技术有限公司 | Method, apparatus and system for quick snapshot |
CN109922251B (en) * | 2017-12-12 | 2021-10-22 | 华为技术有限公司 | Method, device and system for quick snapshot |
WO2019206247A1 (en) * | 2018-04-27 | 2019-10-31 | Shanghai Truthvision Information Technology Co., Ltd | System and method for camera calibration |
US11468598B2 (en) | 2018-04-27 | 2022-10-11 | Shanghai Truthvision Information Technology Co., Ltd. | System and method for camera calibration |
CN109031201A (en) * | 2018-06-01 | 2018-12-18 | 深圳市鹰硕技术有限公司 | Voice localization method and device based on behavior recognition |
CN108900860A (en) * | 2018-08-23 | 2018-11-27 | 佛山龙眼传媒科技有限公司 | Broadcast-directing control method and device |
CN110910460B (en) * | 2018-12-27 | 2022-09-27 | 北京爱笔科技有限公司 | Method and device for acquiring position information and calibration equipment |
CN110910460A (en) * | 2018-12-27 | 2020-03-24 | 北京爱笔科技有限公司 | Method and device for acquiring position information and calibration equipment |
CN109712188A (en) * | 2018-12-28 | 2019-05-03 | 科大讯飞股份有限公司 | Target tracking method and device |
CN111787243A (en) * | 2019-07-31 | 2020-10-16 | 北京沃东天骏信息技术有限公司 | Broadcast-directing method, device and computer-readable storage medium |
CN110456829A (en) * | 2019-08-07 | 2019-11-15 | 深圳市维海德技术股份有限公司 | Positioning and tracing method, device and computer readable storage medium |
CN110456829B (en) * | 2019-08-07 | 2022-12-13 | 深圳市维海德技术股份有限公司 | Positioning tracking method, device and computer readable storage medium |
CN111353368A (en) * | 2019-08-19 | 2020-06-30 | 深圳市鸿合创新信息技术有限责任公司 | Pan-tilt camera, face feature processing method and device and electronic equipment |
CN112468680A (en) * | 2019-09-09 | 2021-03-09 | 上海御正文化传播有限公司 | Processing method for an advertisement shooting-site compositing system |
CN110737798A (en) * | 2019-09-26 | 2020-01-31 | 万翼科技有限公司 | Indoor inspection method and related product |
CN111080698A (en) * | 2019-11-27 | 2020-04-28 | 上海新时达机器人有限公司 | Long plate position calibration method and system and storage device |
CN111080698B (en) * | 2019-11-27 | 2023-06-06 | 上海新时达机器人有限公司 | Method, system and storage device for calibrating position of long plate |
CN111131697A (en) * | 2019-12-23 | 2020-05-08 | 北京中广上洋科技股份有限公司 | Multi-camera intelligent tracking shooting method, system, equipment and storage medium |
CN113516717A (en) * | 2020-04-10 | 2021-10-19 | 富华科精密工业(深圳)有限公司 | Camera device external parameter calibration method, electronic equipment and storage medium |
CN111698467A (en) * | 2020-05-08 | 2020-09-22 | 北京中广上洋科技股份有限公司 | Intelligent tracking method and system based on multiple cameras |
CN113808199A (en) * | 2020-06-17 | 2021-12-17 | 华为技术有限公司 | Positioning method, electronic equipment and positioning system |
CN113808199B (en) * | 2020-06-17 | 2023-09-08 | 华为云计算技术有限公司 | Positioning method, electronic equipment and positioning system |
CN111800590A (en) * | 2020-07-06 | 2020-10-20 | 深圳博为教育科技有限公司 | Method, device and system for controlling director and control host |
CN112802058A (en) * | 2021-01-21 | 2021-05-14 | 北京首都机场航空安保有限公司 | Method and device for tracking illegal moving target |
CN112887653A (en) * | 2021-01-25 | 2021-06-01 | 联想(北京)有限公司 | Information processing method and information processing device |
CN112887653B (en) * | 2021-01-25 | 2022-10-21 | 联想(北京)有限公司 | Information processing method and information processing device |
CN113453021A (en) * | 2021-03-24 | 2021-09-28 | 北京国际云转播科技有限公司 | Artificial-intelligence broadcast-directing method, system, server and computer-readable storage medium |
CN113453021B (en) * | 2021-03-24 | 2022-04-29 | 北京国际云转播科技有限公司 | Artificial-intelligence broadcast-directing method, system, server and computer-readable storage medium |
CN113271482A (en) * | 2021-05-17 | 2021-08-17 | 广东彼雍德云教育科技有限公司 | Portable full-width image-matting blackboard |
CN116389660A (en) * | 2021-12-22 | 2023-07-04 | 广州开得联智能科技有限公司 | Recording-and-broadcast directing method, device, equipment and storage medium |
CN116389660B (en) * | 2021-12-22 | 2024-04-12 | 广州开得联智能科技有限公司 | Recording-and-broadcast directing method, device, equipment and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN106251334B (en) | 2019-03-01 |
WO2018014730A1 (en) | 2018-01-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106251334A (en) | Camera parameter adjustment method, broadcast-directing camera, and system | |
US10425638B2 (en) | Equipment and method for promptly performing calibration and verification of intrinsic and extrinsic parameters of a plurality of image capturing elements installed on electronic device | |
US10368011B2 (en) | Camera array removing lens distortion | |
US10154194B2 (en) | Video capturing and formatting system | |
CN107507243A (en) | Camera parameter adjustment method, broadcast-directing camera, and system |
US10694167B1 (en) | Camera array including camera modules | |
Matsuyama et al. | 3D video and its applications | |
CN110463176A (en) | Image quality measure | |
CN105264876A (en) | Method and system for low cost television production | |
US20060125921A1 (en) | Method and system for compensating for parallax in multiple camera systems | |
CN105072314A (en) | Virtual studio implementation method capable of automatically tracking objects | |
CN106462944A (en) | Mapping multiple high-resolution images onto a low-resolution 360-degree image to produce a high-resolution panorama without ghosting | |
CN108513072A (en) | Image processor, image processing method and imaging system | |
CN107948577A (en) | Panoramic video conferencing method and system |
US10186301B1 (en) | Camera array including camera modules | |
JP2021514573A (en) | Systems and methods for capturing omni-stereo video using multi-sensors | |
US11108971B2 (en) | Camera array removing lens distortion | |
CN103729839B (en) | Sensor-based outdoor camera tracking method and system |
US20210160549A1 (en) | Delivering on-demand video viewing angles of an arena | |
CN107710276A (en) | The unified image processing of the combination image in the region based on spatially co-located | |
CN106447735A (en) | Panoramic camera geometric calibration processing method | |
JPWO2019026287A1 (en) | Imaging device and information processing method | |
CN106060658A (en) | Image processing method and device | |
CN103546680B (en) | Deformation-free omnidirectional fisheye photographic device and a method for implementing the same |
CN111325790A (en) | Target tracking method, device and system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||