WO2015184978A1 - Camera control method and device, and camera - Google Patents

Camera control method and device, and camera

Info

Publication number
WO2015184978A1
Authority
WO
WIPO (PCT)
Prior art keywords
camera
target object
distance
image
focal length
Prior art date
Application number
PCT/CN2015/080612
Other languages
French (fr)
Chinese (zh)
Inventor
Liu Yuan (刘源)
Original Assignee
Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd.
Publication of WO2015184978A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60: Control of cameras or camera modules

Definitions

  • the present invention relates to the field of information processing technologies, and in particular, to a camera control method, apparatus, and video camera.
  • the camera used for video conferencing is basic equipment of a video conferencing system.
  • a video conferencing system is usually applied across two or more locations. Cameras are placed at the various locations of the system and connected to each other through a communication network; each camera shoots video of the local users and transmits the video to the other locations, so that users in different locations can communicate with each other via video.
  • the prior art also provides an automatic zoom method, in which the camera detects, in real time, the area occupied in the video by the image of the target to be captured, and then adjusts the focal length of the camera according to that area, so that the image of the target in the video always remains an appropriate size.
  • the inventor found during research that the automatic zoom method provided by the prior art relies on an algorithm for obtaining the area to be photographed that is complicated and inaccurate.
  • the embodiments of the invention provide a camera control method, device, and camera, which solve the problem that the automatic zoom method provided by the prior art uses a complicated algorithm with low accuracy.
  • a camera control method comprising:
  • the focal length of the camera is adjusted using a first distance obtained by the first detection period and a second distance obtained by the second detection period.
  • the step of adjusting the focal length of the camera by using the first distance obtained in the first detection period and the second distance obtained in the second detection period includes:
  • when the focal length adjusted by the camera is set to f2, acquiring a correspondence between the focal length f2 and the pixel point coordinates (u2, v2) according to the world coordinates (x_c2, y_c2) of the target object in the second detection period, the second distance d2, and the pixel point coordinates (u2, v2) of the target object in the scene image;
  • the step of adjusting the focal length of the camera by using the first distance acquired in the first detection period and the second distance acquired in the second detection period includes:
  • the method further includes:
  • the obtaining, by the pixel coordinates of the target object in the depth image, the distance between the target object and the camera includes:
  • the distance between the target object and the camera is obtained by the correspondence between the gray level and the distance.
  • the method further includes:
  • the focus position of the camera is adjusted to focus the camera at the position with the highest focus priority.
  • the determining a pixel point coordinate of the target object in the depth image includes:
  • the method further includes: determining, in advance, a correspondence relationship between the pixel coordinates formed by the target object in the scene image and in the depth image;
  • the step of predetermining the correspondence relationship between the pixel coordinates formed by the target object in the scene image and the depth image includes:
  • x is a homogeneous representation of the pixel coordinates of the object in the scene image
  • x' is a homogeneous representation of the pixel coordinates of the object in the depth image
  • H is the perspective transformation matrix between the scene image and the depth image
  • a camera control apparatus comprising:
  • a determining module configured to determine, at each detection period, pixel coordinates of the target object in the scene image, and determine pixel coordinates of the target object in the depth image;
  • An acquiring module configured to acquire a distance between the target object and a camera based on pixel coordinates of the target object in the depth image
  • an adjustment module configured to adjust a focal length of the camera by using a first distance obtained by the first detection period and a second distance obtained by the second detection period.
  • the adjusting module includes:
  • a first acquiring unit configured to acquire the world coordinates (x_c2, y_c2) of the target object in the second detection period according to a second distance d2 between the target object and the camera obtained in the second detection period and the pixel point coordinates (u2, v2) of the target object in the scene image;
  • a second acquiring unit configured to: when the focal length adjusted by the camera is set to f2, acquire a correspondence between the focal length f2 and the pixel point coordinates (u2, v2) according to the world coordinates (x_c2, y_c2) of the target object in the second detection period, the second distance d2, and the pixel point coordinates (u2, v2) of the target object in the scene image;
  • a first adjusting unit configured to acquire, by using the correspondence between the focal length f2 and the pixel point coordinates (u2, v2), the pixel point coordinates (u1, v1) of the target object in the scene image in the first detection period, the first distance d1 between the target object and the camera in the first detection period, and the focal length f1 of the camera in the first detection period, the value of the focal length f2 at which the difference between the pixel point coordinates (u1, v1) and the pixel point coordinates (u2, v2) is within a preset range, and to adjust the focal length of the camera to the focal length f2.
  • the adjusting module includes:
  • a third acquiring unit configured to acquire a first ratio of the focal length f1 of the camera to the first distance d1 in the first detection period, after the first distance d1 between the target object and the camera in the first detection period is acquired;
  • a fourth acquiring unit configured to set the focal length adjusted by the camera to f2 and acquire a second ratio of the focal length f2 to the second distance d2, after the second distance d2 between the target object and the camera in the second detection period is acquired;
  • a second adjusting unit configured to acquire a value of a focal length f 2 when a difference between the first ratio and the second ratio is within a preset proportional range, and adjust a focal length of the camera to a focal length f 2 .
  • the camera control apparatus further includes:
  • a determining module configured to calculate the difference between the first distance d1 and the second distance d2 after acquiring them, and to determine that the focal length of the camera needs to be adjusted when the difference is not within a preset threshold range.
  • the acquiring module includes:
  • a gray level acquiring unit configured to acquire the gray level of the pixel corresponding to the pixel point coordinates of the target object in the depth image;
  • the distance obtaining unit is configured to acquire a distance between the target object and the camera by the correspondence between the gray level and the distance.
  • the camera control apparatus further includes:
  • a setting module configured to set a focus priority of the target object to a highest priority according to the received setting information
  • a focusing module for adjusting a focus position of the camera after adjusting a focal length of the camera to focus the camera at a position with the highest focus priority.
  • the determining module includes:
  • a determining unit configured to determine the pixel point coordinates of the target object in the depth image according to the correspondence relationship between the pixel coordinates formed by the target object in the scene image and in the depth image, and the pixel coordinates of the target object in the scene image.
  • the camera control apparatus further includes a correspondence determining module, where the correspondence determining module is configured to determine, in advance, the correspondence relationship between the pixel coordinates formed by the target object in the scene image and in the depth image;
  • the correspondence determining module includes:
  • an imaging relationship expression obtaining unit configured to acquire an imaging relationship expression of the scene image and the depth image, where the imaging relationship expression is x' = Hx, in which:
  • x is a homogeneous representation of the pixel coordinates of the object in the scene image
  • x' is a homogeneous representation of the pixel coordinates of the object in the depth image
  • H is the perspective transformation matrix between the scene image and the depth image
  • a perspective transformation matrix acquiring unit configured to acquire the pixel point coordinates of four object points in the scene image and in the depth image, respectively, and to obtain the value of H in the imaging relationship expression, thereby acquiring the perspective transformation matrix between the scene image and the depth image, by which the correspondence relationship of the pixel coordinates formed by the target object in the scene image and in the depth image is characterized.
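The four-point computation of H described above can be sketched with the standard direct linear transform. This is an illustrative numpy implementation, not code from the patent; the function names are our own:

```python
import numpy as np

def estimate_homography(scene_pts, depth_pts):
    """Estimate the 3x3 perspective transformation H mapping scene-image
    pixel coordinates to depth-image pixel coordinates, x' ~ H x, from
    four (or more) point correspondences (direct linear transform)."""
    A = []
    for (u, v), (up, vp) in zip(scene_pts, depth_pts):
        # Each correspondence contributes two linear equations in the
        # nine entries of H (cross-multiplied projective equations).
        A.append([u, v, 1, 0, 0, 0, -up * u, -up * v, -up])
        A.append([0, 0, 0, u, v, 1, -vp * u, -vp * v, -vp])
    A = np.asarray(A, dtype=float)
    # The solution is the right singular vector of the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # normalize so H[2, 2] == 1

def map_point(H, u, v):
    """Apply H to a scene-image pixel and dehomogenize."""
    x = H @ np.array([u, v, 1.0])
    return x[0] / x[2], x[1] / x[2]
```

With four correspondences no three of which are collinear, the linear system determines H up to scale, so any target-object pixel in the scene image can then be mapped into the depth image.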
  • a camera comprising: a processor, a memory, an image sensor, and a depth sensor,
  • the image sensor is configured to generate a scene image including a target object
  • the depth sensor is configured to generate a depth image including the target object
  • the memory is configured to store a program for controlling a camera
  • the processor is configured to read the program stored in the memory and perform camera control operations according to the program, where the camera control operations comprise:
  • the focal length of the camera is adjusted using a first distance obtained by the first detection period and a second distance obtained by the second detection period.
  • the image sensor and the depth sensor are integrated in the same sensor;
  • the image sensor and the depth sensor are both disposed behind the camera lens, the image sensor and the depth sensor are disposed at different horizontal levels, and a half-reflective, half-transmissive mirror is disposed between the camera lens and the image sensor and the depth sensor;
  • the image sensor is disposed behind the camera lens, and the image sensor is arranged side by side with the depth sensor.
  • the present application discloses a camera control method and apparatus, and a corresponding camera.
  • in the camera control method, first, in each detection period, the pixel coordinates of a target object in a scene image are determined, and the pixel point coordinates of the target object in a depth image are determined; then, based on the pixel point coordinates of the target object in the depth image, the distance between the target object and the camera is acquired; and the first distance obtained in the first detection period and the second distance obtained in the second detection period are used to adjust the focal length of the camera.
  • the camera control method can adjust the focal length of the camera by the distance between the target object and the camera acquired in different detection periods.
  • the change in the distance between the target object and the camera can reflect the size of the area of the target object in the video.
  • the method disclosed in the present application uses a simple algorithm based on the distance between the target object and the camera to adjust the focal length; it is easy to implement and has higher accuracy and robustness.
  • FIG. 1 is a flow chart of an embodiment of a camera control method according to the present disclosure
  • FIG. 2 is a flow chart of another embodiment of a camera control method according to the present disclosure.
  • FIG. 3 is a diagram showing an example of a camera imaging model disclosed in the present invention.
  • FIG. 4 is a schematic diagram of a geometric relationship of imaging of a camera disclosed in the present invention.
  • FIG. 5 is a flowchart of still another embodiment of a camera control method according to the present disclosure.
  • FIG. 6 is a flowchart of still another embodiment of a camera control method according to the present disclosure.
  • FIG. 7 is a schematic diagram showing the working principle of a depth sensor disclosed in the prior art.
  • FIG. 8 is a schematic structural diagram of a camera disclosed in the present application.
  • FIG. 9 is a schematic structural diagram of still another camera disclosed in the present application.
  • FIG. 10 is a schematic structural diagram of a camera control apparatus according to the present disclosure.
  • FIG. 11 is a schematic structural diagram of still another camera control device according to the present disclosure.
  • FIG. 12 is a schematic structural diagram of still another camera control device according to the present disclosure.
  • FIG. 13 is a schematic structural diagram of still another camera control apparatus according to the present invention.
  • the embodiment of the present application provides a camera control method, device, and camera to solve the problem that the target detection algorithm is complicated and the accuracy is low when the camera performs automatic zooming by using the prior art.
  • FIG. 1 is a schematic flowchart diagram of a camera control method according to an embodiment of the present application.
  • the camera control method includes:
  • Step 101 Determine, at each detection period, pixel coordinates of the target object in the scene image, and determine pixel coordinates of the target object in the depth image.
  • the camera control method disclosed in the present application is applied to a camera, and an image sensor and a depth sensor are disposed in the camera.
  • the image sensor usually uses a CCD (Charge-Coupled Device) or CMOS (Complementary Metal Oxide Semiconductor) color image sensor; the color image sensor generates a color image in RGB (Red/Green/Blue) format, and the RGB image is used as the scene image.
  • a black and white image sensor can also be used, and the generated black and white image is used as a scene image.
  • the depth sensor is configured to generate a depth image, and the gray level of the pixel point in the depth image can represent a distance of the object corresponding to the pixel from the camera.
  • the target object is a whole or part of a shooting scene, and is a part of the scene that is of interest to the user in the shooting scene.
  • to determine the target object, the attribute characteristics of the target object can be utilized.
  • the attribute characteristics include lines, shapes, areas, and the like.
  • Step 102 Acquire a distance between the target object and a camera based on pixel coordinates of the target object in the depth image.
  • Step 103 Adjust a focal length of the camera by using a first distance obtained by the first detection period and a second distance obtained by the second detection period.
  • the two detection periods are respectively set as a first detection period and a second detection period, and the distances acquired in the two detection periods are the first distance d 1 and the second distance d 2 , respectively .
  • the first distance d1 and the second distance d2 reflect the change in the distance between the target object and the camera over the two detection periods, and thereby the size of the area the target object occupies in the video. For example, when the second distance d2 is greater than the first distance d1, the area of the target object presented in the video shrinks, so the focal length of the camera needs to be increased; when the second distance d2 is less than the first distance d1, the area of the target object presented in the video grows, so the focal length of the camera needs to be reduced.
  • the pixel point coordinates of the target object in the scene image and the pixel point coordinates of the target object in the depth image are determined in each detection period; Obtaining a pixel point coordinate of the target object in the depth image, acquiring a distance between the target object and the camera; and using a first distance obtained by the first detection period, and a second distance obtained by the second detection period, adjusting The focal length of the camera.
  • the camera control method is applied to a camera, and an image sensor for generating a scene image and a depth sensor for generating a depth image are disposed in the camera.
  • the distance between the target object and the camera in each detection cycle can be acquired, and the change of the distance between the target object and the camera can reflect the size of the area of the target object in the video.
  • the method disclosed in the present application uses a simple algorithm based on the distance between the target object and the camera to adjust the focal length; it is easy to implement and has higher accuracy and robustness.
  • step 103 discloses a scheme of adjusting the focal length of the camera by using the first distance obtained in the first detection period and the second distance obtained in the second detection period.
  • the solution may be implemented in various manners. For the implementation thereof, refer to the following embodiments.
  • Step 1011 Determine, at each detection period, pixel coordinates of the target object in the scene image, and determine pixel coordinates of the target object in the depth image.
  • Step 1012 Acquire a distance between the target object and a camera based on pixel coordinates of the target object in the depth image.
  • the implementation process of the steps 1011 to 1012 is the same as the implementation process of the steps 101 to 102, and can be referred to each other, and details are not described herein again.
  • Step 1013 Set the two detection periods as a first detection period and a second detection period, respectively. Acquire the world coordinates (x_c2, y_c2) of the target object in the second detection period according to the second distance d2 between the target object and the camera obtained in the second detection period and the pixel coordinates (u2, v2) of the target object in the scene image.
  • a reference coordinate system is selected to describe the position of the camera and the positions of other objects in the environment; this coordinate system is called the world coordinate system.
  • world coordinates are difficult to measure directly, whereas pixel coordinates can be obtained from the image. Therefore, in this embodiment, after the pixel coordinates are acquired, the world coordinates of the target object are determined from them.
  • Step 1014 When the focal length adjusted by the camera is set to f2, acquire the correspondence between the focal length f2 and the pixel point coordinates (u2, v2) according to the world coordinates (x_c2, y_c2) of the target object in the second detection period, the second distance d2, and the pixel point coordinates (u2, v2) of the target object in the scene image.
  • Step 1015 By using the correspondence between the focal length f2 and the pixel point coordinates (u2, v2), the pixel point coordinates (u1, v1) of the target object in the scene image in the first detection period, the first distance d1 between the target object and the camera in the first detection period, and the focal length f1 of the camera in the first detection period, acquire the value of the focal length f2 at which the difference between the pixel point coordinates (u1, v1) and the pixel point coordinates (u2, v2) is within the preset range, and adjust the focal length of the camera to the focal length f2.
  • the camera imaging model is shown in Figure 3, where x_c, y_c, z_c are the axes of the world coordinate system; x, y are the axes of the imaging plane coordinate system of the scene image or depth image; m, n are the axes of the pixel coordinate system of the scene image or depth image; O_c is the optical center of the camera; O_c z_c is the optical axis of the camera; and O_c p is the focal length f.
  • the x_c, y_c, and z_c coordinate axes of the world coordinate system represent the coordinates of the three-dimensional space in which the object is located, in meters or millimeters; the imaging plane coordinate system established on the scene image represents the position of the object as imaged on the image sensor, also in meters or millimeters; the pixel coordinate system established on the scene image is the coordinate system of the object after imaging by the image sensor, and the pixel coordinate system established on the depth image is that of the object after imaging by the depth sensor, both in pixels; conversion between the imaging plane coordinate system and the pixel coordinate system can be performed by a certain scale ratio.
  • the imaging model can be written as z_c [u, v, 1]^T = K [R | t] [x_c, y_c, z_c, 1]^T, with K = [[f_x, s, u_0], [0, f_y, v_0], [0, 0, 1]], where f_x and f_y are the equivalent focal lengths of the focal length f of the camera on the x and y axes, respectively; x_c, y_c, z_c are the coordinate axes of the world coordinate system; x, y are the coordinate axes of the imaging coordinate system; s is the distortion coefficient of the image; u_0, v_0 are the coordinates of the principal point of the image; R is the rotation matrix of the camera and t is the translation vector of the camera. K is called the intrinsic parameter matrix of the camera, and R and t are called the extrinsic parameters of the camera.
  • (u, v) is the pixel point coordinate of the object point in the scene image
  • (u 0 , v 0 ) is the image principal point coordinate
  • the image main point is the intersection of the camera optical axis and the imaging plane of the scene image
  • f is the focal length
  • is the scale factor between the image coordinate system and the pixel coordinate system in the scene image
  • d is the distance from the object point to the image sensor. Since the image sensor is disposed in the camera, d can be considered the distance from the object to the camera, and this distance can be acquired from the depth image.
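The projection relation just listed (pixel offset from the principal point scaling with focal length over object distance) can be sketched as follows. This is an illustrative reduction, not the patent's own formula; the scale factor k between image and pixel coordinates is a hypothetical stand-in for the unnamed factor in the text:

```python
def project_to_pixel(xc, yc, d, f, u0, v0, k=1.0):
    """Pinhole projection of a camera-frame object point to scene-image
    pixel coordinates: the offset from the principal point (u0, v0)
    scales with focal length f over object distance d."""
    u = u0 + k * f * xc / d
    v = v0 + k * f * yc / d
    return u, v
```

For a fixed object, halving d (or doubling f) doubles the pixel offset from the principal point, which is the geometric basis of the zoom control below.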
  • the gray level of each pixel in the depth image represents the quantization of the distance between the object corresponding to the pixel and the depth sensor, and the different gray levels respectively represent different distances between the object and the depth sensor.
  • from the quantization level of the gray scale and the quantization algorithm, the actual depth value represented by each gray level can be obtained, and the distance between the object corresponding to each pixel point and the camera can be calculated.
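A minimal sketch of this gray-level-to-distance lookup, assuming (hypothetically) uniform quantization over a working range; a real sensor's quantization algorithm and range are device-specific:

```python
def gray_to_distance(gray, levels=256, d_min=0.5, d_max=10.0):
    """Map a depth-image gray level to a distance in meters, assuming a
    hypothetical uniform quantization of [d_min, d_max] over `levels`
    gray levels."""
    if not 0 <= gray < levels:
        raise ValueError("gray level out of range")
    return d_min + (d_max - d_min) * gray / (levels - 1)
```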
  • the algorithm for acquiring the world coordinates (x c1 , y c1 ) of the target object in the first detection period is:
  • (u 1 , v 1 ) is a pixel point coordinate of the target object in the scene image of the first detection period;
  • (u 0 , v 0 ) is a pixel point coordinate of the image main point in the scene image;
  • is the a scale factor when converting between an image coordinate system and a pixel coordinate system corresponding to the scene image;
  • f 1 is a current focal length of the camera;
  • d 1 is a first distance between the target object and the camera;
  • (x_c1, y_c1) are the world coordinates of the target object in the first detection period.
  • d 1 can be acquired by the depth image in the first detection period.
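The first-period world-coordinate computation can be sketched by inverting the projection relation. The patent's formula symbols were not reproduced in the text, so the scale factor k here is an assumption and the function name is our own:

```python
def pixel_to_world(u, v, d, f, u0, v0, k=1.0):
    """Invert the pinhole projection: recover the target object's
    camera-frame coordinates (x_c, y_c) from its scene-image pixel
    coordinates and its distance d read from the depth image."""
    xc = (u - u0) * d / (k * f)
    yc = (v - v0) * d / (k * f)
    return xc, yc
```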
  • the pixel point coordinates (u 1 , v 1 ) of the target object in the scene image of the first detection period may be obtained by querying the scene image based on the characteristics of the target object.
  • the feature of the target object may include a line, a shape, and the like of the target object.
  • step 1013 discloses that, according to the second distance d2 between the target object and the camera in the second detection period and the pixel point coordinates (u2, v2) of the target object in the scene image, the world coordinates (x_c2, y_c2) of the target object in the second detection period are acquired by the formula:
  • (u 2 , v 2 ) is a pixel point coordinate of the target object in the scene image of the second detection period;
  • (u 0 , v 0 ) is a pixel point coordinate of the image main point in the scene image;
  • is the a scale factor when converting between the image coordinate system and the pixel coordinate system corresponding to the scene image;
  • f 1 is the current focal length of the camera;
  • d 2 is a second distance between the target object and the camera;
  • (x_c2, y_c2) are the world coordinates of the target object in the second detection period.
  • d 2 can be acquired by the depth image of the second detection period.
  • step 1014 discloses a scheme in which the focal length adjusted by the camera is set to f2, and the correspondence between the focal length f2 and the pixel point coordinates (u2, v2) is acquired.
  • the pixel point coordinates (u1, v1) of the target object in the scene image obtained in the first detection period, the first distance d1 between the target object and the camera, and the focal length f1 of the camera in the first detection period can be obtained by equations (9) and (10).
  • from equations (9) and (10), it can be understood that the correspondence relationship between the focal length f2 and the pixel point coordinates (u2, v2) is:
  • step 1015 discloses a scheme of acquiring the difference between the pixel point coordinates (u1, v1) and the pixel point coordinates (u2, v2), and obtaining the value of the focal length f2 when the difference is within the preset range.
  • in order to keep the size of the image of the target object in the video within the preset range, the difference between the pixel point coordinates (u1, v1) and (u2, v2) should fall within the preset range, that is, the following formula should be satisfied:
  • the value of the preset range can be set according to application requirements. If the size of the image of the target object in the video is to remain substantially unchanged, the preset range can be set to 0.
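The procedure of steps 1014 and 1015 can be sketched as follows, using the u coordinate only (the v equation is analogous). Variable names are our own, the scale factor is taken as 1, and the verification tolerance stands in for the preset range:

```python
def solve_new_focal_length(u1, d1, f1, u2, d2, u0, tol=1.0):
    """Choose f2 so that the target object's pixel coordinate after
    zooming returns (within `tol` pixels, the preset range) to where it
    was in the first detection period.

    u2 is observed in the second period at the current focal length f1;
    x_c2 is recovered first, then f2 is solved from u1 - u0 = f2*x_c2/d2."""
    xc2 = (u2 - u0) * d2 / f1           # world x-coordinate, second period
    if xc2 == 0:
        return f1 * d2 / d1             # object on the optical axis: use the distance ratio
    f2 = (u1 - u0) * d2 / xc2
    # Verify: re-project with f2 and check the preset range.
    u_new = u0 + f2 * xc2 / d2
    assert abs(u_new - u1) <= tol
    return f2
```

For an object that only recedes along the optical axis, this reduces to f2 = f1 * d2 / d1, consistent with the ratio-based embodiment below.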
  • Step 1021 Determine, at each detection period, pixel coordinates of the target object in the scene image, and determine pixel coordinates of the target object in the depth image.
  • Step 1022 Acquire a distance between the target object and the camera based on pixel coordinates of the target object in the depth image.
  • the implementation process of the steps 1021 to 1022 is the same as the implementation process of the steps 101 to 102, and can be referred to each other, and details are not described herein again.
  • Step 1023 After obtaining the first distance d1 between the target object and the camera in the first detection period, acquire the first ratio of the focal length f1 of the camera to the first distance d1 in the first detection period.
  • Step 1024 Set the focal length adjusted by the camera to f2, and after obtaining the second distance d2 between the target object and the camera in the second detection period, acquire the second ratio of the focal length f2 to the second distance d2.
  • Step 1025 Acquire a value of a focal length f 2 when a difference between the first ratio and the second ratio is within a preset ratio range, and adjust a focal length of the camera to a focal length f 2 .
  • the solution disclosed in steps 1021 to 1025 utilizes the distances acquired in the two detection periods to adjust the focal length of the camera.
  • it is generally considered that when the ratio between the focal length of the camera and the distance between the camera and the target object remains within a certain range, the on-screen size of the target object in the video remains substantially within a certain range. Therefore, in this embodiment, after the first ratio of f1 to d1 and the second ratio of f2 to d2 are obtained, the value of f2 at which the difference between the first ratio and the second ratio is within the preset proportional range is calculated, and the focal length of the camera is adjusted to f2 to achieve automatic zooming.
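This ratio-keeping rule can be sketched directly; with the preset proportional range taken as 0 it reduces to a single formula (the function name is our own):

```python
def ratio_zoom(f1, d1, d2):
    """Keep the focal-length-to-distance ratio constant across detection
    periods: f2/d2 == f1/d1, i.e. f2 = f1 * d2 / d1, so the target
    object's on-screen size stays roughly unchanged."""
    return f1 * d2 / d1
```

For example, if the target object moves from 2 m to 4 m away, the focal length is doubled to compensate.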
  • the present application discloses a camera control method for realizing automatic zooming of a camera according to a distance between a target object and a camera. Referring to Fig. 6, in order to avoid excessive zooming and image blurring, the present application discloses the following embodiments.
  • Step 111 Determine, in each detection period, the pixel point coordinates of the target object in the scene image, and determine the pixel coordinates of the target object in the depth image.
  • Step 112 Acquire a distance between the target object and a camera based on pixel coordinates of the target object in the depth image.
  • the distance includes: a first distance d 1 obtained by the first detection period and a second distance d 2 obtained by the second detection period.
  • Step 113 After obtaining the first distance d 1 and the second distance d 2 , calculate a difference between the first distance d 1 and the second distance d 2 .
  • Step 114 Determine whether the difference is within a preset threshold range. If not, perform the operation of step 115. If yes, perform the operation of step 116.
  • Step 115 When it is determined that the difference is not within the preset threshold range, determine that the focal length of the camera needs to be adjusted, and adjust the focal length of the camera by using the first distance obtained in the first detection period and the second distance obtained in the second detection period.
  • Step 116 When it is determined that the difference is within a preset threshold range, it is determined that it is not currently necessary to adjust the focal length of the camera.
  • step 111 to step 112 is the same as the implementation process of step 101 to step 102.
  • in step 115, the implementation process of adjusting the focal length of the camera by using the first distance and the second distance acquired in the two detection periods is the same as the implementation process of step 103; they can be referred to each other, and details are not described herein again.
  • a threshold range is set in advance.
  • the difference between the distances obtained in the two detection periods can reflect the change of the position of the target object. If the position change of the target object is less than the threshold range, the focal length of the camera is temporarily not adjusted.
  • the focal length of the camera is adjusted only when the position change of the target object exceeds the threshold range.
  • the threshold range can be set based on subjective image effects and empirical values. This method avoids image blur caused by excessive zooming, and the image maintains a certain stability.
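The gate of steps 113 to 116 can be sketched as a simple check; the 0.3 m threshold shown is a hypothetical value, since the patent sets it empirically:

```python
def should_zoom(d1, d2, threshold=0.3):
    """Adjust the focal length only when the distance change between the
    two detection periods falls outside the preset threshold range,
    avoiding blur from over-frequent zooming."""
    return abs(d2 - d1) > threshold
```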
• the present application further discloses the step of acquiring the distance between the target object and the camera based on the pixel point coordinates of the target object in the depth image, where the step includes:
  • the depth image is generated by a depth sensor disposed in the camera.
• A depth sensor is a device capable of generating a depth image of a scene. Its basic principle is to emit infrared light toward a target object, detect the time difference of the infrared light reflected by the target object, and determine the distance of the target object from that time difference.
• the depth sensor is capable of acquiring depth images in real time with high accuracy and reliability.
• In the figure, the solid line represents the triangularly intensity-modulated infrared light emitted by the infrared light source in each video frame, and
• the dotted line represents the infrared light reflected by the object.
• The portion of the dotted line between the vertical lines in the figure is the light received by the depth sensor while the shutter of the camera is open.
  • the intensity of the infrared light received by the depth sensor will become smaller as the distance of the object increases.
• As the dotted line moves to the right, the depth sensor receives less of the light reflected by the target object, indicating that the distance between the target object and the depth sensor has increased.
• If the shutter is opened for exposure while the brightness of the infrared light source is decreasing, the intensity of the received infrared light increases as the distance of the target object increases.
• Accordingly, as the dotted line moves to the right, the depth sensor receives more of the light reflected by the target object, indicating that the distance between the target object and the depth sensor has increased.
• I+(ts, d) and I−(ts, d) are the intensities of the light received by the depth sensor in the brightness-increasing period and the brightness-decreasing period, respectively, from which the following expression can be generated:
  • the distance d of the target object to the infrared light source can be obtained as:
• In summary, the depth sensor can determine the distance of the target object from itself according to the light intensity received from the same target object at different times (such as the brightness-increasing period and the brightness-decreasing period). The depth sensor then converts the calculated distance information into a grayscale or color depth image and outputs that depth image.
  • the gray level of each pixel in the depth image represents the quantization of the distance between the object corresponding to the pixel and the depth sensor, and the different gray levels respectively represent different distances between the object and the depth sensor.
  • the depth image may have a frame rate of 30 fps or 60 fps, typically having 256 gray levels.
  • the actual depth value represented by each gray level can be obtained, and the distance between the object corresponding to each pixel point and the camera can be calculated.
• a brighter pixel region indicates that the object corresponding to that region is closer to the depth sensor, and a darker pixel region indicates that the corresponding object is farther from the depth sensor.
  • the pixel area with the gray level of 0 represents the object farthest from the depth sensor, and the pixel area with the gray level of 255 represents the object closest to the depth sensor.
• the distance between the target object and the depth sensor can therefore be obtained from the gray level of the pixels corresponding to the target object in the depth image; and, since in the present application a depth sensor is disposed in the camera, the distance between the target object and the camera is obtained as the distance between the target object and the depth sensor.
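As an illustration of the gray-level-to-distance conversion just described, the sketch below assumes a linear quantization over 256 gray levels and an invented working range of 0.5–10 m; a real depth sensor publishes its own calibration:

```python
def gray_to_distance(gray, d_min=0.5, d_max=10.0, levels=256):
    """Map a depth-image gray level to a distance in meters.

    Following the convention above: level 255 (brightest) corresponds to
    the nearest object (d_min), level 0 to the farthest (d_max). The
    linear quantization and the 0.5-10 m range are assumptions.
    """
    if not 0 <= gray < levels:
        raise ValueError("gray level out of range")
    return d_max - (gray / (levels - 1)) * (d_max - d_min)

def target_distance(depth_image, u, v):
    """Distance of the object at pixel (u, v); depth_image is a 2-D
    array of gray levels indexed as depth_image[row][column]."""
    return gray_to_distance(depth_image[v][u])
```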
  • the focal length of the camera is adjusted by the distance between the target object and the camera in different detection periods.
  • the camera control method disclosed in the present application further includes:
  • the focus position can also be adjusted to adjust the focus position to the target object to obtain a video with higher imaging quality.
• To adjust the focus position of the camera, it is necessary to set the focus priority of the target object to the highest priority in advance.
• the scene image is usually partitioned; the focus value (FV) of each partition is then computed, the focus priority of each partition is obtained by weighting the focus values of the respective partitions, and the camera is focused at the position with the highest focus priority, so that the camera preferentially focuses on the target object.
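A minimal sketch of that partition-and-weight scheme; the focus values and priority weights are illustrative, and real cameras derive the focus value (FV) from image contrast statistics:

```python
def pick_focus_partition(focus_values, priorities):
    """Return the index of the partition the camera should focus on.

    Each partition's focus value (FV) is weighted by its focus priority;
    giving the partition containing the target object the highest
    priority makes the camera focus on the target first.
    """
    scores = [fv * p for fv, p in zip(focus_values, priorities)]
    return scores.index(max(scores))
```

For instance, with focus values [10, 12, 11] and the target object in the third partition (priorities [1, 1, 5]), the third partition wins even though its raw FV is not the largest.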
• The steps of determining, in each detection period, the pixel point coordinates of the target object in the scene image and determining the pixel point coordinates of the target object in the depth image have been disclosed above.
• the determining of the pixel point coordinates of the target object in the depth image includes: determining the pixel point coordinates of the target object in the depth image based on the correspondence between the pixel coordinates formed by the target object in the scene image and in the depth image, and on the pixel point coordinates of the target object in the scene image.
  • the camera control method disclosed in the present application is applied to a camera, and an image sensor and a depth sensor are provided in the camera, the image sensor is used to generate a scene image, and the depth sensor is used to generate a depth image.
• Target detection is performed on the scene image, that is, the pixel points generated by the target object on the scene image are acquired; then, based on the correspondence between the pixel coordinates formed by the target object in the scene image and in the depth image, and on the pixel point coordinates of the target object in the scene image, the pixel point coordinates of the target object in the depth image are determined, so that they can be used in subsequent steps.
• The correspondence between the pixel coordinates formed by the target object in the scene image and in the depth image is determined by the placement of the image sensor and the depth sensor, and includes the following cases:
• In one case, the depth sensor and the image sensor are integrated on a single sensor; for example, pixel units capable of sensing depth information are added to an ordinary image sensor. Such a sensor can simultaneously output a scene image and a depth image, and the two images capture a completely consistent scene.
• In this case, the pixel coordinates formed by the same object on the scene image are the same as the pixel coordinates formed on the depth image.
• In another case, an independent image sensor and an independent depth sensor can be used; for example, a high-resolution, high-frame-rate image sensor can be combined with a depth sensor of lower resolution and frame rate, so that a high-resolution, high-frame-rate scene image is still obtained.
• Here, the image sensor and the depth sensor are both disposed behind the camera lens, and a half-reflecting, half-transmitting lens (a beam splitter) is disposed between the image sensor and the camera lens and between the depth sensor and the camera lens.
• the half-reflecting, half-transmitting lens is usually at an angle of 45° with respect to the horizontal direction; the angle may also be other values, which is not limited in this application.
• splitting the light with the half-reflecting, half-transmitting lens ensures that the scenes captured by the image sensor and the depth sensor are consistent; in this case, the pixel coordinates formed by the same object on the scene image and those formed on the depth image are the same.
• the ratio of the amount of transmitted light to reflected light of the half-reflecting, half-transmitting lens can be controlled; for example, the transmitted light may account for 70% of the total amount of light and the reflected light for 30%.
  • the camera control method further includes: predetermining a correspondence relationship between pixel coordinates formed by the target object in the scene image and the depth image.
  • the step of predetermining the correspondence relationship between the pixel coordinates formed by the target object in the scene image and the depth image includes:
• an imaging relationship expression of the scene image and the depth image is obtained; the imaging relationship expression is:
• x′ = Hx
• where x is a homogeneous representation of the pixel point coordinates of an object in the scene image, x′ is a homogeneous representation of the pixel point coordinates of the object in the depth image, and H is the perspective transformation matrix between the scene image and the depth image.
• The H in the imaging relationship expression is usually a 3 × 3 matrix with 8 degrees of freedom, representing the transformation relationship between the scene image and the depth image, which is called a perspective transformation matrix. Assume that the pixel point coordinates of an object in the scene image are (x, y) and the pixel point coordinates of the object in the depth image are (x′, y′); from this, the following two equations can be obtained:
• x′ = (h11·x + h12·y + h13) / (h31·x + h32·y + h33)
• y′ = (h21·x + h22·y + h23) / (h31·x + h32·y + h33)
• where hij denotes the entry of H in row i and column j.
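Because H has 8 degrees of freedom and each point correspondence constrains two of them, H can be recovered from the pixel coordinates of four object points in both images. The sketch below does this with a standard SVD least-squares solution; NumPy and the exact solving strategy are implementation choices, not prescribed by this application:

```python
import numpy as np

def perspective_matrix(scene_pts, depth_pts):
    """Solve for the 3x3 perspective transformation H mapping scene-image
    pixels (x, y) to depth-image pixels (x', y'), given four point
    correspondences, via the homogeneous relation x' = Hx."""
    rows = []
    for (x, y), (xp, yp) in zip(scene_pts, depth_pts):
        rows.append([x, y, 1, 0, 0, 0, -xp * x, -xp * y, -xp])
        rows.append([0, 0, 0, x, y, 1, -yp * x, -yp * y, -yp])
    a = np.asarray(rows, dtype=float)
    # The vectorized H spans the null space of `a`: take the right
    # singular vector associated with the smallest singular value.
    _, _, vt = np.linalg.svd(a)
    h = vt[-1].reshape(3, 3)
    return h / h[2, 2]  # fix the arbitrary overall scale

def map_to_depth_image(h, x, y):
    """Map a scene-image pixel to the corresponding depth-image pixel."""
    xh = h @ np.array([x, y, 1.0])
    return xh[0] / xh[2], xh[1] / xh[2]
```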
  • the embodiment of the present application also discloses a camera control device.
  • the camera control apparatus includes: a determination module 100, an acquisition module 200, and an adjustment module 300.
  • the determining module 100 is configured to determine, at each detection period, pixel coordinates of the target object in the scene image, and determine pixel coordinates of the target object in the depth image;
  • the acquiring module 200 is configured to acquire a distance between the target object and the camera based on pixel coordinates of the target object in the depth image;
• the adjustment module 300 is configured to adjust the focal length of the camera by using a first distance obtained in the first detection period and a second distance obtained in the second detection period.
  • the adjustment module 300 includes: a first obtaining unit 301, a second acquiring unit 302, and a first adjusting unit 303.
• the first obtaining unit 301 is configured to acquire world coordinates (xc2, yc2) of the target object in the second detection period according to the second distance d2 between the target object and the camera obtained in the second detection period and the pixel point coordinates (u2, v2) of the target object in the scene image;
• the second acquiring unit 302 is configured to: when the focal length adjusted by the camera is set to f2, acquire the correspondence between the focal length f2 and the pixel point coordinates (u2, v2) according to the world coordinates (xc2, yc2) of the target object in the second detection period, the second distance d2, and the pixel point coordinates (u2, v2) of the target object in the scene image;
• the first adjusting unit 303 is configured to acquire, by using the correspondence between the focal length f2 and the pixel point coordinates (u2, v2), the pixel point coordinates (u1, v1) of the target object in the scene image in the first detection period, the first distance d1 between the target object and the camera in the first detection period, and the focal length f1 of the camera in the first detection period, the value of the focal length f2 at which the difference between the pixel point coordinates (u1, v1) and the pixel point coordinates (u2, v2) is within a preset range, and to adjust the focal length of the camera to the focal length f2.
  • the adjustment module 300 may also be in other forms, including: a third acquisition unit 304, a fourth acquisition unit 305, and a second adjustment unit 306.
• the third obtaining unit 304 is configured to acquire, after the first distance d1 between the target object and the camera in the first detection period is acquired, a first ratio of the focal length f1 of the camera in the first detection period to the first distance d1;
• the fourth acquiring unit 305 is configured to set the focal length adjusted by the camera to f2 and, after the second distance d2 between the target object and the camera in the second detection period is acquired, to acquire a second ratio of the focal length f2 to the second distance d2;
• the second adjusting unit 306 is configured to acquire the value of the focal length f2 at which the difference between the first ratio and the second ratio is within a preset ratio range, and to adjust the focal length of the camera to the focal length f2.
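The ratio-based adjustment performed by units 304–306 can be sketched as below; this is a minimal illustration, and a real lens would additionally clamp f2 to its physical zoom range:

```python
def ratio_adjusted_focal_length(f1, d1, d2):
    """Choose the new focal length f2 so that f2/d2 equals f1/d1 (the
    difference of the ratios is then trivially within any preset ratio
    range). Keeping f/d constant keeps the target object's image size
    roughly constant as its distance changes."""
    return f1 * d2 / d1

def ratios_match(f1, d1, f2, d2, tolerance=1e-6):
    """The acceptance test of unit 306: is |f1/d1 - f2/d2| within the
    preset ratio range (`tolerance` is an assumed value)?"""
    return abs(f1 / d1 - f2 / d2) <= tolerance
```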
• the camera control apparatus further comprises a determining module, configured to calculate, after the first distance d1 and the second distance d2 are acquired, the difference between the first distance d1 and the second distance d2, and to determine that the focal length of the camera needs to be adjusted when the difference is not within a preset threshold range.
  • the obtaining module 200 includes: a gray level acquiring unit 201 and a distance acquiring unit 202.
  • the gray level obtaining unit 201 is configured to acquire a gray level of a pixel point corresponding to a pixel point coordinate of the target object in the depth image;
  • the distance obtaining unit 202 is configured to acquire a distance between the target object and the camera by using a correspondence between the gray level and the distance.
  • the camera control device further includes: a setting module and a focusing module.
• the setting module is configured to set the focus priority of the target object to the highest priority according to received setting information; the focusing module is configured to adjust the focus position of the camera after the focal length of the camera is adjusted, so that the camera focuses at the position with the highest focus priority.
  • the determining module 100 includes: a determining unit, configured to use, according to a correspondence relationship between pixel coordinates formed by the target object in the scene image and the depth image, and the target object in the scene image Pixel point coordinates, determining pixel point coordinates of the target object in the depth image.
  • the camera control device further includes: a correspondence determining module, wherein the correspondence determining module is configured to predetermine a correspondence relationship between pixel coordinates formed by the target object in the scene image and the depth image.
  • the correspondence determining module includes: an imaging relationship expression acquiring unit and a perspective transform matrix acquiring unit.
• the imaging relationship expression obtaining unit is configured to acquire the imaging relationship expression of the scene image and the depth image, where the imaging relationship expression is:
• x′ = Hx
• where x is a homogeneous representation of the pixel point coordinates of an object in the scene image, x′ is a homogeneous representation of the pixel point coordinates of the object in the depth image, and H is the perspective transformation matrix between the scene image and the depth image;
• the perspective transformation matrix acquiring unit is configured to acquire the pixel point coordinates of four object points on the scene image and on the depth image, obtain from them the value of H in the imaging relationship expression, and thereby acquire the perspective transformation matrix between the scene image and the depth image; the correspondence between the pixel coordinates formed by the target object in the scene image and in the depth image is represented by the perspective transformation matrix.
  • the present application discloses a camera control device.
• When performing the camera control operation, the camera control device first determines, by the determining module, the pixel point coordinates of the target object in the scene image in each detection period and the pixel point coordinates of the target object in the depth image; the acquiring module then acquires the distance between the target object and the camera based on the pixel point coordinates of the target object in the depth image; finally, the adjustment module adjusts the focal length of the camera by using the first distance obtained in the first detection period and the second distance obtained in the second detection period.
  • the distance between the target object and the camera in each detection cycle can be acquired, and the focal length of the camera can be adjusted by using the distance, the algorithm is simple, and the zoom accuracy is improved.
  • the present application also discloses a video camera.
  • the camera includes a processor, a memory, an image sensor, and a depth sensor.
  • the image sensor is configured to generate a scene image including a target object
• the depth sensor is configured to generate a depth image including the target object
  • the memory is configured to store a program for controlling a camera
• the processor is configured to read the program stored in the memory and perform camera control operations according to the program, where the camera control operations comprise: determining, in each detection period, the pixel point coordinates of the target object in the scene image, and determining the pixel point coordinates of the target object in the depth image; acquiring the distance between the target object and the camera based on the pixel point coordinates of the target object in the depth image; and
• adjusting the focal length of the camera by using the first distance obtained in the first detection period and the second distance obtained in the second detection period.
  • the image sensor and the depth sensor may be placed in different forms.
  • the image sensor and the depth sensor are integrated in the same sensor.
  • the image sensor and the depth sensor may also be two independent sensors.
• In one case where the image sensor and the depth sensor are two independent sensors, as shown in FIG. 8, the image sensor and the depth sensor are both disposed behind the camera lens but at different levels.
• a half-reflecting, half-transmitting lens is disposed between the camera lens and each of the image sensor and the depth sensor.
• the image sensor is generally at the same level as the camera lens, with its sensing surface oriented vertically toward the lens; the depth sensor is disposed between the camera lens and the image sensor and is generally placed horizontally.
• the half-reflecting, half-transmitting lens is disposed above the depth sensor at an inclination angle with respect to the horizontal direction; the inclination angle may be 45° or another angle, which is not limited in this application.
• splitting the light with the half-reflecting, half-transmitting lens ensures that the scenes captured by the image sensor and the depth sensor are consistent, so the pixel coordinates formed by the same object on the scene image are the same as the pixel coordinates formed on the depth image.
• the ratio of the amount of transmitted light to reflected light of the half-reflecting, half-transmitting lens can be controlled; for example, the transmitted light may account for 70% of the total amount of light and the reflected light for 30%.
• In another case where the image sensor and the depth sensor are two independent sensors, as shown in FIG. 9, the image sensor and the depth sensor image different light rays: the image sensor is disposed behind the camera lens and placed side by side with the depth sensor, and the image sensor is typically at the same level as the camera lens. In this case, since the image sensor and the depth sensor use different optical paths, the contents of the two images differ slightly, causing parallax between the scene image and the depth image; the scene image therefore needs to be calibrated to obtain the correspondence between the pixel coordinates formed by the target object in the scene image and in the depth image.
  • the camera disclosed in the present application is capable of acquiring a distance between a target object and a camera in different detection periods according to a scene image and a depth image, and adjusting the focal length of the camera by the distance.
  • the algorithm adopted by the method is simple, easy to implement, and improves the accuracy and robustness of the focus adjustment.
• the techniques in the embodiments of the present invention can be implemented by means of software plus a necessary general hardware platform. Based on such understanding, the technical solutions in the embodiments of the present invention may be embodied in the form of a software product. The software product may be stored in a storage medium such as a ROM/RAM, a diskette, or an optical disk, and comprises instructions for causing a computer device (which may be a personal computer, a server, or a network device) to perform the methods described in the various embodiments of the present invention or in certain portions thereof.

Abstract

Disclosed are a camera control method and device, and a camera. The control method comprises: first, determining the pixel point coordinates of a target object in a scene image and the pixel point coordinates of the target object in a depth image within each detection period; then, based on the pixel point coordinates of the target object in the depth image, acquiring the distance between the target object and a camera; and adjusting the focal length of the camera using a first distance obtained within the first detection period and a second distance obtained within the second detection period. The camera control method is applied to a camera in which an image sensor for generating a scene image and a depth sensor for generating a depth image of the corresponding scene are arranged. By means of the method, the distance between the target object and the camera within each detection period can be acquired and used to adjust the focal length of the camera, so the algorithm is simple and the zooming accuracy is improved.

Description

Camera control method, device and camera

This application claims priority to Chinese Patent Application No. 201410244369.7, filed with the Chinese Patent Office on June 4, 2014 and entitled "Camera control method, device and camera", the entire contents of which are incorporated herein by reference.

Technical field

The present invention relates to the field of information processing technologies, and in particular, to a camera control method, a camera control device, and a video camera.
Background art

Cameras used for video conferencing are basic equipment of a video conferencing system. A video conferencing system is usually deployed at two or more sites; the cameras are placed at the various sites of the system and connected to each other through a communication network, and are used to capture video containing the local users and transmit that video to the other sites, so that users at different locations can communicate with each other through video.

During a video conference, it is often necessary to adjust the focal length of the camera, that is, to zoom the camera. For example, when a speaker is far from the camera, the speaker appears small in the captured video, so users at other sites cannot clearly see the speaker's expression, and the communication experience is poor. In this case, the focal length of the camera needs to be adjusted so that the size of the speaker in the video stays within a certain range. The prior art provides a manual zoom method, in which the user adjusts the focal length by operating the camera's remote control, a zoom adjustment interface, or a zoom ring. However, this method is cumbersome to operate, manual adjustment takes a long time, and the focal length cannot be adjusted in real time. To solve this problem, the prior art also provides an automatic zoom method, in which the camera detects, in real time, the image area that the target occupies in the video and then adjusts the focal length according to that area, so that the target always remains an appropriate size in the video.

However, the inventor found during research that the automatic zoom method provided by the prior art uses a relatively complicated algorithm, with low accuracy, when acquiring the area that the target occupies in the camera image.
Summary of the invention

Embodiments of the present invention provide a camera control method, a camera control device, and a camera, to solve, to a certain extent, the problem that the algorithm of the prior-art automatic zoom method is complicated and its accuracy is low.

To solve the above technical problem, the embodiments of the present invention disclose the following technical solutions:

According to a first aspect of the embodiments of the present disclosure, a camera control method is provided, the method comprising:

determining, in each detection period, pixel point coordinates of a target object in a scene image, and determining pixel point coordinates of the target object in a depth image;

acquiring a distance between the target object and the camera based on the pixel point coordinates of the target object in the depth image;

adjusting the focal length of the camera by using a first distance obtained in the first detection period and a second distance obtained in the second detection period.
With reference to the first aspect, in a first possible implementation manner of the first aspect, the step of adjusting the focal length of the camera by using the first distance obtained in the first detection period and the second distance obtained in the second detection period includes:

acquiring world coordinates (xc2, yc2) of the target object in the second detection period according to the second distance d2 between the target object and the camera obtained in the second detection period and the pixel point coordinates (u2, v2) of the target object in the scene image;

when the focal length adjusted by the camera is set to f2, acquiring the correspondence between the focal length f2 and the pixel point coordinates (u2, v2) according to the world coordinates (xc2, yc2) of the target object in the second detection period, the second distance d2, and the pixel point coordinates (u2, v2) of the target object in the scene image;

acquiring, by using the correspondence between the focal length f2 and the pixel point coordinates (u2, v2), the pixel point coordinates (u1, v1) of the target object in the scene image in the first detection period, the first distance d1 between the target object and the camera in the first detection period, and the focal length f1 of the camera in the first detection period, the value of the focal length f2 at which the difference between the pixel point coordinates (u1, v1) and the pixel point coordinates (u2, v2) is within a preset range, and adjusting the focal length of the camera to the focal length f2.
With reference to the first aspect, in a second possible implementation manner of the first aspect, the step of adjusting the focal length of the camera by using the first distance acquired in the first detection period and the second distance acquired in the second detection period includes:

after acquiring the first distance d1 between the target object and the camera in the first detection period, acquiring a first ratio of the focal length f1 of the camera in the first detection period to the first distance d1;

setting the focal length adjusted by the camera to f2 and, after acquiring the second distance d2 between the target object and the camera in the second detection period, acquiring a second ratio of the focal length f2 to the second distance d2;

acquiring the value of the focal length f2 at which the difference between the first ratio and the second ratio is within a preset ratio range, and adjusting the focal length of the camera to the focal length f2.
With reference to the first aspect, in a third possible implementation manner of the first aspect, the method further includes:

after acquiring the first distance d1 and the second distance d2, calculating the difference between the first distance d1 and the second distance d2, and determining that the focal length of the camera needs to be adjusted when the difference is not within a preset threshold range.
With reference to the first aspect, in a fourth possible implementation manner of the first aspect, the acquiring the distance between the target object and the camera based on the pixel point coordinates of the target object in the depth image includes:

acquiring the gray level of the pixel corresponding to the pixel point coordinates of the target object in the depth image;

acquiring the distance between the target object and the camera through the correspondence between gray level and distance.
With reference to the first aspect, or any one of the first to fourth possible implementation manners of the first aspect, in a fifth possible implementation manner of the first aspect, the method further includes:

setting the focus priority of the target object to the highest priority according to received setting information;

after adjusting the focal length of the camera, adjusting the focus position of the camera so that the camera focuses at the position with the highest focus priority.
With reference to the first aspect, in a sixth possible implementation manner of the first aspect, the determining the pixel point coordinates of the target object in the depth image includes:

determining the pixel point coordinates of the target object in the depth image based on the correspondence between the pixel coordinates formed by the target object in the scene image and in the depth image, and on the pixel point coordinates of the target object in the scene image.
With reference to the sixth possible implementation manner of the first aspect, in a seventh possible implementation manner of the first aspect, the method further includes: predetermining the correspondence between the pixel coordinates formed by the target object in the scene image and in the depth image.
The step of predetermining this correspondence includes:
obtaining an imaging relationship expression between the scene image and the depth image, the imaging relationship expression being:

x' = Hx

where x is the homogeneous representation of the pixel coordinates of an object in the scene image; x' is the homogeneous representation of the pixel coordinates of the object in the depth image; and H is the perspective transformation matrix between the scene image and the depth image; and
obtaining the pixel coordinates of four object points in the scene image and in the depth image respectively, obtaining therefrom the value of H in the imaging relationship expression, and thereby obtaining the perspective transformation matrix of a same object between the scene image and the depth image, the perspective transformation matrix characterizing the correspondence between the pixel coordinates formed by the target object in the scene image and in the depth image.
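The calibration step above, estimating H from the pixel coordinates of four object points seen in both images, can be sketched as follows. This is an illustrative implementation, not the patent's own: it fixes H[2][2] = 1 and solves the resulting 8×8 linear system by Gaussian elimination, assuming the four points are in general position.

```python
# Hedged sketch: estimate the 3x3 perspective transform H (with H[2][2]
# fixed to 1) from four point correspondences between the scene image and
# the depth image, then map a scene-image pixel into the depth image.

def solve_homography(scene_pts, depth_pts):
    """scene_pts, depth_pts: four (x, y) pairs; returns H as a 3x3 list
    such that depth ~ H * scene in homogeneous coordinates."""
    A, b = [], []
    for (x, y), (xp, yp) in zip(scene_pts, depth_pts):
        A.append([x, y, 1, 0, 0, 0, -xp * x, -xp * y]); b.append(xp)
        A.append([0, 0, 0, x, y, 1, -yp * x, -yp * y]); b.append(yp)
    n = 8
    M = [row + [rhs] for row, rhs in zip(A, b)]
    for col in range(n):  # Gaussian elimination with partial pivoting
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    h = [0.0] * n
    for r in range(n - 1, -1, -1):  # back substitution
        h[r] = (M[r][n] - sum(M[r][c] * h[c] for c in range(r + 1, n))) / M[r][r]
    return [h[0:3], h[3:6], h[6:8] + [1.0]]

def map_point(H, x, y):
    """Apply H to a scene-image pixel, returning depth-image coordinates."""
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)

# Example: the depth image is the scene image shifted by (10, 20) pixels.
H = solve_homography([(0, 0), (100, 0), (0, 100), (100, 100)],
                     [(10, 20), (110, 20), (10, 120), (110, 120)])
print(map_point(H, 50, 50))  # close to (60.0, 70.0)
```

Once H is known, the depth-image pixel of the target object, and hence its distance, follows directly from its scene-image pixel, as the claim describes.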
According to a second aspect of the embodiments of the present disclosure, a camera control apparatus is provided, the apparatus including:
a determining module, configured to determine, in each detection period, the pixel coordinates of a target object in a scene image and the pixel coordinates of the target object in a depth image;
an obtaining module, configured to obtain the distance between the target object and the camera based on the pixel coordinates of the target object in the depth image; and
an adjusting module, configured to adjust the focal length of the camera by using a first distance obtained in a first detection period and a second distance obtained in a second detection period.
With reference to the second aspect, in a first possible implementation manner of the second aspect, the adjusting module includes:
a first obtaining unit, configured to obtain the world coordinates (xc2, yc2) of the target object in the second detection period according to the second distance d2 between the target object and the camera obtained in the second detection period, and the pixel coordinates (u2, v2) of the target object in the scene image;
a second obtaining unit, configured to, when the adjusted focal length of the camera is denoted f2, obtain the correspondence between the focal length f2 and the pixel coordinates (u2, v2) according to the world coordinates (xc2, yc2) of the target object in the second detection period, the second distance d2, and the pixel coordinates (u2, v2) of the target object in the scene image; and
a first adjusting unit, configured to obtain, by using the correspondence between the focal length f2 and the pixel coordinates (u2, v2), the pixel coordinates (u1, v1) of the target object in the scene image in the first detection period, the first distance d1 between the target object and the camera in the first detection period, and the focal length f1 of the camera in the first detection period, the value of the focal length f2 at which the difference between the pixel coordinates (u1, v1) and the pixel coordinates (u2, v2) is within a preset range, and to adjust the focal length of the camera to f2.
With reference to the second aspect, in a second possible implementation manner of the second aspect, the adjusting module includes:
a third obtaining unit, configured to obtain, after the first distance d1 between the target object and the camera in the first detection period is obtained, a first ratio of the focal length f1 of the camera in the first detection period to the first distance d1;
a fourth obtaining unit, configured to denote the adjusted focal length of the camera as f2 and, after the second distance d2 between the target object and the camera is obtained in the second detection period, obtain a second ratio of the focal length f2 to the second distance d2; and
a second adjusting unit, configured to obtain the value of the focal length f2 at which the difference between the first ratio and the second ratio is within a preset ratio range, and to adjust the focal length of the camera to f2.
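The ratio-keeping strategy above amounts to choosing f2 so that f2/d2 matches f1/d1, which keeps the target's imaged size steady. A minimal sketch, with an assumed (not disclosed) zoom range used only to clamp the result:

```python
# Hedged sketch: keep f/d constant across detection periods, i.e. pick
# f2 with f2/d2 == f1/d1, clamped to an illustrative lens zoom range.

def adjusted_focal_length(f1_mm: float, d1_m: float, d2_m: float,
                          f_min_mm: float = 4.0, f_max_mm: float = 48.0) -> float:
    """Return f2 such that f2/d2 equals f1/d1, clamped to [f_min, f_max]
    (the range limits here are assumptions, not from the patent)."""
    f2 = f1_mm * d2_m / d1_m
    return max(f_min_mm, min(f_max_mm, f2))

# Target moved from 2 m to 4 m: doubling the focal length keeps the ratio.
print(adjusted_focal_length(12.0, 2.0, 4.0))  # -> 24.0
```

In practice the preset ratio tolerance mentioned in the claim would let small deviations pass without a zoom command.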
With reference to the second aspect, in a third possible implementation manner of the second aspect, the camera control apparatus further includes:
a judging module, configured to calculate, after the first distance d1 and the second distance d2 are obtained, the difference between the first distance d1 and the second distance d2, and to determine, when the difference is not within a preset threshold range, that the focal length of the camera needs to be adjusted.
With reference to the second aspect, in a fourth possible implementation manner of the second aspect, the obtaining module includes:
a gray level obtaining unit, configured to obtain the gray level of the pixel corresponding to the pixel coordinates of the target object in the depth image; and
a distance obtaining unit, configured to obtain the distance between the target object and the camera according to the correspondence between gray levels and distances.
With reference to the second aspect, or the first, second, third, or fourth possible implementation manner of the second aspect, in a fifth possible implementation manner of the second aspect, the camera control apparatus further includes:
a setting module, configured to set, according to received setting information, the focus priority of the target object to the highest priority; and
a focusing module, configured to adjust, after the focal length of the camera is adjusted, the focus position of the camera so that the camera focuses at the position with the highest focus priority.
With reference to the second aspect, in a sixth possible implementation manner of the second aspect, the determining module includes:
a determining unit, configured to determine the pixel coordinates of the target object in the depth image based on the correspondence between the pixel coordinates formed by the target object in the scene image and in the depth image, and on the pixel coordinates of the target object in the scene image.
With reference to the sixth possible implementation manner of the second aspect, in a seventh possible implementation manner of the second aspect, the camera control apparatus further includes a correspondence determining module, configured to predetermine the correspondence between the pixel coordinates formed by the target object in the scene image and in the depth image.
The correspondence determining module includes:
an imaging relationship expression obtaining unit, configured to obtain the imaging relationship expression between the scene image and the depth image, the imaging relationship expression being:

x' = Hx

where x is the homogeneous representation of the pixel coordinates of an object in the scene image; x' is the homogeneous representation of the pixel coordinates of the object in the depth image; and H is the perspective transformation matrix between the scene image and the depth image; and
a perspective transformation matrix obtaining unit, configured to obtain the pixel coordinates of four object points in the scene image and in the depth image respectively, obtain therefrom the value of H in the imaging relationship expression, and thereby obtain the perspective transformation matrix of a same object between the scene image and the depth image, the perspective transformation matrix characterizing the correspondence between the pixel coordinates formed by the target object in the scene image and in the depth image.
According to a third aspect of the embodiments of the present disclosure, a camera is provided, the camera including a processor, a memory, an image sensor, and a depth sensor,
where the image sensor is configured to generate a scene image containing a target object;
the depth sensor is configured to generate a depth image containing the target object;
the memory is configured to store a program for controlling the camera; and
the processor is configured to read the program stored in the memory and perform camera control operations according to the program, the camera control operations including:
determining, in each detection period, the pixel coordinates of the target object in the scene image and the pixel coordinates of the target object in the depth image;
obtaining the distance between the target object and the camera based on the pixel coordinates of the target object in the depth image; and
adjusting the focal length of the camera by using a first distance obtained in a first detection period and a second distance obtained in a second detection period.
With reference to the third aspect, in a first possible implementation manner of the third aspect, the image sensor and the depth sensor are integrated in a single sensor;
or, the image sensor and the depth sensor are both disposed behind the camera lens, the image sensor and the depth sensor are disposed at different horizontal levels, and a half-reflecting, half-transmitting mirror is disposed between the camera lens and the image sensor and depth sensor;
or, the image sensor is disposed behind the camera lens, and the image sensor and the depth sensor are arranged side by side.
The present application discloses a camera control method and apparatus, and a corresponding camera. In the camera control method, first, in each detection period, the pixel coordinates of a target object in a scene image are determined, and the pixel coordinates of the target object in a depth image are determined; then, based on the pixel coordinates of the target object in the depth image, the distance between the target object and the camera is obtained; and the focal length of the camera is adjusted by using a first distance obtained in a first detection period and a second distance obtained in a second detection period.
With this camera control method, the focal length of the camera can be adjusted from the distances between the target object and the camera obtained in different detection periods. The change in this distance reflects the size of the area that the target object occupies in the video. Compared with the prior art, when the method disclosed in the present application adjusts the focal length according to the distance between the target object and the camera, the algorithm used is simple and easy to implement, and has higher accuracy and robustness.
BRIEF DESCRIPTION OF THE DRAWINGS
To describe the technical solutions in the embodiments of the present invention or in the prior art more clearly, the accompanying drawings required for describing the embodiments or the prior art are briefly introduced below. Apparently, the accompanying drawings in the following description show merely some embodiments of the present invention, and a person of ordinary skill in the art may derive other drawings from these accompanying drawings without creative effort.
FIG. 1 is a flowchart of an embodiment of a camera control method disclosed in the present invention;
FIG. 2 is a flowchart of another embodiment of a camera control method disclosed in the present invention;
FIG. 3 is an example diagram of a camera imaging model disclosed in the present invention;
FIG. 4 is a schematic diagram of the geometric relationships of camera imaging disclosed in the present invention;
FIG. 5 is a flowchart of still another embodiment of a camera control method disclosed in the present invention;
FIG. 6 is a flowchart of still another embodiment of a camera control method disclosed in the present invention;
FIG. 7 is a schematic diagram of the working principle of a depth sensor disclosed in the prior art;
FIG. 8 is a schematic structural diagram of a camera disclosed in the present application;
FIG. 9 is a schematic structural diagram of another camera disclosed in the present application;
FIG. 10 is a schematic structural diagram of a camera control apparatus disclosed in the present invention;
FIG. 11 is a schematic structural diagram of another camera control apparatus disclosed in the present invention;
FIG. 12 is a schematic structural diagram of still another camera control apparatus disclosed in the present invention;
FIG. 13 is a schematic structural diagram of still another camera control apparatus disclosed in the present invention.
DETAILED DESCRIPTION
The embodiments of the present application provide a camera control method, an apparatus, and a camera, to solve the problem that when a camera performs automatic zooming by using the prior art, the target detection algorithm is complicated and its accuracy is low.
To enable a person skilled in the art to better understand the technical solutions in the embodiments of the present invention, and to make the foregoing objectives, features, and advantages of the embodiments of the present invention clearer and more comprehensible, the technical solutions in the embodiments of the present invention are further described in detail below with reference to the accompanying drawings.
FIG. 1 is a schematic flowchart of a camera control method according to an embodiment of the present application. Referring to FIG. 1, the camera control method includes the following steps.
Step 101: In each detection period, determine the pixel coordinates of a target object in a scene image, and determine the pixel coordinates of the target object in a depth image.
The camera control method disclosed in the present application is applied to a camera in which an image sensor and a depth sensor are disposed. The image sensor is usually a CCD (Charge-Coupled Device) or CMOS (Complementary Metal Oxide Semiconductor) color image sensor, which produces a color image in RGB (Red/Green/Blue) format; the RGB image is used as the scene image. Alternatively, a black-and-white image sensor may be used, and the generated black-and-white image serves as the scene image. The depth sensor is configured to generate a depth image, in which the gray level of a pixel characterizes the distance from the photographed object corresponding to that pixel to the camera.
The target object is the whole or a part of the shooting scene, namely the part of the scene that the user is interested in. When determining the pixel coordinates of the target object in the scene image, attribute features of the target object may be used; these attribute features include lines, shape, area, and the like. The pixels of the target object in the scene image can be obtained by a target detection algorithm together with the attribute features. Further, the pixel coordinates of the target object in the depth image can be determined according to the correspondence between pixels in the scene image and pixels in the depth image.
Step 102: Based on the pixel coordinates of the target object in the depth image, obtain the distance between the target object and the camera.
Step 103: Adjust the focal length of the camera by using a first distance obtained in a first detection period and a second distance obtained in a second detection period.
Let the two detection periods be a first detection period and a second detection period, and let the distances obtained in them be a first distance d1 and a second distance d2, respectively. The first distance d1 and the second distance d2 reflect the change in the distance between the target object and the camera over the two detection periods, and thus the size of the area that the target object occupies in the video. For example, when the second distance d2 is greater than the first distance d1, the area of the target object shown in the video decreases, and the focal length of the camera needs to be increased; when the second distance d2 is smaller than the first distance d1, the area of the target object shown in the video increases, and the focal length of the camera needs to be decreased.
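The zoom-direction logic just described can be sketched as a small decision function. This is an illustrative example only; the dead-band value is an assumption standing in for the preset threshold range, used to avoid reacting to measurement jitter.

```python
# Minimal sketch of the zoom direction logic: a growing distance means the
# target's on-screen area shrank, so zoom in (increase focal length); a
# shrinking distance means the area grew, so zoom out. The dead-band value
# is an assumed stand-in for the preset threshold range.

def zoom_direction(d1_m: float, d2_m: float, deadband_m: float = 0.1) -> str:
    if d2_m - d1_m > deadband_m:
        return "zoom in"    # target farther away -> increase focal length
    if d1_m - d2_m > deadband_m:
        return "zoom out"   # target closer -> decrease focal length
    return "hold"

print(zoom_direction(3.0, 4.0))  # target walked away -> "zoom in"
```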
As can be seen from the foregoing embodiment, in the camera control method disclosed herein, first, in each detection period, the pixel coordinates of the target object in the scene image and in the depth image are determined; then, based on the pixel coordinates of the target object in the depth image, the distance between the target object and the camera is obtained; and the focal length of the camera is adjusted by using the first distance obtained in the first detection period and the second distance obtained in the second detection period.
The camera control method is applied to a camera in which an image sensor for generating a scene image and a depth sensor for generating a depth image are disposed. With this method, the distance between the target object and the camera in each detection period can be obtained, and the change in this distance reflects the size of the area that the target object occupies in the video. Compared with the prior art, when the method disclosed in the present application adjusts the focal length according to the distance between the target object and the camera, the algorithm used is simple and easy to implement, and has higher accuracy and robustness.
Step 103 discloses a solution for adjusting the focal length of the camera by using the first distance obtained in the first detection period and the second distance obtained in the second detection period. Referring to FIG. 2, this solution can be implemented in several manners, described in the following embodiments.
Step 1011: In each detection period, determine the pixel coordinates of the target object in the scene image, and determine the pixel coordinates of the target object in the depth image.
Step 1012: Based on the pixel coordinates of the target object in the depth image, obtain the distance between the target object and the camera.
The implementation of steps 1011 and 1012 is the same as that of steps 101 and 102; the descriptions may be referred to mutually and are not repeated here.
Step 1013: Let the two detection periods be a first detection period and a second detection period. According to the second distance d2 between the target object and the camera obtained in the second detection period and the pixel coordinates (u2, v2) of the target object in the scene image, obtain the world coordinates (xc2, yc2) of the target object in the second detection period.
Because the camera can be placed at any position in the environment, a reference coordinate system is chosen arbitrarily in the environment to describe the position of the camera, and it is also used to describe the position of any other object in the environment; this coordinate system is called the world coordinate system.
Coordinates in the world coordinate system are difficult to measure directly, whereas pixel coordinates can be read from the image. Therefore, in this embodiment, after the pixel coordinates are obtained, the world coordinates of the target object are determined from the pixel coordinates.
Step 1014: When the adjusted focal length of the camera is denoted f2, obtain the correspondence between the focal length f2 and the pixel coordinates (u2, v2) according to the world coordinates (xc2, yc2) of the target object in the second detection period, the second distance d2, and the pixel coordinates (u2, v2) of the target object in the scene image.
Step 1015: By using the correspondence between the focal length f2 and the pixel coordinates (u2, v2), the pixel coordinates (u1, v1) of the target object in the scene image in the first detection period, the first distance d1 between the target object and the camera in the first detection period, and the focal length f1 of the camera in the first detection period, obtain the value of the focal length f2 at which the difference between the pixel coordinates (u1, v1) and the pixel coordinates (u2, v2) is within a preset range, and adjust the focal length of the camera to f2.
Through the operations of steps 1013 to 1015, the adjustment of the focal length of the camera is achieved.
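Steps 1013 to 1015 can be sketched numerically as follows. This is an illustrative example, not the patent's own computation: it assumes the simplified projection u − u0 = β·f·xc/d (with the focal length neglected next to the object distance, as the derivation later in this description does), and the values of β, the principal point, focal lengths, and distances are all made up.

```python
# Hedged numerical sketch of steps 1013-1015, under the assumed simplified
# projection u - u0 = beta * f * x_c / d. All constants are illustrative.

BETA = 1000.0         # pixels per metre on the sensor (assumed)
U0, V0 = 960.0, 540.0 # principal point in pixels (assumed)

def world_from_pixel(u, v, d, f):
    """Step 1013: invert the projection to get world coordinates from
    pixel coordinates plus the depth-derived distance d."""
    xc = (u - U0) * d / (BETA * f)
    yc = (v - V0) * d / (BETA * f)
    return xc, yc

def pixel_from_world(xc, yc, d, f):
    """Step 1014: where the target images at focal length f (the
    correspondence between f and the pixel coordinates)."""
    return U0 + BETA * f * xc / d, V0 + BETA * f * yc / d

def focal_to_restore(u1, u2, f1):
    """Step 1015: f2 that moves the target's pixel u-coordinate from u2
    back to u1 (difference within the preset range, here exactly zero)."""
    return f1 * (u1 - U0) / (u2 - U0)

# Period 1: target at d1 = 2 m images at u1 = 1160 with f1 = 0.012 m.
# Period 2: the target recedes to d2 = 4 m; its pixel coordinate drifts.
f1, d1, d2, u1 = 0.012, 2.0, 4.0, 1160.0
xc, _ = world_from_pixel(u1, 540.0, d1, f1)   # world x from period 1
u2, _ = pixel_from_world(xc, 0.0, d2, f1)     # where it lands in period 2
f2 = focal_to_restore(u1, u2, f1)
print(round(f2, 6))  # -> 0.024: doubling f restores the pixel position
```

Note that for a target whose lateral world position is unchanged, this reduces to f2 = f1·d2/d1, which matches the ratio-based implementation described in the claims.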
When the world coordinate system takes the optical center of the camera as its origin, the camera imaging model is as shown in FIG. 3, where xc, yc, zc are the axes of the world coordinate system; x, y form the imaging plane coordinate system of the scene image or the depth image; m, n form the pixel coordinate system of the scene image or the depth image; Oc is the optical center of the camera; Oczc is the optical axis of the camera; and Ocp is the focal length f.
The xc, yc, zc axes of the world coordinate system represent the coordinates of the three-dimensional space in which an object is located, in meters or millimeters. The imaging plane coordinate system established on the scene image represents the coordinate system of the object's imaging plane on the image sensor, and the imaging plane coordinate system established on the depth image represents the coordinate system of the object's imaging plane on the depth sensor; the x and y axes of the imaging coordinate system represent the horizontal and vertical axes, respectively, in meters or millimeters. The pixel coordinate system established on the scene image is the coordinate system of the pixel positions after the object is imaged by the image sensor, and the pixel coordinate system established on the depth image is the coordinate system of the pixel positions after the object is imaged by the depth sensor; the m and n axes of the pixel coordinate system represent the horizontal and vertical axes, respectively, in pixels.
In addition, conversion between the imaging plane coordinate system and the pixel coordinate system of the same scene image, or between the imaging plane coordinate system and the pixel coordinate system of the same depth image, can be performed by a fixed scale factor.
Referring to the schematic diagram of the geometric relationships of camera imaging shown in FIG. 4, the correspondence between the world coordinate system and the imaging plane coordinate system is:

x = fx · xc / zc    (1)
y = fy · yc / zc    (2)

where fx and fy are the equivalent focal lengths of the camera focal length f in x and y, respectively; xc, yc, zc are the axes of the world coordinate system; and x, y are the axes of the imaging coordinate system.
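The perspective projection of formulas (1) and (2) can be sketched directly; the equation images were lost from this copy, so the form x = fx·xc/zc, y = fy·yc/zc is reconstructed from the surrounding text, and the numeric values below are illustrative only.

```python
# Hedged sketch of formulas (1) and (2): perspective projection of a
# camera-frame point onto the imaging plane, x = fx*xc/zc, y = fy*yc/zc.

def project(xc: float, yc: float, zc: float, fx: float, fy: float):
    """Project a camera-frame point (metres) onto the image plane (metres)."""
    if zc <= 0:
        raise ValueError("point must be in front of the camera")
    return fx * xc / zc, fy * yc / zc

# A point 0.5 m right of the axis, 5 m away, with a 25 mm lens:
x, y = project(0.5, 0.2, 5.0, 0.025, 0.025)
print(x, y)  # -> 0.0025 0.001 (2.5 mm and 1.0 mm on the sensor)
```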
According to formula (1) and formula (2), the transformation relationship between the world coordinate system and the imaging plane coordinate system of an object point in three-dimensional space is:

zc · x̃ = K [R t] X̃    (3)

K = | fx  s   u0 |
    | 0   fy  v0 |
    | 0   0   1  |    (4)

where x̃ = (x, y, 1)ᵀ is the homogeneous representation of the plane coordinates and X̃ = (xw, yw, zw, 1)ᵀ is the homogeneous representation of the world coordinates. fx and fy are the equivalent focal lengths in x and y, respectively; s is the distortion coefficient of the image; u0, v0 are the coordinates of the image principal point; R is the rotation matrix of the camera; and t is the camera translation vector. K is called the intrinsic parameter matrix of the camera, and R and t are called the extrinsic parameters of the camera.
According to formula (3) and formula (4), at a given moment the relationship between the pixel coordinates of an object point in the scene image and its world coordinates is:

u = u0 + β · f · xc / (d − f)    (5)
v = v0 + β · f · yc / (d − f)    (6)

where (u, v) are the pixel coordinates of the object point in the scene image; (u0, v0) are the coordinates of the image principal point, the image principal point being the intersection of the camera optical axis with the imaging plane of the scene image; f is the focal length; β is the scale factor for converting between the image coordinate system and the pixel coordinate system of the scene image; and d is the distance from the object point to the image sensor. Since the image sensor is disposed in the camera, d can be regarded as the distance from the object point to the camera, and this distance can be obtained from the depth image. The gray level of each pixel in the depth image generally represents a quantization of the distance between the photographed object corresponding to that pixel and the depth sensor; different gray levels represent different distances between the photographed object and the depth sensor. From the depth image generated by the depth sensor, the number of gray-level quantization steps, and the quantization algorithm, the actual depth value represented by each gray level can be obtained, and the distance between the photographed object corresponding to each pixel and the camera can be calculated.
From the above, when the world coordinate system takes the camera's optical center as its origin, the algorithm for obtaining the world coordinates (xc1, yc1) of the target object in the first detection period, from the first distance d1 acquired in the first detection period and the pixel coordinates (u1, v1) of the target object in the scene image, is:
xc1 = (d1 − f1)(u1 − βu0)/(βf1)    (7)

yc1 = (d1 − f1)(v1 − βv0)/(βf1)    (8)
where (u1, v1) are the pixel coordinates of the target object in the scene image of the first detection period; (u0, v0) are the pixel coordinates of the principal point in the scene image; β is the scale factor for converting between the image coordinate system and the pixel coordinate system of the scene image; f1 is the camera's current focal length; d1 is the first distance between the target object and the camera; and (xc1, yc1) are the world coordinates of the target object in the first detection period. Here d1 can be obtained from the depth image of the first detection period.
The pixel coordinates (u1, v1) of the target object in the scene image of the first detection period can be obtained by searching the scene image based on the features of the target object, which may include the lines, the shape, and so on of the target object.
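For illustration, the first-period world-coordinate computation can be sketched in code. This is not the patent's implementation; the function name and the numbers in the usage are ours, and the formula follows the simplified relation described in this section (camera optical center as world origin, focal length neglected relative to the object distance).

```python
def world_coords(u, v, d, f, beta, u0, v0):
    """Approximate world coordinates (x_c, y_c) of an object point from
    its pixel coordinates (u, v), its distance d to the camera (taken
    from the depth image), the current focal length f, the image/pixel
    scale factor beta, and the principal point (u0, v0)."""
    b = beta * u0  # offset term on the u axis
    a = beta * v0  # offset term on the v axis
    x_c = d * (u - b) / (beta * f)
    y_c = d * (v - a) / (beta * f)
    return x_c, y_c
```

With beta = 1 and the principal point at the origin, a point imaged at pixel (100, 50) from 2000 length units away under a focal length of 10 maps to world coordinates (20000, 10000) in the same units.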
Since the focal length f1 is much smaller than d1, it can be neglected relative to d1 (so that d1 − f1 ≈ d1). Setting

a = βv0, b = βu0,

formulas (7) and (8) can be simplified to:

xc1 = d1(u1 − b)/(βf1)    (9)

yc1 = d1(v1 − a)/(βf1)    (10)
Correspondingly, from the above, in step 1013 the formula for obtaining the world coordinates (xc2, yc2) of the target object in the second detection period, from the second distance d2 between the target object and the camera in the second detection period and the pixel coordinates (u2, v2) of the target object in the scene image, is:
xc2 = (d2 − f1)(u2 − βu0)/(βf1)    (11)

yc2 = (d2 − f1)(v2 − βv0)/(βf1)    (12)
where (u2, v2) are the pixel coordinates of the target object in the scene image of the second detection period; (u0, v0) are the pixel coordinates of the principal point in the scene image; β is the scale factor for converting between the image coordinate system and the pixel coordinate system of the scene image; f1 is the camera's current focal length; d2 is the second distance between the target object and the camera; and (xc2, yc2) are the world coordinates of the target object in the second detection period. Here d2 can be obtained from the depth image of the second detection period.
Since the focal length has not yet been adjusted in the second detection period, the camera's current focal length is still f1, and the world coordinates of the target object in the second detection period can thus be obtained. When

a = βv0, b = βu0,

formulas (11) and (12) can be simplified to:

xc2 = d2(u2 − b)/(βf1)    (13)

yc2 = d2(v2 − a)/(βf1)    (14)
In step 1014, the adjusted focal length of the camera is set to f2, and the correspondence between the focal length f2 and the pixel coordinates (u2, v2) is obtained. Formulas (9) and (10) are obtained from the pixel coordinates (u1, v1) of the target object in the scene image in the first detection period, the first distance d1 between the target object and the camera, and the camera's focal length f1 in the first detection period. Further, from formulas (13) and (14), the correspondence between the focal length f2 and the pixel coordinates (u2, v2) is:
u2 = βf2·xc2/d2 + b    (15)

v2 = βf2·yc2/d2 + a    (16)
where a = βv0 and b = βu0.
When the distance between the target object and the camera changes, the size of the target object's image in the video changes. To compensate, the camera's focal length must be adjusted so that the size of the target object's image in the video stays within a certain range. Step 1015 discloses a scheme of obtaining the difference between the pixel coordinates (u1, v1) and the pixel coordinates (u2, v2), and obtaining the value of the focal length f2 at which that difference lies within a preset range. In this scheme, to keep the size of the target object's image in the video within the preset range, it should hold that |u2 − u1| ≤ Δ and |v2 − v1| ≤ Δ, where Δ is the preset range, that is, the following formulas are satisfied:

|βf2·xc2/d2 − βf1·xc1/d1| ≤ Δ    (17)

|βf2·yc2/d2 − βf1·yc1/d1| ≤ Δ    (18)
Solving formulas (17) and (18), since the values of xc1, yc1, xc2, yc2, d1, d2 and f1 are all known, the value of the focal length f2 can be obtained, and the camera's focal length is then adjusted to f2.
The value of the preset range Δ can be set according to the needs of the application. If the size of the target object's image in the video must remain essentially unchanged, the preset range Δ can be set to 0.
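When the preset range is set to 0, the requirement that the target's image size stay unchanged reduces to keeping the quantity f·xc/d equal across the two detection periods, which gives f2 in closed form. A minimal sketch (the function name and the example numbers are illustrative, not from the patent):

```python
def solve_f2(f1, xc1, d1, xc2, d2):
    """Focal length f2 satisfying f2 * xc2 / d2 == f1 * xc1 / d1,
    i.e. the zoom that keeps the target's image size unchanged
    when the preset range is 0."""
    return f1 * (xc1 / d1) * (d2 / xc2)
```

For example, if the target moves straight away from the camera (its world coordinate xc unchanged) and its distance doubles, the focal length doubles as well.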
In addition, referring to FIG. 5, the focal length of the camera can also be adjusted in other ways.
Step 1021: in each detection period, determine the pixel coordinates of the target object in the scene image, and determine the pixel coordinates of the target object in the depth image.

Step 1022: obtain the distance between the target object and the camera based on the pixel coordinates of the target object in the depth image.

Steps 1021 to 1022 are implemented in the same way as steps 101 to 102; the descriptions can be referred to each other and are not repeated here.
Step 1023: after obtaining the first distance d1 between the target object and the camera in the first detection period, obtain the first ratio, of the camera's focal length f1 in the first detection period to the first distance d1.

Step 1024: set the adjusted focal length of the camera to f2; after obtaining the second distance d2 between the target object and the camera in the second detection period, obtain the second ratio, of the focal length f2 to the second distance d2.

Step 1025: obtain the value of the focal length f2 at which the difference between the first ratio and the second ratio is within a preset ratio range, and adjust the camera's focal length to f2.
The scheme disclosed in steps 1021 to 1025 uses the distances obtained in two detection periods to adjust the camera's focal length. In this embodiment, it is generally considered that when the ratio of the camera's focal length to the distance between the camera and the target object stays within a certain range, the size of the target object's image in the video remains essentially within a certain range. Therefore, in this embodiment, after the first ratio of f1 to d1 and the second ratio of f2 to d2 are obtained, the value of f2 at which the difference between the first ratio and the second ratio lies within the preset ratio range is calculated, and the camera's focal length is adjusted to f2, achieving automatic zooming.
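As a sketch of the ratio-keeping rule in steps 1023 to 1025 (the function name and the tolerance handling are ours, assuming the preset ratio range is a symmetric tolerance around the first-period ratio):

```python
def zoom_by_ratio(f1, d1, d2, tol=0.0):
    """Return the focal length for the second detection period such
    that the ratio focal_length / distance stays within tol of f1 / d1."""
    target_ratio = f1 / d1
    if abs(f1 / d2 - target_ratio) <= tol:
        return f1  # keeping f1 already satisfies the preset ratio range
    return target_ratio * d2  # f2 chosen so that f2 / d2 == f1 / d1
```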
The present application discloses a camera control method that achieves automatic zooming of the camera according to the distance between the target object and the camera. Referring to FIG. 6, to avoid image jitter caused by zooming too frequently, the present application discloses the following embodiment.
Step 111: in each detection period, determine the pixel coordinates of the target object in the scene image, and determine the pixel coordinates of the target object in the depth image.

Step 112: obtain the distance between the target object and the camera based on the pixel coordinates of the target object in the depth image. The distance includes the first distance d1 obtained in the first detection period and the second distance d2 obtained in the second detection period.

Step 113: after obtaining the first distance d1 and the second distance d2, calculate the difference between the first distance d1 and the second distance d2.

Step 114: judge whether the difference is within a preset threshold range; if not, perform the operation of step 115; if so, perform the operation of step 116.

Step 115: when it is judged that the difference is not within the preset threshold range, determine that the camera's focal length needs to be adjusted, and adjust the camera's focal length using the first distance obtained in the first detection period and the second distance obtained in the second detection period.

Step 116: when it is judged that the difference is within the preset threshold range, determine that the camera's focal length does not currently need to be adjusted.

Steps 111 to 112 are implemented in the same way as steps 101 to 102, and the adjustment of the camera's focal length in step 115, using the first and second distances obtained in the two detection periods, is implemented in the same way as step 103; the descriptions can be referred to each other and are not repeated here.
In the above embodiment, a threshold range is set in advance. The difference between the distances obtained in two detection periods reflects the change in the target object's position. If the change in the target object's position is within the threshold range, the camera's focal length is temporarily left unadjusted; only when it exceeds the threshold range is the focal length adjusted. The threshold range can be set according to subjective image quality and empirical values. This method avoids image jitter caused by zooming too frequently and keeps the image stable.
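The threshold-gated behaviour of steps 113 to 116 can be sketched as below; the update rule f1·d2/d1 used when the gate opens borrows the ratio-keeping adjustment described earlier and is our choice of illustration, not one mandated by this embodiment.

```python
def control_step(f1, d1, d2, threshold):
    """One detection-period decision: leave the focal length alone while
    the distance change stays inside the threshold range, otherwise
    return an adjusted focal length."""
    if abs(d2 - d1) <= threshold:
        return f1  # step 116: no adjustment needed yet
    return f1 * d2 / d1  # step 115: adjust using both distances
```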
The present application discloses the step of obtaining the distance between the target object and the camera according to the pixel coordinates of the target object in the depth image. This step includes: first, obtaining the gray level of the pixel corresponding to the pixel coordinates of the target object in the depth image; then, through the correspondence between gray level and distance, obtaining the distance between the target object and the camera.
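A sketch of the gray-level-to-distance lookup. The linear quantization, the 256 levels, and the convention that level 255 is nearest and level 0 farthest follow the depth-sensor description later in this text; a real sensor's quantization algorithm may differ, so this mapping is an assumption.

```python
def gray_to_distance(gray, d_min, d_max, levels=256):
    """Map a depth-image gray level to a distance, assuming linear
    quantization in which level (levels - 1) means nearest (d_min)
    and level 0 means farthest (d_max)."""
    if not 0 <= gray < levels:
        raise ValueError("gray level out of range")
    step = (d_max - d_min) / (levels - 1)
    return d_max - gray * step
```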
The depth image is generated by a depth sensor arranged in the camera. A depth sensor is a device capable of generating a depth image of a scene. Its basic principle is to emit infrared light toward the target object and detect the time difference before the light reflected by the target object returns; the distance of the target object is determined from this time difference. A depth sensor can acquire depth images in real time, with high accuracy and reliability.
See the schematic diagram of the depth sensor's working principle shown in FIG. 7. In the figure, the solid-line part of each video frame is the triangularly intensity-modulated infrared light emitted by the infrared light source, and the dashed-line part is the infrared light reflected back by the object; the delay between the two is Δt = 2d/v, where d is the distance between the target object and the infrared light source, and v is the speed of light. The vertical dashed lines in the figure mark the light received by the depth sensor while the camera's shutter is open. If the shutter is opened for exposure while the brightness of the infrared light source is increasing, the intensity of the infrared light received by the depth sensor decreases as the object's distance increases; in this case the dashed curve shifts to the right, and the less reflected light the depth sensor receives from the target object, the farther the target object is from the depth sensor. Conversely, if the shutter is opened for exposure while the brightness of the infrared light source is decreasing, the intensity of the received infrared light increases as the target object's distance increases; in this case the dashed curve likewise shifts to the right, so the more reflected light the depth sensor receives from the target object, the farther the target object is from the depth sensor. By jointly analyzing the intensity obtained during the brightness-increasing period and the intensity obtained during the brightness-decreasing period, the influence of the object's physical reflection characteristics can be eliminated and the distance between the target object and the depth sensor obtained.
For this joint analysis, let s(t) be the triangular-wave luminance-modulated optical power produced by the infrared light source, and let I+(ts, d) and I−(ts, d) be the light intensities received by the depth sensor in the brightness-increasing period and the brightness-decreasing period, respectively. This gives the following expressions:

I+(ts, d) = σ·s(ts − 2d/v), with the shutter opened on the rising ramp of s(t)    (19)

I−(ts, d) = σ·s(ts − 2d/v), with the shutter opened on the falling ramp of s(t)    (20)

where σ is the area of the target object's backscattering cross-section, T is the duration of one luminance modulation period, and ts is the moment the shutter opens. Because s(t) varies linearly on each ramp, σ cancels in the ratio I+/(I+ + I−) = (ts − 2d/v)/T, and from the above two formulas the distance d from the target object to the infrared light source can be obtained as:

d = (v/2)·(ts − T·I+/(I+ + I−))    (21)

d = (v/2)·(ts − T + T·I−/(I+ + I−))    (22)
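Assuming the triangular modulation makes the received intensity linear in the delay on each ramp, so that I+/(I+ + I−) = (ts − 2d/v)/T, the distance recovery can be sketched as follows. This is an illustrative reconstruction of the computation, not the patent's exact formula.

```python
def tof_distance(i_plus, i_minus, t_s, period, v=3.0e8):
    """Distance from the intensities received during the
    brightness-increasing (i_plus) and brightness-decreasing (i_minus)
    exposures of a triangular-modulation time-of-flight sensor.
    t_s is the shutter-open time, period is the modulation period T,
    v is the speed of light; the backscatter factor sigma cancels."""
    delay = t_s - period * i_plus / (i_plus + i_minus)  # delay = 2d/v
    return v * delay / 2.0
```

For an object 3 m away, the round-trip delay is 20 ns; with a 100 ns modulation period and the shutter opened at 50 ns, the intensity split is 30 % rising / 70 % falling.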
From the above calculation method, the depth sensor can determine the distance between a target object and itself according to the intensities of the light reflected by the same target object received at different times (for example, a brightness-increasing period and a brightness-decreasing period). The depth sensor then converts the calculated distance information into a grayscale or color depth image and outputs that depth image. The gray level of each pixel in the depth image is generally a quantization of the distance between the photographed object at that pixel and the depth sensor; different gray levels represent different distances. The frame rate of the depth image can reach 30 fps or 60 fps, and it typically has 256 gray levels. From the depth image generated by the depth sensor, the number of gray-level quantization steps, and the quantization algorithm, the actual depth value represented by each gray level can be obtained, and the distance between the photographed object at each pixel and the camera can be calculated. For a depth image in grayscale form, a brighter pixel region indicates that the photographed object at that region is closer to the depth sensor, and a darker pixel region indicates that it is farther away; a pixel region with gray level 0 represents the object farthest from the depth sensor, and a pixel region with gray level 255 represents the object closest to it.
From the above introduction to the depth sensor's working principle, the distance between the target object and the depth sensor can be obtained through the gray level of the pixel corresponding to the target object in the depth image. In the present application, the depth sensor is arranged in the camera, so the distance between the target object and the camera is obtained through the distance between the target object and the depth sensor.
In the camera control method disclosed in the present application, the camera's focal length is adjusted through the distances between the target object and the camera in different detection periods. In addition, to achieve focusing on the target object, the camera control method disclosed in the present application further includes:

setting the focus priority of the target object to the highest priority according to received setting information;

after adjusting the camera's focal length, adjusting the camera's focus position so that the camera focuses at the position with the highest focus priority, thereby focusing the camera on the target object.
After the adjustment of the camera's focal length is completed, the focus position can also be adjusted onto the target object to obtain video of higher imaging quality. When adjusting the camera's focus position, the focus priority of the target object needs to be set to the highest priority in advance. When the focus position needs adjusting, the scene image is usually partitioned, the focus value (FV) of each partition is computed, the focus priority of each partition is obtained by weighting the partitions' focus values, and the camera is focused at the position with the highest focus priority, so that the camera preferentially focuses on the target object.
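The weighted focus-priority selection can be sketched as below. How the focus value of each partition is computed (for example, gradient energy) is not specified here, so the sketch takes per-partition focus values and priority weights as given; the weights model the setting that gives the target object's partition the highest priority.

```python
def pick_focus_partition(focus_values, weights):
    """Return the index of the partition with the highest weighted
    focus priority; the camera would then focus at that partition."""
    scores = [fv * w for fv, w in zip(focus_values, weights)]
    return max(range(len(scores)), key=scores.__getitem__)
```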
In addition, embodiments of the present application disclose the step of determining, in each detection period, the pixel coordinates of the target object in the scene image and the pixel coordinates of the target object in the depth image. The step of determining the pixel coordinates of the target object in the depth image includes: determining the pixel coordinates of the target object in the depth image based on the correspondence between the pixel coordinates the target object forms in the scene image and in the depth image, together with the pixel coordinates of the target object in the scene image.
The camera control method disclosed in the present application is applied to a camera in which an image sensor and a depth sensor are arranged; the image sensor generates the scene image and the depth sensor generates the depth image. By performing target detection on the scene image, the pixels generated by the target object in the scene image can be obtained; then, from the correspondence between the pixel coordinates the target object forms in the scene image and in the depth image, together with the target object's pixel coordinates in the scene image, the target object's pixel coordinates in the depth image can be determined. In subsequent steps, these coordinates locate the target object's pixels in the depth image; their gray level is obtained, and through that gray level the distance between the target object and the camera is acquired.

The correspondence between the pixel coordinates the target object forms in the scene image and in the depth image is determined by the placement of the image sensor and the depth sensor, and includes the following cases:
If the depth sensor and the image sensor are integrated on a single sensor, for example by adding pixel units capable of sensing depth information to an ordinary image sensor, the sensor can output the scene image and the depth image simultaneously, and the two depict exactly the same scene. In this case, the pixel coordinates formed by the same object in the scene image are identical to those formed in the depth image. Moreover, since the depth image generally does not require a resolution and frame rate as high as the scene image's, a high-resolution, high-frame-rate image sensor can be combined with a lower-resolution, lower-frame-rate depth sensor, saving cost.
Alternatively, an independent image sensor and depth sensor can be used. Referring to FIG. 8, the image sensor and the depth sensor are both arranged behind the camera lens, and a half-reflecting, half-transmitting mirror is arranged between the camera lens and the two sensors. Under its action, part of the incident light passing through the camera lens is reflected to the depth sensor for imaging, and the rest is transmitted to the image sensor for imaging. The half-mirror is usually at a 45° angle to the horizontal, although other angles are also possible; the present application does not limit this. Using a half-mirror to split the light ensures that the scenes captured by the image sensor and the depth sensor remain consistent; in this case, the pixel coordinates formed by the same object in the scene image and in the depth image are the same. To guarantee the amount of light entering the image sensor, and thus a better image, the ratio of transmitted to reflected light at the half-mirror can be controlled, for example 70% of the total light transmitted and 30% reflected.
In another case, an independent image sensor and depth sensor are used, and the light rays used for imaging by the two sensors are different. For example, referring to FIG. 9, the image sensor is arranged behind the camera lens, and the depth sensor is arranged side by side with the image sensor; the image sensor is usually at the same height as the camera lens. In this case, because the image sensor and the depth sensor image along different optical paths, the content they capture differs slightly, producing parallax between the scene image and the depth image. The scene image therefore needs to be calibrated to obtain the correspondence between the pixel coordinates the target object forms in the scene image and in the depth image. Accordingly, the camera control method further includes: predetermining the correspondence between the pixel coordinates formed by the target object in the scene image and in the depth image.

The step of predetermining the correspondence between the pixel coordinates formed by the target object in the scene image and in the depth image includes:
First, obtain the imaging relationship expression between the scene image and the depth image:

x′ = Hx

where x is the homogeneous representation of the object's pixel coordinates in the scene image; x′ is the homogeneous representation of the object's pixel coordinates in the depth image; and H is the perspective transformation matrix between the scene image and the depth image.
Then, obtain the pixel coordinates of four object points in the scene image and in the depth image respectively, and from them obtain the value of H in the imaging relationship expression, thereby obtaining the perspective transformation matrix between the scene image and the depth image for the same object. This perspective transformation matrix characterizes the correspondence between the pixel coordinates the target object forms in the scene image and in the depth image.

H in the imaging relationship expression is usually a 3×3 matrix with 8 degrees of freedom; it represents the transformation between the scene image and the depth image and is called the perspective transformation matrix. Suppose the pixel coordinates of an object in the scene image are known to be (x, y) and its pixel coordinates in the depth image are (x′, y′); the following two equations are obtained:
x′ = (h11·x + h12·y + h13)/(h31·x + h32·y + h33)

y′ = (h21·x + h22·y + h23)/(h31·x + h32·y + h33)
From the above two equations, at least four object points with known coordinates are needed to establish the eight equations from which the value of H can be solved. The four object points can be selected by the user in advance; the pixel coordinates of each object point in the scene image and in the depth image are obtained and substituted into the above two equations to solve for H, thereby obtaining the correspondence between the pixel coordinates the target object forms in the scene image and in the depth image.
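The four-point solve for H can be sketched with a standard direct linear method: fixing h33 = 1 removes the scale ambiguity and leaves the eight unknowns that the eight equations determine. This sketch uses NumPy and assumes no three of the four points are collinear.

```python
import numpy as np

def homography_from_points(src, dst):
    """Solve the 3x3 perspective transformation H (with h33 fixed to 1)
    from four correspondences (x, y) -> (x', y'), stacking the two
    equations per point into an 8x8 linear system."""
    A, b = [], []
    for (x, y), (xp, yp) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -xp * x, -xp * y])
        A.append([0, 0, 0, x, y, 1, -yp * x, -yp * y])
        b += [xp, yp]
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)
```

For example, if the depth image were simply the scene image shifted by (2, 3) pixels, the recovered H would be a pure translation matrix.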
Correspondingly, an embodiment of the present application further discloses a camera control apparatus. Referring to FIG. 10, the camera control apparatus includes a determination module 100, an acquisition module 200, and an adjustment module 300.

The determination module 100 is configured to determine, in each detection period, the pixel coordinates of the target object in the scene image and the pixel coordinates of the target object in the depth image.

The acquisition module 200 is configured to obtain the distance between the target object and the camera based on the pixel coordinates of the target object in the depth image.

The adjustment module 300 is configured to adjust the camera's focal length using the first distance obtained in the first detection period and the second distance obtained in the second detection period.

Further, referring to FIG. 11, the adjustment module 300 includes a first acquisition unit 301, a second acquisition unit 302, and a first adjustment unit 303.
The first acquiring unit 301 is configured to acquire the world coordinates (xc2, yc2) of the target object in the second detection period according to the second distance d2 between the target object and the camera obtained in the second detection period and the pixel coordinates (u2, v2) of the target object in the scene image.
The second acquiring unit 302 is configured to, when the adjusted focal length of the camera is set to f2, acquire the correspondence between the focal length f2 and the pixel coordinates (u2, v2) according to the world coordinates (xc2, yc2) of the target object in the second detection period, the second distance d2, and the pixel coordinates (u2, v2) of the target object in the scene image.
The first adjusting unit 303 is configured to acquire the value of the focal length f2 at which the difference between the pixel coordinates (u1, v1) and the pixel coordinates (u2, v2) is within a preset range, by using the correspondence between the focal length f2 and the pixel coordinates (u2, v2), the pixel coordinates (u1, v1) of the target object in the scene image in the first detection period, the first distance d1 between the target object and the camera in the first detection period, and the focal length f1 of the camera in the first detection period, and to adjust the focal length of the camera to f2.
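Under a simplified pinhole model (principal point at the image origin, so u = f·xc/d), the scheme of units 301–303 admits a closed form. The sketch below is an illustration with function names assumed here, not the patented implementation; it also assumes the target moves along the optical axis so that both pixel coordinates scale together.

```python
def world_from_pixel(u, v, d, f):
    """Back-project a pixel to world coordinates under the simplified
    pinhole model u = f * xc / d (role of unit 301)."""
    return u * d / f, v * d / f

def zoom_to_hold_pixel(u1, v1, u2, v2, d2, f1, tol=0.5):
    """Find the adjusted focal length f2 that moves the target's pixel
    coordinates back to (u1, v1) (roles of units 302 and 303).

    With the world coordinates (xc2, yc2) fixed, the new pixel
    coordinate is u = f2 * xc2 / d2 = f2 * u2 / f1, so f2 = f1 * u1 / u2
    brings the difference from (u1, v1) within the preset range `tol`.
    """
    xc2, yc2 = world_from_pixel(u2, v2, d2, f1)   # unit 301
    f2 = f1 * u1 / u2                             # closed form under this model
    u_new = f2 * xc2 / d2                         # correspondence of unit 302
    assert abs(u_new - u1) <= tol                 # check of unit 303
    return f2
```

For example, if the target doubles its distance from the camera, u2 = u1/2 and the sketch returns f2 = 2·f1, restoring the target's size in the frame.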
In addition, referring to FIG. 12, the adjusting module 300 may also take another form, including: a third acquiring unit 304, a fourth acquiring unit 305, and a second adjusting unit 306.
The third acquiring unit 304 is configured to acquire, after the first distance d1 between the target object and the camera in the first detection period is acquired, a first ratio of the focal length f1 of the camera in the first detection period to the first distance d1.
The fourth acquiring unit 305 is configured to set the adjusted focal length of the camera to f2 and, after the second distance d2 between the target object and the camera is acquired in the second detection period, acquire a second ratio of the focal length f2 to the second distance d2.
The second adjusting unit 306 is configured to acquire the value of the focal length f2 at which the difference between the first ratio and the second ratio is within a preset ratio range, and to adjust the focal length of the camera to f2.
Further, the camera control apparatus further includes a judging module, configured to calculate the difference between the first distance d1 and the second distance d2 after the two distances are acquired, and to determine that the focal length of the camera needs to be adjusted when the difference is not within a preset threshold range.
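A minimal sketch of the ratio-keeping scheme of units 304–306 and of the judging module, assuming (as an illustration) that keeping f/d constant keeps the target's apparent size constant; the function names are chosen here, not taken from the disclosure.

```python
def needs_adjustment(d1, d2, threshold):
    """Judging module: adjust the focal length only when the change in
    distance leaves the preset threshold range."""
    return abs(d1 - d2) > threshold

def adjust_focal_by_ratio(f1, d1, d2, ratio_tol=1e-6):
    """Units 304-306: acquire the first ratio f1/d1, then choose f2 so
    that the second ratio f2/d2 differs from it by less than ratio_tol."""
    first_ratio = f1 / d1          # unit 304
    f2 = first_ratio * d2          # makes f2/d2 equal f1/d1
    assert abs(f2 / d2 - first_ratio) < ratio_tol   # check of unit 306
    return f2
```

For example, with f1 = 12 mm at d1 = 3 m, a target moving to d2 = 5 m yields f2 = 20 mm.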
Further, referring to FIG. 13, the acquiring module 200 includes: a gray level acquiring unit 201 and a distance acquiring unit 202.
The gray level acquiring unit 201 is configured to acquire the gray level of the pixel corresponding to the pixel coordinates of the target object in the depth image.
The distance acquiring unit 202 is configured to acquire the distance between the target object and the camera by using the correspondence between gray level and distance.
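The gray-level-to-distance correspondence depends on the particular depth sensor. The sketch below assumes, purely for illustration, a linear encoding of distance over the sensor's working range; the function names and range values are hypothetical.

```python
def distance_from_gray(gray, max_gray=255, d_min=0.5, d_max=10.0):
    """Map a depth-image gray level to a distance in metres, assuming a
    linear encoding over the working range [d_min, d_max]. A real depth
    sensor publishes its own encoding, which should be substituted here.
    """
    if not 0 <= gray <= max_gray:
        raise ValueError("gray level out of range")
    return d_min + (d_max - d_min) * gray / max_gray

def target_distance(depth_image, pixel):
    """Roles of units 201-202: read the gray level at the target's pixel
    coordinates in the depth image, then apply the correspondence."""
    u, v = pixel
    gray = depth_image[v][u]         # unit 201: gray level at (u, v)
    return distance_from_gray(gray)  # unit 202: gray level -> distance
```

The returned distance is what the adjusting module then compares across detection periods.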
Further, the camera control apparatus further includes a setting module and a focusing module. The setting module is configured to set the focus priority of the target object to the highest priority according to received setting information; the focusing module is configured to adjust the focus position of the camera after the focal length of the camera is adjusted, so that the camera focuses at the position with the highest focus priority.
Further, the determining module 100 includes a determining unit, configured to determine the pixel coordinates of the target object in the depth image based on the correspondence between the pixel coordinates formed by the target object in the scene image and in the depth image, and the pixel coordinates of the target object in the scene image.
Further, the camera control apparatus further includes a correspondence determining module, configured to determine in advance the correspondence between the pixel coordinates formed by the target object in the scene image and in the depth image.
The correspondence determining module includes an imaging relationship expression acquiring unit and a perspective transformation matrix acquiring unit. The imaging relationship expression acquiring unit is configured to acquire the imaging relationship expression of the scene image and the depth image, the imaging relationship expression being:
x′ = H·x
where x is the homogeneous representation of the pixel coordinates of the object in the scene image, x′ is the homogeneous representation of the pixel coordinates of the same object in the depth image, and H is the perspective transformation matrix between the scene image and the depth image.
The perspective transformation matrix acquiring unit is configured to acquire the pixel coordinates of four object points in the scene image and in the depth image, obtain the value of H in the imaging relationship expression accordingly, and thereby obtain the perspective transformation matrix between the scene image and the depth image for the same object; the perspective transformation matrix characterizes the correspondence between the pixel coordinates formed by the target object in the scene image and in the depth image.
The present application discloses a camera control apparatus. When the camera control operation is performed, the determining module first determines, in each detection period, the pixel coordinates of the target object in the scene image and the pixel coordinates of the target object in the depth image; the acquiring module then acquires the distance between the target object and the camera based on the pixel coordinates of the target object in the depth image; and the adjusting module adjusts the focal length of the camera by using the first distance obtained in the first detection period and the second distance obtained in the second detection period.
With this apparatus, the distance between the target object and the camera in each detection period can be acquired and used to adjust the focal length of the camera; the algorithm is simple, and the accuracy of zooming is improved.
Correspondingly, the present application further discloses a video camera. The camera includes: a processor, a memory, an image sensor, and a depth sensor.
The image sensor is configured to generate a scene image containing a target object;
the depth sensor is configured to generate a depth image containing the target object;
the memory is configured to store a program for controlling the camera;
the processor is configured to read the program stored in the memory and to perform camera control operations according to the program, the camera control operations including:
in each detection period, determining the pixel coordinates of the target object in the scene image, and determining the pixel coordinates of the target object in the depth image;
acquiring the distance between the target object and the camera based on the pixel coordinates of the target object in the depth image;
adjusting the focal length of the camera by using a first distance obtained in a first detection period and a second distance obtained in a second detection period.
In addition, in the camera disclosed in the embodiments of the present application, the image sensor and the depth sensor may be arranged in different forms. In one form, the image sensor and the depth sensor are integrated in the same sensor. Alternatively, the image sensor and the depth sensor may be two independent sensors.
When the image sensor and the depth sensor are two independent sensors, as shown in FIG. 8, both sensors are disposed behind the camera lens at different heights, and a half-reflecting, half-transmitting mirror is disposed between the camera lens and the two sensors. The image sensor is generally at the same height as the camera lens and is aligned with the lens in the vertical direction; the depth sensor is disposed between the camera lens and the image sensor and is generally oriented horizontally; and the half-mirror is disposed above the depth sensor, inclined at an angle to the horizontal. The inclination angle may be 45° or another angle, which is not limited in this application. In this arrangement, the half-mirror splits the incoming light, ensuring that the scenes captured by the image sensor and the depth sensor are identical, so the pixel coordinates formed by the same object in the scene image are the same as those formed in the depth image. Moreover, to ensure that enough light reaches the image sensor and thus obtain a better image, the ratio of transmitted to reflected light of the half-mirror can be controlled, for example 70% of the total light transmitted and 30% reflected.
Alternatively, when the image sensor and the depth sensor are two independent sensors, as shown in FIG. 9, the two sensors form images from different light rays: the image sensor is disposed behind the camera lens and is placed side by side with the depth sensor, and the image sensor is generally at the same height as the camera lens. In this case, because the optical paths used by the image sensor and the depth sensor for imaging differ, the content the two capture differs slightly, producing parallax between the scene image and the depth image; the scene image therefore needs to be calibrated to obtain the correspondence between the pixel coordinates formed by the target object in the scene image and in the depth image.
The camera disclosed in the present application can acquire the distance between the target object and the camera in different detection periods from the scene image and the depth image, and adjust the focal length of the camera by using that distance. The algorithm used is simple and easy to implement, and it improves the accuracy and robustness of focal length adjustment.
Those skilled in the art can clearly understand that the techniques in the embodiments of the present invention can be implemented by software plus a necessary general-purpose hardware platform. Based on this understanding, the technical solutions in the embodiments of the present invention, in essence or in the part contributing to the prior art, may be embodied in the form of a software product. The computer software product may be stored in a storage medium, such as a ROM/RAM, a magnetic disk, or an optical disc, and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to perform the methods described in the embodiments of the present invention or in certain parts of the embodiments.
The embodiments in this specification are described in a progressive manner; for identical or similar parts, the embodiments may refer to one another, and each embodiment focuses on its differences from the other embodiments. In particular, the system embodiments are basically similar to the method embodiments, so their description is relatively brief; for relevant details, refer to the description of the method embodiments.
The embodiments of the present invention described above do not limit the protection scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall fall within the protection scope of the present invention.

Claims (18)

1. A camera control method, characterized in that the method comprises:
    in each detection period, determining pixel coordinates of a target object in a scene image, and determining pixel coordinates of the target object in a depth image;
    acquiring a distance between the target object and a camera based on the pixel coordinates of the target object in the depth image;
    adjusting a focal length of the camera by using a first distance obtained in a first detection period and a second distance obtained in a second detection period.
2. The method according to claim 1, characterized in that the step of adjusting the focal length of the camera by using the first distance obtained in the first detection period and the second distance obtained in the second detection period comprises:
    acquiring world coordinates (xc2, yc2) of the target object in the second detection period according to the second distance d2 between the target object and the camera obtained in the second detection period and pixel coordinates (u2, v2) of the target object in the scene image;
    when an adjusted focal length of the camera is set to f2, acquiring a correspondence between the focal length f2 and the pixel coordinates (u2, v2) according to the world coordinates (xc2, yc2) of the target object in the second detection period, the second distance d2, and the pixel coordinates (u2, v2) of the target object in the scene image;
    acquiring a value of the focal length f2 at which a difference between pixel coordinates (u1, v1) and the pixel coordinates (u2, v2) is within a preset range, by using the correspondence between the focal length f2 and the pixel coordinates (u2, v2), the pixel coordinates (u1, v1) of the target object in the scene image in the first detection period, the first distance d1 between the target object and the camera in the first detection period, and a focal length f1 of the camera in the first detection period, and adjusting the focal length of the camera to f2.
3. The method according to claim 1, characterized in that the step of adjusting the focal length of the camera by using the first distance acquired in the first detection period and the second distance acquired in the second detection period comprises:
    after the first distance d1 between the target object and the camera in the first detection period is acquired, acquiring a first ratio of the focal length f1 of the camera in the first detection period to the first distance d1;
    setting an adjusted focal length of the camera to f2 and, after the second distance d2 between the target object and the camera is acquired in the second detection period, acquiring a second ratio of the focal length f2 to the second distance d2;
    acquiring a value of the focal length f2 at which a difference between the first ratio and the second ratio is within a preset ratio range, and adjusting the focal length of the camera to f2.
4. The method according to claim 1, characterized in that the method further comprises:
    after the first distance d1 and the second distance d2 are acquired, calculating a difference between the first distance d1 and the second distance d2, and determining that the focal length of the camera needs to be adjusted when the difference is not within a preset threshold range.
5. The method according to claim 1, characterized in that the acquiring a distance between the target object and the camera based on the pixel coordinates of the target object in the depth image comprises:
    acquiring a gray level of a pixel corresponding to the pixel coordinates of the target object in the depth image;
    acquiring the distance between the target object and the camera by using a correspondence between gray level and distance.
6. The method according to any one of claims 1 to 5, characterized in that the method further comprises:
    setting a focus priority of the target object to a highest priority according to received setting information;
    after the focal length of the camera is adjusted, adjusting a focus position of the camera so that the camera focuses at a position with the highest focus priority.
7. The method according to claim 1, characterized in that the determining pixel coordinates of the target object in the depth image comprises:
    determining the pixel coordinates of the target object in the depth image based on a correspondence between pixel coordinates formed by the target object in the scene image and in the depth image, and the pixel coordinates of the target object in the scene image.
8. The method according to claim 7, characterized in that the method further comprises: determining in advance the correspondence between the pixel coordinates formed by the target object in the scene image and in the depth image;
    the step of determining in advance the correspondence between the pixel coordinates formed by the target object in the scene image and in the depth image comprises:
    acquiring an imaging relationship expression of the scene image and the depth image, the imaging relationship expression being:
    x′ = H·x
    where x is a homogeneous representation of the pixel coordinates of the object in the scene image, x′ is a homogeneous representation of the pixel coordinates of the same object in the depth image, and H is the perspective transformation matrix between the scene image and the depth image;
    acquiring pixel coordinates of four object points in the scene image and in the depth image, obtaining the value of H in the imaging relationship expression accordingly, and thereby obtaining the perspective transformation matrix between the scene image and the depth image for the same object, the perspective transformation matrix characterizing the correspondence between the pixel coordinates formed by the target object in the scene image and in the depth image.
9. A camera control apparatus, characterized in that the apparatus comprises:
    a determining module, configured to determine, in each detection period, pixel coordinates of a target object in a scene image, and determine pixel coordinates of the target object in a depth image;
    an acquiring module, configured to acquire a distance between the target object and a camera based on the pixel coordinates of the target object in the depth image;
    an adjusting module, configured to adjust a focal length of the camera by using a first distance obtained in a first detection period and a second distance obtained in a second detection period.
10. The apparatus according to claim 9, characterized in that the adjusting module comprises:
    a first acquiring unit, configured to acquire world coordinates (xc2, yc2) of the target object in the second detection period according to the second distance d2 between the target object and the camera obtained in the second detection period and pixel coordinates (u2, v2) of the target object in the scene image;
    a second acquiring unit, configured to, when an adjusted focal length of the camera is set to f2, acquire a correspondence between the focal length f2 and the pixel coordinates (u2, v2) according to the world coordinates (xc2, yc2) of the target object in the second detection period, the second distance d2, and the pixel coordinates (u2, v2) of the target object in the scene image;
    a first adjusting unit, configured to acquire a value of the focal length f2 at which a difference between pixel coordinates (u1, v1) and the pixel coordinates (u2, v2) is within a preset range, by using the correspondence between the focal length f2 and the pixel coordinates (u2, v2), the pixel coordinates (u1, v1) of the target object in the scene image in the first detection period, the first distance d1 between the target object and the camera in the first detection period, and a focal length f1 of the camera in the first detection period, and to adjust the focal length of the camera to f2.
11. The apparatus according to claim 9, characterized in that the adjusting module comprises:
    a third acquiring unit, configured to acquire, after the first distance d1 between the target object and the camera in the first detection period is acquired, a first ratio of the focal length f1 of the camera in the first detection period to the first distance d1;
    a fourth acquiring unit, configured to set an adjusted focal length of the camera to f2 and, after the second distance d2 between the target object and the camera is acquired in the second detection period, acquire a second ratio of the focal length f2 to the second distance d2;
    a second adjusting unit, configured to acquire a value of the focal length f2 at which a difference between the first ratio and the second ratio is within a preset ratio range, and to adjust the focal length of the camera to f2.
12. The apparatus according to claim 9, characterized in that the camera control apparatus further comprises:
    a judging module, configured to calculate a difference between the first distance d1 and the second distance d2 after the two distances are acquired, and to determine that the focal length of the camera needs to be adjusted when the difference is not within a preset threshold range.
13. The apparatus according to claim 9, characterized in that the acquiring module comprises:
    a gray level acquiring unit, configured to acquire a gray level of a pixel corresponding to the pixel coordinates of the target object in the depth image;
    a distance acquiring unit, configured to acquire the distance between the target object and the camera by using a correspondence between gray level and distance.
14. The apparatus according to any one of claims 9 to 13, characterized in that the camera control apparatus further comprises:
    a setting module, configured to set a focus priority of the target object to a highest priority according to received setting information;
    a focusing module, configured to adjust a focus position of the camera after the focal length of the camera is adjusted, so that the camera focuses at a position with the highest focus priority.
  15. 根据权利要求9所述的装置,其特征在于,所述确定模块包括:The apparatus according to claim 9, wherein the determining module comprises:
    确定单元,用于基于所述目标物体在场景图像和深度图像中形成的像素点坐标的对应关系,以及所述目标物体在场景图像中的像素点坐标,确定所 述目标物体在所述深度图像中的像素点坐标。a determining unit, configured to determine, according to a correspondence relationship between pixel coordinates of the target object formed in the scene image and the depth image, and pixel coordinates of the target object in the scene image The pixel point coordinates of the target object in the depth image.
16. The apparatus according to claim 15, characterized in that the camera control apparatus further comprises a correspondence determining module, configured to determine in advance the correspondence between the pixel coordinates formed by the target object in the scene image and in the depth image;
    the correspondence determining module comprises:
    an imaging relationship expression acquiring unit, configured to acquire an imaging relationship expression of the scene image and the depth image, the imaging relationship expression being:
    x′ = H·x
    where x is a homogeneous representation of the pixel coordinates of the object in the scene image, x′ is a homogeneous representation of the pixel coordinates of the same object in the depth image, and H is the perspective transformation matrix between the scene image and the depth image;
    a perspective transformation matrix acquiring unit, configured to acquire pixel coordinates of four object points in the scene image and in the depth image, obtain the value of H in the imaging relationship expression accordingly, and thereby obtain the perspective transformation matrix between the scene image and the depth image for the same object, the perspective transformation matrix characterizing the correspondence between the pixel coordinates formed by the target object in the scene image and in the depth image.
  17. A camera, characterized in that the camera comprises: a processor, a memory, an image sensor, and a depth sensor,
    wherein the image sensor is configured to generate a scene image including a target object;
    the depth sensor is configured to generate a depth image including the target object;
    the memory is configured to store a program for controlling the camera;
    the processor is configured to read the program stored in the memory and to perform camera control operations according to the program, the camera control operations comprising:
    in each detection period, determining the pixel coordinates of the target object in the scene image, and determining the pixel coordinates of the target object in the depth image;
    acquiring the distance between the target object and the camera based on the pixel coordinates of the target object in the depth image;
    adjusting the focal length of the camera using a first distance obtained in a first detection period and a second distance obtained in a second detection period.
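The claim leaves the focal-length adjustment law unspecified. One natural choice, shown here purely as an illustrative sketch (the pinhole proportionality, function names, and clamp limits are assumptions of this example, not the patent's), scales the focal length by the ratio of the two measured distances so the target keeps a roughly constant size in the frame:

```python
def distance_from_depth(depth_image, px, py):
    """Read the target's distance at its pixel coordinates; depth sensors
    typically store a per-pixel distance value."""
    return depth_image[py][px]

def adjust_focal_length(f_current, d_first, d_second, f_min=10.0, f_max=120.0):
    """Scale the focal length in proportion to the change in target distance
    (pinhole assumption: apparent size ~ focal_length / distance), clamped
    to the lens's zoom range. d_first and d_second are the distances measured
    in two successive detection periods."""
    if d_first <= 0 or d_second <= 0:
        raise ValueError("distances must be positive")
    f_new = f_current * (d_second / d_first)
    return min(f_max, max(f_min, f_new))
```

For example, if the target moves from 2 m to 4 m away, a 50 mm setting would be doubled to 100 mm, restoring the target's apparent size; movement beyond the zoom range is simply clamped.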
  18. The camera according to claim 17, characterized in that:
    the image sensor and the depth sensor are integrated in a single sensor;
    or,
    the image sensor and the depth sensor are both disposed behind the camera lens at different heights, with a half-reflecting, half-transmitting mirror disposed between the camera lens and the two sensors;
    or,
    the image sensor is disposed behind the camera lens, and the image sensor and the depth sensor are arranged side by side.
PCT/CN2015/080612 2014-06-04 2015-06-02 Camera control method and device, and camera WO2015184978A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201410244369.7A CN104023177A (en) 2014-06-04 2014-06-04 Camera control method, device and camera
CN201410244369.7 2014-06-04

Publications (1)

Publication Number Publication Date
WO2015184978A1 2015-12-10

Family

ID=51439726

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2015/080612 WO2015184978A1 (en) 2014-06-04 2015-06-02 Camera control method and device, and camera

Country Status (2)

Country Link
CN (1) CN104023177A (en)
WO (1) WO2015184978A1 (en)


Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104023177A (en) * 2014-06-04 2014-09-03 华为技术有限公司 Camera control method, device and camera
CN105491277B (en) * 2014-09-15 2018-08-31 联想(北京)有限公司 Image processing method and electronic equipment
CN107564020B (en) * 2017-08-31 2020-06-12 北京奇艺世纪科技有限公司 Image area determination method and device
CN108234879B (en) * 2018-02-02 2021-01-26 成都西纬科技有限公司 Method and device for acquiring sliding zoom video
CN108924375B (en) * 2018-06-14 2021-09-07 Oppo广东移动通信有限公司 Ringtone volume processing method and device, storage medium and terminal
CN109559522B (en) * 2019-01-21 2021-09-28 熵基科技股份有限公司 Debugging method, telescopic upright post, camera and storage medium
CN111340864B (en) * 2020-02-26 2023-12-12 浙江大华技术股份有限公司 Three-dimensional scene fusion method and device based on monocular estimation
CN111815515B (en) * 2020-07-01 2024-02-09 成都智学易数字科技有限公司 Object three-dimensional drawing method based on medical education
WO2022213311A1 (en) * 2021-04-08 2022-10-13 Qualcomm Incorporated Camera autofocus using depth sensor
CN113572958B (en) * 2021-07-15 2022-12-23 杭州海康威视数字技术股份有限公司 Method and equipment for automatically triggering camera to focus
CN113893034A (en) * 2021-09-23 2022-01-07 上海交通大学医学院附属第九人民医院 Integrated operation navigation method, system and storage medium based on augmented reality
CN113916128A (en) * 2021-10-11 2022-01-11 齐鲁工业大学 Method for improving precision based on optical pen type vision measurement system

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102957861A (en) * 2011-08-24 2013-03-06 索尼公司 Image processing device, control method and program thereof
CN103581543A (en) * 2012-07-18 2014-02-12 三星电子株式会社 Photographing apparatus, photographing control method, and eyeball recognition apparatus
CN103795934A (en) * 2014-03-03 2014-05-14 联想(北京)有限公司 Image processing method and electronic device
CN104023177A (en) * 2014-06-04 2014-09-03 华为技术有限公司 Camera control method, device and camera

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102231798B (en) * 2011-06-24 2015-09-23 天津市亚安科技股份有限公司 A kind of method and system controlling the automatic zoom of monopod video camera
US20130057655A1 (en) * 2011-09-02 2013-03-07 Wen-Yueh Su Image processing system and automatic focusing method
CN103475805A (en) * 2012-06-08 2013-12-25 鸿富锦精密工业(深圳)有限公司 Active range focusing system and active range focusing method


Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111292288A (en) * 2018-12-06 2020-06-16 北京欣奕华科技有限公司 Target detection and positioning method and device
CN113490966A (en) * 2020-10-13 2021-10-08 深圳市大疆创新科技有限公司 Camera parameter calibration method, image processing method, device and storage medium
CN113490966B (en) * 2020-10-13 2024-01-12 深圳市大疆创新科技有限公司 Calibration method, image processing method, device and storage medium for camera parameters
CN112532874A (en) * 2020-11-23 2021-03-19 北京三快在线科技有限公司 Method and device for generating plane thermodynamic diagram, storage medium and electronic equipment
CN112532874B (en) * 2020-11-23 2022-03-29 北京三快在线科技有限公司 Method and device for generating plane thermodynamic diagram, storage medium and electronic equipment

Also Published As

Publication number Publication date
CN104023177A (en) 2014-09-03

Similar Documents

Publication Publication Date Title
WO2015184978A1 (en) Camera control method and device, and camera
US11877086B2 (en) Method and system for generating at least one image of a real environment
US10997696B2 (en) Image processing method, apparatus and device
WO2019105262A1 (en) Background blur processing method, apparatus, and device
WO2019114617A1 (en) Method, device, and system for fast capturing of still frame
US8885922B2 (en) Image processing apparatus, image processing method, and program
US8786679B2 (en) Imaging device, 3D modeling data creation method, and computer-readable recording medium storing programs
CN107509031B (en) Image processing method, image processing device, mobile terminal and computer readable storage medium
TWI433530B (en) Camera system and image-shooting method with guide for taking stereo photo and method for automatically adjusting stereo photo
WO2019109805A1 (en) Method and device for processing image
WO2013094635A1 (en) Image processing device, imaging device, and display device
WO2019105261A1 (en) Background blurring method and apparatus, and device
CN105282421B (en) A kind of mist elimination image acquisition methods, device and terminal
WO2020237565A1 (en) Target tracking method and device, movable platform and storage medium
WO2019105254A1 (en) Background blur processing method, apparatus and device
WO2020042581A1 (en) Focusing method and device for image acquisition apparatus
JP6494587B2 (en) Image processing apparatus, image processing apparatus control method, imaging apparatus, and program
WO2018032841A1 (en) Method, device and system for drawing three-dimensional image
TWI518305B (en) Method of capturing images
WO2016197494A1 (en) Method and device for adjusting focusing area
JP7301683B2 (en) Image processing device, image processing method, and program
CN108289170B (en) Photographing apparatus, method and computer readable medium capable of detecting measurement area
JP6624785B2 (en) Image processing method, image processing device, imaging device, program, and storage medium
KR20230107255A (en) Foldable electronic device for multi-view image capture
JP5958082B2 (en) Image processing apparatus and image processing method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15802851

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15802851

Country of ref document: EP

Kind code of ref document: A1