WO2018053825A1 - Focusing method and apparatus, image capturing method and apparatus, and camera system - Google Patents
Focusing method and apparatus, image capturing method and apparatus, and camera system
- Publication number
- WO2018053825A1 (PCT/CN2016/100075)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- pixel
- focus area
- pixels
- digital zoom
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/67—Focus control based on electronic image sensor signals
- H04N23/675—Focus control based on electronic image sensor signals comprising setting of focusing regions
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/67—Focus control based on electronic image sensor signals
-
- H—ELECTRICITY
- H01—ELECTRIC ELEMENTS
- H01L—SEMICONDUCTOR DEVICES NOT COVERED BY CLASS H10
- H01L27/00—Devices consisting of a plurality of semiconductor or other solid-state components formed in or on a common substrate
- H01L27/14—Devices consisting of a plurality of semiconductor or other solid-state components formed in or on a common substrate including semiconductor components sensitive to infrared radiation, light, electromagnetic radiation of shorter wavelength or corpuscular radiation and specially adapted either for the conversion of the energy of such radiation into electrical energy or for the control of electrical energy by such radiation
- H01L27/144—Devices controlled by radiation
- H01L27/146—Imager structures
- H01L27/14601—Structural or functional details thereof
- H01L27/14603—Special geometry or disposition of pixel-elements, address-lines or gate-electrodes
- H01L27/14605—Structural or functional details relating to the position of the pixel elements, e.g. smaller pixel elements in the center of the imager compared to pixel elements at the periphery
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/67—Focus control based on electronic image sensor signals
- H04N23/673—Focus control based on electronic image sensor signals based on contrast or high frequency components of image signals, e.g. hill climbing method
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/69—Control of means for changing angle of the field of view, e.g. optical zoom objectives or electronic zooming
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N25/00—Circuitry of solid-state image sensors [SSIS]; Control thereof
- H04N25/40—Extracting pixel data from image sensors by controlling scanning circuits, e.g. by modifying the number of pixels sampled or to be sampled
Definitions
- the present invention relates to the field of image processing, and more particularly to a focusing method and apparatus, an image capturing method and apparatus, an image display method and apparatus, and a camera system.
- Currently, cameras universally have an auto focus (AF) function, and the contrast focus method is widely used as a technology for realizing the auto focus function.
- The contrast focus method achieves autofocus by detecting the contour edges of the focus target in the focus area of the image (specifically, the scene in the image area corresponding to the focus area). The sharper the contour edge of the focus target, the greater the brightness gradient, that is, the greater the contrast between the scene and the background at the edge.
- However, the focus area is only a partial area of the image: its extent is small and it contains few pixels. Therefore, the signal-to-noise ratio in the focus area is relatively low and the noise interference is large, resulting in slow focusing speed and low focusing accuracy.
- Embodiments of the present invention provide a focusing method and apparatus, an image capturing method and apparatus, an image display method and apparatus, and a camera system, which can improve focusing speed and focusing accuracy.
- A first aspect provides a focusing method, including: acquiring a first image, and determining a focus area from the first image, the focus area including at least one pixel; performing a first digital zoom process on the focus area based on a preset magnification ratio to obtain a second image, wherein the second image includes an enlarged focus area and a signal-to-noise ratio of the second image is greater than a signal-to-noise ratio of the focus area; and performing focusing processing based on the second image.
- Another aspect provides a focusing apparatus, including: an image acquiring unit, configured to acquire a first image; and a focusing processing unit, configured to determine a focus area from the first image, the focus area including at least one pixel, to perform a first digital zoom process on the focus area based on a preset magnification ratio to acquire a second image, wherein the second image includes an enlarged focus area and a signal-to-noise ratio of the second image is greater than a signal-to-noise ratio of the focus area, and to perform focusing processing based on the second image.
- A third aspect provides an image capturing method, comprising: capturing and presenting a first image based on a first digital zoom factor and a first focal length, and determining a focus area from the first image, the focus area including at least one pixel and comprising a portion of the pixels in the first image; performing a first digital zoom process on the focus area based on a preset magnification ratio to acquire and present a second image, wherein the second image includes an enlarged focus area, a signal-to-noise ratio of the second image is greater than a signal-to-noise ratio of the focus area, and the digital zoom factor after the digital zoom processing is a second digital zoom factor; performing focusing processing based on the second image to determine a second focal length; performing a second digital zoom process according to a preset reduction ratio to change the currently used digital zoom factor from the second digital zoom factor back to the first digital zoom factor, wherein the reduction ratio corresponds to the magnification ratio; and capturing and presenting a third image based on the first digital zoom factor and the second focal length.
- Another aspect provides an image capturing apparatus, comprising: an image capturing unit, configured to capture a first image based on a first digital zoom factor and a first focal length, and to capture a third image based on the first digital zoom factor and a second focal length determined by a focus processing unit; and the focus processing unit, configured to determine a focus area from the first image, the focus area including at least one pixel and comprising a portion of the pixels in the first image, to perform a first digital zoom process on the focus area based on a preset magnification ratio to acquire a second image, wherein the second image includes an enlarged focus area and a signal-to-noise ratio of the second image is greater than a signal-to-noise ratio of the focus area, the digital zoom factor used by the image capturing unit after the first digital zoom process being a second digital zoom factor, to perform focusing processing based on the second image to determine the second focal length, and to perform a second digital zoom process according to a preset reduction ratio, so that the digital zoom factor used by the image capturing unit is changed from the second digital zoom factor back to the first digital zoom factor, wherein the reduction ratio corresponds to the magnification ratio.
- A fifth aspect provides an image display method, including: acquiring and presenting a first image, where the first image is an image captured based on a first digital zoom factor and a first focal length; acquiring and presenting a second image, where the second image is an image obtained after the focus area in the first image is subjected to digital zoom processing based on a preset magnification ratio, the focus area includes at least one pixel, the second image includes an enlarged focus area, and a signal-to-noise ratio of the second image is greater than a signal-to-noise ratio of the focus area; and acquiring and presenting a third image, where the third image is an image captured based on the first digital zoom factor and a second focal length, the second focal length being a focal length determined by focusing processing based on the second image.
- Another aspect provides an image display apparatus, comprising: an acquisition unit, configured to acquire a first image from an image capturing device communicably connected to the image display apparatus during a first time period, acquire a second image from the image capturing device during a second time period, and acquire a third image from the image capturing device during a third time period, wherein the first image is an image captured based on a first digital zoom factor and a first focal length, the second image is an image obtained after the focus area in the first image is subjected to a first digital zoom process based on a preset magnification ratio, the focus area includes at least one pixel, the second image includes an enlarged focus area, a signal-to-noise ratio of the second image is greater than a signal-to-noise ratio of the focus area, the third image is an image captured based on the first digital zoom factor and a second focal length, and the second focal length is a focal length determined by focusing processing based on the second image; and a presenting unit, configured to present the first image, the second image, and the third image.
- Another aspect provides a camera system, including: a camera mechanism, configured to capture a first image; and a processor, configured to acquire the first image from the camera mechanism, determine a focus area from the first image, perform a first digital zoom process on the focus area based on a preset magnification ratio to acquire a second image, and perform focusing processing based on the second image, wherein the focus area includes at least one pixel and comprises a portion of the pixels in the first image, the second image includes an enlarged focus area, and a signal-to-noise ratio of the second image is greater than a signal-to-noise ratio of the focus area.
- Another aspect provides a camera system, comprising: a camera mechanism, configured to capture a first image based on a first digital zoom factor and a first focal length, and to capture a third image based on the first digital zoom factor and a second focal length determined by a processor; and the processor, configured to acquire the first image from the camera mechanism, determine a focus area from the first image, perform a first digital zoom process on the focus area based on a preset magnification ratio to acquire a second image, perform focusing processing based on the second image to determine the second focal length, and perform a second digital zoom process according to a preset reduction ratio to change the digital zoom factor used by the camera mechanism from a second digital zoom factor back to the first digital zoom factor, wherein the focus area includes at least one pixel and comprises a portion of the pixels in the first image, the second image includes an enlarged focus area, a signal-to-noise ratio of the second image is greater than a signal-to-noise ratio of the focus area, the digital zoom factor after the first digital zoom process is the second digital zoom factor, and the reduction ratio corresponds to the magnification ratio.
- A further aspect provides a computer program product, comprising computer program code which, when executed by a focusing device, causes the focusing device to perform any of the focusing methods of the first aspect and its various implementations.
- A further aspect provides a computer program product, comprising computer program code which, when executed by an image capturing device, causes the image capturing device to perform any of the image capturing methods of the third aspect and its various implementations.
- A further aspect provides a computer program product, comprising computer program code which, when executed by an image display device, causes the image display device to perform any of the image display methods of the fifth aspect and its various implementations.
- A further aspect provides a computer readable storage medium storing a program that causes a focusing device to perform any of the focusing methods of the first aspect and its various implementations.
- A thirteenth aspect provides a computer readable storage medium storing a program that causes an image capturing device to perform any of the image capturing methods of the third aspect and its various implementations.
- A further aspect provides a computer readable storage medium storing a program that causes an image display device to perform any of the image display methods of the fifth aspect and its various implementations.
- According to the focusing method and apparatus, the image capturing method and apparatus, the image display method and apparatus, and the camera system of the embodiments of the present invention, performing a first digital zoom process on the focus area enlarges the focus area and increases the signal-to-noise ratio of the enlarged focus area. This smooths the evaluation value change curve, facilitates the peak search on the curve, speeds up the peak search, improves the accuracy of the peak search, and thereby improves the focusing speed and focusing accuracy.
- FIG. 1 is a schematic configuration diagram of an example of an imaging system according to an embodiment of the present invention.
- Fig. 2 is a schematic structural view showing another example of the image pickup system of the embodiment of the invention.
- Fig. 3 is a schematic structural view of an unmanned aerial flight system according to an embodiment of the present invention.
- FIG. 4 is a schematic flowchart of a focusing method according to an embodiment of the present invention.
- Fig. 5 is a view showing the positions of α pixels #1 to α pixels #N in an embodiment of the present invention.
- FIG. 6 is a schematic flowchart of an image capturing method according to an embodiment of the present invention.
- FIG. 7 is a schematic flowchart of an image display method according to an embodiment of the present invention.
- FIG. 8 is a schematic interaction diagram of an image capturing method and an image display method according to an embodiment of the present invention.
- Fig. 9 is a schematic diagram showing an evaluation value change curve in the prior art focus processing.
- Fig. 10 is a view showing an evaluation value change curve in the focusing process of the embodiment of the present invention.
- Fig. 11 is a view showing a state of change in the numerical value near the peak point of the evaluation value change curve in the prior art focus processing.
- Fig. 12 is a view showing a state of change in the numerical value near the peak point of the evaluation value change curve in the focus processing in the embodiment of the present invention.
- Figure 13 is a schematic block diagram of a focusing device in accordance with an embodiment of the present invention.
- Figure 14 is a schematic block diagram of an image capturing apparatus according to an embodiment of the present invention.
- Figure 15 is a schematic block diagram of an image display device in accordance with an embodiment of the present invention.
- FIG. 1 is a schematic diagram showing an example of an imaging system 100 according to an embodiment of the present invention. As shown in FIG. 1, the camera system 100 includes an imaging mechanism 110 and a processor 120.
- the camera mechanism 110 is configured to acquire an image, wherein the focal length of the camera mechanism 110 can be adjusted.
- the imaging mechanism 110 may be various imaging mechanisms capable of capturing images in the prior art, for example, a camera.
- an autofocus mechanism may be disposed in the camera mechanism 110.
- The autofocus mechanism is communicably connected to the processor 120 described later, can receive a control command from the processor 120, and adjusts the focal length based on the control command.
- The autofocus mechanism can be realized by locking the lens into a voice coil motor (VCM).
- The voice coil motor is mainly composed of a coil, a magnet group, and spring pieces; the coil is fixed in the magnet group by the upper and lower spring pieces. When the coil is energized, it generates a magnetic field that interacts with the magnet group, so the coil moves upward and the lens locked in the coil moves with it; when the power is cut off, the coil returns to its original position under the elastic force of the spring pieces, thereby realizing the auto focus function.
- an instruction from the processor 120 can be used to control the energization of the coil.
- a charge-coupled device may be disposed in the camera mechanism 110.
- the CCD can also be referred to as a CCD image sensor or a CCD image controller.
- a CCD is a semiconductor device that converts an optical image into an electrical signal.
- the tiny photosensitive material implanted on the CCD is called a pixel (Pixel).
- the CCD acts like a film, but it converts the light signal into a charge signal.
- There are many photodiodes on the CCD, which sense light and convert the optical signal into an electrical signal; the electrical signal is then converted into a digital image signal by external sampling, amplification, and analog-to-digital conversion circuits.
- Alternatively, a complementary metal oxide semiconductor (CMOS) image sensor may be disposed in the camera mechanism 110. CMOS technology is widely used in the manufacture of integrated circuit chips, and the CMOS manufacturing process has also been applied to the production of photosensitive components for digital imaging equipment.
- the principle of photoelectric conversion between CCD and CMOS image sensor is the same. Their main difference is that the signal reading process is different. Since only one (or a few) output nodes of CCD are read out uniformly, the consistency of signal output is very good.
- In a CMOS chip, by contrast, each pixel has its own signal amplifier and performs its own charge-to-voltage conversion, so the consistency of the signal output is poorer.
- A CCD also requires a wide signal bandwidth from its single output amplifier, whereas in a CMOS chip the bandwidth requirement of the amplifier in each pixel is much lower, which greatly reduces the power consumption of the chip; this is the main reason why the power consumption of a CMOS chip is lower than that of a CCD.
- However, the inconsistency of millions of amplifiers leads to higher fixed-pattern noise, which is an inherent disadvantage of CMOS compared with CCD.
- imaging mechanism 110 is merely illustrative, and the present invention is not limited thereto, and other imaging devices capable of adjusting the focal length are all within the protection scope of the present invention.
- The processor 120 is communicably connected to the imaging mechanism 110, can acquire an image captured by the imaging mechanism 110, and generates, based on the image, a control command for controlling the imaging mechanism 110 (for example, the autofocus mechanism in the imaging mechanism 110) to complete focusing. The detailed process by which the processor 120 generates a control command based on the image acquired from the imaging mechanism 110 to complete the focusing of the imaging mechanism 110 will be described later.
- the processor may be a central processing unit (CPU), or may be other general-purpose processors, digital signal processors (DSPs), and application specific integrated circuits (ASICs). ), Field Programmable Gate Array (FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, etc.
- the above general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
- The above FPGA is a product developed further on the basis of programmable devices such as Programmable Array Logic (PAL), Generic Array Logic (GAL), and Complex Programmable Logic Devices (CPLD).
- the imaging system 100 can directly perform framing and focusing through the CCD or CMOS of the imaging mechanism 110.
- FIG. 2 is a schematic diagram showing an example of an imaging system 200 according to an embodiment of the present invention.
- the imaging system 200 includes an imaging mechanism 210 and a processor 220.
- the function and structure of the camera mechanism 210 can be similar to that of the camera mechanism 110 described above.
- The function and structure of the processor 220 can be similar to those of the processor 120 described above.
- the imaging system 200 may be further provided with a display 230.
- the display 230 can be a liquid crystal display.
- the display 230 can be communicably connected to the camera mechanism 210, and can acquire an image captured by the camera mechanism 210 and present the image.
- the display 230 may be coupled to the processor 220 and acquire an image captured by the camera mechanism 210 via the processor 220 and present the image.
- the processor 220 may have an image processing function (for example, zoom in or out, etc.), so that the image presented by the display 230 may be an image processed by the processor 220.
- The display 230 can be used to present images for a live view (LiveView) function.
- the live view function can display the picture taken by the camera mechanism 210 on the display (for example, the liquid crystal display), which greatly facilitates the user to compose the picture.
- the camera system 200 can directly perform framing and focusing through the CCD or CMOS of the camera mechanism 210, that is, the display 230 can directly display an image taken by the CCD or CMOS.
- The processor 220 may perform focusing processing based on the image taken by the CCD or CMOS and perform digital zoom processing on the image during the focusing process (that is, the first digital zoom processing, which will be described in detail later); accordingly, the display 230 can also present images processed by the processor 220.
- The components in the above camera system 100 or 200 may be integrally configured in the same device, and the device may be, for example, a camera, a video camera, or a smart terminal device having an image capturing function (for example, a mobile phone, a tablet, or a laptop).
- the components in the above camera system 100 or 200 may also be configured in different devices.
- For example, the camera mechanism may be configured in a drone, and the display may be configured in a control terminal (for example, a remote controller or a smart terminal equipped with a control program) communicably connected to the drone; the processor may be disposed in the drone or in the control terminal, and the present invention is not particularly limited.
- Drones, also known as unmanned aerial vehicles (UAVs), have evolved from military uses to more and more civilian applications, such as UAV plant protection, UAV aerial photography, and UAV forest fire monitoring; civilian use is also the future trend of UAV development.
- A UAV can carry a payload through a carrier for performing specific tasks. For example, the UAV can carry a photographing apparatus, that is, the camera mechanism in the embodiment of the present invention, through a pan/tilt (gimbal).
- FIG. 3 is a schematic architectural diagram of an unmanned flight system 300 in accordance with an embodiment of the present invention. This embodiment is described by taking a rotorcraft as an example.
- the unmanned flight system 300 can include a UAV 310, a pan/tilt device 320, a display device 330, and a steering device 340.
- the UAV 310 may include a power system 350, a flight control system 360, a rack 370, and a focus processor 380.
- the UAV 310 can communicate wirelessly with the manipulation device 340 and the display device 330.
- The frame 370 can include a fuselage and a stand (also called landing gear).
- the fuselage may include a center frame and one or more arms coupled to the center frame, the one or more arms extending radially from the center frame.
- the stand is attached to the fuselage for supporting the UAV 310 when it is landing.
- The power system 350 can include an electronic speed controller (ESC) 351, one or more propellers 353, and one or more motors 352 corresponding to the one or more propellers 353, wherein each motor 352 is connected between the ESC 351 and a propeller 353, and the motor 352 and propeller 353 are disposed on the corresponding arm; the ESC 351 is configured to receive a driving signal generated by the flight controller 361 and provide a driving current to the motor 352 according to the driving signal, so as to control the rotational speed of the motor 352.
- Motor 352 is used to drive propeller rotation to power the flight of UAV 310, which enables UAV 310 to achieve one or more degrees of freedom of motion.
- the UAV 310 can be rotated about one or more axes of rotation.
- The above-described rotation axes may include a roll axis, a yaw axis, and a pitch axis.
- the motor 352 can be a DC motor or an AC motor.
- the motor 352 may be a brushless motor or a brush motor.
- Flight control system 360 may include flight controller 361 and sensing system 362.
- the sensing system 362 is configured to measure the attitude information of the UAV, that is, the position information and state information of the UAV 310 in space, for example, three-dimensional position, three-dimensional angle, three-dimensional speed, three-dimensional acceleration, and three-dimensional angular velocity.
- The sensing system 362 may include, for example, at least one of a gyroscope, an electronic compass, an inertial measurement unit (IMU), a vision sensor, a Global Positioning System (GPS) receiver, and a barometer.
- the flight controller 361 is used to control the flight of the UAV 310, for example, the flight of the UAV 310 can be controlled based on the attitude information measured by the sensing system 362. It should be understood that the flight controller 361 can control the UAV 310 in accordance with pre-programmed program instructions, or can control the UAV 310 in response to one or more control commands from the steering device 340.
- the pan/tilt device 320 can include an ESC 321 and a motor 322.
- the pan/tilt device 320 can be used to carry the camera mechanism 323.
- The structure and function of the camera mechanism 323 may be similar to those of the camera mechanism 110 or 210 described above; in order to avoid redundancy, a detailed description thereof is omitted here.
- the flight controller 361 can control the motion of the pan-tilt device 320 through the ESC 321 and the motor 322.
- Optionally, the pan/tilt device 320 may further include a controller for controlling the motion of the pan/tilt device 320 by controlling the ESC 321 and the motor 322.
- the pan/tilt device 320 can be independent of the UAV 310 or a portion of the UAV 310.
- the motor 322 can be a direct current motor or an alternating current motor.
- the motor 322 may be a brushless motor or a brush motor.
- the carrier may be located at the top of the aircraft or at the bottom of the aircraft.
- Optionally, the unmanned flight system 300 may further include a focus processor for controlling the camera mechanism 323 to perform focusing. The structure and function of the focus processor may be similar to those of the processor 120 or 220 described above; a detailed description thereof is omitted here to avoid redundancy.
- the focus processor may be disposed on the UAV 310 or in the manipulation device 340 or the display device 330, and the present invention is not particularly limited.
- the display device 330 is located at the ground end of the unmanned flight system 300, can communicate with the UAV 310 wirelessly, and can be used to display attitude information of the UAV 310, and can also be used to display images captured by the camera mechanism 323. It should be understood that display device 330 can be a standalone device or can be disposed in manipulation device 340.
- the maneuvering device 340 is located at the ground end of the unmanned flight system 300 and can communicate with the UAV 310 wirelessly for remote manipulation of the UAV 310.
- the manipulation device may be, for example, a remote controller or a terminal device equipped with an APP (Application) that controls the UAV, for example, a smartphone, a tablet, or the like.
- Receiving the user's input through the manipulation device may refer to manipulating the UAV through input devices such as a dial wheel, buttons, keys, or a joystick on the remote controller, or through a user interface (UI) on the terminal device.
- It should be noted that the focus processor may be a dedicated, independently configured processor, or the function of the focus processor may be implemented by the processor of another device in the unmanned flight system 300 (for example, the processor of the manipulation device 340 or the processor of the camera mechanism 323); the present invention is not particularly limited.
- FIG. 4 is a schematic flowchart of a focusing method 400 according to an embodiment of the present invention. As shown in FIG. 4, the focusing method 400 includes:
- S410: Acquire a first image, and determine a focus area from the first image, where the focus area includes at least one pixel;
- S420: Perform a first digital zoom process on the focus area based on a preset magnification ratio to obtain a second image, where the second image includes an enlarged focus area, and the signal-to-noise ratio of the second image is greater than the signal-to-noise ratio of the focus area;
- S430: Perform focusing processing based on the second image.
- the imaging mechanism of the imaging system can perform imaging based on the focal length A and acquire the image A (ie, an example of the first image), and transmit the captured image A to the processor.
- the processor can determine an image A' (i.e., an example of a focus area) for focus processing from the image A.
- the image A' is an image at a predetermined position in the image A, and the image A' includes at least one pixel.
- The predetermined position may be a position preset by a manufacturer or a user; by way of example and not limitation, the predetermined position may be a position in the image (e.g., image A) near the geometric center of the image.
- Alternatively, the predetermined position may also be determined based on the captured scene; for example, the predetermined position may be the position in the image (e.g., image A) at which a particular scene is presented.
- the imaging system has a digital zoom function
- For example, the image A may be an image acquired based on the focal length A and a magnification A (i.e., an example of the first digital zoom magnification); that is, the magnification of the image A is the magnification A.
- The processor may perform digital zoom processing A (i.e., an example of the first digital zoom processing) on the image A', where the digital zoom processing A is digital zoom processing based on a preset magnification K (i.e., an example of the enlargement ratio), to generate an image B (i.e., an example of the second image).
- the magnification of the image B is recorded as: magnification B.
- the number of pixels included in the image B is larger than the number of pixels included in the image A', and the signal-to-noise ratio of the image B is larger than the signal-to-noise ratio of the image A'.
- Specifically, the digital zoom processing A increases the apparent area of each pixel in the image A' to achieve magnification; that is, the digital zoom processing A uses only the portion of the image sensor corresponding to the above-mentioned focus area to form the image, which is equivalent to cropping the focus area out of the original image and enlarging it.
- the processor may enlarge a part of the pixels (ie, the pixels corresponding to the focus area) on the image sensor (for example, CCD or CMOS) by using interpolation processing means.
- the processor can judge the color around the existing pixels (ie, the pixels corresponding to the focus area), and insert the pixels added by the special algorithm according to the surrounding color conditions.
- the focal length of the lens is not changed during the digital zoom processing A.
- Specifically, the second image includes a plurality of original pixels and a plurality of interpolated pixels, wherein the original pixels are pixels in the focus area and the interpolated pixels are pixels generated from the original pixels.
- Performing the first digital zoom processing on the focus area based on the preset magnification ratio includes: determining, based on a position of a first interpolated pixel of the plurality of interpolated pixels in the second image, N reference original pixels corresponding to the first interpolated pixel from the focus area, N ≥ 1; and determining a pixel value of the first interpolated pixel based on pixel values of the N reference original pixels.
- Specifically, the image B includes two kinds of pixels: α pixels (i.e., an example of the original pixels) and β pixels (i.e., an example of the interpolated pixels). Each α pixel is a pixel in the image A', and each β pixel is a pixel generated based on some of the α pixels (specifically, based on the gray values of those pixels).
- For example, if the magnification is K and the image A' includes R × Q pixels, then the image B includes (K·R) × (K·Q) pixels, where R is an integer greater than or equal to 2 and Q is an integer greater than or equal to 2.
- Specifically, the pixel whose position in the image B is (K·i, K·j) is an α pixel; that is, its pixel value is the pixel value of the pixel at position (i, j) in the image A'. A pixel in the image B whose position is (K·i+v, K·j+u) and does not coincide with an α pixel is a β pixel.
- For the determination process of one β pixel (hereinafter, for convenience of understanding and distinction, denoted as β pixel #1), it is necessary to use N α pixels (hereinafter, for ease of understanding and distinction, denoted as α pixel #1 to α pixel #N).
- Optionally, determining, based on the position of the first interpolated pixel in the second image, the N reference original pixels corresponding to the first interpolated pixel from the at least one original pixel includes: determining the N reference original pixels according to that position and the magnification ratio. That is, the N α pixels are pixels at predetermined positions in the image A', and the predetermined positions correspond to the magnification K of the digital zoom processing A (i.e., the preset enlargement ratio), where K is an integer greater than or equal to 2.
- Specifically, the processor can determine the pixel in the image A' corresponding to the β pixel #1 (i.e., an example of a first reference original pixel; hereinafter, for ease of understanding and distinction, denoted as corresponding pixel #1) according to the magnification and the position of the β pixel #1 in the image B.
- the coordinates of the corresponding pixel #1 are (x, y), and the processor can determine the specific value of x according to the following formula 1, and determine the specific value of y according to the following formula 2.
- For example, the corresponding pixel #1 in the image A' may be the pixel whose position in the image A' is (i, j).
- Thereafter, the processor can determine the spatial differences dx, dx², dx³, dy, dy², and dy³ between the β pixel #1 and the corresponding pixel #1 according to Formulas 3 to 8 below.
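- Formulas 1 to 8 themselves are not reproduced on this page. A plausible reconstruction, consistent with the statement above that the pixel at position (K·i, K·j) in image B is an α pixel, is the following; the floor mapping and the way the offsets are defined are assumptions made for illustration, not the patent's own notation:

```latex
x = \left\lfloor \frac{X_B}{K} \right\rfloor, \qquad
y = \left\lfloor \frac{Y_B}{K} \right\rfloor \qquad \text{(Formulas 1, 2)}

dx = \frac{X_B}{K} - x, \quad dx^2 = (dx)^2, \quad dx^3 = (dx)^3, \qquad
dy = \frac{Y_B}{K} - y, \quad dy^2 = (dy)^2, \quad dy^3 = (dy)^3 \qquad \text{(Formulas 3--8)}
```
- Here (X_B, Y_B) denotes the position of the β pixel #1 in the image B.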
- The α pixel #1 to α pixel #N include the corresponding pixel #1, and the positional relationship between the other reference pixels (i.e., the pixels other than the corresponding pixel #1 among the α pixel #1 to α pixel #N) and the corresponding pixel #1 needs to satisfy a preset positional condition: for example, on the abscissa, the distance between each of these pixels and the corresponding pixel #1 is less than or equal to a preset first distance threshold (for example, 2), and on the ordinate, the distance is less than or equal to a preset second distance threshold (for example, 2).
- the preset position condition may be determined according to the above-mentioned enlargement ratio.
- For example, if the coordinates of the corresponding pixel #1 are (i, j), the α pixels #1 to α pixels #N may be the following pixels in the image A':
- First pixel group: (i-1, j-1), (i, j-1), (i+1, j-1), (i+2, j-1)
- Second pixel group: (i-1, j), (i, j), (i+1, j), (i+2, j)
- Third pixel group: (i-1, j+1), (i, j+1), (i+1, j+1), (i+2, j+1)
- Fourth pixel group: (i-1, j+2), (i, j+2), (i+1, j+2), (i+2, j+2)
- It should be understood that the α pixels #1 to α pixels #N listed above are merely illustrative examples, and the present invention is not limited thereto; the α pixels #1 to α pixels #N may also be other pixels in the image A' whose positional relationship with the corresponding pixel #1 satisfies the above preset condition.
- Thereafter, the processor can determine the pixel value (for example, the gray value) of the β pixel #1 based on the pixel values (for example, the gray values) of the α pixels #1 to α pixels #N determined as described above.
- Specifically, the processor may obtain, according to Formulas 9 to 13 below, an intermediate value t₀ of the first pixel group, an intermediate value t₁ of the second pixel group, an intermediate value t₂ of the third pixel group, and an intermediate value t₃ of the fourth pixel group.
- a₁ = -p₀/2 + (3·p₁)/2 - (3·p₂)/2 + p₃/2
- For example, the processor may substitute the pixel value of (i-1, j-1) in the first pixel group as p₀, the pixel value of (i, j-1) as p₁, the pixel value of (i+1, j-1) as p₂, and the pixel value of (i+2, j-1) as p₃ into Formulas 9 to 13 above to obtain t₀.
- The processor may substitute the pixel value of (i-1, j) in the second pixel group as p₀, the pixel value of (i, j) as p₁, the pixel value of (i+1, j) as p₂, and the pixel value of (i+2, j) as p₃ into Formulas 9 to 13 above to obtain t₁.
- The processor may substitute the pixel value of (i-1, j+1) in the third pixel group as p₀, the pixel value of (i, j+1) as p₁, the pixel value of (i+1, j+1) as p₂, and the pixel value of (i+2, j+1) as p₃ into Formulas 9 to 13 above to obtain t₂.
- The processor may substitute the pixel value of (i-1, j+2) in the fourth pixel group as p₀, the pixel value of (i, j+2) as p₁, the pixel value of (i+1, j+2) as p₂, and the pixel value of (i+2, j+2) as p₃ into Formulas 9 to 13 above to obtain t₃.
- Thereafter, the processor can determine the pixel value w of the β pixel #1 from t₀ to t₃ determined as described above, according to Formulas 14 to 18 below.
- a₂ = -t₀/2 + (3·t₁)/2 - (3·t₂)/2 + t₃/2
- In a similar manner, the processor can determine the pixel value of each β pixel in the image B. Further, from the pixel values of the β pixels and the pixel values of the α pixels, the processor can determine the image B.
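- Formulas 9 to 18 are not reproduced on this page, but the coefficients shown above have the form of a Catmull-Rom cubic. The following is a minimal sketch of the two-pass (row-wise, then column-wise) cubic interpolation under that assumption; the function names, the edge padding, and the use of NumPy are illustrative choices, not taken from the patent:

```python
import numpy as np

def cubic(p0, p1, p2, p3, t):
    # Catmull-Rom cubic along one axis; the t**3 coefficient matches the
    # coefficient given above: -p0/2 + 3*p1/2 - 3*p2/2 + p3/2.
    a = -p0 / 2 + 3 * p1 / 2 - 3 * p2 / 2 + p3 / 2
    b = p0 - 5 * p1 / 2 + 2 * p2 - p3 / 2
    c = -p0 / 2 + p2 / 2
    d = p1
    return ((a * t + b) * t + c) * t + d

def digital_zoom(focus_area, K):
    """Enlarge a 2-D gray-value focus area (image A') by an integer factor K."""
    A = np.asarray(focus_area, dtype=np.float64)
    R, Q = A.shape
    P = np.pad(A, 2, mode="edge")            # avoid border checks near the edges
    B = np.empty((K * R, K * Q))
    for Y in range(K * R):
        i, ry = divmod(Y, K)                 # corresponding row and remainder
        for X in range(K * Q):
            j, rx = divmod(X, K)             # corresponding pixel #1 is (i, j)
            u, v = rx / K, ry / K            # fractional offsets dx, dy
            # four pixel groups: rows i-1 .. i+2, columns j-1 .. j+2 (padded by 2)
            t = [cubic(*P[i + g + 2, j + 1:j + 5], u) for g in range(-1, 3)]
            B[Y, X] = cubic(t[0], t[1], t[2], t[3], v)
    return B
```
- At positions (K·i, K·j) the offsets are zero and the result is exactly the original α pixel value, so only the β pixels are newly generated, as described above.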
- Thereafter, the processor may perform focusing processing based on the image B. For example, by analyzing correctly focused images and out-of-focus images, the following rule can be found: when the focus is correct, the contrast of the image is strongest; the farther the lens deviates from the in-focus position, the lower the contrast.
- the evaluation function of the contrast is:
- E reaches its maximum value E_max at the position where the focus is correct; before and after the in-focus position, E decreases as the amount of defocus increases, and if the defocus offset is particularly large, E tends to zero.
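- The exact evaluation function is not reproduced on this page. A common choice for a contrast evaluation function, used here purely as an illustrative stand-in, is the sum of squared gray-level gradients over the (enlarged) focus area:

```python
import numpy as np

def contrast_evaluation(image):
    """Sum of squared horizontal and vertical gray-level differences.
    An illustrative contrast (sharpness) measure, not the patent's formula."""
    img = np.asarray(image, dtype=np.float64)
    gx = np.diff(img, axis=1)    # horizontal gradient
    gy = np.diff(img, axis=0)    # vertical gradient
    return float((gx ** 2).sum() + (gy ** 2).sum())
```
- The larger this value, the stronger the contrast of the focus area; during focusing, the value rises toward its maximum as the lens approaches the in-focus position and falls off on either side.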
- In one implementation, an auxiliary mirror behind the semi-transparent framing mirror directs part of the beam toward the bottom of the camera, into the detection assembly.
- After passing through an infrared filter, the beam is split into two groups of images by the image-splitting prism, one group corresponding to the pre-focus equivalent surface S₁ and the other group corresponding to the post-focus equivalent surface S₂, where S₁ and S₂ are each separated from the imaging plane by a distance l.
- Two sets of one-dimensional CCD elements are respectively disposed at the positions S₁ and S₂. When the lens position is changed, S₁ and S₂ respectively yield corresponding contrast curves E₁ and E₂.
- When the lens moves from a front-focus position toward the in-focus position, the contrast curve E₁ reaches its maximum value first and then E₂ reaches its maximum value; conversely, when the lens starts from a back-focus position, E₂ reaches its maximum first and then E₁. When the lens is at the in-focus position, E₁ and E₂ are equal; thus E₁ > E₂ indicates a front-focus position, and E₁ < E₂ indicates a back-focus position.
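- The rule in the preceding paragraph can be expressed as a small helper; the function name and the tolerance are illustrative details, not from the patent:

```python
def focus_state(e1, e2, tol=1e-6):
    """Classify the lens position from the two contrast values E1 and E2:
    E1 > E2 -> front-focus, E1 < E2 -> back-focus, E1 == E2 -> in focus."""
    if abs(e1 - e2) <= tol:
        return "in-focus"
    return "front-focus" if e1 > e2 else "back-focus"
```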
- It should be understood that the first image may be a single-frame image or a multi-frame image; the present invention is not particularly limited. The process by which the processor performs focusing processing based on a multi-frame image may be similar to that in the prior art, and a detailed description is omitted here to avoid redundancy.
- Assume that the focal length A is changed by the above-described focusing processing and a focal length B is determined; thereafter, the imaging mechanism can capture images based on the focal length B.
- Optionally, the focusing method 400 further includes: presenting the first image and the second image. That is, the camera system may further be configured with a display, and the image A obtained by the camera mechanism and the image B generated by the processor are presented on the display.
- Optionally, the first digital zoom factor is the digital zoom factor used before the first digital zoom process, and the second digital zoom factor is the digital zoom factor used after the first digital zoom process.
- Optionally, the first image is an image acquired based on the first focal length, the focal length determined by the focusing processing is a second focal length, and the focusing method further includes: presenting a third image, where the third image is an image acquired based on the second focal length and the first digital zoom factor.
- the focusing process performed by the processor may be performed simultaneously with the process of displaying the liveview by the display.
- For example, the display may present the image A in a time period A (which may also be referred to as a first time period), where the image A is an image acquired based on the focal length A and the magnification A. The magnification A may be an initial digital zoom magnification of the imaging system (for example, a magnification without digital zoom processing, e.g., 1×), or may be a digital zoom magnification adjusted by the user; the present invention is not particularly limited. That is, in the period A, a liveview acquired based on the focal length A and the magnification A (i.e., an example of the first image) may be presented on the display.
- In a time period B (which may also be referred to as a second time period), the display can present the image B. That is, after the processor performs the above-described digital zoom processing based on the magnification ratio K, the digital zoom magnification is changed to the magnification B; thus, in the period B, the liveview presented on the display may be a liveview obtained based on the focal length A and the magnification B (i.e., an example of the second image).
- In a time period C (which may also be referred to as a third time period), after the processor completes the focusing processing and determines the focal length B, the processor can adjust the digital zoom magnification from the magnification B back to the magnification A; thus, in the period C, the liveview presented on the display may be a liveview obtained based on the focal length B and the magnification A (i.e., an example of the third image).
- In this way, the user can observe, through the display, how the liveview image changes during the focusing process, which improves the user experience and the user's sense of participation in the shooting process.
- According to the focusing method of the embodiment of the present invention, performing the first digital zoom processing on the focus area enlarges the focus area and increases the signal-to-noise ratio of the enlarged focus area, which smooths the evaluation value change curve, facilitates the peak search on the curve, speeds up the peak search, improves the accuracy of the peak search, and thereby improves the focusing speed and focusing accuracy.
- FIG. 6 is a schematic flowchart of an image capturing method 500 according to an embodiment of the present invention. As shown in FIG. 6, the image capturing method 500 includes:
- S510: Capture and present a first image based on a first digital zoom factor and a first focal length, and determine a focus area from the first image, the focus area including at least one pixel and comprising a portion of the pixels in the first image;
- S520: Perform a first digital zoom process on the focus area based on a preset magnification ratio to acquire and present a second image, where the second image includes an enlarged focus area, the signal-to-noise ratio of the second image is greater than the signal-to-noise ratio of the focus area, and the digital zoom factor after the digital zoom processing is a second digital zoom factor;
- S530: Perform focusing processing based on the second image to determine a second focal length;
- S540: Perform a second digital zoom process according to a preset reduction ratio, so that the currently used digital zoom factor is changed from the second digital zoom factor back to the first digital zoom factor, wherein the reduction ratio corresponds to the magnification ratio.
- the imaging mechanism of the imaging system can perform imaging based on the focal length A and acquire the image A (ie, an example of the first image), and transmit the captured image A to the processor. It is assumed that the image A is an image acquired based on the magnification A, wherein the magnification A may be a preset magnification of the imaging system or a magnification adjusted by the user, and the invention is not particularly limited.
- The operations and processing performed by the processor and the imaging mechanism in S510 may be similar to those performed by the processor and the imaging mechanism in S410 of the method 400 described above; in order to avoid redundancy, a detailed description thereof is omitted here.
- That is, a liveview acquired based on the focal length A and the magnification A (i.e., an example of the first image) may be presented on the display.
- The image A presented on the display may be acquired from the processor (for example, when the magnification A is not 1×) or from the imaging mechanism (for example, when the magnification A is 1×); the present invention is not particularly limited.
- In S520, the processor may perform the first digital zoom process on the image A' to generate an image B (i.e., an example of the second image). It is assumed that the digital zoom magnification after the first digital zoom process is a magnification B, so the image B is an image acquired based on the focal length A and the magnification B.
- The operations and processing performed by the processor in S520 may be similar to those performed by the processor in S420 of the method 400 described above; a detailed description thereof is omitted here to avoid redundancy.
- In S530, the processor may perform focusing processing based on the image B. It is assumed that the focal length A is changed by the focusing processing and a focal length B is determined. The operations and processing performed by the processor in S530 may be similar to those performed in S430 of the method 400 described above; a detailed description thereof is omitted here to avoid redundancy.
- During this period, the liveview presented on the display may be a liveview obtained based on the focal length A and the magnification B (i.e., an example of the second image).
- In S540, the processor can adjust the digital zoom magnification from the magnification B back to the magnification A.
- Thereafter, the imaging mechanism of the imaging system can perform imaging based on the focal length B and acquire an image C (i.e., an example of the third image), where the image C is an image acquired based on the magnification A.
- That is, after the processor completes the focusing processing to determine the focal length B and adjusts the digital zoom magnification from the magnification B to the magnification A (i.e., in the period C), the liveview presented on the display may be a liveview obtained based on the focal length B and the magnification A (i.e., an example of the third image).
- In this way, the user can observe, through the display, how the liveview image changes during the focusing process, which improves the user experience and the user's sense of participation in the shooting process.
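- The overall flow of the method 500 (enlarge the focus area, focus using the enlarged image, then restore the original digital zoom factor) can be sketched as follows. The `camera` object and its methods are hypothetical stand-ins for the camera mechanism's real interfaces; `digital_zoom` and `contrast_evaluation` are the sketches given earlier:

```python
def capture_with_zoom_assisted_focus(camera, focus_rect, K, focal_lengths):
    """Sketch of method 500; `camera` is assumed to expose capture(),
    present(image), set_focal_length(f) and set_digital_zoom(m)."""
    top, left, h, w = focus_rect

    # S510: first image at the first digital zoom factor and first focal length
    image_a = camera.capture()
    focus_area = image_a[top:top + h, left:left + w]

    # S520: first digital zoom process -> second image (what the display
    # presents during period B); the zoom factor becomes the second factor
    camera.set_digital_zoom(K)
    camera.present(digital_zoom(focus_area, K))

    # S530: focusing processing based on the enlarged focus area -- evaluate
    # the contrast at each candidate focal length and keep the best one
    scores = {}
    for f in focal_lengths:
        camera.set_focal_length(f)
        frame = camera.capture()
        scores[f] = contrast_evaluation(digital_zoom(frame[top:top + h, left:left + w], K))
    camera.set_focal_length(max(scores, key=scores.get))   # second focal length

    # S540: second digital zoom process restores the first digital zoom factor
    camera.set_digital_zoom(1)

    # third image: first digital zoom factor, second focal length
    return camera.capture()
```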
- FIG. 7 is a schematic flowchart of an image display method 600 according to an embodiment of the present invention, described from the perspective of a display or display device. As shown in FIG. 7, the image display method 600 includes:
- S610: Acquire and present a first image, where the first image is an image captured based on a first digital zoom factor and a first focal length;
- S620: Acquire and present a second image, where the second image is an image obtained after the focus area in the first image is subjected to a first digital zoom process based on a preset magnification ratio, the focus area includes at least one pixel, the second image includes an enlarged focus area, and the signal-to-noise ratio of the second image is greater than the signal-to-noise ratio of the focus area;
- S630: Acquire and present a third image, where the third image is an image captured based on the first digital zoom factor and a second focal length, the second focal length being a focal length determined by focusing processing based on the second image.
- In S610, the imaging mechanism of the imaging system can take a picture based on the focal length A, acquire an image A (i.e., an example of the first image), and send the captured image A to the processor.
- The display can then acquire and present the image A. That is, before the processor performs the digital zoom processing on the image A (i.e., in a period A), a liveview acquired based on the focal length A and the magnification A (i.e., an example of the first image) may be presented on the display.
- the image A is an image acquired based on the magnification A, wherein the magnification A may be a preset magnification of the imaging system or a magnification adjusted by the user, and the invention is not particularly limited.
- The operations and processing performed by the processor and the imaging mechanism may be similar to those performed by the processor and the imaging mechanism in S410 of the method 400 described above; in order to avoid redundancy, a detailed description thereof is omitted here.
- The image A presented on the display may be acquired from the processor (for example, when the magnification A is not 1×) or from the imaging mechanism (for example, when the magnification A is 1×); the present invention is not particularly limited.
- the processor can perform a first digital zoom process on the image A' to generate an image B (i.e., an example of the second image). It is assumed that the magnification after the first zoom processing is the magnification B, and the image B is an image acquired based on the focal length A and the magnification B.
- In S620, the display can acquire and present the image B. That is, from the time the processor generates the image B by performing the digital zoom processing A on the image A' until the processor determines the focal length B (i.e., in a period B), the liveview presented on the display may be a liveview acquired based on the focal length A and the magnification B (i.e., an example of the second image).
- The operations and processing performed by the processor here may be similar to those performed by the processor in S420 of the method 400 described above; in order to avoid redundancy, a detailed description thereof is omitted here.
- Thereafter, the processor can perform focusing processing based on the image B. It is assumed that the focal length A is changed by the focusing processing and the focal length B is determined. The operations and processing performed by the processor may be similar to those performed by the processor in S430 of the method 400 described above; a detailed description thereof is omitted here.
- Thereafter, the processor can adjust the digital zoom magnification from the magnification B back to the magnification A, and the imaging mechanism of the imaging system can perform imaging based on the focal length B and acquire an image C (i.e., an example of the third image), where the image C is an image acquired based on the magnification A.
- In S630, after the processor completes the focusing processing to determine the focal length B and adjusts the digital zoom magnification from the magnification B to the magnification A (i.e., in a period C), the display can acquire and present the image C; that is, the liveview presented on the display may be a liveview obtained based on the focal length B and the magnification A (i.e., an example of the third image).
- In this way, the user can observe, through the display, how the liveview image changes during the focusing process, which improves the user experience and the user's sense of participation in the shooting process.
- FIG. 8 is a schematic interaction diagram of an image capture process and image presentation 700 in accordance with an embodiment of the present invention.
- in S710, the camera mechanism and the processor can acquire the image A based on the focal length A and the magnification A and transmit the image A to the display, which can present the image A.
- in S720, the processor may perform digital zoom processing (i.e., an example of enlargement processing) based on the enlargement ratio K on the in-focus area (for example, the above-described image A') in the image A to acquire the image B; if the magnification after this digital zoom processing is denoted the magnification B, then magnification B = K × magnification A.
- in S730, the processor can send the image B to the display, and the display can present the image B.
- in S740, the processor may perform the focusing processing based on the image B to determine the focal length B; after the focal length B is determined, the current digital zoom magnification may be adjusted from the magnification B to the magnification A.
- in S750, the camera mechanism and the processor may acquire the image C based on the focal length B and the magnification A and transmit the image C to the display, which may present the image C. A minimal sketch of this S710-S750 flow is given below.
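The following sketch restates the FIG. 8 interaction in code form. It is illustrative only: the objects `camera`, `processor`, and `display` and their methods (`capture`, `select_focus_area`, `enlarge`, `contrast_autofocus`, `present`) are hypothetical names under assumed interfaces, not APIs defined by this disclosure.

```python
def capture_with_assisted_focus(camera, processor, display, enlargement_ratio_k):
    """Sketch of the S710-S750 interaction in FIG. 8 (illustrative names only)."""
    zoom_a = camera.digital_zoom_factor              # first digital zoom factor (magnification A)

    # S710: capture and present image A (focal length A, magnification A).
    image_a = camera.capture()
    display.present(image_a)

    # S720: first digital zoom processing on the focus area of image A gives
    # image B; the current magnification becomes B = K * A.
    focus_area = processor.select_focus_area(image_a)          # "image A'"
    image_b = processor.enlarge(focus_area, enlargement_ratio_k)
    camera.digital_zoom_factor = enlargement_ratio_k * zoom_a  # magnification B

    # S730: present image B (the enlarged focus area).
    display.present(image_b)

    # S740: focusing processing on image B determines focal length B, then the
    # digital zoom factor is restored from magnification B back to magnification A.
    camera.focal_length = processor.contrast_autofocus(camera, image_b)
    camera.digital_zoom_factor = zoom_a

    # S750: capture and present image C (focal length B, magnification A).
    image_c = camera.capture()
    display.present(image_c)
    return image_c
```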
- according to the image capturing method of the embodiment of the present invention, by performing the first digital zoom processing on the in-focus area and thereby enlarging it, the signal-to-noise ratio of the enlarged in-focus area can be increased and the evaluation-value change curve can be smoothed, which is beneficial to the peak search on the evaluation-value curve, speeds up the peak search, improves the accuracy of the peak search, and thus improves the focusing speed and focusing accuracy.
- as shown in FIG. 9 and FIG. 11, because the in-focus area in the prior art is small, its signal-to-noise ratio is relatively low, so the contrast change curve used in the focusing processing fluctuates strongly, that is, its smoothness is poor; as a result, the focusing process (mainly the peak search process) takes longer and its accuracy is poor.
- in contrast, as shown in FIG. 10 and FIG. 12, in the embodiment of the present invention, by performing the first digital zoom processing on the in-focus area, the area of the in-focus area can be increased (specifically, the number of pixels it includes is increased); moreover, since the added pixels are determined by an interpolation algorithm from pixels at specified positions in the original image, the noise in the added pixels can be effectively reduced. Therefore, the signal-to-noise ratio of the focus area after the digital zoom processing can be improved, which reduces the fluctuation of the contrast curve used in the focusing process, improves the smoothness of the contrast curve, shortens the time of the focusing process (mainly the peak search process), and improves the accuracy of the focusing process (mainly the peak search process).
- it should be noted that, in FIG. 9 to FIG. 12, the abscissa may be the position of the focus motor, and the ordinate may be the contrast value calculated over the focus area of the image acquired at that focus-motor position (a sketch of computing such a curve is given below).
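The curves in FIG. 9 to FIG. 12 can be produced by a routine like the following minimal sketch: for each focus-motor position a contrast value is computed over the (digitally enlarged) focus area, and the position with the maximum value is taken as the in-focus position. The specific contrast metric (sum of squared neighbor differences) and the callback `grab_focus_area` are assumptions for illustration; the embodiment only requires some contrast value per motor position and a peak search over the resulting curve.

```python
import numpy as np

def contrast_value(focus_area: np.ndarray) -> float:
    """Contrast (evaluation) value of a grayscale focus area.

    Uses the sum of squared differences between neighboring pixels; this
    particular metric is an assumption chosen for illustration only.
    """
    area = focus_area.astype(np.float64)
    dx = np.diff(area, axis=1)                 # horizontal neighbor differences
    dy = np.diff(area, axis=0)                 # vertical neighbor differences
    return float((dx ** 2).sum() + (dy ** 2).sum())

def search_focus_peak(motor_positions, grab_focus_area):
    """Sweep the focus motor and return the position with the largest contrast.

    `grab_focus_area(position)` is a hypothetical callback that moves the focus
    motor to `position` and returns the (digitally enlarged) focus area of the
    image captured there, as a 2-D array of gray values.
    """
    curve = [(pos, contrast_value(grab_focus_area(pos))) for pos in motor_positions]
    best_position, _ = max(curve, key=lambda item: item[1])
    return best_position, curve
```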
- the focusing method and the image capturing method according to the embodiments of the present invention have been described in detail above with reference to FIGS. 1 through 12. Hereinafter, a focusing device, an image capturing device, and an image display device according to embodiments of the present invention will be described in detail with reference to FIGS. 13 through 15.
- FIG. 13 is a schematic block diagram of a focusing device 800 according to an embodiment of the present invention. As shown in FIG. 13, the focusing device 800 includes:
- an image obtaining unit 810, configured to acquire a first image;
- a focus processing unit 820, configured to determine a focus area from the first image, where the focus area includes at least one pixel; to perform first digital zoom processing on the focus area based on a preset magnification ratio to obtain a second image, where the second image includes the enlarged focus area and the signal-to-noise ratio of the second image is greater than the signal-to-noise ratio of the focus area; and to perform focusing processing based on the second image.
- the second image includes a plurality of original pixels and a plurality of interpolated pixels, wherein the original pixels are pixels in the focus area, and
- the focus processing unit 820 is specifically configured to determine, based on the position of a first interpolated pixel of the plurality of interpolated pixels in the second image, N reference original pixels corresponding to the first interpolated pixel from the focus area, N ≥ 1, and to determine the gray value of the first interpolated pixel according to the magnification ratio and the gray values of the N reference original pixels.
- the focus processing unit 820 is specifically configured to determine a first reference original pixel from the focus area according to the magnification ratio and the position of the first interpolated pixel in the second image, where the position of the first reference original pixel in the focus area corresponds to the position of the first interpolated pixel in the second image, and to determine N-1 second reference original pixels from the focus area according to the first reference original pixel, where the positional relationship between the N-1 second reference original pixels and the first reference original pixel satisfies a preset position condition.
- the focus area comprises X ⁇ Y pixels arranged in two dimensions, wherein X ⁇ 1, Y ⁇ 1, and
- the focus processing unit 820 is specifically configured such that, if the magnification ratio is 1:M, then when the position coordinates of the first interpolated pixel in the second image are (M·i+1, M·j+1), the first reference original pixel is the pixel whose position coordinates in the focus area are (i, j), where i ∈ [0, X-1], j ∈ [0, Y-1].
- X ≥ 4, Y ≥ 4, and the N-1 second reference original pixels include the pixels at the following position coordinates in the second image (a minimal interpolation sketch follows):
(i-1, j-1), (i, j-1), (i+1, j-1), (i+2, j-1),
(i-1, j), (i, j), (i+1, j), (i+2, j),
(i-1, j+1), (i, j+1), (i+1, j+1), (i+2, j+1),
(i-1, j+2), (i, j+2), (i+1, j+2), (i+2, j+2).
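The gray value of an interpolated pixel is computed from this 4×4 neighborhood of reference original pixels, which amounts to a cubic interpolation (Catmull-Rom-style weights, matching formulas 9 to 18 of the description). The sketch below follows those formulas for a 2× enlargement, where the offsets are dx = dy = 1/2; the array layout and the assumption of interior coordinates (so the full neighborhood exists) are illustrative choices, not requirements of the disclosure.

```python
import numpy as np

def _cubic(p0, p1, p2, p3, d):
    # Coefficients as in formulas 9-13 (and 14-18) of the description.
    a = -p0 / 2 + 3 * p1 / 2 - 3 * p2 / 2 + p3 / 2
    b = p0 - 5 * p1 / 2 + 2 * p2 - p3 / 2
    c = -p0 / 2 + p2 / 2
    return a * d ** 3 + b * d ** 2 + c * d + p1

def interpolated_gray_value(focus_area: np.ndarray, i: int, j: int,
                            dx: float = 0.5, dy: float = 0.5) -> float:
    """Gray value of the interpolated pixel whose first reference original pixel is (i, j).

    `focus_area[y, x]` holds the gray values of the focus area. Interior
    coordinates are assumed so that the 4x4 reference neighborhood exists; for
    a 2x enlargement the offsets are dx = dy = 1/2, as in the description.
    """
    row_values = []
    for dj in (-1, 0, 1, 2):                       # the four pixel groups, y = j-1 .. j+2
        p = [float(focus_area[j + dj, i + di]) for di in (-1, 0, 1, 2)]
        row_values.append(_cubic(*p, dx))          # t0 .. t3: interpolate along x
    return _cubic(*row_values, dy)                 # final value w: interpolate along y
```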
- the focusing device further includes:
- An image display unit for presenting the first image and the second image.
- the focus processing unit 820 is further configured to perform second digital zoom processing according to a preset reduction ratio, so that the currently used digital zoom factor is changed from the second digital zoom factor to the first digital zoom factor, where the reduction ratio corresponds to the enlargement ratio, the first digital zoom factor is the digital zoom factor before the first digital zoom processing, and the second digital zoom factor is the digital zoom factor after the first digital zoom processing.
- the first image is an image acquired based on the first focal length, and the focal length after the focusing processing is the second focal length; the image obtaining unit 810 is further configured to acquire a third image based on the first digital zoom factor and the second focal length, and the image presenting unit is further configured to present the third image.
- the above units and modules of the focusing device 800 according to the embodiment of the present invention, and the other operations and/or functions described above, are respectively intended to implement the corresponding processes of the method 400 in FIG. 4; for brevity, details are not repeated here.
- according to the focusing apparatus of the embodiment of the present invention, by performing the first digital zoom processing on the in-focus area to enlarge it, the signal-to-noise ratio of the enlarged in-focus area can be increased and the evaluation-value change curve can be smoothed, which is conducive to the peak search on the evaluation-value curve, speeds up the peak search, improves the accuracy of the peak search, and thus improves the focusing speed and focusing accuracy.
- FIG. 14 is a schematic block diagram of an image capturing apparatus 900 according to an embodiment of the present invention. As shown in FIG. 14, the image capturing apparatus 900 includes:
- an image capturing unit 910, configured to capture a first image based on a first digital zoom factor and a first focal length, and to capture a third image based on the first digital zoom factor and a second focal length determined by the focus processing unit 920;
- a focus processing unit 920, configured to determine a focus area from the first image, where the focus area includes at least one pixel and includes a part of the pixels in the first image; to perform first digital zoom processing on the focus area based on a preset magnification ratio to obtain a second image, where the second image includes the enlarged focus area, the signal-to-noise ratio of the second image is greater than the signal-to-noise ratio of the focus area, and the digital zoom factor used by the image capturing unit 910 after this digital zoom processing is the second digital zoom factor; to perform focusing processing based on the second image to determine the second focal length; and to perform second digital zoom processing according to a preset reduction ratio, so that the digital zoom factor used by the image capturing unit 910 is changed from the second digital zoom factor to the first digital zoom factor, where the reduction ratio corresponds to the magnification ratio.
- an image presenting unit 930, configured to present the first image, the second image, and the third image.
- the second image includes a plurality of original pixels and a plurality of interpolated pixels, wherein the original pixels are pixels in the focus area, and
- the focus processing unit 920 is specifically configured to determine, based on the position of a first interpolated pixel of the plurality of interpolated pixels in the second image, N reference original pixels corresponding to the first interpolated pixel from the focus area, N ≥ 1, and to determine the gray value of the first interpolated pixel according to the magnification ratio and the gray values of the N reference original pixels.
- the focus processing unit 920 is specifically configured to determine a first reference original pixel from the focus area according to the magnification ratio and the position of the first interpolated pixel in the second image, where the position of the first reference original pixel in the focus area corresponds to the position of the first interpolated pixel in the second image, and to determine N-1 second reference original pixels from the focus area according to the first reference original pixel, where the positional relationship between the N-1 second reference original pixels and the first reference original pixel satisfies a preset position condition.
- the focus area comprises X ⁇ Y pixels arranged in two dimensions, wherein X ⁇ 1, Y ⁇ 1, and
- the focus processing unit 920 is specifically configured such that, if the magnification ratio is 1:M, then when the position coordinates of the first interpolated pixel in the second image are (M·i+1, M·j+1), the first reference original pixel is the pixel whose position coordinates in the focus area are (i, j), where i ∈ [0, X-1], j ∈ [0, Y-1].
- X ≥ 4, Y ≥ 4, and the N-1 second reference original pixels include the pixels at the following position coordinates in the second image:
(i-1, j-1), (i, j-1), (i+1, j-1), (i+2, j-1),
(i-1, j), (i, j), (i+1, j), (i+2, j),
(i-1, j+1), (i, j+1), (i+1, j+1), (i+2, j+1),
(i-1, j+2), (i, j+2), (i+1, j+2), (i+2, j+2).
- the above units and modules of the image capturing apparatus 900 according to the embodiment of the present invention, and the other operations and/or functions described above, are respectively intended to implement the corresponding processes of the method 500 in FIG. 6; for brevity, details are not repeated here.
- according to the image capturing apparatus of the embodiment of the present invention, by performing the first digital zoom processing on the focus area to enlarge it, the signal-to-noise ratio of the enlarged focus area can be increased and the evaluation-value change curve can be smoothed, which is beneficial to the peak search on the evaluation-value curve, speeds up the peak search, improves the accuracy of the peak search, and thus improves the focusing speed and focusing accuracy.
- FIG. 15 is a schematic block diagram of an image display device 1000 according to an embodiment of the present invention. As shown in FIG. 15, the image display apparatus 1000 includes:
- an acquiring unit 1010, configured to acquire a first image from an image capturing device communicably connected to the image display device in a first time period, to acquire a second image from an image processing device communicably connected to the image display device in a second time period, and to acquire a third image from the image capturing device in a third time period, where the first image is an image captured based on a first digital zoom factor and a first focal length, the second image is an image obtained after the focus area in the first image has undergone first digital zoom processing based on a preset magnification ratio, the focus area includes at least one pixel, the second image includes the enlarged focus area, the signal-to-noise ratio of the second image is greater than the signal-to-noise ratio of the focus area, the third image is an image captured based on the first digital zoom factor and a second focal length, and the second focal length is a focal length determined by focusing processing based on the second image;
- a presenting unit 1020, configured to present the first image in the first time period, present the second image in the second time period, and present the third image in the third time period (a minimal display-side sketch is given below).
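On the display side, the three time periods simply mean switching the liveview source as the corresponding frames arrive. The following is a minimal sketch under assumed interfaces; `camera_link`, `processor_link`, and `screen` and their methods are hypothetical names, not part of this disclosure.

```python
def run_liveview(camera_link, processor_link, screen):
    """Sketch of the image display device's three periods (illustrative names only)."""
    # First period: present image A received from the image capturing device
    # (first digital zoom factor, first focal length).
    screen.show(camera_link.receive())

    # Second period: present image B (the enlarged focus area) received from
    # the image processing device.
    screen.show(processor_link.receive())

    # Third period: present image C received from the image capturing device
    # (first digital zoom factor, second focal length determined by focusing).
    screen.show(camera_link.receive())
```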
- the second image includes a plurality of original pixels and a plurality of interpolated pixels, wherein the original pixels are pixels in the focus area, and
- the gray value of a first interpolated pixel of the plurality of interpolated pixels is determined according to the magnification ratio and the gray values of N reference original pixels, where the N reference original pixels are the pixels in the focus area corresponding to the first interpolated pixel, the N reference original pixels are determined from the focus area based on the position of the first interpolated pixel in the second image, and N ≥ 1.
- the N reference original pixels comprise a first reference original pixel and N-1 second reference original pixels, where the first reference original pixel is determined from the focus area according to the magnification ratio and the position of the first interpolated pixel in the second image, the position of the first reference original pixel in the focus area corresponds to the position of the first interpolated pixel in the second image, and the N-1 second reference original pixels are determined from the focus area according to the first reference original pixel, the positional relationship between the N-1 second reference original pixels and the first reference original pixel satisfying a preset position condition.
- the focus area comprises X ⁇ Y pixels arranged in two dimensions, wherein X ⁇ 1, Y ⁇ 1, and
- if the magnification ratio is 1:M, then when the position coordinates of the first interpolated pixel in the second image are (M·i+1, M·j+1), the first reference original pixel is the pixel whose position coordinates in the focus area are (i, j), where i ∈ [0, X-1], j ∈ [0, Y-1].
- X ≥ 4, Y ≥ 4, and the N-1 second reference original pixels include the pixels at the following position coordinates in the second image:
(i-1, j-1), (i, j-1), (i+1, j-1), (i+2, j-1),
(i-1, j), (i, j), (i+1, j), (i+2, j),
(i-1, j+1), (i, j+1), (i+1, j+1), (i+2, j+1),
(i-1, j+2), (i, j+2), (i+1, j+2), (i+2, j+2).
- the image capturing device is configured in a first device, the image display device is configured in a second device, and the first device and the second device are capable of wired communication or wireless communication.
- the first device is a drone
- the second device is a terminal device or a remote controller.
- the focusing method and device, the image capturing method and device, the image display method and device, and the camera system provided by the embodiments of the present invention can be applied to a computer, where the computer includes a hardware layer, an operating system layer running on the hardware layer, and an application layer running on the operating system layer.
- the hardware layer includes hardware such as a CPU (Central Processing Unit), a memory management unit (MMU), and a memory (also referred to as main memory).
- the operating system may be any one or more computer operating systems that implement business processing through a process, such as a Linux operating system, a Unix operating system, an Android operating system, an iOS operating system, or a Windows operating system.
- the application layer includes applications such as browsers, contacts, word processing software, and instant messaging software.
- the computer may be a handheld device such as a smart phone, or may be a terminal device such as a personal computer.
- the present invention is not particularly limited in this respect, as long as the computer can run a program recording the code of the focusing method and the image capturing method of the embodiments of the present invention.
- the term "article of manufacture" as used in this application encompasses a computer program accessible from any computer-readable device, carrier, or medium.
- the computer readable medium may include, but is not limited to: magnetic storage devices (e.g., a hard disk, a floppy disk, or a magnetic tape), optical discs (e.g., a CD (Compact Disc), a DVD (Digital Versatile Disc), etc.), smart cards, and flash memory devices (e.g., EPROM (Erasable Programmable Read-Only Memory), cards, sticks, or key drives, etc.).
- various storage media described herein can represent one or more devices and/or other machine-readable media for storing information.
- the term "machine-readable medium” may include, without limitation, a wireless channel and various other mediums capable of storing, containing, and/or carrying instructions and/or data.
- in the various embodiments of the present invention, the sequence numbers of the above processes do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present invention.
- "B corresponding to A" means that B is associated with A and that B can be determined according to A; however, determining B according to A does not mean that B is determined based only on A, and B may also be determined based on A and/or other information.
- the disclosed systems, devices, and methods may be implemented in other manners.
- the device embodiments described above are merely illustrative; for example, the division of the units is only a logical function division, and in actual implementation there may be other division manners, for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed.
- the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interface, device or unit, and may be in an electrical, mechanical or other form.
- the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
- each functional unit in each embodiment of the present invention may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
Abstract
一种对焦方法和装置、图像拍摄方法和装置、图像显示方法和装置及摄像系统(100、200),对焦方法包括:获取第一图像(A),并从第一图像(A)中确定对焦区域(A'),对焦区域(A')包括至少一个像素(S410);基于预设的放大比例,对对焦区域(A')做第一数码变焦处理,以获取第二图像(B),其中,第二图像(B)包括放大后的对焦区域,第二图像(B)的信噪比大于对焦区域(A')的信噪比(S420);基于第二图像(B),进行对焦处理(S430),从而,能够提高对焦速度和对焦精度。
Description
版权申明
本专利文件披露的内容包含受版权保护的材料。该版权为版权所有人所有。版权所有人不反对任何人复制专利与商标局的官方记录和档案中所存在的该专利文件或者该专利披露。
本发明涉及图像处理领域,并且更具体地,涉及一种对焦方法和装置、图像拍摄方法和装置、图像显示方法和装置及摄像系统。
目前,摄像设备已普遍具有自动对焦(Auto Focus,AF)功能,其中,对比度对焦法作为实现自动对焦功能的技术,已得到普遍应用。
对比度对焦法是通过检测图像中对焦区域(具体地说,是对焦区域对应的图像区域中的景物)的轮廓边缘实现自动对焦的。对焦目标的轮廓边缘越清晰,则它的亮度梯度就越大,或者说边缘处景物和背景之间的对比度就越大。利用这个原理,在对焦的过程中,在移动对焦点击的同时,采集一系列的图像帧,并计算其对应的对比度值,最后找出对比度值最大的图像帧所对应的对焦电机的位置。
在点对焦过程中,对焦区域是图像中的部分区域,区域范围较小,包括的像素较少,因此,该对焦区域中的信噪比较低,噪声干扰也较大,导致对焦速度较慢,对焦精度较低。
发明内容
本发明实施例提供一种对焦方法和装置、图像拍摄方法和装置、图像显示方法和装置及摄像系统，能够提高对焦速度和对焦精度。
第一方面,提供了一种对焦方法,包括:获取第一图像,并从该第一图像中确定对焦区域,该对焦区域包括至少一个像素;基于预设的放大比例,对该对焦区域做第一数码变焦处理,以获取第二图像,其中,该第二图像包括放大后的对焦区域,该第二图像的信噪比大于该对焦区域的信噪比;基于
该第二图像,进行对焦处理。
第二方面,提供了一种对焦装置,包括:图像获取单元,用于获取第一图像;对焦处理单元,用于从所述第一图像中确定对焦区域,所述对焦区域包括至少一个像素,用于基于预设的放大比例,对所述对焦区域做第一数码变焦处理,以获取第二图像,其中,所述第二图像包括放大后的对焦区域,所述第二图像的信噪比大于所述对焦区域的信噪比,用于基于所述第二图像,进行对焦处理。
第三方面,提供了一种图像拍摄方法,包括:基于第一数码变焦倍数和第一焦距拍摄并呈现第一图像,并从所述第一图像中确定对焦区域,所述对焦区域包括至少一个像素,且所述对焦区域包括所述第一图像中的部分像素;基于预设的放大比例,对所述对焦区域做第一数码变焦处理,以获取并呈现第二图像,其中,所述第二图像包括放大后的对焦区域,所述第二图像的信噪比大于所述对焦区域的信噪比,经过所述数码变焦处理后的数码变焦倍数为第二数码变焦倍数;基于所述第二图像,进行对焦处理,以确定第二焦距;根据预设的缩小比例进行第二数码变焦处理,以使当前使用的数码变焦倍数从所述第二数码变焦倍数变更至所述第一数码变焦倍数,其中,所述缩小比例与所述放大比例相对应;基于所述第一数码变焦倍数和所述第二焦距拍摄并呈现第三图像。
第四方面,提供了一种图像拍摄装置,包括:图像拍摄单元,用于基于第一数码变焦倍数和第一焦距拍摄第一图像,用于基于所述第一数码变焦倍数和对焦处理单元确定的第二焦距拍摄第三图像;对焦处理单元,用于从所述第一图像中确定对焦区域,所述对焦区域包括至少一个像素,且所述对焦区域包括所述第一图像中的部分像素,用于基于预设的放大比例,对所述对焦区域做第一数码变焦处理,以获取第二图像,其中,所述第二图像包括放大后的对焦区域,所述第二图像的信噪比大于所述对焦区域的信噪比,图像拍摄单元使用的经过所述数码变焦处理后的数码变焦倍数为第二数码变焦倍数,用于基于所述第二图像,进行对焦处理,以确定第二焦距,用于根据预设的缩小比例进行第二数码变焦处理,以使图像拍摄单元使用的数码变焦倍数从所述第二数码变焦倍数变更至所述第一数码变焦倍数,其中,所述缩小比例与所述放大比例相对应。
第五方面,提供了一种图像显示方法,包括:获取并呈现第一图像,该
第一图像是基于第一数码变焦倍数和第一焦距拍摄的图像;获取并呈现第二图像,其中,该第二图像是在该第一图像中的对焦区域被基于预设的放大比例做第一数码变焦处理后获得的图像,该对焦区域包括至少一个像素,该第二图像包括放大后的对焦区域,该第二图像的信噪比大于该对焦区域的信噪比;获取并呈现第三图像,其中,该第三图像是基于该第一数码变焦倍数和第二焦距拍摄的图像,该第二焦距是经过基于该第二图像的对焦处理而确定的焦距。
第六方面,提供了一种图像显示装置,包括:获取单元,用于在第一时段从与所述图像显示装置通信连接的图像拍摄装置获取第一图像,在第二时段从与所述图像显示装置通信连接的图像处理装置获取第二图像,在第三时段从所述图像拍摄装置获取第三图像,其中,所述第一图像是基于第一数码变焦倍数和第一焦距拍摄的图像,所述第二图像是在所述第一图像中的对焦区域被基于预设的放大比例做第一数码变焦处理后获得的图像,所述对焦区域包括至少一个像素,所述第二图像包括放大后的对焦区域,所述第二图像的信噪比大于所述对焦区域的信噪比,所述第三图像是基于所述第一数码变焦倍数和第二焦距拍摄的图像,所述第二焦距是经过基于所述第二图像的对焦处理而确定的焦距;呈现单元,用于在所述第一时段呈现所述第一图像,在所述第二时段呈现所述第二图像,在所述第三时段呈现所述第三图像。
第七方面,提供了一种摄像系统,包括:摄像机构,用于拍摄第一图像;处理器,用于从所述摄像机构获取所述第一图像,从所述第一图像中确定对焦区域,基于预设的放大比例,对所述对焦区域做第一数码变焦处理,以获取第二图像,其中,所述对焦区域包括至少一个像素,且所述对焦区域包括所述第一图像中的部分像素,所述第二图像包括放大后的对焦区域,所述第二图像的信噪比大于所述对焦区域的信噪比,用于基于所述第二图像,进行对焦处理。
第八方面,提供了一种摄像系统,包括:摄像机构,用于基于第一数码变焦倍数和第一焦距拍摄第一图像,基于所述第一数码变焦倍数和处理器确定的第二焦距获取第三图像;处理器,用于从所述摄像机构获取所述第一图像,从所述第一图像中确定对焦区域,基于预设的放大比例,对所述对焦区域做第一数码变焦处理,以获取第二图像,基于所述第二图像,进行对焦处理,以确定所述第二焦距,根据预设的缩小比例进行第二数码变焦处理,以
使所述摄像机构使用的数码变焦倍数从所述第二数码变焦倍数变更至所述第一数码变焦倍数,所述对焦区域包括至少一个像素,且所述对焦区域包括所述第一图像中的部分像素,用于,其中,所述第二图像包括放大后的对焦区域,所述第二图像的信噪比大于所述对焦区域的信噪比,经过所述数码变焦处理后的数码变焦倍数为第二数码变焦倍数,用于,其中,所述缩小比例与所述放大比例相对应;所述显示器,用于在第一时段呈现所述第一图像,在第二时段呈现所述第二图像,在第三时段呈现所述第三图像。
第九方面,提供了一种计算机程序产品,所述计算机程序产品包括:计算机程序代码,当所述计算机程序代码被对焦装置运行时,使得所述网络设备执行上述第一方面及其各种实现方式中的任一种对焦方法。
第十方面,提供了一种计算机程序产品,所述计算机程序产品包括:计算机程序代码,当所述计算机程序代码被图像拍摄装置运行时,使得所述网络设备执行上述第三方面及其各种实现方式中的任一种图像拍摄方法。
第十一方面,提供了一种计算机程序产品,所述计算机程序产品包括:计算机程序代码,当所述计算机程序代码被图像显示装置运行时,使得所述网络设备执行上述第五方面及其各种实现方式中的任一种图像显示方法。
第十二方面,提供了一种计算机可读存储介质,所述计算机可读存储介质存储有程序,所述程序使得对焦装置执行上述第一方面及其各种实现方式中的任一种对焦方法
第十三方面,提供了一种计算机可读存储介质,所述计算机可读存储介质存储有程序,所述程序使得图像拍摄装置执行上述第三方面及其各种实现方式中的任一种图像拍摄方法。
第十四方面,提供了一种计算机可读存储介质,所述计算机可读存储介质存储有程序,所述程序使得图像显示装置执行上述第五方面及其各种实现方式中的任一种图像显示方法。
根据本发明的实施例的对焦方法和装置和系统、图像拍摄方法和装置、图像呈现系统和装置,及摄像系统,通过对对焦区域做第一数码变焦处理进而放大该对焦区域,能够使放大后的对焦区域的信噪比增大,进而能够使评价值变化曲线平滑,有利于针对评价值变化曲线的峰值查找,加快峰值查找的速度,提高峰值查找的准确性,进而提高对焦速度和对焦精度。
为了更清楚地说明本发明实施例的技术方案,下面将对本发明实施例中所需要使用的附图作简单地介绍,显而易见地,下面所描述的附图仅仅是本发明的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
图1是本发明实施例的摄像系统的一例的示意性结构图。
图2是本发明实施例的摄像系统的另一例的示意性结构图。
图3是本发明实施例的无人飞行系统的示意性结构图。
图4是本发明实施例的对焦方法的示意性流程图。
图5是本发明实施例中α像素#1~α像素#N的位置的示意图。
图6是本发明实施例的图像拍摄方法的示意性流程图。
图7是本发明实施例的图像显示方法的示意性流程图。
图8是本发明实施例的图像拍摄方法和图像显示方法的示意性交互图。
图9是现有技术的对焦处理中的评价值变化曲线的示意图。
图10是本发明实施例的对焦处理中的评价值变化曲线的示意图。
图11是现有技术的对焦处理中的评价值变化曲线的峰值点附近数值变化情况的示意图。
图12是本发明实施例的对焦处理中的评价值变化曲线的峰值点附近数值变化情况的示意图。
图13是本发明实施例的对焦装置的示意性框图。
图14是本发明实施例的图像拍摄装置的示意性框图。
图15是本发明实施例的图像显示装置的示意性框图。
下面将结合本发明实施例中的附图,对本发明实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例是本发明的一部分实施例,而不是全部实施例。基于本发明中的实施例,本领域普通技术人员在没有做出创造性劳动的前提下所获得的所有其他实施例,都应属于本发明保护的范围。
首先,结合图1至图3对本发明实施例的对焦方法的摄像系统100的结构进行详细说明。
图1是本发明一实施例的摄像系统100的一例的示意图。如图1所示,
该摄像系统100包括摄像机构110和处理器120。
其中,摄像机构110用于获取图像,其中,该摄像机构110的焦距可调节。并且,在本发明实施例中,该摄像机构110可以是现有技术中能够拍摄图像的各种摄像机构,例如,摄像头。
另外,在本发明实施例中,该摄像机构110中可以配置有自动对焦机构,该自动对焦机构与后述处理器120通信连接,能够接收来自处理器120的控制指令,并基于该控制指令,调节焦距。
作为示例而非限定,在本发明实施例中,该自动对焦机构可是将摄像头锁入音圈马达来实现的,音圈马达(Voice Coil Motor,VCM)主要由线圈、磁铁组和弹片构成,线圈通过上下两个弹片固定在磁铁组内,当线圈通电时,线圈会产生磁场,线圈磁场和磁石组相互作用,线圈会向上移动,而锁在线圈里的摄像头便一起移动,当断电时,线圈在弹片弹力下返回,这样就实现了自动对焦功能。并且,此情况下,来自处理器120的指令可以用于控制上述线圈的通电。
作为示例而非限定,在本发明实施例中,该摄像机构110中可以配置有电荷耦合元件(Charge-coupled Device,CCD)。CCD也可以称为CCD图像传感器或CCD图像控制器。CCD是一种半导体器件,能够把光学影像转化为电信号。CCD上植入的微小光敏物质称作像素(Pixel)。一块CCD上包含的像素数越多,其提供的画面分辨率也就越高。CCD的作用就像胶片一样,但它是把光信号转换成电荷信号。CCD上有许多排列整齐的光电二极管,能感应光线,并将光信号转变成电信号,经外部采样放大及模数转换电路转换成数字图像信号。
作为示例而非限定,在本发明实施例中,该摄像机构110中可以配置有互补金属氧化物半导体(Complementary Metal Oxide Semiconductor,CMOS)。CMOS是一种大规模应用于集成电路芯片制造的原料,并且CMOS制造工艺也被应用于制作数码影像器材的感光元件。CCD与CMOS图像传感器光电转换的原理相同,他们最主要的差别在于信号的读出过程不同;由于CCD仅有一个(或少数几个)输出节点统一读出,其信号输出的一致性非常好;而CMOS芯片中,每个像素都有各自的信号放大器,各自进行电荷-电压的转换,其信号输出的一致性较差。但是CCD为了读出整幅图像信号,要求输出放大器的信号带宽较宽,而在CMOS芯片中,每个像元中的
放大器的带宽要求较低,大大降低了芯片的功耗,这就是CMOS芯片功耗比CCD要低的主要原因。尽管降低了功耗,但是数以百万的放大器的不一致性却带来了更高的固定噪声,这又是CMOS相对CCD的固有劣势。
应理解,以上列举的摄像机构110的具体机构仅为示例性说明,本发明并未限定于此,其他能够调节焦距的摄像装置均落入本发明的保护范围内。
处理器120与摄像机构110通信连接,能够获取摄像机构110拍摄的图像,并基于该图像,生成控制指令,该控制指令用于控制摄像机构110(例如,摄像机构110中的自动对焦机构)的对焦。随后,对该处理器120基于从摄像机构110获取的图像生成控制指令进而完成摄像机构110的对焦的具体过程进行详细说明。
作为示例而非限定,在本发明实施例中,该处理器可以是中央处理单元(Central Processing Unit,CPU),也可以是其他通用处理器、数字信号处理器(DSP)、专用集成电路(ASIC)、现成可编程门阵列(Field Programmable Gate Array,FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件等。并且,上述通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等。并且,上述FPGA是在例如,可编程阵列逻辑(PAL,Programmable Array Logic)、通用阵列逻辑(GAL,Generic Array Logic)、复杂可编程逻辑器件(CPLD,Complex Programmable Logic Device)等可编程器件的基础上进一步发展的产物。它是作为专用集成电路(ASIC,Application Specific Integrated Circuit)领域中的一种半定制电路而出现的,既解决了定制电路的不足,又克服了原有可编程器件门电路数有限的缺点。系统设计师可以根据需要通过可编辑的连接把FPGA内部的逻辑块连接起来,就好像一个电路试验板被放在了一个芯片里。一个出厂后的成品FPGA的逻辑块和连接可以按照设计者而改变,所以FPGA可以完成所需要的逻辑功能。
需要说明的是,在本发明实施例中,该摄像系统100可以直接通过摄像机构110的CCD或CMOS进行取景和对焦。
图2是本发明一实施例的摄像系统200的一例的示意图。如图2所示，与上述摄像系统100相似，该摄像系统200包括摄像机构210和处理器220。其中，摄像机构210的功能和结构可以与上述摄像机构110相似，处理器220的功能和结构可以与上述处理器120相似，这里为了避免赘述，省略其详
细说明。并且,与上述摄像系统100不同的是,该摄像系统200还可以配置有显示器230。作为示例而非限定,在本发明实施例中,该显示器230可以是液晶显示器。
其中，该显示器230可以与该摄像机构210通信连接，能够获取摄像机构210拍摄的图像，并呈现该图像。或者，该显示器230可以与该处理器220通信连接，并经由处理器220获取摄像机构210拍摄的图像，并呈现该图像。
需要说明的是,在本发明实施例中,该处理器220可以具有图像处理功能(例如,放大或缩小等),从而,该显示器230呈现的图像可以是经过处理器220处理后的图像。作为示例而非限定,在本发明实施例中,显示器230可以是用于呈现实时取景(LiveView)功能提供的图像。该实时取景功能可以将摄像机构210取景的画面显示在显示器(例如,液晶显示屏)上面,大大方便了用户取景构图。
需要说明的是,在本发明实施例中,该摄像系统200可以直接通过摄像机构210的CCD或CMOS进行取景和对焦,即,该显示器230可以直接呈现由CCD或CMOS所取景的图像。
另外,在本发明实施例中,处理器220还可以基于由CCD或CMOS所取景的图像进行对焦处理,并在对焦处理期间对该图像做数码变焦处理(即,第一数码变焦处理,随后,对该过程进行详细说明),相应地,该显示器230还可以呈现经过该处理器220处理后的图像。
在本发明实施例中,上述摄像系统100或200中的各部件可以集成配置在同一设备中,并且,该设备可以为例如,照相机、摄像机或具有图像拍摄功能的智能终端设备(例如,手机、平板电脑或笔记本电脑等)。
或者,上述摄像系统100或200中的各部件也可以配置在不同设备中,作为示例而非限定,上述摄像机构可以配置在无人机中,显示器可以配置在能够与该无人机通信连接的控制终端(例如,遥控器或安装有控制程序的智能终端)中,并且,上述处理器可以配置在无人机中,也可以配置在控制终端中,本发明并未特别限定。
具体地说,无人机也可以称为无人飞行器(Unmanned Aerial Vehicle,UAV),已经从军用发展到越来越广泛的民用,例如,UAV植物保护、UAV航空拍摄、UAV森林火警监控等等,而民用化也是UAV未来发展的趋势。
在有些场景下,UAV可以通过载体(carrier)携带用于执行特定任务的
负载(payload)。例如,在利用UAV进行航空拍摄时,UAV可以通过云台携带拍摄设备,即,本发明实施例中的摄像机构。图3是根据本发明的实施例的无人飞行系统300的示意性架构图。本实施例以旋翼飞行器为例进行说明。
无人飞行系统300可以包括UAV 310、云台设备320、显示设备330和操纵设备340。其中,UAV 310可以包括动力系统350、飞行控制系统360、机架370和对焦处理器380。UAV 310可以与操纵设备340和显示设备330进行无线通信。
机架370可以包括机身和脚架(也称为起落架)。机身可以包括中心架以及与中心架连接的一个或多个机臂,一个或多个机臂呈辐射状从中心架延伸出。脚架与机身连接,用于在UAV 310着陆时起支撑作用。
动力系统350可以包括电子调速器(简称为电调)351、一个或多个螺旋桨353以及与一个或多个螺旋桨353相对应的一个或多个电机352,其中电机352连接在电子调速器351与螺旋桨353之间,电机352和螺旋桨353设置在对应的机臂上;电子调速器351用于接收飞行控制器360产生的驱动信号,并根据驱动信号提供驱动电流给电机352,以控制电机352的转速。电机352用于驱动螺旋桨旋转,从而为UAV 310的飞行提供动力,该动力使得UAV 310能够实现一个或多个自由度的运动。在某些实施例中,UAV310可以围绕一个或多个旋转轴旋转。例如,上述旋转轴可以包括横滚轴、平移轴和俯仰轴。应理解,电机352可以是直流电机,也可以交流电机。另外,电机352可以是无刷电机,也可以有刷电机。
飞行控制系统360可以包括飞行控制器361和传感系统362。传感系统362用于测量UAV的姿态信息,即UAV 310在空间的位置信息和状态信息,例如,三维位置、三维角度、三维速度、三维加速度和三维角速度等。传感系统362例如可以包括陀螺仪、电子罗盘、IMU(惯性测量单元,Inertial Measurement,Unit)、视觉传感器、GPS(全球定位系统,Global Positioning System)和气压计等传感器中的至少一种。飞行控制器361用于控制UAV 310的飞行,例如,可以根据传感系统362测量的姿态信息控制UAV 310的飞行。应理解,飞行控制器361可以按照预先编好的程序指令对UAV 310进行控制,也可以通过响应来自操纵设备340的一个或多个控制指令对UAV310进行控制。
云台设备320可以包括电调321和电机322。云台设备320可以用来承载摄像机构323。其中，该摄像机构323的结构和功能可以与上述摄像机构110或210的结构和功能相似，这里，为了避免赘述，省略其详细说明。
飞行控制器361可以通过电调321和电机322控制云台设备320的运动。可选地，作为另一实施例，云台设备320还可以包括控制器，用于通过控制电调321和电机322来控制云台设备320的运动。应理解，云台设备320可以独立于UAV 310，也可以为UAV 310的一部分。应理解，电机322可以是直流电机，也可以是交流电机。另外，电机322可以是无刷电机，也可以是有刷电机。还应理解，载体可以位于飞行器的顶部，也可以位于飞行器的底部。
尽管图中未示出，但是，该无人飞行系统300还可以包括对焦处理器，该对焦处理器用于控制摄像机构323进行对焦，该对焦处理器的结构和功能可以与上述处理器120或220的结构和功能相似，这里，为了避免赘述，省略其详细说明。另外，该对焦处理器可以配置在UAV 310上，也可以配置在操纵设备340或显示设备330中，本发明并未特别限定。
显示设备330位于无人飞行系统300的地面端,可以通过无线方式与UAV 310进行通信,并且可以用于显示UAV 310的姿态信息,还可以用于显示摄像机构323拍摄的图像。应理解,显示设备330可以是独立的设备,也可以设置在操纵设备340中。
操纵设备340位于无人飞行系统300的地面端,可以通过无线方式与UAV 310进行通信,用于对UAV 310进行远程操纵。操纵设备例如可以是遥控器或者安装有控制UAV的APP(应用程序,Application)的终端设备,例如,智能手机、平板电脑等。本发明的实施例中,通过操纵设备接收用户的输入,可以指通过遥控器上的拔轮、按钮、按键、摇杆等输入装置或者终端设备上的用户界面(UI)对UAV进行操控。
另外,需要说明的是,在本发明实施例中,对焦处理器可以是独立配置的专用处理器,或者,该对焦处理器的功能也可以由无人飞行系统300中的其他设备的处理器(例如,操纵设备340的处理器,或者,摄像机构323的处理器)提供,本发明并未特别限定。
应理解,上述对于无人飞行系统各组成部分的命名仅是出于标识的目的,并不应理解为对本发明的实施例的限制。
下面,对本发明实施例中,摄像机构、处理器和显示器在对焦过程中的动作和交互进行详细说明。
图4是本发明实施例的对焦方法400的示意性流程图。如图4所示,该对焦方法400包括:
S410,获取第一图像,并从该第一图像中确定对焦区域,该对焦区域包括至少一个像素;
S420,基于预设的放大比例,对该对焦区域做第一数码变焦处理,以获取第二图像,其中,该第二图像包括放大后的对焦区域,该第二图像的信噪比大于该对焦区域的信噪比;
S430,基于该第二图像,进行对焦处理。
具体地说,在S410,摄像系统的摄像机构可以基于焦距A进行拍摄并获取图像A(即,第一图像的一例),并将所拍摄的图像A发送给处理器。
处理器在接收到该图像A后,可以从该图像A中,确定用于对焦处理的图像A’(即,对焦区域的一例)。
其中,该图像A’是图像A中位于规定位置的图像,该图像A’包括至少一个像素。
在本发明实施例中,上述规定位置可以是由制造商或用户预先设定的位置,作为示例而非限定,该规定位置可以是图像(例如,图像A)中位于图像的几何中心附近的位置。
或者,在本发明实施例中,上述规定位置也可以是基于拍摄到的景物确定的,例如,在该摄像机构具有特定景物(例如:人脸)识别功能时,该规定区域可以是图像(例如,图像A)中呈现有该特定景物的位置。
另外,在本发明实施例中,摄像系统具有数码变焦功能,该图像A可以是基于上述焦距A和倍率A(即,第一数码变焦倍数的一例)获取的图像,即,该图像A的倍率记做:倍率A。
在S420,处理器可以对该图像A’做数码变焦处理A(即,第一数码变焦处理的一例),其中,该数码变焦处理A是基于预设的放大倍率K(即,放大比例的一例)的数码变焦处理,以生成图像B(即,第二图像的一例)。为了便于理解和说明,在本发明实施例中,将该图像B的倍率记做:倍率B。
其中,该图像B所包括的像素的数量大于图像A’所包括的像素的数量,并且,该图像B的信噪比大于图像A’的信噪比。
在本发明实施例中,该数码变焦处理A是把图像A’内的每个象素面积增大,从而达到放大目的,即,数码变焦处理A是利用影响传感器局部(即,上述对焦区域对应的部分)来成像,相当于将对焦区域从在原图像中取出并放大。
例如,在本发明实施例中,在处理器可以把影像感应器(例如,CCD或CMOS)上的一部份像素(即,上述对焦区域对应的像素)使用插值处理手段做放大。例如,处理器可以对已有像素(即,对焦区域对应的像素)周边的色彩进行判断,并根据周边的色彩情况插入经特殊算法加入的像素。
并且,在本发明实施例中,数码变焦处理A过程中,并没有改变镜头的焦距。
下面,对该数码变焦处理A是的具体方法和过程进行示例性说明。
可选地,该第二图像包括多个原始像素和多个插值像素,其中,该原始像素是该对焦区域中的像素,以及
该基于预设的放大比例,对该对焦区域做第一数码变焦处理,包括:
基于该多个插值像素中的第一插值像素在该第二图像中的位置,从该对焦区域中确定与该第一插值像素相对应的N个参考原始像素,N≥1;
根据该放大比例和该N个参考原始像素的灰度值,确定该第一插值像素的灰度值。
具体地说,在本发明实施例中,图像B中包括两种像素,以下,为了便于理解和区分,记做:α像素(即,原始像素的一例)和β像素(即,插值像素的一例)。
其中,α像素为多个,每个α像素是图像A’中的像素。
β像素为多个,每个β像素是基于上述多个α像素中的部分像素(具体地说,是该部分像素的灰度值)生成的像素。
不失一般性,当放大倍率为K时,如果图像A’包括R·Q个像素,则图像B包括(K·R)·(K·Q)个像素,其中,R为大于或等于2的整数,Q为大于或等于2的整数。
需要说明的是,图像B中的位置为(K·i,K·j)的像素为α像素,即,图像B中的位置为(K·i,K·j)的像素的像素值为图像A’中位置为(i,j)的像素的像素值。其中,i∈[0,R],j∈[0,Q]。
并且,图像B中的位置为(K·i+v,K·j+u)的像素为β像素。其中,i∈[0,
R],j∈[0,Q],v∈[1,R-1],u∈[1,Q-1]。
另外,在本发明实施例中,对于一个β像素(以下,为了便于理解和区分,记做:β像素#1)的确定过程,需要使用N个α像素(以下,为了便于理解和区分,记做:α像素#1~α像素#N)。
下面,对该α像素#1~α像素#N的过程进行说明。
可选地,该基于该多个插值像素中的第一插值像素在该第二图像中的位置,从该至少一个原始像素中确定该第一插值像素所对应的N个参考原始像素,包括:
根据该放大比例和第一插值像素在该第二图像中的位置,从该对焦区域中确定第一参考原始像素,该第一参考原始像素在该对焦区域中的位置与该第一插值像素在该第二图像中的位置相对应;
根据第一参考原始像素,从该根据该对焦区域中确定N-1个第二参考原始像素,其中,该N-1个第二参考原始像素与该第一参考原始像素之间的位置关系满足预设的位置条件。
具体地说,该N个α像素是在图像A’中处于规定位置的像素,并且,该规定位置与该数码变焦处理A的放大倍率K(即,预设的放大比例)相对应。其中,K为大于或等于2的整数。
作为示例而非限定,为了便于理解可区分,以K=2,v=1,u=1时的处理为例,进行说明,即,此情况下,β像素#1在图像B中的位置为(2·i+1,2·j+1),其中,i∈[0,R],j∈[0,Q]。
从而,处理器可以根据放大倍率和β像素#1在图像B中的位置,确定β像素#1在图像A’中的对应像素(即,第一参考原始像素的一例以下,为了便于理解和区分,记做:对应像素#1)。
对应像素#1的坐标为(x,y),则处理器可以根据以下式1确定x的具体值,并根据以下式2确定y的具体值。
即,作为示例而非限定,在本发明实施例中,对于图像B中的位置为(K·i+v,K·j+u)的像素,其在图像A’中的对应像素可以为图像A’中位置为(i,j)的像素。
并且,处理器可以根据以下式3至式8确定β像素#1与对应像素#1之间的空间差距dx,dx2,dx3,dy,dy2和dy3。
dx=1/2·(2·i+1)-x=1/2 式3
dx2=dx·dx=1/22=1/4 式4
dx3=dx2·dx=1/23=1/8 式5
dy=1/2·(2·j+1)-y=1/2 式6
dy2=dy·dy=1/22=1/4 式7
dy3=dy2·dy=1/23=1/8 式8
在本发明实施例中,α像素#1~α像素#N包括该对应像素#1,并且,α像素#1~α像素#N中除该对应像素#1以外的像素(以下,为了便于理解和区分,记做:插值像素)与该对应像素#1的位置关系需要满足预设的位置条件,例如,在横坐标上,该插值像素与对应像素#1之间的距离小于或等于预设的第一距离阈值(例如,2)。且,在纵坐标上,该插值像素与对应像素#1之间的距离小于或等于预设的第二距离阈值(例如,2)。
作为示例而非限定,在本发明实施例中,上述预设的位置条件可以根据上述放大比例确定。
作为示例而非限定,如图5所示,如上所述该对应像素#1的坐标为(i,j),则α像素#1~α像素#N可以为图像A’中的以下像素:
第一像素组:(i-1,j-1),(i,j-1),(i+1,j-1),(i+2,j-1)
第二像素组:(i-1,j),(i,j),(i+1,j),(i+2,j)
第三像素组:(i-1,j+1),(i,j+1),(i+1,j+1),(i+2,j+1)
第四像素组:(i-1,j+2),(i,j+2),(i+1,j+2),(i+2,j+2)
应理解,以上列举的α像素#1~α像素#N仅为示例性说明,本发明并未限定于此,只要使插值像素与该对应像素#1的位置关系需要满足上述预设的条件,例如,在第一距离阈值为2,第二距离阈值为2时,该α像素#1~α像素#N还可以为图像A’中的以下像素:
(i-2,j-1),(i-1,j-1),(i,j-1),(i+1,j-1)
(i-2,j),(i-1,j),(i,j),(i+1,j)
(i-2,j+1),(i-1,j+1),(i,j+1),(i+1,j+1)
(i-2,j+2),(i-1,j+2),(i,j+2),(i+1,j+2)
其后,处理器可以根据如上所述确定的α像素#1~α像素#N的像素值(例
如,灰度值),确定β像素#1的像素值(例如,灰度值)。
具体地说,首先,处理器可以根据以下式9至式13分别获取第一像素组的中间值t0、第二像素组的中间值t1、第三像素组的中间值t2和第四像素组的中间值t3。
a1=-p0/2+(3·p1)/2-(3·p2)/2+p3/2 式9
b1=p0-(5·p1)/2+2·p2-p3/2 式10
c1=-p0/2+p2/2 式11
d1=p1 式12
t=a1·dx3+b1·dx2+c1·dx+d1 式13
即,处理器可以将第一像素组包括的(i-1,j-1)的像素值作为p0,将第一像素组包括的(i,j-1)的像素值作为p1,将第一像素组包括的(i+1,j-1)的像素值作为p2,将第一像素组包括的(i+2,j-1)的像素值作为p3代入上述式9至式13得到t0。
处理器可以将第二像素组包括的(i-1,j)的像素值作为p0,将第一像素组包括的(i,j)的像素值作为p1,将第一像素组包括的(i+1,j)的像素值作为p2,将第一像素组包括的(i+2,j)的像素值作为p3代入上述式9至式13得到t1。
处理器可以将第三像素组包括的(i-1,j+1)的像素值作为p0,将第一像素组包括的(i,j+1)的像素值作为p1,将第一像素组包括的(i+1,j+1)的像素值作为p2,将第一像素组包括的(i+2,j+1)的像素值作为p3代入上述式9至式13得到t2。
处理器可以将第四像素组包括的(i-1,j+2)的像素值作为p0,将第一像素组包括的(i,j+2)的像素值作为p1,将第一像素组包括的(i+1,j+2)的像素值作为p2,将第一像素组包括的(i+2,j+2)的像素值作为p3代入上述式9至式13得到t3。
其后,处理器可以根据如上所述确定的t0~t3,基于以下式14至式18,确定β像素#1的像素值w。
a2=-t0/2+(3·t1)/2-(3·t2)/2+t3/2 式14
b2=t0-(5·t1)/2+2·t2-t3/2 式15
c2=-t0/2+t2/2 式16
d2=t1 式17
w=a2·dy3+b2·dy2+c2·dy+d2 式18
类似地,处理器可以确定图像B中各β像素的像素值,进而,处理器可以确定图像B中各β像素的像素值和α像素的像素值,确定图像B。
在S430,处理器可以基于图像B,进行对焦处理。例如,通过分析正确对焦的图像和离焦图像,有如下规律:当正确对焦时,对比度最强越偏离这个位置,对比度就越低。应用一维CCD元件作对比度检测时,假使将第n个光接收元件的输出设为In,则对比度的评价函数为:
其中,m为一维CCD的总像素数。E在正确对焦的位置上为最大,如果E趋向于Emax,则在焦前、焦后位置,将会随着离焦量的增大而减小。如果离焦偏移特别大时,则E趋向于0。
在本发明实施例中,在由处理器控制的自动对焦系统中,在取景半透反光镜后端有一小尺寸辅助反光镜,它将光束转向照相机的底部,进入检测组件内。经红外滤光片后,在分像棱镜内分成相应两组像,一组相当于焦前等价面S1,另一组相当于焦后等价面S2,S1与S2和成像平面相距l。在S1、S2位置上分别安置两组一维CCD元件。当改变镜头位置时,分别S1和S2得出相应的对比度变化曲线E1和E2。如镜头最初位置处于焦前点,对焦时向焦后方向移动,那么先是对比度曲线E1达到极大值,继续使对比度曲线E2达到极大值.相反,当镜头最初处于焦后位置,先是对比度曲线E2达到极大值、然后E1达到最大值。相应地,镜头处于准确对焦位置时,在焦前与焦后中间,对比度曲线E1和E2相同。E1>E2为焦前位置,E1<E2为焦后位置。
需要说明的是,在本发明实施例中,第一图像可以是一帧图像也可以是多帧图像,本发明并未特别限定,并且,处理器基于多帧图像完成对焦处理的过程可以与现有技术相似,这里,为了避免赘述,省略其详细说明。
由此,经由上述对焦处理,对焦距A进行变更,确定焦距B,其后,摄像机构能够根据该焦距B拍摄图像。
可选地,该对焦方法400还包括:
在获取该第一图像后,呈现该第一图像;
在获取该第二图像后,呈现该第二图像;
此外,在本发明实施例中,该摄像系统中还可以配置有显示器,并在显示器上呈现摄像机构获取的上述图像A,处理器生成的上述图像B。
可选地,该对焦方法400,还包括:
根据预设的缩小比例进行第二数码变焦处理,以使当前使用的数码变焦倍数从第二数码变焦倍数变更至第一数码变焦倍数,其中,该缩小比例与该放大比例相对应,该第一数码变焦倍数是经过该第一数码变焦处理前的数码变焦倍数,该第二数码变焦倍数经过该第一数码变焦处理后的数码变焦倍数。
并且,该第一图像是基于第一焦距获取的图像,经过该对焦处理后的焦距为第二焦距,以及
该对焦方法还包括:
基于该第一数码变焦倍数和该第二焦距获取第三图像,并呈现该第三图像。
具体地说,在本发明实施例中,上述处理器进行的对焦处理可以与显示器呈现liveview的过程同时进行。
例如,在摄像机构在时段A(也可以称为第一时段),获取图像A后,显示器可以在该时段A呈现图像A,其中,该图像A是基于焦距A和倍率A获取的图像,其中,该倍率A可以是摄像系统初始的数码变焦倍率(例如,未经数码变焦处理的倍率,例如,1倍),也可以是用户调节的数码变焦倍率,本发明并未特别限定。即,在时段A,在显示器上可以呈现基于焦距A和倍率A获取的liveview(即,第一图像的一例)。
并且,在自处理器对图像A做数码变焦处理A而生成图像B之后,至处理器确定焦距B之前的时段,为了便于理解和区分,记做:时段B(也可以称为第二时段),显示器可以呈现该图像B,即,在处理器进行上述基于放大比例K的数码变焦处理后,倍率变更为倍率B,从而,在时段B,在显示器上呈现的liveview可以是基于焦距A和倍率B获取的liveview(即,第二图像的一例)。
并且,在处理器完成对焦处理而确定焦距B之后,为了便于理解和区分,记做:时段C(也可以称为第三时段),处理器可以将数码变焦处理的倍率从倍率B调整至倍率A,在显示器上呈现的liveview可以是基于焦距B和倍率A获取的liveview(即,第三图像的一例)。
从而,用户能够通过显示器,在liveview上识别出该对焦处理过程中图像的变化情况,能够提高用户在拍摄过程体验和参与度。
根据本发明的实施例的对焦方法通过对对焦区域做第一数码变焦处理进而放大该对焦区域,能够使放大后的对焦区域的信噪比增大,进而能够使评价值变化曲线平滑,有利于针对评价值变化曲线的峰值查找,加快峰值查找的速度,提高峰值查找的准确性,进而提高对焦速度和对焦精度。
图6是本发明实施例的图像拍摄方法500的示意性流程图。如图6所示,该图像拍摄方法500包括:
S510,基于第一数码变焦倍数和第一焦距拍摄并呈现第一图像,并从该第一图像中确定对焦区域,该对焦区域包括至少一个像素,且该对焦区域包括该第一图像中的部分像素;
S520,基于预设的放大比例,对该对焦区域做第一数码变焦处理,以获取并呈现第二图像,其中,该第二图像包括放大后的对焦区域,该第二图像的信噪比大于该对焦区域的信噪比,经过该数码变焦处理后的数码变焦倍数为第二数码变焦倍数;
S530,基于该第二图像,进行对焦处理,以确定第二焦距;
S540,根据预设的缩小比例进行第二数码变焦处理,以使当前使用的数码变焦倍数从该第二数码变焦倍数变更至该第一数码变焦倍数,其中,该缩小比例与该放大比例相对应;
S550,基于该第一数码变焦倍数和该第二焦距拍摄并呈现第三图像。
具体地说,在S510,摄像系统的摄像机构可以基于焦距A进行拍摄并获取图像A(即,第一图像的一例),并将所拍摄的图像A发送给处理器。设该图像A是基于倍率A获取的图像,其中,该倍率A可以是摄像系统预设的倍率,也可以是用户调节的倍率,本发明并未特别限定。
需要说明的是,在S510中处理器和摄像机构的动作和处理获取可以与上述方法400中,处理器和摄像机构在S410中执行的动作相似,这里,为了避免赘述,省略其详细说明。
并且,在处理器进行针对图像A的变焦处理之前(即,时段A),在显示器上可以呈现基于焦距A和倍率A获取的liveview(即,第一图像的一例)。
另外,该图像A可以是显示器从处理器获取的(例如,上述倍率A为非1倍的情况下),也可以是显示器从摄像机构获取的(例如,上述倍率A为1倍的情况下),本发明并未特别限定。
在S520,处理器可以对该图像A’进行第一数码变焦处理,以生成图像
B(即,第二图像的一例)。设经过第一变焦处理后的倍率为倍率B,则该图像B是基于焦距A和倍率B获取的图像。
并且,在S520中处理器的动作和处理获取可以与上述方法400中,处理器在S420中执行的动作相似,这里,为了避免赘述,省略其详细说明。
在S530,处理器可以基于图像B,进行对焦处理。设经由上述对焦处理,对焦距A进行变更,确定焦距B。
并且,在S530中处理器的动作和处理获取可以与上述方法400中,处理器在S430中执行的动作相似,这里,为了避免赘述,省略其详细说明。
并且,在自处理器对图像A做数码变焦处理A而生成图像B之后,至处理器确定焦距B之前的时段(即,时段B),在显示器上呈现的liveview可以是基于焦距A和倍率B获取的liveview(即,第二图像的一例)。
在S540,处理器可以将数码变焦被倍率从倍率B调节至倍率A。
在S550，摄像系统的摄像机构可以基于焦距B进行拍摄并获取图像C（即，第三图像的一例），则该图像C是基于倍率A获取的图像。
即,在处理器完成对焦处理而确定焦距B之后的时段(即,时段C),处理器可以将数码变焦处理的倍率从倍率B调整至倍率A,在显示器上呈现的liveview可以是基于焦距B和倍率A获取的liveview(即,第三图像的一例)。
从而,用户能够通过显示器,在liveview上识别出该对焦处理过程中图像的变化情况,能够提高用户在拍摄过程体验和参与度。
图7是从显示器或者显示装置角度描述的本发明实施例的图像显示方法600的示意性流程图。如图7所示,该图像拍摄方法600包括:
S610,获取并呈现第一图像,所述第一图像是基于第一数码变焦倍数和第一焦距拍摄的图像;
S620,获取并呈现第二图像,其中,所述第二图像是在所述第一图像中的对焦区域被基于预设的放大比例做第一数码变焦处理后获得的图像,所述对焦区域包括至少一个像素,所述第二图像包括放大后的对焦区域,所述第二图像的信噪比大于所述对焦区域的信噪比;
S630，获取并呈现第三图像，其中，所述第三图像是基于所述第一数码变焦倍数和第二焦距拍摄的图像，所述第二焦距是经过基于所述第二图像的对焦处理而确定的焦距。
具体地说,摄像系统的摄像机构可以基于焦距A进行拍摄并获取图像A
(即,第一图像的一例),并将所拍摄的图像A发送给处理器。
从而,在S610,该显示器能够获取并呈现该图像A。即,在处理器进行针对图像A的变焦处理之前(即,时段A),在显示器上可以呈现基于焦距A和倍率A获取的liveview(即,第一图像的一例)。
设该图像A是基于倍率A获取的图像,其中,该倍率A可以是摄像系统预设的倍率,也可以是用户调节的倍率,本发明并未特别限定。
需要说明的是,处理器和摄像机构的动作和处理获取可以与上述方法400中,处理器和摄像机构在S410中执行的动作相似,这里,为了避免赘述,省略其详细说明。
另外,该图像A可以是显示器从处理器获取的(例如,上述倍率A为非1倍的情况下),也可以是显示器从摄像机构获取的(例如,上述倍率A为1倍的情况下),本发明并未特别限定。
处理器可以对图像A’进行第一数码变焦处理,以生成图像B(即,第二图像的一例)。设经过第一变焦处理后的倍率为倍率B,则该图像B是基于焦距A和倍率B获取的图像。
从而,在S620显示器可以获取并呈现该图像B,即,在自处理器对图像A做数码变焦处理A而生成图像B之后,至处理器确定焦距B之前的时段(即,时段B),在显示器上呈现的liveview可以是基于焦距A和倍率B获取的liveview(即,第二图像的一例)。
并且,处理器的动作和处理获取可以与上述方法400中,处理器在S420中执行的动作相似,这里,为了避免赘述,省略其详细说明。
其后,处理器可以基于图像B,进行对焦处理。设经由上述对焦处理,对焦距A进行变更,确定焦距B。
并且,处理器的动作和处理获取可以与上述方法400中,处理器在S430中执行的动作相似,这里,为了避免赘述,省略其详细说明。
其后,处理器可以将数码变焦被倍率从倍率B调节至倍率A。
摄像系统的摄像机构可以基于焦距B进行拍摄并获取图像C（即，第三图像的一例），则该图像C是基于倍率A获取的图像。
即,在处理器完成对焦处理而确定焦距B之后的时段(即,时段C),处理器可以将数码变焦处理的倍率从倍率B调整至倍率A,
从而,在S630,显示器可以获取并呈现该图像C,即,在显示器上呈现的
liveview可以是基于焦距B和倍率A获取的liveview(即,第三图像的一例)。
从而,用户能够通过显示器,在liveview上识别出该对焦处理过程中图像的变化情况,能够提高用户在拍摄过程体验和参与度。
图8是本发明实施例的图像拍摄过程和图像呈现700的示意性交互图。如图8所示,在S710,摄像机构和处理器可以基于焦距A和倍率A获取图像A,并将该图像A发送给显示器,显示器可以呈现该图像A。
在S720,处理器可以对图像A中的对焦区域(例如,上述图像A’)进行基于放大比例K的数码变焦处理(即,放大处理的一例),以获取图像B,设数码变焦处理后的倍率为倍率B,则倍率B=K·倍率A。
在S730,处理器可以将该图像B发送给显示器,并且,显示器可以呈现该图像B。
在S740,处理器可以基于该图像B进行对焦处理,以确定焦距B,并且,在确定焦距B之后,可以将当前的数码变焦倍率从倍率B调节至倍率A。
在S750,摄像机构和处理器可以基于焦距B和倍率A获取图像C,并将该图像C发送给显示器,显示器可以呈现该图像C。
根据本发明的实施例的图像拍摄方法,通过对对焦区域做第一数码变焦处理进而放大该对焦区域,能够使放大后的对焦区域的信噪比增大,进而能够使评价值变化曲线平滑,有利于针对评价值变化曲线的峰值查找,加快峰值查找的速度,提高峰值查找的准确性,进而提高对焦速度和对焦精度。
如图9和图11所示,由于现有技术中对焦区域较小,因此信噪比较小,导致对焦处理过程中使用的对比度变化曲线的波动较大,或者说,平滑性较差,因此导致对焦过程(主要是峰值查找过程)的时间较长,准确性较差。
与此相对,如图10和图12所示,在本发明实施例中,通过对对焦区域做第一数码变焦处理,能够使该对焦区域的面积增大(具体地说,是包括的像素增多)并且,由于增加的像素是基于原图像中规定的位置的像素,采用插值算法确定的,因此,能够有效减小增加的像素中的噪声,因此,能够提高经过数码变焦处理后的对焦区域的信噪比,从而减小对焦处理过程中使用的对比度变化曲线的波动,提高对比度变化曲线的平滑性,从而缩短对焦过程(主要是峰值查找过程)的时间,提高对焦过程(主要是峰值查找过程)的准确性。
需要说明的是,图9至图12中,横坐标可以是对焦电机的位置,纵坐
标可以是对应于该对焦电机位置所采集到的图像的对焦区域中计算出的对比度值。
以上,结合图1至图12详细说明了根据本发明实施例的对焦方法和图像拍摄方法,下面,结合图13至图15详细说明根据本发明实施例的对焦装置、图像拍摄装置和图像显示装置。
图13是本发明实施例的对焦装置800的示意性框图。如图13所示,该对焦装置800包括:
图像获取单元810,用于获取第一图像;
对焦处理单元820,用于从该第一图像中确定对焦区域,该对焦区域包括至少一个像素,用于基于预设的放大比例,对该对焦区域做第一数码变焦处理,以获取第二图像,其中,该第二图像包括放大后的对焦区域,该第二图像的信噪比大于该对焦区域的信噪比,用于基于该第二图像,进行对焦处理。
可选地,该第二图像包括多个原始像素和多个插值像素,其中,该原始像素是该对焦区域中的像素,以及
该对焦处理单元820具体用于基于该多个插值像素中的第一插值像素在该第二图像中的位置,从该对焦区域中确定与该第一插值像素相对应的N个参考原始像素,N≥1,用于根据该放大比例和该N个参考原始像素的灰度值,确定该第一插值像素的灰度值。
可选地,该对焦处理单元820具体用于根据该放大比例和第一插值像素在该第二图像中的位置,从该对焦区域中确定第一参考原始像素,该第一参考原始像素在该对焦区域中的位置与该第一插值像素在该第二图像中的位置相对应,用于根据第一参考原始像素,从该根据该对焦区域中确定N-1个第二参考原始像素,其中,该N-1个第二参考原始像素与该第一参考原始像素之间的位置关系满足预设的位置条件。
可选地,该对焦区域包括二维排列的X·Y个像素,其中,X≥1,Y≥1,以及
该对焦处理单元820具体用于如果该放大比例为1:M,则当第一插值像素在该第二图像中的位置坐标为(M·i+1,M·j+1)时,该第一参考原始像素是在从该对焦区域中的位置坐标为(i,j)的像素,其中,i∈[0,X-1],i∈[0,Y-1]。
可选地,X≥4,Y≥4,且该N-1个第二参考原始像素点包括该第二图像中以下位置坐标的像素:
(i-1,j-1),(i,j-1),(i+1,j-1),(i+2,j-1),
(i-1,j),(i,j),(i+1,j),(i+2,j),
(i-1,j+1),(i,j+1),(i+1,j+1),(i+2,j+1),
(i-1,j+2),(i,j+2),(i+1,j+2),(i+2,j+2)。
可选地,该对焦装置还包括:
图像显示单元,用于呈现该第一图像和该第二图像。
可选地,该对焦处理单元820还用于根据预设的缩小比例进行第二数码变焦处理,以使当前使用的数码变焦倍数从第二数码变焦倍数变更至第一数码变焦倍数,其中,该缩小比例与该放大比例相对应,该第一数码变焦倍数是经过该第一数码变焦处理前的数码变焦倍数,该第二数码变焦倍数经过该第一数码变焦处理后的数码变焦倍数。
可选地,该第一图像是基于第一焦距获取的图像,经过该对焦处理后的焦距为第二焦距,以及
该图像获取单元810还用于基于该第一数码变焦倍数和该第二焦距获取第三图像;
该图像呈现单元还用于呈现该第三图像。
根据本发明实施例的对焦装置800的各单元即模块和上述其他操作和/或功能分别为了实现图4中的方法400中的相应流程,为了简洁,在此不再赘述。
根据本发明的实施例的对焦装置,通过对对焦区域做第一数码变焦处理进而放大该对焦区域,能够使放大后的对焦区域的信噪比增大,进而能够使评价值变化曲线平滑,有利于针对评价值变化曲线的峰值查找,加快峰值查找的速度,提高峰值查找的准确性,进而提高对焦速度和对焦精度。
图14是本发明实施例的图像拍摄装置900的示意性框图。如图14所示,该图像拍摄装置900,包括:
图像拍摄单元910,用于基于第一数码变焦倍数和第一焦距拍摄第一图像,用于基于该第一数码变焦倍数和对焦处理单元820确定的第二焦距拍摄第三图像;
对焦处理单元920,用于从该第一图像中确定对焦区域,该对焦区域包
括至少一个像素,且该对焦区域包括该第一图像中的部分像素,用于基于预设的放大比例,对该对焦区域做第一数码变焦处理,以获取第二图像,其中,该第二图像包括放大后的对焦区域,该第二图像的信噪比大于该对焦区域的信噪比,图像拍摄单元910使用的经过该数码变焦处理后的数码变焦倍数为第二数码变焦倍数,用于基于该第二图像,进行对焦处理,以确定第二焦距,用于根据预设的缩小比例进行第二数码变焦处理,以使图像拍摄单元910使用的数码变焦倍数从该第二数码变焦倍数变更至该第一数码变焦倍数,其中,该缩小比例与该放大比例相对应。
图像呈现单元930,用于呈现该第一图像、该第二图像和该第三图像。
可选地,该第二图像包括多个原始像素和多个插值像素,其中,该原始像素是该对焦区域中的像素,以及
对焦处理单元920具体用于基于该多个插值像素中的第一插值像素在该第二图像中的位置,从该对焦区域中确定与该第一插值像素相对应的N个参考原始像素,N≥1,用于根据该放大比例和该N个参考原始像素的灰度值,确定该第一插值像素的灰度值。
可选地,对焦处理单元920具体用于根据该放大比例和第一插值像素在该第二图像中的位置,从该对焦区域中确定第一参考原始像素,该第一参考原始像素在该对焦区域中的位置与该第一插值像素在该第二图像中的位置相对应,用于根据第一参考原始像素,从该根据该对焦区域中确定N-1个第二参考原始像素,其中,该N-1个第二参考原始像素与该第一参考原始像素之间的位置关系满足预设的位置条件。
可选地,该对焦区域包括二维排列的X·Y个像素,其中,X≥1,Y≥1,以及
对焦处理单元920具体用于如果该放大比例为1:M,则当第一插值像素在该第二图像中的位置坐标为(M·i+1,M·j+1)时,该第一参考原始像素是在从该对焦区域中的位置坐标为(i,j)的像素,其中,i∈[0,X-1],i∈[0,Y-1]。
可选地,X≥4,Y≥4,且该N-1个第二参考原始像素点包括该第二图像中以下位置坐标的像素:
(i-1,j-1),(i,j-1),(i+1,j-1),(i+2,j-1),
(i-1,j),(i,j),(i+1,j),(i+2,j),
(i-1,j+1),(i,j+1),(i+1,j+1),(i+2,j+1),
(i-1,j+2),(i,j+2),(i+1,j+2),(i+2,j+2)。
根据本发明实施例的图像拍摄装置900的各单元即模块和上述其他操作和/或功能分别为了实现图6中的方法500中的相应流程,为了简洁,在此不再赘述。
根据本发明的实施例的图像拍摄装置,通过对对焦区域做第一数码变焦处理进而放大该对焦区域,能够使放大后的对焦区域的信噪比增大,进而能够使评价值变化曲线平滑,有利于针对评价值变化曲线的峰值查找,加快峰值查找的速度,提高峰值查找的准确性,进而提高对焦速度和对焦精度。
图15是本发明实施例的图像显示装置1000的示意性框图。如图15所示,该图像显示装置1000,包括:
获取单元1010,用于在第一时段从与该图像显示装置通信连接的图像拍摄装置获取第一图像,在第二时段从与该图像显示装置通信连接的图像处理装置获取第二图像,在第三时段从该图像拍摄装置获取第三图像,其中,该第一图像是基于第一数码变焦倍数和第一焦距拍摄的图像,该第二图像是在该第一图像中的对焦区域被基于预设的放大比例做第一数码变焦处理后获得的图像,该对焦区域包括至少一个像素,该第二图像包括放大后的对焦区域,该第二图像的信噪比大于该对焦区域的信噪比,该第三图像是基于该第一数码变焦倍数和第二焦距拍摄的图像,该第二焦距是经过基于该第二图像的对焦处理而确定的焦距;
呈现单元1020,用于在该第一时段呈现该第一图像,在该第二时段呈现该第二图像,在该第三时段呈现该第三图像。
可选地,该第二图像包括多个原始像素和多个插值像素,其中,该原始像素是该对焦区域中的像素,以及
该多个插值像素中的第一插值像素的灰度值是根据该放大比例和N个参考原始像素的灰度值确定的,该N个参考原始像素是该对焦区域中与该第一插值像素相对应的像素,该N个参考原始像素是基于该第一插值像素在该第二图像中的位置从该对焦区域中确定的,N≥1。
可选地,该N个参考原始像素包括第一参考原始像素和N-1个第二参考原始像素,该第一参考原始像素是根据该放大比例和第一插值像素在该第二图像中的位置从该对焦区域中确定的,该第一参考原始像素在该对焦区域中
的位置与该第一插值像素在该第二图像中的位置相对应,该N-1个第二参考原始像素是根据第一参考原始像素从该根据该对焦区域中确定的,其中,该N-1个第二参考原始像素与该第一参考原始像素之间的位置关系满足预设的位置条件。
可选地,该对焦区域包括二维排列的X·Y个像素,其中,X≥1,Y≥1,以及
如果该放大比例为1:M,则当第一插值像素在该第二图像中的位置坐标为(M·i+1,M·j+1)时,该第一参考原始像素是在从该对焦区域中的位置坐标为(i,j)的像素,其中,i∈[0,X-1],i∈[0,Y-1]。
可选地,X≥4,Y≥4,且该N-1个第二参考原始像素点包括该第二图像中以下位置坐标的像素:
(i-1,j-1),(i,j-1),(i+1,j-1),(i+2,j-1),
(i-1,j),(i,j),(i+1,j),(i+2,j),
(i-1,j+1),(i,j+1),(i+1,j+1),(i+2,j+1),
(i-1,j+2),(i,j+2),(i+1,j+2),(i+2,j+2)。
可选地,该图像拍摄装置配置于第一设备,该图像显示装置配置于第二设备,该第一设备和该第二设备之间能够进行有线通信或无线通信。
可选地,该第一设备为无人机,该第二设备为终端设备或遥控器。
需要说明的是,本发明实施例提供的对焦方法和装置、图像拍摄方法和装置、图像显示方法和装置,及摄像系统,可以应用于计算机上,该计算机包括硬件层、运行在硬件层之上的操作系统层,以及运行在操作系统层上的应用层。该硬件层包括CPU(Central Processing Unit)、内存管理单元(MMU,Memory Management Unit)和内存(也称为主存)等硬件。该操作系统可以是任意一种或多种通过进程(Process)实现业务处理的计算机操作系统,例如,Linux操作系统、Unix操作系统、Android操作系统、iOS操作系统或windows操作系统等。该应用层包含浏览器、通讯录、文字处理软件、即时通信软件等应用。并且,在本发明实施例中,该计算机可以是智能手机等手持设备,也可以是个人计算机等终端设备,本发明并未特别限定,只要能够通过运行记录有本发明实施例的对焦方法和图像拍摄方法的代码的程序即可。
此外,本发明的各个方面或特征可以实现成方法、装置或使用标准编程
和/或工程技术的制品。本申请中使用的术语“制品”涵盖可从任何计算机可读器件、载体或介质访问的计算机程序。例如,计算机可读介质可以包括,但不限于:磁存储器件(例如,硬盘、软盘或磁带等),光盘(例如,CD(Compact Disc,压缩盘)、DVD(Digital Versatile Disc,数字通用盘)等),智能卡和闪存器件(例如,EPROM(Erasable Programmable Read-Only Memory,可擦写可编程只读存储器)、卡、棒或钥匙驱动器等)。另外,本文描述的各种存储介质可代表用于存储信息的一个或多个设备和/或其它机器可读介质。术语“机器可读介质”可包括但不限于,无线信道和能够存储、包含和/或承载指令和/或数据的各种其它介质。
应理解,说明书通篇中提到的“一个实施例”或“一实施例”意味着与实施例有关的特定特征、结构或特性包括在本发明的至少一个实施例中。因此,在整个说明书各处出现的“在一个实施例中”或“在一实施例中”未必一定指相同的实施例。此外,在不冲突的情况下,这些实施例及实施例中特定的特征、结构或特性可以任意适合的方式结合在一个或多个实施例中。
应理解,在本发明的各种实施例中,上述各过程的序号的大小并不意味着执行顺序的先后,各过程的执行顺序应以其功能和内在逻辑确定,而不应对本发明实施例的实施过程构成任何限定。
应理解,在本发明实施例中,“与A相应的B”表示B与A相关联,根据A可以确定B。但还应理解,根据A确定B并不意味着仅仅根据A确定B,还可以根据A和/或其它信息确定B。
应理解,本文中术语“和/或”,仅仅是一种描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B这三种情况。另外,本文中字符“/”,一般表示前后关联对象是一种“或”的关系。
本领域普通技术人员可以意识到,结合本文中所公开的实施例描述的各示例的单元及算法步骤,能够以电子硬件、或者计算机软件和电子硬件的结合来实现。这些功能究竟以硬件还是软件方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本发明的范围。
所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,上述描述的系统、装置和单元的具体工作过程,可以参考前述方法实施例中的对应
过程,在此不再赘述。
在本申请所提供的几个实施例中,应该理解到,所揭露的系统、装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本发明各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。
以上所述,仅为本发明的具体实施方式,但本发明的保护范围并不局限于此,任何熟悉本技术领域的技术人员在本发明揭露的技术范围内,可轻易想到变化或替换,都应涵盖在本发明的保护范围之内。因此,本发明的保护范围应以所述权利要求的保护范围为准。
Claims (55)
- 一种对焦方法,其特征在于,包括:获取第一图像,并从所述第一图像中确定对焦区域,所述对焦区域包括至少一个像素;基于预设的放大比例,对所述对焦区域做第一数码变焦处理,以获取第二图像,其中,所述第二图像包括放大后的对焦区域,所述第二图像的信噪比大于所述对焦区域的信噪比;基于所述第二图像,进行对焦处理。
- 根据权利要求1所述的对焦方法,其特征在于,所述第二图像包括多个原始像素和多个插值像素,其中,所述原始像素是所述对焦区域中的像素,以及所述基于预设的放大比例,对所述对焦区域做第一数码变焦处理,包括:基于所述多个插值像素中的第一插值像素在所述第二图像中的位置,从所述对焦区域中确定与所述第一插值像素相对应的N个参考原始像素,N≥1;根据所述放大比例和所述N个参考原始像素的灰度值,确定所述第一插值像素的灰度值。
- 根据权利要求2所述的对焦方法,其特征在于,所述基于所述多个插值像素中的第一插值像素在所述第二图像中的位置,从所述至少一个原始像素中确定所述第一插值像素所对应的N个参考原始像素,包括:根据所述放大比例和第一插值像素在所述第二图像中的位置,从所述对焦区域中确定第一参考原始像素,所述第一参考原始像素在所述对焦区域中的位置与所述第一插值像素在所述第二图像中的位置相对应;根据第一参考原始像素,从所述根据所述对焦区域中确定N-1个第二参考原始像素,其中,所述N-1个第二参考原始像素与所述第一参考原始像素之间的位置关系满足预设的位置条件。
- 根据权利要求3所述的对焦方法,其特征在于,所述对焦区域包括二维排列的X·Y个像素,其中,X≥1,Y≥1,以及所述根据所述放大比例和第一插值像素在所述第二图像中的位置,从所述对焦区域中确定第一参考原始像素,包括:如果所述放大比例为1:M,则当第一插值像素在所述第二图像中的位置坐标为(M·i+1,M·j+1)时,所述第一参考原始像素是在从所述对焦区 域中的位置坐标为(i,j)的像素,其中,i∈[0,X-1],i∈[0,Y-1]。
- 根据权利要求4所述的对焦方法,其特征在于,X≥4,Y≥4,且所述N-1个第二参考原始像素点包括所述第二图像中以下位置坐标的像素:(i-1,j-1),(i,j-1),(i+1,j-1),(i+2,j-1),(i-1,j),(i,j),(i+1,j),(i+2,j),(i-1,j+1),(i,j+1),(i+1,j+1),(i+2,j+1),(i-1,j+2),(i,j+2),(i+1,j+2),(i+2,j+2)。
- 根据权利要求1至5中任一项所述的对焦方法,其特征在于,所述对焦方法还包括:在获取所述第一图像后,呈现所述第一图像;在获取所述第二图像后,呈现所述第二图像。
- 根据权利要求1至6中任一项所述的对焦方法,其特征在于,所述对焦方法还包括:根据预设的缩小比例进行第二数码变焦处理,以使当前使用的数码变焦倍数从第二数码变焦倍数变更至第一数码变焦倍数,其中,所述缩小比例与所述放大比例相对应,所述第一数码变焦倍数是经过所述第一数码变焦处理前的数码变焦倍数,所述第二数码变焦倍数经过所述第一数码变焦处理后的数码变焦倍数。
- 根据权利要求7所述的对焦方法,其特征在于,所述第一图像是基于第一焦距获取的图像,经过所述对焦处理后的焦距为第二焦距,以及所述对焦方法还包括:基于所述第一数码变焦倍数和所述第二焦距获取第三图像,并呈现所述第三图像。
- 一种对焦装置,其特征在于,包括:图像获取单元,用于获取第一图像;对焦处理单元,用于从所述第一图像中确定对焦区域,所述对焦区域包括至少一个像素,用于基于预设的放大比例,对所述对焦区域做第一数码变焦处理,以获取第二图像,其中,所述第二图像包括放大后的对焦区域,所述第二图像的信噪比大于所述对焦区域的信噪比,用于基于所述第二图像,进行对焦处理。
- 根据权利要求9所述的对焦装置,其特征在于,所述第二图像包括 多个原始像素和多个插值像素,其中,所述原始像素是所述对焦区域中的像素,以及所述对焦处理单元具体用于基于所述多个插值像素中的第一插值像素在所述第二图像中的位置,从所述对焦区域中确定与所述第一插值像素相对应的N个参考原始像素,N≥1,用于根据所述放大比例和所述N个参考原始像素的灰度值,确定所述第一插值像素的灰度值。
- 根据权利要求10所述的对焦装置,其特征在于,所述对焦处理单元具体用于根据所述放大比例和第一插值像素在所述第二图像中的位置,从所述对焦区域中确定第一参考原始像素,所述第一参考原始像素在所述对焦区域中的位置与所述第一插值像素在所述第二图像中的位置相对应,用于根据第一参考原始像素,从所述根据所述对焦区域中确定N-1个第二参考原始像素,其中,所述N-1个第二参考原始像素与所述第一参考原始像素之间的位置关系满足预设的位置条件。
- 根据权利要求11所述的对焦装置,其特征在于,所述对焦区域包括二维排列的X·Y个像素,其中,X≥1,Y≥1,以及所述对焦处理单元具体用于如果所述放大比例为1:M,则当第一插值像素在所述第二图像中的位置坐标为(M·i+1,M·j+1)时,所述第一参考原始像素是在从所述对焦区域中的位置坐标为(i,j)的像素,其中,i∈[0,X-1],i∈[0,Y-1]。
- 根据权利要求12所述的对焦装置,其特征在于,X≥4,Y≥4,且所述N-1个第二参考原始像素点包括所述第二图像中以下位置坐标的像素:(i-1,j-1),(i,j-1),(i+1,j-1),(i+2,j-1),(i-1,j),(i,j),(i+1,j),(i+2,j),(i-1,j+1),(i,j+1),(i+1,j+1),(i+2,j+1),(i-1,j+2),(i,j+2),(i+1,j+2),(i+2,j+2)。
- 根据权利要求9至13中任一项所述的对焦装置,其特征在于,所述对焦装置还包括:图像显示单元,用于呈现所述第一图像和所述第二图像。
- 根据权利要求9至14中任一项所述的对焦装置,其特征在于,所述对焦处理单元还用于根据预设的缩小比例进行第二数码变焦处理,以使当前使用的数码变焦倍数从第二数码变焦倍数变更至第一数码变焦倍数,其 中,所述缩小比例与所述放大比例相对应,所述第一数码变焦倍数是经过所述第一数码变焦处理前的数码变焦倍数,所述第二数码变焦倍数经过所述第一数码变焦处理后的数码变焦倍数。
- 根据权利要求15所述的对焦装置,其特征在于,所述第一图像是基于第一焦距获取的图像,经过所述对焦处理后的焦距为第二焦距,以及所述图像获取单元还用于基于所述第一数码变焦倍数和所述第二焦距获取第三图像;所述图像呈现单元还用于呈现所述第三图像。
- 一种图像拍摄方法,其特征在于,包括:基于第一数码变焦倍数和第一焦距拍摄并呈现第一图像,并从所述第一图像中确定对焦区域,所述对焦区域包括至少一个像素,且所述对焦区域包括所述第一图像中的部分像素;基于预设的放大比例,对所述对焦区域做第一数码变焦处理,以获取并呈现第二图像,其中,所述第二图像包括放大后的对焦区域,所述第二图像的信噪比大于所述对焦区域的信噪比,经过所述数码变焦处理后的数码变焦倍数为第二数码变焦倍数;基于所述第二图像,进行对焦处理,以确定第二焦距;根据预设的缩小比例进行第二数码变焦处理,以使当前使用的数码变焦倍数从所述第二数码变焦倍数变更至所述第一数码变焦倍数,其中,所述缩小比例与所述放大比例相对应;基于所述第一数码变焦倍数和所述第二焦距拍摄并呈现第三图像。
- 根据权利要求17所述的图像拍摄方法,其特征在于,所述第二图像包括多个原始像素和多个插值像素,其中,所述原始像素是所述对焦区域中的像素,以及所述基于预设的放大比例,对所述对焦区域做第一数码变焦处理,包括:基于所述多个插值像素中的第一插值像素在所述第二图像中的位置,从所述对焦区域中确定与所述第一插值像素相对应的N个参考原始像素,N≥1;根据所述放大比例和所述N个参考原始像素的灰度值,确定所述第一插值像素的灰度值。
- 根据权利要求18所述的图像拍摄方法,其特征在于,所述基于所述多个插值像素中的第一插值像素在所述第二图像中的位置,从所述至少一 个原始像素中确定所述第一插值像素所对应的N个参考原始像素,包括:根据所述放大比例和第一插值像素在所述第二图像中的位置,从所述对焦区域中确定第一参考原始像素,所述第一参考原始像素在所述对焦区域中的位置与所述第一插值像素在所述第二图像中的位置相对应;根据第一参考原始像素,从所述根据所述对焦区域中确定N-1个第二参考原始像素,其中,所述N-1个第二参考原始像素与所述第一参考原始像素之间的位置关系满足预设的位置条件。
- 根据权利要求19所述的图像拍摄方法,其特征在于,所述对焦区域包括二维排列的X·Y个像素,其中,X≥1,Y≥1,以及所述根据所述放大比例和第一插值像素在所述第二图像中的位置,从所述对焦区域中确定第一参考原始像素,包括:如果所述放大比例为1:M,则当第一插值像素在所述第二图像中的位置坐标为(M·i+1,M·j+1)时,所述第一参考原始像素是在从所述对焦区域中的位置坐标为(i,j)的像素,其中,i∈[0,X-1],i∈[0,Y-1]。
- 根据权利要求20所述的图像拍摄方法,其特征在于,X≥4,Y≥4,且所述N-1个第二参考原始像素点包括所述第二图像中以下位置坐标的像素:(i-1,j-1),(i,j-1),(i+1,j-1),(i+2,j-1),(i-1,j),(i,j),(i+1,j),(i+2,j),(i-1,j+1),(i,j+1),(i+1,j+1),(i+2,j+1),(i-1,j+2),(i,j+2),(i+1,j+2),(i+2,j+2)。
- 一种图像拍摄装置,其特征在于,包括:图像拍摄单元,用于基于第一数码变焦倍数和第一焦距拍摄第一图像,用于基于所述第一数码变焦倍数和对焦处理单元确定的第二焦距拍摄第三图像;对焦处理单元,用于从所述第一图像中确定对焦区域,所述对焦区域包括至少一个像素,且所述对焦区域包括所述第一图像中的部分像素,用于基于预设的放大比例,对所述对焦区域做第一数码变焦处理,以获取第二图像,其中,所述第二图像包括放大后的对焦区域,所述第二图像的信噪比大于所述对焦区域的信噪比,图像拍摄单元使用的经过所述数码变焦处理后的数码变焦倍数为第二数码变焦倍数,用于基于所述第二图像,进行对焦处理,以 确定第二焦距,用于根据预设的缩小比例进行第二数码变焦处理,以使图像拍摄单元使用的数码变焦倍数从所述第二数码变焦倍数变更至所述第一数码变焦倍数,其中,所述缩小比例与所述放大比例相对应。图像呈现单元,用于呈现所述第一图像、所述第二图像和所述第三图像。
- 根据权利要求22所述的图像拍摄装置,其特征在于,所述第二图像包括多个原始像素和多个插值像素,其中,所述原始像素是所述对焦区域中的像素,以及对焦处理单元具体用于基于所述多个插值像素中的第一插值像素在所述第二图像中的位置,从所述对焦区域中确定与所述第一插值像素相对应的N个参考原始像素,N≥1,用于根据所述放大比例和所述N个参考原始像素的灰度值,确定所述第一插值像素的灰度值。
- 根据权利要求23所述的图像拍摄装置,其特征在于,对焦处理单元具体用于根据所述放大比例和第一插值像素在所述第二图像中的位置,从所述对焦区域中确定第一参考原始像素,所述第一参考原始像素在所述对焦区域中的位置与所述第一插值像素在所述第二图像中的位置相对应,用于根据第一参考原始像素,从所述根据所述对焦区域中确定N-1个第二参考原始像素,其中,所述N-1个第二参考原始像素与所述第一参考原始像素之间的位置关系满足预设的位置条件。
- 根据权利要求24所述的图像拍摄装置,其特征在于,所述对焦区域包括二维排列的X·Y个像素,其中,X≥1,Y≥1,以及对焦处理单元具体用于如果所述放大比例为1:M,则当第一插值像素在所述第二图像中的位置坐标为(M·i+1,M·j+1)时,所述第一参考原始像素是在从所述对焦区域中的位置坐标为(i,j)的像素,其中,i∈[0,X-1],i∈[0,Y-1]。
- 根据权利要求25所述的图像拍摄装置,其特征在于,X≥4,Y≥4,且所述N-1个第二参考原始像素点包括所述第二图像中以下位置坐标的像素:(i-1,j-1),(i,j-1),(i+1,j-1),(i+2,j-1),(i-1,j),(i,j),(i+1,j),(i+2,j),(i-1,j+1),(i,j+1),(i+1,j+1),(i+2,j+1),(i-1,j+2),(i,j+2),(i+1,j+2),(i+2,j+2)。
- 一种图像显示方法,其特征在于,包括:获取并呈现第一图像,所述第一图像是基于第一数码变焦倍数和第一焦距拍摄的图像;获取并呈现第二图像,其中,所述第二图像是在所述第一图像中的对焦区域被基于预设的放大比例做第一数码变焦处理后获得的图像,所述对焦区域包括至少一个像素,所述第二图像包括放大后的对焦区域,所述第二图像的信噪比大于所述对焦区域的信噪比;获取并呈现第三图像,其中,所述第三图像是基于所述第一数码变焦倍数和第二焦距拍摄的图像,所述第二焦距是经过基于所述第二图像的对焦处理而确定的焦距。
- 根据权利要求27所述的图像显示方法,其特征在于,所述第二图像包括多个原始像素和多个插值像素,其中,所述原始像素是所述对焦区域中的像素,以及所述多个插值像素中的第一插值像素的灰度值是根据所述放大比例和N个参考原始像素的灰度值确定的,所述N个参考原始像素是所述对焦区域中与所述第一插值像素相对应的像素,所述N个参考原始像素是基于所述第一插值像素在所述第二图像中的位置从所述对焦区域中确定的,N≥1。
- 根据权利要求28所述的图像显示方法,其特征在于,所述N个参考原始像素包括第一参考原始像素和N-1个第二参考原始像素,所述第一参考原始像素是根据所述放大比例和第一插值像素在所述第二图像中的位置从所述对焦区域中确定的,所述第一参考原始像素在所述对焦区域中的位置与所述第一插值像素在所述第二图像中的位置相对应,所述N-1个第二参考原始像素是根据第一参考原始像素从所述根据所述对焦区域中确定的,其中,所述N-1个第二参考原始像素与所述第一参考原始像素之间的位置关系满足预设的位置条件。
- 根据权利要求29所述的图像显示方法,其特征在于,所述对焦区域包括二维排列的X·Y个像素,其中,X≥1,Y≥1,以及如果所述放大比例为1:M,则当第一插值像素在所述第二图像中的位置坐标为(M·i+1,M·j+1)时,所述第一参考原始像素是在从所述对焦区域中的位置坐标为(i,j)的像素,其中,i∈[0,X-1],i∈[0,Y-1]。
- 根据权利要求30所述的图像显示方法,其特征在于,X≥4,Y≥4, 且所述N-1个第二参考原始像素点包括所述第二图像中以下位置坐标的像素:(i-1,j-1),(i,j-1),(i+1,j-1),(i+2,j-1),(i-1,j),(i,j),(i+1,j),(i+2,j),(i-1,j+1),(i,j+1),(i+1,j+1),(i+2,j+1),(i-1,j+2),(i,j+2),(i+1,j+2),(i+2,j+2)。
- 一种图像显示装置,其特征在于,包括:获取单元,用于在第一时段从与所述图像显示装置通信连接的图像拍摄装置获取第一图像,在第二时段从与所述图像显示装置通信连接的图像处理装置获取第二图像,在第三时段从所述图像拍摄装置获取第三图像,其中,所述第一图像是基于第一数码变焦倍数和第一焦距拍摄的图像,所述第二图像是在所述第一图像中的对焦区域被基于预设的放大比例做第一数码变焦处理后获得的图像,所述对焦区域包括至少一个像素,所述第二图像包括放大后的对焦区域,所述第二图像的信噪比大于所述对焦区域的信噪比,所述第三图像是基于所述第一数码变焦倍数和第二焦距拍摄的图像,所述第二焦距是经过基于所述第二图像的对焦处理而确定的焦距;呈现单元,用于在所述第一时段呈现所述第一图像,在所述第二时段呈现所述第二图像,在所述第三时段呈现所述第三图像。
- 根据权利要求32所述的图像显示装置,其特征在于,所述第二图像包括多个原始像素和多个插值像素,其中,所述原始像素是所述对焦区域中的像素,以及所述多个插值像素中的第一插值像素的灰度值是根据所述放大比例和N个参考原始像素的灰度值确定的,所述N个参考原始像素是所述对焦区域中与所述第一插值像素相对应的像素,所述N个参考原始像素是基于所述第一插值像素在所述第二图像中的位置从所述对焦区域中确定的,N≥1。
- 根据权利要求33所述的图像显示装置,其特征在于,所述N个参考原始像素包括第一参考原始像素和N-1个第二参考原始像素,所述第一参考原始像素是根据所述放大比例和第一插值像素在所述第二图像中的位置从所述对焦区域中确定的,所述第一参考原始像素在所述对焦区域中的位置与所述第一插值像素在所述第二图像中的位置相对应,所述N-1个第二参考原始像素是根据第一参考原始像素从所述根据所述对焦区域中确定的,其 中,所述N-1个第二参考原始像素与所述第一参考原始像素之间的位置关系满足预设的位置条件。
- 根据权利要求34所述的图像显示装置,其特征在于,所述对焦区域包括二维排列的X·Y个像素,其中,X≥1,Y≥1,以及如果所述放大比例为1:M,则当第一插值像素在所述第二图像中的位置坐标为(M·i+1,M·j+1)时,所述第一参考原始像素是在从所述对焦区域中的位置坐标为(i,j)的像素,其中,i∈[0,X-1],i∈[0,Y-1]。
- 根据权利要求35所述的图像显示装置,其特征在于,X≥4,Y≥4,且所述N-1个第二参考原始像素点包括所述第二图像中以下位置坐标的像素:(i-1,j-1),(i,j-1),(i+1,j-1),(i+2,j-1),(i-1,j),(i,j),(i+1,j),(i+2,j),(i-1,j+1),(i,j+1),(i+1,j+1),(i+2,j+1),(i-1,j+2),(i,j+2),(i+1,j+2),(i+2,j+2)。
- 根据权利要求32至36中任一项所述的图像显示装置,其特征在于,所述图像拍摄装置配置于第一设备,所述图像显示装置配置于第二设备,所述第一设备和所述第二设备之间能够进行有线通信或无线通信。
- 根据权利要求37所述的图像显示装置,其特征在于,所述第一设备为无人机,所述第二设备为终端设备或遥控器。
- 一种摄像系统,其特征在于,包括:摄像机构,用于拍摄第一图像;处理器,用于从所述摄像机构获取所述第一图像,从所述第一图像中确定对焦区域,基于预设的放大比例,对所述对焦区域做第一数码变焦处理,以获取第二图像,其中,所述对焦区域包括至少一个像素,且所述对焦区域包括所述第一图像中的部分像素,所述第二图像包括放大后的对焦区域,所述第二图像的信噪比大于所述对焦区域的信噪比,用于基于所述第二图像,进行对焦处理。
- 根据权利要求39所述的摄像系统,其特征在于,所述第二图像包括多个原始像素和多个插值像素,其中,所述原始像素是所述对焦区域中的像素,以及所述处理器具体用于基于所述多个插值像素中的第一插值像素在所述 第二图像中的位置,从所述对焦区域中确定与所述第一插值像素相对应的N个参考原始像素,N≥1,用于根据所述放大比例和所述N个参考原始像素的灰度值,确定所述第一插值像素的灰度值。
- 根据权利要求40所述的摄像系统,其特征在于,所述处理器具体用于根据所述放大比例和第一插值像素在所述第二图像中的位置,从所述对焦区域中确定第一参考原始像素,所述第一参考原始像素在所述对焦区域中的位置与所述第一插值像素在所述第二图像中的位置相对应,用于根据第一参考原始像素,从所述根据所述对焦区域中确定N-1个第二参考原始像素,其中,所述N-1个第二参考原始像素与所述第一参考原始像素之间的位置关系满足预设的位置条件。
- 根据权利要求41所述的摄像系统,其特征在于,所述对焦区域包括二维排列的X·Y个像素,其中,X≥1,Y≥1,以及所述处理器具体用于如果所述放大比例为1:M,则当第一插值像素在所述第二图像中的位置坐标为(M·i+1,M·j+1)时,所述第一参考原始像素是在从所述对焦区域中的位置坐标为(i,j)的像素,其中,i∈[0,X-1],i∈[0,Y-1]。
- 根据权利要求42所述的摄像系统,其特征在于,X≥4,Y≥4,且所述N-1个第二参考原始像素点包括所述第二图像中以下位置坐标的像素:(i-1,j-1),(i,j-1),(i+1,j-1),(i+2,j-1),(i-1,j),(i,j),(i+1,j),(i+2,j),(i-1,j+1),(i,j+1),(i+1,j+1),(i+2,j+1),(i-1,j+2),(i,j+2),(i+1,j+2),(i+2,j+2)。
- 根据权利要求39至43中任一项所述的摄像系统,其特征在于,所述摄像系统还包括:显示器,用于在所述摄像机构获取所述第一图像之后,呈现所述第一图像,并在所述处理器获取所述第二图像之后,呈现所述第二图像;
- 根据权利要求39至44中任一项所述的摄像系统,其特征在于,所述处理器还用于根据预设的缩小比例进行第二数码变焦处理,以使当前使用的数码变焦倍数从第二数码变焦倍数变更至第一数码变焦倍数,其中,所述缩小比例与所述放大比例相对应,所述第一数码变焦倍数是经过所述第一数码变焦处理前的数码变焦倍数,所述第二数码变焦倍数经过所述第一数码变 焦处理后的数码变焦倍数。
- 根据权利要求45所述的摄像系统,其特征在于,所述第一图像是所述摄像机构基于第一焦距拍摄的图像,经过所述对焦处理后的焦距为第二焦距,以及所述摄像机构还用于基于所述第二焦距拍摄第三图像;所述第三图像是基于所述第一数码变焦倍数获取的图像;所述摄像系统还包括:显示器,用于在所述摄像机构获取所述第三图像后,呈现所述第三图像。
- 根据权利要求44或46所述的摄像系统,其特征在于,所述摄像机构配置于第一设备,所述显示器配置于第二设备,所述第一设备和所述第二设备之间能够进行有线通信或无线通信。
- 根据权利要求47所述的摄像系统,其特征在于,所述第一设备为无人机,所述第二设备为终端设备或遥控器。
- 一种摄像系统,其特征在于,包括:摄像机构,用于基于第一数码变焦倍数和第一焦距拍摄第一图像,基于所述第一数码变焦倍数和处理器确定的第二焦距获取第三图像;处理器,用于从所述摄像机构获取所述第一图像,从所述第一图像中确定对焦区域,基于预设的放大比例,对所述对焦区域做第一数码变焦处理,以获取第二图像,基于所述第二图像,进行对焦处理,以确定所述第二焦距,根据预设的缩小比例进行第二数码变焦处理,以使所述摄像机构使用的数码变焦倍数从所述第二数码变焦倍数变更至所述第一数码变焦倍数,所述对焦区域包括至少一个像素,且所述对焦区域包括所述第一图像中的部分像素,用于,其中,所述第二图像包括放大后的对焦区域,所述第二图像的信噪比大于所述对焦区域的信噪比,经过所述数码变焦处理后的数码变焦倍数为第二数码变焦倍数,用于,其中,所述缩小比例与所述放大比例相对应;所述显示器,用于在第一时段呈现所述第一图像,在第二时段呈现所述第二图像,在第三时段呈现所述第三图像。
- 根据权利要求49所述的摄像系统,其特征在于,所述第二图像包括多个原始像素和多个插值像素,其中,所述原始像素是所述对焦区域中的像素,以及所述处理器具用于基于所述多个插值像素中的第一插值像素在所述第 二图像中的位置,从所述对焦区域中确定与所述第一插值像素相对应的N个参考原始像素,N≥1,用于根据所述放大比例和所述N个参考原始像素的灰度值,确定所述第一插值像素的灰度值。
- 根据权利要求50所述的摄像系统,其特征在于,所述处理器具体用于根据所述放大比例和第一插值像素在所述第二图像中的位置,从所述对焦区域中确定第一参考原始像素,所述第一参考原始像素在所述对焦区域中的位置与所述第一插值像素在所述第二图像中的位置相对应,根据第一参考原始像素,从所述根据所述对焦区域中确定N-1个第二参考原始像素,其中,所述N-1个第二参考原始像素与所述第一参考原始像素之间的位置关系满足预设的位置条件。
- 根据权利要求51所述的摄像系统,其特征在于,所述对焦区域包括二维排列的X·Y个像素,其中,X≥1,Y≥1,以及所述处理器具体用于如果所述放大比例为1:M,则当第一插值像素在所述第二图像中的位置坐标为(M·i+1,M·j+1)时,所述第一参考原始像素是在从所述对焦区域中的位置坐标为(i,j)的像素,其中,i∈[0,X-1],i∈[0,Y-1]。
- 根据权利要求52所述的摄像系统,其特征在于,X≥4,Y≥4,且所述N-1个第二参考原始像素点包括所述第二图像中以下位置坐标的像素:(i-1,j-1),(i,j-1),(i+1,j-1),(i+2,j-1),(i-1,j),(i,j),(i+1,j),(i+2,j),(i-1,j+1),(i,j+1),(i+1,j+1),(i+2,j+1),(i-1,j+2),(i,j+2),(i+1,j+2),(i+2,j+2)。
- 根据权利要求49至53中任一项所述的摄像系统,其特征在于,所述摄像机构配置于第一设备,所述显示器配置于第二设备,所述第一设备和所述第二设备之间能够进行有线通信或无线通信。
- 根据权利要求54所述的摄像系统,其特征在于,所述第一设备为无人机,所述第二设备为终端设备或遥控器。
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2016/100075 WO2018053825A1 (zh) | 2016-09-26 | 2016-09-26 | 对焦方法和装置、图像拍摄方法和装置及摄像系统 |
CN201680003234.9A CN107079106B (zh) | 2016-09-26 | 2016-09-26 | 对焦方法和装置、图像拍摄方法和装置及摄像系统 |
US16/363,113 US10855904B2 (en) | 2016-09-26 | 2019-03-25 | Focusing method and apparatus, image photographing method and apparatus, and photographing system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2016/100075 WO2018053825A1 (zh) | 2016-09-26 | 2016-09-26 | 对焦方法和装置、图像拍摄方法和装置及摄像系统 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/363,113 Continuation US10855904B2 (en) | 2016-09-26 | 2019-03-25 | Focusing method and apparatus, image photographing method and apparatus, and photographing system |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2018053825A1 true WO2018053825A1 (zh) | 2018-03-29 |
Family
ID=59624617
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2016/100075 WO2018053825A1 (zh) | 2016-09-26 | 2016-09-26 | 对焦方法和装置、图像拍摄方法和装置及摄像系统 |
Country Status (3)
Country | Link |
---|---|
US (1) | US10855904B2 (zh) |
CN (1) | CN107079106B (zh) |
WO (1) | WO2018053825A1 (zh) |
Families Citing this family (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP7057637B2 (ja) * | 2017-08-23 | 2022-04-20 | キヤノン株式会社 | Control device, control system, control method, program, and storage medium |
US10807579B2 (en) * | 2018-01-19 | 2020-10-20 | Goodrich Corporation | System for maintaining near-peak friction of a braking wheel |
CN108848316A (zh) * | 2018-09-14 | 2018-11-20 | 高新兴科技集团股份有限公司 | Automatic zoom control method for a camera, automatic zoom device, and camera |
CN110099213A (zh) * | 2019-04-26 | 2019-08-06 | 维沃移动通信(杭州)有限公司 | Image display control method and terminal |
CN112084817A (zh) * | 2019-06-13 | 2020-12-15 | 杭州海康威视数字技术股份有限公司 | Method and device for detecting a child left alone in a vehicle, and infrared camera |
US10868965B1 (en) * | 2019-07-12 | 2020-12-15 | Bennet K. Langlotz | Digital camera zoom control facility |
CN110460735A (zh) * | 2019-08-05 | 2019-11-15 | 上海理工大学 | Large-format scanner control system based on a linear-array CCD |
CN110456829B (zh) * | 2019-08-07 | 2022-12-13 | 深圳市维海德技术股份有限公司 | Positioning and tracking method and device, and computer-readable storage medium |
CN112106344B (zh) * | 2019-08-29 | 2024-04-12 | 深圳市大疆创新科技有限公司 | Display method, photographing method, and related device |
CN111970439A (zh) * | 2020-08-10 | 2020-11-20 | Oppo(重庆)智能科技有限公司 | Image processing method and device, terminal, and readable storage medium |
CN112752029B (zh) * | 2021-01-22 | 2022-11-18 | 维沃移动通信(杭州)有限公司 | Focusing method and device, electronic device, and medium |
CN114554086B (zh) * | 2022-02-10 | 2024-06-25 | 支付宝(杭州)信息技术有限公司 | Auxiliary photographing method and device, and electronic device |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2009038589A (ja) * | 2007-08-01 | 2009-02-19 | Nikon Corp | Imaging device |
JP5807324B2 (ja) * | 2010-09-08 | 2015-11-10 | Ricoh Imaging Co., Ltd. | In-focus image display device and in-focus image display method |
JP5936404B2 (ja) * | 2012-03-23 | 2016-06-22 | Canon Inc. | Imaging device, control method therefor, and program |
CN103581534A (zh) * | 2012-08-08 | 2014-02-12 | ZTE Corporation | Method, device, and mobile terminal for improving the digital zoom display effect |
CN108234851B (zh) * | 2013-06-13 | 2019-08-16 | Corephotonics Ltd. | Dual-aperture zoom digital camera |
US9338345B2 (en) * | 2014-06-13 | 2016-05-10 | Intel Corporation | Reliability measurements for phase based autofocus |
2016
- 2016-09-26 CN CN201680003234.9A patent/CN107079106B/zh not_active Expired - Fee Related
- 2016-09-26 WO PCT/CN2016/100075 patent/WO2018053825A1/zh active Application Filing

2019
- 2019-03-25 US US16/363,113 patent/US10855904B2/en active Active
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103799962A (zh) * | 2012-11-01 | 2014-05-21 | Canon Inc. | Ophthalmologic apparatus, imaging control device, and imaging control method |
CN104519275A (zh) * | 2013-09-27 | 2015-04-15 | Olympus Corporation | Focus adjustment device and focus adjustment method |
JP2015154318A (ja) * | 2014-02-17 | 2015-08-24 | 株式会社日立国際電気 | Imaging device and imaging method |
JP2015215371A (ja) * | 2014-05-07 | 2015-12-03 | Canon Inc. | Focus adjustment device and control method therefor |
US9402022B2 (en) * | 2014-07-23 | 2016-07-26 | Panasonic Intellectual Property Management Co., Ltd. | Imaging apparatus with focus assist function |
US20160124207A1 (en) * | 2014-11-04 | 2016-05-05 | Olympus Corporation | Microscope system |
Also Published As
Publication number | Publication date |
---|---|
CN107079106B (zh) | 2020-11-13 |
US20190222746A1 (en) | 2019-07-18 |
US10855904B2 (en) | 2020-12-01 |
CN107079106A (zh) | 2017-08-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2018053825A1 (zh) | 对焦方法和装置、图像拍摄方法和装置及摄像系统 | |
US11649052B2 (en) | System and method for providing autonomous photography and videography | |
US11181809B2 (en) | Focusing method, imaging device, and unmanned aerial vehicle | |
CN111344644B (zh) | 用于基于运动的自动图像捕获的技术 | |
US11722647B2 (en) | Unmanned aerial vehicle imaging control method, unmanned aerial vehicle imaging method, control terminal, unmanned aerial vehicle control device, and unmanned aerial vehicle | |
KR20160134316A (ko) | 촬상 장치, 이를 채용한 원격 제어 비행체 및 촬상 장치의 자세 제어 방법 | |
CN108235815B (zh) | 摄像控制装置、摄像装置、摄像系统、移动体、摄像控制方法及介质 | |
WO2019061064A1 (zh) | 图像处理方法和设备 | |
WO2020011230A1 (zh) | 控制装置、移动体、控制方法以及程序 | |
WO2020124517A1 (zh) | 拍摄设备的控制方法、拍摄设备的控制装置及拍摄设备 | |
JP6584237B2 (ja) | 制御装置、制御方法、およびプログラム | |
JP2019216343A (ja) | 決定装置、移動体、決定方法、及びプログラム | |
JP7501535B2 (ja) | 情報処理装置、情報処理方法、情報処理プログラム | |
JP2017112438A (ja) | 撮像システムおよびその制御方法、通信装置、移動撮像装置、プログラム | |
JP6790318B2 (ja) | 無人航空機、制御方法、及びプログラム | |
JP6503607B2 (ja) | 撮像制御装置、撮像装置、撮像システム、移動体、撮像制御方法、及びプログラム | |
US20200412945A1 (en) | Image processing apparatus, image capturing apparatus, mobile body, image processing method, and program | |
JP6641574B1 (ja) | 決定装置、移動体、決定方法、及びプログラム | |
JP6696092B2 (ja) | 制御装置、移動体、制御方法、及びプログラム | |
US20160255265A1 (en) | Operating method and apparatus for detachable lens type camera | |
WO2021143425A1 (zh) | 控制装置、摄像装置、移动体、控制方法以及程序 | |
US20240209843A1 (en) | Scalable voxel block selection | |
WO2021249245A1 (zh) | 装置、摄像装置、摄像系统及移动体 | |
WO2024118233A1 (en) | Dynamic camera selection and switching for multi-camera pose estimation | |
JP2020005108A (ja) | 制御装置、撮像装置、移動体、制御方法、及びプログラム |
Legal Events
Date | Code | Title | Description
---|---|---|---
 | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 16916564; Country of ref document: EP; Kind code of ref document: A1
 | NENP | Non-entry into the national phase | Ref country code: DE
 | 122 | Ep: pct application non-entry in european phase | Ref document number: 16916564; Country of ref document: EP; Kind code of ref document: A1