WO2021134179A1 - Focusing method and apparatus, photographing device, movable platform and storage medium - Google Patents
- Publication number
- WO2021134179A1 (PCT/CN2019/129852)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- target object
- photographing device
- weight
- focusing
- Prior art date
Classifications
- H—ELECTRICITY › H04—ELECTRIC COMMUNICATION TECHNIQUE › H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/50—Constructional details
- H04N23/51—Housings
- H04N23/54—Mounting of pick-up tubes, electronic image sensors, deviation or focusing coils
- H04N23/55—Optical parts specially adapted for electronic image sensors; Mounting thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/61—Control of cameras or camera modules based on recognised objects
- H04N23/62—Control of parameters via user interfaces
- H04N23/63—Control of cameras or camera modules by using electronic viewfinders
- H04N23/631—Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
- H04N23/632—Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters
- H04N23/67—Focus control based on electronic image sensor signals
Definitions
- the present invention relates to the field of cameras, in particular to a focusing method, device, photographing equipment, movable platform and storage medium.
- Camera functions are used in many application scenarios, and devices that provide camera functions include mobile phones, cameras, and so on. During the shooting process, in order to ensure a clear imaging picture, a focusing process is necessary.
- many photographing devices support an auto focus (AF) function to complete focusing through the AF function.
- the focus area is often selected as the center area of the image taken by the photographing device.
- the purpose of AF is to ensure that the image in the focus area is clearly imaged.
- with this AF processing method, it is difficult to ensure that, among the several objects being photographed, the target object the user pays attention to is clearly imaged.
- the invention provides a focusing method, device, photographing equipment, movable platform and storage medium, which can realize rapid focusing of target objects.
- the first aspect of the present invention provides a focusing method, including:
- a second aspect of the present invention provides a focusing device, which is provided in a first photographing device.
- the focusing device includes: a memory and a processor; wherein the memory stores executable code, and when the executable code is executed by the processor, the processor realizes:
- a third aspect of the present invention provides a photographing device, including:
- the lens assembly is arranged inside the housing of the photographing device
- the sensor module is arranged inside the housing and at the rear end of the lens assembly; the sensor module includes a circuit board and an imaging sensor, and the imaging sensor is arranged on the front surface of the circuit board facing the lens assembly;
- the focusing device according to the second aspect is arranged inside the housing.
- the fourth aspect of the present invention provides a movable platform, including:
- the power system is arranged on the body and used to provide power for the movable platform
- the photographing device is arranged on the body and used to photograph a first image and perform focusing processing on a target object in the first image.
- a fifth aspect of the present invention provides a computer-readable storage medium having executable code stored in the computer-readable storage medium, and the executable code is used to implement the focusing method described in the first aspect.
- the position area (referred to as the first position area) corresponding to the target object being focused is determined in the image currently collected by the photographing device executing the focusing method, and then the image definition of the image can be calculated.
- the image definition corresponding to the first location area containing the target object is set to have a first weight, and the image definition corresponding to the areas other than the first location area in the image is set to have a second weight.
- the first weight is greater than the second weight, so that the photographing device can quickly and accurately identify the focused subject, namely the target object.
- the sharpness statistical value of the image is calculated according to the first weight and the second weight. When the statistical value meets the set focusing condition, it means that the image of the target object is blurred, and focusing processing is performed on the target object to ensure clear imaging of the target object.
- FIG. 1 is a schematic flowchart of a focusing method according to an embodiment of the present invention
- FIG. 2 is a schematic flowchart of a focusing process provided by an embodiment of the present invention.
- FIG. 3 is a schematic flowchart of another focusing method provided by an embodiment of the present invention.
- FIG. 4 is a schematic flowchart of another focusing method provided by an embodiment of the present invention.
- FIG. 5 is a schematic diagram of an application scenario of a focusing method provided by an embodiment of the present invention.
- FIG. 6 is a schematic structural diagram of a focusing device provided by an embodiment of the present invention.
- FIG. 7 is a schematic structural diagram of a photographing device provided by an embodiment of the present invention.
- FIG. 8 is a schematic structural diagram of a movable platform provided by an embodiment of the present invention.
- FIG. 1 is a schematic flowchart of a focusing method provided by an embodiment of the present invention. As shown in FIG. 1, the focusing method may include the following steps:
- the first photographing device may be a visible light zoom camera.
- the first photographing device can be integrated and used in other devices, or can be used independently.
- the first photographing device may be implemented as a camera on a terminal device such as a mobile phone, a notebook computer, and the like.
- the first photographing device may be a camera mounted on a drone for use.
- the target object mentioned in the embodiment of the present invention may be set by the user according to requirements. Specifically, before using the first photographing device to photograph the target object, the user can set the target object on the first photographing device to inform it what the target object is, so that the first photographing device ultimately completes focusing on the target object, ensuring that the image of the target object is clear.
- the process of focusing is to continuously adjust the object distance of the first photographing device to ensure that the target object photographed by the first photographing device is always clearly imaged.
- the user's setting of the target object may be implemented as: the user inputs a category corresponding to the target object. Therefore, the first photographing device considers the object corresponding to the category in the first image collected as the target object.
- the target object is a human body, and other objects may be trees, buildings, flowers, vehicles, etc. that exist around the person.
- the first photographing device can identify whether the first image contains the target object set by the user, and the first location area of the target object in the first image, where the first location area covers the target object and is smaller than the area of the first image.
- the first imaging device can identify whether the first image contains the target object based on the visible light characteristics of the target object.
- the so-called visible light feature refers to the optical feature of the target object.
- the corresponding visible light feature can be the contour shape of the human body, facial features, and so on.
- the implementation of human body recognition based on these features can be implemented with reference to the existing related technologies, which will not be repeated here.
- the boundary contour of the target object can be determined as the first location area of the target object, or the smallest rectangular frame surrounding the target object can be determined as the first location area of the target object.
- the first image can be divided into N grids according to the set grid size. Assuming that M grids contain part of the target object, the area covered by these M grids can be determined as the first location area; alternatively, the area covered by the M grids together with K surrounding grids can be determined as the first location area, where the M grids and the K surrounding grids form the smallest rectangular frame covering the target object.
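As an illustration of the grid-based selection above (the function and variable names are assumptions for this sketch, not from the patent), the smallest rectangle of whole grid cells covering a detected target could be found like this:

```python
import numpy as np

def grid_roi(mask: np.ndarray, grid: int = 8):
    """Sketch: divide an image-sized boolean mask (True where the detected
    target lies) into grid x grid cells and return the smallest rectangle of
    whole cells covering every cell that touches the target."""
    h, w = mask.shape
    ch, cw = h // grid, w // grid
    rows, cols = [], []
    for r in range(grid):
        for c in range(grid):
            if mask[r*ch:(r+1)*ch, c*cw:(c+1)*cw].any():
                rows.append(r)
                cols.append(c)
    if not rows:
        return None  # no target detected in this frame
    # expand to the smallest rectangular frame of whole grid cells
    top, bottom = min(rows) * ch, (max(rows) + 1) * ch
    left, right = min(cols) * cw, (max(cols) + 1) * cw
    return top, bottom, left, right

mask = np.zeros((64, 64), dtype=bool)
mask[20:30, 12:22] = True            # toy "target"
print(grid_roi(mask, grid=8))        # -> (16, 32, 8, 24)
```

The returned rectangle snaps to cell boundaries, which matches the description of using whole grids rather than the exact pixel contour.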
- the focus area is often the central area of the image, that is, to ensure that the central area of the image is clear.
- the traditional focusing scheme will make the background appear clear while the target object, as the foreground subject, is not clearly imaged, resulting in a blurred foreground subject under auto focus.
- the first photographing device can automatically select the focus area as the first location area where the target object is located, and then execute the auto-focus algorithm to make the first location area The target object achieves the clearest imaging purpose, thereby completing the focus processing for the target object.
- the first photographing device selects the focus area as the first position area where the target object is located.
- the method is to set the image definition corresponding to the first position area in the first image to have a first weight, and the image definition corresponding to the other areas outside the first location area to have a second weight, where the first weight is greater than the second weight.
- the second weight may be set to 0, and the first weight may be set to a value greater than 0, for example, set to 1.
- the image sharpness can be represented by the image gradient value, that is, the image sharpness of the first image is obtained by calculating the gradient of each pixel in the first image. Assuming that the first location area contains 100 pixels, the gradient values of these 100 pixels are all set to have the first weight. Assuming the first image contains 500 pixels in addition to those in the first location area, these 500 pixels are set to have the second weight.
- the target object is highlighted, that is, the focus area is positioned on the target object.
- the sharpness statistical value of the first image can be calculated according to the setting results of the first weight and the second weight, so as to determine whether it is necessary to perform focus processing on the target object based on the sharpness statistical value. Because if the image of the target object is clear at this time, there is no need to perform focus processing.
- the statistical value of the definition of the first image is obtained according to the first weight and the second weight, which can be implemented as:
- according to the first weight and the second weight, a weighted sum calculation is performed on the image gradient values corresponding to the first location area and the image gradient values corresponding to the other location areas to obtain the sharpness statistical value of the first image.
- continuing the example above, with the second weight set to 0 and the first weight set to 1, the sharpness statistical value of the first image is the sum of the gradient values of the 100 pixels in the first location area.
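The weighted-sum statistic described above can be sketched as follows (a minimal illustration assuming per-pixel gradient magnitude as the sharpness measure; the function and parameter names are not from the patent):

```python
import numpy as np

def sharpness_statistic(img, roi, w1=1.0, w2=0.0):
    """Weighted sharpness statistic: per-pixel gradient magnitudes, with
    weight w1 inside the first location area `roi` (top, bottom, left, right)
    and weight w2 everywhere else."""
    gy, gx = np.gradient(img.astype(float))
    grad = np.abs(gx) + np.abs(gy)            # per-pixel sharpness measure
    weights = np.full(img.shape, float(w2))
    top, bottom, left, right = roi
    weights[top:bottom, left:right] = w1      # first weight on the target area
    return float((weights * grad).sum())

# With w2 = 0 and w1 = 1, only the target area contributes, as in the
# 100-pixel example above. A uniform image has zero gradient everywhere:
flat = np.zeros((10, 10))
print(sharpness_statistic(flat, (3, 7, 3, 7)))   # -> 0.0
```

Setting w2 = 0 effectively crops the statistic to the target area; a small positive w2 would instead let the background contribute weakly.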
- if the sharpness statistical value of the first image is greater than the set threshold, it is considered that the target object is clearly imaged and does not need to be focused.
- if the sharpness statistical value of the first image is less than the set threshold, it is considered that the target object is not sharply imaged and needs to be focused.
- the collected video can be segmented to obtain an image sequence composed of multiple frames of images. Assuming that the first image is the last frame in the image sequence, it can also be determined, according to a certain frame of image that has been focused before the first image was captured, whether focus processing on the target object is necessary when the first image is currently captured. Optionally, if the difference between the sharpness statistical value of the first image and that of the reference image is greater than a set threshold, focus processing is performed on the target object, where the reference image is an image that has previously been focused on the target object.
- the reference image may be updated with the image taken after the focus is completed, so as to perform focus processing on the image subsequently collected by the first photographing device.
- the focusing process (adjusting the object distance) and the zooming process often exist at the same time.
- the following zooming process may also be performed:
- if the first location area is not located in the central area of the frame, the first location area is moved to the central area of the screen according to the relative position of the first location area and the main optical axis of the first photographing device.
- during zooming the image is often enlarged; therefore, after the first image is moved so that the first position area is located in the center area of the screen, the first image can be enlarged.
- take the first photographing device mounted on a drone as an example, where the first photographing device is mounted on the drone's gimbal (pan/tilt).
- the first photographing device can determine the relative position of the first location area to the main optical axis and report it to the gimbal, and the gimbal adjusts its pose so that during the zooming process, the first location area containing the target object in the first image is always located in the center area of the screen.
- the weights of the image clarity are set differently between the location area of the target object and other areas in the image, so as to achieve automatic selection of the focus area on the target object; then, when the target object needs to be focused, the object distance and image distance are adjusted to ensure that the target object in the focus area is clearly imaged, completing accurate focusing on the target object.
- the process of executing the focusing solutions provided by the embodiments of the present invention in different application scenarios will be briefly introduced.
- the commonly used application scenarios of the first shooting device include photographing scenes and video recording scenes.
- the first image collected by the first photographing device mentioned above should be understood as each frame of the multiple frames of images that the user can preview through the first photographing device before triggering the actual photographing operation on the first photographing device.
- for example, suppose the first photographing device is a mobile phone. When the user activates the shooting function and points the camera toward the target object (i.e., the object being photographed), the image of the target object will be displayed on the screen of the mobile phone, that is, in the preview box.
- the image presented in the preview frame can also be regarded as a kind of video, and multiple frames of images can be obtained by sampling the video, which are assumed to be three frames of images denoted as F1, F2, and F3, respectively.
- after the user adjusts the shooting angle and clicks the shooting button on the screen, an actual photographing of the target object is triggered, and a photo of the target object will be taken, which is assumed to be Z1.
- the focusing process in the embodiment of the present invention occurs before the user clicks the shooting button: through the multiple frames of images obtained in the preview process (F1, F2, and F3) and their corresponding definition statistical values (calculated as described above), focusing on the target object is completed first; then, when the user clicks the shooting button, a photo with a good focus effect, namely Z1, is obtained.
- if F1, F2, and F3 are the first three frames of images obtained after the user starts the shooting function, then F1 can be selected as the initial reference image, and the focus processing will be completed based on F1, F2, and F3 to obtain a suitable object distance. Afterwards, the reference image can be updated with the focused photo Z1 obtained by shooting.
- if the user continues shooting, the preview video will again be seen on the screen, and multiple frames of images can be sampled from it, such as F4, F5, and F6.
- through these images, the target object is focused again, ensuring that when the user clicks the shooting button again, the photo Z2 is taken with a good focus effect, that is, the target object is clearly imaged in photo Z2.
- afterwards, the reference image can be updated to the photo Z2.
- a set number of images can be sampled from the previewed video stream for focusing processing, such as 3 frames, 5 frames, or even 1 frame.
- the processing process for each frame of the image is the same, as shown in steps 101-104.
- focus processing on the target object will be performed in combination with the corresponding sharpness statistics of the multiple frames of images. The specific process is described below.
- the first image collected by the first shooting device mentioned above is each frame image obtained after sampling the recorded video.
- the video can be sampled to obtain a frame of image, and the statistical value of the definition of each frame of image can be calculated by the method described above.
- multiple frames of images obtained by sequential sampling can also be used as a group, such as 3 frames or 5 frames, and the target object is focused in combination with the corresponding sharpness statistics of the multiple frames in each group.
- the reference image can be initialized to the first frame of image obtained by sampling, and the first group of images is set as needing to be focused, so that after the first group of images is focused, an appropriate object distance can be obtained.
- the object distance can make the target object in the next video image captured by the first shooting device clear.
- the reference image can be updated to the first image in the second group of images.
- if the difference between the sharpness statistics of the other images in the second group and the reference image is large, it means that focusing needs to be performed at this time; a new object distance can then be determined based on the second group of images, and so on.
- the multiple frames of images include the first image and at least one frame of image adjacent to the first image collected by the first photographing device. Furthermore, the object distance of the first photographing device is adjusted to the target object distance, which corresponds to the maximum sharpness statistical value among the multiple frames of images.
- the first image here may refer to any frame of image obtained by segmenting the preview video in the photographing scene, or may refer to any frame of image sampled in the recording scene.
- the purpose of auto focusing is to find the target object distance position.
- the captured image has the largest statistical value of sharpness at the target object distance position.
- the focusing process is illustrated as an example: assuming that the currently collected image is F1, calculate the sharpness statistical value corresponding to F1, and reduce the object distance of the first photographing device by a set step. Assuming that the next frame of image F2 is collected at this object distance, the sharpness statistical value of F2 is calculated. If the sharpness statistical value of F2 is greater than the sharpness statistical value of F1, it means that the adjustment direction of the smaller object distance is correct. Otherwise, it is necessary to determine the adjustment direction to increase the object distance. In this way, the object distance is gradually reduced, and a frame of image is collected at each object distance position.
- assuming that the multiple frames of images collected in sequence are F1, F2, F3, and F4, and that the object distances corresponding to these four frames are W1, W2, W3, and W4 respectively, the target object distance is the one among these four object distances whose image has the largest sharpness statistical value.
- for example, if the sharpness statistical value corresponding to image F2 is the largest of the four images, the target object distance is W2.
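The probe-then-climb search illustrated with F1-F4 can be sketched as follows (a simplified hypothetical model: `sharp(w)` stands in for capturing a frame at object distance `w` and computing its weighted sharpness statistic; names and step logic are assumptions):

```python
def find_target_distance(sharp, start, step=1.0, max_iter=100):
    """Probe one step to pick the adjustment direction, then keep stepping
    while the sharpness statistic increases; return the object distance with
    the largest statistic seen (the 'target object distance')."""
    w = start
    best_w, best_s = w, sharp(w)
    # probe: does reducing the object distance by one step help?
    direction = -step if sharp(w - step) > best_s else step
    for _ in range(max_iter):
        w += direction
        s = sharp(w)
        if s <= best_s:        # statistic stopped increasing: peak passed
            break
        best_w, best_s = w, s
    return best_w

# Toy sharpness curve peaking at object distance 5.0:
print(find_target_distance(lambda w: -(w - 5.0) ** 2, start=8.0))  # -> 5.0
```

A real implementation would also re-check the peak with smaller steps, since the statistic is sampled only at discrete object distances.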
- these four frames of images can be understood as images obtained by segmenting the preview video. Therefore, the first image in the foregoing may include these four frames of images.
- the focusing process may include the following steps:
- K1 is an integer greater than or equal to 1.
- if the sharpness statistics of image P1, image P2, and image P3 show an increasing tendency, it indicates that the current adjustment direction of the object distance is correct, and further adjustment of the object distance in this direction can make the image clearer.
- if the sharpness statistical values of image P1, image P2, and image P3 do not show an increasing tendency, it indicates that the current adjustment direction of the object distance is incorrect, and the object distance needs to be adjusted in the opposite direction to make the image clearer.
- K2 is an integer greater than or equal to 1.
- Step 206: Determine whether the sharpness statistics of the K2 frames of images collected during the movement of K2 set steps, together with the K1 frames of images, show a mountain-like change feature; if yes, perform step 207, otherwise repeat step 203.
- the purpose of steps 204 to 207 is to continue to adjust the object distance according to the previously determined object distance adjustment direction, so as to find the object distance position that makes the image of the target object clearest.
- for image P4, image P5, and image P6, the sharpness statistics of these three frames are calculated respectively. It is then determined whether the sharpness statistical values of image P1, image P2, image P3, image P4, image P5, and image P6 show a mountain-like characteristic, that is, whether they show a trend of gradually increasing and then gradually decreasing.
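The mountain-like (increase-then-decrease) test on the six statistics can be sketched as follows (illustrative only; the patent does not specify this exact check):

```python
def is_mountain(values):
    """True if the sequence weakly rises to an interior peak and then
    weakly falls -- the change feature that signals the peak-focus
    object distance lies inside the scanned range."""
    peak = values.index(max(values))
    rising = all(values[i] <= values[i + 1] for i in range(peak))
    falling = all(values[i] >= values[i + 1] for i in range(peak, len(values) - 1))
    return 0 < peak < len(values) - 1 and rising and falling

print(is_mountain([1, 3, 6, 8, 5, 2]))  # -> True: the peak has been passed
print(is_mountain([1, 2, 3, 4, 5, 6]))  # -> False: still rising, keep stepping
```

Requiring an interior peak distinguishes "peak passed" (stop and return to the best object distance) from "still climbing" (keep adjusting in the same direction).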
- the above image P1 can be updated with another frame of image collected after a delay of the set time, and the above steps 201-207 re-executed.
- FIG. 3 is a schematic flowchart of another focusing method provided by an embodiment of the present invention. As shown in FIG. 3, the focusing method may include the following steps:
- two parameters are set: auto focus state and reference image.
- the auto-focus state is used to indicate whether the currently collected image needs to be subjected to auto-focus processing. Initially, this state is set to "needs focusing", which means that the first captured image needs to be used for focus processing.
- the reference image may be an automatically generated image, and the size of the image is equal to the size of each image actually collected subsequently.
- the weight of the gradient value of each pixel in the initial reference image (assuming that the gradient value is used as the measure of image sharpness) can be set to a set value, such as 0, so that the weighted sum of the gradient values of all pixels is the sharpness statistical value of the initial reference image.
- Step 302: Determine whether there is a target object in the first image captured by the first photographing device; if it exists, execute step 303, and if it does not exist, execute step 304.
- the focus area can still be positioned to the center position area of the first image.
- the range of the central location area can be set.
- Step 306 is executed, otherwise, step 307 is executed.
- if the auto-focus state is "no focusing required", the next image is taken, and the processing logic of steps 302 to 307 continues to be executed for the next image.
- if the auto-focus state is "needs focusing", the focusing processing procedure described above is executed.
- the description in the above embodiments is based on a single first photographing device (such as a visible light zoom camera) realizing the recognition and focus processing of the target object.
- however, the field of view angle of a visible light zoom camera is generally small, so the target object in the obtained image will visually appear very small, and the visual effect is poor.
- to this end, the embodiment of the present invention also provides the focusing method shown in FIG. 4.
- FIG. 4 is a schematic flowchart of another focusing method provided by an embodiment of the present invention.
- the focusing method is executed by the first photographing device described above. As shown in FIG. 4, the focusing method may include the following steps:
- according to the second location area corresponding to the target object in the second image and the coordinate system mapping relationship between the first photographing device and the second photographing device, the first location area corresponding to the target object in the first image captured by the first photographing device is determined, where the shooting times of the first image and the second image are the same.
- the angle of view of the first photographing device is smaller than the angle of view of the second photographing device.
- the second photographing device may be an infrared camera, or the second photographing device may also be a visible light wide-angle camera.
- FIG. 5 For ease of understanding, an application scenario shown in FIG. 5 is taken as an example for illustration.
- the first photographing device is a visible light zoom camera with a field of view angle of FOV1
- the second photographing device is an infrared camera with a field of view angle of FOV2
- FOV1 is smaller than FOV2.
- both the first photographing device and the second photographing device can be mounted on the drone.
- the image coordinate mapping relationship of the two shooting devices has been determined in advance according to the shooting parameters of the first shooting device and the shooting parameters of the second shooting device.
- when both the first photographing device and the second photographing device shoot an image of the target object, the image collected by the first photographing device is called the first image, and the image collected by the second photographing device is called the second image.
- since the second photographing device has a larger field of view, the second image contains much more content than the first image, so that the target object can be captured more quickly by the second photographing device.
- Different features can be adopted according to the different types of the second photographing device to identify the target object in the second image.
- when the second photographing device is an infrared camera, the target object can be identified in the second image according to the temperature characteristic of the target object.
- the temperature value ranges corresponding to different types of target objects are different, according to which the target objects can be identified.
- when the second photographing device is a visible light wide-angle camera, the target object can be identified in the second image according to the visible light feature of the target object.
- the meaning of the visible light feature can be referred to the above description.
- after the second location area corresponding to the target object in the second image is determined, the first location area corresponding to the target object in the first image can be determined according to the image coordinate mapping relationship between the two photographing devices.
- the subsequent focus processing steps are then performed on the first image.
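The mapping from the second image's coordinates into the first image's can be sketched as follows, assuming the precomputed relationship takes the form of a 3x3 homography `H` (the patent does not fix the parameterization; this is one common choice, and all names here are illustrative):

```python
import numpy as np

def map_box(box2, H):
    """Map the target's bounding box from the wide-FOV second image into the
    narrow-FOV first image via homography H, returning the axis-aligned box
    that encloses the mapped corners."""
    (x0, y0), (x1, y1) = box2
    corners = np.array([[x0, y0, 1.0], [x1, y0, 1.0],
                        [x0, y1, 1.0], [x1, y1, 1.0]]).T
    p = H @ corners
    p = p[:2] / p[2]                      # back from homogeneous coordinates
    return ((float(p[0].min()), float(p[1].min())),
            (float(p[0].max()), float(p[1].max())))

# Toy mapping: the zoom camera sees a 2x-magnified, shifted view.
H = np.array([[2.0, 0.0, 10.0], [0.0, 2.0, 20.0], [0.0, 0.0, 1.0]])
print(map_box(((1, 1), (3, 4)), H))       # -> ((12.0, 22.0), (16.0, 28.0))
```

Mapping all four corners and re-enclosing them keeps the result axis-aligned even when the true mapping rotates or skews the box.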
- FIG. 6 is a schematic structural diagram of a focusing device provided by an embodiment of the present invention.
- the focusing device may be provided in the first photographing device mentioned above.
- the focusing device includes: a memory 11 and a processor 12;
- the memory 11 stores executable code, and when the executable code is executed by the processor 12, the processor 12 is enabled to implement:
- the image definition includes an image gradient value. Therefore, in the process of obtaining the definition statistical value of the first image, the processor 12 is specifically configured to: according to the first weight and the second weight, perform a weighted sum calculation on the image gradient values corresponding to the first location area and the image gradient values corresponding to the other location areas to obtain the sharpness statistical value of the first image.
- the processor 12 is specifically configured to: if the difference between the sharpness statistical value of the first image and the sharpness statistical value of the reference image is greater than a set threshold, perform focusing processing on the target object, where the reference image is an image on which focusing has previously been completed.
- the processor 12 is further configured to update the reference image with an image taken after focusing.
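The trigger condition described in the two items above can be sketched as follows. The function name is illustrative, and using the absolute difference is an assumption (the embodiment only states that the difference exceeds a set threshold):

```python
def needs_refocus(current_stat, reference_stat, threshold):
    """Decide whether to trigger focusing on the target object.

    current_stat:   sharpness statistic of the newly captured first image.
    reference_stat: sharpness statistic of the in-focus reference image.
    Absolute difference is assumed; a drop or a rise both suggest defocus.
    """
    return abs(current_stat - reference_stat) > threshold
```

After focusing completes, the reference statistic would be replaced with the statistic of the newly focused image, matching the update step described above.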
- the second weight is set to zero.
- the processor 12 is further configured to: if the first location area is not located in the central area of the frame of the first photographing device, move the first location area to the central area of the frame according to the relative position between the first location area and the main optical axis of the first photographing device.
- the processor 12 is specifically configured to: identify the target object in a second image captured by a second photographing device; and determine the first location area corresponding to the target object in the first image according to the second location area corresponding to the target object in the second image and the image coordinate system mapping relationship between the first photographing device and the second photographing device; wherein the first image and the second image are captured at the same time.
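The coordinate-mapping step can be illustrated with a small sketch. The patent does not specify the form of the mapping relationship; modeling it as a 3x3 planar homography `H`, and the function name `map_box`, are assumptions made here:

```python
import numpy as np

def map_box(box, H):
    """Map a location area from the second image into the first image.

    box: (x0, y0, x1, y1) in second-image coordinates.
    H:   assumed 3x3 homography encoding the image coordinate system
         mapping between the two photographing devices.
    Returns the axis-aligned bounding box of the mapped corners.
    """
    x0, y0, x1, y1 = box
    corners = np.array([[x0, y0, 1], [x1, y0, 1],
                        [x1, y1, 1], [x0, y1, 1]], dtype=float).T
    p = H @ corners
    p = p[:2] / p[2]                      # back to inhomogeneous coordinates
    return (p[0].min(), p[1].min(), p[0].max(), p[1].max())
```

In practice `H` would be calibrated offline from the fixed geometry of the two cameras; since both images are captured at the same time, the mapped area can be used directly as the first location area.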
- the angle of view of the first photographing device is smaller than that of the second photographing device.
- the first photographing device is a visible light zoom camera
- the second photographing device is an infrared camera or a visible light wide-angle camera.
- the processor 12 is specifically configured to: identify the target object in the second image according to the temperature characteristic of the target object.
- the processor 12 is specifically configured to: identify the target object in the second image according to the visible light characteristic of the target object.
- the processor 12 is specifically configured to: determine the object distance adjustment direction of the first photographing device, wherein adjusting the object distance in that direction causes the sharpness statistical values of multiple frames of images to show an increasing trend, the multiple frames including the first image and at least one frame, adjacent to the first image, collected by the first photographing device; and adjust the object distance of the first photographing device to a target object distance, the target object distance corresponding to the maximum sharpness statistical value among the multiple frames of images.
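The search described above is essentially a hill climb over object distance. A minimal sketch follows; the function name, step size, and iteration limit are illustrative assumptions, not values from the embodiment:

```python
def hill_climb_focus(sharpness_at, start, step, limit=50):
    """Hill-climbing search for the target object distance.

    sharpness_at(d): sharpness statistic of the frame captured at distance d.
    The direction is chosen so the statistic shows an increasing trend,
    then the distance advances until the statistic stops rising; the
    distance with the maximum statistic is returned.
    """
    # Probe one step in each direction to pick the increasing direction
    direction = step if sharpness_at(start + step) >= sharpness_at(start - step) else -step
    best_d, best_s = start, sharpness_at(start)
    d = start
    for _ in range(limit):
        d += direction
        s = sharpness_at(d)
        if s <= best_s:                   # statistic stopped increasing
            break
        best_d, best_s = d, s
    return best_d
```

A fixed step is the simplest choice; a production autofocus loop would typically shrink the step near the peak to avoid hunting.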
- FIG. 7 is a schematic structural diagram of a photographing device provided by an embodiment of the present invention. As shown in FIG. 7, the photographing device includes:
- the lens assembly 21 is arranged inside the housing of the photographing device.
- the focusing device 23 is arranged inside the housing.
- the sensor module 22 is arranged inside the housing and at the rear end of the lens assembly 21.
- the sensor module 22 includes a circuit board and an imaging sensor. The imaging sensor is arranged on the front surface of the circuit board facing the lens assembly 21.
- images collected by the photographing device are formed on the aforementioned imaging sensor, and the focusing device 23 is used to perform focusing processing on the target object contained in the images collected by the photographing device.
- the photographing device corresponds to the first photographing device described above.
- FIG. 8 is a schematic structural diagram of a movable platform provided by an embodiment of the present invention.
- the movable platform is implemented as a drone as an example.
- the movable platform can also be implemented as a handheld gimbal, a gimbal vehicle, an electric car, an electric bicycle, etc.
- the movable platform includes: a body 31, a power system 32 provided on the body 31, and a first photographing device 33 provided on the body 31.
- the power system 32 is used to provide power for the movable platform.
- the first photographing device 33 is a photographing device as shown in FIG. 7, which is used to photograph a first image and perform focusing processing on a target object in the first image.
- the movable platform may further include: a second photographing device 34 arranged on the body 31.
- the angle of view of the first photographing device 33 is smaller than that of the second photographing device 34.
- the second photographing device 34 is used for capturing a second image containing the target object and transmitting the second image to the first photographing device 33, so that the first photographing device 33 can determine the location area corresponding to the target object in the first image according to the location area corresponding to the target object in the second image.
- the unmanned aerial vehicle may also include a gimbal 35 arranged on the body 31, so that the first photographing device 33 and the second photographing device 34 may be set on the gimbal 35 and can move relative to the body through the gimbal 35.
- the power system 32 of the drone may include an electronic speed controller, one or more rotors, and one or more motors corresponding to the one or more rotors.
- an embodiment of the present invention also provides a computer-readable storage medium having executable code stored in the computer-readable storage medium, and the executable code is used to implement the focusing method provided in the foregoing embodiments.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Human Computer Interaction (AREA)
- Studio Devices (AREA)
Abstract
The present invention relates to a focusing method and apparatus, a photographing device, a movable platform, and a storage medium. The focusing method comprises: determining a first location area corresponding to a target object in a first image collected by a first photographing device; setting the image sharpness corresponding to the first location area in the first image to have a first weight, and the image sharpness corresponding to areas of the first image other than the first location area to have a second weight, the first weight being greater than the second weight; obtaining a sharpness statistical value of the first image according to the first weight and the second weight; and, if the sharpness statistical value of the first image satisfies a set focusing condition, performing focusing processing on the target object so as to ensure that the target object is clearly imaged.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201980053920.0A CN112585941A (zh) | 2019-12-30 | 2019-12-30 | 对焦方法、装置、拍摄设备、可移动平台和存储介质 |
PCT/CN2019/129852 WO2021134179A1 (fr) | 2019-12-30 | 2019-12-30 | Appareil et procédé de mise au point, dispositif de photographie, plateforme mobile et support d'enregistrement |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2019/129852 WO2021134179A1 (fr) | 2019-12-30 | 2019-12-30 | Appareil et procédé de mise au point, dispositif de photographie, plateforme mobile et support d'enregistrement |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021134179A1 true WO2021134179A1 (fr) | 2021-07-08 |
Family
ID=75117329
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2019/129852 WO2021134179A1 (fr) | 2019-12-30 | 2019-12-30 | Appareil et procédé de mise au point, dispositif de photographie, plateforme mobile et support d'enregistrement |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN112585941A (fr) |
WO (1) | WO2021134179A1 (fr) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113810615A (zh) * | 2021-09-26 | 2021-12-17 | 展讯通信(上海)有限公司 | 对焦处理方法、装置、电子设备和存储介质 |
CN113837079A (zh) * | 2021-09-24 | 2021-12-24 | 苏州贝康智能制造有限公司 | 显微镜的自动对焦方法、装置、计算机设备和存储介质 |
CN113923358A (zh) * | 2021-10-09 | 2022-01-11 | 上海深视信息科技有限公司 | 一种飞拍模式下的在线自动对焦方法和系统 |
CN114697548A (zh) * | 2022-03-21 | 2022-07-01 | 迈克医疗电子有限公司 | 显微图像拍摄对焦方法及装置 |
CN114845050A (zh) * | 2022-04-15 | 2022-08-02 | 深圳市道通智能航空技术股份有限公司 | 一种对焦方法、摄像装置、无人机和存储介质 |
CN118695094A (zh) * | 2024-08-21 | 2024-09-24 | 浙江大华技术股份有限公司 | 一种聚焦方法、装置、电子设备及存储介质 |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103096124A (zh) * | 2013-02-20 | 2013-05-08 | 浙江宇视科技有限公司 | 一种辅助对焦方法及装置 |
CN108702435A (zh) * | 2017-04-26 | 2018-10-23 | 华为技术有限公司 | 一种终端和摄像头 |
CN108769538A (zh) * | 2018-08-16 | 2018-11-06 | Oppo广东移动通信有限公司 | 自动对焦方法、装置、存储介质及终端 |
US20190279342A1 (en) * | 2017-03-01 | 2019-09-12 | Fotonation Limited | Method of providing a sharpness measure for an image |
WO2019227441A1 (fr) * | 2018-05-31 | 2019-12-05 | 深圳市大疆创新科技有限公司 | Procédé et dispositif de commande vidéo de plateforme mobile |
Family Cites Families (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101408709B (zh) * | 2007-10-10 | 2010-09-29 | 鸿富锦精密工业(深圳)有限公司 | 影像撷取装置及其自动对焦方法 |
CN102148965B (zh) * | 2011-05-09 | 2014-01-15 | 厦门博聪信息技术有限公司 | 多目标跟踪特写拍摄视频监控系统 |
KR101906827B1 (ko) * | 2012-04-10 | 2018-12-05 | 삼성전자주식회사 | 연속 사진 촬영 장치 및 방법 |
CN105898135A (zh) * | 2015-11-15 | 2016-08-24 | 乐视移动智能信息技术(北京)有限公司 | 相机成像方法及相机装置 |
CN106707674B (zh) * | 2015-11-17 | 2021-02-26 | 深圳光峰科技股份有限公司 | 投影设备的自动对焦方法及投影设备 |
CN105407283B (zh) * | 2015-11-20 | 2018-12-18 | 成都因纳伟盛科技股份有限公司 | 一种多目标主动识别跟踪监控方法 |
CN105338248B (zh) * | 2015-11-20 | 2018-08-28 | 成都因纳伟盛科技股份有限公司 | 智能多目标主动跟踪监控方法及系统 |
CN105611158A (zh) * | 2015-12-23 | 2016-05-25 | 北京奇虎科技有限公司 | 一种自动跟焦方法和装置、用户设备 |
CN109413324A (zh) * | 2017-08-16 | 2019-03-01 | 中兴通讯股份有限公司 | 一种拍摄方法和移动终端 |
CN110035218B (zh) * | 2018-01-11 | 2021-06-15 | 华为技术有限公司 | 一种图像处理方法、图像处理装置及拍照设备 |
CN108419015B (zh) * | 2018-04-11 | 2020-08-04 | 浙江大华技术股份有限公司 | 一种聚焦方法及装置 |
CN108924427B (zh) * | 2018-08-13 | 2020-08-04 | 浙江大华技术股份有限公司 | 一种摄像机聚焦方法、装置以及摄像机 |
CN110278383B (zh) * | 2019-07-25 | 2021-06-15 | 浙江大华技术股份有限公司 | 聚焦方法、装置以及电子设备、存储介质 |
-
2019
- 2019-12-30 WO PCT/CN2019/129852 patent/WO2021134179A1/fr active Application Filing
- 2019-12-30 CN CN201980053920.0A patent/CN112585941A/zh active Pending
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103096124A (zh) * | 2013-02-20 | 2013-05-08 | 浙江宇视科技有限公司 | 一种辅助对焦方法及装置 |
US20190279342A1 (en) * | 2017-03-01 | 2019-09-12 | Fotonation Limited | Method of providing a sharpness measure for an image |
CN108702435A (zh) * | 2017-04-26 | 2018-10-23 | 华为技术有限公司 | 一种终端和摄像头 |
WO2019227441A1 (fr) * | 2018-05-31 | 2019-12-05 | 深圳市大疆创新科技有限公司 | Procédé et dispositif de commande vidéo de plateforme mobile |
CN108769538A (zh) * | 2018-08-16 | 2018-11-06 | Oppo广东移动通信有限公司 | 自动对焦方法、装置、存储介质及终端 |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113837079A (zh) * | 2021-09-24 | 2021-12-24 | 苏州贝康智能制造有限公司 | 显微镜的自动对焦方法、装置、计算机设备和存储介质 |
CN113837079B (zh) * | 2021-09-24 | 2024-05-14 | 苏州贝康智能制造有限公司 | 显微镜的自动对焦方法、装置、计算机设备和存储介质 |
CN113810615A (zh) * | 2021-09-26 | 2021-12-17 | 展讯通信(上海)有限公司 | 对焦处理方法、装置、电子设备和存储介质 |
CN113923358A (zh) * | 2021-10-09 | 2022-01-11 | 上海深视信息科技有限公司 | 一种飞拍模式下的在线自动对焦方法和系统 |
CN114697548A (zh) * | 2022-03-21 | 2022-07-01 | 迈克医疗电子有限公司 | 显微图像拍摄对焦方法及装置 |
CN114697548B (zh) * | 2022-03-21 | 2023-09-29 | 迈克医疗电子有限公司 | 显微图像拍摄对焦方法及装置 |
CN114845050A (zh) * | 2022-04-15 | 2022-08-02 | 深圳市道通智能航空技术股份有限公司 | 一种对焦方法、摄像装置、无人机和存储介质 |
CN118695094A (zh) * | 2024-08-21 | 2024-09-24 | 浙江大华技术股份有限公司 | 一种聚焦方法、装置、电子设备及存储介质 |
Also Published As
Publication number | Publication date |
---|---|
CN112585941A (zh) | 2021-03-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2021134179A1 (fr) | Appareil et procédé de mise au point, dispositif de photographie, plateforme mobile et support d'enregistrement | |
US8194995B2 (en) | Fast camera auto-focus | |
US8335393B2 (en) | Image processing apparatus and image processing method | |
US9313419B2 (en) | Image processing apparatus and image pickup apparatus where image processing is applied using an acquired depth map | |
CN108076278B (zh) | 一种自动对焦方法、装置及电子设备 | |
JP4497211B2 (ja) | 撮像装置、撮像方法及びプログラム | |
US8854528B2 (en) | Imaging apparatus | |
WO2017045558A1 (fr) | Procédé et appareil d'ajustement de profondeur de champ, et terminal | |
JP6436783B2 (ja) | 画像処理装置、撮像装置、画像処理方法、プログラム、および、記憶媒体 | |
JP3823921B2 (ja) | 撮像装置 | |
US20130250165A1 (en) | Method and apparatus for applying multi-autofocusing (af) using contrast af | |
CN110753182B (zh) | 成像设备的调节方法和设备 | |
US20120019709A1 (en) | Assisting focusing method using multiple face blocks | |
US11962901B2 (en) | Systems and methods for obtaining a super macro image | |
US10412321B2 (en) | Imaging apparatus and image synthesis method | |
US8737831B2 (en) | Digital photographing apparatus and method that apply high-speed multi-autofocusing (AF) | |
KR20100079832A (ko) | 지능형 셀프 타이머 모드를 지원하는 디지털 카메라 및 그 제어방법 | |
US10747089B2 (en) | Imaging apparatus and control method of the same | |
CN105744158A (zh) | 视频图像显示的方法、装置及移动终端 | |
US8994846B2 (en) | Image processing apparatus and image processing method for detecting displacement between images having different in-focus positions | |
JP6071173B2 (ja) | 撮像装置、その制御方法及びプログラム | |
CN115379075A (zh) | 月亮拍摄方法、装置、存储介质及拍摄设备 | |
JP2022029567A (ja) | 制御装置、制御方法およびプログラム | |
JP2002277730A (ja) | 電子カメラの自動焦点制御方法、装置及びプログラム | |
JP6246705B2 (ja) | フォーカス制御装置、撮像装置及びフォーカス制御方法 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 19958490 Country of ref document: EP Kind code of ref document: A1 |
NENP | Non-entry into the national phase |
Ref country code: DE |
122 | Ep: pct application non-entry in european phase |
Ref document number: 19958490 Country of ref document: EP Kind code of ref document: A1 |