CN111277753A - Focusing method, device, terminal device and storage medium - Google Patents
- Publication number: CN111277753A (application CN202010084012.2A)
- Authority
- CN
- China
- Prior art keywords
- preview image
- image
- current preview
- subject
- target
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/67—Focus control based on electronic image sensor signals
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/63—Control of cameras or camera modules by using electronic viewfinders
- H04N23/631—Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
- H04N23/632—Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters
Abstract
Description
Technical Field
The present application relates to the field of imaging technology, and in particular to a focusing method, apparatus, terminal device, and computer-readable storage medium.
Background
With the development of imaging technology, capturing images with terminal devices has become increasingly common. During image capture, the subject is usually detected by target segmentation or target detection, and the segmentation or detection result is passed to the autofocus module for focusing.
However, existing subject segmentation and subject detection algorithms assume a scene that contains a subject target. In scenes without one, they are prone to mis-segmentation or mis-detection; when such erroneous results are passed to the autofocus module, focusing becomes inaccurate and the user experience suffers.
Summary of the Invention
The purpose of this application is to solve, at least to some extent, one of the technical problems in the related art.
In a first aspect, an embodiment of this application provides a focusing method comprising the following steps: acquiring a current preview image; obtaining a subject confidence of the current preview image; detecting, based on the subject confidence, whether the current preview image is an image with or without a subject; and performing focusing with the focusing mode corresponding to the detection result.
In a second aspect, an embodiment of this application provides a focusing apparatus comprising: a preview image acquisition module for acquiring a current preview image; a subject confidence acquisition module for obtaining the subject confidence of the current preview image; a detection module for detecting, based on the subject confidence, whether the current preview image is an image with or without a subject; and a focusing module for performing focusing with the focusing mode corresponding to the detection result.
In a third aspect, an embodiment of this application provides a terminal device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the program, the focusing method of the first-aspect embodiments is implemented.
In a fourth aspect, an embodiment of this application provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the focusing method of the first-aspect embodiments.
With the technical solution of the embodiments of this application, the current preview image is acquired, its subject confidence is obtained, the image is classified as with or without a subject based on that confidence, and focusing is then performed with the corresponding focusing mode. By using the subject confidence of the preview image to decide whether a subject is present and selecting the focusing mode accordingly, the method handles both kinds of scene: it reduces target mis-segmentation and mis-detection in subject-free scenes, avoids the autofocus inaccuracy such errors would cause, and thereby improves both the precision of target subject segmentation/detection and the accuracy of autofocus.
Additional aspects and advantages of this application are set forth in part in the following description; they will in part become apparent from the description, or may be learned through practice of this application.
Brief Description of the Drawings
The above and/or additional aspects and advantages of this application will become apparent and easy to understand from the following description of embodiments taken in conjunction with the accompanying drawings, in which:
FIG. 1 is a flowchart of a focusing method according to an embodiment of this application.
FIG. 2 is a flowchart of a focusing method according to a specific embodiment of this application.
FIG. 3 is a schematic diagram of a target segmentation structure according to a specific embodiment of this application.
FIG. 4 is a flowchart of a focusing method according to another specific embodiment of this application.
FIG. 5 is a schematic structural diagram of a focusing apparatus according to an embodiment of this application.
FIG. 6 is a schematic structural diagram of a terminal device according to an embodiment of this application.
Detailed Description
Embodiments of the present invention are described in detail below; examples are illustrated in the accompanying drawings, where identical or similar reference numerals denote identical or similar elements, or elements with identical or similar functions, throughout. The embodiments described with reference to the drawings are exemplary; they are intended to explain the present invention and must not be construed as limiting it.
In the prior art, subject detection typically uses target segmentation or target detection to find the subject, and the segmentation or detection result is passed to autofocus for focusing.
Target segmentation methods usually divide an image into several sub-images such that the similarity within each sub-image is maximised while the similarity between sub-images is minimised; examples include Normalized Cut and Graph Cut. Clustering-based segmentation methods typically initialise a coarse clustering and iteratively merge pixels with similar features into the same superpixel until convergence, yielding the final segmentation; examples include k-means (the k-means clustering algorithm) and SLIC (Simple Linear Iterative Clustering). Semantics-based segmentation methods usually apply a convolutional neural network that performs a softmax cross-entropy classification of every pixel to segment the target; examples include FCN (Fully Convolutional Networks) and the DeepLab family.
Target detection methods include detecting the subject with integral-image features plus AdaBoost, and extracting HOG features from candidate subject regions and judging them with an SVM (Support Vector Machine) classifier or a DPM. Among deep-learning detectors, one-stage networks (YOLO — You Only Look Once — and the SSD family) regress the position of the target box directly, without generating candidate boxes, turning localisation into a regression problem, while two-stage networks (the Faster R-CNN family) use an RPN to propose candidate regions.
However, existing subject segmentation and subject detection algorithms assume a scene that contains a subject target. In scenes without one, they are prone to mis-segmentation or mis-detection; when such erroneous results are passed to the autofocus module, focusing becomes inaccurate and the user experience suffers.
The focusing method, apparatus, terminal device, and storage medium of the embodiments of this application are described below with reference to the accompanying drawings.
FIG. 1 is a flowchart of a focusing method according to an embodiment of this application. Note that the focusing method of the embodiments of this application can be applied to the focusing apparatus of the embodiments, and the apparatus can be configured on a terminal device. In the embodiments of this application, the terminal device may be a mobile terminal such as a smartphone, tablet computer, or wearable device.
As shown in FIG. 1, the focusing method may include:
S110: acquire the current preview image.
In the embodiments of this application, the current preview image can be obtained while the camera is capturing the current scene. The preview image is generated by the terminal device capturing the current scene in real time through the camera; it may include a person and a background. The preview image includes, but is not limited to, an RGB (red/green/blue three-channel) image and a depth map. To reduce the computation of subsequent processing, the preview image may be scaled down to a smaller size, e.g. 224×224.
In one embodiment of this application, the current image may be captured once the camera on the terminal device has been turned on.
The camera may be a device installed in the terminal device, including but not limited to one or more of a wide-angle camera, a telephoto camera, a colour camera, or a black-and-white camera.
In one embodiment of this application, the preview image may be displayed on the terminal device's screen in real time.
In one embodiment of this application, the current preview image may also be one frame of a video being recorded by the terminal device.
S120: obtain the subject confidence of the current preview image.
In the embodiments of this application, after the current preview image is acquired, its subject confidence can be obtained with a support vector machine (SVM) regression model or a classification network model based on a convolutional neural network (CNN). For example, the subject confidence of the current preview image can be obtained in either of the following two ways:
In one implementation, features are extracted from the current preview image and fed into a trained support vector machine regression model for regression prediction; the resulting regression value is taken as the subject confidence of the current preview image.
In another implementation, the current preview image is fed into a trained neural network classifier, which is a CNN-based classification network model with an input layer for extracting features and an output layer producing the network's raw output values. The raw output values are then obtained, the target output value among them is converted into a probability, and the converted probability is taken as the subject confidence of the current preview image, where the target output value is the output corresponding to the "subject present" class label. Specific implementations are described in the embodiments below.
S130: detect, based on the subject confidence, whether the current preview image is an image with or without a subject.
In the embodiments of this application, once the subject confidence of the current preview image is obtained, it can be compared against a target threshold: if the confidence is below the threshold, the current preview image is judged to be an image without a subject; if it is greater than or equal to the threshold, it is judged to be an image with a subject.
In other words, the subject confidence of the current preview image is compared with the target threshold; below the threshold the image is classified as having no subject, and at or above it as having a subject. As an example, the target threshold may be 0.5.
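The threshold comparison described above can be sketched as a small helper; the default of 0.5 is the example threshold given in the text:

```python
def classify_preview(subject_confidence: float, target_threshold: float = 0.5) -> str:
    """Decide whether a preview frame contains a subject.

    Per the scheme above: confidence below the threshold means no subject;
    at or above the threshold means a subject is present.
    """
    return "no_subject" if subject_confidence < target_threshold else "subject"
```

Note that, as in the text, a confidence exactly equal to the threshold is treated as "subject present".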
S140: perform focusing with the focusing mode corresponding to the detection result.
Optionally, in the embodiments of this application, when the current preview image is detected to contain a subject, the size and region position of the subject target are detected from the image, and focusing is performed on the detected size and region position; when the image is detected to contain no subject, focusing is performed on a target region of the current preview image.
The target region may be the middle region, i.e. the central area of the current preview image.
As an example, when the current preview image is detected to contain no subject, autofocus can be applied to its central region; when it is detected to contain a subject, target segmentation or target detection can be used to detect the size and region position of the subject target, and focusing is performed accordingly. The specific implementation is described in the embodiments below.
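The two-branch dispatch above can be sketched as follows. The `detect_subject` callable stands in for the segmentation/detection step and the central-quarter fallback window is an illustrative choice; neither is specified in the text:

```python
def choose_focus_region(preview_shape, subject_confidence, detect_subject, threshold=0.5):
    """Pick a focus window per the scheme above (illustrative sketch).

    preview_shape: (height, width) of the preview image.
    detect_subject: caller-supplied function returning the subject's
        bounding box (x, y, w, h) -- a hypothetical stand-in for the
        segmentation/detection step, not part of the patent text.
    """
    h, w = preview_shape
    if subject_confidence >= threshold:
        # Subject present: focus on its detected size and region position.
        return ("subject", detect_subject())
    # No subject: fall back to the central region of the frame.
    return ("center", (w // 4, h // 4, w // 2, h // 2))
```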
With the focusing method of the embodiments of this application, the current preview image is acquired, its subject confidence is obtained, the image is classified as with or without a subject based on that confidence, and focusing is then performed with the corresponding focusing mode. By using the subject confidence of the preview image to decide whether a subject is present and selecting the focusing mode according to the detection result, the method handles both kinds of scene: it reduces target mis-segmentation and mis-detection in subject-free scenes, avoids the autofocus inaccuracy caused by segmentation or detection errors, and thereby improves both the precision of target subject segmentation/detection and the accuracy of autofocus.
FIG. 2 is a flowchart of a focusing method according to a specific embodiment of this application. As shown in FIG. 2, the focusing method may include:
S210: acquire the current preview image.
For example, when the camera photographs a user standing with their back to the sea, the current preview image can be obtained; the acquired preview image includes both the user and the sea.
S220: obtain the subject confidence of the current preview image.
In one implementation, features are extracted from the current preview image and fed into a trained support vector machine regression model for regression prediction; the resulting regression value is taken as the subject confidence of the current preview image.
The features of the current preview image include, but are not limited to, HOG (Histogram of Oriented Gradients), texture, colour, and shape features.
The support vector machine regression model includes, but is not limited to, SVM regression models configured with a linear kernel, a polynomial kernel, or a radial basis function (RBF) kernel.
For example, after the current preview image is obtained, its HOG features can be extracted and fed into an SVM regression model configured with a linear kernel to obtain a regression value, e.g. between 0 and 1; this regression value is then taken as the subject confidence of the current preview image.
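The text does not name a library for the SVM regression model; a minimal sketch with scikit-learn's `SVR` and synthetic stand-in features (real use would substitute HOG vectors from labelled preview frames) might look like:

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)

# Toy stand-ins for HOG feature vectors: 64-dim features for frames
# labelled 1.0 ("subject present") or 0.0 ("no subject"). The labels here
# are synthetic, purely for illustration.
X_train = rng.normal(size=(40, 64))
y_train = (X_train[:, 0] > 0).astype(float)

# Linear-kernel SVR, one of the kernel configurations listed above.
model = SVR(kernel="linear").fit(X_train, y_train)

# The regression value for a new frame's features is used as its
# subject confidence.
confidence = float(model.predict(rng.normal(size=(1, 64)))[0])
```

Unlike a sigmoid output, an SVR regression value is not strictly bounded to [0, 1]; in practice it may need clipping before being compared with the threshold.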
The HOG features of the current preview image can be extracted as follows: first normalise the whole image; then compute the gradients along the horizontal and vertical axes and, from them, the gradient orientation at each pixel, which provides an encoding of local image regions while remaining weakly sensitive to the pose and appearance of human subjects in the image. Because local illumination and foreground-background contrast vary over a very wide range, the gradient strengths must be normalised. Finally, the HOG features of all overlapping blocks in the detection window are collected and concatenated into the final feature vector used for classification, yielding the HOG features of the current preview image.
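The pipeline above (normalise, gradients, per-pixel orientation, gradient-strength normalisation, feature collection) can be sketched with NumPy. This is a teaching miniature — per-cell histograms without the overlapping block normalisation of full HOG:

```python
import numpy as np

def tiny_hog(image, cell=8, bins=9):
    """Minimal HOG-style descriptor following the steps above.

    Not a full HOG implementation: one histogram per non-overlapping
    cell, with a single global L2 normalisation at the end.
    """
    img = image.astype(float)
    img = (img - img.min()) / (np.ptp(img) + 1e-8)     # global normalisation
    gy, gx = np.gradient(img)                          # vertical / horizontal gradients
    mag = np.hypot(gx, gy)                             # gradient strength
    ang = np.mod(np.arctan2(gy, gx), np.pi)            # unsigned orientation per pixel
    h, w = img.shape
    feats = []
    for i in range(0, h - cell + 1, cell):
        for j in range(0, w - cell + 1, cell):
            a = ang[i:i + cell, j:j + cell].ravel()
            m = mag[i:i + cell, j:j + cell].ravel()
            # Orientation histogram weighted by gradient strength.
            hist, _ = np.histogram(a, bins=bins, range=(0, np.pi), weights=m)
            feats.append(hist)
    v = np.concatenate(feats)
    return v / (np.linalg.norm(v) + 1e-8)              # normalise gradient strengths
```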
As another example, after the current preview image is obtained, an extraction module can extract texture features of the image — texture being the regular distribution of grey values produced by the repeated arrangement of objects in the image — which are fed into an SVM regression model configured with a polynomial kernel to obtain a regression value; this value is then taken as the subject confidence of the current preview image.
The texture features of the current preview image can be extracted with statistical, geometric, or signal-processing algorithms.
As a further example, an extraction module can extract shape features of the current preview image, which are fed into an SVM regression model configured with an RBF kernel to obtain a regression value; this value is then taken as the subject confidence of the current preview image.
The shape features of the current preview image can be extracted with boundary-feature algorithms, shape invariant moment algorithms, or Fourier shape descriptor algorithms.
In another implementation, the current preview image is fed into a trained neural network classifier, which is a CNN-based classification network model with an input layer for extracting features and an output layer producing the network's raw output values. The raw output values are then obtained, the target output value among them is converted into a probability, and the converted probability is taken as the subject confidence of the current preview image, where the target output value is the output corresponding to the "subject present" class label. As an example, the classification network model may be a binary classification network model.
For example, the current preview image can be fed into the trained neural network classifier, which extracts the image's features and, based on them, produces the network's raw output. The raw output may be a single value, which is converted into a probability with the sigmoid function; the converted probability is taken as the subject confidence of the current preview image.
As another example, the network's raw output may comprise two values, which are converted into probabilities with the softmax function; the converted probability of the output corresponding to the "subject present" class label is taken as the subject confidence of the current preview image.
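The two probability conversions described above can be written directly; which output index corresponds to the "subject present" class is an assumption of this sketch:

```python
import numpy as np

def sigmoid_confidence(logit):
    """Single raw network output -> subject confidence via the sigmoid."""
    return 1.0 / (1.0 + np.exp(-logit))

def softmax_confidence(logits, subject_index=1):
    """Two raw outputs -> probabilities via softmax; the probability of the
    'subject present' class (index is an illustrative assumption) is the
    subject confidence."""
    z = np.asarray(logits, dtype=float)
    e = np.exp(z - z.max())          # subtract the max for numerical stability
    return float((e / e.sum())[subject_index])
```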
S230,根据主体置信度,检测当前预览图像是否为有主体图像或无主体图像。S230, according to the subject confidence, detect whether the current preview image is an image with a subject or an image without a subject.
举例而言,获取到当前预览图像的主体置信度,可将主体置信度与目标阈值进行比较,比如,主体置信度为0.6,目标阈值为0.5,此时主体置信度0.6大于目标阈值0.5,则判定当前预览图像为有主体图像。For example, to obtain the subject confidence of the current preview image, the subject confidence can be compared with the target threshold. For example, the subject confidence is 0.6 and the target threshold is 0.5. At this time, the subject confidence 0.6 is greater than the target threshold 0.5, then It is determined that the current preview image is an image with a subject.
S240: when the current preview image is detected to be an image without a subject, perform focus processing on the target area of the current preview image.
Here, the target area may be the center area.
That is, when the current preview image is detected to be an image without a subject, autofocus can be performed on the center area of the current preview image.
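A sketch of choosing the center area as the default focus region; the one-half fraction is an assumption for illustration, since the patent does not fix the area's size:

```python
def center_region(width: int, height: int, fraction: float = 0.5):
    """Return (x, y, w, h) of a centered rectangle covering `fraction`
    of each image dimension, used as the default focus area."""
    w = max(1, round(width * fraction))
    h = max(1, round(height * fraction))
    return (width - w) // 2, (height - h) // 2, w, h

roi = center_region(400, 300)  # centered focus rectangle for a 400x300 preview
```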
S250: when the current preview image is detected to be an image with a subject, the size and region position of the subject target can be detected from the current preview image by means of target segmentation, and focus processing is performed according to the detected size and region position of the subject target.
In an embodiment of the present application, when the current preview image is detected to be an image with a subject, it can be fed into a trained target segmentation network model. The model has learned the mapping between image features and the subject's size and region position; it includes an input layer for extracting image features and an output layer for outputting the subject's size and region position. The size and region position of the subject target output by the model are then obtained, and focus processing is performed according to the detected size and region position.
The target segmentation network model is a segmentation model based on a target segmentation algorithm. Such algorithms include, but are not limited to, deep-learning segmentation algorithms such as U-Net and FCN (Fully Convolutional Network); this class of algorithm comprises an Encoder feature-encoding module and a Decoder target-template-generation module. For example, the network structure of the target segmentation network model may be as shown in FIG. 3.
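The Encoder/Decoder split can be illustrated with a deliberately simplified, weight-free sketch: 2x2 average pooling stands in for the encoder's feature downsampling and nearest-neighbour repetition for the decoder's upsampling (a real model such as U-Net learns convolutional weights instead):

```python
def encode(image):
    """Toy 'encoder': downsample a 2D grid by 2x2 average pooling.
    Assumes even height and width."""
    h, w = len(image), len(image[0])
    return [[(image[y][x] + image[y][x + 1] +
              image[y + 1][x] + image[y + 1][x + 1]) / 4.0
             for x in range(0, w, 2)]
            for y in range(0, h, 2)]

def decode(feature):
    """Toy 'decoder': upsample a 2D grid by nearest-neighbour repetition."""
    out = []
    for row in feature:
        up = [v for v in row for _ in range(2)]  # repeat each column
        out.append(up)
        out.append(list(up))                     # repeat each row
    return out

image = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 0, 0],
         [0, 0, 0, 0]]
mask = decode(encode(image))  # same spatial size as the input
```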
That is, when the current preview image is detected to be an image with a subject, it can be fed into the trained target segmentation network model. The model extracts image features and, based on the learned mapping between image features and the subject's size and region position, outputs the subject's size and region position. The size and region position of the subject target output by the model are then obtained, and autofocus is performed according to the detected size and region position of the subject target.
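Once a binary subject mask is available, reading off the subject's size and region position is straightforward; the helper below is an illustrative sketch, not the patent's output layer:

```python
def subject_size_and_region(mask):
    """From a binary subject mask, return (size, (x, y, w, h)):
    the subject's pixel count and its bounding region position."""
    points = [(x, y) for y, row in enumerate(mask)
              for x, v in enumerate(row) if v]
    if not points:
        return 0, None  # no subject pixels in the mask
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    x0, y0 = min(xs), min(ys)
    return len(points), (x0, y0, max(xs) - x0 + 1, max(ys) - y0 + 1)

# Hypothetical 3x3 mask: the subject occupies the top-right 2x2 block.
size, region = subject_size_and_region([[0, 1, 1],
                                        [0, 1, 1],
                                        [0, 0, 0]])
```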
According to the focusing method of the embodiments of the present application, the current preview image is acquired, then its subject confidence is obtained, and based on that confidence it is detected whether the current preview image is an image with or without a subject. When the image is detected to have no subject, focus processing is performed on the target area of the current preview image; when it is detected to have a subject, the size and region position of the subject target are detected from the current preview image by target segmentation, and focus processing is performed accordingly. By obtaining the subject confidence of the preview image, detecting whether the image contains a subject, and selecting the corresponding focusing mode based on the detection result, the method handles both subject and subject-free scenes, reduces erroneous target segmentation in subject-free scenes, avoids the autofocus inaccuracy caused by faulty subject segmentation, and improves both the precision of subject segmentation and the accuracy of autofocus.
FIG. 4 is a flowchart of a focusing method according to another specific embodiment of the present application. As shown in FIG. 4, the focusing method may include:
S410: obtain the current preview image.
S420: obtain the subject confidence of the current preview image.
S430: based on the subject confidence, detect whether the current preview image is an image with a subject or an image without a subject.
S440: when the current preview image is detected to be an image without a subject, perform focus processing on the target area of the current preview image.
Here, the target area may be the center area.
That is, when the current preview image is detected to be an image without a subject, autofocus can be performed on the center area of the current preview image.
It should be noted that, in the embodiments of the present application, the implementation of steps S410 to S440 may refer to the description of steps S210 to S240 above and is not repeated here.
S450: when the current preview image is detected to be an image with a subject, the size and region position of the subject target can be detected from the current preview image by means of target detection, and focus processing is performed according to the detected size and region position of the subject target.
In an embodiment of the present application, when the current preview image is detected to be an image with a subject, it can be fed into a trained target detection network model. The model has learned the mapping between image features and the size and position of the subject's inscribed rectangle; it includes an input layer for extracting image features and an output layer for outputting the size and position of the inscribed rectangle. The size and position of the inscribed rectangle output by the model are then obtained; the rectangle's size is determined as the size of the subject target and its position as the region position of the subject target, and focus processing is performed according to the detected size and region position.
The target detection model is a detection model based on a target detection algorithm, including but not limited to algorithms such as SSD, YOLO, and Fast R-CNN (Fast Region-based Convolutional Neural Network).
That is, when the current preview image is detected to be an image with a subject, it can be fed into the trained target detection network model. The model extracts image features and, based on the learned mapping between image features and the subject's inscribed rectangle, outputs the size and position of that rectangle. The size and position of the inscribed rectangle output by the target detection network model are then obtained, and autofocus is performed according to the detected size and region position of the subject target.
It should be noted that, because the circumscribed rectangle of the subject contains background area, it can easily make autofocus inaccurate. Therefore, in the embodiments of the present application, the target detection network model outputs the size and position of the subject's inscribed rectangle.
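To illustrate why the inscribed rectangle is the safer focus region, the sketch below (a hypothetical helper, not part of the patent) finds the largest axis-aligned rectangle consisting only of subject pixels in a binary mask, using the classic histogram-and-stack method; unlike the circumscribed bounding box, the result contains no background pixels:

```python
def largest_inscribed_rect(mask):
    """Largest axis-aligned rectangle of 1s inside a binary mask,
    returned as (x, y, w, h), or None if the mask contains no 1s."""
    if not mask:
        return None
    w = len(mask[0])
    heights = [0] * w    # per-column run length of 1s ending at this row
    best = (0, None)     # (area, rectangle)
    for y, row in enumerate(mask):
        for x in range(w):
            heights[x] = heights[x] + 1 if row[x] else 0
        stack = []       # column indices with increasing heights
        for x in range(w + 1):
            h = heights[x] if x < w else 0   # sentinel flushes the stack
            while stack and heights[stack[-1]] >= h:
                top = stack.pop()
                height = heights[top]
                left = stack[-1] + 1 if stack else 0
                area = height * (x - left)
                if area > best[0]:
                    best = (area, (left, y - height + 1, x - left, height))
            stack.append(x)
    return best[1]

mask = [[1, 1, 0],
        [1, 1, 1],
        [1, 1, 1]]
focus_rect = largest_inscribed_rect(mask)  # background-free focus rectangle
```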
According to the focusing method of the embodiments of the present application, the current preview image is acquired, then its subject confidence is obtained, and based on that confidence it is detected whether the current preview image is an image with or without a subject. When the image is detected to have no subject, focus processing is performed on the target area of the current preview image; when it is detected to have a subject, the size and region position of the subject target are detected from the current preview image by target detection, and focus processing is performed accordingly. By obtaining the subject confidence of the preview image, detecting whether the image contains a subject, and selecting the corresponding focusing mode based on the detection result, the method handles both subject and subject-free scenes, reduces false target detection in subject-free scenes, avoids the autofocus inaccuracy caused by faulty subject detection, and improves both the precision of subject detection and the accuracy of autofocus.
Corresponding to the focusing methods provided by the above embodiments, an embodiment of the present application further provides a focusing device. Since the focusing device provided by the embodiments of the present application corresponds to the focusing methods of the above embodiments, the implementations of the focusing method are also applicable to the focusing device of this embodiment and are not described in detail here. FIG. 5 is a schematic structural diagram of a focusing device according to an embodiment of the present application.
As shown in FIG. 5, the focusing device 500 includes a preview image acquisition module 510, a subject confidence acquisition module 520, a detection module 530, and a focusing module 540, wherein:
The preview image acquisition module 510 is configured to obtain the current preview image.
The subject confidence acquisition module 520 is configured to obtain the subject confidence of the current preview image. As an example, the subject confidence acquisition module is specifically configured to: extract features of the current preview image; input those features into a trained support vector machine regression model for regression prediction to obtain a regression value; and determine the resulting regression value as the subject confidence of the current preview image.
In an embodiment of the present application, the subject confidence acquisition module 520 is specifically configured to: input the current preview image into a trained neural network classifier, where the classifier is a classification network model based on a convolutional neural network and includes an input layer for extracting features and an output layer for outputting raw network output values; obtain the raw network output values of the classifier; and convert the target output value among the raw output values into a probability and determine the converted probability as the subject confidence of the current preview image, where the target output value is the output value whose category label is the "has subject" label.
The detection module 530 is configured to detect, based on the subject confidence, whether the current preview image is an image with a subject or an image without a subject. As an example, the detection module is specifically configured to: judge whether the subject confidence is less than a target threshold; if so, determine that the current preview image is an image without a subject; and if the subject confidence is greater than or equal to the target threshold, determine that the current preview image is an image with a subject.
The focusing module 540 is configured to perform focus processing using the focusing mode corresponding to the detection result. As an example, the focusing module is specifically configured to: when the current preview image is detected to be an image with a subject, detect the size and region position of the subject target from the current preview image and perform focus processing accordingly; and when the current preview image is detected to be an image without a subject, perform focus processing on the target area of the current preview image.
In an embodiment of the present application, the focusing module 540 is specifically configured to: input the current preview image into a trained target segmentation network model, where the model has learned the mapping between image features and the subject's size and region position and includes an input layer for extracting image features and an output layer for outputting the subject's size and region position; and obtain the size and region position of the subject target output by the model.
In an embodiment of the present application, the focusing module 540 is specifically configured to: input the current preview image into a trained target detection network model, where the model has learned the mapping between image features and the size and position of the subject's inscribed rectangle and includes an input layer for extracting image features and an output layer for outputting the size and position of the inscribed rectangle; obtain the size and position of the inscribed rectangle output by the model; and determine the size of the inscribed rectangle as the size of the subject target and its position as the region position of the subject target.
According to the focusing method of the embodiments of the present application, the current preview image is acquired, then its subject confidence is obtained; based on that confidence it is detected whether the current preview image is an image with or without a subject, and focus processing is then performed using the corresponding focusing mode. By obtaining the subject confidence of the preview image, detecting whether the image contains a subject, and selecting the corresponding focusing mode based on the detection result, the method handles both subject and subject-free scenes, reduces erroneous target segmentation or false target detection in subject-free scenes, avoids the autofocus inaccuracy caused by faulty subject segmentation or subject detection, and improves both the precision of subject segmentation or detection and the accuracy of autofocus.
To implement the above embodiments, the present application further proposes a terminal device.
FIG. 6 is a schematic structural diagram of a terminal device according to an embodiment of the present application. As shown in FIG. 6, the terminal device 600 may include a memory 610, a processor 620, and a computer program 630 stored in the memory 610 and executable on the processor 620; when the processor 620 executes the program, the focusing method of any of the above embodiments of the present application is implemented.
To implement the above embodiments, the present application further proposes a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the focusing method of any of the above embodiments is implemented.
In the description of this specification, reference to the terms "one embodiment", "some embodiments", "example", "specific example", or "some examples" means that a specific feature, structure, material, or characteristic described in connection with that embodiment or example is included in at least one embodiment or example of the present application. In this specification, schematic expressions of the above terms do not necessarily refer to the same embodiment or example. Moreover, the specific features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. In addition, provided they do not contradict one another, those skilled in the art may combine the different embodiments or examples described in this specification and the features of those embodiments or examples.
Any process or method description in a flowchart, or otherwise described herein, may be understood as representing a module, segment, or portion of code that includes one or more executable instructions for implementing a specific logical function or step of the process. The scope of the preferred embodiments of the present application also includes implementations in which functions may be performed out of the order shown or discussed, including substantially concurrently or in the reverse order depending on the functions involved, as should be understood by those skilled in the art to which the embodiments of the present application belong.
The logic and/or steps represented in flowcharts or otherwise described herein, for example an ordered list of executable instructions for implementing logical functions, may be embodied in any computer-readable medium for use by, or in connection with, an instruction execution system, apparatus, or device (such as a computer-based system, a system including a processor, or another system that can fetch and execute instructions from an instruction execution system, apparatus, or device). For the purposes of this specification, a "computer-readable medium" may be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with an instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of computer-readable media include: an electrical connection (electronic device) with one or more wires, a portable computer disk cartridge (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a fiber-optic device, and a portable compact disc read-only memory (CD-ROM). In addition, the computer-readable medium may even be paper or another suitable medium on which the program can be printed, since the program can be obtained electronically, for example by optically scanning the paper or other medium and then editing, interpreting, or otherwise processing it if necessary, and then stored in a computer memory.
It should be understood that parts of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, multiple steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any of the following technologies known in the art, or a combination of them, may be used: discrete logic circuits with logic gates for implementing logic functions on data signals, application-specific integrated circuits with suitable combinational logic gates, programmable gate arrays (PGA), field-programmable gate arrays (FPGA), and so on.
A person of ordinary skill in the art can understand that all or part of the steps of the above method embodiments can be completed by a program instructing the relevant hardware; the program can be stored in a computer-readable storage medium and, when executed, includes one of the steps of the method embodiments or a combination thereof.
In addition, the functional units in the embodiments of the present application may be integrated into one processing module, or each unit may exist physically on its own, or two or more units may be integrated into one module. The integrated module may be implemented in the form of hardware or in the form of a software functional module. If the integrated module is implemented as a software functional module and sold or used as an independent product, it may also be stored in a computer-readable storage medium.
The above-mentioned storage medium may be a read-only memory, a magnetic disk, an optical disc, or the like. Although the embodiments of the present application have been shown and described above, it should be understood that the above embodiments are exemplary and are not to be construed as limiting the present application; within the scope of the present application, a person of ordinary skill in the art may make changes, modifications, substitutions, and variations to the above embodiments.
Claims (16)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010084012.2A CN111277753A (en) | 2020-02-10 | 2020-02-10 | Focusing method, device, terminal device and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010084012.2A CN111277753A (en) | 2020-02-10 | 2020-02-10 | Focusing method, device, terminal device and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111277753A true CN111277753A (en) | 2020-06-12 |
Family
ID=71003580
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010084012.2A Pending CN111277753A (en) | 2020-02-10 | 2020-02-10 | Focusing method, device, terminal device and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111277753A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113572957A (en) * | 2021-06-26 | 2021-10-29 | 荣耀终端有限公司 | A shooting focusing method and related equipment |
CN114666490A (en) * | 2020-12-23 | 2022-06-24 | 北京小米移动软件有限公司 | Focusing method and device, electronic equipment and storage medium |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103955718A (en) * | 2014-05-15 | 2014-07-30 | 厦门美图之家科技有限公司 | Image subject recognition method |
CN109167910A (en) * | 2018-08-31 | 2019-01-08 | 努比亚技术有限公司 | focusing method, mobile terminal and computer readable storage medium |
CN109587394A (en) * | 2018-10-23 | 2019-04-05 | 广东智媒云图科技股份有限公司 | A kind of intelligence patterning process, electronic equipment and storage medium |
CN110149482A (en) * | 2019-06-28 | 2019-08-20 | Oppo广东移动通信有限公司 | Focusing method, focusing device, electronic equipment and computer readable storage medium |
CN110418064A (en) * | 2019-09-03 | 2019-11-05 | 北京字节跳动网络技术有限公司 | Focusing method, device, electronic equipment and storage medium |
- 2020-02-10: Application CN202010084012.2A filed; published as CN111277753A; legal status: Pending.
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103955718A (en) * | 2014-05-15 | 2014-07-30 | 厦门美图之家科技有限公司 | Image subject recognition method |
CN109167910A (en) * | 2018-08-31 | 2019-01-08 | 努比亚技术有限公司 | focusing method, mobile terminal and computer readable storage medium |
CN109587394A (en) * | 2018-10-23 | 2019-04-05 | 广东智媒云图科技股份有限公司 | A kind of intelligence patterning process, electronic equipment and storage medium |
CN110149482A (en) * | 2019-06-28 | 2019-08-20 | Oppo广东移动通信有限公司 | Focusing method, focusing device, electronic equipment and computer readable storage medium |
CN110418064A (en) * | 2019-09-03 | 2019-11-05 | 北京字节跳动网络技术有限公司 | Focusing method, device, electronic equipment and storage medium |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114666490A (en) * | 2020-12-23 | 2022-06-24 | 北京小米移动软件有限公司 | Focusing method and device, electronic equipment and storage medium |
CN114666490B (en) * | 2020-12-23 | 2024-02-09 | 北京小米移动软件有限公司 | Focusing method, focusing device, electronic equipment and storage medium |
CN113572957A (en) * | 2021-06-26 | 2021-10-29 | 荣耀终端有限公司 | A shooting focusing method and related equipment |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108765278B (en) | Image processing method, mobile terminal and computer readable storage medium | |
US11457138B2 (en) | Method and device for image processing, method for training object detection model | |
AU2021201933B2 (en) | Hierarchical multiclass exposure defects classification in images | |
US10079974B2 (en) | Image processing apparatus, method, and medium for extracting feature amount of image | |
JP4772839B2 (en) | Image identification method and imaging apparatus | |
US8374440B2 (en) | Image processing method and apparatus | |
US8306262B2 (en) | Face tracking method for electronic camera device | |
US8446494B2 (en) | Automatic redeye detection based on redeye and facial metric values | |
CN107633237B (en) | Image background segmentation method, device, equipment and medium | |
US20070031032A1 (en) | Method and apparatus for performing conversion of skin color into preference color by applying face detection and skin area detection | |
US20170054897A1 (en) | Method of automatically focusing on region of interest by an electronic device | |
JP2011188496A (en) | Backlight detection device and backlight detection method | |
CN112784750B (en) | Fast video object segmentation method and device based on pixel and region feature matching | |
CN111368698B (en) | Subject identification methods, devices, electronic equipment and media | |
CN104063709A (en) | Line-of-sight Detection Apparatus, Method, Image Capturing Apparatus And Control Method | |
CN112365513A (en) | Model training method and device | |
CN111277753A (en) | Focusing method, device, terminal device and storage medium | |
CN108664968A (en) | A kind of unsupervised text positioning method based on text selection model | |
CN114691915A (en) | Method and device for improving tile image recognition through algorithm | |
CN111275045B (en) | Image subject recognition methods, devices, electronic equipment and media | |
CN118608926A (en) | Image quality evaluation method, device, electronic device and storage medium | |
CN110909564B (en) | Pedestrian detection method and device | |
JP2024056578A (en) | Image processing apparatus, photographing apparatus, control method of image processing apparatus, and program | |
CN112926417A (en) | Pedestrian detection method, system, device and medium based on deep neural network | |
CN114820448A (en) | Highlight detection method and device based on image segmentation, terminal and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20200612 |