CN110610171A - Image processing method and apparatus, electronic device, computer-readable storage medium - Google Patents


Info

Publication number
CN110610171A
Authority
CN
China
Prior art keywords
face
image
area
portrait
angle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910905113.9A
Other languages
Chinese (zh)
Inventor
黄海东
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201910905113.9A priority Critical patent/CN110610171A/en
Publication of CN110610171A publication Critical patent/CN110610171A/en
Pending legal-status Critical Current


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 - Detection; Localisation; Normalisation
    • G06V40/172 - Classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The present application relates to an image processing method and apparatus, an electronic device, and a computer-readable storage medium. The method includes: acquiring an image to be recognized; detecting whether a human face is present in the image to be recognized; when a human face is present, acquiring, according to the face, a candidate region containing a portrait, the portrait including the face; inputting the candidate region into a portrait segmentation network to obtain a portrait region, and using the portrait region as the subject region of the image to be recognized; and when no human face is present in the image to be recognized, inputting the image to be recognized into a subject recognition network to obtain the subject region of the image to be recognized. The above method and apparatus, electronic device, and computer-readable storage medium can improve the accuracy of subject recognition.

Description

Image processing method and apparatus, electronic device, computer-readable storage medium

Technical Field

The present application relates to the field of imaging technologies, and in particular to an image processing method, apparatus, electronic device, and computer-readable storage medium.

Background

With the development of imaging technology, people are increasingly accustomed to shooting images or videos with image acquisition devices, such as the cameras on electronic devices, to record information. After an electronic device acquires an image, it often needs to perform subject recognition on the image to identify the subject, so that a clearer image of that subject can be captured. However, when recognizing portraits, traditional subject recognition techniques often treat the most salient region as the subject and cannot accurately identify the portrait, resulting in inaccurate image processing.

Summary of the Invention

Embodiments of the present application provide an image processing method, apparatus, electronic device, and computer-readable storage medium, which can improve the accuracy of subject recognition.

An image processing method, comprising:

acquiring an image to be recognized;

detecting whether a human face is present in the image to be recognized;

when a human face is present in the image to be recognized, acquiring, according to the face, a candidate region containing a portrait, the portrait including the face;

inputting the candidate region into a portrait segmentation network to obtain a portrait region, and using the portrait region as the subject region of the image to be recognized; and

when no human face is present in the image to be recognized, inputting the image to be recognized into a subject recognition network to obtain the subject region of the image to be recognized.

An image processing apparatus, comprising:

an image acquisition module, configured to acquire an image to be recognized;

a face detection module, configured to detect whether a human face is present in the image to be recognized;

a candidate region acquisition module, configured to, when a human face is present in the image to be recognized, acquire, according to the face, a candidate region containing a portrait, the portrait including the face;

a portrait segmentation module, configured to input the candidate region into a portrait segmentation network to obtain a portrait region, and use the portrait region as the subject region of the image to be recognized; and

a subject recognition module, configured to, when no human face is present in the image to be recognized, input the image to be recognized into a subject recognition network to obtain the subject region of the image to be recognized.

An electronic device, comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to perform the steps of the above image processing method.

A computer-readable storage medium on which a computer program is stored, the computer program, when executed by a processor, implementing the steps of the above method.

With the above image processing method and apparatus, electronic device, and computer-readable storage medium, when a human face is detected in the image to be recognized, a candidate region containing a portrait is acquired according to the face, the candidate region is input into a portrait segmentation network to obtain a portrait region, and the portrait region is used as the subject region of the image to be recognized; when no face is detected in the image to be recognized, the image is input into a subject recognition network to obtain its subject region. A two-branch network is thus designed for obtaining the subject region of the image to be recognized: when the image contains no face, the subject region is obtained through the subject recognition network; when the image contains a face, a candidate region containing the portrait is determined from the image, and a more accurate portrait region can be obtained as the subject region through the portrait segmentation network.

Brief Description of the Drawings

To illustrate the technical solutions in the embodiments of the present application or in the prior art more clearly, the accompanying drawings required for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.

FIG. 1 is a schematic diagram of an image processing circuit in one embodiment;

FIG. 2 is a flowchart of an image processing method in one embodiment;

FIG. 3 is a flowchart of an image processing method in another embodiment;

FIG. 4 is a flowchart of the step of acquiring a candidate region in one embodiment;

FIG. 5 is a flowchart of the step of determining the angle of a human face in one embodiment;

FIG. 6 is a schematic diagram of image processing in another embodiment;

FIG. 7 is a structural block diagram of an image processing apparatus in one embodiment;

FIG. 8 is a schematic diagram of the internal structure of an electronic device in one embodiment.

Detailed Description

To make the objectives, technical solutions, and advantages of the present application clearer, the present application is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are only intended to explain the present application and are not intended to limit it.

It will be understood that the terms "first", "second", and the like used in this application may be used herein to describe various elements, but these elements are not limited by these terms; the terms are only used to distinguish one element from another. For example, without departing from the scope of this application, a first portrait region may be referred to as a second portrait region, and similarly, a second portrait region may be referred to as a first portrait region. The first portrait region and the second portrait region are both portrait regions, but they are not the same portrait region.

Embodiments of the present application provide an electronic device. The electronic device includes an image processing circuit, which may be implemented with hardware and/or software components and may include various processing units defining an ISP (Image Signal Processing) pipeline. FIG. 1 is a schematic diagram of an image processing circuit in one embodiment. As shown in FIG. 1, for ease of description, only the aspects of the image processing technology related to the embodiments of the present application are shown.

As shown in FIG. 1, the image processing circuit includes an ISP processor 140 and a control logic 150. Image data captured by an imaging device 110 is first processed by the ISP processor 140, which analyzes the image data to capture image statistics that can be used to determine one or more control parameters of the imaging device 110. The imaging device 110 may include a camera having one or more lenses 112 and an image sensor 114. The image sensor 114 may include a color filter array (such as a Bayer filter); the image sensor 114 may obtain the light intensity and wavelength information captured by each of its imaging pixels and provide a set of raw image data that can be processed by the ISP processor 140. An attitude sensor 120 (such as a three-axis gyroscope, Hall sensor, or accelerometer) may provide acquired image processing parameters (such as anti-shake parameters) to the ISP processor 140 based on the attitude sensor 120 interface type. The attitude sensor 120 interface may use an SMIA (Standard Mobile Imaging Architecture) interface, another serial or parallel camera interface, or a combination of the above interfaces.

In addition, the image sensor 114 may also send raw image data to the attitude sensor 120; the attitude sensor 120 may provide the raw image data to the ISP processor 140 based on the attitude sensor 120 interface type, or store the raw image data in an image memory 130.

The ISP processor 140 processes the raw image data pixel by pixel in a variety of formats. For example, each image pixel may have a bit depth of 8, 10, 12, or 14 bits, and the ISP processor 140 may perform one or more image processing operations on the raw image data and collect statistical information about the image data. The image processing operations may be performed at the same or different bit depth precision.

The ISP processor 140 may also receive image data from the image memory 130. For example, the attitude sensor 120 interface sends raw image data to the image memory 130, and the raw image data in the image memory 130 is then provided to the ISP processor 140 for processing. The image memory 130 may be part of a memory device, a storage device, or an independent dedicated memory within an electronic device, and may include a DMA (Direct Memory Access) feature.

Upon receiving raw image data from the image sensor 114 interface, from the attitude sensor 120 interface, or from the image memory 130, the ISP processor 140 may perform one or more image processing operations, such as temporal filtering. The processed image data may be sent to the image memory 130 for additional processing before being displayed. The ISP processor 140 receives the processed data from the image memory 130 and performs image data processing on it in the raw domain and in the RGB and YCbCr color spaces. The image data processed by the ISP processor 140 may be output to a display 160 for viewing by a user and/or for further processing by a graphics engine or GPU (Graphics Processing Unit). In addition, the output of the ISP processor 140 may also be sent to the image memory 130, and the display 160 may read image data from the image memory 130. In one embodiment, the image memory 130 may be configured to implement one or more frame buffers.

The statistics determined by the ISP processor 140 may be sent to the control logic 150. For example, the statistics may include image sensor 114 statistics such as gyroscope vibration frequency, auto exposure, auto white balance, auto focus, flicker detection, black level compensation, and lens 112 shading correction. The control logic 150 may include a processor and/or microcontroller that executes one or more routines (such as firmware); based on the received statistics, the routines may determine control parameters of the imaging device 110 and control parameters of the ISP processor 140. For example, the control parameters of the imaging device 110 may include attitude sensor 120 control parameters (such as gain, integration time for exposure control, and anti-shake parameters), camera flash control parameters, camera anti-shake displacement parameters, lens 112 control parameters (such as focal length for focusing or zooming), or a combination of these parameters. The ISP control parameters may include gain levels and color correction matrices for automatic white balance and color adjustment (for example, during RGB processing), as well as lens 112 shading correction parameters.

In one embodiment, an image to be recognized is acquired through the lens 112 and the image sensor 114 in the imaging device (camera) 110 and sent to the ISP processor 140. After receiving the image to be recognized, the ISP processor 140 detects whether a human face is present in it. When a face is present, a candidate region containing a portrait is acquired according to the face, the portrait including the face; portrait segmentation is then performed on the candidate region, that is, the candidate region is input into a portrait segmentation network to obtain a portrait region, and the portrait region is used as the subject region of the image to be recognized.

When no face is present in the image to be recognized, the ISP processor 140 performs subject recognition on the image, that is, the image to be recognized is input into a subject recognition network to obtain its subject region.

In one embodiment, the identified subject region may be sent to the control logic 150. After receiving the subject region, the control logic 150 controls the lens 112 in the imaging device 110 to move so as to focus on the subject corresponding to the subject region, thereby obtaining a clearer image of the subject.

FIG. 2 is a flowchart of an image processing method in one embodiment. As shown in FIG. 2, the image processing method includes steps 202 to 210.

Step 202: acquire an image to be recognized.

The image to be recognized refers to an image from which a subject region is to be identified; by recognizing it, the subject in the image can be obtained. The image to be recognized may be, for example, an RGB (Red, Green, Blue) image or a grayscale image. An RGB image can be captured by a color camera; a grayscale image can be captured by a black-and-white camera. The image to be recognized may be stored locally on the electronic device, stored on another device, stored on a network, or captured by the electronic device in real time, without limitation.

Specifically, the ISP processor or central processing unit of the electronic device may acquire the image to be recognized locally, from another device, or from a network, or obtain it by photographing a scene with a camera.

Step 204: detect whether a human face is present in the image to be recognized.

Generally, a human face includes features such as eyes, a nose, a mouth, ears, and eyebrows, and there are corresponding positional relationships among these features: for example, the left and right eyes are symmetric, the mouth is left-right symmetric, the nose is in the middle, the ears are on the two sides of the face, and the eyebrows are above the eyes. By using a face recognition platform to detect whether the image to be recognized contains features such as eyes, nose, mouth, ears, and eyebrows, and whether their positional relationships hold, it can be determined whether a face is present in the image. The face recognition platform stores a large amount of face information in advance; performing face detection through it can reduce extra overhead and save computing resources.
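The positional relationships above can be expressed as a toy geometric check. This is a hypothetical illustration only (the landmark names and the tolerance are assumptions, not part of the patent); a real platform would use a trained face detector rather than hand-written rules:

```python
def plausible_face(landmarks):
    """Toy check of the positional relations described above: eyes roughly
    level, nose horizontally between the eyes and below them, mouth below
    the nose. `landmarks` maps feature names to (x, y) points, with y
    growing downward. Names and tolerance are illustrative assumptions."""
    le, re = landmarks["left_eye"], landmarks["right_eye"]
    nose, mouth = landmarks["nose"], landmarks["mouth"]
    eye_span = abs(re[0] - le[0])
    eyes_level = abs(le[1] - re[1]) <= 0.2 * eye_span        # near-symmetric eyes
    nose_between = min(le[0], re[0]) < nose[0] < max(le[0], re[0])
    vertical_order = max(le[1], re[1]) < nose[1] < mouth[1]  # eyes, nose, mouth
    return eyes_level and nose_between and vertical_order
```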

Step 206: when a human face is present in the image to be recognized, acquire, according to the face, a candidate region containing a portrait; the portrait includes the face.

A portrait refers to a region containing a human figure; that is, a portrait may include the face, neck, arms, legs, and other body parts. The candidate region is the region that contains the portrait.

It can be understood that the face is generally at the top of the portrait, so a region containing the portrait can be selected downward from the position of the face as the candidate region. The candidate region can be obtained by box selection, that is, the candidate region is a rectangular region; it may also be a circular, square, or triangular region, without limitation.
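Since the face sits at the top of the portrait, a rectangular candidate region can be derived by widening the face box and extending it downward. A minimal sketch; the scale factors and the clamping behavior are illustrative assumptions, not values given in the patent:

```python
def candidate_from_face(face, img_w, img_h, w_scale=3.0, down_scale=6.0):
    """Expand a face box (x, y, w, h) sideways and downward to cover the
    likely portrait, clamped to the image bounds. Returns (left, top,
    right, bottom). Scale factors are illustrative assumptions."""
    x, y, w, h = face
    cx = x + w / 2.0
    half = w * w_scale / 2.0
    left = max(0, int(cx - half))
    right = min(img_w, int(cx + half))
    top = max(0, int(y))                       # portrait starts at the face
    bottom = min(img_h, int(y + h * down_scale))  # extend downward to the body
    return (left, top, right, bottom)
```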

In one embodiment, when there is one face in the image to be recognized, a candidate region containing the portrait is acquired according to that face. In another embodiment, when there are at least two faces in the image to be recognized, one face is selected from them, and the candidate region containing the portrait is acquired according to the selected face. To select one face from at least two, the areas of the faces can be compared to choose the face with the largest area; alternatively, the positions of the faces in the image can be compared to choose the face closest to the center of the image to be recognized.
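The two selection rules above (largest area, or closest to the image center) can be sketched as a small helper. The function name and the (x, y, w, h) tuple layout are assumptions for illustration:

```python
def pick_main_face(faces, img_w, img_h, by="area"):
    """Select one face from several: the largest by area, or the one whose
    centre is closest to the image centre. Each face is (x, y, w, h)."""
    if not faces:
        return None
    if by == "area":
        return max(faces, key=lambda f: f[2] * f[3])
    icx, icy = img_w / 2.0, img_h / 2.0

    def dist_sq(f):
        fcx, fcy = f[0] + f[2] / 2.0, f[1] + f[3] / 2.0
        return (fcx - icx) ** 2 + (fcy - icy) ** 2

    return min(faces, key=dist_sq)
```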

In one embodiment, when faces are present in the image to be recognized, the position of each face is obtained; when none of the face positions lies within a preset range of the image to be recognized, the step of inputting the image into the subject recognition network to obtain its subject region is performed.

It can be understood that when a user takes a landscape photo, some passers-by may be captured at the edges of the photo; the subject the user is photographing is the landscape, not the passers-by. Therefore, when faces are present in the image to be recognized but none of their positions lies within the preset range of the image, the step of inputting the image into the subject recognition network to obtain its subject region is performed; that is, the image is treated as an image without faces for subject recognition. The preset range may be the central area of the image to be recognized.

In one embodiment, when faces are present in the image to be recognized, the area of each face is obtained; when every face area is smaller than an area threshold, the step of inputting the image to be recognized into the subject recognition network to obtain its subject region is performed.

It can be understood that when a user photographs a landscape or another object, there may be some passers-by or other people in the background; the subject the user is photographing is the landscape or the other object, not the people in the background. Therefore, when faces are present in the image to be recognized but every face area is smaller than the area threshold, this indicates that the faces do not belong to the subject the user intends to photograph, and the step of inputting the image into the subject recognition network to obtain its subject region is performed; that is, the image is treated as an image without faces for subject recognition.
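The two fallback checks above (faces outside a central preset range, faces smaller than an area threshold) together decide which branch handles the image. A combined sketch; the central fraction and minimum area fraction are illustrative assumptions, not thresholds from the patent:

```python
def use_portrait_branch(faces, img_w, img_h,
                        central_frac=0.6, min_area_frac=0.02):
    """Return True if at least one face both lies in the central region and
    exceeds the area threshold; otherwise the image falls through to the
    generic subject recognition network. Thresholds are assumptions."""
    mx = img_w * (1 - central_frac) / 2.0  # horizontal margin of the range
    my = img_h * (1 - central_frac) / 2.0  # vertical margin of the range
    min_area = min_area_frac * img_w * img_h
    for x, y, w, h in faces:
        cx, cy = x + w / 2.0, y + h / 2.0
        in_center = mx <= cx <= img_w - mx and my <= cy <= img_h - my
        if in_center and w * h >= min_area:
            return True
    return False
```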

Step 208: input the candidate region into a portrait segmentation network to obtain a portrait region, and use the portrait region as the subject region of the image to be recognized.

A portrait segmentation network is a network for segmenting out a portrait region. The segmented portrait region may include all parts of the person or only some parts. For example, the portrait region may include the face, neck, hands, feet, upper torso, and all other parts; it may also include only the face, hands, and upper torso.

The candidate region contains the portrait; after the candidate region is input into the portrait segmentation network, the portrait region is obtained and used as the subject region of the image to be recognized.

Traditional subject recognition techniques usually input an image containing a face directly into a subject recognition network; the identified portrait region is then often inaccurate: the whole portrait may not be recognized, or the portrait together with its surrounding area may be taken as the subject region.

In the present application, by contrast, when a face is present in the image to be recognized, the portrait region can be obtained more accurately through the portrait segmentation network and used as the subject region, so that the subject region of the image to be recognized is identified more accurately.

Step 210: when no human face is present in the image to be recognized, input the image to be recognized into a subject recognition network to obtain its subject region.

Subject recognition (salient object detection) refers to automatically processing regions of interest in a scene while selectively ignoring regions of no interest. The region of interest is called the subject region. The subject may be any of various objects, such as a flower, cat, dog, cow, blue sky, white clouds, or the background.

When no face is present in the image to be recognized, the image is input into the subject recognition network, which identifies the subject region of the image.
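Putting steps 202 to 210 together, the two-branch control flow can be sketched with stand-in callables for the face detector and the two networks. All names here are hypothetical; only the dispatch logic mirrors the method:

```python
def detect_subject(image, detect_faces, crop_candidate, portrait_net, subject_net):
    """Two-branch subject recognition: the portrait branch runs when a face
    is detected, otherwise the generic subject recognition network runs.
    The four callables are stand-ins for the face detector, the
    candidate-region cropper, the portrait segmentation network, and the
    subject recognition network."""
    faces = detect_faces(image)
    if faces:
        candidate = crop_candidate(image, faces[0])
        return portrait_net(candidate)   # portrait region as subject region
    return subject_net(image)            # generic subject region
```

Here `faces[0]` stands in for whichever face-selection rule is in use (largest face, or the face closest to the center).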

In one embodiment, the subject recognition network performs subject recognition on the image to be recognized to obtain its subject region as follows. Step 1: generate a center weight map corresponding to the image to be recognized, in which the weight values gradually decrease from the center to the edges. Step 2: input the image to be recognized and the center weight map into a subject recognition model to obtain a subject region confidence map, where the subject recognition model is trained in advance on images to be recognized, center weight maps, and corresponding annotated subject masks of the same scenes. Step 3: determine the subject region in the image to be recognized according to the subject region confidence map.

Step 1: generate a center weight map corresponding to the image to be processed, in which the weight values represented by the map gradually decrease from the center to the edges.

The center weight map is a map that records a weight value for each pixel of the image to be recognized. The recorded weights decrease gradually from the center toward the four edges; that is, the weight is largest at the center and decreases toward the edges. The center weight map thus encodes a gradual fall-off in weight from the center pixels of the image to the edge pixels.

The ISP (image signal processor) or the CPU can generate a center weight map matched to the size of the image to be recognized. The weights it represents decrease gradually from the center toward the four edges. The center weight map may be generated with a Gaussian function, a first-order (linear) function, or a second-order (quadratic) function; the Gaussian function may be a two-dimensional Gaussian.
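As an illustration, a center weight map of this kind can be sketched with a two-dimensional Gaussian; the fall-off rate (`sigma_scale` below) is an assumed parameter, not specified by the text:

```python
import numpy as np

def center_weight_map(height, width, sigma_scale=0.5):
    """Build a map whose values fall off from the image center
    toward the four edges, using a 2-D Gaussian."""
    ys = np.arange(height) - (height - 1) / 2.0
    xs = np.arange(width) - (width - 1) / 2.0
    yy, xx = np.meshgrid(ys, xs, indexing="ij")
    sigma_y = sigma_scale * height
    sigma_x = sigma_scale * width
    w = np.exp(-((yy / sigma_y) ** 2 + (xx / sigma_x) ** 2) / 2.0)
    return w / w.max()  # normalize so the center weight is 1.0
```

A linear or quadratic radial fall-off could be substituted for the Gaussian without changing the rest of the pipeline.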

Step 2: Input the image to be recognized and the center weight map into the subject recognition model to obtain a subject region confidence map, where the subject recognition model is trained in advance on images to be recognized, depth maps, center weight maps, and the corresponding annotated subject mask maps of the same scene.

The subject recognition model is obtained by collecting a large amount of training data in advance and feeding it into a subject recognition model initialized with initial network weights. Each set of training data includes the image to be recognized, the center weight map, and the annotated subject mask map for the same scene. The image to be recognized and the center weight map serve as inputs to the model being trained, and the annotated subject mask map serves as the ground truth the model is expected to output. The subject mask map is an image filter template used to identify the subject in an image: it can mask out the other parts of the image and isolate the subject. The subject recognition model can be trained to recognize and detect various subjects, such as people, flowers, cats, dogs, and backgrounds.

Specifically, the ISP or CPU inputs the image to be recognized and the center weight map into the subject recognition model, and detection yields the subject region confidence map. The confidence map records, for each pixel, the probability that it belongs to each recognizable subject class; for example, a pixel may have probability 0.8 of belonging to a person, 0.1 of belonging to a flower, and 0.1 of belonging to the background.

Step 3: Determine the subject region in the image to be recognized according to the subject region confidence map.

Here, the subject may be any of various objects, such as a person, flower, cat, dog, cow, blue sky, white clouds, or the background. The subject region is the region of the desired subject, which can be selected as needed.

Specifically, the ISP or CPU may select the subject with the highest (or second-highest) confidence from the confidence map as the subject of the image to be recognized. If there is a single subject, that subject's region becomes the subject region; if there are multiple subjects, one or more of them can be selected as the subject region as required.
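A minimal sketch of this selection step, assuming the confidence map is a per-pixel array of class probabilities and a hypothetical class ordering (`CLASSES` below, which the text does not fix):

```python
import numpy as np

# Hypothetical class ordering for the confidence map's last axis.
CLASSES = ["person", "flower", "background"]

def subject_region(confidence, subject_classes=("person", "flower")):
    """confidence: (H, W, C) per-pixel class probabilities.
    Returns a boolean mask of pixels whose most probable class
    is one of the (non-background) subject classes."""
    labels = confidence.argmax(axis=-1)
    keep = [i for i, c in enumerate(CLASSES) if c in subject_classes]
    return np.isin(labels, keep)
```

With multiple subjects present, the mask could be split into connected components and one or more of them retained as the subject region.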

With the image processing method of this embodiment, after the image to be recognized is acquired and its corresponding center weight map is generated, both are input into the subject recognition model for detection, yielding a subject region confidence map from which the subject region of the image can be determined. The center weight map makes objects at the center of the image easier to detect, and the subject recognition model, trained on images to be recognized, center weight maps, subject mask maps, and the like, can identify the subject region in the image more accurately.

In the subject recognition method above, when a face is detected in the image to be recognized, a candidate region containing the portrait is obtained from the face, the candidate region is input into the portrait segmentation network to obtain the portrait region, and the portrait region is taken as the subject region of the image. When no face is detected, the image is input into the subject recognition network to obtain its subject region. The subject region is thus obtained through a two-branch network design: when no face is present in the image, the subject region comes from the subject recognition network; when a face is present, a candidate region containing the portrait is determined from the image, and the portrait segmentation network yields a more accurate portrait region to serve as the subject region.

In one embodiment, when a face exists in the image to be recognized, obtaining a candidate region containing the portrait from the face includes: when at least two faces are detected in the image, obtaining a face region for each face, where each face region contains one face; obtaining the area of each of the at least two face regions; comparing these areas and taking the face region with the largest area as the first face region; and obtaining a first candidate region containing a portrait from the first face region. Inputting the candidate region into the portrait segmentation network to obtain the portrait region and taking it as the subject region then includes: inputting the first candidate region into the portrait segmentation network to obtain a first portrait region, and taking the first portrait region as the subject region of the image to be recognized.

A face region is a region that contains a face. It may be obtained by drawing a bounding box, in which case the face region is rectangular; it may also be a circular, square, or triangular region, among others, without limitation. The first face region is the face region with the largest area. The first candidate region is the candidate region containing the portrait corresponding to the first face region.

It will be appreciated that the larger the area of a face region in the image to be processed, the closer that face is to the camera, and the object closest to the camera is the subject the user wants to photograph. Therefore, the areas of the at least two face regions can be obtained and compared, the face region with the largest area taken as the first face region, and a first candidate region containing the portrait obtained from the first face region. In general, the face in the face region with the largest area is the one closest to the camera.
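The largest-area selection can be sketched as follows, assuming face regions are axis-aligned `(x, y, w, h)` rectangles:

```python
def largest_face_region(face_boxes):
    """face_boxes: list of (x, y, w, h) rectangles, one per detected face.
    Returns the box with the largest area (the 'first face region')."""
    return max(face_boxes, key=lambda b: b[2] * b[3])
```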

In this embodiment, when at least two faces are detected in the image to be recognized, a face region is obtained for each face; the areas of the face regions are obtained and compared, the face region with the largest area is taken as the first face region, and a first candidate region containing the portrait is obtained from it. This improves the accuracy of the obtained first face region and first candidate region, and feeding the first candidate region into the portrait segmentation network yields a more accurate first portrait region, which is taken as the subject region, improving the accuracy of subject recognition. Determining a single face from among multiple faces, and ultimately obtaining the portrait corresponding to that face, avoids the problem of multiple subjects being recognized in a multi-face scene and keeps focusing on a single target.

In one embodiment, the face region with the second-largest area may instead be taken as the first face region; this is not limiting.

In one embodiment, when a face exists in the image to be recognized, obtaining a candidate region containing the portrait from the face includes: when at least two faces are detected in the image, obtaining a face region for each face, where each face region contains one face; obtaining the position information of each of the at least two face regions; taking the face region closest to the center of the image as the second face region according to that position information; and obtaining a second candidate region containing a portrait from the second face region. Inputting the candidate region into the portrait segmentation network to obtain the portrait region and taking it as the subject region then includes: inputting the second candidate region into the portrait segmentation network to obtain a second portrait region, and taking the second portrait region as the subject region of the image to be recognized.

A face region is a region that contains a face. It may be obtained by drawing a bounding box, in which case the face region is rectangular; it may also be a circular, square, or triangular region, among others, without limitation. The position information of a face region may be represented by the coordinates of its center, or by the coordinates of any point within it; this is not limiting.

The second face region is the face region closest to the center of the image to be recognized. The second candidate region is the candidate region containing the portrait corresponding to the second face region.

Specifically, the position information of the center of the image to be recognized can be obtained in advance, the position information of each of the at least two face regions compared with it, and the face region closest to the image center obtained from the comparison results.

For example, suppose the center of the image to be recognized is at (50, 50), the position of face region A is (40, 30), that of face region B is (20, 50), and that of face region C is (40, 40). The distance between each face region and the image center can then be computed as S = √((a₁ − a₂)² + (b₁ − b₂)²), where S is the distance between the face region and the center of the image to be recognized, a₁ and b₁ are the horizontal and vertical coordinates of one point, and a₂ and b₂ are those of the other point.

The distance between face region A and the image center is therefore √((50 − 40)² + (50 − 30)²) = √500 ≈ 22.4; the distance for face region B is √((50 − 20)² + (50 − 50)²) = √900 = 30; and the distance for face region C is √((50 − 40)² + (50 − 40)²) = √200 ≈ 14.1. Face region C is closest to the center of the image to be recognized and is taken as the second face region.
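The worked example above can be reproduced directly; the region names A, B, C and the coordinates come from the text:

```python
import math

IMAGE_CENTER = (50, 50)  # example values from the text
FACE_REGIONS = {"A": (40, 30), "B": (20, 50), "C": (40, 40)}

def distance(p, q):
    """Euclidean distance S between two points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def nearest_face(center, regions):
    """Return the name of the face region closest to the image center."""
    return min(regions, key=lambda name: distance(center, regions[name]))
```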

In this embodiment, when at least two faces are detected in the image to be recognized, a face region is obtained for each face along with its position information; the face region closest to the image center is taken as the second face region, and a second candidate region containing the portrait is obtained from it. This yields a more accurate second face region and second candidate region, so that feeding the second candidate region into the portrait segmentation network produces a more accurate second portrait region to serve as the subject region. Determining a single face from among multiple faces, and ultimately obtaining the portrait corresponding to that face, avoids the problem of multiple subjects being recognized in a multi-face scene and keeps focusing on a single target.

In one embodiment, when at least two faces are detected in the image to be recognized, a face region is obtained for each face, where each face region contains one face; the area of each of the at least two face regions is obtained; candidate face regions whose area exceeds an area threshold are selected from the at least two face regions; the position information of each candidate face region is obtained; the candidate face region closest to the center of the image is taken as the target face region according to that position information; and a candidate region containing the portrait is obtained from the target face region.

Specifically, the area of each face region can be obtained and the candidate face regions whose area exceeds the area threshold retained; that is, the smaller face regions are filtered out, avoiding processing them and improving the efficiency of subject recognition.

Obtaining the position information of each candidate face region and taking the candidate face region closest to the image center as the target face region yields a more accurate face region; obtaining the candidate region containing the portrait from the target face region yields a more accurate candidate region.
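Combining the area filter with the nearest-to-center selection, a sketch (assuming `(x, y, w, h)` face boxes and using each box's center as its position) might look like:

```python
import math

def target_face_region(face_boxes, image_center, area_threshold):
    """face_boxes: list of (x, y, w, h) rectangles.
    Keep boxes whose area exceeds area_threshold, then return the one
    whose center is closest to image_center (the target face region),
    or None if no box passes the filter."""
    candidates = [b for b in face_boxes if b[2] * b[3] > area_threshold]
    if not candidates:
        return None
    cx, cy = image_center

    def dist(b):
        bx, by = b[0] + b[2] / 2, b[1] + b[3] / 2
        return math.hypot(bx - cx, by - cy)

    return min(candidates, key=dist)
```

Filtering before the distance comparison means small background faces near the center cannot displace a large foreground face.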

In one embodiment, as shown in FIG. 3, the image to be recognized 302 is acquired, and face detection is performed on it to determine whether a face is present (step 304). When no face is present in the image to be recognized 302, it is input into the subject recognition network 306 to obtain the subject region 308 of the image to be recognized 302.

When a face is present in the image to be recognized 302, a face region 310 containing the face can be obtained. When at least two faces are present in the image to be recognized 302, a face region can be obtained for each face, and one face region 310 determined from among the at least two face regions.

In one embodiment, the areas of the at least two face regions may be obtained and the face region with the largest area taken as the first face region, i.e. face region 310. In another embodiment, the position information of the at least two face regions may be obtained instead, and the face region closest to the center of the image to be recognized taken as the second face region, i.e. face region 310.

A candidate region 312 containing the portrait is obtained from the face region 310; the candidate region 312 is input into the portrait segmentation network 314 to obtain a portrait region, which is taken as the subject region 308 of the image to be recognized 302.

In one embodiment, as shown in FIG. 4, obtaining a candidate region containing the portrait from the face includes:

Step 402: Obtain a face region containing the face.

Step 404: Obtain at least two feature points in the face region.

A feature point is a point where the image gray value changes sharply, or a point of high curvature on an image edge (i.e. the intersection of two edges). In the face region, feature points may be the eyes, nose, mouth corners, eyebrows, moles, and so on.

Step 406: Determine the angle of the face according to the at least two feature points.

The angle of the face is the angle at which the face is tilted. The face may be tilted to the left or to the right, or turned back toward the left or the right, and so on. For example, the angle of the face may be 20 degrees to the left, 10 degrees to the right, etc.

It will be appreciated that when the face is tilted, the relative positions of the feature points in the face region change accordingly. For example, when the face tilts to the left, the line connecting two feature points in the face region, namely the left-eye and right-eye feature points, also tilts to the left. Similarly, when the face tilts to the right, the line connecting the left and right mouth-corner feature points also tilts to the right. As a further example, when the face turns back toward the left, the line between the left-eye and right-eye feature points becomes shorter, and the nose-tip feature point lies toward the left of the face region.

Step 408: Obtain a candidate region containing the portrait according to the angle of the face, where the difference between the angle of the candidate region and the angle of the face is within a preset range.

Generally, when the body is tilted, the face is tilted correspondingly. Therefore, a candidate region containing the portrait can be obtained according to the angle of the face, with the difference between the angle of the candidate region and the angle of the face kept within a preset range. The preset range can be set according to the user's needs; for example, it may be between 5 and 10 degrees.

For example, when the face is tilted 20 degrees to the left, the candidate region containing the portrait obtained from the face angle may itself be tilted 20 degrees, or 25 degrees, and so on. The face angle thus makes it possible to obtain a more accurate candidate region.
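This constraint can be expressed as a simple check; the signed-angle convention (negative for a left tilt) is an assumption for illustration:

```python
def candidate_angle_ok(face_angle, candidate_angle, preset_range=10.0):
    """Check that the candidate region's tilt stays within preset_range
    degrees of the face's tilt. Angles are signed degrees under an
    assumed convention: negative = tilted left, positive = tilted
    right."""
    return abs(candidate_angle - face_angle) <= preset_range
```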

In this embodiment, a face region containing the face is obtained, at least two feature points in the face region are obtained, and the angle of the face is determined according to them; a more accurate candidate region can then be obtained according to the angle of the face.

In one embodiment, as shown in FIG. 5, determining the angle of the face according to the at least two feature points includes:

Step 502: Connect the at least two feature points to obtain the connecting lines.

After the at least two feature points in the face region are obtained, they are connected. For example, when the feature points are the left-eye and right-eye feature points, connecting them yields a connecting line; when the feature points are the left and right mouth-corner feature points, connecting them yields a connecting line; and when the feature points are the nose-tip, left-eye, and right-eye feature points, the left-eye and right-eye feature points are connected to obtain one connecting line, and the midpoint of that line is then connected to the nose-tip feature point to obtain another connecting line.

Step 504: Obtain the angle of each connecting line, and determine the angle of the face based on these angles.

The angle of a connecting line can be used to represent the angle of the face. For example, when the line connecting the left-eye and right-eye feature points is deflected 20 degrees to the left, the angle of the face may be taken as 20 degrees to the left; when the line connecting the left and right mouth-corner feature points is deflected 10 degrees to the right, the angle of the face may be taken as 10 degrees to the right.

When there is a single connecting line, its angle can be taken as the angle of the face. When there are at least two connecting lines, the angle of the face is determined based on their angles. For example, with two connecting lines, one joining the left-eye and right-eye feature points and the other joining the left and right mouth-corner feature points, the two angles can be averaged, and the average taken as the angle of the face.
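A sketch of the line-angle computation and the two-line average, using `atan2` in image coordinates (an assumed convention; the text does not fix one):

```python
import math

def line_angle(p, q):
    """Tilt of the line through points p and q, in degrees.
    Image coordinates are assumed (y grows downward), so a positive
    angle means the line slopes down toward the right."""
    return math.degrees(math.atan2(q[1] - p[1], q[0] - p[0]))

def face_angle(eye_left, eye_right, mouth_left, mouth_right):
    """Average the eye-line and mouth-line tilts to estimate the
    angle of the face."""
    return (line_angle(eye_left, eye_right)
            + line_angle(mouth_left, mouth_right)) / 2.0
```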

As another example, with two connecting lines, where the first joins the left-eye and right-eye feature points and the second joins the midpoint of the first line to the nose-tip feature point, the angle of the second connecting line can represent the angle of the face.

In this embodiment, the at least two feature points are connected to obtain the connecting lines, the angle of each connecting line is obtained, and a more accurate face angle can be determined based on these angles.

In one embodiment, connecting the at least two feature points to obtain the connecting lines includes: when the at least two feature points include the left-eye and right-eye feature points, connecting the left-eye feature point and the right-eye feature point to obtain a first connecting line. Obtaining the angle of each connecting line and determining the angle of the face from these angles then includes: obtaining the angle of the first connecting line, and determining the angle of the face based on it.

The first connecting line is the line between the left-eye feature point and the right-eye feature point.

It will be appreciated that when the face in the image to be processed is tilted, the line between the left eye and the right eye is tilted as well. Therefore, the left-eye and right-eye feature points can be connected to obtain the first connecting line, the angle of the first connecting line obtained, and the angle of the face determined based on that angle.

Further, the angle of the face is determined based on the angle of the first connecting line, where the difference between the angle of the face and the angle of the first connecting line is within a preset range.

The preset range can be set according to the user's needs. In one embodiment, the angle of the first connecting line can be taken directly as the angle of the face. In another embodiment, the angle of the face can be determined based on the angle of the first connecting line while differing from it, as long as the difference stays within the preset range. For example, the first connecting line may be at 10 degrees to the left while the determined face angle is 8 degrees to the left.

In this embodiment, the left-eye and right-eye feature points are connected to obtain the first connecting line, the angle of the first connecting line is obtained, and a more accurate face angle can be determined based on it.

In one embodiment, connecting the at least two feature points to obtain the connecting lines includes: when the at least two feature points include the left and right mouth-corner feature points, connecting the left mouth-corner feature point and the right mouth-corner feature point to obtain a second connecting line. Obtaining the angle of each connecting line and determining the angle of the face from these angles then includes: obtaining the angle of the second connecting line, and determining the angle of the face based on it.

The second connecting line is the line between the left mouth-corner feature point and the right mouth-corner feature point.

It will be appreciated that when the face in the image to be processed is tilted, the line between the left corner of the mouth and the right corner of the mouth is tilted as well. Therefore, the left and right mouth-corner feature points can be connected to obtain the second connecting line, the angle of the second connecting line obtained, and the angle of the face determined based on that angle.

Further, the angle of the face is determined based on the angle of the second connecting line, where the difference between the angle of the face and the angle of the second connecting line is within a preset range.

The preset range can be set according to the user's needs. In one embodiment, the angle of the second connecting line can be taken directly as the angle of the face. In another embodiment, the angle of the face can be determined based on the angle of the second connecting line while differing from it, as long as the difference stays within the preset range. For example, the second connecting line may be at 8 degrees to the left while the determined face angle is 9 degrees to the left.

In this embodiment, the second connecting line is obtained by connecting the left mouth corner feature point and the right mouth corner feature point, the angle of the second connecting line is acquired, and a more accurate angle of the face can be determined based on that angle.

In one embodiment, the above method further includes: when the at least two feature points include a left eye feature point, a right eye feature point, a left mouth corner feature point, and a right mouth corner feature point, connecting the left eye feature point and the right eye feature point to obtain a first connecting line, and connecting the left mouth corner feature point and the right mouth corner feature point to obtain a second connecting line; acquiring the angle of the first connecting line and the angle of the second connecting line respectively, and determining the angle of the face based on both angles.

In one embodiment, the angle of the first connecting line and the angle of the second connecting line may be averaged, and the average used as the angle of the face. In another embodiment, a first weighting factor for the angle of the first connecting line and a second weighting factor for the angle of the second connecting line may be obtained; a weighted average is then computed from the two angles and their weighting factors, and the weighted average is used as the angle of the face.
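The plain and weighted averaging described above can be sketched as follows; the specific weight values are hypothetical, since the patent does not fix them:

```python
def face_angle_weighted(eye_angle, mouth_angle, w_eye=0.6, w_mouth=0.4):
    """Weighted average of the first (eye) and second (mouth corner)
    connecting-line angles, in degrees. The default weights are
    illustrative only."""
    return (eye_angle * w_eye + mouth_angle * w_mouth) / (w_eye + w_mouth)

# Equal weights reduce to the plain average of the two line angles.
plain = face_angle_weighted(-8.0, -10.0, 0.5, 0.5)   # -9.0
weighted = face_angle_weighted(-8.0, -10.0)          # about -8.8
```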

In this embodiment, the first connecting line and the second connecting line are obtained from the left eye, right eye, left mouth corner, and right mouth corner feature points, and a more accurate angle of the face can be obtained from the angles of the two connecting lines.

In one embodiment, the above method further includes: when the at least two feature points further include a nose tip feature point, acquiring position information of the nose tip feature point in the face region. Determining the angle of the face based on the angle of the first connecting line then includes: determining the angle of the face based on the angle of the first connecting line and the position information of the nose tip feature point in the face region.

The position information of the nose tip feature point in the face region may be expressed as coordinates, or as the distance between the nose tip feature point and the center of the face region, without being limited thereto.

Generally, the nose tip is located in the central area of the face. By acquiring the position of the nose tip feature point in the face region, the front-back orientation of the face can be determined. For example, when the face turns to the rear left, the nose tip lies to the left within the face region; when the face turns to the rear right, the nose tip lies to the right. Based on the angle of the first connecting line and the position of the nose tip feature point in the face region, the angle of the face can be determined more accurately.
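This orientation cue can be sketched as a simple offset test on the nose tip's horizontal position within the detected face box; the decision threshold `ratio` is a hypothetical value, not specified by the patent:

```python
def face_yaw_direction(nose_tip, face_box, ratio=0.15):
    """Classify the face's turn direction from the nose tip position.

    nose_tip is (x, y); face_box is (x, y, w, h). A nose tip offset
    beyond `ratio` of the box width from the horizontal center is
    read as a turned face.
    """
    x, y, w, h = face_box
    cx = x + w / 2.0
    offset = (nose_tip[0] - cx) / w   # signed, in units of box width
    if offset < -ratio:
        return "rear-left"    # nose tip to the left -> face turned rear-left
    if offset > ratio:
        return "rear-right"   # nose tip to the right -> face turned rear-right
    return "frontal"

turn = face_yaw_direction((30, 60), (0, 0, 100, 100))  # "rear-left"
```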

In another embodiment, the above method further includes: when the at least two feature points further include a nose tip feature point, acquiring position information of the nose tip feature point in the face region. Determining the angle of the face based on the angle of the second connecting line then includes: determining the angle of the face based on the angle of the second connecting line and the position information of the nose tip feature point in the face region.

For example, when the face turns to the rear left, the nose tip lies to the left within the face region; when the face turns to the rear right, the nose tip lies to the right. Based on the angle of the second connecting line and the position of the nose tip feature point in the face region, the angle of the face can be determined more accurately.

In one embodiment, when the at least two feature points further include a nose tip feature point, position information of the nose tip feature point in the face region is acquired; the angle of the first connecting line and the angle of the second connecting line are acquired respectively, and the angle of the face is determined based on the angle of the first connecting line, the angle of the second connecting line, and the position information of the nose tip feature point in the face region.

For example, if the angle of the first connecting line is 8 degrees to the left and the angle of the second connecting line is 10 degrees to the left, averaging the two angles gives a face deflected 9 degrees to the left; the nose tip feature point lying to the left of the face region indicates that the face is turned to the rear left. Therefore, the face is deflected 9 degrees to the left and turned to the rear left.
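Combining the two cues from the preceding example (averaged in-plane tilt plus nose-tip turn direction) might look like the following sketch; the returned dictionary shape is an assumption for illustration:

```python
def face_pose(eye_line_angle, mouth_line_angle, yaw_direction):
    """Combine the averaged connecting-line angle (in-plane tilt)
    with the nose-tip turn cue into one pose description."""
    roll = (eye_line_angle + mouth_line_angle) / 2.0
    return {"roll_deg": roll, "yaw": yaw_direction}

# 8 degrees left and 10 degrees left average to 9 degrees left,
# and the nose tip on the left indicates a rear-left turn.
pose = face_pose(-8.0, -10.0, "rear-left")  # {'roll_deg': -9.0, 'yaw': 'rear-left'}
```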

In this embodiment, the angle of the face can be determined more accurately by combining the left eye, right eye, left mouth corner, right mouth corner, and nose tip feature points.

In one embodiment, the above method further includes: acquiring the area of the face region. Obtaining a candidate region containing a portrait according to the angle of the face then includes: obtaining a candidate region containing the portrait according to the angle of the face and the area of the face region, the area of the candidate region being positively correlated with the area of the face region.

It can be understood that the larger the area of the face region, the closer the face is to the camera, and thus the larger the portrait containing the face and the larger the candidate region containing the portrait. Therefore, the area of the candidate region is positively correlated with the area of the face region. A more accurate candidate region can be obtained from the angle of the face and the area of the face region.
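One simple way to realize this positive correlation is to expand the face box by a fixed factor, so a larger face yields a proportionally larger candidate region. The expansion factor and the downward bias below are hypothetical choices, not values from the patent:

```python
def candidate_box(face_box, scale=3.0):
    """Expand a face box (x, y, w, h) into a portrait candidate box.

    The candidate's width and height grow linearly with the face
    box's, so its area is positively correlated with the face area.
    """
    x, y, w, h = face_box
    cw, ch = w * scale, h * scale
    cx = x + w / 2.0
    # Center the candidate on the face horizontally; start it a bit
    # above the face so the box extends downward over the body.
    return (cx - cw / 2.0, y - 0.5 * h, cw, ch)

small = candidate_box((100, 100, 50, 60))   # (50.0, 70.0, 150.0, 180.0)
```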

In one embodiment, as shown in FIG. 6, the image to be recognized 602 is acquired and face detection is performed on it, detecting two faces. The face regions containing the faces are acquired respectively, yielding two face regions. The position information of the two face regions can then be acquired, and the face region closest to the center of the image to be recognized, namely face region 604, is selected.

Five feature points are acquired from face region 604: the left eye, right eye, nose tip, left mouth corner, and right mouth corner feature points. The left eye and right eye feature points are connected to obtain a first connecting line, and the left and right mouth corner feature points are connected to obtain a second connecting line; the angles of the first and second connecting lines are acquired respectively; and, based on these two angles, the nose tip feature point is connected to the first and second connecting lines to obtain a third connecting line.

For example, the average of the angle of the first connecting line and the angle of the second connecting line can be computed; the straight line corresponding to this average angle is perpendicular to the third connecting line passing through the nose tip feature point.

The angle of the third connecting line is acquired, and the angle of the face is determined based on it; the candidate region 606 containing the portrait is then obtained according to the angle of the face, where the difference between the angle of the candidate region and the angle of the face is within a preset range. Inputting candidate region 606 into the portrait segmentation network yields portrait region 608, which is used as the subject region of the image to be recognized.

It should be understood that, although the steps in the flowcharts of FIG. 2, FIG. 4, and FIG. 5 are displayed sequentially as indicated by the arrows, these steps are not necessarily executed in that order. Unless explicitly stated herein, there is no strict ordering constraint on their execution, and they may be performed in other orders. Moreover, at least some of the steps in FIG. 2, FIG. 4, and FIG. 5 may include multiple sub-steps or stages, which are not necessarily completed at the same moment but may be executed at different times; their execution order is not necessarily sequential either, and they may be performed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.

FIG. 7 is a structural block diagram of an image processing apparatus according to an embodiment. As shown in FIG. 7, an image processing apparatus 700 is provided, including: an image acquisition module 702, a face detection module 704, a candidate region acquisition module 706, a portrait segmentation module 708, and a subject recognition module 710, wherein:

The image acquisition module 702 is configured to acquire the image to be recognized.

The face detection module 704 is configured to detect whether a face exists in the image to be recognized.

The candidate region acquisition module 706 is configured to, when a face exists in the image to be recognized, obtain a candidate region containing a portrait according to the face; the portrait includes the face.

The portrait segmentation module 708 is configured to input the candidate region into the portrait segmentation network to obtain the portrait region, and use the portrait region as the subject region of the image to be recognized.

The subject recognition module 710 is configured to, when no face exists in the image to be recognized, input the image to be recognized into the subject recognition network to obtain the subject region of the image to be recognized.

With the above image processing apparatus, when a face is detected in the image to be recognized, a candidate region containing a portrait is obtained according to the face, the candidate region is input into the portrait segmentation network to obtain a portrait region, and the portrait region is used as the subject region of the image; when no face is detected, the image is input into the subject recognition network to obtain its subject region. By designing a two-path network for obtaining the subject region — the subject recognition network when no face is present and, when a face is present, determining a candidate region containing the portrait and passing it through the portrait segmentation network — a more accurate portrait region can be obtained as the subject region.
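The two-path dispatch described above can be sketched as a small routing function; the detector, candidate-region step, and both networks are represented here as hypothetical callables rather than real models:

```python
def subject_region(image, detect_faces, get_candidate, portrait_net, subject_net):
    """Two-path subject extraction.

    detect_faces(image) -> list of face boxes (may be empty)
    get_candidate(image, faces) -> portrait candidate region
    portrait_net(candidate) -> portrait region (used when faces exist)
    subject_net(image) -> generic subject region (used otherwise)
    """
    faces = detect_faces(image)
    if faces:
        candidate = get_candidate(image, faces)
        return portrait_net(candidate)   # portrait path
    return subject_net(image)            # generic subject path
```

With stub callables, an image containing a face is routed to the portrait segmentation path, and one without a face to the subject recognition path.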

In one embodiment, the candidate region acquisition module 706 is further configured to: when at least two faces are detected in the image to be recognized, acquire the face regions containing the faces respectively, one face region containing one face; acquire the areas of the at least two face regions respectively; compare the areas of the at least two face regions and take the face region with the largest area as the first face region; and obtain a first candidate region containing a portrait according to the first face region. Inputting the candidate region into the portrait segmentation network to obtain the portrait region and using the portrait region as the subject region of the image to be recognized then includes: inputting the first candidate region into the portrait segmentation network to obtain a first portrait region, and using the first portrait region as the subject region of the image to be recognized.

In one embodiment, the candidate region acquisition module 706 is further configured to: when at least two faces are detected in the image to be recognized, acquire the face regions containing the faces respectively, one face region containing one face; acquire the position information of the at least two face regions respectively; take, according to the position information, the face region closest to the center of the image to be recognized as the second face region; and obtain a second candidate region containing a portrait according to the second face region. Inputting the candidate region into the portrait segmentation network to obtain the portrait region and using the portrait region as the subject region of the image to be recognized then includes: inputting the second candidate region into the portrait segmentation network to obtain a second portrait region, and using the second portrait region as the subject region of the image to be recognized.
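The two selection strategies above (largest face area, and face region closest to the image center) can be sketched directly on `(x, y, w, h)` boxes:

```python
def largest_face(face_boxes):
    """Face region with the largest area (first selection strategy)."""
    return max(face_boxes, key=lambda b: b[2] * b[3])

def closest_to_center(face_boxes, image_size):
    """Face region whose center is nearest the image center
    (second selection strategy). image_size is (width, height)."""
    cx, cy = image_size[0] / 2.0, image_size[1] / 2.0

    def center_dist(box):
        x, y, w, h = box
        return ((x + w / 2.0 - cx) ** 2 + (y + h / 2.0 - cy) ** 2) ** 0.5

    return min(face_boxes, key=center_dist)
```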

In one embodiment, the candidate region acquisition module 706 is further configured to: acquire the face region containing the face; acquire at least two feature points in the face region; determine the angle of the face according to the at least two feature points; and obtain a candidate region containing the portrait according to the angle of the face, the difference between the angle of the candidate region and the angle of the face being within a preset range.

In one embodiment, the candidate region acquisition module 706 is further configured to connect the at least two feature points to obtain the connecting lines, acquire the angle of each connecting line, and determine the angle of the face based on the angles of the connecting lines.

In one embodiment, the candidate region acquisition module 706 is further configured to, when the at least two feature points include a left eye feature point and a right eye feature point, connect the left eye feature point and the right eye feature point to obtain a first connecting line. Acquiring the angle of each connecting line and determining the angle of the face based on the angle of each connecting line includes: acquiring the angle of the first connecting line, and determining the angle of the face based on the angle of the first connecting line.

In one embodiment, the candidate region acquisition module 706 is further configured to, when the at least two feature points include a left mouth corner feature point and a right mouth corner feature point, connect the left mouth corner feature point and the right mouth corner feature point to obtain a second connecting line. Acquiring the angle of each connecting line and determining the angle of the face based on the angle of each connecting line includes: acquiring the angle of the second connecting line, and determining the angle of the face based on the angle of the second connecting line.

In one embodiment, the above image processing apparatus 700 further includes a position information acquisition module, configured to acquire position information of the nose tip feature point in the face region when the at least two feature points further include a nose tip feature point. Determining the angle of the face based on the angle of the first connecting line then includes: determining the angle of the face based on the angle of the first connecting line and the position information of the nose tip feature point in the face region. Alternatively, determining the angle of the face based on the angle of the second connecting line includes: determining the angle of the face based on the angle of the second connecting line and the position information of the nose tip feature point in the face region.

In one embodiment, the above image processing apparatus 700 further includes an area acquisition module configured to acquire the area of the face region. Obtaining a candidate region containing a portrait according to the angle of the face then includes: obtaining a candidate region containing the portrait according to the angle of the face and the area of the face region, the area of the candidate region being positively correlated with the area of the face region.

The division of the modules in the above image processing apparatus is for illustration only. In other embodiments, the image processing apparatus may be divided into different modules as required to complete all or part of its functions.

FIG. 8 is a schematic diagram of the internal structure of an electronic device in one embodiment. As shown in FIG. 8, the electronic device includes a processor and a memory connected through a system bus. The processor provides computing and control capabilities to support the operation of the entire electronic device. The memory may include a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program, and the computer program can be executed by the processor to implement the image processing method provided in the foregoing embodiments. The internal memory provides a cached execution environment for the operating system and the computer program in the non-volatile storage medium. The electronic device may be a mobile phone, a tablet computer, a personal digital assistant, a wearable device, or the like.

Each module of the image processing apparatus provided in the embodiments of the present application may be implemented in the form of a computer program. The computer program can run on a terminal or a server, and the program modules constituted by it can be stored in the memory of the terminal or server. When the computer program is executed by a processor, the steps of the methods described in the embodiments of the present application are implemented.

The embodiments of the present application also provide a computer-readable storage medium: one or more non-volatile computer-readable storage media containing computer-executable instructions that, when executed by one or more processors, cause the processors to perform the steps of the image processing method.

A computer program product containing instructions is also provided which, when run on a computer, causes the computer to perform the image processing method.

Any reference to memory, storage, a database, or another medium used in the embodiments of the present application may include non-volatile and/or volatile memory. Suitable non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory may include random access memory (RAM), which acts as an external cache. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).

The above embodiments express only several implementations of the present application, and their descriptions are specific and detailed, but they should not therefore be construed as limiting the scope of the patent. It should be noted that those of ordinary skill in the art may make several variations and improvements without departing from the concept of the present application, all of which fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (11)

1. An image processing method, comprising:
acquiring an image to be identified;
detecting whether a human face exists in the image to be recognized;
when a face exists in the image to be recognized, acquiring a candidate region containing a portrait according to the face; the portrait includes the face;
inputting the candidate area into a portrait segmentation network to obtain a portrait area, and taking the portrait area as a main area of the image to be identified;
and when the face does not exist in the image to be recognized, inputting the image to be recognized into a main body recognition network to obtain a main body area of the image to be recognized.
2. The method according to claim 1, wherein when a face exists in the image to be recognized, acquiring a candidate region containing a portrait according to the face comprises:
when detecting that at least two faces exist in the image to be recognized, respectively acquiring face regions containing the faces;
respectively acquiring the areas of at least two face regions;
comparing the areas of at least two face regions to obtain a face region with the largest area as a first face region;
acquiring a first candidate region containing a portrait according to the first face region;
inputting the candidate region into a portrait segmentation network to obtain a portrait region, and using the portrait region as a main region of the image to be recognized, including:
and inputting the first candidate region into a portrait segmentation network to obtain a first portrait region, and taking the first portrait region as a main region of the image to be identified.
3. The method according to claim 1, wherein when a face exists in the image to be recognized, acquiring a candidate region containing a portrait according to the face comprises:
when detecting that at least two faces exist in the image to be recognized, respectively acquiring face regions containing the faces; a face region contains a face;
respectively acquiring the position information of at least two face areas;
acquiring a face area closest to the center of the image to be recognized according to the position information of at least two face areas as a second face area;
acquiring a second candidate region containing a portrait according to the second face region;
inputting the candidate region into a portrait segmentation network to obtain a portrait region, and using the portrait region as a main region of the image to be recognized, including:
and inputting the second candidate area into a portrait segmentation network to obtain a second portrait area, and taking the second portrait area as a main area of the image to be identified.
4. The method of claim 1, wherein acquiring a candidate region containing a portrait according to the face comprises:
acquiring a face area containing the face;
acquiring at least two feature points in the face region;
determining the angle of the face according to the at least two feature points;
acquiring a candidate region containing a portrait according to the angle of the face; and the difference value between the angle of the candidate region and the angle of the human face is within a preset range.
5. The method of claim 4, wherein determining the angle of the face from the at least two feature points comprises:
connecting the at least two feature points to obtain each connecting line;
and acquiring the angle of each connecting line, and determining the angle of the face based on the angle of each connecting line.
6. The method of claim 5, wherein said connecting the at least two feature points to obtain each connecting line comprises:
when the at least two feature points comprise a left-eye feature point and a right-eye feature point, connecting the left-eye feature point and the right-eye feature point to obtain a first connecting line; or
when the at least two feature points comprise a left mouth corner feature point and a right mouth corner feature point, connecting the left mouth corner feature point and the right mouth corner feature point to obtain a second connecting line;
the acquiring the angle of each connecting line and determining the angle of the face based on the angle of each connecting line comprises:
acquiring the angle of the first connecting line, and determining the angle of the face based on the angle of the first connecting line; or
acquiring the angle of the second connecting line, and determining the angle of the face based on the angle of the second connecting line.
7. The method of claim 6, further comprising:
when the at least two feature points further comprise nose tip feature points, acquiring position information of the nose tip feature points in the face region;
the determining the angle of the face based on the angle of the first connecting line includes: determining the angle of the face based on the angle of the first connecting line and the position information of the nose tip feature point in the face region; or
the determining the angle of the face based on the angle of the second connecting line includes: determining the angle of the face based on the angle of the second connecting line and the position information of the nose tip feature point in the face region.
8. The method of claim 4, further comprising:
acquiring the area of the face region;
the obtaining of the candidate region containing the portrait according to the angle of the face includes:
acquiring a candidate region containing a portrait according to the angle of the face and the area of the face region; the area of the candidate region is positively correlated with the area of the face region.
9. An image processing apparatus characterized by comprising:
the image acquisition module is used for acquiring an image to be identified;
the face detection module is used for detecting whether a face exists in the image to be identified;
the candidate region acquisition module is used for acquiring a candidate region containing a portrait according to the face when the face exists in the image to be identified; the portrait includes the face;
the portrait segmentation module is used for inputting the candidate region into a portrait segmentation network to obtain a portrait region, the portrait region serving as a subject region of the image to be identified;
and the subject recognition module is used for inputting the image to be identified into a subject recognition network to obtain a subject region of the image to be identified when no face exists in the image to be identified.
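The module structure of claim 9 amounts to a simple dispatch: segment a portrait around a detected face, otherwise fall back to the generic subject recognition network. A sketch with stand-in callables; the actual detector and networks are unspecified here:

```python
def subject_region(image, detect_face, segment_portrait, recognize_subject):
    """Dispatch per the apparatus claim: detect_face, segment_portrait,
    and recognize_subject are placeholder callables standing in for the
    face detection module, the portrait segmentation network, and the
    subject recognition network respectively.
    """
    face_box = detect_face(image)
    if face_box is not None:
        # Face present: segment the portrait in a region around the face.
        return segment_portrait(image, face_box)
    # No face: run the generic subject recognition network instead.
    return recognize_subject(image)

# Toy demo with stub callables
result = subject_region(
    "img",
    detect_face=lambda img: (10, 10, 40, 40),
    segment_portrait=lambda img, box: ("portrait", box),
    recognize_subject=lambda img: ("subject", None),
)
print(result)  # ('portrait', (10, 10, 40, 40))
```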
10. An electronic device comprising a memory and a processor, the memory having stored therein a computer program that, when executed by the processor, causes the processor to perform the steps of the image processing method according to any one of claims 1 to 8.
11. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 8.
CN201910905113.9A 2019-09-24 2019-09-24 Image processing method and apparatus, electronic device, computer-readable storage medium Pending CN110610171A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910905113.9A CN110610171A (en) 2019-09-24 2019-09-24 Image processing method and apparatus, electronic device, computer-readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910905113.9A CN110610171A (en) 2019-09-24 2019-09-24 Image processing method and apparatus, electronic device, computer-readable storage medium

Publications (1)

Publication Number Publication Date
CN110610171A true CN110610171A (en) 2019-12-24

Family

ID=68892144

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910905113.9A Pending CN110610171A (en) 2019-09-24 2019-09-24 Image processing method and apparatus, electronic device, computer-readable storage medium

Country Status (1)

Country Link
CN (1) CN110610171A (en)

Citations (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101561710A (en) * 2009-05-19 2009-10-21 重庆大学 Man-machine interaction method based on estimation of human face posture
CN107358207A (en) * 2017-07-14 2017-11-17 重庆大学 Method for correcting a facial image
CN107454335A (en) * 2017-08-31 2017-12-08 广东欧珀移动通信有限公司 Image processing method, device, computer readable storage medium and mobile terminal
CN107592473A (en) * 2017-10-31 2018-01-16 广东欧珀移动通信有限公司 Exposure parameter adjustment method, device, electronic device and readable storage medium
CN107820017A (en) * 2017-11-30 2018-03-20 广东欧珀移动通信有限公司 Image capturing method, device, computer-readable recording medium and electronic equipment
CN108009999A (en) * 2017-11-30 2018-05-08 广东欧珀移动通信有限公司 Image processing method, device, computer-readable storage medium, and electronic device
CN108537155A (en) * 2018-03-29 2018-09-14 广东欧珀移动通信有限公司 Image processing method, device, electronic equipment and computer readable storage medium
CN108717530A (en) * 2018-05-21 2018-10-30 Oppo广东移动通信有限公司 Image processing method, device, computer readable storage medium and electronic equipment
CN108875479A (en) * 2017-08-15 2018-11-23 北京旷视科技有限公司 The acquisition methods and device of facial image
CN108921148A (en) * 2018-09-07 2018-11-30 北京相貌空间科技有限公司 Determine the method and device of positive face tilt angle
CN109191398A (en) * 2018-08-29 2019-01-11 Oppo广东移动通信有限公司 Image processing method, image processing device, computer-readable storage medium and electronic equipment
CN109325905A (en) * 2018-08-29 2019-02-12 Oppo广东移动通信有限公司 Image processing method, apparatus, computer-readable storage medium and electronic device
CN109360254A (en) * 2018-10-15 2019-02-19 Oppo广东移动通信有限公司 Image processing method and apparatus, electronic device, computer-readable storage medium
CN109389018A (en) * 2017-08-14 2019-02-26 杭州海康威视数字技术股份有限公司 Facial angle recognition method, apparatus and device
CN109461186A (en) * 2018-10-15 2019-03-12 Oppo广东移动通信有限公司 Image processing method, device, computer readable storage medium and electronic equipment
CN109582811A (en) * 2018-12-17 2019-04-05 Oppo广东移动通信有限公司 Image processing method, device, electronic equipment and computer readable storage medium
CN110047053A (en) * 2019-04-26 2019-07-23 腾讯科技(深圳)有限公司 Portrait Picture Generation Method, device and computer equipment
CN110248096A (en) * 2019-06-28 2019-09-17 Oppo广东移动通信有限公司 Focusing method and apparatus, electronic device, computer-readable storage medium

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113723168A (en) * 2021-04-09 2021-11-30 腾讯科技(深圳)有限公司 Artificial intelligence-based subject identification method, related device and storage medium
CN113221754A (en) * 2021-05-14 2021-08-06 深圳前海百递网络有限公司 Express waybill image detection method and device, computer equipment and storage medium
CN116719970A (en) * 2022-03-04 2023-09-08 腾讯科技(深圳)有限公司 Video cover determining method, device, equipment and storage medium
CN116719970B (en) * 2022-03-04 2025-06-13 腾讯科技(深圳)有限公司 Video cover determination method, device, equipment and storage medium

Similar Documents

Publication Publication Date Title
CN113766125B (en) Focusing method and apparatus, electronic device, computer-readable storage medium
EP3598736B1 (en) Method and apparatus for processing image
CN110660090B (en) Subject detection method and apparatus, electronic device, computer-readable storage medium
CN110149482A (en) Focusing method, focusing device, electronic equipment and computer readable storage medium
WO2020259474A1 (en) Focus tracking method and apparatus, terminal device, and computer-readable storage medium
CN110248096A (en) Focusing method and apparatus, electronic device, computer-readable storage medium
CN108537155A (en) Image processing method, device, electronic equipment and computer readable storage medium
CN107800965B (en) Image processing method, image processing device, computer-readable storage medium and computer equipment
CN110661977B (en) Subject detection method and apparatus, electronic device, computer-readable storage medium
CN111932587A (en) Image processing method and device, electronic equipment and computer readable storage medium
US12039767B2 (en) Subject detection method and apparatus, electronic device, and computer-readable storage medium
CN110650288B (en) Focus control method and apparatus, electronic device, computer-readable storage medium
CN107862658B (en) Image processing method, apparatus, computer-readable storage medium and electronic device
CN108810413A (en) Image processing method and device, electronic equipment and computer readable storage medium
CN107742274A (en) Image processing method, device, computer-readable storage medium, and electronic device
WO2019105261A1 (en) Background blurring method and apparatus, and device
CN108846807A (en) Light efficiency processing method, device, terminal and computer readable storage medium
CN112866552B (en) Focusing method and device, electronic device, computer-readable storage medium
CN110248101A (en) Focusing method and device, electronic equipment and computer readable storage medium
CN113313626A (en) Image processing method, image processing device, electronic equipment and storage medium
CN107578372B (en) Image processing method, apparatus, computer-readable storage medium and electronic device
CN110276831A (en) Method and device for constructing three-dimensional model, equipment and computer-readable storage medium
CN110688926B (en) Subject detection method and device, electronic device, computer-readable storage medium
CN110490196A (en) Subject detection method and apparatus, electronic equipment, computer readable storage medium
CN110881103B (en) Focusing control method and device, electronic equipment and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20191224