TWI787113B - Methods, apparatuses, processors, electronic equipment and storage media for image processing - Google Patents


Info

Publication number
TWI787113B
Authority
TW
Taiwan
Prior art keywords
area
threshold
image
pixel
processed
Application number
TW111114745A
Other languages
Chinese (zh)
Other versions
TW202248954A (en)
Inventor
張金豪
高哲峰
李若岱
莊南慶
馬堃
Original Assignee
大陸商深圳市商湯科技有限公司
Application filed by 大陸商深圳市商湯科技有限公司
Application granted
Publication of TWI787113B (granted patent)
Publication of TW202248954A (application)

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/0002 - Inspection of images, e.g. flaw detection
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G06N 3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/08 - Learning methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/10 - Segmentation; Edge detection
    • G06T 7/187 - Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/90 - Determination of colour characteristics
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 - Subject of image; Context of image processing
    • G06T 2207/30196 - Human being; Person
    • G06T 2207/30201 - Face

Abstract

The application provides methods, apparatuses, processors, electronic equipment and storage media for image processing. In an example of the method, an image to be processed is obtained together with a first threshold, a second threshold and a third threshold, where the first threshold differs from both the second and third thresholds and the second threshold is less than or equal to the third threshold. A first number of first pixels in a to-be-measured area of the image is then determined, a first pixel being a pixel whose color value is greater than or equal to the second threshold and less than or equal to the third threshold. A skin occlusion detection result for the image is obtained from the first threshold and a first ratio of the first number to the number of pixels in the to-be-measured area.

Description

Image processing method, apparatus, processor, electronic device and storage medium

The present application relates to the field of image processing technology, and in particular to an image processing method, apparatus, processor, electronic device and storage medium.

To improve detection safety, non-contact skin detection is being applied in more and more scenarios. The accuracy of such non-contact detection is largely affected by the occlusion state of the skin: for example, if a large area of a skin region is covered, the result of non-contact detection on that region may be inaccurate. Detecting the occlusion state of the skin is therefore of great significance.

The present application provides an image processing method and apparatus, a processor, an electronic device and a storage medium for determining whether skin is in an occluded state.

The present application provides an image processing method, including: acquiring an image to be processed, a first threshold, a second threshold and a third threshold, where the first threshold is different from both the second threshold and the third threshold, and the second threshold is less than or equal to the third threshold; determining a first number of first pixels in a region to be detected of the image to be processed, where a first pixel is a pixel whose color value is greater than or equal to the second threshold and less than or equal to the third threshold; and obtaining a skin occlusion detection result for the image to be processed based on the first threshold and a first ratio of the first number to the number of pixels in the region to be detected.
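The counting-and-ratio step of this method can be sketched as follows. This is a minimal illustration, not the patented implementation: it assumes the region's color values are a single channel held in a NumPy array, and all names are hypothetical.

```python
import numpy as np

def skin_occlusion_detection(region, t1, t2, t3):
    """Sketch of the claimed detection step.

    region: 2-D array of color values of the region to be detected.
    t2 <= t3 bound the skin color range; t1 is the ratio threshold.
    """
    # First pixels: color value within [t2, t3].
    first_number = int(np.count_nonzero((region >= t2) & (region <= t3)))
    # First ratio: first pixels over all pixels in the region.
    first_ratio = first_number / region.size
    # A ratio above t1 means enough skin-colored pixels are visible.
    return "unoccluded" if first_ratio > t1 else "occluded"

# 12 of 16 values fall inside [90, 150], so the ratio 0.75 exceeds 0.5.
demo = np.array([[120, 130, 140, 20],
                 [125, 135, 145, 25],
                 [121, 131, 141, 21],
                 [122, 132, 142, 22]])
print(skin_occlusion_detection(demo, 0.5, 90, 150))  # -> unoccluded
```

The direction of the comparison (ratio not exceeding the first threshold means occluded) follows the embodiment described later in this document.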

In combination with any embodiment of the present application, determining the first number of first pixels in the skin region of the image to be processed includes: performing face detection on the image to be processed to obtain a first face frame; determining the region to be detected from the image to be processed according to the first face frame; and determining the first number of first pixels in the region to be detected.

In combination with any embodiment of the present application, the first face frame includes an upper frame line and a lower frame line, both parallel to the horizontal axis of the pixel coordinate system of the image to be processed, with the ordinate of the upper frame line smaller than that of the lower frame line. Determining the region to be detected from the image to be processed according to the first face frame includes: performing face key-point detection on the image to be processed to obtain at least one face key point, the at least one face key point including a left-eyebrow key point and a right-eyebrow key point; keeping the ordinate of the upper frame line unchanged, moving the lower frame line in the negative direction of the vertical axis of the pixel coordinate system of the image to be processed until the line on which it lies coincides with a first straight line, thereby obtaining a second face frame, where the first straight line is the straight line through the left-eyebrow key point and the right-eyebrow key point; and obtaining the region to be detected from the area enclosed by the second face frame.
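In pixel coordinates the vertical axis grows downward, so moving the lower frame line in the "negative direction" raises it toward the eyebrows. A minimal sketch of this second-face-frame step, assuming axis-aligned boxes; taking the eyebrow line's height as the mean of the two key points' ordinates is an assumption (it is exact when both key points lie at the same height), and all names are hypothetical:

```python
def second_face_frame(first_frame, left_brow, right_brow):
    """first_frame: (x_left, y_top, x_right, y_bottom), y growing downward.
    left_brow/right_brow: (x, y) eyebrow key points.
    Moves the lower frame line up onto the line through the two eyebrow
    key points while keeping the upper frame line fixed."""
    x_left, y_top, x_right, _ = first_frame
    # Height of the line through both eyebrow key points (approximation
    # when the key points are not at equal height).
    brow_y = (left_brow[1] + right_brow[1]) / 2
    return (x_left, y_top, x_right, brow_y)

# Bottom edge moves from y=160 up to the eyebrow line at y=60.
print(second_face_frame((10, 20, 110, 160), (40, 60), (80, 60)))
```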

In combination with any embodiment of the present application, obtaining the region to be detected from the area enclosed by the second face frame includes: keeping the ordinate of the lower frame line of the second face frame unchanged, moving the upper frame line of the second face frame along the vertical axis of the pixel coordinate system of the image to be processed until the distance between the upper and lower frame lines of the second face frame equals a preset distance, thereby obtaining a third face frame; and obtaining the region to be detected from the area enclosed by the third face frame.

In combination with any embodiment of the present application, the at least one face key point further includes a left-mouth-corner key point and a right-mouth-corner key point, and the first face frame further includes a left frame line and a right frame line, both parallel to the vertical axis of the pixel coordinate system of the image to be processed, with the abscissa of the left frame line smaller than that of the right frame line. Obtaining the region to be detected from the area enclosed by the third face frame includes: keeping the abscissa of the left frame line of the third face frame unchanged, moving the right frame line of the third face frame along the horizontal axis of the pixel coordinate system of the image to be processed until the distance between the right and left frame lines of the third face frame equals a reference distance, thereby obtaining a fourth face frame, where the reference distance is the distance between the two intersection points of a second straight line with the face contour contained in the third face frame, the second straight line lies between the first straight line and a third straight line and is parallel to the first straight line or the third straight line, and the third straight line is the straight line through the left-mouth-corner key point and the right-mouth-corner key point; and taking the area enclosed by the fourth face frame as the region to be detected.

In combination with any embodiment of the present application, acquiring the second threshold and the third threshold includes: determining a skin pixel region from the pixel region contained in the first face frame; acquiring the color value of a second pixel in the skin pixel region; taking the difference between the color value of the second pixel and a first value as the second threshold; and taking the sum of the color value of the second pixel and a second value as the third threshold, where neither the first value nor the second value exceeds the maximum of the color values of the image to be processed.
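The threshold construction in this embodiment is a simple offset band around a reference skin color. A sketch, assuming 8-bit color values; the function and parameter names are hypothetical:

```python
def skin_color_thresholds(second_pixel_color, first_value, second_value, max_color=255):
    """Second threshold = reference color minus the first value;
    third threshold = reference color plus the second value.
    Per the embodiment, neither offset may exceed the image's
    maximum color value."""
    if first_value > max_color or second_value > max_color:
        raise ValueError("offsets must not exceed the maximum color value")
    t2 = second_pixel_color - first_value
    t3 = second_pixel_color + second_value
    return t2, t3

# Reference skin color 120 with offsets 30/40 gives the band [90, 160].
print(skin_color_thresholds(120, 30, 40))  # -> (90, 160)
```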

In combination with any embodiment of the present application, determining the skin pixel region from the pixel region contained in the first face frame includes: when it is detected that the face region in the image to be processed is not wearing a mask, taking the pixel region of the face region excluding the forehead, mouth, eyebrow and eye regions as the skin pixel region; and when it is detected that the face region in the image to be processed is wearing a mask, taking the pixel region between the first straight line and a fourth straight line as the skin pixel region, where the fourth straight line is the straight line through a left-eye lower-eyelid key point and a right-eye lower-eyelid key point, both of which belong to the at least one face key point.

In combination with any embodiment of the present application, acquiring the color value of the second pixel in the skin pixel region includes: when the at least one face key point includes at least one first key point in the inner region of the left eyebrow and at least one second key point in the inner region of the right eyebrow, determining a rectangular region according to the at least one first key point and the at least one second key point; performing grayscale processing on the rectangular region to obtain a grayscale image of the rectangular region; and taking the color value at the intersection of a first row and a first column as the color value of the second pixel, where the first row is the row of the grayscale image with the largest sum of gray values and the first column is the column of the grayscale image with the largest sum of gray values.
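The row-and-column selection in this embodiment can be sketched directly. The grayscale conversion itself is omitted here (the document does not specify the conversion formula), so the sketch assumes the rectangle has already been grayscaled; names are hypothetical:

```python
import numpy as np

def second_pixel_color(rect_gray):
    """rect_gray: 2-D grayscale image of the rectangle determined from
    the inner-eyebrow key points. Returns the value at the intersection
    of the row and the column with the largest sums of gray values."""
    first_row = int(np.argmax(rect_gray.sum(axis=1)))  # row with largest gray sum
    first_col = int(np.argmax(rect_gray.sum(axis=0)))  # column with largest gray sum
    return rect_gray[first_row, first_col]

# Row sums are (6, 19, 6) and column sums are (6, 13, 12), so the
# intersection is the center pixel.
demo = np.array([[1, 2, 3],
                 [4, 9, 6],
                 [1, 2, 3]])
print(second_pixel_color(demo))  # -> 9
```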

In combination with any embodiment of the present application, obtaining the skin occlusion detection result of the image to be processed based on the first threshold and the first ratio of the first number to the number of pixels in the region to be detected includes: when the first ratio does not exceed the first threshold, determining that the skin occlusion detection result is that the skin region corresponding to the region to be detected is in an occluded state; and when the first ratio exceeds the first threshold, determining that the skin occlusion detection result is that the skin region corresponding to the region to be detected is in an unoccluded state.

In combination with any embodiment of the present application, the skin region belongs to a person to be detected, and the method further includes: acquiring a temperature heat map of the image to be processed; and, when the skin occlusion detection result is that the skin region is in an unoccluded state, reading the temperature of the skin region from the temperature heat map as the body temperature of the person to be detected.
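As a sketch of this last embodiment: once the region is judged unoccluded, the temperature is read from a heat map aligned with the image. The aggregation over the region (a mean here) is an assumption, since the document only says the temperature is read; names are hypothetical:

```python
import numpy as np

def body_temperature(temp_heat_map, skin_mask, occlusion_result):
    """temp_heat_map: per-pixel temperatures aligned with the image.
    skin_mask: boolean mask of the skin region.
    Returns None when the region is occluded (a reading there would be
    unreliable), otherwise the region's temperature."""
    if occlusion_result != "unoccluded":
        return None
    return float(temp_heat_map[skin_mask].mean())

heat = np.array([[36.5, 36.7],
                 [36.6, 20.0]])          # 20.0: background, outside the mask
mask = np.array([[True, True],
                 [True, False]])
print(body_temperature(heat, mask, "unoccluded"))
```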

In some embodiments, the present application further provides an image processing apparatus, including: an acquisition unit configured to acquire an image to be processed, a first threshold, a second threshold and a third threshold, where the first threshold is different from both the second threshold and the third threshold, and the second threshold is less than or equal to the third threshold; a first processing unit configured to determine a first number of first pixels in a region to be detected of the image to be processed, where a first pixel is a pixel whose color value is greater than or equal to the second threshold and less than or equal to the third threshold; and a detection unit configured to obtain a skin occlusion detection result of the image to be processed based on the first threshold and a first ratio of the first number to the number of pixels in the region to be detected.

In combination with any embodiment of the present application, the region to be detected includes a face region, and the skin occlusion detection result includes a face occlusion detection result. The image processing apparatus further includes a second processing unit configured to, before the first number of first pixels in the region to be detected is determined, perform face detection on the image to be processed to obtain a first face frame, and determine the face region from the image to be processed according to the first face frame.

In combination with any embodiment of the present application, the face region includes a forehead region, the face occlusion detection result includes a forehead occlusion detection result, and the first face frame includes an upper frame line and a lower frame line, both parallel to the horizontal axis of the pixel coordinate system of the image to be processed, with the ordinate of the upper frame line smaller than that of the lower frame line. The second processing unit is configured to: perform face key-point detection on the image to be processed to obtain at least one face key point, the at least one face key point including a left-eyebrow key point and a right-eyebrow key point; keeping the ordinate of the upper frame line unchanged, move the lower frame line in the negative direction of the vertical axis of the pixel coordinate system of the image to be processed until the line on which it lies coincides with a first straight line, thereby obtaining a second face frame, where the first straight line is the straight line through the left-eyebrow key point and the right-eyebrow key point; and obtain the forehead region from the area enclosed by the second face frame.

In combination with any embodiment of the present application, the second processing unit is configured to: keeping the ordinate of the lower frame line of the second face frame unchanged, move the upper frame line of the second face frame along the vertical axis of the pixel coordinate system of the image to be processed until the distance between the upper and lower frame lines of the second face frame equals a preset distance, thereby obtaining a third face frame; and obtain the forehead region from the area enclosed by the third face frame.

In combination with any embodiment of the present application, the at least one face key point further includes a left-mouth-corner key point and a right-mouth-corner key point, and the first face frame further includes a left frame line and a right frame line, both parallel to the vertical axis of the pixel coordinate system of the image to be processed, with the abscissa of the left frame line smaller than that of the right frame line. The second processing unit is configured to: keeping the abscissa of the left frame line of the third face frame unchanged, move the right frame line of the third face frame along the horizontal axis of the pixel coordinate system of the image to be processed until the distance between the right and left frame lines of the third face frame equals a reference distance, thereby obtaining a fourth face frame, where the reference distance is the distance between the two intersection points of a second straight line with the face contour contained in the third face frame, the second straight line lies between the first straight line and a third straight line and is parallel to the first straight line or the third straight line, and the third straight line is the straight line through the left-mouth-corner key point and the right-mouth-corner key point; and take the area enclosed by the fourth face frame as the forehead region.

In combination with any embodiment of the present application, the image processing apparatus further includes a determining unit configured to, before the first number of first pixels in the region to be detected is determined, determine a skin pixel region from the pixel region contained in the first face frame. The acquisition unit is further configured to acquire the color value of a second pixel in the skin pixel region, and the first processing unit is further configured to take the difference between the color value of the second pixel and a first value as the second threshold and the sum of the color value of the second pixel and a second value as the third threshold, where neither the first value nor the second value exceeds the maximum of the color values of the image to be processed.

In combination with any embodiment of the present application, the image processing apparatus further includes a third processing unit configured to, before the skin pixel region is determined from the pixel region contained in the first face frame, perform mask-wearing detection on the image to be processed to obtain a detection result. The determining unit is configured to: when it is detected that the face region in the image to be processed is not wearing a mask, take the pixel region of the face region excluding the forehead, mouth, eyebrow and eye regions as the skin pixel region; and when it is detected that the face region in the image to be processed is wearing a mask, take the pixel region between the first straight line and a fourth straight line as the skin pixel region, where the fourth straight line is the straight line through a left-eye lower-eyelid key point and a right-eye lower-eyelid key point, both of which belong to the at least one face key point.

In combination with any embodiment of the present application, the acquisition unit is configured to: when the at least one face key point includes at least one first key point in the inner region of the left eyebrow and at least one second key point in the inner region of the right eyebrow, determine a rectangular region according to the at least one first key point and the at least one second key point; perform grayscale processing on the rectangular region to obtain a grayscale image of the rectangular region; and take the color value at the intersection of a first row and a first column of the grayscale image as the color value of the second pixel, where the first row is the row of the grayscale image with the largest sum of gray values and the first column is the column with the largest sum of gray values.

In combination with any embodiment of the present application, the detection unit is configured to: when the first ratio does not exceed the first threshold, determine that the skin occlusion detection result is that the skin region corresponding to the region to be detected is in an occluded state; and when the first ratio exceeds the first threshold, determine that the skin occlusion detection result is that the skin region corresponding to the region to be detected is in an unoccluded state.

In combination with any embodiment of the present application, the skin region belongs to a person to be detected; the acquisition unit is further configured to acquire a temperature heat map of the image to be processed; and the image processing apparatus further includes a fourth processing unit configured to, when the skin occlusion detection result is that the skin region is in an unoccluded state, read the temperature of the skin region from the temperature heat map as the body temperature of the person to be detected.

The present application further provides a processor configured to execute the method of the above first aspect and any possible implementation thereof.

The present application further provides an electronic device, including a processor, a sending apparatus, an input apparatus, an output apparatus and a memory, where the memory is configured to store computer program code including computer instructions; when the processor executes the computer instructions, the electronic device performs the method of the above first aspect and any possible implementation thereof.

The present application further provides a computer-readable storage medium storing a computer program that includes program instructions; when the program instructions are executed by a processor, the processor performs the method of the above first aspect and any possible implementation thereof.

The present application further provides a computer program product including a computer program or instructions; when the computer program or instructions run on a computer, the computer performs the method of the above first aspect and any possible implementation thereof.

It should be understood that the above general description and the following detailed description are merely exemplary and explanatory, and do not limit the present application.

This application claims priority to Chinese patent application No. 202110600103.1, filed on May 31, 2021, the entire disclosure of which is incorporated herein by reference.

To illustrate the technical solutions in the embodiments of the present application or in the summary of the invention more clearly, the drawings needed in the embodiments or the summary are described below.

The accompanying drawings are incorporated into and constitute a part of this specification; they show embodiments consistent with the present application and serve to illustrate its technical solutions.

To enable those skilled in the art to better understand the solutions of the present application, the technical solutions in the embodiments of the present application are described below clearly and completely in conjunction with the drawings. The described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present application without creative effort fall within the scope of protection of the present application.

The terms "first", "second" and the like in the specification, claims and drawings of the present application are used to distinguish different objects rather than to describe a particular order. Moreover, the terms "include" and "have", and any variations thereof, are intended to cover non-exclusive inclusion: a process, method, system, product or device comprising a series of steps or units is not limited to the listed steps or units, but optionally also includes steps or units not listed, or steps or units inherent to the process, method, product or device.

Reference herein to an "embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment may be included in at least one embodiment of the present application. Appearances of the term in various places in the specification do not necessarily all refer to the same embodiment, nor to separate or alternative embodiments mutually exclusive of other embodiments. Those skilled in the art will understand that any embodiment described herein may be combined with other embodiments.

Before the following description, the pixel coordinate system used in the embodiments of the present application is defined. As shown in Figure 1, the pixel coordinate system xoy is constructed with the upper-left corner of the image as the origin o, the direction parallel to the columns of the image as the x-axis, and the direction parallel to the rows of the image as the y-axis. In this coordinate system, the abscissa indicates the row index of a pixel in the image and the ordinate indicates its column index, both measured in pixels. For example, if pixel a in Figure 1 has coordinates (30, 25), its abscissa is 30 pixels and its ordinate is 25 pixels, i.e. pixel a lies in the 30th row and 25th column of the image.

To improve detection safety, non-contact skin detection is being applied in more and more scenarios. The accuracy of such non-contact detection is largely affected by the occlusion state of the skin: for example, if a large portion of a skin area is covered, the accuracy of a non-contact detection result for that skin area may be low. Therefore, how to detect the occlusion state of skin is of great significance. At present, for example, non-contact temperature measurement is widely used in the field of body temperature detection. Non-contact temperature measurement tools offer advantages such as fast measurement and over-temperature voice alarms, making them suitable for rapid screening of body temperature in public places with particularly heavy foot traffic.

A thermal imaging device detects the thermal radiation emitted by objects mainly by collecting light in the thermal infrared band, and then establishes an accurate correspondence between thermal radiation and temperature to realize the temperature measurement function. As a non-contact temperature measurement tool, a thermal imaging device can cover a large area; in detection scenarios with heavy foot traffic, it can increase throughput and reduce the time crowds spend gathered together.

A thermal imaging device mainly identifies the position of a pedestrian's forehead and then measures body temperature based on the forehead area. However, when a pedestrian wears a hat or has bangs, the device cannot determine whether the forehead area is occluded. In such cases, whether the occlusion state of the forehead can be determined has a great influence on the accuracy of body temperature detection.

On this basis, an embodiment of the present application provides an image processing method to realize skin occlusion detection of, for example, an object whose temperature is to be measured. In a body temperature detection embodiment, the object to be measured may be a human face, more specifically the forehead area of the face, or even a specific position within the forehead area. To simplify the description, in the following the area of the image to be processed that corresponds to the object to be measured is referred to as the area to be detected. In other words, the object to be measured is usually the skin area corresponding to the area to be detected in the image to be processed, and the skin occlusion detection result for the object includes whether that skin area is occluded.

The execution subject of the embodiments of the present application is an image processing apparatus, which may be one of the following: a mobile phone, a computer, a server, or a tablet computer.

The embodiments of the present application are described below with reference to the drawings of the embodiments.

Please refer to FIG. 2, which is a schematic flowchart of an image processing method provided by an embodiment of the present application.

201. Acquire an image to be processed, a first threshold, a second threshold, and a third threshold, where the first threshold is different from the second threshold, the first threshold is different from the third threshold, and the second threshold is less than or equal to the third threshold.

In the embodiments of the present application, the image to be processed includes image blocks that contain a human face and image blocks that do not contain a human face. The first threshold is a reference ratio, preset according to the specific implementation, between the number of skin pixels in the forehead area and the number of pixels in the forehead area; it serves as the criterion for evaluating whether the forehead area is occluded.

The first threshold in the embodiments of the present application is related to the required accuracy of temperature detection (or of other embodiments). For example, suppose a temperature measurement is performed on the forehead area of a pedestrian: the more skin exposed in the forehead area, the more accurate the measurement result. If the measurement result is considered accurate when exposed skin accounts for more than 60% of the forehead area, and this accuracy is required in the body temperature detection scenario, the first threshold can be set to 60%. If higher accuracy is required, the first threshold can be set above 60%. If setting the first threshold to 60% is considered too demanding and a highly accurate result is not actually needed, the first threshold can be set below 60%; in that case, the accuracy of the corresponding temperature measurement result will decrease. The setting of the first threshold therefore depends on the specific implementation and is not limited in the embodiments of the present application.

In one implementation of acquiring the image to be processed, the image processing apparatus receives the image to be processed input by a user through an input component. The input component includes a keyboard, a mouse, a touch screen, a touch pad, an audio/video input device, and the like.

In another implementation of acquiring the image to be processed, the image processing apparatus receives the image to be processed sent by a data terminal. The data terminal may be any of the following: a mobile phone, a computer, a tablet computer, a server, and so on.

In yet another implementation of acquiring the image to be processed, the image processing apparatus receives the image to be processed sent by a surveillance camera. Optionally, the surveillance camera may be deployed on non-contact temperature measurement products such as artificial intelligence (AI) infrared imagers and security gates (such products are mainly placed in scenarios with dense foot traffic, such as railway stations, airports, subways, shops, supermarkets, schools, company lobbies, and residential community entrances).

In yet another implementation of acquiring the image to be processed, the image processing apparatus receives a video stream sent by a surveillance camera, decodes the video stream, and uses the obtained images as images to be processed. Optionally, the surveillance camera may be deployed on non-contact temperature measurement products such as AI infrared imagers and security gates (such products are mainly placed in scenarios with dense foot traffic, such as railway stations, airports, subways, shops, supermarkets, schools, company lobbies, and residential community entrances).

In yet another implementation of acquiring the image to be processed, the image processing apparatus is connected to cameras, and the apparatus can obtain data frames collected in real time from each camera; a data frame may take the form of an image and/or a video.

It should be understood that the number of cameras connected to the image processing apparatus is not fixed; once the network address of a camera is input into the image processing apparatus, the collected data frames can be obtained from that camera through the apparatus.

For example, if personnel at place A want to use the technical solution provided by this application, they only need to input the network address of the camera at place A into the image processing apparatus. The apparatus can then obtain the data frames collected by that camera and perform subsequent processing on them, and the image processing apparatus outputs a detection result indicating whether the forehead is occluded.

202. Determine a first number of first pixels in the area to be detected of the image to be processed, where a first pixel is a pixel whose color value is greater than or equal to the second threshold and less than or equal to the third threshold.

In the embodiments of the present application, the color value is a parameter of the hexcone (hue, saturation, value; HSV) color model. The three components of a color value in this model are hue (H), saturation (S), and value (V), that is, a color value carries hue, saturation, and brightness information. Since this application involves skin detection, the number of skin pixels in the area to be detected needs to be counted, namely the first number of first pixels.

Specifically, the image processing apparatus regards pixels whose color values are greater than or equal to the second threshold and less than or equal to the third threshold as skin pixels. That is, in the embodiments of the present application, the second threshold and the third threshold are used to determine whether a pixel is a skin pixel.

In one implementation of determining that a first pixel is a pixel whose color value is greater than or equal to the second threshold and less than or equal to the third threshold, a pixel in the area to be detected is considered a skin pixel corresponding to an unoccluded skin area only when every component of its color value is greater than or equal to the corresponding component of the second threshold and less than or equal to the corresponding component of the third threshold. For example, suppose the second threshold has H = 26, S = 43, and V = 46, and the third threshold has H = 34, S = 255, and V = 255. Then the color value ranges for skin pixels are 26 to 34 for H, 43 to 255 for S, and 46 to 255 for V. If a pixel in the area to be detected has H = 25, S = 45, and V = 200, it is not considered a skin pixel, because its H value is outside the set range of 26 to 34. If, on the other hand, a pixel has H = 28, S = 45, and V = 200, it is considered a skin pixel, because its H, S, and V values are all within the set ranges. In other words, the area to be detected is converted from the RGB channels to the HSV channels, and only when every component of a pixel's color value lies within the ranges defined by the second and third thresholds given above is that pixel a skin pixel corresponding to an unoccluded skin area, that is, a first pixel.
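The per-component check described above can be sketched as follows, using the example thresholds from the text (H 26 to 34, S 43 to 255, V 46 to 255). This is an illustrative sketch with assumed function names, not the patent's code; a practical implementation would more likely convert the region to HSV and apply a ranged mask with an image library.

```python
# Illustrative sketch of the skin-pixel test in step 202, using the example
# thresholds from the text; the helper names are assumptions, not the patent's.
LOWER = (26, 43, 46)    # second threshold: (H, S, V)
UPPER = (34, 255, 255)  # third threshold: (H, S, V)

def is_skin_pixel(hsv, lower=LOWER, upper=UPPER):
    """A pixel counts as skin only if EVERY HSV component is within range."""
    return all(lo <= c <= hi for c, lo, hi in zip(hsv, lower, upper))

def count_skin_pixels(hsv_region):
    """First number: how many pixels of the region pass the component test."""
    return sum(is_skin_pixel(p) for row in hsv_region for p in row)

# The two example pixels from the text:
print(is_skin_pixel((25, 45, 200)))  # False: H=25 is outside 26..34
print(is_skin_pixel((28, 45, 200)))  # True: all components are in range
```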

After determining the first pixels in the area to be detected, the image processing apparatus further counts the first pixels to obtain the first number.

203. Obtain a skin occlusion detection result for the image to be processed according to the first threshold and a first ratio of the first number to the number of pixels in the area to be detected.

In the embodiments of the present application, the skin occlusion detection result includes that the skin area is in an occluded state or that the skin area is in an unoccluded state.

In the embodiments of the present application, the first ratio of the first number to the number of pixels in the area to be detected represents the proportion of unoccluded skin pixels within the area to be detected (hereinafter referred to as the proportion). If the first ratio indicates a small proportion, the skin area corresponding to the area to be detected is occluded; conversely, if the first ratio indicates a large proportion, the skin area corresponding to the area to be detected is not occluded.

In the embodiments of the present application, the image processing apparatus uses the first threshold as the basis for judging the proportion, and can then determine whether the skin area is occluded according to the proportion, thereby obtaining the skin occlusion detection result.

In one possible implementation, if the proportion does not exceed the first threshold, the proportion is small, and the skin area is determined to be in an occluded state; if the proportion exceeds the first threshold, the proportion is large, and the skin area is determined to be in an unoccluded state.

In the implementation of the present application, the image processing apparatus determines the number of skin pixels in the area to be detected of the image to be processed according to the second and third thresholds, that is, the first number. By determining the first ratio of the first number to the number of pixels in the area to be detected, the proportion of skin pixels within the area is obtained, and the occlusion state of the skin area can then be determined from the relationship between this proportion and the first threshold, thereby obtaining the skin occlusion detection result for the image to be processed.
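The decision step 203 can be sketched in a few lines. This is a minimal sketch under assumed names, with the 60% first threshold from the earlier example taken as a default; the patent does not fix these values.

```python
# Illustrative sketch of step 203: compare the proportion of skin pixels
# (first number / total pixels in the area) against the first threshold.
# The function name and the 0.60 default are assumptions for illustration.
def skin_occlusion_result(first_number, total_pixels, first_threshold=0.60):
    """Return 'unoccluded' if the skin-pixel proportion exceeds the first
    threshold, otherwise 'occluded'."""
    first_ratio = first_number / total_pixels
    return "unoccluded" if first_ratio > first_threshold else "occluded"

# E.g. 1500 skin pixels in a 50x40 forehead region (2000 pixels total):
print(skin_occlusion_result(1500, 2000))  # 0.75 > 0.60 -> "unoccluded"
print(skin_occlusion_result(900, 2000))   # 0.45 <= 0.60 -> "occluded"
```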

As an optional implementation, the skin area includes a face area, and the skin occlusion detection result includes a face occlusion detection result. In this implementation, after determining the number of skin pixels in the face area of the image to be processed, the image processing apparatus further determines the proportion of skin pixels in the face area, and can then determine whether the face area is occluded according to this proportion, obtaining the face occlusion detection result. Specifically, if the face area is determined to be occluded, the face occlusion detection result is that the face area is in an occluded state; if the face area is determined not to be occluded, the face occlusion detection result is that the face area is not in an occluded state.

In this implementation, before determining the first number of first pixels in the area to be detected of the image to be processed, the image processing apparatus further performs the following steps:

1. Perform face detection processing on the image to be processed to obtain a first face frame.

In the embodiments of the present application, the face detection processing is used to identify whether the image to be processed contains a human object.

Performing face detection processing on the image to be processed yields the coordinates of the first face frame (as shown at D in FIG. 1). The coordinates of the first face frame may be the coordinates of the upper left corner, the lower left corner, the lower right corner, or the upper right corner. They may also be a pair of diagonal coordinates, that is, the upper left and lower right corners, or the lower left and upper right corners. The area contained in the first face frame is the area of the face from the forehead to the chin.

In one possible implementation, feature extraction processing is performed on the image to be processed by a pre-trained neural network to obtain feature data, and the pre-trained neural network identifies from the features in the feature data whether the image to be processed contains a face. When it is determined from the extracted features that the image to be processed contains a face, the position of the first face frame in the image to be processed is determined, thereby realizing face detection. The face detection processing of the image to be processed can be implemented by a convolutional neural network.

By using multiple images with annotation information as training data, the convolutional neural network is trained so that the trained network can perform face detection processing on images. The annotation information of the images in the training data is the face and the position of the face. During training with the training data, the convolutional neural network extracts feature data from each image and determines from the feature data whether there is a face in the image; if there is, the position of the face is obtained from the feature data. The annotation information is used as supervision information to supervise the results obtained by the convolutional neural network during training, and the parameters of the network are updated to complete the training. In this way, the trained convolutional neural network can be used to process the image to be processed to obtain the position of the face in it.

In another possible implementation, the face detection processing can be implemented by a face detection algorithm, which may be at least one of the following: a face detection algorithm based on rough histogram segmentation and singular value features, face detection based on the binary wavelet transform, a probabilistic decision-based neural network (PDBNN) method, a hidden Markov model method, and so on. This application does not specifically limit the face detection algorithm used to implement the face detection processing.

2. Determine the face area from the image to be processed according to the first face frame.

In one possible implementation, the image processing apparatus takes the area enclosed by the first face frame as the face area.

As an optional implementation, the first face frame includes an upper frame line and a lower frame line. Alternatively, the first face frame includes an upper frame line, a lower frame line, a left frame line, and a right frame line. The upper and lower frame lines are the sides of the first face frame parallel to the horizontal axis of the pixel coordinate system of the image to be processed, and the ordinate of the upper frame line is smaller than that of the lower frame line; the left and right frame lines are the sides of the first face frame parallel to the vertical axis of the pixel coordinate system of the image to be processed, and the abscissa of the left frame line is smaller than that of the right frame line.

In this implementation, the face area includes the forehead area. The image processing apparatus then determines the face area from the image to be processed according to the first face frame, that is, it determines the forehead area from the image to be processed according to the first face frame.

In one implementation of determining the forehead area, the distance between the upper and lower frame lines is the distance from the upper edge of the forehead to the lower edge of the chin of the face contained in the first face frame, and the distance between the left and right frame lines is the distance between the inner side of the left ear and the inner side of the right ear. Generally speaking, the width of the forehead area of a face (the distance between the upper and lower edges of the forehead area) accounts for about one third of the length of the whole face (the distance between its upper and lower edges); this proportion varies from person to person, but for everyone it falls within the range of 30% to 40%. Keeping the ordinate of the upper frame line unchanged, the lower frame line is moved along the negative direction of the vertical axis of the pixel coordinate system of the image to be processed, so that the distance between the moved upper and lower frame lines is 30% to 40% of their initial distance; the area contained in the moved first face frame is then the forehead area. When the coordinates of the first face frame are a pair of diagonal coordinates, the coordinates of its upper left corner or its upper right corner determine the position of the forehead area. Therefore, by changing the size and position of the first face frame, the area within the first face frame can be made to be the forehead area of the face in the image to be processed.
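The shrink-to-forehead step above can be sketched as follows. The function name, the tuple layout, and the 35% default (inside the stated 30% to 40% range) are assumptions for illustration; the patent specifies only the range, not a fixed ratio.

```python
# Illustrative sketch: keep the top edge of the first face frame and move
# the bottom edge up so the new height is a fraction of the original height.
# Coordinates follow the usual image convention that the ordinate grows
# downward from the top-left origin.
def forehead_frame(face_frame, ratio=0.35):
    """face_frame is (top, left, bottom, right); returns the moved frame
    whose height is `ratio` times the original height."""
    top, left, bottom, right = face_frame
    new_bottom = top + (bottom - top) * ratio  # height shrinks to `ratio`
    return (top, left, new_bottom, right)

# A face frame of height 200 shrinks to a forehead frame of height 70:
print(forehead_frame((100, 200, 300, 360)))
```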

In another implementation of determining the forehead area, the image processing apparatus determines the forehead area by performing the following steps:

21. Perform face key point detection on the image to be processed to obtain at least one face key point, the at least one face key point including a left eyebrow key point and a right eyebrow key point.

In the embodiments of the present application, at least one face key point is obtained by performing face key point detection on the image to be processed, the at least one key point including a left eyebrow key point and a right eyebrow key point.

Face key point detection can be realized by performing feature extraction processing on the image to be processed to obtain feature data. The feature extraction processing can be implemented by a pre-trained neural network or by a feature extraction model, which is not limited in this application. The feature data is used to extract key point information of the face in the image to be processed. The image to be processed is a digital image, and the feature data obtained by feature extraction can be understood as deeper-level semantic information of the image.

In one possible implementation of face key point detection, a training set of face images is established and the positions of the key points to be detected are annotated. A first-layer deep neural network is constructed and a face region estimation model is trained; a second-layer deep neural network is constructed to perform preliminary detection of face key points; the inner face region is further divided into local regions, and a third-layer deep neural network is constructed for each local region; the rotation angle of each local region is estimated, each region is rectified according to the estimated angle, and a fourth-layer deep neural network is constructed on the rectified data set of each local region. Given any new face image, performing key point detection with this four-layer deep neural network model yields the final face key point detection result.

In yet another possible implementation of face key point detection, the convolutional neural network is trained by using multiple images with annotation information as training data, so that the trained network can perform face key point detection on images. The annotation information of the images in the training data is the positions of the key points of the face. During training, the convolutional neural network extracts feature data from each image and determines the positions of the face key points from the feature data. The annotation information is used as supervision information to supervise the results obtained during training, and the parameters of the network are updated to complete the training. In this way, the trained convolutional neural network can be used to process the image to be processed to obtain the positions of the face key points in it.

In yet another possible implementation, the image to be processed is convolved layer by layer through at least two convolutional layers to complete the feature extraction processing. The convolutional layers are connected in series, that is, the output of one layer is the input of the next, and the content and semantic information extracted by each layer differ. Specifically, the feature extraction processing abstracts the features of the face in the image step by step, while gradually discarding relatively minor feature data, where relatively minor feature data refers to feature data other than that of the face being detected. Therefore, the feature data extracted in later layers is smaller in size, but its content and semantic information are more concentrated. Convolving the image step by step through multiple convolutional layers reduces the size of the image while obtaining its content and semantic information, which reduces the amount of data the image processing apparatus must process and increases its computation speed.

In yet another possible implementation of face key point detection, the convolution processing is implemented as follows. The convolution kernel is slid over the image to be processed, and the pixel of the image corresponding to the center element of the kernel is called the target pixel. The pixel values of the image are multiplied by the corresponding values of the kernel, and all the products are summed to obtain the convolved pixel value, which is taken as the pixel value of the target pixel. Once the sliding traversal of the image is complete and the values of all its pixels have been updated, the convolution of the image to be processed is finished and the feature data is obtained. In one possible implementation, the features in the feature data are identified by the neural network that extracted them, so as to obtain the key point information of the face in the image to be processed.

In yet another possible implementation of face key point detection, a face key point detection algorithm is used, which may be at least one of OpenFace, multi-task cascaded convolutional networks (MTCNN), tweaked convolutional neural networks (TCNN), or tasks-constrained deep convolutional network (TCDCN); the present application does not limit the face key point detection algorithm.

22. While keeping the ordinate of the upper frame line of the first face frame unchanged, move the lower frame line of the first face frame in the negative direction of the vertical axis of the pixel coordinate system of the image to be processed, so that the line on which the lower frame line lies coincides with the first straight line, obtaining the second face frame. The first straight line is the straight line passing through the left eyebrow key point and the right eyebrow key point.

23. Obtain the forehead area according to the area contained in the second face frame.

In the embodiment of the present application, the distance between the upper frame line and the lower frame line is the distance from the upper edge of the forehead to the lower edge of the chin of the face contained in the first face frame, and the distance between the left frame line and the right frame line is the distance between the inside of the left ear and the inside of the right ear of that face. The first straight line passes through the left eyebrow key point and the right eyebrow key point. Because the forehead area lies above the first straight line within the first face frame, moving the lower frame line until it coincides with the first straight line makes the area contained in the moved frame the forehead area. That is, while keeping the ordinate of the upper frame line unchanged, the lower frame line is moved in the negative direction of the vertical axis of the pixel coordinate system of the image to be processed until it lies on the first straight line, giving the second face frame, whose contained area is the forehead area.
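The frame adjustment of step 22 can be sketched as follows (a frame is modelled as an `(x_left, y_top, x_right, y_bottom)` tuple with y growing downward; the eyebrow line is approximated as horizontal at the mean ordinate of the two eyebrow key points, and all coordinate values are illustrative):

```python
def to_second_face_frame(first_frame, left_brow, right_brow):
    """Keep the top line of the first face frame fixed and pull its
    bottom line up to the (approximated) eyebrow line."""
    x_left, y_top, x_right, y_bottom = first_frame
    brow_y = (left_brow[1] + right_brow[1]) / 2  # horizontal-line assumption
    return (x_left, y_top, x_right, brow_y)

first_frame = (100, 50, 300, 400)  # forehead-top-to-chin box, made-up values
second_frame = to_second_face_frame(first_frame, (150, 178), (250, 182))
```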

As an optional implementation manner, the image processing apparatus performs the following steps in the course of performing step 23:

24. While keeping the ordinate of the lower frame line of the second face frame unchanged, move the upper frame line of the second face frame along the vertical axis of the pixel coordinate system of the image to be processed, so that the distance between the upper frame line and the lower frame line of the second face frame equals a preset distance, obtaining the third face frame.

25. Obtain the forehead area according to the area contained in the third face frame.

In the embodiment of the present application, the distance between the left and right frame lines of the second face frame is the distance from the inside of the left ear to the inside of the right ear of the face it contains, and the distance between the upper and lower frame lines of the first face frame is the distance from the upper edge of the forehead to the lower edge of the chin. In general the height of the forehead area is about one third of the length of the whole face; although this proportion varies from person to person, it always falls within the range of 30% to 40%. The preset distance is therefore set to 30% to 40% of the distance between the upper and lower frame lines of the first face frame. Accordingly, for the area inside the second face frame to be the forehead area, the distance between the upper and lower frame lines of the second face frame must be reduced to 30% to 40% of the distance between the upper and lower frame lines of the first face frame. While keeping the ordinate of the lower frame line of the second face frame unchanged, the upper frame line is moved along the vertical axis of the pixel coordinate system of the image to be processed so that the distance between the two frame lines equals the preset distance, giving the third face frame. At this point, the area contained in the third face frame is the forehead area.

As an optional implementation manner, the image processing apparatus performs the following steps in the course of performing step 25:

26. While keeping the abscissa of the left frame line of the third face frame unchanged, move the right frame line of the third face frame along the horizontal axis of the pixel coordinate system of the image to be processed, so that the distance between the right frame line and the left frame line of the third face frame equals a reference distance, obtaining the fourth face frame. The reference distance is the distance between the two intersection points of a second straight line and the face contour contained in the third face frame, where the second straight line lies between the first straight line and a third straight line and is parallel to one of them, the third straight line being the straight line passing through the left mouth corner key point and the right mouth corner key point.

27. Take the area contained in the fourth face frame as the forehead area.

In the embodiment of the present application, the at least one face key point further includes a left mouth corner key point and a right mouth corner key point. The third straight line passes through these two key points. The second straight line lies between the first and third straight lines and is parallel to one of them, and the distance between the two points where it intersects the face contour of the face image contained in the third face frame is taken as the reference distance. Because the second straight line lies between the first and third straight lines, that is, in the region between the eyebrows and the mouth, and the face width there is close to the width of the forehead area, using the width of this region to determine the width of the forehead area is relatively accurate. The width of the forehead area is thus the width of the face contour there, that is, the reference distance. While keeping the abscissa of the left frame line of the third face frame unchanged, the right frame line is moved along the horizontal axis of the pixel coordinate system of the image to be processed so that the distance between the left and right frame lines equals the reference distance, giving the fourth face frame. At this point, the area contained in the fourth face frame is the forehead area.

In yet another possible implementation, while keeping the abscissa of the right frame line of the third face frame unchanged, the left frame line of the third face frame is moved along the horizontal axis of the pixel coordinate system of the image to be processed so that the distance between the moved left frame line and the right frame line equals the reference distance; the area contained in the moved third face frame is then the forehead area.

In yet another possible implementation, the right frame line of the third face frame is moved in the negative direction of the horizontal axis of the pixel coordinate system of the image to be processed by half of the difference between the distance between the left and right frame lines and the reference distance, while the left frame line is moved in the positive direction of that axis by the same amount, so that the distance between the moved left and right frame lines equals the reference distance. At this point, the area contained in the moved third face frame is the forehead area.
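The symmetric variant just described can be sketched as follows (frames are `(x_left, y_top, x_right, y_bottom)` tuples; the reference distance would come from the contour intersections of step 26 but is simply an assumed number here):

```python
def center_to_reference_width(third_frame, reference):
    """Move both vertical frame lines toward each other by half of
    (current width - reference width), so the final width equals the
    reference distance while the frame stays centered."""
    x_left, y_top, x_right, y_bottom = third_frame
    shift = (x_right - x_left - reference) / 2
    return (x_left + shift, y_top, x_right - shift, y_bottom)

third_frame = (100, 57.5, 300, 180.0)   # illustrative forehead-height box
fourth_frame = center_to_reference_width(third_frame, reference=160)
```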

As an optional implementation manner, before determining the first number of first pixels in the area to be detected of the image to be processed, the image processing apparatus further performs the following steps:

3. Determine a skin pixel area from the pixel area contained in the first face frame.

In the embodiment of the present application, because a color reference for the exposed skin of the skin area is needed, the color value of a pixel in the skin pixel area is taken as that reference, and the skin pixel area must therefore be determined from the pixel area contained in the first face frame. For example, as shown in FIG. 1, the skin pixel area may be the cheek area below the eyes contained in the first face frame, the intersection of the area below the nose and the area above the mouth contained in the first face frame, or the area below the mouth contained in the first face frame.

As an optional implementation manner, before determining the skin pixel area from the pixel area contained in the face frame, the image processing apparatus further performs the following steps:

31. Perform mask wearing detection on the image to be processed to obtain a detection result.

In the embodiment of the present application, mask wearing detection is performed on the image to be processed, and the detection result is either that the person in the image to be processed is wearing a mask or that the person in the image to be processed is not wearing a mask.

In a possible implementation, the image processing apparatus performs first feature extraction on the image to be processed to obtain first feature data, where the first feature data carries information on whether the person to be detected is wearing a mask. The image processing apparatus obtains the detection result according to this first feature data.

Optionally, the first feature extraction may be implemented by a mask detection network, which is obtained by training a deep convolutional neural network with at least one first training image carrying annotation information as training data, the annotation information indicating whether the person in the first training image wears a mask.

32. When the detection result is that the face area does not wear a mask, take the pixel area of the face area other than the forehead area, the mouth area, the eyebrow area and the eye area as the skin pixel area. The at least one face key point further includes a left-eye lower eyelid key point and a right-eye lower eyelid key point.

When the detection result is that the face area wears a mask, take the pixel area of the face area between the first straight line and a fourth straight line as the skin pixel area. The fourth straight line is the straight line passing through the left-eye lower eyelid key point and the right-eye lower eyelid key point, both of which belong to the at least one face key point.

In the embodiment of the present application, when the detection result is that the face area does not wear a mask, the skin pixel area of the face area is the area other than the forehead area, the mouth area, the eyebrow area and the eye area. The eye and eyebrow areas of the face contain pixels whose color values appear black, and the mouth area contains pixels whose color values appear red, so the skin pixel area excludes the eye, mouth and eyebrow areas. Moreover, since it is uncertain whether the forehead area is occluded by a hat, bangs or the like, the skin pixel area corresponding to the forehead cannot be determined, so the forehead area is excluded as well. Therefore, when mask wearing detection on the image to be processed determines that the face area does not wear a mask, the skin pixel area comprises the pixel area of the face area other than the forehead area, the mouth area, the eyebrow area and the eye area.

When the detection result is that the face area wears a mask, most of the face area below the nose is occluded, so the unoccluded skin may be the brow-center area, the eyelid area and the nasion area. Face key point detection yields the coordinates of the left-eye lower eyelid key point, the right-eye lower eyelid key point, the left eyebrow key point and the right eyebrow key point. The fourth straight line passes through the two lower eyelid key points, and the first straight line passes through the two eyebrow key points. The brow-center, eyelid and nasion areas all lie between the line determined by the left and right eyebrows and the line determined by the left-eye and right-eye lower eyelids within the face area. Therefore, when the detection result is that the face area wears a mask, the pixel area of the face area between the first straight line and the fourth straight line is taken as the skin pixel area.

4. Obtain the color value of a second pixel in the skin pixel area.

In the embodiment of the present application, the color value of the second pixel is obtained from the skin pixel area and serves as the benchmark for the color of the exposed skin in the skin area; the second pixel may therefore be any point in the skin pixel area.

The second pixel in the skin pixel area may be obtained, for example, by taking the pixel at the average of the coordinates of a skin pixel area as the second pixel; by taking the pixel at the intersection of straight lines determined by certain key points; or by graying an image of part of the skin pixel area and taking the pixel with the largest gray value. The embodiment of the present application does not limit the manner of obtaining the second pixel.

In a possible implementation, when the inner right-eyebrow area and the inner left-eyebrow area each contain two key points, let the key points be the upper inner right-eyebrow point, the lower inner right-eyebrow point, the upper inner left-eyebrow point and the lower inner left-eyebrow point. Connecting the upper inner right-eyebrow point with the lower inner left-eyebrow point, and the upper inner left-eyebrow point with the lower inner right-eyebrow point, yields two intersecting straight lines with a unique intersection point. As shown in the figure, suppose the numbers of these four key points are 37, 38, 67 and 68 respectively; key points 37 and 68 are connected, key points 38 and 67 are connected, and the two resulting lines determine one intersection point. Based on the position of the face frame, the coordinates of key points 37, 38, 67 and 68 can be determined, and the coordinates of the intersection point can then be solved with OpenCV. From the coordinates of the intersection point, the corresponding pixel is obtained; converting the RGB channels of that pixel to HSV channels yields its color value, which is the color value of the second pixel.
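The cross-line construction and the channel conversion above can be sketched as follows (the 2x2 linear system is solved directly instead of calling OpenCV, the standard-library `colorsys` stands in for the RGB-to-HSV conversion, and all coordinates and the sampled RGB value are illustrative):

```python
import colorsys

def line_intersection(p1, p2, p3, p4):
    """Intersection of line p1-p2 with line p3-p4 (assumed non-parallel)."""
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = p1, p2, p3, p4
    denom = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    det1 = x1 * y2 - y1 * x2
    det2 = x3 * y4 - y3 * x4
    x = (det1 * (x3 - x4) - (x1 - x2) * det2) / denom
    y = (det1 * (y3 - y4) - (y1 - y2) * det2) / denom
    return (x, y)

kp37, kp38 = (240, 100), (240, 110)   # right-brow inner upper / lower
kp67, kp68 = (160, 100), (160, 110)   # left-brow inner upper / lower
cx, cy = line_intersection(kp37, kp68, kp38, kp67)  # brow-center pixel

# Convert the RGB channels of the pixel found there to HSV; the RGB
# triple is an assumed skin-tone sample, normalized to [0, 1].
h, s, v = colorsys.rgb_to_hsv(200 / 255, 160 / 255, 140 / 255)
```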

In yet another possible implementation, when the inner right-eyebrow area and the inner left-eyebrow area each contain two key points, let the key points again be the upper inner right-eyebrow point, the lower inner right-eyebrow point, the upper inner left-eyebrow point and the lower inner left-eyebrow point, and determine a rectangular brow-center area from these four key points. As shown in the figure, suppose the numbers of the four key points are 37, 38, 67 and 68, with coordinates (X1, Y1), (X2, Y2), (X3, Y3) and (X4, Y4) respectively. Take the maximum of the Y coordinates of (X1, Y1) and (X2, Y2) as Y5, the minimum of the Y coordinates of (X3, Y3) and (X4, Y4) as Y6, the maximum of the X coordinates of (X1, Y1) and (X3, Y3) as X5, and the minimum of the X coordinates of (X2, Y2) and (X4, Y4) as X6, giving a rectangular area. The four corners of the intercepted brow-center area are thus (X6, Y6), (X5, Y5), (X5, Y6) and (X6, Y5). Based on the position of the face frame, the coordinates of key points 37, 38, 67 and 68 can be determined, and with them the positions of (X6, Y6), (X5, Y5), (X5, Y6) and (X6, Y5). Connecting (X6, Y6) with (X5, Y5), and (X5, Y6) with (X6, Y5), yields two straight lines with a unique intersection point, whose coordinates can be solved with OpenCV. From the coordinates of the intersection point, the corresponding pixel is obtained; converting its RGB channels to HSV channels yields its color value, which is the color value of the second pixel.

As an optional implementation manner, the image processing apparatus performs the following steps in the course of performing step 4:

41. When the at least one face key point includes at least one first key point belonging to the inner left-eyebrow area and at least one second key point belonging to the inner right-eyebrow area, determine a rectangular area according to the at least one first key point and the at least one second key point.

42. Perform graying on the rectangular area to obtain a grayscale image of the rectangular area.

43. Take the color value at the intersection of the first row and the first column of the grayscale image of the rectangular area as the color value of the second pixel, where the first row is the row of the grayscale image with the largest sum of gray values and the first column is the column with the largest sum of gray values.

The embodiment of the present application includes multiple schemes for obtaining a rectangular area from the at least one first key point and the at least one second key point. The rectangular area is grayed to obtain its grayscale image. The sum of the gray values of each row of the grayscale image is computed, and the row with the largest sum is recorded as the first row; likewise, the column with the largest sum of gray values is recorded as the first column. The coordinates of the intersection of the first row and the first column locate the corresponding pixel; converting the RGB channels of that pixel to HSV channels yields its color value, which is the color value of the second pixel.
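Steps 42 and 43 can be sketched as follows (a 3x3 RGB patch of invented values; the BT.601 luma weights used for graying are one common choice and are not specified by the text):

```python
def brightest_cross(patch_rgb):
    """Gray the patch, then return the row with the largest gray-value
    sum, the column with the largest gray-value sum, and the RGB value
    of the pixel at their crossing."""
    gray = [[0.299 * r + 0.587 * g + 0.114 * b for (r, g, b) in row]
            for row in patch_rgb]
    row_sums = [sum(row) for row in gray]
    col_sums = [sum(col) for col in zip(*gray)]
    best_row = row_sums.index(max(row_sums))   # the "first row"
    best_col = col_sums.index(max(col_sums))   # the "first column"
    return best_row, best_col, patch_rgb[best_row][best_col]

patch = [
    [( 40,  40,  40), ( 60,  60,  60), ( 50,  50,  50)],
    [( 70,  70,  70), (220, 180, 160), ( 90,  90,  90)],
    [( 30,  30,  30), ( 80,  80,  80), ( 20,  20,  20)],
]
row, col, rgb = brightest_cross(patch)  # the crossing pixel's color value
```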

In one possible implementation for obtaining the rectangular area, when there is exactly one inner left-eyebrow key point and one inner right-eyebrow key point and their ordinates differ, the difference of their ordinates is taken as the width of the rectangular area and the difference of their abscissas as its length, determining a rectangular area whose diagonal is defined by these two key points.

In yet another possible implementation for obtaining the rectangular area, when there are two inner left-eyebrow key points and one inner right-eyebrow key point, the segment connecting the two inner left-eyebrow key points is taken as the first side of the rectangular area; of the two inner left-eyebrow key points, the one whose ordinate differs from that of the inner right-eyebrow key point is selected, and the segment connecting it with the inner right-eyebrow key point is taken as the second side. Drawing lines parallel to the first and second sides yields the remaining two sides, determining the rectangular area.

In yet another possible implementation for obtaining the rectangular area, when the inner left-eyebrow area and the inner right-eyebrow area each contain more than two key points, four of the key points may be selected to form a quadrilateral area, and the rectangular area is then obtained from the coordinates of these four key points.

In yet another possible implementation for obtaining the rectangular area, the at least one first key point includes a third key point and a fourth key point, and the at least one second key point includes a fifth key point and a sixth key point; the ordinate of the third key point is smaller than that of the fourth key point, and the ordinate of the fifth key point is smaller than that of the sixth key point. A first abscissa and a first ordinate determine a first coordinate; a second abscissa and the first ordinate determine a second coordinate; the first abscissa and a second ordinate determine a third coordinate; and the second abscissa and the second ordinate determine a fourth coordinate. The first ordinate is the maximum of the ordinates of the third and fifth key points; the second ordinate is the minimum of the ordinates of the fourth and sixth key points; the first abscissa is the maximum of the abscissas of the third and fourth key points; and the second abscissa is the minimum of the abscissas of the fifth and sixth key points. The area enclosed by the first, second, third and fourth coordinates is taken as the rectangular area. For example, when the inner left-eyebrow area and the inner right-eyebrow area each contain two key points, let the four key points be the third key point (X1, Y1), the fifth key point (X2, Y2), the fourth key point (X3, Y3) and the sixth key point (X4, Y4). Take the maximum of the Y coordinates of (X1, Y1) and (X2, Y2) as Y5 (the first ordinate), the minimum of the Y coordinates of (X3, Y3) and (X4, Y4) as Y6 (the second ordinate), the maximum of the X coordinates of (X1, Y1) and (X3, Y3) as X5 (the first abscissa), and the minimum of the X coordinates of (X2, Y2) and (X4, Y4) as X6 (the second abscissa). The four coordinates of the rectangular area are then the first coordinate (X5, Y5), the second coordinate (X6, Y5), the third coordinate (X5, Y6) and the fourth coordinate (X6, Y6).
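The max/min construction of this example can be sketched as follows (key-point coordinates are illustrative; the function returns the four corner coordinates in the order listed above):

```python
def brow_center_rectangle(third_kp, fifth_kp, fourth_kp, sixth_kp):
    """third/fourth: upper and lower inner left-eyebrow key points;
    fifth/sixth: upper and lower inner right-eyebrow key points."""
    x1, y1 = third_kp   # (X1, Y1)
    x2, y2 = fifth_kp   # (X2, Y2)
    x3, y3 = fourth_kp  # (X3, Y3)
    x4, y4 = sixth_kp   # (X4, Y4)
    y5 = max(y1, y2)    # first ordinate
    y6 = min(y3, y4)    # second ordinate
    x5 = max(x1, x3)    # first abscissa
    x6 = min(x2, x4)    # second abscissa
    return (x5, y5), (x6, y5), (x5, y6), (x6, y6)

corners = brow_center_rectangle((160, 100), (240, 102), (158, 112), (242, 110))
```

Taking the inner max/min rather than the outer extremes keeps the rectangle fully inside the quadrilateral spanned by the four key points, so every pixel in it lies in the brow-center region.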

As another optional implementation manner, the image processing device performs the following steps during step 4:

44. In the case where the at least one face key point includes at least one first key point belonging to the inner area of the left eyebrow and at least one second key point belonging to the inner area of the right eyebrow, determine the mean coordinate of the at least one first key point and the at least one second key point.

45. Use the color value of the pixel determined by the mean coordinate as the color value of the second pixel in the skin pixel area.

In the embodiment of the present application, when the at least one face key point includes at least one second key point belonging to the inner area of the right eyebrow and at least one first key point belonging to the inner area of the left eyebrow, the coordinates of the at least one first key point and the at least one second key point are averaged. For example, when there are two key points in each of the inner-right-eyebrow and inner-left-eyebrow areas, take them to be the upper and lower points on the inner side of the right eyebrow and the upper and lower points on the inner side of the left eyebrow. As shown in FIG. 4, assume these four points are numbered 37, 38, 67 and 68, with coordinates (X1, Y1), (X2, Y2), (X3, Y3) and (X4, Y4). Averaging the abscissas and the ordinates of the four coordinates gives the mean coordinate (X0, Y0). After converting the pixel's RGB channels to HSV channels, the color value of the pixel at the mean coordinate (X0, Y0) can be obtained; this color value is the color value of the second pixel.
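As a sketch of this step, the following assumes the image is an RGB NumPy array indexed (row, column) and converts the single pixel at the mean coordinate with the standard library's colorsys rather than OpenCV, scaled to OpenCV-style ranges (H in 0-180, S and V in 0-255); the function name and conventions are assumptions:

```python
import colorsys
import numpy as np

def mean_keypoint_color_hsv(image_rgb, keypoints):
    """Average the key point coordinates, then read the HSV color value of
    the pixel at the mean coordinate (X0, Y0)."""
    xs = [p[0] for p in keypoints]
    ys = [p[1] for p in keypoints]
    x0 = int(round(sum(xs) / len(xs)))   # mean abscissa X0
    y0 = int(round(sum(ys) / len(ys)))   # mean ordinate Y0
    r, g, b = image_rgb[y0, x0] / 255.0  # array indexed as (row, column)
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    # Scale to OpenCV-style ranges: H in 0-180, S and V in 0-255
    return int(h * 180), int(s * 255), int(v * 255)
```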

As yet another optional implementation manner, the image processing device performs the following steps during step 4:

46. Determine a fifth straight line from the coordinates of the inner-right-eyebrow key point and the key point on the left side of the nasion; determine a sixth straight line from the coordinates of the inner-left-eyebrow key point and the key point on the right side of the nasion.

47. Use the color value of the pixel determined by the coordinates of the intersection of the fifth and sixth straight lines as the color value of the second pixel in the skin pixel area.

In the embodiment of the present application, the at least one face key point further includes an inner-right-eyebrow key point, a key point on the left side of the nasion, a key point on the right side of the nasion, and an inner-left-eyebrow key point. Connecting the inner-right-eyebrow key point with the key point on the left side of the nasion, and the inner-left-eyebrow key point with the key point on the right side of the nasion, yields two intersecting straight lines, the fifth and the sixth. This application does not limit the choice of the inner-eyebrow key points: the inner-right-eyebrow key point is any key point taken in the inner area of the right eyebrow, and the inner-left-eyebrow key point is any key point taken in the inner area of the left eyebrow. As shown in FIG. 4, assuming these four key points are numbered 67, 68, 78 and 79, key points 78 and 68 are connected, key points 79 and 67 are connected, and the two resulting lines determine one intersection. Based on the position of the face frame, the coordinates of key points 67, 68, 79 and 78 can be determined, and the coordinates of the intersection can then be solved with OpenCV. The intersection coordinates identify the corresponding pixel; converting that pixel's RGB channels to HSV channels yields its color value, which is the color value of the second pixel.
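The intersection itself needs no library-specific call; a minimal sketch of solving the two lines analytically (function names are assumptions) is:

```python
def line_through(p, q):
    """Coefficients (a, b, c) of the line a*x + b*y = c through points p and q."""
    a = q[1] - p[1]
    b = p[0] - q[0]
    c = a * p[0] + b * p[1]
    return a, b, c

def intersection(p1, q1, p2, q2):
    """Intersection of the line p1-q1 with the line p2-q2 (None if parallel)."""
    a1, b1, c1 = line_through(p1, q1)
    a2, b2, c2 = line_through(p2, q2)
    det = a1 * b2 - a2 * b1
    if det == 0:          # parallel or coincident lines: no unique intersection
        return None
    x = (c1 * b2 - c2 * b1) / det
    y = (a1 * c2 - a2 * c1) / det
    return x, y
```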

5. Take the difference between the color value of the second pixel and a first value as the second threshold, and the sum of the color value of the second pixel and a second value as the third threshold, where neither the first value nor the second value exceeds the maximum of the color values of the object to be processed.

In the embodiment of the present application, once the color value of the second pixel is determined, the second and third thresholds are determined. An OpenCV function can convert the representation of the image from an RGB channel map to an HSV channel map, from which the color value of the second pixel is obtained.

A color value comprises three parameter values: hue, brightness and saturation. Hue ranges from 0 to 180, while brightness and saturation each range from 0 to 255; that is, the maximum hue is 180 and the maximum brightness and saturation are each 255. It should be understood that the first value and the second value likewise each comprise hue, brightness and saturation parameters. Therefore, neither the hue of the first value nor that of the second value exceeds 180, and neither their brightness nor their saturation exceeds 255. Generally, the hue, brightness and saturation parameters of the first value and of the second value are the same, which means the hue, brightness and saturation of the second pixel's color value lie midway between the corresponding parameters of the second and third thresholds.

In one implementation of obtaining the mapping between the color value of the second pixel and the second and third thresholds, a machine-learning binary classification algorithm, such as logistic regression or naive Bayes, classifies whether an input color value belongs to the color values of the second pixel. That is, a set of color values is fed in, each is classified as belonging or not belonging to the second pixel's color values, and from this classification the mapping between the second pixel's color value and the second and third thresholds is obtained.

Optionally, the hue, brightness and saturation parameters of the first and second values are 30, 60 and 70 respectively. That is, once the color value of the second pixel is obtained, the corresponding second threshold decreases its hue by 30, its brightness by 60 and its saturation by 70, while the corresponding third threshold increases its hue by 30, its brightness by 60 and its saturation by 70.
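A sketch of deriving the second and third thresholds from the second pixel's HSV value follows. Clamping to the valid HSV ranges is an added assumption not stated in the text, and the function name is illustrative:

```python
def hsv_thresholds(base_hsv, first=(30, 60, 70), second=(30, 60, 70)):
    """Second threshold = base - first value; third threshold = base + second
    value; results clamped to OpenCV-style HSV ranges (H: 0-180, S/V: 0-255)."""
    maxima = (180, 255, 255)
    lower = tuple(max(0, c - d) for c, d in zip(base_hsv, first))
    upper = tuple(min(m, c + d) for c, d, m in zip(base_hsv, second, maxima))
    return lower, upper
```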

As an optional implementation manner, the image processing device performs the following steps during the execution of step 203:

6. When the first ratio of the first number to the number of pixels in the region to be detected does not exceed the first threshold, determine that the skin occlusion detection result is that the skin area is in an occluded state.

In the embodiment of the present application, the image processing device judges whether the skin area is occluded according to whether the first ratio of the first number to the number of pixels in the region to be detected exceeds the first threshold. When the first ratio is smaller than the first threshold, the skin occlusion detection result is determined to be that the skin area is in an occluded state. For example, if the first number is 50, the number of pixels in the region to be detected is 100, and the first threshold is 60%, then the first ratio is 50/100 = 50%, which is less than 60%, so the skin occlusion detection result is that the skin area is occluded.

When the skin occlusion detection result is that the skin area is occluded, the image processing device outputs prompt information indicating that the skin needs to be exposed. Following the prompt, the skin can be exposed and the skin occlusion detection repeated, or other operations can be performed; this application does not limit this.

7. When the first ratio reaches or exceeds the first threshold, determine that the skin occlusion detection result is that the skin area is in an unoccluded state.

In the embodiment of the present application, when the first ratio of the first number to the number of pixels in the region to be detected is equal to or greater than the first threshold, the image processing device determines that the skin occlusion detection result is that the skin area is unoccluded. For example, if the first number is 60, the number of pixels in the region to be detected is 100, and the first threshold is 60%, the first ratio is 60/100 = 60%, equal to the threshold, so the skin area is deemed unoccluded. Likewise, if the first number is 70, the first ratio is 70/100 = 70%, greater than 60%, and the skin area is again deemed unoccluded.

When the skin occlusion detection result is that the skin area is unoccluded, a temperature-measurement operation or other operation can be performed. Measuring temperature only when the skin area is unoccluded improves the accuracy of the detected temperature. This application does not limit the subsequent operations performed in this case.
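Steps 6 and 7 amount to a single comparison; a minimal sketch (treating a ratio equal to the first threshold as unoccluded, consistent with the 60% example above) is:

```python
def occlusion_result(first_count, region_pixel_count, first_threshold=0.60):
    """Return 'unoccluded' when the first ratio reaches or exceeds the first
    threshold, else 'occluded'."""
    ratio = first_count / region_pixel_count
    return "unoccluded" if ratio >= first_threshold else "occluded"
```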

As an optional implementation manner, the image processing device also performs the following steps:

8. Obtain a temperature heat map of the image to be processed.

The image processing method in the embodiment of the present application can be used in the field of temperature measurement, where the skin area belongs to a person to be detected. Each pixel in the temperature heat map carries the temperature information of the corresponding pixel. Optionally, the temperature heat map is collected by an infrared thermal imaging device on the image processing device. By performing image matching between the temperature heat map and the image to be processed, the image processing device determines, from the temperature heat map, the pixel area corresponding to the face area of the image to be processed.

9. When the skin occlusion detection result is that the skin area is unoccluded, read the temperature of the skin area from the temperature heat map as the body temperature of the person to be detected.

In the embodiment in which this application determines body temperature by detecting the temperature of the subject's forehead area, when the skin occlusion detection result is that the skin area is unoccluded, the pixel area corresponding to the face area of the image to be processed is first located in the temperature heat map. The skin area generally occupies the upper 30% to 40% of the whole face area, so the temperature corresponding to the skin area in the heat map can be obtained. Either the average temperature of the skin area or its maximum temperature may be taken as the body temperature of the person to be detected; this application does not limit this.
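A sketch of reading the skin temperature from an aligned heat map follows, assuming for illustration that the heat map is a per-pixel temperature array and the skin area is given as a boolean mask:

```python
import numpy as np

def skin_temperature(heatmap, skin_mask, use_max=False):
    """Read the temperature of the skin area from the temperature heat map.

    heatmap:   per-pixel temperature array aligned with the processed image
    skin_mask: boolean array selecting the (unoccluded) skin area
    Returns the average temperature of the area, or the maximum if use_max.
    """
    temps = heatmap[skin_mask]
    return float(temps.max() if use_max else temps.mean())
```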

Please refer to FIG. 3, which is a schematic flowchart of an application of the image processing method provided by an embodiment of the present application.

Based on the image processing method provided in the embodiments of the present application, the embodiments of the present application also provide a possible application scenario of the image processing method.

When thermal imaging equipment is used to measure the temperature of pedestrians without contact, the temperature of the pedestrian's forehead area is generally measured. However, when a pedestrian's forehead is covered by bangs or a hat, it is impossible to determine whether the forehead area is occluded, which interferes with the measurement to some degree and poses a challenge to current temperature-measurement work. Therefore, detecting the occlusion state of the pedestrian's forehead before measurement, and measuring the forehead temperature only when the forehead area is unoccluded, can improve the accuracy of temperature measurement.

As shown in FIG. 3, the image processing device acquires camera frame data, that is, an image to be processed, and performs face detection on it. If the face detection result is that no face exists in the image, the image processing device acquires a new image to be processed. If a face exists, the image processing device feeds the image into the trained neural network, which outputs the face frame of the image (as shown in D of FIG. 1), the face frame coordinates (as shown in FIG. 1), and the coordinates of 106 key points (as shown in FIG. 4). It should be understood that the face frame coordinates can be a pair of diagonal coordinates, namely the upper-left and lower-right corners or the lower-left and upper-right corners; for ease of understanding, this embodiment gives the four corner coordinates of the face frame (as shown in FIG. 1). The neural network that outputs the face frame coordinates and the 106 key point coordinates may be a single network, or a cascade of two networks that perform face detection and face key point detection respectively.

To detect the exposed skin of the forehead area, the color value of the brightest pixel in the area between the eyebrows is used as the reference color value for exposed forehead skin; the brightest pixel is the second pixel described above. The area between the eyebrows must therefore be obtained first. Face key point detection provides the key points of the inner areas of the left and right eyebrows. When there are two key points in each of these areas, they are the upper and lower points on the inner side of the right eyebrow and the upper and lower points on the inner side of the left eyebrow, and a rectangular area, the area between the eyebrows, is derived from these four points. Taking the 106 key point coordinates as an example, these four points correspond to key points 37, 38, 67 and 68. It should be understood that the number and numbering of the key points are not limiting: taking any two key points from the inner-right-eyebrow area and any two from the inner-left-eyebrow area falls within the scope claimed by this application.

Denote the coordinates of key points 37, 38, 67 and 68 as (X1, Y1), (X2, Y2), (X3, Y3) and (X4, Y4). Take the maximum of the Y coordinates of (X1, Y1) and (X2, Y2) as Y5; the minimum of the Y coordinates of (X3, Y3) and (X4, Y4) as Y6; the maximum of the X coordinates of (X1, Y1) and (X3, Y3) as X5; and the minimum of the X coordinates of (X2, Y2) and (X4, Y4) as X6. Combining X5 and X6 with Y5 and Y6 yields four coordinates, which determine a rectangular area with vertices (X6, Y6), (X5, Y5), (X5, Y6) and (X6, Y5); this rectangle is the area between the eyebrows to be cropped. Since key point detection fixes the coordinates of points 37, 38, 67 and 68, the positions of the four vertices are determined, and the rectangular area they define is cropped to obtain the area between the eyebrows.

After obtaining the area between the eyebrows, the brightest pixel in it must be found, so the area is converted to grayscale to obtain its grayscale image. In this application, grayscale processing makes every pixel in the pixel matrix satisfy R = G = B, that is, the values of the red, green and blue variables are made equal; this common value is called the gray value. Two methods are commonly used:

Method 1: grayscale R = grayscale G = grayscale B = (original R + original G + original B) / 3

For example, pixel m of picture A has R = 100, G = 120, B = 110 before grayscale processing. After grayscale processing by method 1, pixel m has R = G = B = (100 + 120 + 110) / 3 = 110.

Method 2: grayscale R = grayscale G = grayscale B = original R × 0.3 + original G × 0.59 + original B × 0.11

For the same example, pixel m of picture A has R = 100, G = 120, B = 110 before grayscale processing. After grayscale processing by method 2, pixel m has R = G = B = 100 × 0.3 + 120 × 0.59 + 110 × 0.11 = 112.9.
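The two grayscale methods can be written directly; the function names are illustrative:

```python
def gray_average(r, g, b):
    """Method 1: arithmetic mean of the three channels."""
    return (r + g + b) / 3

def gray_weighted(r, g, b):
    """Method 2: weighted sum 0.3*R + 0.59*G + 0.11*B."""
    return r * 0.3 + g * 0.59 + b * 0.11
```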

An OpenCV function can also be used to grayscale the area between the eyebrows; this application does not limit the grayscale method. To obtain the color value of the brightest pixel, that is, the pixel with the largest gray value after grayscale processing, the gray values of each row of the grayscale image are summed and the coordinate of the row with the largest sum is recorded; similarly, the gray values of each column are summed and the coordinate of the column with the largest sum is recorded. The intersection of the row and column with the largest sums gives the coordinate of the brightest pixel in the area between the eyebrows. Through the conversion relationship between RGB and HSV, the RGB value of the brightest pixel can be converted by formula into the corresponding HSV value, or the RGB channels of the area can be converted into HSV channels with OpenCV's cvtColor function to find the brightest pixel's HSV value. Because the HSV value of the brightest pixel has a determined relationship with the second and third thresholds, it directly determines the corresponding second and third thresholds.
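The row-sum/column-sum search described above can be sketched with NumPy. Note that this reproduces the text's heuristic, which assumes the brightest pixel lies at the intersection of the row and column with the largest gray-value sums:

```python
import numpy as np

def brightest_pixel_coord(gray):
    """Locate the brightest pixel as the intersection of the row with the
    largest gray-value sum and the column with the largest gray-value sum."""
    row = int(np.argmax(gray.sum(axis=1)))  # row index with the largest sum
    col = int(np.argmax(gray.sum(axis=0)))  # column index with the largest sum
    return row, col
```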

Obtaining the forehead area requires determining its size and position. The length of the forehead area is the width of the face: by computing the distance between key point 0 and key point 32, the face frame is narrowed so that the distance between its left and right frame lines equals the distance between key points 0 and 32; this distance is taken as the length of the forehead area. The height of the forehead area is roughly one third of the whole face frame; although the proportion of forehead height to face length varies from person to person, it is almost always within 30% to 40% of the face length. Therefore, the distance between the upper and lower frame lines of the face frame is reduced to 30% to 40% of the original distance, and this is taken as the height of the forehead area. The forehead area lies above the eyebrows, and the horizontal line determined by key points 35 and 40 marks the eyebrow position. The resized face frame is therefore moved so that its lower frame line lies on the horizontal line determined by these two key points; the rectangular area enclosed by the frame after this change of size and position is the forehead area.
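A sketch of deriving the forehead box from the face frame and key points as described. The key point indices follow the 106-point layout in the text; the 0.35 height ratio is one value from the stated 30%-40% range, and averaging the two eyebrow ordinates into a horizontal line is an added assumption:

```python
def forehead_box(kp0, kp32, kp35, kp40, face_top, face_bottom, ratio=0.35):
    """Forehead rectangle (left, top, right, bottom).

    kp0/kp32: leftmost/rightmost face key points (forehead length = face width)
    kp35/kp40: eyebrow key points defining the bottom edge's horizontal line
    face_top/face_bottom: ordinates of the original face frame's edges
    """
    left = min(kp0[0], kp32[0])
    right = max(kp0[0], kp32[0])
    height = (face_bottom - face_top) * ratio   # 30%-40% of face-frame height
    bottom = (kp35[1] + kp40[1]) / 2.0          # eyebrow line
    top = bottom - height
    return left, top, right, bottom
```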

The forehead area is cropped and then binarized according to the second and third thresholds to obtain a binarized image of the forehead area. Using a binarized image reduces the amount of data to process and speeds up the device's detection of the forehead area. The binarization rule is: if the HSV value of a forehead pixel is greater than or equal to the second threshold and less than or equal to the third threshold, its gray value is 255; if it is less than the second threshold or greater than the third threshold, its gray value is 0. First, the forehead-area image is converted from an RGB channel map to an HSV channel map. Then the number of forehead pixels with gray value 255, that is, the number of white pixels in the binarized image, is counted. When the ratio of the number of white pixels to the number of pixels in the forehead area reaches the threshold, the forehead area is considered unoccluded, and the thermal imaging temperature measurement is performed.
When the ratio does not reach the threshold, the forehead area is considered occluded; measuring temperature in this state would impair accuracy, so a prompt to expose the forehead is output, and the image processing device must acquire a new image and redo the forehead occlusion detection. For example, suppose the second threshold is (100, 50, 70), the third threshold is (120, 90, 100), pixel q of the forehead area has color value (110, 60, 70), and pixel p has color value (130, 90, 20). Then q lies within the range of the second and third thresholds and p does not, so during binarization pixel q receives gray value 255 and pixel p receives gray value 0. Suppose the threshold is 60%, the number of pixels in the forehead area is 100, and the number of white pixels is 50; the ratio is then 50%, which does not reach the threshold, so the forehead area is occluded and a prompt to expose the forehead is output.
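The binarization and ratio test can be sketched with NumPy (a pure-NumPy stand-in for an OpenCV range check; the function name is illustrative). The test below uses the example threshold values from the paragraph above:

```python
import numpy as np

def forehead_occlusion(hsv_region, lower, upper, threshold=0.60):
    """Binarize an (H, W, 3) HSV forehead region against [lower, upper] and
    compare the white-pixel ratio with the threshold."""
    lower = np.array(lower)
    upper = np.array(upper)
    in_range = np.all((hsv_region >= lower) & (hsv_region <= upper), axis=-1)
    binary = np.where(in_range, 255, 0)   # 255 = skin-colored (white) pixel
    ratio = in_range.mean()               # white pixels / total pixels
    return binary, ("unoccluded" if ratio >= threshold else "occluded")
```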

Those skilled in the art can understand that, in the specific implementations of the above method, the order in which the steps are written does not imply a strict execution order or impose any limitation on the implementation process; the specific execution order of each step should be determined by its function and possible internal logic.

The methods of the embodiments of the present application have been described in detail above; the apparatuses of the embodiments of the present application are provided below.

Please refer to FIG. 5, which is a schematic structural diagram of an image processing apparatus provided in an embodiment of the present application. The apparatus 1 includes an acquisition unit 11, a first processing unit 12 and a detection unit 13; optionally, the image processing apparatus 1 further includes a second processing unit 14, a determination unit 15, a third processing unit 16 and a fourth processing unit 17, wherein: the acquisition unit 11 is configured to acquire an image to be processed, a first threshold, a second threshold and a third threshold, the first threshold being different from the second threshold, the first threshold being different from the third threshold, and the second threshold being less than or equal to the third threshold; the first processing unit 12 is configured to determine a first number of first pixels in a region to be detected of the image to be processed, a first pixel being a pixel whose color value is greater than or equal to the second threshold and less than or equal to the third threshold; and the detection unit 13 is configured to obtain a skin occlusion detection result of the image to be processed according to the first threshold and a first ratio of the first number to the number of pixels in the region to be detected.

In combination with any embodiment of the present application, the region to be detected includes a face region, and the skin occlusion detection result includes a face occlusion detection result. The image processing apparatus further includes: a second processing unit 14, configured to, before the first number of first pixels in the region to be detected of the image to be processed is determined, perform face detection processing on the image to be processed to obtain a first face frame, and determine the face region from the image to be processed according to the first face frame.

In combination with any embodiment of the present application, the face region includes a forehead region, the face occlusion detection result includes a forehead occlusion detection result, and the first face frame includes an upper frame line and a lower frame line. The upper frame line and the lower frame line are both sides of the first face frame that are parallel to the horizontal axis of the pixel coordinate system of the image to be processed, and the ordinate of the upper frame line is smaller than the ordinate of the lower frame line. The second processing unit 14 is configured to: perform face key point detection on the image to be processed to obtain at least one face key point, the at least one face key point including a left eyebrow key point and a right eyebrow key point; while keeping the ordinate of the upper frame line unchanged, move the lower frame line in the negative direction of the vertical axis of the pixel coordinate system of the image to be processed so that the straight line on which the lower frame line lies coincides with a first straight line, obtaining a second face frame, the first straight line being the straight line passing through the left eyebrow key point and the right eyebrow key point; and obtain the forehead region according to the area contained in the second face frame.

In combination with any embodiment of the present application, the second processing unit 14 is configured to: while keeping the ordinate of the lower frame line of the second face frame unchanged, move the upper frame line of the second face frame along the vertical axis of the pixel coordinate system of the image to be processed so that the distance between the upper frame line of the second face frame and the lower frame line of the second face frame equals a preset distance, obtaining a third face frame; and obtain the forehead region according to the area contained in the third face frame.

In combination with any embodiment of the present application, the at least one face key point further includes a left mouth corner key point and a right mouth corner key point, and the first face frame further includes a left frame line and a right frame line. The left frame line and the right frame line are both sides of the first face frame that are parallel to the vertical axis of the pixel coordinate system of the image to be processed, and the abscissa of the left frame line is smaller than the abscissa of the right frame line. The second processing unit 14 is configured to: while keeping the abscissa of the left frame line of the third face frame unchanged, move the right frame line of the third face frame along the horizontal axis of the pixel coordinate system of the image to be processed so that the distance between the right frame line of the third face frame and the left frame line of the third face frame equals a reference distance, obtaining a fourth face frame, wherein the reference distance is the distance between the two intersection points of a second straight line and the face contour contained in the third face frame, the second straight line being a straight line located between the first straight line and a third straight line and parallel to the first straight line or the third straight line, and the third straight line being the straight line passing through the left mouth corner key point and the right mouth corner key point; and take the area contained in the fourth face frame as the forehead region.
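The sequence of frame adjustments in the embodiments above (bottom edge raised to the eyebrow line, height set to the preset distance, width set to the reference distance) can be sketched as follows. This is a simplified illustration only: it assumes axis-aligned boxes in a pixel coordinate system whose vertical axis grows downward, and the `Box` type and function name are invented for the sketch.

```python
from dataclasses import dataclass

@dataclass
class Box:
    left: float
    top: float
    right: float
    bottom: float  # top < bottom; the vertical axis grows downward

def forehead_box(face_box, brow_y, preset_height, reference_width):
    """Sketch of the three adjustments described above (names illustrative):
    1) raise the bottom edge to the eyebrow line      -> second face frame
    2) move the top edge up to the preset height      -> third face frame
    3) move the right edge in to the reference width  -> fourth face frame
    """
    second = Box(face_box.left, face_box.top, face_box.right, brow_y)
    third = Box(second.left, second.bottom - preset_height, second.right, second.bottom)
    fourth = Box(third.left, third.top, third.left + reference_width, third.bottom)
    return fourth

# Example: a 100x150 face box, eyebrow line at y=60, preset height 40, reference width 80.
forehead = forehead_box(Box(0, 0, 100, 150), brow_y=60, preset_height=40, reference_width=80)
```

The resulting box spans the preset height directly above the eyebrow line, trimmed to the reference width measured on the face contour.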

In combination with any embodiment of the present application, the image processing apparatus further includes: a determination unit 15, configured to, before the first number of first pixels in the region to be detected of the image to be processed is determined, determine a skin pixel area from the pixel area contained in the first face frame. The acquisition unit 11 is further configured to acquire the color value of a second pixel in the skin pixel area. The first processing unit 12 is further configured to take the difference between the color value of the second pixel and a first value as the second threshold, and to take the sum of the color value of the second pixel and a second value as the third threshold, neither the first value nor the second value exceeding the maximum of the color values of the image to be processed.
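The derivation of the second and third thresholds from the sampled skin color can be sketched as below. Clamping the results to the valid color range is an added assumption for safety; the text only requires that neither offset exceed the image's maximum color value.

```python
import numpy as np

def thresholds_from_skin(color, first_value, second_value, max_value=255):
    """Derive the second and third thresholds from the second pixel's color.
    second threshold = color - first value; third threshold = color + second value.
    The clip to [0, max_value] is an assumption, not stated in the text."""
    color = np.asarray(color, dtype=int)
    second = np.clip(color - first_value, 0, max_value)
    third = np.clip(color + second_value, 0, max_value)
    return tuple(int(v) for v in second), tuple(int(v) for v in third)

# Example: sampled skin color (110, 60, 70) with both offsets equal to 10.
second_threshold, third_threshold = thresholds_from_skin((110, 60, 70), 10, 10)
```

Sampling the thresholds around an actual skin pixel adapts the binarization range to the person's skin tone rather than relying on fixed constants.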

In combination with any embodiment of the present application, the image processing apparatus further includes: a third processing unit 16, configured to, before the skin pixel area is determined from the pixel area contained in the first face frame, perform mask-wearing detection processing on the image to be processed to obtain a detection result. The determination unit 15 is configured to: when it is detected that the face region in the image to be processed is not wearing a mask, take the pixel area of the face region other than the forehead region, the mouth region, the eyebrow region and the eye region as the skin pixel area; and when it is detected that the face region in the image to be processed is wearing a mask, take the pixel area between the first straight line and a fourth straight line as the skin pixel area. The fourth straight line is the straight line passing through a left-eye lower eyelid key point and a right-eye lower eyelid key point, both of which belong to the at least one face key point.

In combination with any embodiment of the present application, the acquisition unit 11 is configured to: when the at least one face key point includes at least one first key point belonging to the inner left eyebrow region and at least one second key point belonging to the inner right eyebrow region, determine a rectangular area according to the at least one first key point and the at least one second key point; perform grayscale processing on the rectangular area to obtain a grayscale image of the rectangular area; and take the color value at the intersection of a first row and a first column of the grayscale image of the rectangular area as the color value of the second pixel, the first row being the row of the grayscale image with the largest sum of gray values and the first column being the column of the grayscale image with the largest sum of gray values.
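Selecting the second pixel at the intersection of the brightest row and the brightest column of the grayscale rectangle can be sketched as below; the function name is illustrative.

```python
import numpy as np

def reference_skin_pixel(gray):
    """Return (row, col, value) at the intersection of the row with the
    largest sum of gray values and the column with the largest sum."""
    row = int(np.argmax(gray.sum(axis=1)))  # first row: largest row sum
    col = int(np.argmax(gray.sum(axis=0)))  # first column: largest column sum
    return row, col, int(gray[row, col])

# Tiny example: row 1 and column 1 have the largest sums, so (1, 1) is picked.
picked = reference_skin_pixel(np.array([[1, 2], [3, 10]]))
```

Picking the brightest row/column intersection favors a well-lit, unshadowed patch of skin between the eyebrows as the color reference.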

In combination with any embodiment of the present application, the detection unit 13 is configured to: when the first ratio does not exceed the first threshold, determine that the skin occlusion detection result is that the skin area corresponding to the region to be detected is in an occluded state; and when the first ratio exceeds the first threshold, determine that the skin occlusion detection result is that the skin area corresponding to the region to be detected is in an unoccluded state.

In combination with any embodiment of the present application, the skin area belongs to a person to be detected, and the acquisition unit 11 is further configured to acquire a temperature heat map of the image to be processed. The image processing apparatus further includes: a fourth processing unit 17, configured to, when the skin occlusion detection result is that the skin area is in an unoccluded state, read the temperature of the skin area from the temperature heat map as the body temperature of the person to be detected.
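The final temperature read-out can be sketched as below. Averaging over the unoccluded skin mask is an assumption; the text does not specify how per-pixel temperatures are aggregated into one body temperature.

```python
import numpy as np

def read_body_temperature(heatmap, skin_mask):
    """Read the skin-area temperature from the temperature heat map.
    heatmap: per-pixel temperatures; skin_mask: boolean mask of the
    unoccluded skin area. The mean is an assumed aggregation."""
    return float(heatmap[skin_mask].mean())

# Example: two skin pixels at 36.5 and 36.7 degrees, background at 20.
heatmap = np.array([[36.5, 20.0], [36.7, 20.0]])
skin_mask = np.array([[True, False], [True, False]])
body_temperature = read_body_temperature(heatmap, skin_mask)
```

Masking first ensures background pixels in the heat map never contaminate the reading.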

In some embodiments, the functions of, or the modules contained in, the apparatus provided in the embodiments of the present application can be used to execute the methods described in the method embodiments above; for their specific implementation, reference may be made to the descriptions of the method embodiments above, which are not repeated here for brevity.

FIG. 6 is a schematic diagram of a hardware structure of an image processing apparatus provided by an embodiment of the present application. The image processing apparatus 2 includes a processor 21, a memory 22, an input device 23 and an output device 24. The processor 21, the memory 22, the input device 23 and the output device 24 are coupled through a connector 25, which includes various interfaces, transmission lines, buses and the like; this is not limited in the embodiments of the present application. It should be understood that, in the various embodiments of the present application, coupling refers to interconnection in a specific way, including direct connection or indirect connection through other devices, for example via various interfaces, transmission lines or buses.

The processor 21 may be one or more graphics processing units (GPUs). When the processor 21 is one GPU, the GPU may be a single-core GPU or a multi-core GPU. Optionally, the processor 21 may be a processor group composed of multiple GPUs, the multiple processors being coupled to each other through one or more buses. Optionally, the processor may also be another type of processor; this is not limited in the embodiments of the present application.

The memory 22 may be used to store computer program instructions and various kinds of computer program code, including program code for executing the solutions of the present application. Optionally, the memory includes, but is not limited to, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM), or compact disc read-only memory (CD-ROM); the memory is used for the relevant instructions and data.

The input device 23 is used for inputting data and/or signals, and the output device 24 is used for outputting data and/or signals. The input device 23 and the output device 24 may be independent devices or an integrated device.

It can be understood that, in the embodiments of the present application, the memory 22 can be used not only to store the relevant instructions but also to store data; for example, the memory 22 may store data acquired through the input device 23, or data processed by the processor 21. The embodiments of the present application do not limit the specific data stored in the memory.

It can be understood that FIG. 6 only shows a simplified design of the image processing apparatus. In practical applications, the image processing apparatus may also contain other necessary components, including but not limited to any number of input/output devices, processors, memories and the like, and all image processing apparatuses that can implement the embodiments of the present application fall within the scope of protection of the present application.

Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in conjunction with the embodiments disclosed herein can be implemented by electronic hardware or by a combination of computer software and electronic hardware. Whether these functions are performed by hardware or software depends on the specific application and the design constraints of the technical solution. Skilled artisans may use different methods to implement the described functions for each specific application, but such implementation should not be regarded as exceeding the scope of the present application.

Those skilled in the art can clearly understand that, for convenience and brevity of description, for the specific working processes of the system, apparatus and units described above, reference may be made to the corresponding processes in the foregoing method embodiments, which are not repeated here. Those skilled in the art can also clearly understand that the descriptions of the embodiments of the present application each have their own emphasis; for convenience and brevity, the same or similar parts may not be repeated in different embodiments, and therefore, for parts not described or not described in detail in a certain embodiment, reference may be made to the descriptions of the other embodiments.

In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative; for instance, the division into the above units is only a division by logical function, and other divisions are possible in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual coupling, direct coupling or communication connection shown or discussed may be indirect coupling or communication connection through some interfaces, apparatuses or units, and may be electrical, mechanical or in other forms.

The units described above as separate components may or may not be physically separate, and components shown as units may or may not be physical units; that is, they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solutions of the embodiments.

In addition, the functional units in the embodiments of the present application may be integrated into one first processing unit, or each unit may exist physically on its own, or two or more units may be integrated into one unit.

In the above embodiments, the implementation may be wholly or partly by software, hardware, firmware or any combination thereof. When implemented by software, it may be wholly or partly implemented in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions according to the embodiments of the present application are wholly or partly produced. The computer may be a general-purpose computer, a special-purpose computer, a computer network or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted via the computer-readable storage medium. The computer instructions may be transmitted from one website, computer, server or data center to another website, computer, server or data center by wired means (for example coaxial cable, optical fiber or digital subscriber line (DSL)) or wireless means (for example infrared, radio or microwave). The computer-readable storage medium may be any available medium accessible to a computer, or a data storage device such as a server or data center that integrates one or more available media. The available media may be magnetic media (for example floppy disks, hard disks or magnetic tape), optical media (for example digital versatile discs (DVDs)), or semiconductor media (for example solid state disks (SSDs)).

Those of ordinary skill in the art can understand that all or part of the processes in the methods of the above embodiments may be completed by instructing the relevant hardware through a computer program. The program may be stored in a computer-readable storage medium and, when executed, may include the processes of the method embodiments above. The aforementioned storage media include various media capable of storing program code, such as read-only memory (ROM), random access memory (RAM), magnetic disks or optical discs.

201: Acquire an image to be processed, a first threshold, a second threshold and a third threshold, the first threshold being different from the second threshold, the first threshold being different from the third threshold, and the second threshold being less than or equal to the third threshold
202: Determine a first number of first pixels in a region to be detected of the image to be processed, a first pixel being a pixel whose color value is greater than or equal to the second threshold and less than or equal to the third threshold
203: Obtain a skin occlusion detection result of the image to be processed according to the first threshold and a first ratio of the first number to the number of pixels in the region to be detected
1: Image processing apparatus
11: Acquisition unit
12: First processing unit
13: Detection unit
14: Second processing unit
15: Determination unit
16: Third processing unit
17: Fourth processing unit
2: Image processing apparatus
21: Processor
22: Memory
23: Input device
24: Output device
25: Connector

FIG. 1 is a schematic diagram of a pixel coordinate system provided by an embodiment of the present application.
FIG. 2 is a schematic flowchart of an image processing method provided by an embodiment of the present application.
FIG. 3 is a schematic flowchart of another image processing method provided by an embodiment of the present application.
FIG. 4 is a schematic diagram of face key points provided by an embodiment of the present application.
FIG. 5 is a schematic structural diagram of an image processing apparatus provided by an embodiment of the present application.
FIG. 6 is a schematic diagram of a hardware structure of an image processing apparatus provided by an embodiment of the present application.

201: Acquire an image to be processed, a first threshold, a second threshold and a third threshold, the first threshold being different from the second threshold, the first threshold being different from the third threshold, and the second threshold being less than or equal to the third threshold

202: Determine a first number of first pixels in a region to be detected of the image to be processed, a first pixel being a pixel whose color value is greater than or equal to the second threshold and less than or equal to the third threshold

203: Obtain a skin occlusion detection result of the image to be processed according to the first threshold and a first ratio of the first number to the number of pixels in the region to be detected

Claims (12)

1. An image processing method, characterized in that the method comprises:
acquiring an image to be processed, a first threshold, a second threshold and a third threshold, the first threshold being different from the second threshold, the first threshold being different from the third threshold, and the second threshold being less than or equal to the third threshold;
determining a first number of first pixels in a region to be detected of the image to be processed, a first pixel being a pixel whose color value is greater than or equal to the second threshold and less than or equal to the third threshold; and
obtaining a skin occlusion detection result of the image to be processed according to the first threshold and a first ratio of the first number to the number of pixels in the region to be detected.

2. The method according to claim 1, characterized in that the determining of the first number of first pixels in the region to be detected of the image to be processed comprises:
performing face detection processing on the image to be processed to obtain a first face frame;
determining the region to be detected from the image to be processed according to the first face frame; and
determining the first number of the first pixels in the region to be detected.
3. The method according to claim 2, characterized in that the first face frame comprises an upper frame line and a lower frame line; the upper frame line and the lower frame line are both sides of the first face frame that are parallel to the horizontal axis of the pixel coordinate system of the image to be processed, and the ordinate of the upper frame line is smaller than the ordinate of the lower frame line; and the determining of the region to be detected from the image to be processed according to the first face frame comprises:
performing face key point detection on the image to be processed to obtain at least one face key point, the at least one face key point comprising a left eyebrow key point and a right eyebrow key point;
while keeping the ordinate of the upper frame line unchanged, moving the lower frame line in the negative direction of the vertical axis of the pixel coordinate system of the image to be processed so that the straight line on which the lower frame line lies coincides with a first straight line, obtaining a second face frame, the first straight line being the straight line passing through the left eyebrow key point and the right eyebrow key point; and
obtaining the region to be detected according to the area contained in the second face frame.
4. The method according to claim 3, characterized in that the obtaining of the region to be detected according to the area contained in the second face frame comprises:
while keeping the ordinate of the lower frame line of the second face frame unchanged, moving the upper frame line of the second face frame along the vertical axis of the pixel coordinate system of the image to be processed so that the distance between the upper frame line of the second face frame and the lower frame line of the second face frame equals a preset distance, obtaining a third face frame; and
obtaining the region to be detected according to the area contained in the third face frame.

5. The method according to claim 4, characterized in that the at least one face key point further comprises a left mouth corner key point and a right mouth corner key point; the first face frame further comprises a left frame line and a right frame line; the left frame line and the right frame line are both sides of the first face frame that are parallel to the vertical axis of the pixel coordinate system of the image to be processed, and the abscissa of the left frame line is smaller than the abscissa of the right frame line; and the obtaining of the region to be detected according to the area contained in the third face frame comprises:
while keeping the abscissa of the left frame line of the third face frame unchanged, moving the right frame line of the third face frame along the horizontal axis of the pixel coordinate system of the image to be processed so that the distance between the right frame line of the third face frame and the left frame line of the third face frame equals a reference distance, obtaining a fourth face frame, wherein the reference distance is the distance between the two intersection points of a second straight line and the face contour contained in the third face frame, the second straight line is a straight line located between the first straight line and a third straight line and parallel to the first straight line or the third straight line, and the third straight line is the straight line passing through the left mouth corner key point and the right mouth corner key point; and
taking the area contained in the fourth face frame as the region to be detected.

6. The method according to any one of claims 2 to 5, characterized in that the acquiring of the second threshold and the third threshold comprises:
determining a skin pixel area from the pixel area contained in the first face frame;
acquiring the color value of a second pixel in the skin pixel area; and
taking the difference between the color value of the second pixel and a first value as the second threshold, and taking the sum of the color value of the second pixel and a second value as the third threshold, wherein neither the first value nor the second value exceeds the maximum of the color values of the image to be processed.
7. The method according to claim 6, wherein determining the skin pixel region from the pixel region contained in the first face frame comprises: when it is detected that the face region in the image to be processed is not wearing a mask, taking the pixel region of the face region other than the forehead region, mouth region, eyebrow regions and eye regions as the skin pixel region; and when it is detected that the face region in the image to be processed is wearing a mask, taking the pixel region between the first straight line and a fourth straight line as the skin pixel region, wherein the fourth straight line passes through a left-eye lower-eyelid key point and a right-eye lower-eyelid key point, both of which belong to the at least one face key point.
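The two branches of claim 7 can be sketched as boolean-mask set operations. This is only an illustration: how the face mask, the excluded sub-region masks, and the band between the two straight lines are rasterized from the face key points is outside the claim and assumed here.

```python
import numpy as np

def skin_region_mask(face_mask, exclusion_masks, wearing_mask, between_lines_mask):
    """Select the skin pixel region per claim 7.

    face_mask, each entry of exclusion_masks, and between_lines_mask are
    boolean arrays of the image shape (illustrative representation).
    """
    if not wearing_mask:
        # No mask worn: face region minus forehead, mouth, eyebrow and eye regions.
        skin = face_mask.copy()
        for excl in exclusion_masks:
            skin &= ~excl
        return skin
    # Mask worn: only the band between the first straight line (through the
    # eyebrows) and the fourth straight line (through the lower eyelids)
    # is treated as visible skin.
    return face_mask & between_lines_mask
```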
8. The method according to claim 6, wherein obtaining the color value of the second pixel in the skin pixel region comprises: when the at least one face key point comprises at least one first key point in the inner left-eyebrow region and at least one second key point in the inner right-eyebrow region, determining a rectangular region according to the at least one first key point and the at least one second key point; performing grayscale conversion on the rectangular region to obtain a grayscale image of the rectangular region; and taking the color value at the intersection of a first row and a first column as the color value of the second pixel, wherein the first row is the row with the largest sum of grayscale values in the grayscale image, and the first column is the column with the largest sum of grayscale values in the grayscale image.
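The row/column selection in claim 8 is a simple argmax over axis sums of the grayscaled rectangle. A sketch, assuming the patch is already grayscaled as a NumPy array; the function name is illustrative:

```python
import numpy as np

def reference_pixel_position(gray_patch):
    """Locate the reference pixel of claim 8: the intersection of the row
    with the largest grayscale sum and the column with the largest
    grayscale sum in the eyebrow-bridge rectangle."""
    row = int(np.argmax(gray_patch.sum(axis=1)))  # row with the largest sum
    col = int(np.argmax(gray_patch.sum(axis=0)))  # column with the largest sum
    return row, col
```

The color value of the "second pixel" is then read at this position, and the thresholds of claim 6 are built around it.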
9. The method according to any one of claims 1 to 5, wherein obtaining the skin occlusion detection result of the image to be processed according to the first threshold and the first ratio of the first number to the number of pixels in the region to be detected comprises: when the first ratio does not exceed the first threshold, determining that the skin occlusion detection result indicates that the skin region corresponding to the region to be detected is occluded; and when the first ratio exceeds the first threshold, determining that the skin occlusion detection result indicates that the skin region corresponding to the region to be detected is unoccluded; and when the skin region belongs to a person to be detected, the method further comprises: obtaining a temperature heat map of the image to be processed; and when the skin occlusion detection result indicates that the skin region is unoccluded, reading the temperature of the skin region from the temperature heat map as the body temperature of the person to be detected.
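The decision rule of claim 9 can be condensed into a few lines. A sketch under stated assumptions: averaging the heat map over the region is one plausible way to "read the temperature of the skin region", which the claim does not pin down, and the argument names are illustrative.

```python
def occlusion_and_temperature(first_count, region_pixel_count, first_threshold,
                              heat_map=None, region_coords=None):
    """Claim 9: compare the ratio of skin-colored pixels to the first
    threshold; if the skin is unoccluded and a temperature heat map is
    available, read the skin region's temperature as the body temperature."""
    ratio = first_count / region_pixel_count
    occluded = ratio <= first_threshold  # "does not exceed" => occluded
    result = {"occluded": occluded}
    if not occluded and heat_map is not None and region_coords is not None:
        ys, xs = region_coords
        # Illustrative choice: average the heat-map values over the region.
        result["body_temperature"] = float(heat_map[ys, xs].mean())
    return result
```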
10. An image processing apparatus, comprising: an acquisition unit configured to acquire an image to be processed, a first threshold, a second threshold and a third threshold, wherein the first threshold differs from the second threshold, the first threshold differs from the third threshold, and the second threshold is less than or equal to the third threshold; a first processing unit configured to determine a first number of first pixels in a region to be detected of the image to be processed, the first pixels being pixels whose color values are greater than or equal to the second threshold and less than or equal to the third threshold; and a detection unit configured to obtain a skin occlusion detection result of the image to be processed according to the first threshold and a first ratio of the first number to the number of pixels in the region to be detected.

11. An electronic device, comprising a processor and a memory, wherein the memory is configured to store computer program code comprising computer instructions, and when the processor executes the computer instructions, the electronic device performs the method according to any one of claims 1 to 9.

12. A computer-readable storage medium storing a computer program, wherein the computer program comprises program instructions which, when executed by a processor, cause the processor to perform the method according to any one of claims 1 to 9.
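The three units of the apparatus in claim 10 map naturally onto a small class. This is a structural sketch only, with the units folded into one class and pixels represented as a flat list of color values; all names are illustrative, not the patent's implementation.

```python
class ImageProcessor:
    """Sketch of claim 10: acquisition unit (constructor), first processing
    unit (count_first_pixels) and detection unit (detect)."""

    def __init__(self, first_threshold, second_threshold, third_threshold):
        # Claim 10 requires second_threshold <= third_threshold.
        assert second_threshold <= third_threshold
        self.first_threshold = first_threshold
        self.second_threshold = second_threshold
        self.third_threshold = third_threshold

    def count_first_pixels(self, pixels):
        # First pixels: color value within [second_threshold, third_threshold].
        return sum(1 for p in pixels
                   if self.second_threshold <= p <= self.third_threshold)

    def detect(self, pixels):
        ratio = self.count_first_pixels(pixels) / len(pixels)
        return "unoccluded" if ratio > self.first_threshold else "occluded"
```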
TW111114745A 2021-05-31 2022-04-19 Methods, apparatuses, processors, electronic equipment and storage media for image processing TWI787113B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110600103.1 2021-05-31
CN202110600103.1A CN113222973B (en) 2021-05-31 2021-05-31 Image processing method and device, processor, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
TWI787113B true TWI787113B (en) 2022-12-11
TW202248954A TW202248954A (en) 2022-12-16

Family

ID=77082028

Family Applications (1)

Application Number Title Priority Date Filing Date
TW111114745A TWI787113B (en) 2021-05-31 2022-04-19 Methods, apparatuses, processors, electronic equipment and storage media for image processing

Country Status (3)

Country Link
CN (1) CN113222973B (en)
TW (1) TWI787113B (en)
WO (1) WO2022252737A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113222973B (en) * 2021-05-31 2024-03-08 深圳市商汤科技有限公司 Image processing method and device, processor, electronic equipment and storage medium
CN113592884B (en) * 2021-08-19 2022-08-09 遨博(北京)智能科技有限公司 Human body mask generation method
CN117495855B (en) * 2023-12-29 2024-03-29 广州中科医疗美容仪器有限公司 Skin defect evaluation method and system based on image processing

Citations (6)

Publication number Priority date Publication date Assignee Title
TWI639137B (en) * 2017-04-27 2018-10-21 立特克科技股份有限公司 Skin detection device and the method therefor
WO2019056986A1 (en) * 2017-09-19 2019-03-28 广州市百果园信息技术有限公司 Skin color detection method and device and storage medium
CN110443747A (en) * 2019-07-30 2019-11-12 Oppo广东移动通信有限公司 Image processing method, device, terminal and computer readable storage medium
CN111524080A (en) * 2020-04-22 2020-08-11 杭州夭灵夭智能科技有限公司 Face skin feature identification method, terminal and computer equipment
CN112836625A (en) * 2021-01-29 2021-05-25 汉王科技股份有限公司 Face living body detection method and device and electronic equipment
CN112861661A (en) * 2021-01-22 2021-05-28 深圳市慧鲤科技有限公司 Image processing method and device, electronic equipment and computer readable storage medium

Family Cites Families (10)

Publication number Priority date Publication date Assignee Title
CN105426870B (en) * 2015-12-15 2019-09-24 北京文安智能技术股份有限公司 A kind of face key independent positioning method and device
CN105740758A (en) * 2015-12-31 2016-07-06 上海极链网络科技有限公司 Internet video face recognition method based on deep learning
CN107145833A (en) * 2017-04-11 2017-09-08 腾讯科技(上海)有限公司 The determination method and apparatus of human face region
CN108319953B (en) * 2017-07-27 2019-07-16 腾讯科技(深圳)有限公司 Occlusion detection method and device, electronic equipment and the storage medium of target object
CN108427918B (en) * 2018-02-12 2021-11-30 杭州电子科技大学 Face privacy protection method based on image processing technology
US10915734B2 (en) * 2018-09-28 2021-02-09 Apple Inc. Network performance by including attributes
CN110532871B (en) * 2019-07-24 2022-05-10 华为技术有限公司 Image processing method and device
CN111428581B (en) * 2020-03-05 2023-11-21 平安科技(深圳)有限公司 Face shielding detection method and system
CN112633144A (en) * 2020-12-21 2021-04-09 平安科技(深圳)有限公司 Face occlusion detection method, system, device and storage medium
CN113222973B (en) * 2021-05-31 2024-03-08 深圳市商汤科技有限公司 Image processing method and device, processor, electronic equipment and storage medium


Also Published As

Publication number Publication date
TW202248954A (en) 2022-12-16
CN113222973A (en) 2021-08-06
WO2022252737A1 (en) 2022-12-08
CN113222973B (en) 2024-03-08
