WO2019105457A1 - Image processing method, computer device, and computer-readable storage medium - Google Patents
- Publication number: WO2019105457A1 (application PCT/CN2018/118555)
- Authority: WIPO (PCT)
- Prior art keywords: image, face, target, images, target face
- Prior art date
Classifications
- G06V40/16 — Human faces, e.g. facial parts, sketches or expressions (G—Physics; G06—Computing; G06V—Image or video recognition or understanding; G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data; G06V40/10—Human or animal bodies)
- G06F18/23 — Clustering techniques (G—Physics; G06—Computing; G06F—Electric digital data processing; G06F18/00—Pattern recognition; G06F18/20—Analysing)
- G06T5/70 — Denoising; Smoothing (G—Physics; G06—Computing; G06T—Image data processing or generation, in general; G06T5/00—Image enhancement or restoration)
Definitions
- the present application relates to the field of computer technology, and in particular, to an image processing method, a computer device, and a computer readable storage medium.
- An intelligent computer device can classify a large number of captured images along different dimensions, for example by time, by location, or by person, and can generate a different atlas for each clustering dimension.
- An image processing method comprising:
- displaying an image of the face atlas on the display interface of the computer device, and highlighting a target face in the image.
- A computer device comprising a memory and a processor, the memory storing computer-readable instructions that, when executed by the processor, cause the processor to:
- display an image of the face atlas on the display interface of the computer device, and highlight a target face in the image.
- One or more computer-readable storage media containing computer-executable instructions that, when executed by one or more processors, cause the one or more processors to:
- display an image of the face atlas on the display interface of the computer device, and highlight a target face in the image.
- When displaying an image in a face atlas, the target face in the image may be highlighted, that is, the owner face of the atlas is emphasized. This avoids the problem that, when an image contains many faces, the owner face cannot be picked out.
- The way of viewing the images in the face atlas thus better suits the user's needs.
- FIG. 1 is a flow chart of an image processing method in one embodiment.
- FIG. 2A is a diagram of an interface highlighting the owner face in an image on a computer device display interface in one embodiment.
- FIG. 2B is a diagram of an interface highlighting the owner face in an image on a computer device display interface in another embodiment.
- FIG. 2C is a diagram of an interface highlighting the owner face in an image on a computer device display interface in another embodiment.
- FIG. 3 is a flow chart of an image processing method in another embodiment.
- FIG. 4 is a flow chart of an image processing method in another embodiment.
- FIG. 5 is a flow chart of an image processing method in another embodiment.
- FIG. 6 is a block diagram of the structure of an image processing apparatus in one embodiment.
- FIG. 7 is a block diagram of the structure of an image processing apparatus in another embodiment.
- FIG. 8 is a block diagram of part of the structure of a mobile phone related to a computer device according to an embodiment of the present application.
- an image processing method includes:
- Step 102: Acquire a face atlas, where the face atlas is generated by the computer device clustering images that contain the same face.
- The computer device obtains a face atlas to be processed; the face atlas is generated by clustering images that contain the same face.
- Here, "the same face" means faces corresponding to the same face identifier.
- The step of the computer device generating the face atlas includes: performing face recognition on the stored images, and detecting and acquiring the face images among them.
- A face image is an image in which a face is present.
- the computer device can recognize the face in the face image, and obtain the face identifier corresponding to the face.
- Face images whose faces correspond to the same face identifier are clustered to obtain the above face atlas.
- For example, the computer device performs face recognition on the stored images and obtains face images 1, 2, 3, and 4; it detects that the face corresponding to face identifier A appears in images 1, 2, and 3. Images 1, 2, and 3 are then clustered to obtain the face atlas corresponding to face identifier A.
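The clustering step in the example above can be sketched as follows. This is a minimal illustration, not the disclosed implementation; the detection results and helper names are assumptions.

```python
from collections import defaultdict

def build_face_atlases(detections):
    """Group image IDs into face atlases keyed by face identifier.

    `detections` maps an image ID to the set of face identifiers
    recognized in that image (names are illustrative).
    """
    atlases = defaultdict(list)
    for image_id, face_ids in detections.items():
        for face_id in face_ids:
            atlases[face_id].append(image_id)
    return {fid: sorted(imgs) for fid, imgs in atlases.items()}

# Images 1-3 contain face A, so atlas A clusters those three images.
detections = {
    "image1": {"A", "B"},
    "image2": {"A"},
    "image3": {"A", "C"},
    "image4": {"D"},
}
atlases = build_face_atlases(detections)
```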
- Step 104: Identify the faces included in the images of the face atlas, and obtain a target face from among those faces.
- The target face is a face that appears in every image of the face atlas.
- The faces included in each image of the atlas may be acquired separately, that is, the face identifiers contained in each image are obtained.
- The computer device can look up the face identifiers contained in each image and take the face whose identifier appears in every image as the target face; in other words, the computer device obtains the owner face of the face atlas.
- For example, the face atlas includes image 1, image 2, image 3, and image 4. Image 1 contains the faces corresponding to face identifiers A and B, and image 2 contains the face corresponding to face identifier A.
- Image 3 contains the faces corresponding to face identifiers A and C.
- Image 4 contains the faces corresponding to face identifiers A and D.
- Since all four images contain the face corresponding to face identifier A, that face is taken as the target face, that is, the owner face of the face atlas.
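Selecting the owner face, a face present in every image of the atlas, amounts to a set intersection over the per-image face identifiers. A minimal sketch (names are illustrative):

```python
def find_target_faces(atlas_faces):
    """Return the face identifiers present in every image of the atlas.

    `atlas_faces` is a list of per-image face-identifier sets.
    """
    if not atlas_faces:
        return set()
    # The owner face(s) are exactly the identifiers common to all images.
    return set.intersection(*atlas_faces)

# Mirrors the example: only face A appears in all four images.
faces_per_image = [{"A", "B"}, {"A"}, {"A", "C"}, {"A", "D"}]
target = find_target_faces(faces_per_image)
```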
- Step 106: Display an image of the face atlas on the computer device display interface, and highlight the target face in the image.
- The computer device may select an image from the face atlas according to a preset rule as the cover of the atlas.
- the above preset rules may include at least one of the following rules:
- An image in which the target face lies in a preset area of the image is selected as the cover image; for example, an image in which the target face is at the center of the image is selected as the cover image.
- the computer device can display the cover image of the above-mentioned face atlas in the album interface, that is, the cover image represents the corresponding face atlas.
- Upon receiving a view instruction, the computer device jumps to an interface that displays all face images included in the face atlas.
- the above-mentioned view command may be a touch command or a voice command, wherein the touch command acquired by the computer device is a touch command applied to the cover image of the face atlas.
- When the computer device displays the images in the face atlas, the target face in each image can be highlighted.
- the computer device highlighting the target face in the image may include at least one of the following methods:
- The computer device may blur the faces in the image other than the target face; the algorithm for blurring the image may include Gaussian blur, mean blur, median blur, and binary blur.
- the method for the computer device to blur the face other than the target face in the image may include: the computer device acquires a pixel region in the image that needs to be blurred, and performs blurring on the pixels in the pixel region.
- The computer device can set the blurring level, that is, the degree to which the image is blurred; the higher the level, the more blurred the result.
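One way to blur only the pixel region of a non-target face is a simple box blur, a stand-in for the Gaussian/mean/median blurs mentioned above; the radius plays the role of the blurring level. A sketch on a 2D grayscale array (the array representation is an assumption):

```python
def blur_region(pixels, top, left, height, width, radius=1):
    """Box-blur a rectangular region of a 2D grayscale image.

    Each pixel inside the region is replaced by the mean of its
    (2*radius+1)^2 neighbourhood read from the original image;
    a larger radius corresponds to a higher blur level.
    """
    rows, cols = len(pixels), len(pixels[0])
    out = [row[:] for row in pixels]
    for y in range(top, min(top + height, rows)):
        for x in range(left, min(left + width, cols)):
            acc, n = 0, 0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < rows and 0 <= xx < cols:
                        acc += pixels[yy][xx]
                        n += 1
            out[y][x] = acc // n
    return out

# Blur a 3x3 region of a 5x5 image containing one bright pixel.
image = [[0] * 5 for _ in range(5)]
image[2][2] = 90
blurred = blur_region(image, top=1, left=1, height=3, width=3, radius=1)
```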
- the computer device can also mark the target face in the image to highlight the target face in the image.
- the method for the computer device to mark the target face in the image may include at least one of the following methods:
- the face contour of the target face in the image is recognized, and the face contour of the target face is displayed in a preset color to highlight the target face in the image. For example, after the computer device recognizes the face contour of the target face in the image, the face contour of the target face is displayed in red.
- the target face image in the image may be extracted. For example, after the computer device recognizes the outline of the target face image, the target face image is extracted from the image. After the computer device obtains the target face image, when displaying the image in the face map set, only the target face image corresponding to the image may be displayed.
- an image is displayed on the display interface of the computer device.
- the image includes a first human face 210, a second human face 220, and a third human face 230, wherein the third human face 230 is a target human face.
- The computer device may add an arrow identifier 202 to the image, with the arrow identifier 202 pointing to the owner face 230; that is, face 230 is the owner face in the image.
- The computer device can also add a rectangular frame to the image, with the rectangular frame identifying the target face. As shown in FIG.
- The computer device may add a rectangular frame 204 to the image; the target human face 230 is displayed within the rectangular frame 204, so that the frame identifies the target face 230.
- the computer device can also blur other faces in the image except the target face to highlight the target face.
- the computer device may perform a blurring process on the pixel area corresponding to the first human face 210 and the pixel area corresponding to the second human face 220 in the image, and display the blurred image on the computer device.
- When displaying an image in a face atlas, the target face in the image may be highlighted, that is, the owner face is emphasized. This avoids the problem that an image with many faces fails to show the owner face.
- The way images in the face atlas are viewed thus better suits the user.
- the viewing instructions received by the computer device for the face map may include a first view instruction or a second view instruction.
- the first view command is an instruction to normally display an image
- the second view command is an instruction to highlight a target face in the image.
- The computer device can display the images in the face atlas in different forms according to the viewing instruction received. That is, after receiving a viewing instruction for the face atlas, if the first viewing instruction is detected, the image is displayed normally; if the second viewing instruction is detected, the target face in the image is highlighted.
- the computer device can provide an interactive interface between the atlas display interface and the image display interface, and determine the manner in which the image is displayed based on the viewing instructions received from the interactive interface. For example, the computer device provides a "highlighted" button on the image display interface. When the button is turned on, the computer device highlights the target face in the image; when the button is turned off, the computer device displays the image normally.
- the computer device may also determine that the viewing instruction is a first viewing instruction or a second viewing instruction according to a manner of triggering the viewing instruction.
- Images may be displayed in a plurality of atlases on the display interface of the computer device, for example an "album" atlas, a "people" atlas, and a "places" atlas.
- the "album” atlas contains all the images that have been stored by the computer device
- The "people" atlas is the clustered set of face images from the images stored in the computer device, and different atlases may include the same image.
- The preset atlas may be the face atlas in the computer device. That is, when the user taps the cover image of the "album" atlas, the computer device displays the images normally even if the album contains images from the face atlas; when the user taps the cover image of the face atlas, the images in the face atlas are displayed on the display interface with the target face highlighted.
- the images contained in the set of face maps may include a variety of forms, such as a selfie, a single-person image, a multi-person group photo, and the like.
- Highlighting the target face addresses the difficulty of picking out the owner face when the image contains many faces or the face is small. Highlighting the target face in the image includes:
- The computer device can obtain the ratio of the target face's area to the area of the image.
- If this ratio is lower than a preset first threshold, the target face in the image is determined to be small, and the target face is highlighted.
- For example, if the ratio of the target face area to the image area is less than 40%, the computer device highlights the target face in the image.
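The 40% example can be expressed as a simple area-ratio check; the threshold default and the bounding-box representation are assumptions for illustration:

```python
def should_highlight_by_area(face_box, image_size, first_threshold=0.4):
    """Decide whether to highlight the target face based on its area.

    `face_box` is the (width, height) of the target face's bounding box
    and `image_size` is the (width, height) of the image; the 40%
    threshold mirrors the example in the text and is configurable.
    """
    face_area = face_box[0] * face_box[1]
    image_area = image_size[0] * image_size[1]
    # Highlight when the face occupies less than the threshold fraction.
    return face_area / image_area < first_threshold
```

A 100x100 face in a 400x400 image covers only 6.25% of the area, so it would be highlighted; a 300x300 face would not.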
- The computer device can recognize the faces included in the image and obtain the number of faces it contains.
- The computer device can screen out the passerby faces in the image, and after screening them out, detect whether the number of remaining faces in the image is higher than a preset second threshold.
- If so, the target face in the image is highlighted.
- A passerby face typically occupies a small area of the image, so the computer device can detect whether a face is a passerby face by the ratio of the face area to the image area.
- The computer device detects whether the ratio of the face area to the image area is lower than a preset ratio, such as 5%, and classifies the face as a passerby face when the ratio is below that value.
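The passerby screening and the subsequent face-count check can be sketched together; the 5% passerby ratio follows the example above, while the second-threshold value is an assumed placeholder:

```python
def count_non_passerby_faces(face_areas, image_area, passerby_ratio=0.05):
    """Count faces after screening out passerby faces.

    A face whose area is below `passerby_ratio` of the image area
    (5% in the text's example) is treated as a passerby face.
    """
    return sum(1 for a in face_areas if a / image_area >= passerby_ratio)

def should_highlight_by_count(face_areas, image_area, second_threshold=3):
    """Highlight the target face when the remaining face count is high.

    `second_threshold` is an assumed example value, not from the patent.
    """
    return count_non_passerby_faces(face_areas, image_area) > second_threshold
```

For a 10000-pixel image with faces of areas [900, 800, 700, 600, 200, 100], the last two fall below 5% and are screened out, leaving four faces, which exceeds the example threshold of three.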
- the computer device can receive an operation instruction of the user on a single image, and process the corresponding image according to the received operation instruction.
- For example, when the computer device displays an image on the display interface and the received operation instruction is a highlight instruction, the computer device highlights the target face in the image.
- Highlighting the target human face in the image includes: when it is detected that the similarity of a plurality of images in the face atlas is higher than a third threshold, highlighting the target face in one of those images.
- the target face in the image is highlighted to show the master face in the image.
- For continuously captured (burst) images, the similarity between the images is high; the computer device can highlight the target face in one of the burst images and display the other images normally.
- the method for the computer device to detect the similarity of the plurality of images may include: histogram matching, artificial intelligence algorithm detection, and the like.
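Histogram matching, one of the detection methods mentioned, can be illustrated with histogram intersection on grayscale images (a simplified sketch; real implementations typically use color histograms and more bins):

```python
def histogram_similarity(img_a, img_b, bins=16):
    """Similarity of two grayscale images via histogram intersection.

    Returns a value in [0, 1]; 1 means identical histograms. This is
    one simple instance of the histogram-matching approach.
    """
    def hist(img):
        h = [0] * bins
        flat = [p for row in img for p in row]
        for p in flat:
            # Map the 0-255 pixel value to a bin index.
            h[min(p * bins // 256, bins - 1)] += 1
        total = len(flat)
        return [c / total for c in h]

    ha, hb = hist(img_a), hist(img_b)
    # Histogram intersection: sum of the overlap in each bin.
    return sum(min(a, b) for a, b in zip(ha, hb))
```

An image compared against itself scores 1.0; images with no overlapping intensity bins score 0.0.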
- If the computer device detects that the similarity of multiple images is high, then when those images are displayed, the target face is highlighted in only one of them.
- For example, if the pairwise similarity of the images is higher than 90%, then when the images are displayed on the display interface, the target face is highlighted in only one image.
- Highlighting the target face in only one of the multiple images emphasizes the owner face while avoiding repetitive display; it reduces the emphasis on other faces in the image, making the display of images more intelligent.
- After step 106, the method further includes:
- Step 108: Acquire an arrangement order of the images in the face atlas.
- The arrangement order may follow the shooting time or an order set by the user.
- Step 110: Generate an album of the images in the face atlas according to the arrangement order.
- the face map is a collection of face images containing the same face, which can record the appearance of the same person in each period.
- the computer device can generate an album in order of the images in the face map, and further display the face image in the face map.
- The computer device generates an album from the images in the face atlas as follows: it obtains an arrangement order of the images, where the order may be the sequence of capture times, the sequence of storage, or an order set manually by the user.
- The computer device plays the images in the face atlas sequentially in the above arrangement order, and can store the album as a video file or an animation file.
- When generating an album from the images in the face atlas, if the computer device detects multiple images whose similarity is higher than a specified value, it extracts one of those images to generate the album.
- That is, when the images for the album include a burst of continuously captured frames, only one frame of the burst is used to generate the album.
- Generating an album from the images in the face atlas lets the user quickly browse multiple images in album form, which suits the user's needs.
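The album-generation steps, ordering by capture time and keeping one frame of each near-duplicate burst, can be sketched as follows; the similarity function and the 0.9 "specified value" are assumptions:

```python
def build_album(images, similarity, threshold=0.9):
    """Order atlas images by capture time and drop near-duplicate bursts.

    `images` is a list of (image_id, capture_time) pairs; `similarity`
    is a function on two image IDs. Of each run of consecutive frames
    whose similarity exceeds `threshold`, only the first is kept.
    """
    ordered = sorted(images, key=lambda item: item[1])
    album = []
    for image_id, _ in ordered:
        if album and similarity(album[-1], image_id) > threshold:
            continue  # near-duplicate of the previously kept frame
        album.append(image_id)
    return album

def fake_similarity(a, b):
    # Hypothetical scores: "b" and "c" form a near-duplicate burst pair.
    return 0.95 if {a, b} == {"b", "c"} else 0.1

album = build_album([("c", 3), ("a", 1), ("b", 2)], fake_similarity)
```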
- After step 106, the method further includes:
- Step 112: When the number of images in the face atlas is higher than a fourth threshold, cluster the images in the atlas by the time dimension or the location dimension to obtain sub-atlases.
- Step 114: Display the sub-atlases on the computer device display interface.
- If the computer device detects that the number of images in the face atlas is higher than the preset fourth threshold, the images may be clustered again by the time dimension or the location dimension; that is, when the atlas contains many images, the computer device can perform a secondary classification.
- Clustering the images in the face atlas by the time dimension includes: according to a set time unit, clustering images whose capture times fall in the same time range to generate sub-atlases. For example, with "day" as the time unit, images captured on the same day are clustered into a sub-atlas.
- The computer device clusters the images in the face atlas by the location dimension as follows: according to a set location unit, images in the same location range are clustered to generate sub-atlases. For example, with "city" as the location unit, images captured in the same city are clustered into a sub-atlas.
- The images in the face atlas may thus be given a second level of classification by time or location, that is, displayed as multiple sub-atlases. This avoids the problem of an atlas containing too many images to browse, making browsing more convenient for the user.
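The secondary classification by the time dimension, with "day" as the time unit, can be sketched as follows (the location dimension works the same way with a city field in place of the date):

```python
from collections import defaultdict
from datetime import datetime

def cluster_by_day(images):
    """Cluster atlas images into sub-atlases keyed by capture day.

    `images` maps image IDs to capture timestamps; the keys of the
    result identify the sub-atlases.
    """
    sub_atlases = defaultdict(list)
    for image_id, taken_at in images.items():
        sub_atlases[taken_at.date().isoformat()].append(image_id)
    return dict(sub_atlases)

# Two images from one day form one sub-atlas; the third gets its own.
sub_atlases = cluster_by_day({
    "p1": datetime(2018, 11, 30, 9, 0),
    "p2": datetime(2018, 11, 30, 18, 30),
    "p3": datetime(2018, 12, 1, 8, 15),
})
```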
- After step 106, the method further includes:
- Step 116: When the number of images in the face atlas is higher than a fifth threshold, acquire a plurality of images in the atlas whose similarity is higher than a sixth threshold.
- Step 118: Acquire image information for the multiple images, where the image information includes image sharpness and/or target face state.
- the computer device selects the target image from the plurality of images according to the image information, and displays the target image in the image display interface corresponding to the face map.
- When the computer device detects that the number of images in the face atlas is higher than the fifth threshold, it may check whether the atlas contains multiple images whose similarity is higher than the sixth threshold.
- the fifth threshold and the sixth threshold may be values set by a computer device or values set by a user, respectively.
- the image information of the plurality of images with higher similarity may be respectively acquired.
- The above image information may include image sharpness, target face state, or both.
- the above image sharpness is the sharpness value of the image, and the higher the sharpness value of the image, the clearer the image is.
- the computer device can determine the sharpness value of the image through various image sharpness evaluation functions, including: a grayscale variation function, a gradient function, an image grayscale entropy function, and the like.
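A gradient function, one member of the evaluation-function family listed above, can be sketched as the mean squared difference between neighboring pixels; this is an illustrative metric, not the disclosed one:

```python
def gradient_sharpness(pixels):
    """Score image sharpness with a simple gradient-energy sum.

    The score is the mean of squared horizontal and vertical pixel
    differences; a sharper image has stronger local gradients and
    therefore a higher score.
    """
    rows, cols = len(pixels), len(pixels[0])
    total, n = 0, 0
    for y in range(rows - 1):
        for x in range(cols - 1):
            gx = pixels[y][x + 1] - pixels[y][x]  # horizontal difference
            gy = pixels[y + 1][x] - pixels[y][x]  # vertical difference
            total += gx * gx + gy * gy
            n += 1
    return total / n if n else 0.0
```

A high-contrast checkerboard patch scores far above a flat patch, which scores exactly zero.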
- the target face state includes the rotation angle of the target face and the expression of the target face.
- The computer device can detect the rotation angle of the target face in the acquired image, where the rotation angle is the angle by which the target face is rotated relative to a standard face in three-dimensional space.
- the expression of the target face may include whether the target face is in a closed eye state, whether the target face is in a smiling state, or the like.
- The computer device can determine whether the target face is in a closed-eye state by detecting whether the eye whites are visible in the target face: when the eye whites can be detected, the eyes are open; when they cannot be detected, the target face is in a closed-eye state.
- the computer device can determine whether the target face is in a smiling state by detecting whether there is a tooth in the target face. When there is a tooth in the target face, it is determined that the target face is in a smiling state; when there is no tooth in the target face, it is determined that the target face is not in a smiling state.
- After acquiring the image information of the multiple images, the computer device selects the target image by comparing their image information.
- The standard by which the computer device selects the target image from the multiple images may be set by the computer device or by the user. For example, the image with the highest sharpness among the multiple images is selected as the target image; or the image in which the target face has open eyes and a smile is selected as the target image.
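Selecting the target image by comparing image information can be sketched as a lexicographic preference: open eyes, then a smile, then the highest sharpness. The ordering of criteria and the field names are assumed examples:

```python
def select_target_image(candidates):
    """Pick the target image from a group of similar images.

    `candidates` maps image IDs to info dicts with a 'sharpness' float
    and 'eyes_open' / 'smiling' flags, matching the criteria in the
    text. Open eyes and a smile are preferred, then higher sharpness.
    """
    def score(image_id):
        info = candidates[image_id]
        # Tuple comparison: eyes first, smile second, sharpness last.
        return (info.get("eyes_open", False),
                info.get("smiling", False),
                info.get("sharpness", 0.0))
    return max(candidates, key=score)

# "y" wins despite lower sharpness because its eyes are open and smiling.
candidates = {
    "x": {"sharpness": 0.9, "eyes_open": False, "smiling": True},
    "y": {"sharpness": 0.5, "eyes_open": True, "smiling": True},
    "z": {"sharpness": 0.8, "eyes_open": True, "smiling": False},
}
chosen = select_target_image(candidates)
```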
- When the computer device displays images in the face atlas, only the target image among the multiple similar images may be displayed.
- When the number of images in the face atlas is large and multiple similar images are detected, only one of the similar images may be displayed. This reduces the number of images shown from the atlas and avoids the inconvenience of browsing too many images.
- the operations in the flowchart of the method of the embodiment of the present application are sequentially displayed in accordance with the indication of the arrows, but the operations are not necessarily performed in the order indicated by the arrows. Except as explicitly stated herein, the execution of these operations is not strictly limited, and may be performed in other sequences. Moreover, at least a part of the operations in the method flowchart of the embodiment of the present application may include multiple sub-operations or multiple stages, which are not necessarily performed at the same time, but may be executed at different times. The order of execution is not necessarily performed sequentially, but may be performed alternately or alternately with at least a portion of the sub-operations or phases of other operations or other operations.
- FIG. 6 is a block diagram showing the structure of an image processing apparatus in an embodiment. As shown in FIG. 6, an image processing apparatus includes:
- The obtaining module 602 is configured to obtain a face atlas, where the face atlas is generated by the computer device clustering images that contain the same face.
- the identification module 604 is configured to identify a face included in the image in the face map set, and acquire a target face in the face, where the target face is a face existing in each image in the face map set.
- the display module 606 is configured to display an image in the face map set on the computer device display interface, and highlight the target face in the image.
- The presentation module 606 highlighting the target face in the image includes: blurring faces in the image other than the target face; marking the target face in the image and displaying the marked image on the computer device display interface; or extracting the target face image from the image and displaying it on the computer device display interface.
- The presentation module 606 highlighting the target face in the image includes: highlighting the target face if the target face area in the image is below the first threshold; highlighting the target face if the number of faces in the image is higher than the second threshold; or highlighting the target face if a user-initiated highlight instruction is received.
- The presentation module 606 highlighting the target face in the image includes: if the similarity of a plurality of images in the face atlas is detected to be higher than the third threshold, highlighting the target face in one of those images.
- Fig. 7 is a block diagram showing the structure of an image processing apparatus in another embodiment.
- an image processing apparatus includes an acquisition module 702, an identification module 704, a presentation module 706, and a processing module 708.
- the obtaining module 702, the identifying module 704, and the displaying module 706 have the same functions as the corresponding modules in FIG. 6.
- the obtaining module 702 is configured to acquire an arrangement order of images in the face map set.
- the order of arrangement is in the order of the shooting time or the order set by the user.
- The processing module 708 is configured to generate an album of the images in the face atlas according to the arrangement order.
- the processing module 708 is further configured to cluster the images in the face map set according to the time dimension or the location dimension if the number of images in the face map set is higher than the fourth threshold, to obtain a sub-atlas.
- the display module 706 is further configured to display a sub-atlas at the computer device display interface.
- The obtaining module 702 is further configured to, if the number of images in the face atlas is higher than the fifth threshold, acquire a plurality of images in the atlas whose similarity is higher than the sixth threshold, and to acquire image information for each of the plurality of images, the image information including image sharpness and/or target face state.
- the display module 706 is further configured to select a target image from the plurality of images according to the image information, and display the target image in the image display interface corresponding to the face atlas.
- each module in the above image processing apparatus is for illustrative purposes only. In other embodiments, the image processing apparatus may be divided into different modules as needed to complete all or part of the functions of the image processing apparatus.
- modules in the image processing apparatus described above may be implemented in whole or in part by software, hardware, and combinations thereof.
- The above modules may be embedded in hardware in the processor, or stored in the memory of the server so that the processor can invoke the operations corresponding to the above modules.
- the terms "module” and the like are intended to mean a computer-related entity, which may be hardware, a combination of hardware and software, software, or software in execution.
- a component can be, but is not limited to being, a process running on a processor, a processor, an object, an executable, a thread of execution, a program, and/or a computer.
- an application running on a server and a server can be a component.
- One or more components can reside within a process and/or executed thread, and the components can be located within one computer and/or distributed between two or more computers.
- the embodiments of the present application also provide one or more computer-readable storage media containing computer-executable instructions that, when executed by one or more processors, cause the processors to perform the following steps:
- acquiring a face atlas, where the face atlas is generated by the computer device by clustering images containing the same face; recognizing the faces contained in the images of the face atlas and obtaining a target face among them; and displaying the images of the face atlas on the display interface of the computer device, with the target face highlighted in each image.
- highlighting the target face in the image includes: blurring the faces in the image other than the target face; marking the target face in the image and displaying the marked image on the display interface of the computer device; or extracting a target face image from the image and displaying the target face image on the display interface of the computer device.
- highlighting the target face in the image includes: highlighting the target face in the image if the area of the target face in the image is below a first threshold; highlighting the target face in the image if the number of faces in the image is above a second threshold; or highlighting the target face in the image if a user-initiated highlight command is received.
- highlighting the target face in the image comprises: highlighting the target face in one of multiple images if the similarity of those images in the face atlas is detected to be above a third threshold.
- the steps further include: obtaining an arrangement order of the images in the face atlas, the arrangement order being the order of shooting time or an order set by the user; and generating an album from the images in the face atlas according to the arrangement order.
- the steps further include: if the number of images in the face atlas is above a fourth threshold, clustering the images in the face atlas along the time dimension or the location dimension to obtain a sub-atlas, and displaying the sub-atlas on the display interface of the computer device.
- the steps further include: if the number of images in the face atlas is above a fifth threshold, obtaining multiple images in the face atlas whose similarity is above a sixth threshold, obtaining the image information of each of the multiple images, selecting a target image from the multiple images according to the image information, and displaying the target image in the image display interface corresponding to the face atlas.
- a computer program product comprising instructions that, when run on a computer, cause the computer to perform the following steps:
- acquiring a face atlas, where the face atlas is generated by the computer device by clustering images containing the same face; recognizing the faces contained in the images of the face atlas and obtaining a target face among them; and displaying the images of the face atlas on the display interface of the computer device, with the target face highlighted in each image.
- highlighting the target face in the image includes: blurring the faces in the image other than the target face; marking the target face in the image and displaying the marked image on the display interface of the computer device; or extracting a target face image from the image and displaying the target face image on the display interface of the computer device.
- highlighting the target face in the image includes: highlighting the target face in the image if the area of the target face in the image is below a first threshold; highlighting the target face in the image if the number of faces in the image is above a second threshold; or highlighting the target face in the image if a user-initiated highlight command is received.
- highlighting the target face in the image comprises: highlighting the target face in one of multiple images if the similarity of those images in the face atlas is detected to be above a third threshold.
- the steps further include: obtaining an arrangement order of the images in the face atlas, the arrangement order being the order of shooting time or an order set by the user; and generating an album from the images in the face atlas according to the arrangement order.
- the steps further include: if the number of images in the face atlas is above a fourth threshold, clustering the images in the face atlas along the time dimension or the location dimension to obtain a sub-atlas, and displaying the sub-atlas on the display interface of the computer device.
- the steps further include: if the number of images in the face atlas is above a fifth threshold, obtaining multiple images in the face atlas whose similarity is above a sixth threshold, obtaining the image information of each of the multiple images, selecting a target image from the multiple images according to the image information, and displaying the target image in the image display interface corresponding to the face atlas.
- the embodiments of the present application also provide a computer device. As shown in FIG. 8, for ease of description, only the parts related to the embodiments of the present application are shown; for details that are not disclosed, refer to the method part of the embodiments of the present application.
- the computer device may be any terminal device including a mobile phone, a tablet computer, a PDA (Personal Digital Assistant), a POS (Point of Sales) terminal, a vehicle-mounted computer, a wearable device, and the like; the description below takes a mobile phone as an example:
- FIG. 8 is a block diagram showing a part of a structure of a mobile phone related to a computer device according to an embodiment of the present application.
- the mobile phone includes: a radio frequency (RF) circuit 810, a memory 820, an input unit 830, a display unit 840, a sensor 850, an audio circuit 860, a wireless fidelity (WiFi) module 870, a processor 880, and a power supply 890.
- the RF circuit 810 can be used to receive and send signals during the sending and receiving of information or during a call; it can receive downlink information from a base station and pass it to the processor for processing, and can send uplink data to the base station.
- RF circuits include, but are not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier (LNA), a duplexer, and the like.
- RF circuitry 810 can also communicate with the network and other devices via wireless communication.
- the above wireless communication may use any communication standard or protocol, including but not limited to Global System for Mobile communication (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), e-mail, Short Messaging Service (SMS), and the like.
- the memory 820 can be used to store software programs and modules, and the processor 880 executes various functional applications and data processing of the mobile phone by running software programs and modules stored in the memory 820.
- the memory 820 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application required for at least one function (such as an application of a sound playing function, an application of an image playing function, etc.);
- the data storage area can store data (such as audio data, address book, etc.) created according to the use of the mobile phone.
- memory 820 can include high speed random access memory, and can also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
- the input unit 830 can be configured to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the handset 800.
- the input unit 830 may include a touch panel 831 and other input devices 832.
- the touch panel 831, also referred to as a touch screen, can collect the user's touch operations on or near it (such as operations performed by the user on or near the touch panel 831 with a finger, a stylus, or any other suitable object or accessory) and drive the corresponding connected device according to a preset program.
- the touch panel 831 can include two parts: a touch detection device and a touch controller.
- the touch detection device detects the user's touch position, detects the signal produced by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, and sends them to the processor 880; it can also receive commands from the processor 880 and execute them.
- the touch panel 831 can be implemented in various types such as resistive, capacitive, infrared, and surface acoustic waves.
- the input unit 830 may also include other input devices 832.
- other input devices 832 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control buttons, switch buttons, etc.).
- the display unit 840 can be used to display information input by the user or information provided to the user as well as various menus of the mobile phone.
- the display unit 840 can include a display panel 841.
- the display panel 841 can be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED), or the like.
- the touch panel 831 can cover the display panel 841. When the touch panel 831 detects a touch operation on or near it, it transmits the operation to the processor 880 to determine the type of the touch event, and the processor 880 then provides a corresponding visual output on the display panel 841 according to the type of the touch event.
- although the touch panel 831 and the display panel 841 may be two independent components implementing the input and output functions of the mobile phone, in some embodiments the touch panel 831 can be integrated with the display panel 841 to implement the input and output functions of the phone.
- the handset 800 can also include at least one type of sensor 850, such as a light sensor, motion sensor, and other sensors.
- the light sensor may include an ambient light sensor and a proximity sensor; the ambient light sensor may adjust the brightness of the display panel 841 according to the brightness of the ambient light, and the proximity sensor may turn off the display panel 841 and/or the backlight when the mobile phone is moved to the ear.
- the motion sensor may include an acceleration sensor, which can detect the magnitude of acceleration in each direction and, at rest, the magnitude and direction of gravity; it can be used to recognize the attitude of the phone (such as switching between portrait and landscape) and for vibration-recognition functions (such as a pedometer or tap detection). In addition, the phone can also be equipped with a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, and other sensors.
- Audio circuitry 860, speaker 861, and microphone 862 can provide an audio interface between the user and the handset.
- the audio circuit 860 can convert received audio data into an electrical signal and transmit it to the speaker 861, which converts it into a sound signal for output; conversely, the microphone 862 converts a collected sound signal into an electrical signal, which the audio circuit 860 receives and converts into audio data; after the audio data is processed by the processor 880, it can be sent to another mobile phone via the RF circuit 810 or output to the memory 820 for subsequent processing.
- WiFi is a short-range wireless transmission technology; through the WiFi module 870, the mobile phone can help users send and receive e-mail, browse web pages, and access streaming media, providing users with wireless broadband Internet access.
- although FIG. 8 shows the WiFi module 870, it can be understood that it is not an essential part of the mobile phone 800 and can be omitted as needed.
- the processor 880 is the control center of the handset; it connects the various parts of the entire handset using various interfaces and lines, and performs the phone's various functions and processes data by running or executing software programs and/or modules stored in the memory 820 and invoking data stored in the memory 820, thereby monitoring the phone as a whole.
- processor 880 can include one or more processing units.
- the processor 880 can integrate an application processor, which primarily handles the operating system, the user interface, applications, and so on, and a modem processor, which primarily handles wireless communication. It will be appreciated that the modem processor may also not be integrated into the processor 880.
- the mobile phone 800 also includes a power source 890 (such as a battery) that supplies power to various components.
- the power source can be logically coupled to the processor 880 through a power management system to manage functions such as charging, discharging, and power management through the power management system.
- the handset 800 can also include a camera, a Bluetooth module, and the like.
- the processor 880 included in the mobile terminal implements the following steps when executing a computer program stored in the memory:
- acquiring a face atlas, where the face atlas is generated by the computer device by clustering images containing the same face; recognizing the faces contained in the images of the face atlas and obtaining a target face among them; and displaying the images of the face atlas on the display interface of the computer device, with the target face highlighted in each image.
- highlighting the target face in the image includes: blurring the faces in the image other than the target face; marking the target face in the image and displaying the marked image on the display interface of the computer device; or extracting a target face image from the image and displaying the target face image on the display interface of the computer device.
- highlighting the target face in the image includes: highlighting the target face in the image if the area of the target face in the image is below a first threshold; highlighting the target face in the image if the number of faces in the image is above a second threshold; or highlighting the target face in the image if a user-initiated highlight command is received.
- highlighting the target face in the image comprises: highlighting the target face in one of multiple images if the similarity of those images in the face atlas is detected to be above a third threshold.
- the steps further include: obtaining an arrangement order of the images in the face atlas, the arrangement order being the order of shooting time or an order set by the user; and generating an album from the images in the face atlas according to the arrangement order.
- the steps further include: if the number of images in the face atlas is above a fourth threshold, clustering the images in the face atlas along the time dimension or the location dimension to obtain a sub-atlas, and displaying the sub-atlas on the display interface of the computer device.
- the steps further include: if the number of images in the face atlas is above a fifth threshold, obtaining multiple images in the face atlas whose similarity is above a sixth threshold, obtaining the image information of each of the multiple images, selecting a target image from the multiple images according to the image information, and displaying the target image in the image display interface corresponding to the face atlas.
- Non-volatile memory can include read only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory.
- Volatile memory can include random access memory (RAM), which acts as an external cache.
- RAM is available in a variety of forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link (Synchlink) DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
Abstract
An image processing method includes: acquiring a face atlas, the face atlas being generated by a computer device by clustering images that contain the same face; recognizing the faces contained in the images of the face atlas and obtaining a target face among them, the target face being a face that appears in every image of the face atlas; and displaying the images of the face atlas on the display interface of the computer device, with the target face highlighted in each image.
Description
Cross-Reference to Related Applications
This application claims priority to Chinese Patent Application No. 2017112441530, entitled "Image processing method, apparatus, computer device and computer-readable storage medium", filed with the Chinese Patent Office on November 30, 2017, the entire contents of which are incorporated herein by reference.
This application relates to the field of computer technology, and in particular to an image processing method, a computer device, and a computer-readable storage medium.
With the rapid development of intelligent computer devices, more and more users take photographs with them. An intelligent computer device can classify the large number of captured images along different dimensions, for example by time, by location, or by person, and can cluster the images along these dimensions to generate different atlases.
Summary
The various embodiments provided by the present application provide an image processing method, a computer device, and a computer-readable storage medium.
An image processing method includes:
acquiring a face atlas, the face atlas being generated by a computer device by clustering images containing the same face;
recognizing the faces contained in the images of the face atlas, and obtaining a target face among the faces, the target face being a face that appears in every image of the face atlas; and
displaying the images of the face atlas on the display interface of the computer device, and highlighting the target face in the images.
A computer device includes a memory and a processor, the memory storing computer-readable instructions that, when executed by the processor, cause the processor to perform the following operations:
acquiring a face atlas, the face atlas being generated by a computer device by clustering images containing the same face;
recognizing the faces contained in the images of the face atlas, and obtaining a target face among the faces, the target face being a face that appears in every image of the face atlas; and
displaying the images of the face atlas on the display interface of the computer device, and highlighting the target face in the images.
One or more computer-readable storage media containing computer-executable instructions that, when executed by one or more processors, cause the processors to perform the following operations:
acquiring a face atlas, the face atlas being generated by a computer device by clustering images containing the same face;
recognizing the faces contained in the images of the face atlas, and obtaining a target face among the faces, the target face being a face that appears in every image of the face atlas; and
displaying the images of the face atlas on the display interface of the computer device, and highlighting the target face in the images.
In the embodiments of the present application, when the images of a face atlas are displayed, the target face, that is, the principal face of each image, can be highlighted. This avoids the problem that an image containing many faces fails to show its principal face, and makes the way the images of the face atlas are viewed better match the user's needs.
Details of one or more embodiments of the present application are set forth in the accompanying drawings and the description below. Other features, objects, and advantages of the present application will become apparent from the description, the drawings, and the claims.
To describe the technical solutions in the embodiments of the present application or the prior art more clearly, the accompanying drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are merely some embodiments of the present application, and a person of ordinary skill in the art may derive other drawings from them without creative effort.
FIG. 1 is a flowchart of an image processing method according to an embodiment.
FIG. 2A is an interface diagram of highlighting the principal face of an image on the display interface of a computer device according to an embodiment.
FIG. 2B is an interface diagram of highlighting the principal face of an image on the display interface of a computer device according to another embodiment.
FIG. 2C is an interface diagram of highlighting the principal face of an image on the display interface of a computer device according to another embodiment.
FIG. 3 is a flowchart of an image processing method according to another embodiment.
FIG. 4 is a flowchart of an image processing method according to another embodiment.
FIG. 5 is a flowchart of an image processing method according to another embodiment.
FIG. 6 is a structural block diagram of an image processing apparatus according to an embodiment.
FIG. 7 is a structural block diagram of an image processing apparatus according to another embodiment.
FIG. 8 is a block diagram of part of the structure of a mobile phone related to the computer device provided by an embodiment of the present application.
To make the objectives, technical solutions, and advantages of the present application clearer, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely intended to explain the present application, and are not intended to limit it.
FIG. 1 is a flowchart of an image processing method according to an embodiment. As shown in FIG. 1, an image processing method includes:
Step 102: acquire a face atlas, the face atlas being generated by the computer device by clustering images containing the same face.
The computer device acquires a face atlas to be processed; the face atlas is generated by the computer device by clustering images that contain the same face, that is, faces corresponding to the same face identifier. Generating the face atlas includes: the computer device performs face recognition on the stored images and detects which of them contain faces, i.e. the face images. After obtaining the face images, the computer device recognizes the faces in them and obtains the face identifier corresponding to each face; it then clusters the face images that contain the same face identifier to obtain the face atlas. For example, after face recognition on the stored images yields image 1, image 2, image 3, and image 4, and the computer device detects that image 1, image 2, and image 3 all contain the face corresponding to face identifier A, it clusters image 1, image 2, and image 3 into the face atlas corresponding to face identifier A.
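The clustering step described above can be sketched in pure Python. This is a minimal illustration, not the patented implementation: the face identifiers are assumed to come from a prior face-recognition step, and all names (`build_face_atlases`, `image_id`, `face_id`) are hypothetical.

```python
def build_face_atlases(images):
    """Group image IDs into atlases keyed by the face identifiers they contain.

    `images` is a list of (image_id, iterable_of_face_ids) pairs, where the
    face identifiers are assumed to be produced by a face-recognition step.
    """
    atlases = {}
    for image_id, face_ids in images:
        for face_id in face_ids:
            # Every image containing face_id joins that face's atlas.
            atlases.setdefault(face_id, []).append(image_id)
    return atlases
```

With the example from the text, images 1 to 3 all containing face A would land in the atlas for identifier A.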
Step 104: recognize the faces contained in the images of the face atlas, and obtain a target face among them; the target face is a face that appears in every image of the face atlas.
After obtaining the clustered face atlas, the computer device obtains the faces contained in each image of the atlas, that is, the face identifiers contained in each image. It then looks for a face identifier that is contained in every image of the atlas and takes the face corresponding to that identifier as the target face; in other words, the computer device obtains the principal face of the face atlas. For example, suppose the face atlas includes image 1, image 2, image 3, and image 4, where image 1 contains the faces corresponding to identifiers A and B, image 2 contains the face corresponding to identifier A, image 3 contains the faces corresponding to identifiers A and C, and image 4 contains the faces corresponding to identifiers A and D. All four images contain the face corresponding to identifier A, so that face is taken as the target face, i.e. the principal face of the atlas.
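Finding the target face, the identifier common to every image of the atlas, reduces to a set intersection. A minimal sketch under the same assumptions (per-image face-identifier sets produced by an earlier recognition step):

```python
def find_target_faces(per_image_faces):
    """Return the face identifiers present in every image of the atlas.

    `per_image_faces` is a non-empty list of face-identifier sets,
    one set per image.
    """
    common = set(per_image_faces[0])
    for faces in per_image_faces[1:]:
        common &= set(faces)  # keep only identifiers seen in every image
    return common
```

Applied to the four-image example above, only identifier A survives the intersection.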
Step 106: display the images of the face atlas on the display interface of the computer device, and highlight the target face in each image.
The computer device can select one image from the face atlas as the cover of the atlas according to preset rules, which may include at least one of the following:
(1) Select the image in which the target face is sharpest as the cover image.
(2) Select the image with the largest ratio of target-face area to image area as the cover image.
(3) Select an image in which the target face lies in a preset region of the image as the cover image; for example, select an image in which the target face lies in the central region of the image.
The computer device can display the cover image of the face atlas in the album interface, that is, represent the atlas by its cover image. When the computer device receives a viewing command for the face atlas, it can switch interfaces and display all the face images included in the atlas. The viewing command may be a touch command or a voice command, where a touch command received by the computer device is one acting on the cover image of the face atlas.
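The cover-selection rules above can be combined into a single ranking. The sketch below is one possible policy, not the only one the text permits: it prefers images whose target face lies in the central region, then the largest face-to-image area ratio, then sharpness; the dictionary keys are illustrative.

```python
def choose_cover(images):
    """Pick a cover image for the atlas.

    Each entry is a dict with hypothetical keys: "id", "sharpness",
    "face_ratio" (target-face area / image area), and "centered"
    (whether the target face lies in the preset central region).
    """
    centered = [img for img in images if img["centered"]]
    pool = centered or images  # fall back if no image is centered
    return max(pool, key=lambda img: (img["face_ratio"], img["sharpness"]))["id"]
```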
When displaying the images of the face atlas, the computer device can highlight the target face in each image. Highlighting the target face may include at least one of the following methods:
(1) Blur the faces in the image other than the target face.
The computer device can blur the image, using algorithms such as Gaussian blur, mean blur, median blur, or binary blur. Blurring the faces other than the target face may include: the computer device obtains the pixel regions of the image that need to be blurred and blurs the pixels within those regions. The computer device can set the blur level, i.e. the degree to which the image is blurred; the higher the level, the blurrier the image.
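A minimal pure-Python box blur over the pixel regions to be blurred illustrates the idea; a production implementation would more likely use a library routine such as a Gaussian blur, and the `radius` parameter here stands in for the blur level mentioned above.

```python
def blur_regions(image, regions, radius=1):
    """Box-blur rectangular pixel regions of a grayscale image.

    `image` is a 2-D list of integer intensities; each region is a
    (top, left, bottom, right) rectangle with exclusive bottom/right.
    A larger radius means a stronger blur level.
    """
    height, width = len(image), len(image[0])
    out = [row[:] for row in image]
    for top, left, bottom, right in regions:
        for y in range(top, bottom):
            for x in range(left, right):
                total, count = 0, 0
                for dy in range(-radius, radius + 1):
                    for dx in range(-radius, radius + 1):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < height and 0 <= nx < width:
                            total += image[ny][nx]
                            count += 1
                # Replace the pixel with the neighborhood average.
                out[y][x] = total // count
    return out
```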
(2) Mark the target face in the image and display the marked image on the display interface of the computer device.
The computer device can also mark the target face in the image to highlight it. Marking the target face may include at least one of the following methods:
Add a graphic or text marker to the image and use it to mark the target face. For example, add an arrow to the image pointing at the target face, or add a rectangular box so that the target face is displayed inside the box.
Recognize the contour of the target face in the image and display the contour in a preset color to make the target face stand out. For example, after recognizing the contour of the target face, the computer device displays the contour in red.
(3) Extract the target face image from the image and display the target face image on the display interface of the computer device.
After recognizing the target face in an image, the computer device can extract the target face image from the image; for example, after recognizing the contour of the target face image, it cuts the target face image out of the image. Having obtained the target face image, the computer device may, when displaying the images of the face atlas, show only the target face image corresponding to each image.
As shown in FIG. 2A, an image displayed on the display interface of the computer device includes a first face 210, a second face 220, and a third face 230, the third face 230 being the target face. The computer device can add an arrow marker 202 to the image pointing at the principal face 230, identifying the principal face of the image as 230. The computer device can also add a rectangular box to the image to mark the target face: as shown in FIG. 2B, the computer device can add a rectangular box 204 such that the image displayed inside the box 204 is the target face 230, that is, the target face 230 is displayed inside the box 204 and identified by it. The computer device can also blur the faces in the image other than the target face to highlight the target face: as shown in FIG. 2C, it can blur the pixel regions corresponding to the first face 210 and the second face 220 and display the blurred image on the computer device.
With the method of the embodiments of the present application, when the images of a face atlas are displayed, the target face, i.e. the principal face of each image, can be highlighted. This avoids the problem that an image containing many faces fails to show its principal face, and makes the way the images of the face atlas are viewed better match the user's needs.
In one embodiment, the viewing command the computer device receives for the face atlas may include a first viewing command or a second viewing command. The first viewing command displays the images normally; the second viewing command highlights the target face in the images. The computer device can therefore display the images of the face atlas in different forms depending on the viewing command received: after receiving a viewing command for the face atlas, if it detects the first viewing command it displays the images of the atlas normally, and if it detects the second viewing command it highlights the target face in the images. The computer device can provide interactive controls on the atlas display interface and the image display interface and determine the display mode from the viewing command received through those controls. For example, the image display interface may provide a "highlight" button: when the button is on, the computer device highlights the target face in the image; when it is off, the computer device displays the image normally.
In one embodiment, the computer device can also determine whether a viewing command is the first or the second viewing command from the way the command was triggered. When displaying stored images, the computer device can present them as multiple atlases on its display interface, for example an "Albums" atlas, a "People" atlas, and a "Places" atlas. Typically, the "Albums" atlas contains all the images stored on the computer device, while the "People" atlas is a clustered collection of the face images among the stored images, so the same image may appear in several atlases. When the computer device detects a trigger operation on the cover image of a preset atlas, it displays the images of that atlas on its display interface and highlights the target face in them. The preset atlas may be the face atlas on the computer device: when the user taps the cover image of the "Albums" atlas, the computer device displays its images normally even if the album contains images belonging to the face atlas; when the user taps the cover image of the face atlas, the computer device displays the images of the face atlas and highlights the target face in them.
In one embodiment, the images contained in a face atlas may take many forms, such as selfies, single-person images, and group photos. Highlighting the target face in an image avoids the problem that the principal face is hard to see when the image contains many faces or the faces are small. Highlighting the target face in the image includes:
(1) When the area of the target face in the image is below a first threshold, highlight the target face in the image.
The computer device can obtain the ratio of the target-face area to the image area; when this ratio is below the preset first threshold, it determines that the target face in the image is small and highlights it. For example, when the ratio of the target-face area to the image area is below 40%, the computer device highlights the target face in the image.
(2) When the number of faces in the image is above a second threshold, highlight the target face in the image.
After obtaining a face image, the computer device can recognize the faces it contains and then count them. When counting the faces in the image, the computer device can first filter out passerby faces, and then check whether the number of remaining faces is above the preset second threshold; when it is, the computer device highlights the target face in the image. A passerby face usually occupies only a small area of the image, so the computer device can detect whether a face is a passerby face from the ratio of face area to image area; for example, it checks whether the ratio is below a preset proportion such as 5%, and when it is, classifies the face as a passerby face.
(3) When a user-initiated highlight command is received, highlight the target face in the image.
The computer device can receive a user's operation command on a single image and process the corresponding image accordingly. When an image is being displayed on the display interface and the computer device receives an operation command on the current image that is a highlight command, the computer device highlights the target face in the image.
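The three trigger conditions, together with passerby-face filtering by area ratio, can be sketched as a single decision function. The 40% area ratio and 5% passerby ratio below are the example figures from the text; the face-count threshold of 3 is an assumed placeholder, since the text leaves the second threshold unspecified.

```python
def is_passerby(face_area, image_area, passerby_ratio=0.05):
    """A face occupying under ~5% of the image is treated as a passerby face."""
    return face_area / image_area < passerby_ratio


def should_highlight(target_face_ratio, face_areas, image_area,
                     user_command=False,
                     area_threshold=0.4, count_threshold=3):
    """Decide whether to highlight the target face.

    Highlight when the target face is small, when there are many
    non-passerby faces, or when the user explicitly asks for it.
    """
    non_passerby = sum(
        1 for area in face_areas if not is_passerby(area, image_area))
    return (target_face_ratio < area_threshold
            or non_passerby > count_threshold
            or user_command)
```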
In one embodiment, highlighting the target face in the image includes: when the similarity of multiple images in the face atlas is detected to be above a third threshold, highlighting the target face in one of those images.
Highlighting the target face serves to show the principal face of an image. For several burst-shot images, which are highly similar to one another, the computer device can highlight the target face in one of the burst images and display the other burst images normally.
Methods by which the computer device can measure the similarity of multiple images include histogram matching, artificial-intelligence detection, and the like. When the computer device detects that multiple images are highly similar, it highlights the target face in only one of them when displaying them. For example, when every pair among the multiple images is more than 90% similar, the target face is highlighted in only one of the images displayed on the computer device.
With the method of this embodiment, when a face atlas contains several similar images, the target face is highlighted in only one of them. This both highlights the principal face and avoids degrading the display of the other faces in the images, making the display of images smarter.
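Histogram matching, one of the similarity measures named above, can be sketched as a normalized histogram intersection; a score above the third threshold (e.g. 0.9 for the 90% example) would mark two images as near-duplicates. This is a minimal sketch, assuming equal-length intensity histograms have already been computed.

```python
def histogram_similarity(hist_a, hist_b):
    """Histogram-intersection similarity of two equal-length histograms.

    Returns a value in [0, 1]; 1.0 means identical distributions.
    """
    total_a, total_b = sum(hist_a), sum(hist_b)
    # Sum the overlap of the two normalized histograms, bin by bin.
    return sum(min(a / total_a, b / total_b) for a, b in zip(hist_a, hist_b))
```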
In one embodiment, after step 106, the method further includes:
Step 108: obtain the arrangement order of the images in the face atlas; the arrangement order is the order of shooting time or an order set by the user.
Step 110: generate an album from the images in the face atlas according to the arrangement order.
A face atlas is a collection of face images containing the same face and can record the appearance of the same person at different times. The computer device can generate an album from the images of the face atlas in order, further presenting the face images of the atlas. Generating the album in order includes: the computer device obtains the arrangement order of the images in the atlas, which may be the order of shooting time, the order of storage, or an order set manually by the user; the computer device then plays the images of the atlas in that order, and can store the album as a video file or an animated-image file. When generating the album from the images of the face atlas, if the computer device detects several images in the atlas whose similarity is above a specified value, it extracts one of those images for the album; that is, when the images used to generate the album include a burst of frames, one frame of the burst is extracted to generate the album.
With the method of this embodiment, generating an album from the face images of the atlas lets the user quickly browse many images in album form, matching the user's needs.
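Ordering the album frames can be sketched as below; the function falls back to shooting-time order when no user-set order is given. The tuple layout and names are illustrative, not the patented data model.

```python
def order_album(images, user_order=None):
    """Return image IDs in the album's playback order.

    `images` is a list of (image_id, shot_timestamp) pairs; a user-set
    order, when provided, takes precedence over shooting time.
    """
    if user_order is not None:
        return list(user_order)
    return [image_id for image_id, shot_time
            in sorted(images, key=lambda pair: pair[1])]
```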
In one embodiment, after step 106, the method further includes:
Step 112: when the number of images in the face atlas is above a fourth threshold, cluster the images of the atlas along the time dimension or the location dimension to obtain sub-atlases.
Step 114: display the sub-atlases on the display interface of the computer device.
When a face atlas contains many images, it is inconvenient for the user to browse them. When the computer device detects that the number of images in the face atlas is above the preset fourth threshold, it can cluster the images of the atlas again along the time dimension or the location dimension; that is, when the atlas contains many images, the computer device can apply a second-level classification to it. Clustering the images of the atlas along the time dimension includes: according to a set time unit, the computer device clusters images shot within the same time range into a sub-atlas; for example, with "day" as the time unit, images shot on the same day are clustered into a sub-atlas. Clustering the images of the atlas along the location dimension includes: according to a set location unit, the computer device clusters images shot within the same location range into a sub-atlas; for example, with "city" as the location unit, images shot in the same city are clustered into a sub-atlas.
With the method of this embodiment, when a face atlas contains many images, they can be classified a second time by the time dimension or the location dimension, that is, presented as multiple sub-atlases. This avoids the inconvenience of browsing too many images in one atlas and makes browsing more convenient for the user.
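Second-level clustering with "day" as the time unit can be sketched by grouping on the date part of each timestamp; location clustering would group on a place field in the same way. The `"YYYY-MM-DD HH:MM"` timestamp format is an assumption for illustration.

```python
from collections import defaultdict


def cluster_by_day(images):
    """Cluster images into sub-atlases keyed by shooting date.

    `images` is a list of (image_id, "YYYY-MM-DD HH:MM") pairs; the date
    part of the timestamp serves as the "day" time unit.
    """
    sub_atlases = defaultdict(list)
    for image_id, shot_time in images:
        day = shot_time.split(" ")[0]  # "YYYY-MM-DD"
        sub_atlases[day].append(image_id)
    return dict(sub_atlases)
```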
In one embodiment, after step 106, the method further includes:
Step 116: when the number of images in the face atlas is above a fifth threshold, obtain the images in the atlas whose similarity is above a sixth threshold.
Step 118: obtain the image information of each of those images; the image information includes image sharpness and/or a target face state.
The computer device selects a target image from the multiple images according to the image information and displays the target image in the image display interface corresponding to the face atlas.
When the computer device detects that the number of images in the face atlas is above the fifth threshold, it can check whether the atlas contains multiple images whose similarity is above the sixth threshold, that is, multiple highly similar images; the fifth and sixth thresholds may each be a value set by the computer device or a value set by the user. After obtaining the multiple highly similar images in the atlas, the computer device obtains their image information, which may include image sharpness, the target face state, or both. Image sharpness is the sharpness value of the image; the higher the value, the clearer the image. The computer device can determine the sharpness value with various image sharpness evaluation functions, including gray-level-variation functions, gradient functions, and image gray-level entropy functions. The target face state includes the rotation angle of the target face and its expression. Specifically, the computer device can detect the rotation angle of the target face in the image, i.e. its rotation in three-dimensional space relative to a standard face. The expression of the target face may include whether its eyes are closed and whether it is smiling. The computer device can judge whether the eyes of the target face are closed by detecting whether the whites of the eyes are visible: when the whites can be detected, the eyes are open; when they cannot, the eyes are closed. The computer device can judge whether the target face is smiling by detecting whether teeth are visible: when teeth are present, the target face is judged to be smiling; when they are not, it is judged not to be smiling.
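One common gradient-based sharpness measure consistent with the functions listed above is the variance of the image Laplacian: blurrier images have weaker edges and score lower. This is a sketch on a plain 2-D intensity list, not the specific function the device uses.

```python
def laplacian_variance(image):
    """Sharpness score of a grayscale 2-D list: variance of the Laplacian."""
    height, width = len(image), len(image[0])
    responses = []
    for y in range(1, height - 1):
        for x in range(1, width - 1):
            # 4-neighbor Laplacian response at (y, x).
            responses.append(image[y - 1][x] + image[y + 1][x]
                             + image[y][x - 1] + image[y][x + 1]
                             - 4 * image[y][x])
    mean = sum(responses) / len(responses)
    return sum((r - mean) ** 2 for r in responses) / len(responses)
```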
After obtaining the image information of the multiple images, the computer device compares the image information and selects a target image from among them. The criterion for selecting the target image may be set by the computer device or by the user; for example, select the image with the highest sharpness among the multiple images, or select the image in which the target face has its eyes open and is smiling. When displaying the images of the face atlas, the computer device may show only the target image among the multiple images.
With the method of this embodiment, when a face atlas contains many images and the computer device detects that it includes multiple similar images, only one of the similar images can be displayed. This reduces the number of images shown in the atlas and avoids the inconvenience of browsing too many images.
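The selection step can be sketched as a simple policy: prefer candidates whose target face has open eyes and a smile, then take the sharpest. The dictionary keys are illustrative, and other criteria permitted by the text (e.g. sharpness alone) would be equally valid.

```python
def select_target_image(candidates):
    """Pick the image to display from a group of near-duplicate images.

    Each candidate is a dict with hypothetical keys:
    "id", "sharpness", "eyes_open", and "smiling".
    """
    preferred = [c for c in candidates if c["eyes_open"] and c["smiling"]]
    pool = preferred or candidates  # fall back when none qualify
    return max(pool, key=lambda c: c["sharpness"])["id"]
```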
The operations in the method flowcharts of the embodiments of the present application are shown sequentially as indicated by the arrows, but they are not necessarily performed in that order. Unless explicitly stated herein, there is no strict order restriction on the execution of these operations, and they may be performed in other orders. Moreover, at least some of the operations in the method flowcharts may include multiple sub-operations or multiple stages, which are not necessarily completed at the same moment but may be performed at different moments; their execution order is not necessarily sequential, and they may be performed in turn or alternately with other operations or with at least part of the sub-operations or stages of other operations.
FIG. 6 is a structural block diagram of an image processing apparatus according to an embodiment. As shown in FIG. 6, an image processing apparatus includes:
an acquiring module 602, configured to acquire a face atlas, the face atlas being generated by a computer device by clustering images containing the same face;
a recognition module 604, configured to recognize the faces contained in the images of the face atlas and obtain a target face among them, the target face being a face that appears in every image of the face atlas; and
a display module 606, configured to display the images of the face atlas on the display interface of the computer device and to highlight the target face in the images.
In one embodiment, the display module 606 highlighting the target face in the image includes: blurring the faces in the image other than the target face; marking the target face in the image and displaying the marked image on the display interface of the computer device; or extracting a target face image from the image and displaying the target face image on the display interface of the computer device.
In one embodiment, the display module 606 highlighting the target face in the image includes: highlighting the target face in the image if the area of the target face in the image is below a first threshold; highlighting the target face in the image if the number of faces in the image is above a second threshold; or highlighting the target face in the image if a user-initiated highlight command is received.
In one embodiment, the display module 606 highlighting the target face in the image includes: highlighting the target face in one of multiple images if the similarity of those images in the face atlas is detected to be above a third threshold.
FIG. 7 is a structural block diagram of an image processing apparatus according to another embodiment. As shown in FIG. 7, an image processing apparatus includes an acquiring module 702, a recognition module 704, a display module 706, and a processing module 708, where the acquiring module 702, the recognition module 704, and the display module 706 have the same functions as the corresponding modules in FIG. 6.
The acquiring module 702 is configured to obtain the arrangement order of the images in the face atlas; the arrangement order is the order of shooting time or an order set by the user.
The processing module 708 is configured to generate an album from the images of the face atlas according to the arrangement order.
In one embodiment, the processing module 708 is further configured to cluster the images of the face atlas along the time dimension or the location dimension to obtain a sub-atlas if the number of images in the atlas is above a fourth threshold. The display module 706 is further configured to display the sub-atlas on the display interface of the computer device.
In one embodiment, the acquiring module 702 is further configured to obtain, if the number of images in the face atlas is above a fifth threshold, multiple images in the atlas whose similarity is above a sixth threshold, and to obtain the image information of each of the multiple images; the image information includes image sharpness and/or a target face state. The display module 706 is further configured to select a target image from the multiple images according to the image information and to display the target image in the image display interface corresponding to the face atlas.
The division of the modules in the above image processing apparatus is for illustration only; in other embodiments, the apparatus may be divided into different modules as needed to complete all or part of the functions of the image processing apparatus.
The modules of the above image processing apparatus may be implemented in whole or in part by software, by hardware, or by combinations thereof. The modules may be embedded in hardware in, or independent of, a processor of a server, or stored in software form in a memory of the server, so that the processor can invoke and perform the operations corresponding to each module. As used in this application, the term "module" and the like are intended to refer to a computer-related entity, which may be hardware, a combination of hardware and software, software, or software in execution. For example, a component may be, but is not limited to, a process running on a processor, a processor, an object, an executable, a thread of execution, a program, and/or a computer. By way of illustration, both an application running on a server and the server itself can be a component. One or more components may reside within a process and/or thread of execution, and a component may be located on one computer and/or distributed between two or more computers.
The embodiments of the present application also provide a computer-readable storage medium: one or more computer-readable storage media containing computer-executable instructions that, when executed by one or more processors, cause the processors to perform the following steps:
(1) Acquire a face atlas, the face atlas being generated by a computer device by clustering images containing the same face.
(2) Recognize the faces contained in the images of the face atlas, and obtain a target face among them.
(3) Display the images of the face atlas on the display interface of the computer device, and highlight the target face in the images.
In one embodiment, highlighting the target face in the image includes: blurring the faces in the image other than the target face; marking the target face in the image and displaying the marked image on the display interface of the computer device; or extracting a target face image from the image and displaying the target face image on the display interface of the computer device.
In one embodiment, highlighting the target face in the image includes: highlighting the target face in the image if the area of the target face in the image is below a first threshold; highlighting the target face in the image if the number of faces in the image is above a second threshold; or highlighting the target face in the image if a user-initiated highlight command is received.
In one embodiment, highlighting the target face in the image includes: highlighting the target face in one of multiple images if the similarity of those images in the face atlas is detected to be above a third threshold.
In one embodiment, the steps further include: obtaining the arrangement order of the images in the face atlas, the arrangement order being the order of shooting time or an order set by the user; and generating an album from the images in the face atlas according to the arrangement order.
In one embodiment, the steps further include: if the number of images in the face atlas is above a fourth threshold, clustering the images of the atlas along the time dimension or the location dimension to obtain a sub-atlas, and displaying the sub-atlas on the display interface of the computer device.
In one embodiment, the steps further include: if the number of images in the face atlas is above a fifth threshold, obtaining multiple images in the atlas whose similarity is above a sixth threshold; obtaining the image information of each of the multiple images, the image information including image sharpness and/or a target face state; and selecting a target image from the multiple images according to the image information, and displaying the target image in the image display interface corresponding to the face atlas.
A computer program product containing instructions that, when run on a computer, cause the computer to perform the following steps:
(1) Acquire a face atlas, the face atlas being generated by a computer device by clustering images containing the same face.
(2) Recognize the faces contained in the images of the face atlas, and obtain a target face among them.
(3) Display the images of the face atlas on the display interface of the computer device, and highlight the target face in the images.
In one embodiment, highlighting the target face in the image includes: blurring the faces in the image other than the target face; marking the target face in the image and displaying the marked image on the display interface of the computer device; or extracting a target face image from the image and displaying the target face image on the display interface of the computer device.
In one embodiment, highlighting the target face in the image includes: highlighting the target face in the image if the area of the target face in the image is below a first threshold; highlighting the target face in the image if the number of faces in the image is above a second threshold; or highlighting the target face in the image if a user-initiated highlight command is received.
In one embodiment, highlighting the target face in the image includes: highlighting the target face in one of multiple images if the similarity of those images in the face atlas is detected to be above a third threshold.
In one embodiment, the steps further include: obtaining the arrangement order of the images in the face atlas, the arrangement order being the order of shooting time or an order set by the user; and generating an album from the images in the face atlas according to the arrangement order.
In one embodiment, the steps further include: if the number of images in the face atlas is above a fourth threshold, clustering the images of the atlas along the time dimension or the location dimension to obtain a sub-atlas, and displaying the sub-atlas on the display interface of the computer device.
In one embodiment, the steps further include: if the number of images in the face atlas is above a fifth threshold, obtaining multiple images in the atlas whose similarity is above a sixth threshold; obtaining the image information of each of the multiple images, the image information including image sharpness and/or a target face state; and selecting a target image from the multiple images according to the image information, and displaying the target image in the image display interface corresponding to the face atlas.
The embodiments of the present application also provide a computer device. As shown in FIG. 8, for ease of description, only the parts related to the embodiments of the present application are shown; for specific technical details not disclosed, refer to the method part of the embodiments of the present application. The computer device may be any terminal device including a mobile phone, a tablet computer, a PDA (Personal Digital Assistant), a POS (Point of Sales) terminal, a vehicle-mounted computer, a wearable device, and the like; the description below takes a mobile phone as the computer device by way of example.
FIG. 8 is a block diagram of part of the structure of a mobile phone related to the computer device provided by an embodiment of the present application. Referring to FIG. 8, the mobile phone includes a radio frequency (RF) circuit 810, a memory 820, an input unit 830, a display unit 840, a sensor 850, an audio circuit 860, a wireless fidelity (WiFi) module 870, a processor 880, a power supply 890, and other components. A person skilled in the art will understand that the phone structure shown in FIG. 8 does not limit the phone, which may include more or fewer components than shown, combine certain components, or arrange the components differently.
The RF circuit 810 can be used to receive and send signals during the sending and receiving of information or during a call; it can receive downlink information from a base station and pass it to the processor 880 for processing, and can send uplink data to the base station. An RF circuit typically includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier (LNA), a duplexer, and the like. In addition, the RF circuit 810 can also communicate with networks and other devices via wireless communication, which may use any communication standard or protocol, including but not limited to Global System for Mobile communication (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), e-mail, Short Messaging Service (SMS), and the like.
The memory 820 can be used to store software programs and modules, and the processor 880 performs the various functional applications and data processing of the phone by running the software programs and modules stored in the memory 820. The memory 820 may mainly include a program storage area and a data storage area: the program storage area can store the operating system and the applications required for at least one function (such as a sound-playing application or an image-playing application), and the data storage area can store data created through the use of the phone (such as audio data and an address book). In addition, the memory 820 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another volatile solid-state storage device.
The input unit 830 can be used to receive input digit or character information and to generate key signal input related to the user settings and function control of the phone 800. Specifically, the input unit 830 may include a touch panel 831 and other input devices 832. The touch panel 831, also called a touch screen, can collect the user's touch operations on or near it (such as operations performed on or near the touch panel 831 with a finger, a stylus, or any other suitable object or accessory) and drive the corresponding connected device according to a preset program. In one embodiment, the touch panel 831 may include a touch detection device and a touch controller: the touch detection device detects the position of the user's touch, detects the signal produced by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, sends them to the processor 880, and can receive and execute commands sent by the processor 880. The touch panel 831 can be implemented in various types such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch panel 831, the input unit 830 may also include other input devices 832, which may include, but are not limited to, one or more of a physical keyboard and function keys (such as volume control keys and a power key).
The display unit 840 can be used to display information entered by the user, information provided to the user, and the various menus of the phone. The display unit 840 may include a display panel 841, which in one embodiment may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, or the like. In one embodiment, the touch panel 831 may cover the display panel 841: when the touch panel 831 detects a touch operation on or near it, it transmits the operation to the processor 880 to determine the type of the touch event, and the processor 880 then provides the corresponding visual output on the display panel 841 according to the type of the touch event. Although in FIG. 8 the touch panel 831 and the display panel 841 are two independent components implementing the input and output functions of the phone, in some embodiments they may be integrated to implement the input and output functions of the phone.
The phone 800 may also include at least one sensor 850, such as a light sensor, a motion sensor, or another sensor. Specifically, the light sensor may include an ambient light sensor and a proximity sensor: the ambient light sensor can adjust the brightness of the display panel 841 according to the ambient light, and the proximity sensor can turn off the display panel 841 and/or the backlight when the phone is moved to the ear. A motion sensor may include an acceleration sensor, which can detect the magnitude of acceleration in each direction and, at rest, the magnitude and direction of gravity; it can be used for applications that recognize the phone's attitude (such as switching between portrait and landscape) and for vibration-recognition functions (such as a pedometer or tap detection). The phone may also be equipped with other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor.
The audio circuit 860, a speaker 861, and a microphone 862 can provide an audio interface between the user and the phone. The audio circuit 860 can convert received audio data into an electrical signal and transmit it to the speaker 861, which converts it into a sound signal for output; conversely, the microphone 862 converts a collected sound signal into an electrical signal, which the audio circuit 860 receives and converts into audio data; after the audio data is processed by the processor 880, it can be sent to another phone via the RF circuit 810 or output to the memory 820 for subsequent processing.
WiFi is a short-range wireless transmission technology. Through the WiFi module 870, the phone can help the user send and receive e-mail, browse web pages, and access streaming media, providing the user with wireless broadband Internet access. Although FIG. 8 shows the WiFi module 870, it is understood that it is not an essential part of the phone 800 and may be omitted as needed.
The processor 880 is the control center of the phone. It connects all parts of the phone using various interfaces and lines, and performs the phone's various functions and processes data by running or executing the software programs and/or modules stored in the memory 820 and invoking the data stored in the memory 820, thereby monitoring the phone as a whole. In one embodiment, the processor 880 may include one or more processing units. In one embodiment, the processor 880 may integrate an application processor, which mainly handles the operating system, the user interface, applications, and so on, and a modem processor, which mainly handles wireless communication. It will be understood that the modem processor may also not be integrated into the processor 880.
The phone 800 also includes a power supply 890 (such as a battery) that supplies power to the components. Preferably, the power supply can be logically connected to the processor 880 through a power management system, so that functions such as charging, discharging, and power-consumption management are implemented through the power management system.
In one embodiment, the phone 800 may also include a camera, a Bluetooth module, and the like.
In the embodiments of the present application, the processor 880 included in the mobile terminal implements the following steps when executing the computer program stored in the memory:
(1) Acquire a face atlas, the face atlas being generated by a computer device by clustering images containing the same face.
(2) Recognize the faces contained in the images of the face atlas, and obtain a target face among them.
(3) Display the images of the face atlas on the display interface of the computer device, and highlight the target face in the images.
In one embodiment, highlighting the target face in the image includes: blurring the faces in the image other than the target face; marking the target face in the image and displaying the marked image on the display interface of the computer device; or extracting a target face image from the image and displaying the target face image on the display interface of the computer device.
In one embodiment, highlighting the target face in the image includes: highlighting the target face in the image if the area of the target face in the image is below a first threshold; highlighting the target face in the image if the number of faces in the image is above a second threshold; or highlighting the target face in the image if a user-initiated highlight command is received.
In one embodiment, highlighting the target face in the image includes: highlighting the target face in one of multiple images if the similarity of those images in the face atlas is detected to be above a third threshold.
In one embodiment, the steps further include: obtaining the arrangement order of the images in the face atlas, the arrangement order being the order of shooting time or an order set by the user; and generating an album from the images in the face atlas according to the arrangement order.
In one embodiment, the steps further include: if the number of images in the face atlas is above a fourth threshold, clustering the images of the atlas along the time dimension or the location dimension to obtain a sub-atlas, and displaying the sub-atlas on the display interface of the computer device.
In one embodiment, the steps further include: if the number of images in the face atlas is above a fifth threshold, obtaining multiple images in the atlas whose similarity is above a sixth threshold; obtaining the image information of each of the multiple images, the image information including image sharpness and/or a target face state; and selecting a target image from the multiple images according to the image information, and displaying the target image in the image display interface corresponding to the face atlas.
Any reference to memory, storage, a database, or another medium used in this application may include non-volatile and/or volatile memory. Suitable non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory may include random access memory (RAM), which serves as external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link (Synchlink) DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The above embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not therefore be construed as limiting the scope of the patent. It should be noted that a person of ordinary skill in the art can make several variations and improvements without departing from the concept of the present application, and these all fall within its protection scope. Therefore, the protection scope of this patent shall be subject to the appended claims.
Claims (27)
- An image processing method, comprising: acquiring a face atlas, the face atlas being generated by a computer device by clustering images containing the same face; identifying faces contained in the images of the face atlas and obtaining a target face among the faces, the target face being a face that is present in every image of the face atlas; and displaying the images of the face atlas on a display interface of the computer device, and highlighting the target face in the images.
- The method according to claim 1, wherein highlighting the target face in the images comprises: blurring the faces in the image other than the target face; marking the target face in the image, and displaying the marked image on the display interface of the computer device; and extracting a target face image from the image, and displaying the target face image on the display interface of the computer device.
- The method according to claim 1, wherein highlighting the target face in the images comprises: when the area of the target face in the image is below a first threshold, highlighting the target face in the image.
- The method according to claim 1, wherein highlighting the target face in the images comprises: when the number of faces in the image exceeds a second threshold, highlighting the target face in the image.
- The method according to claim 1, wherein highlighting the target face in the images comprises: when a highlight instruction initiated by the user is received, highlighting the target face in the image.
- The method according to claim 1, wherein highlighting the target face in the images comprises: when the similarity of multiple images in the face atlas is detected to be above a third threshold, highlighting the target face in one of the multiple images.
- The method according to any one of claims 1 to 6, further comprising: acquiring an arrangement order of the images in the face atlas, the arrangement order being chronological by capture time or an order set by the user; and generating an album from the images in the face atlas according to the arrangement order.
- The method according to any one of claims 1 to 6, further comprising: when the number of images in the face atlas exceeds a fourth threshold, clustering the images in the face atlas by time dimension or location dimension to obtain sub-atlases; and displaying the sub-atlases on the display interface of the computer device.
- The method according to any one of claims 1 to 6, further comprising: when the number of images in the face atlas exceeds a fifth threshold, acquiring multiple images in the face atlas whose similarity is above a sixth threshold; separately acquiring image information of the multiple images, the image information comprising image sharpness and/or target-face state; and selecting a target image from the multiple images according to the image information, and displaying the target image on an image display interface corresponding to the face atlas.
- A computer device, comprising a memory and a processor, the memory storing computer-readable instructions which, when executed by the processor, cause the processor to perform the following operations: acquiring a face atlas, the face atlas being generated by a computer device by clustering images containing the same face; identifying faces contained in the images of the face atlas and obtaining a target face among the faces, the target face being a face that is present in every image of the face atlas; and displaying the images of the face atlas on a display interface of the computer device, and highlighting the target face in the images.
- The computer device according to claim 10, wherein when performing the highlighting of the target face in the images, the processor further performs the following operations: blurring the faces in the image other than the target face; marking the target face in the image, and displaying the marked image on the display interface of the computer device; and extracting a target face image from the image, and displaying the target face image on the display interface of the computer device.
- The computer device according to claim 10, wherein when performing the highlighting of the target face in the images, the processor further performs the following operation: when the area of the target face in the image is below a first threshold, highlighting the target face in the image.
- The computer device according to claim 10, wherein when performing the highlighting of the target face in the images, the processor further performs the following operation: when the number of faces in the image exceeds a second threshold, highlighting the target face in the image.
- The computer device according to claim 10, wherein when performing the highlighting of the target face in the images, the processor further performs the following operation: when a highlight instruction initiated by the user is received, highlighting the target face in the image.
- The computer device according to claim 10, wherein when performing the highlighting of the target face in the images, the processor further performs the following operation: when the similarity of multiple images in the face atlas is detected to be above a third threshold, highlighting the target face in one of the multiple images.
- The computer device according to any one of claims 10 to 15, wherein the processor further performs the following operations: acquiring an arrangement order of the images in the face atlas, the arrangement order being chronological by capture time or an order set by the user; and generating an album from the images in the face atlas according to the arrangement order.
- The computer device according to any one of claims 10 to 15, wherein the processor further performs the following operations: when the number of images in the face atlas exceeds a fourth threshold, clustering the images in the face atlas by time dimension or location dimension to obtain sub-atlases; and displaying the sub-atlases on the display interface of the computer device.
- The computer device according to any one of claims 10 to 15, wherein the processor further performs the following operations: when the number of images in the face atlas exceeds a fifth threshold, acquiring multiple images in the face atlas whose similarity is above a sixth threshold; separately acquiring image information of the multiple images, the image information comprising image sharpness and/or target-face state; and selecting a target image from the multiple images according to the image information, and displaying the target image on an image display interface corresponding to the face atlas.
- One or more computer-readable storage media containing computer-executable instructions which, when executed by one or more processors, cause the processors to perform the following operations: acquiring a face atlas, the face atlas being generated by a computer device by clustering images containing the same face; identifying faces contained in the images of the face atlas and obtaining a target face among the faces, the target face being a face that is present in every image of the face atlas; and displaying the images of the face atlas on a display interface of the computer device, and highlighting the target face in the images.
- The computer-readable storage medium according to claim 19, wherein when performing the highlighting of the target face in the images, the processor further performs the following operations: blurring the faces in the image other than the target face; marking the target face in the image, and displaying the marked image on the display interface of the computer device; and extracting a target face image from the image, and displaying the target face image on the display interface of the computer device.
- The computer-readable storage medium according to claim 19, wherein when performing the highlighting of the target face in the images, the processor further performs the following operation: when the area of the target face in the image is below a first threshold, highlighting the target face in the image.
- The computer-readable storage medium according to claim 19, wherein when performing the highlighting of the target face in the images, the processor further performs the following operation: when the number of faces in the image exceeds a second threshold, highlighting the target face in the image.
- The computer-readable storage medium according to claim 19, wherein when performing the highlighting of the target face in the images, the processor further performs the following operation: when a highlight instruction initiated by the user is received, highlighting the target face in the image.
- The computer-readable storage medium according to claim 19, wherein when performing the highlighting of the target face in the images, the processor further performs the following operation: when the similarity of multiple images in the face atlas is detected to be above a third threshold, highlighting the target face in one of the multiple images.
- The computer-readable storage medium according to any one of claims 19 to 24, wherein the processor further performs the following operations: acquiring an arrangement order of the images in the face atlas, the arrangement order being chronological by capture time or an order set by the user; and generating an album from the images in the face atlas according to the arrangement order.
- The computer-readable storage medium according to any one of claims 19 to 24, wherein the processor further performs the following operations: when the number of images in the face atlas exceeds a fourth threshold, clustering the images in the face atlas by time dimension or location dimension to obtain sub-atlases; and displaying the sub-atlases on the display interface of the computer device.
- The computer-readable storage medium according to any one of claims 19 to 24, wherein the processor further performs the following operations: when the number of images in the face atlas exceeds a fifth threshold, acquiring multiple images in the face atlas whose similarity is above a sixth threshold; separately acquiring image information of the multiple images, the image information comprising image sharpness and/or target-face state; and selecting a target image from the multiple images according to the image information, and displaying the target image on an image display interface corresponding to the face atlas.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711244153.0A CN108038431A (zh) | 2017-11-30 | 2017-11-30 | Image processing method and apparatus, computer device, and computer-readable storage medium |
CN201711244153.0 | 2017-11-30 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2019105457A1 (zh) | 2019-06-06 |
Family
ID=62094840
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2018/118555 WO2019105457A1 (zh) | 2017-11-30 | 2018-11-30 | Image processing method, computer device, and computer-readable storage medium |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN108038431A (zh) |
WO (1) | WO2019105457A1 (zh) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110942065A (zh) * | 2019-11-26 | 2020-03-31 | Oppo广东移动通信有限公司 | Text box selection method and apparatus, terminal device, and computer-readable storage medium |
CN112207812A (zh) * | 2019-07-12 | 2021-01-12 | 阿里巴巴集团控股有限公司 | Device control method, device, system, and storage medium |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108038431A (zh) | 2017-11-30 | 2018-05-15 | 广东欧珀移动通信有限公司 | Image processing method and apparatus, computer device, and computer-readable storage medium |
CN111145212B (zh) * | 2019-12-03 | 2023-10-03 | 浙江大华技术股份有限公司 | Target tracking processing method and apparatus |
CN111221999A (zh) * | 2020-01-08 | 2020-06-02 | Oppo广东移动通信有限公司 | Picture processing method and apparatus, mobile terminal, and storage medium |
CN111400534B (zh) * | 2020-03-05 | 2023-09-19 | 杭州海康威视系统技术有限公司 | Cover determination method and apparatus for image data, and computer storage medium |
CN113177131A (zh) * | 2021-04-09 | 2021-07-27 | 深圳时空引力科技有限公司 | Picture processing method and apparatus, and storage medium |
CN113591067A (zh) * | 2021-07-30 | 2021-11-02 | 中冶华天工程技术有限公司 | Event confirmation and timing method based on image recognition |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104299001A (zh) * | 2014-10-11 | 2015-01-21 | 小米科技有限责任公司 | Method and apparatus for generating an album |
CN105404863A (zh) * | 2015-11-13 | 2016-03-16 | 小米科技有限责任公司 | Person feature recognition method and system |
CN105979383A (zh) * | 2016-06-03 | 2016-09-28 | 北京小米移动软件有限公司 | Image acquisition method and apparatus |
CN106844492A (zh) * | 2016-12-24 | 2017-06-13 | 深圳云天励飞技术有限公司 | Face recognition method, client, server, and system |
CN108038431A (zh) * | 2017-11-30 | 2018-05-15 | 广东欧珀移动通信有限公司 | Image processing method and apparatus, computer device, and computer-readable storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN108038431A (zh) | 2018-05-15 |
Legal Events

Date | Code | Title | Description |
---|---|---|---|
| 121 | EP: the EPO has been informed by WIPO that EP was designated in this application | Ref document number: 18883816; Country of ref document: EP; Kind code of ref document: A1 |
| NENP | Non-entry into the national phase | Ref country code: DE |
| 122 | EP: PCT application non-entry in European phase | Ref document number: 18883816; Country of ref document: EP; Kind code of ref document: A1 |