CN111797754A - Image detection method, device, electronic equipment and medium


Info

Publication number
CN111797754A
CN111797754A (application CN202010614648.3A)
Authority
CN
China
Prior art keywords
image
body part
human body
detected
degree
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010614648.3A
Other languages
Chinese (zh)
Other versions
CN111797754B (en)
Inventor
Hu Chenpeng (胡晨鹏)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Zhangmen Science and Technology Co Ltd
Original Assignee
Shanghai Zhangmen Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Zhangmen Science and Technology Co Ltd filed Critical Shanghai Zhangmen Science and Technology Co Ltd
Priority to CN202010614648.3A
Publication of CN111797754A
Application granted
Publication of CN111797754B
Legal status: Active (granted)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267 Segmentation by performing operations on regions, e.g. growing, shrinking or watersheds
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G06V40/168 Feature extraction; Face representation
    • G06V40/171 Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses an image detection method and apparatus, an electronic device, and a medium. After an image to be detected, which comprises a first human body part image of a target user, is obtained, feature recognition is performed on the first human body part image based on a preset first image recognition model to determine a first modification degree, which represents how heavily the image of the first human body part of the target user has been retouched; when the first modification degree is detected to satisfy a first preset condition, a detection result corresponding to the image to be detected is displayed. With this technical solution, after an image containing a human body part image of a user is obtained, a pre-trained neural network detection model can judge whether the image has been excessively retouched, thereby avoiding the problem in the related art that over-retouched user images appear unreal.

Description

Image detection method, device, electronic equipment and medium
Technical Field
The present application relates to image processing technology, and in particular to an image detection method, apparatus, electronic device, and medium.
Background
With the development of communication technology and society, smart devices have evolved continuously and are used by more and more people.
As a result, network communication between users via smart devices has become the norm. For example, a social networking platform typically requires a user to first upload a real image of himself or herself together with identity information, so that other users of the platform can better understand the user before deciding whether to communicate with him or her.
In the related art, however, user images uploaded to such platforms are often excessively retouched with image-editing software or by other means, which undermines the authenticity of the uploaded images and degrades the experience of other users.
Disclosure of Invention
The embodiments of the present application provide an image detection method and apparatus, an electronic device, and a medium, for solving the problem in the related art that received user images are excessively retouched and therefore appear unreal.
According to an aspect of the embodiments of the present application, there is provided an image detection method, which is applied to a client, including:
acquiring an image to be detected, wherein the image to be detected comprises a first human body part image of a target user;
performing feature recognition on the first human body part image based on a preset first image recognition model, and determining a first embellishment degree corresponding to the first human body part image, wherein the first embellishment degree is used for representing an image embellishment degree of a first human body part of the target user;
and when the first modification degree is detected to meet a first preset condition, displaying a detection result corresponding to the image to be detected.
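Read as a pipeline, the three steps above amount to: run a recognition model over the body-part image, obtain a scalar modification degree, and surface a result only when the degree meets the preset condition. The sketch below illustrates that control flow only; the stand-in model, the threshold value, and every name in it are assumptions for illustration, not details disclosed in this application.

```python
def detect_modification(body_part_image, recognize, threshold=0.7):
    """Sketch of the claimed client-side flow (all names illustrative):
    recognize() stands in for the preset first image recognition model,
    its score plays the role of the 'first modification degree', and a
    result is surfaced only when the first preset condition (here: a
    simple threshold) is met."""
    degree = recognize(body_part_image)            # feature recognition step
    return {"over_modified": degree >= threshold,  # first preset condition
            "degree": degree}

# Toy stand-in model: mean pixel value of a tiny grayscale "image".
result = detect_modification([0.9, 0.8, 0.95],
                             lambda img: sum(img) / len(img))
```

In the claimed method the displayed "detection result" would be derived from such a flag; here it is just returned as a dictionary.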
Optionally, in another embodiment based on the foregoing method of the present application, the determining a first embellishment degree corresponding to the first human body part image further includes:
when the image to be detected is obtained, performing feature recognition on the first human body part image based on the first image recognition model to obtain a first modification degree recognition result;
and when the first modification degree recognition result is determined to correspond to the recognition failure, sending the image to be detected to a server.
Optionally, in another embodiment based on the foregoing method of the present application, after the sending the image to be detected to the server, the method further includes:
and receiving a second embellishment degree identification result sent by the server, and taking the second embellishment degree identification result as a first embellishment degree corresponding to the first human body part image, wherein the second embellishment degree identification result is a embellishment degree result generated by the server according to the image to be detected.
Optionally, in another embodiment based on the foregoing method of the present application, after the acquiring the image to be detected, the method further includes:
and based on a preset identification strategy, performing feature identification on the image to be detected by using an image segmentation model to obtain at least one human body part image of the target user in the image to be detected, wherein the identification strategy is used for determining the type of the human body part.
Optionally, in another embodiment based on the foregoing method of the present application, after the acquiring at least one human body part image of the target user in the image to be detected, the method further includes:
determining that the first human body part image comprises a face image of the target user;
the method for identifying the features of the first human body part image based on a preset first image identification model to determine a first embellishment degree corresponding to the first human body part image includes:
performing feature recognition on the face image based on the first image recognition model to obtain face part parameters;
detecting the matching degree of a left half region and a right half region of the face image based on the face part parameters;
and determining a first embellishment degree corresponding to the first human body part image based on the matching degree of the left half area and the right half area of the face image.
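One way to realize the left/right matching step above, purely as an illustration (the application does not disclose its matching metric), is to mirror one half of the face crop onto the other and score the pixel-level agreement:

```python
import numpy as np

def lr_symmetry_degree(face):
    """Mirror the right half of a grayscale face crop onto the left and
    score the match. The mean-absolute-difference metric is an assumed
    stand-in for the patent's (undisclosed) matching measure; 1.0 means
    the two halves agree perfectly."""
    h, w = face.shape
    left = face[:, : w // 2]
    right = np.fliplr(face[:, w - w // 2:])   # mirrored right half
    return 1.0 - float(np.mean(np.abs(left - right)))

symmetric = np.full((4, 4), 0.5)              # both halves identical
lopsided = np.zeros((2, 2)); lopsided[:, 1] = 1.0
```

A low matching score on an otherwise well-lit frontal crop could then feed into the first embellishment degree, as the claim describes.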
Optionally, in another embodiment based on the method of the present application, the determining the first embellishment degree corresponding to the first human body part image includes:
determining a first embellishment degree corresponding to the first human body part image based on whether left and right eye features of the face image match, wherein the eye features correspond to at least one of a size feature, a color feature and a contour feature;
and/or,
determining a first embellishment degree corresponding to the first human body part image based on whether left and right cheek features of the face image match, wherein the cheek features correspond to at least one of a size feature, a color feature and a contour feature;
and/or,
determining a first embellishment degree corresponding to the first human body part image based on whether left and right eyebrow features of the face image match, wherein the eyebrow features correspond to at least one of a size feature, a color feature and a contour feature.
Optionally, in another embodiment based on the foregoing method of the present application, after the acquiring at least one human body part image of the target user in the image to be detected, the method further includes:
determining that the first human body part image comprises a face image of the target user;
the method for identifying the features of the first human body part image based on a preset first image identification model to determine a first embellishment degree corresponding to the first human body part image includes:
obtaining facial feature (five sense organ) parameters corresponding to the face image based on the face part parameters;
generating size ratios of the facial features of the face image based on the facial feature parameters corresponding to the face image;
and determining a first embellishment degree corresponding to the first human body part image according to the size ratios of the facial features of the face image.
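The size-ratio idea can be made concrete with a toy check: heavily retouched portraits often have, for example, eyes enlarged beyond plausible proportion to the face. The "natural" ratio band below is an invented illustration, not a value from this application.

```python
def ratio_modification_degree(eye_width, face_width, natural=(0.18, 0.30)):
    """Flag a facial-feature size ratio that falls outside an assumed
    'natural' band; the band endpoints are hypothetical values chosen
    only to demonstrate the mechanism."""
    ratio = eye_width / face_width
    lo, hi = natural
    if lo <= ratio <= hi:
        return 0.0                     # proportions look plausible
    # distance outside the band as a crude modification degree
    return min(1.0, lo - ratio if ratio < lo else ratio - hi)
```

The same pattern generalizes to any pair of facial feature sizes, e.g. nose width against mouth width.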
Optionally, in another embodiment based on the foregoing method of the present application, after the acquiring at least one human body part image of the target user in the image to be detected, the method further includes:
determining that the first human body part image comprises a limb image of the target user;
the method for identifying the features of the first human body part image based on a preset first image identification model to determine a first embellishment degree corresponding to the first human body part image includes:
performing feature recognition on the limb image based on the first image recognition model to obtain a limb part parameter;
acquiring at least one of size characteristics, color characteristics and contour characteristics corresponding to the limb part based on the limb part parameters;
and determining a first embellishment degree corresponding to the first human body part image based on a comparison result of at least one of size characteristics, color characteristics and contour characteristics corresponding to the limb part and preset limb characteristics.
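For the limb branch, the comparison against "preset limb characteristics" might look like the following, where the feature names, reference values, and tolerance are all assumptions made for illustration:

```python
def limb_modification_degree(limb, reference, tolerance=0.1):
    """Compare extracted limb features against preset natural references
    and return the worst relative deviation when it exceeds an assumed
    tolerance, as a stand-in for the claimed comparison step."""
    deviations = [abs(limb[k] - reference[k]) / reference[k]
                  for k in ("length_width_ratio", "mean_hue", "contour_curvature")]
    worst = max(deviations)
    return worst if worst > tolerance else 0.0

natural = {"length_width_ratio": 5.0, "mean_hue": 0.08, "contour_curvature": 0.3}
slimmed = dict(natural, length_width_ratio=6.5)   # e.g. a "leg-slimming" filter
```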
Optionally, in another embodiment based on the foregoing method of the present application, after the acquiring the image to be detected, the method further includes:
analyzing the image to be detected to obtain a brightness parameter corresponding to the image to be detected, wherein the brightness parameter is used for reflecting the brightness of the image to be detected;
and determining a first modification degree corresponding to the first human body part image based on the brightness parameter corresponding to the image to be detected.
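The brightness branch can be sketched in a few lines: aggressive "skin brightening" filters push mean luminance up, so the excess over some natural ceiling can contribute to the modification degree. The ceiling value here is an assumption, not a figure from the application.

```python
def brightness_modification_degree(pixels, ceiling=0.85):
    """Compute the claimed brightness parameter (here simply the mean of
    normalized pixel luminances) and treat the excess over an assumed
    'natural' ceiling as a contribution to the modification degree."""
    luminance = sum(pixels) / len(pixels)   # the brightness parameter
    return max(0.0, luminance - ceiling)
```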
Optionally, in another embodiment based on the foregoing method of the present application, the displaying a detection result corresponding to the image to be detected when it is detected that the first modification degree satisfies a first preset condition includes:
when the modification degree of the first human body part image is determined to exceed a preset standard based on the first modification degree, acquiring a second human body part image of the target user by using a preset image segmentation model;
performing feature recognition on the second human body part image based on the first image recognition model, and determining a second modification degree corresponding to the second human body part image;
and when the second modification degree is detected to meet a second preset condition, displaying a detection result corresponding to the image to be detected.
Optionally, in another embodiment based on the foregoing method of the present application, after the acquiring at least one human body part image of the target user in the image to be detected, the method further includes:
when the first human body part image in the at least one human body part image is determined to be decorated to a degree exceeding a preset standard based on the first decoration degree, acquiring a third human body part image from the at least one human body part image of the target user;
performing feature recognition on the third human body part image based on the first image recognition model, and determining a third modification degree corresponding to the third human body part image;
and when the third modification degree is detected to meet a third preset condition, displaying a detection result corresponding to the image to be detected.
Optionally, in another embodiment based on the foregoing method of the present application, after the acquiring at least one human body part image of the target user in the image to be detected, the method includes:
acquiring a part type corresponding to the first human body part image, wherein the part type corresponds to at least one of a face image, a limb image, a head image and a trunk image;
acquiring at least one corresponding modification threshold value based on the part type corresponding to the first human body part image;
comparing the first modification degree with the at least one modification degree threshold value to obtain a corresponding comparison result;
and when the comparison result meets the first preset condition, displaying a detection result corresponding to the image to be detected.
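The part-type-specific comparison above reduces to a threshold lookup. The threshold values below are invented for illustration; the application only says that each part type maps to at least one modification threshold.

```python
# Per-part-type thresholds: hypothetical values, not from the patent.
MODIFICATION_THRESHOLDS = {"face": 0.6, "limb": 0.7, "head": 0.65, "torso": 0.75}

def comparison_satisfies_condition(part_type, first_degree):
    """Look up the modification threshold registered for the detected
    part type and compare, mirroring the claim's compare-then-display
    step (the result would trigger showing the detection result)."""
    return first_degree > MODIFICATION_THRESHOLDS[part_type]
```

Allowing a stricter threshold for faces than for limbs reflects that facial retouching is typically both more common and more noticeable.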
Optionally, in another embodiment based on the above method of the present application, the acquiring an image to be detected includes:
acquiring target video data;
selecting sub video data positioned in a target playing time period in the target video data based on a preset rule;
and acquiring the image to be detected according to the sub-video data.
Optionally, in another embodiment based on the foregoing method of the present application, the acquiring the image to be detected according to the sub-video data includes:
acquiring all key frame images in the sub-video data, and ranking the key frame images based on display parameters of the target user in each key frame image, wherein the display parameters reflect the size and the definition of the human body part of the target user;
and taking the key frame image positioned in a preset ranking range in the sequenced key frame images as the image to be detected.
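The key-frame selection above can be sketched as a sort over per-frame scores. The area-times-sharpness score is an assumed concretization of the "display parameters"; the application only says they reflect the size and definition of the body part.

```python
def pick_images_to_detect(key_frames, top_k=2):
    """Rank key frames by how large and how sharp the user's body part
    appears (the claimed 'display parameters') and keep the top-ranked
    frames as the images to be detected. The scoring product is an
    illustrative assumption."""
    ranked = sorted(key_frames,
                    key=lambda f: f["area"] * f["sharpness"], reverse=True)
    return [f["id"] for f in ranked[:top_k]]

frames = [{"id": "f1", "area": 100, "sharpness": 0.5},
          {"id": "f2", "area": 400, "sharpness": 0.9},
          {"id": "f3", "area": 300, "sharpness": 0.2}]
```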
Optionally, in another embodiment based on the foregoing method of the present application, before the acquiring the image to be detected, the method further includes:
and receiving a detection instruction generated by the social application program, wherein the detection instruction is used for detecting the image modification degree of the image to be detected.
According to an aspect of the embodiments of the present application, there is provided an image detection method, which is applied to a server, including:
acquiring an image to be detected, wherein the image to be detected comprises a first human body part image of a target user;
performing feature recognition on the first human body part image based on a preset second image recognition model, and determining a first embellishment degree corresponding to the first human body part image, wherein the first embellishment degree is used for representing an image embellishment degree of a first human body part of the target user;
and when the first modification degree is detected to meet a first preset condition, displaying a detection result corresponding to the image to be detected.
Optionally, in another embodiment based on the foregoing method of the present application, before the acquiring the image to be detected, the method further includes:
obtaining a first number of unmodified sample images, wherein each unmodified sample image comprises at least one body part image of a user;
performing sample image modification on the first number of unmodified sample images to obtain a second number of modified sample images, wherein the sample image modification is applied to one or more human body part images in the unmodified sample images;
and training a preset convolutional neural network model by using the unmodified sample images and the modified sample images to obtain the second image recognition model satisfying preset conditions.
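The data side of this training claim, deriving labeled positives from the unmodified corpus, can be sketched as follows. The `modify` callable stands in for real retouching operations (smoothing, slimming, brightening); the convolutional network training itself is omitted.

```python
def build_training_set(unmodified_images, modify):
    """Build the claimed training corpus: keep the unmodified samples as
    negatives and apply a retouching transform to produce modified
    positives. modify() is a hypothetical stand-in for real
    image-editing operations."""
    samples = [(img, 0.0) for img in unmodified_images]            # natural
    samples += [(modify(img), 1.0) for img in unmodified_images]   # retouched
    return samples

# Toy "brightening" filter applied to a 2-pixel image (illustrative only).
data = build_training_set([[0.25, 0.5]],
                          lambda img: [min(1.0, p + 0.25) for p in img])
```

Because every positive is derived from a known negative, the labels are obtained for free, which is presumably why the claim starts from unmodified samples rather than collecting retouched images in the wild.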
Optionally, in another embodiment based on the foregoing method of the present application, after obtaining the second image recognition model that satisfies a preset condition, the method further includes:
performing model compression on the second image recognition model to obtain a first image recognition model;
and sending the first image recognition model to a client.
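The application does not say which model-compression method turns the server's second model into the client's first model; uniform weight quantization is one common choice and is shown below purely as an assumed example.

```python
def quantize_weights(weights, bits=8):
    """Uniform quantization of model weights to small integer codes, one
    common way to shrink a model before shipping it to a client. This is
    an assumed technique, not one named by the application."""
    levels = (1 << bits) - 1
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / levels if hi > lo else 1.0
    codes = [round((w - lo) / scale) for w in weights]   # ints in [0, levels]
    return codes, lo, scale

def dequantize(codes, lo, scale):
    """Recover approximate float weights on the client side."""
    return [lo + c * scale for c in codes]

codes, lo, scale = quantize_weights([-1.0, -0.25, 0.5, 1.0])
restored = dequantize(codes, lo, scale)
```

Storing 8-bit codes plus two floats in place of 32-bit weights cuts the payload roughly fourfold, at the cost of a small reconstruction error.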
Optionally, in another embodiment based on the foregoing method of the present application, the acquiring an image to be detected, where the image to be detected includes a first human body part of a target user, includes:
receiving the image to be detected sent by a client corresponding to the server, wherein the image to be detected comprises a first human body part image of a target user, and the image to be detected sent by the client is an image whose modification degree the client cannot determine and/or an image that the client has segmented by using an image segmentation model.
Optionally, in another embodiment based on the foregoing method of the present application, after the acquiring the image to be detected, the method further includes:
and carrying out feature recognition on the image to be detected by utilizing an image segmentation model to obtain at least one human body part image of the target user in the image to be detected.
Optionally, in another embodiment based on the foregoing method of the present application, after the acquiring at least one human body part image of the target user in the image to be detected, the method further includes:
determining that the first human body part image comprises a face image of the target user;
the method for identifying the features of the first human body part image based on a preset first image identification model to determine a first embellishment degree corresponding to the first human body part image includes:
performing feature recognition on the face image based on the first image recognition model to obtain face part parameters;
detecting the matching degree of a left half region and a right half region of the face image based on the face part parameters;
and determining a first embellishment degree corresponding to the first human body part image based on the matching degree of the left half area and the right half area of the face image.
Optionally, in another embodiment based on the method of the present application, the determining a third embellishment degree corresponding to the first human body part image includes:
determining a third embellishment degree corresponding to the first human body part image based on whether left and right eye features of the face image match, wherein the eye features correspond to at least one of a size feature, a color feature and a contour feature;
and/or,
determining a third embellishment degree corresponding to the first human body part image based on whether left and right cheek features of the face image match, wherein the cheek features correspond to at least one of a size feature, a color feature and a contour feature;
and/or,
determining a third embellishment degree corresponding to the first human body part image based on whether left and right eyebrow features of the face image match, wherein the eyebrow features correspond to at least one of a size feature, a color feature and a contour feature.
Optionally, in another embodiment based on the foregoing method of the present application, after the acquiring at least one human body part image of the target user in the image to be detected, the method further includes:
determining that the first human body part image comprises a face image of the target user;
the method for identifying the features of the first human body part image based on a preset first image identification model to determine a first embellishment degree corresponding to the first human body part image includes:
obtaining facial feature (five sense organ) parameters corresponding to the face image based on the face part parameters;
generating size ratios of the facial features of the face image based on the facial feature parameters corresponding to the face image;
and determining a first embellishment degree corresponding to the first human body part image according to the size ratios of the facial features of the face image.
Optionally, in another embodiment based on the foregoing method of the present application, after the acquiring at least one human body part image of the target user in the image to be detected, the method further includes:
determining that the first human body part image comprises a limb image of the target user;
the method for identifying the features of the first human body part image based on a preset first image identification model to determine a first embellishment degree corresponding to the first human body part image includes:
performing feature recognition on the limb image based on the first image recognition model to obtain a limb part parameter;
acquiring at least one of size characteristics, color characteristics and contour characteristics corresponding to the limb part based on the limb part parameters;
and determining a first embellishment degree corresponding to the first human body part image based on a comparison result of at least one of size characteristics, color characteristics and contour characteristics corresponding to the limb part and preset limb characteristics.
Optionally, in another embodiment based on the foregoing method of the present application, after the acquiring the image to be detected, the method further includes:
analyzing the image to be detected to obtain a brightness parameter corresponding to the image to be detected, wherein the brightness parameter is used for reflecting the brightness of the image to be detected;
and determining a third modification degree corresponding to the first human body part image based on the brightness parameter corresponding to the image to be detected.
Optionally, in another embodiment based on the foregoing method of the present application, the displaying a detection result corresponding to the image to be detected when it is detected that the third modification degree satisfies a first preset condition includes:
when the modification degree of the first human body part image is determined to exceed a preset standard based on the third modification degree, acquiring a second human body part image of the target user by using a preset image segmentation model;
based on the second image recognition model, performing feature recognition on the second human body part image, and determining a second modification degree corresponding to the second human body part image;
and when the second modification degree is detected to meet a second preset condition, displaying a detection result corresponding to the image to be detected.
Optionally, in another embodiment based on the foregoing method of the present application, after the acquiring at least one human body part image of the target user in the image to be detected, the method further includes:
when the first human body part image in the at least one human body part image is determined to be decorated to a degree exceeding a preset standard based on the first decoration degree, acquiring a third human body part image from the at least one human body part image of the target user;
performing feature recognition on the third human body part image based on the first image recognition model, and determining a third modification degree corresponding to the third human body part image;
and when the third modification degree is detected to meet a third preset condition, displaying a detection result corresponding to the image to be detected.
Optionally, in another embodiment based on the foregoing method of the present application, after the acquiring at least one body part image of the target user in the image to be detected, the method further includes:
acquiring a part type corresponding to the first human body part image, wherein the part type corresponds to at least one of a face image, a limb image, a head image and a trunk image;
acquiring at least one corresponding modification threshold value based on the part type corresponding to the first human body part image;
comparing the third modification degree with the at least one modification degree threshold value to obtain a corresponding comparison result;
and when the comparison result meets the first preset condition, displaying a detection result corresponding to the image to be detected.
Optionally, in another embodiment based on the above method of the present application, the acquiring an image to be detected includes:
acquiring target video data;
selecting sub video data positioned in a target playing time period in the target video data based on a preset rule;
and acquiring the image to be detected according to the sub-video data.
Optionally, in another embodiment based on the foregoing method of the present application, the acquiring the image to be detected according to the sub-video data includes:
acquiring all key frame images in the sub-video data, and ranking the key frame images based on display parameters of the target user in each key frame image, wherein the display parameters reflect the size and the definition of the human body part of the target user;
and taking the key frame image positioned in a preset ranking range in the sequenced key frame images as the image to be detected.
Optionally, in another embodiment based on the foregoing method of the present application, before the acquiring an image to be detected, the method further includes:
and receiving a detection instruction generated by the social application program, wherein the detection instruction is used for detecting the image modification degree of the image to be detected.
According to another aspect of the embodiments of the present application, there is provided an electronic device including:
a memory for storing executable instructions; and
a processor in communication with the memory to execute the executable instructions so as to perform the operations of any one of the image detection methods described above.
According to a further aspect of the embodiments of the present application, there is provided a computer-readable storage medium for storing computer-readable instructions, which when executed, perform the operations of any one of the image detection methods described above.
In the method and the device of the present application, after the image to be detected, which comprises the first human body part image of the target user, is obtained, feature recognition is performed on the first human body part image based on a preset first image recognition model to determine a first modification degree, which represents how heavily the image of the first human body part of the target user has been retouched; when the first modification degree is detected to satisfy a first preset condition, a detection result corresponding to the image to be detected is displayed. With this technical solution, after an image containing a human body part image of a user is obtained, a pre-trained neural network detection model can judge whether the image has been excessively retouched, thereby avoiding the problem in the related art that over-retouched user images appear unreal.
The technical solution of the present application is further described in detail by the accompanying drawings and examples.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description, serve to explain the principles of the application.
The present application may be more clearly understood from the following detailed description with reference to the accompanying drawings, in which:
FIG. 1 is a schematic diagram of a system architecture for image detection proposed in the present application;
fig. 2 is a schematic diagram of an image detection method applied to a client according to the present disclosure;
FIGS. 3a-3d are schematic diagrams of client interfaces proposed in the present application;
fig. 4 is a schematic diagram of an image detection method applied to a server according to the present application;
FIG. 5 is a flow chart of image detection proposed by the present application;
FIGS. 6-7 are schematic structural diagrams of the image detection device of the present application; and
fig. 8 is a schematic view of the electronic device according to the present disclosure.
Detailed Description
Various exemplary embodiments of the present application will now be described in detail with reference to the accompanying drawings. It should be noted that: the relative arrangement of the components and steps, the numerical expressions, and numerical values set forth in these embodiments do not limit the scope of the present application unless specifically stated otherwise.
Meanwhile, it should be understood that the sizes of the respective portions shown in the drawings are not drawn in an actual proportional relationship for the convenience of description.
The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the application, its application, or uses.
Techniques, methods, and apparatus known to those of ordinary skill in the relevant art may not be discussed in detail but are intended to be part of the specification where appropriate.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, further discussion thereof is not required in subsequent figures.
In addition, technical solutions between the various embodiments of the present application may be combined with each other, but it must be based on the realization of the technical solutions by a person skilled in the art, and when the technical solutions are contradictory or cannot be realized, such a combination of technical solutions should be considered to be absent and not within the protection scope of the present application.
It should be noted that all directional indicators (such as upper, lower, left, right, front, and rear) in the embodiments of the present application are only used to explain the relative positional relationship, motion situation, etc. between components in a specific posture (as shown in the drawings); if the specific posture changes, the directional indication changes accordingly.
A method for performing image detection according to an exemplary embodiment of the present application is described below with reference to fig. 1 to 5. It should be noted that the following application scenarios are merely illustrated for the convenience of understanding the spirit and principles of the present application, and the embodiments of the present application are not limited in this respect. Rather, embodiments of the present application may be applied to any scenario where applicable.
Fig. 1 shows a schematic diagram of an exemplary system architecture 100 to which the image detection method or the image detection apparatus of an embodiment of the present application may be applied.
As shown in fig. 1, the system architecture 100 may include one or more of terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 serves as a medium for providing communication links between the terminal devices 101, 102, 103 and the server 105. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation. For example, server 105 may be a server cluster comprised of multiple servers, or the like.
The user may use the terminal devices 101, 102, 103 to interact with the server 105 via the network 104 to receive or send messages or the like. The terminal devices 101, 102, 103 may be various electronic devices having a display screen, including but not limited to smart phones, tablet computers, portable computers, desktop computers, and the like.
The terminal apparatuses 101, 102, 103 in the present application may be terminal apparatuses that provide various services. For example, a user acquires an image to be detected through the terminal device 103 (which may also be the terminal device 101 or 102), where the image to be detected includes a first human body part image of a target user; feature recognition is performed on the first human body part image based on a preset first image recognition model to determine a first modification degree corresponding to the first human body part image, which represents the image modification degree of the first human body part of the target user; and when the first modification degree is detected to satisfy a first preset condition, a detection result corresponding to the image to be detected is displayed.
It should be noted that the method for image detection provided in the embodiments of the present application may be executed by one or more of the terminal devices 101, 102, and 103, and/or the server 105, and accordingly, the apparatus for image detection provided in the embodiments of the present application is generally disposed in the corresponding terminal device, and/or the server 105, but the present application is not limited thereto.
The application also provides an image detection method, an image detection device, a target terminal and a medium.
Fig. 2 schematically shows a flow chart of a method of image detection according to an embodiment of the present application. As shown in fig. 2, the method is applied to a client, and includes:
s101, obtaining an image to be detected, wherein the image to be detected comprises a first human body part image of a target user.
First, it should be noted that, in the present application, the device for acquiring the image to be detected is not specifically limited, and may be, for example, an intelligent device or a server. The intelligent device may be a PC (Personal Computer), a smart phone, a tablet computer, an e-book reader, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a portable computer, a mobile terminal device with a display function, or the like.
Similarly, the image to be detected is not specifically limited beyond being an image that includes a first human body part image of the target user. For example, when the first human body part is the face, the image to be detected is an image containing an image of the user's face; when the first human body part is a leg, the image to be detected is an image containing an image of the user's leg.
The first human body part image is not particularly limited in the present application, and may be, for example, a face image, a limb image, a head image, or a torso image. In addition, the number of first human body part images is not particularly limited and may be, for example, one or more.
In addition, it should be further noted that there are various ways of acquiring the image to be detected in the present application. For example, when a detection instruction of a target application, such as a social application program, is received, the image to be detected containing the body part image and sent by the social application program may be received. The detection instruction may be generated based on an interaction operation of the corresponding user in the target application, such as a social application, or may be executed automatically by the social application. In addition, when the occurrence of another preset event is detected, the image to be detected may be acquired from another subject, and so on.
S102, performing feature recognition on the first human body part image based on a preset first image recognition model, and determining a first embellishment degree corresponding to the first human body part image, wherein the first embellishment degree is used for representing the image embellishment degree of the first human body part of the target user.
Further, after the image to be detected including the first human body part image of the target user is acquired, in order to address the problem in the related art that images transmitted by users are excessively modified by retouching software and the like, which affects image authenticity, the present application may use a preset neural network detection model to perform feature recognition on the first human body part image, so as to determine whether the first modification degree corresponds to excessive modification.
In one possible way, the image may be modified by retouching software such as PS (Photoshop). Image modification refers to the process by which a user beautifies, alters, repairs, or splices a picture, so as to achieve aims such as attractiveness and entertainment.
The first image recognition model is not specifically limited in the present application; it may be, for example, a Convolutional Neural Network (CNN). Convolutional neural networks are a class of feed-forward neural networks containing convolutional calculations and having a deep structure, and are one of the representative algorithms of deep learning. A convolutional neural network has a representation learning capability and can perform translation-invariant classification of input information according to its hierarchical structure. Owing to its powerful feature characterization capability for images, the CNN has achieved remarkable results in fields such as image classification, target detection, and semantic segmentation.
Further, the present application may use a CNN model to detect the feature information of the first human body part image in the image to be detected, and then perform feature recognition on the first human body part image to determine the first modification degree corresponding to the first human body part image. The first human body part image is input into the preset convolutional neural network model, and the output of the last fully connected (FC) layer of the model is taken as the recognition result of the feature data corresponding to the first human body part image.
For example, suppose the first human body part image is a face image, and fig. 3a and 3b illustrate two self-portrait images of the same user. As can be seen, fig. 3a is an unmodified self-portrait image transmitted by the user, while fig. 3b is a self-portrait image transmitted by the user that has been heavily modified using retouching software. Accordingly, when fig. 3a and fig. 3b are each input into the preset first image recognition model, the first modification degrees output for the two face images should differ. In one approach, the first modification degree corresponding to the face image of fig. 3a should be less than that corresponding to the face image of fig. 3b, indicating that fig. 3a has been modified to a lesser degree than fig. 3b.
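The recognition step of S102 can be sketched as follows. The model here is a pure-Python stand-in for a trained CNN: the name `retouch_net`, the mid-gray pixel heuristic, and the 0-1 degree scale are all assumptions for illustration, not the patent's actual model, which would be a trained convolutional network whose last FC layer emits the degree.

```python
# Sketch of S102: feed the first body-part image to a recognition model and
# read off a modification degree. `retouch_net` is a stand-in for a trained
# CNN (assumption); a real model would run convolutions over the crop.

def retouch_net(image_pixels):
    """Stand-in for a trained CNN: maps an image (2-D list of grayscale
    values in 0..255) to a modification degree in [0, 1]."""
    # The stub scores how far the pixel distribution departs from a
    # mid-gray baseline, as a crude proxy for heavy editing.
    flat = [p for row in image_pixels for p in row]
    mean = sum(flat) / len(flat)
    return min(1.0, abs(mean - 128) / 128)

def first_modification_degree(body_part_image):
    """S102: feature recognition on the first human body part image."""
    return retouch_net(body_part_image)

natural = [[120, 130], [125, 135]]      # pixels near mid-gray
blown_out = [[250, 255], [252, 254]]    # heavily brightened, a retouch cue
assert first_modification_degree(natural) < first_modification_degree(blown_out)
```

In use, the degree returned here is what S103 then tests against the first preset condition.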
S103, when the first modification degree is detected to meet the first preset condition, displaying a detection result corresponding to the image to be detected.
Further, after the first modification degree corresponding to the first human body part image is determined, whether the first modification degree satisfies the preset condition can be detected. The first preset condition is not specifically limited; for example, it may be a condition that the first modification degree corresponds to excessive modification, or a condition that the first modification degree corresponds to not being excessively modified.
In addition, the method for displaying the detection result of the image to be detected is not particularly limited in the present application, and for example, a corresponding prompt may be generated on a display frame of a display screen, or a prompt may be performed in a manner of sending information.
In one mode, the detection result corresponding to the image to be detected may be "over-modified", "not over-modified", "unable to be determined", or the like. In another embodiment, the detection result may be "10% modification degree", "50% modification degree", or "80% modification degree". This is not a limitation of the present application.
In the present application, after an image to be detected that includes a first human body part image of a target user is acquired, feature recognition is performed on the first human body part image based on a preset first image recognition model to determine a first modification degree, which represents the image modification degree of the first human body part of the target user; when the first modification degree is detected to satisfy a first preset condition, a detection result corresponding to the image to be detected is displayed. By applying the technical scheme of the present application, after an image to be detected containing a human body part image of a user is obtained, a pre-trained neural network detection model can be used to judge whether the image has been excessively modified, thereby avoiding the problem in the related art of unrealistic images caused by over-modification of user images.
Optionally, in a possible implementation manner of the present application, in S102 (determining the first embellishment degree corresponding to the first human body part image), the following steps may be performed:
when an image to be detected is obtained, performing feature recognition on the first human body part image based on a first image recognition model to obtain a first modification degree recognition result;
and when the first modification degree recognition result is determined to correspond to the recognition failure, sending the image to be detected to the server.
Further, after sending the image to be detected to the server, the method further includes:
and receiving a second embellishment degree recognition result sent by the server, and taking the second embellishment degree recognition result as a first embellishment degree corresponding to the first human body part image, wherein the second embellishment degree recognition result is a embellishment degree result generated by the server according to the image to be detected.
Further, after the image to be detected is acquired, feature recognition can be performed on the first human body part image in the image to be detected by using the preset image recognition model to obtain a corresponding recognition result. In one mode, when the recognition result can indicate whether the image to be detected is a modified image, the corresponding detection result can be output directly. When the recognition result cannot indicate whether the image to be detected is a modified image (corresponding to recognition failure), the image to be detected can be sent to the server, so that the server continues to perform feature recognition on the first human body part image based on its image recognition model to obtain a corresponding detection result. The second modification degree recognition result sent by the server is then taken as the first modification degree of the image to be detected.
In the embodiment of the present application, different image detection models may be configured in the server and the client, respectively, for example, an image detection model with a large data architecture may be configured in the server, and an image detection model with a small data architecture may be configured in the client.
It can be understood that when the client cannot identify the image to be detected to obtain the corresponding modification degree, the image can be sent to the server, and the server performs feature recognition on the image to be detected again using the image detection model with the larger data architecture. Therefore, even when the client cannot identify the modification degree of the image to be detected, the server's image detection model can still perform feature recognition on it.
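The client/server fallback described above can be sketched as follows; the function names, the `None` failure sentinel, and the dict-based image stand-ins are illustrative assumptions, not the patent's interfaces.

```python
# Sketch of the client/server split: the client runs a small on-device model
# first, and falls back to the server's larger model only when local
# recognition fails. All names and the sentinel are assumptions.

RECOGNITION_FAILED = None  # sentinel for "first recognition result: failed"

def client_recognize(image):
    """Small on-device model: gives up on images it cannot score."""
    if image.get("too_complex"):       # e.g. local confidence too low
        return RECOGNITION_FAILED
    return image.get("degree", 0.0)

def server_recognize(image):
    """Stand-in for the server's larger model (second recognition result)."""
    return image.get("degree", 0.0)

def modification_degree(image):
    degree = client_recognize(image)
    if degree is RECOGNITION_FAILED:
        # Recognition failed locally: send the image to the server and
        # adopt its result as the first modification degree.
        degree = server_recognize(image)
    return degree

assert modification_degree({"degree": 0.3}) == 0.3                       # local path
assert modification_degree({"degree": 0.7, "too_complex": True}) == 0.7  # server path
```

The design choice mirrors the text: one small model keeps the common case on-device, while the heavier model is consulted only on failure.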
Optionally, in a possible implementation manner of the present application, after S101 (acquiring the image to be detected), the following steps may be implemented:
based on a preset recognition strategy, feature recognition is performed on the image to be detected using an image segmentation model to obtain at least one human body part image of the target user in the image to be detected, where the recognition strategy is used to determine the type of human body part.
Further, in the process of obtaining the first human body part image, one mode is that feature recognition can be performed on the image to be detected through a preset image segmentation model, so that one or more corresponding human body part images are obtained.
It should be noted that the recognition strategy is not specifically limited in the present application; for example, the human body image of the corresponding part may be selected according to a prompt message of the social software. For example, friend-making software may need to acquire a human body part image including the face, whereas sports software may require an image of the user's limbs. Further, even within the same social software, the recognition strategy may be set differently depending on the purpose of the image to be detected. For example, when the purpose of identifying the modification degree of the image to be detected is to facilitate making friends between users, a human body part image including the face needs to be acquired from the image to be detected. Alternatively, when the purpose is to display a fitness result for a certain user, a human body part image including the limbs needs to be acquired from the image to be detected.
In another embodiment, one or more corresponding human body part images may of course be acquired based on an instruction specified by the user.
In addition, when a plurality of human body part images are obtained through the image segmentation model, the plurality of images may be of the same human body part or of different human body parts.
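The recognition-strategy step above can be sketched as a simple filter over the segmentation model's output; the part labels, policy names, and `select_parts` function are illustrative assumptions.

```python
# Sketch of the recognition strategy: the segmentation model yields several
# body-part crops, and the strategy (driven by the app's purpose) decides
# which part types to keep. Labels and policy names are assumptions.

def select_parts(segmented_parts, policy):
    """segmented_parts: list of (part_type, crop) pairs from the
    segmentation model; policy: which part types this app needs."""
    wanted = {"friend_making": {"face"}, "fitness": {"limb", "torso"}}[policy]
    return [crop for part_type, crop in segmented_parts if part_type in wanted]

parts = [("face", "crop_a"), ("limb", "crop_b"), ("torso", "crop_c")]
assert select_parts(parts, "friend_making") == ["crop_a"]       # face only
assert select_parts(parts, "fitness") == ["crop_b", "crop_c"]   # limbs/torso
```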
Optionally, in a possible implementation manner of the present application, after acquiring at least one body part image of the target user in the image to be detected, the following steps may be further implemented:
determining that a first human body part image of the at least one human body part image comprises a face image of the target user;
based on a preset first image recognition model, carrying out feature recognition on the first human body part image, and determining a first modification degree corresponding to the first human body part image, wherein the feature recognition comprises the following steps:
detecting the matching degree of a left half region and a right half region of the face image based on the face part parameters;
and determining a first embellishment degree corresponding to the first human body part image based on the matching degree of the left half area and the right half area of the face image.
Furthermore, after the image to be detected is acquired, the first image recognition model can be used to perform feature recognition on the face image to obtain the face part parameters of the user, and whether the left and right regions of the face image are symmetrical is judged according to the face part parameters.
It is understood that, for example, when the left and right regions of the face image of the user are severely asymmetric, the image to be detected may be an excessively modified image.
Optionally, in a manner of determining a first embellishment degree corresponding to a first human body part image based on a matching degree of a left half region and a right half region of a face image, the method may include:
determining a first embellishment degree corresponding to the first human body part image based on whether left and right eye features of the face image are matched, wherein the eye features correspond to at least one of size features, color features and contour features;
and/or,
determining a first embellishment degree corresponding to the first human body part image based on whether left and right cheek features of the face image are matched, wherein the cheek features correspond to at least one of size features, color features and contour features;
and/or,
and determining a first embellishment degree corresponding to the first human body part image based on whether the left and right eyebrow features of the face image are matched, wherein the eyebrow features correspond to at least one of size features, color features and contour features.
Further, in the process of determining whether the left and right regions of the face image of the user are symmetrical, one possible way is to determine the first embellishment degree corresponding to the first human body part image by determining whether the left and right eye features are matched, whether the left and right cheek features are matched, and whether the left and right eyebrow features are matched. Wherein the feature may be at least one of a size feature, a color feature, and a contour feature.
As shown in fig. 3c, the left and right eye features of the user are completely different in size and color density. This situation may occur because the left and right eye regions were not modified consistently when the user decorated the image in post-processing. Therefore, according to the face feature parameters determined by the neural network model, the present application can obtain the result that the left and right eye features of the user do not match, and the first modification degree of the first human body part image may thus be generated to correspond to a detection result of excessive modification.
Further, regarding whether the left and right eyebrow features match, as shown in fig. 3d, the left and right eyebrow features of the user are completely different in size and color intensity. This situation may likewise occur because the left and right eyebrow regions were not modified consistently in post-processing. Therefore, according to the facial feature parameters determined by the neural network model, the present application can obtain the result that the left and right eyebrow features of the user do not match, and the first modification degree of the first human body part image may thus be generated to correspond to a detection result of excessive modification.
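The left/right matching check above can be sketched as follows; the scalar feature values, the relative-difference metric, and the 0.3 tolerance are assumptions for illustration, not values from the patent.

```python
# Sketch of the symmetry check: compare size, color, and contour features of
# paired facial regions (eyes, cheeks, eyebrows) and treat a large mismatch
# as a sign of over-retouching. Metric and tolerance are assumptions.

def features_match(left, right, tolerance=0.3):
    """left/right: dicts with 'size', 'color', 'contour' scalars."""
    for key in ("size", "color", "contour"):
        ref = max(abs(left[key]), abs(right[key]), 1e-9)
        if abs(left[key] - right[key]) / ref > tolerance:
            return False
    return True

def symmetry_modification_degree(face_pairs):
    """face_pairs: {'eye': (L, R), 'cheek': (L, R), 'eyebrow': (L, R)}.
    Returns the fraction of mismatched pairs as the modification degree."""
    mismatched = sum(1 for l, r in face_pairs.values() if not features_match(l, r))
    return mismatched / len(face_pairs)

eye_l = {"size": 40, "color": 0.8, "contour": 1.0}
eye_r = {"size": 70, "color": 0.3, "contour": 1.0}   # like fig. 3c: very unequal
cheek = {"size": 50, "color": 0.5, "contour": 1.0}
brow = {"size": 20, "color": 0.6, "contour": 1.0}
degree = symmetry_modification_degree(
    {"eye": (eye_l, eye_r), "cheek": (cheek, cheek), "eyebrow": (brow, brow)})
assert degree > 0  # the mismatched eye pair raises the degree
```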
Optionally, after acquiring at least one human body part image of a target user in an image to be detected, the following steps may be further implemented:
determining that a first human body part image of the at least one human body part image comprises a face image of the target user;
based on a preset first image recognition model, carrying out feature recognition on the first human body part image, and determining a first modification degree corresponding to the first human body part image, wherein the feature recognition comprises the following steps:
acquiring facial feature parameters corresponding to the face image based on the face part parameters;
generating size ratios of the facial features of the face image based on the facial feature parameters corresponding to the face image;
and determining the first modification degree corresponding to the first human body part image according to the size ratios of the facial features of the face image.
Further, in the process of obtaining the first modification degree of the user's face image according to the face part parameters, another possible way is to determine the size of each facial feature in the face image according to the facial feature parameters, thereby generating the size ratios of the facial features of the face image.
It can be understood that, when post-processing an image, users often prefer to scale specific facial parts, such as enlarging the eyes, shrinking the mouth, or thinning the eyebrows. Therefore, whether a certain facial feature in the user's face image is too large or too small can be determined according to preset size proportions of the facial features of a human face, and the first modification degree corresponding to the first human body part image can then be determined accordingly.
Optionally, after acquiring at least one human body part image of a target user in an image to be detected, the following steps may be further implemented:
determining that a first human body part image of the at least one human body part image comprises a limb image of the target user;
based on a preset first image recognition model, carrying out feature recognition on the first human body part image, and determining a first modification degree corresponding to the first human body part image, wherein the feature recognition comprises the following steps:
performing feature recognition on the limb image based on the first image recognition model to obtain a limb part parameter;
acquiring at least one of size characteristics, color characteristics and contour characteristics corresponding to the limb part based on the limb part parameters;
and determining a first embellishment degree corresponding to the first human body part image based on a comparison result of at least one of the size characteristic, the color characteristic and the outline characteristic corresponding to the limb part and a preset limb characteristic.
Further, in the process of determining whether the first human body part image is excessively embellished, another possible way is to determine the first embellishment degree corresponding to the first human body part image by means of the limb part parameters. Wherein the limb portion parameter may correspond to at least one of a size feature, a color feature, and a contour feature.
It can be understood that, when post-processing an image, users often prefer to scale specific limb parts, such as stretching the legs, thinning the waist, or widening the shoulders. Alternatively, the user may prefer to set a specific limb part to a particular color, such as making the legs whiter or the face bronze.
Therefore, whether the user's limb image departs too far from the limb characteristics of a conventional human body (for example, too long, too thin, too white, too dark, or too wide) can be determined according to the preset limb characteristics, and the first modification degree corresponding to the first human body part image can then be determined accordingly.
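The limb comparison above can be sketched as follows; the two measured features and their "conventional body" ranges are illustrative assumptions only.

```python
# Sketch of the limb check: compare measured limb features against preset
# "conventional body" ranges and flag large departures (over-stretched legs,
# over-whitened skin, ...). All ranges are assumptions for illustration.

# Assumed conventional ranges: leg length / body height, skin brightness 0..255.
PRESET_LIMB = {"leg_ratio": (0.40, 0.55), "skin_brightness": (60, 220)}

def limb_over_modified(leg_ratio, skin_brightness):
    """True when any measured limb feature leaves its preset range."""
    lo, hi = PRESET_LIMB["leg_ratio"]
    blo, bhi = PRESET_LIMB["skin_brightness"]
    return not (lo <= leg_ratio <= hi) or not (blo <= skin_brightness <= bhi)

assert not limb_over_modified(0.48, 150)   # plausible proportions and tone
assert limb_over_modified(0.65, 150)       # legs stretched far beyond range
assert limb_over_modified(0.48, 245)       # skin whitened past the range
```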
Optionally, after acquiring at least one human body part image of a target user in an image to be detected, the following steps may be further implemented:
when the modification degree of a first human body part image in the at least one human body part image is determined to exceed a preset standard based on the first modification degree, acquiring a third human body part image from the at least one human body part image of the target user;
based on the first image recognition model, performing feature recognition on the third human body part image, and determining a third modification degree corresponding to the third human body part image;
and when the third modification degree is detected to meet a third preset condition, displaying a detection result corresponding to the image to be detected.
Further, in the present application, when it is detected that the modification degree of the first human body part image, among the at least one human body part image obtained by segmentation in advance, exceeds the preset standard, a third human body part image is further acquired from the plurality of human body part images obtained by the image segmentation model, in order to confirm whether the image to be detected is indeed excessively modified. Based on the first image recognition model, feature recognition is performed on the third human body part image, and a third modification degree corresponding to the third human body part image is determined.
In addition, the third human body part image is not particularly limited in the present application, and for example, the third human body part image may be the same human body part image as the first human body part image or may be a different human body part image. For example, when the first body part image is a face image, the third body part image may be a limb image, a leg image, a torso image, or the like.
Further, taking the first human body part image as a face image and the third human body part image as a leg image as an example, when the modification degree is determined to exceed the preset standard based on the first modification degree corresponding to the face image in the image to be detected, it can be determined that the image is possibly excessively modified. To avoid detection errors that would affect the user experience, the leg image of the image to be detected can be further acquired, and feature recognition can be performed on the leg image again using the preset first image recognition model to determine the third modification degree corresponding to the leg image.
It can be understood that when the third modification degree still indicates that the modification degree of the image to be detected exceeds the preset standard, the image to be detected can be further determined to be an excessively modified image. When the third modification degree does not indicate that the modification degree exceeds the preset standard, the detection result corresponding to the image to be detected can be determined by, for example, performing feature recognition on the first human body part image again.
It should be noted that, the third preset condition is not specifically limited in this application, that is, the third preset condition may be any condition. In addition, the third preset condition may be the same as the first preset condition, or the third preset condition may be different from the first preset condition.
Optionally, in another possible implementation manner of the present application, after S101 (acquiring the image to be detected), the following steps may be implemented:
analyzing the image to be detected to obtain a brightness parameter corresponding to the image to be detected, wherein the brightness parameter is used for reflecting the brightness of the image to be detected;
and determining a first modification degree corresponding to the first human body part image based on the brightness parameter corresponding to the image to be detected.
Further, in the process of determining the first modification degree of the first human body part image, another possible way is to judge whether the image is excessively modified through the brightness parameter of the image to be detected.
It can be understood that, in the process of later decorating the image by the user, the brightness of a part of or the whole area of the image is often regulated, so that the aim of covering or highlighting a part of the human body part image is fulfilled. For example, to adjust the brightness of the image too high or too low, etc. Therefore, whether the brightness parameter corresponding to the image to be detected meets the condition of belonging to the normal brightness parameter range or not can be determined according to the preset brightness parameter range, and if the brightness parameter belongs to the normal brightness parameter range, the first modification degree of the first human body part image can be determined to correspond to the normal modification. If not, it may be determined that the first embellishment degree of the first human body part image corresponds to an over-embellishment or the like.
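The brightness check described above can be sketched as follows. The Rec. 601 luma weights and the "normal" range (60, 200) are illustrative assumptions; the application only states that *some* normal brightness parameter range is preset.

```python
# Sketch of the brightness-based modification check; weights and the
# normal range are assumptions, not values fixed by the application.

def mean_luminance(pixels):
    """Average perceived brightness (0-255) of a flat list of (r, g, b)."""
    total = sum(0.299 * r + 0.587 * g + 0.114 * b for r, g, b in pixels)
    return total / len(pixels)

def brightness_based_degree(pixels, normal_range=(60, 200)):
    """First modification degree from brightness alone: 'normal' when
    the mean luminance lies inside the preset range, 'excessive'
    otherwise (brightness pushed too high or too low)."""
    lo, hi = normal_range
    return "normal" if lo <= mean_luminance(pixels) <= hi else "excessive"
```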
Further optionally, in S103 (when it is detected that the first modification degree satisfies the first preset condition, displaying a detection result corresponding to the first modification degree), the method includes:
when the modification degree of the first human body part image is determined to exceed a preset standard based on the first modification degree, acquiring a second human body part image of the target user by using a preset image segmentation model;
based on the first image recognition model, performing feature recognition on the second human body part image, and determining a second modification degree corresponding to the second human body part image;
and when the second modification degree is detected to meet a second preset condition, displaying a detection result corresponding to the image to be detected.
Further, in the present application, when it is detected that the modification degree of the first human body part image in the image to be detected exceeds the preset standard, it is further determined whether the image to be detected is an excessively modified image. A second human body part image present in the image to be detected is further acquired by using a preset image segmentation model, and feature recognition is performed on the second human body part image based on the first image recognition model, so as to determine a second modification degree corresponding to the second human body part image.
In addition, the second human body part image is not particularly limited in the present application, and for example, the second human body part image may be the same human body part image as the first human body part image or may be a different human body part image. For example, when the first body part image is a face image, the second body part image may be a limb image, a leg image, a torso image, or the like.
Further, taking the first human body part image as a face image and the second human body part image as a leg image as an example, when it is determined, based on the first modification degree corresponding to the face image in the image to be detected, that the modification degree of the face image exceeds the preset standard, it can only be determined that the image is possibly an excessively modified image. To avoid detection errors that would affect the user experience, the leg image in the image to be detected can be further obtained by using the preset image segmentation model, and feature recognition can be performed on the leg image again by using the preset first image recognition model, so as to determine the second modification degree corresponding to the leg image.
It is understood that when the second modification degree also indicates that the modification degree of the image to be detected exceeds the preset standard, the image to be detected can be further determined to be an excessively modified image. When the second modification degree does not indicate that the modification degree exceeds the preset standard, the detection result corresponding to the image to be detected can be determined by, for example, performing feature recognition on the first human body part image again.
It should be noted that the second preset condition is not specifically limited in this application, that is, the second preset condition may be any condition. In addition, the second preset condition may be the same as the third preset condition and the first preset condition, or the second preset condition may be different from the third preset condition and the first preset condition.
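The two-stage flow above (score a first part, and only segment and score a second part as confirmation) could be sketched as follows. The 0.5 threshold and the return labels are illustrative stand-ins for the preset conditions, which the application deliberately leaves open.

```python
def detect_with_second_part(first_degree, score_second_part, threshold=0.5):
    """Two-stage detection: a second body part is segmented and scored
    only when the first part already exceeds the preset standard,
    which reduces false positives from judging a single part.

    `score_second_part` stands in for "segment the second part and run
    the first image recognition model on it" and is only called when
    actually needed.
    """
    if first_degree <= threshold:           # first preset condition not met
        return "not excessive"
    if score_second_part() > threshold:     # second preset condition met
        return "excessive"
    return "undetermined"                   # e.g. re-check the first part
```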
Optionally, after acquiring at least one human body part image of a target user in an image to be detected, the following steps may be further implemented:
acquiring a part type corresponding to a first human body part image in the at least one human body part image, wherein the part type corresponds to at least one of a face image, a limb image, a head image and a trunk image;
acquiring at least one corresponding modification threshold value based on the part type corresponding to the first human body part image;
comparing the first modification degree with at least one modification degree threshold value to obtain a corresponding comparison result;
and when the comparison result meets a first preset condition, displaying a detection result corresponding to the image to be detected.
In the embodiment of the present application, the part type corresponding to the first human body part may be determined first; for example, the part type may correspond to at least one of a face image, a limb image, a head image and a torso image. Different modification degree thresholds can then be obtained according to the different part types, and corresponding comparison results can be obtained against those thresholds.
The modification degree thresholds corresponding to the respective parts may be partially or completely the same, or may be different, which is not limited in the present application.
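A minimal sketch of the per-part threshold lookup and comparison follows. The threshold values and the table name `PART_THRESHOLDS` are purely illustrative, since the application does not fix them.

```python
# Hypothetical per-part-type thresholds; the application leaves the
# actual values (and how many thresholds each part has) open.
PART_THRESHOLDS = {
    "face":  [0.3, 0.6],   # e.g. "mild" and "excessive" cut-offs
    "limb":  [0.4, 0.7],
    "head":  [0.3, 0.6],
    "torso": [0.5, 0.8],
}

def compare_degree(part_type, degree):
    """Compare the first modification degree against each threshold
    registered for this part type; the comparison result is how many
    thresholds the degree exceeds."""
    thresholds = PART_THRESHOLDS[part_type]
    return sum(degree > t for t in thresholds)
```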
Optionally, in S101 (acquiring an image to be detected), the method may include:
acquiring target video data;
selecting sub video data positioned in a target playing time period in the target video data based on a preset rule;
and acquiring an image to be detected according to the sub-video data.
Further optionally, obtaining an image to be detected according to the sub-video data includes:
acquiring all key frame images in the sub-video data, and sequencing all the key frame images in sequence based on the display parameters of the target user in each key frame image, wherein the display parameters are used for reflecting the size and the definition of the human body part of the target user;
and taking the key frame images in the preset ranking range in the sequenced key frame images as the images to be detected.
Further, in one way of acquiring the image to be detected, the present application can obtain the image based on acquired target video data. Specifically, the sub-video data located in the target playing time period may be selected from the acquired target video data. The sub-video data is not specifically limited in the present application and may be, for example, any segment of the target video data.
In addition, all key frame (I-frame) images in the sub-video data can be obtained, and all the key frame images can be sorted according to a preset sorting rule, where the sorting rule can be based on display parameters reflecting the size and definition of the human body of the target user. It can be understood that the clearer the human body part in a key frame image, the earlier that image can be ranked; conversely, the smaller the human body part in a key frame image, the later it can be ranked, and so on.
Furthermore, after the I frames are sorted, a target number of I frame images can be selected from the I frame images as the images to be detected in the application.
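The key-frame ranking step might look like the following sketch, where the field names `sharpness` and `body_area` are assumed stand-ins for the display parameters reflecting definition and size of the target user's body part.

```python
def frames_to_detect(key_frames, top_n=3):
    """Rank key (I-)frames by the target user's display parameters --
    sharper and larger body parts first -- and keep the frames inside
    the preset ranking range (here: the first `top_n`).

    Each frame is a dict with assumed fields `sharpness` (0-1) and
    `body_area` (pixel count of the target user's body part).
    """
    ranked = sorted(key_frames,
                    key=lambda f: (f["sharpness"], f["body_area"]),
                    reverse=True)
    return ranked[:top_n]
```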
Further optionally, before acquiring the image to be detected, the method further includes:
and receiving a detection instruction generated by the social application program, wherein the detection instruction is used for detecting the image modification degree of the image to be detected.
Fig. 4 schematically shows a flow chart of a method of image detection according to an embodiment of the present application. As shown in fig. 4, the method is applied to a server, and includes:
s201, obtaining an image to be detected, wherein the image to be detected comprises a first human body part image of a target user.
First, it should be noted that, in the present application, the device for acquiring the image to be detected is not specifically limited and may be, for example, an intelligent device or a server. The intelligent device may be a PC (Personal Computer), a smart phone, a tablet PC, an e-book reader, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a portable computer, a mobile terminal device with a display function, or the like.
Similarly, the image to be detected is not specifically limited, that is, the image is an image including a first human body part image of the target user. For example, when the first human body part is a face part, the image to be detected is an image including an image of the face of the user. And when the first human body part is the leg human body part, the image to be detected is an image containing the leg image of the user.
The first human body part image is not particularly limited in the present application, and may be, for example, a face image, a limb image, a head image, and a torso image. In addition, the number of the first human body part images is not particularly limited, and may be, for example, one or a plurality.
In addition, it should be noted that, in the present application, there are various ways to obtain the image to be detected, for example, the image to be detected including the body part image sent by the social application program may be received when the detection instruction of the social application program is received. It is also possible to acquire the image to be detected transmitted by other subjects when the occurrence of other preset events is detected, and the like.
S202, based on a preset second image recognition model, carrying out feature recognition on the first human body part image, and determining a third embellishment degree corresponding to the first human body part image, wherein the third embellishment degree is used for representing the image embellishment degree of the first human body part of the target user.
Further, after the image to be detected including the first human body part image of the target user is acquired, in order to solve the problem in the related art that the authenticity of images transmitted by users is affected by excessive modification with retouching software and the like, the present application can utilize a preset neural network detection model to perform feature recognition on the first human body part image, so as to determine whether the third modification degree corresponds to excessive modification.
Image modification here refers to retouching performed by the user with retouching software such as PS (Photoshop), that is, the process of beautifying, altering, repairing and splicing a picture to achieve aims such as attractiveness and entertainment.
The second image recognition model is not specifically limited in the present application and may be, for example, a Convolutional Neural Network (CNN). Convolutional neural networks are a class of feed-forward neural networks containing convolutional computations and having a deep structure, and are one of the representative algorithms of deep learning. A convolutional neural network has representation learning capability and can perform shift-invariant classification of input information according to its hierarchical structure. Owing to its powerful capability of characterizing image features, the CNN has achieved remarkable results in fields such as image classification, object detection and semantic segmentation.
Further, the present application can use a CNN model to detect the feature information of the first human body part image in the image to be detected, perform feature recognition on the first human body part image, and determine the third modification degree corresponding to the first human body part image. The first human body part image is input into the preset convolutional neural network model, and the output of the last fully connected (FC) layer of the model is taken as the recognition result of the feature data corresponding to the first human body part image.
For example, the first human body part image is a face image, and fig. 3a and 3b illustrate two self-portrait images of the same user. As can be seen, fig. 3a is an unmodified self-portrait image transmitted by the user, and fig. 3b is a self-portrait image that the user has heavily retouched with retouching software before transmitting it. When fig. 3a and fig. 3b are both input into the preset second image recognition model, the third modification degrees output for the two face images should differ: in one mode, the third modification degree corresponding to the face image of fig. 3a should be smaller than that corresponding to the face image of fig. 3b, indicating that the face image of fig. 3a is modified to a lesser degree than that of fig. 3b.
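As a toy illustration of taking the last FC layer's output as the result, the sketch below projects a feature vector (standing in for the convolutional stages' output) onto a single logit and squashes it to a score. A single-logit sigmoid head is only one possible design, not necessarily the model used here.

```python
import math

def modification_degree(features, weights, bias=0.0):
    """Toy stand-in for the model's last fully connected (FC) layer:
    project the feature vector onto one logit and squash it to a
    modification score in (0, 1). Weights here are hypothetical; a
    trained model would have learned them."""
    logit = sum(f * w for f, w in zip(features, weights)) + bias
    return 1.0 / (1.0 + math.exp(-logit))
```

With such a head, the retouched image of fig. 3b would simply be expected to score higher than the unretouched fig. 3a.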
S203, when the third modification degree is detected to meet the first preset condition, displaying a detection result corresponding to the image to be detected.
Further, after determining the third modification degree corresponding to the first human body part image, the present application can detect whether the third modification degree satisfies a preset condition. The first preset condition is not specifically limited; for example, it may be a condition that the third modification degree corresponds to excessive modification, or a condition that the third modification degree corresponds to no excessive modification.
In addition, the method for displaying the detection result of the image to be detected is not particularly limited in the present application, and for example, a corresponding prompt may be generated on a display frame of a display screen, or a prompt may be performed in a manner of sending information.
In one mode, the detection result corresponding to the image to be detected may be "over-modified", "not over-modified", "unable to be determined", or the like. In another embodiment, the detection result may be "10% modification degree", "50% modification degree", or "80% modification degree". This is not a limitation of the present application.
According to the method and the device, after the image to be detected, which comprises the first human body part image of the target user, is obtained, feature recognition is carried out on the first human body part image based on a preset second image recognition model, the third modification degree used for representing the image modification degree of the first human body part of the target user is determined, and when the third modification degree is detected to meet the first preset condition, the detection result corresponding to the image to be detected is displayed. By applying the technical scheme of the application, after the image to be detected containing the human body part image of the user is obtained, whether the image is excessively modified or not can be judged by utilizing the pre-trained neural network detection model. Thereby avoiding the problem of unreal images caused by over-decoration of the user images in the related art.
Optionally, in a possible implementation manner of the present application, before S201 (acquiring the image to be detected), the following steps may be implemented:
obtaining a first number of unmodified sample images, wherein each unmodified sample image comprises at least one body part image of a user;
sample image modification is carried out on the first number of unmodified sample images to obtain a second number of modified sample images, and the sample image modification corresponds to one or more human body part images in the unmodified sample images;
and training a preset convolution neural model by using the unmodified sample image and the modified sample image to obtain a second image recognition model meeting the preset condition.
Furthermore, before feature recognition is performed on the first human body part image by using the second image recognition model, the second image recognition model needs to be trained first. Specifically, a certain number of unmodified sample images each including at least one human body part image of a user are acquired, and modified sample images are obtained by retouching the unmodified sample images with retouching software or the like. A basic blank convolutional neural network model is then trained with the plurality of unmodified sample images and modified sample images, so as to obtain a second image recognition model satisfying the preset condition.
The first number is not particularly limited in the present application, and may be one or a plurality of numbers, for example. Similarly, the present application does not specifically limit the second number, and may include one or more than one. In addition, the first number may be the same as or different from the second number in the present application.
For example, when the first number is 3 and the human body part images are leg images and face images, the present application may acquire 3 unmodified sample images in which neither the leg image nor the face image is modified. When sample image modification is performed on these 3 unmodified sample images, the leg image and the face image may be modified simultaneously, yielding 3 (the second number) modified sample images. Alternatively, only the leg image may be modified (yielding 3 modified sample images) and only the face image may be modified (yielding another 3 modified sample images), in which case 6 (the second number) modified sample images are obtained.
Still further, the present application may identify, through a neural network image detection model, a sample feature (for example, a face part feature, a hand part feature, a leg part feature, and the like) of at least one object included in the user sample image. Furthermore, the neural network image detection model may classify each sample feature in the sample image, and classify the sample features belonging to the same category into human body parts of the same type, so that a plurality of sample features obtained after semantic segmentation of the sample image may be sample features composed of a plurality of different types.
It should be noted that, when the neural network image classification model performs semantic segmentation processing on the sample image, the more accurate the classification of the pixel points in the sample image is, the higher the accuracy rate of identifying the labeled object in the sample image is. It should be noted that the preset condition may be set by a user.
For example, the preset condition may be set as: the classification accuracy of the pixel points reaches 70% or more. The neural network image detection model is then repeatedly trained with the plurality of sample images (unmodified sample images and modified sample images), and when the classification accuracy of the model on the pixel points reaches 70% or more, the trained model can be applied in the embodiment of the present application to perform image feature recognition on the first human body part image.
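The construction of the paired training set described above can be sketched as follows, where `modify_fn` stands in for the retouching software applied to each unmodified sample.

```python
def build_training_set(unmodified_images, modify_fn):
    """Build the paired set described above: each of the first-number
    unmodified samples is labelled 0 (unmodified), and `modify_fn`
    (standing in for retouching software) yields a modified copy
    labelled 1, giving the second-number modified samples."""
    dataset = [(img, 0) for img in unmodified_images]
    dataset += [(modify_fn(img), 1) for img in unmodified_images]
    return dataset
```

Variants that retouch only one body part per copy, as in the example above, would simply call `modify_fn` once per part and yield a larger second number.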
Further optionally, after obtaining the second image recognition model meeting the preset condition, the method further includes:
performing model compression on the second image recognition model to obtain a first image recognition model;
and sending the first image recognition model to the client.
Further, after the second image recognition model is obtained, in order to avoid the drawback that the second image recognition model cannot run on a terminal device because of its overly large data architecture, the present application can also perform model compression on the model, so as to obtain a first image recognition model with a smaller data architecture.
The second image recognition model may be compressed directly, which may include, for example, two aspects: sparsifying the model kernels and cropping (pruning) the model. Sparsifying the kernels requires the support of sparse computation libraries, and the acceleration effect may be limited by many factors such as bandwidth and sparsity. Cropping the model, by contrast, directly removes unimportant filter parameters from the original model. Because neural networks are highly adaptive and models with large data architectures tend to be redundant, the performance lost by removing some parameters can be recovered by retraining. Therefore, with a suitable cropping strategy and retraining scheme, the existing model can be effectively compressed to a large extent, which is the most commonly used method at present.
Further optionally, acquiring an image to be detected, where the image to be detected includes a first human body part of the target user, includes:
receiving an image to be detected sent by a client corresponding to a server, wherein the image to be detected contains a first human body part of a target user, the image to be detected sent by the client is an image of which the client cannot determine the modification degree, and/or the image is segmented by the client by using an image segmentation model.
Furthermore, after the original second image recognition model with a larger data structure and the compressed first image recognition model are obtained, different neural network detection models can be used for image recognition according to different images to be detected in a targeted manner. For example, after the image to be detected is acquired, the client may perform feature recognition on a first human body part image in the image to be detected by using a first image recognition model with a smaller data architecture, so as to obtain a corresponding recognition result.
In one mode, when the recognition result can indicate whether the image to be detected is a decorated image, the corresponding detection result can be directly output. And when the recognition result can not indicate whether the image to be detected is a modified image, the modified image can be sent to the server, the server continues to perform feature recognition on the human body part image sent by the client based on the second image recognition model, so that a corresponding detection result is obtained, and the modification degree detection result is sent to the client.
In another mode, the server may directly receive the image which is sent by the client and is segmented by the image segmentation model, and the server continues to perform feature recognition on the human body part image sent by the client based on the second image recognition model, so as to obtain a corresponding detection result, and send the modification degree detection result to the client.
Optionally, in a possible implementation manner of the present application, after S201 (acquiring the image to be detected), the following steps may be implemented:
based on a preset identification strategy, the image to be detected is subjected to feature identification by using an image segmentation model, at least one human body part image of a target user in the image to be detected is obtained, and the identification strategy is used for determining the type of the human body part.
Further, in the process of obtaining the first human body part image, one mode is that feature recognition can be performed on the image to be detected through a preset image segmentation model, so that one or more corresponding human body part images are obtained.
It should be noted that the recognition strategy is not specifically limited in the present application. For example, the human body image of the corresponding part may be selected according to prompt information of the social software: friend-making software may need to acquire a human body part image including the face, while sports software may need to acquire a human body part image of the limbs. In another mode, one or more corresponding human body part images may of course be acquired based on an instruction specified by the user.
In addition, when there are a plurality of body region images obtained by the image segmentation model, the plurality of images may be the same body region image or different body region images.
Optionally, in a possible implementation manner of the present application, after acquiring at least one body part image of the target user in the image to be detected, the following steps may be further implemented:
determining that a first human body part image of the at least one human body part image comprises a facial image of a target user;
based on a preset second image recognition model, carrying out feature recognition on the first human body part image, and determining a third modification degree corresponding to the first human body part image, wherein the third modification degree comprises the following steps:
detecting the matching degree of a left half region and a right half region of the face image based on the face part parameters;
and determining a third embellishment degree corresponding to the first human body part image based on the matching degree of the left half area and the right half area of the face image.
Furthermore, after the image to be detected is obtained, the second image recognition model can be utilized to perform feature recognition on the face image to obtain face part parameters of the user, and whether the left area and the right area of the face image are symmetrical or not is judged according to the face part parameters.
It is understood that, for example, when the left and right regions of the face image of the user are severely asymmetric, the image to be detected may be an excessively modified image.
Optionally, in a manner of determining a third embellishment degree corresponding to the first human body part image based on a matching degree of a left half region and a right half region of the face image, the method may include:
determining a third embellishment degree corresponding to the first human body part image based on whether left and right eye features of the face image are matched, wherein the eye features correspond to at least one of size features, color features and contour features;
and/or,
determining a third embellishment degree corresponding to the first human body region image based on whether left and right cheek features of the face image are matched, wherein the cheek features correspond to at least one of size features, color features and contour features;
and/or,
and determining a third embellishment degree corresponding to the first human body part image based on whether the left and right eyebrow features of the face image are matched, wherein the eyebrow features correspond to at least one of size features, color features and contour features.
Further, in the process of determining whether the left and right regions of the face image of the user are symmetrical, one possible way is to determine the third embellishment degree corresponding to the first human body part image by determining whether the left and right eye features are matched, whether the left and right cheek features are matched, and whether the left and right eyebrow features are matched. Wherein the feature may be at least one of a size feature, a color feature, and a contour feature.
As shown in fig. 3c, based on whether the left and right eye features match, it can be seen that the left and right eye features of the user are completely different in size and color density; this situation may arise because the left and right eye regions were not modified consistently when the user retouched the image at a later stage. Therefore, according to the facial feature parameters determined by the neural network model, the present application can conclude that the left and right eye features of the user do not match, and a detection result indicating that the third modification degree of the first human body part image corresponds to excessive modification may be generated.
Further, based on whether the left and right eyebrow features match, as shown in fig. 3d, it can be seen that the left and right eyebrow features of the user are completely different in size and color intensity; this situation may likewise arise because the left and right eyebrow regions were not modified consistently during retouching. Therefore, according to the facial feature parameters determined by the neural network model, the present application can conclude that the left and right eyebrow features of the user do not match, and a detection result indicating that the third modification degree of the first human body part image corresponds to excessive modification may be generated.
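A simplified version of the left/right matching check follows; the feature keys and the 15% relative tolerance are assumptions for illustration only.

```python
def halves_match(left_feats, right_feats, tolerance=0.15):
    """Report whether every left/right feature pair (e.g. eye size,
    eyebrow colour density, cheek contour length) agrees within a
    relative tolerance; a mismatch suggests inconsistent retouching
    of the two halves of the face."""
    for key, lv in left_feats.items():
        rv = right_feats[key]
        if abs(lv - rv) > tolerance * max(abs(lv), abs(rv), 1e-9):
            return False
    return True
```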
Optionally, after acquiring at least one human body part image of a target user in an image to be detected, the following steps may be further implemented:
determining that a first human body part image of the at least one human body part image comprises a facial image of a target user;
based on a preset second image recognition model, carrying out feature recognition on the first human body part image, and determining a third modification degree corresponding to the first human body part image, wherein the third modification degree comprises the following steps:
acquiring facial feature parameters corresponding to the facial image based on the facial part parameters;
generating the size ratio of the five sense organs of the face image based on the parameters of the five sense organs corresponding to the face image;
and determining a third modification degree corresponding to the first human body part image according to the size ratio of the five sense organs of the face image.
Further, in the process of obtaining the third modification degree of the face image of the user according to the face part parameters, another possible way is to determine the size of the five sense organs in the face image according to the parameters of the five sense organs of the face. Thereby generating the size ratio of the five sense organs of the face image.
It can be understood that when post-editing an image, a user often prefers to scale specific facial features, such as enlarging the eyes, shrinking the mouth or thinning the eyebrows. Therefore, based on preset size ratios of human facial features, it can be determined whether any facial feature in the user's face image is too large or too small, and the third modification degree corresponding to the first human body part image can then be determined accordingly.
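The size-ratio check described above could be sketched as follows; the ratio names and the "natural" ranges are illustrative assumptions, not values from the patent:

```python
# Hypothetical sketch: check whether facial-feature size ratios fall inside
# preset "natural" ranges. The ranges below are illustrative assumptions.

NATURAL_RATIO_RANGES = {
    "eye_to_face": (0.02, 0.08),    # eye area / face area
    "mouth_to_face": (0.03, 0.10),  # mouth area / face area
}

def feature_ratios(face_area, eye_area, mouth_area):
    """Compute per-feature area ratios relative to the whole face."""
    return {
        "eye_to_face": eye_area / face_area,
        "mouth_to_face": mouth_area / face_area,
    }

def ratio_modification_degree(ratios):
    """Flag excessive modification if any ratio leaves its preset range."""
    for name, value in ratios.items():
        lo, hi = NATURAL_RATIO_RANGES[name]
        if not (lo <= value <= hi):
            return "excessive"
    return "normal"
```

For example, an eye-to-face ratio of 0.15 would fall outside the assumed 0.02..0.08 range and be flagged, matching the "enlarged eyes" case in the text.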
Optionally, after acquiring at least one human body part image of a target user in an image to be detected, the following steps may be further implemented:
determining that a first human body part image of the at least one human body part image comprises a limb image of a target user;
performing feature recognition on the first human body part image based on a preset second image recognition model, and determining a third modification degree corresponding to the first human body part image, which includes the following steps:
performing feature recognition on the limb image based on the second image recognition model to obtain limb part parameters;
acquiring at least one of a size feature, a color feature and a contour feature corresponding to the limb part based on the limb part parameters;
and determining a third modification degree corresponding to the first human body part image based on a result of comparing at least one of the size feature, color feature and contour feature of the limb part with preset limb features.
Further, in the process of determining whether the first human body part image is excessively embellished, another possible way is to determine the third embellishment degree corresponding to the first human body part image by means of the limb part parameters. Wherein the limb portion parameter may correspond to at least one of a size feature, a color feature, and a contour feature.
It can be understood that when post-editing an image, a user often prefers to scale specific limb parts, such as lengthening the legs, thinning the waist or widening the shoulders. Alternatively, the user may prefer to recolor a specific limb part, for example whitening the legs or giving the face a bronze tone.
Therefore, the present application can determine, according to preset limb features, whether the user's limb image deviates too far from conventional human limb features (for example, being too long, too thin, too white, too dark or too wide), and then determine the third modification degree corresponding to the first human body part image accordingly.
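The limb-feature comparison above might be sketched like this; the feature names and the preset ranges are illustrative assumptions:

```python
# Hypothetical sketch: compare recognized limb features against preset
# conventional ranges; the feature names and ranges are assumptions.

PRESET_LIMB_FEATURES = {
    "leg_length_ratio": (0.45, 0.60),  # leg length / body height
    "skin_brightness": (0.25, 0.85),   # normalized 0..1
}

def limb_modification_degree(measured):
    """Return 'excessive' when any measured feature leaves its preset range."""
    for name, value in measured.items():
        lo, hi = PRESET_LIMB_FEATURES[name]
        if not (lo <= value <= hi):
            return "excessive"
    return "normal"
```

A leg-length ratio of 0.70 would exceed the assumed upper bound and be flagged, corresponding to the "too long" case in the text.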
Optionally, after acquiring at least one human body part image of a target user in an image to be detected, the following steps may be further implemented:
when it is determined, based on the third modification degree, that the modification degree of a first human body part image in the at least one human body part image exceeds the preset standard, acquiring a third human body part image from the at least one human body part image of the target user;
based on the second image recognition model, performing feature recognition on the third human body part image, and determining a fourth modification degree corresponding to the third human body part image;
and when the fourth modification degree is detected to meet the third preset condition, displaying a detection result corresponding to the image to be detected.
Further, in the present application, when it is detected that the modification degree of the first human body part image among the at least one human body part image obtained by prior segmentation exceeds the preset standard, the image to be detected is judged as possibly excessively modified. A third human body part image is then acquired from the plurality of human body part images produced by the image segmentation model, feature recognition is performed on it based on the second image recognition model, and a fourth modification degree corresponding to the third human body part image is determined.
In addition, the third human body part image is not particularly limited in the present application, and for example, the third human body part image may be the same human body part image as the first human body part image or may be a different human body part image. For example, when the first body part image is a face image, the third body part image may be a limb image, a leg image, a torso image, or the like.
Further, taking the first human body part image as a face image and the third human body part image as a leg image as an example: when it is determined, based on the third modification degree corresponding to the face image in the image to be detected, that the modification degree of the first human body part image exceeds the preset standard, the image may be judged as possibly excessively modified. To avoid detection errors that would harm the user experience, the leg image of the image to be detected can be further acquired, and feature recognition performed on it again with the preset second image recognition model to determine the fourth modification degree corresponding to the leg image.
It can be understood that when the fourth modification degree still indicates that the modification degree of the image to be detected exceeds the preset standard, the image to be detected can be conclusively determined to be excessively modified. When the fourth modification degree does not indicate that the preset standard is exceeded, the detection result corresponding to the image to be detected can be determined by, for example, performing feature recognition on the first human body part image again.
It should be noted that the third preset condition is not specifically limited in this application; that is, it may be any condition. It may be the same as or different from the first preset condition.
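The cascaded verification described in this variant (re-checking a further body part only when the first one exceeds the preset standard) might be sketched as follows; `recognize` is a hypothetical stand-in for the second image recognition model, returning a modification score in [0, 1], and the 0.7 threshold is an assumption:

```python
# Hypothetical sketch of the two-stage check: a second body part is only
# re-examined when the first part's score exceeds the preset standard.

def detect(parts, recognize, threshold=0.7):
    """parts: ordered list of body-part images; recognize(part) -> score in [0, 1]."""
    first_score = recognize(parts[0])
    if first_score <= threshold:
        return "normal"
    # The first part looks over-modified; verify against another part
    # to avoid false positives that would harm the user experience.
    for part in parts[1:]:
        if recognize(part) > threshold:
            return "excessive"
    return "normal"
```

The design choice here follows the text: a single over-threshold part is treated only as a suspicion, and the final "excessive" verdict requires confirmation from at least one further part.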
Optionally, in another possible implementation manner of the present application, after S201 (acquiring the image to be detected), the following steps may be implemented:
analyzing the image to be detected to obtain a brightness parameter corresponding to the image to be detected, wherein the brightness parameter is used for reflecting the brightness of the image to be detected;
and determining a third modification degree corresponding to the first human body part image based on the brightness parameter corresponding to the image to be detected.
Further, in determining the third modification degree of the first human body part image, another possible way is to determine whether the image is excessively modified according to the brightness parameter of the image to be detected.
It can be understood that when post-editing an image, a user often adjusts the brightness of part or all of the image in order to conceal or highlight part of a human body part image, for example by setting the image brightness too high or too low. Therefore, based on a preset brightness parameter range, it can be determined whether the brightness parameter corresponding to the image to be detected falls within the normal range. If it does, the third modification degree of the first human body part image can be determined to correspond to normal modification; if not, it can be determined to correspond to excessive modification.
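The brightness check could be sketched as follows; the normal range 0.15..0.90 is an illustrative assumption:

```python
# Hypothetical sketch: classify an image's modification degree from its
# mean brightness. The normal range (0.15..0.90) is an assumption.

def mean_brightness(pixels):
    """pixels: iterable of grayscale values normalized to 0..1."""
    pixels = list(pixels)
    return sum(pixels) / len(pixels)

def brightness_modification_degree(pixels, lo=0.15, hi=0.90):
    """Return 'normal' when mean brightness lies inside the preset range."""
    b = mean_brightness(pixels)
    return "normal" if lo <= b <= hi else "excessive"
```

An image washed out to near-white (mean brightness close to 1.0) would fall outside the assumed range and be flagged, matching the "brightness too high" case in the text.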
Further optionally, S203 (displaying a detection result corresponding to the image to be detected when it is detected that the third modification degree satisfies the first preset condition) includes:
when the modification degree of the first human body part image is determined to exceed the preset standard based on the third modification degree, acquiring a second human body part image of the target user by using a preset image segmentation model;
based on the second image recognition model, performing feature recognition on the second human body part image, and determining a fourth modification degree corresponding to the second human body part image;
and when the fourth modification degree is detected to meet the second preset condition, displaying a detection result corresponding to the image to be detected.
Further, in the present application, when it is detected that the modification degree of the first human body part image in the image to be detected exceeds the preset standard, the image to be detected is judged as possibly excessively modified. A second human body part image present in the image to be detected is then acquired with a preset image segmentation model, and feature recognition is performed on it based on the second image recognition model to determine the fourth modification degree corresponding to the second human body part image.
In addition, the second human body part image is not particularly limited in the present application, and for example, the second human body part image may be the same human body part image as the first human body part image or may be a different human body part image. For example, when the first body part image is a face image, the second body part image may be a limb image, a leg image, a torso image, or the like.
Further, taking the first human body part image as a face image and the second human body part image as a leg image as an example: when it is determined, based on the third modification degree corresponding to the face image in the image to be detected, that the modification degree of the first human body part image exceeds the preset standard, the image may be judged as possibly excessively modified. To avoid detection errors that would harm the user experience, the leg image of the image to be detected can be further acquired with the preset image segmentation model, and feature recognition performed on it again with the preset second image recognition model to determine the fourth modification degree corresponding to the leg image.
It can be understood that when the fourth modification degree still indicates that the modification degree of the image to be detected exceeds the preset standard, the image to be detected can be conclusively determined to be excessively modified. When the fourth modification degree does not indicate that the preset standard is exceeded, the detection result corresponding to the image to be detected can be determined by, for example, performing feature recognition on the first human body part image again.
It should be noted that the second preset condition is not specifically limited in this application, that is, the second preset condition may be any condition. In addition, the second preset condition may be the same as the third preset condition and the first preset condition, or the second preset condition may be different from the third preset condition and the first preset condition.
Optionally, after acquiring at least one human body part image of a target user in an image to be detected, the following steps may be further implemented:
acquiring a part type corresponding to a first human body part image in at least one human body part image, wherein the part type corresponds to at least one of a face image, a limb image, a head image and a trunk image;
acquiring at least one corresponding modification threshold value based on the part type corresponding to the first human body part image;
comparing the third modification degree with at least one modification degree threshold value to obtain a corresponding comparison result;
and when the comparison result meets a first preset condition, displaying a detection result corresponding to the image to be detected.
In the embodiment of the present application, the part type corresponding to the first human body part may be determined first; for example, the part type may be at least one of a face image, a limb image, a head image and a torso image. Different modification degree thresholds can then be obtained for different part types, and corresponding comparison results obtained against those thresholds.
The modification degree thresholds corresponding to the various parts may be partially or completely the same or different; this is not limited in the present application.
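A minimal sketch of the per-part-type threshold lookup described above; the part names and threshold values are illustrative assumptions:

```python
# Hypothetical sketch: look up a modification-degree threshold by part type
# and compare. The threshold values are illustrative assumptions.

PART_THRESHOLDS = {
    "face": 0.6,
    "limb": 0.7,
    "torso": 0.8,
}

def exceeds_threshold(part_type, modification_degree):
    """True when the degree exceeds the threshold preset for this part type."""
    return modification_degree > PART_THRESHOLDS[part_type]
```

Keeping a separate threshold per part type lets the detector be stricter about, say, faces than torsos, which is the point of the paragraph above.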
Optionally, in S201 (acquiring an image to be detected), the method may include:
acquiring target video data;
selecting sub video data positioned in a target playing time period in the target video data based on a preset rule;
and acquiring an image to be detected according to the sub-video data.
Further optionally, obtaining an image to be detected according to the sub-video data includes:
acquiring all key frame images in the sub-video data, and sequencing all the key frame images in sequence based on the display parameters of the target user in each key frame image, wherein the display parameters are used for reflecting the size and the definition of the human body part of the target user;
and taking the key frame images in the preset ranking range in the sequenced key frame images as the images to be detected.
Further, one way in which the present application acquires the image to be detected is from acquired target video data. Specifically, sub-video data located in a target playing time period may be selected from the obtained target video data. The sub-video data is not limited in this application; it may, for example, be any segment of the target video data.
In addition, all key frames (I-frames) in the sub-video data may be obtained and sorted according to a preset sorting rule, where the rule may be based on display parameters reflecting the size and clarity of the target user's body. It can be understood that the clearer the body part in a key frame, the earlier that frame may be ranked; conversely, the smaller the body part in a key frame, the later it may be ranked.
Furthermore, after the I-frames are sorted, a target number of I-frame images can be selected from them as the images to be detected in the present application.
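The key-frame ranking step could be sketched as follows; the score weighting between body size and clarity is an illustrative assumption:

```python
# Hypothetical sketch: rank sub-video key frames by how large and how sharp
# the target user appears, then keep the top-ranked frames for detection.
# The equal 0.5/0.5 score weighting is an illustrative assumption.

def rank_key_frames(frames, top_n=3):
    """frames: list of dicts with 'body_size' and 'clarity' values in [0, 1]."""
    scored = sorted(
        frames,
        key=lambda f: 0.5 * f["body_size"] + 0.5 * f["clarity"],
        reverse=True,
    )
    return scored[:top_n]
```

Frames where the target user appears larger and sharper score higher and end up in the "preset ranking range" that becomes the set of images to be detected.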
Further optionally, before acquiring the image to be detected, the method further includes:
and receiving a detection instruction generated by the social application program, wherein the detection instruction is used for detecting the image modification degree of the image to be detected.
As shown in fig. 5, which is a schematic flow chart of image detection applied to a server provided by the present application: first, a first number of unmodified sample images are obtained; sample image modification is performed on them to obtain a second number of modified sample images; then a preset image semantic segmentation model is trained using the unmodified and modified sample images to obtain a second image recognition model satisfying a preset condition.
Further, in an embodiment of the present application, the second image recognition model may be subjected to model compression to obtain the first image recognition model, and the first image recognition model is sent to the client. And performing feature recognition on the first human body part image based on the second image recognition model to obtain a recognition result, and performing feature recognition on the first human body part image based on the first image recognition model when the recognition result is determined to correspond to a second preset condition.
In another embodiment of the present application, after the image to be detected is obtained, the feature of the first human body part image may be directly identified based on the preset second image identification model, the third embellishment degree corresponding to the first human body part image is determined, and when it is detected that the third embellishment degree satisfies the first preset condition, the detection result corresponding to the image to be detected is displayed.
Furthermore, the method and the device can also acquire a second human body part image of the target user when the modification degree of the first human body part image is determined to exceed the preset standard based on the third modification degree, perform feature recognition on the second human body part image based on the second image recognition model, determine a fourth modification degree corresponding to the second human body part image, and display a detection result corresponding to the image to be detected when the fourth modification degree is detected to meet the third preset condition.
In the method and the device, after the image to be detected, which comprises the first human body part image of the target user, is obtained, feature recognition is carried out on the first human body part image based on a preset first image recognition model, a first modification degree used for representing the image modification degree of the first human body part of the target user is determined, and when the first modification degree is detected to meet a first preset condition, a detection result corresponding to the image to be detected is displayed. By applying the technical scheme of the application, after the image to be detected containing the human body part image of the user is obtained, whether the image is excessively modified or not can be judged by utilizing the pre-trained neural network detection model. Thereby avoiding the problem of unreal images caused by over-decoration of the user images in the related art.
In another embodiment of the present application, as shown in fig. 6, the present application further provides an image detection apparatus. The device is applied to the client and comprises a first obtaining module 301, a first determining module 302 and a first displaying module 303, wherein,
a first obtaining module 301, configured to obtain an image to be detected, where the image to be detected includes a first human body part image of a target user;
a first determining module 302, configured to perform feature recognition on the first human body part image based on a preset first image recognition model, and determine a first embellishment degree corresponding to the first human body part image, where the first embellishment degree is used to represent an image embellishment degree for a first human body part of the target user;
the first display module 303 is configured to display a detection result corresponding to the image to be detected when it is detected that the first modification degree satisfies a first preset condition.
In the method and the device, after the image to be detected, which comprises the first human body part image of the target user, is obtained, feature recognition is carried out on the first human body part image based on a preset first image recognition model, a first modification degree used for representing the image modification degree of the first human body part of the target user is determined, and when the first modification degree is detected to meet a first preset condition, a detection result corresponding to the image to be detected is displayed. By applying the technical scheme of the application, after the image to be detected containing the human body part image of the user is obtained, whether the image is excessively modified or not can be judged by utilizing the pre-trained neural network detection model. Thereby avoiding the problem of unreal images caused by over-decoration of the user images in the related art.
In another embodiment of the present application, the first obtaining module 301 further includes:
a first obtaining module 301, configured to, when the image to be detected is obtained, perform feature recognition on the first human body part image based on the first image recognition model to obtain a first embellishment degree recognition result;
and when the first modification degree recognition result is determined to correspond to the recognition failure, sending the image to be detected to a server.
The first obtaining module 301 is configured to receive a second embellishment degree identification result sent by the server, and use the second embellishment degree identification result as a first embellishment degree corresponding to the first human body part image, where the second embellishment degree identification result is a embellishment degree result generated by the server according to the image to be detected.
A first obtaining module 301, configured to perform feature recognition on the image to be detected by using an image segmentation model based on a preset recognition strategy, and obtain at least one human body part image of the target user in the image to be detected, where the recognition strategy is used to determine a type of a human body part.
In another embodiment of the present application, the first obtaining module 301 further includes:
a first obtaining module 301 configured to determine that a first human body part image of the at least one human body part image includes a face image of the target user;
the performing feature recognition on the first human body part image based on a preset first image recognition model to determine a first embellishment degree corresponding to the first human body part image includes:
performing feature recognition on the face image based on the first image recognition model to obtain face part parameters;
detecting the matching degree of a left half region and a right half region of the face image based on the face part parameters;
and determining a first embellishment degree corresponding to the first human body part image based on the matching degree of the left half area and the right half area of the face image.
A first obtaining module 301, configured to determine a first embellishment degree corresponding to the first human body part image based on whether left and right eye features of the face image match, where the eye features correspond to at least one of a size feature, a color feature, and a contour feature;
and/or,
a first obtaining module 301 configured to determine a first embellishment degree corresponding to the first human body part image based on whether left and right cheek features of the face image match, the cheek features corresponding to at least one of a size feature, a color feature, and a contour feature;
and/or,
a first obtaining module 301, configured to determine a first embellishment degree corresponding to the first human body part image based on whether left and right eyebrow features of the face image match, the eyebrow features corresponding to at least one of a size feature, a color feature and a contour feature.
A first obtaining module 301 configured to determine that the first human body part image includes a face image of the target user;
the first obtaining module 301, configured to perform feature recognition on the first human body part image based on a preset first image recognition model, and determine a first embellishment degree corresponding to the first human body part image, includes:
a first obtaining module 301 configured to acquire facial feature parameters corresponding to the face image based on the facial part parameters;
a first obtaining module 301 configured to generate a size ratio of the facial features of the face image based on the facial feature parameters corresponding to the face image;
a first obtaining module 301, configured to determine a first embellishment degree corresponding to the first human body part image according to the size ratio of the facial features of the face image.
In another embodiment of the present application, the first obtaining module 301 further includes:
a first obtaining module 301 configured to determine that a first human body part image of the at least one human body part image includes a limb image of the target user;
the first obtaining module 301, configured to perform feature recognition on the first human body part image based on a preset first image recognition model, and determine a first embellishment degree corresponding to the first human body part image, includes:
a first obtaining module 301, configured to perform feature recognition on the limb image based on the first image recognition model to obtain a limb part parameter;
a first obtaining module 301 configured to obtain at least one of a size feature, a color feature and a contour feature corresponding to the limb part based on the limb part parameter;
a first obtaining module 301, configured to determine a first embellishment degree corresponding to the first human body part image based on a comparison result between at least one of a size feature, a color feature and a contour feature corresponding to the limb part and a preset limb feature.
In another embodiment of the present application, the first obtaining module 301 further includes:
the first obtaining module 301 is configured to analyze the image to be detected to obtain a brightness parameter corresponding to the image to be detected, where the brightness parameter is used to reflect the brightness of the image to be detected;
a first obtaining module 301 configured to determine a first embellishment degree corresponding to the first human body part image based on the brightness parameter corresponding to the image to be detected.
In another embodiment of the present application, the first obtaining module 301 further includes:
a first obtaining module 301, configured to obtain a second human body part image of the target user by using a preset image segmentation model when it is determined that the modification degree of the first human body part image exceeds a preset standard based on the first modification degree;
a first obtaining module 301, configured to perform feature recognition on the second human body part image based on the first image recognition model, and determine a second embellishment degree corresponding to the second human body part image;
the first obtaining module 301 is configured to display a detection result corresponding to the image to be detected when it is detected that the second modification degree meets a second preset condition.
In another embodiment of the present application, the first obtaining module 301 further includes:
a first obtaining module 301 configured to obtain a third human body part image from the at least one human body part image of the target user when it is determined that the degree of modification of a first human body part image of the at least one human body part image exceeds a preset standard based on the first degree of modification;
a first obtaining module 301, configured to perform feature recognition on the third human body part image based on the first image recognition model, and determine a third embellishment degree corresponding to the third human body part image;
the first obtaining module 301 is configured to, when it is detected that the third modification degree meets a third preset condition, display a detection result corresponding to the image to be detected.
In another embodiment of the present application, the first obtaining module 301 further includes:
a first obtaining module 301 configured to obtain a part type corresponding to a first human body part image of the at least one human body part image, the part type corresponding to at least one of a face image, a limb image, a head image, and a torso image;
a first obtaining module 301 configured to obtain at least one corresponding threshold value of the embellishment degree based on a part type corresponding to the first human body part image;
a first obtaining module 301, configured to compare the first embellishment degree with the at least one embellishment degree threshold to obtain a corresponding comparison result;
the first obtaining module 301 is configured to display a detection result corresponding to the image to be detected when the comparison result meets the first preset condition.
In another embodiment of the present application, the first obtaining module 301 further includes:
a first obtaining module 301 configured to obtain target video data;
the first obtaining module 301 is configured to select, based on a preset rule, sub video data located in a target playing time period in the target video data;
a first obtaining module 301, configured to obtain the image to be detected according to the sub-video data.
In another embodiment of the present application, the first obtaining module 301 further includes:
a first obtaining module 301, configured to obtain all key frame images in the sub-video data, and sequentially sort all the key frame images based on a display parameter of the target user in each key frame image, where the display parameter is used to reflect a size and a definition of a human body part of the target user;
the first obtaining module 301 is configured to take a key frame image located in a preset ranking range in the sorted key frame images as the image to be detected.
In another embodiment of the present application, the first obtaining module 301 further includes:
a first obtaining module 301 configured to receive a detection instruction generated by a social application, where the detection instruction is used to perform image modification degree detection on the image to be detected.
In another embodiment of the present application, as shown in fig. 7, the present application further provides an apparatus for image detection. The device, which is applied to the server side, includes a second obtaining module 304, a second determining module 305, and a second displaying module 306, wherein,
a second obtaining module 304, configured to obtain an image to be detected, where the image to be detected includes a first human body part of a target user;
a second determining module 305, configured to perform feature recognition on the first human body part image based on a preset second image recognition model, and determine a first embellishment degree corresponding to the first human body part image, where the first embellishment degree is used for representing an image embellishment degree of a first human body part of the target user;
and the second display module 306 is configured to display a detection result corresponding to the image to be detected when it is detected that the first embellishment degree satisfies a first preset condition.
In the method and apparatus of the present application, after the image to be detected, which includes the first human body part image of the target user, is obtained, feature recognition is performed on the first human body part image based on a preset first image recognition model to determine a first modification degree representing the image modification degree of the first human body part of the target user, and when the first modification degree is detected to meet a first preset condition, a detection result corresponding to the image to be detected is displayed. By applying the technical solution of the present application, after an image to be detected containing a human body part image of a user is obtained, a pre-trained neural network detection model can be used to judge whether the image has been excessively modified, thereby avoiding the problem in the related art of unrealistic images caused by over-modification of user images.
In another embodiment of the present application, the second obtaining module 304 further includes:
a second obtaining module 304 configured to obtain a first number of unmodified sample images, wherein each unmodified sample image comprises at least one body part image of a user;
a second obtaining module 304, configured to perform sample image modification on the first number of unmodified sample images to obtain a second number of modified sample images, where the sample image modification corresponds to one or more human body part images in the unmodified sample images;
a second obtaining module 304, configured to train a preset convolutional neural model with the unmodified sample image and the modified sample image, so as to obtain the second image recognition model meeting a preset condition.
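The training-data preparation above (a first number of unmodified samples, a second number of modified samples derived from them) can be sketched as follows. This is a hypothetical rendering: the "modification" is a stand-in brightness boost on toy grayscale grids, whereas a real system would apply actual retouching operations and feed the labeled pairs to a convolutional neural network.

```python
def modify_sample(image, gain=1.3):
    """Stand-in for sample image modification (e.g. whitening/smoothing)."""
    return [[min(255, int(px * gain)) for px in row] for row in image]

def build_training_set(unmodified_images):
    """Pair each unmodified sample (label 0) with a modified one (label 1)."""
    dataset = [(img, 0) for img in unmodified_images]
    dataset += [(modify_sample(img), 1) for img in unmodified_images]
    return dataset

toy_images = [[[100, 120], [140, 160]], [[10, 20], [30, 40]]]
dataset = build_training_set(toy_images)
print(len(dataset))  # 4: two unmodified samples plus two modified counterparts
```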
In another embodiment of the present application, the second obtaining module 304 further includes:
a second obtaining module 304, configured to perform model compression on the second image recognition model, so as to obtain a first image recognition model;
a second obtaining module 304 configured to send the first image recognition model to a client.
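One common way to compress a server-side model into a lighter client-side model is weight quantization; the sketch below shows simple per-tensor 8-bit quantization as an illustration only. The patent does not specify the compression technique, and production systems would typically use a toolchain (post-training quantization, pruning, distillation) rather than this hand-rolled version.

```python
def quantize_weights(weights):
    """Map float weights to int8 values plus a per-tensor scale factor."""
    scale = max(abs(w) for w in weights) / 127.0 or 1.0
    quantized = [round(w / scale) for w in weights]
    return quantized, scale

def dequantize_weights(quantized, scale):
    """Recover approximate float weights on the client."""
    return [v * scale for v in quantized]

weights = [0.5, -1.27, 0.0, 0.9]
q, scale = quantize_weights(weights)
restored = dequantize_weights(q, scale)
print(q)  # [50, -127, 0, 90]
print(max(abs(a - b) for a, b in zip(weights, restored)) < scale)  # True: error stays below one step
```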
In another embodiment of the present application, the second obtaining module 304 further includes:
the second obtaining module 304 is configured to receive an image to be detected sent by a client corresponding to the server, where the image to be detected includes a first human body part of a target user, and the image to be detected sent by the client is an image whose modification degree the client cannot determine, and/or an image segmented by the client using an image segmentation model.
In another embodiment of the present application, the second obtaining module 304 further includes:
a second obtaining module 304, configured to perform feature recognition on the image to be detected by using an image segmentation model, and obtain at least one human body part image of the target user in the image to be detected.
In another embodiment of the present application, the second obtaining module 304 further includes:
a second obtaining module 304 configured to determine that a first human body part image of the at least one human body part image includes a face image of the target user;
the second obtaining module 304 is configured to perform feature recognition on the first human body part image based on a preset first image recognition model, and determine a first embellishment degree corresponding to the first human body part image, including:
a second obtaining module 304, configured to perform feature recognition on the facial image based on the first image recognition model, so as to obtain facial part parameters;
a second obtaining module 304, configured to detect a matching degree of a left half region and a right half region of the face image based on the face part parameter;
a second obtaining module 304, configured to determine a first embellishment degree corresponding to the first human body part image based on a matching degree of a left half region and a right half region of the face image.
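The left/right matching step can be illustrated as follows: mirror the right half of the face region and compare it pixel-wise with the left half, since heavy one-sided retouching tends to break natural symmetry. The matching metric (mean absolute difference mapped into [0, 1]) and the threshold are assumptions for illustration, not the patent's recognition model.

```python
def halves_matching_degree(face):
    """Matching degree of left and mirrored right halves, in [0, 1]."""
    width = len(face[0])
    half = width // 2
    diffs = []
    for row in face:
        left = row[:half]
        right = row[width - half:][::-1]  # right half, mirrored
        diffs += [abs(a - b) for a, b in zip(left, right)]
    return 1.0 - sum(diffs) / (len(diffs) * 255.0)

def embellishment_from_matching(matching, threshold=0.8):
    """Map low symmetry to a higher suspected modification degree."""
    return max(0.0, (threshold - matching) / threshold)

symmetric_face = [[10, 20, 20, 10], [30, 40, 40, 30]]
print(halves_matching_degree(symmetric_face))  # 1.0
print(embellishment_from_matching(0.0))  # 1.0: fully asymmetric face
```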
In another embodiment of the present application, the second obtaining module 304 further includes:
a second obtaining module 304, configured to determine a third embellishment degree corresponding to the first human body part image based on whether left and right eye features of the face image match, the eye features corresponding to at least one of a size feature, a color feature and a contour feature;
and/or,
a second obtaining module 304 configured to determine a third embellishment degree corresponding to the first human body part image based on whether left and right cheek features of the face image match, the cheek features corresponding to at least one of a size feature, a color feature, and a contour feature;
and/or,
a second obtaining module 304, configured to determine a third embellishment degree corresponding to the first human body part image based on whether left and right eyebrow features of the face image match, the eyebrow features corresponding to at least one of a size feature, a color feature and a contour feature.
In another embodiment of the present application, the second obtaining module 304 further includes:
a second obtaining module 304 configured to determine that a first human body part image of the at least one human body part image includes a face image of the target user;
the second obtaining module 304 is configured to perform feature recognition on the first human body part image based on a preset first image recognition model, and determine a first embellishment degree corresponding to the first human body part image, including:
a second obtaining module 304, configured to obtain parameters of five sense organs corresponding to the facial image based on the facial part parameters;
a second obtaining module 304, configured to generate a size ratio of the facial features of the facial image based on the parameters of the facial features corresponding to the facial image;
a second obtaining module 304, configured to determine a first embellishment degree corresponding to the first human body part image according to a size ratio of five sense organs of the face image.
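The facial-proportion check above can be sketched like this: derive size ratios of the facial features from the recognized part parameters and compare them against typical ranges, with ratios outside the range raising the suspected modification degree. The reference ranges and feature names below are illustrative assumptions.

```python
TYPICAL_RATIOS = {
    "eye_to_face_width": (0.18, 0.24),    # one eye is roughly 1/5 of face width
    "nose_to_face_height": (0.28, 0.38),
}

def feature_ratios(parts):
    """Size ratios of the facial features derived from part parameters."""
    return {
        "eye_to_face_width": parts["eye_width"] / parts["face_width"],
        "nose_to_face_height": parts["nose_height"] / parts["face_height"],
    }

def embellishment_from_ratios(parts):
    """Fraction of ratios falling outside their typical range."""
    ratios = feature_ratios(parts)
    outside = sum(
        not (lo <= ratios[name] <= hi)
        for name, (lo, hi) in TYPICAL_RATIOS.items()
    )
    return outside / len(TYPICAL_RATIOS)

natural = {"eye_width": 30, "face_width": 150,
           "nose_height": 50, "face_height": 160}
enlarged_eyes = dict(natural, eye_width=55)  # "big eye" retouching
print(embellishment_from_ratios(natural), embellishment_from_ratios(enlarged_eyes))  # 0.0 0.5
```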
In another embodiment of the present application, the second obtaining module 304 further includes:
a second acquisition module 304 configured to determine that the first human body part image comprises a limb image of the target user;
the second obtaining module 304 is configured to perform feature recognition on the first human body part image based on a preset first image recognition model, and determine a first embellishment degree corresponding to the first human body part image, including:
a second obtaining module 304, configured to perform feature recognition on the limb image based on the first image recognition model to obtain a limb part parameter;
a second obtaining module 304, configured to obtain at least one of a size feature, a color feature and a contour feature corresponding to the limb part based on the limb part parameter;
a second obtaining module 304, configured to determine a first embellishment degree corresponding to the first human body part image based on a comparison result between at least one of a size feature, a color feature and a contour feature corresponding to the limb part and a preset limb feature.
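The limb comparison can be sketched as a deviation measure against preset limb features; here only a waist-to-shoulder ratio (size/contour proxy) and a skin-saturation value (color proxy) are compared, and both the feature names and bands are assumptions for illustration.

```python
PRESET_LIMB_FEATURES = {
    "waist_to_shoulder": (0.60, 0.90),  # size/contour proxy
    "skin_saturation": (0.10, 0.60),    # color proxy
}

def limb_embellishment(limb_params, preset=PRESET_LIMB_FEATURES):
    """Mean normalized deviation of each limb feature from its preset band."""
    deviations = []
    for name, (lo, hi) in preset.items():
        value = limb_params[name]
        if value < lo:
            deviations.append((lo - value) / lo)
        elif value > hi:
            deviations.append((value - hi) / hi)
        else:
            deviations.append(0.0)
    return sum(deviations) / len(deviations)

natural = {"waist_to_shoulder": 0.75, "skin_saturation": 0.30}
slimmed = {"waist_to_shoulder": 0.45, "skin_saturation": 0.30}  # "liquified" waist
print(limb_embellishment(natural))  # 0.0
print(round(limb_embellishment(slimmed), 3))  # 0.125
```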
In another embodiment of the present application, the second obtaining module 304 further includes:
the second obtaining module 304 is configured to analyze the image to be detected to obtain a brightness parameter corresponding to the image to be detected, where the brightness parameter is used to reflect the brightness of the image to be detected;
a second obtaining module 304, configured to determine a third embellishment degree corresponding to the first human body part image based on the brightness parameter corresponding to the image to be detected.
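A minimal sketch of the brightness analysis: compute a mean-luminance brightness parameter for the image to be detected, and raise the suspected modification degree when the value leaves a natural band (heavy "whitening" filters push luminance up). The 0-255 scale and the band endpoints are illustrative assumptions.

```python
def brightness_parameter(gray_image):
    """Mean luminance of a grayscale image on a 0-255 scale."""
    pixels = [px for row in gray_image for px in row]
    return sum(pixels) / len(pixels)

def embellishment_from_brightness(brightness, band=(60.0, 190.0)):
    """0 inside the natural band, growing toward 1 outside it."""
    lo, hi = band
    if brightness > hi:
        return min(1.0, (brightness - hi) / (255.0 - hi))
    if brightness < lo:
        return min(1.0, (lo - brightness) / lo)
    return 0.0

overexposed = [[230, 240], [250, 240]]  # heavily "whitened" region
print(brightness_parameter(overexposed))  # 240.0
print(round(embellishment_from_brightness(240.0), 3))  # 0.769
```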
In another embodiment of the present application, the second obtaining module 304 further includes:
a second obtaining module 304, configured to obtain a second human body part image of the target user by using a preset image segmentation model when it is determined that the modification degree of the first human body part image exceeds a preset standard based on the third modification degree;
a second obtaining module 304, configured to perform feature recognition on the second human body part image based on the second image recognition model, and determine a fourth embellishment degree corresponding to the second human body part image;
the second obtaining module 304 is configured to display a detection result corresponding to the image to be detected when it is detected that the fourth embellishment degree satisfies a second preset condition.
In another embodiment of the present application, the second obtaining module 304 further includes:
a second obtaining module 304, configured to obtain a third body part image from the at least one body part image of the target user when it is determined that the degree of embellishment of the first body part image of the at least one body part image exceeds a preset standard based on the third embellishment degree;
a second obtaining module 304, configured to perform feature recognition on the third body part image based on the first image recognition model, and determine a fourth embellishment degree corresponding to the third body part image;
the second obtaining module 304 is configured to display a detection result corresponding to the image to be detected when it is detected that the fourth embellishment degree satisfies a third preset condition.
In another embodiment of the present application, the second obtaining module 304 further includes:
a second obtaining module 304 configured to obtain a part type corresponding to a first human body part image of the at least one human body part image, the part type corresponding to at least one of a face image, a limb image, a head image, and a torso image;
a second obtaining module 304, configured to obtain at least one corresponding threshold value of the embellishment degree based on a part type corresponding to the first human body part image;
a second obtaining module 304, configured to compare the third embellishment degree with the at least one embellishment degree threshold to obtain a corresponding comparison result;
the second obtaining module 304 is configured to display a detection result corresponding to the image to be detected when the comparison result meets the first preset condition.
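The per-part-type threshold comparison above can be sketched as a simple lookup: each part type carries its own modification-degree threshold, and the detection result is displayed only when the computed degree crosses the threshold for that type. The threshold values are illustrative, not from the patent.

```python
DEGREE_THRESHOLDS = {
    "face": 0.35,
    "limb": 0.50,
    "head": 0.40,
    "torso": 0.55,
}

def should_display_result(part_type, degree):
    """First preset condition: the degree exceeds the part-type threshold."""
    return degree > DEGREE_THRESHOLDS[part_type]

print(should_display_result("face", 0.60))  # True: faces use a stricter threshold
print(should_display_result("limb", 0.40))  # False: below the limb threshold
```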
In another embodiment of the present application, the second obtaining module 304 further includes:
a second obtaining module 304 configured to obtain target video data;
a second obtaining module 304, configured to select, based on a preset rule, sub video data located in a target playing time period in the target video data;
a second obtaining module 304, configured to obtain the image to be detected according to the sub-video data.
In another embodiment of the present application, the second obtaining module 304 further includes:
a second obtaining module 304, configured to obtain all the key frame images in the sub-video data and sort them based on the display parameters of the target user in each key frame image, where the display parameters are used to reflect the size and the definition of the human body part of the target user;
a second obtaining module 304, configured to take a key frame image located in a preset ranking range in the sorted key frame images as the image to be detected.
In another embodiment of the present application, the second obtaining module 304 further includes:
a second obtaining module 304, configured to receive a detection instruction generated by a social application, where the detection instruction is used to perform image modification degree detection on the image to be detected.
FIG. 8 is a block diagram illustrating a logical structure of an electronic device in accordance with an exemplary embodiment. For example, the electronic device 400 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like.
Referring to fig. 8, electronic device 400 may include one or more of the following components: a processor 401 and a memory 402.
Processor 401 may include one or more processing cores, such as a 4-core processor or an 8-core processor. The processor 401 may be implemented in at least one hardware form among DSP (Digital Signal Processor), FPGA (Field-Programmable Gate Array), and PLA (Programmable Logic Array). The processor 401 may also include a main processor and a coprocessor, where the main processor is a processor for processing data in an awake state, also called a Central Processing Unit (CPU), and the coprocessor is a low-power processor for processing data in a standby state. In some embodiments, the processor 401 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content that the display screen needs to display. In some embodiments, the processor 401 may further include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
Memory 402 may include one or more computer-readable storage media, which may be non-transitory. Memory 402 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices or flash memory storage devices. In some embodiments, a non-transitory computer-readable storage medium in the memory 402 is configured to store at least one instruction for execution by the processor 401 to implement the image detection method provided by the method embodiments of the present application.
In some embodiments, the electronic device 400 may further optionally include: a peripheral interface 403 and at least one peripheral. The processor 401, memory 402 and peripheral interface 403 may be connected by bus or signal lines. Each peripheral may be connected to the peripheral interface 403 via a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 404, touch screen display 405, camera 406, audio circuitry 407, positioning components 408, and power supply 409.
The peripheral interface 403 may be used to connect at least one peripheral related to I/O (Input/Output) to the processor 401 and the memory 402. In some embodiments, processor 401, memory 402, and peripheral interface 403 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 401, the memory 402 and the peripheral interface 403 may be implemented on a separate chip or circuit board, which is not limited by this embodiment.
The Radio Frequency circuit 404 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuit 404 communicates with communication networks and other communication devices via electromagnetic signals. The radio frequency circuit 404 converts an electrical signal into an electromagnetic signal for transmission, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 404 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuit 404 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: metropolitan area networks, various generations of mobile communication networks (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 404 may further include NFC (Near Field Communication) related circuits, which are not limited in this application.
The display screen 405 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 405 is a touch display screen, the display screen 405 also has the ability to capture touch signals on or over its surface. The touch signal may be input to the processor 401 as a control signal for processing. At this point, the display screen 405 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there may be one display screen 405, providing the front panel of the electronic device 400; in other embodiments, there may be at least two display screens 405, respectively disposed on different surfaces of the electronic device 400 or in a folded design; in still other embodiments, the display screen 405 may be a flexible display screen disposed on a curved or folded surface of the electronic device 400. The display screen 405 may even be arranged in a non-rectangular irregular pattern, i.e., an irregularly shaped screen. The display screen 405 may be made of materials such as an LCD (Liquid Crystal Display) or an OLED (Organic Light-Emitting Diode).
The camera assembly 406 is used to capture images or video. Optionally, camera assembly 406 includes a front camera and a rear camera. Generally, a front camera is disposed at a front panel of the terminal, and a rear camera is disposed at a rear surface of the terminal. In some embodiments, the number of the rear cameras is at least two, and each rear camera is any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fusion shooting functions. In some embodiments, camera assembly 406 may also include a flash. The flash lamp can be a monochrome temperature flash lamp or a bicolor temperature flash lamp. The double-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp, and can be used for light compensation at different color temperatures.
The audio circuit 407 may include a microphone and a speaker. The microphone is used for collecting sound waves of the user and the environment, converting them into electrical signals, and inputting the electrical signals to the processor 401 for processing, or to the radio frequency circuit 404 to realize voice communication. For stereo capture or noise reduction purposes, there may be multiple microphones disposed at different locations of the electronic device 400. The microphone may also be an array microphone or an omnidirectional pickup microphone. The speaker is used to convert electrical signals from the processor 401 or the radio frequency circuit 404 into sound waves. The speaker may be a conventional diaphragm speaker or a piezoelectric ceramic speaker. When the speaker is a piezoelectric ceramic speaker, it can convert an electrical signal into sound waves audible to humans, or into sound waves inaudible to humans for purposes such as distance measurement. In some embodiments, the audio circuit 407 may also include a headphone jack.
The positioning component 408 is used to locate the current geographic location of the electronic device 400 to implement navigation or LBS (Location Based Service). The positioning component 408 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou System of China, the GLONASS System of Russia, or the Galileo System of the European Union.
The power supply 409 is used to supply power to the various components in the electronic device 400. The power supply 409 may be an alternating current supply, a direct current supply, a disposable battery, or a rechargeable battery. When the power supply 409 includes a rechargeable battery, the rechargeable battery may support wired or wireless charging. The rechargeable battery may also be used to support fast-charge technology.
In some embodiments, the electronic device 400 also includes one or more sensors 410. The one or more sensors 410 include, but are not limited to: acceleration sensor 411, gyro sensor 412, pressure sensor 413, fingerprint sensor 414, optical sensor 415, and proximity sensor 416.
The acceleration sensor 411 may detect the magnitude of acceleration in three coordinate axes of a coordinate system established with the electronic apparatus 400. For example, the acceleration sensor 411 may be used to detect components of the gravitational acceleration in three coordinate axes. The processor 401 may control the touch display screen 405 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 411. The acceleration sensor 411 may also be used for acquisition of motion data of a game or a user.
The gyro sensor 412 may detect a body direction and a rotation angle of the electronic device 400, and the gyro sensor 412 may cooperate with the acceleration sensor 411 to acquire a 3D motion of the user on the electronic device 400. From the data collected by the gyro sensor 412, the processor 401 may implement the following functions: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
The pressure sensors 413 may be disposed on a side bezel of the electronic device 400 and/or on a lower layer of the touch display screen 405. When the pressure sensor 413 is arranged on the side frame of the electronic device 400, a holding signal of the user to the electronic device 400 can be detected, and the processor 401 performs left-right hand identification or shortcut operation according to the holding signal collected by the pressure sensor 413. When the pressure sensor 413 is disposed at the lower layer of the touch display screen 405, the processor 401 controls the operability control on the UI interface according to the pressure operation of the user on the touch display screen 405. The operability control comprises at least one of a button control, a scroll bar control, an icon control and a menu control.
The fingerprint sensor 414 is used for collecting a fingerprint of the user, and the processor 401 identifies the identity of the user according to the fingerprint collected by the fingerprint sensor 414, or the fingerprint sensor 414 identifies the identity of the user according to the collected fingerprint. Upon recognizing that the user's identity is a trusted identity, processor 401 authorizes the user to perform relevant sensitive operations including unlocking the screen, viewing encrypted information, downloading software, paying, and changing settings, etc. The fingerprint sensor 414 may be disposed on the front, back, or side of the electronic device 400. When a physical button or vendor Logo is provided on the electronic device 400, the fingerprint sensor 414 may be integrated with the physical button or vendor Logo.
The optical sensor 415 is used to collect the ambient light intensity. In one embodiment, the processor 401 may control the display brightness of the touch display screen 405 based on the ambient light intensity collected by the optical sensor 415. Specifically, when the ambient light intensity is high, the display brightness of the touch display screen 405 is increased; when the ambient light intensity is low, the display brightness of the touch display screen 405 is turned down. In another embodiment, the processor 401 may also dynamically adjust the shooting parameters of the camera assembly 406 according to the ambient light intensity collected by the optical sensor 415.
The proximity sensor 416, also known as a distance sensor, is typically disposed on the front panel of the electronic device 400. The proximity sensor 416 is used to capture the distance between the user and the front of the electronic device 400. In one embodiment, the processor 401 controls the touch display screen 405 to switch from the bright screen state to the dark screen state when the proximity sensor 416 detects that the distance between the user and the front surface of the electronic device 400 gradually decreases; when the proximity sensor 416 detects that the distance between the user and the front of the electronic device 400 gradually increases, the processor 401 controls the touch display screen 405 to switch from the dark screen state to the bright screen state.
Those skilled in the art will appreciate that the configuration shown in fig. 8 does not constitute a limitation of the electronic device 400, and may include more or fewer components than those shown, or combine certain components, or employ a different arrangement of components.
In an exemplary embodiment, there is also provided a non-transitory computer-readable storage medium, such as the memory 402, comprising instructions executable by the processor 401 of the electronic device 400 to perform the method of image detection described above, the method comprising: acquiring an image to be detected, wherein the image to be detected comprises a first human body part image of a target user; performing feature recognition on the first human body part image based on a preset first image recognition model, and determining a first embellishment degree corresponding to the first human body part image, wherein the first embellishment degree is used for representing an image embellishment degree of a first human body part of the target user; and when the first modification degree is detected to meet a first preset condition, displaying a detection result corresponding to the image to be detected. Optionally, the instructions may also be executable by the processor 401 of the electronic device 400 to perform other steps involved in the exemplary embodiments described above. For example, the non-transitory computer-readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
In an exemplary embodiment, there is also provided an application/computer program product comprising one or more instructions executable by the processor 401 of the electronic device 400 to perform the above-described method of image detection, the method comprising: acquiring an image to be detected, wherein the image to be detected comprises a first human body part image of a target user; performing feature recognition on the first human body part image based on a preset first image recognition model, and determining a first embellishment degree corresponding to the first human body part image, wherein the first embellishment degree is used for representing an image embellishment degree of a first human body part of the target user; and when the first modification degree is detected to meet a first preset condition, displaying a detection result corresponding to the image to be detected. Optionally, the instructions may also be executable by the processor 401 of the electronic device 400 to perform other steps involved in the exemplary embodiments described above.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
It will be understood that the present application is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Claims (33)

1. An image detection method is applied to a client and comprises the following steps:
acquiring an image to be detected, wherein the image to be detected comprises a first human body part image of a target user;
performing feature recognition on the first human body part image based on a preset first image recognition model, and determining a first embellishment degree corresponding to the first human body part image, wherein the first embellishment degree is used for representing an image embellishment degree of a first human body part of the target user;
and when the first modification degree is detected to meet a first preset condition, displaying a detection result corresponding to the image to be detected.
2. The method of claim 1, wherein said determining a first embellishment corresponding to the first human body part image further comprises:
when the image to be detected is obtained, performing feature recognition on the first human body part image based on the first image recognition model to obtain a first modification degree recognition result;
and when the first modification degree recognition result is determined to correspond to the recognition failure, sending the image to be detected to a server.
3. The method according to claim 2, wherein after the sending the image to be detected to the server, further comprising:
and receiving a second embellishment degree identification result sent by the server, and taking the second embellishment degree identification result as a first embellishment degree corresponding to the first human body part image, wherein the second embellishment degree identification result is a embellishment degree result generated by the server according to the image to be detected.
4. The method of claim 1, further comprising, after said acquiring an image to be detected:
and based on a preset identification strategy, performing feature identification on the image to be detected by using an image segmentation model to obtain at least one human body part image of the target user in the image to be detected, wherein the identification strategy is used for determining the type of the human body part.
5. The method of claim 4, further comprising, after said obtaining at least one body part image of said target user in said image to be detected:
determining that a first human body part image of the at least one human body part image comprises a facial image of the target user;
the method for identifying the features of the first human body part image based on a preset first image identification model to determine a first embellishment degree corresponding to the first human body part image includes:
performing feature recognition on the face image based on the first image recognition model to obtain face part parameters;
detecting the matching degree of a left half region and a right half region of the face image based on the face part parameters;
and determining a first embellishment degree corresponding to the first human body part image based on the matching degree of the left half area and the right half area of the face image.
6. The method of claim 5, wherein the determining the first modification degree corresponding to the first human body part image comprises:
determining the first modification degree corresponding to the first human body part image based on whether left and right eye features of the face image match, wherein the eye features correspond to at least one of a size feature, a color feature and a contour feature;
and/or,
determining the first modification degree corresponding to the first human body part image based on whether left and right cheek features of the face image match, wherein the cheek features correspond to at least one of a size feature, a color feature and a contour feature;
and/or,
determining the first modification degree corresponding to the first human body part image based on whether left and right eyebrow features of the face image match, wherein the eyebrow features correspond to at least one of a size feature, a color feature and a contour feature.
7. The method of claim 4, further comprising, after said obtaining at least one body part image of said target user in said image to be detected:
determining that a first human body part image of the at least one human body part image comprises a facial image of the target user;
wherein the performing feature recognition on the first human body part image based on a preset first image recognition model to determine the first modification degree corresponding to the first human body part image comprises:
obtaining facial feature (five sense organ) parameters corresponding to the face image based on the face part parameters;
generating size ratios of the facial features of the face image based on the facial feature parameters corresponding to the face image;
and determining the first modification degree corresponding to the first human body part image according to the size ratios of the facial features of the face image.
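Claim 7's size ratios of the facial features ("five sense organs") could be sketched as follows, under the assumption that some upstream detector supplies a bounding box per feature; the `(x, y, w, h)` box format and the area-ratio definition are hypothetical, not taken from the patent:

```python
# Illustrative only: the claims do not specify how the facial-feature
# parameters are produced; here each feature is assumed to come with a
# bounding box (x, y, w, h) from some detector.

def feature_size_ratios(features: dict, face_box: tuple) -> dict:
    """Return each feature's area as a fraction of the face area."""
    fx, fy, fw, fh = face_box
    face_area = fw * fh
    return {name: (w * h) / face_area
            for name, (x, y, w, h) in features.items()}

features = {
    "left_eye":  (30, 40, 20, 10),
    "right_eye": (70, 40, 20, 10),
    "mouth":     (45, 80, 30, 15),
}
ratios = feature_size_ratios(features, (0, 0, 120, 120))
print(round(ratios["left_eye"], 4))  # → 0.0139
```

Ratios far outside typical ranges (e.g. eyes enlarged well beyond the usual fraction of the face) would then be evidence of heavy modification; the actual thresholds are not disclosed.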
8. The method of claim 4, further comprising, after said obtaining at least one body part image of said target user in said image to be detected:
determining that a first human body part image of the at least one human body part image comprises a limb image of the target user;
wherein the performing feature recognition on the first human body part image based on a preset first image recognition model to determine the first modification degree corresponding to the first human body part image comprises:
performing feature recognition on the limb image based on the first image recognition model to obtain limb part parameters;
acquiring at least one of a size feature, a color feature and a contour feature corresponding to the limb part based on the limb part parameters;
and determining the first modification degree corresponding to the first human body part image based on a comparison result between at least one of the size feature, the color feature and the contour feature corresponding to the limb part and a preset limb feature.
9. The method of claim 1, further comprising, after said acquiring an image to be detected:
analyzing the image to be detected to obtain a brightness parameter corresponding to the image to be detected, wherein the brightness parameter is used for reflecting the brightness of the image to be detected;
and determining a first modification degree corresponding to the first human body part image based on the brightness parameter corresponding to the image to be detected.
10. The method of claim 1, wherein the displaying a detection result corresponding to the image to be detected when the first modification degree is detected to satisfy a first preset condition comprises:
when the modification degree of the first human body part image is determined to exceed a preset standard based on the first modification degree, acquiring a second human body part image of the target user by using a preset image segmentation model;
performing feature recognition on the second human body part image based on the first image recognition model, and determining a second modification degree corresponding to the second human body part image;
and when the second modification degree is detected to meet a second preset condition, displaying a detection result corresponding to the image to be detected.
11. The method of claim 4, further comprising, after said obtaining at least one body part image of said target user in said image to be detected:
when it is determined, based on the first modification degree, that the modification degree of a first human body part image in the at least one human body part image exceeds a preset standard, acquiring a third human body part image from the at least one human body part image of the target user;
performing feature recognition on the third human body part image based on the first image recognition model, and determining a second modification degree corresponding to the third human body part image;
and when the second modification degree is detected to meet a third preset condition, displaying a detection result corresponding to the image to be detected.
12. The method of claim 4, further comprising, after said obtaining at least one body part image of said target user in said image to be detected:
acquiring a part type corresponding to a first human body part image in the at least one human body part image, wherein the part type corresponds to at least one of a face image, a limb image, a head image and a trunk image;
acquiring at least one corresponding modification threshold value based on the part type corresponding to the first human body part image;
comparing the first modification degree with the at least one modification degree threshold value to obtain a corresponding comparison result;
and when the comparison result meets the first preset condition, displaying a detection result corresponding to the image to be detected.
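Claim 12 looks up one or more modification thresholds by part type and compares the first modification degree against them. A sketch under assumed threshold values; the claim discloses neither the numbers nor the comparison rule, so both are illustrative:

```python
# Hypothetical threshold table keyed by part type; the part-type names
# mirror the claim (face, limb, head, trunk), the values are assumptions.
THRESHOLDS = {
    "face": 0.6,
    "limb": 0.7,
    "head": 0.65,
    "trunk": 0.75,
}

def exceeds_threshold(part_type: str, modification_degree: float) -> bool:
    """Compare a modification degree against its part-type threshold."""
    return modification_degree > THRESHOLDS[part_type]

print(exceeds_threshold("face", 0.8))  # → True
```

Per-part thresholds make sense here because faces are typically retouched far more often than torsos, so a single global cutoff would over- or under-trigger for some part types.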
13. The method of claim 1, wherein said acquiring an image to be detected comprises:
acquiring target video data;
selecting sub video data positioned in a target playing time period in the target video data based on a preset rule;
and acquiring the image to be detected according to the sub-video data.
14. The method according to claim 13, wherein said obtaining the image to be detected according to the sub video data comprises:
acquiring all key frame images in the sub-video data, and sequencing all the key frame images based on display parameters of the target user in each key frame image, wherein the display parameters are used for reflecting the size and the definition of the human body part of the target user;
and taking the key frame image positioned in a preset ranking range in the sequenced key frame images as the image to be detected.
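Claims 13-14 rank key frames by a display parameter reflecting the size and definition (sharpness) of the target user's body part, then take the top-ranked frames as images to be detected. A sketch in which the display parameter is assumed, purely for illustration, to be the product of part size and sharpness:

```python
# Sketch of the key-frame selection in claims 13-14. The combination
# rule (size * sharpness) and the tuple layout are assumptions; the
# claims only say frames are ordered by a display parameter.

def select_frames(frames, top_k=2):
    """frames: list of (frame_id, part_size, sharpness) tuples."""
    ranked = sorted(frames,
                    key=lambda f: f[1] * f[2],  # assumed display parameter
                    reverse=True)
    return [frame_id for frame_id, _, _ in ranked[:top_k]]

frames = [("f1", 0.4, 0.9), ("f2", 0.8, 0.8), ("f3", 0.5, 0.5)]
print(select_frames(frames))  # → ['f2', 'f1']
```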
15. The method of claim 1, further comprising, prior to said acquiring the image to be detected:
and receiving a detection instruction generated by the social application program, wherein the detection instruction is used for detecting the image modification degree of the image to be detected.
16. An image detection method is applied to a server side, and comprises the following steps:
acquiring an image to be detected, wherein the image to be detected comprises a first human body part of a target user;
performing feature recognition on the first human body part image based on a preset second image recognition model, and determining a third modification degree corresponding to the first human body part image, wherein the third modification degree is used for representing an image modification degree of the first human body part of the target user;
and when the third modification degree is detected to meet a first preset condition, displaying a detection result corresponding to the image to be detected.
17. The method of claim 16, further comprising, prior to said acquiring the image to be detected:
obtaining a first number of unmodified sample images, wherein each unmodified sample image comprises at least one body part image of a user;
performing sample image modification on the first number of unmodified sample images to obtain a second number of modified sample images, wherein the sample image modification corresponds to one or more human body part images in the unmodified sample images;
and training a preset convolutional neural network model by using the unmodified sample images and the modified sample images to obtain the second image recognition model satisfying a preset condition.
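Claim 17 builds the training set by synthetically modifying unmodified samples. The sketch below shows only the data-pairing step; `apply_modification` is a placeholder for a real beautification filter, and labelling each copy with its modification strength is an assumption, since the claim does not state the label scheme:

```python
import random

def apply_modification(image, strength):
    # Placeholder: a real pipeline would smooth skin, reshape
    # features, adjust tone, etc. Here we only tag the image.
    return {"pixels": image["pixels"], "strength": strength}

def build_training_set(unmodified_images, copies_per_image=2, seed=0):
    """Pair each sample with a label: 0.0 for unmodified originals,
    otherwise the (assumed) strength of the synthetic modification."""
    rng = random.Random(seed)
    samples = [(img, 0.0) for img in unmodified_images]
    for img in unmodified_images:
        for _ in range(copies_per_image):
            s = rng.uniform(0.2, 1.0)
            samples.append((apply_modification(img, s), s))
    return samples

data = build_training_set([{"pixels": [1, 2, 3]}], copies_per_image=2)
print(len(data))  # → 3
```

The resulting (image, label) pairs would then feed the convolutional network's regression or classification loss; that training loop is framework-specific and omitted here.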
18. The method of claim 17, after the obtaining the second image recognition model satisfying a preset condition, further comprising:
performing model compression on the second image recognition model to obtain a first image recognition model;
and sending the first image recognition model to a client.
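Claim 18 compresses the server-side second image recognition model into the lighter client-side first image recognition model, but names no technique. Uniform 8-bit weight quantization is shown purely as one common possibility, not as the patented compression method:

```python
import numpy as np

def quantize_weights(weights: np.ndarray):
    """Map float weights to uint8 plus a (scale, offset) pair."""
    lo, hi = float(weights.min()), float(weights.max())
    scale = (hi - lo) / 255.0 if hi > lo else 1.0
    q = np.round((weights - lo) / scale).astype(np.uint8)
    return q, scale, lo

def dequantize(q, scale, lo):
    """Recover approximate float weights from the quantized form."""
    return q.astype(np.float64) * scale + lo

w = np.array([-1.0, 0.0, 1.0])
q, scale, lo = quantize_weights(w)
# Round-trip error is bounded by one quantization step.
print(np.allclose(dequantize(q, scale, lo), w, atol=scale))  # → True
```

Pruning or knowledge distillation would be equally plausible readings of "model compression" here; the claim leaves the choice open.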
19. The method of claim 16, wherein the acquiring the image to be detected, which includes the first human body part of the target user, comprises:
and receiving the image to be detected sent by a client corresponding to the server, wherein the image to be detected comprises a first human body part of a target user, and the image to be detected sent by the client is an image whose modification degree the client cannot determine, and/or an image obtained by the client through segmentation using an image segmentation model.
20. The method of claim 16, further comprising, after said acquiring an image to be detected:
and carrying out feature recognition on the image to be detected by utilizing an image segmentation model to obtain at least one human body part image of the target user in the image to be detected.
21. The method of claim 20, further comprising, after said acquiring at least one body part image of said target user in said image to be detected:
determining that a first human body part image of the at least one human body part image comprises a facial image of the target user;
wherein the performing feature recognition on the first human body part image based on the preset second image recognition model to determine the third modification degree corresponding to the first human body part image comprises:
performing feature recognition on the face image based on the second image recognition model to obtain face part parameters;
detecting the matching degree of a left half region and a right half region of the face image based on the face part parameters;
and determining the third modification degree corresponding to the first human body part image based on the matching degree of the left half region and the right half region of the face image.
22. The method of claim 21, wherein the determining the third modification degree corresponding to the first human body part image comprises:
determining the third modification degree corresponding to the first human body part image based on whether left and right eye features of the face image match, wherein the eye features correspond to at least one of a size feature, a color feature and a contour feature;
and/or,
determining the third modification degree corresponding to the first human body part image based on whether left and right cheek features of the face image match, wherein the cheek features correspond to at least one of a size feature, a color feature and a contour feature;
and/or,
determining the third modification degree corresponding to the first human body part image based on whether left and right eyebrow features of the face image match, wherein the eyebrow features correspond to at least one of a size feature, a color feature and a contour feature.
23. The method of claim 20, further comprising, after said acquiring at least one body part image of said target user in said image to be detected:
determining that a first human body part image of the at least one human body part image comprises a facial image of the target user;
wherein the performing feature recognition on the first human body part image based on the preset second image recognition model to determine the third modification degree corresponding to the first human body part image comprises:
obtaining facial feature (five sense organ) parameters corresponding to the face image based on the face part parameters;
generating size ratios of the facial features of the face image based on the facial feature parameters corresponding to the face image;
and determining the third modification degree corresponding to the first human body part image according to the size ratios of the facial features of the face image.
24. The method of claim 20, further comprising, after said acquiring at least one body part image of said target user in said image to be detected:
determining that a first human body part image of the at least one human body part image comprises a limb image of the target user;
wherein the performing feature recognition on the first human body part image based on the preset second image recognition model to determine the third modification degree corresponding to the first human body part image comprises:
performing feature recognition on the limb image based on the second image recognition model to obtain limb part parameters;
acquiring at least one of a size feature, a color feature and a contour feature corresponding to the limb part based on the limb part parameters;
and determining the third modification degree corresponding to the first human body part image based on a comparison result between at least one of the size feature, the color feature and the contour feature corresponding to the limb part and a preset limb feature.
25. The method of claim 16, further comprising, after said acquiring an image to be detected:
analyzing the image to be detected to obtain a brightness parameter corresponding to the image to be detected, wherein the brightness parameter is used for reflecting the brightness of the image to be detected;
and determining a third modification degree corresponding to the first human body part image based on the brightness parameter corresponding to the image to be detected.
26. The method of claim 16, wherein the displaying a detection result corresponding to the image to be detected when the third modification degree is detected to satisfy a first preset condition comprises:
when the modification degree of the first human body part image is determined to exceed a preset standard based on the third modification degree, acquiring a second human body part image of the target user by using a preset image segmentation model;
performing feature recognition on the second human body part image based on the second image recognition model, and determining a fourth modification degree corresponding to the second human body part image;
and when the fourth modification degree is detected to meet a second preset condition, displaying a detection result corresponding to the image to be detected.
27. The method of claim 20, further comprising, after said acquiring at least one body part image of said target user in said image to be detected:
when it is determined, based on the third modification degree, that the modification degree of a first human body part image in the at least one human body part image exceeds a preset standard, acquiring a third human body part image from the at least one human body part image of the target user;
performing feature recognition on the third human body part image based on the second image recognition model, and determining a fourth modification degree corresponding to the third human body part image;
and when the fourth modification degree is detected to meet a third preset condition, displaying a detection result corresponding to the image to be detected.
28. The method of claim 20, further comprising, after said obtaining at least one body part image of said target user in said image to be detected:
acquiring a part type corresponding to a first human body part image in the at least one human body part image, wherein the part type corresponds to at least one of a face image, a limb image, a head image and a trunk image;
acquiring at least one corresponding modification threshold value based on the part type corresponding to the first human body part image;
comparing the third modification degree with the at least one modification degree threshold value to obtain a corresponding comparison result;
and when the comparison result meets the first preset condition, displaying a detection result corresponding to the image to be detected.
29. The method of claim 16, wherein said acquiring an image to be detected comprises:
acquiring target video data;
selecting sub video data positioned in a target playing time period in the target video data based on a preset rule;
and acquiring the image to be detected according to the sub-video data.
30. The method according to claim 29, wherein said obtaining the image to be detected based on the sub-video data comprises:
acquiring all key frame images in the sub-video data, and sequencing all the key frame images based on display parameters of the target user in each key frame image, wherein the display parameters are used for reflecting the size and the definition of the human body part of the target user;
and taking the key frame image positioned in a preset ranking range in the sequenced key frame images as the image to be detected.
31. The method of claim 16, further comprising, prior to said acquiring the image to be detected:
and receiving a detection instruction generated by the social application program, wherein the detection instruction is used for detecting the image modification degree of the image to be detected.
32. An electronic device, comprising:
a memory for storing executable instructions; and
a processor in communication with the memory and configured to execute the executable instructions to perform the operations of the image detection method of any one of claims 1-31.
33. A computer-readable storage medium storing computer-readable instructions that, when executed, perform the operations of the image detection method of any one of claims 1-31.
CN202010614648.3A 2020-06-30 2020-06-30 Image detection method, device, electronic equipment and medium Active CN111797754B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010614648.3A CN111797754B (en) 2020-06-30 2020-06-30 Image detection method, device, electronic equipment and medium

Publications (2)

Publication Number Publication Date
CN111797754A true CN111797754A (en) 2020-10-20
CN111797754B CN111797754B (en) 2024-07-19

Family

ID=72810806

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010614648.3A Active CN111797754B (en) 2020-06-30 2020-06-30 Image detection method, device, electronic equipment and medium

Country Status (1)

Country Link
CN (1) CN111797754B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115209032A (en) * 2021-04-09 2022-10-18 美智纵横科技有限责任公司 Image acquisition method and device based on cleaning robot, electronic equipment and medium

Citations (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105243371A (en) * 2015-10-23 2016-01-13 厦门美图之家科技有限公司 Human face beauty degree detection method and system and shooting terminal
CN106713700A (en) * 2016-12-08 2017-05-24 宇龙计算机通信科技(深圳)有限公司 Picture processing method and apparatus, as well as terminal
CN107169920A (en) * 2017-04-24 2017-09-15 深圳市金立通信设备有限公司 A kind of intelligence repaiies drawing method and terminal
CN108182714A (en) * 2018-01-02 2018-06-19 腾讯科技(深圳)有限公司 Image processing method and device, storage medium
CN108629730A (en) * 2018-05-21 2018-10-09 深圳市梦网科技发展有限公司 Video U.S. face method, apparatus and terminal device
CN109285131A (en) * 2018-09-13 2019-01-29 深圳市梦网百科信息技术有限公司 A kind of more people's image U.S. face method and systems
CN109302628A (en) * 2018-10-24 2019-02-01 广州虎牙科技有限公司 A kind of face processing method based on live streaming, device, equipment and storage medium
CN109325907A (en) * 2018-09-18 2019-02-12 北京旷视科技有限公司 Image landscaping treatment method, apparatus and system
CN109376575A (en) * 2018-08-20 2019-02-22 奇酷互联网络科技(深圳)有限公司 Method, mobile terminal and the storage medium that human body in image is beautified
US20190087686A1 (en) * 2017-09-21 2019-03-21 Baidu Online Network Technology (Beijing) Co., Ltd. Method and apparatus for detecting human face
CN109543646A (en) * 2018-11-30 2019-03-29 深圳市脸萌科技有限公司 Face image processing process, device, electronic equipment and computer storage medium
CN109584151A (en) * 2018-11-30 2019-04-05 腾讯科技(深圳)有限公司 Method for beautifying faces, device, terminal and storage medium
CN109584177A (en) * 2018-11-26 2019-04-05 北京旷视科技有限公司 Face method of modifying, device, electronic equipment and computer readable storage medium
CN109815821A (en) * 2018-12-27 2019-05-28 北京旷视科技有限公司 A kind of portrait tooth method of modifying, device, system and storage medium
WO2019101021A1 (en) * 2017-11-23 2019-05-31 腾讯科技(深圳)有限公司 Image recognition method, apparatus, and electronic device
CN109978795A (en) * 2019-04-03 2019-07-05 颜沿(上海)智能科技有限公司 A kind of feature tracking split screen examination cosmetic method and system
WO2020038167A1 (en) * 2018-08-22 2020-02-27 Oppo广东移动通信有限公司 Video image recognition method and apparatus, terminal and storage medium
CN110909693A (en) * 2019-11-27 2020-03-24 深圳市华付信息技术有限公司 3D face living body detection method and device, computer equipment and storage medium
CN111031239A (en) * 2019-12-05 2020-04-17 Oppo广东移动通信有限公司 Image processing method and apparatus, electronic device, and computer-readable storage medium
CN111062248A (en) * 2019-11-08 2020-04-24 宇龙计算机通信科技(深圳)有限公司 Image detection method, device, electronic equipment and medium
CN111182196A (en) * 2018-11-13 2020-05-19 奇酷互联网络科技(深圳)有限公司 Photographing preview method, intelligent terminal and device with storage function
CN111199176A (en) * 2018-11-20 2020-05-26 浙江宇视科技有限公司 Face identity detection method and device
CN111222569A (en) * 2020-01-06 2020-06-02 宇龙计算机通信科技(深圳)有限公司 Method, device, electronic equipment and medium for identifying food
CN111327819A (en) * 2020-02-14 2020-06-23 北京大米未来科技有限公司 Method, device, electronic equipment and medium for selecting image


Also Published As

Publication number Publication date
CN111797754B (en) 2024-07-19

Similar Documents

Publication Publication Date Title
CN110189340B (en) Image segmentation method and device, electronic equipment and storage medium
CN109978989B (en) Three-dimensional face model generation method, three-dimensional face model generation device, computer equipment and storage medium
CN110572711B (en) Video cover generation method and device, computer equipment and storage medium
CN107844781A (en) Face character recognition methods and device, electronic equipment and storage medium
CN110650379B (en) Video abstract generation method and device, electronic equipment and storage medium
CN109360222B (en) Image segmentation method, device and storage medium
US11386586B2 (en) Method and electronic device for adding virtual item
CN110933468A (en) Playing method, playing device, electronic equipment and medium
CN109978996B (en) Method, device, terminal and storage medium for generating expression three-dimensional model
CN112287852A (en) Face image processing method, display method, device and equipment
CN110675412A (en) Image segmentation method, training method, device and equipment of image segmentation model
CN111027490A (en) Face attribute recognition method and device and storage medium
CN111062248A (en) Image detection method, device, electronic equipment and medium
CN112581358A (en) Training method of image processing model, image processing method and device
CN111539795A (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN110796083A (en) Image display method, device, terminal and storage medium
CN110675473B (en) Method, device, electronic equipment and medium for generating GIF dynamic diagram
CN110807769B (en) Image display control method and device
CN113918767A (en) Video clip positioning method, device, equipment and storage medium
CN110853124B (en) Method, device, electronic equipment and medium for generating GIF dynamic diagram
CN112135191A (en) Video editing method, device, terminal and storage medium
CN111327819A (en) Method, device, electronic equipment and medium for selecting image
CN114741559A (en) Method, apparatus and storage medium for determining video cover
CN110728167A (en) Text detection method and device and computer readable storage medium
CN112860046B (en) Method, device, electronic equipment and medium for selecting operation mode

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant