CN111368678A - Image processing method and related device - Google Patents

Image processing method and related device

Info

Publication number
CN111368678A
CN111368678A (application CN202010120444.4A)
Authority
CN
China
Prior art keywords
face
area
boundary frame
boundary
standard
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010120444.4A
Other languages
Chinese (zh)
Other versions
CN111368678B (en)
Inventor
颜波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202010120444.4A
Publication of CN111368678A
Priority to PCT/CN2021/072461
Application granted
Publication of CN111368678B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands
    • G06V 40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/161: Detection; localisation; normalisation
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/21: Design or setup of recognition systems or techniques; extraction of features in feature space; blind source separation
    • G06F 18/214: Generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • G06V 40/18: Eye characteristics, e.g. of the iris
    • G06V 40/19: Sensors therefor
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES (ICT), I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management


Abstract

The embodiments of the present application disclose an image processing method and a related device, applied to an electronic device. The method comprises the following steps: acquiring a reference face region of a face image and determining a bounding box of the reference face region; adjusting the bounding box of the reference face region to obtain a bounding box of a standard face region and determining the standard face region according to the bounding box of the standard face region, wherein the adjustment of the bounding box adjusts the rectangular bounding box into a square bounding box; inputting the standard face region into a first neural network model to obtain face key point coordinates of the face image; and inputting sample data containing the face key point coordinates into a second neural network model to train the second neural network model, wherein the trained second neural network model is a high-precision face key point detection model. The embodiments of the present application help improve the accuracy of face key point detection.

Description

Image processing method and related device
Technical Field
The present application relates to the field of mobile terminal technologies, and in particular, to an image processing method and a related apparatus.
Background
Face key point detection has a wide range of applications: it can accurately locate the facial features so that they can be positioned or adjusted. For example, the face recognition function of mobile devices such as smart phones and tablet computers, and the beauty and makeup functions of cameras during photographing, all require accurate face key point detection. In face key point detection based on deep learning, a face region is first identified from a face image by a face detection network, and the face region is then input into a face key point detection network to obtain the face key points.
Disclosure of Invention
The embodiment of the application provides an image processing method and a related device, which are beneficial to improving the detection precision of key points of a human face.
In a first aspect, an embodiment of the present application provides an image processing method, which is applied to an electronic device, and the method includes:
acquiring a reference face area of a face image, and determining a boundary frame of the reference face area;
adjusting the bounding box of the reference face region to obtain a bounding box of a standard face region, and determining the standard face region according to the bounding box of the standard face region, wherein the adjustment of the bounding box is used to adjust a rectangular bounding box into a square bounding box;
inputting the standard face region into a first neural network model to obtain face key point coordinates of the face image;
inputting sample data containing the face key point coordinates into a second neural network model to train the second neural network model, wherein the trained second neural network model is a high-precision face key point detection model.
In a second aspect, an embodiment of the present application provides an image processing apparatus, which is applied to an electronic device, where the electronic device includes an eyeball tracking component; the image processing apparatus includes a processing unit and a communication unit, wherein,
the processing unit is configured to acquire a first face image set through the eyeball tracking component in an eyeball tracking calibration process; to identify the human eye regions of M face images included in the first face image set, where the M face images are continuously captured within a preset time period and M is a positive integer; to perform multi-frame fusion on the M face images in the first face image set according to their human eye regions to obtain a second face image set comprising super-pixel face images, where the second face image set comprises N face images and N is smaller than M; and to obtain an eyeball tracking calculation equation according to the second face image set, where the calculation equation is used to calculate the user's gaze point during eyeball tracking.
In a third aspect, an embodiment of the present application provides an electronic device, including a controller, a memory, a communication interface, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the controller, and the program includes instructions for executing steps in any method of the first aspect of the embodiment of the present application.
In a fourth aspect, the present application provides a computer-readable storage medium, where the computer-readable storage medium stores a computer program for electronic data exchange, where the computer program makes a computer perform part or all of the steps described in any one of the methods of the first aspect of the present application.
In a fifth aspect, the present application provides a computer program product, wherein the computer program product includes a non-transitory computer-readable storage medium storing a computer program, and the computer program is operable to cause a computer to perform some or all of the steps as described in any one of the methods of the first aspect of the embodiments of the present application. The computer program product may be a software installation package.
As can be seen, in the embodiment of the present application, the electronic device first acquires a reference face region of a face image and determines a bounding box of the reference face region; next, it adjusts the bounding box of the reference face region to obtain the bounding box of a standard face region and determines the standard face region according to that bounding box, where the adjustment turns a rectangular bounding box into a square bounding box; then, it inputs the standard face region into the first neural network model to obtain the face key point coordinates of the face image; finally, it inputs sample data containing the face key point coordinates into a second neural network model to train that model, and the trained second neural network model is a high-precision face key point detection model. Because the rectangular bounding box is adjusted into a square bounding box when the bounding box of the reference face region is converted into the bounding box of the standard face region, deformation and/or distortion of the face image is avoided, which improves the accuracy of face key point detection.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application, and those skilled in the art can derive other drawings from them without creative effort.
Fig. 1A is a schematic flowchart of an image processing method according to an embodiment of the present application;
fig. 1B is a schematic diagram of a reference face area according to an embodiment of the present application;
fig. 1C is a schematic diagram of a standard face region according to an embodiment of the present application;
FIG. 1D is a diagram illustrating a bounding box to be adjusted according to an embodiment of the present disclosure;
FIG. 1E is a diagram of another bounding box to be adjusted according to an embodiment of the present disclosure;
FIG. 2 is a schematic flow chart of another image processing method provided in the embodiments of the present application;
fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 4 is a block diagram of functional units of an image processing apparatus according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions of the present application better understood, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first," "second," and the like in the description and claims of the present application and in the above-described drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
The electronic devices may include various handheld devices with wireless communication capabilities, in-vehicle devices, wearable devices (e.g., smart watches, smart bands, pedometers), computing devices or other processing devices connected to a wireless modem, as well as various forms of User Equipment (UE), Mobile Stations (MS), terminal devices, and the like. For convenience of description, the above-mentioned devices are collectively referred to as electronic devices.
The following describes embodiments of the present application in detail.
Traditional face key point detection algorithms fall mainly into coordinate regression methods and deep learning methods based on neural networks. Because deep learning methods based on neural networks, especially convolutional neural networks, perform very well in many computer vision tasks and also clearly help improve face key point detection performance, current face key point detection algorithms are dominated by deep learning methods based on convolutional neural networks. The general procedure of face key point detection based on deep learning is to first detect a face region from a given image with a face detection network, and then input the detected face region into a face key point detection network for face key point detection. Due to the diversity of human faces, the face regions detected from different images differ in size and shape, so the face regions need to be adjusted to a uniform size before being input into the face key point detection network.
In this application, the face region directly detected by the face detection model is called the reference face region, and the face region whose size has been adjusted and which is ready to be input into the face key point detection model is called the standard face region, where the standard face image is square. In the prior art, a reference face region is converted into a standard face region generally by forcibly resizing the picture to a preset size. Such forced transformation has little influence on images whose reference face region is close to a square, but for a reference face region that is not square, the forced resizing causes face deformation and distortion, which greatly affects the accuracy of the subsequent face key point detection. Therefore, this application provides a unified method for converting the reference face region into the standard face region, ensuring that the conversion process is consistent for all face images, reducing face deformation and distortion, and improving the accuracy of face key point detection.
This application provides an image processing method for improving the accuracy of face key point detection. The standard face region acquired in this application meets the following conditions: the standard face region is square; the standard face region contains most of the face key points; the standard face region is appropriately sized, with a suitable margin between its bounding box and the face key points; and there is a correlation between the bounding box of the standard face region and the bounding box of the reference face region. A face key point detection method that satisfies these four conditions ensures that the face key points acquired from all face images are more accurate. The transformation process in the method does not use prior information related to the face key points, the transformed face region is square, and the face deformation and distortion caused by forced transformation do not occur, so the method has a significant effect on improving the accuracy of face key point detection.
Referring to fig. 1A, fig. 1A is a schematic flowchart of an image processing method applied to an electronic device according to an embodiment of the present disclosure. As shown in the figure, the image processing method includes:
s101, the electronic equipment acquires a reference face area of a face image and determines a boundary frame of the reference face area.
Face detection can be performed on the face image with a trained face detection model to determine a reference face region and the bounding box of the reference face region. As shown in fig. 1B, 100 is a face image including face key point labels, 101 is the bounding box of the reference face image, the reference face image lies within the bounding box, and the positions of the face key point labels 102 correspond to the face key points. A large number of experiments show that although a trained face detection model can detect most faces, including side faces and partially occluded faces, the reference face region it detects has the following problems: some of the face key points are not located in the rectangular area corresponding to the bounding box of the reference face region, the bounding box of the reference face region is rectangular rather than square, and the position of the bounding box of the reference face region is biased toward the forehead.
S102, the electronic device adjusts the bounding box of the reference face region to obtain the bounding box of the standard face region and determines the standard face region according to the bounding box of the standard face region, where the adjustment of the bounding box adjusts the rectangular bounding box into a square bounding box.
The bounding box of the reference face region is generally rectangular, so it needs to be adjusted into a square to obtain the bounding box of the standard face region. Observation of a large number of bounding boxes of reference face images shows that their height is generally greater than their width, so when adjusting the bounding box of the reference face region, the width of the bounding box can be expanded, and the expanded bounding box can then include more face key points.
S103, the electronic equipment inputs the standard face area into the first neural network model to obtain face key point coordinates of the face image.
In the image processing method provided by this application, the face image corresponding to the standard face region is a square face image that avoids face deformation and distortion while including more face key points. The square face image is input into the first neural network model to obtain face key point coordinates. The obtained face key point coordinates serve as sample data and can be used to train the second neural network model: the first neural network model locates the face key points in the face image, and the second neural network model performs high-precision face key point detection.
S104, the electronic equipment inputs the sample data containing the face key point coordinates into a second neural network model to train the second neural network model, wherein the trained second neural network model is a high-precision face key point detection model.
The second neural network model is iteratively trained and optimized with the sample data to obtain a high-precision face key point detection model, where the sample data contains the face key point coordinates of the face images.
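To make the training step concrete, the following is a minimal PyTorch-style sketch, not the patent's actual implementation: the input size (128x128 crops), the keypoint count (68), the architecture, the loss, and the optimizer are all assumptions, since the patent only states that the second model is trained iteratively on sample data containing face key point coordinates.

    import torch
    import torch.nn as nn

    # Hypothetical second network: maps a square 128x128 face crop to 68 (x, y)
    # keypoints. Architecture, loss and optimizer are assumptions; the patent
    # does not specify them.
    model = nn.Sequential(
        nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),   # 128 -> 64
        nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 64 -> 32
        nn.Flatten(),
        nn.Linear(32 * 32 * 32, 68 * 2),                       # 136 coordinates
    )
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    criterion = nn.MSELoss()  # regression loss on keypoint coordinates

    def train_step(crops, keypoints):
        """One training iteration: crops (B, 3, 128, 128), keypoints (B, 136)."""
        optimizer.zero_grad()
        loss = criterion(model(crops), keypoints)
        loss.backward()
        optimizer.step()
        return loss.item()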
As can be seen, in the embodiment of the present application, the electronic device first acquires a reference face region of a face image and determines a bounding box of the reference face region; next, it adjusts the bounding box of the reference face region to obtain the bounding box of a standard face region and determines the standard face region according to that bounding box, where the adjustment turns a rectangular bounding box into a square bounding box; then, it inputs the standard face region into the first neural network model to obtain the face key point coordinates of the face image; finally, it inputs sample data containing the face key point coordinates into a second neural network model to train that model, and the trained second neural network model is a high-precision face key point detection model. Because the rectangular bounding box is adjusted into a square bounding box when the bounding box of the reference face region is converted into the bounding box of the standard face region, deformation and/or distortion of the face image is avoided, which improves the accuracy of face key point detection.
In a possible example, the adjusting the bounding box of the reference face region to obtain the bounding box of the standard face region includes: determining the height and width of a bounding box of the reference face region; when the width is detected to be smaller than the height, calculating the absolute value of the difference between the width and the height; and adjusting the width of the boundary box of the reference face area to be consistent with the height, and moving the boundary box downwards to obtain the boundary box of the standard face area, wherein the distance of downward movement is one fourth of the absolute value of the difference.
The height and width of the bounding box of the reference face region are determined. Generally, the bounding box of the reference face region is rectangular with width smaller than height. Therefore, when the width is detected to be smaller than the height, the absolute value of the difference between the width and the height is calculated; the width of the bounding box of the reference face region can then be adjusted to be consistent with the height, and the bounding box is moved downward by one quarter of the absolute difference to obtain the bounding box of the standard face region. Comparison and adjustment of the bounding boxes of the reference face regions of a large number of face images against the face images inside them shows that when the downward movement distance is one quarter of the absolute difference, the bounding box can contain more face detection key points.
As can be seen, in this example, since the width of the bounding box of the reference face region is smaller than the height, the bounding box with the square shape can be obtained after the width is adjusted to be consistent with the height, and the bounding box with the square shape can include more face key points.
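To make this adjustment concrete, here is a minimal Python sketch, assuming the bounding box is given as (left, top, width, height) in pixel coordinates with y growing downward; the function name, the tuple layout, and the symmetric widening about the box center are illustrative assumptions, since the patent only specifies matching the width to the height and the quarter-difference downward shift.

    def square_and_shift_bbox(left, top, width, height):
        """Widen a rectangular face bounding box into a square and shift it
        downward by a quarter of the width/height difference (a sketch;
        symmetric widening about the box center is an assumption)."""
        if width < height:
            diff = abs(height - width)   # absolute difference between width and height
            left -= diff / 2.0           # expand the width symmetrically
            width = height               # width now consistent with height
            top += diff / 4.0            # move down by one quarter of the difference
        return left, top, width, height

    # Example: a 100x140 detector box becomes a 140x140 square shifted down 10 px.
    print(square_and_shift_bbox(50, 30, 100, 140))  # (30.0, 40.0, 140, 140)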
In one possible example, the moving the bounding box downwards to obtain the bounding box of the standard face region includes: judging whether the boundary frame after the downward movement is positioned in a display area of the face image; and if not, adjusting the boundary frame after moving downwards to obtain the boundary frame of the standard face area, wherein the adjustment is used for enabling the boundary frame to be located in the display area of the face image.
If the bounding box moves out of the display area of the face image after being moved downward, the moved bounding box needs to be adjusted again so that it is completely located within the display area of the face image. As shown in fig. 1D, the bounding box 103 obtained after the downward movement exceeds the display area of the face image; therefore, the bounding box in fig. 1D needs to be adjusted again so that the bounding box 103 is completely located within the display area of the face image.
As can be seen, in this example, when the face image is located at the edge of the display area, moving the bounding box downward may cause it to exceed the face display area. Therefore, after moving the bounding box downward, it is necessary to determine whether the moved-down bounding box is completely located within the display area of the face image; when the moved-down bounding box is detected not to be completely located in the display area, it needs to be adjusted again.
In a possible example, the adjusting the boundary box after the downward shifting to obtain the boundary box of the standard face area includes: when the area of the boundary frame after the downward movement is detected to be smaller than the area of the face image, calculating the offset distance and the offset direction of the boundary frame after the downward movement relative to the face image; and moving the boundary frame according to the direction opposite to the offset direction, so that the moving distance is equal to the offset distance, and the moved boundary frame is positioned in the display area of the face image.
When adjusting the moved-down bounding box, a strategy of translating the bounding box is adopted first, since translation can bring the bounding box entirely into the display area of the face image. It is first detected whether the area of the moved-down bounding box is smaller than the area of the face image; if so, the offset distance and offset direction of the moved-down bounding box relative to the face image are calculated. As shown in fig. 1E, the offset distance is d and the offset direction is downward. The bounding box can then be moved in the direction opposite to the offset direction by a distance equal to the offset distance; that is, the bounding box 103 is moved upward by d, after which the bounding box 103 is located within the display area 100.
As can be seen, in this example, when the moved-down bounding box is not completely located in the display area of the face image, it is first detected whether the area of the bounding box is smaller than the area of the face image. If so, the bounding box can be brought entirely into the display area by translation: the offset direction and offset distance of the bounding box relative to the display area of the face image are determined, and moving the bounding box by the offset distance in the direction opposite to the offset direction yields a bounding box located within the display area of the face image.
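A small sketch of this translation step follows, assuming a square box of side length size that fits inside the image; clamping the corner coordinates is equivalent to moving the box opposite to the offset direction by exactly the offset distance. The names and coordinate convention are assumptions.

    def shift_into_display(left, top, size, img_w, img_h):
        """Translate a square bounding box fully into the image display area
        (a sketch; assumes size <= img_w and size <= img_h)."""
        new_left = min(max(left, 0), img_w - size)  # undo any horizontal overflow
        new_top = min(max(top, 0), img_h - size)    # undo any vertical overflow
        return new_left, new_top

    # Example: a 140 px box hanging 10 px below a 300x200 image moves up by 10 px.
    print(shift_into_display(30, 70, 140, 300, 200))  # (30, 60)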
In a possible example, the adjusting the boundary box after the downward shifting to obtain the boundary box of the standard face area includes: determining an intersection area of the boundary frame after the downward movement and the face image display area, wherein the intersection area is a rectangular area; and determining a longer side and a shorter side of the intersection area, cutting a square area from the intersection area by taking the shorter side as a reference, and determining a bounding box of the square area as a bounding box of the standard face area.
If the moved-down bounding box is not completely located in the display area of the face image, the intersection region of the moved-down bounding box and the display area of the face image can be determined; this intersection region is necessarily rectangular. The longer side and the shorter side of the intersection region are determined, and a square region whose side length equals the length of the shorter side is cut out of the intersection region with the shorter side as reference. The resulting square region inside the intersection region can then be used as the standard face region.
As can be seen, in this example, if the moved-down bounding box is not completely located in the display area of the face image, then in order to obtain a square bounding box for the standard face region, a square region can be cut out of the intersection region and used as the standard face region. With this cropping method, the bounding box of the resulting standard face image region can include more face detection key points.
In one possible example, the cropping a square region from the intersection region with reference to the shorter side includes: detecting whether the intersection region has an edge which is simultaneously overlapped with the boundary of the face image display region and the boundary frame after moving downwards; if so, cutting the square area by taking the superposed edges as the side lengths, and enabling the side length of the cut square area to be equal to the length of the shorter edge of the intersection area; if not, symmetrically cutting out a square area from the intersection area, and enabling the center of the cut square area to coincide with the center of the intersection area.
When the intersection region of the moved-down bounding box and the face image display area is detected to have an edge that coincides with both the boundary of the face image display area and the moved-down bounding box, a square region is cut out using the coinciding edge as one side, with the side length of the cut square region equal to the length of the shorter side of the intersection region. If no coinciding edge exists, a square region is cut out of the intersection region symmetrically, with the center of the cut square region coinciding with the center of the intersection region.
As can be seen, in this example, a square region is cut out of the intersection region as the standard face region. It is detected whether the intersection region has an edge that coincides with both the boundary of the face image display area and the moved-down bounding box; if so, the square region can be cut out based on the coinciding edge, ensuring that the bounding box of the cut square region coincides as much as possible with the moved-down bounding box, which helps the square region include more face detection key points.
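The cropping rule of this example and the previous one can be sketched as follows; the handling of the coinciding edge is one interpretation of the rule above, and the box layout (left, top, width, height) is an assumption.

    def crop_square_from_intersection(box, img_w, img_h):
        """Cut a square standard face region out of the intersection of the
        moved-down bounding box and the image display area (a sketch)."""
        l, t, w, h = box
        il, it = max(l, 0), max(t, 0)                 # intersection rectangle
        ir, ib = min(l + w, img_w), min(t + h, img_h)
        iw, ih = ir - il, ib - it
        side = min(iw, ih)                            # square side = shorter edge
        if iw > ih:                                   # trim horizontally
            if l <= 0:                                # left edges treated as coinciding
                return 0, it, side
            if l + w >= img_w:                        # right edges treated as coinciding
                return img_w - side, it, side
            return il + (iw - side) / 2.0, it, side   # no coinciding edge: center
        if ih > iw:                                   # trim vertically
            if t <= 0:                                # top edges treated as coinciding
                return il, 0, side
            if t + h >= img_h:                        # bottom edges treated as coinciding
                return il, img_h - side, side
            return il, it + (ih - side) / 2.0, side
        return il, it, side                           # intersection already square

    # Example: box hangs 20 px past the right edge of a 200x200 image.
    print(crop_square_from_intersection((80, 40, 140, 140), 200, 200))  # (80, 50.0, 120)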
In one possible example, the method further comprises: when the face image is detected to comprise a plurality of faces, dividing the face image into a plurality of face image areas according to the number of the faces; and sequentially determining the reference face areas of the face image areas according to the priorities of the faces.
When the face image is detected to include multiple faces, the face image can be divided into multiple face image regions according to the number of faces, and face key point detection is performed on each of the face regions in turn. For example, the reference face regions of the face image regions can be determined sequentially according to the priorities of the faces, and the face key points of the face image regions are then determined.
As can be seen, in this example, when the face image is a multiple-person group photograph, the face image may include multiple faces, at this time, the face image may be divided into multiple face image regions, and face key point detection may be performed on the face image in each face image region in sequence, so that the face key point of each face may be obtained.
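A brief sketch of this multi-face case, assuming the detector returns one box per face and that priority is scored by box area; the patent does not define the priority metric, so that scoring is purely an assumption.

    def reference_regions_by_priority(detected_boxes):
        """Yield per-face reference regions in priority order (a sketch;
        priority = box area, which is an assumed metric)."""
        for box in sorted(detected_boxes, key=lambda b: b[2] * b[3], reverse=True):
            yield box  # each box: (left, top, width, height)

    # Example: two detected faces; the larger one is processed first.
    for region in reference_regions_by_priority([(10, 10, 60, 80), (120, 30, 90, 110)]):
        print(region)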
Referring to fig. 2, fig. 2 is a schematic flowchart of an image processing method according to an embodiment of the present application, and the image processing method is applied to an electronic device. As shown in the figure, the image processing method includes:
s201, the electronic equipment acquires a reference face area of the face image and determines a boundary frame of the reference face area.
S202, the electronic equipment determines the height and the width of the bounding box of the reference face area.
S203, when the electronic equipment detects that the width is smaller than the height, calculating a difference absolute value between the width and the height.
And S204, the electronic equipment adjusts the width of the boundary box of the reference face area to be consistent with the height, and moves the boundary box downwards to obtain the boundary box of the standard face area, wherein the distance of downward movement is one fourth of the absolute value of the difference.
S205, the electronic equipment determines a standard face area according to the bounding box of the standard face area.
S206, the electronic equipment inputs the standard face area into the first neural network model to obtain the face key point coordinates of the face image.
And S207, inputting sample data containing the face key point coordinates into a second neural network model by the electronic equipment to train the second neural network model, wherein the trained second neural network model is a high-precision face key point detection model.
As can be seen, in the embodiment of the present application, the electronic device first acquires a reference face region of a face image and determines a bounding box of the reference face region; next, it adjusts the bounding box of the reference face region to obtain the bounding box of a standard face region and determines the standard face region according to that bounding box, where the adjustment turns a rectangular bounding box into a square bounding box; then, it inputs the standard face region into the first neural network model to obtain the face key point coordinates of the face image; finally, it inputs sample data containing the face key point coordinates into a second neural network model to train that model, and the trained second neural network model is a high-precision face key point detection model. Because the rectangular bounding box is adjusted into a square bounding box when the bounding box of the reference face region is converted into the bounding box of the standard face region, deformation and/or distortion of the face image is avoided, which improves the accuracy of face key point detection.
In addition, because the width of the bounding box of the reference face area is smaller than the height, the bounding box with the square shape can be obtained after the width is adjusted to be consistent with the height, and the bounding box with the square shape can include more face key points.
Consistent with the embodiments shown in fig. 1A and fig. 2, please refer to fig. 3, fig. 3 is a schematic structural diagram of an electronic device 300 provided in the embodiments of the present application, where the electronic device 300 runs with one or more application programs and an operating system, as shown in the figure, the electronic device 300 includes a processor 310, a memory 320, a communication interface 330, and one or more programs 321, where the one or more programs 321 are stored in the memory 320 and configured to be executed by the processor 310, and the one or more programs 321 include instructions for performing the following steps;
acquiring a reference face area of a face image, and determining a boundary frame of the reference face area;
adjusting the bounding box of the reference face region to obtain a bounding box of a standard face region, and determining the standard face region according to the bounding box of the standard face region, wherein the adjustment of the bounding box is used to adjust a rectangular bounding box into a square bounding box;
inputting the standard face region into a first neural network model to obtain face key point coordinates of the face image;
inputting sample data containing the face key point coordinates into a second neural network model to train the second neural network model, wherein the trained second neural network model is a high-precision face key point detection model.
As can be seen, in the embodiment of the present application, the electronic device first acquires a reference face region of a face image and determines a bounding box of the reference face region; next, it adjusts the bounding box of the reference face region to obtain the bounding box of a standard face region and determines the standard face region according to that bounding box, where the adjustment turns a rectangular bounding box into a square bounding box; then, it inputs the standard face region into the first neural network model to obtain the face key point coordinates of the face image; finally, it inputs sample data containing the face key point coordinates into a second neural network model to train that model, and the trained second neural network model is a high-precision face key point detection model. Because the rectangular bounding box is adjusted into a square bounding box when the bounding box of the reference face region is converted into the bounding box of the standard face region, deformation and/or distortion of the face image is avoided, which improves the accuracy of face key point detection.
In a possible example, in the aspect of adjusting the bounding box of the reference face region to obtain the bounding box of the standard face region, the instructions in the program are specifically configured to perform the following operations: determining the height and width of a bounding box of the reference face region; when the width is detected to be smaller than the height, calculating the absolute value of the difference between the width and the height; and adjusting the width of the boundary box of the reference face area to be consistent with the height, and moving the boundary box downwards to obtain the boundary box of the standard face area, wherein the distance of downward movement is one fourth of the absolute value of the difference.
In one possible example, in terms of the bounding box of the standard face region obtained by moving the bounding box downward, the instructions in the program are specifically configured to perform the following operations: judging whether the boundary frame after the downward movement is positioned in a display area of the face image; and if not, adjusting the boundary frame after moving downwards to obtain the boundary frame of the standard face area, wherein the adjustment is used for enabling the boundary frame to be located in the display area of the face image.
In one possible example, in terms of the adjusting the boundary box after the downward shifting to obtain the boundary box of the standard face region, the instructions in the program are specifically configured to perform the following operations: when the area of the boundary frame after the downward movement is detected to be smaller than the area of the face image, calculating the offset distance and the offset direction of the boundary frame after the downward movement relative to the face image; and moving the boundary frame according to the direction opposite to the offset direction, so that the moving distance is equal to the offset distance, and the moved boundary frame is positioned in the display area of the face image.
In one possible example, in terms of the adjusting the boundary box after the downward shifting to obtain the boundary box of the standard face region, the instructions in the program are specifically configured to perform the following operations: determining an intersection area of the boundary frame after the downward movement and the face image display area, wherein the intersection area is a rectangular area; and determining a longer side and a shorter side of the intersection area, cutting a square area from the intersection area by taking the shorter side as a reference, and determining a bounding box of the square area as a bounding box of the standard face area.
In one possible example, in the aspect of the cutting out a square region from the intersection region with the shorter side as a reference, the instructions in the program are specifically configured to perform the following operations: detecting whether the intersection region has an edge which is simultaneously overlapped with the boundary of the face image display region and the boundary frame after moving downwards; if so, cutting the square area by taking the superposed edges as the side lengths, and enabling the side length of the cut square area to be equal to the length of the shorter edge of the intersection area; if not, symmetrically cutting out a square area from the intersection area, and enabling the center of the cut square area to coincide with the center of the intersection area.
In one possible example, the instructions in the program are specifically for performing the following: when the face image is detected to comprise a plurality of faces, dividing the face image into a plurality of face image areas according to the number of the faces; and sequentially determining the reference face areas of the face image areas according to the priorities of the faces.
The above description has introduced the solution of the embodiment of the present application mainly from the perspective of the method-side implementation process. It is understood that the electronic device comprises corresponding hardware structures and/or software modules for performing the respective functions in order to realize the above-mentioned functions. Those of skill in the art would readily appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as hardware or combinations of hardware and computer software. Whether a function is performed as hardware or computer software drives hardware depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiment of the present application, the electronic device may be divided into the functional units according to the method example, for example, each functional unit may be divided corresponding to each function, or two or more functions may be integrated into one control unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit. It should be noted that the division of the unit in the embodiment of the present application is schematic, and is only a logic function division, and there may be another division manner in actual implementation.
Fig. 4 is a block diagram of functional units of an apparatus 400 involved in the embodiments of the present application. The image processing apparatus 400 is applied to an electronic device, and the image processing apparatus 400 includes a processing unit 401 and a communication unit 402, in which:
the processing unit 401 is configured to acquire a reference face region of a face image through the communication unit 402 and determine a bounding box of the reference face region; to adjust the bounding box of the reference face region to obtain the bounding box of a standard face region and determine the standard face region according to the bounding box of the standard face region, where the adjustment of the bounding box is used to adjust a rectangular bounding box into a square bounding box; to input the standard face region into a first neural network model to obtain face key point coordinates of the face image; and to input sample data containing the face key point coordinates into a second neural network model to train the second neural network model, where the trained second neural network model is a high-precision face key point detection model.
As can be seen, in the embodiment of the present application, the electronic device first acquires a reference face region of a face image and determines a bounding box of the reference face region; next, it adjusts the bounding box of the reference face region to obtain the bounding box of a standard face region and determines the standard face region according to that bounding box, where the adjustment turns a rectangular bounding box into a square bounding box; then, it inputs the standard face region into the first neural network model to obtain the face key point coordinates of the face image; finally, it inputs sample data containing the face key point coordinates into a second neural network model to train that model, and the trained second neural network model is a high-precision face key point detection model. Because the rectangular bounding box is adjusted into a square bounding box when the bounding box of the reference face region is converted into the bounding box of the standard face region, deformation and/or distortion of the face image is avoided, which improves the accuracy of face key point detection.
In a possible example, in terms of adjusting the bounding box of the reference face region to obtain the bounding box of the standard face region, the processing unit 401 is specifically configured to: determining the height and width of a bounding box of the reference face region; and for calculating the absolute value of the difference between the width and the height when it is detected that the width is smaller than the height; and the width of the boundary box of the reference face area is adjusted to be consistent with the height, and the boundary box is moved downwards to obtain the boundary box of the standard face area, wherein the distance of downward movement is one fourth of the absolute value of the difference.
In a possible example, in terms of the moving the bounding box downwards to obtain the bounding box of the standard face region, the processing unit 401 is specifically configured to: judging whether the boundary frame after the downward movement is positioned in a display area of the face image; and if not, adjusting the boundary frame after moving downwards to obtain the boundary frame of the standard face area, wherein the adjustment is used for enabling the boundary frame to be located in the display area of the face image.
In a possible example, in terms of the adjusting the boundary box after the downward shifting to obtain the boundary box of the standard face region, the processing unit 401 is specifically configured to: when the area of the boundary frame after the downward movement is detected to be smaller than the area of the face image, calculating the offset distance and the offset direction of the boundary frame after the downward movement relative to the face image; and the boundary frame is used for moving in the opposite direction of the offset direction, so that the moving distance is equal to the offset distance, and the moved boundary frame is positioned in the display area of the face image.
In a possible example, in terms of the adjusting the boundary box after the downward shifting to obtain the boundary box of the standard face region, the processing unit 401 is specifically configured to: determining an intersection area of the boundary frame after the downward movement and the face image display area, wherein the intersection area is a rectangular area; and the method is used for determining a longer side and a shorter side of the intersection area, cutting a square area from the intersection area by taking the shorter side as a reference, and determining a bounding box of the square area as a bounding box of the standard face area.
In one possible example, in terms of the cutting out a square region from the intersection region with the shorter side as a reference, the processing unit 401 is specifically configured to: detecting whether the intersection region has an edge which is simultaneously overlapped with the boundary of the face image display region and the boundary frame after moving downwards; if so, cutting the square area by taking the superposed edges as the side lengths, and enabling the side length of the cut square area to be equal to the length of the shorter edge of the intersection area; if not, symmetrically cutting out a square area from the intersection area, and enabling the center of the cut square area to coincide with the center of the intersection area.
In one possible example, the processing unit 401 is specifically configured to: when the face image is detected to comprise a plurality of faces, dividing the face image into a plurality of face image areas according to the number of the faces; and sequentially determining the reference face areas of the face image areas according to the priorities of the faces.
The electronic device may further include a storage unit 403, the processing unit 401 and the communication unit 402 may be a controller or a processor, and the storage unit 403 may be a memory.
Embodiments of the present application also provide a computer storage medium, where the computer storage medium stores a computer program for electronic data exchange, and the computer program enables a computer to execute part or all of the steps of any one of the methods described in the above method embodiments, and the computer includes a mobile terminal.
Embodiments of the present application also provide a computer program product comprising a non-transitory computer readable storage medium storing a computer program operable to cause a computer to perform some or all of the steps of any of the methods as described in the above method embodiments. The computer program product may be a software installation package, the computer comprising a mobile terminal.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present application is not limited by the order of acts described, as some steps may occur in other orders or concurrently depending on the application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required in this application.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the above-described embodiments of the apparatus are merely illustrative, and for example, the above-described division of the units is only one type of division of logical functions, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of some interfaces, devices or units, and may be an electric or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated into one control unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit may be stored in a computer-readable memory if it is implemented in the form of a software functional unit and sold or used as a stand-alone product. Based on this understanding, all or part of the technical solution of the present application, or the part contributing to the prior art, may be embodied in the form of a software product stored in a memory, including several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods of the embodiments of the present application. The aforementioned memory includes: a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, an optical disk, and other media capable of storing program code.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by associated hardware instructed by a program, which may be stored in a computer-readable memory, which may include: flash Memory disks, Read-Only memories (ROMs), Random Access Memories (RAMs), magnetic or optical disks, and the like.
The foregoing detailed description of the embodiments of the present application illustrates the principles and implementations of the present application; the description of the embodiments above is provided only to help understand the method and core concept of the present application. Meanwhile, a person skilled in the art may, according to the idea of the present application, make changes to the specific embodiments and the application scope. In summary, the content of this specification should not be construed as limiting the present application.

Claims (10)

1. An image processing method applied to an electronic device, the method comprising:
acquiring a reference face area of a face image, and determining a bounding box of the reference face area;
adjusting the bounding box of the reference face area to obtain a bounding box of a standard face area, and determining the standard face area according to the bounding box of the standard face area, wherein the adjustment turns the rectangular bounding box into a square bounding box;
inputting the standard face area into a first neural network model to obtain face key point coordinates of the face image;
inputting sample data containing the face key point coordinates into a second neural network model to train the second neural network model, wherein the trained second neural network model is a high-precision face key point detection model.
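To make the two-stage flow of claim 1 concrete, the following is a minimal Python sketch. The names detect_reference_face, square_bounding_box and first_model are hypothetical stand-ins rather than anything named in the patent; square_bounding_box is sketched after claim 2 below, the image is assumed to be a NumPy-style H x W (x C) array, and the patent does not specify the architecture of either neural network.

    def build_training_sample(face_image, first_model, detect_reference_face):
        # Claim 1, step 1: reference face area and its bounding box (x, y, w, h).
        ref_box = detect_reference_face(face_image)
        # Claim 1, step 2: rectangular box -> square box of the standard face
        # area (see the square_bounding_box sketch after claim 2).
        x, y, s, _ = square_bounding_box(ref_box)
        standard_face = face_image[int(y):int(y + s), int(x):int(x + s)]
        # Claim 1, step 3: the first model predicts face key point coordinates.
        keypoints = first_model(standard_face)
        # Claim 1, step 4: (image, keypoints) pairs become the sample data used
        # to train the second, high-precision key point detection model.
        return standard_face, keypoints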
2. The method of claim 1, wherein the adjusting the bounding box of the reference face area to obtain the bounding box of the standard face area comprises:
determining the height and width of the bounding box of the reference face area;
when the width is detected to be smaller than the height, calculating the absolute value of the difference between the width and the height;
adjusting the width of the bounding box of the reference face area to be equal to the height, and moving the bounding box downward to obtain the bounding box of the standard face area, wherein the distance of the downward movement is one quarter of the absolute value of the difference.
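A minimal sketch of the claim 2 adjustment, assuming an (x, y, w, h) box with y growing downward; the claim fixes only the new width and the downward shift, so the symmetric widening here is an assumption:

    def square_bounding_box(box):
        # Claim 2: determine the height and width of the reference bounding box.
        x, y, w, h = box
        if w < h:
            diff = abs(h - w)    # absolute value of the width/height difference
            x -= diff / 2.0      # widen to match the height; centering is assumed
            w = h
            y += diff / 4.0      # move downward by one quarter of the difference
        return (x, y, w, h)

One plausible reading, not stated in the claim, is that the quarter shift biases the square toward the chin and mouth, which tight detector boxes often truncate.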
3. The method of claim 2, wherein moving the bounding box downward to obtain the bounding box of the standard face area comprises:
judging whether the bounding box after the downward movement lies within the display area of the face image;
if not, adjusting the moved bounding box to obtain the bounding box of the standard face area, wherein the adjustment causes the bounding box to lie within the display area of the face image.
4. The method according to claim 3, wherein the adjusting the moved bounding box to obtain the bounding box of the standard face area comprises:
when the area of the moved bounding box is detected to be smaller than the area of the face image, calculating the offset distance and the offset direction of the moved bounding box relative to the face image;
moving the bounding box in the direction opposite to the offset direction by a distance equal to the offset distance, so that the moved bounding box lies within the display area of the face image.
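Claim 4 describes the correction as measuring an offset and moving the box back the same distance the opposite way; when the box is smaller than the image, as the claim requires, this reduces to clamping each coordinate. A sketch under the same assumed (x, y, w, h) convention:

    def shift_box_into_image(box, img_w, img_h):
        x, y, w, h = box
        # Any protrusion past a border is the offset distance; moving the box
        # back by exactly that amount in the opposite direction is a clamp.
        x = min(max(x, 0), img_w - w)
        y = min(max(y, 0), img_h - h)
        return (x, y, w, h)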
5. The method according to claim 3, wherein the adjusting the moved bounding box to obtain the bounding box of the standard face area comprises:
determining the intersection area of the moved bounding box and the display area of the face image, wherein the intersection area is a rectangular area;
determining the longer side and the shorter side of the intersection area, cropping a square area from the intersection area with the shorter side as a reference, and determining the bounding box of the square area as the bounding box of the standard face area.
6. The method of claim 5, wherein the cropping a square area from the intersection area with the shorter side as a reference comprises:
detecting whether the intersection area has an edge that coincides with both the boundary of the display area of the face image and the moved bounding box;
if so, cropping the square area with the coincident edge as a side, the side length of the cropped square area being equal to the length of the shorter side of the intersection area;
if not, cropping a square area symmetrically from the intersection area, with the center of the cropped square area coinciding with the center of the intersection area.
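The alternative correction in claims 5 and 6 keeps the moved box in place and crops a square from its rectangular intersection with the image. A sketch under the same assumed box convention; exact comparisons stand in for the claim's coincidence test, and only the two end edges along the longer dimension are checked, since only they can position a square whose side equals the shorter side:

    def square_from_intersection(box, img_w, img_h):
        bx, by, bw, bh = box
        # Claim 5: rectangular intersection of the moved box and the image area.
        ix0, iy0 = max(bx, 0), max(by, 0)
        ix1, iy1 = min(bx + bw, img_w), min(by + bh, img_h)
        iw, ih = ix1 - ix0, iy1 - iy0
        side = min(iw, ih)            # the shorter side sets the square's size
        if iw > ih:
            # Longer dimension is horizontal: slide the square along x.
            if ix0 == 0 and ix0 == bx:             # left edge on image and box boundary
                x0 = ix0
            elif ix1 == img_w and ix1 == bx + bw:  # right edge on image and box boundary
                x0 = ix1 - side
            else:
                x0 = ix0 + (iw - side) / 2.0       # symmetric (centered) crop
            return (x0, iy0, side, side)
        # Longer (or equal) dimension is vertical: slide the square along y.
        if iy0 == 0 and iy0 == by:                 # top edge on image and box boundary
            y0 = iy0
        elif iy1 == img_h and iy1 == by + bh:      # bottom edge on image and box boundary
            y0 = iy1 - side
        else:
            y0 = iy0 + (ih - side) / 2.0           # symmetric (centered) crop
        return (ix0, y0, side, side)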
7. The method according to any one of claims 1-6, further comprising:
when the face image is detected to include a plurality of faces, dividing the face image into a plurality of face image areas according to the number of faces;
determining the reference face areas of the face image areas in turn according to the priorities of the faces.
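Claim 7 does not define the face priority; the sketch below assumes, purely as an illustrative stand-in, that larger detected faces are processed first:

    def faces_by_priority(face_boxes):
        # Hypothetical priority: larger box area first. Each box is (x, y, w, h);
        # reference face areas would then be determined in this order.
        return sorted(face_boxes, key=lambda b: b[2] * b[3], reverse=True)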
8. An image processing apparatus applied to an electronic device, the image processing apparatus comprising a processing unit and a communication unit, wherein
the processing unit is configured to: acquire a reference face area of a face image through the communication unit, and determine a bounding box of the reference face area; adjust the bounding box of the reference face area to obtain a bounding box of a standard face area, and determine the standard face area according to the bounding box of the standard face area, wherein the adjustment turns the rectangular bounding box into a square bounding box; input the standard face area into a first neural network model to obtain face key point coordinates of the face image; and input sample data containing the face key point coordinates into a second neural network model to train the second neural network model, wherein the trained second neural network model is a high-precision face key point detection model.
9. An electronic device comprising a processor, a memory, a communication interface, and one or more programs stored in the memory and configured to be executed by the processor, the programs comprising instructions for performing the steps in the image processing method of any of claims 1-7.
10. A computer-readable storage medium storing a computer program for electronic data exchange, wherein the computer program causes a computer to execute the image processing method according to any one of claims 1 to 7.
CN202010120444.4A 2020-02-26 2020-02-26 Image processing method and related device Active CN111368678B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010120444.4A CN111368678B (en) 2020-02-26 2020-02-26 Image processing method and related device
PCT/CN2021/072461 WO2021169668A1 (en) 2020-02-26 2021-01-18 Image processing method and related device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010120444.4A CN111368678B (en) 2020-02-26 2020-02-26 Image processing method and related device

Publications (2)

Publication Number Publication Date
CN111368678A true CN111368678A (en) 2020-07-03
CN111368678B CN111368678B (en) 2023-08-25

Family

ID=71208133

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010120444.4A Active CN111368678B (en) 2020-02-26 2020-02-26 Image processing method and related device

Country Status (2)

Country Link
CN (1) CN111368678B (en)
WO (1) WO2021169668A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114581986A (en) * 2021-10-20 2022-06-03 北京市商汤科技开发有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN114116182B (en) * 2022-01-28 2022-07-08 南昌协达科技发展有限公司 Disinfection task allocation method and device, storage medium and equipment

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106295678B (en) * 2016-07-27 2020-03-06 北京旷视科技有限公司 Neural network training and constructing method and device and target detection method and device
US10747224B2 (en) * 2018-06-19 2020-08-18 Toyota Research Institute, Inc. Debugging an autonomous driving machine learning model
CN110674874B (en) * 2019-09-24 2022-11-29 武汉理工大学 Fine-grained image identification method based on target fine component detection
CN111368678B (en) * 2020-02-26 2023-08-25 Oppo广东移动通信有限公司 Image processing method and related device

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010068470A (en) * 2008-09-12 2010-03-25 Dainippon Printing Co Ltd Apparatus for automatically trimming face image
CN107871098A (en) * 2016-09-23 2018-04-03 北京眼神科技有限公司 Method and device for acquiring human face characteristic points
US20190102872A1 (en) * 2017-09-29 2019-04-04 Apple Inc. Glare Reduction in Captured Images
US20190294929A1 (en) * 2018-03-20 2019-09-26 The Regents Of The University Of Michigan Automatic Filter Pruning Technique For Convolutional Neural Networks
CN110807448A (en) * 2020-01-07 2020-02-18 南京甄视智能科技有限公司 Human face key point data enhancement method, device and system and model training method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
宋加涛, 刘济林, 池哲儒, 王蔚: "Precise Localization of Eyes in Frontal Face Images" ("人脸正面图像中眼睛的精确定位") *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021169668A1 (en) * 2020-02-26 2021-09-02 Oppo广东移动通信有限公司 Image processing method and related device
WO2022037535A1 (en) * 2020-08-21 2022-02-24 海信视像科技股份有限公司 Display device and camera tracking method
CN112464740A (en) * 2020-11-05 2021-03-09 北京科技大学 Image processing method and system for top-down gesture recognition process
CN112818908A (en) * 2021-02-22 2021-05-18 Oppo广东移动通信有限公司 Key point detection method, device, terminal and storage medium
CN115908260A (en) * 2022-10-20 2023-04-04 北京的卢铭视科技有限公司 Model training method, face image quality evaluation method, device and medium
CN115908260B (en) * 2022-10-20 2023-10-20 北京的卢铭视科技有限公司 Model training method, face image quality evaluation method, equipment and medium

Also Published As

Publication number Publication date
WO2021169668A1 (en) 2021-09-02
CN111368678B (en) 2023-08-25

Similar Documents

Publication Publication Date Title
CN111368678A (en) Image processing method and related device
CN108681743B (en) Image object recognition method and device and storage medium
CN105488511B (en) The recognition methods of image and device
CN109684980B (en) Automatic scoring method and device
EP3706039A1 (en) Image processing method, apparatus, and computer-readable recording medium
CN111126394A (en) Character recognition method, reading aid, circuit and medium
EP3273388A1 (en) Image information recognition processing method and device, and computer storage medium
WO2022134771A1 (en) Table processing method and apparatus, and electronic device and storage medium
CN110619334B (en) Portrait segmentation method based on deep learning, architecture and related device
CN113780201B (en) Hand image processing method and device, equipment and medium
CN110827301B (en) Method and apparatus for processing image
CN111149101A (en) Target pattern searching method and computer readable storage medium
CN113850238A (en) Document detection method and device, electronic equipment and storage medium
CN112927163A (en) Image data enhancement method and device, electronic equipment and storage medium
CN112348025A (en) Character detection method and device, electronic equipment and storage medium
CN113610864B (en) Image processing method, device, electronic equipment and computer readable storage medium
US11367296B2 (en) Layout analysis
CN104635932A (en) Method and equipment for adjusting display contents
CN112036268B (en) Component identification method and related device
CN117561547A (en) Scene determination method, device and computer readable storage medium
CN112613409A (en) Hand key point detection method and device, network equipment and storage medium
CN113298098A (en) Fundamental matrix estimation method and related product
CN111522988A (en) Image positioning model obtaining method and related device
CN114820575B (en) Image verification method and device, computer equipment and storage medium
CN111103967A (en) Control method and device of virtual object

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant