CN116503924B - Portrait hair edge processing method and device, computer equipment and storage medium - Google Patents


Info

Publication number: CN116503924B (application CN202310332813.XA)
Authority: CN (China)
Prior art keywords: curve, area, hair, face information, defining
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Application number: CN202310332813.XA
Other languages: Chinese (zh)
Other versions: CN116503924A (en)
Inventors: 陈达佳, 吕杏华
Current Assignee: Guangzhou Yipai Alliance Network Technology Co., Ltd. (the listed assignees may be inaccurate; Google has not performed a legal analysis)
Original Assignee: Guangzhou Yipai Alliance Network Technology Co., Ltd.
Application filed by Guangzhou Yipai Alliance Network Technology Co., Ltd.
Priority application: CN202310332813.XA
Publication of application: CN116503924A
Publication of grant: CN116503924B

Classifications

    • G06V40/171 — Local features and components; facial parts; occluding parts, e.g. glasses; geometrical relationships (under G06V40/16 Human faces; G06V40/168 Feature extraction, face representation)
    • G06T17/20 — Finite element generation, e.g. wire-frame surface description, tessellation (under G06T17/00 Three-dimensional [3D] modelling)
    • G06V10/26 — Segmentation of patterns in the image field; cutting or merging of image elements to establish the pattern region; detection of occlusion
    • G06V10/898 — Image or video recognition using frequency-domain filters, characterised by combinations of filters, e.g. phase-only filters
    • G06V40/10 — Human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands
    • G06V40/161 — Human faces: detection, localisation, normalisation
    • G06V40/172 — Human faces: classification, e.g. identification
    • Y02T90/00 — Enabling technologies or technologies with a potential or indirect contribution to GHG emissions mitigation

Abstract

The invention discloses a portrait hair edge processing method and device, computer equipment and a storage medium. The method comprises the following steps: acquiring face information; dividing the face information into a static area and a dynamic area; calculating a distance function of the static area to obtain the approximate contraction direction of the hair, wherein the distance function defines a first curve and a second curve, the first curve represents the outline of the static area, the second curve is the curve equidistant from the first curve, and the annular area enclosed by the first curve and the second curve is the calculation area; recalculating the distance function over the annular area, and performing finite element segmentation and finite element calculation on the annular area to obtain the displacement of each pixel point; and changing the face information according to the displacements, combining the changed dynamic area with the unchanged static area to obtain the changed face information. The invention can automatically compress and gather fluffy, scattered hair in a portrait, avoiding verification failure caused by fluffy hair in the photo; meanwhile, the processing effect is natural and free of distortion.

Description

Portrait hair edge processing method and device, computer equipment and storage medium
Technical Field
The invention relates to a portrait hair edge processing method and device, computer equipment and a storage medium, and belongs to the technical field of image processing in photographing services.
Background
In the AI (artificial intelligence) era, portrait information is both a source of personal privacy and an important personal identification mark: it has become a key basis for verifying that a person is who they claim to be, and is closely tied to work, daily life, study, examinations, travel abroad, and the like. Collecting certificate-photo portrait information properly is therefore critical to protecting information security at the source. Because a certificate photo requires that the portrait hair not be fluffy and scattered and that the hair not occupy too large a proportion of the image, image processing in existing photographing services compresses the hair by manual retouching.
Disclosure of Invention
In view of the above, the present invention provides a portrait hair edge processing method, apparatus, computer device and storage medium, which can automatically compress and gather loose hair in a portrait and avoid verification failure caused by fluffy hair in the photo; meanwhile, the processing effect is natural and free of distortion.
A first object of the present invention is to provide a portrait hair edge processing method.
A second object of the present invention is to provide a portrait hair edge processing device.
A third object of the present invention is to provide a computer device.
A fourth object of the present invention is to provide a storage medium.
The first object of the present invention can be achieved by adopting the following technical scheme:
a portrait hair edge processing method, the method comprising:
acquiring face information;
dividing the face information into a static area and a dynamic area, wherein the static area comprises the head, the body, the clothes, and the part where the hair is relatively dense, and the dynamic area comprises the part where the hair is relatively sparse and the background;
calculating a distance function of the static area to obtain the approximate contraction direction of the hair, wherein the distance function defines a first curve and a second curve, the first curve represents the outline of the static area, the second curve is the curve equidistant from the first curve, chosen so that the second curve contains most of the hair, and the annular area enclosed by the first curve and the second curve is the calculation area;
recalculating the distance function over the annular area, defining the distances at the edge, inside and outside of the annular area, and performing finite element segmentation and finite element calculation on the annular area to obtain the displacement of each pixel point;
and changing the face information according to the displacements, and combining the changed dynamic area with the unchanged static area to obtain the changed face information.
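The construction of the annular calculation area between the first and second curves can be sketched as follows. This is a minimal illustration under stated assumptions, not the patented implementation: `distance_to_region` and `annular_region` are hypothetical helper names, and the brute-force distance computation stands in for whatever fast distance transform a real system would use.

```python
import numpy as np

def distance_to_region(mask):
    """Unsigned distance from every pixel to the nearest pixel of the
    static-area mask (brute force; fine for small illustrative grids)."""
    ys, xs = np.nonzero(mask)
    pts = np.stack([ys, xs], axis=1)                 # pixels of the static area
    h, w = mask.shape
    gy, gx = np.mgrid[0:h, 0:w]
    grid = np.stack([gy.ravel(), gx.ravel()], axis=1)
    d = np.sqrt(((grid[:, None, :] - pts[None, :, :]) ** 2).sum(-1)).min(1)
    return d.reshape(h, w)

def annular_region(static_mask, offset):
    """Ring between the static-area outline (first curve) and the
    equidistant offset curve (second curve) at distance `offset`."""
    d = distance_to_region(static_mask)
    return (d > 0) & (d <= offset)
```

Choosing `offset` large enough that the ring contains most of the sparse hair reproduces the role of the second curve described above.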
Further, the performing finite element segmentation and finite element calculation on the annular area to obtain the displacement of each pixel point comprises:
performing finite element segmentation on the annular area by a triangle segmentation method to obtain a triangular mesh;
and performing finite element calculation on the triangular mesh area to obtain the displacement of each pixel point.
Further, the performing finite element segmentation on the annular area by the triangle segmentation method to obtain the triangular mesh comprises:
solving a first-order nonlinear partial differential equation on the annular area to obtain a size distribution function of the finite element triangles;
and obtaining the triangular mesh from the size distribution function of the finite element triangles by solving a partial differential equation of structural mechanics, following the statically indeterminate structure calculation in structural mechanics.
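The structural-mechanics mesh generation described here is closely related to the truss analogy of DistMesh-style methods: Delaunay edges are treated as bars or springs, and the mesh is obtained by driving the spring forces toward equilibrium. The sketch below is an assumption-laden illustration only: `relax_mesh` is a hypothetical name, the size distribution function is taken as uniform, and the domain is a unit square rather than the annular area.

```python
import numpy as np
from scipy.spatial import Delaunay

def relax_mesh(points, fixed, n_iter=20, step=0.1):
    """DistMesh-style relaxation: Delaunay edges act as springs whose rest
    length slightly exceeds the current mean edge length; free points move
    along the net (repulsive-only) spring force, fixed points stay put."""
    pts = points.copy()
    for _ in range(n_iter):
        tri = Delaunay(pts)
        edges = set()
        for simplex in tri.simplices:
            for a, b in ((0, 1), (1, 2), (2, 0)):
                edges.add(tuple(sorted((simplex[a], simplex[b]))))
        edges = np.array(sorted(edges))
        vec = pts[edges[:, 1]] - pts[edges[:, 0]]
        L = np.linalg.norm(vec, axis=1)
        L0 = 1.2 * L.mean()                        # uniform target size
        f = np.maximum(L0 - L, 0.0) / np.maximum(L, 1e-12)
        fvec = f[:, None] * vec
        force = np.zeros_like(pts)
        np.add.at(force, edges[:, 0], -fvec)       # accumulate spring forces
        np.add.at(force, edges[:, 1], fvec)
        force[fixed] = 0.0                         # boundary points are pinned
        pts = pts + step * force
    return pts, Delaunay(pts).simplices
```

A non-uniform size distribution function would simply replace the constant `L0` with a per-edge target length evaluated at the edge midpoint.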
Further, the construction process of the first-order nonlinear partial differential equation is as follows:
defining a closed curve of position and time, and defining the curvature, moving direction, movement rate, speed component in the x-axis direction, speed component in the y-axis direction, and rate function of the closed curve;
and constructing the first-order nonlinear partial differential equation from the closed curve and its curvature, moving direction, movement rate, speed components in the x-axis and y-axis directions, and rate function.
Further, the defining a closed curve of position and time, and defining the curvature, moving direction, movement rate, speed components in the x-axis and y-axis directions, and rate function of the closed curve, comprises:
defining a closed curve of position and time, as follows:

C(s, t) = (x(s, t), y(s, t))

wherein C(s, t) represents a point on the curve, x and y represent coordinates, s is the position parameter of the curve, and t is the time parameter of the curve;
defining the curvature of the closed curve as follows:

K(s, t) = (x_s · y_ss − y_s · x_ss) / (x_s² + y_s²)^(3/2)

wherein K(s, t) represents the curvature, x_s represents the first partial derivative of x with respect to s, y_s represents the first partial derivative of y with respect to s, x_ss represents the second partial derivative of x with respect to s, and y_ss represents the second partial derivative of y with respect to s;
defining the moving direction of the closed curve as the (inward) normal direction, as follows:

N(s, t) = (−y_s, x_s) / (x_s² + y_s²)^(1/2)

wherein N(s, t) represents the normal vector at the point s on the curve at time t;
defining the movement rate of the closed curve as follows:

∂C/∂t = F(K) · N(s, t)

wherein F(K) represents the movement rate;
defining the speed component in the x-axis direction and the speed component in the y-axis direction of the closed curve, as follows:

x_t = −F(K) · y_s / (x_s² + y_s²)^(1/2),  y_t = F(K) · x_s / (x_s² + y_s²)^(1/2)

wherein x_t represents the first partial derivative of x with respect to t, and y_t represents the first partial derivative of y with respect to t;
defining the rate function of the closed curve as follows:

F(K) = 1 − εK

where ε is a constant greater than 0.
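The curve evolution defined above can be sketched discretely with an explicit Euler update on a polygonal closed curve, approximating the s-derivatives by periodic central differences. `evolve` is a hypothetical name and the step sizes are illustrative assumptions.

```python
import numpy as np

def evolve(curve, eps=0.05, dt=0.05, steps=30):
    """Move a closed polygon under dC/dt = F(K) * N with F(K) = 1 - eps*K,
    where N is the inward unit normal of a counterclockwise curve."""
    c = curve.copy()
    for _ in range(steps):
        # periodic central differences approximate the s-derivatives
        xs = (np.roll(c, -1, axis=0) - np.roll(c, 1, axis=0)) / 2.0
        xss = np.roll(c, -1, axis=0) - 2.0 * c + np.roll(c, 1, axis=0)
        num = xs[:, 0] * xss[:, 1] - xs[:, 1] * xss[:, 0]
        den = (xs[:, 0] ** 2 + xs[:, 1] ** 2) ** 1.5 + 1e-12
        K = num / den                              # discrete curvature
        n = np.stack([-xs[:, 1], xs[:, 0]], axis=1)  # inward normal (CCW curve)
        n /= np.linalg.norm(n, axis=1, keepdims=True) + 1e-12
        c = c + dt * (1.0 - eps * K)[:, None] * n
    return c
```

On a circle, K ≈ 1/R, so F(K) = 1 − ε/R stays positive for small ε and the curve contracts, which matches the intended inward motion of the hair contour.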
Further, the performing finite element calculation on the triangular mesh area to obtain the displacement of each pixel point comprises:
in the triangular mesh area, solving a partial differential equation of fluid mechanics to obtain the displacement of each triangle of the mesh, and calculating the displacement of each pixel point by triangular interpolation.
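The triangular interpolation step can be sketched with barycentric coordinates: once the displacement is known at the three vertices of the triangle containing a pixel, the pixel's displacement is their barycentric-weighted average. `barycentric_displacement` is a hypothetical helper name for illustration.

```python
import numpy as np

def barycentric_displacement(p, tri_pts, tri_disp):
    """Interpolate the displacement at point p inside one triangle from
    the displacements computed at its three vertices."""
    a, b, c = (np.asarray(v, dtype=float) for v in tri_pts)
    # Solve for barycentric coordinates (u, v) with weight of a = 1 - u - v
    T = np.array([[b[0] - a[0], c[0] - a[0]],
                  [b[1] - a[1], c[1] - a[1]]])
    u, v = np.linalg.solve(T, np.asarray(p, dtype=float) - a)
    w = np.array([1.0 - u - v, u, v])              # barycentric weights
    return w @ np.asarray(tri_disp, dtype=float)
```

Applying this to every pixel of the annular area, using the triangle that contains it, yields the per-pixel displacement field described above.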
Further, the defining the distances at the edge, inside and outside of the annular area comprises:
defining the distance at the edge of the annular area as 0, the distance inside the annular area as negative, and the distance outside the annular area as positive.
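This sign convention is that of a signed distance function. On a pixel grid it can be realized as follows; this is a brute-force sketch for small grids (the `signed_distance` helper is hypothetical; a real implementation would use a fast distance transform).

```python
import numpy as np

def signed_distance(region):
    """Signed distance per the convention above: 0 on the region edge,
    negative inside the region, positive outside (brute force)."""
    h, w = region.shape
    pad = np.pad(region, 1, constant_values=False)
    # interior pixels have all four 4-neighbours inside the region
    interior = (pad[1:-1, 1:-1] & pad[:-2, 1:-1] & pad[2:, 1:-1]
                & pad[1:-1, :-2] & pad[1:-1, 2:])
    edge = region & ~interior
    ey, ex = np.nonzero(edge)
    gy, gx = np.mgrid[0:h, 0:w]
    d = np.sqrt((gy.ravel()[:, None] - ey) ** 2
                + (gx.ravel()[:, None] - ex) ** 2).min(1).reshape(h, w)
    return np.where(region, -d, d)                 # edge pixels get -0.0 == 0
```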
The second object of the present invention can be achieved by adopting the following technical scheme:
a portrait hair edge processing device, said device comprising:
the acquisition module is used for acquiring the face information;
the dividing module is used for dividing the face information into a static area and a dynamic area, wherein the static area comprises the head, the body, the clothes, and the part where the hair is relatively dense, and the dynamic area comprises the part where the hair is relatively sparse and the background;
the first calculation module is used for calculating a distance function of the static area to obtain the approximate contraction direction of the hair, wherein the distance function defines a first curve and a second curve, the first curve represents the outline of the static area, the second curve is the curve equidistant from the first curve, chosen so that the second curve contains most of the hair, and the annular area enclosed by the first curve and the second curve is the calculation area;
the second calculation module is used for recalculating the distance function over the annular area, defining the distances at the edge, inside and outside of the annular area, and performing finite element segmentation and finite element calculation on the annular area to obtain the displacement of each pixel point;
and the combination module is used for changing the face information according to the displacements, and combining the changed dynamic area with the unchanged static area to obtain the changed face information.
Further, the device further comprises, after the acquisition module:
a cutting and adjusting module, used for performing photo cutting and color adjustment on the face information.
The third object of the present invention can be achieved by adopting the following technical scheme:
the computer equipment comprises a processor and a memory for storing a program executable by the processor, wherein the processor realizes the portrait hair edge processing method when executing the program stored by the memory.
The fourth object of the present invention can be achieved by adopting the following technical scheme:
a storage medium storing a program which, when executed by a processor, implements the portrait hair edge processing method described above.
Compared with the prior art, the invention has the following beneficial effects:
according to the invention, the face information is divided into a static area and a moving area, the distance function of the static area is calculated to obtain the approximate contraction direction of hair, the distance function is recalculated according to an annular area formed by two curves of the distance function, finite element segmentation and finite element calculation are carried out on the annular area to obtain the displacement of each pixel point, so that the face information is changed, the changed moving area and the unchanged static area are combined to obtain the changed face information, the fluffy and scattered hair in the collected human image can be automatically compressed, and the failure of examination and verification caused by the fluffy hair of a photo is avoided; meanwhile, the processing effect is natural, and distortion is avoided.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to the structures shown in these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a block diagram of a photographing service system according to embodiment 1 of the present invention.
Fig. 2 is a flowchart of a photographing service method of the photographing apparatus according to embodiment 1 of the present invention.
Fig. 3 is a flowchart of a platform server photographing service method according to embodiment 1 of the present invention.
Fig. 4 is a flowchart of a portrait hair edge processing method according to embodiment 1 of the present invention.
Fig. 5 is an original image of embodiment 1 of the present invention and a schematic view of its divided static area.
Fig. 6 is a schematic diagram of the division into the static area and the dynamic area according to embodiment 1 of the present invention.
Fig. 7 is a schematic diagram of the distance function of the static area in embodiment 1 of the present invention.
Fig. 8 is a schematic diagram of the distance function of the annular region according to embodiment 1 of the present invention.
Fig. 9 is a diagram showing a size distribution function of a finite element triangle according to embodiment 1 of the present invention.
Fig. 10 is a schematic diagram of a triangular mesh area according to embodiment 1 of the present invention.
Fig. 11 is a schematic diagram of the displacement of each pixel in embodiment 1 of the present invention.
Fig. 12 is a diagram showing the original image and the final effect image of embodiment 1 of the present invention.
Fig. 13 is a block diagram showing a portrait hair edge processing device according to embodiment 2 of the present invention.
Fig. 14 is a block diagram showing the structure of a computer device according to embodiment 3 of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments, and all other embodiments obtained by those skilled in the art without making any inventive effort based on the embodiments of the present invention are within the scope of protection of the present invention.
Example 1:
This embodiment provides a portrait hair edge processing method, which can be applied to photo processing in a photographing service system. As shown in Fig. 1, the photographing service system comprises a photographing device 101, a mobile terminal 102 and a platform server 103; the photographing device 101 is connected with the mobile terminal 102, and the platform server 103 is connected with the photographing device 101 and the mobile terminal 102 respectively. In this embodiment, photo processing may be performed either by the photographing device 101 or by the platform server 103.
When photo processing is performed by the photographing device 101, the photographing service method of the photographing device 101 is as shown in Fig. 2, and specifically comprises:
s201, identifying a user, and guiding shooting according to the action of the user.
This embodiment may further include, before step S201: acquiring the applet code sent by the platform server, and displaying the applet code, which is a two-dimensional code, on the display screen.
The operation of identifying the user specifically comprises: when a user appears in front of the photographing device, sensing the user; performing face detection on the user with the SDM-based HyperLandmark algorithm, and acquiring 106 facial key points from the user's face detection frame to form a basic facial figure of the five sense organs, including the lower edge of the face, the eyebrows, eyes, nose and mouth, so as to judge whether the posture of the subject's head is correct; and acquiring the key points of the person's limbs from the person detection frame using an OpenPose model under the Caffe framework, including 14 key points such as the head, trunk and limbs, so as to judge from these key points whether the person's posture is upright, whether a hand is raised, and whether there is any occluding behavior.
To sense a user, the photographing device plays a promotional video or animation while in standby and detects the environment scene with a FastestDet model under the YOLO framework; it remains in standby when no person is present, and starts face detection once a pedestrian recognition frame is detected, detecting the face frame and the five-point facial landmark information with an MTCNN network. After a certain number of frames has passed, the user is determined to be stably seated, which completes the operation of sensing the user; the stable state means that the user is judged to be stable when the user does not move for several consecutive frames.
Wherein, guiding the user to shoot at the middle position of the picture according to the user's actions may include controlling the loudspeaker to play voice and/or controlling the display screen to play an animation; this operation is performed after the face frame and the five-point facial landmark information are detected.
S202, acquiring a first instruction sent by the mobile terminal, and preparing a shooting environment.
Specifically, the user scans the applet code displayed by the photographing device with a mobile terminal (such as a mobile phone), opens the applet, enters the intelligent photographing service, and sends a first instruction to the photographing device, so that the photographing device obtains the first instruction sent after the mobile terminal scans the applet code. The first instruction comprises the certificate type selected by the user, covering three major categories: identity card, driving licence, and entry-exit and other certificates. Those skilled in the art will appreciate that the user may also enter the intelligent photographing service through a designated APP or by scanning the code displayed on the photographing device, and input the first instruction to the photographing device, so that the photographing device acquires the first instruction input by the mobile terminal through the designated APP.
Preparing the shooting environment in this embodiment includes controlling the loudspeaker to play voice, controlling the fill light, and controlling the camera unit to open and to rise or lower. The camera unit may be a camera module or video camera, and the fill light may be a group of fill lamps arranged at different positions to provide light from different directions.
S203, primarily shooting the face of the user to obtain first face information, and playing a picture shot in real time through a display screen.
This embodiment adopts one-way glass: after the viewfinder is aligned with the portrait, the user's face is photographed for the first time to obtain the first face information, and the picture shot in real time is played on the display screen; through the reflection of the one-way glass, the user sees what looks like a mirror.
S204, certificate standard detection is carried out on the first face information.
The certificate photo standard detection may be performed by the photographing device, directly obtaining the detection result; it is easy to understand that, alternatively, the first instruction and the first face information may be sent to the platform server, which performs the certificate photo standard detection on the first face information and returns its detection result.
Further, the certificate photo standard detection of this embodiment uses an EfficientNet-based mobile classification model under the PyTorch framework to detect whether the user wears glasses, whether the hair is occluding, whether the clothing is occluding, and so on.
If the certificate photo standard detection result does not meet the certificate photo standard, that is, if any one of the standard checks fails, step S205 is performed; if the detection passes, the process proceeds to step S206.
S205, according to the certificate photo standard detection result, adjusting the photographing device and photographing the user's face again to obtain new first face information, until the certificate photo standard detection result is a pass.
Specifically, according to the certificate photo standard detection result, the photographing device is adjusted, and/or the system waits for the user to tidy up and re-enter a stable state; the user's face is photographed again, the newly photographed face information is taken as the first face information, and the process returns to the certificate photo standard detection of step S204 for the next round of detection, until the detection result is a pass, whereupon the process proceeds to step S206.
Adjusting the photographing device includes controlling the camera unit to rise or lower, and/or controlling the fill light, and/or controlling the loudspeaker to play voice and the display screen to play an animation guiding the user to tidy up.
Further, controlling the camera unit to rise or lower includes: adjusting the position of the camera unit according to the first face information to reach the optimal photographing height. Controlling the fill light includes: segmenting the skin area in the first face information with a segmentation model, calculating the illumination intensity distribution in the skin area and the shadow positions, and adjusting the illumination intensity of the fill lamps; it is easy to understand that the light intensity of the fill lamp groups arranged at different positions can be adjusted to reach the optimal lighting environment for shooting. Guiding the user to tidy up includes: guiding the user to correct facial expression, hair, posture (including head posture and body posture) and clothing accessories according to the certificate photo standard detection result.
S206, formally shooting the face of the user to obtain second face information, performing photo processing on the second face information, and sending the processed second face information to the platform server.
Specifically, after the user gets ready as guided, the photographing device plays the voice countdown "3, 2, 1" and formally photographs the user's face to obtain the second face information. At this point the photographing device may directly perform photo processing on the second face information and send the processed second face information to the platform server; the platform server performs photo review on the processed second face information, feeds the review result back to the mobile terminal, obtains a second instruction from the mobile terminal, and sends the reviewed certificate photo and certificate code to the mobile terminal according to the second instruction.
After shooting is finished, the photographing device restores the daily environment, which includes controlling the display screen to play an animation guiding the user to check the mobile terminal, controlling the loudspeaker to play voice guiding the user to check the mobile terminal, closing the camera unit, and turning off the fill light.
When photo processing is performed by the platform server 103, the photographing service method of the platform server 103 is as shown in Fig. 3, and specifically comprises:
S301, acquiring a first instruction and first face information sent by photographing equipment, and performing certificate standard detection on the first face information.
This embodiment may further include, before step S301: receiving a photographing device application from a photo studio through a client or the official website, the application including the photo studio's operating qualification, a signed device-use agreement, and the like; reviewing the photo studio according to the application, and after the qualification review passes, performing the relevant information configuration for the photographing device and sending it to the contracted photo studio; and, after the configured photographing device is dispatched to the contracted photo studio, managing and controlling all information of the photographing devices in contracted photo studios in a unified manner, such as the device application system, device information configuration, and device hardware upgrade and maintenance.
Further, controlling the photographing device also includes receiving the information with which the photo studio logs in to its account (login account/mobile phone number + password) through a computer client or the mobile terminal applet to bind the photographing device.
S302, sending the certificate photo standard detection result to photographing equipment.
After receiving the certificate photo standard detection result, the photographing equipment judges it: if the result does not meet the certificate photo standard, the photographing equipment is adjusted according to the result and photographs the user's face again until the detection passes; if the result is a pass, the user's face is formally shot to obtain second face information, which is sent to the platform server.
Since the credential standard detection can be implemented by the photographing apparatus, the smart photographing service method of the present embodiment may also directly perform steps S303-S304.
S303, acquiring second face information sent by the photographing equipment, performing photo processing on the second face information, performing photo checking on the processed second face information, and feeding back a photo checking result to the mobile terminal.
The platform server can acquire unprocessed second face information sent by the photographing equipment, perform photo processing on the second face information, and perform photo auditing on the processed second face information.
The platform server directly acquires the second face information acquired and sent by the photographing equipment, and performs photo auditing on the second face information without processing by an unauthorized third party (such as a photo studio and manual photo repairing software), so that the risk of photo information leakage is radically eliminated.
When the photo audit result is qualified, the user can check and confirm the automatically generated certificate photo on the applet or APP of the mobile terminal, and can send a second instruction including payment, photo download, receipt and the like to the platform server through the mobile terminal; after payment, the certificate photo can be downloaded and stored in the user's mobile terminal photo album. When the photo audit result is unqualified, the applet or APP displays the reasons and solutions, shows the original photo with the unqualified positions marked, and starts a 20-second countdown upon entering the unqualified page; within the 20 seconds the user can send a second instruction selecting re-shooting to the platform server through the mobile terminal, and beyond 20 seconds the user must scan again to re-shoot.
The photo processing of this embodiment includes photo cropping, color adjustment, hair edge processing, and background color conversion according to the type of certificate in the first instruction.
S304, acquiring a second instruction of the mobile terminal, and transmitting the checked certificate photo and the certificate code to the mobile terminal according to the second instruction.
Specifically, sending the certificate photo and the certificate code to the mobile terminal comprises the following steps: after the photo checking result is qualified, the platform server processes the second face information into a plurality of commonly used certificate photos; the platform server encrypts various common certificate photographic sheets through a watermark encryption technology, stores the encrypted various common certificate photographic sheets in a database, and simultaneously converts the encrypted various common certificate photographic sheet information into a unique certificate two-dimensional code; after the platform server converts the encrypted various common certificate photographic information into a unique certificate two-dimensional code, the certificate two-dimensional code is sent to the government affair system intranet through a special transmission channel, so that the photo information security is ensured; and meanwhile, the checked certificate photo and the certificate two-dimension code are sent to the mobile terminal, and the user can store the electronic certificate photo into the photo album of the mobile terminal.
The platform server sends the portrait information to the government system intranet, the user photo information is completely kept secret in the whole transmission process, an unauthorized third party is prevented from acquiring any photo information of the user, and the user information safety is ensured.
The user takes a picture at the photographing equipment, and the platform server detects and processes the photo; after the photo is qualified, various commonly used certificate photos are automatically generated and converted into a unique license two-dimensional code, i.e. the license two-dimensional code contains the information of various commonly used certificate photos, realizing "shoot once, use many times". For example, when an identity card photo is taken, a license two-dimensional code is generated that covers commonly used certificate photos such as the resident identity card, motor vehicle driver's license, residence permit, passport, or Hong Kong/Macau travel permit. Within the photo validity period the user can reuse the photos in multiple places: by showing the license code, the photos can be used or the electronic photos downloaded without shooting again, which fully exerts the effect of the license code and improves efficiency for the user.
S305, receiving a calling instruction of the license code of the authorized third party, encrypting the certificate photo and transmitting the encrypted certificate photo to the authorized third party.
The user goes to the certificate handling hall to handle the certificate service, and the information interaction can be completed only by showing the certificate two-dimensional code correspondingly generated by the portrait information, and the paper photo or photo receipt handling service is not required to be carried. The business department scans the two-dimension code of the user license to obtain the unique two-dimension code information of the license, and obtains the corresponding portrait information in the government internal network system through the two-dimension code information to complete business transaction.
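The license-code indirection described above (the two-dimensional code carries only an opaque token, while the photo data itself never leaves the platform server and is released only to authorized third parties) can be sketched minimally as follows. The class name, method names and token format are illustrative assumptions, not the patent's actual implementation:

```python
import secrets

class LicenseCodeRegistry:
    """Minimal sketch of token indirection: the license QR code holds only
    an opaque token; the encrypted photo bundle stays server-side and is
    resolved only for authorized parties."""

    def __init__(self):
        self._store = {}          # token -> encrypted photo bundle
        self._authorized = set()  # ids of authorized third parties

    def register_photos(self, encrypted_bundle):
        token = secrets.token_urlsafe(16)   # this string is what the QR encodes
        self._store[token] = encrypted_bundle
        return token

    def authorize(self, party_id):
        self._authorized.add(party_id)

    def resolve(self, token, party_id):
        """Called when a business department scans the license code."""
        if party_id not in self._authorized:
            raise PermissionError("unauthorized third party")
        return self._store[token]

registry = LicenseCodeRegistry()
token = registry.register_photos(b"<encrypted id-photo bundle>")
registry.authorize("gov-hall-01")
```

The design point is that scanning the code yields no photo information by itself; only the server, checking authorization, maps the token back to the encrypted photos.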
The photo processing is mainly implemented by a portrait hair edge processing method, as shown in fig. 4, which includes the following steps:
S401, face information is acquired.
The face information obtained in this embodiment is the second face information obtained after the above-mentioned formal shooting, and in order to better perform hair edge processing, after obtaining the second face information, photo cropping and color adjustment are performed on the second face information.
S402, dividing the face information into a static area and a dynamic area.
The static area (mask) includes the fixed, incompressible parts: the head, body, clothes, and the relatively dense part of the hair. As shown in fig. 5, the left part is the original image (second face information) and the light part on the right is the static area. The moving area includes the relatively sparse part of the hair and the background; as shown in fig. 6, the middle portrait part is the static area and the outer background part is the moving area. How to divide the static area and the moving area is determined automatically by a mask image generated by a deep learning model, and is not described further here.
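The mask-driven split can be sketched minimally as follows (pure Python, grayscale image as nested lists; the function name and the None-padding convention are illustrative assumptions, not the patent's implementation):

```python
def split_by_mask(image, mask):
    """Split a grayscale image into static and moving layers using the
    segmentation mask: mask 1 = static area (head, body, clothes, dense
    hair), mask 0 = moving area (sparse hair, background).
    Pixels outside each layer are marked None."""
    static = [[px if m else None for px, m in zip(irow, mrow)]
              for irow, mrow in zip(image, mask)]
    moving = [[None if m else px for px, m in zip(irow, mrow)]
              for irow, mrow in zip(image, mask)]
    return static, moving

image = [[10, 20, 30],
         [40, 50, 60]]
mask  = [[0, 1, 0],
         [1, 1, 0]]
static, moving = split_by_mask(image, mask)
```

Only the moving layer is processed in the later steps; the static layer is copied through unchanged and recombined at the end.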
S403, calculating a distance function of the dead zone to obtain the approximate contraction direction of the hair.
Specifically, hair shrinkage must proceed from points far from the static area gradually toward the static area, i.e. the distance decreases. Therefore, the distance function of the static area is calculated to obtain the approximate contraction direction of the hair; the distance function is calculated with the OpenCV library and is shown in fig. 7. It comprises a first curve (the inner curve in the figure) and a second curve (the outer curve in the figure): the first curve represents the contour of the static area, and the second curve is the curve at a constant distance d from the first curve, with d chosen appropriately so that the second curve contains most of the hair. The annular area enclosed by the first curve and the second curve is the computation area: the static area is unchanged and only needs to be copied, and the moving area outside the second curve is mostly background that needs no processing, so the computation area is greatly reduced.
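As a rough illustration of the distance function and the annular computation area (the band between the first and second curves), the sketch below uses a simple BFS distance transform as a stand-in for cv2.distanceTransform; the tiny grid and the threshold d are illustrative:

```python
from collections import deque

def distance_to_static(static_mask):
    """4-neighbour BFS distance transform: 0 on the static area,
    increasing with distance from it (stand-in for cv2.distanceTransform)."""
    h, w = len(static_mask), len(static_mask[0])
    INF = float("inf")
    dist = [[0 if static_mask[y][x] else INF for x in range(w)] for y in range(h)]
    q = deque((x, y) for y in range(h) for x in range(w) if static_mask[y][x])
    while q:
        x, y = q.popleft()
        for nx, ny in ((x+1, y), (x-1, y), (x, y+1), (x, y-1)):
            if 0 <= nx < w and 0 <= ny < h and dist[ny][nx] > dist[y][x] + 1:
                dist[ny][nx] = dist[y][x] + 1
                q.append((nx, ny))
    return dist

# The hair contracts opposite to the distance gradient; the annular
# computation area is the band 0 < dist <= d between the two curves.
static = [[0]*5, [0, 1, 1, 1, 0], [0, 1, 1, 1, 0], [0]*5]
dist = distance_to_static(static)
d = 1
band = [(x, y) for y in range(4) for x in range(5) if 0 < dist[y][x] <= d]
```

Pixels with dist = 0 form the static area (first curve boundary); the level set dist = d plays the role of the second curve.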
S404, recalculating a distance function according to the annular region, defining the distances at the edge, inside and outside of the annular region, and carrying out finite element segmentation and finite element calculation on the annular region to obtain the displacement of each pixel point.
Wherein, the distance defined at the edge of the annular region is 0, the distance inside the annular region is negative, the distance outside the annular region is positive, and the recalculated distance function is shown in fig. 8, and the edge of the annular region is the first curve and the second curve.
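The signed-distance convention (0 on the edges of the annular region, negative inside, positive outside) can be illustrated on a single scan line; the helper below is an illustrative sketch, not the patent's implementation:

```python
def signed_distance_1d(n, band):
    """Signed distance for one scan line: 0 on the edge of the band
    (the first and second curves), negative strictly inside the band,
    positive outside it."""
    edge = {i for i in band if (i - 1) not in band or (i + 1) not in band}
    signed = []
    for i in range(n):
        d = min(abs(i - e) for e in edge)   # distance to the nearest edge cell
        signed.append(-d if i in band and i not in edge else d)
    return signed

# band = annular computation area restricted to one image row
print(signed_distance_1d(8, {2, 3, 4, 5}))   # [2, 1, 0, -1, -1, 0, 1, 2]
```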
The finite element segmentation of the annular region in this embodiment adopts a triangle segmentation method, and has the following requirements:
1) Each element is a triangle and is as close as possible to an equilateral triangle.
2) Smaller triangles are used where the contour is curved, so that the triangular mesh follows the actual edge as closely as possible, and larger triangles are used where the contour is flat, reducing the amount of calculation and improving speed.
3) Large and small triangles must transition smoothly: the vertex of any triangle cannot lie on the side of another triangle, and two adjacent triangles must share a common side of equal length; otherwise, the finite element equations cannot be solved.
To satisfy the above three conditions, a first-order nonlinear partial differential equation is used together with a Delaunay triangulation, which can be obtained directly from the OpenCV library.
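Requirement 3) above, that no triangle vertex may lie on another triangle's side (a "hanging node"), can be checked mechanically. The sketch below is an illustrative conformity test on a toy mesh; the vertex/triangle encoding is a hypothetical choice:

```python
def is_conforming(vertices, triangles):
    """Return False if any mesh vertex lies strictly inside an edge of a
    triangle (a hanging node), i.e. if adjacent triangles fail to share a
    complete, equal-length edge."""
    def on_open_segment(p, a, b, tol=1e-9):
        ax, ay = a; bx, by = b; px, py = p
        cross = (bx - ax)*(py - ay) - (by - ay)*(px - ax)
        if abs(cross) > tol:          # not collinear with the edge
            return False
        dot = (px - ax)*(bx - ax) + (py - ay)*(by - ay)
        return 0 < dot < (bx - ax)**2 + (by - ay)**2 and p not in (a, b)
    for tri in triangles:
        edges = [(vertices[tri[i]], vertices[tri[(i + 1) % 3]]) for i in range(3)]
        for a, b in edges:
            if any(on_open_segment(v, a, b) for v in vertices):
                return False
    return True

verts = [(0, 0), (2, 0), (1, 1), (1, 0)]   # (1, 0) sits on segment (0,0)-(2,0)
bad = [(0, 1, 2), (0, 3, 2)]               # hanging node: vertex 3 splits an edge
ok  = [(0, 3, 2), (3, 1, 2)]               # split propagated to both sides: conforming
```

A Delaunay mesher produces conforming meshes by construction; a check like this is only a sanity test.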
The first-order nonlinear partial differential equation, and how it is used to achieve a smooth transition, is described below. The equation is constructed as follows: define a closed curve of position and time, and define the curvature, moving direction, movement rate, velocity component in the x-axis direction, velocity component in the y-axis direction and rate function of the closed curve; then construct the first-order nonlinear partial differential equation from the closed curve and these quantities. The specific explanation is as follows:
A closed curve is defined as follows, with s the position parameter and t the time parameter:

C(s, t) = (x(s, t), y(s, t))    (1)

wherein C(s, t) represents a point on the curve, x and y represent coordinates, s is the position parameter of the curve, and t is the time parameter of the curve.

The curvature is as follows:

K(s, t) = (x_s·y_ss − y_s·x_ss) / (x_s^2 + y_s^2)^(3/2)    (2)

wherein K(s, t) represents curvature, which is a function of s and t; x_s represents the first partial derivative of x with respect to s, y_s the first partial derivative of y with respect to s, x_ss the second partial derivative of x with respect to s, and y_ss the second partial derivative of y with respect to s.

Defining the moving direction as the normal direction is as follows:

N(s, t) = (−y_s, x_s) / (x_s^2 + y_s^2)^(1/2)    (3)

wherein N(s, t) represents the unit normal vector at the point s on the curve at time t.

The movement rate gives:

C_t(s, t) = F(K)·N(s, t)    (4)

wherein F(K) represents the rate of movement, which is a function of the curvature K.

The velocity component in the x-axis direction and the velocity component in the y-axis direction are:

x_t = −F(K)·y_s / (x_s^2 + y_s^2)^(1/2)    (5)

y_t = F(K)·x_s / (x_s^2 + y_s^2)^(1/2)    (6)

wherein x_t represents the first partial derivative of x with respect to t, and y_t represents the first partial derivative of y with respect to t.

The rate function is defined as:

F(K) = 1 − εK    (7)

wherein ε is a constant greater than 0.
Combining formulas (1)-(7) gives the first-order nonlinear partial differential equation, which is solved on the annular area to obtain the size distribution function of the finite element triangles: the triangles are small where the curvature is large and large where the curvature is small, with a smooth transition between them, as shown in fig. 9 (the scale bar is on the right side of the figure).
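The link between the rate function F(K) = 1 − εK and the triangle size distribution can be illustrated by evaluating F along a discretized closed curve (here an ellipse), approximating the curvature K(s, t) with central differences. The sizing clamp h_min/h_max and the parameter values are illustrative assumptions:

```python
import math

def sizing_from_curvature(points, eps=0.3, h_max=1.0, h_min=0.1):
    """Evaluate F(K) = 1 - eps*K along a closed polygon and use it as a
    finite element sizing value: small triangles where curvature is large,
    large triangles where the contour is flat. Curvature is approximated
    by K = (x_s*y_ss - y_s*x_ss) / (x_s^2 + y_s^2)^(3/2) with central
    differences in the point index."""
    n = len(points)
    sizes = []
    for i in range(n):
        x0, y0 = points[i - 1]
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]
        xs, ys = (x2 - x0) / 2, (y2 - y0) / 2          # first differences
        xss, yss = x2 - 2*x1 + x0, y2 - 2*y1 + y0      # second differences
        k = (xs*yss - ys*xss) / math.hypot(xs, ys)**3
        sizes.append(max(h_min, min(h_max, h_max * (1 - eps * k))))
    return sizes

# Ellipse: curvature is largest at the ends of the long axis,
# so the sizing value (triangle size) is smallest there.
n = 40
ellipse = [(2*math.cos(2*math.pi*i/n), math.sin(2*math.pi*i/n)) for i in range(n)]
sizes = sizing_from_curvature(ellipse)
```

For this ellipse the true curvature is 2 at the ends of the major axis (index 0) and 0.25 at the ends of the minor axis (index 10), so the computed size is markedly smaller at index 0.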
According to the size distribution function of the finite element triangles and hyperstatic structure calculation in structural mechanics, a triangular mesh is obtained by solving a partial differential equation of structural mechanics; in shape, the mesh resembles the steel frame structure of the Haizhu Bridge, as shown in fig. 10.
In the triangular mesh region, the partial differential equation of fluid mechanics is solved to obtain the displacement of each triangular mesh node, and the displacement of each pixel point is then calculated by triangular interpolation, as shown in fig. 11.
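Triangular (barycentric) interpolation from the three vertex displacements of one triangle to an interior pixel can be sketched as follows; the triangle coordinates and displacement values are illustrative:

```python
def barycentric_displacement(p, tri, disp):
    """Interpolate per-vertex displacements of one triangle to a pixel p
    by barycentric coordinates, turning the per-node finite element
    solution into a per-pixel displacement."""
    (x1, y1), (x2, y2), (x3, y3) = tri
    px, py = p
    det = (y2 - y3)*(x1 - x3) + (x3 - x2)*(y1 - y3)
    w1 = ((y2 - y3)*(px - x3) + (x3 - x2)*(py - y3)) / det
    w2 = ((y3 - y1)*(px - x3) + (x1 - x3)*(py - y3)) / det
    w3 = 1 - w1 - w2
    dx = w1*disp[0][0] + w2*disp[1][0] + w3*disp[2][0]
    dy = w1*disp[0][1] + w2*disp[1][1] + w3*disp[2][1]
    return dx, dy

tri  = [(0, 0), (4, 0), (0, 4)]
disp = [(0, 0), (2, 0), (0, 2)]   # node displacements from the FE solve
# The centroid receives the average of the three vertex displacements.
print(barycentric_displacement((4/3, 4/3), tri, disp))
```

At a vertex the interpolation reproduces that vertex's displacement exactly, so the per-pixel field is continuous across triangle edges.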
And S405, changing the face information according to the displacement, and combining the changed moving area with the unchanged static area to obtain the changed face information.
According to the displacement amount of fig. 11, the original image (second face information) is changed, and the changed moving area and the unchanged static area are combined to obtain changed second face information, and the final effect image is shown on the right side of fig. 12, and the left side of fig. 12 is the original image.
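Applying the displacement field and recombining with the static area can be sketched as a backward-mapping warp; nearest-neighbour sampling and the tiny grid below are illustrative simplifications of the actual processing:

```python
def warp_and_composite(image, static_mask, disp):
    """Backward-map the moving area through the per-pixel displacement
    field and paste the unchanged static area on top (nearest-neighbour,
    minimal sketch)."""
    h, w = len(image), len(image[0])
    out = [[0]*w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if static_mask[y][x]:
                out[y][x] = image[y][x]          # static area: copy through
            else:
                dx, dy = disp[y][x]              # moving area: sample source pixel
                sx = min(max(int(round(x + dx)), 0), w - 1)
                sy = min(max(int(round(y + dy)), 0), h - 1)
                out[y][x] = image[sy][sx]
    return out

image = [[1, 2, 3],
         [4, 5, 6]]
static = [[0, 1, 0],
          [0, 1, 0]]
disp = [[(1, 0)]*3, [(1, 0)]*3]   # moving pixels pull colour from one pixel right
out = warp_and_composite(image, static, disp)
```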
The embodiment can also perform background color conversion after finishing portrait hair edge processing.
It should be noted that while the method operations of the above embodiments are described in a particular order, this does not require or imply that the operations must be performed in that particular order or that all of the illustrated operations be performed in order to achieve desirable results. Rather, the depicted steps may change the order of execution. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step to perform, and/or one step decomposed into multiple steps to perform.
Example 2:
as shown in fig. 13, the present embodiment provides a portrait hair edge processing device, which includes an acquisition module 1301, a dividing module 1302, a first calculation module 1303, a second calculation module 1304 and a combination module 1305; the respective modules are described as follows:
an acquiring module 1301 is configured to acquire face information.
The dividing module 1302 is configured to divide the face information into a dead zone and a moving zone, where the dead zone includes a relatively dense portion of the head, the body, the clothing and the hair, and the moving zone includes a relatively sparse portion of the hair and a background.
The first calculating module 1303 is configured to calculate a distance function of the dead zone to obtain a rough contraction direction of the hair, where the distance function includes a first curve and a second curve, the first curve represents a contour of the dead zone, and the second curve represents a curve equidistant from the first curve, so that the second curve includes a majority of the hair, and an annular area enclosed by the first curve and the second curve is a calculating area.
The second calculation module 1304 is configured to recalculate the distance function according to the annular region, and define the distances at the edge, inside and outside of the annular region, and perform finite element segmentation and finite element calculation on the annular region to obtain the displacement amount of each pixel point.
The combination module 1305 is configured to change the face information according to the displacement, and combine the changed moving area with the unchanged static area to obtain the changed face information.
Specific implementation of the above modules can be seen in embodiment 1 above; it should be noted that, the apparatus provided in this embodiment is only exemplified by the division of the above functional modules, and in practical application, the above functional allocation may be performed by different functional modules according to needs, that is, the internal structure is divided into different functional modules, so as to perform all or part of the functions described above.
Example 3:
the present embodiment provides a computer apparatus, as shown in fig. 14, which includes a processor 1402, a memory, an input device 1403, a display 1404 and a network interface 1405 connected through a system bus 1401, where the processor is used to provide computing and control capabilities, the memory includes a nonvolatile storage medium 1406 and an internal memory 1407, the nonvolatile storage medium 1406 stores an operating system, a computer program and a database, the internal memory 1407 provides an environment for the operating system and the computer program in the nonvolatile storage medium, and when the processor 1402 executes the computer program stored in the memory, the portrait hair edge processing method of the above embodiment 1 is implemented as follows:
Acquiring face information;
dividing face information into a static area and a dynamic area, wherein the static area comprises a part with relatively dense human head, human body, clothes and hair, and the dynamic area comprises a part with relatively sparse hair and a background;
calculating a distance function of the dead zone to obtain a rough contraction direction of the hair, wherein the distance function comprises a first curve and a second curve, the first curve represents the outline of the dead zone, the second curve represents the curve equidistant to the first curve, so that the second curve contains most of the hair, and an annular area surrounded by the first curve and the second curve is a calculation area;
according to the annular region, recalculating a distance function, defining the distances at the edge, inside and outside of the annular region, and carrying out finite element segmentation and finite element calculation on the annular region to obtain the displacement of each pixel point;
and changing the face information according to the displacement, and combining the changed moving area with the unchanged static area to obtain the changed face information.
Example 4:
the present embodiment provides a storage medium, which is a computer-readable storage medium storing a computer program, and when the computer program is executed by a processor, implements the portrait hair edge processing method of the above embodiment 1, as follows:
Acquiring face information;
dividing face information into a static area and a dynamic area, wherein the static area comprises a part with relatively dense human head, human body, clothes and hair, and the dynamic area comprises a part with relatively sparse hair and a background;
calculating a distance function of the dead zone to obtain a rough contraction direction of the hair, wherein the distance function comprises a first curve and a second curve, the first curve represents the outline of the dead zone, the second curve represents the curve equidistant to the first curve, so that the second curve contains most of the hair, and an annular area surrounded by the first curve and the second curve is a calculation area;
according to the annular region, recalculating a distance function, defining the distances at the edge, inside and outside of the annular region, and carrying out finite element segmentation and finite element calculation on the annular region to obtain the displacement of each pixel point;
and changing the face information according to the displacement, and combining the changed moving area with the unchanged static area to obtain the changed face information.
The computer readable storage medium of the above embodiment may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
In this embodiment, the computer-readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. A computer-readable signal medium, by contrast, may include a data signal propagated in baseband or as part of a carrier wave, with computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electromagnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, fiber optic cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
The computer program code for carrying out the present embodiments may be written in one or more programming languages, or combinations thereof, including object-oriented programming languages such as Java, Python and C++, as well as conventional procedural programming languages such as the C language or similar programming languages. The program may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
In summary, the invention divides the face information into a static area and a moving area, calculates the distance function of the static area to obtain the approximate contraction direction of the hair, recalculates the distance function over the annular area formed by the two curves of the distance function, and performs finite element segmentation and finite element calculation on the annular area to obtain the displacement of each pixel point; the face information is then changed accordingly, and the changed moving area is combined with the unchanged static area to obtain the changed face information. This automatically compresses and gathers the fluffy, scattered hair in the portrait, avoiding audit failure due to fluffy hair in the photo, while keeping the processing effect natural and free of distortion.
The above description covers only preferred embodiments of the present invention, but the protection scope of the present invention is not limited thereto; any equivalent substitution or change of the technical solution and the inventive concept of the present invention made by a person skilled in the art within the scope of the disclosure shall fall within the protection scope of the present invention.

Claims (9)

1. A portrait hair edge processing method, characterized in that the method comprises:
Acquiring face information;
dividing face information into a static area and a dynamic area, wherein the static area comprises a part with relatively dense human head, human body, clothes and hair, and the dynamic area comprises a part with relatively sparse hair and a background;
calculating a distance function of a dead zone to obtain a contraction direction of hair, wherein the distance function comprises a first curve and a second curve, the first curve represents the outline of the dead zone, the second curve represents the curve equidistant to the first curve, so that the second curve contains most of hair, and an annular area surrounded by the first curve and the second curve is a calculation area;
according to the annular region, recalculating a distance function, defining that the distance at the edge of the annular region is 0, the distance inside the annular region is negative, the distance outside the annular region is positive, and carrying out finite element segmentation and finite element calculation on the annular region to obtain the displacement of each pixel point;
and changing the face information according to the displacement, and combining the changed moving area with the unchanged static area to obtain the changed face information.
2. The method for processing the edge of portrait hair according to claim 1, wherein the performing finite element segmentation and finite element calculation on the annular region to obtain the displacement of each pixel point includes:
Adopting a triangle segmentation method to segment the annular region by finite elements to obtain a triangle net;
and carrying out finite element calculation on the triangular network area to obtain the displacement of each pixel point.
3. The method for processing the edge of the portrait hair according to claim 2, wherein the triangle segmentation method is used for carrying out finite element segmentation on the annular region to obtain a triangle mesh, and the method comprises the following steps:
solving on an annular area by using a first-order nonlinear partial differential equation to obtain a size distribution function of a finite element triangle;
and obtaining the triangular net by solving a partial differential equation of the structural mechanics according to the size distribution function of the finite element triangle and the hyperstatic structural calculation in the structural mechanics.
4. A portrait hair edge processing method according to claim 3 wherein the first order nonlinear partial differential equation is constructed as follows:
defining a closed curve of position and time, and defining curvature, moving direction, moving speed, speed component of x-axis direction, speed component of y-axis direction and speed function of the closed curve;
and constructing a first-order nonlinear partial differential equation according to the closed curve, and curvature, moving direction, moving speed, speed component in x-axis direction, speed component in y-axis direction and speed function of the closed curve.
5. The portrait hair edge processing method according to claim 4 wherein said defining a closed curve of position versus time, and defining curvature, movement direction, movement rate, speed component in x-axis direction, speed component in y-axis direction, and speed function of said closed curve includes:
defining a closed curve of position and time, as follows:

C(s, t) = (x(s, t), y(s, t))

wherein C(s, t) represents a point on the curve, x and y represent coordinates, s is the position parameter of the curve, and t is the time parameter of the curve;

defining a curvature of the closed curve as follows:

K(s, t) = (x_s·y_ss − y_s·x_ss) / (x_s^2 + y_s^2)^(3/2)

wherein K(s, t) represents curvature, x_s represents the first partial derivative of x with respect to s, y_s the first partial derivative of y with respect to s, x_ss the second partial derivative of x with respect to s, and y_ss the second partial derivative of y with respect to s;

defining the moving direction of the closed curve as the normal direction, as follows:

N(s, t) = (−y_s, x_s) / (x_s^2 + y_s^2)^(1/2)

wherein N(s, t) represents the normal vector at the point s on the curve at time t;

defining a movement rate of the closed curve as follows:

C_t(s, t) = F(K)·N(s, t)

wherein F(K) represents the movement rate;

defining a speed component in the x-axis direction and a speed component in the y-axis direction of the closed curve, as follows:

x_t = −F(K)·y_s / (x_s^2 + y_s^2)^(1/2)

y_t = F(K)·x_s / (x_s^2 + y_s^2)^(1/2)

wherein x_t represents the first partial derivative of x with respect to t, and y_t represents the first partial derivative of y with respect to t;

defining a rate function of the closed curve as follows:

F(K) = 1 − εK

wherein ε is a constant greater than 0.
6. The method for processing the edges of the portrait hair according to claim 2, wherein the finite element calculation is performed on the triangle area to obtain the displacement of each pixel, including:
in the triangular mesh area, solving a partial differential equation of fluid mechanics to obtain displacement quantity of a corresponding triangular mesh, and calculating the displacement quantity of each pixel point through triangular interpolation.
7. A portrait hair edge processing device, said device comprising:
the acquisition module is used for acquiring the face information;
the dividing module is used for dividing the face information into a static area and a dynamic area, wherein the static area comprises a part with relatively dense human head, human body, clothes and hair, and the dynamic area comprises a part with relatively sparse hair and a background;
the first calculation module is used for calculating a distance function of the dead zone to obtain the contraction direction of the hair, the distance function comprises a first curve and a second curve, the first curve represents the outline of the dead zone, the second curve represents the curve equidistant to the first curve, so that the second curve contains most of the hair, and an annular area surrounded by the first curve and the second curve is a calculation area;
The second calculation module is used for recalculating a distance function according to the annular area, defining the distance at the edge of the annular area as 0, defining the distance inside the annular area as negative, defining the distance outside the annular area as positive, and carrying out finite element segmentation and finite element calculation on the annular area to obtain the displacement of each pixel point;
and the combination module is used for changing the face information according to the displacement, and combining the changed moving area with the unchanged static area to obtain the changed face information.
8. A computer device comprising a processor and a memory for storing a program executable by the processor, wherein the processor, when executing the program stored in the memory, implements the portrait hair edge processing method according to any one of claims 1 to 6.
9. A storage medium storing a program which, when executed by a processor, implements the portrait hair edge processing method according to any one of claims 1 to 6.
CN202310332813.XA 2023-03-31 2023-03-31 Portrait hair edge processing method and device, computer equipment and storage medium Active CN116503924B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310332813.XA CN116503924B (en) 2023-03-31 2023-03-31 Portrait hair edge processing method and device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN116503924A CN116503924A (en) 2023-07-28
CN116503924B true CN116503924B (en) 2024-01-26

Family

ID=87325664

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107924579A (en) * 2015-08-14 2018-04-17 麦特尔有限公司 The method for generating personalization 3D head models or 3D body models
CN109345486A (en) * 2018-10-24 2019-02-15 中科天网(广东)科技有限公司 A kind of facial image deblurring method based on adaptive mesh deformation
CN110021000A (en) * 2019-05-06 2019-07-16 厦门欢乐逛科技股份有限公司 Hair line restorative procedure and device based on figure layer deformation
CN112734633A (en) * 2021-01-07 2021-04-30 京东方科技集团股份有限公司 Virtual hair style replacing method, electronic equipment and storage medium
CN114049269A (en) * 2021-11-05 2022-02-15 Oppo广东移动通信有限公司 Image correction method and device and electronic equipment

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109408653B (en) * 2018-09-30 2022-01-28 叠境数字科技(上海)有限公司 Human body hairstyle generation method based on multi-feature retrieval and deformation

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on Hair Feature Extraction Methods for Gender Classification; Xie Jinrong et al.; Computer Engineering; Vol. 36, No. 7, pp. 179-184 *

Similar Documents

Publication Publication Date Title
CN109416727B (en) Method and device for removing glasses in face image
CN108470169A (en) Face identification system and method
CN108171032A (en) A kind of identity identifying method, electronic device and computer readable storage medium
CN108447017A (en) Face virtual face-lifting method and device
CN109978754A (en) Image processing method, device, storage medium and electronic equipment
CN108876833A (en) Image processing method, image processing apparatus and computer readable storage medium
CN107563304A (en) Unlocking terminal equipment method and device, terminal device
TW202026948A (en) Methods and devices for biological testing and storage medium thereof
JP2021517303A (en) Remote user identity verification with threshold-based matching
CN110827371B (en) Certificate generation method and device, electronic equipment and storage medium
EP3905104B1 (en) Living body detection method and device
WO2020164266A1 (en) Living body detection method and system, and terminal device
CN110188670A (en) Face image processing process, device in a kind of iris recognition and calculate equipment
CN108280919A (en) The testimony of a witness veritifies speed passage through customs gate and its control method
CN116074618B (en) Intelligent photographing service method, system and storage medium for preventing portrait information leakage
CN111814564A (en) Multispectral image-based living body detection method, device, equipment and storage medium
CN109492601A (en) Face comparison method and device, computer-readable medium and electronic equipment
CN115147261A (en) Image processing method, device, storage medium, equipment and product
CN112507986B (en) Multi-channel human face in-vivo detection method and device based on neural network
CN116503924B (en) Portrait hair edge processing method and device, computer equipment and storage medium
CN105898140A (en) Information processing method and device
CN114881893B (en) Image processing method, device, equipment and computer readable storage medium
CN112115747A (en) Living body detection and data processing method, device, system and storage medium
CN112598576B (en) Safety verification method and system based on face recognition
CN114140839A (en) Image sending method, device and equipment for face recognition and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant