CN112257552B - Image processing method, apparatus, device and storage medium


Info

Publication number
CN112257552B
CN112257552B (application CN202011120781.XA)
Authority
CN
China
Prior art keywords
image
information
target
face
model
Prior art date
Legal status
Active
Application number
CN202011120781.XA
Other languages
Chinese (zh)
Other versions
CN112257552A (en)
Inventor
邵和明
虢勇
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority claimed from CN202011120781.XA
Publication of CN112257552A
Application granted
Publication of CN112257552B

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161: Detection; Localisation; Normalisation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00: Image enhancement or restoration
    • G06T5/77: Retouching; Inpainting; Scratch removal
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168: Feature extraction; Face representation
    • G06V40/171: Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/30: Subject of image; Context of image processing
    • G06T2207/30196: Human being; Person
    • G06T2207/30201: Face
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Processing Or Creating Images (AREA)
  • Image Processing (AREA)

Abstract

The application discloses an image processing method, apparatus, device, and storage medium, belonging to the technical field of artificial intelligence. In the embodiments of the application, an unoccluded three-dimensional face model serves as a source of complete face information. When the face in a first image is occluded, the complete face information is synthesized with the first image to remove the occluded area and obtain an unoccluded second image. By eliminating the occlusion and restoring the complete face, the accuracy of face recognition on the second image is effectively improved. In mask-wearing scenarios, a complete face image can be synthesized without taking off the mask, which improves convenience and safety; in some shooting scenarios, an image containing the complete face can likewise be obtained through synthesis.

Description

Image processing method, apparatus, device and storage medium
Technical Field
The present application relates to the field of artificial intelligence, and in particular to an image processing method, apparatus, device, and storage medium.
Background
With the development of artificial intelligence technology, having devices perform computation automatically to save labor costs has become a trend. For example, in image processing scenarios, images are processed automatically by a device using artificial intelligence techniques to obtain a desired image.
In some scenes, the face may be occluded during shooting, so the face in the captured image is incomplete. In scenarios such as payment or attendance checking, face recognition may then fail to complete; in some shooting scenarios, an image containing the complete face cannot be captured at all. For example, people sometimes need to wear a mask when going out, and on many photographing occasions the whole face cannot be captured with the mask on, so face recognition fails; the mask may then need to be taken off, which harms both safety and convenience. An image processing method is therefore needed that removes the occluded portion of a captured image and restores the complete face.
Disclosure of Invention
The embodiments of the present application provide an image processing method, apparatus, device, and storage medium, offering a novel image processing approach that can eliminate occlusion in an image and improve the accuracy of face recognition. The technical scheme is as follows:
in one aspect, there is provided an image processing method, the method including:
acquiring a first image of a target;
acquiring a three-dimensional face model of the target, wherein no occlusion exists in the three-dimensional face model;
recognizing the target in the first image according to the skin color information of the target in the three-dimensional face model, to obtain an occluded area of the target in the first image;
and synthesizing the region of the three-dimensional face model corresponding to the occluded area with the first image, to obtain a second image, wherein no occlusion exists in the second image.
In one possible implementation manner, the acquiring of the three-dimensional face model of the target includes:
receiving an image processing instruction for the first image;
and acquiring the three-dimensional face model of the target in response to the image processing instruction.
In one possible implementation, the image processing method is applied to a terminal, or the image processing method is applied to a server.
In one aspect, there is provided an image processing apparatus including:
the image acquisition module is configured to acquire a first image of the target;
the model acquisition module is configured to acquire a three-dimensional face model of the target, wherein no occlusion exists in the three-dimensional face model;
the recognition module is configured to recognize the target in the first image according to the skin color information of the target in the three-dimensional face model, to obtain an occluded area of the target in the first image;
and the synthesis module is configured to synthesize the first image with the region of the three-dimensional face model corresponding to the occluded area, to obtain a second image, wherein no occlusion exists in the second image.
In one possible implementation, the synthesis module includes a determination unit and a synthesis unit;
the determining unit is configured to determine a region model corresponding to the occluded area according to the region of the three-dimensional face model corresponding to the occluded area;
the synthesis unit is configured to synthesize the first image with the region model to obtain the second image.
In a possible implementation manner, the synthesis unit is configured to replace the occluded area in the first image with the region model to obtain the second image.
In one possible implementation, the synthesis unit is configured to:
replace the image information of the occluded area in the first image with the image information of the region model to obtain a third image;
and update the image information of the region model in the third image according to the illumination information of the occluded area in the first image to obtain the second image.
In a possible implementation manner, the synthesis unit is further configured to smooth the edges of the region model in the third image.
In one possible implementation, the model acquisition module is configured to perform any one of the following:
performing identity recognition on the target in the first image to obtain identity information of the target, and extracting, based on the identity information of the target, a three-dimensional face model corresponding to the identity information from a three-dimensional face model database as the three-dimensional face model of the target;
acquiring a preset three-dimensional face model as the three-dimensional face model of the target;
acquiring a three-dimensional face model of the target corresponding to device information of the device that captured the first image.
In one possible implementation manner, the three-dimensional face model of the target is obtained based on the following process:
acquiring face information of the target;
and generating a three-dimensional face model of the target based on the face information.
In one possible implementation manner, the generating of the three-dimensional face model of the target based on the face information includes:
preprocessing the face information;
and performing three-dimensional modeling based on the preprocessed face information to obtain the unoccluded three-dimensional face model of the target.
In one possible implementation manner, the preprocessed face information includes depth information and skin color information of each position of the face;
the performing of three-dimensional modeling based on the preprocessed face information to obtain the three-dimensional face model of the target includes:
updating the depth information of the corresponding positions in a template three-dimensional face model based on the depth information of each position in the preprocessed face information, to obtain the shape of the three-dimensional face model of the target;
and determining the skin color information of each position of the face as the skin color information of the corresponding position in the three-dimensional face model of the target.
In one possible implementation manner, the preprocessing of the face information includes at least one of the following:
performing normalization processing on the face information;
performing smoothing processing on the face information;
performing outlier processing on the face information;
and restoring missing values in the face information according to the face information.
In one possible implementation manner, the acquiring of the face information of the target includes any one of the following:
recording the target from different angles in response to a face information acquisition instruction to obtain a target video, and extracting the face information of the target from multiple frames of the target video;
capturing the target from different angles in response to a face information acquisition instruction to obtain a plurality of images, and extracting the face information of the target from the plurality of images.
In one possible implementation, the image acquisition module is configured to perform any one of the following:
capturing the target with a camera component to obtain the first image;
extracting one frame at a time from a video of the target as the first image;
extracting one frame at a time from a media stream of the target as the first image.
In one aspect, an electronic device is provided that includes one or more processors and one or more memories having stored therein at least one piece of program code that is loaded and executed by the one or more processors to implement various alternative implementations of the above-described image processing methods.
In one aspect, a computer readable storage medium having stored therein at least one program code loaded and executed by a processor to implement various alternative implementations of the image processing method described above is provided.
In one aspect, a computer program product or computer program is provided. The computer program product or computer program comprises one or more program codes stored in a computer-readable storage medium. One or more processors of an electronic device can read the one or more program codes from the computer-readable storage medium and execute them, so that the electronic device can perform the image processing method of any of the possible embodiments described above.
According to the embodiments of the present application, an unoccluded three-dimensional face model serves as a source of complete face information, so that when the first image is occluded, the complete face information is synthesized with the first image to remove the occluded area and obtain an unoccluded second image. By eliminating the occlusion and restoring the complete face, the accuracy of face recognition on the second image is effectively improved; in mask-wearing scenes, a complete face image can be synthesized without taking off the mask, improving convenience and safety, and in some shooting scenes an image containing the complete face can likewise be obtained through synthesis.
Drawings
In order to illustrate the technical solutions of the embodiments of the present application more clearly, the drawings required in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application, and a person skilled in the art can derive other drawings from them without inventive effort.
Fig. 1 is a schematic diagram of an implementation environment of an image processing method provided in an embodiment of the present application;
Fig. 2 is a flowchart of an application scenario of an image processing method provided in an embodiment of the present application;
Fig. 3 is a flowchart of an application scenario of an image processing method provided in an embodiment of the present application;
Fig. 4 is a flowchart of an image processing method provided in an embodiment of the present application;
Fig. 5 is a flowchart of an image processing method provided in an embodiment of the present application;
Fig. 6 is a schematic diagram of an image processing effect provided in an embodiment of the present application;
Fig. 7 is a schematic diagram of an image processing effect provided in an embodiment of the present application;
Fig. 8 is a flowchart of an image processing method provided in an embodiment of the present application;
Fig. 9 is a flowchart of an image processing method provided in an embodiment of the present application;
Fig. 10 is a flowchart of an image processing method provided in an embodiment of the present application;
Fig. 11 is a flowchart of an image processing method provided in an embodiment of the present application;
Fig. 12 is a schematic structural diagram of an image processing apparatus provided in an embodiment of the present application;
Fig. 13 is a structural block diagram of a terminal provided in an embodiment of the present application;
Fig. 14 is a schematic structural diagram of a server provided in an embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the embodiments of the present application will be described in further detail with reference to the accompanying drawings.
The terms "first," "second," and the like in this disclosure are used for distinguishing between similar elements or items having substantially the same function and function, and it should be understood that there is no logical or chronological dependency between the terms "first," "second," and "n," and that there is no limitation on the amount and order of execution. It will be further understood that, although the following description uses the terms first, second, etc. to describe various elements, these elements should not be limited by the terms. These terms are only used to distinguish one element from another element. For example, a first image can be referred to as a second image, and similarly, a second image can be referred to as a first image, without departing from the scope of the various examples. The first image and the second image can both be images, and in some cases, can be separate and distinct images.
The term "at least one" in the present application means one or more, and the term "plurality" in the present application means two or more, for example, a plurality of data packets means two or more data packets.
It should be understood that the terminology used in the description of the various examples herein is for the purpose of describing particular examples only and is not intended to be limiting. As used in the description of various examples and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It will also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items. The term "and/or" is an association relationship describing an associated object, meaning that three relationships can exist, e.g., a and/or B, can be represented: a exists alone, A and B exist together, and B exists alone. In the present application, the character "/" generally indicates that the front and rear related objects are an or relationship.
It should also be understood that, in the embodiments of the present application, the sequence number of each process does not mean that the execution sequence of each process should be determined by the function and the internal logic, and should not limit the implementation process of the embodiments of the present application.
It should also be understood that determining B from A does not mean determining B from A alone; B can also be determined from A and/or other information.
It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "if" may be interpreted to mean "when," "while," "upon," "in response to determining," or "in response to detecting." Similarly, the phrase "if a [stated condition or event] is detected" may be interpreted to mean "upon determining," "in response to determining," "upon detecting [the stated condition or event]," or "in response to detecting [the stated condition or event]," depending on the context.
Artificial intelligence (AI) is the theory, method, technique, and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate, extend, and expand human intelligence, perceive the environment, acquire knowledge, and use the knowledge to obtain optimal results. In other words, artificial intelligence is a comprehensive technology of computer science that attempts to understand the essence of intelligence and produce a new kind of intelligent machine that can react in a way similar to human intelligence. Artificial intelligence studies the design principles and implementation methods of various intelligent machines, so that the machines have the functions of perception, reasoning, and decision-making.
Artificial intelligence technology is a comprehensive discipline covering a wide range of fields, including both hardware-level and software-level technologies. Basic artificial intelligence technologies generally include sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing technologies, operation/interaction systems, and mechatronics. Artificial intelligence software technologies mainly include computer vision, speech processing, natural language processing, and machine learning/deep learning.
Computer vision (CV) is a science that studies how to make a machine "see"; more specifically, it replaces human eyes with cameras and computers to recognize, track, and measure targets, and further performs graphics processing so that the computer produces an image more suitable for human eyes to observe or for transmission to an instrument for detection. As a scientific discipline, computer vision studies related theories and technologies in an attempt to build artificial intelligence systems that can acquire information from images or multidimensional data. Computer vision techniques typically include image processing, image recognition, image semantic understanding, image retrieval, OCR, video processing, video semantic understanding, video content/behavior recognition, three-dimensional object reconstruction, 3D techniques, virtual reality, augmented reality, and simultaneous localization and mapping, as well as common biometric technologies such as face recognition and fingerprint recognition.
With the research and advancement of artificial intelligence technology, it has been studied and applied in many fields, such as smart homes, smart wearable devices, virtual assistants, smart speakers, smart marketing, unmanned driving, autonomous driving, unmanned aerial vehicles, robots, smart medical care, and smart customer service. It is believed that, with the development of technology, artificial intelligence will be applied in ever more fields and show increasing value.
The solution provided in the embodiments of the present application relates to technologies such as image processing, video processing, and image recognition within the computer vision branch of artificial intelligence, and is specifically described through the following embodiments.
The environment in which the present application is implemented is described below.
Fig. 1 is a schematic diagram of an implementation environment of an image processing method according to an embodiment of the present application. The implementation environment includes a terminal 101 or the implementation environment includes a terminal 101 and an image processing platform 102. The terminal 101 is connected to the image processing platform 102 via a wireless network or a wired network.
The terminal 101 can be at least one of a smartphone, a game console, a desktop computer, a tablet computer, an e-book reader, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a laptop portable computer, a smart robot, or a self-service payment device. An application program supporting image processing is installed and run on the terminal 101; the application can be, for example, a system application, an instant messaging application, a news push application, a shopping application, an online video application, or a social application.
The terminal 101 can have an image capturing function and an image processing function; for example, it can process a captured image and execute a corresponding function according to the processing result. The terminal 101 can perform this work independently, or an image processing service can be provided to it by the image processing platform 102. The embodiment of the present application is not limited in this respect.
The image processing platform 102 includes at least one of a server, a plurality of servers, a cloud computing platform, and a virtualization center. The image processing platform 102 is used to provide background services for applications that support image processing. Optionally, the image processing platform 102 takes on primary processing work and the terminal 101 takes on secondary processing work; alternatively, the image processing platform 102 performs a secondary processing job, and the terminal 101 performs a primary processing job; alternatively, the image processing platform 102 or the terminal 101 can each independently undertake processing work. Alternatively, the image processing platform 102 and the terminal 101 perform collaborative computing by using a distributed computing architecture.
Optionally, the image processing platform 102 includes at least one server 1021 and a database 1022, where the database 1022 is used to store data, and in an embodiment of the present application, the database 1022 can store a three-dimensional face model, so as to provide a data service for the at least one server 1021.
The server may be an independent physical server, a server cluster or distributed system composed of multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDN, big data, and artificial intelligence platforms. The terminal can be, but is not limited to, a smartphone, a tablet computer, a notebook computer, a desktop computer, a smart speaker, or a smart watch.
Those skilled in the art will appreciate that the number of terminals 101 and servers 1021 can be greater or fewer. For example, the number of the terminals 101 and the servers 1021 can be only one, or the number of the terminals 101 and the servers 1021 can be tens or hundreds, or more, and the number and the device type of the terminals or the servers are not limited in the embodiment of the present application.
The image processing method provided by the embodiment of the application can be applied to any image processing scene, for example, the image processing method can be applied to face recognition payment scenes. For another example, the image processing method can be applied to face recognition attendance scenes. As another example, the image processing method may be applied to a photographing scene or the like. The embodiment of the present application is not limited thereto.
As shown in fig. 2, in a face recognition payment scenario, a user 201 can capture a first image 203 through a terminal 202. Since the user 201 wears a mask, a mask covers the mouth and nose of the face in the first image 203. When performing face recognition on the first image 203, the terminal 202 can eliminate the occluded area in the first image 203 to obtain a second image 204 containing the whole face, then perform face recognition based on the second image 204 and complete the payment based on the recognition result.
As shown in fig. 3, in a photographing scenario, a user 301 captures a first image 303 through a terminal 302. Since the user 301 has a band-aid on the face, the band-aid appears on the face in the captured first image 303. If the user 301 wants to remove the band-aid, an image processing operation can be performed, and the terminal 302 removes the band-aid from the first image 303 to obtain a second image 304.
Fig. 4 is a flowchart of an image processing method according to an embodiment of the present application. The method is applied to an electronic device, which is a terminal or a server. Referring to fig. 4, the method includes the following steps.
401. The electronic device obtains a first image of a target.
In the embodiment of the application, the electronic device has a de-occlusion function: it can process an occluded image, remove the occluded area, and restore the whole face. In step 401, the face of the target in the first image may be occluded. The electronic device can process the first image, remove the occluder, and restore the face.
402. The electronic device acquires a three-dimensional face model of the target, in which no occlusion exists.
The three-dimensional face model contains the complete face information of the target; face information for any position on the target's face can be obtained from it. The electronic device can supplement the face information of the occluded area in the first image based on the three-dimensional face model and restore the face, thereby obtaining a complete face image.
The electronic device can collect the target's face information in advance to build the three-dimensional face model, and then, whenever image processing is needed, fetch the model to fill in the face information of the occluded area in the first image, so that a second image containing all the face information is obtained.
403. The electronic device recognizes the target in the first image according to the skin color information of the target in the three-dimensional face model, to obtain the occluded area of the target in the first image.
The three-dimensional face model is the standard face of the target. After the electronic device obtains the three-dimensional face model, it can compare the model's skin color information with the image information on the target's face in the first image to locate the occluded area, as detailed in step 504 below.
404. The electronic device synthesizes the first image with the region of the three-dimensional face model corresponding to the occluded area, to obtain a second image.
The electronic device synthesizes the region of the three-dimensional face model corresponding to the occluded area with the first image, so that the corresponding region of the model replaces the occluded area in the first image. The face information in the second image is therefore complete and the occluded area has been removed; performing face recognition on the second image yields higher recognition accuracy. In a mask-wearing scene, a complete face image can be synthesized without taking off the mask, improving convenience and safety, as illustrated by the toy sketch below.
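As an illustration of the shape of steps 401 to 404 only, and not the patented implementation, the following Python sketch detects an occluded area by skin-color deviation and composites a pre-rendered model region over it; the flat skin tone, the tolerance value, and the toy images are assumptions made for the example.

    import numpy as np

    def detect_occlusion(image, skin_tone, tol=40.0):
        # Step 403: flag pixels whose color departs from the model's skin tone.
        dist = np.linalg.norm(image.astype(np.float32) - skin_tone, axis=-1)
        return dist > tol  # boolean mask of the occluded area

    def synthesize(image, model_region, mask):
        # Step 404: replace occluded pixels with the rendered model region.
        return np.where(mask[..., None], model_region, image)

    # Toy usage: a flat skin-toned "face" whose lower half is covered by a gray occluder.
    skin = np.array((200.0, 160.0, 140.0), np.float32)
    face = np.full((8, 8, 3), skin, dtype=np.uint8)      # first image
    face[4:, :] = (90, 90, 90)                           # the occluder
    model = np.full(face.shape, skin, dtype=np.uint8)    # unoccluded model render
    second = synthesize(face, model, detect_occlusion(face, skin))

In a real pipeline, the model region would be rendered from the three-dimensional face model at the pose of the first image rather than assumed flat.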
According to the embodiments of the present application, the unoccluded three-dimensional face model serves as a source of complete face information, so that when the first image is occluded, the complete face information is synthesized with the first image to remove the occluded area and obtain an unoccluded second image. By eliminating the occlusion and restoring the complete face, the accuracy of face recognition on the second image is effectively improved; in mask-wearing scenes, a complete face image can be synthesized without taking off the mask, improving convenience and safety, and in some shooting scenes an image containing the complete face can likewise be obtained through synthesis.
Fig. 5 is a flowchart of an image processing method according to an embodiment of the present application, referring to fig. 5, the method includes the following steps.
501. The electronic device obtains a first image of a target.
In the embodiment of the application, the electronic device can capture an image and process it, and can also receive and process images captured by other devices. That is, the device that captures the first image may be the electronic device itself or another device.
The way the electronic device obtains the first image may differ depending on whether the capturing device and the electronic device are the same device. For example, the electronic device may capture the first image through its own capturing component; it may download the first image of the target from a target website; it may receive a first image of the target sent by another device; or it may extract the first image of the target from an image database. The embodiment of the present application does not limit how the first image is acquired.
The electronic device may be a terminal or a server.
Optionally, the electronic device may be a terminal. The terminal may capture an image of the target to obtain the first image and then execute the subsequent image processing steps; alternatively, the terminal may send the captured first image to a server, which executes the subsequent image processing steps and may return the processed second image.
Alternatively, the electronic device may be a server, which may receive an image captured by a terminal, download an image from a website, or extract an image from an image database. The server may then perform the subsequent image processing steps to obtain the second image, and may also send the processed second image to the terminal.
The electronic device may acquire the first image in a number of ways, and the acquisition process can differ across application scenarios. Three possible acquisition modes are provided below; the embodiment of the present application may implement the acquisition process in any of them, and the specific mode is not limited.
In the first mode, the electronic device captures the target with a camera component to obtain the first image.
In the first mode, after capturing the first image, the electronic device removes the occluder from it and restores the face. The restored second image can serve various purposes: for example, in a face payment scenario, identity recognition can be performed based on the restored second image to determine that the payment step can proceed; in an attendance scenario, identity recognition based on the restored second image can determine whether the check-in is valid.
In the second mode, the electronic device extracts one frame at a time from a video of the target as the first image.
In the second mode, the electronic device may perform the image processing step on one or more frames of the video. If multiple frames are to be processed, they can be extracted from the video by frame extraction, and one of them serves as the first image each time. Of course, the electronic device may also treat every frame of the video as a first image and process it, obtaining multiple frames of second images that form a processed video in which no occlusion exists on the face.
In the third mode, the electronic device extracts one frame at a time from a media stream of the target as the first image.
In the third mode, the image processing method can be applied to live-streaming scenes, where the electronic device processes the media stream generated during a live broadcast. The electronic device can process one or more frames of the stream, or indeed every frame. A frame-extraction sketch for the second and third modes follows below.
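As a minimal sketch, assuming OpenCV is available and that the file path is a placeholder, each frame yielded below would be fed to the pipeline as one first image:

    import cv2

    def frames(source):
        # Yield frames one at a time from a video file path or a stream URL.
        cap = cv2.VideoCapture(source)
        try:
            while True:
                ok, frame = cap.read()
                if not ok:
                    break
                yield frame  # each frame is processed as one first image
        finally:
            cap.release()

    # Usage: for first_image in frames("target_video.mp4"): process(first_image)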
502. The electronic device performs identity recognition on the target in the first image to obtain identity information of the target.
After acquiring the first image, the electronic device can recognize the target in it and determine who the target is, so as to acquire that person's three-dimensional face model for restoring the occluded face.
In some embodiments, the electronic device may perform feature extraction on the first image to obtain a face feature and determine the identity information of the target by comparing the face feature with candidate face features, where each candidate face feature corresponds to one piece of identity information. The identity of the target can be determined from the unoccluded part of the target in the first image, and the person's three-dimensional face model is then obtained.
In some embodiments, during the comparison the electronic device may compute the similarity between the face feature and a candidate face feature and, when the similarity is greater than a threshold, determine that the identity information of the target is the identity information corresponding to that candidate face feature, as sketched below.
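A minimal sketch of this threshold comparison, assuming the face features have already been extracted by some upstream model; the cosine similarity measure and the threshold value are illustrative assumptions:

    import numpy as np

    def identify(feature, candidates, threshold=0.8):
        # Return the identity whose candidate feature is most similar to the
        # extracted feature, or None if no similarity exceeds the threshold.
        feature = feature / np.linalg.norm(feature)
        best_id, best_sim = None, threshold
        for identity, cand in candidates.items():
            sim = float(feature @ (cand / np.linalg.norm(cand)))
            if sim > best_sim:
                best_id, best_sim = identity, sim
        return best_id

    # Usage: identify(extracted, {"alice": f_a, "bob": f_b}) -> "alice" or None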
503. Based on the identity information of the target, the electronic device extracts a three-dimensional face model corresponding to the identity information from a three-dimensional face model database as the three-dimensional face model of the target, in which no occlusion exists.
After determining the identity information of the target, the electronic device can use the identity information as an index to obtain the target's three-dimensional face model.
The three-dimensional face model database contains multiple three-dimensional face models belonging to different people, and different models correspond to different identity information. When image processing is needed, the corresponding three-dimensional face model can be selected for image restoration by determining the identity information of the target.
The three-dimensional face model of the target can be generated based on the target's face information. It may be generated and stored by the electronic device itself; or generated by another device and sent to the electronic device when image processing is needed; or generated by another device, sent to the electronic device in advance, stored there, and extracted from storage when image processing is needed. The embodiment of the application does not limit which device executes the generation of the target's three-dimensional face model.
In some embodiments, the three-dimensional face model of the target is obtained based on the following steps one and two.
Step one: the electronic device collects the face information of the target.
In step one, the electronic device collects the face information of the target; by collecting the face information in advance, missing face information can later be supplemented.
The face information can be collected by capturing several images from different angles or by recording a video covering different angles of the target. Collecting from different angles yields the complete face information of the target, so that whatever angle the target later assumes in a first image, it can be supplemented based on this complete face information.
In some embodiments, the face information acquisition process may include the following two cases, which are not limited in the embodiment of the present application.
In the first case, the electronic device, in response to a face information acquisition instruction, records the target from different angles to obtain a target video, and extracts the face information of the target from multiple frames of the target video.
In this first case, the electronic device may display a face information acquisition frame, and the user may rotate the head so that the electronic device records the target from different angles, yielding complete and comprehensive face information. Through this acquisition, the electronic device can obtain the depth information of each position of the face and the skin color information under different illumination conditions.
For example, as shown in fig. 6, the electronic device may display a face information acquisition frame 601; the user rotates the face, and the electronic device scans images 602 of the face at different angles, records a video, and analyzes the video to obtain the face information of the target.
In the second case, the electronic device, in response to a face information acquisition instruction, captures the target from different angles to obtain a plurality of images, and extracts the face information of the target from the plurality of images.
Step two: the electronic device generates a three-dimensional face model of the target based on the face information.
In step two, having collected the face information, the electronic device can generate a three-dimensional face model from it, which later serves as the source of face information to be supplemented during image processing. In some scenes the whole face cannot be captured; when the face is blocked by an occluder, the image can be restored based on the complete face information, supplementing the occluded part to obtain an image that includes the whole face.
In some embodiments, after collecting the face information, the electronic device can preprocess it and then generate the three-dimensional face model. Specifically, the electronic device preprocesses the face information and performs three-dimensional modeling based on the preprocessed face information to obtain the unoccluded three-dimensional face model of the target. Preprocessing makes the face information more complete, or standardizes face information obtained from different sources against a common standard, so that the generated three-dimensional face model better conforms to the specification and is more accurate and more realistic.
In some embodiments, the preprocessed face information includes the depth information and skin color information of each position of the face. When generating the three-dimensional face model, the electronic device can determine the shape of the model from the depth information and the skin colors of the model from the skin color information. Specifically, the electronic device may update the depth information of the corresponding positions in a template three-dimensional face model based on the depth information of each position in the preprocessed face information, obtaining the shape of the target's three-dimensional face model, and determine the skin color information of each position of the face as the skin color information of the corresponding position in the model, as sketched below.
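A minimal sketch under the simplifying assumption that the model is represented as a 2.5D per-position depth map plus a skin color map (a real implementation would deform a mesh); NaN marks positions that were not measured:

    import numpy as np

    def build_face_model(depth_map, skin_map, template_depth):
        # Measured depths override the template; skin colors are attached as-is.
        shape = template_depth.copy()
        valid = ~np.isnan(depth_map)      # positions actually measured
        shape[valid] = depth_map[valid]
        return {"depth": shape, "skin": skin_map}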
The preprocessing procedure may include one processing mode or several, set by the relevant technicians according to requirements; this is not limited in the embodiments of the present application. In some embodiments, the preprocessing may include one or more of the following four numerical processing modes for the face information (a combined sketch follows this list). Of course, the preprocessing procedure may also include other processing modes, which are not enumerated here.
In the first mode, the electronic device performs normalization processing on the face information.
Normalization can also be called standardization. In this mode, the face information can be mapped into a fixed value range; if the face information is acquired from a plurality of images, the representation of the same image information across those images can be unified, making the acquired face information more standard and more accurate.
In the second mode, the electronic device performs smoothing processing on the face information.
Smoothing makes the fusion of image information acquired from different face images more natural: when a three-dimensional face model is generated from the face information, the transitions between pixel points in the model are natural and continuous, without abrupt changes, so the model better matches a real face and is more accurate.
In the third mode, the electronic device performs outlier processing on the face information.
In this mode, the electronic device can handle outliers that differ markedly from the rest of the face information, for example by removing them, thereby preventing the outliers from degrading the overall accuracy of the three-dimensional face model.
In the fourth mode, the electronic device restores missing values in the face information according to the face information.
When collecting face information, the electronic device may fail to collect it completely. Based on the existing face information, it can restore the missing values to obtain more complete face information, and thus a more accurate and finer three-dimensional face model.
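A combined sketch of the four modes on a per-position depth map, assuming SciPy is available; the kernel sizes and the outlier threshold are illustrative choices, not values from the application:

    import numpy as np
    from scipy.ndimage import median_filter, uniform_filter

    def preprocess(depth):
        d = depth.astype(np.float32)
        # Mode four: restore missing values from neighboring measurements.
        missing = np.isnan(d)
        d[missing] = median_filter(np.nan_to_num(d), size=5)[missing]
        # Mode three: treat points far from the local median as outliers.
        local = median_filter(d, size=5)
        outliers = np.abs(d - local) > 3 * d.std()
        d[outliers] = local[outliers]
        # Mode two: smooth to remove acquisition noise.
        d = uniform_filter(d, size=3)
        # Mode one: normalize into a fixed value range.
        return (d - d.min()) / (d.max() - d.min() + 1e-8)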
It should be noted that steps 502 and 503 above constitute the process of obtaining the target's three-dimensional face model, described here for the case where the electronic device recognizes the target from the unoccluded part of the first image and thereby determines the corresponding model. The electronic device may also obtain the three-dimensional face model in other ways, for example by acquiring a preset model, or by acquiring the model according to device information or account information.
In some embodiments, a three-dimensional face model may be preset in the electronic device, and when image processing is needed, the preset model is obtained as the three-dimensional face model of the target. For example, the electronic device may collect the face information of a target, generate the model, and store it; when image processing is needed, the model is fetched directly.
In other embodiments, the electronic device may obtain the three-dimensional face model according to device information. Specifically, the electronic device acquires the three-dimensional face model of the target corresponding to the device information of the device that captured the first image; the device information uniquely identifies that device. A correspondence is thus established between the three-dimensional face model and the target according to the device information of the capturing device, and the face model matching a given first image can be obtained from that correspondence.
In other embodiments, the electronic device may obtain the three-dimensional face model according to account information. Specifically, the electronic device obtains the account information corresponding to the first image and acquires the three-dimensional face model of the target corresponding to that account information. In these embodiments, a correspondence may be established between the three-dimensional face model and account information: the user logs in to a user account on the terminal and captures the first image, so the first image is associated with the account information of that user account.
The terminal may itself be the electronic device, in which case the electronic device can obtain the three-dimensional face model corresponding to the user account; optionally, the account information may be an account identifier, so the electronic device obtains the model corresponding to the account identifier of the user account. If the terminal is not the electronic device, the terminal may send the first image to the electronic device based on the user account, and the electronic device performs the model acquisition step based on the first image and the account information of the user account; alternatively, the terminal may send both the first image and the account information of the user account to the electronic device. The embodiment of the present application is not specifically limited in this respect.
The foregoing shows several possible ways of obtaining the three-dimensional face model. The electronic device may implement the acquisition step in any one of them or in any combination: for example, it may first attempt identity recognition and, if recognition fails, fall back to another way, such as acquiring a preset model or acquiring the corresponding model according to device information or account information (a lookup sketch of such a fallback chain follows). The embodiment of the present application does not specifically limit the acquisition mode.
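A minimal sketch of such a fallback chain, assuming the databases are simple in-memory tables keyed by identity, device, and account (all names here are illustrative):

    def get_face_model(identity, device_id, account_id,
                       by_identity, by_device, by_account, preset=None):
        # Try identity first, then device information, then account
        # information, and finally fall back to the preset model.
        for key, table in ((identity, by_identity),
                           (device_id, by_device),
                           (account_id, by_account)):
            if key is not None and key in table:
                return table[key]
        return preset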
In one possible implementation, the electronic device receives an image processing instruction for the first image and acquires the three-dimensional face model of the target in response to that instruction. After the first image is acquired, the user can perform an image processing operation, and the electronic device receives the image processing instruction triggered by it; step 503 and the subsequent steps are then executed based on the instruction.
For example, as shown in fig. 7, taking mask removal as an example, the electronic device may display the acquired first image 701, in which the user wears a mask. If the user wants to remove the mask, the user can click the displayed image processing button 702, which may be a "one-touch mask removal" button. The electronic device then processes the first image 701 to obtain a second image 703 with the mask removed. If the user wants to restore the original image (the first image 701), the user can click a displayed restore button 704, which may be a "restore image" button.
504. The electronic device recognizes the target in the first image according to the skin color information of the target in the three-dimensional face model, to obtain the occluded area of the target in the first image.
The three-dimensional face model acquired by the electronic device includes the target's skin color information, which may be the skin colors at different positions of the face. Because the face is occluded, the pixels of the occluded area are inconsistent with the face's skin color. By comparing the skin color information with the image information on the target's face in the first image, the electronic device can find the positions inconsistent with the skin color information; those positions constitute the occluded area, as sketched below.
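A sketch of this comparison, assuming the model's skin colors have already been projected into image space as a per-pixel reference; the tolerance and the morphological cleanup kernel are illustrative assumptions:

    import numpy as np
    import cv2

    def occluded_region(image, projected_skin, tol=45.0):
        # Positions whose color is far from the reference skin color.
        diff = np.linalg.norm(image.astype(np.float32)
                              - projected_skin.astype(np.float32), axis=-1)
        mask = (diff > tol).astype(np.uint8)
        # Morphological open/close to drop speckle and fill small holes.
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
        mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
        return mask.astype(bool)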
505. The electronic device determines a region model corresponding to the occluded area according to the region of the three-dimensional face model corresponding to the occluded area.
When the angle of the target in the first image differs, the region of the three-dimensional face model corresponding to the occluded area can differ as well.
Optionally, the electronic device may perform face detection on the first image to determine face key points, then determine the region of the three-dimensional face model corresponding to the occluded area from the face key points at the edge of the occluded area and the corresponding key points in the model, and take that region of the model as the region model.
Optionally, the electronic device may obtain the angle of the target in the first image and determine the corresponding region of the three-dimensional face model from that angle and the position of the occluded area within the face in the first image. For example, a fourth image may be obtained by imaging the three-dimensional face model at that angle; the region at the corresponding position in the fourth image is then determined, from the position of the occluded area within the face in the first image, as the region of the model corresponding to the occluded area, and the region model is thereby determined.
As for the angle of the target, in some embodiments the electronic device may take the angle between the target's direct-view direction in the first image and the normal direction of the first image as the angle of the target. When that angle is 0 degrees, the angle of the target is considered to be zero degrees, i.e., the first image shows the front of the target's face; when it is 90 degrees, the angle of the target is considered to be 90 degrees, i.e., the first image shows the side of the target's face. Of course, the angle between the direct-view direction and the image normal may take other values, in which case the angle of the target takes those values accordingly; a short sketch of this definition follows.
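A minimal sketch of this angle definition, assuming the direct-view direction has already been estimated upstream (for example from face key points):

    import numpy as np

    def target_angle(gaze_dir, image_normal=(0.0, 0.0, 1.0)):
        # Angle between the target's direct-view direction and the image
        # normal: 0 degrees is a frontal face, 90 degrees a full profile.
        g = np.asarray(gaze_dir, np.float64)
        n = np.asarray(image_normal, np.float64)
        cos = g @ n / (np.linalg.norm(g) * np.linalg.norm(n))
        return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))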
506. The electronic device synthesizes the first image with the region model to obtain a second image, in which no occlusion exists.
Through the above steps, the electronic device determines the region model corresponding to the occluded area; this region model is an unoccluded face region model, so determining it yields the appearance of the face within the occluded area of the first image. By synthesizing the first image with the region model, the occluded area in the first image can be removed, i.e., the face information of the occluded area is restored, so the face information in the second image obtained through the synthesis is complete and no occlusion exists.
In some embodiments, the composition process may be implemented by way of a region model to replace occluded regions. Specifically, the electronic device replaces the blocked area in the first image with the area model to obtain a second image. In this way, through the replacement process, the blocked area becomes an area model, the three-dimensional face model is free of blocking, the area model is free of blocking, and through the replacement process, the blocked area in the first image becomes an unobstructed face area, so that the effect of removing blocking can be achieved.
In one possible implementation, after performing the replacement, the electronic device may further adjust the replaced image according to the illumination information of the first image, so that the illumination of the replaced region in the second image matches that of the original image (i.e., the first image) and conforms to the illumination of the other regions, making the second image closer to a genuinely photographed image. Specifically, the electronic device may replace the image information of the occluded region in the first image with the image information of the region model to obtain a third image, and update the image information of the region model in the third image according to the illumination information of the occluded region in the first image to obtain the second image.
In one specific possible embodiment, the image information may include grayscale information and color information, and the electronic device may update the color information in the image information of the region model according to the illumination information, so that the color of those pixels matches the illumination of the corresponding pixels in the first image, making the resulting second image more authentic.
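One possible realization of this illumination update, sketched under the assumption that the illumination of the occluded region can be approximated by the lightness statistics of nearby unoccluded skin; the LAB color space and mean/standard-deviation matching are choices of this sketch, not requirements of the embodiment:

```python
import cv2
import numpy as np

def match_illumination(third_img, mask, context_mask):
    """Match the lightness of the replaced region to the surrounding skin.

    third_img   : (H, W, 3) uint8 BGR image after the replacement
    mask        : (H, W) bool, the replaced (formerly occluded) region
    context_mask: (H, W) bool, nearby unoccluded skin used as reference
    """
    lab = cv2.cvtColor(third_img, cv2.COLOR_BGR2LAB).astype(np.float32)
    light = lab[:, :, 0]
    src_mu, src_sd = light[mask].mean(), light[mask].std() + 1e-6
    ref_mu, ref_sd = light[context_mask].mean(), light[context_mask].std() + 1e-6
    # Shift and scale the lightness of the replaced pixels toward the reference.
    light[mask] = (light[mask] - src_mu) * (ref_sd / src_sd) + ref_mu
    lab[:, :, 0] = np.clip(light, 0, 255)
    return cv2.cvtColor(lab.astype(np.uint8), cv2.COLOR_LAB2BGR)
```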
In some embodiments, the region model does not come from the same image as the first image, so where the region model meets the first image after the replacement, its edges may not join the first image entirely naturally. The electronic device may soften these edges so that the two parts blend more naturally, the image information of the pixels shows no abrupt change, and the junction appears visually seamless, improving the realism and look of the second image. Specifically, when processing the third image, the electronic device may further perform smoothing processing on the edges of the region model in the third image.
Here, smoothing is also called blurring. Its effect is to reduce noise or distortion in the image; it is in fact an image filtering process. From the signal-processing point of view, image smoothing removes high-frequency information and retains low-frequency information, so a low-pass filter can be applied to the image. Low-pass filtering removes noise from the image and blurs it (noise being the regions of relatively large variation, i.e., high-frequency information), whereas high-pass filtering extracts the edges of the image (edges being likewise areas where high-frequency information is concentrated). The smoothing may be implemented by a filter, which may be any of a mean filter, a Gaussian-weighted filter, a median filter, and a bilateral filter; this is not limited by the embodiment of the present application.
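All four candidate filters are available in common image libraries. The sketch below applies them with OpenCV, restricted to a thin band around the region-model edge; extracting that band by dilation and erosion is an assumption of this sketch:

```python
import cv2
import numpy as np

def smooth_region_edges(third_img, mask, band_px=5, ksize=5):
    """Smooth only a thin band around the edge of the replaced region."""
    m = mask.astype(np.uint8)
    kernel = np.ones((2 * band_px + 1, 2 * band_px + 1), np.uint8)
    # The edge band is the dilated mask minus the eroded mask.
    band = (cv2.dilate(m, kernel) - cv2.erode(m, kernel)).astype(bool)

    blurred = cv2.GaussianBlur(third_img, (ksize, ksize), 0)  # Gaussian-weighted filter
    # Alternatives from the list above:
    # blurred = cv2.blur(third_img, (ksize, ksize))           # mean filter
    # blurred = cv2.medianBlur(third_img, ksize)              # median filter
    # blurred = cv2.bilateralFilter(third_img, 9, 75, 75)     # bilateral filter

    out = third_img.copy()
    out[band] = blurred[band]  # only the edge band is softened
    return out
```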
Steps 505 and 506 describe the process of synthesizing the first image with the region of the three-dimensional face model corresponding to the occluded region to obtain the second image, taking as an example the case in which the electronic device first determines the region model needed for the synthesis from that region of the three-dimensional face model and then performs the synthesis. Optionally, the electronic device may instead directly acquire the image information of the region of the three-dimensional face model corresponding to the occluded region, without determining a region model, and replace the image information of the occluded region in the first image with it. In other embodiments, the synthesis may be implemented in other ways; for example, the electronic device may redraw the second image based on the first image and the region model, drawing it from the image information of the pixels of the first image outside the occluded region together with the image information of the pixels of the region model. The specific manner adopted in the synthesis process is not limited by the embodiment of the present application.
After step 506, the electronic device processes the second image. If the electronic device is a terminal, the terminal may display the second image, presenting the user with an image of the complete face after the occlusion is removed. If the electronic device is a server, the server may send the second image to the terminal, which then displays it.
As shown in fig. 8, 9, 10, and 11, a specific example is provided. As shown in fig. 8, a face modeling system 800 includes a client APP 801 and a server side 802, where the client APP 801 may include a video recording module 8011 and a depth information processing module 8012. The video recording module 8011 detects whether the omnidirectional recording of the user is complete, that is, whether images of the user have been acquired from multiple angles, so as to obtain the depth information of the face. The depth information processing module 8012 digitizes the depth information of the user's face. The server side 802 may include a modeling module 8021 and a storage module 8022. The modeling module 8021 adjusts a basic (generic) face model according to the user's actual face and establishes skin color data; that is, the generic face model is adjusted based on the depth information of the user's face to obtain the user's face model (i.e., the three-dimensional face model). The storage module 8022 creates and stores individual face material for each user, where the face material refers to the three-dimensional face model and the skin color data.
The terminal (client) is used to collect the first image, and the server provides the image processing service for the terminal: the server performs image processing on the first image collected by the terminal and sends the processed second image to the terminal, and the terminal executes subsequent functions according to the second image.
As shown in fig. 9, the client APP serves as the client and the server serves as the background. The client includes a front-end interaction module, an image recording module, and an image processing module; the background includes a modeling background and a face model database. Together, the client and the background form the image processing system that implements the image processing method, which comprises two parts: the first part is face modeling and storage, and the second part is face image synthesis.
The respective modules included in the image processing system are explained below.
Front-end interaction module: part of the client application (App); it guides the user through the multi-angle face recording, modeling, and storage processes, and provides the function entrance for shooting and for removing face occlusions.
Image recording module: part of the App; it collects the recorded facial image information in real time.
Image processing module: part of the App; it preprocesses the collected facial image information, for example by digitization, geometric transformation, normalization, smoothing, restoration, and enhancement.
Modeling background: performs 3D (three-dimensional) face reconstruction on the preprocessed image information by using a generic face model (such as CANDIDE-3) and a three-dimensional morphable model (3D Morphable Model, 3DMM) to generate the user's 3D face model.
Face model database: stores and provides the user's 3D face model.
In addition to the modules described above, the image processing system may also include an image capturing module, an image recognition background, and a synthesis background, as shown in fig. 11.
Image capturing module: part of the App; similar to a camera function, it is used by the user for shooting.
Image recognition background: performs recognition of facial skin color features and facial occlusion (e.g., mask) features on the image shot by the user.
Synthesis background: combines the user's 3D face model, facial skin color features, and facial occlusion features to acquire the region model of the occluded facial area, integrates the region model with the face in the photographed original image, synthesizes the image with the facial occlusion finally removed, returns it to the App, and presents it to the user.
As shown in fig. 10, the client APP uploads an image (called the main picture), the server performs image recognition and finds the corresponding three-dimensional face model in the storage module, and the image synthesis module synthesizes the face image information extracted from the three-dimensional face model into the main picture, performs fine adjustment according to the light and shadow of the image, and softens the image edges to obtain the final image, which can then be used for presentation.
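Putting the pieces together, the following hypothetical sketch traces the fig. 10 flow end to end. Every function name is a placeholder standing in for the corresponding module rather than an actual API, and the last three calls reuse the sketches given earlier in this description:

```python
# Hypothetical end-to-end sketch of the fig. 10 flow.

def recognize_face_features(image):             # image recognition background
    raise NotImplementedError("skin-color and occlusion feature recognition")

def render_region(face_model, occlusion_mask):  # region-model rendering
    raise NotImplementedError("render the matching region of the 3D face model")

def process_upload(main_picture, user_id, model_db):
    skin_mask, occlusion_mask = recognize_face_features(main_picture)
    face_model = model_db[user_id]              # face model database lookup
    region_img = render_region(face_model, occlusion_mask)
    composed = replace_occluded(main_picture, region_img, occlusion_mask)
    composed = match_illumination(composed, occlusion_mask, skin_mask)  # light/shadow
    return smooth_region_edges(composed, occlusion_mask)                # edge softening
```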
With this image processing method, when a photo or video is taken while the face carries an occluding object, the user can obtain an image of the target with the face unoccluded through simple post-processing, which improves the user's shooting experience and increases the user's loyalty to the platform. For example, the method can be applied to content generation for image products such as video recording and live streaming, and machine-learning recognition can likewise be performed on objects other than masks, such as band-aids and facial scars, which are then removed in the same way.
According to the embodiment of the present application, the unoccluded three-dimensional face model serves as complete face information, so that when an occlusion exists in the first image, the complete face information is synthesized with the first image to remove the occluded region in the first image and obtain an unoccluded second image. Obtaining the complete face by eliminating the occlusion can effectively improve the face recognition accuracy of the second image; in scenarios where a mask is worn, a complete face image can be obtained by synthesis without removing the mask, improving convenience and safety; and in some shooting scenarios, an image containing the complete face can likewise be obtained by synthesis.
All of the above optional solutions can be combined to form optional embodiments of the present application, which are not described in detail here.
Fig. 12 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application. Referring to fig. 12, the apparatus includes:
an image acquisition module 1201, configured to acquire a first image of a target;
a model acquisition module 1202, configured to acquire a three-dimensional face model of the target, where no occlusion exists in the three-dimensional face model;
an identification module 1203, configured to identify the target in the first image according to the skin color information of the target in the three-dimensional face model, to obtain the occluded region of the target in the first image;
and a synthesis module 1204, configured to synthesize the first image with the region of the three-dimensional face model corresponding to the occluded region, to obtain a second image in which no occlusion exists.
In one possible implementation, the synthesis module 1204 includes a determination unit and a synthesis unit;
the determination unit is configured to determine the region model corresponding to the occluded region according to the region of the three-dimensional face model corresponding to the occluded region;
the synthesis unit is configured to synthesize the first image and the region model to obtain the second image.
In one possible implementation, the synthesis unit is configured to replace the occluded region in the first image with the region model to obtain the second image.
In one possible implementation, the synthesis unit is configured to:
replacing the image information of the occluded region in the first image with the image information of the region model to obtain a third image;
and updating the image information of the region model in the third image according to the illumination information of the occluded region in the first image to obtain the second image.
In one possible implementation, the synthesis unit is further configured to smooth the edges of the region model in the third image.
In one possible implementation, the model acquisition module 1202 is configured to perform any one of the following:
acquiring a preset three-dimensional face model as the three-dimensional face model of the target;
acquiring a three-dimensional face model of the target corresponding to the device information of the device shooting the first image;
and acquiring account information corresponding to the first image, and acquiring the three-dimensional face model of the target corresponding to the account information.
In one possible implementation, the three-dimensional face model of the target is obtained based on the following process:
collecting face information of the target;
and generating the three-dimensional face model of the target based on the face information.
In one possible implementation, the generating the three-dimensional face model of the target based on the face information includes:
preprocessing the face information;
and performing three-dimensional modeling based on the preprocessed face information to obtain the unoccluded three-dimensional face model of the target.
In one possible implementation, the preprocessed face information includes depth information and skin color information of each position of the face;
and the performing three-dimensional modeling based on the preprocessed face information to obtain the three-dimensional face model of the target includes the following steps, illustrated by the sketch after them:
updating the depth information of the corresponding position in a generic three-dimensional face model based on the depth information of each position in the preprocessed face information, to obtain the shape of the three-dimensional face model of the target;
and determining the skin color information of each position of the face as the skin color information of the corresponding position in the three-dimensional face model of the target.
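As an illustration of this shape and skin color update, the sketch below assumes the generic model and the collected face information are represented as aligned per-position (depth-map) arrays; that representation is an assumption of the sketch, since the embodiment does not prescribe one:

```python
import numpy as np

def build_target_model(generic_depth, measured_depth, measured_skin, valid):
    """Update a generic face model with the target's measured information.

    generic_depth : (H, W) float, depth map of the generic 3D face model
    measured_depth: (H, W) float, depth collected from the target's face
    measured_skin : (H, W, 3) uint8, skin color collected from the target
    valid         : (H, W) bool, positions where measurements exist
    """
    target_depth = generic_depth.copy()
    target_depth[valid] = measured_depth[valid]  # shape of the target's model
    target_skin = measured_skin                  # skin color at each position
    return target_depth, target_skin
```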
In one possible implementation, the preprocessing of the face information includes at least one of the following (a combined sketch follows this list):
normalizing the face information;
smoothing the face information;
performing outlier processing on the face information;
and restoring missing values in the face information according to the face information.
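A minimal combined sketch of these four preprocessing operations on a depth map, assuming the face information is carried as a 2D float array with NaN marking missing values (this representation is an assumption of the sketch):

```python
import cv2
import numpy as np

def preprocess_depth(depth):
    """Normalize, smooth, de-outlier, and impute a face depth map.

    depth: (H, W) float array with NaN marking missing measurements.
    """
    d = depth.astype(np.float64)

    # Outlier processing: clip beyond 3 median absolute deviations.
    med = np.nanmedian(d)
    mad = np.nanmedian(np.abs(d - med)) + 1e-9
    d = np.clip(d, med - 3 * mad, med + 3 * mad)

    # Missing-value restoration from the remaining face information.
    d[np.isnan(d)] = med

    # Smoothing.
    d = cv2.GaussianBlur(d, (5, 5), 0)

    # Normalization to [0, 1].
    d = (d - d.min()) / (d.max() - d.min() + 1e-9)
    return d
```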
In one possible implementation, the collecting the face information of the target includes any one of the following:
recording the target from different angles in response to a face information collection instruction to obtain a target video, and extracting the face information of the target from multiple frames of images of the target video;
and shooting the target from different angles in response to a face information collection instruction to obtain a plurality of images, and extracting the face information of the target from the plurality of images.
In one possible implementation, the image acquisition module 1201 is configured to perform any one of the following:
shooting the target based on a camera assembly to obtain the first image;
extracting one frame at a time from a video of the target as the first image;
and extracting one frame at a time from a media stream of the target as the first image.
According to the apparatus provided by the embodiment of the present application, the unoccluded three-dimensional face model serves as complete face information, so that when an occlusion exists in the first image, the complete face information is synthesized with the first image to remove the occluded region and obtain an unoccluded second image. Obtaining the complete face by eliminating the occlusion can effectively improve the face recognition accuracy of the second image; in scenarios where a mask is worn, a complete face image can be obtained by synthesis without removing the mask, improving convenience and safety; and in some shooting scenarios, an image containing the complete face can likewise be obtained by synthesis.
It should be noted that the image processing apparatus provided in the above embodiment is described, when processing an image, using the above division of functional modules merely as an example; in practical applications, the above functions can be allocated to different functional modules as needed, that is, the internal structure of the image processing apparatus can be divided into different functional modules to perform all or part of the functions described above. In addition, the image processing apparatus and the image processing method provided in the above embodiments belong to the same concept; their specific implementation process is detailed in the method embodiments and is not repeated here.
The electronic device in the above method embodiments can be implemented as a terminal. For example, fig. 13 is a block diagram of a terminal according to an embodiment of the present application. The terminal 1300 may be a portable mobile terminal, such as a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a notebook computer, or a desktop computer. The terminal 1300 may also be called a user device, a portable terminal, a laptop terminal, a desktop terminal, or other names.
In general, the terminal 1300 includes a processor 1301 and a memory 1302.
The processor 1301 may include one or more processing cores, for example a 4-core or an 8-core processor. The processor 1301 may be implemented in at least one hardware form of a DSP (Digital Signal Processor), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 1301 may also include a main processor and a coprocessor: the main processor, also called a CPU (Central Processing Unit), is a processor for processing data in the awake state, while the coprocessor is a low-power processor for processing data in the standby state. In some embodiments, the processor 1301 may be integrated with a GPU (Graphics Processing Unit) responsible for rendering and drawing the content that the display screen needs to display. In some embodiments, the processor 1301 may also include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
The memory 1302 may include one or more computer-readable storage media, which may be non-transitory. The memory 1302 may also include high-speed random access memory and non-volatile memory, such as one or more magnetic disk storage devices and flash memory storage devices. In some embodiments, a non-transitory computer-readable storage medium in the memory 1302 is used to store at least one piece of program code, which is executed by the processor 1301 to implement the image processing method provided by the method embodiments of the present application.
In some embodiments, the terminal 1300 may optionally further include a peripheral interface 1303 and at least one peripheral. The processor 1301, the memory 1302, and the peripheral interface 1303 may be connected by a bus or signal lines, and each peripheral may be connected to the peripheral interface 1303 through a bus, a signal line, or a circuit board. Specifically, the peripherals include at least one of a radio frequency circuit 1304, a display screen 1305, a camera assembly 1306, an audio circuit 1307, a positioning assembly 1308, and a power supply 1309.
The peripheral interface 1303 may be used to connect at least one I/O (Input/Output)-related peripheral to the processor 1301 and the memory 1302. In some embodiments, the processor 1301, the memory 1302, and the peripheral interface 1303 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 1301, the memory 1302, and the peripheral interface 1303 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The radio frequency circuit 1304 is used to receive and transmit RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuit 1304 communicates with communication networks and other communication devices via electromagnetic signals, converting an electrical signal into an electromagnetic signal for transmission, or converting a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 1304 includes an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so on. The radio frequency circuit 1304 may communicate with other terminals via at least one wireless communication protocol, including but not limited to: the World Wide Web, metropolitan area networks, intranets, mobile communication networks of various generations (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 1304 may also include NFC (Near Field Communication)-related circuits, which is not limited by the present application.
The display screen 1305 is used to display a UI (User Interface), which may include graphics, text, icons, video, and any combination thereof. When the display screen 1305 is a touch display screen, it also has the ability to capture touch signals on or above its surface; such a touch signal may be input to the processor 1301 as a control signal for processing, and the display screen 1305 may then also be used to provide virtual buttons and/or a virtual keyboard, also called soft buttons and/or a soft keyboard. In some embodiments, there may be one display screen 1305, disposed on the front panel of the terminal 1300; in other embodiments, there may be at least two display screens 1305, disposed on different surfaces of the terminal 1300 or in a folded design; in still other embodiments, the display screen 1305 may be a flexible display screen disposed on a curved or folded surface of the terminal 1300. The display screen 1305 may even be arranged in a non-rectangular irregular pattern, that is, a shaped screen. The display screen 1305 may be made of materials such as an LCD (Liquid Crystal Display) or an OLED (Organic Light-Emitting Diode).
The camera assembly 1306 is used to capture images or video. Optionally, the camera assembly 1306 includes a front camera and a rear camera. Typically, the front camera is disposed on the front panel of the terminal and the rear camera is disposed on the rear surface of the terminal. In some embodiments, there are at least two rear cameras, each being any one of a main camera, a depth-of-field camera, a wide-angle camera, and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fused shooting functions. In some embodiments, the camera assembly 1306 may also include a flash, which may be a single-color-temperature flash or a dual-color-temperature flash. A dual-color-temperature flash refers to a combination of a warm-light flash and a cold-light flash, and can be used for light compensation at different color temperatures.
The audio circuit 1307 may include a microphone and a speaker. The microphone is used to collect sound waves from the user and the environment, convert them into electrical signals, and input them to the processor 1301 for processing, or to the radio frequency circuit 1304 for voice communication. For stereo collection or noise reduction, there may be multiple microphones disposed at different parts of the terminal 1300; the microphone may also be an array microphone or an omnidirectional pickup microphone. The speaker is used to convert electrical signals from the processor 1301 or the radio frequency circuit 1304 into sound waves, and may be a conventional thin-film speaker or a piezoelectric ceramic speaker. When the speaker is a piezoelectric ceramic speaker, it can convert electrical signals not only into sound waves audible to humans but also into sound waves inaudible to humans for purposes such as ranging. In some embodiments, the audio circuit 1307 may also include a headphone jack.
The positioning assembly 1308 is used to locate the current geographic position of the terminal 1300 to implement navigation or LBS (Location Based Service). The positioning assembly 1308 may be a positioning assembly based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, the GLONASS system of Russia, or the Galileo system of the European Union.
The power supply 1309 is used to supply power to the various components in the terminal 1300. The power supply 1309 may be an alternating current supply, a direct current supply, a disposable battery, or a rechargeable battery. When the power supply 1309 includes a rechargeable battery, the rechargeable battery may be a wired rechargeable battery, charged through a wired line, or a wireless rechargeable battery, charged through a wireless coil. The rechargeable battery may also be used to support fast-charging technology.
In some embodiments, terminal 1300 also includes one or more sensors 1310. The one or more sensors 1310 include, but are not limited to: acceleration sensor 1311, gyroscope sensor 1312, pressure sensor 1313, fingerprint sensor 1314, optical sensor 1315, and proximity sensor 1316.
The acceleration sensor 1311 can detect the magnitudes of accelerations on three coordinate axes of the coordinate system established with the terminal 1300. For example, the acceleration sensor 1311 may be used to detect components of gravitational acceleration in three coordinate axes. Processor 1301 may control display screen 1305 to display a user interface in either a landscape view or a portrait view based on gravitational acceleration signals acquired by acceleration sensor 1311. The acceleration sensor 1311 may also be used for the acquisition of motion data of a game or user.
The gyro sensor 1312 may detect a body direction and a rotation angle of the terminal 1300, and the gyro sensor 1312 may collect a 3D motion of the user on the terminal 1300 in cooperation with the acceleration sensor 1311. Processor 1301 can implement the following functions based on the data collected by gyro sensor 1312: motion sensing (e.g., changing UI according to a tilting operation by a user), image stabilization at shooting, game control, and inertial navigation.
The pressure sensor 1313 may be disposed on a side frame of the terminal 1300 and/or under the display screen 1305. When the pressure sensor 1313 is disposed on a side frame of the terminal 1300, the user's grip signal on the terminal 1300 can be detected, and the processor 1301 performs left/right-hand recognition or quick operations according to the grip signal collected by the pressure sensor 1313. When the pressure sensor 1313 is disposed under the display screen 1305, the processor 1301 controls the operability controls on the UI according to the user's pressure operations on the display screen 1305. The operability controls include at least one of a button control, a scroll-bar control, an icon control, and a menu control.
The fingerprint sensor 1314 is used to collect the user's fingerprint, and the processor 1301 identifies the user according to the fingerprint collected by the fingerprint sensor 1314, or the fingerprint sensor 1314 itself identifies the user according to the collected fingerprint. Upon recognizing the user's identity as trusted, the processor 1301 authorizes the user to perform relevant sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, making payments, changing settings, and the like. The fingerprint sensor 1314 may be disposed on the front, back, or side of the terminal 1300. When a physical key or a vendor logo is provided on the terminal 1300, the fingerprint sensor 1314 may be integrated with the physical key or the vendor logo.
The optical sensor 1315 is used to collect ambient light intensity. In one embodiment, processor 1301 may control the display brightness of display screen 1305 based on the intensity of ambient light collected by optical sensor 1315. Specifically, when the intensity of the ambient light is high, the display brightness of the display screen 1305 is turned up; when the ambient light intensity is low, the display brightness of the display screen 1305 is turned down. In another embodiment, processor 1301 may also dynamically adjust the shooting parameters of camera assembly 1306 based on the intensity of ambient light collected by optical sensor 1315.
The proximity sensor 1316, also called a distance sensor, is typically disposed on the front panel of the terminal 1300 and is used to collect the distance between the user and the front of the terminal 1300. In one embodiment, when the proximity sensor 1316 detects that the distance between the user and the front of the terminal 1300 gradually decreases, the processor 1301 controls the display screen 1305 to switch from the on-screen state to the off-screen state; when the proximity sensor 1316 detects that the distance gradually increases, the processor 1301 controls the display screen 1305 to switch from the off-screen state to the on-screen state.
Those skilled in the art will appreciate that the structure shown in fig. 13 does not limit the terminal 1300, which may include more or fewer components than shown, combine certain components, or employ a different arrangement of components.
The electronic device in the above method embodiments can be implemented as a server. For example, fig. 14 is a schematic structural diagram of a server according to an embodiment of the present application. The server 1400 may vary considerably in configuration or performance and may include one or more processors (Central Processing Units, CPUs) 1401 and one or more memories 1402, where at least one piece of program code is stored in the memory 1402 and is loaded and executed by the processor 1401 to implement the image processing method provided by the above method embodiments. Of course, the server can also have components such as a wired or wireless network interface and an input/output interface, and can include other components for implementing device functions, which are not described here.
In an exemplary embodiment, a computer-readable storage medium is also provided, for example a memory including at least one piece of program code, which is executable by a processor to perform the image processing method in the above embodiments. For example, the computer-readable storage medium can be a read-only memory (ROM), a random access memory (RAM), a compact disc read-only memory (CD-ROM), a magnetic tape, a floppy disk, an optical data storage device, or the like.
In an exemplary embodiment, a computer program product or a computer program is also provided, including one or more pieces of program code stored in a computer-readable storage medium. One or more processors of the electronic device can read the one or more pieces of program code from the computer-readable storage medium and execute them, so that the electronic device can perform the above image processing method.
It should be understood that, in various embodiments of the present application, the sequence numbers of the foregoing processes do not mean the order of execution, and the order of execution of the processes should be determined by the functions and internal logic thereof, and should not constitute any limitation on the implementation process of the embodiments of the present application.
It should be understood that determining B from A does not mean determining B from A alone; B can also be determined from A and/or other information.
Those of ordinary skill in the art will appreciate that all or a portion of the steps implementing the above-described embodiments can be implemented by hardware, or can be implemented by a program instructing the relevant hardware, and the program can be stored in a computer readable storage medium, and the above-mentioned storage medium can be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only of alternative embodiments of the application and is not intended to limit the application, but any modifications, equivalents, improvements, etc. which fall within the spirit and principles of the application are intended to be included in the scope of the application.

Claims (20)

1. An image processing method, the method comprising:
acquiring a first image of a target;
acquiring face information of the first image;
preprocessing the face information, wherein the preprocessed face information comprises depth information and skin color information of each position of the face; updating the depth information of the corresponding position in a generic three-dimensional face model based on the depth information of each position in the preprocessed face information to obtain the shape of the three-dimensional face model of the target;
determining the skin color information of each position of the face as the skin color information of the corresponding position in the three-dimensional face model of the target, wherein no occlusion exists in the three-dimensional face model;
identifying the target in the first image according to the skin color information of the target in the three-dimensional face model to obtain an occluded region of the target in the first image;
and synthesizing the region of the three-dimensional face model corresponding to the occluded region with the first image to obtain a second image, wherein no occlusion exists in the second image.
2. The method according to claim 1, wherein the synthesizing the region of the three-dimensional face model corresponding to the occluded region with the first image to obtain the second image comprises:
determining a region model corresponding to the occluded region according to the region of the three-dimensional face model corresponding to the occluded region;
and synthesizing the first image and the region model to obtain the second image.
3. The method according to claim 2, wherein the synthesizing the first image and the region model to obtain the second image comprises:
replacing the occluded region in the first image with the region model to obtain the second image.
4. The method according to claim 3, wherein the replacing the occluded region in the first image with the region model to obtain the second image comprises:
replacing the image information of the occluded region in the first image with the image information of the region model to obtain a third image;
and updating the image information of the region model in the third image according to the illumination information of the occluded region in the first image to obtain the second image.
5. The method according to claim 4, wherein the method further comprises:
smoothing the edges of the region model in the third image.
6. The method according to claim 1, further comprising any one of the following:
performing identity recognition on the target in the first image to obtain identity information of the target, and extracting, based on the identity information of the target, a three-dimensional face model corresponding to the identity information from a three-dimensional face model database as the three-dimensional face model of the target;
acquiring a preset three-dimensional face model as the three-dimensional face model of the target;
acquiring a three-dimensional face model of the target corresponding to the device information of the device shooting the first image;
and acquiring account information corresponding to the first image, and acquiring the three-dimensional face model of the target corresponding to the account information.
7. The method according to claim 1, wherein the preprocessing the face information comprises at least one of the following:
normalizing the face information;
smoothing the face information;
performing outlier processing on the face information;
and restoring missing values in the face information according to the face information.
8. The method according to claim 1, wherein the acquiring the face information of the first image comprises any one of the following:
recording the target from different angles in response to a face information collection instruction to obtain a target video, and extracting the face information of the target from multiple frames of images of the target video;
and shooting the target from different angles in response to a face information collection instruction to obtain a plurality of images, and extracting the face information of the target from the plurality of images.
9. The method according to claim 1, wherein the acquiring the first image of the target comprises any one of the following:
shooting the target based on a camera assembly to obtain the first image;
extracting one frame at a time from a video of the target as the first image;
and extracting one frame at a time from a media stream of the target as the first image.
10. An image processing apparatus, wherein the apparatus comprises:
an image acquisition module, configured to acquire a first image of a target;
a model acquisition module, configured to acquire face information of the first image; preprocess the face information, wherein the preprocessed face information includes depth information and skin color information of each position of the face; update the depth information of the corresponding position in a generic three-dimensional face model based on the depth information of each position in the preprocessed face information to obtain the shape of the three-dimensional face model of the target; and determine the skin color information of each position of the face as the skin color information of the corresponding position in the three-dimensional face model of the target, wherein no occlusion exists in the three-dimensional face model;
an identification module, configured to identify the target in the first image according to the skin color information of the target in the three-dimensional face model to obtain an occluded region of the target in the first image;
and a synthesis module, configured to synthesize the region of the three-dimensional face model corresponding to the occluded region with the first image to obtain a second image, wherein no occlusion exists in the second image.
11. The apparatus according to claim 10, wherein the synthesis module comprises a determination unit and a synthesis unit;
the determination unit is configured to determine the region model corresponding to the occluded region according to the region of the three-dimensional face model corresponding to the occluded region;
and the synthesis unit is configured to synthesize the first image and the region model to obtain the second image.
12. The apparatus according to claim 11, wherein the synthesis unit is configured to replace the occluded region in the first image with the region model to obtain the second image.
13. The apparatus according to claim 11, wherein the synthesis unit is configured to:
replace the image information of the occluded region in the first image with the image information of the region model to obtain a third image;
and update the image information of the region model in the third image according to the illumination information of the occluded region in the first image to obtain the second image.
14. The apparatus of claim 13, wherein the synthesis unit is further configured to smooth edges of the region model in the third image.
15. The apparatus according to claim 10, wherein the model acquisition module is configured to perform any one of the following:
acquiring a preset three-dimensional face model as the three-dimensional face model of the target;
acquiring a three-dimensional face model of the target corresponding to the device information of the device shooting the first image;
and acquiring account information corresponding to the first image, and acquiring the three-dimensional face model of the target corresponding to the account information.
16. The apparatus according to claim 10, wherein the model acquisition module is configured to perform at least one of the following:
normalizing the face information;
smoothing the face information;
performing outlier processing on the face information;
and restoring missing values in the face information according to the face information.
17. The apparatus according to claim 10, wherein the model acquisition module is configured to perform any one of the following:
recording the target from different angles in response to a face information collection instruction to obtain a target video, and extracting the face information of the target from multiple frames of images of the target video;
and shooting the target from different angles in response to a face information collection instruction to obtain a plurality of images, and extracting the face information of the target from the plurality of images.
18. The apparatus according to claim 10, wherein the image acquisition module is configured to perform any one of the following:
shooting the target based on a camera assembly to obtain the first image;
extracting one frame at a time from a video of the target as the first image;
and extracting one frame at a time from a media stream of the target as the first image.
19. An electronic device, comprising one or more processors and one or more memories, wherein at least one piece of program code is stored in the one or more memories, and the at least one piece of program code is loaded and executed by the one or more processors to implement the image processing method according to any one of claims 1 to 9.
20. A computer-readable storage medium, wherein at least one piece of program code is stored in the storage medium, and the at least one piece of program code is loaded and executed by a processor to implement the image processing method according to any one of claims 1 to 9.