CN114173158A - Face recognition method, cloud device, client device, electronic device and medium - Google Patents

Face recognition method, cloud device, client device, electronic device and medium

Info

Publication number
CN114173158A
Authority
CN
China
Prior art keywords
time point
cloud
face recognition
current time
equipment
Prior art date
Legal status
Pending
Application number
CN202111433098.6A
Other languages
Chinese (zh)
Inventor
陈思民
Current Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202111433098.6A priority Critical patent/CN114173158A/en
Publication of CN114173158A publication Critical patent/CN114173158A/en
Pending legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/2393 Interfacing the upstream path of the transmission network, e.g. prioritizing client content requests, involving handling client requests
    • H04N21/241 Operating system [OS] processes, e.g. server setup
    • H04N21/437 Interfacing the upstream path of the transmission network, e.g. for transmitting client requests to a VOD server
    • H04N21/4415 Acquiring end-user identification using biometric characteristics of the user, e.g. by voice recognition or fingerprint scanning
    • H04N21/4431 OS processes characterized by the use of Application Program Interface [API] libraries

Abstract

The disclosure provides a face recognition method, a cloud device, a client device, an electronic device and a medium, relates to the field of artificial intelligence, in particular to cloud computing technology, and can be applied to scenarios such as cloud gaming. The specific implementation scheme is as follows: receiving a request to start a face recognition function sent by a user through a client device, where the request to start the face recognition function carries the identifier of the current application program; sending, to the client device according to the identifier of the current application program, device information with which the cloud device implements the face recognition function; receiving video stream data of the user collected by the client device based on the device information; and performing face recognition on the user based on the video stream data of the user. The embodiment of the application can greatly improve the success rate of face recognition.

Description

Face recognition method, cloud device, client device, electronic device and medium
Technical Field
The present disclosure relates to the field of artificial intelligence technologies, further relates to cloud computing technology, and in particular to a face recognition method, a cloud device, a client device, an electronic device, and a medium, which can be applied to scenarios such as cloud gaming.
Background
Cloud gaming is a gaming mode based on cloud computing: all games run on the server side, and the rendered game frames are compressed and then transmitted to the user over the network. On the client side, the user's gaming device does not require a high-end processor or graphics card, only basic video decompression capability.
In the cloud gaming scenario, a face recognition function generally needs to be activated in order to prevent game addiction. When face recognition is performed in the cloud, data from the user's real camera is collected and sent back to the cloud camera driver as a video stream, so that the cloud can recognize the face captured by the user's real mobile phone. However, real cameras differ in resolution, and the aspect ratios of the videos captured by different mobile phone cameras also differ, for example 480 × 800, 640 × 800 and so on. Because the aspect ratios of the videos collected by different users' mobile phones are different, the video stream transmitted to the cloud becomes stretched, and the success rate of face recognition is very low.
Disclosure of Invention
The disclosure provides a face recognition method, a cloud device, a client device, an electronic device and a medium.
In a first aspect, an embodiment of the present application provides a face recognition method, applied to a cloud device, the method including:
receiving a request to start a face recognition function sent by a user through a client device, where the request to start the face recognition function carries the identifier of the current application program;
sending, to the client device according to the identifier of the current application program, device information with which the cloud device implements the face recognition function;
receiving video stream data of the user collected by the client device based on the device information;
and performing face recognition on the user based on the video stream data of the user.
In a second aspect, an embodiment of the present application further provides a face recognition method, applied to a client device, the method including:
receiving a face recognition instruction sent by a user through a current application program;
sending, in response to the face recognition instruction, a request to start a face recognition function to the cloud device, where the request to start the face recognition function carries the identifier of the current application program;
receiving device information, sent by the cloud device, with which the cloud device implements the face recognition function;
collecting video stream data of the user based on the device information; and sending the video stream data of the user to the cloud device, so that the cloud device performs face recognition on the user based on the video stream data of the user.
In a third aspect, an embodiment of the present application provides a cloud device, where the cloud device includes: a first receiving module, a first sending module and a face recognition module; wherein:
the first receiving module is used for receiving a request to start a face recognition function sent by a user through a client device, where the request to start the face recognition function carries the identifier of the current application program;
the first sending module is configured to send, to the client device according to the identifier of the current application program, device information with which the cloud device implements the face recognition function;
the first receiving module is further configured to receive video stream data of the user, which is acquired by the client device based on the device information;
the face recognition module is used for carrying out face recognition on the user based on the video stream data of the user.
In a fourth aspect, an embodiment of the present application further provides a client device, where the client device includes: a second receiving module, a second sending module and a video capture module; wherein:
the second receiving module is used for receiving a face recognition instruction sent by a user through a current application program;
the second sending module is used for sending, in response to the face recognition instruction, a request to start a face recognition function to the cloud device, where the request to start the face recognition function carries the identifier of the current application program;
the second receiving module is further configured to receive device information, sent by the cloud device, with which the cloud device implements the face recognition function;
the video capture module is used for collecting video stream data of the user based on the device information, and sending the video stream data of the user to the cloud device, so that the cloud device performs face recognition on the user based on the video stream data of the user.
In a fifth aspect, an embodiment of the present application further provides an electronic device, including:
one or more processors;
a memory for storing one or more programs,
when the one or more programs are executed by the one or more processors, the one or more processors implement the face recognition method according to any embodiment of the present application.
In a sixth aspect, the present application provides a storage medium, on which a computer program is stored, where the computer program is executed by a processor to implement the face recognition method according to any embodiment of the present application.
In a seventh aspect, a computer program product is provided, which when executed by a computer device implements the face recognition method according to any embodiment of the present application.
The above technical solution solves the prior-art problem that, because the videos collected by different users' mobile phones have different aspect ratios, the video stream transmitted to the cloud is stretched and the success rate of face recognition is very low.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
fig. 1 is a schematic flow chart of a first process of a face recognition method according to an embodiment of the present application;
fig. 2 is a second flow chart of the face recognition method according to the embodiment of the present application;
fig. 3 is a third flow chart of the face recognition method according to the embodiment of the present application;
fig. 4 is a fourth flowchart illustrating a face recognition method according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of a cloud device provided in an embodiment of the present application;
fig. 6 is a schematic structural diagram of a client device provided in an embodiment of the present application;
fig. 7 is a block diagram of an electronic device for implementing a face recognition method according to an embodiment of the present application.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
Example one
Fig. 1 is a first flowchart of a face recognition method according to an embodiment of the present disclosure, where the method may be executed by a cloud device, the cloud device may be implemented by software and/or hardware, and the cloud device may be integrated in any intelligent device with a network communication function. As shown in fig. 1, the face recognition method may include the following steps:
s101, receiving a request for starting a face recognition function, which is sent by a user through client equipment; wherein, the request for starting the face recognition function carries the identification of the current application program.
In this step, the cloud device may receive a request for starting a face recognition function, which is sent by a user through the client device; wherein, the request for starting the face recognition function carries the identification of the current application program. The cloud device in the embodiment of the application can be a cloud mobile phone. The cloud mobile phone is based on an end-cloud integrated virtualization technology, flexibly adapts to individual requirements of users through the digitization capabilities of a cloud network, safety, AI and the like, releases hardware resources of the mobile phone, and loads mobile phone forms applied to massive clouds as required. The cloud mobile phone is based on the 5G network, so that complex calculation and high-capacity data can be stored on the cloud end. The user can see through the long-range real-time control cloud cell-phone of mode of video stream, finally realizes the high in the clouds operation of tall and erect native application of ann and hand trip. The current application in the embodiment of the present application refers to an application in the client device that is providing a face recognition function for a user. The application may be any application installed in the client device that can provide face recognition functionality, such as QQ, WeChat, and so on.
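As a purely illustrative sketch (not part of the original disclosure), the Python snippet below shows what such a request might look like; the field names (action, app_id, user_id) and the JSON encoding are assumptions made only for illustration.

```python
# Minimal sketch of the "start face recognition" request described above.
# Field names and the JSON encoding are illustrative assumptions.
import json

def build_start_request(app_id: str, user_id: str) -> str:
    """Client side: serialize a request that carries the identifier of the
    current application program, as the step above requires."""
    return json.dumps({
        "action": "start_face_recognition",
        "app_id": app_id,   # identifier of the current application, e.g. a package name
        "user_id": user_id,
    })

print(build_start_request("com.example.qq", "user-42"))
```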
S102, sending, to the client device according to the identifier of the current application program, device information with which the cloud device implements the face recognition function.
In this step, the cloud device may send, to the client device according to the identifier of the current application program, the device information with which the cloud device implements the face recognition function. Specifically, the cloud device may send at least one piece of screen brightness information of the cloud device to the client device within a preset duration; or send resolution information of a camera of the cloud device to the client device when the request to start the face recognition function is received. The device information in the embodiment of the application may be the screen brightness information of the cloud device, or the resolution information of the camera of the cloud device.
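Purely as an illustration of this step (the lookup table, field names and values are assumptions, not taken from the disclosure), the cloud side might keep its per-application device parameters and return them on request:

```python
# Sketch of the cloud-side step above: look up the cloud device's own
# parameters by application identifier and return them to the client.
from dataclasses import dataclass

@dataclass
class CloudDeviceInfo:
    screen_brightness: int   # brightness the cloud phone would use (0-255)
    camera_width: int        # resolution of the cloud phone's virtual camera
    camera_height: int

# Hypothetical per-application settings kept on the cloud device.
DEVICE_INFO_BY_APP = {
    "com.example.qq": CloudDeviceInfo(screen_brightness=200,
                                      camera_width=640, camera_height=480),
}

def device_info_for(app_id: str) -> CloudDeviceInfo:
    """Return the device information the cloud device sends to the client."""
    return DEVICE_INFO_BY_APP[app_id]

print(device_info_for("com.example.qq"))
```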
S103, receiving video stream data of the user acquired by the client device based on the device information.
In this step, the cloud device may receive video stream data of the user acquired by the client device based on the device information. Specifically, the cloud device may receive video stream data of a user, which is acquired by the client device based on screen brightness information of the cloud device, and may also receive video stream data of the user, which is acquired by the client device based on resolution information of a camera of the cloud device.
And S104, carrying out face recognition on the user based on the video stream data of the user.
In this step, the cloud device may perform face recognition on the user based on the video stream data of the user. Specifically, a face recognition system mainly includes four parts: face image acquisition and detection, face image preprocessing, face image feature extraction, and matching and recognition. Face image acquisition: different face images can be collected through the camera lens, for example static images, dynamic images, different positions, different expressions, and the like. When the user is within the shooting range of the acquisition device, the acquisition device automatically searches for and captures the user's face image. Face detection: in practice, face detection mainly serves as preprocessing for face recognition, that is, the position and size of the face are accurately located in the image. A face image contains rich pattern features, such as histogram features, color features, template features, structural features, and the like; face detection extracts the useful information among them and uses these features to locate the face. Face image preprocessing: image preprocessing for the face is the process of processing the image based on the face detection result so that it can ultimately serve feature extraction. The original image acquired by the system is limited by various conditions and subject to random interference, so it usually cannot be used directly and must first undergo image preprocessing such as gray-scale correction and noise filtering. For face images, the preprocessing mainly includes light compensation, gray-level transformation, histogram equalization, normalization, geometric correction, filtering, sharpening, and the like. Face image feature extraction: the features usable by a face recognition system are generally classified into visual features, pixel statistical features, face image transform coefficient features, face image algebraic features, and the like. Face feature extraction, also known as face representation, is the process of modeling the features of a face. Face image matching and recognition: the extracted feature data of the face image is searched against and matched with the feature templates stored in a database; a threshold is set, and when the similarity exceeds the threshold, the matching result is output. Face recognition compares the face features to be recognized with the stored face feature templates and judges the identity of the face according to the degree of similarity.
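As a minimal sketch of the matching stage only (the embedding vectors, their dimensionality and the threshold value are assumptions for illustration; the disclosure does not specify a particular matching algorithm):

```python
# Sketch of the matching stage described above: compare an extracted face
# feature vector against a stored template and accept when the similarity
# exceeds a threshold. The feature extractor itself is out of scope here.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_face(features: np.ndarray, template: np.ndarray, threshold: float = 0.8) -> bool:
    """Return True when the face is considered the same identity as the template."""
    return cosine_similarity(features, template) >= threshold

# Toy demonstration with random vectors standing in for real face embeddings.
rng = np.random.default_rng(0)
template = rng.normal(size=128)
probe = template + rng.normal(scale=0.1, size=128)  # slightly noisy view of the same face
print(match_face(probe, template))                   # True for this toy example
```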
The face recognition method provided by the embodiment of the application first receives a request to start the face recognition function sent by a user through a client device; then sends, to the client device according to the identifier of the current application program, device information with which the cloud device implements the face recognition function; then receives video stream data of the user collected by the client device based on the device information; and finally performs face recognition on the user based on the video stream data of the user. That is to say, the present application sends the device information with which the cloud device implements the face recognition function to the client device, so that the client device collects the user's video stream data based on the device information of the cloud device; because the device information of the cloud device and the client device is consistent, the video is not stretched or otherwise deformed. In the existing face recognition method, each user's mobile phone collects video based on its own device information; because the device information of different users' mobile phones differs, the video stream sent to the cloud device may be stretched or otherwise deformed, so the accuracy of face recognition is very low. The technical solution provided by the application can therefore greatly improve the success rate of face recognition; moreover, the technical solution of the embodiment of the application is simple and convenient to implement, easy to popularize and widely applicable.
Example two
Fig. 2 is a second flow chart of the face recognition method according to the embodiment of the present application. This embodiment further optimizes and expands the above technical solution, and can be combined with the optional implementations described above. As shown in fig. 2, the face recognition method may include the following steps:
s201, receiving a request for starting a face recognition function sent by a user through client equipment; wherein, the request for starting the face recognition function carries the identification of the current application program.
S202, sending at least one screen brightness information of the cloud device to the client device within a preset time length according to the identification of the current application program; or sending resolution information of a camera of the cloud equipment to the client equipment when a request for starting the face recognition function is received.
In this step, the cloud device may send at least one screen brightness information of the cloud device to the client device within a preset duration according to the identifier of the current application program; or sending resolution information of a camera of the cloud equipment to the client equipment when a request for starting the face recognition function is received. For example, assuming that the application currently used by the user is QQ, the cloud device may first obtain screen brightness information corresponding to QQ and resolution information of the camera, and then send the screen brightness information corresponding to QQ to the client device within a preset time period; or sending resolution information of a camera corresponding to the QQ to the client device when receiving a request for starting the face recognition function.
Specifically, when the cloud device sends the screen brightness information of the cloud device to the client device, a first time point within a preset time duration may be used as a current time point; then, sending screen brightness information of the cloud equipment at the current time point to the client equipment at the current time point; and then taking the next time point of the current time point as the current time point, and repeatedly executing the operation of sending the screen brightness information of the cloud equipment on the current time point to the client equipment on the current time point until the current time point is the last time point in the preset time length.
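A possible reading of this per-time-point loop is sketched below; the one-second spacing, the brightness schedule and the send_to_client stub are assumptions introduced only for illustration.

```python
# Sketch of the per-time-point loop described above: step through the time
# points of the preset duration and, at each one, send the cloud device's
# screen brightness for that time point to the client.
from typing import Callable, Sequence

def send_brightness_over_duration(
    time_points: Sequence[float],
    brightness_at: Callable[[float], int],
    send_to_client: Callable[[float, int], None],
) -> None:
    """Take the first time point as the current one, send the brightness for
    it, then advance to the next, until the last time point is reached."""
    for current_point in time_points:
        send_to_client(current_point, brightness_at(current_point))

# Toy usage: five time points one second apart, a constant brightness,
# and a print in place of the real network send.
send_brightness_over_duration(
    time_points=[0, 1, 2, 3, 4],
    brightness_at=lambda t: 200,
    send_to_client=lambda t, b: print(f"t={t}s brightness={b}"),
)
```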
S203, receiving video stream data of a user, acquired by the client device based on at least one screen brightness information of the cloud device or resolution information of a camera of the cloud device.
It should be noted that the client device may acquire the video stream data of the user only based on the screen brightness information of the cloud device, may also acquire the video stream data of the user only based on the resolution information of the camera of the cloud device, and may also acquire the video stream data of the user based on the screen brightness information of the cloud device and the resolution information of the camera at the same time.
And S204, carrying out face recognition on the user based on the video stream data of the user.
The face recognition method provided by the embodiment of the application first receives a request to start the face recognition function sent by a user through a client device; then sends, to the client device according to the identifier of the current application program, device information with which the cloud device implements the face recognition function; then receives video stream data of the user collected by the client device based on the device information; and finally performs face recognition on the user based on the video stream data of the user. That is to say, the present application sends the device information with which the cloud device implements the face recognition function to the client device, so that the client device collects the user's video stream data based on the device information of the cloud device; because the device information of the cloud device and the client device is consistent, the video is not stretched or otherwise deformed. In the existing face recognition method, each user's mobile phone collects video based on its own device information; because the device information of different users' mobile phones differs, the video stream sent to the cloud device may be stretched or otherwise deformed, so the accuracy of face recognition is very low. The technical solution provided by the application can therefore greatly improve the success rate of face recognition; moreover, the technical solution of the embodiment of the application is simple and convenient to implement, easy to popularize and widely applicable.
Example three
Fig. 3 is a third flowchart of a face recognition method provided in an embodiment of the present application, where the method may be executed by a client device, where the client device may be implemented by software and/or hardware, and the client device may be integrated in any intelligent device with a network communication function. As shown in fig. 3, the face recognition method may include the following steps:
s301, receiving a face recognition instruction sent by a user through a current application program.
In this step, the client device may receive a face recognition instruction sent by the user through the current application program. The client device in the embodiment of the application can be a mobile phone of a user, and can also be other electronic devices used by the user. When a user clicks a button of one application program on the mobile phone, the application program can be triggered to start a face recognition function; at this time, the client device may receive a face recognition instruction sent by the user through the current application program.
S302, sending, in response to the face recognition instruction, a request to start a face recognition function to the cloud device; where the request to start the face recognition function carries the identifier of the current application program.
In this step, the client device may send, in response to the face recognition instruction, a request to start the face recognition function to the cloud device, where the request carries the identifier of the current application program. Specifically, the identifier of the application may be the name of the application or another unique identifier. Different application programs correspond to different identifiers, and the purpose of the client device sending the identifier of the current application program to the cloud device is to inform the cloud device which application program the user is currently using, so that the cloud device can obtain the screen brightness information and the camera resolution information corresponding to that application program.
S303, receiving device information, sent by the cloud device, with which the cloud device implements the face recognition function.
In this step, the client device may receive the device information, sent by the cloud device, with which the cloud device implements the face recognition function. The device information in the embodiment of the application may be the screen brightness information of the cloud device and the resolution information of its camera. Optionally, the client device may receive only the screen brightness information of the cloud device, or only the resolution information of the camera of the cloud device. In order to improve the accuracy of face recognition, the client device may also receive the screen brightness information of the cloud device and the resolution information of the camera at the same time.
S304, collecting video stream data of the user based on the device information; and sending the video stream data of the user to the cloud device, so that the cloud device performs face recognition on the user based on the video stream data of the user.
In this step, the client device may collect video stream data of the user based on the device information, and send the video stream data of the user to the cloud device, so that the cloud device performs face recognition on the user based on the video stream data of the user. Specifically, the client device may collect the video stream data of the user based on the screen brightness information of the cloud device and the resolution information of its camera, and then send the collected video stream data to the cloud device.
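The sketch below illustrates one way a client could apply the received camera resolution before capturing, assuming OpenCV as the capture API; the upload callback and frame count are placeholders, not details from the disclosure.

```python
# Client-side sketch of the step above: configure capture with the cloud
# device's camera resolution, grab frames, and hand each one to an upload
# callback. OpenCV is used only as one plausible capture API.
import cv2  # pip install opencv-python

def capture_and_send(cloud_width: int, cloud_height: int,
                     send_frame, num_frames: int = 30) -> None:
    """Capture video at the cloud device's resolution so the stream is not
    stretched on the cloud side, then pass each frame to send_frame()."""
    cap = cv2.VideoCapture(0)
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, cloud_width)    # use the cloud camera's width
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, cloud_height)  # and height, not local defaults
    try:
        for _ in range(num_frames):
            ok, frame = cap.read()
            if not ok:
                break
            send_frame(frame)   # e.g. encode and push to the cloud device
    finally:
        cap.release()
```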
The face recognition method provided by the embodiment of the application first receives a request to start the face recognition function sent by a user through a client device; then sends, to the client device according to the identifier of the current application program, device information with which the cloud device implements the face recognition function; then receives video stream data of the user collected by the client device based on the device information; and finally performs face recognition on the user based on the video stream data of the user. That is to say, the present application sends the device information with which the cloud device implements the face recognition function to the client device, so that the client device collects the user's video stream data based on the device information of the cloud device; because the device information of the cloud device and the client device is consistent, the video is not stretched or otherwise deformed. In the existing face recognition method, each user's mobile phone collects video based on its own device information; because the device information of different users' mobile phones differs, the video stream sent to the cloud device may be stretched or otherwise deformed, so the accuracy of face recognition is very low. The technical solution provided by the application can therefore greatly improve the success rate of face recognition; moreover, the technical solution of the embodiment of the application is simple and convenient to implement, easy to popularize and widely applicable.
Example four
Fig. 4 is a fourth flowchart illustrating a face recognition method according to an embodiment of the present application. This embodiment further optimizes and expands the above technical solution, and can be combined with the optional implementations described above. As shown in fig. 4, the face recognition method may include the following steps:
s401, receiving a face recognition instruction sent by a user through a current application program.
S402, sending, in response to the face recognition instruction, a request to start a face recognition function to the cloud device; where the request to start the face recognition function carries the identifier of the current application program.
S403, receiving at least one piece of screen brightness information of the cloud device sent by the cloud device within a preset duration; or receiving resolution information of a camera of the cloud device sent by the cloud device when the cloud device receives the request to start the face recognition function.
In this step, the client device may receive at least one piece of screen brightness information of the cloud device sent by the cloud device within a preset duration; or receive resolution information of a camera of the cloud device sent by the cloud device when the cloud device receives the request to start the face recognition function. Specifically, the client device may take the first time point within the preset duration as the current time point; then, at the current time point, receive the screen brightness information of the cloud device at the current time point sent by the cloud device; and then take the next time point as the current time point and repeat the operation of receiving, at the current time point, the screen brightness information of the cloud device at the current time point sent by the cloud device, until the current time point is the last time point within the preset duration.
S404, collecting video stream data of the user based on the screen brightness information of the cloud device or the resolution information of its camera; and sending the video stream data of the user to the cloud device, so that the cloud device performs face recognition on the user based on the video stream data of the user.
In this step, the client device may collect video stream data of the user based on the screen brightness information of the cloud device or the resolution information of its camera, and send the video stream data of the user to the cloud device, so that the cloud device performs face recognition on the user based on the video stream data of the user. Specifically, the client device may take the first time point within the preset duration as the current time point; replace the screen brightness information of the client device with the screen brightness information of the cloud device at the current time point; then collect video stream data of the user based on the screen brightness information of the cloud device; and take the next time point as the current time point and repeat the operation of replacing the screen brightness information of the client device with the screen brightness information of the cloud device at the current time point, until the current time point is the last time point within the preset duration. In addition, the client device can replace the resolution information of its own camera with the resolution information of the camera of the cloud device, and collect the video stream data of the user based on the resolution information of the camera of the cloud device.
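The following sketch illustrates the brightness-substitution loop just described; set_local_brightness() and capture_frames() stand in for platform APIs and, like the example values, are assumptions rather than details from the disclosure.

```python
# Sketch of the substitution loop described above: at each time point the
# client overrides its own screen brightness with the value received from the
# cloud device, captures frames under that brightness, and moves to the next
# time point.
from typing import Callable, Dict, List

def capture_with_cloud_brightness(
    cloud_brightness_by_point: Dict[float, int],
    set_local_brightness: Callable[[int], None],
    capture_frames: Callable[[], List[bytes]],
) -> List[bytes]:
    frames: List[bytes] = []
    # Iterate from the first time point of the preset duration to the last.
    for point in sorted(cloud_brightness_by_point):
        set_local_brightness(cloud_brightness_by_point[point])  # replace local brightness
        frames.extend(capture_frames())                          # collect video under it
    return frames

# Toy usage with printing stubs in place of real device APIs.
frames = capture_with_cloud_brightness(
    cloud_brightness_by_point={0: 180, 1: 200, 2: 220},
    set_local_brightness=lambda b: print(f"brightness set to {b}"),
    capture_frames=lambda: [b"frame"],
)
print(len(frames))   # 3 in this toy run
```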
The face recognition method provided by the embodiment of the application first receives a request to start the face recognition function sent by a user through a client device; then sends, to the client device according to the identifier of the current application program, device information with which the cloud device implements the face recognition function; then receives video stream data of the user collected by the client device based on the device information; and finally performs face recognition on the user based on the video stream data of the user. That is to say, the present application sends the device information with which the cloud device implements the face recognition function to the client device, so that the client device collects the user's video stream data based on the device information of the cloud device; because the device information of the cloud device and the client device is consistent, the video is not stretched or otherwise deformed. In the existing face recognition method, each user's mobile phone collects video based on its own device information; because the device information of different users' mobile phones differs, the video stream sent to the cloud device may be stretched or otherwise deformed, so the accuracy of face recognition is very low. The technical solution provided by the application can therefore greatly improve the success rate of face recognition; moreover, the technical solution of the embodiment of the application is simple and convenient to implement, easy to popularize and widely applicable.
Example five
Fig. 5 is a schematic structural diagram of a cloud device provided in an embodiment of the present application. As shown in fig. 5, the cloud device 500 includes: a first receiving module 501, a first sending module 502 and a face recognition module 503; wherein:
the first receiving module 501 is configured to receive a request to start a face recognition function sent by a user through a client device, where the request to start the face recognition function carries the identifier of the current application program;
the first sending module 502 is configured to send, to the client device according to the identifier of the current application program, device information with which the cloud device implements the face recognition function;
the first receiving module 501 is further configured to receive video stream data of the user, which is acquired by the client device based on the device information;
the face recognition module 503 is configured to perform face recognition on the user based on the video stream data of the user.
Further, the first sending module 502 is specifically configured to send at least one piece of screen brightness information of the cloud device to the client device within a preset duration; or to send resolution information of a camera of the cloud device to the client device when the request to start the face recognition function is received.
Further, the first sending module 502 is specifically configured to take the first time point within the preset duration as the current time point; send, at the current time point, the screen brightness information of the cloud device at the current time point to the client device; and take the next time point as the current time point and repeat the operation of sending, at the current time point, the screen brightness information of the cloud device at the current time point to the client device, until the current time point is the last time point within the preset duration.
The cloud device can execute the methods provided by the first embodiment and the second embodiment of the application, and has the corresponding functional modules and beneficial effects of the execution method. For details of the technology that are not described in detail in this embodiment, reference may be made to the face recognition methods provided in the first and second embodiments of the present application.
Example six
Fig. 6 is a schematic structural diagram of a client device according to an embodiment of the present application. As shown in fig. 6, the client device 600 includes: a second receiving module 601, a second sending module 602 and a video capture module 603; wherein:
the second receiving module 601 is configured to receive a face recognition instruction sent by a user through a current application program;
the second sending module 602 is configured to send, to a cloud device, a request for starting a face recognition function in response to the face recognition instruction; wherein, the request for starting the face recognition function carries the identifier of the current application program;
the second receiving module 601 is further configured to receive device information, which is sent by the cloud device and is possessed by the cloud device for implementing the face recognition function;
the video acquisition module is used for acquiring video stream data of the user based on the equipment information; and sending the video stream data of the user to the cloud end equipment, so that the cloud end equipment carries out face recognition on the user based on the video stream data of the user.
Further, the second receiving module 601 is specifically configured to receive at least one screen brightness information of the cloud device sent by the cloud device within a preset time period; or receiving resolution information of a camera of the cloud device, which is sent by the cloud device when the cloud device receives the request for starting the face recognition function.
Further, the second receiving module 601 is specifically configured to take the first time point within the preset duration as the current time point; receive, at the current time point, the screen brightness information of the cloud device at the current time point sent by the cloud device; and take the next time point as the current time point and repeat the operation of receiving, at the current time point, the screen brightness information of the cloud device at the current time point sent by the cloud device, until the current time point is the last time point within the preset duration.
Further, the video capture module 603 is specifically configured to take the first time point within the preset duration as the current time point; replace the screen brightness information of the client device with the screen brightness information of the cloud device at the current time point; collect video stream data of the user based on the screen brightness information of the cloud device; and take the next time point as the current time point and repeat the operation of replacing the screen brightness information of the client device with the screen brightness information of the cloud device at the current time point, until the current time point is the last time point within the preset duration.
Further, the video capture module 603 is specifically configured to replace the resolution information of the camera of the client device with the resolution information of the camera of the cloud device, and to collect video stream data of the user based on the resolution information of the camera of the cloud device.
The client device can execute the methods provided by the third embodiment and the fourth embodiment of the application, and has the corresponding functional modules and beneficial effects of the execution methods. For details of the technology that are not described in detail in this embodiment, reference may be made to the face recognition methods provided in the third and fourth embodiments of the present application.
In the technical solution of the present disclosure, the collection, storage, use and other processing of the personal information of the users involved all comply with the relevant laws and regulations and do not violate public order and good morals.
Example seven
The present disclosure also provides an electronic device, a readable storage medium, and a computer program product according to embodiments of the present disclosure.
FIG. 7 illustrates a schematic block diagram of an example electronic device 700 that can be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 7, the device 700 comprises a computing unit 701, which may perform various suitable actions and processes according to a computer program stored in a Read Only Memory (ROM)702 or a computer program loaded from a storage unit 708 into a Random Access Memory (RAM) 703. In the RAM 703, various programs and data required for the operation of the device 700 can also be stored. The computing unit 701, the ROM 702, and the RAM 703 are connected to each other by a bus 704. An input/output (I/O) interface 705 is also connected to bus 704.
Various components in the device 700 are connected to the I/O interface 705, including: an input unit 706 such as a keyboard, a mouse, or the like; an output unit 707 such as various types of displays, speakers, and the like; a storage unit 708 such as a magnetic disk, optical disk, or the like; and a communication unit 709 such as a network card, modem, wireless communication transceiver, etc. The communication unit 709 allows the device 700 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunication networks.
Computing unit 701 may be a variety of general purpose and/or special purpose processing components with processing and computing capabilities. Some examples of the computing unit 701 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, and so forth. The calculation unit 701 executes the respective methods and processes described above, such as the face recognition method. For example, in some embodiments, the face recognition method may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as storage unit 708. In some embodiments, part or all of a computer program may be loaded onto and/or installed onto device 700 via ROM 702 and/or communications unit 709. When the computer program is loaded into the RAM 703 and executed by the computing unit 701, one or more steps of the face recognition method described above may be performed. Alternatively, in other embodiments, the computing unit 701 may be configured to perform the face recognition method in any other suitable way (e.g. by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server combined with a blockchain.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be executed in parallel, sequentially, or in different orders, as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved, and the present disclosure is not limited herein.
The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present disclosure should be included in the scope of protection of the present disclosure.

Claims (19)

1. A face recognition method, applied to a cloud device, the method comprising:
receiving a request to start a face recognition function sent by a user through a client device; wherein the request to start the face recognition function carries the identifier of the current application program;
sending, to the client device according to the identifier of the current application program, device information with which the cloud device implements the face recognition function;
receiving video stream data of the user acquired by the client device based on the device information;
and carrying out face recognition on the user based on the video stream data of the user.
2. The method of claim 1, wherein the sending, to the client device, device information with which the cloud device implements the face recognition function comprises:
sending at least one screen brightness information of the cloud device to the client device within a preset time length; or sending resolution information of a camera of the cloud device to the client device when the request for starting the face recognition function is received.
3. The method of claim 2, wherein the sending, to the client device, at least one piece of screen brightness information of the cloud device within the preset duration comprises:
taking a first time point within the preset duration as a current time point;
sending, to the client device at the current time point, screen brightness information of the cloud device at the current time point;
and taking a next time point after the current time point as the current time point, and repeatedly performing the operation of sending, to the client device at the current time point, the screen brightness information of the cloud device at the current time point, until the current time point is a last time point within the preset duration.
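For readability only, the following is a minimal Python sketch of the cloud-side flow of claims 1-3. All names (CloudFaceService, transport, the request and message fields) are hypothetical illustrations and are not terms defined by this publication; the message format and timing mechanism are assumed.

```python
# Illustrative sketch of the cloud-side flow of claims 1-3 (hypothetical names).
import time


class CloudFaceService:
    def __init__(self, transport, brightness_points, interval_s=0.1):
        self.transport = transport                  # assumed send/receive channel to the client device
        self.brightness_points = brightness_points  # screen brightness, one value per time point
        self.interval_s = interval_s                # assumed spacing between time points in the preset duration

    def handle_start_request(self, request):
        # The request carries the identifier of the current application program.
        app_id = request["app_id"]
        # Send device information: screen brightness at each time point, repeating
        # until the last time point within the preset duration (claim 3).
        for brightness in self.brightness_points:
            self.transport.send({"app_id": app_id, "screen_brightness": brightness})
            time.sleep(self.interval_s)

    def handle_video_stream(self, video_frames):
        # Receive video stream data collected by the client based on the device
        # information, then perform face recognition on it (claim 1).
        return self.recognize(video_frames)

    def recognize(self, video_frames):
        # Placeholder for the actual face recognition step.
        raise NotImplementedError
```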
4. A face recognition method, applied to a client device, the method comprising:
receiving a face recognition instruction sent by a user through a current application program;
sending, in response to the face recognition instruction, a request for starting a face recognition function to a cloud device; wherein the request for starting the face recognition function carries an identifier of the current application program;
receiving device information sent by the cloud device and used by the cloud device to implement the face recognition function;
collecting video stream data of the user based on the device information; and sending the video stream data of the user to the cloud device, so that the cloud device performs face recognition on the user based on the video stream data of the user.
5. The method of claim 4, wherein the receiving the device information sent by the cloud device and used by the cloud device to implement the face recognition function comprises:
receiving at least one piece of screen brightness information of the cloud device sent by the cloud device within a preset duration; or receiving resolution information of a camera of the cloud device sent by the cloud device when the cloud device receives the request for starting the face recognition function.
6. The method of claim 5, wherein the receiving at least one piece of screen brightness information of the cloud device sent by the cloud device within the preset duration comprises:
taking a first time point within the preset duration as a current time point;
receiving, at the current time point, screen brightness information of the cloud device at the current time point sent by the cloud device;
and taking a next time point after the current time point as the current time point, and repeatedly performing the operation of receiving, at the current time point, the screen brightness information of the cloud device at the current time point sent by the cloud device, until the current time point is a last time point within the preset duration.
7. The method of claim 5, wherein the collecting video stream data of the user based on the device information comprises:
taking a first time point within the preset duration as a current time point;
replacing screen brightness information of the client device with the screen brightness information of the cloud device at the current time point; and collecting video stream data of the user based on the screen brightness information of the cloud device;
and taking a next time point after the current time point as the current time point, and repeatedly performing the operation of replacing the screen brightness information of the client device with the screen brightness information of the cloud device at the current time point, until the current time point is a last time point within the preset duration.
8. The method of claim 5, wherein the collecting video stream data of the user based on the device information comprises:
replacing resolution information of a camera of the client device with the resolution information of the camera of the cloud device; and collecting video stream data of the user based on the resolution information of the camera of the cloud device.
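The client-side capture step of claims 7 and 8 can be sketched as follows; the display and camera calls (set_screen_brightness, set_camera_resolution, capture_frame) are hypothetical stand-ins for platform-specific APIs and are not defined by this publication.

```python
# Illustrative sketch of the client-side capture step of claims 7 and 8
# (hypothetical display/camera objects and method names).
def collect_video_stream(cloud_device_info, display, camera, frames_per_point=1):
    frames = []
    # Claim 7: at each time point, replace the client device's screen brightness
    # with the cloud device's brightness for that time point, then capture video,
    # until the last time point within the preset duration.
    for brightness in cloud_device_info.get("screen_brightness_points", []):
        display.set_screen_brightness(brightness)
        for _ in range(frames_per_point):
            frames.append(camera.capture_frame())
    # Claim 8 (alternative): replace the client camera's resolution with the
    # cloud device's camera resolution before capturing.
    resolution = cloud_device_info.get("camera_resolution")
    if resolution is not None:
        camera.set_camera_resolution(resolution)
        frames.append(camera.capture_frame())
    return frames  # video stream data to be sent to the cloud device for recognition
```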
9. A cloud device, comprising: a first receiving module, a first sending module, and a face recognition module; wherein:
the first receiving module is configured to receive a request for starting a face recognition function sent by a user through a client device; wherein the request for starting the face recognition function carries an identifier of a current application program;
the first sending module is configured to send, to the client device according to the identifier of the current application program, device information used by the cloud device to implement the face recognition function;
the first receiving module is further configured to receive video stream data of the user collected by the client device based on the device information;
and the face recognition module is configured to perform face recognition on the user based on the video stream data of the user.
10. The cloud device of claim 9, wherein the first sending module is specifically configured to send, to the client device, at least one piece of screen brightness information of the cloud device within a preset duration; or send, to the client device, resolution information of a camera of the cloud device when the request for starting the face recognition function is received.
11. The cloud device of claim 10, wherein the first sending module is specifically configured to take a first time point within the preset duration as a current time point; send, to the client device at the current time point, screen brightness information of the cloud device at the current time point; and take a next time point after the current time point as the current time point, and repeatedly perform the operation of sending, to the client device at the current time point, the screen brightness information of the cloud device at the current time point, until the current time point is a last time point within the preset duration.
12. A client device, comprising: a second receiving module, a second sending module, and a video capture module; wherein:
the second receiving module is configured to receive a face recognition instruction sent by a user through a current application program;
the second sending module is configured to send, in response to the face recognition instruction, a request for starting a face recognition function to a cloud device; wherein the request for starting the face recognition function carries an identifier of the current application program;
the second receiving module is further configured to receive device information sent by the cloud device and used by the cloud device to implement the face recognition function;
and the video capture module is configured to collect video stream data of the user based on the device information, and send the video stream data of the user to the cloud device, so that the cloud device performs face recognition on the user based on the video stream data of the user.
13. The client device of claim 12, wherein the second receiving module is specifically configured to receive at least one piece of screen brightness information of the cloud device sent by the cloud device within a preset duration; or receive resolution information of a camera of the cloud device sent by the cloud device when the cloud device receives the request for starting the face recognition function.
14. The client device of claim 13, wherein the second receiving module is specifically configured to take a first time point within the preset duration as a current time point; receive, at the current time point, screen brightness information of the cloud device at the current time point sent by the cloud device; and take a next time point after the current time point as the current time point, and repeatedly perform the operation of receiving, at the current time point, the screen brightness information of the cloud device at the current time point sent by the cloud device, until the current time point is a last time point within the preset duration.
15. The client device of claim 13, wherein the video capture module is specifically configured to take a first time point within the preset duration as a current time point; replace screen brightness information of the client device with the screen brightness information of the cloud device at the current time point; collect video stream data of the user based on the screen brightness information of the cloud device; and take a next time point after the current time point as the current time point, and repeatedly perform the operation of replacing the screen brightness information of the client device with the screen brightness information of the cloud device at the current time point, until the current time point is a last time point within the preset duration.
16. The client device of claim 13, wherein the video capture module is specifically configured to replace resolution information of a camera of the client device with the resolution information of the camera of the cloud device, and collect video stream data of the user based on the resolution information of the camera of the cloud device.
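As a reading aid only, the module decomposition of claims 9-16 might be organized roughly as below; the class and attribute names are hypothetical and simply mirror the module names used in the claims, without implying any particular implementation.

```python
# Structural sketch of the module decomposition in claims 9-16 (hypothetical names).
class CloudDevice:
    def __init__(self, first_receiving_module, first_sending_module, face_recognition_module):
        self.first_receiving_module = first_receiving_module      # receives the start request and the video stream
        self.first_sending_module = first_sending_module          # sends device information to the client device
        self.face_recognition_module = face_recognition_module    # performs face recognition on the video stream


class ClientDevice:
    def __init__(self, second_receiving_module, second_sending_module, video_capture_module):
        self.second_receiving_module = second_receiving_module    # receives the user's instruction and the device information
        self.second_sending_module = second_sending_module        # sends the start request to the cloud device
        self.video_capture_module = video_capture_module          # collects video stream data based on the device information
```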
17. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-3 or 4-8.
18. A non-transitory computer-readable storage medium having stored thereon computer instructions for causing a computer to perform the method of any one of claims 1-3 or 4-8.
19. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any one of claims 1-3 or 4-8.
CN202111433098.6A 2021-11-29 2021-11-29 Face recognition method, cloud device, client device, electronic device and medium Pending CN114173158A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111433098.6A CN114173158A (en) 2021-11-29 2021-11-29 Face recognition method, cloud device, client device, electronic device and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111433098.6A CN114173158A (en) 2021-11-29 2021-11-29 Face recognition method, cloud device, client device, electronic device and medium

Publications (1)

Publication Number Publication Date
CN114173158A (en) 2022-03-11

Family

ID=80481551

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111433098.6A Pending CN114173158A (en) 2021-11-29 2021-11-29 Face recognition method, cloud device, client device, electronic device and medium

Country Status (1)

Country Link
CN (1) CN114173158A (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150373084A1 (en) * 2014-05-30 2015-12-24 Apple Inc. Forwarding activity-related information from source electronic devices to companion electronic devices
CN107682556A (en) * 2017-09-30 2018-02-09 广东欧珀移动通信有限公司 Information displaying method and equipment
WO2019100756A1 (en) * 2017-11-23 2019-05-31 乐蜜有限公司 Image acquisition method and apparatus, and electronic device
CN108965728A (en) * 2018-07-06 2018-12-07 维沃移动通信有限公司 A kind of brightness adjusting method and terminal
WO2021023055A1 (en) * 2019-08-06 2021-02-11 华为技术有限公司 Video call method
CN112044055A (en) * 2020-08-31 2020-12-08 北京爱奇艺科技有限公司 Image data acquisition method, system, device, electronic equipment and storage medium
CN112492203A (en) * 2020-11-26 2021-03-12 北京指掌易科技有限公司 Virtual photographing method, device, equipment and storage medium
CN112839175A (en) * 2021-01-12 2021-05-25 无锡鲲鲸云科技有限公司 Method and device for acquiring image by cloud mobile phone, computer equipment and medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
许立荡; 黄原有; 李沅霞; 陈堂铰; 张杨志: "人脸识别技术在安防中的应用" (Application of Face Recognition Technology in Security), 电子世界 (Electronics World), no. 13, 15 July 2020 (2020-07-15) *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115150375A (en) * 2022-06-23 2022-10-04 浙江惠瀜网络科技有限公司 Video stream data acquisition method and device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
CN112633384B (en) Object recognition method and device based on image recognition model and electronic equipment
CN111507914B (en) Training method, repairing method, device, equipment and medium for face repairing model
CN111160202B (en) Identity verification method, device, equipment and storage medium based on AR equipment
CN113378770B (en) Gesture recognition method, device, equipment and storage medium
CN113343826A (en) Training method of human face living body detection model, human face living body detection method and device
CN113657395B (en) Text recognition method, training method and device for visual feature extraction model
CN111783619B (en) Human body attribute identification method, device, equipment and storage medium
CN113221771A (en) Living body face recognition method, living body face recognition device, living body face recognition equipment, storage medium and program product
CN112561879A (en) Ambiguity evaluation model training method, image ambiguity evaluation method and device
CN112528858A (en) Training method, device, equipment, medium and product of human body posture estimation model
CN111862031A (en) Face synthetic image detection method and device, electronic equipment and storage medium
CN114173158A (en) Face recognition method, cloud device, client device, electronic device and medium
CN113628239A (en) Display optimization method, related device and computer program product
CN112529018A (en) Training method and device for local features of image and storage medium
CN115937039A (en) Data expansion method and device, electronic equipment and readable storage medium
CN116052288A (en) Living body detection model training method, living body detection device and electronic equipment
CN111967299B (en) Unmanned aerial vehicle inspection method, unmanned aerial vehicle inspection device, unmanned aerial vehicle inspection equipment and storage medium
CN115019057A (en) Image feature extraction model determining method and device and image identification method and device
CN114119374A (en) Image processing method, device, equipment and storage medium
CN115641607A (en) Method, device, equipment and storage medium for detecting wearing behavior of power construction site operator
CN114842541A (en) Model training and face recognition method, device, equipment and storage medium
CN111862030B (en) Face synthetic image detection method and device, electronic equipment and storage medium
CN113988294A (en) Method for training prediction network, image processing method and device
CN114219744B (en) Image generation method, device, equipment and storage medium
CN115496916B (en) Training method of image recognition model, image recognition method and related device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination