WO2021083069A1 - Method and device for training a face-changing model - Google Patents

Method and device for training a face-changing model

Info

Publication number
WO2021083069A1
Authority
WO
WIPO (PCT)
Prior art keywords
face
sample set
model
template
training
Prior art date
Application number
PCT/CN2020/123582
Other languages
English (en)
Chinese (zh)
Inventor
徐伟
罗琨
陈晓磊
Original Assignee
上海掌门科技有限公司
Priority date
Filing date
Publication date
Application filed by 上海掌门科技有限公司
Publication of WO2021083069A1


Classifications

    • G06V 40/168 Feature extraction; Face representation
    • G06N 3/045 Combinations of networks
    • G06N 3/08 Learning methods
    • G06T 3/04 Context-preserving transformations, e.g. by using an importance map
    • G06V 10/751 Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • G06V 40/172 Classification, e.g. identification
    • G06T 2207/10004 Still image; Photographic image
    • G06T 2207/20081 Training; Learning
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G06T 2207/30201 Face

Definitions

  • the embodiments of the present application relate to the field of computer technology, in particular to a method and device for training a face-changing model.
  • GAN: Generative Adversarial Networks.
  • the embodiment of the application proposes a method and device for training a face-changing model.
  • an embodiment of the present application provides a method for training a face-changing model, including: receiving a face-changing model training request sent by a user, where the face-changing model training request includes a face sample set before the face change provided by the user and a designated template face identifier; determining, from a pre-training model set corresponding to the template face identifier, a pre-training model that matches the face sample set before the face change, where the pre-training model set includes models pre-trained based on a target face sample set group and a template face sample set group corresponding to the template face identifier; determining, from the template face sample set group, a template face sample set that matches the face sample set before the face change; and using a machine learning method to train the determined pre-training model based on the face sample set before the face change and the determined template face sample set to obtain the face-changing model.
  • determining, from the pre-training model set corresponding to the template face identifier, the pre-training model that matches the face sample set before the face change includes: if there is a pre-training model corresponding to the template face identifier in the user's historical face-changing record, determining the pre-training model corresponding to the template face identifier as the pre-training model matching the face sample set before the face change.
  • determining, from the pre-training model set corresponding to the template face identifier, the pre-training model that matches the face sample set before the face change further includes: if there is no pre-training model corresponding to the template face identifier in the user's historical face-changing records, identifying the face attribute information of the face sample set before the face change, and determining the pre-training model from the pre-training model set based on the recognized face attribute information.
  • the face attribute information includes information in at least one of the following dimensions: gender, age group, race, facial accessories, and face shape.
  • identifying the face attribute information of the face sample set before the face change includes: inputting the face sample set before the face change into a pre-trained first classification model to obtain the gender, age group, race, and facial accessories of the face sample set before the face change, where the first classification model is a classification model based on a convolutional neural network.
  • identifying the face attribute information of the face sample set before the face change includes: extracting the face classification features of the face sample set before the face change; and inputting the extracted face classification features into a pre-trained second classification model to obtain the face shape of the face sample set before the face change, where the second classification model is a classification model based on a support vector machine.
  • extracting the face classification features of the face sample set before the face change includes: extracting the face feature point information of the face sample set before the face change; calculating the face measurement parameters of the face sample set before the face change based on the extracted face feature point information; and combining the extracted face feature point information and the calculated face measurement parameters into the face classification features of the face sample set before the face change.
  • determining a pre-training model from the pre-training model set based on the recognized face attribute information includes: determining, from the pre-training model set, a pre-training model subset matching the recognized face attribute information; calculating the similarity between the face sample set before the face change and the target face sample set corresponding to each pre-training model in the pre-training model subset; and determining the pre-training model from the pre-training model subset based on the calculated similarity.
  • calculating the similarity between the face sample set before the face change and the target face sample set corresponding to a pre-training model in the pre-training model subset includes: extracting the average face feature vector of the face sample set before the face change; and calculating the cosine similarity between the extracted average face feature vector and the average face feature vector of the target face sample set corresponding to the pre-training model in the pre-training model subset.
  • determining, from the template face sample set group, the template face sample set that matches the face sample set before the face change includes: extracting the face richness features of the face sample set before the face change; calculating the matching degree between the extracted face richness features and the face richness features of the template face sample sets in the template face sample set group; and determining the template face sample set from the template face sample set group based on the calculated matching degree.
  • extracting the face richness features of the face sample set before the face change includes: extracting the face feature information of the face sample set before the face change; and performing histogram statistics on the face feature information to obtain the face richness features of the face sample set before the face change.
  • the facial feature information includes information in at least one of the following dimensions: facial feature points, facial angles, and facial expressions.
  • calculating the matching degree between the extracted face richness features and the face richness features of the template face sample sets in the template face sample set group includes: using a histogram matching method to calculate the matching degree between the extracted face richness features and the face richness features of the template face sample sets in the template face sample set group.
  • determining the template face sample set from the template face sample set group based on the calculated matching degree includes: if there is a template face sample set in the template face sample set group whose matching degree is greater than a preset matching degree threshold, selecting the template face sample set with the highest matching degree from the template face sample set group; and if there is no template face sample set in the template face sample set group whose matching degree is greater than the preset matching degree threshold, selecting a universal template face sample set from the template face sample set group.
  • the pre-training model set is trained by the following steps: acquiring multiple target face samples; dividing the multiple target face samples into a target face sample set group according to face attributes, where the face attributes of the target face samples in the same target face sample set are similar; and for each target face sample set in the target face sample set group, training a generative adversarial network based on the target face sample set and the template face sample set matching the target face sample set to obtain a pre-training model.
  • the pre-training model includes a generative model and a discriminant model; and using a machine learning method to train the determined pre-training model based on the face sample set before the face change and the determined template face sample set to obtain the face-changing model includes: inputting the face sample set before the face change into the generative model of the determined pre-training model to obtain a face sample set after the face change; inputting the face sample set after the face change and the determined template face sample set into the discriminant model of the determined pre-training model to obtain a discrimination result, where the discrimination result is used to represent the probability that the face sample set after the face change and the determined template face sample set are the real sample set; and adjusting the parameters of the generative model and the discriminant model of the determined pre-training model based on the discrimination result.
  • adjusting the parameters of the generative model and the discriminant model of the determined pre-training model based on the discrimination result includes: determining whether the discrimination result meets a constraint condition; if the discrimination result does not satisfy the constraint condition, adjusting the parameters of the generative model and the discriminant model of the determined pre-training model based on the discrimination result, and training the determined pre-training model again based on the face sample set before the face change and the determined template face sample set; and if the discrimination result satisfies the constraint condition, determining that the face-changing model training is completed, and sending the face sample set last output by the generative model of the determined pre-training model to the user.
  • an embodiment of the present application provides an apparatus for training a face-changing model, including: a receiving unit configured to receive a face-changing model training request sent by a user, where the face-changing model training request includes a face sample set before the face change provided by the user and a designated template face identifier; a first determining unit configured to determine, from a pre-training model set corresponding to the template face identifier, a pre-training model matching the face sample set before the face change, where the pre-training model set includes models pre-trained based on a target face sample set group and a template face sample set group corresponding to the template face identifier; a second determining unit configured to determine, from the template face sample set group, a template face sample set that matches the face sample set before the face change; and a training unit configured to use a machine learning method to train the determined pre-training model based on the face sample set before the face change and the determined template face sample set to obtain the face-changing model.
  • the first determining unit includes: a first determining subunit configured to, if there is a pre-training model corresponding to the template face identifier in the user's historical face-changing record, determine the pre-training model corresponding to the template face identifier as the pre-training model matching the face sample set before the face change.
  • the first determining unit further includes: a recognition subunit configured to recognize the face attribute information of the face sample set before the face change if there is no pre-training model corresponding to the template face identifier in the user's historical face-changing record; and a second determining subunit configured to determine the pre-training model from the pre-training model set based on the recognized face attribute information.
  • the face attribute information includes information in at least one of the following dimensions: gender, age group, race, facial accessories, and face shape.
  • the recognition subunit includes: a first classification module configured to input the face sample set before the face change into a pre-trained first classification model to obtain the gender, age group, race, and facial accessories of the face sample set before the face change, where the first classification model is a classification model based on a convolutional neural network.
  • the recognition subunit includes: an extraction module configured to extract the face classification features of the face sample set before the face change; and a second classification module configured to input the extracted face classification features into a pre-trained second classification model to obtain the face shape of the face sample set before the face change, where the second classification model is a classification model based on a support vector machine.
  • the extraction module is further configured to: extract the face feature point information of the face sample set before the face change; calculate the face measurement parameters of the face sample set before the face change based on the extracted face feature point information; and combine the extracted face feature point information and the calculated face measurement parameters into the face classification features of the face sample set before the face change.
  • the second determining subunit includes: a first determining module configured to determine, from the pre-training model set, a pre-training model subset matching the recognized face attribute information; a calculation module configured to calculate the similarity between the face sample set before the face change and the target face sample set corresponding to each pre-training model in the pre-training model subset; and a second determining module configured to determine the pre-training model from the pre-training model subset based on the calculated similarity.
  • the calculation module is further configured to: extract the average face feature vector of the face sample set before the face change; and calculate the cosine similarity between the extracted average face feature vector and the average face feature vector of the target face sample set corresponding to each pre-training model in the pre-training model subset.
  • the second determining unit includes: an extraction subunit configured to extract the face richness features of the face sample set before the face change; a calculation subunit configured to calculate the matching degree between the extracted face richness features and the face richness features of the template face sample sets in the template face sample set group; and a third determining subunit configured to determine the template face sample set from the template face sample set group based on the calculated matching degree.
  • the extraction subunit is further configured to: extract the face feature information of the face sample set before the face change; and perform histogram statistics on the face feature information to obtain the face richness features of the face sample set before the face change.
  • the facial feature information includes information in at least one of the following dimensions: facial feature points, facial angles, and facial expressions.
  • the calculation subunit is further configured to: use a histogram matching method to calculate the matching degree between the extracted face richness features and the face richness features of the template face sample sets in the template face sample set group.
  • the third determining subunit is further configured to: if there is a template face sample set in the template face sample set group whose matching degree is greater than a preset matching degree threshold, select the template face sample set with the highest matching degree from the template face sample set group; and if there is no template face sample set whose matching degree is greater than the preset matching degree threshold, select a universal template face sample set from the template face sample set group.
  • the pre-training model set is trained by the following steps: acquiring multiple target face samples; dividing the multiple target face samples into a target face sample set group according to face attributes, where the face attributes of the target face samples in the same target face sample set are similar; and for each target face sample set in the target face sample set group, training a generative adversarial network based on the target face sample set and the template face sample set matching the target face sample set to obtain a pre-training model.
  • the pre-training model includes a generative model and a discriminant model, and the training unit includes: a generating subunit configured to input the face sample set before the face change into the generative model of the determined pre-training model to obtain a face sample set after the face change; a discriminant subunit configured to input the face sample set after the face change and the determined template face sample set into the discriminant model of the determined pre-training model to obtain a discrimination result, where the discrimination result is used to represent the probability that the face sample set after the face change and the determined template face sample set are the real sample set; and an adjustment subunit configured to adjust the parameters of the generative model and the discriminant model of the determined pre-training model based on the discrimination result.
  • the adjustment subunit is further configured to: determine whether the discrimination result meets a constraint condition; if the discrimination result does not satisfy the constraint condition, adjust the parameters of the generative model and the discriminant model of the determined pre-training model based on the discrimination result, and train the determined pre-training model again based on the face sample set before the face change and the determined template face sample set; and if the discrimination result satisfies the constraint condition, determine that the face-changing model training is completed, and send the face sample set last output by the generative model of the determined pre-training model to the user.
  • the embodiments of the present application provide a computer device, which includes: one or more processors; and a storage device on which one or more programs are stored; when the one or more programs are executed by the one or more processors, the one or more processors implement the method described in any implementation manner of the first aspect.
  • an embodiment of the present application provides a computer-readable medium on which a computer program is stored, and when the computer program is executed by a processor, the method as described in any implementation manner in the first aspect is implemented.
  • the method and device for training a face-changing model provided by the embodiments of the application first receive a face-changing model training request sent by a user; then determine, from the pre-training model set corresponding to the template face identifier in the face-changing model training request, the pre-training model that matches the face sample set before the face change in the request; next determine, from the template face sample set group corresponding to the template face identifier, the template face sample set that matches the face sample set before the face change in the request; and finally use a machine learning method to train the determined pre-training model based on the face sample set before the face change and the determined template face sample set to obtain the face-changing model.
  • using a pre-training model to train the face-changing model avoids training from scratch, reduces the time consumed by face-changing model training, and improves the training efficiency of the face-changing model.
  • this plays a positive role in the practical application and user experience of deep face-changing technology.
  • Fig. 1 is an exemplary system architecture in which some embodiments of the present application can be applied;
  • Fig. 2 is a flowchart of an embodiment of a method for training a face-changing model according to the present application
  • Fig. 3 is a flowchart of another embodiment of the method for training a face-changing model according to the present application
  • Fig. 4 is a schematic structural diagram of a computer system suitable for implementing computer equipment of some embodiments of the present application.
  • Fig. 1 shows an exemplary system architecture 100 to which an embodiment of the method for training a face-changing model of the present application can be applied.
  • the system architecture 100 may include devices 101 and 102 and a network 103.
  • the network 103 is a medium used to provide a communication link between the devices 101 and 102.
  • the network 103 may include various connection types, such as wired or wireless communication links, or fiber-optic cables, and so on.
  • the devices 101 and 102 may be hardware devices or software that support network connections to provide various network services.
  • the device can be various electronic devices, including but not limited to smart phones, tablet computers, laptop computers, desktop computers, servers, and so on.
  • When implemented as hardware, a device can be a distributed device group composed of multiple devices, or a single device.
  • When implemented as software, a device can be installed in the electronic devices listed above; it can be implemented as multiple pieces of software or software modules for providing distributed services, or as a single piece of software or software module. There is no specific limitation here.
  • A device can provide corresponding network services by installing a corresponding client application or server application.
  • After installing a client application, a device can act as a client in network communication; after installing a server application, it can act as a server in network communication.
  • the device 101 is embodied as a client, and the device 102 is embodied as a server.
  • the device 101 may be a client with image processing software installed, and the device 102 may be a server of the image processing software.
  • the method for training a face-changing model provided in the embodiment of the present application may be executed by the device 102.
  • FIG. 1 the number of networks and devices in FIG. 1 is merely illustrative. According to implementation needs, there can be any number of networks and devices.
  • FIG. 2 it shows a process 200 of an embodiment of the method for training a face-changing model according to the present application.
  • the method for training a face-changing model may include the following steps:
  • Step 201 Receive a face-changing model training request sent by a user.
  • the execution subject of the method for training a face-changing model may receive a face-changing model training request sent by a user.
  • the face-changing model training request may include the face sample set before the face-changing provided by the user and the designated template face identifier.
  • the face sample set before the face change may be a sample set in which the user wants to replace the face.
  • the face sample set before the face change may be one or more face images before the face change, or it may be a multi-frame video frame of the face video before the face change.
  • the template face may be the face that the user wants to replace.
  • the template face identifier can be composed of letters, numbers, symbols, etc., and uniquely identifies the template face.
  • image processing software may be installed on the user's terminal device (for example, the device 101 shown in FIG. 1).
  • the user can open the image processing software and enter the main page. Edit buttons can be set on the main page.
  • When the user clicks the edit button, the locally stored image list and/or video list can be displayed for the user to select.
  • If the user selects one or more images from the image list, the selected images can be determined as the face sample set before the face change provided by the user.
  • If the user selects a video from the video list, the multi-frame video frames of the selected video can be determined as the face sample set before the face change provided by the user.
  • the user will enter the image processing page.
  • the face sample set before the face change can be displayed on the image processing page.
  • a face-changing button can also be set on the image processing page. When the user clicks the face-changing button, a list of template faces that can be replaced can be displayed. When the user selects a template face from the template face list, the template face selected by the user can be determined as the user-specified template face, and its identifier is the user-specified template face identifier.
  • the terminal device can send a face-changing model training request including the face sample set before the face-changing provided by the user and the designated template face identifier to the above-mentioned execution subject.
  • Step 202 Determine a pre-training model matching the set of face samples before the face change from the pre-training model set corresponding to the template face identifier.
  • the above-mentioned execution subject may determine a pre-training model matching the set of face samples before the face change from the set of pre-training models corresponding to the template face identifier designated by the user. For example, the above-mentioned execution subject may randomly select a pre-training model from a set of pre-training models corresponding to a template face identifier designated by the user.
  • In some embodiments, if there is a pre-training model corresponding to the template face identifier in the user's historical face-changing record, the execution subject may determine the pre-training model corresponding to the template face identifier as the pre-training model matching the face sample set before the face change.
  • Generally, after a user completes a face change, a historical face-changing record is generated.
  • the historical face change record may record the template face identifier and the pre-training model identifier used during the historical face change process.
  • the above-mentioned execution subject may directly determine the pre-training model corresponding to the template face identifier as the pre-training model to be used this time.
  • a template face identifier corresponds to a pre-training model set.
  • the same pre-training model set can be used to train face-changing models of different face attribute information of the same template face.
  • the pre-trained model set of the same template face may include a pre-trained model based on the target face sample set group of the same target face and the template face sample set group of the same template face.
  • a pair of target face sample set and template face sample set can be used to train a pre-training model of the same face attribute information. It can be seen that the face attribute information of the target face samples in the same target face sample set is similar, and the face attributes of the template face samples in the sample set of the same template face are similar. In addition, the face attribute information of the target face sample set and the template face sample set used to train the same pre-training model are also similar.
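  • As an illustrative sketch only (not part of the original disclosure, all names hypothetical), the correspondence between template face identifiers, pre-training model sets, and historical face-changing records could be organized as follows:

```python
from dataclasses import dataclass

@dataclass
class PretrainedModel:
    model_id: str
    face_attributes: dict   # e.g. {"gender": "male", "age_group": "middle-aged", ...}
    weights_path: str       # location of the pre-trained GAN weights

# Hypothetical registry: one pre-training model set per template face identifier.
MODEL_REGISTRY: dict[str, list[PretrainedModel]] = {}

def lookup_from_history(user_history: list[dict], template_face_id: str):
    """Return the pre-training model recorded for this template face, if any.

    Each history record is assumed to hold the template face identifier and
    the identifier of the pre-training model used during that face change.
    """
    for record in user_history:
        if record.get("template_face_id") == template_face_id:
            wanted = record["pretrained_model_id"]
            for model in MODEL_REGISTRY.get(template_face_id, []):
                if model.model_id == wanted:
                    return model
    return None  # fall back to attribute-based matching (steps 302-303 below)
```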
  • face attribute information may include information of multiple dimensions.
  • face attribute information may include, but is not limited to, information of at least one of the following dimensions: gender (such as male or female), age group (such as young, middle-aged, or old), race (such as white, yellow, or black), facial accessories (such as whether facial accessories are worn), and face shape (such as round face, triangle face, oval face, or square face).
  • the pre-training model set is trained through the following steps:
  • the multiple target face samples may be a batch of target face samples of the same target face.
  • the multiple target face samples are divided into target face sample set groups.
  • the face attribute information of the target face samples in the same target face sample set is similar.
  • the target face sample whose face attribute information is ⁇ male, middle-aged, yellow, no glasses, round face ⁇ belongs to a target face sample set.
  • the target face sample whose face attribute information is ⁇ male, middle-aged, yellow, wearing glasses, round face ⁇ belongs to another target face sample set.
  • each target face sample set will be marked with a corresponding label to record the corresponding face attribute information.
  • the generative adversarial network is trained based on the target face sample set and the template face sample set matching the target face sample set to obtain a pre-training model.
  • the face attribute information of the template face samples in the same template face sample set is similar.
  • the face attribute information of the template face sample set matching the target face sample set is similar to the face attribute information of the target face sample set. For example, if the face attribute information of the target face sample set is ⁇ male, middle-aged, yellow, no glasses, round face ⁇ , then the face attributes of the template face sample set that matches the target face sample set The information has a high probability of ⁇ male, middle-aged, yellow race, no glasses, round face ⁇ .
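  • A minimal sketch of the attribute-based grouping described above, assuming each target face sample already carries recognized attribute labels (hypothetical field names):

```python
from collections import defaultdict

def group_by_attributes(samples):
    """Divide target face samples into sets with matching face attributes.

    Each sample is assumed to be a dict with an "attributes" entry such as
    {"gender": "male", "age_group": "middle-aged", "race": "yellow",
     "glasses": False, "face_shape": "round"}.
    """
    groups = defaultdict(list)
    for sample in samples:
        # Samples with identical attribute combinations land in the same set,
        # which is then labeled with that combination.
        key = tuple(sorted(sample["attributes"].items()))
        groups[key].append(sample)
    return groups

# Each resulting target face sample set would be paired with a template face
# sample set of similar attributes and used to train one GAN, yielding one
# pre-training model of the set.
```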
  • Step 203 Determine a template face sample set matching the face sample set before the face change from the template face sample set group.
  • the above-mentioned execution subject may determine a template face sample set that matches the face sample set before the face change from the template face sample set group. For example, the execution subject can select, from the template face sample set group, a template face sample set whose face attribute information is similar to that of the face sample set before the face change, and determine it as the template face sample set matching the face sample set before the face change.
  • Step 204 Using a machine learning method, train the determined pre-training model based on the face sample set before the face change and the determined template face sample set to obtain the face change model.
  • the above-mentioned execution subject may use a machine learning method to train the determined pre-training model based on the face sample set before the face change and the determined template face sample set to obtain the face change model.
  • the above-mentioned execution subject may take the face sample set before the face change and the determined template face sample set as input, and obtain the corresponding output through the processing of the determined pre-training model. If the output does not satisfy the constraint condition, the parameters of the determined pre-training model are adjusted, and the face sample set before the face change and the determined template face sample set are input again to continue training. If the output satisfies the constraint condition, the model training is completed.
  • since the pre-training model is a trained generative adversarial network, the pre-training model may include a trained generative model and a trained discriminant model.
  • the generative model is mainly used to learn the distribution of real images to make the images generated by itself more realistic, so as to fool the discriminant model.
  • the discriminant model needs to judge the authenticity of the received image.
  • the generative model strives to make the generated image more realistic, while the discriminative model strives to identify the true and false of the image. This process is equivalent to a two-person game.
  • through constant confrontation between the generative model and the discriminant model, the two networks finally reach a dynamic equilibrium: the images generated by the generative model are close to the distribution of real images, and the discriminant model can no longer identify whether an image is real or fake.
  • the above-mentioned execution subject may train the face-changing model through the following steps:
  • the face sample set before the face change is input into the determined generation model of the pre-training model to obtain the face sample set after the face change.
  • the face sample set after the face change and the determined template face sample set are input into the determined discriminant model of the pre-training model, and the discriminant result is obtained.
  • the discrimination result can be used to characterize the probability that the face sample set after the face change and the determined template face sample set are the real sample set.
  • Finally, the parameters of the generative model and the discriminant model of the determined pre-training model are adjusted based on the discrimination result.
  • Generally, the above-mentioned execution subject will determine whether the discrimination result meets the constraint condition. If the discrimination result does not satisfy the constraint condition, the execution subject may adjust the parameters of the generative model and the discriminant model of the determined pre-training model based on the discrimination result, and then train the determined pre-training model again based on the face sample set before the face change and the determined template face sample set. If the discrimination result satisfies the constraint condition, the execution subject may determine that the face-changing model training is completed, and send the face sample set last output by the generative model of the determined pre-training model to the user. The face sample set last output by the generative model is the sample set in which the face before the face change has been replaced with the template face.
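  • As an illustrative sketch only (not the patent's specification), the adversarial fine-tuning loop described above could look roughly as follows in PyTorch; the network interfaces, the loss choice, and the stopping test are assumptions:

```python
import torch
import torch.nn.functional as F

def finetune_face_swap(generator, discriminator, pre_faces, template_faces,
                       num_steps=1000, lr=2e-4, target_prob=0.5):
    """Fine-tune a pre-trained GAN on the user's face samples (illustrative).

    The discriminator is assumed to output probabilities in [0, 1].
    """
    g_opt = torch.optim.Adam(generator.parameters(), lr=lr)
    d_opt = torch.optim.Adam(discriminator.parameters(), lr=lr)

    for _ in range(num_steps):
        swapped = generator(pre_faces)          # face samples after the change

        # Discriminant model: template faces are "real", generated are "fake".
        d_real = discriminator(template_faces)
        d_fake = discriminator(swapped.detach())
        d_loss = (F.binary_cross_entropy(d_real, torch.ones_like(d_real)) +
                  F.binary_cross_entropy(d_fake, torch.zeros_like(d_fake)))
        d_opt.zero_grad(); d_loss.backward(); d_opt.step()

        # Generative model: try to fool the discriminant model.
        d_fake = discriminator(swapped)
        g_loss = F.binary_cross_entropy(d_fake, torch.ones_like(d_fake))
        g_opt.zero_grad(); g_loss.backward(); g_opt.step()

        # Hypothetical constraint condition: stop once the discriminant model
        # can no longer tell generated faces from template faces.
        if abs(d_fake.mean().item() - target_prob) < 0.05:
            break

    return generator(pre_faces)   # last output, sent back to the user
```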
  • In the method provided by this embodiment, a face-changing model training request sent by a user is first received; then, the pre-training model matching the face sample set before the face change in the training request is determined from the pre-training model set corresponding to the template face identifier in the request; next, the template face sample set matching the face sample set before the face change is determined from the template face sample set group corresponding to the template face identifier; finally, a machine learning method is used to train the determined pre-training model based on the face sample set before the face change and the determined template face sample set to obtain the face-changing model.
  • Using a pre-training model to train the face-changing model avoids training from scratch, reduces the time consumed by face-changing model training, and improves the training efficiency of the face-changing model.
  • This plays a positive role in the practical application and user experience of deep face-changing technology.
  • FIG. 3 shows a process 300 of another embodiment of the method for training a face-changing model according to the present application.
  • the method for training a face-changing model may include the following steps:
  • Step 301 Receive a face-changing model training request sent by the user.
  • step 301 has been described in detail in step 201 in the embodiment shown in FIG. 2, and will not be repeated here.
  • Step 302 If there is no pre-training model corresponding to the template face identifier in the user's historical face change record, identify the face attribute information of the face sample set before the face change.
  • the face attribute information of the face sample set before the face change may include information of multiple dimensions.
  • face attribute information may include, but is not limited to, information of at least one of the following dimensions: gender (such as male or female), age group (such as young, middle-aged, or old), race (such as white, yellow, or black), facial accessories (such as whether facial accessories are worn), and face shape (such as round face, triangle face, oval face, or square face).
  • In some optional implementations, the above-mentioned execution subject may input the face sample set before the face change into a pre-trained first classification model to obtain information of at least one dimension among the gender, age group, race, and facial accessories of the face sample set before the face change. The first classification model can be obtained by training a classification model based on Convolutional Neural Networks (CNN), such as AlexNet, GoogleNet, or ResNet.
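  • A minimal sketch of such a first classification model, assuming a ResNet-18 backbone from torchvision and hypothetical head sizes:

```python
import torch
import torch.nn as nn
from torchvision import models

class FaceAttributeClassifier(nn.Module):
    """Multi-head CNN predicting gender, age group, race, and facial accessories."""

    def __init__(self):
        super().__init__()
        backbone = models.resnet18(weights=None)   # any CNN backbone would do
        backbone.fc = nn.Identity()                # keep the 512-d feature vector
        self.backbone = backbone
        self.gender = nn.Linear(512, 2)            # male / female
        self.age_group = nn.Linear(512, 3)         # young / middle-aged / old
        self.race = nn.Linear(512, 3)              # hypothetical label set
        self.accessories = nn.Linear(512, 2)       # with / without accessories

    def forward(self, x):
        feats = self.backbone(x)
        return {
            "gender": self.gender(feats),
            "age_group": self.age_group(feats),
            "race": self.race(feats),
            "accessories": self.accessories(feats),
        }

# Usage: logits = FaceAttributeClassifier()(torch.randn(1, 3, 224, 224))
```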
  • In some optional implementations, the above-mentioned execution subject may first extract the face classification features of the face sample set before the face change, and then input the extracted face classification features into a pre-trained second classification model to obtain the face shape of the face sample set before the face change.
  • the second classification model may be obtained by training a classification model based on Support Vector Machine (SVM).
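  • A minimal sketch of such a second classification model using scikit-learn, with placeholder data standing in for the real face classification features:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

FACE_SHAPES = ["round", "triangle", "oval", "square"]   # label set from the text

# X: face classification features (flattened feature point coordinates plus
# measurement parameters); y: face shape labels. Placeholder data only.
X_train = np.random.rand(200, 140)
y_train = np.random.choice(len(FACE_SHAPES), size=200)

second_classifier = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
second_classifier.fit(X_train, y_train)

predicted_shape = FACE_SHAPES[second_classifier.predict(X_train[:1])[0]]
```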
  • Here, the face classification features may include face feature point information and face measurement parameters.
  • The above-mentioned execution subject can first extract the face feature point information of the face sample set before the face change; then calculate the face measurement parameters of the face sample set before the face change based on the extracted face feature point information; and finally combine the extracted face feature point information and the calculated face measurement parameters into the face classification features of the face sample set before the face change.
  • the algorithm for extracting facial feature point information may include, but is not limited to, dlib, LBF, and so on.
  • the face measurement parameters calculated based on the face feature point information may include, but are not limited to, face width (Wshape), mandible width (Wmandible), morphological face height (Hshape), and so on.
  • the face width can be equal to the Euclidean distance between the left and right zygomatic points.
  • the width of the mandible can be equal to the Euclidean distance between the left and right mandibular corner points.
  • the height of the morphological surface can be equal to the Euclidean distance between the nasion point and the submental point.
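  • A minimal sketch of these measurement computations, assuming a landmark dictionary keyed by hypothetical point names (e.g. derived from dlib's 68-point detector):

```python
import numpy as np

def euclidean(p, q):
    return float(np.linalg.norm(np.asarray(p) - np.asarray(q)))

def face_measurements(landmarks):
    """Compute the measurement parameters named above from (x, y) landmarks."""
    return {
        # face width: distance between the left and right zygomatic points
        "Wshape": euclidean(landmarks["zygion_left"], landmarks["zygion_right"]),
        # mandible width: distance between the left and right mandibular angles
        "Wmandible": euclidean(landmarks["gonion_left"], landmarks["gonion_right"]),
        # morphological face height: distance from the nasion to the menton
        "Hshape": euclidean(landmarks["nasion"], landmarks["menton"]),
    }
```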
  • Step 303 Based on the recognized face attribute information, a pre-training model is determined from the pre-training model set.
  • the above-mentioned execution subject may determine the pre-training model from the pre-training model set based on the recognized face attribute information. For example, the above-mentioned execution subject may select a pre-training model that best matches the recognized face attribute information from a set of pre-training models.
  • the above-mentioned execution subject may first determine from the pre-training model set a subset of the pre-training model that matches the recognized face attribute information; and then calculate the face sample set before the face change and The similarity of the target face sample set corresponding to the pre-training model in the pre-training model subset; finally, based on the calculated similarity, the pre-training model is determined from the pre-training model subset.
  • the above-mentioned execution subject can first extract the average face feature vector of the face sample set before the face change, and then calculate the cosine similarity between the extracted average face feature vector and the average face feature vector of the target face sample set corresponding to each pre-training model in the pre-training model subset.
  • the algorithm for extracting the average face feature vector may be, for example, a face recognition algorithm (such as VggFace).
  • the target face sample set corresponding to the pre-training model is the target face sample set used when the pre-training model is pre-trained.
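  • A minimal sketch of this similarity-based selection, assuming per-image face embeddings (e.g. from a VGGFace-style encoder) are already available:

```python
import numpy as np

def average_face_feature(feature_vectors):
    """Mean embedding of a face sample set."""
    return np.mean(np.asarray(feature_vectors), axis=0)

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def pick_pretrained_model(pre_face_embeddings, model_subset):
    """Choose the pre-training model whose target face set is most similar.

    `model_subset` is assumed to be a list of (model, target_set_embeddings)
    pairs, i.e. the subset already filtered by face attribute information.
    """
    query = average_face_feature(pre_face_embeddings)
    scored = [(cosine_similarity(query, average_face_feature(embs)), model)
              for model, embs in model_subset]
    return max(scored, key=lambda item: item[0])[1]
```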
  • Step 304 Extract the face richness features of the face sample set before the face change.
  • the above-mentioned execution subject may extract the face richness features of the face sample set before the face change.
  • the above-mentioned execution subject may first extract the face feature information of the face sample set before the face change, and then perform histogram statistics on the face feature information to obtain the face richness features of the face sample set before the face change.
  • the facial feature information may include, but is not limited to, information in at least one of the following dimensions: facial feature points, facial angles, facial expressions, and so on.
  • Methods for extracting facial feature information may include, but are not limited to, face detection, facial feature point extraction, facial angle recognition, facial expression recognition, and so on.
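  • A minimal sketch of such histogram statistics, with assumed bin edges and a hypothetical seven-class expression label set:

```python
import numpy as np

def richness_histograms(yaw_angles, expression_ids, n_expressions=7):
    """Histogram statistics over face feature information (illustrative only)."""
    angle_hist, _ = np.histogram(yaw_angles, bins=12, range=(-90, 90))
    angle_hist = angle_hist / max(angle_hist.sum(), 1)    # normalize to sum to 1
    expr_hist = np.bincount(expression_ids, minlength=n_expressions)
    expr_hist = expr_hist / max(expr_hist.sum(), 1)
    # Concatenate and renormalize so the overall histogram sums to 1.
    return np.concatenate([angle_hist, expr_hist]) / 2.0
```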
  • Step 305 Calculate the matching degree between the extracted face richness feature and the face richness feature of the template face sample set in the template face sample set group.
  • the above-mentioned execution subject may calculate the degree of matching between the extracted face richness feature and the face richness feature of the template face sample set in the template face sample set group.
  • the value of the matching degree is usually between 0 and 1, 0 means no match at all, and 1 means complete match.
  • the face richness features of the template face sample sets can be extracted in advance; the extraction method is the same as that used for the face sample set before the face change, and will not be repeated here.
  • the above-mentioned execution subject may use a histogram matching method to calculate the matching degree between the extracted face richness features and the face richness features of the template face sample sets in the template face sample set group.
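  • A minimal sketch of the matching and of the subsequent selection in step 306; histogram intersection stands in here for the unspecified histogram matching method, and the preset "universal" entry is a hypothetical key:

```python
import numpy as np

def histogram_match(h1, h2):
    """Histogram intersection: 0 means no match at all, 1 means a complete
    match, assuming both histograms are normalized to sum to 1."""
    return float(np.minimum(h1, h2).sum())

def select_template_set(pre_face_hist, template_sets, threshold=0.7):
    """Pick the best-matching template face sample set, falling back to the
    preset universal set when no matching degree exceeds the threshold."""
    scores = {name: histogram_match(pre_face_hist, hist)
              for name, hist in template_sets.items() if name != "universal"}
    best = max(scores, key=scores.get)
    return best if scores[best] > threshold else "universal"
```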
  • Step 306 Determine a template face sample set from the template face sample set group based on the calculated matching degree.
  • the above-mentioned execution subject may determine the template face sample set from the template face sample set group based on the calculated matching degree. For example, the above-mentioned execution subject may select the template face sample set with the highest matching degree from the template face sample set group.
  • the above-mentioned execution subject may compare the matching degree of the template face sample set in the template face sample set group with a preset matching degree threshold (for example, 0.7). If there is a template face sample set with a matching degree greater than a preset matching degree threshold in the template face sample set group, the above-mentioned execution subject may select the template face sample set with the highest matching degree from the template face sample set group. If there is no template face sample set with a matching degree greater than the preset matching degree threshold in the template face sample set group, the above-mentioned execution subject may select a general template face sample set from the template face sample set group. Generally, a universal template face sample set is preset in the template face sample set group.
  • Step 307 Use a machine learning method to train the determined pre-training model based on the face sample set before the face change and the determined template face sample set to obtain the face change model.
  • step 307 has been described in detail in step 204 in the embodiment shown in FIG. 2, and will not be repeated here.
  • compared with the embodiment shown in Fig. 2, the process 300 of the method for training a face-changing model in this embodiment highlights determining the pre-training model based on face attribute information and determining the template face sample set based on face richness.
  • In this way, the pre-training model with the most similar face attribute information is trained on the matched sample sets, which improves the face-changing effect of the trained face-changing model and makes the output of the face-changing model more realistic.
  • FIG. 4 shows a schematic structural diagram of a computer system 400 suitable for implementing a computer device (for example, the device 102 shown in FIG. 1) of an embodiment of the present application.
  • the computer device shown in FIG. 4 is only an example, and should not bring any limitation to the function and scope of use of the embodiments of the present application.
  • the computer system 400 includes a central processing unit (CPU) 401, which can perform various appropriate actions and processing according to a program stored in a read-only memory (ROM) 402 or a program loaded from a storage section 408 into a random access memory (RAM) 403.
  • in the RAM 403, various programs and data required for the operation of the system 400 are also stored.
  • the CPU 401, the ROM 402, and the RAM 403 are connected to each other through a bus 404.
  • An input/output (I/O) interface 405 is also connected to the bus 404.
  • the following components are connected to the I/O interface 405: an input section 406 including a keyboard, a mouse, etc.; an output section 407 including a cathode ray tube (CRT), a liquid crystal display (LCD), speakers, etc.; a storage section 408 including a hard disk, etc.; and a communication section 409 including a network interface card such as a LAN card or a modem. The communication section 409 performs communication processing via a network such as the Internet.
  • a drive 410 is also connected to the I/O interface 405 as needed.
  • a removable medium 411 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, etc., is installed on the drive 410 as required, so that the computer program read from it is installed into the storage section 408 as required.
  • an embodiment of the present disclosure includes a computer program product, which includes a computer program carried on a computer-readable medium, and the computer program contains program code for executing the method shown in the flowchart.
  • the computer program may be downloaded and installed from the network through the communication part 409, and/or installed from the removable medium 411.
  • the computer-readable medium described in this application may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two.
  • the computer-readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of computer-readable storage media may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
  • a computer-readable storage medium may be any tangible medium that contains or stores a program, and the program may be used by or in combination with an instruction execution system, apparatus, or device.
  • a computer-readable signal medium may include a data signal propagated in a baseband or as a part of a carrier wave, and a computer-readable program code is carried therein. This propagated data signal can take many forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing.
  • the computer-readable signal medium may also be any computer-readable medium other than the computer-readable storage medium.
  • the computer-readable medium may send, propagate, or transmit the program for use by or in combination with the instruction execution system, apparatus, or device.
  • the program code contained on the computer-readable medium can be transmitted by any suitable medium, including but not limited to: wireless, wire, optical cable, RF, etc., or any suitable combination of the above.
  • the computer program code used to perform the operations of this application can be written in one or more programming languages or a combination thereof.
  • the programming languages include object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
  • the program code can be executed entirely on the user's computer, partly on the user's computer, executed as an independent software package, partly on the user's computer and partly executed on a remote computer, or entirely executed on the remote computer or electronic device.
  • the remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it can be connected to an external computer (for example, through an Internet connection provided by an Internet service provider).
  • each block in the flowchart or block diagram may represent a module, program segment, or part of code, and the module, program segment, or part of code contains one or more executable instructions for realizing the specified logical function.
  • the functions marked in the block may also occur in a different order from the order marked in the drawings. For example, two blocks shown in succession can actually be executed substantially in parallel, and they can sometimes be executed in the reverse order, depending on the functions involved.
  • each block in the block diagram and/or flowchart, and the combination of blocks in the block diagram and/or flowchart, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
  • the units involved in the embodiments described in the present application can be implemented in software or hardware.
  • the described unit may also be provided in the processor.
  • for example, a processor can be described as including a receiving unit, a first determining unit, a second determining unit, and a training unit.
  • the names of these units do not, in some cases, constitute a limitation on the units themselves.
  • the receiving unit can also be described as "a unit that receives a face-changing model training request sent by a user".
  • this application also provides a computer-readable medium.
  • the computer-readable medium may be included in the computer device described in the above-mentioned embodiments, or it may exist alone without being assembled into the computer device.
  • the above-mentioned computer-readable medium carries one or more programs.
  • When the above-mentioned one or more programs are executed by the computer device, the computer device: receives a face-changing model training request sent by a user, where the face-changing model training request includes a face sample set before the face change provided by the user and a designated template face identifier; determines, from the pre-training model set corresponding to the template face identifier, a pre-training model matching the face sample set before the face change, where the pre-training model set includes models pre-trained based on the target face sample set group and the template face sample set group corresponding to the template face identifier; determines, from the template face sample set group, a template face sample set matching the face sample set before the face change; and uses a machine learning method to train the determined pre-training model based on the face sample set before the face change and the determined template face sample set to obtain the face-changing model.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biophysics (AREA)
  • Human Computer Interaction (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

Embodiments of the present application disclose a method and device for training a face-swapping model. A specific embodiment of the method comprises: receiving a face-swapping model training request sent by a user, the face-swapping model training request comprising a face sample set before the face swap provided by the user and a specified template face identifier; determining, from a pre-trained model set corresponding to the template face identifier, a pre-trained model matching the face sample set before the face swap, the pre-trained model set comprising models pre-trained on the basis of a target face sample set group and a template face sample set group corresponding to the template face identifier; determining, from the template face sample set group, a template face sample set matching the face sample set before the face swap; and training the determined pre-trained model on the basis of the face sample set before the face swap and the determined template face sample set by means of a machine learning method, so as to obtain the face-swapping model. This embodiment saves the time spent training the face-swapping model and improves the efficiency of training the face-swapping model.
PCT/CN2020/123582 2019-10-30 2020-10-26 Method and device for training a face-swapping model WO2021083069A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911043178.3A CN110796089B (zh) 2019-10-30 2019-10-30 Method and device for training a face-swapping model
CN201911043178.3 2019-10-30

Publications (1)

Publication Number Publication Date
WO2021083069A1 true WO2021083069A1 (fr) 2021-05-06

Family

ID=69442013

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/123582 WO2021083069A1 (fr) 2019-10-30 2020-10-26 Method and device for training a face-swapping model

Country Status (2)

Country Link
CN (1) CN110796089B (fr)
WO (1) WO2021083069A1 (fr)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113379594A (zh) * 2021-06-29 2021-09-10 北京百度网讯科技有限公司 Face shape transformation model training, face shape transformation method, and related apparatus
CN113486785A (zh) * 2021-07-01 2021-10-08 深圳市英威诺科技有限公司 Deep learning-based video face-swapping method, apparatus, device, and storage medium
CN115358916A (zh) * 2022-07-06 2022-11-18 北京健康之家科技有限公司 Method and apparatus for generating a face-swapped image, computer device, and readable storage medium

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110796089B (zh) * 2019-10-30 2023-12-12 上海掌门科技有限公司 Method and device for training a face-swapping model
CN111353392B (zh) * 2020-02-18 2022-09-30 腾讯科技(深圳)有限公司 Face-swap detection method, apparatus, device, and storage medium
CN113763232A (zh) * 2020-08-10 2021-12-07 北京沃东天骏信息技术有限公司 Image processing method, apparatus, device, and computer-readable storage medium
CN112734631A (zh) * 2020-12-31 2021-04-30 北京深尚科技有限公司 Fine-tuning-model-based video image face-swapping method, apparatus, device, and medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2012078526A (ja) * 2010-09-30 2012-04-19 Xing Inc Karaoke system
CN106534757A (zh) * 2016-11-22 2017-03-22 北京金山安全软件有限公司 Face swapping method and apparatus, anchor terminal, and viewer terminal
CN110796089A (zh) * 2019-10-30 2020-02-14 上海掌门科技有限公司 Method and device for training a face-swapping model

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108520220B (zh) * 2018-03-30 2021-07-09 百度在线网络技术(北京)有限公司 Model generation method and apparatus
CN108509916A (zh) * 2018-03-30 2018-09-07 百度在线网络技术(北京)有限公司 Method and apparatus for generating images
CN109409198B (zh) * 2018-08-31 2023-09-05 平安科技(深圳)有限公司 AU detection method, apparatus, device, and medium
CN109214343B (zh) * 2018-09-14 2021-03-09 北京字节跳动网络技术有限公司 Method and apparatus for generating a facial keypoint detection model
CN110110611A (zh) * 2019-04-16 2019-08-09 深圳壹账通智能科技有限公司 Portrait attribute model construction method and apparatus, computer device, and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ANONYMOUS: "Face Swap Face Replacement Tutorial", 17 March 2018 (2018-03-17), pages 1 - 7, XP055807484, Retrieved from the Internet <URL:https://blog.csdn.net/sinat_26918145/article/details/79591717> *
XING ENXU , WU XIAOYONG , LI YAXIAN: "Double-Layer Generative Adversarial Networks Based on Transfer Learning", COMPUTER ENGINEERING AND APPLICATIONS, vol. 55, no. 15, 8 March 2019 (2019-03-08), pages 38 - 46+103, XP055807487, ISSN: 1002-8331, DOI: 10.3778/j.issn.1002-8331.1812-0225 *

Also Published As

Publication number Publication date
CN110796089B (zh) 2023-12-12
CN110796089A (zh) 2020-02-14

Similar Documents

Publication Publication Date Title
WO2021083069A1 (fr) Method and device for training a face-swapping model
CN109726624B (zh) Identity authentication method, terminal device, and computer-readable storage medium
WO2020006961A1 (fr) Image extraction method and device
US11487995B2 (en) Method and apparatus for determining image quality
WO2020024484A1 (fr) Data generation method and device
WO2021036059A1 (fr) Image conversion model training method, heterogeneous face recognition method, device, and apparatus
CN108416310B (zh) Method and apparatus for generating information
WO2020019591A1 (fr) Method and device for generating information
CN108197618B (zh) Method and apparatus for generating a face detection model
CN109993150B (zh) Method and apparatus for identifying age
CN108898185A (zh) Method and apparatus for generating an image recognition model
WO2020253127A1 (fr) Facial feature extraction model training method and apparatus, facial feature extraction method and apparatus, device, and storage medium
CN106682632B (zh) Method and apparatus for processing face images
CN107679466B (zh) Information output method and apparatus
WO2020062493A1 (fr) Image processing method and apparatus
WO2021208601A1 (fr) Artificial intelligence-based image processing method and apparatus, device, and storage medium
CN109189544B (zh) Method and apparatus for generating a watch face
WO2022105118A1 (fr) Image-based health status identification method and apparatus, device, and storage medium
WO2021238410A1 (fr) Image processing method and apparatus, electronic device, and medium
KR20210040882A (ko) Method and apparatus for generating a video
WO2019114464A1 (fr) Augmented reality method and device
WO2020006964A1 (fr) Image detection method and device
WO2020238321A1 (fr) Age identification method and device
WO2020124994A1 (fr) Liveness detection method and apparatus, electronic device, and storage medium
WO2023050868A1 (fr) Fusion model training method and apparatus, image fusion method and apparatus, device, and medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20881794

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20881794

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 27.10.2022)
