CN111259695B - Method and device for acquiring information

Method and device for acquiring information

Info

Publication number
CN111259695B
CN111259695B
Authority
CN
China
Prior art keywords
face
image
sample
matching model
images
Prior art date
Legal status
Active
Application number
CN201811458372.3A
Other languages
Chinese (zh)
Other versions
CN111259695A (en)
Inventor
朱祥祥
Current Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN201811458372.3A
Publication of CN111259695A
Application granted
Publication of CN111259695B

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06T3/04
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/757Matching configurations of points or features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification

Abstract

The embodiment of the application discloses a method and a device for acquiring information. One embodiment of the method comprises: dividing an acquired face image to be processed into at least one face region image; acquiring, for a face region image in the at least one face region image, a feature tag corresponding to that face region image; importing the at least one feature tag corresponding to the at least one face region image into a pre-trained face matching model to obtain at least one target face image corresponding to the face image to be processed; and acquiring hairstyle information corresponding to a target face image in the at least one target face image. The method and device improve the accuracy and effectiveness of acquiring hairstyle information matched to the face image to be processed.

Description

Method and device for acquiring information
Technical Field
The embodiment of the application relates to the technical field of computers, in particular to a method and a device for acquiring information.
Background
A hairstyle has a significant impact on a person's image, and a suitable hairstyle can improve the user's overall image. Typically, the user chooses a hairstyle according to his or her own preference, and a hairstylist then styles the user's hair according to that requirement.
Disclosure of Invention
The embodiment of the application provides a method and a device for acquiring information.
In a first aspect, an embodiment of the present application provides a method for acquiring information, where the method includes: dividing the acquired face image to be processed into at least one face region image; for a face region image in the at least one face region image, acquiring a feature tag corresponding to the face region image, wherein the feature tag is used for identifying the classification of the face feature corresponding to the face region image; importing at least one feature tag corresponding to the at least one face region image into a pre-trained face matching model to obtain at least one target face image corresponding to the face image to be processed, wherein the face matching model is used for representing the corresponding relation between feature tags and target face images in a face image library; and acquiring hairstyle information corresponding to a target face image in the at least one target face image.
In some embodiments, the acquiring the feature tag corresponding to the face area image includes: setting a position reference point for the face region image, wherein the position reference point is used for identifying structural features of the face features, and the structural features comprise at least one of the following: big, small, high, low, long, short, round, square; and determining the classification of the face features corresponding to the face region image according to the position reference points.
In some embodiments, the face matching model is constructed by: acquiring a plurality of sample face images and sample feature labels corresponding to each sample face image in the plurality of sample face images; and taking each sample face image in the plurality of sample face images as input, taking a sample feature label of each sample face image in the plurality of sample face images as output, and training to obtain a face matching model.
In some embodiments, the training to obtain the face matching model includes: the following training steps are performed: and sequentially inputting each sample face image in the plurality of sample face images into an initial face matching model to obtain a prediction feature label corresponding to each sample face image in the plurality of sample face images, comparing the prediction feature label corresponding to each sample face image in the plurality of sample face images with the sample feature label corresponding to the sample face image to obtain the prediction accuracy of the initial face matching model, determining whether the prediction accuracy is greater than a preset accuracy threshold, and if so, using the initial face matching model as a trained face matching model.
In some embodiments, the training to obtain the face matching model further includes: adjusting parameters of the initial face matching model in response to determining that the prediction accuracy is not greater than the preset accuracy threshold, and continuing to execute the training step.
In some embodiments, the above method further comprises: and displaying a hairstyle effect graph corresponding to the face image to be processed according to the hairstyle information.
In a second aspect, an embodiment of the present application provides an apparatus for acquiring information, including: a face region image acquisition unit configured to divide an acquired face image to be processed into at least one face region image; a feature tag acquisition unit configured to acquire, for a face region image in the at least one face region image, a feature tag corresponding to the face region image, where the feature tag is used to identify the classification of the face feature corresponding to the face region image; a target face image acquisition unit configured to import at least one feature tag corresponding to the at least one face region image into a pre-trained face matching model to obtain at least one target face image corresponding to the face image to be processed, wherein the face matching model is used for representing the corresponding relation between feature tags and target face images in the face image library; and a hairstyle information acquisition unit configured to acquire hairstyle information corresponding to a target face image in the at least one target face image.
In some embodiments, the feature tag obtaining unit includes: a position reference point setting subunit configured to set a position reference point for the face area image, the position reference point being used for identifying structural features of the face feature, the structural features including at least one of: big, small, high, low, long, short, round, square; and the classification information acquisition subunit is configured to determine the classification of the face features corresponding to the face region image according to the position reference points.
In some embodiments, the apparatus further includes a face matching model construction unit configured to construct a face matching model, the face matching model construction unit including: a sample acquisition subunit configured to acquire a plurality of sample face images and a sample feature tag corresponding to each of the plurality of sample face images; the face matching model construction subunit is configured to take each of the plurality of sample face images as input, take a sample feature tag of each of the plurality of sample face images as output, and train to obtain a face matching model.
In some embodiments, the face matching model building subunit includes: the face matching model construction module is configured to sequentially input each sample face image in the plurality of sample face images into an initial face matching model to obtain a prediction feature label corresponding to each sample face image in the plurality of sample face images, compare the prediction feature label corresponding to each sample face image in the plurality of sample face images with the sample feature label corresponding to the sample face image to obtain the prediction accuracy of the initial face matching model, determine whether the prediction accuracy is greater than a preset accuracy threshold, and if so, take the initial face matching model as a trained face matching model.
In some embodiments, the face matching model construction subunit includes: a parameter adjustment module configured to adjust parameters of the initial face matching model in response to determining that the prediction accuracy is not greater than the preset accuracy threshold, and to continue executing the training step.
In some embodiments, the apparatus further comprises: and the effect graph display unit is configured to display a hairstyle effect graph corresponding to the face image to be processed according to the hairstyle information.
In a third aspect, an embodiment of the present application provides a server, including: one or more processors; and a memory having one or more programs stored thereon, which when executed by the one or more processors, cause the one or more processors to perform the method for acquiring information of the first aspect.
In a fourth aspect, an embodiment of the present application provides a computer readable medium having stored thereon a computer program, characterized in that the program, when executed by a processor, implements the method for acquiring information of the first aspect described above.
The method and the device for acquiring information provided by the embodiments of the application first divide the acquired face image to be processed into at least one face region image; then, for a face region image in the at least one face region image, acquire a feature tag corresponding to the face region image, wherein the feature tag is used for identifying the classification of the face feature corresponding to the face region image; next, import at least one feature tag corresponding to the at least one face region image into a pre-trained face matching model to obtain at least one target face image corresponding to the face image to be processed, wherein the face matching model is used for representing the corresponding relation between feature tags and target face images in a face image library; and finally, acquire the hairstyle information corresponding to the target face image. According to this technical scheme, by importing the feature tags corresponding to the face region images into the face matching model, the target face image most similar to the face image to be processed can be found and the corresponding hairstyle information can be acquired, which improves the accuracy and effectiveness of acquiring hairstyle information matched to the face image to be processed.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the detailed description of non-limiting embodiments, made with reference to the accompanying drawings in which:
FIG. 1 is an exemplary system architecture diagram in which an embodiment of the present application may be applied;
FIG. 2 is a flow chart of one embodiment of a method for obtaining information in accordance with the present application;
FIG. 3 is a schematic illustration of an application scenario of a method for obtaining information according to the present application;
FIG. 4 is a flow chart of one embodiment of a face matching model training method according to the present application;
FIG. 5 is a schematic diagram of an embodiment of an apparatus for acquiring information in accordance with the present application;
FIG. 6 is a schematic diagram of a computer system suitable for use with a server implementing an embodiment of the application.
Detailed Description
The application is described in further detail below with reference to the drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the application and are not limiting of the application. It should be noted that, for convenience of description, only the portions related to the present application are shown in the drawings.
It should be noted that, without conflict, the embodiments of the present application and features of the embodiments may be combined with each other. The application will be described in detail below with reference to the drawings in connection with embodiments.
Fig. 1 illustrates an exemplary system architecture 100 to which a method for acquiring information or an apparatus for acquiring information of an embodiment of the present application may be applied.
As shown in fig. 1, a system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 is used as a medium to provide communication links between the terminal devices 101, 102, 103 and the server 105. The network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, among others.
The user may interact with the server 105 via the network 104 using the terminal devices 101, 102, 103 to receive or send messages or the like. Various image processing applications such as an image acquisition application, a light detection application, an exposure control application, an image brightness adjustment application, an image editing application, an image transmission application, and the like may be installed on the terminal devices 101, 102, 103.
The terminal devices 101, 102, 103 may be hardware or software. When the terminal devices 101, 102, 103 are hardware, they may be various electronic devices having a display screen and supporting image display, including but not limited to smartphones, tablet computers, laptop and desktop computers, and the like. When the terminal devices 101, 102, 103 are software, they can be installed in the above-listed electronic devices, and may be implemented as multiple pieces of software or software modules (e.g., to provide distributed services) or as a single piece of software or software module, which is not specifically limited here.
The server 105 may be a server that provides various services, for example, a server that processes the face images to be processed sent from the terminal devices 101, 102, 103. The server can analyze and otherwise process the received face image data to determine the hairstyle information corresponding to the face image to be processed.
It should be noted that the method for acquiring information provided by the embodiment of the present application may be executed by the terminal devices 101, 102, 103 alone or may be executed by the terminal devices 101, 102, 103 and the server 105 together. Accordingly, the means for acquiring information may be provided in the terminal devices 101, 102, 103 or in the server 105.
The server may be hardware or software. When the server is hardware, the server may be implemented as a distributed server cluster formed by a plurality of servers, or may be implemented as a single server. When the server is software, it may be implemented as a plurality of software or software modules (for example, to provide a distributed service), or may be implemented as a single software or software module, which is not specifically limited herein.
It should be understood that the number of terminal devices, networks and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
With continued reference to FIG. 2, a flow 200 of one embodiment of a method for obtaining information in accordance with the present application is shown. The method for acquiring information includes the steps of:
step 201, dividing the acquired face image to be processed into at least one face area image.
In the present embodiment, the execution subject of the method for acquiring information (e.g., the terminal devices 101, 102, 103 or the server 105 shown in fig. 1) may receive, by a wired or wireless connection, a face image to be processed from a terminal with which the user performs image acquisition. It should be noted that the wireless connection may include, but is not limited to, 3G/4G connections, WiFi connections, Bluetooth connections, WiMAX connections, Zigbee connections, UWB (ultra wideband) connections, and other now known or later developed wireless connection means.
In general, different hairstyles suit different users because of individual differences. A user may decide on a desired hairstyle by himself, but the desired hairstyle does not necessarily suit him; some hairstyles adversely affect the user's overall image and play a negative role.
After acquiring the face image to be processed, the execution subject can divide it into at least one face region image. The face image to be processed typically also includes content other than the face (for example, a coat, hair, a hat, etc.). The execution subject may first determine the face region within the face image to be processed. Then, through a face recognition method or the like, the execution subject can further determine the image positions corresponding to face features such as the eyes, forehead, nose, eyebrows and mouth within the face region. Finally, the execution subject divides the face image to be processed into at least one face region image according to those image positions. Each face region image contains one face feature such as an eye, the forehead, the nose, an eyebrow or the mouth; for example, a face region image may contain only the image corresponding to the left eye. A minimal sketch of this step is given below.
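The patent does not name a specific face recognition or segmentation technique for this step. Purely as an illustration, the sketch below assumes the open-source Python `face_recognition` library for landmark detection and crops each facial part by the bounding box of its landmark points; the 10-pixel margin is an arbitrary assumption.

```python
# Illustrative sketch of step 201: split a face image into per-part region images.
# Assumes the `face_recognition` library; the margin value is an arbitrary choice.
import numpy as np
import face_recognition

def split_into_face_region_images(image: np.ndarray, margin: int = 10) -> dict:
    """Return a dict mapping a facial part name (e.g. 'left_eye') to its cropped region."""
    landmarks_list = face_recognition.face_landmarks(image)
    if not landmarks_list:
        return {}
    landmarks = landmarks_list[0]  # use the first detected face

    region_images = {}
    for part_name, points in landmarks.items():
        xs = [x for x, _ in points]
        ys = [y for _, y in points]
        top = max(min(ys) - margin, 0)
        bottom = min(max(ys) + margin, image.shape[0])
        left = max(min(xs) - margin, 0)
        right = min(max(xs) + margin, image.shape[1])
        region_images[part_name] = image[top:bottom, left:right]
    return region_images
```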
Step 202, for a face area image in the at least one face area image, acquiring a feature tag corresponding to the face area image.
After the face region images are obtained, the execution subject can further identify the face feature contained in each face region image and then obtain the feature tag corresponding to that face feature. The feature tag is used to identify the classification of the face feature corresponding to the face region image. Different face features may have different classifications. For example, when the face feature is an eye, the classification may be "double eyelid", "single eyelid", and so on; when the face feature is the nose, the classification may be "high nose bridge", "low nose bridge", and so on. The classification may also be other content, which is not described in detail here.
In some optional implementations of this embodiment, the acquiring the feature tag corresponding to the face area image may include the following steps:
First, setting a position reference point for the face region image.
The classification of the face features may be embodied by the structure of the face region image. In order to determine the classification of the face features within the face region image, the execution subject may set a position reference point for the face region image. The position reference points may be used to identify structural features of the face feature. The above structural features include at least one of: big, small, high, low, long, short, round and square. For example, when the face feature is an eye, the structural feature may be: big, small, long, short, etc. For different face features, the structural features may be different, and will not be described in detail here.
Second, determining the classification of the face features corresponding to the face region image according to the position reference points.
After the position reference points are determined, the structure of the face feature can be represented, and the execution subject may determine the classification of the corresponding face feature according to the distance information between the position reference points. For example, when the face feature is an eye, the position reference points may be placed at the large canthus and the small canthus, together with the lines connecting them. When there are two lines located on the upper part of the eye, the classification of the eye may be "double eyelid", and the feature tag may be: { eyes; double eyelid }; when there is one line on the upper part of the eye, the classification of the eye may be "single eyelid", and the feature tag may be: { eyes; single eyelid }. It should be noted that the more position reference points there are, the more structural features of the face feature can be obtained, and the more accurately the classification of the face feature can be determined from those structural features.
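As a toy illustration of deriving a feature tag from position reference points, the sketch below classifies an eye by the ratio of the width to the height of its reference points; the threshold and the label vocabulary are assumptions chosen for illustration and are not the rules defined by the patent.

```python
# Toy sketch of step 202: derive a feature tag from position reference points.
# The 2.5 ratio threshold and the "long"/"round" labels are illustrative assumptions.
def eye_feature_tag(eye_points):
    """eye_points: list of (x, y) position reference points around one eye."""
    xs = [x for x, _ in eye_points]
    ys = [y for _, y in eye_points]
    width = max(xs) - min(xs)
    height = max(ys) - min(ys)
    # Use the width/height ratio as a simple structural feature of the eye.
    classification = "long" if width > 2.5 * height else "round"
    return {"feature": "eyes", "classification": classification}
```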
Step 203, importing at least one feature tag corresponding to the at least one face area image into a pre-trained face matching model to obtain at least one target face image corresponding to the face image to be processed.
Each face region image corresponds to one face feature, and each face feature corresponds to one feature tag, so there are as many feature tags as there are face region images. The execution subject may import these feature tags into a pre-trained face matching model, which searches the face image library for the face image closest to the imported feature tags. Each face image in the face image library has corresponding feature tags, and the face matching model takes the closest face image as the target face image. The face matching model can thus be used to represent the corresponding relation between feature tags and target face images in the face image library. For example, a face image may carry a plurality of feature tags: { eyes; double eyelid }; { nose; high nose bridge }; { mouth; small }; { eyebrow; long }, and the like. After the feature tags are imported into the face matching model, the model can find the face image closest to the imported feature tags from the face image library.
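For intuition only, a hand-written stand-in for this matching step could rank the face images in the library by how many of their stored feature tags agree with the imported tags; the overlap count below is an illustrative assumption, not the trained face matching model itself.

```python
# Simplified stand-in for step 203: rank library face images by feature-tag overlap.
def match_target_faces(query_tags, face_image_library, top_k=1):
    """face_image_library: iterable of (image_id, set_of_tags) pairs;
    tags are pairs such as ("eyes", "double eyelid")."""
    query = set(query_tags)
    scored = [(len(query & tags), image_id) for image_id, tags in face_image_library]
    scored.sort(reverse=True)
    return [image_id for _, image_id in scored[:top_k]]

# Example:
# library = [("face_1", {("eyes", "double eyelid"), ("nose", "high nose bridge")}),
#            ("face_2", {("eyes", "single eyelid"), ("nose", "low nose bridge")})]
# match_target_faces([("eyes", "double eyelid")], library)  # -> ["face_1"]
```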
In some optional implementations of this embodiment, the face matching model is constructed by:
First, a plurality of sample face images and a sample feature tag corresponding to each of the plurality of sample face images are acquired.
The execution subject may acquire a plurality of sample face images. Each of the plurality of sample face images is matched with a corresponding sample feature tag. Wherein the sample feature tags may be configured by a technician for each face feature based on experience or quantification criteria.
Second, taking each of the plurality of sample face images as input, taking the sample feature tag of each of the plurality of sample face images as output, and training to obtain a face matching model.
The execution subject may take each of the plurality of sample face images as input, take the sample feature tag of each of the plurality of sample face images as output, and train to obtain a face matching model. The face matching model of the application can be an artificial neural network, which abstracts the neural network of the human brain from the perspective of information processing, builds a simple model, and forms different networks according to different connection modes. An artificial neural network is typically made up of a large number of interconnected nodes (or neurons), where each node represents a particular output function called an excitation function. The connection between every two nodes carries a weight (also called a parameter) applied to the signal passing through that connection, and the output of the network varies according to the connection mode, the weights and the excitation functions of the network. The face matching model generally includes a plurality of layers, and each layer includes a plurality of nodes; in general, the weights of the nodes in the same layer may be the same while the weights of nodes in different layers may differ, so the parameters of the different layers of the face matching model may also differ.
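The patent does not fix a concrete network architecture or framework. The sketch below, assuming PyTorch, shows one possible small convolutional network consistent with the layered, weighted-node description above: it maps a sample face image to a predicted feature-label class.

```python
# A possible face matching model, sketched in PyTorch (the framework choice is an assumption).
import torch.nn as nn

class FaceMatchingModel(nn.Module):
    def __init__(self, num_label_classes: int):
        super().__init__()
        # Several layers of weighted nodes; each layer has its own parameters.
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, num_label_classes)

    def forward(self, x):               # x: batch of face images, shape (N, 3, H, W)
        x = self.features(x).flatten(1)
        return self.classifier(x)       # logits over feature-label classes
```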
Step 204, for a target face image in the at least one target face image, acquiring hairstyle information corresponding to the target face image.
After the target face image is obtained, the execution subject can further acquire the hairstyle information corresponding to the target face image. A target face image may generally correspond to at least one piece of hairstyle information, and the hairstyle information may include a hairstyle name and a hairstyle image corresponding to that name. For example, if the target face image is that of a certain male star whose hairstyle differs on different occasions, the execution subject can acquire hairstyle information such as: hairstyle name "high contrast" with hairstyle image 1; hairstyle name "fresh meat fight" with hairstyle image 2; hairstyle name "permanent type" with hairstyle image 3; and so on.
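For illustration, the hairstyle information could be held in a simple mapping from a matched target face image to its (hairstyle name, hairstyle image) records; the identifiers, names and paths below are placeholders, not data from the patent.

```python
# Hypothetical hairstyle information store keyed by target face image identifier.
hairstyle_library = {
    "target_face_001": [
        {"name": "hairstyle A", "image_path": "styles/a.png"},
        {"name": "hairstyle B", "image_path": "styles/b.png"},
    ],
}

def get_hairstyle_info(target_face_id: str):
    """Return all hairstyle records associated with a matched target face image."""
    return hairstyle_library.get(target_face_id, [])
```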
In some optional implementations of this embodiment, the method may further include: and displaying a hairstyle effect graph corresponding to the face image to be processed according to the hairstyle information.
When the hairstyle information is acquired, the execution subject may display it to the user. When detecting a selection signal indicating that the user has selected certain hairstyle information, the execution subject can display a hairstyle effect map corresponding to the face image to be processed according to that hairstyle information, so that the user can further confirm it. The hairstyle effect map can be an image obtained by combining the face image to be processed with the hairstyle image, for example by combining the face region of the face image to be processed with the hair region of the hairstyle image.
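One simple way to compose such an effect map, assuming a binary hair mask for the hairstyle image is available (for example from a separate hair segmentation step not described in the patent), is to paste the hair pixels of the hairstyle image over the face image to be processed:

```python
# Illustrative composition of a hairstyle effect map; the hair mask is an assumed input.
import numpy as np

def compose_hairstyle_effect(face_image: np.ndarray,
                             hairstyle_image: np.ndarray,
                             hair_mask: np.ndarray) -> np.ndarray:
    """All inputs share the same height and width; hair_mask is non-zero where hair lies."""
    effect = face_image.copy()
    hair = hair_mask.astype(bool)
    effect[hair] = hairstyle_image[hair]
    return effect
```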
With continued reference to fig. 3, fig. 3 is a schematic diagram of an application scenario of the method for acquiring information according to the present embodiment. In the application scenario of fig. 3, a user acquires a face image of himself via the terminal device 102 and sends it to the server 105 via the network 104. The server 105 acquires the face image to be processed and obtains at least one corresponding face region image. Then, the server 105 acquires the feature tag of each face region image: { eyes; double eyelid }; { nose; high nose bridge }; { mouth; small }; { eyebrow; long }, and the like. Thereafter, the server 105 imports the feature tags { eyes; double eyelid }, { nose; high nose bridge }, { mouth; small }, { eyebrow; long }, etc. into the face matching model to obtain at least one target face image. Finally, the server 105 can acquire the hairstyle information of each target face image in the at least one target face image.
The method provided by the embodiment of the application first divides the acquired face image to be processed into at least one face region image; then, for a face region image in the at least one face region image, acquires a feature tag corresponding to the face region image, wherein the feature tag is used for identifying the classification of the face feature corresponding to the face region image; next, imports at least one feature tag corresponding to the at least one face region image into a pre-trained face matching model to obtain at least one target face image corresponding to the face image to be processed, wherein the face matching model is used for representing the corresponding relation between feature tags and target face images; and finally, acquires the hairstyle information corresponding to the target face image. According to this technical scheme, by importing the feature tags corresponding to the face region images into the face matching model, the target face image most similar to the face image to be processed can be found and the corresponding hairstyle information can be acquired, which improves the accuracy and effectiveness of acquiring hairstyle information matched to the face image to be processed.
With further reference to fig. 4, a flow 400 of yet another embodiment of a face matching model training method is shown. The process 400 of the face matching model training method includes the following steps:
step 401, acquiring a plurality of sample face images and sample feature labels corresponding to each of the plurality of sample face images.
In this embodiment, the face matching model training method execution body (for example, the server 105 shown in fig. 1) may acquire a plurality of sample face images and sample feature tags corresponding to each of the plurality of sample face images through a wired connection manner or a wireless connection manner.
Step 402, sequentially inputting each of the plurality of sample face images into an initial face matching model to obtain a predictive feature label corresponding to each of the plurality of sample face images.
In this embodiment, the execution subject may sequentially input each of the plurality of sample face images into the initial face matching model to obtain the prediction feature tag corresponding to each of the plurality of sample face images. Here, the execution subject may input each sample face image from the input side of the initial face matching model, have it processed sequentially by the parameters of each layer of the initial face matching model, and output it from the output side; the information output from the output side is the prediction feature tag corresponding to that sample face image. The initial face matching model may be an untrained face matching model or a face matching model whose training has not been completed; each layer of the initial face matching model is provided with initialization parameters, which can be continuously adjusted during the training of the face matching model.
Step 403, comparing the prediction feature label corresponding to each of the plurality of sample face images with the sample feature label corresponding to the sample face image, so as to obtain the prediction accuracy of the initial face matching model.
In this embodiment, based on the prediction feature tag corresponding to each of the plurality of sample face images obtained in step 402, the execution subject may compare the prediction feature tag corresponding to each sample face image with the sample feature tag corresponding to that sample face image, so as to obtain the prediction accuracy of the initial face matching model. Specifically, if the prediction feature tag corresponding to a sample face image is the same as or similar to the sample feature tag corresponding to that sample face image, the prediction of the initial face matching model is correct; if they are different or not similar, the prediction is wrong. The execution subject may then calculate the ratio of the number of correct predictions to the total number of samples and take this ratio as the prediction accuracy of the initial face matching model.
Step 404, determining whether the prediction accuracy is greater than a preset accuracy threshold.
In this embodiment, based on the prediction accuracy of the initial face matching model obtained in step 403, the execution subject may compare the prediction accuracy of the initial face matching model with a preset accuracy threshold. If the accuracy is greater than the preset accuracy threshold, step 405 is executed; if not, step 406 is performed.
And step 405, using the initial face matching model as a face matching model after training.
In this embodiment, when the prediction accuracy of the initial face matching model is greater than the preset accuracy threshold, it is indicated that the training of the face matching model is completed. At this time, the execution subject may use the initial face matching model as the face matching model after training is completed.
Step 406, adjusting parameters of the initial face matching model.
In this embodiment, under the condition that the prediction accuracy of the initial face matching model is not greater than the preset accuracy threshold, the execution subject may adjust the parameters of the initial face matching model and return to step 402, until a face matching model capable of representing the correspondence between feature tags and face images is trained.
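Putting steps 402 to 406 together, a minimal training loop might look like the sketch below; it assumes the PyTorch model sketched earlier, a batch tensor of sample face images, integer-encoded sample feature labels, and an accuracy threshold of 0.95, all of which are assumptions for illustration.

```python
# Sketch of the training loop in steps 402-406 (PyTorch; hyper-parameters are assumptions).
def train_face_matching_model(model, optimizer, loss_fn,
                              sample_images, sample_labels,
                              accuracy_threshold: float = 0.95):
    while True:
        # Step 402: input the sample face images and obtain predicted feature labels.
        logits = model(sample_images)
        predictions = logits.argmax(dim=1)
        # Step 403: compare predictions with the sample feature labels -> prediction accuracy.
        accuracy = (predictions == sample_labels).float().mean().item()
        # Steps 404-405: if the accuracy exceeds the preset threshold, training is complete.
        if accuracy > accuracy_threshold:
            return model
        # Step 406: otherwise adjust the model parameters and repeat the training step.
        optimizer.zero_grad()
        loss = loss_fn(logits, sample_labels)
        loss.backward()
        optimizer.step()

# Usage (all names and shapes are assumptions):
#   model = FaceMatchingModel(num_label_classes=10)
#   optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
#   loss_fn = torch.nn.CrossEntropyLoss()
#   trained = train_face_matching_model(model, optimizer, loss_fn, images, labels)
```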
With further reference to fig. 5, as an implementation of the method shown in the above figures, the present application provides an embodiment of an apparatus for acquiring information, which corresponds to the method embodiment shown in fig. 2, and which is particularly applicable to various electronic devices.
As shown in fig. 5, the apparatus 500 for acquiring information of the present embodiment may include: a face region image acquisition unit 501, a feature tag acquisition unit 502, a target face image acquisition unit 503, and a hairstyle information acquisition unit 504. The face region image acquisition unit 501 is configured to divide the acquired face image to be processed into at least one face region image; the feature tag acquisition unit 502 is configured to acquire, for a face region image in the at least one face region image, a feature tag corresponding to the face region image, where the feature tag is used to identify the classification of the face feature corresponding to the face region image; the target face image acquisition unit 503 is configured to import at least one feature tag corresponding to the at least one face region image into a pre-trained face matching model to obtain at least one target face image corresponding to the face image to be processed, where the face matching model is used to represent the corresponding relation between feature tags and target face images in the face image library; and the hairstyle information acquisition unit 504 is configured to acquire hairstyle information corresponding to a target face image in the at least one target face image.
In some optional implementations of this embodiment, the feature tag obtaining unit 502 may include: a position reference point setting subunit (not shown in the figure) and a classification information acquisition subunit (not shown in the figure). Wherein the position reference point setting subunit is configured to set a position reference point for the face region image, the position reference point being used to identify structural features of the face feature, the structural features including at least one of: big, small, high, low, long, short, round, square; the classification information acquisition subunit is configured to determine a classification of the face feature corresponding to the face region image according to the position reference point.
In some optional implementations of this embodiment, the apparatus 500 for acquiring information may further include a face matching model building unit (not shown in the figure) configured to build a face matching model. The face matching model construction unit may include: a sample acquisition subunit (not shown) and a face matching model construction subunit (not shown). The sample acquisition subunit is configured to acquire a plurality of sample face images and sample feature labels corresponding to each of the plurality of sample face images; the face matching model construction subunit is configured to take each of the plurality of sample face images as input, take a sample feature tag of each of the plurality of sample face images as output, and train to obtain a face matching model.
In some optional implementations of this embodiment, the face matching model building subunit may include: a face matching model construction module (not shown in the figure) configured to sequentially input each of the plurality of sample face images into an initial face matching model to obtain a prediction feature label corresponding to each of the plurality of sample face images, compare the prediction feature label corresponding to each of the plurality of sample face images with the sample feature label corresponding to the sample face image to obtain a prediction accuracy of the initial face matching model, determine whether the prediction accuracy is greater than a preset accuracy threshold, and if so, use the initial face matching model as a trained face matching model.
In some optional implementations of this embodiment, the face matching model construction subunit may include: a parameter adjustment module (not shown in the figure) configured to adjust the parameters of the initial face matching model in response to determining that the prediction accuracy is not greater than the preset accuracy threshold, and to continue executing the training step.
In some optional implementations of this embodiment, the apparatus 500 for acquiring information may further include: and an effect map display unit (not shown in the figure) configured to display a hair style effect map corresponding to the face image to be processed based on the hair style information.
The embodiment also provides a server, including: one or more processors; and a memory having one or more programs stored thereon, which when executed by the one or more processors, cause the one or more processors to perform the method for acquiring information described above.
The present embodiment also provides a computer-readable medium having stored thereon a computer program which, when executed by a processor, implements the above-described method for acquiring information.
Referring now to FIG. 6, there is illustrated a schematic diagram of a computer system 600 suitable for use with a server (e.g., server 105 of FIG. 1) for implementing an embodiment of the present application. The server illustrated in fig. 6 is merely an example, and should not be construed as limiting the functionality and scope of use of embodiments of the present application.
As shown in fig. 6, the computer system 600 includes a Central Processing Unit (CPU) 601, which can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 602 or a program loaded from a storage section 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data required for the operation of the system 600 are also stored. The CPU 601, ROM 602, and RAM 603 are connected to each other through a bus 604. An input/output (I/O) interface 605 is also connected to bus 604.
The following components are connected to the I/O interface 605: an input portion 606 including a keyboard, mouse, etc.; an output portion 607 including a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and the like, a speaker, and the like; a storage section 608 including a hard disk and the like; and a communication section 609 including a network interface card such as a LAN card, a modem, or the like. The communication section 609 performs communication processing via a network such as the internet. The drive 610 is also connected to the I/O interface 605 as needed. Removable media 611 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is installed as needed on drive 610 so that a computer program read therefrom is installed as needed into storage section 608.
In particular, according to embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method shown in the flowcharts. In such an embodiment, the computer program may be downloaded and installed from a network through the communication portion 609, and/or installed from the removable medium 611. The above-described functions defined in the method of the present application are performed when the computer program is executed by a Central Processing Unit (CPU) 601.
The computer readable medium of the present application may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present application, however, the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units involved in the embodiments of the present application may be implemented in software or in hardware. The described units may also be provided in a processor, for example, described as: a processor includes a face region image acquisition unit, a feature tag acquisition unit, a target face image acquisition unit, and a hairstyle information acquisition unit. The names of these units do not constitute a limitation of the unit itself in some cases, and for example, the hairstyle information acquisition unit may also be described as "a unit for displaying hairstyle information corresponding to a target face image".
As another aspect, the present application also provides a computer-readable medium that may be contained in the apparatus described in the above embodiments; or may be present alone without being fitted into the device. The computer readable medium carries one or more programs which, when executed by the apparatus, cause the apparatus to: dividing the acquired face image to be processed into at least one face area image; for a face area image in the at least one face area image, acquiring a feature tag corresponding to the face area image, wherein the feature tag is used for identifying the classification of the face features corresponding to the face area image; importing at least one feature tag corresponding to the at least one face region image into a pre-trained face matching model to obtain at least one target face image corresponding to the face image to be processed, wherein the face matching model is used for representing the corresponding relation between the feature tag and the target face image in a face image library; and acquiring hairstyle information corresponding to the target face image in the target face images.
The above description is only illustrative of the preferred embodiments of the present application and of the principles of the technology employed. It will be appreciated by persons skilled in the art that the scope of the application is not limited to technical solutions formed by the specific combinations of the technical features described above, and also covers other technical solutions formed by any combination of the above technical features or their equivalents without departing from the inventive concept, for example technical solutions formed by replacing the above features with technical features having similar functions disclosed in (but not limited to) the present application.

Claims (14)

1. A method for obtaining information, comprising:
dividing the acquired face image to be processed into at least one face region image, including: identifying the image position of each face part in the face area in the face image to be processed; dividing the face image to be processed into face area images corresponding to the face parts one by one according to the image positions;
for face area images in the face area images, acquiring feature labels corresponding to the face area images, wherein the feature labels are used for identifying classification of face features corresponding to the face area images;
importing a plurality of feature labels corresponding to the face region images into a pre-trained face matching model to obtain at least one target face image corresponding to the face image to be processed, wherein the face matching model is used for representing the corresponding relation between the feature labels and the target face images in a face image library;
and acquiring hairstyle information corresponding to a target face image in the at least one target face image.
2. The method of claim 1, wherein the acquiring the feature tag corresponding to the face area image includes:
setting a position reference point for the face region image, wherein the position reference point is used for identifying structural features of the face features, and the structural features comprise at least one of the following: big, small, high, low, long, short, round, square;
and determining the classification of the face features corresponding to the face region image according to the position reference points.
3. The method of claim 1, wherein the face matching model is constructed by:
acquiring a plurality of sample face images and sample feature labels corresponding to each sample face image in the plurality of sample face images;
and taking each sample face image in the plurality of sample face images as input, taking a sample feature label of each sample face image in the plurality of sample face images as output, and training to obtain a face matching model.
4. A method according to claim 3, wherein the training to obtain the face matching model takes each of the plurality of sample face images as input and takes a sample feature tag of each of the plurality of sample face images as output comprises:
the following training steps are performed: inputting each sample face image in the plurality of sample face images into an initial face matching model in sequence to obtain a prediction feature label corresponding to each sample face image in the plurality of sample face images, comparing the prediction feature label corresponding to each sample face image in the plurality of sample face images with the sample feature label corresponding to the sample face image to obtain the prediction accuracy of the initial face matching model, determining whether the prediction accuracy is larger than a preset accuracy threshold, and if so, using the initial face matching model as a trained face matching model.
5. The method of claim 4, wherein the training to obtain the face matching model takes each of the plurality of sample face images as input and takes a sample feature tag of each of the plurality of sample face images as output comprises:
and adjusting parameters of the initial face matching model in response to determining that the prediction accuracy is not greater than the preset accuracy threshold, and continuing to execute the training step.
6. The method of any one of claims 1 to 5, wherein the method further comprises:
and displaying a hairstyle effect graph corresponding to the face image to be processed according to the hairstyle information.
7. An apparatus for obtaining information, comprising:
a face region image acquisition unit configured to divide an acquired face image to be processed into at least one face region image, including: identifying the image position of each face part in the face area in the face image to be processed; dividing the face image to be processed into face area images corresponding to the face parts one by one according to the image positions;
a feature tag acquisition unit configured to acquire, for face region images in the face region images, feature tags corresponding to the face region images, wherein the feature tags are used for identifying classification of face features corresponding to the face region images;
a target face image acquisition unit configured to import a plurality of feature labels corresponding to the face region images into a pre-trained face matching model to obtain at least one target face image corresponding to the face image to be processed, wherein the face matching model is used for representing the corresponding relation between the feature labels and the target face images in a face image library;
and the hair style information acquisition unit is configured to acquire hair style information corresponding to the target face image in the at least one target face image.
8. The apparatus of claim 7, wherein the feature tag acquisition unit comprises:
a position reference point setting subunit configured to set a position reference point for the face region image, the position reference point being used to identify structural features of the face feature, the structural features including at least one of: big, small, high, low, long, short, round, square;
and the classification information acquisition subunit is configured to determine the classification of the face features corresponding to the face region image according to the position reference points.
9. The apparatus according to claim 7, wherein the apparatus further comprises a face matching model construction unit configured to construct a face matching model, the face matching model construction unit comprising:
a sample acquisition subunit configured to acquire a plurality of sample face images and a sample feature tag corresponding to each of the plurality of sample face images;
the face matching model construction subunit is configured to take each sample face image in the plurality of sample face images as input, take a sample feature tag of each sample face image in the plurality of sample face images as output, and train to obtain a face matching model.
10. The apparatus of claim 9, wherein the face matching model construction subunit comprises:
the face matching model construction module is configured to sequentially input each sample face image in the plurality of sample face images into an initial face matching model to obtain a prediction feature label corresponding to each sample face image in the plurality of sample face images, compare the prediction feature label corresponding to each sample face image in the plurality of sample face images with the sample feature label corresponding to the sample face image to obtain the prediction accuracy of the initial face matching model, determine whether the prediction accuracy is greater than a preset accuracy threshold, and if so, take the initial face matching model as a trained face matching model.
11. The apparatus of claim 10, wherein the face matching model construction subunit comprises:
a parameter adjustment module configured to adjust the parameters of the initial face matching model in response to determining that the prediction accuracy is not greater than the preset accuracy threshold, and to continue executing the training step.
12. The apparatus according to any one of claims 7 to 11, wherein the apparatus further comprises:
and the effect graph display unit is configured to display a hairstyle effect graph corresponding to the face image to be processed according to the hairstyle information.
13. A server, comprising:
one or more processors;
a memory having one or more programs stored thereon,
the one or more programs, when executed by the one or more processors, cause the one or more processors to perform the method of any of claims 1-6.
14. A computer readable medium on which a computer program is stored, characterized in that the program, when executed by a processor, implements the method according to any one of claims 1 to 6.
CN201811458372.3A 2018-11-30 2018-11-30 Method and device for acquiring information Active CN111259695B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811458372.3A CN111259695B (en) 2018-11-30 2018-11-30 Method and device for acquiring information

Publications (2)

Publication Number Publication Date
CN111259695A CN111259695A (en) 2020-06-09
CN111259695B (en) 2023-08-29

Family

ID=70946674

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811458372.3A Active CN111259695B (en) 2018-11-30 2018-11-30 Method and device for acquiring information

Country Status (1)

Country Link
CN (1) CN111259695B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112766285B (en) * 2021-01-26 2024-03-19 北京有竹居网络技术有限公司 Image sample generation method and device and electronic equipment

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107578034A (en) * 2017-09-29 2018-01-12 百度在线网络技术(北京)有限公司 information generating method and device
CN107590482A (en) * 2017-09-29 2018-01-16 百度在线网络技术(北京)有限公司 information generating method and device
CN108009521A (en) * 2017-12-21 2018-05-08 广东欧珀移动通信有限公司 Humanface image matching method, device, terminal and storage medium
CN107909065A (en) * 2017-12-29 2018-04-13 百度在线网络技术(北京)有限公司 The method and device blocked for detecting face
CN108053365A (en) * 2017-12-29 2018-05-18 百度在线网络技术(北京)有限公司 For generating the method and apparatus of information
CN108416310A (en) * 2018-03-14 2018-08-17 百度在线网络技术(北京)有限公司 Method and apparatus for generating information
CN108509041A (en) * 2018-03-29 2018-09-07 百度在线网络技术(北京)有限公司 Method and apparatus for executing operation
CN108595628A (en) * 2018-04-24 2018-09-28 百度在线网络技术(北京)有限公司 Method and apparatus for pushed information
CN108629339A (en) * 2018-06-15 2018-10-09 Oppo广东移动通信有限公司 Image processing method and related product

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
A paper-cut style face portrait generation algorithm; Qiao Feng; Pu Yuanyuan; Dong Sunjun; Xu Dan; Journal of System Simulation (09); full text *

Also Published As

Publication number Publication date
CN111259695A (en) 2020-06-09

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant