CN113762305B - Method and device for determining hair loss type


Info

Publication number
CN113762305B
Authority
CN
China
Prior art keywords
image
head
hair
target
target head
Legal status
Active
Application number
CN202011364134.3A
Other languages
Chinese (zh)
Other versions
CN113762305A (en)
Inventor
左鑫孟
梅涛
周伯文
Current Assignee
Beijing Jingdong Century Trading Co Ltd
Beijing Wodong Tianjun Information Technology Co Ltd
Original Assignee
Beijing Jingdong Century Trading Co Ltd
Beijing Wodong Tianjun Information Technology Co Ltd
Application filed by Beijing Jingdong Century Trading Co Ltd, Beijing Wodong Tianjun Information Technology Co Ltd
Priority to CN202011364134.3A
Publication of CN113762305A
Application granted
Publication of CN113762305B


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2415 Classification techniques based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate

Abstract

The invention discloses a method and a device for determining a hair loss type, and relates to the field of computer technology. One embodiment of the method comprises: acquiring a target image, and labeling the target head in the target image to obtain a region image of the target head; segmenting the region image of the target head to obtain a mask image of the hair corresponding to the target head; and inputting the region image of the target head and the mask image of the hair into a twin network (Siamese network) model for classification, the hair loss type corresponding to the target head being determined according to the classification result, where the twin network model is trained on region images of hair-loss heads and the mask images of the hair corresponding to those heads. The method and device improve the accuracy and efficiency of determining the hair loss type, reduce the requirements on the target image, save human resource costs, and improve the user experience.

Description

Method and device for determining hair loss type
Technical Field
The present invention relates to the field of computer technologies, and in particular, to a method and apparatus for determining a type of hair loss.
Background
Alopecia, or hair loss, refers to abnormal shedding of hair that results in sparse hair, baldness, or bald spots. Genetic factors, advancing age, excessive mental stress, endocrine disorders, and the like can all cause hair loss. Existing methods for determining the hair loss type mainly identify the hair loss region in an input picture by manual calculation and judgment or by interactive segmentation detection, calculate the hair loss area, judge the degree of hair loss, and make corresponding classification references, recommendation strategies, and the like.
In the process of implementing the present invention, the inventor finds that at least the following problems exist in the prior art:
the existing methods suffer from technical problems such as consuming a large amount of human resources to judge the hair loss type, judging the hair loss type and degree with low accuracy, providing a poor user experience, placing high requirements on the target image, and determining the hair loss type inefficiently.
Disclosure of Invention
In view of the above, the embodiment of the invention provides a method and a device for determining a hair loss type, which can improve the accuracy and efficiency of determining the hair loss type, reduce the requirement on a target image, save the cost of human resources and improve the user experience.
To achieve the above object, according to a first aspect of embodiments of the present invention, there is provided a method for determining a type of hair loss, including:
acquiring a target image, and labeling a target head in the target image to obtain a region image of the target head;
segmenting the region image of the target head to obtain a mask image of hair corresponding to the target head;
inputting the region image of the target head and the mask image of the hair into a twin network model for classification, and determining the hair loss type corresponding to the target head according to the classification result; the twin network model is trained on region images of hair-loss heads and the mask images of the hair corresponding to those heads.
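For orientation only, the three steps above can be sketched as the following pipeline; this is a minimal illustration, and every function name in it is a hypothetical placeholder rather than an identifier from the patent:

```python
from typing import Callable
import numpy as np

def determine_hair_loss_type(
    target_image: np.ndarray,
    detect_head: Callable[[np.ndarray], np.ndarray],         # labeling: crop region image
    segment_hair: Callable[[np.ndarray], np.ndarray],        # segmentation: hair mask image
    twin_classify: Callable[[np.ndarray, np.ndarray], int],  # twin-network classification
) -> int:
    """Hypothetical end-to-end sketch of the claimed three-step method."""
    head_region = detect_head(target_image)       # region image of the target head
    hair_mask = segment_hair(head_region)         # mask image of the hair
    return twin_classify(head_region, hair_mask)  # hair loss type / grade
```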
Further, after the step of labeling the target head in the target image to obtain the region image of the target head, the method further includes:
acquiring a plurality of training images, labeling the heads in the plurality of training images respectively to obtain a plurality of binarized images composed of head region images and non-head region images, and constructing a first binary classification model based on the plurality of binarized images;
and filtering the region image of the target head by using the first binary classification model and a first threshold value.
Further, before the step of segmenting the region image of the target head to obtain the mask image of the hair corresponding to the target head, the method further includes:
acquiring images of non-hair-loss heads and images of hair-loss heads, and constructing a second classification model by taking the images of non-hair-loss heads and the images of hair-loss heads as training samples;
and classifying the region image of the target head by using the second classification model, and judging whether the target head has hair loss according to the classification result and a second threshold value.
Further, the step of inputting the region image of the target head and the mask image of the hair into the twin network model for classification processing further comprises:
performing feature extraction on the region image of the target head and on the mask image of the hair respectively by using the twin network model to obtain a first feature vector corresponding to the region image of the target head and a second feature vector corresponding to the mask image of the hair, weighting the first feature vector and the second feature vector, and feeding the weighted result into a fully connected layer of the twin network model for classification.
Further, the step of weighting the first feature vector and the second feature vector includes:
setting weight coefficients for the first feature vector and the second feature vector respectively, and weighting the first feature vector and the second feature vector according to the weight coefficients.
Further, the output layer of the twin network model comprises a plurality of output nodes, each output node corresponding to one hair loss level; the method further comprises:
determining the hair loss grade corresponding to the target head according to the output node corresponding to the classification result.
According to a second aspect of the embodiments of the present invention, there is provided a device for determining a type of hair loss, including:
the labeling processing module is used for acquiring a target image, labeling the target head in the target image and obtaining a region image of the target head;
The segmentation processing module is used for carrying out segmentation processing on the region image of the target head to obtain a mask image of hair corresponding to the target head;
the hair loss type determining module is used for inputting the region image of the target head and the mask image of the hair into the twin network model for classification, and determining the hair loss type corresponding to the target head according to the classification result; the twin network model is trained on region images of hair-loss heads and the mask images of the hair corresponding to those heads.
Further, the alopecia type determining module is further configured to:
perform feature extraction on the region image of the target head and on the mask image of the hair respectively by using the twin network model to obtain a first feature vector corresponding to the region image of the target head and a second feature vector corresponding to the mask image of the hair, weight the first feature vector and the second feature vector, and feed the weighted result into a fully connected layer of the twin network model for classification.
According to a third aspect of an embodiment of the present invention, there is provided an electronic apparatus including:
one or more processors;
storage means for storing one or more programs,
The one or more programs, when executed by the one or more processors, cause the one or more processors to implement a method of determining a type of hair loss as described above.
According to a fourth aspect of embodiments of the present invention, there is provided a computer readable medium having stored thereon a computer program which, when executed by a processor, implements a method of determining a type of hair loss as any of the above.
One embodiment of the above invention has the following advantages or benefits. A target image is acquired and the target head in it is labeled, obtaining a region image of the target head; the region image of the target head is segmented, obtaining a mask image of the hair corresponding to the target head; and the region image of the target head and the mask image of the hair are input into a twin network model for classification, the hair loss type corresponding to the target head being determined according to the classification result, where the twin network model is trained on region images of hair-loss heads and the mask images of the hair corresponding to those heads. These technical means overcome the problems of the existing methods, which rely on manual calculation and judgment or on interactive segmentation detection and therefore consume a large amount of human resources, judge the hair loss type and degree with low accuracy, provide a poor user experience, place high requirements on the examined image, and determine the hair loss type inefficiently. The accuracy and efficiency of determining the hair loss type are thereby improved, the requirements on the target image are reduced, human resource costs are saved, and the user experience is improved.
Further effects of the above optional implementations are described below in connection with specific embodiments.
Drawings
The drawings are included to provide a better understanding of the invention and are not to be construed as unduly limiting the invention. Wherein:
fig. 1 is a schematic view of the main flow of a method for determining a type of hair loss according to a first embodiment of the present invention;
fig. 2 is a schematic view of the main flow of a method for determining a type of hair loss according to a second embodiment of the present invention;
fig. 3 is a schematic view of main modules of a hair loss type determining apparatus provided according to an embodiment of the present invention;
FIG. 4 is an exemplary system architecture diagram in which embodiments of the present invention may be applied;
fig. 5 is a schematic diagram of a computer system suitable for use in implementing an embodiment of the invention.
Detailed Description
Exemplary embodiments of the present invention will now be described with reference to the accompanying drawings, in which various details of the embodiments of the present invention are included to facilitate understanding, and are to be considered merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the invention. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
Fig. 1 is a schematic view of the main flow of a method for determining a type of hair loss according to a first embodiment of the present invention; as shown in fig. 1, the method for determining a hair loss type according to the embodiment of the present invention mainly includes:
step S101, obtaining a target image, and performing labeling processing on a target head in the target image to obtain a region image of the target head.
Specifically, the purpose of the embodiment of the present invention is to determine the hair loss type; it should be understood that the target head described above does not include the face, and that there is at least one target head. To further improve the accuracy of determining the hair loss type, the target image is preferably an image of the head photographed from above. With this arrangement, the region image of the target head is labeled in the target image first, so that it and the corresponding mask image of the hair can serve as the two input paths of the twin model, which reduces the requirement on the resolution of the target image and improves the accuracy of determining the hair loss type.
Further, according to an embodiment of the present invention, after the step of labeling the target head in the target image to obtain the region image of the target head, the method further includes:
acquiring a plurality of training images, labeling the heads in the plurality of training images respectively to obtain a plurality of binarized images composed of head region images and non-head region images, and constructing a first binary classification model based on the plurality of binarized images;
and filtering the region image of the target head by using the first binary classification model and a first threshold value.
Specifically, according to the embodiment of the invention, the training images may be obtained from an open-source database or collected independently. When constructing the first binary classification model, a target detection algorithm may be employed to train on the binarized images composed of head region images and non-head region images, thereby optimizing the first binary classification model. The first binary classification model classifies the region image of the target head to obtain a classification value (or probability value) indicating that the target head is indeed a head, and this value is compared against the first threshold to filter the region image of the target head. This arrangement establishes the credibility of the target head before the hair loss type is later determined, avoids region images of wrongly labeled target heads introduced during the labeling processing, further improves the accuracy of determining the hair loss type, and also improves the accuracy of the hair mask image obtained by the subsequent segmentation of the target head.
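As a sketch of this filtering step (the threshold value, the score format, and all names below are illustrative assumptions, not details given by the patent):

```python
def filter_head_regions(region_images: list, head_scores: list,
                        first_threshold: float = 0.8) -> list:
    """Keep only region images whose classification value (the probability
    that the region really is a head) reaches the first threshold.

    first_threshold=0.8 is an illustrative value only; the patent merely
    requires comparison against a first threshold.
    """
    return [region for region, score in zip(region_images, head_scores)
            if score >= first_threshold]
```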
Step S102, segmentation processing is carried out on the regional image of the target head, and a mask image of the hair corresponding to the target head is obtained.
Masking refers to controlling the region or process of image processing by occluding the processed image (wholly or partially) with a selected image, graphic, or object (in this embodiment, the hair).
The mask image of the hair refers to the image formed by the positions of the hair within the region image of the target head.
Specifically, according to an embodiment of the present invention, before the step of segmenting the region image of the target head to obtain the mask image of the hair corresponding to the target head, the method further includes:
acquiring images of non-hair-loss heads and images of hair-loss heads, and constructing a second classification model by taking the images of non-hair-loss heads and the images of hair-loss heads as training samples;
and classifying the region image of the target head by using the second classification model, and judging whether the target head has hair loss according to the classification result and a second threshold value.
According to a specific implementation of the embodiment of the present invention, images of non-hair-loss heads and images of hair-loss heads may be screened from the head region images obtained by labeling the plurality of training images, and used as training samples to train the second classification model. The trained second classification model then classifies the region image of the target head, whether the target head has hair loss is judged according to the classification result and the second threshold value, and if so, the subsequent hair loss type determination proceeds.
Further, according to an embodiment of the present invention, the step of performing segmentation processing on the region image of the target head to obtain the mask image of the hair corresponding to the target head includes:
acquiring head region images, segmentation-marking the positions of the hair in the head region images, and training a hair segmentation model by taking the hair-marked head region images as a training set; the region image of the target head is then input into the hair segmentation model, which outputs the mask image of the hair corresponding to the target head.
The prior art mainly adopts interactive segmentation methods such as grabcut (an image segmentation algorithm); such methods are graph-based, require part of the foreground seed points to be specified manually, and are therefore prone to human error. With the above arrangement, the trained hair segmentation model outputs the mask image of the hair end to end, which significantly improves the segmentation of the hair mask image from the region image of the target head and reduces the influence of factors such as the complexity of hairstyles, the variability of hair colors, and the uncertainty of image brightness.
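An inference sketch of this step, assuming a PyTorch-style segmentation network and a 0.5 binarization threshold (both assumptions; the patent does not name an architecture or threshold):

```python
import torch

def hair_mask_from_region(seg_model: torch.nn.Module,
                          head_region: torch.Tensor) -> torch.Tensor:
    """End-to-end mask inference with the trained hair segmentation model.

    head_region: (1, 3, H, W) float tensor holding the region image of the
    target head. Returns a (1, 1, H, W) binary mask image of the hair.
    """
    seg_model.eval()
    with torch.no_grad():
        logits = seg_model(head_region)        # per-pixel hair logits
        mask = torch.sigmoid(logits) > 0.5     # binarize: hair vs. non-hair
    return mask.float()
```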
Step S103, inputting the region image of the target head and the mask image of the hair into the twin network model for classification, and determining the hair loss type corresponding to the target head according to the classification result; the twin network model is trained on region images of hair-loss heads and the mask images of the hair corresponding to those heads.
With this arrangement, the region image of the target head and the mask image of the hair are taken as two paths of input data, the twin network model performs the classification, and the hair loss type of the target head is determined according to the single classification result that is output, so that the hair loss type is confirmed more accurately.
Specifically, according to an embodiment of the present invention, the step of inputting the region image of the target head and the mask image of the hair into the twin network model to perform the classification processing further includes:
performing feature extraction on the region image of the target head and on the mask image of the hair respectively by using the twin network model to obtain a first feature vector corresponding to the region image of the target head and a second feature vector corresponding to the mask image of the hair, weighting the first feature vector and the second feature vector, and feeding the weighted result into a fully connected layer of the twin network model for classification.
With this arrangement, the feature vectors that the twin network model extracts from the two input paths are fused by weighting and then fed into the fully connected layer for classification, which can further improve the accuracy and efficiency of determining the hair loss type and thus further improve the user experience.
Preferably, according to an embodiment of the present invention, the step of weighting the first feature vector and the second feature vector includes:
setting weight coefficients for the first feature vector and the second feature vector respectively, and weighting the first feature vector and the second feature vector according to the weight coefficients.
According to a specific implementation of the embodiment of the present invention, the mask image corresponding to the hair is considered more important to the model's output result, and the weight coefficient of the second feature vector is therefore set higher than that of the first feature vector.
Preferably, according to an embodiment of the present invention, the output layer of the twin network model includes a plurality of output nodes, each output node corresponding to one hair loss level; the method further comprises:
determining the hair loss grade corresponding to the target head according to the output node corresponding to the classification result.
Illustratively, during training of the twin model, the classification results are graded into hair loss levels, and the output layer is accordingly given a plurality of output nodes, each output node corresponding to one hair loss level. With this arrangement, the degree of hair loss can be judged accurately. The method can be applied in dermatology and endocrinology departments, hair loss outpatient clinics, and hair transplant and hair care institutions, to assist doctors in judging the course of hair loss; it can also be applied to the review and categorization of hair-loss-related product listings on e-commerce platforms and to personalized product recommendation; and it can be applied to site selection for offline anti-hair-loss product advertising, enabling targeted recommendation and sales.
According to the technical scheme of the embodiment of the invention, a target image is acquired and the target head in it is labeled, obtaining a region image of the target head; the region image of the target head is segmented, obtaining a mask image of the hair corresponding to the target head; and the region image of the target head and the mask image of the hair are input into a twin network model for classification, the hair loss type corresponding to the target head being determined according to the classification result, where the twin network model is trained on region images of hair-loss heads and the mask images of the hair corresponding to those heads. These technical means overcome the problems of the existing methods, which rely on manual calculation and judgment or on interactive segmentation detection and therefore consume a large amount of human resources, judge the hair loss type and degree with low accuracy, provide a poor user experience, place high requirements on the examined image, and determine the hair loss type inefficiently. The accuracy and efficiency of determining the hair loss type are thereby improved, the requirements on the target image are reduced, human resource costs are saved, and the user experience is improved.
Fig. 2 is a schematic view of the main flow of a method for determining a type of hair loss according to a second embodiment of the present invention; as shown in fig. 2, the method for determining a hair loss type according to the embodiment of the present invention mainly includes:
step S201, a target image is acquired, and labeling processing is carried out on a target head in the target image, so that a region image of the target head is obtained.
Specifically, according to the embodiment of the invention, the target head in the target image is labeled to obtain the coordinate position corresponding to the target head, from which the region image of the target head is determined. According to an embodiment of the present invention, a rectangular region enclosing the target head may be obtained, and the coordinate position of the target head may be marked as (left_point, right_point, width, height) (the abscissa of the upper-left corner, the abscissa of the upper-right corner, and the width and length of the rectangular region, respectively).
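For illustration, cropping the region image from the labeled rectangle might look as follows; the exact tuple semantics are garbled in the translation above, so the common layout of upper-left corner coordinates plus width and height is assumed here:

```python
import numpy as np

def crop_head_region(image: np.ndarray, box: tuple) -> np.ndarray:
    """Crop the region image of the target head from the labeled rectangle.

    box is assumed to be (left, top, width, height): the upper-left corner
    of the rectangle plus its width and height. This interpretation is an
    assumption; the patent's own tuple description is ambiguous.
    """
    left, top, width, height = box
    return image[top:top + height, left:left + width]
```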
Step S202, a plurality of training images are obtained, heads in the plurality of training images are respectively subjected to labeling processing, a plurality of binarized images composed of head area images and non-head area images are obtained, and a first binary classification model is built based on the plurality of binarized images.
Specifically, according to the embodiment of the invention, training images are obtained from an open-source database, the heads in the training images are labeled to obtain binarized images whose head regions are of class head and whose non-head regions are of class background, and a deep learning detection model (such as YOLOv4 or Faster R-CNN) is trained on the binarized images to obtain the first binary classification model (also called the head detection model).
Step S203, filtering the region image of the target head by using the first binary classification model and the first threshold value.
With this arrangement, the first binary classification model classifies the region image of the target head to obtain a first classification value (representing the probability that the image contains a head); the first classification value is then compared against the first threshold, region images of the target head whose first classification value falls below the first threshold are filtered out, and region images of the target head that were wrongly labeled in the preceding steps are eliminated, achieving the technical effect of improving the accuracy of determining the hair loss type.
Step S204, obtaining images of non-hair-loss heads and images of hair-loss heads, and constructing a second classification model by taking the images of non-hair-loss heads and the images of hair-loss heads as training samples.
Specifically, according to the embodiment of the invention, the head regions labeled in the training images in the above steps may be cropped out to obtain head images, which are then screened into images of non-hair-loss heads and images of hair-loss heads; training on these as samples yields the second classification model, which can be used to determine whether the target head has hair loss and thereby avoids the low accuracy caused by manual judgment.
Step S205, classifying the region image of the target head by using the second classification model, and judging whether the target head has hair loss according to the classification result and the second threshold. If yes, i.e., the target head has hair loss, step S206 is executed; if not, i.e., the target head has no hair loss, the process goes to step S209.
With this arrangement, the classification result output by the second classification model can be a confidence that the target head has hair loss, and the target head is judged to have hair loss only when this confidence is greater than or equal to the second threshold value, so that whether the target head has hair loss is identified automatically, further improving the user experience.
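A sketch of this judgment, assuming a PyTorch-style binary classifier whose single logit is squashed to a hair-loss confidence (the 0.5 default threshold is likewise an assumption):

```python
import torch

def has_hair_loss(second_classifier: torch.nn.Module,
                  head_region: torch.Tensor,
                  second_threshold: float = 0.5) -> bool:
    """Step S205: judge hair loss only when the classifier's confidence
    reaches the second threshold; otherwise the flow ends at step S209."""
    second_classifier.eval()
    with torch.no_grad():
        confidence = torch.sigmoid(second_classifier(head_region)).item()
    return confidence >= second_threshold
```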
It should be noted that the step numbering mentioned in the embodiments does not limit the present invention; for example, it will be understood that the model construction processes mentioned herein may be completed before step S201.
Step S206, segmentation processing is carried out on the region image of the target head, and a mask image of the hair corresponding to the target head is obtained.
The mask image of the hair refers to the image formed by the positions of the hair within the region image of the target head.
Specifically, according to an embodiment of the present invention, the step of performing segmentation processing on the region image of the target head to obtain the mask image of the hair corresponding to the target head includes:
acquiring head region images, segmentation-marking the positions of the hair in the head region images, and training a hair segmentation model by taking the hair-marked head region images as a training set; the region image of the target head is then input into the hair segmentation model, which outputs the mask image of the hair corresponding to the target head.
The prior art mainly adopts interactive segmentation methods such as grabcut (an image segmentation algorithm); such methods are graph-based, require part of the foreground seed points to be specified manually, and are therefore prone to human error. With the above arrangement, the trained hair segmentation model outputs the mask image of the hair end to end, which significantly improves the segmentation of the hair mask image from the region image of the target head and reduces the influence of factors such as the complexity of hairstyles, the variability of hair colors, and the uncertainty of image brightness.
Step S207, performing feature extraction on the region image of the target head and on the mask image of the hair respectively by using the twin network model to obtain a first feature vector corresponding to the region image of the target head and a second feature vector corresponding to the mask image of the hair, weighting the first feature vector and the second feature vector, and feeding the weighted result into a fully connected layer of the twin network model for classification.
Specifically, according to the embodiment of the present invention, the twin network model is trained on region images of hair-loss heads and the mask images of the hair corresponding to those heads.
With this arrangement, the feature vectors that the twin network model extracts from the two input paths are fused by weighting and then fed into the fully connected layer for classification, which can further improve the accuracy and efficiency of determining the hair loss type and thus further improve the user experience.
Preferably, according to an embodiment of the present invention, the step of weighting the first feature vector and the second feature vector includes:
setting weight coefficients for the first feature vector and the second feature vector respectively, and weighting the first feature vector and the second feature vector according to the weight coefficients.
According to a specific implementation of the embodiment of the present invention, the mask image corresponding to the hair is considered more important to the model's output result, so the weight coefficient of the second feature vector is set higher than that of the first feature vector; for example, the weight coefficient of the second feature vector may be set to 1 and that of the first feature vector to 0.5. It should be noted that these values are merely examples and not limitations of the embodiments of the present invention; they may be adjusted according to the actual situation in practical applications.
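The two-branch structure described above might be sketched as follows in PyTorch. Only the shared twin branches, the 0.5/1.0 weighting, and the fully connected classification layer come from the text; the backbone, feature size, fusion by weighted sum (rather than, say, weighted concatenation), and the number of output nodes are assumptions:

```python
import torch
import torch.nn as nn

class TwinHairLossNet(nn.Module):
    """Illustrative twin (Siamese) classifier for hair loss grading."""

    def __init__(self, feat_dim: int = 128, num_grades: int = 5):
        super().__init__()
        # Shared-weight feature extractor applied to both input paths.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim),
        )
        # Fully connected layer; one output node per hair loss grade.
        self.fc = nn.Linear(feat_dim, num_grades)

    def forward(self, head_region: torch.Tensor,
                hair_mask: torch.Tensor) -> torch.Tensor:
        if hair_mask.size(1) == 1:            # 1-channel mask: replicate to 3
            hair_mask = hair_mask.expand(-1, 3, -1, -1)
        v1 = self.encoder(head_region)        # first feature vector
        v2 = self.encoder(hair_mask)          # second feature vector
        fused = 0.5 * v1 + 1.0 * v2           # mask branch weighted higher
        return self.fc(fused)                 # classification logits
```

In use, the logits from the fully connected layer would be trained with an ordinary cross-entropy loss over the hair loss grade labels.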
Step S208, determining the alopecia type and the alopecia level corresponding to the target head according to the classification processing result.
Preferably, according to an embodiment of the present invention, the output layer of the twin network model includes a plurality of output nodes, each output node corresponding to one hair loss level; the classification result can therefore not only determine the hair loss type but also, via the output node it corresponds to, determine the hair loss level of the target head.
Illustratively, during training of the twin model, the classification results are graded into hair loss levels, and the output layer is accordingly given a plurality of output nodes, each output node corresponding to one hair loss level. With this arrangement, the degree of hair loss can be judged accurately. The method can be applied in dermatology and endocrinology departments, hair loss outpatient clinics, and hair transplant and hair care institutions, to assist doctors in judging the course of hair loss; it can also be applied to the review and categorization of hair-loss-related product listings on e-commerce platforms and to personalized product recommendation; and it can be applied to site selection for offline anti-hair-loss product advertising, enabling targeted recommendation and sales.
The hair loss types include the "M" type (the hair at both sides of the forehead gradually falls out and recedes, so that the hair as a whole takes on an M-shaped hair loss pattern), the "O" type (the hair at the crown gradually thins and falls out from the center outward, forming the O-shaped "Mediterranean" hair loss pattern), the "U" type (the hairline recedes backward from the forehead and, with the hair loss extending forward from the crown, the remaining hair gradually forms a U-shaped pattern), and the "M+O" type (hair loss concentrated mainly at the forehead and the crown, resembling an M and an O combined), and the like.
The hair loss levels may be defined when the classification results are graded during twin network model training, for example into 5 levels, from level 1 (severe hair loss) to level 5 (slight hair loss).
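Mapping the network's output node to a grade is then a matter of taking the highest-scoring node; the five-level indexing below follows the example grading above and is otherwise an assumption:

```python
import torch

def hair_loss_grade(logits: torch.Tensor) -> int:
    """Return the hair loss grade for the highest-scoring output node,
    assuming a single example and nodes 0..4 corresponding to grades
    1 (severe hair loss) through 5 (slight hair loss)."""
    node = int(torch.argmax(logits, dim=-1).item())
    return node + 1
```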
Step S209, ends.
According to the technical scheme of the embodiment of the invention, a target image is acquired and the target head in it is labeled, obtaining a region image of the target head; the region image of the target head is segmented, obtaining a mask image of the hair corresponding to the target head; and the region image of the target head and the mask image of the hair are input into a twin network model for classification, the hair loss type corresponding to the target head being determined according to the classification result, where the twin network model is trained on region images of hair-loss heads and the mask images of the hair corresponding to those heads. These technical means overcome the problems of the existing methods, which rely on manual calculation and judgment or on interactive segmentation detection and therefore consume a large amount of human resources, judge the hair loss type and degree with low accuracy, provide a poor user experience, place high requirements on the examined image, and determine the hair loss type inefficiently. The accuracy and efficiency of determining the hair loss type are thereby improved, the requirements on the target image are reduced, human resource costs are saved, and the user experience is improved.
Fig. 3 is a schematic view of main modules of a hair loss type determining apparatus provided according to an embodiment of the present invention; as shown in fig. 3, the device 300 for determining a hair loss type according to the embodiment of the present invention mainly includes:
the labeling processing module 301 is configured to obtain a target image, and perform labeling processing on a target head in the target image to obtain a region image of the target head.
With this arrangement, the region image of the target head is labeled in the target image first, so that it and the corresponding mask image of the hair can serve as the two input paths of the twin model, which reduces the requirement on the resolution of the target image and improves the accuracy of determining the hair loss type.
Further, the above-mentioned hair loss type determining device 300 further includes a filtering module; according to an embodiment of the present invention, after the step of labeling the target head in the target image to obtain the region image of the target head, the filtering module is further configured to:
acquire a plurality of training images, label the heads in the plurality of training images respectively to obtain a plurality of binarized images composed of head region images and non-head region images, and construct a first binary classification model based on the plurality of binarized images;
and filter the region image of the target head by using the first binary classification model and the first threshold value.
Specifically, according to the embodiment of the invention, the training images may be obtained from an open-source database or collected independently. When constructing the first binary classification model, a target detection algorithm may be employed to train on the binarized images composed of head region images and non-head region images, thereby optimizing the first binary classification model. The first binary classification model classifies the region image of the target head to obtain a classification value (or probability value) indicating that the target head is indeed a head, and this value is compared against the first threshold to filter the region image of the target head. This arrangement establishes the credibility of the target head before the hair loss type is later determined, avoids region images of wrongly labeled target heads introduced during the labeling processing, further improves the accuracy of determining the hair loss type, and also improves the accuracy of the hair mask image obtained by the subsequent segmentation of the target head.
The segmentation processing module 302 is configured to perform segmentation processing on the region image of the target head, so as to obtain a mask image of hair corresponding to the target head.
Specifically, according to an embodiment of the present invention, the hair loss type determining device 300 further includes a judging module which, before the step of segmenting the region image of the target head to obtain the mask image of the hair corresponding to the target head, is configured to:
acquire images of non-hair-loss heads and images of hair-loss heads, and construct a second classification model by taking the images of non-hair-loss heads and the images of hair-loss heads as training samples;
and classify the region image of the target head by using the second classification model, and judge whether the target head has hair loss according to the classification result and a second threshold value.
According to a specific implementation of the embodiment of the present invention, images of non-hair-loss heads and images of hair-loss heads may be screened from the head region images obtained by labeling the plurality of training images, and used as training samples to train the second classification model. The trained second classification model then classifies the region image of the target head, whether the target head has hair loss is judged according to the classification result and the second threshold value, and if so, the subsequent hair loss type determination proceeds.
Further, according to an embodiment of the present invention, the segmentation processing module 302 is further configured to:
acquire head region images, segmentation-mark the positions of the hair in the head region images, and train a hair segmentation model by taking the hair-marked head region images as a training set; the region image of the target head is then input into the hair segmentation model, which outputs the mask image of the hair corresponding to the target head.
The prior art mainly adopts interactive segmentation methods such as grabcut (an image segmentation algorithm); such methods are graph-based, require part of the foreground seed points to be specified manually, and are therefore prone to human error. With the above arrangement, the trained hair segmentation model outputs the mask image of the hair end to end, which significantly improves the segmentation of the hair mask image from the region image of the target head and reduces the influence of factors such as the complexity of hairstyles, the variability of hair colors, and the uncertainty of image brightness.
The hair loss type determining module 303 is configured to input the region image of the target head and the mask image of the hair into the twin network model for classification, and to determine the hair loss type corresponding to the target head according to the classification result; the twin network model is trained on region images of hair-loss heads and the mask images of the hair corresponding to those heads.
With this arrangement, the region image of the target head and the mask image of the hair are taken as two paths of input data, the twin network model performs the classification, and the hair loss type of the target head is determined according to the single classification result that is output, so that the hair loss type is confirmed more accurately.
Specifically, according to an embodiment of the present invention, the above-described hair loss type determination module is further configured to:
perform feature extraction on the region image of the target head and on the mask image of the hair respectively by using the twin network model to obtain a first feature vector corresponding to the region image of the target head and a second feature vector corresponding to the mask image of the hair, weight the first feature vector and the second feature vector, and feed the weighted result into a fully connected layer of the twin network model for classification.
With this arrangement, the feature vectors that the twin network model extracts from the two input paths are fused by weighting and then fed into the fully connected layer for classification, which can further improve the accuracy and efficiency of determining the hair loss type and thus further improve the user experience.
Preferably, according to an embodiment of the present invention, the above-mentioned alopecia type determining module is further configured to:
set weight coefficients for the first feature vector and the second feature vector respectively, and weight the first feature vector and the second feature vector according to the weight coefficients.
According to a specific implementation of the embodiment of the present invention, the mask image corresponding to the hair is considered more important to the model's output result, and the weight coefficient of the second feature vector is therefore set higher than that of the first feature vector.
Preferably, according to an embodiment of the present invention, the output layer of the twin network model includes a plurality of output nodes, each output node corresponding to one hair loss level; the hair loss type determining apparatus 300 further includes a hair loss level determining module for:
determining the hair loss grade corresponding to the target head according to the output node corresponding to the classification result.
Illustratively, during training of the twin model, the classification results are graded into hair loss levels, and the output layer is accordingly given a plurality of output nodes, each output node corresponding to one hair loss level. With this arrangement, the degree of hair loss can be judged accurately. The method can be applied in dermatology and endocrinology departments, hair loss outpatient clinics, and hair transplant and hair care institutions, to assist doctors in judging the course of hair loss; it can also be applied to the review and categorization of hair-loss-related product listings on e-commerce platforms and to personalized product recommendation; and it can be applied to site selection for offline anti-hair-loss product advertising, enabling targeted recommendation and sales.
According to the technical scheme of the embodiment of the invention, a target image is acquired and the target head in it is labeled, obtaining a region image of the target head; the region image of the target head is segmented, obtaining a mask image of the hair corresponding to the target head; and the region image of the target head and the mask image of the hair are input into a twin network model for classification, the hair loss type corresponding to the target head being determined according to the classification result, where the twin network model is trained on region images of hair-loss heads and the mask images of the hair corresponding to those heads. These technical means overcome the problems of the existing methods, which rely on manual calculation and judgment or on interactive segmentation detection and therefore consume a large amount of human resources, judge the hair loss type and degree with low accuracy, provide a poor user experience, place high requirements on the examined image, and determine the hair loss type inefficiently. The accuracy and efficiency of determining the hair loss type are thereby improved, the requirements on the target image are reduced, human resource costs are saved, and the user experience is improved.
Fig. 4 shows an exemplary system architecture 400 to which the hair loss type determination method or the hair loss type determination device of the embodiment of the present invention can be applied.
As shown in fig. 4, a system architecture 400 may include terminal devices 401, 402, 403, a network 404, and a server 405 (this architecture is merely an example; the components contained in a particular architecture may be tailored to the specific application case). The network 404 serves as a medium providing communication links between the terminal devices 401, 402, 403 and the server 405. The network 404 may include various connection types, such as wired or wireless communication links, or fiber optic cables.
A user may interact with the server 405 via the network 404 using the terminal devices 401, 402, 403 to receive or send messages or the like. Various communication client applications, such as an image processing class application, a hair loss type determination class application, etc., may be installed on the terminal devices 401, 402, 403 (for example only).
The terminal devices 401, 402, 403 may be various electronic devices having a display screen and supporting web browsing, including but not limited to smartphones, tablets, laptop and desktop computers, and the like.
The server 405 may be a server providing various services, such as a background management server (by way of example only) providing support for shopping-type websites browsed by users with the terminal devices 401, 402, 403. The background management server may analyze received data such as the target image and feed the processing result (e.g., the classification result or the hair loss type, merely as an example) back to the terminal device.
It should be noted that, the method for determining the type of alopecia provided in the embodiment of the present invention is generally performed by the server 405, and accordingly, the device for determining the type of alopecia is generally disposed in the server 405.
It should be understood that the number of terminal devices, networks and servers in fig. 4 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
Referring now to FIG. 5, there is illustrated a schematic diagram of a computer system 500 suitable for use in implementing a terminal device or server in accordance with an embodiment of the present invention. The terminal device or server shown in fig. 5 is only an example, and should not impose any limitation on the functions and scope of use of the embodiments of the present invention.
As shown in fig. 5, the computer system 500 includes a Central Processing Unit (CPU) 501, which can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 502 or a program loaded from a storage section 508 into a Random Access Memory (RAM) 503. In the RAM 503, various programs and data required for the operation of the system 500 are also stored. The CPU 501, ROM 502, and RAM 503 are connected to each other through a bus 504. An input/output (I/O) interface 505 is also connected to bus 504.
The following components are connected to the I/O interface 505: an input section 506 including a keyboard, a mouse, and the like; an output section 507 including a cathode ray tube (CRT), a liquid crystal display (LCD), a speaker, and the like; a storage section 508 including a hard disk and the like; and a communication section 509 including a network interface card such as a LAN card or a modem. The communication section 509 performs communication processing via a network such as the Internet. A drive 510 is also connected to the I/O interface 505 as needed. A removable medium 511, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 510 as needed so that a computer program read from it is installed into the storage section 508 as needed.
In particular, according to embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method shown in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication portion 509, and/or installed from the removable media 511. The above-described functions defined in the system of the present invention are performed when the computer program is executed by a Central Processing Unit (CPU) 501.
The computer readable medium shown in the present invention may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present invention, however, the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules involved in the embodiments of the present invention may be implemented in software or in hardware. The described modules may also be provided in a processor, for example described as: a processor including a labeling processing module, a segmentation processing module, and a hair loss type determination module. The names of these modules do not, in some cases, limit the modules themselves; for example, the labeling processing module may also be described as "a module for acquiring a target image and labeling the target head in the target image to obtain a region image of the target head". A minimal sketch of this arrangement follows.
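As a concrete illustration of the three-module arrangement, the sketch below organizes the labeling, segmentation, and hair loss type determination steps as separate classes. This is only a hedged sketch, assuming Python: the class names, the placeholder crop and threshold logic, and the twin_network callable are hypothetical illustrations, not the claimed implementation.

    # Hypothetical sketch of the three-module processor arrangement.
    # The placeholder logic below stands in for real detection and
    # segmentation models; only the data flow mirrors the described design.
    import numpy as np

    class LabelingProcessingModule:
        """Acquires a target image and labels the target head region."""
        def label_head(self, image: np.ndarray) -> np.ndarray:
            # Placeholder: a real implementation would run a head detector
            # and crop its bounding box; here we assume a centered head.
            h, w = image.shape[:2]
            return image[: h // 2, w // 4 : 3 * w // 4]

    class SegmentationProcessingModule:
        """Segments the head region image into a hair mask image."""
        def segment_hair(self, head_region: np.ndarray) -> np.ndarray:
            # Placeholder: a real implementation would use a segmentation
            # network; dark pixels stand in for hair in this sketch.
            gray = head_region.mean(axis=-1)
            return (gray < 80).astype(np.uint8)

    class HairLossTypeDeterminationModule:
        """Classifies the hair loss type from the region and mask images."""
        def __init__(self, twin_network):
            self.twin_network = twin_network  # callable returning class scores

        def determine_type(self, head_region, hair_mask):
            scores = self.twin_network(head_region, hair_mask)
            return int(np.argmax(scores))  # index of the predicted type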
As another aspect, the present invention also provides a computer-readable medium, which may be contained in the apparatus described in the above embodiments, or may exist separately without being assembled into the apparatus. The computer-readable medium carries one or more programs which, when executed by a device, cause the device to: acquire a target image, and label the target head in the target image to obtain a region image of the target head; segment the region image of the target head to obtain a mask image of the hair corresponding to the target head; input the region image of the target head and the mask image of the hair into a twin network model for classification processing, and determine the hair loss type corresponding to the target head according to the classification result; the twin network model is obtained through training on region images of hair loss heads and mask images of the hair corresponding to the hair loss heads.
According to the technical scheme of the embodiments of the present invention, a target image is acquired and the target head in it is labeled to obtain a region image of the target head; the region image is segmented to obtain a mask image of the hair corresponding to the target head; and the region image and the hair mask image are input into a twin network model for classification processing, with the hair loss type corresponding to the target head determined according to the classification result, the twin network model having been trained on region images of hair loss heads and the corresponding hair mask images. This overcomes the drawbacks of existing methods, which determine the hair loss type manually or through interactive segmentation detection and therefore consume substantial human resources, judge the hair loss type and degree with low accuracy, impose high requirements on the detected image, determine the hair loss type inefficiently, and provide a poor user experience. The described scheme improves the accuracy and efficiency of determining the hair loss type, reduces the requirements on the target image, saves human resource cost, and improves user experience.
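To make the classification stage concrete, the following is a minimal PyTorch sketch of a twin network in the spirit of the described scheme: two branches of identical structure extract a first feature vector from the head region image and a second feature vector from the hair mask image, the vectors are combined with weight coefficients, and a fully connected layer performs the classification. The branch layers, the coefficients alpha and beta, the input size, and the number of classes are all illustrative assumptions, not values taken from the embodiment.

    # Minimal PyTorch sketch of the twin-network classification stage.
    # Architecture details (branch layers, weight coefficients alpha/beta,
    # number of hair loss classes) are illustrative assumptions.
    import torch
    import torch.nn as nn

    class TwinHairLossClassifier(nn.Module):
        def __init__(self, num_classes: int = 4, alpha: float = 0.6, beta: float = 0.4):
            super().__init__()
            # Two convolutional branches of identical structure.
            def branch():
                return nn.Sequential(
                    nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
                    nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                )
            self.region_branch = branch()  # extracts the first feature vector
            self.mask_branch = branch()    # extracts the second feature vector
            self.alpha, self.beta = alpha, beta
            self.fc = nn.Linear(32, num_classes)  # fully connected classifier

        def forward(self, head_region: torch.Tensor, hair_mask: torch.Tensor):
            f1 = self.region_branch(head_region)      # feature vector of region image
            f2 = self.mask_branch(hair_mask)          # feature vector of mask image
            fused = self.alpha * f1 + self.beta * f2  # weighted combination
            return self.fc(fused)                     # scores per hair loss type

    # Usage: a batch of RGB head crops and 3-channel hair masks, both 128x128.
    model = TwinHairLossClassifier()
    region = torch.randn(1, 3, 128, 128)
    mask = torch.randn(1, 3, 128, 128)
    print(model(region, mask).shape)  # torch.Size([1, 4])

In such a setup, the index of the largest output node gives the predicted class, which mirrors the described mapping of output nodes to hair loss types or levels.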
The above embodiments do not limit the scope of the present invention. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations, and alternatives can occur depending upon design requirements and other factors. Any modifications, equivalent substitutions, and improvements made within the spirit and principles of the present invention shall be included in the scope of the present invention.

Claims (10)

1. A method for determining a type of hair loss, comprising:
acquiring a target image, and performing labeling processing on a target head in the target image to obtain a region image of the target head;
performing segmentation processing on the region image of the target head to obtain a mask image of hair corresponding to the target head;
inputting the region image of the target head and the mask image of the hair into a twin network model for classification processing, and determining the hair loss type corresponding to the target head according to the classification result; wherein the twin network model is obtained through training on region images of hair loss heads and mask images of the hair corresponding to the hair loss heads.
2. The method according to claim 1, wherein after the step of labeling the target head in the target image to obtain the region image of the target head, the method further comprises:
acquiring a plurality of training images, respectively labeling the heads in the plurality of training images to obtain a plurality of binarized images composed of head region images and non-head region images, and constructing a first binary classification model based on the plurality of binarized images;
and filtering the region image of the target head by using the first binary classification model and a first threshold value.
3. The method according to claim 1, wherein before the step of performing segmentation processing on the region image of the target head to obtain the mask image of the hair corresponding to the target head, the method further comprises:
acquiring images of hair loss heads and images of non-hair-loss heads, and constructing a second binary classification model by taking the images of hair loss heads and the images of non-hair-loss heads as training samples;
and classifying the region image of the target head by using the second binary classification model, and determining that the target head exhibits hair loss according to the classification result and a second threshold value.
4. The method for determining a type of hair loss according to claim 1, wherein the step of inputting the region image of the target head and the mask image of the hair into a twin network model for classification processing further comprises:
performing feature extraction on the region image of the target head and the mask image of the hair respectively by using the twin network model to obtain a first feature vector corresponding to the region image of the target head and a second feature vector corresponding to the mask image of the hair, performing weighting processing on the first feature vector and the second feature vector, and then feeding the weighted feature vectors into a fully connected layer of the twin network model for classification processing.
5. The method of determining a type of hair loss according to claim 4, wherein the step of weighting the first feature vector and the second feature vector comprises:
setting weight coefficients for the first feature vector and the second feature vector respectively, and performing weighting processing on the first feature vector and the second feature vector according to the weight coefficients.
6. The method of claim 1, wherein the output layer of the twin network model comprises a plurality of output nodes, each output node corresponding to a level of hair loss; the method further comprises the steps of:
and determining the hair loss level corresponding to the target head according to the output node corresponding to the classification result.
7. A hair loss type determining apparatus, characterized by comprising:
the labeling processing module is used for acquiring a target image, and labeling the target head in the target image to obtain a region image of the target head;
the segmentation processing module is used for carrying out segmentation processing on the region image of the target head to obtain a mask image of the hair corresponding to the target head;
the hair loss type determination module is used for inputting the region image of the target head and the mask image of the hair into a twin network model for classification processing, and determining the hair loss type corresponding to the target head according to the classification result; wherein the twin network model is obtained through training on region images of hair loss heads and mask images of the hair corresponding to the hair loss heads.
8. The hair loss type determination device of claim 7, wherein the hair loss type determination module is further configured to:
and respectively carrying out feature extraction on the region image of the target head and the mask image of the hair by utilizing a twin network model to obtain a first feature vector corresponding to the region image of the target head and a second feature vector corresponding to the mask image of the hair, carrying out weighting treatment on the first feature vector and the second feature vector, and then placing the first feature vector and the second feature vector into a full-connection layer of the twin network model for classification treatment.
9. An electronic device, comprising:
one or more processors;
a storage device for storing one or more programs,
which, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1 to 6.
10. A computer readable medium, on which a computer program is stored, characterized in that the program, when executed by a processor, implements the method according to any one of claims 1 to 6.
CN202011364134.3A (priority date 2020-11-27, filing date 2020-11-27): Method and device for determining hair loss type. Status: Active. Granted as CN113762305B.

Priority Applications (1)

CN202011364134.3A (priority date 2020-11-27, filing date 2020-11-27): Method and device for determining hair loss type

Publications (2)

CN113762305A, published 2021-12-07
CN113762305B, granted 2024-04-16

Family

Family ID: 78786097 (one family application, CN202011364134.3A, CN; status Active, granted as CN113762305B)

Families Citing this family (1)

CN113222989A (priority 2021-06-09, published 2021-08-06), 联仁健康医疗大数据科技股份有限公司: Image grading method and device, storage medium and electronic equipment (cited by examiner)

Family Cites Families (3)

US20190294975A1 (priority 2018-03-21, published 2019-09-26), Swim.IT Inc: Predicting using digital twins
US10846593B2 (priority 2018-04-27, granted 2020-11-24), Qualcomm Technologies Inc: System and method for siamese instance search tracker with a recurrent neural network
US10713544B2 (priority 2018-09-14, granted 2020-07-14), International Business Machines Corporation: Identification and/or verification by a consensus network using sparse parametric representations of biometric images

Patent Citations (4)

KR20200070409A (priority 2018-09-30, published 2020-06-17), Plex-VR Digital Technology (Shanghai) Co., Ltd.: Human hairstyle creation method based on multiple feature search and transformation (cited by examiner)
CN109903257A (priority 2019-03-08, published 2019-06-18), 上海大学 (Shanghai University): Virtual hair dyeing method based on image semantic segmentation (cited by examiner)
CN111275703A (priority 2020-02-27, published 2020-06-12), 腾讯科技(深圳)有限公司 (Tencent Technology (Shenzhen) Co., Ltd.): Image detection method and device, computer equipment and storage medium (cited by examiner)
CN111339975A (priority 2020-03-03, published 2020-06-26), 华东理工大学 (East China University of Science and Technology): Target detection, identification and tracking method based on central scale prediction and twin neural network (cited by examiner)

Non-Patent Citations (4)

Hiba Alqasir et al.: Mask-guided Image Classification with Siamese Networks, 15th International Conference on Computer Vision Theory and Applications, 2020-01-31, pp. 536-543 (cited by examiner)
袁汉钦, 陈栋, 杨传栋, 王昱翔, 刘桢: A multi-class missile-borne image target segmentation algorithm based on mask combination (一种基于掩膜组合的多类弹载图像目标分割算法), 舰船电子工程 (Ship Electronic Engineering), 2020-06-20, no. 06, pp. 112-117 (cited by examiner)
张安琪: An image recognition model based on Siamese convolutional neural networks and the triplet loss function (基于孪生卷积神经网络与三元组损失函数的图像识别模型), 电子制作 (Electronics Production), 2018-11-01, no. 21, pp. 22, 51-52 (cited by examiner)
王博威, 潘宗序, 胡玉新, 马闻: SAR target recognition based on Siamese CNN with few samples (少量样本下基于孪生CNN的SAR目标识别), 雷达科学与技术 (Radar Science and Technology), 2019-12-15, no. 06, pp. 17-23, 29 (cited by examiner)

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant