CN112419249A - Special clothing picture conversion method, terminal device and storage medium - Google Patents


Info

Publication number
CN112419249A
Authority
CN
China
Prior art keywords
picture
special
self
special clothing
pictures
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011263797.6A
Other languages
Chinese (zh)
Other versions
CN112419249B (en)
Inventor
黄仁裕
高志鹏
赵建强
姚灿荣
曹荣鑫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xiamen Meiya Pico Information Co Ltd
Original Assignee
Xiamen Meiya Pico Information Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xiamen Meiya Pico Information Co Ltd filed Critical Xiamen Meiya Pico Information Co Ltd
Priority to CN202011263797.6A priority Critical patent/CN112419249B/en
Publication of CN112419249A publication Critical patent/CN112419249A/en
Application granted granted Critical
Publication of CN112419249B publication Critical patent/CN112419249B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G — PHYSICS
        • G06 — COMPUTING; CALCULATING OR COUNTING
            • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
                • G06T5/00 Image enhancement or restoration
                    • G06T5/70 Denoising; Smoothing
                • G06T7/00 Image analysis
                    • G06T7/0002 Inspection of images, e.g. flaw detection
                        • G06T7/0004 Industrial image inspection
                    • G06T7/10 Segmentation; Edge detection
                        • G06T7/11 Region-based segmentation
                    • G06T7/90 Determination of colour characteristics
                • G06T2207/00 Indexing scheme for image analysis or image enhancement
                    • G06T2207/20 Special algorithmic details
                        • G06T2207/20081 Training; Learning
                        • G06T2207/20084 Artificial neural networks [ANN]
                    • G06T2207/30 Subject of image; Context of image processing
                        • G06T2207/30108 Industrial image inspection
                        • G06T2207/30124 Fabrics; Textile; Paper

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to a special clothing picture conversion method, a terminal device and a storage medium, wherein the method comprises the following steps: collecting pictures containing special clothing to form a training set; performing instance segmentation, Gaussian smoothing, a Hadamard product operation and a color transformation on the pictures in the training set, and then superimposing each color-transformed picture on its corresponding original picture; taking all the superimposed pictures as the input of a self-coding network, and training the self-coding network so that the difference between its output and the corresponding original picture in the training set is minimized; and converting the picture of the special clothing to be identified through the trained self-coding network before identifying the special clothing. By training the self-coding network, the method prevents color changes in a picture, such as changes in illumination, contrast or hue, from affecting the identification of special clothing in the picture, and improves the overall recognition rate of special clothing identification.

Description

Special clothing picture conversion method, terminal device and storage medium
Technical Field
The invention relates to the field of image processing, in particular to a special clothing picture conversion method, terminal equipment and a storage medium.
Background
Although special clothing identification methods based on convolutional neural networks have improved significantly, the network models are trained on pictures of simple scenes downloaded from the Internet, and identification accuracy is challenged by changes in illumination, viewing angle and clothing shape in complex scenes.
Disclosure of Invention
In order to solve the above problems, the present invention provides a special clothing picture conversion method, a terminal device and a storage medium.
The specific scheme is as follows:
A special clothing picture conversion method comprises the following steps:
s1: collecting pictures containing special clothing to form a training set;
s2: performing instance segmentation on the special clothing region of each picture in the training set, and extracting the instance-segmented special clothing region picture;
s3: performing Gaussian smoothing on the instance-segmented special clothing region picture;
s4: performing a Hadamard product operation between each Gaussian-smoothed picture and the corresponding original picture in the training set;
s5: performing a color transformation on the picture after the Hadamard product operation, and performing an image superposition operation between the color-transformed picture and the corresponding original picture in the training set;
s6: taking all the superimposed pictures as the input of a self-coding network, and training the self-coding network so that the difference between its output and the corresponding original picture in the training set is minimized;
s7: converting the picture of the special clothing to be identified through the trained self-coding network, and then identifying the special clothing.
Further, the instance segmentation specifically comprises the following steps:
s21: performing weak localization of the special clothing region on the picture;
s22: performing semantic segmentation of the human body region in the image through an image semantic segmentation algorithm;
s23: performing an intersection-over-union (IOU) calculation between the weakly localized region and the semantic segmentation region, and when the IOU is greater than a threshold, judging the semantic segmentation region to be the special clothing region picture to be extracted.
Further, weak localization is performed using the class activation map (CAM) technique.
Further, the IOU calculation formula is:
IOU = |u1 ∩ u2| / |u1 ∪ u2|
where u1 is the weakly localized region and u2 is the semantic segmentation region.
Further, color transformation includes transforming illumination, contrast, or hue.
Further, the specific formula of the image superposition operation is:
I_add = I_c * I_mask + I_s * (1 - I_mask)
where I_add is the superimposed picture, I_c is the picture after color transformation, I_mask is the instance segmentation picture, and I_s is the corresponding original picture in the training set.
Further, the loss function of the self-coding network is an L2 loss function.
A special clothing picture conversion terminal device comprises a processor, a memory and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the method of the embodiment of the invention described above.
A computer-readable storage medium stores a computer program which, when executed by a processor, carries out the steps of the method of the embodiment of the invention described above.
By adopting the above technical scheme and training the self-coding network, the method prevents color changes in a picture, such as changes in illumination, contrast or hue, from affecting the identification of special clothing in the picture, and improves the overall recognition rate of special clothing identification.
Drawings
Fig. 1 is a flowchart illustrating a first embodiment of the present invention.
Fig. 2 is a schematic diagram of the network structure of the self-coding network in this embodiment.
Detailed Description
To further illustrate the various embodiments, the invention provides the accompanying drawings. The accompanying drawings, which are incorporated in and constitute a part of this disclosure, illustrate embodiments of the invention and, together with the description, serve to explain the principles of the embodiments. Those skilled in the art will appreciate still other possible embodiments and advantages of the present invention with reference to these figures.
The invention will now be further described with reference to the accompanying drawings and detailed description.
The first embodiment is as follows:
An embodiment of the invention provides a special clothing picture conversion method. As shown in Fig. 1, the method comprises the following steps:
S1: collecting pictures containing special clothing to form a training set.
In this embodiment, 10000 pictures of special clothing in different scenes and of different categories are collected from the Internet. The special clothing has the following categories: military and police uniforms, including army, air force, navy and rocket force uniforms and public security uniforms; religious clothing; government-specific clothing; and the like.
S2: performing instance segmentation on the special clothing region of each picture in the training set, and extracting the instance-segmented special clothing region picture.
Pixel-level annotation for instance segmentation is a heavy workload. Although the boundary of a garment can be annotated manually and the region enclosed by the boundary regarded as the region where the garment is located, human body postures vary, and annotating the trunk and limbs takes a great deal of time. To alleviate the manual annotation burden, the pixel-level semantic annotation problem can be solved quickly by combining a weak target localization method with a human body semantic segmentation algorithm.
In this embodiment, the instance segmentation specifically includes the following steps:
S21: performing weak localization of the special clothing region on the picture.
Weak localization locates the special clothing in the image using only image-level labels. Unlike target detection, where the network requires annotated position information for the special clothing, the weak localization method only requires the category of the image to be annotated.
In this embodiment, weak localization is performed using the class activation map (CAM) technique. The CAM network consists mostly of convolutional layers; a global average pooling layer is placed just before the output layer (softmax, for classification), and its output is used as the input feature of the fully connected layer that performs the classification. With this simple connection structure, important areas in the picture can be marked by mapping the output layer weights back onto the convolutional feature maps. The global average pooling layer outputs the spatial average of each feature map of the last convolutional layer, and a weighted sum of these values generates the final output. Equivalently, the CAM is obtained as a weighted sum of the last convolutional layer's feature maps, which highlights the salient clothing region.
Denoting the feature maps before the global average pooling by M1, M2, …, Mn and the corresponding output-layer weights by w1, w2, …, wn, the CAM is calculated by the following formula:
CAM = w1*M1 + w2*M2 + … + wn*Mn
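As a sketch of the CAM computation above (illustrative code, not from the patent), the weighted sum of feature maps can be written with NumPy; the ReLU and normalization at the end are added assumptions so the map can be thresholded into a weak localization region:

```python
import numpy as np

def class_activation_map(feature_maps, fc_weights):
    """Weighted sum of the last convolutional layer's feature maps.

    feature_maps: array of shape (n, H, W) -- the feature maps M_1..M_n
    fc_weights:   array of shape (n,)      -- the output-layer weights
                  w_1..w_n for the target class
    """
    # CAM = w_1*M_1 + w_2*M_2 + ... + w_n*M_n
    cam = np.tensordot(fc_weights, feature_maps, axes=(0, 0))
    # Assumed post-processing: clip negatives and scale to [0, 1]
    # so the map can be thresholded into a weak localization mask.
    cam = np.maximum(cam, 0)
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam
```

Thresholding the returned map (e.g. at half its maximum) then yields the weakly localized clothing region used in step S23.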
S22: performing semantic segmentation of the human body region in the image through an image semantic segmentation algorithm.
Since the open-source image instance segmentation algorithm does not use a special garment as an output category, in the embodiment, a human body wearing the special garment is preferentially considered as the output category, and the human body region is segmented by using the image semantic segmentation algorithm.
S23: performing an intersection-over-union (IOU) calculation between the weakly localized region and the semantic segmentation region; when the IOU is greater than a threshold, the semantic segmentation region is judged to be the special clothing region picture to be extracted; otherwise, the semantic segmentation region is discarded.
The IOU is calculated as:
IOU = |u1 ∩ u2| / |u1 ∪ u2|
where u1 is the weakly localized region and u2 is the semantic segmentation region.
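The IOU test between the two masks can be sketched as follows. This is illustrative code, and the 0.5 default threshold is an assumed value, since the patent does not fix a threshold:

```python
import numpy as np

def mask_iou(weak_mask, seg_mask):
    """IOU between the weakly localized region u1 and the
    semantic-segmentation region u2, given as boolean masks."""
    inter = np.logical_and(weak_mask, seg_mask).sum()
    union = np.logical_or(weak_mask, seg_mask).sum()
    return inter / union if union > 0 else 0.0

def accept_region(weak_mask, seg_mask, threshold=0.5):
    """Keep the segmented region only when the overlap is large enough
    (step S23); the threshold here is an assumption for illustration."""
    return mask_iou(weak_mask, seg_mask) > threshold
```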
S3: performing Gaussian smoothing on the instance-segmented special clothing region picture.
The gaussian smoothing process is used to eliminate the edge effect of the segmented region.
S4: performing a Hadamard product operation between each Gaussian-smoothed picture and the corresponding original picture in the training set.
The background of the picture after the Hadamard product operation is black, and the foreground of the picture is a human body region picture after Gaussian smoothing processing.
S5: performing a color transformation on the picture after the Hadamard product operation, and performing an image superposition operation between the color-transformed picture and the corresponding original picture in the training set.
The color transformation may be a transformation of illumination, contrast or hue, among others, and is not limited here. The color change adjusts the illumination, contrast or hue of the picture so that the subsequently trained self-coding network can convert pictures in which special clothing cannot be identified, because of abnormal illumination, contrast or hue, into pictures with normal illumination, contrast or hue in which it can be identified.
The specific formula of the image superposition operation is:
I_add = I_c * I_mask + I_s * (1 - I_mask)
where I_add is the superimposed picture, I_c is the picture after color transformation, I_mask is the instance segmentation picture, and I_s is the corresponding original picture in the training set.
Steps S3 to S5 constitute the transfer operation on the segmented region picture. Through these steps, the superimposed picture becomes more real and natural, retaining details such as texture, edges and image style similar to the original picture, so that the self-coding network can be trained better.
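A minimal NumPy sketch of this transfer operation (steps S3 to S5) might look as follows. The Gaussian sigma, the separable-filter implementation and the `color_fn` callback are assumptions for illustration, not the patent's implementation:

```python
import numpy as np

def gaussian_kernel1d(sigma, radius):
    # Normalized 1-D Gaussian kernel of length 2*radius + 1
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    return k / k.sum()

def smooth_mask(mask, sigma=2.0):
    """Step S3: Gaussian-smooth a binary mask to remove the hard edge of
    the segmented region (mask must be larger than the kernel)."""
    k = gaussian_kernel1d(sigma, radius=int(3 * sigma))
    blurred = np.apply_along_axis(
        lambda col: np.convolve(col, k, mode="same"), 0, mask.astype(float))
    return np.apply_along_axis(
        lambda row: np.convolve(row, k, mode="same"), 1, blurred)

def transfer(original, mask, color_fn):
    """Steps S3-S5: smooth the mask, take the Hadamard product with the
    original picture, colour-transform the foreground, then superimpose:
        I_add = I_c * I_mask + I_s * (1 - I_mask)
    """
    i_mask = smooth_mask(mask)[..., None]   # soft mask, broadcast over channels
    foreground = original * i_mask          # step S4: Hadamard product
    i_c = color_fn(foreground)              # step S5: colour change (e.g. darken)
    return i_c * i_mask + original * (1.0 - i_mask)
```

Because the mask is soft after smoothing, the colour-changed clothing blends into the untouched background without a hard seam, which is what makes the superimposed picture look natural.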
S6: taking all the superimposed pictures as the input of the self-coding network, and training the self-coding network so that the difference between its output and the corresponding original picture in the training set is minimized.
The network structure of the self-coding network is shown in fig. 2, wherein c1 represents a convolution layer, the convolution kernel is 1 × 1, and the step size is 2; c2 is also a convolution layer with a convolution kernel of 4 × 4; dc1 is a deconvolution layer, the convolution kernel is 1 × 1, and the step length is 2; dc2 is also a deconvolution layer with a convolution kernel of 4 x 4. The network structure adopts a characteristic pyramid mode during encoding and decoding, so that the generated image is closer to the original image.
The loss function of the self-coding network is an L2 loss function, calculated as:
L2 = Σi ||Îi − Ii||²
where Îi is the picture output by the self-coding network for the i-th superimposed input and Ii is the corresponding original picture in the training set.
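A direct reading of this loss as code could be the following; using the mean rather than the sum is an assumed normalization for illustration:

```python
import numpy as np

def l2_loss(reconstructed, original):
    """Mean squared (L2) reconstruction loss between the self-coding
    network's output and the corresponding original training picture."""
    diff = reconstructed.astype(float) - original.astype(float)
    return np.mean(diff ** 2)
```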
S7: converting the picture of the special clothing to be identified through the trained self-coding network, and then identifying the special clothing.
Further, to ensure the quality of the pictures converted by the self-coding network for special clothing identification, this embodiment also includes judging the quality of the pictures output by the self-coding network. The quality judgment uses the parameterized model GMM-GIQA.
This parameterized quality judgment method uses a Gaussian mixture model (GMM) to fit the distribution of real data at the feature level. For a picture to be evaluated, a picture feature x is extracted using a traditional machine learning algorithm or a convolutional neural network, and the feature is then input into the following formula; the output probability is the quality of the picture.
p(x) = Σi wi · g(x; μi, σi)
where x is the picture feature, g(μi, σi) denotes the i-th Gaussian model with mean μi and variance σi, and wi is the weight of the i-th component of the mixture; the weights of all Gaussian components sum to 1.
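A sketch of this GMM scoring step follows; assuming a scalar (isotropic) variance per component is one possible reading of the formula, and the code is illustrative rather than the GMM-GIQA implementation:

```python
import numpy as np

def gmm_quality(x, weights, means, variances):
    """Likelihood of feature x under a mixture of isotropic Gaussians
    fitted to real-image features: p(x) = sum_i w_i * g(x; mu_i, sigma_i^2).
    Higher likelihood = feature closer to real data = better quality."""
    x = np.asarray(x, float)
    d = x.size
    score = 0.0
    for w, mu, var in zip(weights, means, variances):
        norm = (2 * np.pi * var) ** (-d / 2)          # Gaussian normalizer
        score += w * norm * np.exp(-np.sum((x - mu) ** 2) / (2 * var))
    return score
```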
According to the embodiment of the invention, training the self-coding network prevents color changes in a picture, such as changes in illumination, contrast or hue, from affecting the identification of special clothing in the picture, and improves the overall recognition rate of special clothing identification. The method reduces the cost of image synthesis, accelerates sample synthesis, increases the proportion of special clothing samples in complex scenes, alleviates the long-tail distribution problem of the training samples, and improves the accuracy of the special clothing identification network. Using the feature-pyramid self-coding network combined with the human body clothing mask method improves the accuracy of the generated special clothing images; the generated images are natural and real, so the clothing category is changed while the other areas of the image keep their original style.
Example two:
the invention also provides special clothing image conversion terminal equipment, which comprises a memory, a processor and a computer program which is stored in the memory and can run on the processor, wherein the processor executes the computer program to realize the steps of the method embodiment of the first embodiment of the invention.
Further, as an executable scheme, the special clothing image conversion terminal device may be a desktop computer, a notebook, a palm computer, a cloud server, or other computing devices. The special clothing image conversion terminal equipment can comprise, but is not limited to, a processor and a memory. It is understood by those skilled in the art that the above-mentioned constituent structure of the special clothing picture conversion terminal device is only an example of the special clothing picture conversion terminal device, and does not constitute a limitation on the special clothing picture conversion terminal device, and may include more or less components than the above, or combine some components, or different components, for example, the special clothing picture conversion terminal device may further include an input/output device, a network access device, a bus, and the like, which is not limited in this embodiment of the present invention.
Further, as an executable solution, the Processor may be a Central Processing Unit (CPU), other general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, a discrete Gate or transistor logic device, a discrete hardware component, and the like. The general processor can be a microprocessor or the processor can be any conventional processor, and the processor is a control center of the special clothing picture conversion terminal device and is connected with each part of the whole special clothing picture conversion terminal device by various interfaces and lines.
The memory can be used for storing the computer program and/or the modules, and the processor realizes the various functions of the special clothing picture conversion terminal device by running or executing the computer program and/or the modules stored in the memory and calling the data stored in the memory. The memory can mainly comprise a program storage area and a data storage area, wherein the program storage area can store an operating system and an application program required by at least one function; the data storage area can store data created according to the use of the device, and the like. In addition, the memory may include high-speed random access memory, and may also include non-volatile memory, such as a hard disk, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), at least one magnetic disk storage device, a Flash memory device, or other non-volatile solid-state storage device.
The invention also provides a computer-readable storage medium, in which a computer program is stored, which, when being executed by a processor, carries out the steps of the above-mentioned method of an embodiment of the invention.
The module/unit integrated with the special clothing image conversion terminal device can be stored in a computer readable storage medium if the module/unit is realized in the form of a software functional unit and sold or used as an independent product. Based on such understanding, all or part of the flow of the method according to the embodiments of the present invention may also be implemented by a computer program, which may be stored in a computer-readable storage medium, and when the computer program is executed by a processor, the steps of the method embodiments may be implemented. Wherein the computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form, etc. The computer-readable medium may include: any entity or device capable of carrying the computer program code, recording medium, usb disk, removable hard disk, magnetic disk, optical disk, computer Memory, Read-Only Memory (ROM), Random Access Memory (RAM), software distribution medium, and the like.
While the invention has been particularly shown and described with reference to a preferred embodiment, it will be understood by those skilled in the art that various changes in form and detail may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (9)

1. A special clothing picture conversion method is characterized by comprising the following steps:
s1: collecting pictures containing special clothing to form a training set;
s2: performing instance segmentation on the special clothing region of each picture in the training set, and extracting the instance-segmented special clothing region picture;
s3: performing Gaussian smoothing on the instance-segmented special clothing region picture;
s4: performing a Hadamard product operation between each Gaussian-smoothed picture and the corresponding original picture in the training set;
s5: performing a color transformation on the picture after the Hadamard product operation, and performing an image superposition operation between the color-transformed picture and the corresponding original picture in the training set;
s6: taking all the superimposed pictures as the input of a self-coding network, and training the self-coding network so that the difference between its output and the corresponding original picture in the training set is minimized;
s7: converting the picture of the special clothing to be identified through the trained self-coding network, and then identifying the special clothing.
2. The special clothing picture conversion method according to claim 1, characterized in that the instance segmentation specifically comprises the following steps:
s21: performing weak localization of the special clothing region on the picture;
s22: performing semantic segmentation of the human body region in the image through an image semantic segmentation algorithm;
s23: performing an intersection-over-union (IOU) calculation between the weakly localized region and the semantic segmentation region, and when the IOU is greater than a threshold, judging the semantic segmentation region to be the special clothing region picture to be extracted.
3. The special clothing picture conversion method according to claim 2, characterized in that weak localization is performed using the class activation map (CAM) technique.
4. The special clothing picture conversion method according to claim 2, characterized in that the IOU calculation formula is:
IOU = |u1 ∩ u2| / |u1 ∪ u2|
where u1 is the weakly localized region and u2 is the semantic segmentation region.
5. The special clothing image conversion method according to claim 1, characterized in that: color transformation includes transforming illumination, contrast, or hue.
6. The special clothing picture conversion method according to claim 1, characterized in that the specific formula of the image superposition operation is:
I_add = I_c * I_mask + I_s * (1 - I_mask)
where I_add is the superimposed picture, I_c is the picture after color transformation, I_mask is the instance segmentation picture, and I_s is the corresponding original picture in the training set.
7. The special clothing picture conversion method according to claim 1, characterized in that the loss function of the self-coding network is an L2 loss function.
8. A special clothing picture conversion terminal device, characterized by comprising a processor, a memory and a computer program stored in the memory and running on the processor, wherein the processor, when executing the computer program, implements the steps of the method according to any one of claims 1 to 7.
9. A computer-readable storage medium storing a computer program, characterized in that the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 7.
CN202011263797.6A 2020-11-12 2020-11-12 Special clothing picture conversion method, terminal device and storage medium Active CN112419249B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011263797.6A CN112419249B (en) 2020-11-12 2020-11-12 Special clothing picture conversion method, terminal device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011263797.6A CN112419249B (en) 2020-11-12 2020-11-12 Special clothing picture conversion method, terminal device and storage medium

Publications (2)

Publication Number Publication Date
CN112419249A true CN112419249A (en) 2021-02-26
CN112419249B CN112419249B (en) 2022-09-06

Family

ID=74832218

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011263797.6A Active CN112419249B (en) 2020-11-12 2020-11-12 Special clothing picture conversion method, terminal device and storage medium

Country Status (1)

Country Link
CN (1) CN112419249B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114445632A (en) * 2022-02-08 2022-05-06 支付宝(杭州)信息技术有限公司 Picture processing method and device

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090116698A1 (en) * 2007-11-07 2009-05-07 Palo Alto Research Center Incorporated Intelligent fashion exploration based on clothes recognition
CN102521565A (en) * 2011-11-23 2012-06-27 浙江晨鹰科技有限公司 Garment identification method and system for low-resolution video
US20150269417A1 (en) * 2014-03-19 2015-09-24 Samsung Electronics Co., Ltd. Method and apparatus for processing images
CN110807367A (en) * 2019-10-05 2020-02-18 上海淡竹体育科技有限公司 Method for dynamically identifying personnel number in motion
WO2020107687A1 (en) * 2018-11-27 2020-06-04 邦鼓思电子科技(上海)有限公司 Vision-based working area boundary detection system and method, and machine equipment
CN111325806A (en) * 2020-02-18 2020-06-23 苏州科达科技股份有限公司 Clothing color recognition method, device and system based on semantic segmentation

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090116698A1 (en) * 2007-11-07 2009-05-07 Palo Alto Research Center Incorporated Intelligent fashion exploration based on clothes recognition
CN102521565A (en) * 2011-11-23 2012-06-27 浙江晨鹰科技有限公司 Garment identification method and system for low-resolution video
US20150269417A1 (en) * 2014-03-19 2015-09-24 Samsung Electronics Co., Ltd. Method and apparatus for processing images
WO2020107687A1 (en) * 2018-11-27 2020-06-04 邦鼓思电子科技(上海)有限公司 Vision-based working area boundary detection system and method, and machine equipment
CN110807367A (en) * 2019-10-05 2020-02-18 上海淡竹体育科技有限公司 Method for dynamically identifying personnel number in motion
CN111325806A (en) * 2020-02-18 2020-06-23 苏州科达科技股份有限公司 Clothing color recognition method, device and system based on semantic segmentation

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114445632A (en) * 2022-02-08 2022-05-06 支付宝(杭州)信息技术有限公司 Picture processing method and device

Also Published As

Publication number Publication date
CN112419249B (en) 2022-09-06

Similar Documents

Publication Publication Date Title
US11151363B2 (en) Expression recognition method, apparatus, electronic device, and storage medium
CN109359538B (en) Training method of convolutional neural network, gesture recognition method, device and equipment
CN109558832B (en) Human body posture detection method, device, equipment and storage medium
Flores et al. Application of convolutional neural networks for static hand gestures recognition under different invariant features
CN111814794B (en) Text detection method and device, electronic equipment and storage medium
CN112419170B (en) Training method of shielding detection model and beautifying processing method of face image
CN111597884A (en) Facial action unit identification method and device, electronic equipment and storage medium
CN113762309B (en) Object matching method, device and equipment
Yin et al. Transfgu: a top-down approach to fine-grained unsupervised semantic segmentation
CN113505768A (en) Model training method, face recognition method, electronic device and storage medium
CN110222572A (en) Tracking, device, electronic equipment and storage medium
CN113490947A (en) Detection model training method and device, detection model using method and storage medium
CN112308866A (en) Image processing method, image processing device, electronic equipment and storage medium
CN110852327A (en) Image processing method, image processing device, electronic equipment and storage medium
CN111108508A (en) Facial emotion recognition method, intelligent device and computer-readable storage medium
CN110610131B (en) Face movement unit detection method and device, electronic equipment and storage medium
Mohseni et al. Recognizing induced emotions with only one feature: a novel color histogram-based system
CN112419249B (en) Special clothing picture conversion method, terminal device and storage medium
CN116912924B (en) Target image recognition method and device
CN110659631A (en) License plate recognition method and terminal equipment
CN117252947A (en) Image processing method, image processing apparatus, computer, storage medium, and program product
Kakkar Facial expression recognition with LDPP & LTP using deep belief network
CN116363561A (en) Time sequence action positioning method, device, equipment and storage medium
CN112785601B (en) Image segmentation method, system, medium and electronic terminal
Nagashree et al. Hand gesture recognition using support vector machine

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant