CN116229175B - Image processing method, device, equipment and storage medium - Google Patents


Info

Publication number
CN116229175B
CN116229175B (application number CN202310233909.0A)
Authority
CN
China
Prior art keywords
sample
content length
subset
determining
tag content
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310233909.0A
Other languages
Chinese (zh)
Other versions
CN116229175A (en)
Inventor
郭若愚
杜宇宁
李晨霞
刘其文
赖宝华
于佃海
马艳军
Current Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202310233909.0A
Publication of CN116229175A
Application granted
Publication of CN116229175B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/764: using classification, e.g. of video objects
    • G06V 10/77: Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V 10/774: Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Processing Or Creating Images (AREA)
  • Electrically Operated Instructional Devices (AREA)

Abstract

The present disclosure provides an image processing method, apparatus, device, and storage medium, relating to the technical field of image processing, and in particular to the technical fields of computer vision, deep learning, and natural language processing. The specific implementation scheme is as follows: a sample set to be processed is acquired, where samples in the sample set include image content and tag content; a first sample and at least one second sample for joint processing are determined in the sample set; whether the first sample and the at least one second sample satisfy a joint processing exit condition is determined; and, in response to the first sample and the at least one second sample not satisfying the joint processing exit condition, image content joining and tag content joining are performed on the first sample and the at least one second sample, respectively, to obtain a target sample. According to this technical solution, joining the first sample with at least one second sample improves the diversity of the sample set and balances the number of samples across different tag content lengths.

Description

Image processing method, device, equipment and storage medium
This invention is a divisional application of the invention patent application No. 202210268760.5, entitled "Image processing method, device, equipment and storage medium", filed on March 18, 2022.
Technical Field
The present disclosure relates to the technical field of computer vision, deep learning, and natural language processing in image processing, and in particular, to an image processing method, apparatus, device, and storage medium.
Background
Data augmentation is a data preprocessing method commonly used in deep learning. It is mainly used to enlarge the samples of a data set so that the data set is as diverse as possible, giving the trained model stronger generalization ability and improving model accuracy.
To better improve the accuracy of a trained model, it is generally necessary to balance the text lengths of the samples and, if the samples are images, to improve the diversity of the image backgrounds.
Disclosure of Invention
The present disclosure provides an image processing method, apparatus, device, and storage medium.
According to a first aspect of the present disclosure, there is provided an image processing method including:
acquiring a sample set to be processed, wherein samples in the sample set comprise image content and label content;
determining a first sample and at least one second sample for joint processing in the sample set;
determining whether the first sample and the at least one second sample satisfy a joint processing exit condition;
and, in response to the first sample and the at least one second sample not satisfying the joint processing exit condition, performing image content joining and tag content joining on the first sample and the at least one second sample, respectively, to obtain a target sample.
According to a second aspect of the present disclosure, there is provided an image processing apparatus including:
an acquisition unit, configured to acquire a sample set to be processed, where samples in the sample set include image content and tag content;
a sample determining unit configured to determine a first sample and at least one second sample for joint processing in the sample set;
a determination unit configured to determine whether the first sample and the at least one second sample satisfy a joint processing exit condition;
and a joining unit, configured to perform, in response to the first sample and the at least one second sample not satisfying the joint processing exit condition, image content joining and tag content joining on the first sample and the at least one second sample, respectively, to obtain a target sample.
According to a third aspect of the present disclosure, there is provided an electronic device comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of the first aspect.
According to a fourth aspect of the present disclosure, there is provided a non-transitory computer readable storage medium storing computer instructions for causing the computer to perform the method of the first aspect.
According to a fifth aspect of the present disclosure, there is provided a computer program product comprising: a computer program stored in a readable storage medium, from which at least one processor of an electronic device can read the computer program; the at least one processor executes the computer program, causing the electronic device to perform the method of the first aspect.
According to the technical solutions of the present disclosure, the sample diversity of the sample set is improved, and the number of samples across different tag content lengths is balanced.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to be used to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following specification.
Drawings
The drawings are for a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
Fig. 1 is a schematic view of an application scenario to which embodiments of the present disclosure are applicable;
fig. 2 is a flowchart of an image processing method provided in a first embodiment of the present disclosure;
FIG. 3 is a schematic diagram of the joint processing of a first sample and a second sample;
fig. 4 is a flowchart of an image processing method provided in a second embodiment of the present disclosure;
fig. 5 is a flowchart of an image processing method provided in a third embodiment of the present disclosure;
fig. 6 is a schematic structural view of an image processing apparatus provided in an embodiment of the present disclosure;
fig. 7 is a schematic block diagram of an example electronic device used to implement embodiments of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present disclosure to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
Before introducing the background and technical solutions of the present disclosure, several terms that may be involved in the embodiments of the present disclosure are first introduced:
Artificial intelligence (Artificial Intelligence, AI): a comprehensive technology of computer science that studies the design principles and implementation methods of various intelligent machines, giving machines the abilities to perceive, reason, and decide. Artificial intelligence is a broad discipline involving a wide range of fields, such as natural language processing and machine learning/deep learning; as the technology develops, it will be applied in more fields and take on ever greater value.
Image processing: one of the important applications in the field of artificial intelligence. With the excellent performance of deep learning methods on natural-image classification, models obtained through training are increasingly used to extract and evaluate target features in images for automatic target identification.
Machine Learning (ML): a method that gives machines the ability to learn, so that they can perform functions that cannot be achieved by explicit programming. In a practical sense, machine learning trains models from data and then uses those models for prediction.
Training (training) or learning: training refers to a process in which an AI/ML model learns to perform a particular task (typically by optimizing weights in the AI/ML model).
Optical character recognition (Optical Character Recognition, OCR): a technology that converts picture information into text information that is easier to edit and store. It is now widely applied in various scenarios, such as bill recognition, bank card information recognition, and formula recognition. OCR also supports many downstream tasks, such as subtitle translation and security monitoring, and facilitates other visual tasks such as video search.
A convolutional recurrent neural network (Convolutional Recurrent Neural Network, CRNN) is mainly used for end-to-end recognition of text sequences of indefinite length. It converts text recognition into a time-dependent sequence-learning problem, i.e., image-based sequence recognition, without cutting out single characters. A CRNN consists essentially of a convolutional layer, a recurrent layer, and a transcription layer, ultimately enabling prediction of sequences of indefinite length from fixed-length inputs.
Data augmentation is a common data preprocessing method and one of the common techniques in deep learning. It is mainly used to enlarge training data sets so that they are as diverse as possible, giving the trained model stronger generalization ability and improving model accuracy. Common data augmentation methods include: illumination transformation, dithering, blurring, random cropping, horizontal/vertical flipping, rotation, scaling, shearing, translation, contrast adjustment, noise, and the like.
At present, in the technical field of image processing, data augmentation methods generally augment a single image; the background and transformation processing are relatively limited, context information between different images is not considered, and context information obtained by fusing different images cannot be used. When a model trained this way is applied to a complex background, its accuracy is generally low. In addition, the training process does not consider the length of the tag content annotated on each image: samples with shorter tag content lengths are generally more numerous, so the numbers of samples with different tag content lengths may be unbalanced, which easily leads to poor model accuracy.
The imbalance in the number of samples with different tag content lengths during training can be interpreted as follows: if 90% of the images in the sample set used to train a text recognition model contain 3 characters, the model may not achieve an ideal recognition effect when recognizing images that contain 2 characters.
In view of the above technical problems, the technical conception of the embodiments of the present disclosure is as follows. Facing the problems in the related art of monotonous sample backgrounds, limited transformations, and unbalanced numbers of samples with different tag content lengths, the inventors found that different samples can be joined: when samples include image content and tag content, joining the different image contents and joining the different tag contents yields a new sample. In this way, context information between different samples and the background information of different samples can be used when training a model, and the number of samples with different tag content lengths in the sample set can be adjusted at the same time, laying a foundation for improving the accuracy of the trained model.
Based on the above conception, the embodiments of the present disclosure provide an image processing method: a sample set to be processed is acquired, where samples in the sample set include image content and tag content; a first sample and at least one second sample for joint processing are determined in the sample set; whether the first sample and the at least one second sample satisfy a joint processing exit condition is determined; and, in response to the exit condition not being satisfied, image content joining and tag content joining are performed on the first sample and the at least one second sample, respectively, to obtain a target sample. According to this technical solution, joining the first sample with at least one second sample improves the diversity of the sample set and balances the number of samples across different tag content lengths.
It can be understood that the embodiments of the present disclosure are mainly explained for text recognition scenarios. When training a text recognition model, a data augmentation method based on image merging is provided: the image contents and tag contents of different samples are respectively merged together as a new sample, and the merged tag content length is considered during merging in order to balance the number of samples across different tag content lengths, thereby improving the precision and generalization performance of the text recognition model.
The disclosure provides an image processing method, an image processing device, image processing equipment and a storage medium, which are applied to the technical fields of computer vision, deep learning and natural language processing in image processing, so as to improve the diversity of sample sets and the balance of the number of samples with different tag content lengths.
Note that, the sample set in this embodiment is not a sample set for a specific object, and cannot reflect information of a specific object. It will be appreciated that the sample set in this embodiment is derived from the public data set.
In the technical solution of the present disclosure, the collection, storage, use, processing, transmission, provision, and disclosure of users' personal information all comply with relevant laws and regulations and do not violate public order and good customs.
Fig. 1 is a schematic view of an application scenario to which an embodiment of the present disclosure is applicable. As shown in fig. 1, the application scenario diagram may include: terminal device 11, network 12, server 13, and processing device 14.
Alternatively, the terminal device 11 may communicate with the server 13 via the network 12, so that the server 13 may acquire an image processing command of the user, thereby acquiring a sample set to be processed based on the image processing command, and transmitting it to the processing device 14. Accordingly, the processing device 14 may obtain a sample set to be processed from the server 13, and execute the technical solution of the embodiment of the disclosure.
Optionally, the processing device 14 may also directly receive an image processing instruction sent by an operator through the terminal device 11, and acquire a sample set to be processed from a database of the operator or other devices based on the image processing instruction, so as to execute the technical solution of the embodiment of the disclosure.
It will be appreciated that embodiments of the present disclosure are not limited to a specific manner in which the processing device 14 obtains the sample set to be processed, which may be determined according to an actual scenario, and will not be described herein.
In the present embodiment, the processing device 14 may execute the program code of the image processing method provided in the present application based on the acquired sample set to be processed to obtain the target sample.
Optionally, the application scenario shown in fig. 1 may further include a data storage device 15, where the data storage device 15 may be connected to the server 13 or connected to the processing device 14, and is used to store data output by the server 13 and/or a target sample output by the processing device 14.
It should be noted that fig. 1 is only a schematic diagram of an application scenario provided by an embodiment of the present disclosure, the embodiment of the present disclosure does not limit devices included in fig. 1, nor limit a positional relationship between devices in fig. 1, for example, in fig. 1, the data storage device 15 may be an external memory with respect to the server 13 or the processing device 14, in other cases, the data storage device 15 may also be disposed in the server 13 or the processing device 14, and the processing device 14 may be a device that exists independently from the server 13 or is integrated into one component of the server 13, which is not limited by the embodiment of the present disclosure.
It should be noted that, in the embodiment of the present disclosure, the processing device 14 may be a terminal device, a server, a virtual machine, or a distributed computer system formed by one or more servers and/or computers. The terminal device includes, but is not limited to, a smart phone, a notebook computer, a desktop computer, a platform computer, a vehicle-mounted device, an intelligent wearable device, and the like, and the embodiment of the disclosure is not limited. The server may be a common server or a cloud server, and the cloud server is also called a cloud computing server or a cloud host, which is a host product in a cloud computing service system. The server may also be a server of a distributed system or a server that incorporates a blockchain.
It should be noted that, the product implementation form of the present disclosure is program code that is contained in platform software and deployed on a processing device (which may also be a computing cloud or hardware with computing capabilities such as a mobile terminal). In the system configuration diagram shown in fig. 1, the program code of the present disclosure may be stored inside the image processing apparatus. In operation, program code runs on the host memory and/or GPU memory of the processing device.
In the presently disclosed embodiments, "plurality" refers to two or more. "and/or", describes an association relationship of an association object, and indicates that there may be three relationships, for example, a and/or B, and may indicate: a exists alone, A and B exist together, and B exists alone. The character "/" generally indicates that the context-dependent object is an "or" relationship.
The technical solution of the present disclosure is described in detail below by means of a specific embodiment in combination with the application scenario shown in fig. 1. It should be noted that the following embodiments may be combined with each other, and the same or similar concepts or processes may not be described in detail in some embodiments.
Fig. 2 is a schematic flow chart of an image processing method according to a first embodiment of the present disclosure. The method is explained with the processing apparatus in fig. 1 as an execution subject. As shown in fig. 2, the image processing method may include the steps of:
s201, acquiring a sample set to be processed, wherein samples in the sample set comprise image content and label content.
In the embodiment of the disclosure, the processing device may receive the sample set to be processed from other devices, or may read the sample set to be processed from a database stored in the processing device (at this time, the processing device is deployed with the database). The embodiments of the present disclosure do not limit the acquisition process of the sample set to be processed, and may be determined according to an actual scenario.
It will be appreciated that in the embodiments of the present disclosure, the sample set obtained by the processing device may be a preprocessed sample set or an unprocessed sample set, which is not limited in this embodiment.
Alternatively, the present embodiment is explained with sample processing in a text recognition scene, and thus, in the present embodiment, a sample set is actually an image set, and accordingly, each sample of the sample set includes image content and tag content. The image content may be understood as an image itself, and the tag content may be understood as characters in the image, so that the tag content length refers to the number of characters in the image, and if no characters are in the image, the tag content length is 0.
S202, determining a first sample and at least one second sample for joint processing in a sample set.
Alternatively, in this embodiment, the joint processing determination may be performed on at least one sample in the sample set, for example, assuming that the first sample is one sample in the sample set and the at least one second sample is a sample selected randomly or based on a certain rule from the sample set.
The at least one second sample may be a sample selected according to a category or a sample selected according to a certain order, and the embodiment is not limited to a specific manner of selecting the second sample from the sample set, or to a specific number of second samples, which may be determined according to an actual setting, and will not be described herein.
S203, determining whether the first sample and at least one second sample meet the joint processing exit condition.
Optionally, a joint processing exit condition is preset in the processing device. That is, after at least one second sample is determined from the sample set for a first sample in the sample set, it is first determined whether the first sample and the at least one second sample satisfy the joint processing exit condition. If so, the first sample is output; if not, the operation of S204 is performed on the first sample and the at least one second sample to obtain a target sample, and steps S203 and S204 are then performed on the target sample and the at least one second sample to obtain a processed target sample.
It will be understood that the joint processing exit condition is the constraint for leaving the joint processing: when the first sample or the processed first sample satisfies it, the sample joint processing operation of this embodiment exits; otherwise S203 and S204 are executed in a loop until the exit condition is satisfied.
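The loop formed by S203 and S204 can be sketched as follows. This is a minimal illustration; the helper names `should_exit` and `join` are hypothetical stand-ins for the exit-condition check of S203 and the joining operation of S204, and are not names from the disclosure:

```python
def joint_process(first, seconds, should_exit, join):
    """Sketch of the S203/S204 loop: repeatedly join the current
    target with the next second sample until the exit condition
    holds, then return the target sample."""
    target = first
    for second in seconds:
        if should_exit(target, second):  # S203
            break
        target = join(target, second)   # S204
    return target

# Toy demo with strings standing in for samples: exit once the
# joined label would exceed 5 characters.
result = joint_process(
    "ab", ["cd", "ef", "gh"],
    should_exit=lambda t, s: len(t) + len(s) > 5,
    join=lambda t, s: t + s,
)
print(result)  # prints "abcd"
```

The same skeleton applies whether a sample is a plain string or an (image, label) pair; only `should_exit` and `join` change.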
S204, responding to the first sample and at least one second sample not meeting the joint processing exit condition, and respectively carrying out image content joint processing and label content joint processing on the first sample and at least one second sample to obtain a target sample.
In one possible implementation of the present disclosure, the first sample and the at least one second sample do not satisfy a joint processing exit condition, at which point a joint processing procedure between the samples may be performed.
For example, for the image content and the tag content included in the first sample and each second sample, the image content of the first sample and the image content of all the second samples may be spliced, and the tag content of the first sample and the tag content of all the second samples may be spliced to obtain the target sample. Correspondingly, the image content of the target sample comprises the image content of the first sample and the image content of the second sample, the tag content of the target sample comprises the tag content of the first sample and the tag content of at least one second sample, and the tag content length of the target sample is the sum of the tag content length of the first sample and the tag content length of all the second samples.
Illustratively, fig. 3 is a schematic diagram of a joint processing of a first sample and a second sample. As shown in fig. 3, it is assumed that the first sample is an image Sa, the image content of the image Sa is Ia, the tag content of the image Sa is La (mother and infant department), the second sample is an image Sb, the image content of the image Sb is Ib, and the tag content of the image Sb is Lb (transfer).
Referring to fig. 3, image Sa and image Sb are joined to obtain image Sab, with image content Iab = np.concatenate([Ia, Ib], axis=1) and tag content Lab = La + Lb ("mother and infant department transfer"). Here np.concatenate() is the NumPy function that concatenates arrays along a given axis; axis=1 concatenates the two images along the width.
It will be appreciated that the order in which the image content of image Sa and the image content of image Sb, and the tag content of image Sa and the tag content of image Sb, are spliced may be determined by settings. For example, image Sb and image Sa may instead be joined to obtain image Sba, in which case the image content is Iba = np.concatenate([Ib, Ia], axis=1) and the tag content is Lba = Lb + La ("transfer mother and infant department").
Alternatively, the target samples obtained by the joint processing in different orders may be regarded as different samples, which may increase the diversity of the samples.
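The joining operation of fig. 3 can be sketched in NumPy. The names Ia, Ib, La, Lb follow the figure; the sketch assumes both images already share the same height, as the joint processing requires:

```python
import numpy as np

def join_samples(Ia, La, Ib, Lb):
    """Join two samples: concatenate the image contents along the
    width (axis=1) and concatenate the tag content strings."""
    Iab = np.concatenate([Ia, Ib], axis=1)  # widths add, height unchanged
    Lab = La + Lb
    return Iab, Lab

# Two toy "images" of height 2 with widths 3 and 4.
Ia = np.zeros((2, 3), dtype=np.uint8)
Ib = np.ones((2, 4), dtype=np.uint8)
Iab, Lab = join_samples(Ia, "mother and infant department", Ib, " transfer")
print(Iab.shape)  # prints (2, 7)
print(Lab)        # prints "mother and infant department transfer"
```

Swapping the argument order yields the Sba variant from the text, so both join orders can be produced by the same helper.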
Optionally, after the technical solution of the embodiments of the present disclosure has been executed for all samples in the sample set, all returned target samples may undergo preprocessing such as normalization, be grouped into batches, and then be fed into the model to be trained to obtain the final model. Here Batch means batch processing, i.e., the samples in the sample set can be processed in batches.
In the embodiments of the disclosure, a sample set to be processed is obtained, where samples in the sample set include image content and tag content; a first sample and at least one second sample for joint processing are determined in the sample set; whether the first sample and the at least one second sample satisfy a joint processing exit condition is determined; and, in response to the exit condition not being satisfied, image content joining and tag content joining are performed on the first sample and the at least one second sample, respectively, to obtain a target sample. According to this technical solution, joining the first sample with at least one second sample improves the diversity of the sample set and balances the number of samples across different tag content lengths.
In order for the reader to more fully understand the principles of implementation of the present disclosure, the embodiment shown in fig. 2 will now be further refined in conjunction with fig. 4 and 5 below.
Illustratively, in an embodiment of the present disclosure, the joint processing exit condition includes at least one of:
the product of the random number and the probability scaling factor of the first sample is greater than or equal to the joint probability threshold;
the sum of the image content width of the first sample and the image content width of the at least one second sample is greater than or equal to the image width threshold;
the sum of the tag content length of the first sample and the tag content length of the at least one second sample is greater than or equal to the tag content length threshold.
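A minimal sketch of a predicate combining the three exit conditions above. The threshold values TW, LMAX, and PC are illustrative placeholders, not values taken from the disclosure:

```python
# TW: image width threshold Tw; LMAX: tag content length threshold
# Lmax; PC: joint probability threshold pc.  All values are assumed
# for illustration only.
TW, LMAX, PC = 320, 25, 0.5

def exit_condition(p, ps, widths, label_lens):
    """Return True if any of the three exit criteria holds.
    p is the random number, ps the probability scaling factor of the
    first sample; widths and label_lens cover the first sample and
    all second samples together."""
    return (p * ps >= PC
            or sum(widths) >= TW
            or sum(label_lens) >= LMAX)

print(exit_condition(0.9, 1.0, [100, 100], [3, 4]))  # prints True: 0.9 >= 0.5
```

Any one satisfied criterion ends the joining loop, which is why the text lists the conditions with "at least one of".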
For example, in this embodiment, to ensure that the joint processing converges, a tag content length threshold Lmax (i.e., a maximum tag content length), an image width threshold Tw, an image height threshold Th, and a joint probability threshold pc may be preset in the processing device. By way of example, the image height threshold Th is typically 32, and the tag content length threshold may be 25 characters.
It can be understood that the specific values of the label content length threshold, the image width threshold Tw, the image height threshold Th, and the joint probability threshold pc are not limited, and may be set according to actual requirements, which will not be described herein.
Optionally, in practical application, the image heights of the samples can be scaled to the image height threshold Th under the condition that the aspect ratio of the samples is the same after the sample set is obtained, that is, by scaling the image heights of all the samples to a uniform size, the subsequent joint processing can be facilitated.
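The height normalization described above can be sketched as a small helper; the concrete value Th = 32 follows the example given earlier, and an actual resize would be delegated to an image library:

```python
def scale_to_height(width, height, target_height=32):
    """Return the new (width, height) after scaling an image so its height
    equals target_height while preserving the aspect ratio."""
    scale = target_height / height
    return round(width * scale), target_height
```

Scaling all samples to a uniform height up front is what makes the later horizontal concatenation of image contents straightforward.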
In an embodiment of the present disclosure, whether the first sample and the at least one second sample should exit the joint processing procedure may be determined from three angles:
First, it is determined, based on the probability scaling factor, whether the number of samples of different tag content lengths needs to be balanced.
Optionally, a random number is first generated by using a uniform random function, for example, a random number p is uniformly sampled in the [0,1] interval, and then the product p×ps is compared with pc, where ps is the probability scaling factor of the first sample and pc is the joint probability threshold.
As an example, if p×ps ≥ pc, it indicates that the number of samples corresponding to the tag content length of the first sample already satisfies the probability threshold requirement, and the joint processing may be exited. As another example, if p×ps < pc, it indicates that the number of samples corresponding to the tag content length of the first sample does not yet satisfy the probability threshold requirement, and whether the joint processing procedure needs to be exited is determined in combination with the other determination conditions.
Second, it is determined whether the joint processing needs to be continued based on the image content width.
Illustratively, the sum Iw of the image content width Ia of the first sample and the image content width Ib of the at least one second sample is first calculated, and then Iw is compared with the image width threshold Tw. If Iw ≥ Tw, it indicates that the combined image content width of the first sample and the at least one second sample has reached the set image width threshold, and the joint processing procedure may be exited. If Iw < Tw, it indicates that the combined image content width has not reached the set image width threshold, and whether to exit the joint processing procedure is determined in combination with the other determination conditions.
Third, it is determined whether the joint processing procedure needs to be continued based on the tag content length.
Illustratively, the sum L of the tag content length La of the first sample and the tag content length Lb of the at least one second sample is first calculated, and then L is compared with the tag content length threshold Lmax. If L ≥ Lmax, it indicates that the combined tag content length of the first sample and the at least one second sample has reached the set tag content length threshold, and the joint processing procedure may be exited. If L < Lmax, it indicates that the combined tag content length has not reached the set tag content length threshold, and whether to exit the joint processing procedure is determined in combination with the other determination conditions.
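The three exit checks can be sketched as a single predicate. The sample representation as `(image_width, label_length)` tuples and the concrete threshold defaults (pc = 0.5, Tw = 320, Lmax = 25) are assumptions chosen to match the examples in the text:

```python
import random

def should_exit(first, seconds, ps, pc=0.5, tw=320, lmax=25, rng=random.random):
    """Return True if any joint-processing exit condition holds.

    first / seconds: (image_width, label_length) tuples; ps is the
    probability scaling factor of the first sample; pc, tw, lmax are the
    joint probability, image width, and tag content length thresholds.
    """
    p = rng()  # uniform random number in [0, 1]
    if p * ps >= pc:      # angle 1: this label length is already well represented
        return True
    total_width = first[0] + sum(s[0] for s in seconds)
    if total_width >= tw:  # angle 2: combined image width reaches the threshold
        return True
    total_length = first[1] + sum(s[1] for s in seconds)
    if total_length >= lmax:  # angle 3: combined label length reaches the threshold
        return True
    return False
```

Passing `rng` as a parameter keeps the predicate deterministic under test while using the uniform random draw in production.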
In this embodiment, by setting the joint processing exit condition, not only the automatic execution of the joint processing process can be ensured, but also the joint processing process can be automatically exited when the first sample meets the joint processing exit condition, thereby improving the degree of automation of the joint processing.
Optionally, fig. 4 is a schematic flow chart of an image processing method provided in the second embodiment of the disclosure. As shown in fig. 4, in an embodiment of the present disclosure, before the above S202, the image processing method may further include the steps of:
S401, determining a label content length subset list corresponding to a sample set based on the label content length of each sample in the sample set.
Optionally, in this embodiment, for the obtained sample set, in order to determine the number of samples with different tag content lengths, the number of samples with the same tag content length in the sample set may be counted, tag content length subsets corresponding to the different tag content lengths are determined, and the subsets are then sorted according to the number of samples each contains to obtain the tag content length subset list corresponding to the sample set.
For example, in the embodiment of the present disclosure, this step S401 may be specifically implemented by the following steps:
a1, determining the label content length of each sample in a sample set;
a2, counting the number of samples in the sample set according to the label content length of each sample, and determining at least one label content length subset and the number of samples in each label content length subset;
a3, sorting the at least one label content length subset based on the number of samples in each label content length subset, and determining a label content length subset list corresponding to the sample set.
In this embodiment, each sample in the sample set carries labeling information, where the labeling information may include, but is not limited to, the image content, the background of the image content, and the tag content length; the specific content included in the labeling information may be determined according to actual requirements and is not limited herein.
The processing device determines the label content length of each sample based on the labeling information of each sample, divides the samples with the same label content length into a subset, counts the number of the samples included in each subset to obtain at least one label content length subset and the number of the samples in each label content length subset, and finally sorts at least one label content length subset according to a preset sorting rule based on the number of the samples in each label content length subset to obtain a label content length subset list corresponding to the sample set.
The tag content length is the number of characters included in the tag content of a sample; if the tag content of a sample does not include any characters, the tag content length of that sample is 0. Since the set tag content length threshold is Lmax, the number of characters included in the tag content of a sample is at most Lmax.
Alternatively, in this embodiment, the preset ordering rule may be a descending order, and at this time, the determined at least one tag content length subset may be ordered according to the order of the number of samples from more to less, to obtain the tag content length subset list.
By way of example, the tag content length subset list may be denoted by ks, where ks = [k0, k1, k2, k3, …, kLmax]. The number of samples with tag content length k0 is the largest, and the number of samples with tag content length kLmax is the smallest. For example, if the number of samples not including characters is 3, the number of samples including 1 character is 10, and the number of samples including 2 characters is 5, then k0=1 (the tag content length 1 has the most samples), k1=2, and k2=0. That is, in the present embodiment, the subscript of each entry indicates the rank of the corresponding tag content length subset in the list, while the value ki is the tag content length itself; the list does not record the number of samples included in each subset.
It can be understood that, in this embodiment, the preset sorting rule may instead be an ascending order; the manner of determining the tag content length subset list corresponding to the sample set is then similar, except that a subset ranked higher contains fewer samples, and the process of determining the probability scaling factor corresponding to each tag content length subset is reversed between the ascending-order and descending-order lists.
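Steps A1 to A3 can be sketched as follows for the descending-order rule; representing each sample's tag content directly as a string is an assumption for illustration:

```python
from collections import Counter

def build_length_subset_list(labels):
    """Count samples per tag content length and return the list ks of
    lengths ordered by descending sample count (ties broken by length)."""
    counts = Counter(len(label) for label in labels)
    return [length for length, _ in
            sorted(counts.items(), key=lambda kv: (-kv[1], kv[0]))]
```

Running it on the example from the text (3 empty labels, 10 one-character labels, 5 two-character labels) yields ks = [1, 2, 0], matching k0=1, k1=2, k2=0.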
S402, determining probability scaling factors of the tag content length subsets for the tag content length subsets in the tag content length subset list.
For example, when determining the tag content length subset list formed by the tag content length subsets, the probability scaling factor of each tag content length subset may be calculated based on a preset probability scaling factor formula.
Alternatively, in this embodiment, this step S402 may be implemented by the following steps:
b1, for each tag content length subset in the tag content length subset list, determining an index number of each tag content length subset in the tag content length subset list.
B2, determining the probability scaling factor of each label content length subset according to the index number of each label content length subset in the label content length subset list, the label content length threshold value, the maximum value of the preset probability scaling factor and the minimum value of the preset probability scaling factor.
In this embodiment, the processing device is preset with a tag content length threshold Lmax, a preset probability scaling factor maximum value psmax, and a preset probability scaling factor minimum value psmin, so that the probability scaling factor of each tag content length subset in the tag content length subset list may be calculated based on a preset probability scaling factor formula.
For example, for a tag content length subset with a tag content length k, the preset probability scaling factor formula is: ps = find_index(k, ks)/Lmax × (psmax − psmin) + psmin. Here find_index(k, ks) denotes the index number of the tag content length subset with tag content length k in the tag content length subset list ks, so that find_index(k0, ks) = 0 and find_index(kLmax, ks) = Lmax; if k is not in ks, find_index(k, ks) = 0. From this it can be seen that the probability scaling factor is used to indicate the probability that the number of samples needs to be expanded; for example, when find_index(kLmax, ks) = Lmax, ps = psmax, and the probability that the number of samples with tag content length kLmax needs to be expanded is the largest.
It can be understood that, in practical applications, the tag content length subsets in ks may also be arranged in ascending order of sample count, in which case the probability scaling factor is: ps = find_index(k, ks)/Lmax × (psmin − psmax) + psmax.
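The descending-order formula can be sketched directly; the concrete values psmax = 1.0 and psmin = 0.1 are assumptions, as the patent leaves them to be preset:

```python
def probability_scaling_factor(k, ks, lmax=25, ps_max=1.0, ps_min=0.1):
    """ps = find_index(k, ks) / Lmax * (ps_max - ps_min) + ps_min,
    with index 0 when k is absent from ks (as the text specifies)."""
    index = ks.index(k) if k in ks else 0
    return index / lmax * (ps_max - ps_min) + ps_min
```

With ks sorted in descending order of sample count, the most frequent length gets ps_min (least need for expansion) and the length at index Lmax gets ps_max (greatest need); the ascending-order variant simply swaps ps_max and ps_min in the formula.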
It can be appreciated that in the embodiment of the present disclosure, for the first sample and at least one second sample, the sampling probability may be updated according to the combined tag content length, so as to balance the number of samples corresponding to different tag content lengths, and improve the robustness of the model.
In an embodiment of the present disclosure, a tag content length subset list corresponding to a sample set is determined based on a tag content length of each sample in the sample set, and a probability scaling factor for each tag content length subset is determined for each tag content length subset in the tag content length subset list. According to the technical scheme, the sample number of different label content lengths in the sample set can be effectively balanced, and a foundation is laid for improving the precision and generalization capability of the model.
Optionally, fig. 5 is a schematic flow chart of an image processing method provided in a third embodiment of the disclosure. As shown in fig. 5, in an embodiment of the present disclosure, the image processing method may further include the steps of:
s501, determining the label content length of the first sample.
For example, for a selected first sample, the tag content length of the first sample, that is, the length of the tag content in the first sample, may be determined based on the labeling information of the first sample.
For example, referring to the schematic diagram shown in fig. 3 above, the label content length of the first sample is 4.
S502, determining a target label content length subset to which the first sample belongs according to the label content length of the first sample.
For example, since the tag content length subsets are divided based on the tag content lengths of the samples, the tag content lengths of the samples in each tag content length subset are the same, so that the target tag content length subset to which the first sample belongs can be determined from the tag content length subset list of the first sample according to the tag content length of the first sample.
S503, determining the probability scaling factor of the first sample according to the probability scaling factors of the target tag content length subsets.
Alternatively, this embodiment may be implemented on the basis of the embodiment shown in fig. 4; for example, after the probability scaling factor of each tag content length subset in the tag content length subset list has been determined, the probability scaling factor of the target tag content length subset may be looked up based on the tag content length of the target tag content length subset, and thus the probability scaling factor of the first sample is determined.
In an embodiment of the present disclosure, by determining a tag content length of a first sample, determining a target tag content length subset to which the first sample belongs according to the tag content length of the first sample, and further determining a probability scaling factor of the first sample according to a probability scaling factor of the target tag content length subset. In the technical scheme, the probability scaling factor of the first sample is determined, and a foundation is laid for whether the follow-up joint processing exit condition is met.
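Since every sample in a subset shares the same tag content length, steps S501 to S503 reduce to a table lookup once the per-subset factors are precomputed. A sketch under the same assumed psmax/psmin values as above:

```python
def scaling_factor_table(ks, lmax=25, ps_max=1.0, ps_min=0.1):
    """Precompute a {tag_content_length: ps} table from the ordered list ks,
    so determining the first sample's factor (S501-S503) is a lookup."""
    return {k: i / lmax * (ps_max - ps_min) + ps_min for i, k in enumerate(ks)}

def first_sample_ps(label, table, ps_min=0.1):
    """S501: take the label length; S502/S503: look up its subset's factor.
    Lengths absent from ks fall back to index 0, i.e. ps_min."""
    return table.get(len(label), ps_min)
```

The lookup is then used as the `ps` input of the exit-condition check.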
Optionally, in an implementation of an embodiment of the disclosure, before the step S203 (determining whether the first sample and the at least one second sample meet the joint processing exit condition), the image processing method may further include the following steps:
and respectively carrying out data augmentation processing on the first sample and the at least one second sample to obtain the first sample after the augmentation processing and the at least one second sample after the augmentation processing.
In this embodiment, for a first sample to be processed, when at least one second sample is randomly selected from a sample set, data augmentation processing may be performed on the first sample and the at least one second sample, so that diversity of the samples is improved, and a foundation is provided for subsequently improving accuracy and generalization performance of a model.
Optionally, in an implementation of an embodiment of the disclosure, before S202 (in the sample set, the first sample and the at least one second sample for joint processing are determined), the image processing method may further include the following steps:
and respectively carrying out data augmentation processing on the samples in the sample set to obtain the sample set after the augmentation processing.
When the sample set to be processed is obtained, data augmentation processing can be performed on the samples in the sample set to obtain the sample set after the augmentation processing, so that the samples in the sample set can be effectively utilized, and a foundation is laid for improving the precision of a subsequent training model.
Optionally, in various implementations of the embodiments of the present disclosure, the data augmentation processing method may be a general data augmentation method, for example, illumination transformation, dithering, blurring, or random clipping, and different samples may use different data augmentation methods. In this way, differences between individual samples are fully considered and distinct data augmentation methods are applied to the different samples being jointly processed, thereby increasing the background complexity of the image content, effectively preventing the singleness of the image transformation, ensuring the diversity of the combined samples, and improving the generalization performance of the model.
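A minimal sketch of per-sample random augmentation before joining; the four transforms are naive stand-ins (in practice an image library would provide them), and grayscale float arrays in [0, 255] are an assumption:

```python
import random
import numpy as np

AUGMENTATIONS = ("illumination", "jitter", "blur", "crop")

def augment(image, rng=random):
    """Apply one randomly chosen augmentation so that samples joined
    together carry different transforms."""
    choice = rng.choice(AUGMENTATIONS)
    if choice == "illumination":          # brighten or darken uniformly
        return np.clip(image * rng.uniform(0.7, 1.3), 0, 255)
    if choice == "jitter":                # add small per-pixel noise
        noise = np.array([[rng.uniform(-5, 5) for _ in range(image.shape[1])]
                          for _ in range(image.shape[0])])
        return np.clip(image + noise, 0, 255)
    if choice == "blur":                  # naive horizontal box blur
        padded = np.pad(image, ((0, 0), (1, 1)), mode="edge")
        return (padded[:, :-2] + padded[:, 1:-1] + padded[:, 2:]) / 3
    # random clip: drop a few columns from one side, keeping the height fixed
    cut = rng.randint(1, max(1, image.shape[1] // 10))
    return image[:, cut:]
```

Drawing a fresh augmentation per sample is what gives each joined sample a mix of backgrounds and transforms.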
It can be appreciated that the technical solution of the embodiment of the present disclosure is illustrated by taking sample processing for model training in the text recognition field as an example; in practical application, it can be well extended to other visual tasks, which is not described herein.
As can be seen from the foregoing embodiments, the embodiments of the present disclosure provide a data augmentation method in image processing, which can apply different data augmentation to at least two fused images, and increase the complexity of the image background and the transformation diversity of the images, thereby improving the accuracy and generalization performance of the model.
Fig. 6 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present disclosure. The image processing apparatus provided in this embodiment may be an electronic device or an apparatus in an electronic device. As shown in fig. 6, an image processing apparatus 600 provided by an embodiment of the present disclosure may include:
an obtaining unit 601, configured to obtain a sample set to be processed, where samples in the sample set include image content and tag content;
a sample determining unit 602, configured to determine a first sample and at least one second sample for joint processing in the sample set;
a determining unit 603 for determining whether the first sample and the at least one second sample satisfy a joint processing exit condition;
and the combining unit 604 is configured to perform image content joint processing and label content joint processing on the first sample and the at least one second sample, respectively, to obtain a target sample, in response to the first sample and the at least one second sample not meeting a joint processing exit condition.
In one possible implementation of the embodiments of the present disclosure, the joint processing exit condition includes at least one of:
the product of the random number and the probability scaling factor of the first sample is greater than or equal to a joint probability threshold;
the sum of the image content width of the first sample and the image content width of the at least one second sample is greater than or equal to an image width threshold;
the sum of the tag content length of the first sample and the tag content length of the at least one second sample is greater than or equal to a tag content length threshold.
In one possible implementation of the embodiment of the present disclosure, the image processing apparatus further includes:
a list determining unit (not shown) for determining a list of tag content length subsets corresponding to the sample set based on the tag content length of each sample in the sample set;
a subset scaling factor determining unit (not shown) for determining, for each tag content length subset of the tag content length subset list, a probability scaling factor for each tag content length subset.
Wherein the list determination unit includes:
a first determining module, configured to determine a tag content length of each sample in the sample set;
the second determining module is used for counting the number of the samples according to the label content length of each sample, and determining at least one label content length subset and the number of the samples in each label content length subset;
And a third determining module, configured to sort the at least one label content length subset based on the number of samples in each label content length subset, and determine a label content length subset list corresponding to the sample set.
Wherein the subset scaling factor determining unit includes:
a fourth determining module, configured to determine, for each tag content length subset in the tag content length subset list, an index number of each tag content length subset in the tag content length subset list;
and a fifth determining module, configured to determine a probability scaling factor of each tag content length subset according to an index number of each tag content length subset in the tag content length subset list, a tag content length threshold, a preset probability scaling factor maximum value, and a preset probability scaling factor minimum value.
In one possible implementation of the embodiment of the present disclosure, the image processing apparatus further includes:
a length determining unit (not shown) for determining a tag content length of the first sample;
a subset determining unit (not shown) configured to determine, according to the tag content length of the first sample, a target tag content length subset to which the first sample belongs;
A sample scaling factor determination unit (not shown) for determining a probability scaling factor of the first sample based on probability scaling factors of the target tag content length subset.
In one possible implementation of the embodiment of the present disclosure, the image processing apparatus further includes:
a first processing unit (not shown) for performing data augmentation processing on the first sample and the at least one second sample, respectively, to obtain a first sample after the augmentation processing and at least one second sample after the augmentation processing.
In one possible implementation of the embodiment of the present disclosure, the image processing apparatus further includes:
and a second processing unit (not shown) for performing data augmentation processing on the samples in the sample set respectively to obtain the sample set after the augmentation processing.
The image processing device provided in this embodiment may be used to execute the image processing method in any of the above method embodiments, and its implementation principle and technical effects are similar, and will not be described herein.
According to embodiments of the present disclosure, the present disclosure also provides an electronic device, a readable storage medium and a computer program product.
According to an embodiment of the present disclosure, the present disclosure also provides a computer program product comprising: a computer program stored in a readable storage medium, from which at least one processor of an electronic device can read, the at least one processor executing the computer program causing the electronic device to perform the solution provided by any one of the embodiments described above.
Fig. 7 is a schematic block diagram of an example electronic device used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 7, the apparatus 700 includes a computing unit 701 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 702 or a computer program loaded from a storage unit 708 into a Random Access Memory (RAM) 703. In the RAM 703, various programs and data required for the operation of the device 700 may also be stored. The computing unit 701, the ROM 702, and the RAM 703 are connected to each other through a bus 704. An input/output (I/O) interface 705 is also connected to bus 704.
Various components in device 700 are connected to I/O interface 705, including: an input unit 706 such as a keyboard, a mouse, etc.; an output unit 707 such as various types of displays, speakers, and the like; a storage unit 708 such as a magnetic disk, an optical disk, or the like; and a communication unit 709 such as a network card, modem, wireless communication transceiver, etc. The communication unit 709 allows the device 700 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunication networks.
The computing unit 701 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of computing unit 701 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. The computing unit 701 performs the respective methods and processes described above, for example, an image processing method. For example, in some embodiments, the image processing method may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as storage unit 708. In some embodiments, part or all of the computer program may be loaded and/or installed onto device 700 via ROM 702 and/or communication unit 709. When a computer program is loaded into the RAM 703 and executed by the computing unit 701, one or more steps of the image processing method described above may be performed. Alternatively, in other embodiments, the computing unit 701 may be configured to perform the image processing method by any other suitable means (e.g. by means of firmware).
Various implementations of the systems and techniques described here may be implemented in digital electronic circuitry, integrated circuit systems, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems On Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implemented in one or more computer programs, the one or more computer programs may be executed and/or interpreted on a programmable system including at least one programmable processor, which may be a special purpose or general-purpose programmable processor, that may receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out methods of the present disclosure may be written in any combination of one or more programming languages. These program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowchart and/or block diagram to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a background component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such background, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), and the internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server can be a cloud server, also called a cloud computing server or a cloud host, and is a host product in a cloud computing service system, so that the defects of high management difficulty and weak service expansibility in the traditional physical hosts and VPS service ("Virtual Private Server" or simply "VPS") are overcome. The server may also be a server of a distributed system or a server that incorporates a blockchain.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps recited in the present disclosure may be performed in parallel or sequentially or in a different order, provided that the desired results of the technical solutions of the present disclosure are achieved, and are not limited herein.
The above detailed description should not be taken as limiting the scope of the present disclosure. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present disclosure are intended to be included within the scope of the present disclosure.

Claims (14)

1. An image processing method, comprising:
acquiring a sample set to be processed, wherein each sample in the sample set comprises image content and tag content;
determining a first sample and at least one second sample for joint processing in the sample set;
determining whether the first sample and the at least one second sample satisfy a joint processing exit condition;
in response to the first sample and the at least one second sample not satisfying the joint processing exit condition, performing image content joint processing and tag content joint processing on the first sample and the at least one second sample, respectively, to obtain a target sample;
wherein the method further comprises:
determining a tag content length subset list corresponding to the sample set based on the tag content length of each sample in the sample set; and
determining, for each tag content length subset in the tag content length subset list, a probability scaling factor of the tag content length subset;
the probability scaling factor indicating a probability that the number of samples needs to be expanded.
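The flow recited in claim 1 can be sketched in code. The sketch below is purely illustrative: the exit condition (a maximum combined tag length), horizontal image concatenation, and all names (`joint_process`, `max_label_len`) are assumptions made for exposition, not details taken from the claims.

```python
import numpy as np

def joint_process(first, seconds, max_label_len=25):
    """Illustrative joint processing of a first sample with second samples.

    Each sample is a (image, tag_content) pair. While the (assumed) exit
    condition is not met, image content is jointly processed by horizontal
    concatenation and tag content by string concatenation.
    """
    image, label = first
    for img2, lab2 in seconds:
        # Assumed exit condition: the combined tag content would be too long.
        if len(label) + len(lab2) > max_label_len:
            break
        # Image content joint processing: concatenate along the width axis
        # (equal heights are assumed in this sketch).
        image = np.concatenate([image, img2], axis=1)
        # Tag content joint processing: concatenate the tag strings.
        label = label + lab2
    return image, label  # the target sample
```

For example, joining an 8x4 first image labeled "ab" with an 8x6 second image labeled "cd" yields an 8x10 target image labeled "abcd", while a second sample whose tag would push the combined length past the threshold triggers the exit condition and is skipped.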
2. The method of claim 1, wherein the determining, based on the tag content length of each sample in the sample set, a tag content length subset list corresponding to the sample set comprises:
determining the tag content length of each sample in the sample set;
counting the samples of the sample set according to the tag content length of each sample, and determining at least one tag content length subset and the number of samples in each tag content length subset; and
sorting the at least one tag content length subset based on the number of samples in each tag content length subset, and determining the tag content length subset list corresponding to the sample set.
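The three steps of claim 2 (determine lengths, count per-length subsets, sort by subset size) admit a minimal sketch; the function name and the choice of descending sort order are assumptions:

```python
from collections import Counter

def build_length_subset_list(samples):
    """Group samples by tag content length, then sort the subsets by the
    number of samples they contain (descending order is assumed here)."""
    # Steps 1-2: determine each sample's tag content length and count samples
    # per length, which implicitly defines one subset per distinct length.
    counts = Counter(len(tag) for _, tag in samples)
    # Step 3: represent each subset as (tag_content_length, sample_count) and
    # sort by sample count to form the tag content length subset list.
    return sorted(counts.items(), key=lambda kv: kv[1], reverse=True)
```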
3. The method of claim 1, wherein the determining, for each tag content length subset in the tag content length subset list, a probability scaling factor for each tag content length subset comprises:
determining, for each tag content length subset in the tag content length subset list, an index number of the tag content length subset in the tag content length subset list; and
determining the probability scaling factor of each tag content length subset according to the index number of the tag content length subset in the tag content length subset list, a tag content length threshold, a preset probability scaling factor maximum value, and a preset probability scaling factor minimum value.
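Claim 3 does not disclose the exact formula, so the sketch below is one plausible instantiation: linear interpolation between the preset maximum and minimum according to the subset's index number, with subsets whose tag content length exceeds the threshold pinned to the minimum. Every detail here (the interpolation, the threshold handling, the default values) is an assumption, not the patented computation:

```python
def subset_scaling_factor(index, list_length, subset_length, length_threshold,
                          factor_max=5.0, factor_min=1.0):
    """Hypothetical probability scaling factor for one tag content length subset.

    index            -- index number of the subset in the subset list
    list_length      -- number of subsets in the subset list
    subset_length    -- tag content length represented by this subset
    length_threshold -- tag content length threshold from the claim
    """
    # Assumed rule: subsets beyond the length threshold get the minimum factor.
    if subset_length > length_threshold:
        return factor_min
    if list_length <= 1:
        return factor_max
    # Linearly interpolate from factor_max (index 0) down to factor_min
    # (last index), so rarer subsets late in the list scale less.
    t = index / (list_length - 1)
    return factor_max - t * (factor_max - factor_min)
```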
4. A method according to any one of claims 1 to 3, further comprising:
determining a tag content length of the first sample;
determining, according to the tag content length of the first sample, a target tag content length subset to which the first sample belongs; and
determining a probability scaling factor of the first sample according to the probability scaling factor of the target tag content length subset.
5. A method according to any one of claims 1 to 3, further comprising:
performing data augmentation processing on the first sample and the at least one second sample, respectively, to obtain an augmented first sample and at least one augmented second sample.
6. A method according to any one of claims 1 to 3, further comprising:
performing data augmentation processing on the samples in the sample set, respectively, to obtain an augmented sample set.
7. An image processing apparatus comprising:
an acquisition unit, configured to acquire a sample set to be processed, wherein each sample in the sample set comprises image content and tag content;
a sample determining unit configured to determine a first sample and at least one second sample for joint processing in the sample set;
a determination unit configured to determine whether the first sample and the at least one second sample satisfy a joint processing exit condition;
a joint processing unit, configured to, in response to the first sample and the at least one second sample not satisfying the joint processing exit condition, perform image content joint processing and tag content joint processing on the first sample and the at least one second sample, respectively, to obtain a target sample;
wherein the apparatus further comprises:
a list determining unit, configured to determine a tag content length subset list corresponding to the sample set based on the tag content length of each sample in the sample set; and
a subset scaling factor determining unit, configured to determine, for each tag content length subset in the tag content length subset list, a probability scaling factor of the tag content length subset;
the probability scaling factor indicating a probability that the number of samples needs to be expanded.
8. The apparatus of claim 7, wherein the list determination unit comprises:
a first determining module, configured to determine the tag content length of each sample in the sample set;
a second determining module, configured to count the samples of the sample set according to the tag content length of each sample, and determine at least one tag content length subset and the number of samples in each tag content length subset; and
a third determining module, configured to sort the at least one tag content length subset based on the number of samples in each tag content length subset, and determine the tag content length subset list corresponding to the sample set.
9. The apparatus of claim 7, wherein the subset scaling factor determining unit comprises:
a fourth determining module, configured to determine, for each tag content length subset in the tag content length subset list, an index number of the tag content length subset in the tag content length subset list; and
a fifth determining module, configured to determine the probability scaling factor of each tag content length subset according to the index number of the tag content length subset in the tag content length subset list, a tag content length threshold, a preset probability scaling factor maximum value, and a preset probability scaling factor minimum value.
10. The apparatus of any of claims 7 to 9, further comprising:
a length determining unit, configured to determine a tag content length of the first sample;
a subset determining unit, configured to determine, according to the tag content length of the first sample, a target tag content length subset to which the first sample belongs; and
a sample scaling factor determining unit, configured to determine a probability scaling factor of the first sample according to the probability scaling factor of the target tag content length subset.
11. The apparatus of any of claims 7 to 9, further comprising:
a first processing unit, configured to perform data augmentation processing on the first sample and the at least one second sample, respectively, to obtain an augmented first sample and at least one augmented second sample.
12. The apparatus of any of claims 7 to 9, further comprising:
a second processing unit, configured to perform data augmentation processing on the samples in the sample set, respectively, to obtain an augmented sample set.
13. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1 to 6.
14. A non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the method of any one of claims 1 to 6.
CN202310233909.0A 2022-03-18 2022-03-18 Image processing method, device, equipment and storage medium Active CN116229175B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310233909.0A CN116229175B (en) 2022-03-18 2022-03-18 Image processing method, device, equipment and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210268760.5A CN114612725B (en) 2022-03-18 2022-03-18 Image processing method, device, equipment and storage medium
CN202310233909.0A CN116229175B (en) 2022-03-18 2022-03-18 Image processing method, device, equipment and storage medium

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN202210268760.5A Division CN114612725B (en) 2022-03-18 2022-03-18 Image processing method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN116229175A CN116229175A (en) 2023-06-06
CN116229175B true CN116229175B (en) 2023-12-26

Family

ID=81864992

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202210268760.5A Active CN114612725B (en) 2022-03-18 2022-03-18 Image processing method, device, equipment and storage medium
CN202310233909.0A Active CN116229175B (en) 2022-03-18 2022-03-18 Image processing method, device, equipment and storage medium


Country Status (2)

Country Link
CN (2) CN114612725B (en)
WO (1) WO2023173617A1 (en)


Also Published As

Publication number Publication date
CN114612725B (en) 2023-04-25
WO2023173617A1 (en) 2023-09-21
CN116229175A (en) 2023-06-06
CN114612725A (en) 2022-06-10


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant