CN113111953B - Projection image quality processing device - Google Patents

Projection image quality processing device

Info

Publication number
CN113111953B
Authority
CN
China
Prior art keywords
projection image
image quality
preset
hash
representation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110426033.2A
Other languages
Chinese (zh)
Other versions
CN113111953A (en)
Inventor
洪坤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Luckystar Technology Co ltd
Original Assignee
Shenzhen Luckystar Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Luckystar Technology Co ltd filed Critical Shenzhen Luckystar Technology Co ltd
Priority to CN202110426033.2A priority Critical patent/CN113111953B/en
Publication of CN113111953A publication Critical patent/CN113111953A/en
Application granted granted Critical
Publication of CN113111953B publication Critical patent/CN113111953B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computational Linguistics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Evolutionary Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The application discloses a projection image quality processing method, apparatus, device, and readable storage medium. The projection image quality processing method includes: acquiring a projection image to be processed, and performing feature extraction on the projection image based on a feature extraction model to obtain a projection image representation corresponding to the projection image, wherein the feature extraction model is constructed by contrast learning based on a preset positive example sample set and a preset negative example sample set; classifying projection image quality problems of the projection image to be processed based on the projection image representation and a preset projection image quality problem classification model to obtain a projection image quality problem classification result; and adjusting the projection image quality of the projection image to be processed based on a projection image quality adjustment strategy corresponding to the image quality problem classification result to obtain a target projection image. The method and the device solve the technical problem of low accuracy in projection image quality processing.

Description

Projection image quality processing device
Technical Field
The present application relates to the field of artificial intelligence, and in particular, to a method, an apparatus, a device, and a readable storage medium for processing projection image quality.
Background
With the continuous development of computer software and artificial intelligence, artificial intelligence is applied ever more widely. In projection image quality processing, whether a projection image has image quality problems is usually identified by a conventional convolutional neural network. However, when image features of a projection image are extracted by a neural network, the projection image is usually reduced in dimension, and image information is usually lost during dimension reduction. As a result, when the projection image carries less projection image feature information, the accuracy of projection image quality processing based on the projection image is low.
Disclosure of Invention
The present application mainly aims to provide a method, an apparatus, a device and a readable storage medium for processing projection image quality, and aims to solve the technical problem of low accuracy of projection image quality processing in the prior art.
In order to achieve the above object, the present application provides a projection image quality processing method, which is applied to a projection image quality processing device, and the projection image quality processing method includes:
acquiring a projection image to be processed, and performing feature extraction on the projection image based on a feature extraction model to obtain a projection image representation corresponding to the projection image, wherein the feature extraction model is constructed based on comparison learning of a preset positive sample set and a preset negative sample set;
based on the projection image representation and a preset projection image quality problem classification model, carrying out projection image quality problem classification on the projection image to be processed to obtain a projection image quality problem classification result;
and adjusting the projection image quality of the projection image to be processed based on the projection image quality adjustment strategy corresponding to the image quality problem classification result to obtain a target projection image.
The present application further provides a projection image quality processing apparatus. The projection image quality processing apparatus is a virtual apparatus, is applied to projection image quality processing equipment, and includes:
the feature extraction module is used for acquiring a projection image to be processed, performing feature extraction on the projection image based on a feature extraction model, and obtaining a projection image representation corresponding to the projection image, wherein the feature extraction model is constructed by contrast learning based on a preset positive example sample set and a preset negative example sample set;
the classification module is used for classifying the projection image quality problems of the projection image to be processed based on the projection image representation and a preset projection image quality problem classification model to obtain a projection image quality problem classification result;
and the image quality adjusting module is used for adjusting the projection image quality of the projection image to be processed based on the projection image quality adjusting strategy corresponding to the image quality problem classification result to obtain a target projection image.
The present application further provides projection image quality processing equipment. The projection image quality processing equipment is a physical device and includes: a memory, a processor, and a program of the projection image quality processing method stored in the memory and executable on the processor, wherein the program of the projection image quality processing method, when executed by the processor, implements the steps of the projection image quality processing method described above.
The present application further provides a readable storage medium having stored thereon a program for implementing the projection image quality processing method, wherein the program for implementing the projection image quality processing method implements the steps of the projection image quality processing method as described above when executed by a processor.
Compared with the prior-art technical means of performing projection image quality processing by identifying, through a conventional convolutional neural network, whether a projection image has image quality problems, the present application first acquires a projection image to be processed and performs feature extraction on the projection image based on a feature extraction model to obtain a projection image representation corresponding to the projection image, wherein the feature extraction model is constructed by contrast learning based on a preset positive example sample set and a preset negative example sample set. The projection image representation is therefore close to the representations corresponding to the positive example image samples and far from the representations corresponding to the negative example image samples, so that the projection image representation has a higher similarity to the representations corresponding to the positive example image samples and a lower similarity to the representations corresponding to the negative example image samples, which achieves the purpose of generating a projection image representation containing image classification feature information. Projection image quality problem classification is then performed on the projection image based on the projection image representation and a preset projection image quality problem classification model; that is, the projection image is classified based on a projection image representation containing more feature information, which provides more decision basis for classifying projection image quality problems and therefore produces a projection image quality problem classification result with higher accuracy. The projection image quality of the projection image to be processed is then adjusted based on a projection image quality adjustment strategy corresponding to the image quality problem classification result to obtain a target projection image. This overcomes the technical defect in the prior art that, when image features of a projection image are extracted by a neural network, the projection image is usually reduced in dimension and image information is usually lost during dimension reduction, so that the accuracy of projection image quality processing based on the projection image is low when the projection image carries less projection image feature information, and thus improves the accuracy of projection image quality processing.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application.
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly described below; other drawings can be obtained by those skilled in the art from these drawings without inventive labor.
Fig. 1 is a flowchart illustrating a projection image quality processing method according to a first embodiment of the present disclosure;
fig. 2 is a flowchart illustrating a projection image quality processing method according to a second embodiment of the present disclosure;
fig. 3 is a schematic device structure diagram of a hardware operating environment according to an embodiment of the present application.
The objectives, features, and advantages of the present application will be further described with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit it.
In a first embodiment of the projection image quality processing method according to the present application, referring to fig. 1, the projection image quality processing method includes:
step S10, acquiring a projection image to be processed, and performing feature extraction on the projection image based on a feature extraction model to obtain a projection image representation corresponding to the projection image, wherein the feature extraction model is constructed based on a preset positive sample set and a preset negative sample set through comparison learning;
in this embodiment, it should be noted that the feature extraction model is a machine learning model constructed by performing contrast learning based on a preset positive example sample set and a preset negative example sample set, where the contrast learning is a model construction manner by performing contrast learning based on a training sample, a positive example of the training sample, and a negative example of the training sample, and is used to shorten a distance between a sample representation of the sample and a sample representation of the positive example of the sample, and shorten a distance between a sample representation of the sample and a sample representation of the negative example of the sample, where the preset positive example sample set at least includes a preset positive example projection image sample, where the preset positive example projection image sample is a projection image sample belonging to the positive example sample, such as a projection image without image quality problem, and the preset negative example sample set is a projection image sample belonging to the negative example sample, such as a projection image with a darker color or a projection image with a lower definition, the feature extraction model is used for performing feature extraction on a projection image sample to convert a projection image representation matrix corresponding to the projection image sample into a sample representation, wherein the projection image representation matrix is a high-dimensional matrix representing image features of the projection image sample, the sample representation is a low-dimensional coding matrix or a low-dimensional coding vector uniquely representing the projection image sample, and the dimension of the sample representation of the projection image sample is smaller than that of the projection image representation matrix of the projection image sample.
A projection image to be processed is acquired, and feature extraction is performed on the projection image based on the feature extraction model to obtain the projection image representation corresponding to the projection image, where the feature extraction model is constructed by contrast learning based on the preset positive example sample set and the preset negative example sample set. Specifically, the projection image to be processed is acquired, and the projection image representation matrix corresponding to the projection image to be processed is input into the feature extraction model, where it undergoes a preset number of alternating convolution and pooling operations, so that the projection image representation matrix corresponding to the projection image is mapped into a preset sample representation space to obtain the projection image representation. The preset sample representation space may be a vector space or a matrix space of a preset dimension, and the feature extraction model is constructed by contrast learning based on positive example samples extracted from the preset positive example sample set and negative example samples extracted from the preset negative example sample set.
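Purely as an illustration (not the patented network), the following minimal sketch shows one way a contrast-learning-trained feature extractor could map a projection image to a low-dimensional representation through alternating convolution and pooling; the layer sizes, pooling scheme, and projection head are assumptions.

```python
# Hypothetical sketch: a small CNN encoder mapping a projection image to a representation
# via alternating convolution and pooling, as described for the feature extraction model.
import torch
import torch.nn as nn

class ProjectionImageEncoder(nn.Module):
    def __init__(self, repr_dim: int = 128):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        # Maps pooled features into the preset sample representation space.
        self.head = nn.Linear(64, repr_dim)

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        feats = self.backbone(image).flatten(1)                   # (B, 64)
        return nn.functional.normalize(self.head(feats), dim=1)   # (B, repr_dim)

# Usage: encode a batch of to-be-processed projection images (random data here).
encoder = ProjectionImageEncoder()
projection_images = torch.rand(4, 3, 224, 224)
representations = encoder(projection_images)   # projection image representations
```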
Step S20, based on the projection image representation and a preset projection image quality problem classification model, carrying out projection image quality problem classification on the projection image to be processed to obtain a projection image quality problem classification result;
in this embodiment, it should be noted that the preset projection image quality problem classification model is a machine learning model for performing projection image quality problem classification on the projection image to be processed.
Based on the projection image representation and the preset projection image quality problem classification model, projection image quality problem classification is performed on the projection image to be processed to obtain a projection image quality problem classification result. Specifically, the projection image representation is input into the preset projection image quality problem classification model, which maps the projection image representation to a projection image quality problem classification label, where the projection image quality problem classification label is a label of a category of projection image quality problem, and the projection image quality problem may be, for example, low projection brightness, low projection definition, or black spots in the projection. The projection image quality problem category corresponding to the projection image to be processed is then determined according to the mapping relation between projection image quality problem classification labels and projection image quality problem categories, and this projection image quality problem category is taken as the projection image quality problem classification result.
The step of classifying the projection image based on the projection image representation and a preset projection image quality problem classification model to obtain an image quality problem classification result comprises the following steps:
step S21, based on the preset projection image quality problem classification model, carrying out Hash coding on the projection image representation to obtain a Hash coding value corresponding to the projection image representation;
in this embodiment, it should be noted that the hash coding model includes a deep polarization network, where the deep polarization network is a deep learning model based on preset projection image sample category information and polarization loss function optimization, and for input samples belonging to the same sample category, the hash coding model can output the same polarization hash vector, and the polarization loss function is a loss function for optimizing the deep polarization network.
Additionally, it should be noted that the deep polarization network includes a hidden layer and a hash layer, where the hidden layer is a data processing layer of the deep polarization network and is used for performing data processing processes such as convolution processing and pooling processing, the hidden layer is one or more layers of neural networks trained based on deep learning, the hash layer is an output layer of the deep polarization network and is used for performing polarization hash and outputting a corresponding hash result, and the hash layer is one or more layers of neural networks trained based on deep learning.
Based on the preset projection image quality problem classification model, hash coding is performed on the projection image representation to obtain the hash code value corresponding to the projection image representation. Specifically, data processing is performed on the projection image representation by the hidden layer so as to extract the projection image quality problem category features in the projection image representation and obtain the category feature representation matrix corresponding to the projection image representation, where the category feature representation matrix is a matrix representation of the category feature information in the projection image representation. The category feature representation matrix is then input into the hash layer, where it is fully connected to obtain a full connection vector, and the full connection vector is polarized to obtain the polarized hash vector corresponding to the full connection vector. The polarized hash vector is then encoded based on each feature value in the polarized hash vector to obtain the hash code value.
Wherein the preset projection image quality problem classification model comprises a Hash layer,
the step of performing hash coding on the projection image characterization based on the preset projection image quality problem classification model to obtain a hash coding value corresponding to the projection image characterization comprises:
step S211, inputting the representation of the projected image into the hash layer, and performing polarized hash on the representation of the projected image to obtain a polarized hash result;
in this embodiment, the projection image representation is input to the hash layer, polarization hashing is performed on the projection image representation, a polarization hash result is obtained, specifically, class feature extraction is performed on the projection image representation, a class feature representation matrix is obtained, the class feature representation matrix is further fully connected, a full connection vector is obtained, and then a polarization output channel is matched for each specific bit in the full connection vector, where the specific bit is a bit to which a feature value in a preset feature value range in the full connection vector belongs, for example, the preset feature value range is set to (-1, 1), bits at which all feature values in the range of (-1, 1) are located are specific bits, and then, based on polarization parameters corresponding to each polarization output channel, the feature values on the specific bits corresponding to each polarization output channel are respectively polarized, keeping a feature value of a lower threshold not larger than a preset feature value range away from 0 from a negative direction, keeping a feature value of an upper threshold not smaller than the preset feature value range away from 0 from a positive direction, further obtaining a polarization feature value corresponding to a feature value on each specific bit, directly outputting a feature value on each non-specific bit, obtaining a non-polarization feature value corresponding to each non-specific bit, further generating a polarization hash vector corresponding to each polarization feature value and each non-polarization feature value together based on a position sequence of each polarization feature value and each non-polarization feature value in the full-concatenation vector, and taking the polarization hash vector as the polarization hash result, wherein, preferably, the preset feature value range can be set as a value range symmetrical with respect to 0 value, for example, assuming that the preset feature value range is (-0.5, 0.5), the full join vector is (-0.8, 0.05, -0.05, 1.2), and after the polarization of a specific bit is performed, the polarized hash vector corresponding to the full join vector is (-1.1, 0, 0, 2).
Step S212, converting the polarized hash result into the hash code value based on the target feature value on each bit in the polarized hash result.
In this embodiment, it should be noted that the preset hash coding mode includes a binary hash coding mode and a ternary hash coding mode, and the polarized hash result is a polarized hash vector, that is, a polarized full join vector.
Converting the polarized hash result into the hash code value based on the target characteristic value on each bit in the polarized hash result, specifically, converting the target characteristic value on each bit in the polarized hash result into a corresponding hash value based on a preset hash code mode, and obtaining the hash code value corresponding to the polarized hash result.
Wherein the polarized hash result comprises a polarized hash vector, the hash code value comprises at least one of a binary hash code and a ternary hash code,
the step of converting the polarized hash result into the hash code value based on the target feature value on each bit in the polarized hash result comprises:
step A10, based on the positive and negative signs of each target feature value, performing binary hash code conversion on the polarized hash vector to obtain the binary hash code value;
in this embodiment, based on the signs of the target feature values, binary hash code conversion is performed on the polarized hash vector to obtain the binary hash code value, specifically, based on the signs of the target feature values on the bits in the polarized hash result, the target feature value greater than 0 in the polarized hash vector is converted into a preset first-type binary hash value, and the target feature value smaller than 0 in the polarized hash vector is converted into a preset second-type binary hash value to obtain the binary hash code value, where preferably, the preset first-type binary hash value is set to 1, and the preset second-type binary hash value is set to 0.
And step B10, performing three-value hash code conversion on the polarized hash vector based on the size of each target characteristic value and a preset characteristic value range to obtain three-value hash code values.
In this embodiment, three-valued hash code conversion is performed on the polarized hash vector based on the size of each target feature value and a preset feature value range to obtain three-valued hash code values, specifically, a target feature value in the polarized hash vector whose size is greater than an upper threshold of the preset feature value range is converted into a preset first-type three-valued hash value, a target feature value in the polarized hash vector whose size is smaller than a lower threshold of the preset feature value range is converted into a preset second-type three-valued hash value, a target feature value in the polarized hash vector whose size is not smaller than the lower threshold of the preset feature value range and is not greater than the upper threshold of the preset feature value range is converted into a preset third-type three-valued hash value, and then a three-valued hash code value is obtained, preferably, the preset first-type three-valued hash value may be set to 1, the preset second type three-valued hash value may be set to-1, and the preset third type three-valued hash value may be set to 0.
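The following minimal sketch shows the two hash code conversions described above (binary conversion based on sign, and ternary conversion based on the preset feature value range); the concrete hash values 1/0 and 1/-1/0 follow the preferred settings stated in the text, and the helper names are illustrative.

```python
# Illustrative conversion of a polarized hash vector into binary / ternary hash code values.
import numpy as np

def binary_hash(polarized_vector):
    """Sign-based binary hash: values > 0 -> 1 (first type), otherwise -> 0 (second type)."""
    v = np.asarray(polarized_vector)
    return np.where(v > 0, 1, 0)

def ternary_hash(polarized_vector, value_range=(-0.5, 0.5)):
    """Range-based ternary hash: above upper threshold -> 1, below lower threshold -> -1, else 0."""
    lo, hi = value_range
    v = np.asarray(polarized_vector)
    return np.where(v > hi, 1, np.where(v < lo, -1, 0))

polarized = [-1.1, 0.0, 0.0, 2.0]
print(binary_hash(polarized))    # [0 0 0 1]
print(ternary_hash(polarized))   # [-1  0  0  1]
```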
Step S22, generating the projection image quality problem classification result based on the hash code values and the preset hash code values.
In this embodiment, it should be noted that the preset hash code value is a hash code value corresponding to a category of a sample representation corresponding to a preset projection image sample, and is used to uniquely identify a projection image quality problem category corresponding to the projection image sample, for example, if the hash code value is 111111, the projection image quality problem category corresponding to the projection image sample is identified as a brightness problem, and if the hash code value is 000001, the projection image quality problem category corresponding to the projection image sample is identified as a sharpness problem.
In this embodiment, specifically, the Hamming distance between the output hash code value and each preset hash code value is calculated, the target hash code value having the smallest Hamming distance to the output hash code value is determined among the preset hash code values, the projection image quality problem category corresponding to the target hash code value is then determined, and this projection image quality problem category is taken as the projection image quality problem classification result.
Wherein the step of generating the projection image quality problem classification result based on the hash code value and each preset hash code value comprises:
step S221, calculating Hamming distances between the hash code values and the preset hash code values;
in this embodiment, the hamming distance between the hash code value and each of the predetermined hash code values is calculated, specifically, the hash code value is compared with each of the predetermined hash code values, the number of bit pairs different between the hash code value and each preset hash code value is respectively calculated, wherein the distinct bit pairs are combinations of two bits of different bit values, for example, if the value of bit A is 1 and the value of bit B is 0, then bit A and bit B together form a distinct bit pair, the number of distinct bit pairs is then taken as the hamming distance, e.g., assuming a hash code value of 1100, a predetermined hash code value of 1101, a different bit pair exists between the hash code value and the predetermined hash code value, and the hamming distance is 1.
Step S222, determining a target hash code value corresponding to the hash code value in each preset hash code value based on each hamming distance;
in this embodiment, based on each hamming distance, a target hash code value corresponding to the hash code value is determined in each preset hash code value, specifically, a minimum hamming distance is selected from each hamming distance, and a preset hash code corresponding to the minimum hamming distance is used as the target hash code value.
In step S223, the projection image quality problem category corresponding to the target hash code value is used as the projection image quality problem classification result.
In this embodiment, the projection image quality problem category corresponding to the target hash code value is used as the projection image quality problem classification result, and specifically, the projection image quality problem category corresponding to the determined target hash code value is used as the projection image quality problem classification result based on a mapping relationship between a preset hash code value and the projection image quality problem category.
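As a sketch of steps S221 to S223 (not the patented implementation), the snippet below computes Hamming distances against each preset hash code value, picks the nearest one, and looks up the corresponding quality problem category; the example code values and category names follow the illustrations given earlier in the text.

```python
# Illustrative nearest-preset-hash-code classification (steps S221 to S223).
def hamming_distance(code_a: str, code_b: str) -> int:
    """Count the bit positions at which the two hash code values differ."""
    return sum(a != b for a, b in zip(code_a, code_b))

# Example preset hash code values -> quality problem categories (from the text's examples).
preset_codes = {
    "111111": "brightness problem",
    "000001": "sharpness problem",
}

def classify(hash_code: str) -> str:
    # Pick the preset hash code value with the minimum Hamming distance.
    target = min(preset_codes, key=lambda code: hamming_distance(hash_code, code))
    return preset_codes[target]

print(hamming_distance("1100", "1101"))   # 1, as in the worked example
print(classify("011111"))                 # "brightness problem"
```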
Additionally, it should be noted that the hash coding model is a model optimized based on a polarization loss function and a preset hash coding value, where the preset hash coding value is preset projection image sample category information, and in an implementable manner, the polarization loss function is as follows:
L(v,t^c)=max(m-v*t^c,0)
wherein L is the polarization loss function, m is a preset forced polarization parameter, v is the value on a bit of the polarization hash vector corresponding to the training sample (after convergence the absolute value of v is greater than m), and t^c is the target hash value corresponding to that bit, the target hash value being the bit value at the corresponding position of the preset hash code value of the training sample, with t^c ∈ {-1, +1}. For example, if m = 1, t^c = 1, and v = -1, then L = 2; for the polarization loss function to converge to 0, v must be forcibly polarized so that v = 1, at which point L = 0. Consequently, when t^c = 1, the value on that bit of the polarization hash vector corresponding to the training sample gradually moves away from 0 in the positive direction, and when t^c = -1, the value on that bit gradually moves away from 0 in the negative direction. After polarization succeeds, the polarization identifier of each bit in the polarization hash vector corresponding to the training sample is consistent with the corresponding target hash value, where the polarization identifier includes the value range of the bit value and the sign of the bit value; that is, the sign (or value range) of each bit value in the polarization hash vector is consistent with the corresponding target hash value. Further, since preset hash code values of the same sample category are identical, the polarization identifiers on each bit of the polarization hash vectors corresponding to training samples of the same sample category are consistent, and the hash code values obtained from these polarization identifiers are therefore identical; that is, for input projection image samples belonging to the same sample category, the hash coding model outputs the same hash code value.
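A minimal sketch of the polarization loss L(v, t^c) = max(m − v·t^c, 0) as stated above; the batching over bits and the mean reduction are assumptions.

```python
# Illustrative polarization loss max(m - v * t, 0), averaged over bits (reduction assumed).
import torch

def polarization_loss(v: torch.Tensor, target_bits: torch.Tensor, m: float = 1.0) -> torch.Tensor:
    """v: pre-hash output values; target_bits: preset hash code bits in {-1, +1}."""
    return torch.clamp(m - v * target_bits, min=0).mean()

v = torch.tensor([-1.0, 0.3, 2.0])
t = torch.tensor([1.0, 1.0, -1.0])
print(polarization_loss(v, t))   # mean of [2.0, 0.7, 3.0] = 1.9
```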
And step S30, adjusting the projection image quality of the projection image to be processed based on the projection image quality adjustment strategy corresponding to the image quality problem classification result, and obtaining the target projection image.
In this embodiment, the projection image quality problem classification result includes a projection image quality classification vector, which is a vector representing the projection image quality of the projection image to be processed. The projection image quality classification vector at least includes a projection image quality tag value, where the position of the projection image quality tag value in the projection image quality classification vector indicates the corresponding projection image quality category, and the numerical size of the projection image quality tag value indicates the projection image quality. For example, if the projection image quality classification vector is (1, 2), then 1 indicates a projection definition level of 1 and 2 indicates a projection brightness level of 2.
The projection image quality of the projection image to be processed is adjusted based on the projection image quality adjustment strategy corresponding to the image quality problem classification result to obtain the target projection image. Specifically, based on the projection image quality tag values in the projection image quality classification vector, the projection image quality level corresponding to each preset projection image quality category of the projection image to be processed is determined, where the preset projection image quality categories may be, for example, definition, resolution, black dots, and brightness. A projection image quality adjustment strategy is generated by comparing each projection image quality level with the corresponding preset projection image quality level, and the projection image quality of the projection image to be processed is adjusted according to the projection image quality adjustment strategy to obtain the target projection image.
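As an illustration only (the category names, target levels, and adjustment actions are assumptions, not the patented strategy), the sketch below turns a projection image quality classification vector into per-category adjustments by comparing each level against a preset level:

```python
# Hypothetical mapping from a quality classification vector to an adjustment strategy.
preset_levels = {"definition": 3, "brightness": 3}    # assumed target levels
categories = ["definition", "brightness"]             # assumed vector layout

def build_adjustment_strategy(quality_vector):
    """Compare each category's level with its preset level and emit an adjustment step."""
    strategy = {}
    for category, level in zip(categories, quality_vector):
        gap = preset_levels[category] - level
        if gap > 0:
            strategy[category] = f"increase {category} by {gap} level(s)"
    return strategy

print(build_adjustment_strategy((1, 2)))
# {'definition': 'increase definition by 2 level(s)', 'brightness': 'increase brightness by 1 level(s)'}
```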
In another embodiment, the classification result of the projection quality problem includes a projection quality problem category, and the step S30 further includes:
and determining a projection image quality adjustment strategy corresponding to the projection image quality problem category corresponding to the projection image to be processed based on the mapping relation between the projection image quality problem category and the projection image quality adjustment strategy, and adjusting the projection image quality of the projection image to be processed according to the projection image quality adjustment strategy to obtain a target projection image.
Compared with the prior-art technical means of performing projection image quality processing by identifying, through a conventional convolutional neural network, whether a projection image has image quality problems, the embodiment of the application first acquires a projection image to be processed and performs feature extraction on the projection image based on a feature extraction model to obtain a projection image representation corresponding to the projection image, wherein the feature extraction model is constructed by contrast learning based on a preset positive example sample set and a preset negative example sample set. The projection image representation is therefore close to the representations corresponding to the positive example image samples and far from the representations corresponding to the negative example image samples, so that the projection image representation has a higher similarity to the representations corresponding to the positive example image samples and a lower similarity to the representations corresponding to the negative example image samples, achieving the purpose of generating a projection image representation that contains image classification feature information. Projection image quality problem classification is then performed on the projection image based on the projection image representation and a preset projection image quality problem classification model; that is, the projection image is classified based on a projection image representation containing more feature information, which provides more decision basis for classifying projection image quality problems and therefore produces a projection image quality problem classification result with higher accuracy. The projection image quality of the projection image to be processed is then adjusted based on the projection image quality adjustment strategy corresponding to the image quality problem classification result to obtain the target projection image. This overcomes the problems in the prior art that, when image features of a projection image are extracted by a neural network, the projection image is usually reduced in dimension and image information is usually lost during dimension reduction, so that when the projection image carries less projection image feature information the accuracy of projection image quality processing based on the projection image is low, and thus improves the accuracy of projection image quality processing.
Further, referring to fig. 2, in another embodiment of the present application, based on the first embodiment of the present application, before the step of performing feature extraction on the projection image based on a feature extraction model to obtain a representation of the projection image corresponding to the projection image, where the feature extraction model is constructed by performing contrast learning based on a preset positive example sample set and a preset negative example sample set, the method for processing projection image quality further includes:
step C10, acquiring a feature extraction model to be trained, and extracting a training projection image;
in this embodiment, it should be noted that the training projection image is a training sample used for constructing a feature extraction model, and the feature extraction model to be trained is an untrained feature extraction model.
Step C20, extracting a first contrast projection image corresponding to the training projection image and corresponding second contrast projection images based on the preset positive example sample set and the preset negative example sample set;
in this embodiment, it should be noted that the preset positive example set at least includes a positive example, and the preset negative example set at least includes a negative example, where the positive example is a sample belonging to the same sample category as the training projection image.
And extracting a first contrast projection image corresponding to the training projection image and corresponding second contrast projection images based on the preset positive example sample set and the preset negative example sample set, specifically, randomly extracting a positive example in the preset positive example sample set as the first contrast projection image, and randomly extracting a preset number of negative examples in the preset negative example sample set as the second contrast projection images.
Step C30, respectively extracting the features of the training projection image, the first contrast projection image and each second contrast projection image based on the feature extraction model to be trained to obtain a training projection image representation, a first contrast projection image representation and each second contrast projection image representation;
in this embodiment, feature extraction is performed on the training projection image, the first contrast projection image, and each of the second contrast projection images based on the feature extraction model to be trained, so as to obtain a training projection image representation, a first contrast projection image representation, and each of the second contrast projection images, specifically, feature extraction is performed on the training projection image, the first contrast projection image, and each of the second contrast projection images based on the feature extraction model to be trained, and mapping the training projection image, the first contrast projection image and each second contrast projection image to a preset sample characterization space to obtain a training projection image characterization corresponding to the training projection image, a first contrast projection image characterization corresponding to the first contrast projection image and a second contrast projection image characterization corresponding to each second contrast projection image.
Step C40, calculating the contrast learning loss corresponding to the feature extraction model to be trained based on the training projection image representation, the first contrast projection image representation and each second contrast projection image representation;
in this embodiment, based on the training projection image characterization, the first contrast projection image characterization, and each of the second contrast projection image characterizations, a contrast learning loss corresponding to the feature extraction model to be trained is calculated, specifically, the training projection image characterization, the first contrast projection image characterization, and each of the second contrast projection image characterizations are respectively input into a preset contrast learning loss calculation formula, and the contrast learning loss corresponding to the feature extraction model to be trained is calculated, where the contrast learning calculation formula is as follows:
[The contrast learning loss calculation formula is presented as an image in the original publication (Figure BDA0003029581510000121) and cannot be reproduced from the text.]
In the formula, L is the contrast learning loss, u_A is the training projection image characterization, u_B is the first contrast projection image characterization, the symbol rendered as an image in the original (Figure BDA0003029581510000122) denotes the second contrast projection image characterizations, and M is the number of second contrast projection image characterizations. When the distance between the first contrast projection image characterization and the training projection image characterization is sufficiently small and the distance between each second contrast projection image characterization and the training projection image characterization is sufficiently large, the contrast learning loss converges. The feature extraction model updated based on the contrast learning loss therefore gains the ability to pull the training projection image characterization closer to the first contrast projection image characterization (the positive example) and push it away from each second contrast projection image characterization (the negative examples). The feature extraction model can thus generate different sample characterizations for samples of different sample types (positive examples or negative examples), so that the generated sample characterizations carry sample category information (positive example category information or negative example category information), which increases the amount of information contained in the sample characterizations produced by feature extraction.
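The exact formula is only available as an image in the original; purely as an illustration, the sketch below implements a common InfoNCE-style contrast learning loss with the same inputs (u_A, u_B, and M negative characterizations) and the same convergence behavior described above. The similarity measure and temperature are assumptions and may differ from the patented formula.

```python
# Hypothetical InfoNCE-style contrast learning loss (the patent's exact formula is an image
# in the original and may differ); u_a: training repr, u_b: positive repr, negatives: (M, D).
import torch
import torch.nn.functional as F

def contrast_learning_loss(u_a, u_b, negatives, temperature: float = 0.1):
    u_a, u_b = F.normalize(u_a, dim=0), F.normalize(u_b, dim=0)
    negatives = F.normalize(negatives, dim=1)
    pos_sim = torch.dot(u_a, u_b) / temperature    # similarity to the positive example
    neg_sim = negatives @ u_a / temperature        # similarity to each negative example
    logits = torch.cat([pos_sim.unsqueeze(0), neg_sim])
    # Loss is small when u_a is close to u_b and far from every negative characterization.
    return -F.log_softmax(logits, dim=0)[0]

u_a, u_b = torch.randn(128), torch.randn(128)
negs = torch.randn(8, 128)                         # M = 8 second contrast characterizations
print(contrast_learning_loss(u_a, u_b, negs))
```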
After the step of calculating the contrast learning loss corresponding to the feature extraction model to be trained based on the training projection image representation, the first contrast projection image representation, and each of the second contrast projection image representations, the projection image quality processing method further includes:
step D10, inputting the training projection image representation into a to-be-trained projection image quality problem classification model so as to classify the projection image quality problems of the training projection image and obtain a projection image quality problem category label;
in this embodiment, the training projection image representation is input to a to-be-trained projection image quality problem classification model to classify the training projection image into a projection image quality problem classification class label, and specifically, the training projection image representation is input to the to-be-trained projection image quality problem classification model to perform data processing on the training projection image representation, where the data processing includes, but is not limited to, convolution, pooling, full connection, and the like, and the training projection image representation is mapped to a projection image quality problem class label, where the projection image quality problem class label is an identifier of a sample class of the training projection image, and the prediction class label is represented by a vector, for example, assuming that the prediction class label is (0, 0, 1), the sample class a is represented.
Step D20, calculating a category prediction loss based on the real category label corresponding to the training projection image and the projection image quality problem category label;
in this embodiment, it should be noted that the real category label is an identifier of a known real projection image category corresponding to the training projection image.
Calculating a category prediction loss based on the real category label corresponding to the training projection image and the projection image quality problem category label, specifically calculating a difference value between the real category label corresponding to the training projection image and the projection image quality problem category label, and obtaining the category prediction loss.
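As an illustration of the category prediction loss in step D20 (the text only specifies a difference between the real category label and the predicted label, so the cross-entropy form used here is an assumption):

```python
# Hypothetical category prediction loss between the predicted label vector and the real label.
import torch
import torch.nn.functional as F

def category_prediction_loss(predicted_logits: torch.Tensor, real_label: torch.Tensor) -> torch.Tensor:
    """predicted_logits: raw class scores; real_label: one-hot real category label."""
    return F.cross_entropy(predicted_logits.unsqueeze(0), real_label.argmax().unsqueeze(0))

logits = torch.tensor([0.2, 0.1, 2.5])    # model output before softmax
real = torch.tensor([0.0, 0.0, 1.0])      # real category label, e.g. sample category A
print(category_prediction_loss(logits, real))
```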
And D30, optimizing the to-be-trained projection image quality problem classification model and the to-be-trained feature extraction model based on the category prediction loss and the contrast learning loss to obtain the target feature extraction model and the preset projection image quality problem classification model.
In this embodiment, based on the category prediction loss and the contrast learning loss, the projection image quality problem classification model to be trained and the feature extraction model to be trained are optimized to obtain the target feature extraction model and the preset projection image quality problem classification model. Specifically, a first model update gradient corresponding to the projection image quality problem classification model to be trained and a second model update gradient corresponding to the feature extraction model to be trained are calculated based on the category prediction loss, and a third model update gradient corresponding to the feature extraction model to be trained is calculated based on the contrast learning loss. The projection image quality problem classification model to be trained is then updated based on the first model update gradient, and the feature extraction model to be trained is asynchronously updated based on the second model update gradient and the third model update gradient. It is then judged whether the updated projection image quality problem classification model to be trained and the asynchronously updated feature extraction model to be trained both satisfy a preset training end condition; if so, the projection image quality problem classification model to be trained is taken as the preset projection image quality problem classification model and the feature extraction model to be trained is taken as the feature extraction model; if not, the process returns to the step of extracting the training projection image. The preset training end condition includes loss convergence, reaching a maximum iteration count threshold, and the like.
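A minimal sketch of one joint optimization step under stated assumptions: the optimizers are not specified by the text, applying the two gradients to the feature extractor in two separate backward/step passes is one reading of "asynchronously updated", and contrast_learning_loss is reused from the earlier sketch.

```python
# Hypothetical joint training step: classifier updated from the category prediction loss,
# feature extractor updated from both losses in separate (asynchronous) passes.
import torch

def train_step(encoder, classifier, enc_opt, cls_opt, batch):
    image, positive, negatives, real_label = batch

    # Category prediction loss -> gradients for the classifier and the encoder.
    logits = classifier(encoder(image))
    cls_loss = torch.nn.functional.cross_entropy(logits, real_label)
    cls_opt.zero_grad(); enc_opt.zero_grad()
    cls_loss.backward()
    cls_opt.step(); enc_opt.step()            # first / second model update gradients

    # Contrast learning loss -> separate gradient pass for the encoder only.
    con_loss = contrast_learning_loss(encoder(image)[0], encoder(positive)[0],
                                      encoder(negatives))
    enc_opt.zero_grad()
    con_loss.backward()
    enc_opt.step()                            # third model update gradient
    return cls_loss.item(), con_loss.item()
```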
And step C50, optimizing the feature extraction model to be trained based on the contrast learning loss to obtain the feature extraction model.
In this embodiment, the feature extraction model to be trained is optimized based on the contrast learning loss to obtain the feature extraction model. Specifically, a model update gradient corresponding to the feature extraction model to be trained is calculated based on the contrast learning loss, and the feature extraction model to be trained is updated according to the model update gradient. If the updated feature extraction model to be trained satisfies a preset iterative training end condition, the feature extraction model to be trained is taken as the feature extraction model; if not, the process returns to the step of obtaining the feature extraction model to be trained. The preset iterative training end condition includes loss convergence, reaching a maximum iteration count threshold, and the like.
The embodiment of the application provides a method for constructing the feature extraction model: a feature extraction model to be trained is obtained and a training projection image is extracted; a first contrast projection image corresponding to the training projection image and each corresponding second contrast projection image are extracted based on the preset positive example sample set and the preset negative example sample set; feature extraction is performed on the training projection image, the first contrast projection image, and each second contrast projection image based on the feature extraction model to be trained to obtain a training projection image representation, a first contrast projection image representation, and each second contrast projection image representation; the contrast learning loss corresponding to the feature extraction model to be trained is calculated based on these representations; and the feature extraction model to be trained is optimized based on the contrast learning loss to obtain the feature extraction model. Because the feature extraction model is constructed by contrast learning based on the positive example samples in the preset positive example sample set and the negative example samples in the preset negative example sample set, the projection image representation is close to the representations corresponding to the positive example samples and far from the representations corresponding to the negative example samples, so that the projection image representation has a higher similarity to the representations corresponding to the positive example samples and a lower similarity to the representations corresponding to the negative example samples, achieving the purpose of generating a projection image representation that carries sample category information. Projection image quality problem classification of the projection image to be processed can then be performed based on the projection image representation and the preset projection image quality problem classification model, so that a projection image quality problem classification result with higher accuracy can be generated, and the projection image quality of the projection image to be processed can be adjusted based on the projection image quality adjustment strategy corresponding to the image quality problem classification result to obtain the target projection image. This lays a foundation for overcoming the technical defect in the prior art that, when the image features of a projection image are extracted by a neural network, the projection image is usually reduced in dimension and image information is usually lost during dimension reduction, so that the accuracy of projection image quality processing based on the projection image is low when the projection image carries less projection image feature information.
Referring to fig. 3, fig. 3 is a schematic device structure diagram of a hardware operating environment according to an embodiment of the present application.
As shown in fig. 3, the apparatus for processing projection image quality may include: a processor 1001, such as a CPU, a memory 1005, and a communication bus 1002. The communication bus 1002 is used for realizing connection communication between the processor 1001 and the memory 1005. The memory 1005 may be a high-speed RAM memory or a non-volatile memory such as a disk memory. The memory 1005 may alternatively be a memory device separate from the processor 1001 described above.
Optionally, the projection image quality processing device may further include a rectangular user interface, a network interface, a camera, an RF (Radio Frequency) circuit, a sensor, an audio circuit, a WiFi module, and the like. The rectangular user interface may comprise a Display screen (Display) and an input sub-module such as a Keyboard (Keyboard), and the optional rectangular user interface may also comprise a standard wired interface and a wireless interface. The network interface may optionally include a standard wired interface and a wireless interface (e.g., a WI-FI interface).
Those skilled in the art will appreciate that the device configuration shown in fig. 3 does not constitute a limitation of the projection image quality processing device, which may include more or fewer components than those illustrated, combine some components, or use a different arrangement of components.
As shown in fig. 3, the memory 1005, which is a kind of computer storage medium, may include an operating system, a network communication module, and a projection image quality processing method program. The operating system is a program that manages and controls the hardware and software resources of the projection image quality processing device, and supports the execution of the projection image quality processing method program as well as other software and/or programs. The network communication module is used for realizing communication among the components in the memory 1005 and communication with other hardware and software in the projection image quality processing system.
In the device of fig. 3, the processor 1001 is configured to execute the projection image quality processing program stored in the memory 1005 to implement the steps of the projection image quality processing method according to any one of the embodiments described above.
The specific implementation of the projection image quality processing device is substantially the same as that of the embodiments of the projection image quality processing method described above, and is not described herein again.
The embodiment of the present application further provides a projection image quality processing apparatus, where the projection image quality processing apparatus is applied to a projection image quality processing device, and the projection image quality processing apparatus includes:
the feature extraction module is used for acquiring a projection image to be processed, and performing feature extraction on the projection image based on a feature extraction model to obtain a projection image representation corresponding to the projection image, wherein the feature extraction model is constructed through contrastive learning based on a preset positive example sample set and a preset negative example sample set;
the classification module is used for classifying the projection image quality problems of the projection image to be processed based on the projection image representation and a preset projection image quality problem classification model to obtain a projection image quality problem classification result;
and the image quality adjusting module is used for adjusting the projection image quality of the projection image to be processed based on the projection image quality adjusting strategy corresponding to the image quality problem classification result to obtain a target projection image.
Optionally, the classification module is further configured to:
based on the preset projection image quality problem classification model, performing hash coding on the projection image representation to obtain a hash code value corresponding to the projection image representation;
and generating the projection image quality problem classification result based on the hash code value and each preset hash code value.
Optionally, the classification module is further configured to:
calculating the Hamming distance between the hash code value and each preset hash code value;
determining a target hash code value corresponding to the hash code value among the preset hash code values based on each Hamming distance;
and taking the projection image quality problem category corresponding to the target hash code value as the projection image quality problem classification result.
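As an illustration of this matching step, the following is a hedged sketch in Python: the hash code value of the projection image representation is compared with each preset hash code value by Hamming distance, and the quality problem category of the closest preset code is taken as the classification result. The example categories and code values are assumptions for illustration only, not values from the patent.

```python
def hamming_distance(code_a: str, code_b: str) -> int:
    """Number of positions at which two equal-length hash codes differ."""
    return sum(a != b for a, b in zip(code_a, code_b))

def classify_by_hash(code: str, preset_codes: dict) -> str:
    """preset_codes maps a quality-problem category to its preset hash code value;
    returns the category whose preset code is closest in Hamming distance."""
    return min(preset_codes, key=lambda cat: hamming_distance(code, preset_codes[cat]))

# Usage with illustrative (assumed) categories and preset codes:
presets = {"defocus": "1100", "color_cast": "0011", "keystone": "1010"}
print(classify_by_hash("1101", presets))  # -> "defocus" (Hamming distance 1)
```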
Optionally, the classification module is further configured to:
inputting the representation of the projection image into the hash layer, and performing polarized hash on the representation of the projection image to obtain a polarized hash result;
and converting the polarized hash result into the hash code value based on the target characteristic value on each bit in the polarized hash result.
Optionally, the classification module is further configured to:
performing binary hash code conversion on the polarized hash vector based on the sign of each target characteristic value to obtain a binary hash code value; and/or
performing ternary hash code conversion on the polarized hash vector based on the magnitude of each target characteristic value and a preset characteristic value range to obtain a ternary hash code value.
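The conversion just described can be illustrated with the following sketch, assuming the polarized hash result is a real-valued vector: the sign of each target characteristic value yields the binary hash code, while a preset characteristic value range (assumed here to be (-0.5, 0.5)) yields the ternary hash code. The thresholds and sample vector are illustrative assumptions.

```python
def to_binary_hash(polarized_vector):
    """Binary hash code derived from the sign of each target characteristic value."""
    return [1 if v > 0 else 0 for v in polarized_vector]

def to_ternary_hash(polarized_vector, low=-0.5, high=0.5):
    """Ternary hash code derived from a preset characteristic value range (low, high):
    above the range -> 1, below the range -> -1, inside the range -> 0."""
    return [1 if v > high else (-1 if v < low else 0) for v in polarized_vector]

polarized = [1.3, -0.2, -0.9, 0.4]        # illustrative polarized hash vector
print(to_binary_hash(polarized))          # [1, 0, 0, 1]
print(to_ternary_hash(polarized))         # [1, 0, -1, 0]
```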
Optionally, the projection image quality processing apparatus is further configured to:
acquiring a feature extraction model to be trained, and extracting a training projection image;
extracting a first contrast projection image corresponding to the training projection image and corresponding second contrast projection images based on the preset positive example sample set and the preset negative example sample set;
respectively extracting the features of the training projection image, the first contrast projection image and each second contrast projection image based on the feature extraction model to be trained to obtain a training projection image representation, a first contrast projection image representation and each second contrast projection image representation;
calculating a contrast learning loss corresponding to the feature extraction model to be trained based on the training projection image representation, the first contrast projection image representation and each second contrast projection image representation;
and optimizing the feature extraction model to be trained based on the contrastive learning loss to obtain the feature extraction model.
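For a concrete picture of this training step, here is a minimal sketch assuming an InfoNCE-style formulation of the contrastive learning loss (the patent does not fix the exact loss formula, so this formulation is an assumption): the training projection image representation is pulled toward the first contrast (positive example) representation and pushed away from each second contrast (negative example) representation.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(anchor, positive, negatives, temperature=0.07):
    """anchor, positive: (D,) representations; negatives: (N, D) representations."""
    anchor = F.normalize(anchor, dim=-1)
    positive = F.normalize(positive, dim=-1)
    negatives = F.normalize(negatives, dim=-1)
    pos_sim = (anchor @ positive) / temperature          # similarity to the positive example
    neg_sim = (negatives @ anchor) / temperature         # similarities to negative examples, (N,)
    logits = torch.cat([pos_sim.unsqueeze(0), neg_sim])  # positive example sits at index 0
    target = torch.zeros(1, dtype=torch.long)            # the "correct class" is index 0
    return F.cross_entropy(logits.unsqueeze(0), target)

def training_step(model, optimizer, train_img, pos_img, neg_imgs):
    """model: assumed to map one image tensor to a (D,) representation vector."""
    loss = contrastive_loss(model(train_img), model(pos_img),
                            torch.stack([model(x) for x in neg_imgs]))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```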
Optionally, the projection image quality processing apparatus is further configured to:
inputting the representation of the training projection image into a to-be-trained projection image quality problem classification model to classify the projection image quality problem of the training projection image to obtain a projection image quality problem category label;
calculating a category prediction loss based on a real category label corresponding to the training projection image and the projection image quality problem category label;
and optimizing the to-be-trained projection image quality problem classification model and the to-be-trained feature extraction model based on the category prediction loss and the contrastive learning loss to obtain the target feature extraction model and the preset projection image quality problem classification model.
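A brief sketch of this joint optimization, assuming the two losses are combined by a weighted sum (the patent states that both losses are used but does not give the combination rule, so the weighting is an assumption). The combined loss would then be backpropagated to update both the feature extraction model to be trained and the classification model to be trained.

```python
import torch.nn.functional as F

def joint_loss(class_logits, true_label, contrastive_loss_value, weight=1.0):
    """class_logits: (1, num_classes) output of the classification model to be trained;
    true_label: (1,) long tensor holding the real category label;
    contrastive_loss_value: the contrastive learning loss computed earlier."""
    category_prediction_loss = F.cross_entropy(class_logits, true_label)
    return category_prediction_loss + weight * contrastive_loss_value
```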
The detailed implementation of the projection image quality processing apparatus of the present application is substantially the same as that of the embodiments of the projection image quality processing method described above, and is not repeated herein.
The embodiment of the application further provides a readable storage medium, where the readable storage medium stores one or more programs, and the one or more programs may be executed by one or more processors to implement the steps of the projection image quality processing method according to any one of the embodiments described above.
The specific implementation of the readable storage medium of the present application is substantially the same as the embodiments of the projection image quality processing method, and is not repeated herein.
The above description is only a preferred embodiment of the present application and is not intended to limit the scope of the present application. Any equivalent structure or equivalent process transformation made on the basis of this description, or any direct or indirect application thereof in other related technical fields, is likewise included within the scope of protection of the present application.

Claims (8)

1. A projection image quality processing method is characterized by comprising the following steps:
acquiring a projection image to be processed, and performing feature extraction on the projection image based on a feature extraction model to obtain a projection image representation corresponding to the projection image, wherein the feature extraction model is constructed through contrastive learning based on a preset positive example sample set and a preset negative example sample set;
based on the projection image representation and a preset projection image quality problem classification model, carrying out projection image quality problem classification on the projection image to be processed to obtain a projection image quality problem classification result;
adjusting the projection image quality of the projection image to be processed based on a projection image quality adjustment strategy corresponding to the image quality problem classification result to obtain a target projection image;
before the step of performing feature extraction on the projection image based on the feature extraction model to obtain the projection image representation corresponding to the projection image, wherein the feature extraction model is constructed through contrastive learning based on the preset positive example sample set and the preset negative example sample set, the projection image quality processing method further includes:
acquiring a feature extraction model to be trained, and extracting a training projection image;
extracting a first contrast projection image corresponding to the training projection image and corresponding second contrast projection images based on the preset positive example sample set and the preset negative example sample set;
respectively extracting the features of the training projection image, the first contrast projection image and each second contrast projection image based on the feature extraction model to be trained to obtain a training projection image representation, a first contrast projection image representation and each second contrast projection image representation;
calculating a contrastive learning loss corresponding to the feature extraction model to be trained based on the training projection image representation, the first contrast projection image representation and each second contrast projection image representation;
optimizing the feature extraction model to be trained based on the contrastive learning loss to obtain the feature extraction model;
after the step of calculating the contrastive learning loss corresponding to the feature extraction model to be trained based on the training projection image representation, the first contrast projection image representation and each second contrast projection image representation, the projection image quality processing method further includes:
inputting the representation of the training projection image into a to-be-trained projection image quality problem classification model to classify the projection image quality problem of the training projection image to obtain a projection image quality problem category label;
calculating a category prediction loss based on a real category label corresponding to the training projection image and the projection image quality problem category label;
and optimizing the to-be-trained projection image quality problem classification model and the to-be-trained feature extraction model based on the category prediction loss and the contrastive learning loss to obtain the target feature extraction model and the preset projection image quality problem classification model.
2. The projection image quality processing method as claimed in claim 1, wherein the step of classifying the projection image quality problem of the projection image to be processed based on the projection image representation and the preset projection image quality problem classification model to obtain the projection image quality problem classification result comprises:
based on the preset projection image quality problem classification model, performing hash coding on the projection image representation to obtain a hash code value corresponding to the projection image representation;
and generating the projection image quality problem classification result based on the hash code value and each preset hash code value.
3. The projection image quality processing method as claimed in claim 2, wherein the step of generating the projection image quality problem classification result based on the hash code value and each preset hash code value comprises:
calculating the Hamming distance between the hash code value and each preset hash code value;
determining a target hash code value corresponding to the hash code value among the preset hash code values based on each Hamming distance;
and taking the projection image quality problem category corresponding to the target hash code value as the projection image quality problem classification result.
4. The projection image quality processing method as claimed in claim 2, wherein the preset projection image quality problem classification model comprises a hash layer,
and the step of performing hash coding on the projection image representation based on the preset projection image quality problem classification model to obtain the hash code value corresponding to the projection image representation comprises:
inputting the representation of the projection image into the hash layer, and performing polarized hash on the representation of the projection image to obtain a polarized hash result;
and converting the polarized hash result into the hash code value based on the target characteristic value on each bit in the polarized hash result.
5. The projection image quality processing method as claimed in claim 4, wherein the polarized hash result comprises a polarized hash vector, the hash code value comprises at least one of a binary hash code value and a ternary hash code value,
the step of converting the polarized hash result into the hash code value based on the target feature value on each bit in the polarized hash result comprises:
performing binary hash code conversion on the polarized hash vector based on the sign of each target characteristic value to obtain the binary hash code value; and/or
performing ternary hash code conversion on the polarized hash vector based on the magnitude of each target characteristic value and a preset characteristic value range to obtain the ternary hash code value.
6. A projection image quality processing apparatus, comprising:
the feature extraction module is used for acquiring a projection image to be processed, and performing feature extraction on the projection image based on a feature extraction model to obtain a projection image representation corresponding to the projection image, wherein the feature extraction model is constructed through contrastive learning based on a preset positive example sample set and a preset negative example sample set;
the classification module is used for classifying the projection image quality problem of the projection image to be processed based on the projection image representation and a preset projection image quality problem classification model to obtain a projection image quality problem classification result;
the image quality adjusting module is used for adjusting the projection image quality of the projection image to be processed based on a projection image quality adjusting strategy corresponding to the image quality problem classification result to obtain a target projection image;
the projection image quality processing apparatus is further configured to:
acquiring a feature extraction model to be trained, and extracting a training projection image;
extracting a first contrast projection image corresponding to the training projection image and corresponding second contrast projection images based on the preset positive example sample set and the preset negative example sample set;
respectively extracting the features of the training projection image, the first contrast projection image and each second contrast projection image based on the feature extraction model to be trained to obtain a training projection image representation, a first contrast projection image representation and each second contrast projection image representation;
calculating a contrastive learning loss corresponding to the feature extraction model to be trained based on the training projection image representation, the first contrast projection image representation and each second contrast projection image representation;
optimizing the feature extraction model to be trained based on the contrastive learning loss to obtain the feature extraction model;
the projection image quality processing apparatus is further configured to:
inputting the representation of the training projection image into a to-be-trained projection image quality problem classification model to classify the projection image quality problem of the training projection image to obtain a projection image quality problem category label;
calculating a category prediction loss based on a real category label corresponding to the training projection image and the projection image quality problem category label;
and optimizing the to-be-trained projection image quality problem classification model and the to-be-trained feature extraction model based on the category prediction loss and the contrastive learning loss to obtain the target feature extraction model and the preset projection image quality problem classification model.
7. A projection image quality processing device, characterized by comprising: a memory, a processor, and a program stored on the memory for implementing a projection image quality processing method, wherein
the memory is used for storing a program for realizing the projection image quality processing method;
the processor is configured to execute a program for implementing the projection image quality processing method, so as to implement the projection image quality processing method according to any one of claims 1 to 5.
8. A readable storage medium having a program for implementing a projection image quality processing method stored thereon, the program being executed by a processor to implement the projection image quality processing method according to any one of claims 1 to 5.
CN202110426033.2A 2021-04-20 2021-04-20 Projection image quality processing device Active CN113111953B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110426033.2A CN113111953B (en) 2021-04-20 2021-04-20 Projection image quality processing device

Publications (2)

Publication Number Publication Date
CN113111953A CN113111953A (en) 2021-07-13
CN113111953B true CN113111953B (en) 2022-08-26

Family

ID=76718935

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110426033.2A Active CN113111953B (en) 2021-04-20 2021-04-20 Projection image quality processing device

Country Status (1)

Country Link
CN (1) CN113111953B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109918532A (en) * 2019-03-08 2019-06-21 苏州大学 Image search method, device, equipment and computer readable storage medium
CN111612080A (en) * 2020-05-22 2020-09-01 深圳前海微众银行股份有限公司 Model interpretation method, device and readable storage medium
CN112214570A (en) * 2020-09-23 2021-01-12 浙江工业大学 Cross-modal retrieval method and device based on counterprojection learning hash

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105677713A (en) * 2015-10-15 2016-06-15 浙江健培慧康医疗科技股份有限公司 Position-independent rapid detection and identification method of symptoms
CN107092661A (en) * 2017-03-28 2017-08-25 桂林明辉信息科技有限公司 A kind of image search method based on depth convolutional neural networks
CN109711422B (en) * 2017-10-26 2023-06-30 北京邮电大学 Image data processing method, image data processing device, image data model building method, image data model building device, computer equipment and storage medium
CN108846340B (en) * 2018-06-05 2023-07-25 腾讯科技(深圳)有限公司 Face recognition method and device, classification model training method and device, storage medium and computer equipment
CN110490794A (en) * 2019-08-09 2019-11-22 三星电子(中国)研发中心 Character image processing method and processing device based on artificial intelligence
CN111163349B (en) * 2020-02-20 2021-03-02 腾讯科技(深圳)有限公司 Image quality parameter adjusting method, device, equipment and readable storage medium
CN111612079B (en) * 2020-05-22 2021-07-20 深圳前海微众银行股份有限公司 Data right confirming method, equipment and readable storage medium
CN111612159A (en) * 2020-05-22 2020-09-01 深圳前海微众银行股份有限公司 Feature importance measuring method, device and readable storage medium
CN111626408B (en) * 2020-05-22 2021-08-06 深圳前海微众银行股份有限公司 Hash coding method, device and equipment and readable storage medium
CN111988614B (en) * 2020-08-14 2022-09-13 深圳前海微众银行股份有限公司 Hash coding optimization method and device and readable storage medium
CN111967609B (en) * 2020-08-14 2021-08-06 深圳前海微众银行股份有限公司 Model parameter verification method, device and readable storage medium


Also Published As

Publication number Publication date
CN113111953A (en) 2021-07-13

Similar Documents

Publication Publication Date Title
CN111444878B (en) Video classification method, device and computer readable storage medium
WO2021036059A1 (en) Image conversion model training method, heterogeneous face recognition method, device and apparatus
CN108335306B (en) Image processing method and device, electronic equipment and storage medium
US11694085B2 (en) Optimizing supervised generative adversarial networks via latent space regularizations
CN111626408B (en) Hash coding method, device and equipment and readable storage medium
US9025889B2 (en) Method, apparatus and computer program product for providing pattern detection with unknown noise levels
CN112966755A (en) Inductance defect detection method and device and readable storage medium
US20140286527A1 (en) Systems and methods for accelerated face detection
CN111988614B (en) Hash coding optimization method and device and readable storage medium
WO2022236824A1 (en) Target detection network construction optimization method, apparatus and device, and medium and product
Barni et al. Forensics aided steganalysis of heterogeneous images
CN112584062B (en) Background audio construction method and device
CN114746898A (en) Method and system for generating trisection images of image matting
CN114998595B (en) Weak supervision semantic segmentation method, semantic segmentation method and readable storage medium
US20230334833A1 (en) Training method and apparatus for image processing network, computer device, and storage medium
CN116612280A (en) Vehicle segmentation method, device, computer equipment and computer readable storage medium
US11887277B2 (en) Removing compression artifacts from digital images and videos utilizing generative machine-learning models
US9595113B2 (en) Image transmission system, image processing apparatus, image storage apparatus, and control methods thereof
CN113902899A (en) Training method, target detection method, device, electronic device and storage medium
CN113111953B (en) Projection image quality processing device
CN111950712A (en) Model network parameter processing method, device and readable storage medium
CN112487927B (en) Method and system for realizing indoor scene recognition based on object associated attention
CN113793298A (en) Pulmonary nodule detection model construction optimization method, equipment, storage medium and product
CN109215057B (en) High-performance visual tracking method and device
US20210183027A1 (en) Systems and methods for recognition of user-provided images

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant