CN115631375A - Image ordering estimation method, system, device and medium - Google Patents

Image ordering estimation method, system, device and medium

Info

Publication number: CN115631375A
Authority: CN (China)
Application number: CN202211297496.4A
Other languages: Chinese (zh)
Legal status: Pending
Prior art keywords: image, fusion, learning, model, batch
Inventors: 章超, 程建梅, 白松, 冯超
Current and original assignee: Sichuan Police College
Application filed by Sichuan Police College; priority to CN202211297496.4A

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/764 - Arrangements using classification, e.g. of video objects
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/08 - Learning methods
    • G06V 10/77 - Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA], independent component analysis [ICA] or self-organising maps [SOM]; blind source separation
    • G06V 10/774 - Generating sets of training patterns; bootstrap methods, e.g. bagging or boosting


Abstract

The invention discloses an image ordering estimation method, system, device and medium, comprising the following steps: first, any two samples in each batch are spliced and fused at a random position and with a random size; second, all images in the training set are fused in shear mode, yielding a large number of random combinations; finally, the fused image is input into a CNN model, randomness information is implicitly embedded into the learning process, and the matching degree between the mixed image and the mixed label is learned to obtain a more competitive model. The advantages of the invention are that the image ordering estimation problem is addressed in a simpler and more efficient manner: a large number of combinable mixtures embed many image blocks together, competitive learning is performed over the ordered categories, recognition performance is greatly improved, and the model is more robust.

Description

Image order estimation method, system, device and medium
Technical Field
The invention relates to the technical field of self-supervised learning in image recognition and machine learning, and in particular to a competitive-learning-based image ordering estimation method, system, device and medium.
Background
The image ordering classification problem arises widely in practice and has attracted increasing research attention in recent years. Its main task is to assign an ordered category label to an image; compared with the general recognition problem, the labels here are ordered, i.e. the labels corresponding to the images stand in an ordinal relationship. The design of network models for this problem therefore also differs.
In fact, the image ordering classification problem can be regarded as a special fine-grained recognition problem: the differences between categories are small, which places high demands on the discriminative power of the model. With the wide application of deep learning in image recognition, the related models and algorithms have matured, recognition rates have risen, and the general recognition problem appears to have reached a bottleneck; for the image ordering classification problem, however, recent improvements have not been significant.
Existing methods for the image ordering classification problem mainly follow two routes: first, building a more complex or more powerful network model (which may also be called a feature extractor), such as a multi-scale or deeper network; second, designing various loss functions to better implement refined recognition. Both routes consider the network model and the loss function, but neither improves the network at the input stage. The invention aims to improve the problem from the input end and proposes an image ordering recognition method based on hybrid competitive learning.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides an image ordering estimation method, system, device and medium. It addresses the slight differences among categories in image ordering classification by introducing a richer competitive learning mechanism at the input stage, so that the slight differences among ordered samples are quantized more finely.
In order to realize the purpose, the technical scheme adopted by the invention is as follows:
an image order estimation method, comprising the steps of:
S1: taking any two picture samples $x_i$ and $x_{i'}$ and performing shear-mode fusion

$\tilde{x} = M \odot x_i + (1 - M) \odot x_{i'}$,

the corresponding label being

$\tilde{y} = \lambda y_i + (1 - \lambda) y_{i'}$,

where $M \in \{0, 1\}^{H \times W}$ is a binary mask matrix whose elements take two values, $\tilde{x}$ and $\tilde{y}$ are the combined image and soft label, and $\lambda$ is the fusion factor of the shear-mode combination; the value of $\lambda$ depends on the area ratio of the image blocks in the shear-mode combination.
S2: in the input stage of the training process, all images in the training set are fused in shear mode to obtain a large number of combination modes. Two samples are randomly selected for mixing from the $N_t$ samples of each batch, giving $\binom{N_t}{2} = \frac{N_t (N_t - 1)}{2}$ mutual pairings; the shear region admits $(H \times W)^2$ possible shearing matrices, so each batch has $\frac{N_t (N_t - 1)}{2} (H \times W)^2$ random combination modes in total.
S3: the fused image $\tilde{x}$ and label $\tilde{y}$ are input into a CNN model $f(\cdot\,; W_c)$; the model is trained and learned by characterizing the matching degree between the mixed image and the mixed label, a more competitive model is learned, and the weight value of each parameter is obtained.
S4: during testing, a test image sample $x_j$ is input into the trained CNN model $f(\cdot\,; W_c)$, and a forward computation is directly performed to obtain the prediction output $y_j = f(x_j; W_c)$.
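The combination count stated in S2 can be checked with a short calculation (a sketch with toy numbers; `n_t`, `h` and `w` below are illustrative values, not figures from the patent):

```python
from math import comb

def num_combination_modes(n_t: int, h: int, w: int) -> int:
    """Per-batch count of random combination modes as stated in S2:
    C(n_t, 2) ways to pick a sample pair, times (h*w)**2 possible
    shearing matrices for the cut region."""
    return comb(n_t, 2) * (h * w) ** 2

# Toy numbers: a batch of 32 images on an 8x8 grid.
print(num_combination_modes(32, 8, 8))
```

Even at these small sizes the count is in the millions, which illustrates the claim that shear-mode fusion yields a very large number of random combinations.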
Further, the shear-mode fusion in S1 is specifically as follows:
the height and width of the image being $H$ and $W$ respectively, the horizontal and vertical coordinates $r_x$ and $r_y$ of the center point of the cut region are drawn from the uniform distributions $\mathrm{Unif}(0, W)$ and $\mathrm{Unif}(0, H)$, and the cut sizes are set to $r_w = W\sqrt{1 - \lambda}$ and $r_h = H\sqrt{1 - \lambda}$. The start and stop positions of the abscissa of the cut region are $x_1 = \max(0,\ r_x - r_w/2)$ and $x_2 = \min(W,\ r_x + r_w/2)$, and the start and stop positions of the ordinate are $y_1 = \max(0,\ r_y - r_h/2)$ and $y_2 = \min(H,\ r_y + r_h/2)$. Finally, the cut image information of $x_{i'}$ is pasted into the other image $x_i$, expressed as $\tilde{x} = M \odot x_i + (1 - M) \odot x_{i'}$; that is, the shear region spans the abscissa and ordinate ranges $B = [x_1, x_2] \times [y_1, y_2]$, from which the fusion factor

$\lambda = 1 - \dfrac{(x_2 - x_1)(y_2 - y_1)}{W \times H}$

is obtained. The mask $M$ takes the value 0 inside the shear region $B$, i.e. $M_B = 0$, and the value 1 in the other regions. The fusion factor $\lambda$ obeys the uniform distribution on $(0, 1)$, and its value is influenced by the area proportion of the image blocks in the shear-mode fusion.
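The shear-mode fusion can be sketched in a few lines of NumPy. This is an illustrative sketch, not the patented implementation; it assumes CutMix-style cut sizes $r_w = W\sqrt{1-\lambda}$, $r_h = H\sqrt{1-\lambda}$ and recomputes the fusion factor from the clipped box area:

```python
import numpy as np

def shear_fuse(x_i, x_ip, rng=None):
    """Shear-mode fusion of two H x W x C images: paste a random box of
    x_ip into x_i and return the fused image together with the area-based
    fusion factor lambda (fraction of x_i retained)."""
    rng = np.random.default_rng() if rng is None else rng
    h, w = x_i.shape[:2]
    lam = rng.uniform(0.0, 1.0)                     # initial fusion factor
    r_x, r_y = rng.uniform(0, w), rng.uniform(0, h) # cut-box center
    r_w, r_h = w * np.sqrt(1 - lam), h * np.sqrt(1 - lam)
    x1, x2 = int(max(0, r_x - r_w / 2)), int(min(w, r_x + r_w / 2))
    y1, y2 = int(max(0, r_y - r_h / 2)), int(min(h, r_y + r_h / 2))
    mask = np.ones((h, w))                          # M: 1 outside the cut box
    mask[y1:y2, x1:x2] = 0.0                        # 0 inside the cut box
    fused = mask[..., None] * x_i + (1.0 - mask[..., None]) * x_ip
    lam_eff = 1.0 - (x2 - x1) * (y2 - y1) / (h * w) # recomputed from box area
    return fused, lam_eff

rng = np.random.default_rng(0)
fused, lam = shear_fuse(np.zeros((8, 8, 3)), np.ones((8, 8, 3)), rng)
```

The soft label is then formed as `lam * y_i + (1 - lam) * y_ip`, matching the combined-label formula above.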
Further, S3 is specifically: the fused image $\tilde{x}$ is input into the CNN model $f(\cdot\,; W_c)$ for training and learning. Two branches are arranged at the output stage and matched with the labels $y_i$ and $y_{i'}$ respectively, and cross-entropy loss functions (Cross-entropy loss) are constructed, specifically expressed as:

$\ell_i = -\sum_{k=1}^{K} y_i^{(k)} \log f^{(k)}(\tilde{x}; W_c)$, $\quad \ell_{i'} = -\sum_{k=1}^{K} y_{i'}^{(k)} \log f^{(k)}(\tilde{x}; W_c)$,

where $f^{(k)}$ denotes the $k$-th component of the model's softmax output. The total loss function over the training set $D_{bat}$ of each batch is:

$L(D_{bat}; W_c) = \dfrac{1}{N_{bat}} \sum_{(\tilde{x},\, y_i,\, y_{i'}) \in D_{bat}} \big[ \lambda\, \ell_i + (1 - \lambda)\, \ell_{i'} \big]$.

In particular, since the input stage is processed batch by batch, the fusion factor $\lambda$ is the same for all $N_{bat}$ samples within one batch input; across different batch inputs, $\lambda$ differs according to the differing area proportions of the sheared image blocks.
At this time, the CNN model is optimized with the following objective:

$W_c^{*} = \arg\min_{W_c} L(D_{bat}; W_c)$.
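The two-branch, λ-weighted cross-entropy can be sketched as follows (a minimal NumPy sketch; `probs` is assumed to be the softmax output of the CNN for a batch of fused images, and the labels are assumed one-hot):

```python
import numpy as np

def mixed_cross_entropy(probs, y_i, y_ip, lam):
    """Lambda-weighted sum of the cross-entropies of the predictions
    `probs` (shape (N, K)) against the labels of x_i and x_i'.
    lam is shared by the whole batch, as stated in the text."""
    eps = 1e-12                                          # numerical safety
    ce_i = -np.sum(y_i * np.log(probs + eps), axis=1)    # branch matching y_i
    ce_ip = -np.sum(y_ip * np.log(probs + eps), axis=1)  # branch matching y_i'
    return float(np.mean(lam * ce_i + (1.0 - lam) * ce_ip))

# With a uniform 2-class prediction and opposite one-hot labels the
# loss reduces to -log(0.5) regardless of lam.
probs = np.array([[0.5, 0.5]])
loss = mixed_cross_entropy(probs, np.array([[1.0, 0.0]]),
                           np.array([[0.0, 1.0]]), 0.75)
print(loss)
```

Plugging this loss into any gradient-based optimizer gives the objective over $W_c$ described above.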
the invention also discloses an image order estimation system, which comprises: the system comprises a data acquisition module and an order estimation module;
a data acquisition module: the system comprises an order estimation module, a picture sampling module and a data processing module, wherein the order estimation module is used for acquiring picture sample data and inputting the picture sample data to the order estimation module;
an order estimation module: firstly, carrying out shear type fusion on any two samples;
secondly, at the input stage of the training process, all images in the training set are fused in a shearing mode to obtain a large number of combination modes;
and thirdly, inputting the fused image into the CNN model, training and learning the model by depicting the matching degree of the mixed image and the mixed label, learning the model with stronger competitiveness, and obtaining the weight value of each parameter.
And finally, in the testing process, inputting the image sample into the trained CNN model, and directly carrying out forward calculation to obtain prediction output.
The invention also discloses computer equipment which comprises a memory, a processor and a computer program which is stored on the memory and can run on the processor, wherein the processor realizes the image ordering estimation method when executing the program.
The invention also discloses a computer readable storage medium on which a computer program is stored, which program, when executed by a processor, implements the above-described image order estimation method.
Compared with the prior art, the invention has the following advantages:
first, in the image ordering classification problem it encourages subsequent researchers to consider the problem from a new perspective, introducing a competition mechanism at the input end for learning;
second, at the data input end, a large number of competitive combination modes are added, which further improves the generalization ability of the network model; at the same time, the operation flow of refined recognition is greatly simplified, especially in the design of the input end, and by considering the competitive relationship among samples, better refined recognition is achieved in a simpler and more efficient way;
third, on 6 different datasets (5 ordinal datasets and a car dataset), recognition performance is greatly improved (by more than 2 percentage points in each case), providing good experimental verification.
Drawings
FIG. 1 is a flow chart of an embodiment of the present invention;
FIG. 2 compares the framework of an embodiment of the present invention with the prior art: (a) prior art, (b) the present invention;
FIG. 3 compares an embodiment of the present invention with the prior art at the input end: (a) prior art, (b) the present invention;
FIG. 4 is a specific explanatory diagram of the problem addressed by the present invention;
FIG. 5 compares the output of an embodiment of the present invention with the prior art.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail below by referring to the accompanying drawings and embodiments.
The invention introduces a data enhancement method: any two images are fused at a random position and with a random size, and randomness information is thereby added into the learning process. This is an implicit refined recognition method that enhances the competitiveness between ordered images, as shown in Fig. 1.
Step one: a labeled training data set $D = \{(x_i, y_i)\,:\, 1 \le i \le N\}$ containing $N$ samples is given, collected from $K$ different ordered categories $\mathcal{Y} = \{1, 2, \ldots, K\}$. In each batch input (mini-batch), a fusion factor $\lambda$ is randomly drawn from the uniform distribution $\mathrm{Unif}(0, 1)$, and any two sample images $x_i$ and $x_{i'}$ are fused in shear mode into $\tilde{x}$ (see step two for the fusion process).
Step two: the height and width of the image being $H$ and $W$ respectively, the horizontal and vertical coordinates $r_x$ and $r_y$ of the center point of the cut region are drawn from the uniform distributions $\mathrm{Unif}(0, W)$ and $\mathrm{Unif}(0, H)$, and the cut sizes are set to $r_w = W\sqrt{1 - \lambda}$ and $r_h = H\sqrt{1 - \lambda}$. The start and stop positions of the abscissa of the cut region are $x_1 = \max(0,\ r_x - r_w/2)$ and $x_2 = \min(W,\ r_x + r_w/2)$, and the start and stop positions of the ordinate are $y_1 = \max(0,\ r_y - r_h/2)$ and $y_2 = \min(H,\ r_y + r_h/2)$. Finally, the cut image information of $x_{i'}$ is pasted into the other image $x_i$; that is, the shear region spans the abscissa and ordinate ranges $B = [x_1, x_2] \times [y_1, y_2]$, from which the fusion factor $\lambda = 1 - \frac{(x_2 - x_1)(y_2 - y_1)}{W \times H}$ is obtained. In this case, the combined image can be represented as $\tilde{x} = M \odot x_i + (1 - M) \odot x_{i'}$, with the corresponding combined label $\tilde{y} = \lambda y_i + (1 - \lambda) y_{i'}$, where $M$ is the binary mask matrix: it takes the value 0 inside the shear region $B$ and the value 1 in the other regions. The fusion factor $\lambda$ obeys the uniform distribution on $(0, 1)$, and its value is influenced by the area proportion of the image blocks in the shear-mode fusion.
Step three: the fused image $\tilde{x}$ is input into the CNN model $f(\cdot\,; W_c)$ for training and learning. Two branches are arranged at the output stage and matched with the labels $y_i$ and $y_{i'}$ respectively, and cross-entropy loss functions (Cross-entropy loss) are constructed, specifically expressed as:

$\ell_i = -\sum_{k=1}^{K} y_i^{(k)} \log f^{(k)}(\tilde{x}; W_c)$, $\quad \ell_{i'} = -\sum_{k=1}^{K} y_{i'}^{(k)} \log f^{(k)}(\tilde{x}; W_c)$,

where $f^{(k)}$ denotes the $k$-th component of the model's softmax output. The total loss function over the training set $D_{bat}$ of each batch is:

$L(D_{bat}; W_c) = \dfrac{1}{N_{bat}} \sum_{(\tilde{x},\, y_i,\, y_{i'}) \in D_{bat}} \big[ \lambda\, \ell_i + (1 - \lambda)\, \ell_{i'} \big]$.

In particular, since the input stage is processed batch by batch, the fusion factor $\lambda$ is the same for all $N_{bat}$ samples within one batch input; across different batch inputs, $\lambda$ differs according to the differing area proportions of the sheared image blocks.
At this time, the CNN model can be optimized with the following objective (here ignoring the regularization term):

$W_c^{*} = \arg\min_{W_c} L(D_{bat}; W_c)$.
step four, subjecting the test picture x j Input to the trained model
Figure BDA0003903277020000072
In the method, the output of the category is obtained
Figure BDA0003903277020000073
Figure BDA0003903277020000074
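Steps one through four can be strung together in a toy end-to-end sketch. A linear softmax classifier stands in for the CNN $f(\cdot\,; W_c)$; the image sizes, the fixed cut box, and the learning rate are illustrative assumptions, not values from the patent:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: 8x8 grayscale images, K = 3 ordered categories,
# a linear softmax model as a stand-in for the CNN f(.; W_c).
H = W = 8
K = 3
Wc = np.zeros((H * W, K))          # model parameters W_c

def f(x, Wc):
    """Forward pass: flatten, linear map, softmax."""
    z = x.reshape(-1) @ Wc
    e = np.exp(z - z.max())
    return e / e.sum()

# Steps one/two: fuse two samples in shear mode, lambda from the box area.
x_i, x_ip = rng.random((H, W)), rng.random((H, W))
y_i, y_ip = np.eye(K)[0], np.eye(K)[2]
y1, y2, x1, x2 = 2, 6, 1, 5        # a fixed cut box for the sketch
fused = x_i.copy()
fused[y1:y2, x1:x2] = x_ip[y1:y2, x1:x2]
lam = 1 - (y2 - y1) * (x2 - x1) / (H * W)

# Step three: one gradient step on the lambda-weighted cross-entropy.
p = f(fused, Wc)
soft = lam * y_i + (1 - lam) * y_ip           # mixed soft label
grad = np.outer(fused.reshape(-1), p - soft)  # softmax CE gradient w.r.t. Wc
Wc -= 0.1 * grad

# Step four: direct forward computation on a test image.
x_j = rng.random((H, W))
y_j = f(x_j, Wc)
print(y_j.argmax())
```

In a real implementation the single gradient step would be replaced by mini-batch training of an actual CNN, but the data flow (fuse, weight the two label branches by λ, then plain forward inference at test time) is the same.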
This embodiment was compared with the prior art through experiments; as shown in Figs. 2, 3, 4 and 5, the technical scheme of the invention has a more efficient technical process and greatly improved recognition performance.
In another embodiment of the present invention, an image ordering estimation system is provided, which can be used to implement the image ordering estimation method described above, and specifically comprises a data acquisition module and an order estimation module.
The data acquisition module is used for acquiring picture sample data and inputting it to the order estimation module.
The order estimation module: first, performs shear-mode fusion on any two samples;
second, at the input stage of the training process, fuses all images in the training set in shear mode to obtain a large number of combination modes;
third, inputs the fused image into the CNN model, trains and learns the model by characterizing the matching degree between the mixed image and the mixed label, learns a more competitive model, and obtains the weight value of each parameter;
finally, during testing, inputs the image sample into the trained CNN model and directly performs a forward computation to obtain the prediction output.
In yet another embodiment of the present invention, a terminal device is provided that includes a processor and a memory for storing a computer program comprising program instructions, the processor being configured to execute the program instructions stored in the computer storage medium. The processor may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component; it is the computing and control core of the terminal, adapted to load and execute one or more instructions to implement the corresponding method flow or function. The processor according to this embodiment of the present invention may be used to perform the image ordering estimation method, which includes the following steps:
firstly, carrying out shear type fusion on any two samples;
secondly, at the input stage of the training process, all images in the training set are fused in a shearing mode to obtain a large number of combination modes;
and thirdly, inputting the fused image into the CNN model, training and learning the model by depicting the matching degree of the mixed image and the mixed label, learning the model with stronger competitiveness, and obtaining the weight value of each parameter.
And finally, in the testing process, inputting the image sample into the trained CNN model, and directly carrying out forward calculation to obtain prediction output.
In still another embodiment, the present invention further provides a storage medium, specifically a computer-readable storage medium, which is a memory device in the terminal device used for storing programs and data. It is understood that the computer-readable storage medium here may include a built-in storage medium of the terminal device, and may also include an extended storage medium supported by the terminal device. The computer-readable storage medium provides a storage space storing the operating system of the terminal. One or more instructions, which may be one or more computer programs (including program code), are stored in this storage space and are adapted to be loaded and executed by the processor. It should be noted that the computer-readable storage medium may be a high-speed RAM memory, or may be a non-volatile memory, such as at least one disk memory.
One or more instructions stored in the computer-readable storage medium may be loaded and executed by a processor to implement the corresponding steps of the image order estimation method in the above embodiments; one or more instructions in the computer-readable storage medium are loaded by the processor and perform the steps of:
firstly, carrying out shear type fusion on any two samples;
secondly, at the input stage of the training process, all images in the training set are fused in a shearing mode to obtain a large number of combination modes;
and thirdly, inputting the fused image into the CNN model, training and learning the model by depicting the matching degree of the mixed image and the mixed label, learning the model with stronger competitiveness, and obtaining the weight value of each parameter.
And finally, in the testing process, inputting the image sample into the trained CNN model, and directly performing forward calculation to obtain prediction output.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be appreciated by those of ordinary skill in the art that the examples described herein are intended to assist the reader in understanding the manner in which the invention is practiced, and it is to be understood that the scope of the invention is not limited to such specifically recited statements and examples. Those skilled in the art, having the benefit of this disclosure, may effect numerous modifications thereto and changes may be made without departing from the scope of the invention in its aspects.

Claims (6)

1. An image order estimation method, comprising the steps of:
S1: taking any two picture samples $x_i$ and $x_{i'}$ and performing shear-mode fusion $\tilde{x} = M \odot x_i + (1 - M) \odot x_{i'}$, the corresponding label being $\tilde{y} = \lambda y_i + (1 - \lambda) y_{i'}$, where $M \in \{0, 1\}^{H \times W}$ is a binary mask matrix whose elements take two values, $\tilde{x}$ and $\tilde{y}$ are the combined image and soft label, and $\lambda$ is the fusion factor of the shear-mode combination, the value of which depends on the area ratio of the image blocks in the shear-mode combination;
S2: in the input stage of the training process, fusing all images in the training set in shear mode to obtain a large number of combination modes; two samples are randomly selected for mixing from the $N_t$ samples of each batch, giving $\binom{N_t}{2} = \frac{N_t (N_t - 1)}{2}$ mutual pairings; the shear region admits $(H \times W)^2$ possible shearing matrices, so each batch has $\frac{N_t (N_t - 1)}{2} (H \times W)^2$ random combination modes in total;
S3: inputting the fused image $\tilde{x}$ and label $\tilde{y}$ into a CNN model $f(\cdot\,; W_c)$, training and learning the model by characterizing the matching degree between the mixed image and the mixed label, learning a more competitive model, and obtaining the weight value of each parameter;
S4: during testing, inputting an image sample $x_j$ into the trained CNN model and directly performing a forward computation to obtain the prediction output $y_j = f(x_j; W_c)$.
2. The method according to claim 1, wherein the shear-mode fusion in S1 is specifically as follows:
the height and width of the image being $H$ and $W$ respectively, the horizontal and vertical coordinates $r_x$ and $r_y$ of the center point of the cut region are drawn from the uniform distributions $\mathrm{Unif}(0, W)$ and $\mathrm{Unif}(0, H)$, and the cut sizes are set to $r_w = W\sqrt{1 - \lambda}$ and $r_h = H\sqrt{1 - \lambda}$; the start and stop positions of the abscissa of the cut region are $x_1 = \max(0,\ r_x - r_w/2)$ and $x_2 = \min(W,\ r_x + r_w/2)$, and the start and stop positions of the ordinate are $y_1 = \max(0,\ r_y - r_h/2)$ and $y_2 = \min(H,\ r_y + r_h/2)$; finally, the cut image information of $x_{i'}$ is pasted into the other image $x_i$, expressed as $\tilde{x} = M \odot x_i + (1 - M) \odot x_{i'}$; that is, the shear region spans the abscissa and ordinate ranges $B = [x_1, x_2] \times [y_1, y_2]$, from which the fusion factor $\lambda = 1 - \frac{(x_2 - x_1)(y_2 - y_1)}{W \times H}$ is obtained; the mask $M$ takes the value 0 inside the shear region $B$ and the value 1 in the other regions; the fusion factor $\lambda$ obeys the uniform distribution on $(0, 1)$, and its value is influenced by the area ratio of the image blocks in the shear-mode fusion.
3. The image ordering estimation method according to claim 1, wherein S3 is specifically: the fused image $\tilde{x}$ is input into the CNN model $f(\cdot\,; W_c)$ for training and learning; two branches are arranged at the output stage and matched with the labels $y_i$ and $y_{i'}$ respectively, and cross-entropy loss functions are constructed, specifically expressed as:

$\ell_i = -\sum_{k=1}^{K} y_i^{(k)} \log f^{(k)}(\tilde{x}; W_c)$, $\quad \ell_{i'} = -\sum_{k=1}^{K} y_{i'}^{(k)} \log f^{(k)}(\tilde{x}; W_c)$,

where $f^{(k)}$ denotes the $k$-th component of the model's softmax output; the total loss function over the training set $D_{bat}$ of each batch is:

$L(D_{bat}; W_c) = \dfrac{1}{N_{bat}} \sum_{(\tilde{x},\, y_i,\, y_{i'}) \in D_{bat}} \big[ \lambda\, \ell_i + (1 - \lambda)\, \ell_{i'} \big]$;

in particular, since the input stage is processed batch by batch, the fusion factor $\lambda$ is the same for all $N_{bat}$ samples within one batch input, while across different batch inputs $\lambda$ differs according to the differing area proportions of the sheared image blocks;
at this time, the CNN model is optimized with the following objective:

$W_c^{*} = \arg\min_{W_c} L(D_{bat}; W_c)$.
4. an image order estimation system, comprising: the system comprises a data acquisition module and an orderliness estimation module;
a data acquisition module: the system comprises an order estimation module, a picture sampling module and a data processing module, wherein the order estimation module is used for acquiring picture sample data and inputting the picture sample data to the order estimation module;
an order estimation module: firstly, carrying out shear type fusion on any two samples;
secondly, at the input stage of the training process, all images in the training set are fused in a shearing mode to obtain a large number of combination modes;
thirdly, inputting the fused image into a CNN model, training and learning the model by depicting the matching degree of the mixed image and the mixed label, learning the model with stronger competitiveness, and obtaining the weight value of each parameter;
and finally, in the testing process, inputting the image sample into the trained CNN model, and directly carrying out forward calculation to obtain prediction output.
5. A computer device, comprising: memory, processor and computer program stored on the memory and executable on the processor, the processor implementing the image order estimation method of one of claims 1 to 4 when executing the program.
6. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out a method of image order estimation as claimed in one of claims 1 to 4.
CN202211297496.4A 2022-10-22 2022-10-22 Image ordering estimation method, system, device and medium Pending CN115631375A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211297496.4A CN115631375A (en) 2022-10-22 2022-10-22 Image ordering estimation method, system, device and medium

Publications (1)

Publication Number Publication Date
CN115631375A true CN115631375A (en) 2023-01-20

Family

ID=84907440

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211297496.4A Pending CN115631375A (en) 2022-10-22 2022-10-22 Image ordering estimation method, system, device and medium

Country Status (1)

Country Link
CN (1) CN115631375A (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112990262A (en) * 2021-02-08 2021-06-18 内蒙古大学 Integrated solution system for monitoring and intelligent decision of grassland ecological data
CN113436259A (en) * 2021-06-23 2021-09-24 国网智能科技股份有限公司 Deep learning-based real-time positioning method and system for substation equipment
CN115205833A (en) * 2022-07-13 2022-10-18 宁波绿和时代科技有限公司 Method and device for classifying growth states of cotton with few samples

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
SANGDOO YUN ET AL.: "CutMix: Regularization Strategy to Train Strong Classifiers with Localizable Features", ARXIV, 7 August 2019 (2019-08-07), pages 1 - 14 *
章超 (ZHANG Chao): "Research on Image Ordering Estimation Based on Deep Learning", CHINA DOCTORAL DISSERTATIONS FULL-TEXT DATABASE, INFORMATION SCIENCE AND TECHNOLOGY, no. 01, 15 January 2020 (2020-01-15), pages 19 - 21 *

Similar Documents

Publication Publication Date Title
Yang et al. Ssr-net: A compact soft stagewise regression network for age estimation.
Ismail et al. Benchmarking deep learning interpretability in time series predictions
Mita et al. Discriminative feature co-occurrence selection for object detection
US20170344881A1 (en) Information processing apparatus using multi-layer neural network and method therefor
CN111079674B (en) Target detection method based on global and local information fusion
US11934866B2 (en) Operator operation scheduling method and apparatus to determine an optimal scheduling policy for an operator operation
CN112819052A (en) Multi-modal fine-grained mixing method, system, device and storage medium
García-Pedrajas et al. A scalable memetic algorithm for simultaneous instance and feature selection
Bilaniuk et al. Fast LBP face detection on low-power SIMD architectures
Farag Traffic signs classification by deep learning for advanced driving assistance systems
US20220270341A1 (en) Method and device of inputting annotation of object boundary information
Sun et al. LRPRNet: Lightweight deep network by low-rank pointwise residual convolution
WO2022063076A1 (en) Adversarial example identification method and apparatus
Zhao et al. SRK-Augment: A self-replacement and discriminative region keeping augmentation scheme for better classification
CN112488188B (en) Feature selection method based on deep reinforcement learning
CN115631375A (en) Image ordering estimation method, system, device and medium
Cai et al. Single shot multibox detector for honeybee detection
CN112308149A (en) Optimization method and device for image information identification based on machine learning
Shuai et al. YoLite+: a lightweight multi-object detection approach in traffic scenarios
CN113886578B (en) Form classification method and device
CN110414515B (en) Chinese character image recognition method, device and storage medium based on information fusion processing
CN113989567A (en) Garbage picture classification method and device
Pommé et al. NetPrune: a sparklines visualization for network pruning
Ewe et al. LAVRF: Sign language recognition via Lightweight Attentive VGG16 with Random Forest
CN118015385B (en) Long-tail target detection method, device and medium based on multi-mode model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination