CN109784417B - Black hair pork image identification method - Google Patents

Black hair pork image identification method

Info

Publication number
CN109784417B
CN109784417B CN201910078350.2A
Authority
CN
China
Prior art keywords
pork
residual error
network model
image
error network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910078350.2A
Other languages
Chinese (zh)
Other versions
CN109784417A (en
Inventor
焦俊
王文周
侯金波
孙裴
乔焰
辜丽川
何屿彤
吴亚文
陈婧
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Anhui Agricultural University AHAU
Original Assignee
Anhui Acquisitive Internet Of Things Co ltd
Anhui Agricultural University AHAU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Anhui Acquisitive Internet Of Things Co ltd, Anhui Agricultural University AHAU filed Critical Anhui Acquisitive Internet Of Things Co ltd
Priority to CN201910078350.2A priority Critical patent/CN109784417B/en
Publication of CN109784417A publication Critical patent/CN109784417A/en
Application granted granted Critical
Publication of CN109784417B publication Critical patent/CN109784417B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)

Abstract

The embodiment of the invention provides a method for identifying black-haired pork images, belonging to the technical field of black-haired pork freshness identification. The identification method comprises the following steps: presetting a residual network model; training the residual network model with a preset database and initializing the parameters of the residual network model and the weights of its variables, wherein the database comprises at least one pork image; using the LReLU function as the activation function of the adaptive network; training the residual network model again with a preset sample set, wherein the sample set comprises at least one black-haired pork image; outputting the residual network model; and identifying the black-haired pork image with the residual network model. The method improves the recognition rate for black-haired pork image freshness.

Description

Black hair pork image identification method
Technical Field
The invention relates to the technical field of black-haired pork freshness identification, and in particular to a black-haired pork image identification method.
Background
During storage, pork undergoes autolysis, putrefaction, decomposition, and other changes caused by its own enzymes, microbial contamination, or pre-slaughter illness, all of which reduce its freshness. Decomposition lowers the nutritive value of the pork, and both the microbes and toxins involved in the deterioration and the toxic decomposition products formed afterwards can cause poisoning and disease.
The putrefaction and deterioration of pork is a gradual process; the change is complex and influenced by many factors. Accurately and quickly assessing meat quality and safety therefore directly concerns the health and interests of consumers.
In the prior art, the evaluation indexes of pork quality are color, texture, pH value, tenderness, freshness, and the like. Freshness is an important and complex parameter for evaluating meat quality and safety, covering microbial, physicochemical, and biochemical properties. The main components of pork, such as proteins, fats, and carbohydrates, are broken down by enzymes and bacteria, producing odors: the proteins are gradually decomposed into hydrogen sulfide, ammonia, ethyl mercaptan, and the like, producing toxic small molecules including histamine, tyramine, putrescine, and tryptamine; fats are decomposed into aldehydes and aldehydic acids; and carbohydrates are decomposed into alcohols, ketones, aldehydes, hydrocarbons, and carboxylic acid gases. These substances, together with other basic nitrogen compounds, affect the color, texture, and shape characteristics of pork during storage.
In the prior art, pork freshness is mainly judged by measuring indexes such as color and texture and comparing them against preset thresholds. Although this approach can judge freshness accurately, the detection procedure is complex, relies on manual operation, and is difficult to automate.
Disclosure of Invention
The embodiment of the invention aims to provide a method for identifying black-haired pork images that improves the recognition rate for black-haired pork image freshness.
In order to achieve the above object, an embodiment of one aspect of the present invention provides an identification method based on a residual network and transfer learning, which can be used for identifying the freshness of black-haired pork, wherein the residual network model used by the method comprises:
a plurality of residual modules, wherein each residual module comprises at least one convolution layer and one pooling layer connected in series, the convolution layer is used for filtering the input black-haired pork image, the pooling layer is used for further integrating the processed black-haired pork image, and the input end and the output end of each residual module are connected; and
an adaptive network, connected to the residual modules and used for identifying and classifying the black-haired pork image.
Optionally, the number of layers of the adaptive network is 3.
In another aspect, the present invention further provides a training method for a residual network model, for training any of the residual network models described above, the training method comprising:
presetting a residual network model;
training the residual network model with a preset database, and initializing the parameters of the residual network model and the weights of its variables, wherein the database comprises at least one pork image;
replacing the fully connected layer and the classification layer of the residual network model with an adaptive network;
using an LReLU function as the activation function of the adaptive network;
training the residual network model again with a preset sample set, wherein the sample set comprises at least one black-haired pork image; and
outputting the residual network model.
Optionally, the training of the residual network model with the preset database, and the initializing of the parameters of the residual network model and the weights of its variables, comprise:
cropping each pork image; and
performing at least one of affine transformation, perspective transformation, and image rotation on each pork image so as to expand the number of pork images.
Optionally, formula (1) is used as the LReLU function:
f(x) = x,   x ≥ 0
f(x) = αx,  x < 0        (1)
wherein x is an input value, f(x) is an output value, and α is a preset parameter.
Optionally, the value of α is 0.01.
In another aspect, the present invention further provides a training system for a residual network model, the training system comprising a processor configured to execute any of the training methods described above.
In another aspect, the invention further provides an identification method for black-haired pork, which comprises identifying a black-haired pork image with any of the residual network models described above.
In still another aspect, the present invention further provides an identification system for black-haired pork, comprising a processor configured to execute the identification method described above.
According to the technical scheme, by replacing the fully connected layer and the classification layer of the residual network model with an adaptive network, the residual network model can accurately identify black-haired pork images; through the training method and system, the knowledge learned by a conventional residual network model from ordinary pork images is transferred, by means of transfer learning, to the recognition of black-haired pork, so that training of the residual network model can be completed even when only a small sample of black-haired pork image data is available; and through the identification method provided by the invention, the trained residual network model is applied to black-haired pork images, achieving accurate identification.
Additional features and advantages of embodiments of the invention will be set forth in the detailed description which follows.
Drawings
The accompanying drawings, which are included to provide a further understanding of the embodiments of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the embodiments of the invention without limiting the embodiments of the invention. In the drawings:
FIG. 1 is a block diagram of a residual network model according to an embodiment of the present invention;
FIG. 2 is a flow diagram of a training method for training a residual network model according to one embodiment of the present invention;
FIG. 3 is a flow diagram of a method of processing a pork image according to one embodiment of the present invention;
FIG. 4 is a graph of the functional relationship of the ReLU function;
FIG. 5 is a graph of the functional relationship of the LReLU function;
FIG. 6 (a) is a graph of model loss variation during training according to an example of the present invention; and
FIG. 6 (b) is a graph of classification accuracy change during training according to one example of the present invention.
Detailed Description
The following detailed description of embodiments of the invention refers to the accompanying drawings. It should be understood that the detailed description and specific examples, while indicating embodiments of the invention, are given by way of illustration and explanation only, not limitation.
In the embodiments of the present application, unless otherwise specified, directional terms such as "upper", "lower", "top", and "bottom" are generally used with reference to the orientation shown in the drawings or to the positional relationship of the components in the vertical or gravitational direction.
In addition, if descriptions relating to "first", "second", etc. appear in the embodiments of the present application, they are for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features concerned. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In addition, the technical solutions of the various embodiments can be combined with each other, but only insofar as such combinations can be realized by a person skilled in the art; when technical solutions are contradictory or cannot be realized, the combination should be considered not to exist and falls outside the protection scope of the present application.
Fig. 1 is a block diagram illustrating a residual network model according to an embodiment of the present invention. In Fig. 1, the residual network model may comprise a plurality of residual modules 10 and an adaptive network 20 connected in series.
In Fig. 1, each residual module 10 may include at least one convolution layer 11 and one pooling layer 12 connected in series. The convolution layer 11 filters the input black-haired pork image; the pooling layer 12 further integrates the processed black-haired pork image. In addition, to avoid the vanishing-gradient problem caused by passing through many convolution layers 11 and pooling layers 12, the input end of each residual module 10 can be connected to its output end, so that the network can still be trained effectively even with hundreds of layers, which strengthens the feature-learning capability of the model and improves its classification performance.
The adaptive network 20 is connected to the residual modules 10 and is used for identifying and classifying the black-haired pork image. The fully connected layer and classification layer of a conventional residual network model do not model nonlinear objects (here, for example, black-haired pork images) well, so classification with them is poor. Given the advantages of an adaptive network in modeling nonlinear objects, replacing the fully connected layer and the classification layer with an adaptive network meets the requirement for identifying black-haired pork images well. The number of layers of the adaptive network may be any number known to those skilled in the art; in a preferred example of the invention, the adaptive network has 3 layers.
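For illustration only, the sketch below expresses one such residual module and a 3-layer adaptive head in PyTorch; the channel counts, hidden-layer widths, and the seven output classes (the seven freshness grades listed in Table 3 below) are assumptions, not the authors' Caffe configuration.

```python
# Minimal sketch in PyTorch (not the authors' Caffe model); sizes are illustrative.
import torch
import torch.nn as nn

class ResidualModule(nn.Module):
    """Residual module: a convolution layer filters the input, a pooling layer
    integrates it, and the module's input is connected to its output."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        self.pool = nn.MaxPool2d(2)
        self.proj = nn.Conv2d(in_ch, out_ch, kernel_size=1)  # match the shortcut shape
        self.act = nn.LeakyReLU(0.01)

    def forward(self, x):
        branch = self.pool(self.act(self.conv(x)))
        shortcut = self.pool(self.proj(x))   # input connected to output
        return branch + shortcut

class AdaptiveHead(nn.Module):
    """3-layer adaptive network replacing the fully connected and classification layers."""
    def __init__(self, in_features, num_classes=7):  # 7 freshness grades (xx, cx1-3, fb1-3)
        super().__init__()
        self.layers = nn.Sequential(
            nn.Linear(in_features, 256), nn.LeakyReLU(0.01),
            nn.Linear(256, 64), nn.LeakyReLU(0.01),
            nn.Linear(64, num_classes),
        )

    def forward(self, x):
        return self.layers(x)

# A toy stack of residual modules followed by the adaptive head.
model = nn.Sequential(
    ResidualModule(3, 32), ResidualModule(32, 64), ResidualModule(64, 128),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), AdaptiveHead(128),
)
print(model(torch.randn(1, 3, 224, 224)).shape)  # torch.Size([1, 7])
```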
Fig. 2 is a flowchart illustrating a training method for training the residual network model shown in Fig. 1 according to an embodiment of the present invention. In Fig. 2, the training method may include:
In step S10, a residual network model is preset. The residual network model may be any residual network model known to the person skilled in the art, such as the conventional residual network model described above (residual modules, a fully connected layer, and a classification layer).
In step S11, the residual network model is trained with a preset database, and the parameters of the residual network model and the weights of its variables are initialized. Owing to the nature of conventional supervised learning, a training method for a residual network model would normally use a large data set of black-haired pork images to train the model directly, and after the corresponding parameters had been adjusted the model would be used directly for recognition. However, research on black-haired pork image recognition is still relatively scarce in China, so the available black-haired pork data sets are small, and directly training with a large black-haired pork data set in the conventional way is clearly difficult to achieve. Therefore, in this embodiment of the invention, the residual network model may first be trained with a data set (the preset database) composed of pork images similar to black-haired pork, so as to adjust the parameters of the residual network model.
In addition, although pork images have a certain advantage in quantity over black-haired pork images, their number may still be insufficient. Therefore, before training the residual network model with pork images, the database of pork images may be processed as shown in Fig. 3. In Fig. 3, the method may include:
in step S21, the image of each pork is cut separately.
In step S22, at least one of affine transformation, perspective transformation, and image rotation is performed on each image of pork to expand the number of images of pork, respectively.
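As a hedged illustration of steps S21 and S22, the OpenCV sketch below crops a pork image and then applies an affine transformation, a perspective transformation, and a rotation; the 224 x 224 crop size is taken from the example later in this description, while the resize step and the specific transform parameters are assumptions.

```python
# Illustrative preprocessing sketch with OpenCV; transform parameters are assumptions.
import cv2
import numpy as np

def augment(path):
    img = cv2.imread(path)
    img = cv2.resize(img, (256, 256))            # ensure the image is large enough to crop

    # Step S21: centre-crop to 224 x 224.
    y0 = x0 = (256 - 224) // 2
    crop = img[y0:y0 + 224, x0:x0 + 224]

    # Step S22: affine transformation (small shear/shift).
    src = np.float32([[0, 0], [223, 0], [0, 223]])
    dst = np.float32([[10, 5], [218, 8], [6, 216]])
    affine = cv2.warpAffine(crop, cv2.getAffineTransform(src, dst), (224, 224))

    # Step S22: perspective transformation.
    src4 = np.float32([[0, 0], [223, 0], [223, 223], [0, 223]])
    dst4 = np.float32([[8, 8], [215, 4], [219, 219], [4, 212]])
    persp = cv2.warpPerspective(crop, cv2.getPerspectiveTransform(src4, dst4), (224, 224))

    # Step S22: image rotation (15 degrees about the centre).
    rot = cv2.warpAffine(crop, cv2.getRotationMatrix2D((112, 112), 15, 1.0), (224, 224))

    return [crop, affine, persp, rot]            # each input image yields four samples
```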
In step S12, the fully connected layer and the classification layer of the residual network model are replaced with an adaptive network. The fully connected layer and classification layer of a conventional residual network model do not model nonlinear objects (here, for example, black-haired pork images) well, so classification with them is poor. Given the advantages of an adaptive network in modeling nonlinear objects, replacing the fully connected layer and the classification layer with an adaptive network meets the requirement for identifying black-haired pork images well. The number of layers of the adaptive network may be any number known to those skilled in the art; in a preferred example of the invention, the adaptive network has 3 layers.
In step S13, the LReLU function is used as the activation function of the adaptive network. The activation function of a conventional adaptive network is generally the ReLU (Rectified Linear Unit) function, whose expression is formula (1):
f(x) = x,  x ≥ 0
f(x) = 0,  x < 0        (1)
the functional relationship of the formula (1) is shown in fig. 4, when the input value x is less than 0, the output value f (x) of the ReLU function is 0, and then the linear rectification unit is in an inactive state, and the parameters of the adaptive network are not updated or adjusted, so that the neurons of the adaptive network are wasted.
In contrast, using the LReLU (Leaky Rectified Linear Unit) function as the activation function avoids this waste of neurons. In this embodiment, the LReLU function may be, for example, as shown in formula (2):
f(x) = x,   x ≥ 0
f(x) = αx,  x < 0        (2)
where x is the input value, f(x) is the output value, and α is a preset parameter; in a preferred example of the present invention, the value of α may be 0.01.
The functional relationship of the LReLU function may be as shown in Fig. 5. As can be seen from Fig. 5, when the input value x is less than 0, the output value f(x) is less than 0; the leaky rectified linear unit is then still in an active state, that is, the neurons of the adaptive network remain active, so the waste of neurons is avoided.
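As a quick sanity check of formula (2) with α = 0.01, the snippet below evaluates a few inputs; PyTorch's built-in LeakyReLU implements the same piecewise function.

```python
# Quick check of formula (2) with alpha = 0.01.
import torch
import torch.nn as nn

lrelu = nn.LeakyReLU(negative_slope=0.01)     # alpha = 0.01, as in the preferred example
x = torch.tensor([-3.0, -0.5, 0.0, 2.0])
print(lrelu(x))                               # tensor([-0.0300, -0.0050,  0.0000,  2.0000])
```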
In step S14, the residual network model is trained again with a preset sample set. At this point the trained residual network model can only recognize ordinary pork images and is not yet suited to black-haired pork images. Therefore, in this embodiment, following the idea of transfer learning, the knowledge the residual network model has learned from pork images can be transferred to the task of identifying black-haired pork images. Specifically, in step S14, the residual network model may be trained again with black-haired pork images so as to adjust its parameters.
In step S15, the residual network model is output.
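Putting steps S10 to S15 together, the following sketch assumes a PyTorch ResNet-50 backbone rather than the authors' Caffe setup; the checkpoint name, the adaptive-head widths, the decision to freeze the backbone, and the optimiser settings are all illustrative assumptions.

```python
# Sketch of steps S10-S15 under the stated assumptions; not the authors' implementation.
import torch
import torch.nn as nn
from torchvision import models

# Steps S10/S11: preset a residual network model whose weights were initialised
# by training on the larger general pork image database.
model = models.resnet50(weights=None)
model.load_state_dict(torch.load("resnet50_pork_pretrained.pth"))  # hypothetical checkpoint

# Steps S12/S13: replace the fully connected / classification layer with the
# 3-layer adaptive network using LReLU activations.
model.fc = nn.Sequential(
    nn.Linear(model.fc.in_features, 256), nn.LeakyReLU(0.01),
    nn.Linear(256, 64), nn.LeakyReLU(0.01),
    nn.Linear(64, 7),                      # 7 freshness grades
)

# Step S14: train again on the small black-haired pork sample set. Freezing the
# backbone is a common fine-tuning choice; the patent does not specify it.
for p in model.parameters():
    p.requires_grad = False
for p in model.fc.parameters():
    p.requires_grad = True
optimizer = torch.optim.SGD(model.fc.parameters(), lr=1e-3, momentum=0.9)

# Step S15: output (save) the trained residual network model.
torch.save(model.state_dict(), "black_pork_resnet50.pth")
```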
In addition, the invention also provides a training system for the residual network model. The training system may comprise a processor configured to perform any of the training methods described above. In this embodiment, the processor may be, for example, a general-purpose processor, a special-purpose processor, a conventional processor, a Digital Signal Processor (DSP), a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, Application Specific Integrated Circuits (ASICs), Field Programmable Gate Array (FPGA) circuits, any other type of Integrated Circuit (IC), a state machine, a System on Chip (SoC), or the like.
In another aspect, the invention also provides an identification method for black-haired pork, which comprises identifying a black-haired pork image with any of the residual network models described above.
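A minimal inference sketch is given below, assuming a model trained as in the previous sketch; the resize-and-crop preprocessing is an assumption, since the patent only specifies the 224 x 224 crop.

```python
# Minimal identification sketch; preprocessing beyond the 224 x 224 crop is assumed.
import torch
from torchvision import transforms
from PIL import Image

GRADES = ["xx", "cx1", "cx2", "cx3", "fb1", "fb2", "fb3"]  # labels from Table 3

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
])

def identify(image_path, model):
    """Return the predicted freshness grade for one black-haired pork image."""
    model.eval()
    x = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        logits = model(x)
    return GRADES[int(logits.argmax(dim=1))]
```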
In still another aspect, the present invention further provides an identification system for black-haired pork, the identification system comprising a processor configured to execute any of the identification methods described above.
According to the technical scheme, by replacing the fully connected layer and the classification layer of the residual network model with an adaptive network, the residual network model can accurately identify black-haired pork images; through the training method and system, the knowledge learned by a conventional residual network model from ordinary pork images is transferred, by means of transfer learning, to the recognition of black-haired pork, so that training of the residual network model can be completed even when only a small sample of black-haired pork image data is available; and through the identification method provided by the invention, the trained residual network model is applied to black-haired pork images, achieving accurate identification.
Example 1
According to national regulations, the criteria for identifying pork freshness are as follows:
Pork with a pH of 5.6-6.2 and, at a dilution of 1/10000, a total microbial count of 2.46-16.2 CFU/mL and a coliform count of 3.48-5.97 CFU/mL is defined as fresh meat;
pork with a pH of 6.2-6.7 and, at a dilution of 1/10000, a total microbial count of 16.8-370.43 CFU/mL and a coliform count of 9.24-93 CFU/mL is sub-fresh meat;
pork with a pH above 6.7 and, at a dilution of 1/10000, a total microbial count of 410-3070 CFU/mL and a coliform count of 240-1100 CFU/mL is putrid meat.
Further, in accordance with the national standard, sub-fresh pork with a total microbial count of 16.2-28.4 CFU/mL, a coliform count of 5.97-9.2 CFU/mL, and a pH of 6.1-6.3 is judged to be sub-fresh grade one;
sub-fresh pork with a total microbial count of 28.4-142 CFU/mL, a coliform count of 9.2-28 CFU/mL, and a pH of 6.2-6.5 is judged to be sub-fresh grade two; a total microbial count of 142-370 CFU/mL, a coliform count of 28-93 CFU/mL, and a pH of 6.4-6.7 is judged to be sub-fresh grade three.
Putrid pork with a total microbial count of 370-1040 CFU/mL, a coliform count of 93-240 CFU/mL, and a pH of 6.7-6.8 is set as putrid grade one; a total microbial count of 1040-1420 CFU/mL, a coliform count of 240-290 CFU/mL, and a pH of 6.8-7.0 is judged to be putrid grade two; and a total microbial count of 1420-3070 CFU/mL, a coliform count of 290-1100 CFU/mL, and a pH above 7.0 is judged to be putrid grade three.
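For illustration, the rules quoted above can be captured as a small lookup, as sketched below; the handling of borderline values is an assumption, since the quoted ranges overlap slightly at their edges.

```python
# Sketch of the freshness grading rules quoted above (dilution 1/10000).
# Boundary handling is an assumption; the quoted ranges overlap at their edges.
GRADES = [
    # (label, pH range, total flora CFU/mL, coliform CFU/mL)
    ("fresh",       (5.6, 6.2), (2.46, 16.2), (3.48, 5.97)),
    ("sub-fresh 1", (6.1, 6.3), (16.2, 28.4), (5.97, 9.2)),
    ("sub-fresh 2", (6.2, 6.5), (28.4, 142),  (9.2, 28)),
    ("sub-fresh 3", (6.4, 6.7), (142, 370),   (28, 93)),
    ("putrid 1",    (6.7, 6.8), (370, 1040),  (93, 240)),
    ("putrid 2",    (6.8, 7.0), (1040, 1420), (240, 290)),
    ("putrid 3",    (7.0, 99),  (1420, 3070), (290, 1100)),  # pH > 7.0; 99 is an arbitrary cap
]

def grade(ph, total_flora, coliform):
    for label, (p0, p1), (t0, t1), (c0, c1) in GRADES:
        if p0 <= ph <= p1 and t0 <= total_flora <= t1 and c0 <= coliform <= c1:
            return label
    return "out of range"

print(grade(6.0, 10.0, 4.5))   # fresh
```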
Each pork sample is tested against the above criteria to determine its freshness, and the freshness result is associated with the image of that sample; the associated data constitute the preset database. Since the database requires a large amount of data, in this embodiment the pork freshness records and images accumulated over many years by the Key Laboratory of Epidemic Disease Prevention and Control of Anhui Agricultural University are used as the database.
The hardware and software conditions for executing the training method, the identification method, and the residual network model provided by the present invention may be, for example:
CPU: Intel Core i7-6700K; motherboard: ASUS Z170; graphics card: GeForce GTX 1080; hard disk: Samsung SSD 950 PRO 256 GB + Seagate ST2000 2.0 TB; memory: Kingston DDR4 64 GB; operating system: Windows 10 Enterprise; Caffe: Windows 10 build.
The parameters used for training the residual network model are shown in Table 1.
Table 1 (presented as an image in the original publication)
Before the residual network model is trained, the images in the database may be preprocessed to improve their recognizability. The preprocessing may include:
cropping each pork image (e.g., to 224 x 224, which significantly increases the machine's reading speed);
performing affine transformation, perspective transformation, and image rotation on each pork image to expand the number of pork images, thereby improving recognition accuracy by enlarging the set of images to be recognized. The composition of the pork image database is shown in Table 2.
Table 2 (presented as an image in the original publication)
The residual network model was trained and tested with the data shown in Table 2, and the whole training process was supervised with a cross-entropy cost function (the cross-entropy measures the difference between the true label of a sample and the output of the residual network model).
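As an illustration of supervising training with a cross-entropy cost, the sketch below assumes a PyTorch DataLoader over the labelled images (the authors' experiments were run in Caffe):

```python
# Illustrative training loop with a cross-entropy cost; loader and labels are assumed.
import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()   # gap between true labels and model outputs

def train_one_epoch(model, loader, optimizer, device="cpu"):
    model.train()
    total = 0.0
    for images, labels in loader:           # labels: integer freshness grades 0..6
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
        total += loss.item()
    return total / max(len(loader), 1)      # mean loss over the epoch
```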
The model loss curves during training are shown in Figs. 6(a) and 6(b). In Fig. 6(a), the abscissa is the number of training epochs (one pass through all samples in the training set is called an epoch) and the ordinate is the model loss. As can be seen from Fig. 6(a), after approximately 800 epochs the model loss on both the training and validation sets stabilizes near 0. Fig. 6(b) shows how the classification accuracy of the model changes during training, with the abscissa again the number of epochs and the ordinate the classification accuracy. As can be seen from Fig. 6(b), after about 800 epochs the classification accuracies on the training and test sets stabilize at about 99.7% and 92% respectively, and the accuracy on the validation set reaches at most 98.7%. Compared with prior-art residual network models, the residual network model provided by the invention therefore has better meat-image classification performance.
In this embodiment, the degree of confusion between the classes output by the residual network model can be evaluated quantitatively with a confusion matrix. The rows and columns of the matrix represent the true and predicted classes respectively, and element x_ij of the matrix is the proportion of images of the i-th class that are predicted as the j-th class, out of the total number of images of that class. The diagonal elements are the classification accuracies for the different black-haired pork freshness grades, and the other positions are the corresponding error rates. The confusion matrix is shown in Table 3, where xx is fresh meat, cx1 is sub-fresh grade one, cx2 is sub-fresh grade two, cx3 is sub-fresh grade three, fb1 is putrid grade one, fb2 is putrid grade two, and fb3 is putrid grade three.
TABLE 3
class xx cx1 cx2 cx3 fb1 fb2 fb3
xx 0.99 0.01 0 0 0 0 0
cx1 0 0.98 0 0 0.02 0 0
cx2 0.07 0 0.92 0.08 0 0 0
cx3 0 0 0.09 0.91 0 0 0
fb1 0 0 0 0.04 0.91 0.05 0
fb2 0 0 0 0 0.04 0.94 0.02
fb3 0 0 0 0 0.02 0.01 0.97
As can be seen from Table 3, fresh pork (xx) is classified best, with an accuracy of 99%, while sub-fresh grade three and putrid grade one are classified worst, at 91%; the overall average classification accuracy is 94.5%. The residual network model and its training method provided by the invention therefore classify the black-haired pork freshness data set well.
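The row-normalised confusion matrix described above can be computed as in the sketch below; the helper assumes integer class indices from 0 to 6 and is not tied to the authors' evaluation code.

```python
# Row-normalised confusion matrix; class indices 0..6 are assumed.
import numpy as np

def confusion_matrix(y_true, y_pred, num_classes=7):
    """Entry (i, j) is the fraction of class-i samples predicted as class j;
    the diagonal is the per-class accuracy."""
    m = np.zeros((num_classes, num_classes))
    for t, p in zip(y_true, y_pred):
        m[t, p] += 1
    row_sums = m.sum(axis=1, keepdims=True)
    return m / np.where(row_sums == 0, 1, row_sums)   # avoid division by zero
```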
In addition, in this embodiment, prior-art residual network models can be compared with the residual network model provided by the invention; the comparison results are shown in Table 4. ResNet-50 (AAUSet) is a conventional residual network model trained with AAUSet (the database of pork freshness records and images accumulated over many years by the Key Laboratory of Epidemic Disease Prevention and Control of Anhui Agricultural University); ResNet-50 (HMZJ) is a conventional residual network model trained with the small black-haired pork sample set; ResNet-50 (transfer) is a conventional residual network model trained with AAUSet and then transferred to the HMZJ data set for fine-tuning; and Proposed (ReLU) and Proposed (LReLU) are the residual network model provided by the invention with ReLU and LReLU, respectively, as the activation function of the adaptive network. The classification accuracies in the table were obtained on the test set of this embodiment, and FLOPs is the number of floating-point operations of the model.
Table 4 (presented as an image in the original publication)
As can be seen from Table 4, the least accurate model is ResNet-50 (HMZJ). The classification accuracy of the training method and residual network model provided by the invention is clearly better than that of the other residual network models. In addition, the training method and residual network model provided by the invention reduce the computational cost of the model and shorten its training time, while improving its classification performance relative to the original model.
Although the embodiments of the present invention have been described in detail with reference to the accompanying drawings, the embodiments of the present invention are not limited to the details of the above embodiments, and various simple modifications can be made to the technical solution of the embodiments of the present invention within the technical idea of the embodiments of the present invention, and the simple modifications all belong to the protection scope of the embodiments of the present invention.
It should be noted that the various features described in the foregoing embodiments may be combined in any suitable manner without contradiction. In order to avoid unnecessary repetition, the embodiments of the present invention will not be described separately for the various possible combinations.
Those skilled in the art will understand that all or part of the steps of the methods in the above embodiments may be implemented by a program instructing the related hardware; the program is stored in a storage medium and includes several instructions that enable a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to execute all or part of the steps of the methods according to the embodiments of the present application. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, or other media capable of storing program code.
In addition, the various embodiments of the present invention may be combined in any manner; as long as such a combination does not depart from the spirit of the embodiments of the present invention, it should likewise be regarded as part of the disclosure of the embodiments of the present invention.

Claims (4)

1. A method for identifying a black-haired pork image, characterized by comprising the following steps:
presetting a residual network model, wherein the residual network model comprises:
a plurality of residual modules, wherein each residual module comprises at least one convolution layer and one pooling layer connected in series, the convolution layer is used for filtering an input black-haired pork image, the pooling layer is used for further integrating the processed black-haired pork image, and the input end and the output end of each residual module are connected; and
an adaptive network, connected to the residual modules and used for identifying and classifying the black-haired pork image, wherein the number of layers of the adaptive network is 3;
training the residual network model with a preset database, and initializing the parameters of the residual network model and the weights of its variables, wherein the database comprises at least one pork image;
using an LReLU function as the activation function of the adaptive network;
training the residual network model again with a preset sample set, wherein the sample set comprises at least one black-haired pork image;
outputting the residual network model; and
identifying the black-haired pork image with the residual network model.
2. The identification method according to claim 1, wherein the training of the residual network model with the preset database, and the initializing of the parameters of the residual network model and the weights of its variables, comprise:
cropping each pork image; and
performing at least one of affine transformation, perspective transformation, and image rotation on each pork image so as to expand the number of pork images.
3. The identification method according to claim 2, characterized in that formula (1) is used as the LReLU function:
f(x) = x,   x ≥ 0
f(x) = αx,  x < 0        (1)
wherein x is an input value, f(x) is an output value, and α is a preset parameter.
4. The identification method according to claim 3, wherein α has a value of 0.01.
CN201910078350.2A 2019-01-28 2019-01-28 Black hair pork image identification method Active CN109784417B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910078350.2A CN109784417B (en) 2019-01-28 2019-01-28 Black hair pork image identification method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910078350.2A CN109784417B (en) 2019-01-28 2019-01-28 Black hair pork image identification method

Publications (2)

Publication Number Publication Date
CN109784417A CN109784417A (en) 2019-05-21
CN109784417B true CN109784417B (en) 2023-03-24

Family

ID=66502708

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910078350.2A Active CN109784417B (en) 2019-01-28 2019-01-28 Black hair pork image identification method

Country Status (1)

Country Link
CN (1) CN109784417B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110503314A (en) * 2019-08-02 2019-11-26 Oppo广东移动通信有限公司 A kind of freshness appraisal procedure and device, storage medium
CN113240081B (en) * 2021-05-06 2022-03-22 西安电子科技大学 High-resolution range profile target robust identification method aiming at radar carrier frequency transformation

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107239802A (en) * 2017-06-28 2017-10-10 广东工业大学 A kind of image classification method and device
CN108052884A (en) * 2017-12-01 2018-05-18 华南理工大学 A kind of gesture identification method based on improvement residual error neutral net

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10528846B2 (en) * 2016-11-14 2020-01-07 Samsung Electronics Co., Ltd. Method and apparatus for analyzing facial image

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107239802A (en) * 2017-06-28 2017-10-10 广东工业大学 A kind of image classification method and device
CN108052884A (en) * 2017-12-01 2018-05-18 华南理工大学 A kind of gesture identification method based on improvement residual error neutral net

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Image steganalysis method based on deep residual network; Gao Peixian et al.; Computer Engineering and Design; 2018-10-16 (No. 10); full text *

Also Published As

Publication number Publication date
CN109784417A (en) 2019-05-21

Similar Documents

Publication Publication Date Title
CN105044298B (en) A kind of Eriocheir sinensis class grade of freshness detection method based on machine olfaction
CN109784417B (en) Black hair pork image identification method
Yun Prediction model of algal blooms using logistic regression and confusion matrix
CN113723070B (en) Text similarity model training method, text similarity detection method and device
CN111626821A (en) Product recommendation method and system for realizing customer classification based on integrated feature selection
CN106156805A (en) A kind of classifier training method of sample label missing data
CN114360652B (en) Cell strain similarity evaluation method and similar cell strain culture medium formula recommendation method
CN115049019B (en) Method and device for evaluating arsenic adsorption performance of metal organic framework and related equipment
CN111401444B (en) Method and device for predicting red wine origin, computer equipment and storage medium
Yan et al. A deep learning method combined with electronic nose to identify the rice origin
CN110634198B (en) Industrial system layered fault diagnosis method based on regular polycell filtering
CN113723535A (en) CycleGAN deep learning-based cell micronucleus image processing method and storage medium
CN109617864B (en) Website identification method and website identification system
Akritas et al. 7 Statistical analysis of censored environmental data
CN113177578A (en) Agricultural product quality classification method based on LSTM
CN110910970B (en) Method for predicting toxicity of chemicals by taking zebra fish embryos as receptors through building QSAR model
CN112001436A (en) Water quality classification method based on improved extreme learning machine
CN109657710B (en) Data screening method and device, server and storage medium
CN115600102B (en) Abnormal point detection method and device based on ship data, electronic equipment and medium
Zhao The water potability prediction based on active support vector machine and artificial neural network
CN115345248A (en) Deep learning-oriented data depolarization method and device
Zhu et al. Rapid freshness prediction of crab based on a portable electronic nose system
CN114240929A (en) Color difference detection method and device
CN113159419A (en) Group feature portrait analysis method, device and equipment and readable storage medium
CN105628741A (en) Automatic pork flavor classification method based on data space conversion of electronic nose

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20240402

Address after: 233600 110m to the southwest of Zijin Yujing, Ziguang Avenue South, Guoyang County, Bozhou City, Anhui Province

Patentee after: Woyang Quantum Information Technology Co.,Ltd.

Country or region after: China

Address before: 230061 No. 130 Changjiang West Road, Hefei, Anhui

Patentee before: Anhui Agricultural University

Country or region before: China

Patentee before: ANHUI ACQUISITIVE, INTERNET OF THINGS Co.,Ltd.

TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20240429

Address after: 230036 No. 130 Changjiang West Road, Hefei, Anhui

Patentee after: Anhui Agricultural University

Country or region after: China

Address before: 233600 110m to the southwest of Zijin Yujing, Ziguang Avenue South, Guoyang County, Bozhou City, Anhui Province

Patentee before: Woyang Quantum Information Technology Co.,Ltd.

Country or region before: China