CN114170622B - Koi evaluation device, method, program, and storage medium - Google Patents


Info

Publication number
CN114170622B
CN114170622B CN202111334580.4A
Authority
CN
China
Prior art keywords
koi
evaluation
information
completion model
feature amount
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111334580.4A
Other languages
Chinese (zh)
Other versions
CN114170622A (en)
Inventor
范军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sanxin Trading Co ltd
Original Assignee
Sanxin Trading Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sanxin Trading Co ltd
Publication of CN114170622A
Application granted
Publication of CN114170622B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 Machine learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A40/00 Adaptation technologies in agriculture, forestry, livestock or agroalimentary production
    • Y02A40/80 Adaptation technologies in agriculture, forestry, livestock or agroalimentary production in fisheries management
    • Y02A40/81 Aquaculture, e.g. of fish

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Medical Informatics (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a koi evaluation device, method, program, and storage medium for objectively evaluating koi. The koi evaluation device includes: a feature value acquisition unit (11) that acquires a feature amount extracted from an image obtained by capturing a koi to be evaluated; an evaluation unit (12) that evaluates the koi to be evaluated by inputting the feature amount acquired by the feature value acquisition unit (11) into a learning completion model obtained by machine learning the relationship between the feature amount of each of a plurality of koi and the evaluation result of that koi; and an output unit (13) that outputs the evaluation result of the koi to be evaluated, as evaluated by the evaluation unit (12).

Description

Koi evaluation device, method, program, and storage medium
Technical Field
The present invention relates to a koi evaluation device, method, and program for evaluating the value of koi.
Background
Evaluation meetings for judging koi have long been held. At a conventional koi evaluation meeting, a panel of judges evaluates the koi produced by breeders and ranks them based on the evaluation results.
Conventionally, koi are judged from the following points of view.
1. A koi is a living creature and should therefore be "healthy".
2. It is an ornamental fish and should therefore be "clean and beautiful".
3. Among ornamental fish it is a work of art, also called a "swimming gemstone", and should have artistic and varietal characteristics.
Documents of the prior art
Patent literature
Patent document 1: japanese patent No. 6650984
Disclosure of Invention
Problems to be solved by the invention
However, since conventional judging is performed on the basis of general criteria such as those above, it depends heavily on the subjectivity of the judges, and even for a highly ranked koi, the specific grounds for the evaluation, for example which part of the koi was rated highly, remain unclear.
In the koi trade, where the transaction price is determined according to results at evaluation meetings and the like, an objective evaluation of koi is desired.
Patent document 1 proposes a method for identifying an individual koi based on a captured image of the koi, but makes no proposal regarding the evaluation of koi.
In view of the above circumstances, an object of the present invention is to provide a koi evaluation device, a method, and a program that enable objective evaluation of koi.
Means for solving the problems
The koi evaluation device of the present invention includes: a feature value acquisition unit that acquires a feature amount extracted from an image obtained by capturing a koi to be evaluated; an evaluation unit that evaluates the koi to be evaluated by inputting the feature amount acquired by the feature value acquisition unit into a learning completion model obtained by machine learning the relationship between the feature amount of each of a plurality of koi and the evaluation result of that koi; and an output unit that outputs the evaluation result of the koi to be evaluated, as evaluated by the evaluation unit.
In the koi evaluation method of the present invention, a feature amount extracted from an image obtained by capturing a koi to be evaluated is acquired; the koi is evaluated by inputting the acquired feature amount into a learning completion model obtained by machine learning the relationship between the feature amount of each of a plurality of koi and the evaluation result of that koi; and the evaluation result of the koi to be evaluated is output.
The koi evaluation program of the present invention causes a computer to execute the steps of: acquiring a feature amount extracted from an image obtained by capturing a koi to be evaluated; evaluating the koi to be evaluated by inputting the acquired feature amount into a learning completion model obtained by machine learning the relationship between the feature amount of each of a plurality of koi and the evaluation result of that koi; and outputting the evaluation result of the koi to be evaluated.
Advantageous Effects of Invention
According to the koi evaluation device, method, and program of the present invention, objective evaluation of koi can be performed by acquiring a feature amount extracted from an image obtained by imaging koi as an evaluation target, and evaluating koi as an evaluation target by inputting the feature amount of koi as an evaluation target to a learning completion model obtained by machine learning a relationship between the feature amount of each of a plurality of koi and an evaluation result of the koi, and outputting the evaluation result of koi as the evaluation target.
Drawings
Fig. 1 is a block diagram showing a schematic configuration of an embodiment of a koi evaluation device according to the present invention.
Fig. 2 is a diagram showing an example of an image of a koi as an evaluation target.
Fig. 3 is a diagram showing an example of an image in which the posture correction processing is completed.
Fig. 4 is a diagram showing an example of the outline information of koi.
Fig. 5 is a diagram showing an example of a frequency distribution of colors of each pixel constituting an image of koi.
Fig. 6 is a diagram showing an example of a radar chart as an evaluation result of koi.
Fig. 7 is a flowchart for explaining the processing flow of the koi evaluation apparatus.
Detailed Description
Hereinafter, an embodiment of the koi evaluation device according to the present invention will be described in detail with reference to the drawings. Fig. 1 is a schematic configuration diagram of a koi evaluation device 1 according to the present embodiment.
The koi evaluation apparatus 1 extracts a feature amount from an image obtained by imaging a koi, evaluates the koi based on the feature amount, and outputs the evaluation result. This allows koi, which have conventionally been evaluated subjectively, to be evaluated objectively.
Specifically, as shown in fig. 1, the koi evaluation device 1 includes an image acquisition unit 10, a feature value acquisition unit 11, an evaluation unit 12, and an output unit 13.
The image acquisition unit 10 acquires an image obtained by imaging a koi to be evaluated. Specifically, the image acquisition unit 10 acquires an image of the koi viewed in plan from above (from the dorsal fin side). In the present embodiment, the dorsal fin side of the koi is taken as upper, the ventral side as lower, the head side as front, the tail side as rear, the right eye side as right, and the left eye side as left.
The image acquisition unit 10 acquires an image of a koi output from a terminal device connected to the koi evaluation device 1 via a communication line such as the internet. The terminal device may be a tablet terminal, a smart phone, a desktop computer, a notebook computer, or the like.
The feature value acquisition unit 11 extracts a feature amount from the image of the koi to be evaluated acquired by the image acquisition unit 10. In the present embodiment, information on the body type of the koi, information on the color of the koi, information on the pattern of the koi, and information on the degree of finish (conditioning) of the koi are acquired as the feature amounts.
Before the feature amount extraction described above, the feature value acquisition unit 11 performs preprocessing on the captured image of the koi to be evaluated in order to improve the accuracy of feature extraction. As the preprocessing, the feature value acquisition unit 11 first performs posture correction processing, which corrects the posture of the koi in the image to a preset posture.
The posture correction processing is as follows. When, as shown in fig. 2, orthogonal X and Y axes are set as the coordinate system of the image of the koi to be evaluated and the koi in the image does not lie along the Y direction, the posture of the koi is brought into line with the Y direction. Specifically, in the posture correction processing, first, a clipping process is performed in which the minimum rectangle containing the entire koi in the image (the dotted rectangle shown in fig. 2) is set and the image within that rectangle is clipped out. Then, a rotation process is performed so that the longer side of the clipped rectangular image is parallel to the Y direction. Through the clipping process and the rotation process, an image for which the posture correction processing is completed is generated as shown in fig. 3.
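The clipping and rotation steps can be sketched in Python. This minimal numpy version estimates the silhouette's long-axis angle from second-order image moments and clips to an axis-aligned bounding rectangle; the moment-based angle estimate, the axis-aligned simplification, the function names, and the binary-mask input are all illustrative assumptions, since the patent leaves the concrete algorithm to existing image processing.

```python
import numpy as np

def angle_from_y_axis(mask: np.ndarray) -> float:
    """Angle (degrees) between the principal (long) axis of a binary
    silhouette and the Y (row) axis, from second-order central moments.
    Rotating the cropped image by this angle aligns the koi with Y."""
    ys, xs = np.nonzero(mask)
    x0, y0 = xs.mean(), ys.mean()
    mu20 = ((xs - x0) ** 2).mean()
    mu02 = ((ys - y0) ** 2).mean()
    mu11 = ((xs - x0) * (ys - y0)).mean()
    theta = 0.5 * np.arctan2(2.0 * mu11, mu20 - mu02)  # angle from X axis
    return 90.0 - np.degrees(theta)                    # angle from Y axis

def crop_to_mask(img: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Clip the image to the minimum axis-aligned rectangle containing
    the whole silhouette (the dotted rectangle of fig. 2)."""
    ys, xs = np.nonzero(mask)
    return img[ys.min(): ys.max() + 1, xs.min(): xs.max() + 1]
```

A production version would use an oriented minimum-area rectangle and an affine rotation from an image-processing library.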
Next, color correction processing is performed on the image for which the posture correction processing is completed. The color correction processing brings the colors of the image close to the true colors. For example, when a shadow is present in the image, color correction processing is performed to remove the shadow. Existing image processing can be used for the shadow-removing color correction.
Then, contour extraction processing is performed on the image for which the color correction processing is completed. The contour extraction processing extracts the contour of the koi; existing image processing can be used for it. The contour information of the koi extracted by this processing is used in the feature amount extraction described below.
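A minimal sketch of the contour extraction, assuming the preprocessed image has already been segmented into a binary silhouette; the segmentation step and the 4-neighbour definition of a boundary pixel are assumptions, as the patent only says existing image processing can be used.

```python
import numpy as np

def extract_contour(mask: np.ndarray) -> np.ndarray:
    """Contour of a binary silhouette: foreground pixels that have at
    least one 4-neighbour outside the silhouette. A stand-in for the
    'existing image processing' the patent refers to."""
    m = mask.astype(bool)
    p = np.pad(m, 1, constant_values=False)
    # A pixel is interior if it and all four 4-neighbours are foreground.
    interior = p[:-2, 1:-1] & p[2:, 1:-1] & p[1:-1, :-2] & p[1:-1, 2:] & m
    return m & ~interior
```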
The feature amount acquisition unit 11 acquires information on the body type, color, pattern, and decoration completion degree of a koi based on the image subjected to the posture correction processing and the color correction processing described above and the contour information of the koi extracted by the contour extraction processing.
First, information on the body type of koi will be described.
In evaluating koi, the following points are considered with respect to the body type of koi.
The tail trunk should be neither too thin nor too thick, and should be in proportion to the other body parts
The size of the head should be in balance with the whole body
Each fin should be of well-proportioned size and left-right symmetric with respect to the body
The body should arch around the dorsal line and spine as the center line, left-right symmetric and without skew
The line from the head to the tail fin should be smooth
Therefore, the feature value acquisition unit 11 of the present embodiment determines a to f shown in fig. 4 based on the contour information of the koi. a is the width of the head: the distance from the inner edge of the left eye to the inner edge of the right eye. b is the shoulder width (the width at the thickest part of the body): the distance between the rearmost edge of the left pectoral fin and the rearmost edge of the right pectoral fin. c is the protrusion of the abdomen: the distance between the left and right body edges at the front end of the dorsal fin. d is the thickness of the tail trunk: the distance between the left and right body edges at the front end of the tail fin. e is the lateral width of the pectoral fin: the distance in the left-right direction from the base of the pectoral fin to its edge. f is the longitudinal width of the pectoral fin: the length from the front end to the rear end of the pectoral fin.
Further, the positions of the left and right eyes, the pectoral fins, the dorsal fin, and the tail fin are detected using existing image processing and pattern recognition.
Then, the feature value acquisition unit 11 calculates the following equation to obtain the ratios Wa to Wf.
Wa = a/b, Wb = b/c, Wc = c/d, Wd = a/d, We = b/e, Wf = e/f
The feature value acquisition unit 11 also finds g to k shown in fig. 4 based on the contour information of the koi. g is the total body length from the apex of the mouth to the posterior end of the tail fin. h is the head length from the top of the mouth to the rearmost end of the gill cover. i is the back length from the rearmost end of the gill cover to the front end of the dorsal fin. j is the carcass length from the front end of the dorsal fin to the front end of the tail fin. k is the tail fin length from the front end of the tail fin to the back end of the tail fin.
Although not shown in fig. 4, the position of the gill cover can be detected by extracting the profile of the gill cover. When detection from the image captured from above the koi alone is difficult, the image acquisition unit 10 may further acquire an image captured from the left side or the right side, and the feature value acquisition unit 11 may detect the position of the gill cover from those images.
Then, the feature value acquisition unit 11 calculates the following expression to obtain the ratios Lh to Lf.
Lh = g/h, Li = g/i, Lj = g/j, Lk = g/k, Lf = g/f
The feature value acquisition unit 11 calculates 11 ratios Wa to Wf and Lh to Lf described above as information on the body type of the koi.
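The two sets of ratios above can be computed directly from the measured distances; a trivial sketch (the function name and dictionary layout are illustrative):

```python
def body_type_ratios(a, b, c, d, e, f, g, h, i, j, k):
    """The 11 body-type ratios of the embodiment: Wa to Wf from the
    widths a to f, and Lh to Lf from the total length g, the partial
    lengths h to k, and the pectoral-fin longitudinal width f."""
    return {
        "Wa": a / b, "Wb": b / c, "Wc": c / d,
        "Wd": a / d, "We": b / e, "Wf": e / f,
        "Lh": g / h, "Li": g / i, "Lj": g / j,
        "Lk": g / k, "Lf": g / f,
    }
```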
The feature value acquisition unit 11 also obtains the spine line of the koi from the contour information of the koi. Specifically, a line running from the front end of the dorsal fin, along the dorsal fin, to the front end of the tail fin is obtained as the spine line. The rear end of the dorsal fin may be connected to the front end of the tail fin by, for example, a straight line.
Next, the feature value acquisition unit 11 acquires information on whether or not the koi has defects as information on the body type of the koi.
Specifically, the feature value acquisition unit 11 checks, based on the contour information of the koi, whether the whiskers (barbels) are complete, that is, whether there are 4 whiskers: 2 large and 2 small. The feature value acquisition unit 11 also obtains the positions of the left and right eyes from the image of the koi and checks whether the eye positions are left-right symmetric. Further, the feature value acquisition unit 11 detects, based on the contour information of the koi, whether the gill cover is warped, and checks whether a pectoral fin is missing, twisted, or deformed.
The feature value acquisition unit 11 extracts the scales from the image of the koi and determines whether they are disordered, that is, not arranged in regular rows. Specifically, the feature value acquisition unit 11 calculates the center of each scale and the straight lines connecting the centers of scales adjacent in the front-rear direction. It then detects whether these lines are uneven and contain irregularities, thereby confirming whether the scale arrangement is disordered.
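The line-based check can be sketched as follows. This version fits one least-squares line through a front-to-back row of scale centers and measures the largest deviation from it; fitting a single line (rather than joining neighbouring centers segment by segment) and the deviation threshold it would feed are assumptions.

```python
import numpy as np

def scale_row_irregularity(centers) -> float:
    """Maximum perpendicular deviation of scale centres from the straight
    line through a front-to-back row of centres. In the posture-corrected
    image the koi lies along the Y axis, so the row is modelled as
    x = m*y + c; a large deviation flags a disordered scale row."""
    pts = np.asarray(centers, float)           # (x, y) centre per scale
    m, c = np.polyfit(pts[:, 1], pts[:, 0], 1)
    dist = np.abs(m * pts[:, 1] - pts[:, 0] + c) / np.hypot(m, 1.0)
    return float(dist.max())
```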
The feature value acquisition unit 11 acquires information on whether a scar is present by checking whether the scale pattern extracted from the image of the koi is missing over a range equal to or larger than a predetermined area.
As described above, the feature value acquisition unit 11 acquires information on the whiskers, the symmetry of the left and right eyes, warping of the gill cover, pectoral fin defects, the scale arrangement, and scars as the information on whether the koi has defects.
That is, the feature value acquisition unit 11 acquires the 11 ratios, the spine line, and the 6 pieces of defect information as the information on the body type of the koi.
As the information of the spine line, for example, the coordinates of a plurality of points on the line, plotted in a preset coordinate system, are acquired. For each of the 6 pieces of defect information, for example, the value "1" is acquired when the defect is present and the value "0" when it is not.
Next, information on the color of the koi is described. In the present embodiment, information on the color when the breed of koi is "red and white" (kohaku) will be described.
When evaluating a koi (red and white), the following points are considered regarding the white ground (base skin) as information on the color of the koi.
The white should be a deep, clear milky white
The ground should show no underlying red (which makes the skin look pinkish)
There should be no yellowing
There should be no stains or the like
Therefore, the feature value acquisition unit 11 of the present embodiment obtains a frequency distribution of colors of each pixel constituting an image of a koi as shown in fig. 5.
Then, the feature value acquisition unit 11 obtains the frequency of milky-white pixels from the color frequency distribution and acquires it as the white-ground information. The RGB values regarded as milky white are set to predetermined values. The white of a koi (red and white) is preferably milky white.
The feature value acquisition unit 11 obtains the frequency of pink pixels corresponding to underlying red from the color frequency distribution and acquires it as the underlying-red information. The RGB values of the pink corresponding to underlying red are set to predetermined values. The white of a koi (red and white) is preferably milky white, with no red (pink) showing through.
The feature value acquisition unit 11 obtains the frequency of yellow pixels corresponding to yellowing from the color frequency distribution and acquires it as the yellowing information. The RGB values of the yellow corresponding to yellowing are set to predetermined values. It is preferable that no yellowing is observed.
The feature value acquisition unit 11 obtains the frequency of brown and black pixels corresponding to stains from the color frequency distribution and acquires it as the stain information. The RGB values of the browns and blacks corresponding to stains are set to predetermined values. No stains are preferable.
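The four frequency features above share one pattern: count the pixels falling inside a preset RGB range. A sketch, in which the concrete bounds are assumptions since the patent only says they are set to predetermined values:

```python
import numpy as np

def color_frequencies(img: np.ndarray, ranges: dict) -> dict:
    """Pixel count of each named colour range in an RGB image, used for
    the white-ground, underlying-red, yellowing and stain features.
    `ranges` maps a name to inclusive (lower, upper) RGB bounds."""
    freqs = {}
    for name, (lo, hi) in ranges.items():
        inside = np.all((img >= np.asarray(lo)) & (img <= np.asarray(hi)),
                        axis=-1)
        freqs[name] = int(inside.sum())
    return freqs
```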
Next, regarding the color information of the koi, the following points are considered with respect to the red disc (the red portions).
The color of the red disc should be uniform from the head to the tail trunk
The red should be bright with a good hue (persimmon and pink hues are regarded as bright and good)
Rather than the surface color alone, the thickness of the red matters (a deeply dyed red disc in which the scale boundaries are not visible is considered thick)
The edges of the red disc should be cleanly and regularly cut, with no bleeding of red at the boundaries with the white ground or the scales
The red disc should be free of dark, dull, or blotchy areas
Therefore, the feature value acquisition unit 11 of the present embodiment extracts the range of the red disc from the image of the koi and calculates the frequency distribution of the color of each pixel within that range. The RGB range corresponding to the red disc is set to a preset range.
Then, the feature value acquisition unit 11 obtains the variance or standard deviation of the color frequency distribution within the red disc and acquires it as information on the uniformity of the red-disc color. A small variance or standard deviation, meaning a uniform color, is preferable.
The feature value acquisition unit 11 converts the RGB values of each pixel within the red disc into, for example, the L*a*b* color space, obtains the saturation (chroma) from the a* and b* values, and acquires it as information on the vividness (hue) of the red-disc color.
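The chroma computed from a* and b* can be sketched as below; the RGB to L*a*b* conversion itself is assumed to be done beforehand by an image-processing library, and averaging the per-pixel chroma over the red disc is an assumption.

```python
import math

def mean_chroma(lab_pixels) -> float:
    """Mean chroma C* = sqrt(a*^2 + b*^2) over (L*, a*, b*) tuples: the
    'saturation based on the values of a* and b*' used as the red-disc
    vividness feature."""
    chromas = [math.hypot(a, b) for (_L, a, b) in lab_pixels]
    return sum(chromas) / len(chromas)
```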
The feature value acquisition unit 11 extracts the scale pattern contained in the entire image of the red disc, detects the edges of the extracted scale pattern, obtains the edge amount, and acquires it as information on the thickness of the red disc. The edge amount is the total number of pixels constituting edges. A small edge amount means the red disc is thick, which is preferable.
Further, the feature value acquisition unit 11 obtains the sharpness (degree of blur) of the edge at the boundary between the red disc and the white ground from the image of the koi and acquires it as information on color bleeding at the front and rear edges of the red disc. An existing method can be used to calculate the edge sharpness. High edge sharpness means little bleeding at the edges, which is preferable.
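The edge-amount feature can be sketched as counting pixels whose gradient magnitude exceeds a threshold; the simple finite-difference gradient and the threshold value are assumptions, as the patent does not fix the edge detector.

```python
import numpy as np

def edge_amount(gray: np.ndarray, thresh: float = 30.0) -> int:
    """Edge amount: number of pixels whose gradient magnitude exceeds a
    threshold. A small edge amount inside the red disc means the scale
    boundaries are barely visible, i.e. the red is 'thick'."""
    gy, gx = np.gradient(gray.astype(float))
    mag = np.hypot(gx, gy)
    return int((mag > thresh).sum())
```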
Dark or dull spots where the color does not develop on the red disc are obtained as the blemish information mentioned above.
That is, the feature value acquisition unit 11 acquires, as the color information of the koi, information on the white ground, underlying red, yellowing, and stains, and on the uniformity of the red-disc color, the vividness (hue) of the red-disc color, the thickness of the red disc, and color bleeding at the edges of the red disc.
Next, information on the pattern of the koi is described. In the present embodiment, information on the pattern when the breed of koi is "red and white" will be described.
When evaluating a koi (red and white), the following points are considered regarding the pattern of the koi.
The shape, size, and number of the red patches should be in harmony with the body, from the head through the trunk to the tail
The red disc should stop short of the tip of the nose and the base of the tail fin, with cleanly cut edges
Therefore, the feature value acquisition unit 11 of the present embodiment extracts the range of the red disc from the image of the koi. The feature value acquisition unit 11 then checks whether part of the red disc falls within the region of the nose tip and whether the red disc covers the base of the tail fin. The region of the nose tip and the base of the tail fin are set under predetermined conditions using existing image processing.
When no part of the red disc falls within the nose-tip region and the red disc does not cover the base of the tail fin, the feature value acquisition unit 11 acquires "good" as the red-disc coverage information; when part of the red disc falls within the nose-tip region or the red disc covers the base of the tail fin, it acquires "bad" as the red-disc coverage information.
The feature value acquisition unit 11 divides the image of the koi into 3 regions, namely the head, the body, and the tail, and identifies the range of the red disc contained in each region. The head, body, and tail may be divided under predetermined conditions.
Then, the feature value acquisition unit 11 calculates, for each part, the ratio of the area of the red disc contained in that part to the entire area of the part, and acquires the ratio of each part as the red-disc balance information. When the red-disc ratio of each part is 50% to 90%, the balance of the red disc is determined to be good; otherwise it is determined to be poor.
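The per-part area ratios can be sketched as below. Splitting the posture-corrected (head-at-top) image into equal thirds is an assumption; the patent only says the parts may be divided under predetermined conditions.

```python
import numpy as np

def red_disc_balance(red_mask: np.ndarray) -> dict:
    """Red-disc area ratio of the head, body and tail thirds of a
    posture-corrected image; a ratio of 50-90% per part counts as a
    well-balanced pattern."""
    n = red_mask.shape[0]
    parts = {
        "head": red_mask[: n // 3],
        "body": red_mask[n // 3: 2 * n // 3],
        "tail": red_mask[2 * n // 3:],
    }
    out = {}
    for name, part in parts.items():
        ratio = float(part.mean())  # fraction of the part that is red
        out[name] = {"ratio": ratio, "good": 0.5 <= ratio <= 0.9}
    return out
```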
The feature value acquisition unit 11 also acquires the number of red discs contained in each part and the perimeter of each red disc as red-disc balance information.
That is, the feature value acquisition unit 11 acquires the red-disc coverage information and the red-disc balance information as the pattern information of the koi. The coverage information is, for example, the value "1" when it is "good" and the value "0" when it is "bad".
Next, information on the degree of finish of the koi is described. In the present embodiment, information on the degree of finish when the breed of koi is "red and white" will be described.
When evaluating a koi (red and white), the following points are considered regarding the degree of finish of the koi.
The skin should be clear, moist, and bright, without white cloudiness
The ground skin should be translucent, glossy, lustrous, and smooth, without congestion
The red should stand out in contrast against the ground skin (a bright hue)
Therefore, the feature value acquisition unit 11 obtains the contrast, brightness, and saturation from the image of the koi and acquires them as the degree-of-finish information. As for the saturation, the saturation within the red disc is obtained as color information of the koi, whereas the saturation of the entire image of the koi is obtained as degree-of-finish information.
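One common set of definitions for these three whole-image features is sketched below; the patent does not fix the formulas, so mean luminance for brightness, luminance standard deviation for contrast, and HSV-style per-pixel saturation are assumptions.

```python
import numpy as np

def finish_features(img: np.ndarray) -> dict:
    """Whole-image brightness, contrast and saturation used as the
    degree-of-finish features."""
    rgb = img.astype(float)
    lum = rgb @ np.array([0.299, 0.587, 0.114])  # ITU-R BT.601 luma
    mx = rgb.max(axis=-1)
    mn = rgb.min(axis=-1)
    sat = np.where(mx > 0, (mx - mn) / np.maximum(mx, 1e-9), 0.0)
    return {"brightness": float(lum.mean()),
            "contrast": float(lum.std()),
            "saturation": float(sat.mean())}
```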
The above description is of the feature amount acquired by the feature amount acquisition unit 11 of the present embodiment.
Next, the evaluation unit 12 will be described.
The evaluation unit 12 has learning completion models obtained by machine learning the relationship between the above-described feature amounts of each of a plurality of koi and the evaluation results of those koi. More specifically, the evaluation unit 12 of the present embodiment includes the following 4 learning completion models: a body type evaluation learning completion model for evaluating the body type of the koi to be evaluated; a texture evaluation learning completion model for evaluating the texture (color quality) of the koi; a pattern evaluation learning completion model for evaluating the pattern of the koi; and a degree-of-finish evaluation learning completion model for evaluating the degree of finish of the koi.
As for the body type evaluation learning completion model, a reference learning completion model is first generated by, for example, machine learning the relationship between the above-described body type information of 10 grand champion koi from past international koi evaluation meetings and All Japan koi evaluation meetings and the evaluation results (scores) regarding body type obtained there. The reference learning completion model is then further trained on the relationship between the body type information of an arbitrary plurality of koi and their evaluation results (scores) regarding body type, yielding the body type evaluation learning completion model. The arbitrary plurality of koi includes both koi with good evaluation results and koi with poor ones.
As for the texture evaluation learning completion model, a reference learning completion model is generated by machine learning the relationship between the above-described color information of the 10 grand champion koi and the evaluation results (scores) regarding color. The reference learning completion model is then further trained on the relationship between the color information of an arbitrary plurality of koi and their evaluation results (scores) regarding color, yielding the texture evaluation learning completion model.
For the pattern evaluation learning completion model, a reference learning completion model is generated by machine learning the relationship between the above-described pattern information of the 10 grand champion koi and the evaluation results (scores) regarding pattern. The reference learning completion model is then further trained on the relationship between the pattern information of an arbitrary plurality of koi and their evaluation results (scores) regarding pattern, yielding the pattern evaluation learning completion model.
For the degree-of-finish evaluation learning completion model, a reference learning completion model is generated by machine learning the relationship between the degree-of-finish information of the 10 grand champion koi and the evaluation results (scores) regarding the degree of finish. The reference learning completion model is then further trained on the relationship between the degree-of-finish information of an arbitrary plurality of koi and their evaluation results (scores) regarding the degree of finish, yielding the degree-of-finish evaluation learning completion model.
As the machine learning method, a known method can be used, such as a deep neural network (DNN), a convolutional neural network (CNN), a recurrent neural network (RNN), or a stacked denoising autoencoder (SDA).
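As one concrete illustration, a scoring model of the kind described above can be sketched as a small feed-forward network that maps a feature vector (for example, the body type information) to a score. The layer sizes, random initialization, and the 0-100 score range below are assumptions for illustration; the patent names DNN/CNN/RNN/SDA only as candidate methods and does not fix an architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

class ScoringModel:
    """Minimal sketch of one learning completion model: feature vector in,
    evaluation score out. Weights here are random stand-ins, not trained."""

    def __init__(self, n_features: int, n_hidden: int = 16):
        self.w1 = rng.normal(0.0, 0.1, (n_features, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.w2 = rng.normal(0.0, 0.1, (n_hidden, 1))
        self.b2 = np.zeros(1)

    def predict(self, x: np.ndarray) -> float:
        h = np.maximum(0.0, x @ self.w1 + self.b1)   # ReLU hidden layer
        raw = (h @ self.w2 + self.b2)[0]
        return float(100.0 / (1.0 + np.exp(-raw)))   # squash to a 0-100 score

model = ScoringModel(n_features=8)
score = model.predict(rng.normal(size=8))
```

In an actual system the weights would be obtained by the two-stage training described above (reference model on the champion koi, then further training on an arbitrary plurality of koi).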
The evaluation unit 12 evaluates the koi to be evaluated using 4 learning completion models, i.e., the body type evaluation learning completion model, the texture evaluation learning completion model, the pattern evaluation learning completion model, and the finish-of-decoration evaluation learning completion model.
Specifically, the evaluation unit 12 inputs the body type information of the koi to be evaluated, acquired by the feature value acquisition unit 11, to the body type evaluation learning completion model, thereby evaluating the body type of the koi and obtaining an evaluation result (score).
The evaluation unit 12 inputs the color information of the koi to be evaluated, acquired by the feature value acquisition unit 11, to the texture evaluation learning completion model, thereby evaluating the texture of the koi and obtaining an evaluation result (score).
The evaluation unit 12 inputs the pattern information of the koi to be evaluated, acquired by the feature value acquisition unit 11, to the pattern evaluation learning completion model, thereby evaluating the pattern of the koi and obtaining an evaluation result (score).
The evaluation unit 12 inputs the finish-of-decoration information of the koi to be evaluated, acquired by the feature value acquisition unit 11, to the learning completion model for finish-of-decoration evaluation, thereby evaluating the finish of decoration of the koi and obtaining an evaluation result (score).
As described above, the evaluation unit 12 obtains the scores of the body type, the texture, the pattern, and the finish of decoration of the koi to be evaluated.
The evaluation unit 12 multiplies each of the 4 scores by a predetermined weight and adds the results to obtain a total evaluation result (total score) for the koi to be evaluated.
The weights can be set arbitrarily; for example, the weight of the body type score is set to 40%, the weight of the texture score to 30%, the weight of the pattern score to 20%, and the weight of the finish-of-decoration score to 10%. In this case, when the score of the body type is s1, the score of the texture is s2, the score of the pattern is s3, and the score of the finish of decoration is s4, the total score sa is calculated by the following equation.
sa=0.4×s1+0.3×s2+0.2×s3+0.1×s4
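The total-score calculation is a direct transcription of the equation above; the function below uses the example weights as its default, with the weight tuple exposed as a parameter so other weightings can be substituted.

```python
def total_score(s1, s2, s3, s4, weights=(0.4, 0.3, 0.2, 0.1)):
    """Weighted total sa = w1*s1 + w2*s2 + w3*s3 + w4*s4, where s1..s4 are
    the body type, texture, pattern, and finish-of-decoration scores."""
    return sum(w * s for w, s in zip(weights, (s1, s2, s3, s4)))

# Example: 0.4*80 + 0.3*70 + 0.2*60 + 0.1*90 = 74.0
sa = total_score(80, 70, 60, 90)
```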
The weights are not limited to the above example, and may be changed according to the growth stage of the koi to be evaluated.
For example, when the koi to be evaluated is a fry (12th to 25th sections, 1 cm to 25 cm), the weights of the body type, the texture, the pattern, and the finish of decoration are each set to 25%. When the koi to be evaluated is a young koi (30th to 40th sections, 25.1 cm to 40 cm), the weights of the body type, the texture, the pattern, and the finish of decoration are set to 35%, 30%, 20%, and 15%, respectively. When the koi to be evaluated is an adult fish (45th to 55th sections, 40.1 cm to 55 cm), the weights of the body type, the texture, the pattern, and the finish of decoration are set to 40%, 30%, 20%, and 10%, respectively. When the koi to be evaluated is a strong fish (60th to 70th sections, 55.1 cm to 70 cm), the weights of the body type, the texture, the pattern, and the finish of decoration are set to 40%, 30%, 20%, and 10%, respectively. When the koi to be evaluated is a maci (75th to 90th sections, 70.1 cm to 90 cm), the weights of the body type, the texture, the pattern, and the finish of decoration are set to 50%, 25%, 15%, and 10%, respectively.
The evaluation unit 12 may receive the information on the growth stage of koi, set a weight corresponding to the growth stage, and calculate the total score sa.
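The growth-stage weighting described above can be sketched as a lookup table keyed by stage, with a helper that maps a body length to a stage. The English stage labels and the exact length boundaries follow the translated text; note that the adult-fish sentence in the original lists only three of its four weights, so 10% for the finish of decoration is assumed here to make the weights sum to 1.

```python
# Weights: (body type, texture, pattern, finish of decoration)
STAGE_WEIGHTS = {
    "fry":         (0.25, 0.25, 0.25, 0.25),  # 1 cm - 25 cm
    "young koi":   (0.35, 0.30, 0.20, 0.15),  # 25.1 cm - 40 cm
    "adult fish":  (0.40, 0.30, 0.20, 0.10),  # 40.1 cm - 55 cm (4th weight assumed)
    "strong fish": (0.40, 0.30, 0.20, 0.10),  # 55.1 cm - 70 cm
    "maci":        (0.50, 0.25, 0.15, 0.10),  # 70.1 cm - 90 cm
}

def stage_for_length(length_cm: float) -> str:
    """Map a body length to a growth-stage label (boundaries as in the text)."""
    for stage, upper in [("fry", 25.0), ("young koi", 40.0),
                         ("adult fish", 55.0), ("strong fish", 70.0)]:
        if length_cm <= upper:
            return stage
    return "maci"

def total_score_for(length_cm, s1, s2, s3, s4):
    """Total score sa with stage-dependent weights."""
    w = STAGE_WEIGHTS[stage_for_length(length_cm)]
    return w[0] * s1 + w[1] * s2 + w[2] * s3 + w[3] * s4
```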
In the present embodiment, the evaluation unit 12 holds the 4 learning completion models, but the configuration is not limited to this; the 4 learning completion models may instead be stored in a server device or the like connected to the koi evaluation device 1 via a communication network, and the evaluation may be performed using those models.
In addition, the koi evaluation device 1 of the present embodiment may be provided with a learning completion model generation unit, and the learning completion model generation unit may generate 4 learning completion models, that is, a learning completion model for body type evaluation, a learning completion model for texture evaluation, a learning completion model for pattern evaluation, and a learning completion model for finish of decoration evaluation. In this case, the image of the koi for machine learning described above may be acquired by the image acquiring unit 10, and the feature amount of the acquired image may be acquired by the feature amount acquiring unit 11. The evaluation result (score) of each koi may be set and input by the user using a predetermined input device (not shown) connected to the koi evaluation device 1.
The output unit 13 outputs the evaluation result of the koi as the evaluation target evaluated by the evaluation unit 12. The output unit 13 of the present embodiment outputs a score s1 of the body type, a score s2 of the texture, a score s3 of the pattern, a score s4 of the finish of the decoration, and a total score sa of koi to be evaluated.
Specifically, the output unit 13 of the present embodiment displays the body type score s1, the texture score s2, the pattern score s3, the finish-of-decoration score s4, and the total score sa of the koi to be evaluated on a predetermined display device (not shown). As a display method, for example, as shown in fig. 6, the body type score s1, the texture score s2, the pattern score s3, and the finish-of-decoration score s4 are displayed in the form of a radar chart, and the total score sa is displayed as text.
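As a sketch of this display step, the four scores can be converted into the vertex coordinates of a radar-chart polygon like the one in fig. 6. Only the geometry is shown here; the axis order, starting angle, and normalization to a maximum score of 100 are assumptions, and actual rendering would be delegated to a plotting library.

```python
import math

def radar_vertices(scores, max_score=100.0):
    """Place each score on its own axis, starting at 12 o'clock and
    proceeding clockwise; returns the (x, y) vertices of the score polygon."""
    n = len(scores)
    pts = []
    for i, s in enumerate(scores):
        angle = math.pi / 2 - 2 * math.pi * i / n   # 12 o'clock, clockwise
        r = s / max_score                            # normalize to [0, 1]
        pts.append((r * math.cos(angle), r * math.sin(angle)))
    return pts

# s1..s4 for the koi to be evaluated
vertices = radar_vertices([80, 70, 60, 90])
```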
The output destination of the output unit 13 according to the present embodiment is not limited to the display device, and may be output to a computer, a server device, or the like connected to the koi evaluation device 1 via a communication network, or may be output to a printing device such as a printer.
The koi evaluation device 1 is a computer, and includes a CPU (Central Processing Unit) or a GPU (Graphics Processing Unit), a semiconductor memory such as a ROM (Read Only Memory) or a RAM (Random Access Memory), a storage device such as a hard disk, and a communication I/F (Interface).
Further, the koi evaluation program according to one embodiment of the present invention is installed in the semiconductor memory or the hard disk of the koi evaluation device 1. The CPU or the GPU executes the koi evaluation program, thereby causing the image acquisition unit 10, the feature value acquisition unit 11, the evaluation unit 12, and the output unit 13 to function.
That is, the koi evaluation program causes the computer to execute the steps of: acquiring a feature quantity extracted from an image obtained by shooting a koi as an evaluation object; evaluating a koi as an evaluation object by inputting a feature quantity of the koi as the evaluation object to a learning completion model, wherein the learning completion model is a model obtained by machine learning a relationship between the feature quantity of each koi of a plurality of koi and an evaluation result of the koi; and outputting the evaluation result of the koi as the evaluation object.
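The three steps the program executes can be sketched end to end as below. The four models are stand-in constant callables and the feature encoding is a hypothetical placeholder, not the patent's trained models or actual feature amounts.

```python
from dataclasses import dataclass

@dataclass
class KoiEvaluation:
    s1: float  # body type
    s2: float  # texture
    s3: float  # pattern
    s4: float  # finish of decoration

    def total(self, weights=(0.4, 0.3, 0.2, 0.1)) -> float:
        return sum(w * s for w, s in zip(weights, (self.s1, self.s2, self.s3, self.s4)))

def evaluate_koi(features: dict, models: dict) -> KoiEvaluation:
    """Step 2: feed each acquired feature amount to its learning completion model."""
    return KoiEvaluation(
        s1=models["body_type"](features["body_type"]),
        s2=models["texture"](features["color"]),
        s3=models["pattern"](features["pattern"]),
        s4=models["finish"](features["finish"]),
    )

# Step 1 (dummy feature amounts) and step 3 (output), with constant stand-in models:
features = {"body_type": [0.8], "color": [0.7], "pattern": [0.6], "finish": [0.9]}
models = {name: (lambda f, v=v: v) for name, v in
          [("body_type", 80.0), ("texture", 70.0), ("pattern", 60.0), ("finish", 90.0)]}
result = evaluate_koi(features, models)
print(result.total())  # weighted total score
```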
The hardware configuration of the koi evaluation device 1 is not limited to the above configuration.
In the present embodiment, the functions of the image acquisition unit 10, the feature amount acquisition unit 11, the evaluation unit 12, and the output unit 13 are all executed by a koi evaluation program, but the present invention is not limited to this, and a part or all of the functions may be configured by hardware such as an ASIC (Application Specific Integrated Circuit), an FPGA (Field Programmable Gate Array), or other electric circuits.
Next, the processing flow of the koi evaluation apparatus 1 according to the present embodiment will be described with reference to a flowchart shown in fig. 7.
First, the image acquiring unit 10 acquires an image obtained by imaging a koi to be evaluated (S10).
Next, the feature value acquisition unit 11 performs the above-described preprocessing on the image of the koi as the evaluation target (S12).
Next, the feature value obtaining unit 11 obtains feature values of the body type information, the color information, the pattern information, and the finish of decoration information of the koi based on the preprocessed image (S14).
Then, the feature amounts are input to the 4 learning completion models included in the evaluation unit 12, and the score of the body type, the score of the texture, the score of the pattern, the score of the finish of decoration, and the total score of the koi to be evaluated are obtained (S16).
The score of the koi as the evaluation target obtained by the evaluation unit 12 is output to the output unit 13, and the output unit 13 displays the score of the body type, the score of the texture, the score of the pattern, and the score of the finish of decoration of the koi as the evaluation target in the form of a radar chart and displays the total score in a text form (S18).
Further, in the koi evaluation apparatus 1 of the above embodiment, koi are evaluated using the 4 learning completion models, that is, the body type evaluation learning completion model, the texture evaluation learning completion model, the pattern evaluation learning completion model, and the finish-of-decoration evaluation learning completion model. However, the present invention is not limited to this; a comprehensive evaluation learning completion model may be generated by machine learning the relationship between the body type information, color information, and pattern information of a plurality of koi and the comprehensive score of each koi. The evaluation unit 12 may then input the body type information, color information, and pattern information of the koi to be evaluated to the comprehensive evaluation learning completion model and calculate the comprehensive score. Further, the finish-of-decoration information may be added as a feature amount when generating the comprehensive evaluation learning completion model, in which case the body type, color, pattern, and finish-of-decoration information of the koi to be evaluated is input to obtain the comprehensive score.
In the above description of the embodiment, the evaluation method in the case where the variety of a koi is "red and white" has been described, but another variety of a koi may be evaluated in the koi evaluation apparatus 1. That is, the koi evaluation apparatus 1 may evaluate a plurality of types of koi. In this case, the feature amount acquisition unit 11 of the koi evaluation apparatus 1 acquires a feature amount corresponding to the variety of the koi, and the evaluation unit 12 evaluates the koi using a learning completion model corresponding to the variety of the koi. That is, the evaluation unit 12 has a plurality of learning completion models corresponding to a plurality of koi breeds. As the learning completion model corresponding to the variety of koi, 4 learning completion models, that is, a body type evaluation learning completion model, a texture evaluation learning completion model, a pattern evaluation learning completion model, and a decoration completion degree evaluation learning completion model, are provided for each variety of koi, as in the above-described embodiment.
The varieties of koi include the following 21 types: 1. red and white; 2. Taisho tricolor; 3. Showa tricolor; 4. write carp; 5. gold-and-silver scale one (gold-and-silver-scale versions of varieties 1 to 4); 6. mani carp; 7. light yellow; 8. fall jade; 9. clothing carp; 10. variegated carp; 11. five colors; 12. fancy-skin smooth carp (the golden black dragon line is treated as fancy-skin smooth carp); 13. light brocade carp; 14. gold-and-silver scale two (gold-and-silver-scale versions of varieties 6 to 13); 15. non-humiture carp (including the silver scale species and the pine leaf species); 16. light non-humiture carp (including the silver scale species and the pine leaf species); 17. danding (all species of the danding type, including the silver scale species); 18. red and white; 19. Taisho tricolor; 20. stamen tricolor; 21. Showa (varieties 18 to 21 form a further group of species).
As described above, the feature amounts differ for each variety of koi. For example, in the case of the Taisho tricolor, since the koi has sumi (black ink markings), information such as the uniformity and termination of the sumi and the depth of the sumi color is added as information on the color of the koi. As information on the uniformity of the sumi, for example, the variance and standard deviation of the pixels of the color corresponding to the sumi are obtained. As information on the termination of the sumi, for example, the variance of the positions of the independent black patterns corresponding to the sumi is obtained, and whether or not the patterns are scattered is evaluated. Further, as information on the depth of the sumi color, the edge amount within the sumi region is obtained.
In addition, in the case of the Showa tricolor, information on whether or not the base under the sumi is a pale yellowish base is added as information on the color of the koi. The pixels of the color corresponding to the sumi region are obtained as information on whether the base of the sumi is pale yellow.
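The pixel statistics described above can be sketched as follows. The grayscale encoding, the boolean region mask, and the gradient-magnitude edge measure are illustrative assumptions; the patent only states that variance, standard deviation, and an edge amount are computed over the region corresponding to the sumi.

```python
import numpy as np

def sumi_color_features(gray: np.ndarray, sumi_mask: np.ndarray):
    """Return (variance, standard deviation, edge amount) for the sumi
    region of a grayscale image, as uniformity and depth measures."""
    pixels = gray[sumi_mask].astype(float)
    variance = float(pixels.var())
    std_dev = float(pixels.std())
    # Crude edge amount: gradient magnitude summed over the sumi region
    gy, gx = np.gradient(gray.astype(float))
    edge_amount = float(np.hypot(gx, gy)[sumi_mask].sum())
    return variance, std_dev, edge_amount

# A perfectly uniform sumi patch should give zero variance and zero edges:
img = np.full((8, 8), 40.0)
mask = np.zeros((8, 8), dtype=bool)
mask[2:6, 2:6] = True
features = sumi_color_features(img, mask)
```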
In this way, the characteristic amount unique to each type of koi is acquired, a learning completion model is created, and evaluation corresponding to the type of koi is performed.
The following remarks are also disclosed with respect to the koi evaluation device of the present invention.
(attached note)
In the koi evaluation device according to the present invention, the feature value acquisition unit may acquire information on a body type of the koi as the feature value.
In the koi evaluation device according to the present invention, the feature value acquisition unit may acquire information on a color of the koi as the feature value.
In the koi evaluation device according to the present invention, the feature value acquisition unit may acquire information on a pattern of the koi as the feature value.
In the koi evaluation device according to the present invention, the feature value acquisition unit may acquire information on a body type, color, and pattern of the koi as the feature value.
In the koi evaluation device according to the present invention, the evaluation unit may include a learning completion model for each variety of koi.
Description of the reference numerals
1: a fancy carp evaluation device; 10: an image acquisition unit; 11: a feature amount acquisition unit; 12: an evaluation unit; 13: an output unit.

Claims (4)

1. A fancy carp evaluation device is provided with:
a feature value acquisition unit that acquires a feature value extracted from an image obtained by imaging a koi as an evaluation target;
an evaluation unit that evaluates the koi as an evaluation target by inputting the feature amount of the koi as the evaluation target acquired by the feature amount acquisition unit to a learning completion model obtained by machine learning a relationship between the feature amount of each of a plurality of koi and an evaluation result of the koi; and
an output unit that outputs the evaluation result of the koi as the evaluation target evaluated by the evaluation unit,
the evaluation unit performs the following three evaluations and obtains evaluation results: evaluating using a learning completion model obtained by machine learning a relationship between information of a body type of a koi as the feature amount and an evaluation result of the koi; evaluating using a learning completion model obtained by machine learning a relationship between information of a color of a koi as the feature amount and an evaluation result of the koi; and evaluating using a learning completion model obtained by machine learning a relationship between information of a pattern of a koi as the feature amount and an evaluation result of the koi,
the output unit outputs the evaluation results of the three evaluations for the koi as the evaluation target.
2. The Koi evaluation device according to claim 1, wherein,
the evaluation section has the learning completion model for each variety of koi.
3. A method for evaluating a fancy carp,
acquiring a feature amount extracted from an image obtained by photographing a koi as an evaluation object,
evaluating the koi as an evaluation target by inputting the acquired feature quantity of the koi as the evaluation target to a learning completion model obtained by machine learning a relationship between the feature quantity of each of a plurality of koi and an evaluation result of the koi,
outputting the evaluation result of the koi as the evaluation object,
wherein the following three evaluations are performed and evaluation results are obtained: evaluating using a learning completion model obtained by machine learning a relationship between information of a body type of a koi as the feature amount and an evaluation result of the koi; evaluating using a learning completion model obtained by machine learning a relationship between information of a color of a koi as the feature amount and an evaluation result of the koi; and evaluating using a learning completion model obtained by machine learning a relationship between information of a pattern of a koi as the feature amount and an evaluation result of the koi,
and outputting the evaluation results of the three evaluations to the koi as the evaluation object.
4. A storage medium storing a koi evaluation program for causing a computer to execute the steps of:
acquiring a feature quantity extracted from an image obtained by shooting a koi as an evaluation object;
evaluating the koi as an evaluation object by inputting the acquired feature quantity of the koi as the evaluation object to a learning completion model, wherein the learning completion model is a model obtained by machine learning the relationship between the feature quantity of each koi in a plurality of koi and the evaluation result of the koi; and
outputting the evaluation result of the koi as the evaluation object,
wherein the koi evaluation program causes the computer to further execute the steps of:
three evaluations were performed and evaluation results were obtained as follows: evaluating using a learning completion model obtained by machine learning a relationship between information of a body type of a koi as the feature amount and an evaluation result of the koi; evaluating using a learning completion model obtained by machine learning a relationship between information of a color of a koi as the feature amount and an evaluation result of the koi; evaluating using a learning completion model obtained by machine learning a relationship between information of a pattern of a koi as the feature amount and an evaluation result of the koi; and
and outputting evaluation results of the three evaluations for the koi as the evaluation object.
CN202111334580.4A 2021-10-05 2021-11-11 Koi evaluation device, method, program, and storage medium Active CN114170622B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2021163904A JP7008957B1 (en) 2021-10-05 2021-10-05 Nishiki-koi evaluation device and method and program
JP2021-163904 2021-10-05

Publications (2)

Publication Number Publication Date
CN114170622A CN114170622A (en) 2022-03-11
CN114170622B true CN114170622B (en) 2022-08-19

Family

ID=80478862

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111334580.4A Active CN114170622B (en) 2021-10-05 2021-11-11 Koi evaluation device, method, program, and storage medium

Country Status (2)

Country Link
JP (1) JP7008957B1 (en)
CN (1) CN114170622B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP7218031B1 (en) 2022-08-01 2023-02-06 三信トレーディング株式会社 Automatic biological measuring device, method and program
JP7239121B1 (en) 2022-08-01 2023-03-14 三信トレーディング株式会社 Organism growth prediction device, method and program, and 3D image generation and display system

Citations (4)

Publication number Priority date Publication date Assignee Title
WO2012121166A1 (en) * 2011-03-04 2012-09-13 日本電気株式会社 Distribution management system, distribution management method, and device, label and program used by same
CN107945175A (en) * 2017-12-12 2018-04-20 百度在线网络技术(北京)有限公司 Evaluation method, device, server and the storage medium of image
JP2018132962A (en) * 2017-02-15 2018-08-23 オムロン株式会社 Image output device and image output method
CN111712352A (en) * 2018-02-16 2020-09-25 新东工业株式会社 Evaluation system, evaluation device, evaluation method, evaluation program, and recording medium

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
CN106980873B (en) * 2017-03-09 2020-07-07 南京理工大学 Koi screening method and device based on deep learning


Also Published As

Publication number Publication date
JP2023054913A (en) 2023-04-17
JP7008957B1 (en) 2022-01-25
CN114170622A (en) 2022-03-11

Similar Documents

Publication Publication Date Title
CN114170622B (en) Koi evaluation device, method, program, and storage medium
CN109344724B (en) Automatic background replacement method, system and server for certificate photo
CN103649987B (en) Face impression analysis method, beauty information providing method and face image generation method
CN106846390B (en) Image processing method and device
CN108510500B (en) Method and system for processing hair image layer of virtual character image based on human face skin color detection
CN106127735B (en) A kind of facilities vegetable edge clear class blade face scab dividing method and device
Dalayap et al. Landmark and outline methods in describing petal, sepal and labellum shapes of the flower of Mokara orchid varieties
CN109712095A (en) A kind of method for beautifying faces that rapid edge retains
CN114170624B (en) Koi evaluation system, and device, method, program, and storage medium for implementing koi evaluation system
CN109448093A (en) A kind of style image generation method and device
CN112907438B (en) Portrait generation method and device, electronic equipment and storage medium
CN114088714B (en) Method for detecting surface regularity of grain particles
CN112749713B (en) Big data image recognition system and method based on artificial intelligence
CN109448010A (en) A kind of grain pattern automatic generation method that continues in all directions based on content characteristic
CN108734703B (en) Polished tile printing pattern detection method, system and device based on machine vision
JP5057988B2 (en) Melange yarn image simulation apparatus, method and program thereof
CN115705748A (en) Facial feature recognition system
CN113989214A (en) Footprint pressure characteristic analysis system based on region segmentation method
CN112801119A (en) Pear variety identification method based on image identification
JP5093540B2 (en) Eye position detection method and detection system
JP7218031B1 (en) Automatic biological measuring device, method and program
Kaya et al. Comparison of unsupervised segmentation of retinal blood vessels in gray level image with PCA and green channel image
JP4831361B2 (en) Eye position detection method and detection system
Guillermo et al. Determining ‘Carabao’mango ripeness stages using three image processing algorithms
Agrawal et al. Color Me Good: Branding in the Coloring Style of Movie Posters

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant