WO2021220715A1 - Determination device and determination program - Google Patents

Determination device and determination program Download PDF

Info

Publication number
WO2021220715A1
Authority
WO
WIPO (PCT)
Prior art keywords
data
time
unit
determination
dimensional array
Prior art date
Application number
PCT/JP2021/014201
Other languages
French (fr)
Japanese (ja)
Inventor
信行 古園井
洋一 木川
亮介 平本
雄梧 加世田
Original Assignee
日東電工株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 日東電工株式会社 filed Critical 日東電工株式会社
Publication of WO2021220715A1 publication Critical patent/WO2021220715A1/en

Links

Images

Classifications

    • A HUMAN NECESSITIES
    • A01 AGRICULTURE; FORESTRY; ANIMAL HUSBANDRY; HUNTING; TRAPPING; FISHING
    • A01K ANIMAL HUSBANDRY; CARE OF BIRDS, FISHES, INSECTS; FISHING; REARING OR BREEDING ANIMALS, NOT OTHERWISE PROVIDED FOR; NEW BREEDS OF ANIMALS
    • A01K29/00 Other apparatus for animal husbandry
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 Machine learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00 Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
    • G06Q50/10 Services

Definitions

  • the present invention relates to a determination device and a determination program.
  • For ruminant animals such as cows, the length of rumination time is an indicator of their health condition. Therefore, if the rumination time can be appropriately managed, a change in health condition can be detected accurately.
  • One method of determining whether an animal is in a ruminant state is, for example, to attach an acceleration sensor to the ruminant and analyze the acceleration data.
  • However, the acceleration data is merely one-dimensional time-series data indicating the magnitude of acceleration at each time, and the amount of information is small.
  • Moreover, the acceleration data contains variable elements based on various behaviors other than rumination behavior, so it is not easy to capture the characteristics of rumination behavior, which is one of the behavior classifications, from the one-dimensional time-series data and to accurately determine whether the animal is in a ruminant state.
  • One aspect aims to accurately determine a specific behavior classification of a ruminant.
  • According to one aspect, the determination device has: a generation unit that frequency-analyzes one-dimensional time-series data indicating acceleration output from an acceleration sensor attached to a ruminant and generates two-dimensional array data indicating the intensity of each frequency at each time; and a determination unit that takes the two-dimensional array data for each predetermined time range as input and determines the behavior classification of the ruminant for each predetermined time range.
  • FIG. 1 is a diagram showing an example of a system configuration of a determination system and a functional configuration of a determination device.
  • FIG. 2 is a diagram showing an example of the hardware configuration of the determination device.
  • FIG. 3 is a first diagram showing a specific example of the functional configuration and processing of the learning data generation unit.
  • FIG. 4 is a diagram showing an example of the functional configuration of the learning unit.
  • FIG. 5 is a diagram showing a specific example of the functional configuration and processing of the inference data generation unit.
  • FIG. 6 is a first diagram showing a specific example of the functional configuration and processing of the inference unit.
  • FIG. 7 is a flowchart showing the flow of the determination process.
  • FIG. 8 is a second diagram showing a specific example of the functional configuration and processing of the learning data generation unit.
  • FIG. 9 is a third diagram showing a specific example of the functional configuration and processing of the learning data generation unit.
  • FIG. 10 is a fourth diagram showing a specific example of the functional configuration and processing of the learning data generation unit.
  • FIG. 11 is a second diagram showing a specific example of the functional configuration and processing of the inference unit.
  • FIG. 1 is a diagram showing an example of a system configuration of a determination system and a functional configuration of a determination device.
  • the determination system 100 includes a measuring device 110, a gateway device 120, and a determination device 130.
  • the measuring device 110 and the gateway device 120 are connected via wireless communication, and the gateway device 120 and the determination device 130 are communicably connected via a network (not shown).
  • the measuring device 110 is an acceleration sensor in the three-axis directions (x-axis direction, y-axis direction, z-axis direction) attached to a predetermined portion (neck in the example of FIG. 1) of the cow 10.
  • The x-axis direction is, for example, a direction along the body surface of the neck of the cow 10, along the circumference of the neck.
  • The y-axis direction is, for example, a direction along the body surface of the neck of the cow 10, from the head toward the body.
  • The z-axis direction is, for example, a direction perpendicular to the body surface of the neck of the cow 10.
  • the measuring device 110 measures one-dimensional time-series data indicating acceleration in each of the three axial directions at a predetermined sampling frequency (for example, 5 [Hz] to 20 [Hz]) and transmits it to the gateway device 120.
  • the gateway device 120 transmits the one-dimensional time series data indicating the accelerations in each of the three axial directions transmitted from the measuring device 110 to the determination device 130.
  • the determination device 130 determines whether the cow 10 is in a ruminant state or a non-ruminant state based on one-dimensional time-series data indicating acceleration in each of the three axial directions.
  • A determination program is installed in the determination device 130, and when the program is executed, the determination device 130 functions as a learning data generation unit 131, a learning unit 132, an inference data generation unit 133, and an inference unit 134.
  • The learning data generation unit 131 extracts one-dimensional time-series data indicating acceleration in the z-axis direction from the one-dimensional time-series data indicating acceleration in each of the three axial directions. Further, the learning data generation unit 131 frequency-analyzes the extracted one-dimensional time-series data for each predetermined analysis time range (for example, 15 seconds), thereby generating two-dimensional array data (a spectrogram image) indicating the intensity of each frequency at each time.
  • The predetermined analysis time range is shifted at a predetermined shift interval (for example, 1-second intervals), and the result of frequency analysis for each analysis time range is generated as two-dimensional array data at the start time of that analysis time range. That is, each time referred to here means the start time of each time range (the same applies hereinafter).
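As an illustration only, this frequency-analysis step could be sketched as follows, assuming a 10 Hz sampling frequency, a 15-second analysis time range, and a 1-second shift interval, and assuming numpy and scipy are available (the function and variable names are not taken from the patent):

```python
# Sketch of the frequency-analysis step: a 15 s window shifted in 1 s steps
# over the z-axis acceleration series. Parameter values are illustrative.
import numpy as np
from scipy import signal

FS = 10            # sampling frequency [Hz]; the patent mentions 5-20 Hz
WINDOW_S = 15      # analysis time range [s]
SHIFT_S = 1        # shift interval [s]

def to_spectrogram(accel_z: np.ndarray):
    """Return (frequencies, window start times, intensity matrix).

    Each column of the intensity matrix is the spectrum of one 15 s analysis
    window; columns are 1 s apart and are indexed by the start time of their
    window, matching the description above.
    """
    nperseg = WINDOW_S * FS                 # 150 samples per window
    noverlap = nperseg - SHIFT_S * FS       # 140 samples overlap -> 1 s hop
    freqs, times, sxx = signal.spectrogram(
        accel_z, fs=FS, nperseg=nperseg, noverlap=noverlap, mode="magnitude"
    )
    start_times = times - WINDOW_S / 2.0    # spectrogram() reports window centers
    return freqs, start_times, sxx

# Example with 10 minutes of synthetic z-axis acceleration
z = np.random.randn(10 * 60 * FS)
f, t0, intensity = to_spectrogram(z)
print(intensity.shape)  # (n_freq_bins, n_windows)
```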
  • Further, the learning data generation unit 131 associates the two-dimensional array data at each time with information indicating that the cow is in the ruminant state or information indicating that it is in the non-ruminant state (correct answer data), and stores the result in the learning data storage unit 135 as learning data.
  • The learning unit 132 reads the learning data from the learning data storage unit 135 and sequentially inputs the two-dimensional array data at each time included in the read learning data to the determination model for each predetermined determination time range (for example, 15 seconds).
  • The predetermined determination time range is shifted at a predetermined shift interval (for example, 1-second intervals), and the two-dimensional array data of each determination time range is input to the determination model.
  • The learning unit 132 updates the model parameters of the determination model so that the information output from the determination model (the classification probability of being in the ruminant state or the non-ruminant state) approaches the corresponding correct answer data. In this way, the learning unit 132 performs machine learning on the determination model, which specifies the correspondence between the two-dimensional array data in the predetermined determination time range and the information indicating that the cow is in the ruminant state or information indicating that it is in the non-ruminant state.
  • the model parameters of the learned determination model generated by the learning unit 132 performing machine learning are applied to the inference unit 134.
  • the inference data generation unit 133 acquires one-dimensional time-series data indicating acceleration in each of the three axial directions transmitted from the measuring device attached to the cow to be determined via the gateway device 120. Further, the inference data generation unit 133 extracts one-dimensional time-series data indicating acceleration in the z-axis direction from one-dimensional time-series data indicating acceleration in each of the three axis directions. Further, the inference data generation unit 133 generates two-dimensional array data indicating the intensity of each frequency at each time by frequency-analyzing the extracted one-dimensional time series data for each predetermined analysis time range.
  • the inference data generation unit 133 associates the two-dimensional array data at each time with each time information and stores it in the inference data storage unit 136 as inference data.
  • the inference unit 134 is an example of the determination unit.
  • The inference unit 134 reads the inference data from the inference data storage unit 136, and sequentially inputs the two-dimensional array data at each time included in the read inference data to the trained determination model for each predetermined inference time range (a time range of the same length as the predetermined determination time range). Further, the inference unit 134 outputs the inference result output from the trained determination model (information indicating that the cow is in the ruminant state or information indicating that it is in the non-ruminant state) together with the corresponding time information.
  • In this way, the amount of information used for determination is increased by frequency-analyzing the acceleration data, which is one-dimensional time-series data, and generating the two-dimensional array data. Further, in the determination device 130 according to the present embodiment, by inputting the two-dimensional array data into the determination model and performing machine learning, a determination model capable of capturing the characteristics of the ruminant behavior of the cow, such as periodicity and continuity, is generated.
  • As a result, according to the determination device 130, it is possible to accurately determine the behavior classification of the cow (whether the cow is in the ruminant state or the non-ruminant state).
  • FIG. 2 is a diagram showing an example of the hardware configuration of the determination device.
  • the determination device 130 includes a processor 201, a memory 202, an auxiliary storage device 203, an I / F (Interface) device 204, a communication device 205, and a drive device 206.
  • the hardware of the determination device 130 is connected to each other via the bus 207.
  • the processor 201 has various arithmetic devices such as a CPU (Central Processing Unit) and a GPU (Graphics Processing Unit).
  • the processor 201 reads various programs (for example, a determination program, etc.) onto the memory 202 and executes them.
  • the memory 202 has a main storage device such as a ROM (Read Only Memory) and a RAM (Random Access Memory).
  • The processor 201 and the memory 202 form a so-called computer, and the processor 201 executes the various programs read onto the memory 202, whereby the computer realizes the above functions (the learning data generation unit 131 to the inference unit 134).
  • the auxiliary storage device 203 stores various programs and various data used when various programs are executed by the processor 201.
  • the learning data storage unit 135 and the inference data storage unit 136 are realized in the auxiliary storage device 203.
  • the I / F device 204 is a connection device that connects the operation device 210 and the display device 211, which are examples of external devices, and the determination device 130.
  • the I / F device 204 receives an operation on the determination device 130 via the operation device 210. Further, the I / F device 204 outputs the result of processing by the determination device 130 and displays it on the display device 211.
  • the communication device 205 is a communication device for communicating with another device. In the case of the determination device 130, it communicates with the gateway device 120, which is another device, via the communication device 205.
  • the drive device 206 is a device for setting the recording medium 212.
  • the recording medium 212 referred to here includes a medium such as a CD-ROM, a flexible disk, a magneto-optical disk, or the like that optically, electrically, or magnetically records information. Further, the recording medium 212 may include a semiconductor memory or the like for electrically recording information such as a ROM or a flash memory.
  • The various programs installed in the auxiliary storage device 203 are installed, for example, by setting the distributed recording medium 212 in the drive device 206 and having the drive device 206 read the various programs recorded on the recording medium 212.
  • the various programs installed in the auxiliary storage device 203 may be installed by being downloaded from the network via the communication device 205.
  • FIG. 3 is a first diagram showing a specific example of the functional configuration and processing of the learning data generation unit.
  • the learning data generation unit 131 includes an acceleration data acquisition unit 301, a z-axis direction extraction unit 302, a frequency analysis unit 303, and a data generation unit 304.
  • the acceleration data acquisition unit 301 acquires one-dimensional time-series data indicating acceleration in each of the three axial directions transmitted from the measuring device 110 via the gateway device 120.
  • Graph 311 is an example of the one-dimensional time-series data indicating acceleration in each of the three axial directions acquired by the acceleration data acquisition unit 301, with the horizontal axis indicating time and the vertical axis indicating acceleration.
  • It is assumed that the one-dimensional time-series data indicating the acceleration in each of the three axial directions acquired by the acceleration data acquisition unit 301 of the learning data generation unit 131 is associated in advance with information indicating that the cow is in the ruminant state (or the non-ruminant state).
  • the z-axis direction extraction unit 302 extracts one-dimensional time-series data indicating acceleration in the z-axis direction from the one-dimensional time-series data indicating acceleration in each of the three axes directions acquired by the acceleration data acquisition unit 301.
  • Graph 312 is an example of the one-dimensional time-series data indicating acceleration in the z-axis direction extracted by the z-axis direction extraction unit 302 from the one-dimensional time-series data indicating acceleration in each of the three axial directions.
  • The frequency analysis unit 303 frequency-analyzes the one-dimensional time-series data indicating acceleration in the z-axis direction extracted by the z-axis direction extraction unit 302 for each predetermined analysis time range, and generates two-dimensional array data indicating the intensity of each frequency at each time.
  • Graph 313 is an example of the two-dimensional array data generated by the frequency analysis unit 303, with the horizontal axis representing time and the vertical axis representing frequency. The difference in color in graph 313 represents the difference in the intensity of each frequency at each time; for example, in graph 313, red has the highest intensity, and the intensity decreases in the order of orange, yellow, green, blue, and purple.
  • In the present embodiment, the difference in intensity of each frequency at each time is treated as a grayscale value (for example, 0 to 255); that is, it is treated not as three values of R, G, and B (each 0 to 255) but as a single shade value (0 to 255).
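A minimal sketch of this grayscale treatment is shown below; the min-max normalization is an assumption, since the patent does not specify how intensities are mapped to the 0-255 range:

```python
import numpy as np

def to_grayscale(intensity: np.ndarray) -> np.ndarray:
    """Map a spectrogram intensity matrix to one 0-255 shade value per cell
    (a single channel rather than separate R, G, B values)."""
    lo, hi = intensity.min(), intensity.max()
    scaled = (intensity - lo) / (hi - lo + 1e-12)   # avoid division by zero
    return (scaled * 255).astype(np.uint8)
```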
  • The rectangle shown by the broken line in graph 312 indicates one predetermined analysis time range; by frequency-analyzing the one-dimensional time-series data included in that analysis time range, the two-dimensional array data at the corresponding time (the rectangle indicated by the broken line in graph 313) is generated.
  • the data generation unit 304 extracts the two-dimensional array data generated by the frequency analysis unit 303 at each time for each predetermined determination time range.
  • graph 314 shows how the data generation unit 304 extracts the two-dimensional array data for each predetermined determination time range (see reference numeral 315).
  • the data generation unit 304 sequentially extracts two-dimensional array data while shifting a predetermined determination time range (for example, 15 seconds) at a predetermined shift interval (for example, 1 second interval).
  • Further, the data generation unit 304 associates the extracted two-dimensional array data of each determination time range with information indicating that the cow is in the ruminant state or information indicating that it is in the non-ruminant state, and stores the result in the learning data storage unit 135 as learning data.
  • the learning data 320 is an example of the learning data stored by the data generation unit 304, and includes "image data" and "correct answer data” as information items.
  • the “image data” stores the two-dimensional array data of each determination time range extracted by the data generation unit 304.
  • the "correct answer data” either information indicating that the ruminant state is present or information indicating that the ruminant state is present is stored as the correct answer data in the corresponding determination time range.
  • FIG. 4 is a diagram showing an example of the functional configuration of the learning unit 132.
  • the learning unit 132 has a CNN unit 401 and a comparison / change unit 402.
  • The learning unit 132 reads the learning data 320 from the learning data storage unit 135 and inputs each piece of two-dimensional array data included in the "image data" of the read learning data 320 to the CNN unit 401. Further, the learning unit 132 inputs the information included in the "correct answer data" of the read learning data 320, which indicates that the cow is in the ruminant state or in the non-ruminant state, to the comparison / change unit 402.
  • The CNN unit 401 is, for example, a determination model formed by a convolutional neural network; when two-dimensional array data is input, it outputs information indicating that the cow is in the ruminant state or information indicating that it is in the non-ruminant state.
  • The model parameters of the CNN unit 401 are updated by the comparison / change unit 402 in accordance with the output of the information indicating the ruminant state or the non-ruminant state.
  • When the CNN unit 401 outputs the information indicating that the cow is in the ruminant state or the non-ruminant state, the comparison / change unit 402 compares it with the correct answer data input by the learning unit 132 and calculates the error (the error in classification probability). Further, the comparison / change unit 402 back-propagates the calculated error and updates the model parameters of the CNN unit 401.
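The CNN unit 401 and the comparison / change unit 402 could be sketched as follows, assuming PyTorch; the layer sizes, the cross-entropy loss, and the Adam optimizer are illustrative choices and are not specified in the patent:

```python
import torch
import torch.nn as nn

class JudgmentCNN(nn.Module):
    """Toy convolutional determination model: spectrogram image in,
    two classification scores (non-ruminant / ruminant) out."""
    def __init__(self, in_channels: int = 1):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, 2)  # index 0: non-ruminant, 1: ruminant

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

def train_step(model, optimizer, images, answers):
    """Comparison/change step: compare outputs with the correct answer data,
    back-propagate the error, and update the model parameters."""
    optimizer.zero_grad()
    logits = model(images)                                 # CNN unit output
    loss = nn.functional.cross_entropy(logits, answers)    # classification error
    loss.backward()                                        # back-propagate the error
    optimizer.step()                                       # update model parameters
    return loss.item()

# Usage with dummy data: batch of 8 grayscale spectrogram images (1 channel)
model = JudgmentCNN(in_channels=1)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(8, 1, 64, 15)                  # (batch, channel, freq bins, 15 s)
y = torch.randint(0, 2, (8,))                  # 1 = ruminant, 0 = non-ruminant
print(train_step(model, opt, x, y))
```

In this sketch, train_step plays the role of the comparison / change unit: it compares the CNN output with the correct answer data, back-propagates the error, and updates the model parameters.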
  • FIG. 5 is a diagram showing a specific example of the functional configuration and processing of the inference data generation unit.
  • the inference data generation unit 133 includes an acceleration data acquisition unit 501, a z-axis direction extraction unit 502, a frequency analysis unit 503, and a data generation unit 504.
  • The acceleration data acquisition unit 501 to the data generation unit 504 of the inference data generation unit 133 have the same functions as the acceleration data acquisition unit 301 to the data generation unit 304 of the learning data generation unit 131, so the description is omitted here.
  • However, the one-dimensional time-series data indicating the acceleration in each of the three axial directions acquired by the acceleration data acquisition unit 501 is time-series data transmitted from the measuring device attached to the cow to be determined, and no information indicating that the cow is in the ruminant state (or the non-ruminant state) is associated with it.
  • In addition, the two-dimensional array data extracted while shifting the predetermined inference time range (see reference numeral 515) at a predetermined shift interval is stored in the inference data storage unit 136 in association with time information indicating the start time of each inference time range.
  • the predetermined inference time range is equal to the predetermined determination time range, for example, 15 seconds.
  • the shift interval when the data generation unit 504 extracts the two-dimensional array data is equal to the shift interval when the data generation unit 304 extracts the two-dimensional array data, for example, 1 second.
  • FIG. 6 is a diagram showing a specific example of the functional configuration and processing of the inference unit.
  • the inference unit 134 has a CNN unit 601 and a determination result output unit 602.
  • the inference unit 134 reads out the inference data 520 from the inference data storage unit 136, and inputs the two-dimensional array data included in the "image data" of the read inference data 520 into the CNN unit 601.
  • the model parameters of the learned determination model generated by performing machine learning by the learning unit 132 are applied to the CNN unit 601.
  • the CNN unit 601 outputs an inference result (information indicating that it is in a ruminant state or information indicating that it is in a non-ruminant state) by inputting two-dimensional array data.
  • the determination result output unit 602 arranges and outputs the inference results output by the CNN unit 601 along the time axis.
  • the determination result 610 shows how the inference results output from the CNN unit 601 are arranged along the time axis based on the time information.
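A minimal sketch of this inference flow, reusing the hypothetical model sketched earlier; the (start time, label) output format is an assumption:

```python
import torch

@torch.no_grad()
def infer_timeline(model, inference_data):
    """inference_data: iterable of (start_time, image tensor of shape (1, H, W)).
    Returns (start_time, label) pairs arranged along the time axis, where the
    label is 'ruminant' or 'non-ruminant'."""
    model.eval()
    results = []
    for start_time, image in inference_data:
        logits = model(image.unsqueeze(0))      # add a batch dimension
        label = "ruminant" if logits.argmax(dim=1).item() == 1 else "non-ruminant"
        results.append((start_time, label))
    return sorted(results)                       # arrange along the time axis
```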
  • FIG. 7 is a flowchart showing the flow of the determination process.
  • The determination device 130 first executes the learning phase. Specifically, in step S701, the acceleration data acquisition unit 301 of the learning data generation unit 131 acquires one-dimensional time-series data indicating the acceleration in each of the three axial directions, with which information indicating that the cow is in the ruminant state or information indicating that it is in the non-ruminant state is associated.
  • Next, the z-axis direction extraction unit 302 of the learning data generation unit 131 extracts the one-dimensional time-series data indicating the acceleration in the z-axis direction from the acquired one-dimensional time-series data indicating the acceleration in each of the three axial directions. Further, the frequency analysis unit 303 of the learning data generation unit 131 performs frequency analysis on the one-dimensional time-series data indicating the acceleration in the z-axis direction for each predetermined analysis time range while shifting the range at a predetermined shift interval. As a result, the frequency analysis unit 303 of the learning data generation unit 131 generates two-dimensional array data indicating the intensity of each frequency at each time.
  • Further, the data generation unit 304 of the learning data generation unit 131 extracts the two-dimensional array data while shifting the predetermined determination time range at a predetermined shift interval. The data generation unit 304 then associates the two-dimensional array data extracted for each predetermined determination time range with information indicating that the cow is in the ruminant state or information indicating that it is in the non-ruminant state, and stores the result in the learning data storage unit 135 as learning data.
  • In step S703, the learning unit 132 inputs the image data (two-dimensional array data) included in the learning data to the CNN unit 401 and the correct answer data (information indicating that the cow is in the ruminant state or the non-ruminant state) to the comparison / change unit 402, and performs machine learning on the determination model.
  • the determination device 130 shifts to the inference phase.
  • First, the acceleration data acquisition unit 501 of the inference data generation unit 133 acquires one-dimensional time-series data indicating the acceleration in each of the three axial directions measured by the measuring device attached to the cow to be determined.
  • In step S705, the z-axis direction extraction unit 502 of the inference data generation unit 133 extracts the one-dimensional time-series data indicating the acceleration in the z-axis direction from the acquired one-dimensional time-series data indicating the acceleration in each of the three axial directions. Further, the frequency analysis unit 503 of the inference data generation unit 133 performs frequency analysis on the one-dimensional time-series data indicating the acceleration in the z-axis direction for each predetermined analysis time range while shifting the range at a predetermined shift interval. As a result, the frequency analysis unit 503 of the inference data generation unit 133 generates two-dimensional array data indicating the intensity of each frequency at each time.
  • Further, the data generation unit 504 of the inference data generation unit 133 extracts the two-dimensional array data while shifting the predetermined inference time range at a predetermined shift interval. The data generation unit 504 then stores the two-dimensional array data extracted for each predetermined inference time range in the inference data storage unit 136 as inference data, in association with the time information.
  • In step S706, the inference unit 134 inputs the image data (two-dimensional array data) included in the inference data to the CNN unit 601, which is the trained determination model. As a result, the CNN unit 601 outputs an inference result (information indicating that the cow is in the ruminant state or information indicating that it is in the non-ruminant state).
  • In step S707, the determination result output unit 602 of the inference unit 134 outputs the inference result output from the CNN unit 601 in association with the time information.
  • As described above, the determination device 130:
    - acquires one-dimensional time-series data indicating acceleration in each of the three axial directions, output from the measuring device attached to the neck of the cow;
    - extracts the one-dimensional time-series data indicating the acceleration in the z-axis direction from the one-dimensional time-series data indicating the acceleration in each of the three axial directions, performs frequency analysis, and generates two-dimensional array data indicating the intensity of each frequency at each time;
    - takes the two-dimensional array data for each predetermined determination time range as input and outputs information indicating the behavior classification for each predetermined determination time range (information indicating that the cow is in the ruminant state or information indicating that it is in the non-ruminant state).
  • In this way, the amount of information used for determination is increased by frequency-analyzing the acceleration data, which is one-dimensional time-series data, and generating the two-dimensional array data. Further, in the determination device 130 according to the present embodiment, by inputting the two-dimensional array data into the determination model and performing machine learning, a determination model capable of capturing the characteristics of the ruminant behavior of the cow, such as periodicity and continuity, is generated.
  • Specifically, the characteristic of the ruminant behavior of the cow, such as the continuation of a high-intensity state near a predetermined frequency (periodicity and continuity), can be captured accurately regardless of individual differences in frequency characteristics.
  • As a result, according to the determination device 130, it is possible to accurately determine the behavior classification of the cow (whether the cow is in the ruminant state or the non-ruminant state).
  • FIG. 8 is a second diagram showing a specific example of the functional configuration and processing of the learning data generation unit.
  • The differences from FIG. 3 are that the learning data generation unit 800 includes an x-axis direction extraction unit 801, a y-axis direction extraction unit 811, and frequency analysis units 802 and 812, and that the function of its data generation unit 820 differs from the function of the data generation unit 304 of FIG. 3. A further difference from FIG. 3 is that the configuration of the learning data 830 stored in the learning data storage unit 135 differs from the configuration of the learning data 320 of FIG. 3.
  • In FIG. 8, the graphs corresponding to graphs 311 to 314 are omitted due to space limitations, but the graphs of the data output from each part of the learning data generation unit 800 are the same as graphs 311 to 314.
  • the x-axis direction extraction unit 801 extracts time-series data indicating acceleration in the x-axis direction from the one-dimensional time-series data indicating acceleration in each of the three axes directions acquired by the acceleration data acquisition unit 301.
  • the y-axis direction extraction unit 811 extracts time-series data indicating acceleration in the y-axis direction from the one-dimensional time-series data indicating acceleration in each of the three axes directions acquired by the acceleration data acquisition unit 301.
  • The frequency analysis unit 802 frequency-analyzes the time-series data indicating the acceleration in the x-axis direction extracted by the x-axis direction extraction unit 801 for each predetermined analysis time range, and generates two-dimensional array data indicating the intensity of each frequency at each time.
  • The frequency analysis unit 812 frequency-analyzes the time-series data indicating the acceleration in the y-axis direction extracted by the y-axis direction extraction unit 811 for each predetermined analysis time range, and generates two-dimensional array data indicating the intensity of each frequency at each time.
  • The data generation unit 820 associates the two-dimensional array data for each predetermined determination time range output from the frequency analysis units 802, 812, and 303 with information indicating that the cow is in the ruminant state or information indicating that it is in the non-ruminant state, and stores the result in the learning data storage unit 135.
  • The learning data 830 is an example of the learning data stored by the data generation unit 820, and includes "image data (x)", "image data (y)", "image data (z)", and "correct answer data" as information items.
  • the "image data (x)" is two-dimensional array data of each determination time range extracted by the data generation unit 304, and is two-dimensional array data based on one-dimensional time-series data indicating acceleration in the x-axis direction. Is stored.
  • the "image data (y)" is two-dimensional array data of each determination time range extracted by the data generation unit 304, and is two-dimensional array data based on one-dimensional time-series data indicating acceleration in the y-axis direction. Is stored.
  • the "image data (z)" is two-dimensional array data of each determination time range extracted by the data generation unit 304, and is two-dimensional array data based on one-dimensional time-series data indicating acceleration in the z-axis direction. Is stored.
  • the functional configuration of the learning unit 132 in the second embodiment is the same as the functional configuration of the learning unit 132 in the first embodiment shown in FIG.
  • When inputting the learning data 830 read from the learning data storage unit 135, the learning unit 132 in the second embodiment inputs to the CNN unit 401:
    - the two-dimensional array data included in "image data (x)" as channel 1,
    - the two-dimensional array data included in "image data (y)" as channel 2,
    - the two-dimensional array data included in "image data (z)" as channel 3,
  • and performs machine learning on the determination model (where the two-dimensional array data of each channel is two-dimensional array data associated with the same time information).
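The 3-channel input described above could be assembled as in the following sketch (illustrative names, numpy assumed); with the earlier PyTorch sketch this corresponds to constructing the model with in_channels=3:

```python
import numpy as np

def stack_three_axes(img_x: np.ndarray, img_y: np.ndarray, img_z: np.ndarray):
    """Stack the x-, y-, and z-axis 2D array data of the same determination
    time range as channels 1, 2, and 3 of a single CNN input."""
    assert img_x.shape == img_y.shape == img_z.shape
    return np.stack([img_x, img_y, img_z], axis=0)   # shape: (3, n_freq, n_time)
```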
  • the inference data generation unit 133 in the second embodiment has the same configuration as the learning data generation unit 800 in the second embodiment shown in FIG.
  • The inference data generated by the inference data generation unit 133 includes "image data (x)", "image data (y)", "image data (z)", and "time information" as information items.
  • The inference unit 134 in the second embodiment has the same functional configuration as the inference unit 134 in the first embodiment shown in FIG. 6. However, when inputting the inference data read from the inference data storage unit 136, the inference unit 134 in the second embodiment inputs to the CNN unit 601:
    - the two-dimensional array data included in "image data (x)" as channel 1,
    - the two-dimensional array data included in "image data (y)" as channel 2,
    - the two-dimensional array data included in "image data (z)" as channel 3,
    and performs inference processing and outputs the inference result (where the two-dimensional array data of each channel is two-dimensional array data associated with the same time information).
  • As described above, the determination device 130:
    - acquires one-dimensional time-series data indicating acceleration in each of the three axial directions, output from the measuring device attached to the neck of the cow;
    - extracts the one-dimensional time-series data indicating the acceleration in the x-axis direction, the y-axis direction, and the z-axis direction from the one-dimensional time-series data indicating the acceleration in each of the three axial directions, performs frequency analysis, and generates two-dimensional array data indicating the intensity of each frequency at each time.
  • In this way, the determination device 130 frequency-analyzes the one-dimensional time-series data indicating the acceleration in each of the three axial directions, generates the two-dimensional array data for each axis, and then performs machine learning.
  • As a result, according to the determination device 130, it is possible to accurately determine the behavior classification of the cow (whether the cow is in the ruminant state or the non-ruminant state).
  • In the second embodiment, the case has been described in which time-series data in the x-axis direction, the y-axis direction, and the z-axis direction are extracted from the one-dimensional time-series data indicating the acceleration in each of the three axial directions and the respective two-dimensional array data are generated. The two-dimensional array data in that case was described as being handled in grayscale.
  • In the third embodiment, a case will be described in which each set of two-dimensional array data is handled in color, and the two-dimensional array data is divided and processed for each color component (for each R value, G value, and B value).
  • the third embodiment will be described focusing on the differences from the second embodiment.
  • FIG. 9 is a third diagram showing a specific example of the functional configuration and processing of the learning data generation unit.
  • the learning data generation unit 900 has R value extraction units 901, 911, 921, G value extraction units 902, 912, 922, and B value extraction units 903, 913, 923.
  • In addition, the function of the data generation unit 930 differs from the function of the data generation unit 820 of FIG. 8.
  • the difference from FIG. 8 is that the configuration of the learning data 940 stored in the learning data storage unit 135 is different from the configuration of the learning data 830 of FIG.
  • the R value extraction units 901, 911, and 921 extract the two-dimensional array data of the R value from the two-dimensional array data output from the frequency analysis units 802, 812, and 303, respectively.
  • the G value extraction units 902, 912, and 922 extract the two-dimensional array data of the G value from the two-dimensional array data output from the frequency analysis units 802, 812, and 303, respectively.
  • the B value extraction units 903, 913, and 923 extract the B value two-dimensional array data from the two-dimensional array data output from the frequency analysis units 802, 812, and 303, respectively.
  • The data generation unit 930 acquires, for each predetermined analysis time range, the two-dimensional array data output from the R value extraction units 901, 911, and 921, the G value extraction units 902, 912, and 922, and the B value extraction units 903, 913, and 923.
  • The data generation unit 930 extracts the two-dimensional array data while shifting the predetermined determination time range at a predetermined shift interval, associates it with information indicating that the cow is in the ruminant state or information indicating that it is in the non-ruminant state, and stores the result as learning data 940.
  • The learning data 940 is an example of the learning data stored by the data generation unit 930.
  • The learning data 940 includes "image data (x, R)" to "image data (x, B)", "image data (y, R)" to "image data (y, B)", "image data (z, R)" to "image data (z, B)", and "correct answer data" as information items.
  • In "image data (x, R)" to "image data (x, B)", the two-dimensional array data of the R value, the G value, and the B value, which are two-dimensional array data of each determination time range based on the one-dimensional time-series data indicating acceleration in the x-axis direction, are stored.
  • In "image data (y, R)" to "image data (y, B)", the two-dimensional array data of the R value, the G value, and the B value, which are two-dimensional array data of each determination time range based on the one-dimensional time-series data indicating acceleration in the y-axis direction, are stored.
  • In "image data (z, R)" to "image data (z, B)", the two-dimensional array data of the R value, the G value, and the B value, which are two-dimensional array data of each determination time range based on the one-dimensional time-series data indicating acceleration in the z-axis direction, are stored.
  • the functional configuration of the learning unit 132 in the third embodiment is the same as the functional configuration of the learning unit 132 in the first embodiment shown in FIG.
  • When inputting the learning data 940 read from the learning data storage unit 135, the learning unit 132 in the third embodiment inputs to the CNN unit 401:
    - the two-dimensional array data included in "image data (x, R)" to "image data (x, B)" as channels 1 to 3,
    - the two-dimensional array data included in "image data (y, R)" to "image data (y, B)" as channels 4 to 6,
    - the two-dimensional array data included in "image data (z, R)" to "image data (z, B)" as channels 7 to 9,
    and performs machine learning on the determination model.
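The 9-channel input of the third embodiment could be assembled as in the following sketch, assuming each colored spectrogram image is stored as an (n_freq, n_time, 3) array of R, G, B values (an assumption, not stated in the patent):

```python
import numpy as np

def stack_nine_channels(color_x: np.ndarray, color_y: np.ndarray, color_z: np.ndarray):
    """Each input is colored 2D array data of shape (n_freq, n_time, 3) holding
    R, G, B values. Returns a (9, n_freq, n_time) array ordered as
    (x,R)(x,G)(x,B)(y,R)(y,G)(y,B)(z,R)(z,G)(z,B), i.e. channels 1 to 9."""
    planes = []
    for color_img in (color_x, color_y, color_z):
        for c in range(3):                      # 0: R, 1: G, 2: B
            planes.append(color_img[:, :, c])
    return np.stack(planes, axis=0)
```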
  • the inference data generation unit 133 in the third embodiment has the same configuration as the learning data generation unit 900 in the third embodiment shown in FIG.
  • The inference data generated by the inference data generation unit 133 includes "image data (x, R)" to "image data (x, B)", "image data (y, R)" to "image data (y, B)", "image data (z, R)" to "image data (z, B)", and "time information" as information items.
  • The inference unit 134 in the third embodiment has the same functional configuration as the inference unit 134 in the first embodiment shown in FIG. 6. However, when inputting the inference data read from the inference data storage unit 136, the inference unit 134 in the third embodiment inputs to the CNN unit 601:
    - the two-dimensional array data included in "image data (x, R)" to "image data (x, B)" as channels 1 to 3,
    - the two-dimensional array data included in "image data (y, R)" to "image data (y, B)" as channels 4 to 6,
    - the two-dimensional array data included in "image data (z, R)" to "image data (z, B)" as channels 7 to 9,
    and performs inference processing and outputs the inference result.
  • As described above, the determination device 130:
    - acquires one-dimensional time-series data indicating acceleration in each of the three axial directions, output from the measuring device attached to the neck of the cow;
    - extracts the one-dimensional time-series data indicating the acceleration in the x-axis direction, the y-axis direction, and the z-axis direction from the one-dimensional time-series data indicating the acceleration in each of the three axial directions, performs frequency analysis, and generates two-dimensional array data indicating the intensity of each frequency at each time;
    - divides the generated two-dimensional array data for each color component.
  • In this way, the determination device 130 frequency-analyzes the one-dimensional time-series data indicating the acceleration in each of the three axial directions, generates the two-dimensional array data for each axis, divides it for each color component, and then performs machine learning.
  • As a result, according to the determination device 130, it is possible to accurately determine the behavior classification of the cow (whether the cow is in the ruminant state or the non-ruminant state).
  • In the above embodiments, information indicating that the cow is in the ruminant state or information indicating that it is in the non-ruminant state is associated with the time-series data, but the information associated with the time-series data is not limited to this.
  • Instead of the information indicating the ruminant state or the information indicating the non-ruminant state, information indicating a behavior classification other than these may be associated.
  • The information indicating a behavior classification other than the ruminant state or the non-ruminant state mentioned here includes, for example, information indicating a feeding state and information indicating a walking state.
  • FIG. 10 is a fourth diagram showing a specific example of the functional configuration and processing of the learning data generation unit. As shown in graphs 1011 to 1013 of FIG. 10, when information indicating behavior classification 1 or information indicating behavior classification 2 is associated, information indicating one of these behavior classifications is stored in the "correct answer data" of the learning data 1020.
  • As a result, the behavior classification of the cow to be determined can be accurately determined based on the one-dimensional time-series data indicating the acceleration in each of the three axial directions.
  • the determination result output unit 602 has been described as outputting the inference result output from the CNN unit 601 in association with the time information. Further, in the first to fourth embodiments, the determination result output unit 602 has been described as outputting the inference result for each shift interval.
  • the interval at which the cow transitions from the ruminant state to the non-ruminant state, or the interval at which the cow transitions from the non-ruminant state to the ruminant state, is sufficiently longer than the shift interval.
  • Therefore, the inference results for a plurality of inference time ranges including a predetermined time are referred to, and the final inference result at that time is determined based on the plurality of referenced inference results.
  • FIG. 11 is a second diagram showing a specific example of the functional configuration and processing of the inference unit.
  • the difference from the inference unit 134 in the first embodiment shown in FIG. 6 is that in the case of FIG. 11, it has a statistical processing unit 1100.
  • the statistical processing unit 1100 is an example of a determination unit.
  • the statistical processing unit 1100 acquires the inference result for each predetermined inference time range output from the CNN unit 601.
  • the inference result 1110 shows the inference results for a plurality of inference time ranges 1121 to 1125 including a predetermined time 1111.
  • the statistical processing unit 1100 determines the inference result of the predetermined time 1111 included in the inference result 1110 based on the inference results for the plurality of inference time ranges 1121 to 1125.
  • Specifically, the statistical processing unit 1100 determines the most frequent inference result as the inference result at the predetermined time 1111 (that is, the final inference result is determined by a majority vote).
  • For example, when the information indicating the ruminant state is the most frequent among the inference results for the inference time ranges 1121 to 1125, the statistical processing unit 1100 determines the inference result at the predetermined time 1111 to be the information indicating the ruminant state.
  • In this way, the final inference result at each time is determined by performing statistical processing on the plurality of inference results.
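A sketch of this majority-vote statistical processing, assuming 15-second inference time ranges whose start times are 1 second apart; the bookkeeping details are assumptions:

```python
from collections import Counter

def smooth_by_majority(results, window_s=15):
    """results: list of (start_time, label) pairs, one per inference time range
    (start times 1 s apart). For each time, take a majority vote over the
    inference results of every range that includes that time."""
    smoothed = []
    for t, _ in results:
        covering = [label for start, label in results
                    if start <= t < start + window_s]   # ranges containing t
        smoothed.append((t, Counter(covering).most_common(1)[0][0]))
    return smoothed
```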
  • As a result, according to the determination device 130, it is possible to accurately determine the behavior classification of the cow (whether the cow is in the ruminant state or the non-ruminant state).
  • In the example described above, the inference results for a plurality of inference time ranges including the predetermined time 1111 are targeted, but the target used for the determination is not limited to this. For example, the final inference result at the predetermined time 1111 may be determined based on the inference results for a predetermined number of inference time ranges before or after the predetermined time 1111, regardless of whether or not they include the predetermined time 1111.
  • By determining the final inference result in this way, the influence of noise contained in the two-dimensional array data, which arises when the inference time range is shortened, can be eliminated.
  • That is, when the inference time range is shortened, the time resolution is improved as compared with the case where the inference time range is lengthened, but the result is easily affected by noise. By performing statistical processing on a plurality of inference results, the influence of such noise can be eliminated.
  • In addition, one-dimensional time-series data indicating the acceleration in another axial direction may be extracted.
  • In the third embodiment, the case has been described in which the one-dimensional time-series data indicating the acceleration is divided into the three axes for frequency analysis, two-dimensional array data is generated for each axis, and each set of two-dimensional array data is further divided for each color component.
  • However, the two-dimensional array data generated for any one of the one-dimensional time-series data indicating the acceleration in the x-axis direction, the y-axis direction, or the z-axis direction extracted from the one-dimensional time-series data indicating the acceleration may be divided for each color component.
  • The method of generating the two-dimensional array data is not limited to this; for example, the values obtained by the frequency analysis may be used without being imaged (without being converted to grayscale or to R, G, and B values), and two-dimensional array data indicating the intensity of each frequency at each time may be generated from them directly.
  • the acceleration sensor is attached to the neck portion, but the attachment portion of the acceleration sensor is not limited to the neck portion and may be attached to another portion.
  • 100: Determination system, 110: Measuring device, 130: Determination device, 131: Learning data generation unit, 132: Learning unit, 133: Inference data generation unit, 134: Inference unit, 301: Acceleration data acquisition unit, 302: z-axis direction extraction unit, 303: Frequency analysis unit, 304: Data generation unit, 320: Learning data, 401: CNN unit, 402: Comparison/change unit, 501: Acceleration data acquisition unit, 502: z-axis direction extraction unit, 503: Frequency analysis unit, 504: Data generation unit, 520: Inference data, 601: CNN unit, 602: Determination result output unit, 830: Learning data, 940: Learning data, 1020: Learning data, 1100: Statistical processing unit

Abstract

The objective of the present invention is to determine accurately the specific behavior class of a ruminant. This determination device comprises: a generation unit performing frequency analysis on time-series one-dimensional data delivered from an acceleration sensor attached to a ruminant and representing acceleration, and generating two-dimensional array data representing the intensity of each frequency at each time point; and a determination unit taking as input the two-dimensional array data for each of preset time ranges and determining the behavior class of the ruminant for each of the preset time ranges.

Description

Determination device and determination program
The present invention relates to a determination device and a determination program.
In general, for ruminant animals such as cows, the length of rumination time is an indicator of their health condition. Therefore, if the rumination time can be appropriately managed, a change in health condition can be detected accurately.
Meanwhile, one method of determining whether an animal is in a ruminant state is, for example, to attach an acceleration sensor to the ruminant and analyze the acceleration data.
Japanese Unexamined Patent Publication No. 2018-170969; Japanese Unexamined Patent Publication No. 2017-158509; Japanese Unexamined Patent Publication No. 2017-051146
However, the acceleration data is merely one-dimensional time-series data indicating the magnitude of acceleration at each time, and the amount of information is small. Moreover, the acceleration data contains variable elements based on various behaviors other than rumination behavior, so it is not easy to capture the characteristics of rumination behavior, which is one of the behavior classifications, from the one-dimensional time-series data and to accurately determine whether the animal is in a ruminant state.
One aspect aims to accurately determine a specific behavior classification of a ruminant.
According to one aspect, the determination device has:
a generation unit that frequency-analyzes one-dimensional time-series data indicating acceleration output from an acceleration sensor attached to a ruminant and generates two-dimensional array data indicating the intensity of each frequency at each time; and
a determination unit that takes the two-dimensional array data for each predetermined time range as input and determines the behavior classification of the ruminant for each predetermined time range.
It is thereby possible to accurately determine a specific behavior classification of a ruminant.
FIG. 1 is a diagram showing an example of the system configuration of a determination system and the functional configuration of a determination device. FIG. 2 is a diagram showing an example of the hardware configuration of the determination device. FIG. 3 is a first diagram showing a specific example of the functional configuration and processing of the learning data generation unit. FIG. 4 is a diagram showing an example of the functional configuration of the learning unit. FIG. 5 is a diagram showing a specific example of the functional configuration and processing of the inference data generation unit. FIG. 6 is a first diagram showing a specific example of the functional configuration and processing of the inference unit. FIG. 7 is a flowchart showing the flow of the determination process. FIG. 8 is a second diagram showing a specific example of the functional configuration and processing of the learning data generation unit. FIG. 9 is a third diagram showing a specific example of the functional configuration and processing of the learning data generation unit. FIG. 10 is a fourth diagram showing a specific example of the functional configuration and processing of the learning data generation unit. FIG. 11 is a second diagram showing a specific example of the functional configuration and processing of the inference unit.
Hereinafter, each embodiment will be described with reference to the attached drawings. In the present specification and the drawings, components having substantially the same functional configuration are designated by the same reference numerals, and duplicate description thereof is omitted.
[First Embodiment]
<System configuration of the determination system and functional configuration of the determination device>
First, the system configuration of a determination system that determines the behavior classification of a cow (whether the cow is in a ruminant state or a non-ruminant state), the cow being an example of a ruminant, and the functional configuration of a determination device will be described. FIG. 1 is a diagram showing an example of the system configuration of the determination system and the functional configuration of the determination device.
 図1に示すように、判定システム100は、計測装置110と、ゲートウェイ装置120と、判定装置130とを有する。判定システム100において、計測装置110とゲートウェイ装置120とは、無線通信を介して接続され、ゲートウェイ装置120と判定装置130とは、不図示のネットワークを介して通信可能に接続される。 As shown in FIG. 1, the determination system 100 includes a measuring device 110, a gateway device 120, and a determination device 130. In the determination system 100, the measuring device 110 and the gateway device 120 are connected via wireless communication, and the gateway device 120 and the determination device 130 are communicably connected via a network (not shown).
The measuring device 110 is a three-axis (x-axis, y-axis, and z-axis) acceleration sensor attached to a predetermined part of the cow 10 (the neck in the example of FIG. 1). The x-axis direction refers to, for example, a direction along the body surface of the neck of the cow 10 that follows the circumference of the neck, and the y-axis direction refers to, for example, a direction along the body surface of the neck of the cow 10 from the head toward the torso. The z-axis direction refers to, for example, a direction perpendicular to the body surface of the neck of the cow 10.
The measuring device 110 measures one-dimensional time-series data indicating the acceleration in each of the three axis directions at a predetermined sampling frequency (for example, 5 [Hz] to 20 [Hz]) and transmits the data to the gateway device 120.
The gateway device 120 transmits, to the determination device 130, the one-dimensional time-series data indicating the acceleration in each of the three axis directions transmitted from the measuring device 110.
The determination device 130 determines whether the cow 10 is in a rumination state or a non-rumination state based on the one-dimensional time-series data indicating the acceleration in each of the three axis directions. A determination program is installed in the determination device 130, and by executing the program, the determination device 130 functions as a learning data generation unit 131, a learning unit 132, an inference data generation unit 133, and an inference unit 134.
The learning data generation unit 131 extracts the one-dimensional time-series data indicating the acceleration in the z-axis direction from the one-dimensional time-series data indicating the acceleration in each of the three axis directions. The learning data generation unit 131 then frequency-analyzes the extracted one-dimensional time-series data for each predetermined analysis time range (for example, 15 seconds) to generate two-dimensional array data (a spectrogram image) indicating the intensity of each frequency at each time.
The predetermined analysis time range is shifted at a predetermined shift interval (for example, every 1 second), and the result of the frequency analysis for each analysis time range is generated as the two-dimensional array data at the start time of that analysis time range. In other words, each time referred to here means the start time of the corresponding time range (the same applies hereinafter).
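The following is a minimal sketch of this sliding-window frequency analysis in Python, using scipy.signal.spectrogram on a hypothetical z-axis acceleration array acc_z. The 15-second window, 1-second shift, and 10 Hz sampling rate are example values taken from the description, and the function name make_spectrogram is an assumption for illustration, not something defined in this document.

```python
import numpy as np
from scipy import signal

def make_spectrogram(acc_z, fs=10.0, window_sec=15.0, shift_sec=1.0):
    """Frequency-analyze 1-D acceleration data into a time-frequency 2-D array.

    acc_z:      1-D numpy array of z-axis acceleration samples
    fs:         sampling frequency in Hz (e.g. 5-20 Hz per the description)
    window_sec: analysis time range (e.g. 15 seconds)
    shift_sec:  shift interval between successive analysis windows (e.g. 1 second)
    """
    nperseg = int(window_sec * fs)            # samples per analysis window
    noverlap = nperseg - int(shift_sec * fs)  # overlap so windows advance by shift_sec
    freqs, times, sxx = signal.spectrogram(
        acc_z, fs=fs, nperseg=nperseg, noverlap=noverlap)
    # sxx has shape (n_frequencies, n_times); each column is the intensity of each
    # frequency over one analysis window (scipy reports the window's center time,
    # whereas the description refers to each window by its start time).
    return freqs, times, sxx

# Example: 10 minutes of synthetic data at 10 Hz
acc_z = np.random.randn(10 * 60 * 10)
freqs, times, sxx = make_spectrogram(acc_z)
```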
In addition, at each time, the learning data generation unit 131 associates the two-dimensional array data with information indicating the rumination state or information indicating the non-rumination state (correct answer data), and stores them in the learning data storage unit 135 as learning data.
The learning unit 132 reads the learning data from the learning data storage unit 135 and sequentially inputs the two-dimensional array data at each time included in the read learning data into the determination model for each predetermined determination time range (for example, 15 seconds). The predetermined determination time range is shifted at a predetermined shift interval (for example, every 1 second), and the two-dimensional array data of each determination time range is input to the determination model.
The learning unit 132 also updates the model parameters of the determination model so that the information indicating the rumination state or the information indicating the non-rumination state (classification probability) output from the determination model approaches the corresponding correct answer data. In this way, the learning unit 132 performs machine learning on the determination model, which specifies the correspondence between the two-dimensional array data in the predetermined determination time range and the information indicating the rumination state or the information indicating the non-rumination state.
The model parameters of the trained determination model generated by the machine learning performed by the learning unit 132 are applied to the inference unit 134.
The inference data generation unit 133 acquires, via the gateway device 120, the one-dimensional time-series data indicating the acceleration in each of the three axis directions transmitted from the measuring device attached to the cow to be determined. The inference data generation unit 133 extracts the one-dimensional time-series data indicating the acceleration in the z-axis direction from the one-dimensional time-series data indicating the acceleration in each of the three axis directions. Further, the inference data generation unit 133 frequency-analyzes the extracted one-dimensional time-series data for each predetermined analysis time range to generate two-dimensional array data indicating the intensity of each frequency at each time.
The inference data generation unit 133 also associates the two-dimensional array data at each time with the corresponding time information and stores them in the inference data storage unit 136 as inference data.
The inference unit 134 is an example of the determination unit. The inference unit 134 reads the inference data from the inference data storage unit 136 and sequentially inputs the two-dimensional array data at each time included in the read inference data into the trained determination model for each predetermined inference time range (a time range of the same length as the predetermined determination time range). The inference unit 134 then outputs the inference result output from the trained determination model (information indicating the rumination state or information indicating the non-rumination state) together with the corresponding time information.
As described above, the determination device 130 according to the present embodiment increases the amount of information used for determination by frequency-analyzing the acceleration data, which is one-dimensional time-series data, to generate two-dimensional array data. In addition, the determination device 130 according to the present embodiment inputs the two-dimensional array data into the determination model and performs machine learning, thereby generating a trained determination model capable of capturing characteristics of the rumination behavior of a cow, such as periodicity and continuity.
Accordingly, the determination device 130 according to the present embodiment can accurately determine the behavior classification of a cow (whether the cow is in a rumination state or a non-rumination state).
<Hardware configuration of the determination device>
Next, the hardware configuration of the determination device 130 will be described. FIG. 2 is a diagram showing an example of the hardware configuration of the determination device. As shown in FIG. 2, the determination device 130 includes a processor 201, a memory 202, an auxiliary storage device 203, an I/F (Interface) device 204, a communication device 205, and a drive device 206. These hardware components of the determination device 130 are connected to one another via a bus 207.
The processor 201 includes various arithmetic devices such as a CPU (Central Processing Unit) and a GPU (Graphics Processing Unit). The processor 201 reads various programs (for example, the determination program) onto the memory 202 and executes them.
The memory 202 includes main storage devices such as a ROM (Read Only Memory) and a RAM (Random Access Memory). The processor 201 and the memory 202 form a so-called computer, and the computer realizes the above functions (the learning data generation unit 131 to the inference unit 134) by the processor 201 executing the various programs read onto the memory 202.
The auxiliary storage device 203 stores the various programs and various data used when the programs are executed by the processor 201. For example, the learning data storage unit 135 and the inference data storage unit 136 are realized in the auxiliary storage device 203.
The I/F device 204 is a connection device that connects the determination device 130 to an operation device 210 and a display device 211, which are examples of external devices. The I/F device 204 receives operations on the determination device 130 via the operation device 210. The I/F device 204 also outputs the results of processing by the determination device 130 and displays them on the display device 211.
The communication device 205 is a communication device for communicating with other devices. The determination device 130 communicates with the gateway device 120, which is another device, via the communication device 205.
The drive device 206 is a device for setting the recording medium 212. The recording medium 212 referred to here includes media that record information optically, electrically, or magnetically, such as a CD-ROM, a flexible disk, and a magneto-optical disk. The recording medium 212 may also include a semiconductor memory or the like that records information electrically, such as a ROM or a flash memory.
The various programs installed in the auxiliary storage device 203 are installed, for example, by setting a distributed recording medium 212 in the drive device 206 and having the drive device 206 read the various programs recorded on the recording medium 212. Alternatively, the various programs installed in the auxiliary storage device 203 may be installed by being downloaded from a network via the communication device 205.
<Specific example of the functional configuration and processing of the learning data generation unit>
Next, among the functions realized by the determination device 130, the functional configuration of the learning data generation unit 131 and a specific example of the processing performed by the learning data generation unit 131 will be described. FIG. 3 is a first diagram showing a specific example of the functional configuration and processing of the learning data generation unit. As shown in FIG. 3, the learning data generation unit 131 includes an acceleration data acquisition unit 301, a z-axis direction extraction unit 302, a frequency analysis unit 303, and a data generation unit 304.
The acceleration data acquisition unit 301 acquires, via the gateway device 120, the one-dimensional time-series data indicating the acceleration in each of the three axis directions transmitted from the measuring device 110. In FIG. 3, a graph 311 is an example of the one-dimensional time-series data indicating the acceleration in each of the three axis directions acquired by the acceleration data acquisition unit 301, with the horizontal axis indicating time and the vertical axis indicating acceleration.
As shown in the graph 311, it is assumed that information indicating that the cow is in a rumination state (or a non-rumination state) is associated in advance with the one-dimensional time-series data indicating the acceleration in each of the three axis directions acquired by the acceleration data acquisition unit 301 of the learning data generation unit 131.
The z-axis direction extraction unit 302 extracts the one-dimensional time-series data indicating the acceleration in the z-axis direction from the one-dimensional time-series data indicating the acceleration in each of the three axis directions acquired by the acceleration data acquisition unit 301. In FIG. 3, a graph 312 is an example of the one-dimensional time-series data indicating the acceleration in the z-axis direction extracted by the z-axis direction extraction unit 302 from the one-dimensional time-series data indicating the acceleration in each of the three axis directions.
The frequency analysis unit 303 frequency-analyzes, for each predetermined analysis time range, the one-dimensional time-series data indicating the acceleration in the z-axis direction extracted by the z-axis direction extraction unit 302, and generates two-dimensional array data indicating the intensity of each frequency at each time. In FIG. 3, a graph 313 is an example of the two-dimensional array data generated by the frequency analysis unit 303, with the horizontal axis indicating time and the vertical axis indicating frequency. The difference in color in the graph 313 represents the difference in the intensity of each frequency at each time. For example, in the graph 313, red indicates the highest intensity, and the intensity decreases in the order of orange, yellow, green, blue, and purple. In the present embodiment, however, the difference in the intensity of each frequency at each time is handled as a grayscale value (for example, 0 to 255), that is, as a single shade value (0 to 255) rather than three separate R, G, and B values (0 to 255 each).
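As one illustration of this grayscale handling, the sketch below maps the spectrogram intensities from the earlier make_spectrogram sketch to a single-channel 0-255 array. The log compression and min-max normalization are assumptions chosen for the example and are not specified in the description.

```python
import numpy as np

def to_grayscale(sxx, eps=1e-12):
    """Map spectrogram intensities to a single-channel grayscale image (0-255).

    sxx: 2-D array of frequency intensities (n_frequencies x n_times)
    """
    log_sxx = np.log10(sxx + eps)                    # compress dynamic range (assumed)
    lo, hi = log_sxx.min(), log_sxx.max()
    gray = (log_sxx - lo) / (hi - lo + eps) * 255.0  # min-max normalize to 0-255
    return gray.astype(np.uint8)                     # one shade value per element

gray_image = to_grayscale(sxx)  # sxx from the earlier make_spectrogram sketch
```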
The rectangle indicated by a broken line in the graph 312 indicates a predetermined analysis time range, and the one-dimensional time-series data included in that analysis time range is frequency-analyzed to generate, in the graph 313, the two-dimensional array data at the corresponding time (the rectangle indicated by a broken line).
The data generation unit 304 extracts the two-dimensional array data generated by the frequency analysis unit 303 at each time for each predetermined determination time range. In FIG. 3, a graph 314 shows how the data generation unit 304 extracts the two-dimensional array data for each predetermined determination time range (see reference numeral 315).
The data generation unit 304 sequentially extracts the two-dimensional array data while shifting the predetermined determination time range (for example, 15 seconds) at a predetermined shift interval (for example, every 1 second), as illustrated in the sketch below.
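A minimal sketch of this extraction step, assuming the grayscale spectrogram gray_image from the previous sketch with one column per second (matching the 1-second shift of the analysis window); the function name extract_windows and the column-per-second layout are assumptions for illustration.

```python
import numpy as np

def extract_windows(gray_image, window_cols=15, shift_cols=1):
    """Cut the spectrogram into determination-time-range patches.

    gray_image:  2-D grayscale spectrogram (n_frequencies x n_times),
                 assumed to have one column per second
    window_cols: determination time range in columns (e.g. 15 seconds)
    shift_cols:  shift interval in columns (e.g. 1 second)
    """
    n_times = gray_image.shape[1]
    patches, start_times = [], []
    for start in range(0, n_times - window_cols + 1, shift_cols):
        patches.append(gray_image[:, start:start + window_cols])
        start_times.append(start)  # start time of this determination time range
    return np.stack(patches), np.array(start_times)

patches, start_times = extract_windows(gray_image)
# patches[i] is the 2-D array data for the window starting at start_times[i];
# for learning data, each patch is paired with the correct answer data
# (rumination / non-rumination) for that window.
```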
The data generation unit 304 then associates the extracted two-dimensional array data of each determination time range with the information indicating the rumination state or the information indicating the non-rumination state, and stores them in the learning data storage unit 135 as learning data.
In FIG. 3, learning data 320 is an example of the learning data stored by the data generation unit 304, and includes "image data" and "correct answer data" as information items.
"Image data" stores the two-dimensional array data of each determination time range extracted by the data generation unit 304. "Correct answer data" stores, as the correct answer data for the corresponding determination time range, either the information indicating the rumination state or the information indicating the non-rumination state.
<Functional configuration of the learning unit>
Next, among the functions realized by the determination device 130, the functional configuration of the learning unit 132 will be described. FIG. 4 is a diagram showing an example of the functional configuration of the learning unit 132. As shown in FIG. 4, the learning unit 132 includes a CNN unit 401 and a comparison/change unit 402.
The learning unit 132 reads the learning data 320 from the learning data storage unit 135 and inputs each piece of two-dimensional array data included in the "image data" of the read learning data 320 into the CNN unit 401. The learning unit 132 also inputs, into the comparison/change unit 402, the information indicating the rumination state or the information indicating the non-rumination state included in the "correct answer data" of the read learning data 320.
The CNN unit 401 is, for example, a determination model formed by a convolutional neural network, and when two-dimensional array data is input, it outputs information indicating the rumination state or information indicating the non-rumination state.
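A minimal sketch of such a convolutional determination model in PyTorch, assuming a single-channel grayscale input and a two-class output (rumination / non-rumination); the layer sizes and the class name DeterminationCNN are illustrative assumptions, not values specified in the patent.

```python
import torch
import torch.nn as nn

class DeterminationCNN(nn.Module):
    """CNN that maps a spectrogram patch to rumination / non-rumination scores."""

    def __init__(self, n_channels=1, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(n_channels, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),   # pool to 1x1 so any patch size works
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):
        # x: (batch, n_channels, n_frequencies, n_times)
        h = self.features(x).flatten(1)
        return self.classifier(h)      # logits for the two behavior classes

model = DeterminationCNN(n_channels=1)  # 1 channel for the z-axis grayscale data
```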
When the CNN unit 401 outputs the information indicating the rumination state or the information indicating the non-rumination state, its model parameters are updated by the comparison/change unit 402 in response to that output.
When the CNN unit 401 outputs the information indicating the rumination state or the information indicating the non-rumination state, the comparison/change unit 402 compares the output with the correct answer data input by the learning unit 132 and calculates an error (an error in the classification probability). The comparison/change unit 402 then backpropagates the calculated error and updates the model parameters of the CNN unit 401.
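The comparison and parameter update can be sketched as an ordinary supervised training step, reusing the patches and model from the sketches above; the cross-entropy loss, the Adam optimizer, and the placeholder labels are assumptions for illustration and are not prescribed by the description.

```python
import torch
import torch.nn as nn

# patches_t: (N, 1, n_frequencies, n_times) float tensor of 2-D array data
# labels_t:  (N,) long tensor, 0 = non-rumination, 1 = rumination (correct answer data)
patches_t = torch.from_numpy(patches).float().unsqueeze(1)
labels_t = torch.zeros(len(patches_t), dtype=torch.long)  # placeholder labels

criterion = nn.CrossEntropyLoss()                  # compares output with correct data
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(10):
    optimizer.zero_grad()
    logits = model(patches_t)                      # CNN unit output (classification scores)
    loss = criterion(logits, labels_t)             # error in the classification probability
    loss.backward()                                # backpropagate the error
    optimizer.step()                               # update the model parameters
```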
<Specific example of the functional configuration and processing of the inference data generation unit>
Next, among the functions realized by the determination device 130, the functional configuration of the inference data generation unit 133 and a specific example of the processing performed by the inference data generation unit 133 will be described. FIG. 5 is a diagram showing a specific example of the functional configuration and processing of the inference data generation unit. As shown in FIG. 5, the inference data generation unit 133 includes an acceleration data acquisition unit 501, a z-axis direction extraction unit 502, a frequency analysis unit 503, and a data generation unit 504.
The acceleration data acquisition unit 501 to the data generation unit 504 of the inference data generation unit 133 have the same functions as the acceleration data acquisition unit 301 to the data generation unit 304 of the learning data generation unit 131, and therefore description thereof is omitted here.
However, the one-dimensional time-series data indicating the acceleration in each of the three axis directions acquired by the acceleration data acquisition unit 501 is time-series data transmitted from the measuring device attached to the cow to be determined, and no information indicating the rumination state (or non-rumination state) is associated with it.
For this reason, the data generation unit 504 associates the two-dimensional array data, extracted while shifting the predetermined inference time range (see reference numeral 515) at a predetermined shift interval, with time information that is the start time of each inference time range, and stores them in the inference data storage unit 136.
The inference data 530 shows how the two-dimensional array data of the inference time range whose start time is time = t is stored. The inference data 530 also shows how the two-dimensional array data of the inference time range whose start time is time = t + α is stored, and how the two-dimensional array data of the inference time range whose start time is time = t + β is stored.
As in the learning data generation unit 131, the predetermined inference time range is equal to the predetermined determination time range, for example, 15 seconds. The shift interval used when the data generation unit 504 extracts the two-dimensional array data is also equal to the shift interval used when the data generation unit 304 extracts the two-dimensional array data, for example, 1 second.
<Specific example of the functional configuration and processing of the inference unit>
Next, among the functions realized by the determination device 130, the functional configuration of the inference unit 134 and a specific example of the processing performed by the inference unit 134 will be described. FIG. 6 is a diagram showing a specific example of the functional configuration and processing of the inference unit. As shown in FIG. 6, the inference unit 134 includes a CNN unit 601 and a determination result output unit 602.
The inference unit 134 reads the inference data 520 from the inference data storage unit 136 and inputs the two-dimensional array data included in the "image data" of the read inference data 520 into the CNN unit 601.
The model parameters of the trained determination model generated by the machine learning performed by the learning unit 132 are applied to the CNN unit 601. When the two-dimensional array data is input, the CNN unit 601 outputs an inference result (information indicating the rumination state or information indicating the non-rumination state).
The determination result output unit 602 arranges the inference results output by the CNN unit 601 along the time axis and outputs them. In FIG. 6, a determination result 610 shows how the inference results output from the CNN unit 601 are arranged along the time axis based on the time information.
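A minimal sketch of this inference step, reusing the model and window-extraction sketches above; pairing each prediction with the start time of its inference time range follows the description, while the batch-at-once evaluation and the printed output format are implementation assumptions.

```python
import torch

model.eval()
with torch.no_grad():
    logits = model(patches_t)                 # trained CNN unit applied to each window
    predictions = logits.argmax(dim=1)        # 0 = non-rumination, 1 = rumination

# Arrange the inference results along the time axis together with the time information
for start, pred in zip(start_times, predictions.tolist()):
    state = "rumination" if pred == 1 else "non-rumination"
    print(f"t = {start:>5d} s : {state}")
```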
<Flow of the determination process>
Next, the overall flow of the determination process performed by the determination device 130 will be described. FIG. 7 is a flowchart showing the flow of the determination process.
As shown in FIG. 7, the determination device 130 first executes the learning phase. Specifically, in step S701, the acceleration data acquisition unit 301 of the learning data generation unit 131 acquires one-dimensional time-series data indicating the acceleration in each of the three axis directions, with which information indicating the rumination state or information indicating the non-rumination state is associated.
In step S702, the z-axis direction extraction unit 302 of the learning data generation unit 131 extracts the one-dimensional time-series data indicating the acceleration in the z-axis direction from the acquired one-dimensional time-series data indicating the acceleration in each of the three axis directions. The frequency analysis unit 303 of the learning data generation unit 131 then frequency-analyzes the one-dimensional time-series data indicating the acceleration in the z-axis direction while shifting the predetermined analysis time range at a predetermined shift interval. As a result, the frequency analysis unit 303 of the learning data generation unit 131 generates two-dimensional array data indicating the intensity of each frequency at each time.
The data generation unit 304 of the learning data generation unit 131 then extracts the two-dimensional array data while shifting the predetermined determination time range at a predetermined shift interval. Further, the data generation unit 304 of the learning data generation unit 131 associates the two-dimensional array data extracted for each predetermined determination time range with the information indicating the rumination state or the information indicating the non-rumination state, and stores them in the learning data storage unit 135 as learning data.
In step S703, the learning unit 132 inputs the image data (two-dimensional array data) included in the learning data into the CNN unit 401 and the correct answer data (information indicating the rumination state or the non-rumination state) into the comparison/change unit 402, and performs machine learning on the determination model.
When the machine learning is completed and the trained determination model is generated, the determination device 130 shifts to the inference phase. Specifically, in step S704, the acceleration data acquisition unit 501 of the inference data generation unit 133 acquires one-dimensional time-series data indicating the acceleration in each of the three axis directions measured by the measuring device attached to the cow to be determined.
In step S705, the z-axis direction extraction unit 502 of the inference data generation unit 133 extracts the one-dimensional time-series data indicating the acceleration in the z-axis direction from the acquired one-dimensional time-series data indicating the acceleration in each of the three axis directions. The frequency analysis unit 503 of the inference data generation unit 133 then frequency-analyzes the one-dimensional time-series data indicating the acceleration in the z-axis direction while shifting the predetermined analysis time range at a predetermined shift interval. As a result, the frequency analysis unit 503 of the inference data generation unit 133 generates two-dimensional array data indicating the intensity of each frequency at each time. Further, the data generation unit 504 of the inference data generation unit 133 extracts the two-dimensional array data while shifting the predetermined inference time range at a predetermined shift interval, associates the two-dimensional array data extracted for each predetermined inference time range with time information, and stores them in the inference data storage unit 136 as inference data.
In step S706, the inference unit 134 inputs the image data (two-dimensional array data) included in the inference data to execute the CNN unit 601, which is the trained determination model. As a result, the CNN unit 601 outputs an inference result (information indicating the rumination state or information indicating the non-rumination state).
In step S707, the determination result output unit 602 of the inference unit 134 outputs the inference result output from the CNN unit 601 in association with the time information.
<Summary>
As is clear from the above description, the determination device 130 according to the first embodiment:
・acquires one-dimensional time-series data indicating the acceleration in each of the three axis directions, output from the measuring device attached to the neck of the cow;
・extracts the one-dimensional time-series data indicating the acceleration in the z-axis direction from the one-dimensional time-series data indicating the acceleration in each of the three axis directions, frequency-analyzes it, and generates two-dimensional array data indicating the intensity of each frequency at each time; and
・takes the two-dimensional array data for each predetermined determination time range as input and outputs information indicating the behavior classification for each predetermined determination time range (information indicating the rumination state or information indicating the non-rumination state).
As described above, the determination device 130 according to the first embodiment increases the amount of information used for determination by frequency-analyzing the acceleration data, which is one-dimensional time-series data, to generate two-dimensional array data. In addition, the determination device 130 according to the present embodiment inputs the two-dimensional array data into the determination model and performs machine learning, thereby generating a trained determination model capable of capturing characteristics of the rumination behavior of a cow, such as periodicity and continuity.
Accordingly, the determination device 130 according to the first embodiment can accurately capture characteristics of the rumination behavior of a cow, for example, that a high-intensity state near a predetermined frequency continues (that is, has periodicity and continuity), regardless of differences in frequency characteristics between individual animals.
As a result, the determination device 130 according to the first embodiment can accurately determine the behavior classification of a cow (whether the cow is in a rumination state or a non-rumination state).
[Second Embodiment]
In the first embodiment described above, a case has been described in which the one-dimensional time-series data indicating the acceleration in the z-axis direction is extracted from the one-dimensional time-series data indicating the acceleration in each of the three axis directions to generate two-dimensional array data. In contrast, in the second embodiment, a case will be described in which the time-series data in each of the x-axis, y-axis, and z-axis directions is extracted from the one-dimensional time-series data indicating the acceleration in each of the three axis directions to generate two-dimensional array data for each axis. Hereinafter, the second embodiment will be described focusing on the differences from the first embodiment.
<Specific example of the functional configuration and processing of the learning data generation unit>
FIG. 8 is a second diagram showing a specific example of the functional configuration and processing of the learning data generation unit. The differences from FIG. 3 are that the learning data generation unit 800 includes an x-axis direction extraction unit 801, a y-axis direction extraction unit 811, and frequency analysis units 802 and 812, and that the function of the data generation unit 820 differs from the function of the data generation unit 304 in FIG. 3. Another difference from FIG. 3 is that the configuration of learning data 830 stored in the learning data storage unit 135 differs from the configuration of the learning data 320 in FIG. 3.
In FIG. 8, the graphs corresponding to the graphs 311 to 314 are omitted due to space limitations, but the graphs of the data output from each unit of the learning data generation unit 800 are the same as the graphs 311 to 314.
The x-axis direction extraction unit 801 extracts the time-series data indicating the acceleration in the x-axis direction from the one-dimensional time-series data indicating the acceleration in each of the three axis directions acquired by the acceleration data acquisition unit 301.
The y-axis direction extraction unit 811 extracts the time-series data indicating the acceleration in the y-axis direction from the one-dimensional time-series data indicating the acceleration in each of the three axis directions acquired by the acceleration data acquisition unit 301.
The frequency analysis unit 802 frequency-analyzes, for each predetermined analysis time range, the time-series data indicating the acceleration in the x-axis direction extracted by the x-axis direction extraction unit 801, and generates two-dimensional array data indicating the intensity of each frequency at each time.
The frequency analysis unit 812 frequency-analyzes, for each predetermined analysis time range, the time-series data indicating the acceleration in the y-axis direction extracted by the y-axis direction extraction unit 811, and generates two-dimensional array data indicating the intensity of each frequency at each time.
The data generation unit 820 associates the two-dimensional array data for each predetermined determination time range, output from each of the frequency analysis units 802, 812, and 303, with the information indicating the rumination state or the information indicating the non-rumination state, and stores them in the learning data storage unit 135.
In FIG. 8, the learning data 830 is an example of the learning data stored by the data generation unit 820, and includes "image data (x)", "image data (y)", "image data (z)", and "correct answer data" as information items.
"Image data (x)" stores the two-dimensional array data of each determination time range extracted by the data generation unit 304, that is, two-dimensional array data based on the one-dimensional time-series data indicating the acceleration in the x-axis direction.
"Image data (y)" stores the two-dimensional array data of each determination time range extracted by the data generation unit 304, that is, two-dimensional array data based on the one-dimensional time-series data indicating the acceleration in the y-axis direction.
"Image data (z)" stores the two-dimensional array data of each determination time range extracted by the data generation unit 304, that is, two-dimensional array data based on the one-dimensional time-series data indicating the acceleration in the z-axis direction.
"Correct answer data" stores, as the correct answer data for the corresponding determination time range, either the information indicating the rumination state or the information indicating the non-rumination state.
<Functional configuration of the learning unit, the inference data generation unit, and the inference unit>
Next, the functional configurations of the learning unit 132, the inference data generation unit 133, and the inference unit 134 in the second embodiment will be described.
Of these, the functional configuration of the learning unit 132 in the second embodiment is the same as the functional configuration of the learning unit 132 in the first embodiment shown in FIG. 4. However, when inputting the learning data 830 read from the learning data storage unit 135, the learning unit 132 in the second embodiment inputs
・the two-dimensional array data included in "image data (x)" as channel 1,
・the two-dimensional array data included in "image data (y)" as channel 2, and
・the two-dimensional array data included in "image data (z)" as channel 3
into the CNN unit 401, and thereby performs machine learning on the determination model (where the two-dimensional array data of each channel is two-dimensional array data associated with the same time information); a sketch of this multi-channel input follows below.
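A minimal sketch of how the three per-axis grayscale spectrogram patches could be stacked into a 3-channel input for the CNN unit, assuming placeholder patch arrays patch_x, patch_y, and patch_z produced by the per-axis pipeline above and reusing the DeterminationCNN class from the earlier sketch; the variable names and shapes are assumptions for illustration.

```python
import numpy as np
import torch

# patch_x, patch_y, patch_z: 2-D grayscale spectrogram patches for the same
# determination time range (same time information), one per axis direction.
patch_x = np.random.randint(0, 256, (64, 15), dtype=np.uint8)  # placeholder
patch_y = np.random.randint(0, 256, (64, 15), dtype=np.uint8)  # placeholder
patch_z = np.random.randint(0, 256, (64, 15), dtype=np.uint8)  # placeholder

# Stack as channels 1-3: (3, n_frequencies, n_times), then add a batch axis.
x3 = np.stack([patch_x, patch_y, patch_z], axis=0).astype(np.float32)
x3_t = torch.from_numpy(x3).unsqueeze(0)           # (1, 3, 64, 15)

model3 = DeterminationCNN(n_channels=3)            # CNN unit accepting 3 channels
logits = model3(x3_t)                              # rumination / non-rumination scores
```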
The inference data generation unit 133 in the second embodiment has the same configuration as the learning data generation unit 800 in the second embodiment shown in FIG. 8. However, the inference data generated by the inference data generation unit 133 includes "image data (x)", "image data (y)", "image data (z)", and "time information" as information items.
The inference unit 134 in the second embodiment has the same functional configuration as the inference unit 134 in the first embodiment shown in FIG. 6. However, when inputting the inference data read from the inference data storage unit 136, the inference unit 134 in the second embodiment inputs
・the two-dimensional array data included in "image data (x)" as channel 1,
・the two-dimensional array data included in "image data (y)" as channel 2, and
・the two-dimensional array data included in "image data (z)" as channel 3
into the CNN unit 601, performs the inference processing, and outputs the inference result (where the two-dimensional array data of each channel is two-dimensional array data associated with the same time information).
<Summary>
As is clear from the above description, the determination device 130 according to the second embodiment:
・acquires one-dimensional time-series data indicating the acceleration in each of the three axis directions, output from the measuring device attached to the neck of the cow;
・extracts the one-dimensional time-series data indicating the acceleration in each of the x-axis, y-axis, and z-axis directions from the one-dimensional time-series data indicating the acceleration in each of the three axis directions, frequency-analyzes each of them, and generates two-dimensional array data indicating the intensity of each frequency at each time for each axis; and
・takes the multi-channel (3-channel) two-dimensional array data for each predetermined determination time range as input and outputs information indicating the behavior classification for each predetermined determination time range (information indicating the rumination state or information indicating the non-rumination state).
As described above, the determination device 130 according to the second embodiment frequency-analyzes the one-dimensional time-series data indicating the acceleration in each of the three axis directions, generates two-dimensional array data for each axis, and then performs machine learning.
Accordingly, the determination device 130 according to the second embodiment can accurately determine the behavior classification of a cow (whether the cow is in a rumination state or a non-rumination state).
[Third Embodiment]
In the second embodiment described above, a case has been described in which the time-series data in each of the x-axis, y-axis, and z-axis directions is extracted from the one-dimensional time-series data indicating the acceleration in each of the three axis directions to generate two-dimensional array data for each axis. The two-dimensional array data in that case was described as being handled in grayscale.
In contrast, in the third embodiment, each piece of two-dimensional array data is handled in color, and the two-dimensional array data is divided and processed for each color component (R value, G value, and B value). Hereinafter, the third embodiment will be described focusing on the differences from the second embodiment.
<Functional configuration of the learning data generation unit>
FIG. 9 is a third diagram showing a specific example of the functional configuration and processing of the learning data generation unit. The differences from FIG. 8 are that the learning data generation unit 900 includes R value extraction units 901, 911, and 921, G value extraction units 902, 912, and 922, and B value extraction units 903, 913, and 923, and that the function of the data generation unit 930 differs from the function of the data generation unit 820 in FIG. 8. Another difference from FIG. 8 is that the configuration of learning data 940 stored in the learning data storage unit 135 differs from the configuration of the learning data 830 in FIG. 8.
The R value extraction units 901, 911, and 921 extract two-dimensional array data of R values from the two-dimensional array data output from the frequency analysis units 802, 812, and 303, respectively.
The G value extraction units 902, 912, and 922 extract two-dimensional array data of G values from the two-dimensional array data output from the frequency analysis units 802, 812, and 303, respectively.
The B value extraction units 903, 913, and 923 extract two-dimensional array data of B values from the two-dimensional array data output from the frequency analysis units 802, 812, and 303, respectively.
The data generation unit 930 acquires the two-dimensional array data for each predetermined analysis time range output from each of the R value extraction units 901, 911, and 921, the G value extraction units 902, 912, and 922, and the B value extraction units 903, 913, and 923.
The data generation unit 930 also extracts the two-dimensional array data while shifting the predetermined determination time range at a predetermined shift interval, associates it with the information indicating the rumination state or the information indicating the non-rumination state, and stores it as learning data 940.
In FIG. 9, the learning data 940 is an example of the learning data stored by the data generation unit 930. The learning data 940 includes, as information items, "image data (x, R)" to "image data (x, B)", "image data (y, R)" to "image data (y, B)", "image data (z, R)" to "image data (z, B)", and "correct answer data".
"Image data (x, R)" to "image data (x, B)" store the two-dimensional array data of each determination time range based on the one-dimensional time-series data indicating the acceleration in the x-axis direction, namely the two-dimensional array data of the R value, the G value, and the B value, respectively.
"Image data (y, R)" to "image data (y, B)" store the two-dimensional array data of each determination time range based on the one-dimensional time-series data indicating the acceleration in the y-axis direction, namely the two-dimensional array data of the R value, the G value, and the B value, respectively.
"Image data (z, R)" to "image data (z, B)" store the two-dimensional array data of each determination time range based on the one-dimensional time-series data indicating the acceleration in the z-axis direction, namely the two-dimensional array data of the R value, the G value, and the B value, respectively.
<Functional configuration of the learning unit, the inference data generation unit, and the inference unit>
Next, the functional configurations of the learning unit 132, the inference data generation unit 133, and the inference unit 134 in the third embodiment will be described.
Of these, the functional configuration of the learning unit 132 in the third embodiment is the same as the functional configuration of the learning unit 132 in the first embodiment shown in FIG. 4. However, when inputting the learning data 940 read from the learning data storage unit 135, the learning unit 132 in the third embodiment inputs
・the two-dimensional array data included in "image data (x, R)" to "image data (x, B)" as channels 1 to 3,
・the two-dimensional array data included in "image data (y, R)" to "image data (y, B)" as channels 4 to 6, and
・the two-dimensional array data included in "image data (z, R)" to "image data (z, B)" as channels 7 to 9
into the CNN unit 401, and thereby performs machine learning on the determination model; a sketch of this 9-channel input follows below.
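A minimal sketch of splitting a color spectrogram patch into its R, G, and B components for each axis and stacking them into a 9-channel input, reusing the DeterminationCNN class from the earlier sketch. The color patches color_x, color_y, and color_z of shape (n_frequencies, n_times, 3) are assumed placeholders, and the channel ordering follows the list above.

```python
import numpy as np
import torch

# Color spectrogram patches for one determination time range, one per axis,
# each of shape (n_frequencies, n_times, 3) with R, G, B in the last axis.
color_x = np.random.randint(0, 256, (64, 15, 3), dtype=np.uint8)  # placeholder
color_y = np.random.randint(0, 256, (64, 15, 3), dtype=np.uint8)  # placeholder
color_z = np.random.randint(0, 256, (64, 15, 3), dtype=np.uint8)  # placeholder

def split_rgb(patch):
    """Return the R, G, and B components as three 2-D arrays."""
    return patch[:, :, 0], patch[:, :, 1], patch[:, :, 2]

# Channels 1-3: x (R, G, B); channels 4-6: y (R, G, B); channels 7-9: z (R, G, B).
channels = [*split_rgb(color_x), *split_rgb(color_y), *split_rgb(color_z)]
x9 = np.stack(channels, axis=0).astype(np.float32)   # (9, 64, 15)
x9_t = torch.from_numpy(x9).unsqueeze(0)              # (1, 9, 64, 15)

model9 = DeterminationCNN(n_channels=9)               # CNN unit accepting 9 channels
logits = model9(x9_t)
```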
The inference data generation unit 133 in the third embodiment has the same configuration as the learning data generation unit 900 in the third embodiment shown in FIG. 9. However, the inference data generated by the inference data generation unit 133 includes, as information items,
・"image data (x, R)" to "image data (x, B)",
・"image data (y, R)" to "image data (y, B)",
・"image data (z, R)" to "image data (z, B)", and
・"time information".
The inference unit 134 in the third embodiment has the same functional configuration as the inference unit 134 in the first embodiment shown in FIG. 6. However, when inputting the inference data read from the inference data storage unit 136, the inference unit 134 in the third embodiment inputs
・the two-dimensional array data included in "image data (x, R)" to "image data (x, B)" as channels 1 to 3,
・the two-dimensional array data included in "image data (y, R)" to "image data (y, B)" as channels 4 to 6, and
・the two-dimensional array data included in "image data (z, R)" to "image data (z, B)" as channels 7 to 9
into the CNN unit 601, performs the inference processing, and outputs the inference result.
<Summary>
As is clear from the above description, the determination device 130 according to the third embodiment:
・acquires one-dimensional time-series data indicating the acceleration in each of the three axis directions, output from the measuring device attached to the neck of the cow;
・extracts the one-dimensional time-series data indicating the acceleration in each of the x-axis, y-axis, and z-axis directions from the one-dimensional time-series data indicating the acceleration in each of the three axis directions, frequency-analyzes each of them, and generates two-dimensional array data indicating the intensity of each frequency at each time for each axis;
・divides the generated two-dimensional array data by color component; and
・takes the multi-channel (9-channel) two-dimensional array data for each predetermined determination time range as input and outputs information indicating the behavior classification for each predetermined determination time range (information indicating the rumination state or information indicating the non-rumination state).
 このように、第3の実施形態に係る判定装置130では、3軸方向それぞれの加速度を示す1次元の時系列データをそれぞれ周波数解析し、軸ごとに2次元配列データを生成し、色成分ごとに分割したうえで、機械学習を行う。 In this way, the determination device 130 according to the third embodiment frequency-analyzes one-dimensional time-series data indicating acceleration in each of the three axial directions, generates two-dimensional array data for each axis, and for each color component. After dividing into, machine learning is performed.
 これにより、第3の実施形態に係る判定装置130によれば、牛の行動分類(牛が反芻状態であるか非反芻状態であるか)を精度よく判定することができる。 Thereby, according to the determination device 130 according to the third embodiment, it is possible to accurately determine the behavior classification of the cow (whether the cow is in the ruminant state or the non-ruminant state).
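To make the data-generation pipeline concrete, here is a minimal sketch under stated assumptions: it uses SciPy's spectrogram as the frequency analysis and a Matplotlib colormap to produce the R, G, and B component images. The sampling rate, window length, and colormap choice are assumptions; the embodiment does not prescribe them.

    # Hypothetical sketch: per-axis spectrogram generation and color-component split.
    import numpy as np
    from scipy.signal import spectrogram
    from matplotlib import cm

    def axis_to_rgb_images(accel_1d, fs=25.0, nperseg=64):
        # accel_1d: 1-D acceleration time series for one axis; fs and nperseg are assumed values.
        freqs, times, sxx = spectrogram(accel_1d, fs=fs, nperseg=nperseg)
        sxx = 10.0 * np.log10(sxx + 1e-12)                         # intensity of each frequency at each time
        sxx = (sxx - sxx.min()) / (sxx.max() - sxx.min() + 1e-12)  # normalize to [0, 1]
        rgba = cm.get_cmap("viridis")(sxx)                         # map intensities to a color image
        # Split the color image into its R, G, B components (three 2-D arrays).
        return rgba[..., 0], rgba[..., 1], rgba[..., 2]

    def build_nine_images(accel_xyz):
        # accel_xyz: dict {"x": ..., "y": ..., "z": ...} of 1-D arrays for the same time window.
        images = {}
        for axis in ("x", "y", "z"):
            r, g, b = axis_to_rgb_images(accel_xyz[axis])
            images[(axis, "R")], images[(axis, "G")], images[(axis, "B")] = r, g, b
        return images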
[Fourth Embodiment]
In the first embodiment described above, the one-dimensional time-series data indicating the acceleration in each of the three axial directions was described as being associated with information indicating the ruminant state or information indicating the non-ruminant state.
However, the information associated with the time-series data is not limited to this. For example, instead of the information indicating the ruminant state or the non-ruminant state, information indicating a behavior classification other than these may be associated with the data.
Alternatively, in addition to the information indicating the ruminant state or the non-ruminant state, information indicating a behavior classification other than these may further be associated with the data.
Note that the information indicating a behavior classification other than the ruminant state or the non-ruminant state includes, for example, information indicating a feeding state, information indicating a walking state, and the like.
FIG. 10 is a fourth diagram showing a specific example of the functional configuration and processing of the learning data generation unit. As shown in graphs 1011 to 1013 of FIG. 10, when information indicating behavior classification 1 or behavior classification 2 is associated with the data, one of these behavior classifications is stored in the "correct answer data" of the learning data 1020.
Thereby, according to the determination device 130 of the fourth embodiment, the behavior classification of the cow to be determined can be accurately determined based on the one-dimensional time-series data indicating the acceleration in each of the three axial directions. A minimal sketch of such a multi-class label encoding follows.
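The following is a minimal sketch of how such multi-class correct-answer data could be encoded for training, assuming integer class indices; the specific class set and names are illustrative assumptions, not part of the embodiment.

    # Hypothetical sketch: encoding multi-class behavior labels as correct-answer data.
    BEHAVIOR_CLASSES = ["ruminant", "non-ruminant", "feeding", "walking"]  # assumed class set
    CLASS_TO_INDEX = {name: idx for idx, name in enumerate(BEHAVIOR_CLASSES)}

    def encode_label(behavior_name):
        # Map a behavior-classification name to the integer index stored as correct-answer data.
        return CLASS_TO_INDEX[behavior_name]

    # Example: a training record labeled "feeding" stores encode_label("feeding") == 2,
    # and the CNN output layer would then have len(BEHAVIOR_CLASSES) classes instead of 2.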
[Fifth Embodiment]
In the first to fourth embodiments described above, the determination result output unit 602 was described as outputting the inference result output from the CNN unit 601 in association with time information, and as outputting an inference result for each shift interval.
On the other hand, the interval at which a cow transitions from the ruminant state to the non-ruminant state, or from the non-ruminant state to the ruminant state, is sufficiently long compared with the shift interval.
Therefore, in the fifth embodiment, when outputting the inference result at a predetermined time, the inference results for a plurality of inference time ranges including that predetermined time are referred to, and the final inference result is determined based on the referenced plurality of inference results.
FIG. 11 is a second diagram showing a specific example of the functional configuration and processing of the inference unit. The difference from the inference unit 134 in the first embodiment shown in FIG. 6 is that the configuration in FIG. 11 has a statistical processing unit 1100.
The statistical processing unit 1100 is an example of a decision unit. The statistical processing unit 1100 acquires the inference result for each predetermined inference time range output from the CNN unit 601. In FIG. 11, the inference result 1110 shows the inference results for a plurality of inference time ranges 1121 to 1125 including a predetermined time 1111. The statistical processing unit 1100 determines the inference result for the predetermined time 1111 included in the inference result 1110 based on the inference results for the plurality of inference time ranges 1121 to 1125.
Specifically, the statistical processing unit 1100 adopts the most frequent inference result as the inference result at the predetermined time 1111 (that is, the final inference result is determined by majority vote). In the example of FIG. 11, information indicating the non-ruminant state was output as the inference result once, and information indicating the ruminant state was output four times. The statistical processing unit 1100 therefore determines the inference result at the predetermined time 1111 to be information indicating the ruminant state.
In this way, when a plurality of inference results are output for the same time, the determination processing according to the fifth embodiment determines the final inference result at each time by statistically processing the plurality of inference results.
Thereby, according to the determination device 130 of the fifth embodiment, the behavior classification of the cow (whether the cow is in the ruminant state or the non-ruminant state) can be determined accurately.
In the above description, the final inference result at the predetermined time 1111 was determined from the inference results for a plurality of inference time ranges including the predetermined time 1111; however, the inference results used for the determination are not limited to this. For example, regardless of whether the predetermined time 1111 is included, the final inference result at the predetermined time 1111 may be determined based on the inference results for a predetermined number of inference time ranges before or after the predetermined time 1111.
By determining the final inference result at each time based on the inference results for a predetermined number of inference time ranges in this way, the influence of noise contained in the two-dimensional array data, which arises when the inference time range is shortened, can be eliminated.
That is, shortening the inference time range improves the time resolution compared with a longer inference time range but makes the result more susceptible to noise; the statistical processing unit 1100 can eliminate this influence of noise. A minimal majority-vote sketch is shown below.
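The following is a minimal sketch of the majority-vote decision over overlapping inference time ranges, assuming the per-range inference results have already been collected; the function name and data layout are hypothetical.

    # Hypothetical sketch: decide the final result at a time by majority vote over inference ranges.
    from collections import Counter

    def decide_at_time(t, range_results):
        # range_results: list of (start, end, label) tuples, one per inference time range.
        # The vote here uses the ranges that include time t; a fixed number of preceding or
        # following ranges could be used instead, as noted above.
        votes = [label for (start, end, label) in range_results if start <= t <= end]
        if not votes:
            return None
        return Counter(votes).most_common(1)[0][0]

    # Example from FIG. 11: four ranges inferred "ruminant" and one "non-ruminant",
    # so the decided result at the predetermined time is "ruminant".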
[Other Embodiments]
In the first embodiment described above, the case of extracting the time-series data indicating the acceleration in the z-axis direction from the one-dimensional time-series data indicating the acceleration in each of the three axial directions was described. However, instead of the one-dimensional time-series data indicating the acceleration in the z-axis direction, one-dimensional time-series data indicating the acceleration in the x-axis or y-axis direction may be extracted.
That is, any one or more of the one-dimensional time-series data indicating the acceleration in the x-axis, y-axis, and z-axis directions may be extracted from the one-dimensional time-series data indicating the acceleration in each of the three axial directions.
Further, in the third embodiment described above, the case was described in which the one-dimensional time-series data indicating the acceleration is frequency-analyzed separately for the three axes, two-dimensional array data is generated for each axis, and each set of two-dimensional array data is further divided into color components. However, the two-dimensional array data generated for any one of the one-dimensional time-series data indicating the acceleration in the x-axis, y-axis, or z-axis direction, extracted from the one-dimensional time-series data indicating the acceleration, may instead be divided into color components.
Further, in each of the above embodiments, when generating the two-dimensional array data indicating the intensity of each frequency at each time, the values obtained by the frequency analysis were described as being converted into an image (converted to grayscale or R, G, B) and handled as "image data".
However, the method of generating the two-dimensional array data is not limited to this. For example, the two-dimensional array data indicating the intensity of each frequency at each time may be generated without converting the values obtained by the frequency analysis into an image (without converting them to grayscale or R, G, B); a minimal sketch of this non-imaged variant follows.
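For illustration, a minimal sketch of this non-imaged variant is shown below; it reuses SciPy's spectrogram under the same assumed sampling parameters as the earlier sketch and simply keeps the raw frequency-by-time intensity array as a single channel.

    # Hypothetical sketch: generate the raw 2-D array of frequency intensities without imaging it.
    import numpy as np
    from scipy.signal import spectrogram

    def axis_to_raw_array(accel_1d, fs=25.0, nperseg=64):
        # Returns a single 2-D array (n_freq x n_time) of intensities, used directly as one
        # input channel instead of being converted to grayscale or R, G, B image data.
        _, _, sxx = spectrogram(accel_1d, fs=fs, nperseg=nperseg)
        return 10.0 * np.log10(sxx + 1e-12)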
Further, in each of the above embodiments, the acceleration sensor was described as being attached to the neck; however, the attachment site of the acceleration sensor is not limited to the neck, and it may be attached to another site.
Note that the present invention is not limited to the configurations described in the above embodiments, including combinations with other elements. These points can be changed without departing from the spirit of the present invention and can be determined appropriately according to the application.
This application claims priority based on Japanese Patent Application No. 2020-081460 filed on May 1, 2020, the entire contents of which are incorporated herein by reference.
100: Determination system
110: Measuring device
130: Determination device
131: Learning data generation unit
132: Learning unit
133: Inference data generation unit
134: Inference unit
301: Acceleration data acquisition unit
302: z-axis direction extraction unit
303: Frequency analysis unit
304: Data generation unit
320: Learning data
401: CNN unit
402: Comparison/change unit
501: Acceleration data acquisition unit
502: z-axis direction extraction unit
503: Frequency analysis unit
504: Data generation unit
520: Inference data
601: CNN unit
602: Determination result output unit
830: Learning data
940: Learning data
1020: Learning data
1100: Statistical processing unit

Claims (8)

1. A determination device comprising:
a generation unit that performs frequency analysis on one-dimensional time-series data indicating acceleration, output from an acceleration sensor attached to a ruminant animal, and generates two-dimensional array data indicating the intensity of each frequency at each time; and
a determination unit that determines a behavior classification of the ruminant animal for each predetermined time range by using, as input, the two-dimensional array data for each predetermined time range.
2. The determination device according to claim 1, wherein the generation unit generates one or more sets of the two-dimensional array data by performing frequency analysis on any one or more of one-dimensional time-series data indicating acceleration in the x-axis direction, one-dimensional time-series data indicating acceleration in the y-axis direction, and one-dimensional time-series data indicating acceleration in the z-axis direction, output from the acceleration sensor.
3. The determination device according to claim 2, wherein the generation unit generates a plurality of sets of the two-dimensional array data by dividing the generated two-dimensional array data into color components.
4. The determination device according to claim 3, wherein the determination unit determines the behavior classification of the ruminant animal for each predetermined time range by using, as input, two-dimensional array data of one or more channels including the generated one or more sets of the two-dimensional array data.
5. The determination device according to claim 1, wherein the behavior classification includes at least either information indicating that the ruminant animal is in a ruminant state or information indicating that the ruminant animal is in a non-ruminant state.
6. The determination device according to claim 1, wherein the determination unit has a trained determination model generated by performing machine learning on a determination model that specifies the correspondence between two-dimensional array data and the behavior classification of a ruminant animal for each predetermined time range.
7. The determination device according to claim 1, further comprising a decision unit that, when there are a plurality of the predetermined time ranges including a predetermined time, decides the behavior classification at the predetermined time by statistically processing the results determined by the determination unit for each of the time ranges.
8. A determination program for causing a computer to execute:
a generation step of performing frequency analysis on one-dimensional time-series data indicating acceleration, output from an acceleration sensor attached to a ruminant animal, and generating two-dimensional array data indicating the intensity of each frequency at each time; and
a determination step of determining a behavior classification of the ruminant animal for each predetermined time range by using, as input, the two-dimensional array data for each predetermined time range.
PCT/JP2021/014201 2020-05-01 2021-04-01 Determination device and determination program WO2021220715A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2020-081460 2020-05-01
JP2020081460A JP2023089316A (en) 2020-05-01 2020-05-01 Determination device and determination program

Publications (1)

Publication Number Publication Date
WO2021220715A1 true WO2021220715A1 (en) 2021-11-04

Family

ID=78332379

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2021/014201 WO2021220715A1 (en) 2020-05-01 2021-04-01 Determination device and determination program

Country Status (2)

Country Link
JP (1) JP2023089316A (en)
WO (1) WO2021220715A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004504051A (en) * 2000-07-19 2004-02-12 バー−シャロム, アヴシャロム Methods and systems for monitoring ruminant physiological conditions and / or suitability of animal feed for ruminants
JP2013022001A (en) * 2011-07-26 2013-02-04 Aichi Electric Co Ltd Method for monitoring behavior and physiological index of animal using low frequency pressure sensor and device therefor
WO2016036303A1 (en) * 2014-09-04 2016-03-10 Delaval Holding Ab Arrangement and method for measuring rumination of an animal

Also Published As

Publication number Publication date
JP2023089316A (en) 2023-06-28


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21796722

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21796722

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP