CN116076421A - Method for obtaining accurate feeding amount through behavioral visual analysis of feeding workers - Google Patents


Info

Publication number
CN116076421A
CN116076421A (application number CN202211462489.5A)
Authority
CN
China
Prior art keywords
feeding
data
video stream
visual analysis
obtaining
Prior art date
Legal status: Granted
Application number
CN202211462489.5A
Other languages
Chinese (zh)
Other versions
CN116076421B (en)
Inventor
刘琰
黄灼
黄锡雄
陈随夫
Current Assignee
Gizwits Iot Technology Co ltd
Original Assignee
Gizwits Iot Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Gizwits Iot Technology Co ltd
Priority to CN202211462489.5A
Publication of CN116076421A
Application granted
Publication of CN116076421B
Legal status: Active

Classifications

    • A: HUMAN NECESSITIES
    • A01: AGRICULTURE; FORESTRY; ANIMAL HUSBANDRY; HUNTING; TRAPPING; FISHING
    • A01K: ANIMAL HUSBANDRY; AVICULTURE; APICULTURE; PISCICULTURE; FISHING; REARING OR BREEDING ANIMALS, NOT OTHERWISE PROVIDED FOR; NEW BREEDS OF ANIMALS
    • A01K61/00: Culture of aquatic animals
    • A01K61/80: Feeding devices
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/40: Scenes; Scene-specific elements in video content
    • G06V20/41: Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G06V20/46: Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • G06V20/50: Context or environment of the image
    • G06V20/52: Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A: TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A40/00: Adaptation technologies in agriculture, forestry, livestock or agroalimentary production
    • Y02A40/80: Adaptation technologies in agriculture, forestry, livestock or agroalimentary production in fisheries management
    • Y02A40/81: Aquaculture, e.g. of fish

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • Environmental Sciences (AREA)
  • Zoology (AREA)
  • Animal Husbandry (AREA)
  • Biodiversity & Conservation Biology (AREA)
  • Marine Sciences & Fisheries (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention discloses a method for obtaining an accurate feeding amount through behavioral visual analysis of feeding workers, applied to a feeding analysis system. The method comprises the following steps: acquiring a start instruction for the camera device; starting the camera device based on the start instruction to obtain video stream data of the feeding process; performing feature recognition on the video stream data to obtain feeding parameters; performing cloud-server deep learning based on the feeding parameters to obtain a feeding strategy; and obtaining the accurate feeding amount according to the feeding strategy and feeding accurately according to that amount. Through behavioral visual analysis of the feeding workers, the method accurately determines the feeding time, feeding amount and feed proportion, improving overall fish-pond culture efficiency and thereby the economic benefit of farmers.

Description

Method for obtaining accurate feeding amount through behavioral visual analysis of feeding workers
Technical Field
The invention relates to the technical field of breeding feeding, in particular to a method for obtaining accurate feeding amount through behavioral visual analysis of feeding workers.
Background
China has a huge fish pond culture industry, and in fish pond culture the feeding time and feeding amount strongly influence the growth indices of the fish shoal. In the related art, feeding in domestic cultivation is mostly done manually: a worker conveys feed to a feeding point, unpacks the bag, loads the feed into a feeding machine, and then moves to the next feeding point to continue until feeding is finished. Most feeding machines have no automatic metering function, so the quantity and type of feed delivered at each feeding point can only be registered manually by workers, or, for most farmers, is not registered at all. This practice risks inaccurate or incorrect feed delivery, with no way to trace the cause.
Disclosure of Invention
The main object of the present invention is to provide a method for obtaining an accurate feeding amount through behavioral visual analysis of feeding workers, a device for obtaining the accurate feeding amount, and a readable storage medium, so as to solve the technical problem in the prior art of how to feed accurately and thereby improve the overall culture efficiency of a fish pond.
In order to achieve the above object, the present invention provides a method for obtaining an accurate feeding amount through behavioral visual analysis of feeding workers, applied to a feeding analysis system, wherein the feeding analysis system comprises a feeding auxiliary device, a vertical-rod type camera device and a cloud server platform, the vertical-rod type camera device is configured to capture the feeding process at the feeding auxiliary device, and the vertical-rod type camera device is communicatively connected with the cloud server platform; the method for obtaining an accurate feeding amount through behavioral visual analysis of feeding workers comprises:
acquiring a starting instruction of the camera device;
starting the camera device based on the starting instruction to obtain video stream data of the feeding process;
performing feature recognition on the video stream data to obtain feeding parameters, wherein the feeding parameters comprise: feeding time, feeding amount, feeding feed class and/or feeding feed proportion;
performing cloud server deep learning based on the feeding parameters to obtain a feeding strategy;
and obtaining accurate feeding quantity according to the feeding strategy, and carrying out accurate feeding according to the accurate feeding quantity.
Preferably, the feeding auxiliary device includes an infrared detector, and before the step of acquiring the start instruction of the image capturing device, the method further includes:
acquiring a detection signal of an infrared detector and the duration of the detection signal;
determining that a detection signal is present and that the duration is greater than a preset value;
generating a start instruction of the image pickup device.
Preferably, the feeding auxiliary device and the upright type image pickup device comprise a plurality of groups corresponding to each other, and the step of generating the starting instruction of the image pickup device comprises the following steps:
pairing the infrared detector of the feeding auxiliary device with the upright type camera device;
generating a starting instruction of the successfully paired camera device according to the pairing result; and/or,
the step of starting the camera device based on the starting instruction to obtain video stream data of the feeding process comprises the following steps:
pairing the infrared detector of the feeding auxiliary device with the upright type camera device;
and acquiring video stream data of the feeding process in the range of the feeding auxiliary device corresponding to the successfully paired infrared detector according to the pairing result.
Preferably, after the step of starting the image capturing device based on the start instruction to obtain video stream data of the feeding process, the method further includes:
acquiring a detection signal of the infrared detector again and the duration of the detection signal;
and determining that the detection signal is not present and the duration time of the absence of the detection signal is longer than a preset value, and closing the image pickup device.
Preferably, the step of performing feature recognition on the video stream data to obtain a feeding parameter includes:
acquiring key frame image data based on video stream data;
acquiring feed packaging information based on the key frame image data, wherein the feed packaging information comprises packaging color, packaging shape, packaging weight, packaging residual quantity and a component table;
and carrying out cloud server deep learning on the feed packaging information to obtain the feeding parameters.
Preferably, the step of performing feature recognition on the video stream data to obtain a feeding parameter further includes:
acquiring audio data of the water surface of the fishpond based on the video stream data;
acquiring activity information of the fish shoals of the fishpond based on the audio data, wherein the activity comprises primary activity and secondary activity;
and carrying out cloud server deep learning on the activity information to obtain the feeding parameters.
Preferably, after the step of starting the image capturing device based on the start instruction to obtain video stream data of the feeding process, the method further includes:
performing feature recognition on the video stream data to obtain action behavior data;
performing cloud server deep learning based on the action behavior data to obtain abnormal behavior data;
determining an abnormal behavior event according to the abnormal behavior data;
generating a traceability prompt strategy based on the abnormal behavior event;
and sending the tracing reminding information to the worker according to the tracing reminding strategy.
Preferably, the step of performing cloud server deep learning based on the feeding parameters to obtain a feeding strategy includes:
acquiring fish swarm growth data and fish pond output data;
establishing a deep learning algorithm model according to the feeding parameters, the fish swarm growth data and the fish pond output data;
and obtaining a feeding strategy from the output of the deep learning algorithm model.
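The strategy-modelling steps above can be illustrated with a deliberately simple stand-in. This is a hypothetical sketch only: the patent calls for a cloud-trained deep learning model, while the rule, function names and all numbers below are invented for illustration.

```python
# Illustrative stand-in for the deep-learning strategy model: a trivial
# rule relating feeding parameters, shoal growth data and pond output
# data to a feeding strategy. A real system would train a neural
# network in the cloud; every name and number here is an assumption.

def build_strategy(feeding_params, growth_data, output_data):
    """Scale the historical feeding amount by how well shoal growth
    tracked the feed actually delivered (a crude conversion proxy)."""
    conversion = growth_data["weight_gain_kg"] / feeding_params["amount_kg"]
    target = output_data["target_gain_kg"]
    return {"recommended_amount_kg": round(target / conversion, 1)}

strategy = build_strategy(
    {"amount_kg": 100.0},        # feed delivered last period
    {"weight_gain_kg": 40.0},    # shoal growth over the same period
    {"target_gain_kg": 50.0},    # desired growth next period
)
print(strategy)  # {'recommended_amount_kg': 125.0}
```

The point of the sketch is only the data flow: feeding parameters, growth data and output data enter a model, and a recommended amount comes out.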
The invention further provides a device for obtaining the accurate feeding amount, which comprises a memory, a processor and a control program stored on the memory and used for realizing the accurate feeding method, wherein the processor is used for executing the control program for realizing the accurate feeding method so as to realize the steps of the method for obtaining the accurate feeding amount through the behavior visual analysis of a feeding worker.
Further, the present invention also provides a readable storage medium having stored thereon a control program which, when executed by a processor, implements the steps of the method of obtaining an accurate feeding amount through behavioral visual analysis of a feeding worker as described above.
In the technical scheme of the invention, a starting instruction of the camera device is firstly obtained; then starting the camera device based on the starting instruction to obtain video stream data of the feeding process; then, carrying out feature recognition on the video stream data to obtain feeding parameters; then, based on the feeding parameters, deep learning of a cloud server is carried out to obtain a feeding strategy; and finally, obtaining accurate feeding quantity according to the feeding strategy, and carrying out accurate feeding according to the accurate feeding quantity. That is, the camera is started according to the control instruction, and then the accurate feeding amount is obtained according to the analysis of the behavior vision of the feeding workers (the video data shot by the camera), so that the feeding time and the feeding amount can be accurately determined, and the overall cultivation efficiency of the fishpond can be improved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to the structures shown in these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic diagram of a feed analysis system according to the present invention;
FIG. 2 is a flow chart of an embodiment of a method for obtaining accurate feeding amount by behavioral visual analysis of a batch feeder according to the present invention;
FIG. 3 is a flow chart of another embodiment of the method of the present invention for obtaining accurate feeding through behavioral visual analysis of a batch feeder;
FIG. 4 is a flow chart of another embodiment of the method for obtaining accurate feeding amount by behavioral visual analysis of a feeder;
FIG. 5 is a flow chart of a further embodiment of the method of the present invention for obtaining accurate feeding through behavioral visual analysis of a batch feeder;
FIG. 6 is a flow chart of another embodiment of the method for obtaining accurate feeding amount by visual analysis of the behavior of a feeder;
FIG. 7 is a flow chart of another embodiment of the method for obtaining accurate feeding amount by visual analysis of the behavior of a feeding worker according to the present invention;
FIG. 8 is a flow chart of another embodiment of the method for obtaining accurate feeding amount by visual analysis of the behavior of a feeding worker according to the present invention;
FIG. 9 is a flow chart of another embodiment of the method for obtaining accurate feeding amount by visual analysis of the behavior of a feeder;
FIG. 10 is a flow chart of another embodiment of the method for obtaining accurate feeding amount by visual analysis of the behavior of a feeder.
The achievement of the objects, functional features and advantages of the present invention will be further described with reference to the accompanying drawings, in conjunction with the embodiments.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are only some, but not all embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
It should be noted that all directional indicators (such as up, down, left, right, front and rear) used in the embodiments of the present invention merely explain the relative positional relationship, movement conditions and the like between the components in a certain specific posture (as shown in the drawings); if the specific posture changes, the directional indicators change accordingly.
Furthermore, the descriptions "first", "second", etc. in this disclosure are for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated; thus, a feature defined by "first" or "second" may explicitly or implicitly include at least one such feature. In addition, "and/or" throughout this document covers three schemes: taking A and/or B as an example, it includes scheme A alone, scheme B alone, and a scheme satisfying both A and B. The technical solutions of the embodiments may also be combined with each other, provided that the combination can be realized by those skilled in the art; when technical solutions are contradictory or cannot be realized, their combination should be considered absent and outside the scope of protection claimed by the present invention.
Referring to fig. 1, the present application provides a method for obtaining an accurate feeding amount through behavioral visual analysis of feeding workers, applied to a feeding analysis system comprising a feeding auxiliary device 100, a vertical-rod type camera device 200 and a cloud server platform 300. The vertical-rod type camera device 200 is configured to capture the feeding process at the feeding auxiliary device 100 and is communicatively connected with the cloud server platform 300. More specifically, the feeding auxiliary device 100 includes a background baffle that facilitates shooting, an operation platform that provides a convenient shooting angle, and an infrared sensor; the operation platform is mainly used for the workers' feeding operation, and the infrared sensor is configured to detect a human body, producing an infrared detection signal when a worker approaches the feeding auxiliary device 100. The vertical-rod type camera device 200 is controlled by the cloud server platform 300 and is communicatively connected with it in a wired or wireless manner; the specific connection mode is not limited here. The camera device 200 also has a human-body tracking function, so that the human body can be tracked within the operation range of the feeding auxiliary device 100, preventing data loss. The cloud server platform 300 provides algorithm support and strategy-pushing services; it can receive video data from a plurality of different fishponds simultaneously, analyze and process the video data to obtain a feeding strategy, and share its data conclusions in the cloud. In this way, the feeding time and feeding amount can be accurately determined through behavioral visual analysis of the feeding workers, improving overall fishpond cultivation efficiency and thereby the economic benefits of farmers.
The specific steps of the method of obtaining a precise feeding amount by behavioral visual analysis of a feeder worker will be mainly described below, and it should be noted that although a logical order is shown in the flowchart, in some cases, the steps shown or described may be performed in an order different from that here.
Referring to fig. 2, the method for obtaining accurate feeding amount through behavior visual analysis of a feeding worker comprises the following steps:
s100, acquiring a starting instruction of the image pickup device;
s200, starting the camera device based on the starting instruction to obtain video stream data of a feeding process;
s300, carrying out feature recognition on the video stream data to obtain feeding parameters, wherein the feeding parameters comprise: feeding time, feeding amount, feeding feed class and/or feeding feed proportion;
s400, carrying out cloud server deep learning based on the feeding parameters to obtain a feeding strategy;
s500, obtaining accurate feeding quantity according to the feeding strategy, and carrying out accurate feeding according to the accurate feeding quantity.
Specifically, in this embodiment, by first acquiring a start instruction of the image capturing apparatus; then starting the camera device based on the starting instruction to obtain video stream data of the feeding process; then, carrying out feature recognition on the video stream data to obtain feeding parameters; then, based on the feeding parameters, deep learning of a cloud server is carried out to obtain a feeding strategy; and finally, obtaining accurate feeding quantity according to the feeding strategy, and carrying out accurate feeding according to the accurate feeding quantity. That is, the image pickup device is started according to the control instruction, and then the accurate feeding amount is obtained by analyzing according to the behavioral vision of the feeding worker (video data photographed by the image pickup device).
More specifically, the start instruction of the image capturing apparatus 200 can be obtained in various ways, such as a timed trigger, a human-proximity-sensing trigger, or manual control; manual control may use a start button or a start remote controller, and proximity sensing may use infrared living-body detection or NFC close-range detection. Infrared detection is taken as the example here: when a feeding worker enters the range of the operation platform of the feeding auxiliary device 100, the infrared sensor detects a living-body signal and sends a detection signal to the cloud server platform 300; after receiving the detection signal, the cloud server platform 300 sends a start instruction to the image capturing apparatus, thereby controlling the start of the image capturing apparatus 200. After starting, the image capturing apparatus 200 shoots continuous video stream data and transmits it to the cloud server platform 300, which performs feature recognition on the video stream data to obtain the feeding parameters; because the video carries a time watermark, the cloud server platform 300 can also derive the feeding time by analysis. The feeding parameters include the feeding time, feeding amount, feed category, feed proportion and the like. The cloud-server deep learning obtains a feeding strategy through a neural network, from which the accurate feeding amount is derived and accurate feeding is carried out; the neural network's deep learning resembles the way the human brain analyzes, processes and summarizes data from multiple sources to make decisions.
Therefore, the feeding time and the feeding amount are accurately determined through the behavioral visual analysis of the feeding workers, the overall fish pond culture efficiency is improved, and the economic benefit of farmers is improved.
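The five steps S100 to S500 described above can be sketched as a minimal pipeline. This is an illustrative outline only, not the patent's implementation: every function name and all simulated data below are assumptions, and the recognition and deep-learning stages are replaced by trivial stand-ins so the flow is runnable.

```python
# Hypothetical sketch of the S100-S500 pipeline; all names/data invented.

def acquire_start_instruction(detector_signal: bool, duration_s: float,
                              threshold_s: float = 3.0) -> bool:
    """S100: a start instruction exists when the infrared detector has
    seen a body continuously for longer than the preset value."""
    return detector_signal and duration_s > threshold_s

def capture_video_stream(started: bool) -> list:
    """S200: stand-in for the camera; returns simulated frames."""
    return ["frame0", "frame1", "frame2"] if started else []

def recognize_feeding_parameters(frames: list) -> dict:
    """S300: stand-in for feature recognition on the video stream."""
    if not frames:
        return {}
    return {"feeding_time": "08:00", "feeding_amount_kg": 25.0,
            "feed_class": "pellet", "feed_ratio": 0.8}

def learn_feeding_strategy(params: dict) -> dict:
    """S400: stand-in for cloud-server deep learning; a trivial rule
    here so the sketch executes."""
    return {"recommended_amount_kg": params.get("feeding_amount_kg", 0) * 0.9}

def precise_feed(strategy: dict) -> float:
    """S500: return the accurate feeding amount to dispense."""
    return strategy["recommended_amount_kg"]

started = acquire_start_instruction(True, 5.0)    # S100
frames = capture_video_stream(started)            # S200
params = recognize_feeding_parameters(frames)     # S300
strategy = learn_feeding_strategy(params)         # S400
amount = precise_feed(strategy)                   # S500
```

The sketch only fixes the data flow between the five steps; each stand-in would be replaced by the camera control, vision model and cloud service the description discusses.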
To improve the flexibility of starting the image capturing apparatus 200, in an embodiment, the starting and closing of the image capturing apparatus 200 are controlled by infrared detection: referring to fig. 3-4, the feeding auxiliary device includes an infrared detector, and the step of acquiring the start instruction of the image capturing device further includes:
s000, acquiring a detection signal of the infrared detector and the duration of the detection signal;
s010, determining that a detection signal exists and the duration time is longer than a preset value;
s020, generating a starting instruction of the image pickup device.
Specifically, when the cloud server platform 300 obtains the detection signal of the infrared detector and the signal's duration reaches a preset value, for example 3 s, 5 s or another preset time, this indicates that a feeding worker really is in the working area performing a feeding operation, and the cloud server platform 300 generates a start instruction for the image capturing device. When the cloud server platform 300 acquires only an intermittent detection signal, a worker may merely be passing through the operation area, and the cloud server platform 300 does not generate a start instruction. After the start instruction of the image pickup apparatus is generated, steps S100 to S500 are executed.
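The duration check of S000-S020 is essentially a debounce: a continuous presence longer than the preset generates a start instruction, while an intermittent signal (a worker passing by) does not. A minimal sketch, assuming a fixed sampling period and the 3 s preset mentioned as an example above; both values are illustrative:

```python
# Illustrative debounce of the infrared detection signal (S000-S020).
# The 0.5 s sampling period and 3.0 s preset are assumptions.

def should_start_camera(samples, sample_period_s=0.5, preset_s=3.0):
    """Return True only if the detector signal is continuously present
    for longer than the preset, filtering out workers passing by."""
    run = 0.0
    for present in samples:
        run = run + sample_period_s if present else 0.0
        if run > preset_s:
            return True
    return False

# A worker merely passing by (intermittent signal): no start instruction.
passing_by = [True, True, False, True, False, True]
# A worker staying at the platform: 4.5 s of continuous presence.
staying = [True] * 9

print(should_start_camera(passing_by))  # False
print(should_start_camera(staying))     # True
```

Resetting the run length on every absent sample is what distinguishes sustained presence from the intermittent signal described above.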
The camera shoots continuously for as long as the worker is working; the data are stored locally and simultaneously reported to the cloud.
In other embodiments, a local visual-computing function can be added: the video is always being shot but is neither stored nor reported to the cloud, and when a worker is working, local computing determines that storage and cloud reporting of the video need to be started, so as to improve the accuracy of image acquisition.
In other embodiments, for energy saving and reduced flow costs, the camera 200 should remain off when no one arrives; therefore, after the step of starting the image capturing device based on the start instruction to obtain video stream data of the feeding process, the method further comprises:
s600, acquiring a detection signal of the infrared detector again and the duration of the detection signal;
and S700, determining that no detection signal exists and the duration time of the absence of the detection signal is longer than a preset value, and closing the image pickup device.
Specifically, when the cloud server platform 300 does not acquire the detection signal of the infrared detector and the duration of absence of the detection signal is longer than a preset value, it is indicated that the feeder worker has completed the feeding work in the work area, and at this time, the image pickup apparatus 200 of the corresponding area needs to be turned off.
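Taken together, the start condition (S000-S020) and the stop condition (S600-S700) form a small on/off controller. The sketch below is a hypothetical combination of the two, assuming the same preset applies to both sustained presence and sustained absence; the class and its parameters are invented for illustration:

```python
# Hypothetical controller combining S000-S020 and S600-S700: start the
# camera after sustained presence, stop it after sustained absence.

class CameraController:
    def __init__(self, preset_s=3.0):
        self.preset_s = preset_s
        self.camera_on = False
        self._presence_run = 0.0
        self._absence_run = 0.0

    def update(self, detector_present: bool, dt_s: float) -> bool:
        """Feed one detector sample; returns the camera's on/off state."""
        if detector_present:
            self._presence_run += dt_s
            self._absence_run = 0.0
            if not self.camera_on and self._presence_run > self.preset_s:
                self.camera_on = True        # generate start instruction
        else:
            self._absence_run += dt_s
            self._presence_run = 0.0
            if self.camera_on and self._absence_run > self.preset_s:
                self.camera_on = False       # feeding finished, power down
        return self.camera_on

ctrl = CameraController()
for _ in range(8):            # 4 s of presence -> camera turns on
    on = ctrl.update(True, 0.5)
for _ in range(8):            # 4 s of absence -> camera turns off
    on = ctrl.update(False, 0.5)
```

Keeping separate presence and absence run lengths means a brief flicker of the signal in either direction does not toggle the camera, matching the energy-saving behaviour described above.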
It can be understood that a fishpond is provided with a plurality of different feeding points around the fishpond, each feeding point should be provided with a feeding auxiliary device 100 and a vertical-rod type image pickup device 200, that is, the feeding auxiliary devices 100 and the vertical-rod type image pickup devices 200 comprise a plurality of groups corresponding to each other one by one, and each feeding auxiliary device 100 can only be successfully matched with the corresponding image pickup device 200; referring to fig. 5, in an embodiment, the step of generating the start command of the image capturing apparatus includes:
s021, pairing the infrared detector of the feeding auxiliary device with the upright type camera device;
s022, generating a starting instruction of the successfully paired camera device according to the pairing result;
specifically, when the infrared detector of a certain feeding auxiliary device detects that a worker performs the operation, the worker is paired with all the image pickup devices immediately, and because the infrared detector can only be successfully paired with the image pickup device 200 at the corresponding position, a starting instruction of the successfully paired image pickup device is generated according to the pairing result, namely, only the image pickup device 200 at the corresponding position is started to acquire video streams; it will be appreciated that when there are simultaneous worker jobs at a plurality of locations, the image capturing apparatuses 200 at a plurality of corresponding locations are also simultaneously activated for video stream acquisition.
It should be noted that, although the human-body tracking function of the image capturing device 200 prevents data loss within the working range of the feeding auxiliary device 100, it also creates a risk of capturing video of a non-corresponding working area, collecting erroneous raw data from which the cloud server platform 300 would analyze an incorrect and inaccurate feeding conclusion. Referring to fig. 6, in an embodiment, the step of activating the camera based on the activation instruction to obtain video stream data of the feeding process includes:
s210, matching an infrared detector of the feeding auxiliary device with a vertical rod type camera device;
s220, acquiring video stream data of a feeding process in a range of a feeding auxiliary device corresponding to the successfully paired infrared detector according to the pairing result.
Specifically, when the infrared detector of a certain feeding auxiliary device detects a worker at work, it is immediately paired against all the image pickup devices; because an infrared detector can only pair successfully with the image pickup device 200 at the corresponding position, a start instruction is generated for the successfully paired device according to the pairing result, i.e. only the image pickup device 200 at the corresponding position is started to acquire a video stream. The image capturing device 200 is restricted to acquiring video stream data of the corresponding feeding process within the range of its feeding auxiliary device, for example by controlling the tracking angle of the image capturing device 200, or by performing intermediate processing on the video stream data to delete video data showing features of other regions.
In order to improve the feature extraction accuracy of the video information, referring to fig. 7, in other embodiments, the step of performing feature recognition on the video stream data to obtain the feeding parameter includes:
s310, acquiring key frame image data based on video stream data;
s320, acquiring feed packaging information based on the key frame image data, wherein the feed packaging information comprises packaging color, packaging shape, packaging weight, packaging residual quantity and a component table;
s330, carrying out cloud server deep learning on the feed packaging information to obtain the feeding parameters.
Specifically, the video stream data is first decomposed into a plurality of key-frame images, a key frame being an image that includes packaging information, so that packaging features can be extracted; the feed packaging information, comprising the packaging color, packaging shape, packaging weight, packaging residual amount and component table, is then extracted from the key-frame images; finally, cloud-server deep learning is performed on the feed packaging information to obtain the feeding parameters, i.e. information such as the feeding amount and the feed product is obtained by analyzing the filmed feeding process of the workers. The feeding parameters obtained in this embodiment are mainly the feeding amount, feed quality and feeding proportion.
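The S310-S330 flow can be sketched as follows. The frame records and the recognition result are simulated (a real system would run a vision model over the video stream), and the "weight minus residual" aggregation rule is an assumption made for illustration, not the patent's algorithm:

```python
# Sketch of S310-S330: select key frames containing packaging
# information and aggregate them into a feeding amount. The frames
# and the aggregation rule are invented for illustration.

frames = [  # (frame_index, packaging info recognized in frame, or None)
    (0, None),
    (12, {"color": "red", "shape": "sack", "weight_kg": 20, "remaining_kg": 0}),
    (30, None),
    (45, {"color": "red", "shape": "sack", "weight_kg": 20, "remaining_kg": 5}),
]

# S310: key frames are those in which packaging information is visible.
key_frames = [(i, info) for i, info in frames if info is not None]

def feeding_amount(key_frames):
    """S320/S330 stand-in: each bag contributes its packaging weight
    minus the residual amount left in the packaging."""
    return sum(info["weight_kg"] - info["remaining_kg"]
               for _, info in key_frames)

print(feeding_amount(key_frames))  # 20 + 15 = 35 kg in this example
```

The packaging residual amount is what lets the fed quantity differ from the nominal bag weight, which is why it appears in the feed packaging information above.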
In order to further improve the accuracy of acquiring the feeding parameters and improve the accuracy of the feeding decision, referring to fig. 8, in other embodiments, the step of performing feature recognition on the video stream data to obtain the feeding parameters further includes:
s340, acquiring audio data of the water surface of the fishpond based on the video stream data;
s350, acquiring activity information of fish shoals of the fish pond based on the audio data, wherein the activity comprises primary activity and secondary activity;
s360, server deep learning is carried out on the liveness information to obtain the feeding parameters.
Specifically, sound features are first extracted from the video stream data and processed to obtain activity information of the fish shoal in the fish pond: the louder the sound produced by the shoal jumping at the water surface, the higher the activity. The activity comprises primary activity and secondary activity. Primary activity indicates that the shoal is active and therefore hungry, so the period is suitable for feeding; secondary activity indicates that the shoal is quiet and not hungry, food would not be contended for at the water surface, and the period is not suitable for feeding. Therefore, this embodiment can perform server deep learning on the activity information to obtain the feeding parameters, here mainly the feeding time and the optimal feeding time, so that the optimal feeding moment can subsequently be determined and feeding can be avoided when activity is low.
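A minimal stand-in for the loudness-to-activity mapping might threshold the RMS energy of an audio window into the two activity levels named in S350. This is an assumption for illustration, not the patent's algorithm: the threshold, the sample format, and the function names are all invented.

```python
# Hypothetical sketch: louder surface splashing => higher fish activity.
import math

def rms(samples):
    """Root-mean-square energy of one audio window."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))


def classify_activity(samples, threshold=0.3):
    """Map surface-audio loudness to 'primary' (active, hungry shoal,
    suitable for feeding) or 'secondary' (quiet, not suitable)."""
    return "primary" if rms(samples) >= threshold else "secondary"


print(classify_activity([0.5, -0.6, 0.7, -0.4]))    # loud splashing → primary
print(classify_activity([0.01, -0.02, 0.01, 0.0]))  # quiet pond → secondary
```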
In order to manage the feeding workers well, different management means may be adopted. Referring to fig. 9, in other embodiments, after the step of starting the image capturing device based on the start instruction to obtain video stream data of the feeding process, the method further includes:
s800, performing feature recognition on the video stream data to obtain action behavior data;
s900, performing cloud server deep learning based on the action behavior data to obtain abnormal behavior data;
s1000, determining the abnormal behavior data as an abnormal behavior event;
s1100, generating a traceability prompt strategy based on the abnormal behavior event;
s1200, sending the traceability reminding information to the worker according to the traceability reminding strategy.
Specifically, feature recognition is first performed on the video stream data to obtain action behavior data, which is then analyzed for abnormal behavior, for example a feeding action count clearly below the normal value; actions such as smoking or leaving the post are also within the scope of the analysis. If such an action is found, it is recorded as an abnormal event and a traceability reminder strategy is generated. Finally, a traceability reminder message is sent to the worker according to that strategy, such as the text message "The system has detected smoking while working; please take note next time" or "The system has detected abnormal feeding behavior; please take note next time". In this way, traceability management of the feeding workers is achieved.
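The traceability flow S800-S1200 can be sketched as below. This is illustrative only: the baseline feeding count, the message templates, and the function names are assumptions; the patent names only smoking, leaving, and an abnormally low feeding count as examples of abnormal behavior.

```python
# Hypothetical sketch of abnormal-event detection and reminder generation.

NORMAL_FEED_ACTIONS = 20  # assumed baseline feeding-action count per session

MESSAGES = {
    "smoking": "The system detected smoking while working; please take note next time.",
    "leaving": "The system detected leaving the post while working; please take note next time.",
    "low_feeding": "The system detected abnormal feeding behavior; please take note next time.",
}


def detect_abnormal_events(actions, feed_action_count):
    """Turn recognized action data into abnormal-behavior events (S1000)."""
    events = [a for a in actions if a in ("smoking", "leaving")]
    if feed_action_count < NORMAL_FEED_ACTIONS // 2:  # clearly below normal
        events.append("low_feeding")
    return events


def trace_reminders(events):
    """Generate the reminder messages sent to the worker (S1100-S1200)."""
    return [MESSAGES[e] for e in events]


events = detect_abnormal_events(["feeding", "smoking", "feeding"],
                                feed_action_count=6)
for msg in trace_reminders(events):
    print(msg)
```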
In other embodiments, linkage control of the feeding machine can be added: when abnormal feeding behavior occurs, the feeding machine enters a protection state, namely a feeding-forbidden state, and the relevant responsible persons are notified remotely at the same time.
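A minimal sketch of this interlock, under assumed design choices (the class, the reset semantics, and the notification callback are all invented for illustration):

```python
# Hypothetical feeder interlock: abnormal behavior locks the feeder into
# a feeding-forbidden protection state and notifies a responsible person.

class Feeder:
    def __init__(self):
        self.protected = False

    def on_abnormal_event(self, notify):
        self.protected = True          # enter feeding-forbidden state
        notify("Abnormal feeding behavior detected; feeder locked.")

    def feed(self, grams):
        if self.protected:
            return 0                   # refuse to dispense while locked
        return grams


sent = []
feeder = Feeder()
feeder.on_abnormal_event(sent.append)
print(feeder.feed(500))  # → 0 while in the protection state
```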
Referring to fig. 10, the step of performing cloud server deep learning based on the feeding parameters to obtain a feeding strategy includes:
s410, obtaining fish swarm growth data and fish pond output data;
s420, establishing a deep learning algorithm model according to feeding parameters, fish shoal growth data and fish pond output data;
s430, obtaining a feeding strategy according to the physical characteristics of the deep learning algorithm model.
Specifically, the fish shoal growth data and fish pond output data may be data obtained from the farmer's observations, statistics and assessment experience during the growth of the shoal, or data from the sale of past shoals. A deep learning algorithm model is then built from the feeding parameters (feeding time, optimal feeding time, feed proportion and feeding amount), the fish shoal growth data and the fish pond output data; finally, a feeding strategy is obtained according to the physical characteristics of the deep learning algorithm model. The multi-party data are converged, summarized, analyzed and generalized in the cloud; combined with the growth condition of the shoal during cultivation, the optimal feed proportion and feeding mode can be predicted, and accurate formulated feeding data (different for each feeding) can be issued to each feeding machine for the feeding workers to operate by prompt. The feeding data can also be shared as a resource or offered as a paid service.
Therefore, different ponds can be fed with different feed proportions, feed brands and feeding times, the purpose being to compute in the cloud the influence of these parameters on shoal growth and thereby derive the optimal feeding mode and feed proportion. Furthermore, data from multiple fish farms can be aggregated and analyzed in the cloud: by analyzing the cultivation data of shoals of the same species in the same area and comparing the feeding modes and feed proportions used by high-yield and low-yield ponds, the parameters most critical to shoal growth can be identified. Accurately determining the feeding time, feeding amount and feed proportion through behavioral visual analysis of the feeding workers improves the overall cultivation efficiency of the fish pond and the economic benefit of the farmers.
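A deliberately simple stand-in for this cloud-side aggregation, not the patent's deep learning model: comparable pond records of the same species in the same area are gathered, and the feeding parameters of the highest-yield pond are recommended. The record fields and function name are assumptions.

```python
# Hypothetical cross-pond aggregation: recommend the feed ratio and daily
# amount used by the best-performing comparable pond.

def recommend_strategy(records, species, area):
    """Pick the feeding parameters of the highest-yield pond among
    ponds of the same species in the same area."""
    peers = [r for r in records if r["species"] == species and r["area"] == area]
    if not peers:
        return None
    best = max(peers, key=lambda r: r["yield_kg"])
    return {"feed_ratio": best["feed_ratio"],
            "daily_amount_kg": best["daily_amount_kg"]}


records = [
    {"species": "carp", "area": "A", "yield_kg": 900,
     "feed_ratio": 0.30, "daily_amount_kg": 12},
    {"species": "carp", "area": "A", "yield_kg": 1400,
     "feed_ratio": 0.35, "daily_amount_kg": 15},
    {"species": "carp", "area": "B", "yield_kg": 2000,
     "feed_ratio": 0.50, "daily_amount_kg": 20},
]
print(recommend_strategy(records, "carp", "A"))
# → {'feed_ratio': 0.35, 'daily_amount_kg': 15}
```

The deep learning model described in S420 would replace this lookup with a learned mapping from feeding parameters and growth data to predicted yield.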
The embodiment of the invention further provides a device for obtaining an accurate feeding amount, comprising a memory, a processor, and a control program stored on the memory for implementing the accurate feeding method. The processor executes the control program to implement the steps of the method for obtaining an accurate feeding amount through behavioral visual analysis of a feeding worker, the steps being as follows:
acquiring a starting instruction of the camera device;
starting the camera device based on the starting instruction to obtain video stream data of the feeding process;
performing feature recognition on the video stream data to obtain feeding parameters, wherein the feeding parameters comprise: the feeding time, the feeding amount, the type of feed fed, and/or the proportion of feed fed;
performing cloud server deep learning based on the feeding parameters to obtain a feeding strategy;
and obtaining accurate feeding quantity according to the feeding strategy, and carrying out accurate feeding according to the accurate feeding quantity.
The embodiment of the invention further provides a readable storage medium on which a control program is stored; when executed by a processor, the control program implements the following steps of the method for obtaining an accurate feeding amount through behavioral visual analysis of a feeding worker:
acquiring a starting instruction of the camera device;
starting the camera device based on the starting instruction to obtain video stream data of the feeding process;
performing feature recognition on the video stream data to obtain feeding parameters, wherein the feeding parameters comprise: the feeding time, the feeding amount, the type of feed fed, and/or the proportion of feed fed;
performing cloud server deep learning based on the feeding parameters to obtain a feeding strategy;
and obtaining accurate feeding quantity according to the feeding strategy, and carrying out accurate feeding according to the accurate feeding quantity.
The foregoing description is only of the preferred embodiments of the present invention and is not intended to limit the scope of the invention, and all equivalent structural changes made by the description of the present invention and the accompanying drawings or direct/indirect application in other related technical fields are included in the scope of the invention.

Claims (10)

1. A method for obtaining an accurate feeding amount through behavioral visual analysis of feeding workers, applied to a feeding analysis system, characterized in that the feeding analysis system comprises a feeding auxiliary device, a vertical rod type camera device and a cloud server platform, the vertical rod type camera device being configured to film the feeding process at the feeding auxiliary device and being in communication connection with the cloud server platform, and the method comprises the following steps:
acquiring a starting instruction of the camera device;
starting the camera device based on the starting instruction to obtain video stream data of the feeding process;
performing feature recognition on the video stream data to obtain feeding parameters, wherein the feeding parameters comprise: the feeding time, the feeding amount, the type of feed fed, and/or the proportion of feed fed;
performing cloud server deep learning based on the feeding parameters to obtain a feeding strategy;
and obtaining accurate feeding quantity according to the feeding strategy, and carrying out accurate feeding according to the accurate feeding quantity.
2. The method for obtaining an accurate feeding amount through behavioral visual analysis of a feeding worker according to claim 1, wherein the feeding auxiliary device comprises an infrared detector, and the step of acquiring the starting instruction of the camera device further comprises:
acquiring a detection signal of an infrared detector and the duration of the detection signal;
determining that a detection signal is present and that the duration is greater than a preset value;
generating a start instruction of the image pickup device.
3. The method for obtaining accurate feeding amount through behavioral visual analysis of a feeding worker according to claim 2, wherein the feeding auxiliary device and the upright type camera device comprise a plurality of groups corresponding to each other one by one, and the step of generating the starting instruction of the camera device comprises the following steps:
pairing the infrared detector of the feeding auxiliary device with the upright type camera device;
generating a starting instruction of the successfully paired camera device according to the pairing result; and/or,
the step of starting the camera device based on the starting instruction to obtain video stream data of the feeding process comprises the following steps:
pairing the infrared detector of the feeding auxiliary device with the upright type camera device;
and acquiring video stream data of the feeding process in the range of the feeding auxiliary device corresponding to the successfully paired infrared detector according to the pairing result.
4. The method for obtaining an accurate feeding amount through behavioral visual analysis of a feeding worker according to claim 1, further comprising, after the step of starting the camera device based on the starting instruction to obtain video stream data of the feeding process:
acquiring the detection signal of the infrared detector again, together with its duration;
determining that the detection signal is absent and that the duration of its absence is greater than a preset value, and closing the camera device.
5. The method for obtaining an accurate feeding amount through behavioral visual analysis of a feeding worker according to claim 1, wherein the step of performing feature recognition on the video stream data to obtain feeding parameters comprises:
acquiring key frame image data based on video stream data;
acquiring feed packaging information based on the key frame image data, wherein the feed packaging information comprises packaging color, packaging shape, packaging weight, packaging residual quantity and a component table;
and carrying out cloud server deep learning on the feed packaging information to obtain the feeding parameters.
6. The method for obtaining an accurate feeding amount through behavioral visual analysis of a feeding worker according to claim 5, wherein the step of performing feature recognition on the video stream data to obtain feeding parameters further comprises:
acquiring audio data of the water surface of the fishpond based on the video stream data;
acquiring liveness information of fish shoals of the fishpond based on the audio data, wherein the liveness comprises primary liveness and secondary liveness;
and carrying out server deep learning on the liveness information to obtain the feeding parameters.
7. The method for obtaining an accurate feeding amount through behavioral visual analysis of a feeding worker according to claim 1, further comprising, after the step of starting the camera device based on the starting instruction to obtain video stream data of the feeding process:
performing feature recognition on the video stream data to obtain action behavior data;
performing cloud server deep learning based on the action behavior data to obtain abnormal behavior data;
determining an abnormal behavior event according to the abnormal behavior data;
generating a traceability prompt strategy based on the abnormal behavior event;
and sending the tracing reminding information to the worker according to the tracing reminding strategy.
8. The method for obtaining accurate feeding amount through behavioral visual analysis of a feeding worker according to any one of claims 1 to 7, wherein the step of performing cloud server deep learning based on the feeding parameters to obtain a feeding strategy comprises:
acquiring fish swarm growth data and fish pond output data;
establishing a deep learning algorithm model according to the feeding parameters, the fish swarm growth data and the fish pond output data;
and obtaining a feeding strategy according to the physical characteristics of the deep learning algorithm model.
9. A device for obtaining accurate feeding amount, characterized by comprising a memory, a processor and a control program stored on the memory for realizing the accurate feeding method, wherein the processor is used for executing the control program for realizing the accurate feeding method so as to realize the steps of the method for obtaining the accurate feeding amount through the behavior visual analysis of a feeding worker according to any one of claims 1 to 8.
10. A readable storage medium, wherein a control program is stored on the readable storage medium, which when executed by a processor, implements the steps of the method for obtaining accurate feeding amount by behavioral visual analysis of a feeding worker according to any one of claims 1 to 8.
CN202211462489.5A 2022-11-21 2022-11-21 Method for obtaining accurate feeding amount through behavioral visual analysis of feeding workers Active CN116076421B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211462489.5A CN116076421B (en) 2022-11-21 2022-11-21 Method for obtaining accurate feeding amount through behavioral visual analysis of feeding workers


Publications (2)

Publication Number Publication Date
CN116076421A true CN116076421A (en) 2023-05-09
CN116076421B CN116076421B (en) 2024-04-16

Family

ID=86185726

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211462489.5A Active CN116076421B (en) 2022-11-21 2022-11-21 Method for obtaining accurate feeding amount through behavioral visual analysis of feeding workers

Country Status (1)

Country Link
CN (1) CN116076421B (en)

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110115980A1 (en) * 2009-11-17 2011-05-19 Shmueli Yaron Automatic control of visual parameters in video processing
CA2699372A1 (en) * 2010-04-09 2011-10-09 Kuo-Hung Tseng Aquacultural remote control system
CN105941176A (en) * 2016-05-06 2016-09-21 齐洪方 Intelligent culture system based on LabVIEW development platform and control method thereof
CN106774540A (en) * 2016-12-01 2017-05-31 上海工程技术大学 A kind of intelligent culture control system and method based on video identification
CN107894758A (en) * 2017-12-28 2018-04-10 安徽金桥湾农业科技有限公司 A kind of intelligent fish pond based on Internet of Things
CN108496868A (en) * 2018-04-09 2018-09-07 浙江庆渔堂农业科技有限公司 High density breed in stew automatic feeding system based on technology of Internet of things
CN108925481A (en) * 2018-05-31 2018-12-04 深圳市零度智控科技有限公司 A kind of automatic dispensing device of shrimp material and control method based on image recognition
JP2020078278A (en) * 2018-11-14 2020-05-28 株式会社 アイエスイー Automatic feeding method and automatic feeding system of farmed fish
WO2021016955A1 (en) * 2019-07-31 2021-02-04 唐山哈船科技有限公司 Feeding device and method for aquaculture
CN114586722A (en) * 2022-01-18 2022-06-07 江苏叁拾叁信息技术有限公司 Intelligent feeding unmanned ship and feeding method thereof
CN114627401A (en) * 2021-07-29 2022-06-14 广州机智云物联网科技有限公司 Fishpond management system, culture equipment control method and device and computer equipment
CN114638779A (en) * 2021-07-29 2022-06-17 广州机智云物联网科技有限公司 Textile quality inspection system, method and device, computer equipment and storage medium


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
夏英凯 (Xia Yingkai): "Research Progress on Underwater Robots for Aquaculture" (in Chinese), 华中农业大学学报 (Journal of Huazhong Agricultural University), 31 December 2021 (2021-12-31), pages 85 - 97 *
孟蕊 (Meng Rui): "Development Status and Prospects of Precision Feeding Management Technology for Livestock and Poultry" (in Chinese), 家畜生态学报 (Journal of Domestic Animal Ecology), 31 December 2021 (2021-12-31), pages 1 - 7 *

Also Published As

Publication number Publication date
CN116076421B (en) 2024-04-16

Similar Documents

Publication Publication Date Title
US20210161193A1 (en) System and method of estimating livestock weight
CN109717120B (en) Fish culture monitoring feeding system and method based on Internet of things
CN107667903B (en) Livestock breeding living body weight monitoring method based on Internet of things
US20130322699A1 (en) Methods and systems for determining and displaying animal metrics
CN109640641B (en) Bait casting system, image processing device and image processing method
US20220189192A1 (en) Growth evaluation device, growth evaluation method, and growth evaluation program
JP6842100B1 (en) Aquatic animal detection device, information processing device, terminal device, aquatic animal detection system, aquatic animal detection method, and aquatic animal detection program
CN111401386A (en) Monitoring method and device for livestock stall, intelligent cruise robot and storage medium
US20210076570A1 (en) Agricultural work apparatus, agricultural work management system, and program
JP7074185B2 (en) Feature estimation device, feature estimation method, and program
CN107454313A (en) The photographic method and camera system of agricultural intelligent device
JPWO2019198701A1 (en) Analyzers, analytical methods, programs and aquatic organism monitoring systems
CN115661650A (en) Farm management system based on data monitoring of Internet of things
CN116076421B (en) Method for obtaining accurate feeding amount through behavioral visual analysis of feeding workers
JP7223880B2 (en) Convolutional Neural Network Model for Detection of Dairy Cow Teats and its Construction Method
CN110889847A (en) Nuclear radiation damage assessment system and method based on infrared imaging
CN112775979A (en) Control method of pet accompanying robot, pet accompanying robot and chip
KR102404137B1 (en) Stationary Livestock weight estimation system based on 3D images and Livestock weight estimation method using the same
CN111144202B (en) Object control method, device and system, electronic equipment and storage medium
CN112153892B (en) Device for fly management
CN112116647B (en) Weighting method and weighting device
CN115760904A (en) Livestock and poultry statistical method, device, electronic equipment and medium
CN113705282A (en) Information acquisition device, system and method and information acquisition vehicle
TW202114518A (en) Method for automatically detecting and repelling birds
CN111652084A (en) Abnormal laying hen identification method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant