CN110161970B - Agricultural Internet of things integrated service management system - Google Patents


Info

Publication number
CN110161970B
Authority
CN
China
Prior art keywords
remote sensing
video
altitude remote
images
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201910481452.9A
Other languages
Chinese (zh)
Other versions
CN110161970A (en)
Inventor
彭荣君
韩天甲
唐吉龙
安宏艳
孟庆民
孟庆山
唐庆刚
李瑛
柳树林
张亚菲
王平
汪敏
闫大明
丁文强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Heilongjiang Beidahuang Agriculture Co ltd
Original Assignee
Qixing Farm In Heilongjiang Province
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qixing Farm In Heilongjiang Province filed Critical Qixing Farm In Heilongjiang Province
Priority to CN201910481452.9A
Publication of CN110161970A
Application granted
Publication of CN110161970B
Legal status: Expired - Fee Related

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05B CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B19/00 Programme-control systems
    • G05B19/02 Programme-control systems electric
    • G05B19/18 Numerical control [NC], i.e. automatically operating machines, in particular machine tools, e.g. in a manufacturing environment, so as to execute positioning, movement or co-ordinated operations by means of programme data in numerical form
    • G05B19/406 Numerical control [NC] characterised by monitoring or safety
    • G05B19/4065 Monitoring tool breakage, life or condition
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05B CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B2219/00 Program-control systems
    • G05B2219/30 Nc systems
    • G05B2219/37 Measurements
    • G05B2219/37616 Use same monitoring tools to monitor tool and workpiece

Abstract

The invention provides an agricultural Internet of things comprehensive service management system. The system comprises a monitoring subsystem, a meteorological subsystem, an underground water level monitoring subsystem and a control center subsystem. Video data and soil environment data obtained at the monitoring points, air environment data from the meteorological monitoring stations, and underground water level data are sent to the control center subsystem. Based on the received data, the control center subsystem predicts the growth of the corresponding crops, acquires soil element information and airborne environmental element information influencing crop growth, and monitors underground water level changes at the underground water level monitoring points. The system can thus realize intelligent management and control of the agricultural Internet of things.

Description

Agricultural Internet of things integrated service management system
Technical Field
The invention relates to an information processing technology, in particular to an agricultural Internet of things comprehensive service management system.
Background
The agricultural internet of things is an internet of things in which data collected by various instruments is displayed in real time or used as parameters in automatic control. It can provide a scientific basis for precise regulation and control of greenhouses, so as to increase yield, improve quality, regulate the growth cycle and improve economic benefit.
The agricultural internet of things is generally deployed as a monitoring network formed by a large number of sensor nodes. Information collected by various sensors helps farmers find problems in time and accurately locate where they occur, so that agriculture gradually shifts from a production mode centered on manpower and isolated machinery to one centered on information and software, making extensive use of automated, intelligent and remotely controlled production equipment.
Disclosure of Invention
The following presents a simplified summary of the invention in order to provide a basic understanding of some of its aspects. It should be understood that this summary is not an exhaustive overview of the invention. It is not intended to identify key or critical elements of the invention, nor to delimit its scope. Its sole purpose is to present some concepts in a simplified form as a prelude to the more detailed description that follows.
In view of this, the embodiment of the invention provides an agricultural internet of things comprehensive service management system, so as to at least solve the problems of low efficiency and incomplete functions in the existing agricultural management system.
The invention provides an agricultural Internet of things comprehensive service management system which comprises a monitoring subsystem, a meteorological subsystem, an underground water level monitoring subsystem and a control center subsystem. The monitoring subsystem comprises a plurality of monitoring points, wherein each monitoring point is provided with at least one video device, at least one first sensor and a first communication device; the at least one video device is used for capturing video data of a corresponding area, the at least one first sensor is used for acquiring soil environment data corresponding to the monitoring point, and the first communication device is used for sending the video data and soil environment data acquired at the corresponding monitoring point to the control center subsystem. The meteorological subsystem comprises a plurality of weather monitoring stations, wherein each weather monitoring station is provided with a plurality of second sensors and a second communication device; the second sensors are used for acquiring air environment data at the weather monitoring station, and the second communication device is used for sending that air environment data to the control center subsystem. The underground water level monitoring subsystem comprises a plurality of underground water level monitoring points, wherein each underground water level monitoring point is provided with an underground water level monitoring device and a third communication device; the underground water level monitoring device acquires underground water level data at the corresponding position in real time and transmits it to the control center subsystem through the third communication device. The control center subsystem comprises a fourth communication device and a control processing device; the fourth communication device is used for receiving all data from the monitoring subsystem, the meteorological subsystem and the underground water level monitoring subsystem and sending the data to the control processing device. The control processing device is used for: predicting the growth of the corresponding crops and acquiring soil element information influencing crop growth, at least based on the video data and environment data corresponding to each monitoring point received from the monitoring subsystem; acquiring information on airborne environmental elements influencing crop growth, at least based on the air environment data for each weather monitoring station received from the meteorological subsystem; and monitoring underground water level changes at the underground water level monitoring points, at least based on the underground water level data received from the underground water level monitoring subsystem.
The invention thus discloses an agricultural Internet of things comprehensive service management system comprising a monitoring subsystem, a meteorological subsystem, an underground water level monitoring subsystem and a control center subsystem. In the monitoring subsystem, the video device, first sensor and first communication device arranged at each monitoring point send the video data and soil environment data obtained at that point to the control center subsystem. In the meteorological subsystem, the second sensors and second communication device arranged at each weather monitoring station send the corresponding air environment data to the control center subsystem. In the underground water level monitoring subsystem, the underground water level monitoring device and third communication device arranged at each underground water level monitoring point send the acquired underground water level data to the control center subsystem. On this basis, the control center subsystem predicts the growth of the corresponding crops and obtains soil element information influencing crop growth, at least from the video data and environment data received from the monitoring subsystem; obtains information on airborne environmental elements influencing crop growth, at least from the air environment data received from the meteorological subsystem; and monitors underground water level changes at the monitoring points, at least from the underground water level data received from the underground water level monitoring subsystem.
Therefore, the agricultural Internet of things comprehensive service management system can realize intelligent agricultural Internet of things management and control.
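As a minimal sketch of the data flow just described, the fragment below models readings from the three field subsystems arriving at the control center, which files them by subsystem for later analysis. The message format, class names, and point identifiers are illustrative assumptions; the patent does not specify a data model or wire protocol.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Reading:
    subsystem: str   # "monitoring" | "weather" | "groundwater" (assumed labels)
    point_id: str    # monitoring point / station identifier
    kind: str        # e.g. "video", "soil", "air", "water_level"
    value: object

@dataclass
class ControlCenter:
    """Control center subsystem: the fourth communication device hands every
    received reading to the control processing device, which stores it
    grouped by originating subsystem."""
    store: Dict[str, List[Reading]] = field(default_factory=dict)

    def receive(self, reading: Reading) -> None:
        # fourth communication device -> control processing device
        self.store.setdefault(reading.subsystem, []).append(reading)

    def latest_water_levels(self) -> Dict[str, float]:
        # most recent underground water level per monitoring point
        levels: Dict[str, float] = {}
        for r in self.store.get("groundwater", []):
            levels[r.point_id] = float(r.value)  # later readings overwrite earlier ones
        return levels

center = ControlCenter()
center.receive(Reading("monitoring", "MP-1", "soil", {"moisture": 0.31}))
center.receive(Reading("groundwater", "GW-7", "water_level", 12.4))
center.receive(Reading("groundwater", "GW-7", "water_level", 12.1))
print(center.latest_water_levels())  # {'GW-7': 12.1}
```

In a real deployment the `receive` path would sit behind the fourth communication device (e.g. a network listener), and the stored readings would feed the growth prediction and monitoring analyses described above.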
In some implementations, a plurality of target frame images in a target video are obtained based on scene switching points (i.e., scene switching times), and a plurality of frame images to be detected in each video to be detected are obtained in the same way. A target frame image is the switched video frame corresponding to a scene switching point in the target video, and a frame image to be detected is the switched video frame corresponding to a scene switching point in a video to be detected. By comparing the similarity between each target frame image and each frame image to be detected, two kinds of information are obtained: one is the number of frame images in each video to be detected that are similar to some target frame image, and the other is the number of target frame images that have a similar frame in that video to be detected. Whether a video to be detected is similar to the target video is determined from the combination of these two kinds of information. On one hand, similar videos of the target video can thus be found more efficiently; on the other hand, the range to be searched in subsequent, finer similarity judgments is narrowed, greatly reducing the workload.
In addition, in one implementation, a plurality of target frame images in the target video and a plurality of frame images to be detected in each video to be detected may first be obtained based on scene switching points (i.e., scene switching times), where the target frame images are the switched video frames at the scene switching points of the target video and the frame images to be detected are the switched video frames at the scene switching points of each video to be detected. By comparing the similarity between each target frame image and each frame image to be detected, two kinds of information are obtained: the number of frame images in each video to be detected that are similar to some target frame image, and the number of target frame images that have a similar frame in that video to be detected. A first score for each video to be detected is determined from the combination of these two kinds of information (steps 401-403). A subset of the videos to be detected is then screened out as candidate videos based on the first score, so that a secondary screening can be performed on the candidates to finally obtain the videos similar to the target video; this secondary screening is realized by calculating a second score for each candidate video.
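The two frame-level counts behind the first score can be sketched as follows. The similarity predicate is supplied by the caller (in practice it might compare perceptual hashes or histograms), and the way the two counts are combined into a single score is an assumption; the text only says they are combined.

```python
from typing import Callable, List, Tuple

def first_score(
    target_frames: List[object],
    candidate_frames: List[object],
    is_similar: Callable[[object, object], bool],
) -> Tuple[int, int, float]:
    """Return (a) the number of candidate frames similar to at least one
    target frame, (b) the number of target frames similar to at least one
    candidate frame, and a combined first score. The combination used here
    (mean of the two coverage ratios) is illustrative only."""
    matched_candidates = sum(
        any(is_similar(t, c) for t in target_frames) for c in candidate_frames
    )
    matched_targets = sum(
        any(is_similar(t, c) for c in candidate_frames) for t in target_frames
    )
    score = 0.5 * (
        matched_candidates / max(len(candidate_frames), 1)
        + matched_targets / max(len(target_frames), 1)
    )
    return matched_candidates, matched_targets, score

# Toy frames represented as integers, with equality as the similarity test:
counts = first_score([1, 2, 3], [2, 3, 4], lambda a, b: a == b)
print(counts[:2])  # (2, 2)
```

Videos whose first score clears a chosen threshold would become the candidate videos for the secondary screening.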
When calculating the second score, the target video and each candidate video are first segmented at the scene switching points, yielding a plurality of first video segments for the target video and a plurality of second video segments for each candidate video. By comparing the similarity between the first video segments and the second video segments, two further kinds of information are obtained: one is the number of second video segments in the candidate video that are similar to some first video segment (i.e., the number of similar segments contained in the candidate video), and the other is the number of first video segments that have a similar segment in that candidate video. The second score of each candidate video is determined from the combination of these two kinds of information (steps 501-505), and the candidate videos are screened according to their second scores to determine which are similar to the target video. In this way, the first and second scores are obtained by combining four kinds of information in total, and the videos to be detected are screened twice, so that the similar videos obtained are more accurate.
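A sketch of the segment-level step: split each video at its scene-cut indices, then compute the two segment counts. As before, the segment-similarity predicate and the way the counts are folded into a score are assumptions for illustration.

```python
from typing import Callable, List, Sequence

def split_at_cuts(frames: Sequence, cut_indices: List[int]) -> List[Sequence]:
    """Split a video (a sequence of frames) into segments at the given
    scene-cut indices, mirroring the segmentation performed before the
    second score is computed."""
    bounds = [0] + sorted(cut_indices) + [len(frames)]
    return [frames[a:b] for a, b in zip(bounds, bounds[1:]) if b > a]

def second_score(
    target_segs: List[object],
    cand_segs: List[object],
    seg_similar: Callable[[object, object], bool],
) -> float:
    # number of candidate segments similar to some target segment
    similar_segments = sum(
        any(seg_similar(t, c) for t in target_segs) for c in cand_segs
    )
    # number of target segments matched by some candidate segment
    matched_targets = sum(
        any(seg_similar(t, c) for c in cand_segs) for t in target_segs
    )
    # combining the two counts as a mean of coverage ratios is an assumption
    return 0.5 * (
        similar_segments / max(len(cand_segs), 1)
        + matched_targets / max(len(target_segs), 1)
    )

print(split_at_cuts("abcdef", [2, 4]))  # ['ab', 'cd', 'ef']
```

Only candidates surviving the first screening would be segmented, which is what keeps the second, more expensive comparison cheap.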
Compared with the prior art of directly calculating the similarity of two whole videos, this approach greatly reduces the workload and improves processing efficiency. The primary screening by the first score operates only on the frame images after scene switches, which requires far less computation than a whole-video similarity calculation. The secondary screening is then applied only to the results of the primary screening, and even for a single candidate video it does not compute similarity over the whole video: the candidate is divided at the scene switching points, and similarity is computed only between a subset of the resulting segments (the similar segments mentioned above) and the corresponding segments of the target video. Compared with computing pairwise whole-video similarities, the amount of computation is therefore greatly reduced and the efficiency improved.
The unmanned aerial vehicle remote sensing technology provided by the invention uses the domestic Gaofen-1 satellite to collect remote sensing image data once every 8 days and generate a remote sensing image map, which is used for large-scale monitoring of crop growth, plant diseases and insect pests, flood disasters and autumn crop yield. Owing to weather factors, however, cloud cover in the data can exceed acceptable levels and obscure the surface crops, seriously distorting the reflectance spectrum of the satellite imagery; of the collected data, 31 scenes could not be used and only 8 scenes were usable. To solve this problem and improve data accuracy, low-altitude remote sensing monitoring is further added on the basis of the satellite high-altitude remote sensing technology. A multispectral imager is carried by the unmanned aerial vehicle and, combined with the low-altitude remote sensing technology, image data on crops, environment and growth vigor are further processed and analyzed, collecting big data on crop growth, pest and disease occurrence, yield prediction and the like. The unmanned aerial vehicle can also be used to accurately measure the area of every plot on the farm: traditional measurement methods are accurate to about 2 meters, whereas the unmanned aerial vehicle can reach a few centimeters, greatly improving data accuracy.
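The plot-area measurement can be illustrated with the shoelace formula applied to drone-surveyed boundary points. This assumes the points are already projected into a planar (x, y) coordinate system in meters; the coordinates below are hypothetical and not from the patent.

```python
from typing import List, Tuple

def plot_area_m2(boundary: List[Tuple[float, float]]) -> float:
    """Area of a simple polygon (shoelace formula) whose vertices are
    boundary points surveyed by the drone, in projected meters."""
    n = len(boundary)
    s = 0.0
    for i in range(n):
        x1, y1 = boundary[i]
        x2, y2 = boundary[(i + 1) % n]  # wrap around to close the polygon
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

# A hypothetical rectangular plot, 120 m by 80 m:
field = [(0, 0), (120, 0), (120, 80), (0, 80)]
print(plot_area_m2(field))  # 9600.0
```

With centimeter-accurate boundary points, the area error of this computation is dominated by the survey accuracy rather than the formula itself.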
These and other advantages of the present invention will become more apparent from the following detailed description of the preferred embodiments of the present invention, taken in conjunction with the accompanying drawings.
Drawings
The invention may be better understood by referring to the following description in conjunction with the accompanying drawings, in which like reference numerals are used throughout the figures to indicate like or similar parts. The accompanying drawings, which are incorporated in and form a part of this specification, illustrate preferred embodiments of the present invention and, together with the detailed description, serve to further explain the principles and advantages of the invention. In the drawings:
fig. 1 is a schematic diagram illustrating an exemplary system composition of an agricultural internet of things integrated service management system of the present invention;
fig. 2 is a schematic diagram showing another possible system composition of the agricultural internet of things integrated service management system of the present invention;
fig. 3 is a schematic diagram showing an exemplary flow of a part of processing performed by a server side in the agricultural internet of things integrated service management system of the present invention;
FIG. 4 is a flow chart illustrating one possible process of calculating a first score for a video to be detected;
fig. 5 is a flow chart illustrating one possible process of step 308.
Skilled artisans appreciate that elements in the figures are illustrated for simplicity and clarity and have not necessarily been drawn to scale. For example, the dimensions of some of the elements in the figures may be exaggerated relative to other elements to help improve the understanding of the embodiments of the present invention.
Detailed Description
Exemplary embodiments of the present invention will be described hereinafter with reference to the accompanying drawings. In the interest of clarity and conciseness, not all features of an actual implementation are described in the specification. It will of course be appreciated that in the development of any such actual embodiment, numerous implementation-specific decisions must be made to achieve the developers' specific goals, such as compliance with system-related and business-related constraints, which will vary from one implementation to another. Moreover, it will be appreciated that such a development effort might be complex and time-consuming, but would nevertheless be a routine undertaking for those of ordinary skill in the art having the benefit of this disclosure.
It should be noted that, in order to avoid obscuring the present invention with unnecessary details, only the device structures and/or processing steps closely related to the solution according to the present invention are shown in the drawings, and other details not so relevant to the present invention are omitted.
The embodiment of the invention provides an agricultural Internet of things comprehensive service management system comprising a monitoring subsystem, a meteorological subsystem, an underground water level monitoring subsystem and a control center subsystem. The monitoring subsystem comprises a plurality of monitoring points, each provided with at least one video device, at least one first sensor and a first communication device; the at least one video device captures video data of a corresponding area, the at least one first sensor acquires soil environment data for the monitoring point, and the first communication device sends the video data and soil environment data acquired at the monitoring point to the control center subsystem. The weather subsystem comprises a plurality of weather monitoring stations, each provided with a plurality of second sensors and a second communication device; the second sensors acquire the air environment data at the weather monitoring station, and the second communication device sends that data to the control center subsystem. The underground water level monitoring subsystem comprises a plurality of underground water level monitoring points, each provided with an underground water level monitoring device and a third communication device; the underground water level monitoring device acquires underground water level data at the corresponding position in real time and transmits it to the control center subsystem through the third communication device. The control center subsystem comprises a fourth communication device and a control processing device; the fourth communication device receives all data from the monitoring subsystem, the meteorological subsystem and the underground water level monitoring subsystem and sends the data to the control processing device. The control processing device is used for: predicting the growth of the corresponding crops and acquiring soil element information influencing crop growth, at least based on the video data and environment data received from the monitoring subsystem; acquiring information on airborne environmental elements influencing crop growth, at least based on the air environment data received from the weather subsystem; and monitoring underground water level changes at the monitoring points, at least based on the underground water level data received from the underground water level monitoring subsystem.
Fig. 1 shows an example of the agricultural internet of things integrated service management system.
As shown in fig. 1, the agricultural internet of things integrated service management system includes a monitoring subsystem 110, a weather subsystem 120, a ground water level monitoring subsystem 130 and a control center subsystem 140.
The monitoring subsystem 110 includes a plurality of monitoring points, where each monitoring point is provided with at least one video device (e.g., a general camera, a panoramic camera, etc.), at least one first sensor, and a first communication device, where the at least one video device is configured to capture video data of a corresponding area, the at least one first sensor is configured to obtain soil environment data corresponding to the monitoring point, and the first communication device is configured to send the video data and the soil environment data obtained by the corresponding monitoring point to the control center subsystem.
Wherein the at least one first sensor may comprise a plurality of different types of sensors, such as existing sensors for detecting the soil environment.
The first communication means may be, for example, a wifi communication module, or may be a module such as bluetooth.
The weather subsystem 120 includes a plurality of weather monitoring stations, wherein each weather monitoring station is provided with a plurality of second sensors and a second communication device; the second sensors are used for acquiring the air environment data corresponding to the weather monitoring station, and the second communication device is used for transmitting that data to the control center subsystem.
The plurality of second sensors may include a plurality of different types of sensors, such as existing sensors for detecting an air environment.
The second communication means may be, for example, a wifi communication module, or may be a module such as bluetooth.
The groundwater level monitoring subsystem 130 comprises a plurality of groundwater level monitoring points, wherein each groundwater level monitoring point is provided with a groundwater level monitoring device and a third communication device, the groundwater level monitoring device is used for acquiring groundwater level data of a corresponding position in real time, and the acquired groundwater level data is sent to the control center subsystem through the third communication device.
The third communication device may be, for example, a wifi communication module, or may be a module such as bluetooth.
The control center subsystem 140 includes a fourth communication device and a control processing device, and the fourth communication device is used for receiving all data from the monitoring subsystem, the weather subsystem and the ground water level monitoring subsystem and sending the data to the control processing device.
The fourth communication device may be, for example, a wifi communication module, or may be a module such as bluetooth.
The control processing device can predict the growth of the corresponding crops (for example, using existing prediction techniques) and acquire the soil element information influencing crop growth, at least based on the video data and environment data corresponding to each monitoring point received from the monitoring subsystem.
The control processing device can also acquire the information of the environmental elements in the air influencing the growth of the crops at least based on the corresponding air environment data at each weather monitoring station received from the weather subsystem.
In addition, the control processing device can monitor the underground water level change conditions of the underground water level monitoring points at least based on the underground water level data corresponding to the underground water level monitoring points received from the underground water level monitoring subsystem.
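One way the control processing device might monitor water level changes is to keep a rolling window of readings per monitoring point and flag any point whose level has fallen by more than a threshold over the window. The window length and threshold below are illustrative assumptions, not values from the patent.

```python
from collections import deque
from typing import Deque, Dict, Optional

class GroundwaterMonitor:
    """Track recent underground water level readings per monitoring point
    and raise an alert when the level drops by more than `drop_threshold`
    meters across the retained window of readings."""

    def __init__(self, window: int = 24, drop_threshold: float = 0.5):
        self.window = window                  # readings kept per point (assumed)
        self.drop_threshold = drop_threshold  # meters (assumed)
        self.history: Dict[str, Deque[float]] = {}

    def update(self, point_id: str, level_m: float) -> Optional[str]:
        h = self.history.setdefault(point_id, deque(maxlen=self.window))
        h.append(level_m)
        # compare the oldest retained reading with the newest one
        if len(h) >= 2 and h[0] - h[-1] > self.drop_threshold:
            return f"water level at {point_id} fell {h[0] - h[-1]:.2f} m"
        return None

monitor = GroundwaterMonitor()
monitor.update("GW-1", 12.0)
monitor.update("GW-1", 11.8)
print(monitor.update("GW-1", 11.3))  # alert: a 0.70 m drop exceeds 0.5 m
```

A production system would also handle sensor dropouts and rising levels (flood risk), which are omitted here for brevity.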
Fig. 2 shows another example of the agricultural internet of things integrated service management system.
As an example, the agricultural internet of things integrated service management system may include, in addition to the portions shown in fig. 1, a geographic information subsystem 150 and an agricultural drone and satellite remote sensing subsystem 160.
The geographic information subsystem 150 includes an electronic map of a preset farm, and a plurality of preset positions on the electronic map are provided with marking information.
The agricultural drone and satellite remote sensing subsystem 160 includes a drone end, a satellite communication end, and a server end.
The unmanned aerial vehicle end is suitable for collecting low-altitude remote sensing images of preset planting areas of the agricultural Internet of things for multiple times and sending the low-altitude remote sensing images to the server end in real time;
the satellite communication terminal is suitable for acquiring a high-altitude remote sensing image of a preset planting area of the agricultural Internet of things and sending the high-altitude remote sensing image to the server terminal in real time;
the server side is adapted to perform at least one of crop growth prediction, insect pest detection, and flood disaster analysis and early warning, based on the low-altitude remote sensing images from the unmanned aerial vehicle side and/or the high-altitude remote sensing images from the satellite communication side.
For example, the annotation information includes one or more of land information, water conservancy information, and forestry information.
For example, in the greenhouse control system, the temperature sensor, humidity sensor, pH sensor, illuminance sensor, CO2 sensor and other Internet of things devices detect physical parameters such as temperature, relative humidity, pH, illumination intensity, soil nutrients and CO2 concentration in the environment, so as to ensure a good and suitable growing environment for the crops. Remote control allows technicians to monitor and control the environment of multiple greenhouses from the office. Wireless networks are used to measure and maintain the optimal conditions for crop growth.
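The greenhouse control loop sketched above reduces, at its simplest, to comparing each reading against a set-point range and emitting a corrective action when it is out of range. The set points and action names below are illustrative assumptions, not values from the patent.

```python
# Illustrative set points: (low, high) acceptable range per parameter.
SETPOINTS = {
    "temperature_c": (18.0, 28.0),
    "humidity_pct":  (60.0, 85.0),
    "co2_ppm":       (400.0, 1000.0),
}

def control_actions(readings: dict) -> list:
    """Compare each sensor reading against its set-point range and emit a
    simple corrective action for every out-of-range value."""
    actions = []
    for key, value in readings.items():
        lo, hi = SETPOINTS[key]
        if value < lo:
            actions.append(f"raise {key}")
        elif value > hi:
            actions.append(f"lower {key}")
    return actions

print(control_actions({"temperature_c": 31.0, "humidity_pct": 70.0, "co2_ppm": 350.0}))
# ['lower temperature_c', 'raise co2_ppm']
```

Mapping "raise temperature" to an actual actuator (heater, vent, CO2 injector) is the part that varies per greenhouse installation.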
In unmanned aerial vehicle remote sensing, a small digital camera (or scanner) is usually used as the airborne remote sensing device. Compared with traditional aerial photographs, the resulting images are small in size and large in number, so corresponding software has been developed to process them interactively according to the characteristics of the remote sensing images, the camera calibration parameters, the attitude data recorded during shooting (or scanning), and the relevant geometric models. The system also includes automatic image recognition and rapid stitching software, enabling quick inspection of image and flight quality and rapid data processing, and thereby meeting the real-time requirements of the whole system.
Fig. 3 shows a flowchart of processing executed by a server side in the agricultural internet of things integrated service management system according to the invention.
First, the server side groups the received low-altitude and high-altitude remote sensing images and generates a video to be detected from each group of images, thereby obtaining a plurality of videos to be detected (this step is not shown in fig. 3).
Then, as shown in fig. 3, in step 301, a target video is received. The target video is received from outside, for example from a user terminal. It can be a video file in any format, or a file conforming to one of several preset formats, such as MPEG-4, AVI, MOV, ASF, 3GP, MKV or FLV.
Next, in step 302, a plurality of scene cut times in the target video are determined. Step 302 may, for example, detect a scene change time in the target video by using the prior art, which is not described herein again.
Next, in step 303, for each scene switching time in the target video, the switched video frame corresponding to that time is obtained. That is, at each scene switching point (i.e., scene switching time), the frame before the switch is referred to as the pre-switching video frame, and the frame after the switch is referred to as the switched video frame. Thus, one or more switched video frames can be obtained from a target video (or zero switched video frames, when the video never switches scenes and always shows the same scene).
Then, in step 304, the first frame image of the target video and the switched video frames corresponding to all scene switching times in the target video are taken as the target frame images (if there is no switched video frame in the target video, there is only one target frame image, namely the first frame image of the target video), and the total number of target frame images is recorded as N, where N is a positive integer. Generally, N is 2 or more; N equals 1 only when there is no switched video frame in the target video.
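The extraction of target frame images in steps 303 and 304 can be sketched as follows. This is a minimal illustration, assuming a video is represented as a list of frames and the scene switching times are given as the indices of the post-switch frames; all names are illustrative, not from the patent.

```python
def extract_target_frames(frames, switch_indices):
    """Target frame images: the first frame of the video plus the switched
    (post-switch) video frame at each scene switching point."""
    targets = [frames[0]] + [frames[i] for i in switch_indices]
    return targets, len(targets)  # N = total number of target frame images
```

With no scene switches, `switch_indices` is empty and N equals 1, matching the text.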
Next, in step 305, for each video to be detected in a predetermined video database, determining a plurality of scene switching moments in the video to be detected, obtaining a switched video frame corresponding to each scene switching moment in the video to be detected, and taking a first frame image of the video to be detected and switched video frames corresponding to all scene switching moments in the video to be detected as frame images to be detected.
The preset video database stores a plurality of videos serving as the videos to be detected in advance. For example, the predetermined video database may be a database stored in a video playing platform, or a database stored in a memory such as a network cloud disk.
Then, in step 306, for each target frame image, the similarity between each frame image to be detected of each video to be detected and the target frame image is calculated, and the frame image to be detected, the similarity of which with the target frame image is higher than the first threshold, is determined as the candidate frame image corresponding to the video to be detected. The first threshold may be set according to an empirical value, for example, the first threshold may be 80% or 70%, or the like.
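A minimal sketch of step 306 for one video to be detected, assuming the frame images have already been extracted and a frame-similarity function is supplied from outside; the function name, signature and the returned counts (a1, a2, as used in the scoring below) are illustrative assumptions.

```python
def candidate_frames(target_frames, frames_to_detect, similarity, first_threshold=0.8):
    """For one video to be detected, return (a1, a2):
    a1 - number of its frame images whose similarity to some target frame
         image exceeds the first threshold (candidate frame images);
    a2 - number of target frame images related to those candidate frames."""
    candidates, related_targets = set(), set()
    for ti, tf in enumerate(target_frames):
        for fi, ff in enumerate(frames_to_detect):
            if similarity(tf, ff) > first_threshold:
                candidates.add(fi)
                related_targets.add(ti)
    return len(candidates), len(related_targets)
```

Any frame-similarity measure (e.g., a histogram comparison) could be plugged in as `similarity`.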
Then, in step 307, for each video to be detected, a first score of the video to be detected is calculated.
For example, for each video to be detected, a first score of the video to be detected can be obtained by performing steps 401 to 403 as shown in fig. 4.
In step 401, the number of candidate frame images corresponding to the video to be detected is calculated and recorded as a1, where a1 is a non-negative integer.
Next, in step 402, the number of all target frame images related to each candidate frame image corresponding to the video to be detected is calculated and recorded as a2, where a2 is a non-negative integer.
Then, in step 403, the first score of the video to be detected is calculated according to the following formula: S1 = q1 × a1 + q2 × a2.
S1 is the first score of the video to be detected, q1 represents the weight corresponding to the number of candidate frame images corresponding to the video to be detected, q2 represents the weight corresponding to the number of all target frame images related to each candidate frame image corresponding to the video to be detected, wherein q1 is equal to the preset first weight value.
Alternatively, the first weight value is, for example, equal to 0.5, which may also be set empirically.
When a2 is equal to N, q2 is equal to a preset second weight value.
When a2 < N, q2 is equal to a preset third weight value.
Wherein the second weight value is greater than the third weight value.
Alternatively, the second weight value is equal to 1, for example, and the third weight value is equal to 0.5, for example, or the second weight value and the third weight value may be set empirically.
Alternatively, the second weight value may be equal to d times the third weight value, d being a real number greater than 1. Where d can be an integer or a decimal number, for example, d can be an integer or a decimal number greater than or equal to 2, such as 2, 3, or 5, and so on.
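The first-score rule of steps 401 to 403, using the example weight values given above (q1 = 0.5, second weight value 1, third weight value 0.5), can be sketched as follows; the function name and default values are illustrative, not part of the patent.

```python
def first_score(a1, a2, n, q1=0.5, second_weight=1.0, third_weight=0.5):
    """S1 = q1*a1 + q2*a2, where q2 is the second weight value when a2 == N
    (every target frame image is matched) and the third weight value otherwise."""
    q2 = second_weight if a2 == n else third_weight
    return q1 * a1 + q2 * a2
```

With a1 = a2 = 4 and N = 4 (as for video v2 in example 1), this gives 0.5 × 4 + 1 × 4 = 6.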
Referring to fig. 3, after step 307 is executed (for example, after completing the processing of step 307 through steps 401 to 403), in step 308, similar videos of the target video are determined among the videos to be detected according to the first score of each video to be detected.
Optionally, in step 308, the step of determining similar videos of the target video in the videos to be detected according to the first score of each video to be detected may include: and selecting the video to be detected with the first score higher than the second threshold value from all the videos to be detected as the similar video of the target video. The second threshold may be set according to an empirical value, for example, the second threshold may be equal to 5, and different values may be set according to different application conditions.
Thus, through the processing of steps 301 to 308, similar videos similar to the target video can be determined in the predetermined video database.
Thus, a plurality of target frame images in the target video are obtained based on the scene switching points (i.e., scene switching times), and a plurality of frame images to be detected in each video to be detected are obtained in the same way: the target frame images are the switched video frames corresponding to the scene switching points in the target video, and the frame images to be detected are the switched video frames corresponding to the scene switching points in each video to be detected. By comparing the similarity between each target frame image and each frame image to be detected of each video to be detected, two kinds of information are obtained: one is the number of frame images to be detected in each video to be detected that are related to the target frame images (i.e., the number of candidate frame images in that video), and the other is the number of target frame images related to each video to be detected (i.e., the number of target frame images similar to some frame image to be detected in that video). Whether a video to be detected is similar to the target video is determined from the combination of these two kinds of information. On one hand, similar videos of the target video can thus be obtained more efficiently; on the other hand, the range to be searched in subsequent, finer similarity judgments is narrowed, greatly reducing the workload.
In a preferred example (hereinafter, example 1), suppose the target video has 3 scene switching points, so that it has 4 target frame images (including the first frame), denoted p1 to p4, i.e., the total number N of target frame images is 4. Suppose a certain video to be detected, v1, has 5 scene switching points, so that it has 6 frame images to be detected, denoted p'1 to p'6. Each of the 6 frame images to be detected is compared for similarity with each of the 4 target frame images. Suppose that only the similarity between p'2 and p1 is higher than the first threshold; then the number a1 of candidate frame images corresponding to v1 is 1, the number a2 of target frame images related to those candidate frame images is also 1, and since a2 < N, q2 equals the third weight value 0.5, so the first score of v1 is S1 = 0.5 × 1 + 0.5 × 1 = 1.
Assume another video to be detected, v2, for which, through similar processing, the number a1 of candidate frame images is 4 and the number a2 of all target frame images related to those candidate frame images is 4, so that a2 = N and hence q2 = 1. The first score of v2 is then S1 = q1 × a1 + q2 × a2 = 0.5 × 4 + 1 × 4 = 6.
Thus, in example 1, the first score of the video to be detected v2 is much higher than that of v1. Assuming the second threshold is 5 (different values may be set in other examples), the video to be detected v2 is determined to be a similar video of the target video, while v1 is not.
In addition, in step 308, the step of determining similar videos of the target video among the videos to be detected according to the first score of each video to be detected may also include the processing shown in fig. 5.
As shown in fig. 5, the process of step 308 described above can be implemented by steps 501-506.
In step 501, among all videos to be detected, those whose first score is higher than the second threshold are selected as candidate videos.
Next, in step 502, the target video is segmented based on the multiple scene switching times of the target video to obtain multiple first video segments corresponding to the target video, and the total number of first video segments in the target video is recorded as M, where M is a positive integer.
Then, in step 503, for each candidate video, the candidate video is segmented based on the scene change time instants of the candidate video, and a plurality of second video segments corresponding to the candidate video are obtained.
Next, in step 504, for a second video segment corresponding to each candidate frame image of each candidate video, a first video segment related to a target frame image corresponding to the candidate frame image is selected from a plurality of first video segments, a similarity calculation is performed between the selected first video segment and the second video segment, and if the similarity between the first video segment and the second video segment is higher than a third threshold, the second video segment is determined as a similar segment corresponding to the first video segment. Wherein the third threshold value may be set according to an empirical value, for example, the third threshold value may be equal to 60% or 70% or 80% or 90%, etc.
For example, the similarity calculation between two video segments can be implemented by using the prior art, and is not described herein again.
Then, in step 505, for each candidate video, the number of similar segments contained in the candidate video is calculated and recorded as b1 (a non-negative integer), and the number of all first video segments related to the similar segments contained in the candidate video is calculated and recorded as b2 (a non-negative integer). A second score of the candidate video is then calculated according to the following formula: S2 = q3 × b1 + q4 × b2, where S2 is the second score of the candidate video, q3 represents the weight corresponding to the number of similar segments contained in the candidate video, and q4 represents the weight corresponding to the number of all first video segments related to those similar segments. Here q3 is equal to a preset fourth weight value; q4 is equal to a preset fifth weight value when b2 = M, and to a preset sixth weight value when b2 < M, where the fifth weight value is greater than the sixth weight value. The fourth to sixth weight values may be set empirically.
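The second score has the same shape as the first score, only over video segments instead of frames. A sketch follows; the concrete default weight values (fourth weight 0.5, fifth weight 1, sixth weight 0.5) are assumptions by analogy with the first-score example, since the patent only states that they are set empirically.

```python
def second_score(b1, b2, m, q3=0.5, fifth_weight=1.0, sixth_weight=0.5):
    """S2 = q3*b1 + q4*b2; q4 is the fifth weight value when b2 == M
    (every first video segment has a similar segment), else the sixth."""
    q4 = fifth_weight if b2 == m else sixth_weight
    return q3 * b1 + q4 * b2
```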
Then, in step 506, similar videos of the target video are determined among the candidate videos according to the second score of each candidate video.
Optionally, step 506 may include: among all the candidate videos, a candidate video in which the second score is higher than the fourth threshold is selected as a similar video of the target video. The fourth threshold may be set according to an empirical value, for example, the fourth threshold may be equal to 5, and different values may be set according to different application conditions.
Thus, in one implementation, a plurality of target frame images in the target video are first obtained based on the scene switching points (i.e., scene switching times), and a plurality of frame images to be detected in each video to be detected are obtained in the same way, the target frame images being the switched video frames corresponding to the scene switching points in the target video and the frame images to be detected being the switched video frames corresponding to the scene switching points in each video to be detected. By comparing the similarities between each target frame image and each frame image to be detected, two kinds of information are obtained: the number of candidate frame images in each video to be detected, and the number of target frame images related to each video to be detected. A first score of each video to be detected is determined from the combination of these two kinds of information (steps 401 to 403), and part of the videos to be detected are then screened out as candidate videos based on the first score. The aim is to perform a secondary screening on the candidate videos so as to finally obtain the similar videos of the target video, and this secondary screening is realized by calculating a second score for each candidate video.
When calculating the second score, the target video and each candidate video are first segmented based on the scene switching points, yielding a plurality of first video segments corresponding to the target video and a plurality of second video segments corresponding to each candidate video. By comparing the similarity between the first video segments of the target video and the second video segments of a candidate video, two further kinds of information are obtained: one is the number of second video segments in the candidate video that are related to the target video (i.e., the number of similar segments contained in the candidate video), and the other is the number of first video segments related to each candidate video (i.e., the number of all first video segments related to the similar segments contained in that candidate video). The second score of each candidate video is determined from the combination of these two kinds of information (steps 501 to 505), and the candidate videos are screened according to their second scores to determine which videos are similar to the target video. Therefore, the first score and the second score of each video to be detected (or candidate video) are obtained by combining four kinds of information in total, and the videos to be detected are screened twice using the first and second scores, so that the similar videos obtained by screening are more accurate.
Compared with the prior art of directly calculating the similarity of two whole videos, this approach greatly reduces the workload and improves processing efficiency. The primary screening by the first score is based only on the frame images after scene switching, which requires far less computation than a whole-video similarity calculation. The secondary screening then operates only on the results of the primary screening; moreover, for a single candidate video it does not compute the similarity of the whole video, but segments the candidate video at the scene switching points and computes similarities only between a subset of the resulting segments (namely the similar segments mentioned above) and the corresponding segments of the target video. Compared with computing pairwise similarities between every two whole videos, the amount of calculation is thus greatly reduced and the efficiency improved.
According to one embodiment, the agricultural internet of things integrated service management system may further include a yield prediction platform, and the yield prediction platform includes a first model training unit, a second model training unit, a first prediction unit, a second prediction unit, and a third prediction unit.
The first model training unit can take each group of low-altitude remote sensing images and high-altitude remote sensing images in the historical data as input, take the real yield grades corresponding to each group of low-altitude remote sensing images and high-altitude remote sensing images in the historical data as output, train a preset convolutional neural network model, and take the trained preset convolutional neural network model as a first prediction model.
The yield grade referred to herein (e.g., the grade in "real yield grade" above, or in "predicted yield grade" below) is one of a plurality of different grades set in advance. For example, a number of yield grades may be preset empirically or experimentally, such as 3 grades (or 2, 4, 5, 8, 10 grades, etc.), where the first grade corresponds to a yield range of x1 to x2 (e.g., 1 to 1.2 thousand kilograms), the second grade to a range of x2 to x3 (e.g., 1.2 to 1.4 thousand kilograms), and the third grade to a range of x3 to x4 (e.g., 1.4 to 1.6 thousand kilograms).
For example, if the yield is 1.5 thousand kilograms, the corresponding yield grade is the third grade.
If the yield is exactly equal to a boundary value, the lower grade is taken. For example, a yield of 1.2 thousand kilograms corresponds to the first grade.
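Under the three-grade example above (boundaries 1, 1.2, 1.4 and 1.6 thousand kilograms), the grade lookup, with boundary values assigned to the lower grade, might be sketched as follows; the concrete boundary values and function names are illustrative.

```python
import bisect

GRADE_BOUNDS = (1.0, 1.2, 1.4, 1.6)  # x1..x4, in thousand kilograms

def yield_grade(y, bounds=GRADE_BOUNDS):
    """Grade i covers the range (x_i, x_{i+1}]; a yield exactly equal to a
    boundary value takes the lower grade (e.g. 1.2 -> first grade)."""
    g = bisect.bisect_left(bounds, y)
    if not 1 <= g < len(bounds):
        raise ValueError("yield outside the graded range")
    return g
```

`bisect_left` returns the index of the boundary itself when `y` equals it, which is exactly the "boundary goes to the lower grade" rule.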
It should be noted that each set of the low-altitude remote sensing image and the high-altitude remote sensing image may include more than one low-altitude remote sensing image, and may also include more than one high-altitude remote sensing image.
The historical data comprises a plurality of groups of low-altitude remote sensing images and high-altitude remote sensing images, and real yield grades, corresponding weather data and corresponding pest data corresponding to each group of low-altitude remote sensing images and high-altitude remote sensing images; in addition, the historical data can also comprise the real yield corresponding to each group of low-altitude remote sensing images and high-altitude remote sensing images. Each set of low-altitude and high-altitude remote sensing images (and corresponding real yield grade, real yield, corresponding weather data, corresponding pest data and the like) corresponds to a historical case.
The weather data may be in vector form, for example (t1, t2) (or with more dimensions), where t1 and t2 take the value 0 or 1; 0 means the corresponding item is false and 1 means it is true. For example, t1 indicates drought and t2 indicates flooding. Thus weather data (0,1) indicates no drought but flooding, while weather data (0,0) indicates neither drought nor flooding.
Likewise, the pest data may be in vector form, for example (h1, h2, h3, h4, h5) (or with fewer or more dimensions), where h1 to h5 take the value 0 or 1; 0 means the corresponding item is false and 1 means it is true. For example, h1 indicates whether the number of pest occurrences is 0, h2 whether it is 1 to 3, h3 whether it is 3 to 5, h4 whether it is more than 5, and h5 whether the total area of pest occurrence exceeds a predetermined area (which may be set empirically or determined by tests). For example, pest data (1,0,0,0,0) indicates that no pest has occurred, while pest data (0,0,1,0,1) indicates that 3 to 5 pest occurrences have happened and that the total area of pest occurrence exceeds the predetermined area.
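A sketch of the indicator-vector encodings just described. The exact bucket edges are an assumption (chosen as 0, 1-3, 4-5, >5 to make the buckets disjoint, since the stated ranges overlap at their boundaries), and all names are illustrative.

```python
def encode_weather(drought: bool, flood: bool) -> tuple:
    # (t1, t2): 1 if the condition holds, 0 otherwise
    return (int(drought), int(flood))

def encode_pests(occurrences: int, total_area: float, area_limit: float) -> tuple:
    # (h1..h5): occurrence-count buckets plus the total-area indicator
    return (int(occurrences == 0),
            int(1 <= occurrences <= 3),
            int(3 < occurrences <= 5),
            int(occurrences > 5),
            int(total_area > area_limit))
```

These reproduce the examples in the text: no drought but flooding gives (0,1), and 4 pest occurrences over an excessive area give (0,0,1,0,1).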
The second model training unit can obtain a first prediction yield grade corresponding to each group of low-altitude remote sensing images and high-altitude remote sensing images in the historical data by using the first prediction model, namely, after the first prediction model is trained, each group of low-altitude remote sensing images and high-altitude remote sensing images are input into the first prediction model, and the output result at the moment is used as the first prediction yield grade corresponding to the group of low-altitude remote sensing images and high-altitude remote sensing images.
The second model training unit takes the first predicted yield grade, the corresponding weather data and the corresponding pest damage data corresponding to each group of low-altitude remote sensing images and high-altitude remote sensing images in the historical data as input, takes the real yield grade corresponding to each group of low-altitude remote sensing images and high-altitude remote sensing images in the historical data as output, trains a preset BP neural network model, and takes the trained preset BP neural network model as a second predicted model;
It should be noted that one of the inputs of the second model training unit is the "first predicted yield grade" corresponding to each group of low-altitude and high-altitude remote sensing images, rather than the corresponding real yield grade (even though both the real yield and the real yield grade are known for the historical data). This is because, in the testing stage, the real yield grade (or real yield) of the image to be predicted is unknown; training on the first predicted yield grade therefore allows the trained second prediction model to classify (i.e., predict) the image to be predicted more accurately.
In this way, the first prediction unit can input the low-altitude remote sensing image and the high-altitude remote sensing image to be predicted currently into the first prediction model, and obtain a first prediction yield grade corresponding to the low-altitude remote sensing image and the high-altitude remote sensing image to be predicted currently.
Then, the second prediction unit may input the first predicted yield grade corresponding to the low-altitude remote sensing image and the high-altitude remote sensing image to be predicted currently, and the weather data and the pest data corresponding to the low-altitude remote sensing image and the high-altitude remote sensing image to be predicted currently into the second prediction model, and output results of the second prediction model at this time are used as the second predicted yield grade corresponding to the low-altitude remote sensing image and the high-altitude remote sensing image to be predicted currently.
In this way, the third prediction unit can determine, among the plurality of historical cases, the similar cases corresponding to the low-altitude and high-altitude remote sensing images to be predicted currently (hereinafter, the images to be predicted), and calculate the predicted yield value for the images to be predicted based on the real yields of the similar cases and the second predicted yield grade corresponding to the images to be predicted.
As an example, the third prediction unit comprises, for example, a similar case determination module and a prediction module.
The similar case determination module can execute the following processing: for each image in each group of low-altitude and high-altitude remote sensing images in the historical data, calculate the similarity between that image and each of the images to be predicted, and take the number of images to be predicted whose similarity with it is higher than a fifth threshold as the first score of that image.
For example, for a certain image px in a certain group of low-altitude remote sensing images and high-altitude remote sensing images in the history data, assuming that 10 images pd1, pd2, … and pd10 are included in the image to be predicted, the similarity between the image px and the 10 images, that is, the similarity xs1 between px and pd1, the similarity xs2 and … between px and pd2, and the similarity xs10 between px and pd10 are calculated respectively. Assuming that only xs1, xs3, and xs8 among xs1 to xs10 are greater than the above-described fifth threshold, the number of images having a similarity higher than the fifth threshold with respect to the image px in the image to be predicted is 3, that is, the first score of the image px is 3.
Then, for each group of low-altitude and high-altitude remote sensing images in the historical data, the similar case determination module may take the sum of the first scores of the images in the group as the first score of the group (i.e., the first score of the corresponding historical case). Preferably, the first score of each historical case may be normalized, for example by multiplying all first scores by a predetermined coefficient (e.g., 0.01 or 0.05) so that they lie between 0 and 1.
For example, for a historical case, it is assumed that the corresponding set of low-altitude remote sensing images and high-altitude remote sensing images includes 5 low-altitude remote sensing images and 5 high-altitude remote sensing images (or other numbers), and these 10 images are denoted as images pl1 to pl 10. In calculating the first score of the history case, assuming that the first scores of the images pl 1-pl 10 are spl 1-spl 10 (assuming that spl 1-spl 10 are already normalized scores), the first score of the history case is spl1+ spl2+ spl3+ … + spl10, i.e., the sum of spl 1-spl 10.
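The per-case first score just described might be sketched as follows, with a placeholder image-similarity function; the threshold and normalization coefficient defaults are illustrative assumptions.

```python
def case_first_score(case_images, predict_images, similarity,
                     fifth_threshold=0.8, coeff=0.05):
    """Sum, over all images of the historical case, of the number of images
    to be predicted whose similarity with that image exceeds the fifth
    threshold; the sum is scaled by a predetermined coefficient."""
    total = 0
    for px in case_images:
        total += sum(1 for pd in predict_images
                     if similarity(px, pd) > fifth_threshold)
    return coeff * total
```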
Then, the similar case determination module may take the similarity between the weather data corresponding to the group of low-altitude and high-altitude remote sensing images and the weather data corresponding to the images to be predicted as the second score of the group. Since the weather data are in vector form, their similarity may be calculated by an existing vector similarity method, which is not described herein again.
Then, the similar case determination module may take the similarity between the pest data corresponding to the group of low-altitude and high-altitude remote sensing images and the pest data corresponding to the images to be predicted as the third score of the group. Since the pest data are likewise in vector form, their similarity may also be calculated by an existing vector similarity method, which is not described herein again.
Then, the similar case determination module may calculate a weighted sum of the first, second and third scores of the group of low-altitude and high-altitude remote sensing images as the total score of the group. The weights of the three scores may be set empirically or determined experimentally; for example, the first, second and third scores may each be weighted 1/3, or they may have different weights.
In this way, the similar case determining module can take the first N groups of low-altitude remote sensing images and high-altitude remote sensing images with the highest total score as the similar cases corresponding to the low-altitude remote sensing images and high-altitude remote sensing images to be predicted currently, wherein N is 1, 2 or 3 or other positive integers.
After the similar case determination module determines N similar cases of the image to be predicted, the prediction module may perform the following processing: and determining the weight of each similar case according to the total score corresponding to each similar case, and calculating the weighted sum of the real yields of the N similar cases according to the determined weights, wherein the sum of the weights of the N similar cases is 1.
For example, assuming that N is 3, 3 similar cases of the image to be predicted are obtained, assuming that the total scores of the 3 similar cases are sz1, sz2, and sz3, respectively, wherein sz1 is smaller than sz2, and sz2 is smaller than sz 3. For example, the weights corresponding to the 3 similar cases may be set to qsz1, qsz2, and qsz3 in order, so that qsz1: qsz2: qsz3 (the ratio of the three) is equal to sz1: sz2: sz3 (the ratio of the three).
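One way to realize "weights in the ratio of the total scores, summing to 1" is simple normalization; the sketch below is an assumption about how the ratio condition is met, since the patent only fixes the ratio and the unit sum.

```python
def case_weights(total_scores):
    """Weights proportional to each similar case's total score, summing to 1."""
    s = float(sum(total_scores))
    return [t / s for t in total_scores]
```

For total scores 1, 2 and 2 this gives weights 0.2, 0.4 and 0.4, as in the numerical example in the text.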
If the yield grade corresponding to the calculated weighted sum of the real yields of the N similar cases is the same as the second predicted yield grade corresponding to the image to be predicted, the prediction module may use the weighted sum itself as the predicted yield value corresponding to the image to be predicted.

If the yield grade corresponding to the calculated weighted sum is higher than the second predicted yield grade corresponding to the image to be predicted, the prediction module may use the maximum value of the yield range corresponding to the second predicted yield grade as the predicted yield value.

If the yield grade corresponding to the calculated weighted sum is lower than the second predicted yield grade corresponding to the image to be predicted, the prediction module may use the minimum value of the yield range corresponding to the second predicted yield grade as the predicted yield value.
For example, assume the total scores of the 3 similar cases of the image to be predicted (whose real yields are 1.1, 1.3 and 1.18 thousand kilograms, respectively) are 1, 2 and 2 (and the total scores of all other historical cases are lower than 1). The weights of the 3 similar cases may then be set to 0.2, 0.4 and 0.4 in sequence, so that the weighted sum of the real yields of the N similar cases is 0.2 × 1.1 + 0.4 × 1.3 + 0.4 × 1.18 = 0.22 + 0.52 + 0.472 = 1.212 thousand kilograms, whose corresponding yield grade is the second grade, x2 to x3 (e.g., 1.2 to 1.4 thousand kilograms).
Assuming that the second predicted yield grade corresponding to the image to be predicted is the first grade x1 to x2 (e.g., 1 to 1.2 thousand kilograms), the upper boundary of the yield range corresponding to the first grade (i.e., 1.2 thousand kilograms) can be used as the predicted yield value corresponding to the image to be predicted.
Assuming that the second predicted yield grade corresponding to the image to be predicted is the second grade x2 to x3 (e.g., 1.2 to 1.4 thousand kilograms), 1.212 thousand kilograms can be used as the predicted yield value corresponding to the image to be predicted.
Assuming that the second predicted yield grade corresponding to the image to be predicted is the third grade x3 to x4 (e.g., 1.4 to 1.6 thousand kilograms), the lower boundary of the yield range corresponding to the third grade (i.e., 1.4 thousand kilograms) can be used as the predicted yield value corresponding to the image to be predicted.
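The three rules above amount to clamping the weighted sum of the similar cases' real yields to the yield range of the second predicted yield grade, which can be sketched as follows (a minimal Python sketch; the function name and the unit-free numbers are illustrative):

```python
def predict_yield(weighted_sum, grade_range):
    """Combine the similar-case weighted sum with the second predicted
    yield grade: keep the weighted sum if it falls inside the grade's
    yield range, otherwise clamp it to the nearest boundary.

    grade_range is (lo, hi), the yield numerical range of the grade.
    """
    lo, hi = grade_range
    if weighted_sum > hi:   # weighted sum's grade is above the predicted grade
        return hi
    if weighted_sum < lo:   # weighted sum's grade is below the predicted grade
        return lo
    return weighted_sum     # grades agree: use the weighted sum directly
```

For the worked example, a weighted sum of 1.212 yields 1.2 for the first grade (1.0, 1.2), 1.212 for the second grade (1.2, 1.4), and 1.4 for the third grade (1.4, 1.6).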
In this way, the final yield prediction combines the prediction result for the image to be predicted (namely the second predicted yield grade) with the prediction obtained from the information of the similar cases (namely the weighted sum of the real yields of the N similar cases), so that the final yield prediction result better matches the actual situation and is more accurate.
According to the embodiment of the invention, the agricultural Internet of things integrated service management system further comprises an agricultural product search platform, wherein the agricultural product search platform comprises a database unit, a similarity calculation unit and a presentation unit.
The database unit is used for storing picture data and character data of a plurality of stored agricultural products, wherein the picture data of each stored agricultural product comprises one or more pictures;
the similarity calculation unit is configured to receive, from a user side, a picture to be searched and/or text to be retrieved for a product to be searched. For example, object detection may first be performed on the picture to be searched to obtain all identified first item images in the picture to be searched. The picture to be searched input by the user may be a picture taken by a handheld terminal device, or another picture obtained by a device from storage or by downloading, and it may contain a plurality of items, for example a desk and a teacup. Using an existing object detection technique, the two first item images of the desk and the teacup in the picture can be identified.
In addition, the similarity calculation unit may calculate the similarity between each stored agricultural product stored in the database unit and the product to be searched. For each stored agricultural product, the similarity calculation unit may calculate the similarity between the stored agricultural product and the product to be searched, for example, as follows: for each picture in the picture data of the stored agricultural product, performing object detection on the picture to obtain all identified second item images in the picture data of the stored agricultural product (which may be implemented by using a technology similar to the above-mentioned detection of the first item image, and is not described here again).
Then, the similarity calculation unit may perform contour retrieval on all identified second item images in the picture data of the stored agricultural product, respectively, to determine whether a second item contour of each second item image is complete.
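The contour-completeness determination can be sketched with a simple border-touch heuristic (plain Python; the binary-mask representation and the heuristic itself are illustrative assumptions — the patent only requires deciding whether each second item contour is complete, which could equally be done with a contour retrieval library such as OpenCV):

```python
def contour_is_complete(mask):
    """Treat an item contour as incomplete when the item's pixels touch
    the image border, i.e. the object is likely cut off at the edge.

    mask: 2-D list of 0/1 values, 1 marking pixels of the detected item.
    """
    h, w = len(mask), len(mask[0])
    # Any item pixel on the top or bottom row means the contour is cut off.
    for x in range(w):
        if mask[0][x] or mask[h - 1][x]:
            return False
    # Likewise for the left and right columns.
    for y in range(h):
        if mask[y][0] or mask[y][w - 1]:
            return False
    return True
```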
Then, in all the identified second item images (including complete and incomplete outlines) in the picture data of the stored agricultural products, the similarity calculation unit may calculate the similarity between each second item image and each first item image (for example, may be implemented by using an existing image similarity calculation method).
Then, the similarity calculation unit may determine, for each second item image of the stored agricultural product, the number of first item images having a similarity higher than a seventh threshold with respect to the second item image, as a first correlation between the second item image and a product to be searched, and cumulatively calculate a sum of the first correlations corresponding to the respective second item images of the stored agricultural product.
Then, the similarity calculation unit may determine, for each second item image with a complete contour of the stored agricultural product, the number of first item images with a similarity higher than a seventh threshold to the second item image, as a second correlation between the second item image and a product to be searched, and cumulatively calculate a sum of second correlations corresponding to each second item image of the stored agricultural product.
Then, the similarity calculation unit may calculate the text similarity between the character data of the stored agricultural product and the text to be retrieved of the product to be searched; this may be implemented, for example, by using an existing string similarity calculation method.
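As one concrete possibility, such a string similarity could be computed with Python's standard-library difflib (an illustrative choice — the patent only requires an existing string similarity method, not this one in particular):

```python
from difflib import SequenceMatcher

def text_similarity(a, b):
    """Similarity ratio in [0, 1] based on longest matching subsequences."""
    return SequenceMatcher(None, a, b).ratio()

# Hypothetical product texts for illustration.
sim = text_similarity("organic rice 5kg", "organic rice 10kg")
```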
In this way, the similarity calculation unit may determine the total similarity between the stored agricultural product and the product to be searched according to the sum of the first correlations (denoted f1), the sum of the second correlations (denoted f2) and the text similarity (denoted f3). For example, the total similarity may be equal to f1 + f2 + f3, or to a weighted sum of the three, such as qq1 × f1 + qq2 × f2 + qq3 × f3, where qq1 to qq3 are preset weights for f1 to f3 and may be set according to experience.
In this way, the presentation unit may present the stored agricultural products having the total similarity to the product to be searched higher than the eighth threshold to the user as the search result.
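The combination of the two correlation sums with the text similarity, and the threshold filtering of the search results, can be sketched as follows (Python; the equal default weights, function names, and product data are illustrative assumptions):

```python
def total_similarity(f1, f2, f3, weights=(1.0, 1.0, 1.0)):
    """Weighted sum of: f1 = sum of first correlations (all second item
    images), f2 = sum of second correlations (complete-contour images
    only), f3 = text similarity. Equal weights reduce to f1 + f2 + f3."""
    qq1, qq2, qq3 = weights
    return qq1 * f1 + qq2 * f2 + qq3 * f3

def search(products, threshold):
    """products: {name: (f1, f2, f3)}.
    Return names whose total similarity exceeds the eighth threshold."""
    return [name for name, (f1, f2, f3) in products.items()
            if total_similarity(f1, f2, f3) > threshold]
```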
The fifth threshold, the sixth threshold, the seventh threshold, and the eighth threshold may be set according to empirical values or determined through experiments, for example, and are not described herein again.
In addition, according to the embodiment of the invention, a processing method of the agricultural internet of things integrated service management system is further provided, and the processing method is realized based on the agricultural internet of things integrated service management system, and the system comprises a monitoring subsystem, a meteorological subsystem, a ground water level monitoring subsystem and a control center subsystem.
The processing method comprises the following steps: the method comprises the steps that a plurality of monitoring points are arranged on a monitoring subsystem, wherein each monitoring point is provided with at least one video device, at least one first sensor and a first communication device, video data of a corresponding area are captured through the at least one video device, soil environment data corresponding to the monitoring points are obtained through the at least one first sensor, and the video data and the soil environment data obtained through the corresponding monitoring points are sent to a control center subsystem through the first communication device.
In the processing method, a plurality of weather monitoring stations are arranged in a weather subsystem, wherein each weather monitoring station is provided with a plurality of second sensors and a second communication device, the air environment data corresponding to the weather monitoring stations are acquired through the second sensors, and the air environment data corresponding to the weather monitoring stations are sent to the control center subsystem through the second communication device.
In the processing method, a plurality of underground water level monitoring points are arranged in the underground water level monitoring subsystem, wherein each underground water level monitoring point is provided with an underground water level monitoring device and a third communication device, underground water level data at the corresponding position are acquired in real time through the underground water level monitoring device, and the acquired underground water level data are sent to the control center subsystem through the third communication device.
In the processing method, a fourth communication device and a control processing device are arranged in a control center subsystem, and all data from a monitoring subsystem, a meteorological subsystem and an underground water level monitoring subsystem are received through the fourth communication device and are sent to the control processing device.
In the processing method, the corresponding crop growth vigor is predicted and the soil element information influencing the crop growth is obtained at least based on the video data and the environmental data corresponding to each monitoring point received from the monitoring subsystem; acquiring environmental element information in the air influencing the growth of crops at least based on the corresponding air environment data at each weather monitoring station received from the weather subsystem; and monitoring the underground water level change conditions of the underground water level monitoring points at least based on the underground water level data corresponding to the underground water level monitoring points received from the underground water level monitoring subsystem.
As an example, the agricultural Internet of things integrated service management system further comprises a geographic information subsystem and an agricultural unmanned aerial vehicle and satellite remote sensing subsystem; the geographic information subsystem comprises an electronic map of a preset farm, and marking information is arranged at a plurality of preset positions on the electronic map; the agricultural unmanned aerial vehicle and satellite remote sensing subsystem comprises an unmanned aerial vehicle end, a satellite communication end and a server end; the unmanned aerial vehicle end is suitable for collecting low-altitude remote sensing images of a preset planting area of the agricultural Internet of things multiple times and sending the low-altitude remote sensing images to the server end in real time; the satellite communication end is suitable for acquiring high-altitude remote sensing images of the preset planting area of the agricultural Internet of things and sending the high-altitude remote sensing images to the server end in real time; and the server end is suitable for realizing at least one function of crop growth prediction, insect pest detection, and flood disaster analysis and early warning based on the low-altitude remote sensing images from the unmanned aerial vehicle end and/or the high-altitude remote sensing images from the satellite communication end.
As an example, in the processing method, for example, the received low-altitude remote sensing image and high-altitude remote sensing image may be grouped, and one video to be detected is generated by using each group of images, so as to obtain a plurality of videos to be detected; receiving a target video; determining a plurality of scene switching moments in a target video; aiming at each scene switching moment in a target video, acquiring a switched video frame corresponding to the scene switching moment in the target video; and taking the first frame image of the target video and the switched video frames corresponding to all scene switching moments in the target video as a plurality of target frame images, and recording the total number of all the target frame images as N, wherein N is a non-negative integer.
In addition, for each video to be detected in a preset video database, a plurality of scene switching moments in the video to be detected are determined, a switched video frame corresponding to each scene switching moment in the video to be detected is obtained, and a first frame image of the video to be detected and switched video frames corresponding to all scene switching moments in the video to be detected are used as frame images to be detected.
In addition, for each target frame image, the similarity between each frame image to be detected of each video to be detected and the target frame image is calculated, and the frame image to be detected, of which the similarity with the target frame image is higher than a first threshold value, is determined as a candidate frame image corresponding to the video to be detected.
In addition, for each video to be detected, the number of candidate frame images corresponding to the video to be detected is calculated and denoted as a1 (a1 being a non-negative integer), and the number of all target frame images related to the candidate frame images corresponding to the video to be detected is calculated and denoted as a2 (a2 being a non-negative integer). A first score of the video to be detected is then calculated according to the formula S1 = q1 × a1 + q2 × a2, where S1 is the first score of the video to be detected, q1 is the weight corresponding to the number of candidate frame images corresponding to the video to be detected, and q2 is the weight corresponding to the number of all target frame images related to those candidate frame images. Here q1 is equal to a preset first weight value; q2 is equal to a preset second weight value when a2 = N, and to a preset third weight value when a2 < N, the second weight value being greater than the third weight value.
In this way, similar videos of the target video can be determined in the videos to be detected according to the first score of each video to be detected.
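The first-score computation can be sketched as follows (Python; the default weight values 0.5 / 1 / 0.5 are taken from the example weights given later in this method, and the function name is illustrative):

```python
def first_score(a1, a2, n_targets, w1=0.5, w2=1.0, w3=0.5):
    """S1 = q1*a1 + q2*a2 for one video to be detected.

    a1: number of candidate frame images of this video.
    a2: number of distinct target frame images those candidates relate to.
    n_targets: N, the total number of target frame images.
    q2 takes the larger second weight only when the candidates cover ALL
    target frame images (a2 == N), rewarding full-coverage matches.
    """
    q1 = w1
    q2 = w2 if a2 == n_targets else w3
    return q1 * a1 + q2 * a2
```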
As an example, in the processing method, similar videos of the target video may be determined in the videos to be detected according to the first score of each video to be detected, for example, as follows: and selecting the video to be detected with the first score higher than the second threshold value from all the videos to be detected as the similar video of the target video.
As an example, in the processing method, similar videos of the target video may be determined in the videos to be detected according to the first score of each video to be detected, for example, as follows: selecting the video to be detected with the first score higher than a second threshold value from all the videos to be detected as candidate videos; then, segmenting the target video based on a plurality of scene switching moments of the target video to obtain a plurality of first video segments corresponding to the target video, and recording the total number of all the first video segments in the target video as M, wherein M is a non-negative integer; and for each candidate video, segmenting the candidate video based on a plurality of scene switching moments of the candidate video to obtain a plurality of second video segments corresponding to the candidate video.
For a second video segment corresponding to each candidate frame image of each candidate video, selecting a first video segment related to a target frame image corresponding to the candidate frame image from a plurality of first video segments, performing similarity calculation on the selected first video segment and the second video segment, and if the similarity between the first video segment and the second video segment is higher than a third threshold value, determining the second video segment as a similar segment corresponding to the first video segment.
In addition, for each candidate video, the number of similar segments contained in the candidate video is calculated and denoted as b1 (b1 being a non-negative integer), and the number of all first video segments related to the similar segments contained in the candidate video is calculated and denoted as b2 (b2 being a non-negative integer). A second score of the candidate video is then calculated according to the formula S2 = q3 × b1 + q4 × b2, where S2 is the second score of the candidate video, q3 is the weight corresponding to the number of similar segments contained in the candidate video, and q4 is the weight corresponding to the number of all first video segments related to those similar segments. Here q3 is equal to a preset fourth weight value; q4 is equal to a preset fifth weight value when b2 = M, and to a preset sixth weight value when b2 < M, the fifth weight value being greater than the sixth weight value.
Then, similar videos of the target video are determined among the candidate videos according to the second score of each candidate video.
As an example, in the processing method, similar videos of the target video may be determined in the candidate videos according to the second score of each candidate video, for example, as follows: among all the candidate videos, a candidate video in which the second score is higher than the fourth threshold is selected as a similar video of the target video.
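The second-score computation and the final selection against the fourth threshold can be sketched as follows (Python; the 0.5 / 1 / 0.5 defaults mirror the first-score example weights and, together with the function names, are illustrative assumptions):

```python
def second_score(b1, b2, m_segments, w4=0.5, w5=1.0, w6=0.5):
    """S2 = q3*b1 + q4*b2 for one candidate video.

    b1: number of similar segments in the candidate video.
    b2: number of distinct first video segments those similar segments cover.
    m_segments: M, the total number of first video segments in the target.
    q4 takes the larger fifth weight only on full coverage (b2 == M).
    """
    q3 = w4
    q4 = w5 if b2 == m_segments else w6
    return q3 * b1 + q4 * b2

def similar_videos(candidates, m_segments, threshold):
    """candidates: {video_id: (b1, b2)}.
    Keep candidate videos whose second score exceeds the fourth threshold."""
    return [vid for vid, (b1, b2) in candidates.items()
            if second_score(b1, b2, m_segments) > threshold]
```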
As an example, in the processing method, the first weight value is 0.5, the second weight value is 1, and the third weight value is 0.5; preferably, the second weight value is equal to the third weight value multiplied by d, where d is a real number greater than 1, for example d greater than or equal to 2.
As an example, in the processing method, for example, a yield prediction process may be included.
In the yield prediction processing, for example, each group of low-altitude remote sensing images and high-altitude remote sensing images in the historical data can be used as input, the real yield grades corresponding to each group of low-altitude remote sensing images and high-altitude remote sensing images in the historical data are used as output, a preset convolutional neural network model is trained, and the trained preset convolutional neural network model is used as a first prediction model; the historical data comprises a plurality of groups of low-altitude remote sensing images and high-altitude remote sensing images, and real yield grades, corresponding weather data and corresponding pest data corresponding to each group of low-altitude remote sensing images and high-altitude remote sensing images.
In the yield prediction processing, for example, a first prediction yield grade corresponding to each group of low-altitude remote sensing images and high-altitude remote sensing images in historical data can be obtained by using a first prediction model, the first prediction yield grade corresponding to each group of low-altitude remote sensing images and high-altitude remote sensing images in the historical data, corresponding weather data and corresponding pest damage data are used as input, real yield grades corresponding to each group of low-altitude remote sensing images and high-altitude remote sensing images in the historical data are used as output, a predetermined BP neural network model is trained, and the trained predetermined BP neural network model is used as a second prediction model.
In the yield prediction process, for example, the image to be predicted may be input into a first prediction model, and a first prediction yield level corresponding to the image to be predicted may be obtained.
In the yield prediction processing, for example, a first predicted yield grade corresponding to the image to be predicted, weather data and pest data corresponding to the image to be predicted are input into the second prediction model, and a second predicted yield grade corresponding to the image to be predicted is obtained.
In the yield prediction processing, for example, a corresponding similar case can be determined by using the image to be predicted, and a prediction yield value corresponding to the image to be predicted is calculated based on the real yield of the similar case and the second prediction yield level corresponding to the obtained image to be predicted.
As an example, in the yield prediction process, similar cases can be determined, for example, as follows: and calculating the similarity between each image and each image in the images to be predicted according to each group of low-altitude remote sensing images and high-altitude remote sensing images in the historical data, and determining the number of images with the similarity higher than a fifth threshold value in the images to be predicted as the first score of the images.
Thus, for each group of low-altitude remote sensing images and high-altitude remote sensing images in the historical data: the sum of the first scores of the images in the group is used as the first score of the group; the similarity between the weather data corresponding to the group and the weather data corresponding to the image to be predicted is used as the second score of the group; the similarity between the pest data corresponding to the group and the pest data corresponding to the image to be predicted is used as the third score of the group; and the weighted sum of the first score, the second score and the third score of the group is used as the total score of the group of low-altitude remote sensing images and high-altitude remote sensing images.
Then, the N historical cases corresponding to the N groups of low-altitude remote sensing images and high-altitude remote sensing images with the highest total scores are taken as the similar cases corresponding to the image to be predicted, where N is, for example, 1, 2 or 3.
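The selection of similar cases by total score can be sketched as follows (Python; the tuple representation of a historical case, the equal weighting, and the function name are illustrative assumptions):

```python
def top_n_similar_cases(history, weights, n=3):
    """history: list of (case_id, first_score, second_score, third_score)
    for each group of low-altitude and high-altitude remote sensing images.
    The total score is the weighted sum of the three partial scores; the
    N highest-scoring historical cases are returned as similar cases."""
    w1, w2, w3 = weights
    scored = [(cid, w1 * s1 + w2 * s2 + w3 * s3)
              for cid, s1, s2, s3 in history]
    scored.sort(key=lambda t: t[1], reverse=True)
    return scored[:n]

cases = top_n_similar_cases(
    [("a", 1, 0, 0), ("b", 0, 2, 0), ("c", 0, 0, 3), ("d", 0, 0, 0)],
    weights=(1, 1, 1))
```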
As an example, in the yield prediction process, the prediction may be done, for example, as follows: and determining the weight of each similar case according to the total score corresponding to each similar case, and calculating the weighted sum of the real yields of the N similar cases according to the determined weights, wherein the sum of the weights of the N similar cases is 1.
And if the yield grade corresponding to the weighted sum of the real yields of the N similar cases obtained by calculation is the same as the second prediction yield grade corresponding to the image to be predicted, taking the weighted sum of the real yields of the N similar cases as the prediction yield numerical value corresponding to the image to be predicted.
And if the yield grade corresponding to the weighted sum of the real yields of the N similar cases is higher than the second prediction yield grade corresponding to the image to be predicted, taking the maximum value in the yield numerical range corresponding to the second prediction yield grade corresponding to the image to be predicted as the prediction yield numerical value corresponding to the image to be predicted.
And if the yield grade corresponding to the weighted sum of the real yields of the N similar cases obtained by calculation is lower than the second prediction yield grade corresponding to the image to be predicted, taking the minimum value in the yield numerical range corresponding to the second prediction yield grade corresponding to the image to be predicted as the prediction yield numerical value corresponding to the image to be predicted.
As an example, in the processing method, for example, agricultural product search processing may be included.
In the agricultural product search processing, for example, picture data and character data of a plurality of stored agricultural products are stored in advance, wherein the picture data of each stored agricultural product includes one or more pictures.
In the agricultural product search processing, for example, a picture to be searched and/or text to be retrieved of a product to be searched is received from a user side, the similarity between each stored agricultural product stored in the database unit and the product to be searched is calculated, and object detection is performed on the picture to be searched of the product to be searched to obtain all identified first item images in the picture to be searched.
For each stored agricultural product, the similarity between the stored agricultural product and the product to be searched (i.e., the total similarity described below) may be calculated, for example, as follows. Object detection is performed on each picture in the picture data of the stored agricultural product to obtain all identified second item images in the picture data. Contour retrieval is performed on all identified second item images, respectively, to determine whether the second item contour of each second item image is complete. Among all the identified second item images in the picture data of the stored agricultural product, the similarity between each second item image and each first item image is calculated. For each second item image of the stored agricultural product, the number of first item images whose similarity with that second item image is higher than a seventh threshold is determined and taken as the first correlation between the second item image and the product to be searched, and the sum of the first correlations corresponding to the second item images of the stored agricultural product is calculated cumulatively. For each second item image with a complete contour, the number of first item images whose similarity with that second item image is higher than the seventh threshold is determined and taken as the second correlation between the second item image and the product to be searched, and the sum of the second correlations corresponding to those second item images is calculated cumulatively. The text similarity between the character data of the stored agricultural product and the text to be retrieved of the product to be searched is calculated. Finally, the total similarity between the stored agricultural product and the product to be searched is determined according to the sum of the first correlations, the sum of the second correlations and the text similarity corresponding to the stored agricultural product.
In this way, stored agricultural products having a total similarity to the product to be searched that is higher than the eighth threshold value may be presented to the user as search results.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; although the present invention and the advantageous effects thereof have been described in detail with reference to the foregoing embodiments, those skilled in the art will understand that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions.

Claims (7)

1. The agricultural Internet of things integrated service management system is characterized by comprising a monitoring subsystem, a meteorological subsystem, an underground water level monitoring subsystem and a control center subsystem;
the monitoring subsystem comprises a plurality of monitoring points, wherein each monitoring point is provided with at least one video device, at least one first sensor and a first communication device, the at least one video device is used for capturing video data of a corresponding area, the at least one first sensor is used for acquiring soil environment data corresponding to the monitoring point, and the first communication device is used for sending the video data and the soil environment data acquired by the corresponding monitoring point to the control center subsystem;
the weather subsystem comprises a plurality of weather monitoring stations, wherein each weather monitoring station is provided with a plurality of second sensors and a second communication device, the second sensors are used for acquiring air environment data corresponding to the weather monitoring station, and the second communication device is used for sending the air environment data corresponding to the weather monitoring station to the control center subsystem;
the underground water level monitoring subsystem comprises a plurality of underground water level monitoring points, wherein each underground water level monitoring point is provided with an underground water level monitoring device and a third communication device, the underground water level monitoring device is used for acquiring underground water level data at a corresponding position in real time and transmitting the acquired underground water level data to the control center subsystem through the third communication device; and
the control center subsystem comprises a fourth communication device and a control processing device, wherein the fourth communication device is used for receiving all data from the monitoring subsystem, the meteorological subsystem and the underground water level monitoring subsystem and sending the data to the control processing device; the control processing means is for:
predicting the growth of corresponding crops and acquiring soil element information influencing the growth of the crops at least based on the video data and the environmental data corresponding to each monitoring point received from the monitoring subsystem;
acquiring environmental element information in the air influencing the growth of crops at least based on the corresponding air environment data at each weather monitoring station received from the weather subsystem; and
monitoring underground water level change conditions of the underground water level monitoring points at least based on underground water level data corresponding to the underground water level monitoring points received from the underground water level monitoring subsystem;
the agricultural Internet of things integrated service management system further comprises a geographic information subsystem, an agricultural unmanned aerial vehicle and a satellite remote sensing subsystem;
the geographic information subsystem comprises an electronic map of a preset farm, and marking information is arranged at a plurality of preset positions on the electronic map;
the agricultural unmanned aerial vehicle and satellite remote sensing subsystem comprises an unmanned aerial vehicle end, a satellite communication end and a server end;
the unmanned aerial vehicle end is suitable for collecting low-altitude remote sensing images of preset planting areas of the agricultural Internet of things for multiple times and sending the low-altitude remote sensing images to the server end in real time;
the satellite communication terminal is suitable for acquiring a high-altitude remote sensing image of a preset planting area of the agricultural Internet of things and sending the high-altitude remote sensing image to the server terminal in real time;
the server side is suitable for realizing at least one function of crop growth prediction, insect pest detection and flood disaster analysis and early warning based on at least a low-altitude remote sensing image from the unmanned aerial vehicle side and/or a high-altitude remote sensing image from the satellite communication side;
the server side is used for:
grouping the received low-altitude remote sensing images and high-altitude remote sensing images, and generating a video to be detected by using each group of images to obtain a plurality of videos to be detected;
receiving a target video;
determining a plurality of scene switching moments in the target video;
aiming at each scene switching moment in the target video, obtaining a switched video frame corresponding to the scene switching moment in the target video;
taking a first frame image of the target video and switched video frames corresponding to all scene switching moments in the target video as a plurality of target frame images, and recording the total number of all the target frame images as N, wherein N is a non-negative integer;
for each video to be detected in a predetermined video database,
determining a plurality of scene switching moments in the video to be detected,
obtaining switched video frames corresponding to each scene switching time in the video to be detected,
taking a first frame image of the video to be detected and switched video frames corresponding to all scene switching moments in the video to be detected as frame images to be detected;
for each target frame image, calculating the similarity between the target frame image and each frame image to be detected of each video to be detected, and determining each frame image to be detected whose similarity with the target frame image is higher than a first threshold as a candidate frame image corresponding to its video to be detected;
for each video to be detected,
calculating the number of candidate frame images corresponding to the video to be detected, recording as a1, wherein a1 is a non-negative integer,
calculating the number of all target frame images related to each candidate frame image corresponding to the video to be detected, recording as a2, wherein a2 is a non-negative integer,
calculating a first score of the video to be detected according to the formula S1 = q1 × a1 + q2 × a2, wherein S1 is the first score of the video to be detected, q1 is the weight corresponding to the number of candidate frame images corresponding to the video to be detected, and q2 is the weight corresponding to the number of all target frame images related to the candidate frame images corresponding to the video to be detected; q1 is equal to a preset first weight value,
q2 is equal to a preset second weight value when a2 = N, and q2 is equal to a preset third weight value when a2 < N, wherein the second weight value is greater than the third weight value;
determining similar videos of the target video in the videos to be detected according to the first score of each video to be detected;
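As a rough illustration, the first-score rule above can be sketched in Python. The frame `similarity` function and the first threshold are caller-supplied stand-ins (the claim does not fix them), and the default weights follow the values given in claim 4 (first weight 0.5, second weight 1, third weight 0.5):

```python
def first_score(target_frames, detect_frames, similarity, threshold,
                q1=0.5, q2_full=1.0, q2_partial=0.5):
    """First score S1 = q1 * a1 + q2 * a2 for one video to be detected.

    target_frames : the N target frame images (first frame + switched frames).
    detect_frames : the frame images to be detected of the candidate video.
    similarity    : stand-in pairwise frame similarity function in [0, 1].
    """
    n = len(target_frames)           # N: total number of target frame images
    candidates = set()               # indices of candidate frame images -> a1
    matched_targets = set()          # distinct related target frames    -> a2
    for i, df in enumerate(detect_frames):
        for j, tf in enumerate(target_frames):
            if similarity(df, tf) > threshold:
                candidates.add(i)
                matched_targets.add(j)
    a1, a2 = len(candidates), len(matched_targets)
    # per the claim: q2 takes the (larger) second weight only when a2 == N
    q2 = q2_full if a2 == n else q2_partial
    return q1 * a1 + q2 * a2
```

The full-match bonus means a video whose candidate frames cover all N target frames outscores one with the same number of matches concentrated on fewer target frames.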
the server is used for determining similar videos of the target video among the videos to be detected, according to the first score of each video to be detected, in the following manner:
selecting the video to be detected with the first score higher than a second threshold value from all the videos to be detected as candidate videos;
dividing the target video based on a plurality of scene switching moments of the target video to obtain a plurality of first video clips corresponding to the target video, and recording the total number of all the first video clips in the target video as M, wherein M is a non-negative integer;
for each candidate video, segmenting the candidate video based on a plurality of scene switching moments of the candidate video to obtain a plurality of second video segments corresponding to the candidate video;
for a second video segment corresponding to each candidate frame image of each candidate video,
selecting a first video segment related to a target frame image corresponding to the candidate frame image among the plurality of first video segments,
performing similarity calculation between the selected first video segment and the selected second video segment,
if the similarity between the first video clip and the second video clip is higher than a third threshold, determining the second video clip as a similar clip corresponding to the first video clip;
for each candidate video,
calculating the number of similar segments contained in the candidate video, and marking as b1, wherein b1 is a non-negative integer,
calculating the number of all first video segments related to similar segments contained in the candidate video, which is marked as b2, b2 is a non-negative integer,
calculating a second score of the candidate video according to the formula S2 = q3 × b1 + q4 × b2, wherein S2 is the second score of the candidate video, q3 is the weight corresponding to the number of similar segments contained in the candidate video, and q4 is the weight corresponding to the number of all first video segments related to the similar segments contained in the candidate video; q3 is equal to a preset fourth weight value,
q4 is equal to a preset fifth weight value when b2 = M, and q4 is equal to a preset sixth weight value when b2 < M, wherein the fifth weight value is greater than the sixth weight value;
determining similar videos of the target video in the candidate videos according to the second score of each candidate video.
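The segment-level second score follows the same pattern as the frame-level first score. The sketch below is a simplification under stated assumptions: it compares every first/second segment pair rather than only the pairs linked through matching candidate and target frames, and the fourth/fifth/sixth weight values (which the claims leave unspecified) are illustrative defaults:

```python
def second_score(first_segments, second_segments, seg_similarity, threshold,
                 q3=0.5, q4_full=1.0, q4_partial=0.5):
    """Second score S2 = q3 * b1 + q4 * b2 for one candidate video.

    first_segments  : the M first video segments of the target video.
    second_segments : the second video segments of the candidate video.
    seg_similarity  : stand-in segment similarity function in [0, 1].
    """
    m = len(first_segments)      # M: total number of first video segments
    similar = set()              # second segments found similar        -> b1
    related = set()              # distinct related first segments      -> b2
    for i, seg2 in enumerate(second_segments):
        for j, seg1 in enumerate(first_segments):
            if seg_similarity(seg1, seg2) > threshold:
                similar.add(i)
                related.add(j)
    b1, b2 = len(similar), len(related)
    # per the claim: q4 takes the (larger) fifth weight only when b2 == M
    q4 = q4_full if b2 == m else q4_partial
    return q3 * b1 + q4 * b2
```

In the claimed flow this runs only on candidate videos whose first score exceeded the second threshold, and candidates whose second score exceeds the fourth threshold become the similar videos.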
2. The agricultural internet of things integrated service management system of claim 1, wherein the server is configured to determine similar videos of the target video in the candidate videos according to the second score of each candidate video as follows:
selecting a candidate video with a second score higher than a fourth threshold value from all the candidate videos as a similar video of the target video.
3. The agricultural internet of things integrated service management system of claim 1, wherein the second weight value = the third weight value × d, d being a real number greater than 1.
4. The agricultural internet of things integrated service management system of any one of claims 1 to 3, wherein the first weight value =0.5, the second weight value =1, and the third weight value = 0.5.
5. The agricultural internet of things integrated service management system according to any one of claims 1 to 3, further comprising a yield prediction platform, the yield prediction platform comprising:
the first model training unit is used for taking each group of low-altitude remote sensing images and high-altitude remote sensing images in the historical data as input, taking the real yield grades corresponding to each group of low-altitude remote sensing images and high-altitude remote sensing images in the historical data as output, training a preset convolutional neural network model, and taking the trained preset convolutional neural network model as a first prediction model; the historical data comprises a plurality of groups of low-altitude remote sensing images and high-altitude remote sensing images, and real yield grades, corresponding weather data and corresponding pest data corresponding to each group of low-altitude remote sensing images and high-altitude remote sensing images;
the second model training unit is used for obtaining a first predicted yield grade corresponding to each group of low-altitude remote sensing images and high-altitude remote sensing images in historical data by using the first prediction model, taking the first predicted yield grade corresponding to each group of low-altitude remote sensing images and high-altitude remote sensing images in the historical data, corresponding weather data and corresponding pest damage data as input, taking the real yield grade corresponding to each group of low-altitude remote sensing images and high-altitude remote sensing images in the historical data as output, training a preset BP neural network model, and taking the trained preset BP neural network model as a second prediction model;
the first prediction unit is used for inputting the current low-altitude remote sensing image and the current high-altitude remote sensing image to be predicted into the first prediction model to obtain a first prediction yield grade corresponding to the current low-altitude remote sensing image and the current high-altitude remote sensing image to be predicted;
the second prediction unit is used for inputting the first prediction yield grade corresponding to the current low-altitude remote sensing image and the high-altitude remote sensing image to be predicted, the weather data and the pest damage data corresponding to the current low-altitude remote sensing image and the high-altitude remote sensing image to be predicted into the second prediction model to obtain the second prediction yield grade corresponding to the current low-altitude remote sensing image and the high-altitude remote sensing image to be predicted;
and the third prediction unit is used for determining a corresponding similar case by using the current low-altitude remote sensing image and the current high-altitude remote sensing image to be predicted, and calculating a prediction yield value corresponding to the current low-altitude remote sensing image and the current high-altitude remote sensing image to be predicted based on the real yield of the similar case and the obtained second prediction yield grade corresponding to the current low-altitude remote sensing image and the current high-altitude remote sensing image to be predicted.
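The data flow of claim 5's two-stage pipeline can be sketched independently of any particular network architecture; the trained convolutional neural network and BP neural network are replaced here by stand-in callables, since the patent specifies neither architecture nor framework:

```python
class TwoStageYieldPredictor:
    """Structural sketch of claims 5's prediction stages (models are stubs)."""

    def __init__(self, image_model, fusion_model):
        self.image_model = image_model    # stands in for the trained CNN
        self.fusion_model = fusion_model  # stands in for the trained BP network

    def predict_grade(self, low_alt_images, high_alt_images, weather, pests):
        # Stage 1: remote sensing images -> first predicted yield grade
        grade1 = self.image_model(low_alt_images, high_alt_images)
        # Stage 2: first grade + weather + pest damage data
        #          -> second predicted yield grade
        return self.fusion_model(grade1, weather, pests)
```

A toy instantiation shows the shape of the interface; real models would be trained on the historical data exactly as the first and second model training units describe:

```python
# hypothetical stand-in models, for illustration only
img_model = lambda lo, hi: min(3, (sum(lo) + sum(hi)) // 10)   # grade 0..3
fusion = lambda g, w, p: max(0, g - (1 if p > 0.5 else 0))      # pest penalty
predictor = TwoStageYieldPredictor(img_model, fusion)
grade2 = predictor.predict_grade([5, 6], [10, 9], weather=0.2, pests=0.7)
```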
6. The agricultural internet of things integrated service management system of claim 5, wherein the third prediction unit comprises a similar case determination module and a prediction module;
the similar case determination module is to:
for each image in each group of low-altitude remote sensing images and high-altitude remote sensing images in the historical data, calculating the similarity between that image and each image in the current low-altitude remote sensing images and high-altitude remote sensing images to be predicted, and determining the number of current images whose similarity with that image is higher than a fifth threshold as the first score of that image;
for each set of low altitude remote sensing images and high altitude remote sensing images in the historical data,
taking the sum of the first scores of the images in the group of low-altitude remote sensing images and the high-altitude remote sensing images as the first score of the group of low-altitude remote sensing images and the high-altitude remote sensing images,
taking the similarity between the weather data corresponding to the group of low-altitude remote sensing images and the high-altitude remote sensing images and the weather data corresponding to the low-altitude remote sensing images and the high-altitude remote sensing images to be predicted at present as a second score of the group of low-altitude remote sensing images and the high-altitude remote sensing images,
taking the similarity between the pest damage data corresponding to the group of low-altitude remote sensing images and high-altitude remote sensing images and the pest damage data corresponding to the low-altitude remote sensing images and high-altitude remote sensing images to be predicted currently as a third score of the group of low-altitude remote sensing images and high-altitude remote sensing images,
calculating a weighted sum of a first score, a second score and a third score corresponding to the group of low-altitude remote sensing images and the group of high-altitude remote sensing images as a total score of the group of low-altitude remote sensing images and the group of high-altitude remote sensing images;
taking N historical cases corresponding to the first N groups of low-altitude remote sensing images and high-altitude remote sensing images with the highest total score as similar cases corresponding to the low-altitude remote sensing images and the high-altitude remote sensing images to be predicted currently, wherein N is 1, 2 or 3;
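The similar-case retrieval above can be sketched as follows; the image similarity and weather/pest data similarity functions, the fifth threshold, and the score weights are caller-supplied stand-ins for whatever measures an implementation would use:

```python
def top_similar_cases(cases, current_images, current_weather, current_pests,
                      img_sim, data_sim, threshold, weights, n=3):
    """Rank historical cases by the weighted total score and keep the top N.

    cases : list of (images, weather, pests, real_yield) tuples (one per
            historical group of low- and high-altitude remote sensing images).
    Returns up to n (total_score, real_yield) pairs, highest score first.
    """
    w1, w2, w3 = weights
    scored = []
    for images, weather, pests, real_yield in cases:
        # first score: summed per-image counts of matches above the threshold
        s1 = sum(
            sum(1 for cur in current_images if img_sim(img, cur) > threshold)
            for img in images
        )
        s2 = data_sim(weather, current_weather)   # second score: weather
        s3 = data_sim(pests, current_pests)       # third score: pest damage
        scored.append((w1 * s1 + w2 * s2 + w3 * s3, real_yield))
    scored.sort(key=lambda t: t[0], reverse=True)
    return scored[:n]
```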
the prediction module is to:
determining the weight of each similar case according to the total score corresponding to each similar case, and calculating the weighted sum of the real yields of the N similar cases according to the determined weights, wherein the sum of the weights of the N similar cases is 1,
if the calculated weighted sum of the real yields of the N similar cases corresponds to the yield grade which is the same as the second predicted yield grade corresponding to the current low-altitude remote sensing image and the current high-altitude remote sensing image to be predicted, taking the weighted sum of the real yields of the N similar cases as the predicted yield numerical value corresponding to the current low-altitude remote sensing image and the current high-altitude remote sensing image to be predicted,
if the calculated weighted sum of the real yields of the N similar cases is higher than the second predicted yield grade corresponding to the current low-altitude remote sensing image and high-altitude remote sensing image to be predicted, taking the maximum value in the yield value range corresponding to the second predicted yield grade corresponding to the current low-altitude remote sensing image and high-altitude remote sensing image to be predicted as the predicted yield value corresponding to the current low-altitude remote sensing image and high-altitude remote sensing image to be predicted,
and if the calculated weighted sum of the real yields of the N similar cases is lower than the second predicted yield grades corresponding to the low-altitude remote sensing image and the high-altitude remote sensing image to be predicted currently, taking the minimum value in the yield numerical range corresponding to the second predicted yield grades corresponding to the low-altitude remote sensing image and the high-altitude remote sensing image to be predicted currently as the predicted yield numerical value corresponding to the low-altitude remote sensing image and the high-altitude remote sensing image to be predicted currently.
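If the yield value ranges of the grades partition the yield axis, the three branches above amount to clamping the weighted sum of real yields into the range of the second predicted yield grade. A sketch under that assumption, with `grade_ranges` as a hypothetical mapping from grade to its (min, max) yield range:

```python
def final_yield_value(similar_cases, grade2, grade_ranges):
    """Blend similar-case yields, then reconcile with the second grade.

    similar_cases : list of (weight, real_yield); weights sum to 1.
    grade_ranges  : dict grade -> (min_yield, max_yield), an assumed encoding
                    of the claim's "yield value range corresponding to a grade".
    """
    # weighted sum of the real yields of the N similar cases
    blended = sum(w * y for w, y in similar_cases)
    lo, hi = grade_ranges[grade2]
    if blended > hi:      # blended yield grades higher -> max of grade2's range
        return hi
    if blended < lo:      # blended yield grades lower  -> min of grade2's range
        return lo
    return blended        # grades agree -> keep the weighted sum
```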
7. The agricultural internet of things integrated service management system according to any one of claims 1 to 3, further comprising an agricultural product search platform;
the agricultural product searching platform comprises a database unit, a similar calculation unit and a display unit;
the database unit is used for storing picture data and character data of a plurality of stored agricultural products, wherein the picture data of each stored agricultural product comprises one or more pictures;
the similarity calculation unit is used for receiving, from a user side, a picture to be searched and/or characters to be retrieved of a product to be searched, performing object detection on the picture to be searched to obtain all identified first item images in the picture to be searched, and calculating the similarity between the product to be searched and each stored agricultural product in the database unit;
wherein, for each stored agricultural product, the similarity calculation unit calculates the similarity between the stored agricultural product and the product to be searched by the following method:
performing object detection on each picture in the picture data of the stored agricultural product to obtain all identified second item images in the picture data of the stored agricultural product,
performing contour retrieval on all identified second item images in the picture data of the stored agricultural products respectively to determine whether the second item contour of each second item image is complete or not,
calculating a similarity between each second item image and each first item image among all the identified second item images in the picture data of the stored agricultural products,
for each second item image of the stored agricultural product, determining the number of first item images whose similarity with the second item image is higher than a seventh threshold, taking this number as the first correlation between the second item image and the product to be searched, and accumulating the sum of the first correlations over all second item images of the stored agricultural product,
for each second item image of the stored agricultural product whose second item contour is complete, determining the number of first item images whose similarity with the second item image is higher than the seventh threshold, taking this number as the second correlation between the second item image and the product to be searched, and accumulating the sum of the second correlations over these second item images,
calculating the character similarity between the character data of the stored agricultural product and the characters to be retrieved of the product to be searched,
determining the total similarity of the stored agricultural products and the products to be searched according to the sum of the first correlation degrees, the sum of the second correlation degrees and the character similarity corresponding to the stored agricultural products;
the display unit is used for displaying the stored agricultural products with the total similarity higher than the eighth threshold value with the products to be searched as search results to the user.
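The total-similarity combination in claim 7 can be sketched as below; the image similarity function, the precomputed text similarity value, and the combination weights are stand-ins, since the claim only states that the three terms are combined:

```python
def total_similarity(first_items, second_items, img_sim, threshold,
                     text_similarity, w=(1.0, 1.0, 1.0)):
    """Total similarity of one stored product against the product to be searched.

    first_items  : first item images detected in the picture to be searched.
    second_items : list of (image, contour_is_complete) for the stored product,
                   where completeness comes from the contour retrieval step.
    """
    rel1 = rel2 = 0
    for img, contour_complete in second_items:
        matches = sum(1 for q in first_items if img_sim(img, q) > threshold)
        rel1 += matches           # first correlation: all second item images
        if contour_complete:
            rel2 += matches       # second correlation: complete contours only
    # weighted combination of the two correlation sums and the text similarity
    return w[0] * rel1 + w[1] * rel2 + w[2] * text_similarity
```

Products whose total similarity exceeds the eighth threshold would then be shown by the display unit as search results.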
CN201910481452.9A 2019-06-04 2019-06-04 Agricultural Internet of things integrated service management system Expired - Fee Related CN110161970B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910481452.9A CN110161970B (en) 2019-06-04 2019-06-04 Agricultural Internet of things integrated service management system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910481452.9A CN110161970B (en) 2019-06-04 2019-06-04 Agricultural Internet of things integrated service management system

Publications (2)

Publication Number Publication Date
CN110161970A CN110161970A (en) 2019-08-23
CN110161970B true CN110161970B (en) 2020-07-07

Family

ID=67627373

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910481452.9A Expired - Fee Related CN110161970B (en) 2019-06-04 2019-06-04 Agricultural Internet of things integrated service management system

Country Status (1)

Country Link
CN (1) CN110161970B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116028586B (en) * 2022-12-08 2024-01-05 广东省现代农业装备研究所 Agricultural industry chain information service method and device

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101072342A (en) * 2006-07-01 2007-11-14 腾讯科技(深圳)有限公司 Situation switching detection method and its detection system
KR20140072454A (en) * 2012-12-04 2014-06-13 주식회사 두드림 A Reality Associated Growing Crops Game Service System based Video Streaming and Method thereof
CN204576202U (en) * 2015-05-21 2015-08-19 河南省华西高效农业有限公司 A kind of ecological organic agricultural water KXG
CN107807598A (en) * 2017-11-24 2018-03-16 吉林省农业机械研究院 Internet of Things+water saving, the fertile Precision Irrigation system and method for section
CN109470299A (en) * 2018-10-19 2019-03-15 江苏大学 A kind of plant growth information monitoring system and method based on Internet of Things

Also Published As

Publication number Publication date
CN110161970A (en) 2019-08-23

Similar Documents

Publication Publication Date Title
CN110188962B (en) Rice supply chain information processing method based on agricultural Internet of things
CN110213376B (en) Information processing system and method for insect pest prevention
CN110197308B (en) Crop monitoring system and method for agricultural Internet of things
CN110210408B (en) Crop growth prediction system and method based on satellite and unmanned aerial vehicle remote sensing combination
CN110197381B (en) Traceable information processing method based on agricultural Internet of things integrated service management system
CN106971167A (en) Crop growth analysis method and its analysis system based on unmanned aerial vehicle platform
US11134221B1 (en) Automated system and method for detecting, identifying and tracking wildlife
Roth et al. Repeated multiview imaging for estimating seedling tiller counts of wheat genotypes using drones
CN108776106A (en) A kind of crop condition monitoring method and system based on unmanned plane low-altitude remote sensing
CN112613438A (en) Portable online citrus yield measuring instrument
Haas-Stapleton et al. Assessing mosquito breeding sites and abundance using an unmanned aircraft
CN211477203U (en) Refined monitoring equipment system based on high-resolution remote sensing image
CN110161970B (en) Agricultural Internet of things integrated service management system
CN113822198A (en) Peanut growth monitoring method, system and medium based on UAV-RGB image and deep learning
Rumora et al. Spatial video remote sensing for urban vegetation mapping using vegetation indices
Maan et al. Tree species biomass and carbon stock measurement using ground based-LiDAR
CN106780323A (en) A kind of collection of agriculture feelings and real time updating method and system based on smart mobile phone
CN110138879B (en) Processing method for agricultural Internet of things
CN110175267B (en) Agricultural Internet of things control processing method based on unmanned aerial vehicle remote sensing technology
CN116189076A (en) Observation and identification system and method for bird observation station
CN115379150A (en) System and method for automatically generating dynamic video of rice growth process in remote way
CN112287787B (en) Crop lodging grading method based on gradient histogram characteristics
US20220358641A1 (en) Information processing device and index value calculation method
CN115830474A (en) Method and system for identifying wild Tibetan medicine lamiophlomis rotata and distribution thereof and calculating yield thereof
Dimitrov et al. Infrared thermal monitoring of intelligent grassland via drone

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information

Inventor after: Peng Rongjun

Inventor after: Zhang Yafei

Inventor after: Wang Ping

Inventor after: Wang Min

Inventor after: Yan Daming

Inventor after: Ding Wenqiang

Inventor after: Han Tianjia

Inventor after: Tang Jilong

Inventor after: An Hongyan

Inventor after: Meng Qingmin

Inventor after: Meng Qingshan

Inventor after: Tang Qinggang

Inventor after: Li Ying

Inventor after: Liu Shulin

Inventor before: Peng Rongjun

Inventor before: Wang Min

Inventor before: Yan Daming

Inventor before: Ding Wenqiang

Inventor before: Han Tianjia

Inventor before: Tang Jilong

Inventor before: An Hongyan

Inventor before: Meng Qingmin

Inventor before: Meng Qingshan

Inventor before: Li Ying

Inventor before: Liu Shulin

Inventor before: Zhang Yafei

GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20210224

Address after: No.263, Hanshui Road, Nangang District, Harbin City, Heilongjiang Province

Patentee after: Heilongjiang Beidahuang Agriculture Co.,Ltd.

Address before: 154000 Qixing farm, Sanjiang Administration Bureau of agricultural reclamation, Jiamusi City, Heilongjiang Province

Patentee before: Qixing Farm in Heilongjiang Province

CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20200707