CN116229975B - System and method for voice reporting of field diseases and insect pests in intelligent interaction scene - Google Patents

Publication number: CN116229975B (application number CN202310259980.6A)
Authority: CN (China)
Prior art keywords: voice, correction, filling, pest, person
Legal status: Active
Application number: CN202310259980.6A
Other languages: Chinese (zh)
Other versions: CN116229975A (en)
Inventors: 徐玮, 张露露, 符首夫, 管征超, 王丽
Current Assignee: Hangzhou Yinghe Jiatian Technology Co., Ltd.
Original Assignee: Hangzhou Yinghe Jiatian Technology Co., Ltd.
Application filed by Hangzhou Yinghe Jiatian Technology Co., Ltd.
Priority: CN202310259980.6A
Publications: CN116229975A (application), CN116229975B (grant)


Classifications

    • G — PHYSICS
    • G10 — MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L — SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 — Speech recognition
    • G10L 15/22 — Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00 — Handling natural language data
    • G06F 40/30 — Semantic analysis
    • G06F 40/35 — Discourse or dialogue representation
    • G — PHYSICS
    • G10 — MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L — SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 — Speech recognition
    • G10L 15/26 — Speech-to-text systems
    • Y — GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 — TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A — TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A 90/00 — Technologies having an indirect contribution to adaptation to climate change
    • Y02A 90/10 — Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Abstract

The invention relates to the technical field of pest and disease monitoring, in particular to a system and method for voice reporting of field diseases and insect pests in an intelligent interaction scene. A voice assistant queries the reporter for basic information on the field to be surveyed for diseases and pests, completing the first part of voice reporting; a voice-recognition matching database is constructed for the voice reporting of each item of pest-survey data, and the input speech is recognized in real time to complete the second part of voice reporting; the reporters' correction records are captured and collected into a first feature correction library and a second feature correction library; each reporter's personalized feature correction chains are captured and assembled into a personalized feature correction library for that reporter; each reporter's voice reporting records are continuously collected and analyzed, and the personalized feature correction libraries are continuously supplemented and optimized; the text content obtained by voice recognition of the voice information is then automatically corrected according to the matching personalized feature correction chain.

Description

System and method for voice reporting of field diseases and insect pests in intelligent interaction scene
Technical Field
The invention relates to the technical field of pest and disease monitoring, in particular to a field pest and disease voice reporting system and method under an intelligent interaction scene.
Background
In recent years, plant diseases and insect pests have occurred repeatedly and frequently, while the number of survey-and-report technicians has decreased. Field surveying still relies mainly on manual labor: a conventional survey requires at least two people, one to carry out the survey and one to record the data, so the labor intensity is high, the monitoring efficiency is relatively low, and the monitoring accuracy suffers. Using equipment with a voice-recognition interaction function to run the whole monitoring process through voice interaction, and collecting and analyzing the voice data, effectively reduces the manpower required and improves the efficiency and accuracy of field surveying. However, because pest and disease monitoring is usually a long process and the amount of data to be collected is large, recognition errors in the voice analysis stage cause great trouble for later data calibration.
Disclosure of Invention
The invention aims to provide a field pest and disease damage voice reporting system and method under an intelligent interaction scene so as to solve the problems in the background technology.
In order to solve the technical problems, the invention provides the following technical scheme: a field pest and disease damage voice reporting method under intelligent interaction scene comprises the following steps:
step S100: when a person to be filled selects to enter voice filling in a system interface, the system acquires real-time information of the current position of the person to be filled and the current weather condition; the system inquires the basic information of the field to be subjected to the investigation of the plant diseases and insect pests through a voice assistant, the information feedback is carried out on the basic information of the field by the personnel to be subjected to the voice input mode, the voice recognition is carried out on the input voice in real time by the system, the corresponding text content is obtained, and the first part of voice input is completed; allowing the filling personnel to manually correct the content obtained by voice filling in real time;
step S200: the field to be subjected to the investigation of the plant diseases and insect pests is subjected to investigation of various plant diseases and insect pests by a filling staff in a voice recording mode; respectively constructing a voice recognition matching database corresponding to each disease and pest investigation data when voice reporting is completed; the system carries out voice recognition on the input voice in real time to obtain corresponding text content, and completes the second part of voice reporting; allowing a person to correct voice of the content obtained by voice filling in real time or manually;
step S300: respectively extracting historical correction records of the filling staff during the first part of voice filling, and capturing and collecting the characteristic correction chains of the filling staff to obtain a first characteristic correction library; extracting a history correction record of the filling staff during the second part of voice filling, and capturing and collecting the characteristic correction chains of the filling staff to obtain a second characteristic correction library;
step S400: capturing individual characteristic correction chains of all the filling staff based on the first characteristic correction library and the second characteristic correction library, and constructing individual characteristic correction libraries of all the filling staff; respectively carrying out continuous acquisition and analysis on voice filling records of all the filling staff, and continuously supplementing and optimizing the personalized feature correction library of all the filling staff;
step S500: when a certain person to be filled carries out the first part of voice filling or the second part of voice filling, the voice data which accords with a certain personalized feature correction chain of the certain person to be filled is captured in the voice information recorded by the certain person to be filled, and the text content obtained after voice recognition of the voice information is automatically corrected according to the personalized feature correction chain.
Further, step S100 includes: setting a number of voice prompts for collecting basic information on the field to be surveyed for diseases and pests; using the voice assistant to query the reporter prompt by prompt, monitoring the reporter's speech, and collecting the voice information fed back, thereby completing the voice reporting of the field's basic information.
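A minimal Python sketch of the step S100 query loop described above (all prompt texts and function names are illustrative, not from the patent): the voice assistant reads each basic-information prompt in turn, listens, and records the recognized answer.

```python
# Assumed prompt set for field basic information (illustrative only).
BASIC_INFO_PROMPTS = [
    "Please state the field location.",
    "Is this a systematic-survey field or a general-survey field?",
    "What crop growth period is the field in?",
]

def collect_basic_info(prompts, ask):
    """ask(prompt) -> recognized text; returns a prompt -> answer mapping."""
    record = {}
    for prompt in prompts:
        record[prompt] = ask(prompt)  # one voice prompt, one monitored reply
    return record

# Stand-in recognizer for demonstration; a real system would drive TTS + ASR.
canned = {
    "Please state the field location.": "Plot 7, east paddy",
    "Is this a systematic-survey field or a general-survey field?": "systematic",
    "What crop growth period is the field in?": "tillering stage",
}
answers = collect_basic_info(BASIC_INFO_PROMPTS, canned.get)
```

The `ask` callback keeps the loop independent of any particular speech stack, so the same skeleton serves both the first-part and second-part reporting flows.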
Further, step S200 includes:
step S201: setting an input format for matching during voice input and filling of pest investigation data: survey item + quantity; wherein, survey items refer to: pest name + pest growth stage; allowing a person to be filled to realize information coverage of the last voice information by recording the voice information which is the same as the investigation item in the last voice information in the first interval time; allowing a person to enter a new piece of voice information in a second interval time to realize information coverage of the last piece of voice information; wherein the first interval time is greater than the second interval time;
step S202: information acquisition is carried out on pest and disease items which can appear in various types of fields from literature records, experiment records, investigation records and news report records related to the field pest and disease, information acquisition is carried out on growth stages which can appear in each pest and disease, and the pest and disease items are collected into a pest and disease database; based on the disease and insect pest database, locking all the disease and insect pest items to be researched in the field; respectively carrying out Chinese pinyin analysis of the disease and pest names of all the disease and pest damage to obtain Chinese pinyin expressions corresponding to the disease and pest names; based on the Chinese phonetic expression, extracting the phonetic structure corresponding to each Chinese character in each disease and insect pest name; wherein the pinyin structure comprises an initial structure, a final structure and a character tone structure;
step S203: collecting growth stages which are generated by correspondence of all plant diseases and insect pests, respectively carrying out Chinese pinyin analysis of stage names on each growth stage to respectively obtain Chinese pinyin expressions corresponding to the stage names, and extracting pinyin structures corresponding to each Chinese character in the stage names based on the Chinese pinyin expressions; wherein the pinyin structure comprises an initial structure, a final structure and a character tone structure; when voice filling of corresponding various disease and pest investigation data is completed, a corresponding voice recognition matching database is constructed, wherein the voice recognition matching database comprises pinyin structures corresponding to all Chinese characters appearing in all disease and pest names and pinyin structures corresponding to all Chinese characters appearing in all stage names;
Because reporters usually follow a fixed workflow when monitoring a target field, and because the content involved in pest and disease monitoring is dominated by professional and rare vocabulary, the vocabulary appearing in the text obtained by voice recognition is limited; constructing the voice-recognition matching database therefore effectively narrows the voice-text data the system must consider during voice recognition.
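The decomposition in steps S202–S203 can be sketched as follows. The pinyin table below is hand-written for a handful of characters purely for illustration (a real system would use a full pinyin lexicon); the point is pooling per-character (initial, final, tone) structures into a small matching database.

```python
# Tiny hand-made pinyin lexicon: character -> (initial, final, tone).
# Entries are illustrative; a production system would cover the full charset.
PINYIN = {
    "稻": ("d", "ao", 4), "飞": ("f", "ei", 1), "虱": ("sh", "i", 1),
    "成": ("ch", "eng", 2), "虫": ("ch", "ong", 2),
}

def matching_database(terms):
    """Collect the pinyin structure of every character in the given names."""
    db = set()
    for term in terms:
        for ch in term:
            db.add((ch,) + PINYIN[ch])  # (char, initial, final, tone)
    return db

# Pest name "稻飞虱" (rice planthopper) + stage name "成虫" (adult).
db = matching_database(["稻飞虱", "成虫"])
```

Restricting recognition matching to this set is what the paragraph above means by narrowing the voice-text data involved in recognition.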
Further, step S300 includes:
step S301: respectively extracting information of each correction record of the first part of voice filling or the second part of voice filling of the filling personnel to obtain a plurality of text correction strips corresponding to each correction record; recording the character content corresponding to each filling person before manual correction in the ith correction record as A i The corresponding text content obtained by each filling person after manual correction isB i ;B i And A is a i The text correction bars existing between are gathered asWherein, the liquid crystal display device comprises a liquid crystal display device,respectively shown in the following description A i Conversion to B i 1 st, 2 nd, … th, n th word correcting bar produced in the process of (a); in each character correcting strip, carrying out Chinese character pinyin analysis on characters before correction and characters after correction respectively to obtain pinyin structures corresponding to the characters; wherein the pinyin structure comprises an initial structure, a final structure and a character tone structure; capturing all the differential pinyin structure pairs between the characters before correction and the characters after correction for each character correction strip;
step S302: if some character correcting bar g1→g2 has a pair { r1, r2} of the pinyin structure, it means that the character g1 is corrected to the character g2 because the r1 structure in the character g1 is corrected to the r2 structure; a characteristic correction chain is constructed between a pair { r1, r2} of the distinguishing pinyin structure and a certain character correction bar g1→g2: g1→g2→ { r1, r2};
step S303: extracting and constructing a feature correction chain contained in each correction record in the first part of voice filling, and accumulating the occurrence frequency of each feature correction chainWherein x is 1 Representing the total number of times a feature correction chain appears during a first portion of the voice fill; y is 1 Representing the total number of feature correction chains captured in all correction records occurring at the time of the first partial voice fill; setting a first frequency threshold, and collecting all feature correction chains larger than the first frequency threshold to obtain a first feature correction library;
step S304: extracting and constructing in the second part of voice filling, correcting the characteristics contained in each correction record, and accumulating the occurrence frequency of each characteristic correction chainWherein x is 2 Representing the total number of times a feature correction chain appears during a second portion of the voice fill; y is 2 Representing the total number of feature correction chains captured in all correction records occurring at the time of the second partial voice fill; setting a second frequency threshold, and collecting all feature correction chains larger than the second frequency threshold to obtain a second feature correction library;
Because of different growth environments and pronunciation habits, different people speaking Mandarin often differ in intonation, pronunciation, and articulation; for example, if a reporter habitually confuses the initials "f" and "h", voice recognition errors caused by that confusion will recur in that reporter's voice reporting; the process above captures exactly such characteristics, effectively extracting regular correction patterns from a sufficient base of correction records.
Further, step S400 includes:
step S401: respectively acquiring a characteristic correcting chain set Q1 contained in a first characteristic correcting library and a characteristic correcting chain set Q2 contained in a second characteristic correcting library of each filling person, and calculating Q2n_Q1=Q3;
step S402: and respectively taking the characteristic correction chains contained in the set Q3 as the individual characteristic correction chains corresponding to the filling staff, and respectively collecting all the individual characteristic correction chains of the filling staff to obtain an individual characteristic correction library of the filling staff.
To better realize the method, a system for voice reporting of field diseases and insect pests in an intelligent interaction scene is also provided, comprising: a first-part voice reporting module, a correction module, a second-part voice reporting module, a feature correction library construction module, a personalized feature correction library construction module, and an automatic correction module;
the first-part voice reporting module acquires real-time information on the current position and weather conditions when a reporter starts voice reporting, queries the reporter through a voice assistant for basic information on the field to be surveyed for diseases and pests, and performs voice recognition on the reporter's input speech in real time to obtain the corresponding text content, completing the first part of voice reporting;
the correction module captures and collects the correction records produced by each reporter during the first and second parts of voice reporting;
the second-part voice reporting module assists the reporter in surveying each item of disease and pest data in the field by voice recording, constructs a voice-recognition matching database for the voice reporting of each item of pest-survey data, and performs voice recognition on the input speech in real time to obtain the corresponding text content, completing the second part of voice reporting;
the feature correction library construction module receives the data in the correction module and constructs a first feature correction library and a second feature correction library for each reporter;
the personalized feature correction library construction module receives the data in the feature correction library construction module, captures each reporter's personalized feature correction chains, and constructs each reporter's personalized feature correction library;
the automatic correction module captures, in real time, voice data containing a personalized feature correction chain in the voice information recorded by each reporter during the first or second part of voice reporting, and automatically corrects the text content obtained by voice recognition of that voice information according to the corresponding chain.
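The automatic correction module's behavior can be sketched as a character-level rewrite driven by the personal library (a simplification: real matching would also verify the pinyin context {r1, r2} of each chain before applying it; all names here are illustrative).

```python
def auto_correct(text, chains):
    """Rewrite recognized text using (g1, g2) pairs from the personal library.

    chains: iterable of (wrong_char, right_char) taken from the g1 -> g2
    sides of the reporter's personalized feature correction chains.
    """
    table = {g1: g2 for g1, g2 in chains}
    return "".join(table.get(ch, ch) for ch in text)

# The recognizer produced 灰 where this reporter's library says 灰 -> 飞:
corrected = auto_correct("稻灰虱成虫100头", [("灰", "飞")])
```

The misrecognized "稻灰虱" is rewritten to the pest name "稻飞虱" without any manual intervention, which is the payoff the module description above claims.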
Further, the correction module comprises a first correction unit and a second correction unit;
the first correction unit collects the correction records produced by reporters during the first part of voice reporting;
the second correction unit collects the correction records produced by reporters during the second part of voice reporting.
Further, the feature correction library construction module comprises a first feature correction library construction unit and a second feature correction library construction unit;
the first feature correction library construction unit extracts the historical correction records produced by each reporter during the first part of voice reporting and captures and collects each reporter's feature correction chains;
the second feature correction library construction unit extracts the historical correction records produced by each reporter during the second part of voice reporting and captures and collects each reporter's feature correction chains.
Compared with the prior art, the invention has the following beneficial effects: equipment with a voice-recognition interaction function replaces the traditional manual mode; the characteristic voice reporting habits that each reporter exhibits during operation are captured as regularities, addressing voice recognition errors caused by non-standard tone, intonation, and articulation in each reporter's voice reporting; all text information that can appear during field disease and pest monitoring is integrated, effectively narrowing the voice recognition data involved during operation; meanwhile, the regularities presented by each reporter's personalized reporting habits and corrections are captured and stored per reporter, assisting automatic correction of reported content during reporting, improving reporting efficiency and the data calibration rate.
Drawings
The accompanying drawings are included to provide a further understanding of the invention, and are incorporated in and constitute a part of this specification, illustrate the invention and together with the embodiments of the invention, serve to explain the invention, without limitation to the invention. In the drawings:
FIG. 1 is a schematic flow chart of a field pest and disease damage voice reporting method under an intelligent interaction scene of the invention;
fig. 2 is a schematic structural diagram of a field pest voice reporting system in an intelligent interaction scene of the invention.
Detailed Description
The technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the drawings; it is apparent that the described embodiments are only some embodiments of the present invention, not all of them. All other embodiments obtained by those skilled in the art based on the embodiments of the invention without inventive effort fall within the scope of the invention.
Referring to fig. 1-2, the present invention provides the following technical solutions: a field pest and disease damage voice reporting method under intelligent interaction scene comprises the following steps:
step S100: when a person to be filled selects to enter voice filling in a system interface, the system acquires real-time information of the current position of the person to be filled and the current weather condition; the system inquires the basic information of the field to be subjected to the investigation of the plant diseases and insect pests through a voice assistant, the information feedback is carried out on the basic information of the field by the personnel to be subjected to the voice input mode, the voice recognition is carried out on the input voice in real time by the system, the corresponding text content is obtained, and the first part of voice input is completed; allowing the filling personnel to manually correct the content obtained by voice filling in real time;
Wherein, step S100 includes: setting a number of voice prompts for collecting basic information on the field to be surveyed for diseases and pests; using the voice assistant to query the reporter prompt by prompt, monitoring the reporter's speech, and collecting the voice information fed back, thereby completing the voice reporting of the field's basic information;
step S200: the field to be subjected to the investigation of the plant diseases and insect pests is subjected to investigation of various plant diseases and insect pests by a filling staff in a voice recording mode; respectively constructing a voice recognition matching database corresponding to each disease and pest investigation data when voice reporting is completed; the system carries out voice recognition on the input voice in real time to obtain corresponding text content, and completes the second part of voice reporting; allowing a person to correct voice of the content obtained by voice filling in real time or manually;
For example, after the system broadcasts the weather and the survey location, the voice assistant issues a voice prompt asking whether the field is a systematic-survey field or a general-survey field, thereby collecting the field-type information;
For example, the voice assistant issues a voice prompt asking what growth period the field is in; if the reporter feeds back the rice growth period, the specific subdivisions of the rice growth period are automatically fetched for the reporter to select and fill in;
wherein, step S200 includes:
step S201: setting an input format for matching during voice input and filling of pest investigation data: survey item + quantity; wherein, survey items refer to: pest name + pest growth stage;
For example, the conventional systematic survey of rice planthoppers covers the white-backed planthopper (Sogatella furcifera) and the brown planthopper; the surveyed growth stages of the white-backed planthopper include: adults, short-winged adults, fifth instar, fourth instar, third instar, and first-to-second instar;
The reporter can then enter, for example: "white-backed planthopper adults 100 head, short-winged adults 70 head, fifth instar 32 head, fourth instar 21 head, third instar 17 head, first-to-second instar 60 head; brown planthopper adults 120 head, short-winged adults 80 head, fifth instar 15 head, fourth instar 33 head, third instar 59 head, second instar 100 head";
If the reporter needs to correct the recorded "33 head" for fourth-instar white-backed planthoppers, voice-recording "white-backed planthopper fourth instar 35 head" overwrites the earlier "white-backed planthopper fourth instar 33 head";
Within a first interval time, the reporter may overwrite the last piece of voice information by recording new voice information with the same survey item; within a second interval time, the reporter may overwrite the last piece of voice information by recording a new piece of voice information; the first interval time is greater than the second interval time;
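The overwrite rule above can be sketched by keying entries on the survey item (pest name + growth stage), so that re-recording the same survey item simply replaces the old count; interval-time checks are omitted here and all names are illustrative.

```python
def record_entries(entries):
    """entries: sequence of (pest, stage, count) in spoken order.

    Later entries with the same survey item (pest, stage) overwrite earlier
    ones, mirroring the voice-overwrite rule; timing checks are not modeled.
    """
    book = {}
    for pest, stage, count in entries:
        book[(pest, stage)] = count  # same survey item -> information coverage
    return book

book = record_entries([
    ("white-backed planthopper", "fourth instar", 33),
    ("brown planthopper", "adult", 120),
    ("white-backed planthopper", "fourth instar", 35),  # spoken correction
])
```

After the three utterances, the fourth-instar white-backed planthopper count is 35, matching the worked example in the paragraph above.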
step S202: information acquisition is carried out on pest and disease items which can appear in various types of fields from literature records, experiment records, investigation records and news report records related to the field pest and disease, information acquisition is carried out on growth stages which can appear in each pest and disease, and the pest and disease items are collected into a pest and disease database; based on the disease and insect pest database, locking all the disease and insect pest items to be researched in the field; respectively carrying out Chinese pinyin analysis of the disease and pest names of all the disease and pest damage to obtain Chinese pinyin expressions corresponding to the disease and pest names; based on the Chinese phonetic expression, extracting the phonetic structure corresponding to each Chinese character in each disease and insect pest name; wherein the pinyin structure comprises an initial structure, a final structure and a character tone structure;
step S203: collecting growth stages which are generated by correspondence of all plant diseases and insect pests, respectively carrying out Chinese pinyin analysis of stage names on each growth stage to respectively obtain Chinese pinyin expressions corresponding to the stage names, and extracting pinyin structures corresponding to each Chinese character in the stage names based on the Chinese pinyin expressions; wherein the pinyin structure comprises an initial structure, a final structure and a character tone structure; when voice filling of corresponding various disease and pest investigation data is completed, a corresponding voice recognition matching database is constructed, wherein the voice recognition matching database comprises pinyin structures corresponding to all Chinese characters appearing in all disease and pest names and pinyin structures corresponding to all Chinese characters appearing in all stage names;
For example, after rice planthoppers are selected as the pest, the conventional survey items for rice planthoppers include the white-backed planthopper and the brown planthopper, and the growth stages include adults, short-winged adults, fifth instar, fourth instar, third instar, and second instar;
For example, after the rice leaf roller is selected as the pest, its conventional survey items include the moth count, the larva count per hundred clumps, the egg count, and the leaf-rolling rate;
step S300: respectively extracting historical correction records of the filling staff during the first part of voice filling, and capturing and collecting the characteristic correction chains of the filling staff to obtain a first characteristic correction library; extracting a history correction record of the filling staff during the second part of voice filling, and capturing and collecting the characteristic correction chains of the filling staff to obtain a second characteristic correction library;
wherein, step S300 includes:
step S301: respectively extracting information of each correction record of the first part of voice filling or the second part of voice filling of the filling personnel to obtain a plurality of text correction strips corresponding to each correction record; record in the first placein the i correction records, the corresponding text content before manual correction of each filling person is A i The corresponding text content obtained by each filling person after manual correction is B i ;B i And A is a i The text correction bars existing between are gathered asWherein, the liquid crystal display device comprises a liquid crystal display device,respectively shown in the following description A i Conversion to B i 1 st, 2 nd, … th, n th word correcting bar produced in the process of (a); in each character correcting strip, carrying out Chinese character pinyin analysis on characters before correction and characters after correction respectively to obtain pinyin structures corresponding to the characters; wherein the pinyin structure comprises an initial structure, a final structure and a character tone structure; capturing all the differential pinyin structure pairs between the characters before correction and the characters after correction for each character correction strip;
step S302: if a certain text correction bar g1→g2 has a differential pinyin structure pair {r1, r2}, it means that the character g1 is corrected to the character g2 as a result of the r1 structure in g1 being corrected to the r2 structure; a feature correction chain is constructed between the differential pinyin structure pair {r1, r2} and the text correction bar g1→g2: g1→g2→{r1, r2};
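Steps S301 and S302 above can be sketched in a few lines. The snippet below is a minimal illustration, not the patented implementation: the PINYIN table is a tiny hand-made stand-in for a full Chinese-pinyin lookup, and the (initial, final, tone) triple mirrors the pinyin structure described in the text.

```python
# Sketch of steps S301/S302: decompose characters into their pinyin
# structures (initial, final, tone) and capture the differential
# pinyin structure pairs for one text correction bar before -> after.
# PINYIN is a hypothetical, hand-made lookup standing in for a real
# Chinese-pinyin dictionary.
PINYIN = {
    "稻": ("d", "ao", 4),    # dào
    "到": ("d", "ao", 4),    # dào (homophone: no differential pair)
    "虫": ("ch", "ong", 2),  # chóng
    "冲": ("ch", "ong", 1),  # chōng (differs from 虫 only in tone)
}

PARTS = ("initial", "final", "tone")

def diff_pairs(before: str, after: str):
    """Return the differential pinyin structure pairs between the
    character before correction and the character after correction."""
    b, a = PINYIN[before], PINYIN[after]
    return [(PARTS[k], b[k], a[k]) for k in range(3) if b[k] != a[k]]

# Correcting 冲 to 虫 yields one differential pair: tone 1 -> tone 2,
# giving the feature correction chain 冲 -> 虫 -> {tone1, tone2}.
print(diff_pairs("冲", "虫"))
print(diff_pairs("到", "稻"))  # homophones: empty, no chain is built
```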
step S303: extracting and constructing the feature correction chains contained in each correction record in the first part of voice filling, and respectively accumulating the occurrence frequency of each feature correction chain as x1/y1; wherein x1 represents the total number of times a feature correction chain appears during the first part of voice filling, and y1 represents the total number of feature correction chains captured in all correction records occurring during the first part of voice filling; setting a first frequency threshold, and collecting all feature correction chains whose frequency is greater than the first frequency threshold to obtain the first feature correction library;
step S304: extracting and constructing the feature correction chains contained in each correction record in the second part of voice filling, and respectively accumulating the occurrence frequency of each feature correction chain as x2/y2; wherein x2 represents the total number of times a feature correction chain appears during the second part of voice filling, and y2 represents the total number of feature correction chains captured in all correction records occurring during the second part of voice filling; setting a second frequency threshold, and collecting all feature correction chains whose frequency is greater than the second frequency threshold to obtain the second feature correction library;
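The frequency accumulation and threshold filtering of steps S303/S304 amount to counting each chain and dividing by the total number of captured chains. The sketch below is an assumption-laden illustration: chains are simplified to hashable tuples, and the threshold value is arbitrary.

```python
# Sketch of steps S303/S304: accumulate the frequency x/y of each
# feature correction chain over all correction records of one
# reporting part, and keep only chains above a frequency threshold.
from collections import Counter

def build_feature_library(chains, threshold):
    """chains: one entry per captured feature correction chain.
    Returns the set of chains whose frequency x/y exceeds threshold."""
    counts = Counter(chains)  # x: occurrences of each distinct chain
    total = len(chains)       # y: total chains captured in all records
    return {c for c, x in counts.items() if x / total > threshold}

# Hypothetical capture log: one chain recurs 3 times out of 4.
chains = [("g1", "g2", "r1", "r2")] * 3 + [("g3", "g4", "r5", "r6")]
library = build_feature_library(chains, threshold=0.5)
print(library)  # only the chain with frequency 3/4 survives
```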
step S400: capturing individual characteristic correction chains of all the filling staff based on the first characteristic correction library and the second characteristic correction library, and constructing individual characteristic correction libraries of all the filling staff; respectively carrying out continuous acquisition and analysis on voice filling records of all the filling staff, and continuously supplementing and optimizing the personalized feature correction library of all the filling staff;
wherein, step S400 includes:
step S401: respectively acquiring the feature correction chain set Q1 contained in the first feature correction library and the feature correction chain set Q2 contained in the second feature correction library of each filling person, and calculating Q2 ∩ Q1 = Q3;
step S402: respectively taking the characteristic correction chains contained in the set Q3 as the individual characteristic correction chains corresponding to each person to be filled, and respectively collecting all the individual characteristic correction chains of each person to be filled to obtain an individual characteristic correction library of each person to be filled;
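Steps S401/S402 treat a chain as an individual habit of a filling person only if it recurs in both reporting parts, i.e. the personalized library is the intersection of the two libraries. A minimal sketch, with hypothetical chain tuples:

```python
# Sketch of steps S401/S402: the personalized feature correction
# library Q3 is the intersection of the chain sets of the first and
# second feature correction libraries of one filling person.
def personalized_library(q1: set, q2: set) -> set:
    """Q3 = Q2 ∩ Q1: keep only chains recurring in both parts."""
    return q2 & q1

q1 = {("g1", "g2", "r1", "r2"), ("g3", "g4", "r5", "r6")}
q2 = {("g1", "g2", "r1", "r2"), ("g7", "g8", "r9", "r0")}
print(personalized_library(q1, q2))  # prints {('g1', 'g2', 'r1', 'r2')}
```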
step S500: when a certain person to be filled carries out the first part of voice filling or the second part of voice filling, capturing voice data which accords with a certain personalized feature correction chain of the certain person to be filled in voice information recorded by the certain person to be filled, and automatically correcting the text content obtained after voice recognition of the voice information according to the certain personalized feature correction chain;
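Step S500 can then be sketched as applying a filling person's personalized chains to freshly recognized text. This illustration simplifies each chain g1→g2→{r1, r2} to a character substitution g1→g2; a real system would also confirm the pinyin structure condition {r1, r2} against the recognized speech before substituting.

```python
# Sketch of step S500: automatically correct recognized text using a
# filling person's personalized feature correction chains.
def auto_correct(text: str, personal_chains) -> str:
    """personal_chains: iterable of (g1, g2, pair) tuples, where pair
    is the differential pinyin structure pair {r1, r2}."""
    for g1, g2, _pair in personal_chains:
        text = text.replace(g1, g2)  # substitute habitual misrecognition
    return text

# Hypothetical habit: this person's 虫 (chóng) is often recognized as
# 冲 (chōng), a tone-only difference captured as a personalized chain.
chains = [("冲", "虫", ("tone", 1, 2))]
print(auto_correct("害冲数量", chains))  # prints 害虫数量
```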
in order to better realize the method, a field pest and disease damage voice reporting system under an intelligent interaction scene is also provided, and the system comprises: the system comprises a first part of voice reporting module, a correction module, a second part of voice reporting module, a characteristic correction library construction module, a personalized characteristic correction library construction module and an automatic correction module;
the first part voice reporting module is used for acquiring real-time information of the current position and the current weather condition when a reporting person starts voice reporting, inquiring basic information of a field to be subjected to plant diseases and insect pests investigation to the reporting person through a voice assistant, and performing voice recognition on input voice of the reporting person in real time to acquire corresponding text content so as to complete the first part voice reporting;
the correction module is used for capturing and collecting correction records generated by the person filling the first part of voice and the second part of voice respectively;
the correction module comprises a first correction unit and a second correction unit;
the first correction unit is used for collecting correction records generated by the filling personnel during the first part of voice filling;
the second correction unit is used for collecting correction records generated by the filling personnel during the second part of voice filling;
the second part of voice reporting module is used for assisting a reporting person to conduct various plant diseases and insect pests data investigation in the field to be conducted with plant diseases and insect pests investigation in a voice recording mode; respectively constructing a voice recognition matching database corresponding to each disease and pest investigation data when voice reporting is completed; performing voice recognition on the input voice in real time to obtain corresponding text content, and completing voice reporting of a second part;
the characteristic correction library construction module is used for receiving the data in the correction module, constructing a first characteristic correction library for each person filling, and constructing a second characteristic correction library for each person filling;
the feature correction library construction module comprises a first feature correction library construction unit and a second feature correction library construction unit;
the first characteristic correction library construction unit is used for extracting history correction records which appear when the first part of voice is filled out for each filling person and capturing and collecting characteristic correction chains of each filling person;
the second characteristic correction library construction unit is used for extracting history correction records which appear when the second part of voice is filled by each filling person and capturing and collecting characteristic correction chains of each filling person;
the individual characteristic correction library construction module is used for receiving the data in the characteristic correction library construction module, capturing individual characteristic correction chains for each person filling, and constructing an individual characteristic correction library of each person filling;
and the automatic correction module captures voice data containing an individual characteristic correction chain for voice information recorded by each person filling the first part of voice or the second part of voice in real time, and automatically corrects the text content obtained after voice recognition of the voice information according to the corresponding individual characteristic correction chain.
It is noted that relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.
Finally, it should be noted that the foregoing description is only a preferred embodiment of the present invention, and the present invention is not limited thereto; although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art may still modify the technical solutions described therein or substitute equivalents for some of their technical features. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention shall be included in the protection scope of the present invention.

Claims (7)

1. The field pest and disease damage voice reporting method under the intelligent interaction scene is characterized by comprising the following steps:
step S100: when a person to be filled selects to enter voice filling in a system interface, the system acquires real-time information of the current position and the current weather condition of the person to be filled; the system inquires basic information of a field to be subjected to plant diseases and insect pests investigation from a filling staff through a voice assistant, the filling staff feeds back the basic information of the field in a voice recording mode, the system carries out voice recognition on recorded voice in real time to obtain corresponding text content, and the first part of voice filling is completed; allowing the filling personnel to manually correct the content obtained by voice filling in real time;
step S200: the filling person conducts investigation of various pest and disease data in the field to be subjected to pest and disease investigation in a voice recording mode; a voice recognition matching database corresponding to each item of pest and disease investigation data is respectively constructed for the completion of voice reporting; the system performs voice recognition on the input voice in real time to obtain the corresponding text content, and completes the second part of voice reporting; allowing the filling person to manually correct the content obtained by voice filling in real time;
step S300: respectively extracting historical correction records of the filling staff during the first part of voice filling, and capturing and collecting the characteristic correction chains of the filling staff to obtain a first characteristic correction library; extracting a history correction record of the filling staff during the second part of voice filling, and capturing and collecting the characteristic correction chains of the filling staff to obtain a second characteristic correction library;
the step S300 includes:
step S301: respectively extracting information of each correction record of the first part of voice filling or the second part of voice filling of the filling personnel to obtain a plurality of text correction bars corresponding to each correction record; in the i-th correction record, the text content corresponding to each filling person before manual correction is denoted as Ai, and the text content obtained by each filling person after manual correction is denoted as Bi; the set of text correction bars existing between Bi and Ai is {L1, L2, …, Ln}, wherein L1, L2, …, Ln respectively denote the 1st, 2nd, …, nth text correction bar produced in the process of converting Ai into Bi; in each text correction bar, Chinese pinyin analysis is performed on the character before correction and the character after correction respectively to obtain the pinyin structure corresponding to each character; wherein the pinyin structure comprises an initial structure, a final structure and a character tone structure; for each text correction bar, all differential pinyin structure pairs between the character before correction and the character after correction are captured;
step S302: if a certain text correction bar g1→g2 has a differential pinyin structure pair {r1, r2}, it means that the character g1 is corrected to the character g2 as a result of the r1 structure in g1 being corrected to the r2 structure; a feature correction chain is constructed between the differential pinyin structure pair {r1, r2} and the text correction bar g1→g2: g1→g2→{r1, r2};
step S303: extracting and constructing the feature correction chains contained in each correction record in the first part of voice filling, and respectively accumulating the occurrence frequency of each feature correction chain as x1/y1; wherein x1 represents the total number of times a feature correction chain appears during the first part of voice filling, and y1 represents the total number of feature correction chains captured in all correction records occurring during the first part of voice filling; setting a first frequency threshold, and collecting all feature correction chains whose frequency is greater than the first frequency threshold to obtain the first feature correction library;
step S304: extracting and constructing the feature correction chains contained in each correction record in the second part of voice filling, and respectively accumulating the occurrence frequency of each feature correction chain as x2/y2; wherein x2 represents the total number of times a feature correction chain appears during the second part of voice filling, and y2 represents the total number of feature correction chains captured in all correction records occurring during the second part of voice filling; setting a second frequency threshold, and collecting all feature correction chains whose frequency is greater than the second frequency threshold to obtain the second feature correction library;
step S400: capturing individual characteristic correction chains of all the filling staff based on the first characteristic correction library and the second characteristic correction library, and constructing individual characteristic correction libraries of all the filling staff; respectively carrying out continuous acquisition and analysis on voice filling records of all the filling staff, and continuously supplementing and optimizing the personalized feature correction library of all the filling staff;
step S500: when a certain person to be filled carries out first part voice filling or second part voice filling, capturing voice data which accords with a certain personalized feature correction chain of the certain person to be filled in voice information recorded by the certain person to be filled, and automatically correcting the text content obtained after voice recognition of the voice information according to the certain personalized feature correction chain.
2. The voice reporting method for field diseases and insect pests in intelligent interaction scenario of claim 1, wherein the step S100 includes: respectively setting a plurality of voice prompts for basic information acquisition in the fields for carrying out the investigation of the plant diseases and insect pests; and inquiring the personnel to be filled one by one according to the corresponding voice prompts by utilizing the voice assistant, performing voice monitoring, collecting voice information fed back by the personnel to be filled, and completing voice filling of the basic information in the field by the personnel to be filled.
3. The voice reporting method for field diseases and insect pests in intelligent interaction scenario of claim 1, wherein the step S200 includes:
step S201: setting an input format for matching during voice input and filling of pest investigation data: survey item + quantity; wherein, the investigation item refers to: pest name + pest growth stage; allowing a person to be filled to realize information coverage of the last voice information by recording the voice information which is the same as the investigation item in the last voice information in a first interval time; allowing a person to enter a new piece of voice information in a second interval time to realize information coverage of the last piece of voice information; wherein the first interval time is greater than the second interval time;
step S202: information acquisition is carried out on pest and disease items which can appear in various types of fields from literature records, experiment records, investigation records and news report records related to the field pest and disease, information acquisition is carried out on growth stages which can appear in each pest and disease, and the pest and disease items are collected into a pest and disease database; based on the disease and insect pest database, locking all the disease and insect pest items which occur in the field to be subjected to the disease and insect pest investigation; respectively carrying out Chinese pinyin analysis of the disease and pest names of all the disease and pest damage to obtain Chinese pinyin expressions corresponding to the disease and pest names; extracting pinyin structures corresponding to all Chinese characters in all plant diseases and insect pests based on the pinyin expression; wherein the pinyin structure comprises an initial structure, a final structure and a character tone structure;
step S203: collecting growth stages which are generated by correspondence of all plant diseases and insect pests, respectively carrying out Chinese pinyin analysis of stage names on each growth stage to respectively obtain Chinese pinyin expressions corresponding to the stage names, and extracting pinyin structures corresponding to each Chinese character in the stage names based on the Chinese pinyin expressions; wherein the pinyin structure comprises an initial structure, a final structure and a character tone structure; and constructing a corresponding voice recognition matching database when voice filling of various plant diseases and insect pests investigation data is completed, wherein the voice recognition matching database comprises pinyin structures corresponding to all Chinese characters appearing in all plant diseases and insect pests names and pinyin structures corresponding to all Chinese characters appearing in all stage names.
4. The voice reporting method for field diseases and insect pests in intelligent interaction scenario of claim 1, wherein the step S400 includes:
step S401: respectively acquiring the feature correction chain set Q1 contained in the first feature correction library and the feature correction chain set Q2 contained in the second feature correction library of each filling person, and calculating Q2 ∩ Q1 = Q3;
step S402: and respectively taking the characteristic correction chains contained in the set Q3 as the individual characteristic correction chains corresponding to the filling staff, and respectively collecting all the individual characteristic correction chains of the filling staff to obtain an individual characteristic correction library of the filling staff.
5. A field pest voice reporting system in an intelligent interactive scenario applied to the field pest voice reporting method in the intelligent interactive scenario of any one of claims 1-4, the system comprising: the system comprises a first part of voice reporting module, a correction module, a second part of voice reporting module, a characteristic correction library construction module, a personalized characteristic correction library construction module and an automatic correction module;
the first part voice reporting module is used for acquiring real-time information of the current position and the current weather condition when the reporting personnel start voice reporting, inquiring basic information of fields to be subjected to plant diseases and insect pests investigation to the reporting personnel through a voice assistant, and performing voice recognition on input voice of the reporting personnel in real time to obtain corresponding text content so as to finish the first part voice reporting;
the correction module is used for capturing and collecting correction records generated by the person filling the first part of voice and the second part of voice respectively;
the second part voice reporting module is used for assisting a reporting person to conduct various plant diseases and insect pests data investigation in the field to be conducted with plant diseases and insect pests investigation in a voice recording mode; respectively constructing a voice recognition matching database corresponding to each disease and pest investigation data when voice reporting is completed; performing voice recognition on the input voice in real time to obtain corresponding text content, and completing voice reporting of a second part;
the characteristic correction library construction module is used for receiving the data in the correction module, constructing a first characteristic correction library for each person filling, and constructing a second characteristic correction library for each person filling;
the personalized feature correction library construction module is used for receiving the data in the feature correction library construction module, capturing a personalized feature correction chain for each filling person, and constructing a personalized feature correction library of each filling person;
the automatic correction module captures voice data containing a personalized feature correction chain for voice information recorded by each person filling the first part of voice or the second part of voice in real time, and automatically corrects the text content obtained after voice recognition of the voice information according to the corresponding personalized feature correction chain.
6. The voice-based field pest and disease damage reporting system under intelligent interaction scene of claim 5, wherein the correction module comprises a first correction unit and a second correction unit;
the first correction unit is used for collecting correction records generated by the filling personnel during the first part of voice filling;
the second correction unit is used for collecting correction records generated by the filling personnel during the second part of voice filling.
7. The voice-based field pest and disease damage reporting system under the intelligent interaction scene of claim 5, wherein the feature correction library construction module comprises a first feature correction library construction unit and a second feature correction library construction unit;
the first characteristic correction library construction unit is used for extracting history correction records which appear when the first part of voice is filled by each filling person and capturing and collecting characteristic correction chains of each filling person;
the second feature correction library construction unit is used for extracting the history correction records of the filling personnel when the second part of voice is filled, and capturing and collecting the feature correction chains of the filling personnel.
CN202310259980.6A 2023-03-17 2023-03-17 System and method for voice reporting of field diseases and insect pests in intelligent interaction scene Active CN116229975B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310259980.6A CN116229975B (en) 2023-03-17 2023-03-17 System and method for voice reporting of field diseases and insect pests in intelligent interaction scene


Publications (2)

Publication Number Publication Date
CN116229975A CN116229975A (en) 2023-06-06
CN116229975B true CN116229975B (en) 2023-08-18

Family

ID=86585615

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310259980.6A Active CN116229975B (en) 2023-03-17 2023-03-17 System and method for voice reporting of field diseases and insect pests in intelligent interaction scene

Country Status (1)

Country Link
CN (1) CN116229975B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105702255A (en) * 2016-03-28 2016-06-22 华智水稻生物技术有限公司 Agricultural data acquisition method, agricultural data acquisition device and mobile terminal
CN106487531A (en) * 2015-08-26 2017-03-08 重庆西线科技有限公司 A kind of voice automatic record method with automatic error correction function
CN106534548A (en) * 2016-11-17 2017-03-22 科大讯飞股份有限公司 Voice error correction method and device
CN106782532A (en) * 2016-12-23 2017-05-31 陈勇 Personal word tone and the corresponding error correcting system of word
CN107678561A (en) * 2017-09-29 2018-02-09 百度在线网络技术(北京)有限公司 Phonetic entry error correction method and device based on artificial intelligence
CN109189765A (en) * 2018-10-18 2019-01-11 成都东谷利农农业科技有限公司 Field investigation recording method, device and readable storage medium storing program for executing
JP2019197210A (en) * 2018-05-08 2019-11-14 日本放送協会 Speech recognition error correction support device and its program
CN111243593A (en) * 2018-11-09 2020-06-05 奇酷互联网络科技(深圳)有限公司 Speech recognition error correction method, mobile terminal and computer-readable storage medium
CN114678027A (en) * 2020-12-24 2022-06-28 深圳Tcl新技术有限公司 Error correction method and device for voice recognition result, terminal equipment and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP7111758B2 (en) * 2020-03-04 2022-08-02 株式会社東芝 Speech recognition error correction device, speech recognition error correction method and speech recognition error correction program


Also Published As

Publication number Publication date
CN116229975A (en) 2023-06-06

Similar Documents

Publication Publication Date Title
US20230186173A1 (en) Method of analyzing influence factor for predicting carbon dioxide concentration of any spatiotemporal position
CN107958351A (en) Teaching quality assessment cloud service platform
CN108154304A (en) There is the server of Teaching Quality Assessment
US20020083103A1 (en) Machine editing system incorporating dynamic rules database
CN108960269B (en) Feature acquisition method and device for data set and computing equipment
CN105096224A (en) Application recommendation method and system
Milone et al. Computational method for segmentation and classification of ingestive sounds in sheep
CN102339606A (en) Depressed mood phone automatic speech recognition screening system
CN110889092A (en) Short-time large-scale activity peripheral track station passenger flow volume prediction method based on track transaction data
CN111428152A (en) Method and device for constructing similar communities of scientific research personnel
CN116229975B (en) System and method for voice reporting of field diseases and insect pests in intelligent interaction scene
KR102095539B1 (en) Method for measuring growth amount by image analyzing ginseng
CN109726665A (en) A kind of agricultural pests detection method based on dynamic trajectory analysis
CN115527130A (en) Grassland pest mouse density investigation method and intelligent evaluation system
CN114663060A (en) Product manufacturing production line collaborative intelligent management system based on digital twin technology
CN111695763B (en) Scheduling system and method based on voice question and answer
CN114021842A (en) Remote education data acquisition and analysis method, equipment and computer storage medium
CN109783681B (en) Agricultural product price information acquisition and processing device and method
CN113515599A (en) Method for arranging help semantic analysis and scheme recommendation
US20200042926A1 (en) Analysis method and computer
CN110110583A (en) A kind of real-time online integration bridge mode automatic recognition system
CN111062430A (en) Pedestrian re-identification evaluation method based on probability density function
CN112488574B (en) Travel demand prediction method based on space-time feature extraction
KR102587573B1 (en) Picture inspection evaluation report automatic generation system using big data
CN114999453B (en) Preoperative visit system based on voice recognition and corresponding voice recognition method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant