CN108305625B - Voice control method and device, electronic equipment and computer readable storage medium - Google Patents



Publication number
CN108305625B
Authority
CN
China
Prior art keywords
instruction
target
planting
voice
planting task
Prior art date
Legal status
Active
Application number
CN201810085318.2A
Other languages
Chinese (zh)
Other versions
CN108305625A (en)
Inventor
卢吉
爱德温·范德·克纳普
黄元钧
Current Assignee
Heyuan Hongjia Agricultural Technology Co ltd
Original Assignee
Shenzhen Chunmuyuan Holdings Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Chunmuyuan Holdings Co Ltd filed Critical Shenzhen Chunmuyuan Holdings Co Ltd
Priority to CN201810085318.2A
Priority to PCT/CN2018/088459 (WO2019144543A1)
Publication of CN108305625A
Application granted
Publication of CN108305625B
Legal status: Active

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/02 Feature extraction for speech recognition; Selection of recognition unit
    • G10L15/06 Creation of reference templates; Training of speech recognition systems, e.g. adaptation to the characteristics of the speaker's voice
    • G10L15/08 Speech classification or search
    • G10L15/10 Speech classification or search using distance or distortion measures between unknown speech and reference templates
    • G10L15/14 Speech classification or search using statistical models, e.g. Hidden Markov Models [HMMs]
    • G10L15/18 Speech classification or search using natural language modelling
    • G10L15/1822 Parsing for meaning understanding
    • G10L15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L2015/223 Execution procedure of a spoken command
    • G10L17/00 Speaker identification or verification
    • G10L17/02 Preprocessing operations, e.g. segment selection; Pattern representation or modelling, e.g. based on linear discriminant analysis [LDA] or principal components; Feature selection or extraction
    • G10L17/04 Training, enrolment or model building
    • G10L17/06 Decision making techniques; Pattern matching strategies
    • G10L17/08 Use of distortion metrics or a particular distance between probe pattern and reference templates
    • G10L17/22 Interactive procedures; Man-machine interfaces

Abstract

The invention provides a voice control method and device, electronic equipment and a computer-readable storage medium. The voice control method comprises the following steps: configuring a corresponding relation between a voice instruction and a planting task instruction, and generating an instruction operation set; receiving a target voice; identifying the target voice to obtain a target voice instruction; determining a planting task instruction matched with the target voice instruction to serve as a target planting task instruction; and controlling the planting operation of the planting equipment according to the target planting task instruction. The method allows crop planting tasks to be operated by voice, which is convenient and quick and gives users a better operating experience.

Description

Voice control method and device, electronic equipment and computer readable storage medium
Technical Field
The present invention relates to the field of intelligent voice technologies, and in particular, to a voice control method and apparatus, an electronic device, and a computer-readable storage medium.
Background
At present, agricultural operation scenes are mostly in wide, open spaces and are diverse, variable, and dispersed, so the complexity of agricultural collection sites makes operation inconvenient for users. For example:
(1) Indoors or outdoors, the small screen of a handheld device is difficult to operate smoothly, especially when the user is wearing protective equipment.
(2) Under severe weather such as strong sun, wind-blown sand, rain, or snow, it is inconvenient for the user to operate while taking protective measures.
(3) Because the user must also carry out farming work while operating the planting equipment, the efficiency of human-machine interaction suffers.
(4) It is inconvenient for the user to carry and operate a handheld device while moving around.
(5) Operating the planting equipment places certain demands on the user's knowledge and skill level.
Disclosure of Invention
In view of the above, there is a need for a voice control method and apparatus, an electronic device, and a computer-readable storage medium that allow crop planting tasks to be operated by voice, conveniently and quickly, giving users a better operating experience.
A method of voice control, the method comprising:
configuring a corresponding relation between a voice instruction and a planting task instruction, and generating an instruction operation set;
receiving a target voice;
identifying the target voice to obtain a target voice instruction;
determining a planting task instruction matched with the target voice instruction to serve as a target planting task instruction;
and controlling the planting operation of the planting equipment according to the target planting task instruction.
According to a preferred embodiment of the present invention, determining a planting task instruction matching the target voice instruction as the target planting task instruction includes:
judging whether the voiceprint identity data of the target voice instruction is the same as the voiceprint identity data configured for the target voice instruction in the instruction operation set;
and if they are the same, taking the matched planting task instruction as the target planting task instruction.
According to a preferred embodiment of the present invention, determining a planting task instruction matching the target voice instruction as the target planting task instruction includes:
judging whether the crop corresponding to the target voice instruction is the same as the crop configured for the target voice instruction in the instruction operation set;
and if they are the same, taking the matched planting task instruction as the target planting task instruction.
According to a preferred embodiment of the present invention, determining a planting task instruction matching the target voice instruction as the target planting task instruction includes:
judging whether the planting link corresponding to the target voice instruction is the same as the planting link configured for the target voice instruction in the instruction operation set;
and if they are the same, taking the matched planting task instruction as the target planting task instruction.
According to a preferred embodiment of the invention, the method further comprises:
and when the planting task instruction matched with the target voice instruction cannot be determined to serve as the target planting task instruction, giving an alarm.
According to a preferred embodiment of the invention, the method further comprises:
when a request to change the voiceprint identity data of the target voice instruction in the instruction operation set is received, acquiring a voice instruction with the updated voiceprint identity data;
and updating the target voice instruction to the voice instruction with the updated voiceprint identity data.
According to a preferred embodiment of the invention, the method further comprises:
when an input of a planting task instruction for a target crop is received, acquiring voice instruction data matched with that planting task instruction from the instruction operation set;
and configuring the planting task instruction corresponding to the target crop based on the matched voice instruction data.
A voice-controlled apparatus, the apparatus comprising:
the configuration unit is used for configuring the corresponding relation between the voice command and the planting task command and generating a command operation set;
a receiving unit for receiving a target voice;
the recognition unit is used for recognizing the target voice to obtain a target voice instruction;
the determining unit is used for determining a planting task instruction matched with the target voice instruction to serve as a target planting task instruction;
and the control unit is used for controlling the planting operation of the planting equipment according to the target planting task instruction.
According to a preferred embodiment of the present invention, the determining unit is specifically configured to:
judge whether the voiceprint identity data of the target voice instruction is the same as the voiceprint identity data configured for the target voice instruction in the instruction operation set;
and if they are the same, take the matched planting task instruction as the target planting task instruction.
According to a preferred embodiment of the present invention, the determining unit is further specifically configured to:
judge whether the crop corresponding to the target voice instruction is the same as the crop configured for the target voice instruction in the instruction operation set;
and if they are the same, take the matched planting task instruction as the target planting task instruction.
According to a preferred embodiment of the present invention, the determining unit is further specifically configured to:
judge whether the planting link corresponding to the target voice instruction is the same as the planting link configured for the target voice instruction in the instruction operation set;
and if they are the same, take the matched planting task instruction as the target planting task instruction.
According to a preferred embodiment of the invention, the apparatus further comprises:
an alarm unit, configured to give an alarm when no planting task instruction matching the target voice instruction can be determined as the target planting task instruction.
According to a preferred embodiment of the invention, the apparatus further comprises:
an obtaining unit, configured to obtain a voice instruction of updated voiceprint identity data when a request for changing voiceprint identity data of a target voice instruction in the instruction operation set is received;
and the updating unit is used for updating the target voice command into the voice command of the updated voiceprint identity data.
According to the preferred embodiment of the present invention, the obtaining unit is further configured to obtain, when an input instruction of a planting task instruction for a target crop is obtained, voice instruction data matched with the planting task instruction from the instruction operation set;
the configuration unit is further configured to configure a planting task instruction corresponding to the target crop based on the matched voice instruction data.
An electronic device, the electronic device comprising:
a processor; and
a memory storing instructions that, when executed by the processor, implement the voice control method.
A computer-readable storage medium having instructions stored therein for execution by a processor in an electronic device to implement the voice control method.
According to the above technical solution, an instruction operation set is generated by configuring the correspondence between voice instructions and planting task instructions. The set gathers all operation instructions in one place, making them easy to store and manage. Received voice can then be recognized and matched against the set to trigger the corresponding instruction, so that the planting device communicating with the electronic device is controlled to execute the related planting operation. In this way, planting tasks are performed on crops by voice, which is convenient and quick and gives users a better operating experience.
Drawings
FIG. 1 is a diagram of an application environment for implementing a voice control method according to a preferred embodiment of the present invention.
FIG. 2 is a flow chart of a voice control method according to a preferred embodiment of the present invention.
FIG. 3 is a functional block diagram of a preferred embodiment of the voice control apparatus of the present invention.
FIG. 4 is a schematic structural diagram of an electronic device implementing a voice control method according to a preferred embodiment of the invention.
Description of the main elements
Electronic device 1
Memory 12
Processor 13
Voice control device 11
Configuration unit 110
Receiving unit 111
Recognition unit 112
Determining unit 113
Control unit 114
Acquisition unit 115
Alarm unit 116
Updating unit 117
Planting device 2
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in detail with reference to the accompanying drawings and specific embodiments.
FIG. 1 is a diagram of the application environment of a preferred embodiment of the voice control method of the present invention. The environment includes an electronic device 1 and a planting device 2, and the electronic device 1 communicates with the planting device 2.
The electronic equipment 1 is used for controlling the planting operation of the planting equipment 2;
the planting device 2 is used for performing a planting operation.
FIG. 2 is a flow chart of a voice control method according to a preferred embodiment of the present invention. The order of the steps in the flow chart may be changed and some steps may be omitted according to different needs.
The voice control method is applied to one or more electronic devices 1, where the electronic device 1 is a device capable of automatically performing numerical calculation and/or information processing according to preset or stored instructions, and its hardware includes, but is not limited to, a microprocessor, an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA), a Digital Signal Processor (DSP), an embedded device, and the like.
The electronic device 1 may be any electronic product capable of human-computer interaction with a user, for example, a personal computer, a tablet computer, a smartphone, a Personal Digital Assistant (PDA), a game machine, an Internet Protocol Television (IPTV), an intelligent wearable device, and the like.
The electronic device 1 may also comprise a network device and/or a user device. The network device includes, but is not limited to, a single network server, a server group consisting of a plurality of network servers, or a Cloud Computing (Cloud Computing) based Cloud consisting of a large number of hosts or network servers.
The Network where the electronic device 1 is located includes, but is not limited to, the internet, a wide area Network, a metropolitan area Network, a local area Network, a Virtual Private Network (VPN), and the like.
S10, the electronic device 1 configures the corresponding relation between voice instructions and planting task instructions to generate an instruction operation set.
In at least one embodiment of the present invention, in order to facilitate the subsequent recognition of the received voice by using the instruction operation set to control the planting operation of the planting device 2 communicating with the electronic device 1, the electronic device 1 configures a corresponding relationship between the voice instruction and the planting task instruction, and generates an instruction operation set.
Preferably, the voice instructions may be custom-set by the user. Specifically, the user may configure voice instructions using terms commonly used in the technical field, so that operators with basic domain knowledge can grasp them quickly; this avoids misoperation caused by overly complicated voice instructions being entered incorrectly.
Of course, in other embodiments, the voice command may also include other setting manners, and the invention is not limited thereto.
Preferably, the planting task instructions include, but are not limited to: seeding, fertilizing, insecticide spraying, watering and the like.
In at least one embodiment of the present invention, the electronic device 1 configuring the correspondence between voice instructions and planting task instructions and generating the instruction operation set includes:
the electronic device 1 obtains at least one planting task instruction and configures a corresponding voice instruction for each of them; the electronic device 1 then merges the configured planting task instructions and their corresponding voice instructions into the instruction operation set.
For example: the electronic device 1 configures the planting task instruction "fertilize tomato" to correspond to the voice instruction "fertilization A", the planting task instruction "spray insecticide on cucumber" to correspond to the voice instruction "insecticide spraying B", and the planting task instruction "sow lettuce" to correspond to the voice instruction "sowing C"; the electronic device 1 then merges these correspondences into the instruction operation set.
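The instruction operation set described above can be sketched as a simple lookup table from voice instructions to planting task instructions. This is a minimal illustration only; the dataclass fields and the voice-instruction names follow the example in the text, and the exact storage layout is an assumption, not the patent's implementation.

```python
# Minimal sketch of an "instruction operation set": each configured voice
# instruction maps to a planting task instruction (crop + planting link).
from dataclasses import dataclass

@dataclass(frozen=True)
class PlantingTaskInstruction:
    crop: str   # target crop, e.g. "tomato"
    task: str   # planting link, e.g. "fertilize", "spray", "seed"

def build_instruction_set() -> dict:
    """Merge the configured (voice instruction -> planting task) pairs."""
    return {
        "fertilization A": PlantingTaskInstruction("tomato", "fertilize"),
        "insecticide spraying B": PlantingTaskInstruction("cucumber", "spray"),
        "sowing C": PlantingTaskInstruction("lettuce", "seed"),
    }

instruction_set = build_instruction_set()
print(instruction_set["fertilization A"].crop)  # -> tomato
```

A recognized target voice instruction then only needs a dictionary lookup to find its candidate planting task instruction.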
S11, the electronic device 1 receives the target voice.
In at least one embodiment of the present invention, the electronic device 1 receives a target voice for subsequent recognition of a target voice instruction from the target voice.
Specifically, the present invention does not limit the manner in which the electronic device 1 receives the target voice. For example: the electronic device 1 may receive the target voice or the like through a voice collecting device communicating with the electronic device 1.
S12, the electronic device 1 recognizes the target speech to obtain a target speech instruction.
In at least one embodiment of the present invention, the electronic device 1 recognizes the target voice to obtain the target voice instruction in two stages: a model training stage and a recognition stage.
Specifically, the model training phase comprises two parts: training of an acoustic model and training of a language model. The acoustic model mainly uses context-dependent triphones as the modeling unit; a speech corpus is built by collecting a large number of speech samples, and the model is trained with algorithms such as Baum-Welch for hidden Markov models to obtain a stable acoustic model. Before acoustic model training, the electronic device 1 also needs to preprocess the sound signal and extract stable acoustic features; specifically, the electronic device 1 may use Mel-Frequency Cepstral Coefficients (MFCC) or Perceptual Linear Prediction (PLP) coefficients as features to solve the feature extraction problem. The training of the language model mainly processes text information: the electronic device 1 first extracts a large amount of text from the specific application scene with a text extraction tool, then builds a corpus, performs semantic analysis on it, and infers the grammatical structures in the corpus to form a series of grammar rules; this yields the trained language model.
Further, the recognition phase is the process of decoding the voice signal; the electronic device 1 may use the Viterbi algorithm for the decoding process based on the hidden Markov model. First, the electronic device 1 performs front-end feature extraction on the input voice signal and acoustically compares the obtained feature vectors against the acoustic model. Then, the electronic device 1 selects the candidate words with the closest probability distribution from the pronunciation dictionary and applies the constraints of the language model to obtain the final recognition result.
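The Viterbi decoding step named above can be illustrated with a minimal, self-contained decoder over a discrete HMM. A real recognizer decodes over triphone HMM states with Gaussian-mixture or neural acoustic scores; the two-state model and observation symbols below are purely illustrative assumptions.

```python
# Toy Viterbi decoder: most likely hidden-state path through a discrete HMM.
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Return the most probable state sequence for the observations."""
    # V[t][s] = (best probability of reaching s at time t, predecessor state)
    V = [{s: (start_p[s] * emit_p[s][obs[0]], None) for s in states}]
    for t in range(1, len(obs)):
        V.append({})
        for s in states:
            prob, prev = max(
                (V[t - 1][p][0] * trans_p[p][s] * emit_p[s][obs[t]], p)
                for p in states)
            V[t][s] = (prob, prev)
    # Backtrack from the best final state.
    last = max(states, key=lambda s: V[-1][s][0])
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.append(V[t][path[-1]][1])
    return list(reversed(path))

states = ("s1", "s2")
start_p = {"s1": 0.6, "s2": 0.4}
trans_p = {"s1": {"s1": 0.7, "s2": 0.3}, "s2": {"s1": 0.4, "s2": 0.6}}
emit_p = {"s1": {"a": 0.9, "b": 0.1}, "s2": {"a": 0.2, "b": 0.8}}
obs = ["a", "a", "b"]
print(viterbi(obs, states, start_p, trans_p, emit_p))  # -> ['s1', 's1', 's2']
```

In a speech decoder, the same dynamic-programming recursion runs over acoustic-model scores, with the pronunciation dictionary and language model constraining the allowed transitions.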
It should be noted that, in other embodiments, the electronic device 1 may also recognize the target speech in other manners, and the invention is not limited herein.
And S13, the electronic equipment 1 determines a planting task instruction matched with the target voice instruction as a target planting task instruction.
In at least one embodiment of the present invention, the electronic device 1 determining a planting task instruction matching the target voice instruction as the target planting task instruction includes:
the electronic device 1 judging whether the voiceprint identity data of the target voice instruction is the same as the voiceprint identity data configured for the target voice instruction in the instruction operation set, and if so, taking the matched planting task instruction as the target planting task instruction.
Specifically, the electronic device 1 judges whether the voiceprint identity data of the target voice instruction is the same as the voiceprint identity data configured for the target voice instruction in the instruction operation set; when they are the same, the electronic device 1 controls the planting operation of the planting device according to the target planting task instruction.
For example: when the electronic device 1 determines that the voiceprint identity data of the target voice instruction belongs to the user D, if the electronic device 1 simultaneously determines that the voiceprint identity data corresponding to the target voice instruction and configured in the instruction operation set also belongs to the user D, the electronic device 1 controls the planting operation of the planting device according to the target planting task instruction.
In this way, the electronic device 1 controls the planting operation of the planting device only when a designated person issues the target voice instruction, which ensures the safety of the planting operation.
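The voiceprint check described above can be sketched as comparing a voiceprint embedding extracted from the target voice against the embedding enrolled for that instruction, accepting only when they are sufficiently close. The embedding vectors and the 0.8 cosine-similarity threshold here are illustrative assumptions; a real system would obtain embeddings from a trained speaker-verification model.

```python
# Sketch of a voiceprint identity check via cosine similarity of embeddings.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def voiceprint_matches(probe, enrolled, threshold=0.8):
    """Accept the instruction only if the speaker appears to be enrolled."""
    return cosine_similarity(probe, enrolled) >= threshold

enrolled_d = [0.9, 0.1, 0.3]     # user D's configured voiceprint (toy data)
probe_same = [0.88, 0.12, 0.31]  # utterance from user D
probe_other = [0.1, 0.9, 0.2]    # utterance from someone else
print(voiceprint_matches(probe_same, enrolled_d))   # -> True
print(voiceprint_matches(probe_other, enrolled_d))  # -> False
```

Only when the check passes would the planting task instruction be forwarded to the planting device, matching the safety behavior described in the text.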
In at least one embodiment of the present invention, the electronic device 1 determining a planting task instruction matching the target voice instruction as the target planting task instruction further includes:
the electronic device 1 judging whether the crop corresponding to the target voice instruction is the same as the crop configured for the target voice instruction in the instruction operation set, and if so, taking the matched planting task instruction as the target planting task instruction.
Specifically, the electronic device 1 determines a crop corresponding to the target voice instruction by using a voice recognition technology, determines whether the crop corresponding to the target voice instruction is the same as the crop corresponding to the target voice instruction configured in the instruction operation set, and when the crop corresponding to the target voice instruction is the same as the crop corresponding to the target voice instruction configured in the instruction operation set, the electronic device 1 controls the planting operation of the planting device according to the target planting task instruction.
For example: when a target voice command input by a user at the electronic device 1 is 'fertilizer application tomato', the electronic device 1 can determine that a crop corresponding to the target voice command is a tomato by using a voice recognition technology, and if the electronic device 1 simultaneously determines that the crop corresponding to the target voice command and configured in the command operation set is also a tomato, the electronic device 1 controls the planting operation of the planting device according to the target planting task command.
Therefore, only when the crops are matched, the electronic equipment 1 can control the planting operation of the planting equipment, and the phenomenon of misoperation is avoided.
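The crop check above amounts to recovering the crop named in the recognized command text and comparing it with the crop configured for that instruction. The "task crop" phrase format and the crop vocabulary below are assumptions for illustration only.

```python
# Sketch of the crop-matching check on recognized command text.
KNOWN_CROPS = {"tomato", "cucumber", "lettuce"}

def crop_from_command(text: str):
    """Return the first known crop mentioned in the recognized text."""
    for word in text.lower().split():
        if word in KNOWN_CROPS:
            return word
    return None

def crop_matches(recognized_text: str, configured_crop: str) -> bool:
    """True when the spoken crop equals the crop configured in the set."""
    return crop_from_command(recognized_text) == configured_crop

print(crop_matches("fertilize tomato", "tomato"))    # -> True
print(crop_matches("fertilize tomato", "cucumber"))  # -> False
```

An identical comparison on the planting link ("seeding" vs. "fertilizing") covers the third check described below.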
In at least one embodiment of the present invention, the electronic device 1 determining a planting task instruction matching the target voice instruction as the target planting task instruction further includes:
the electronic device 1 judging whether the planting link corresponding to the target voice instruction is the same as the planting link configured for the target voice instruction in the instruction operation set, and if so, taking the matched planting task instruction as the target planting task instruction.
Specifically, the electronic device 1 determines, by using a voice recognition technology, a planting link corresponding to the target voice instruction, determines whether the planting link corresponding to the target voice instruction is the same as the planting link corresponding to the target voice instruction configured in the instruction operation set, and when the planting link corresponding to the target voice instruction is the same as the planting link corresponding to the target voice instruction configured in the instruction operation set, the electronic device 1 controls the planting operation of the planting device according to the target planting task instruction.
For example: when the electronic device 1 determines that the planting link corresponding to the target voice instruction is seeding, if the electronic device 1 simultaneously determines that the planting link corresponding to the target voice instruction configured in the instruction operation set is also seeding, the electronic device 1 controls the planting operation of the planting device according to the target planting task instruction.
Therefore, only when the planting links are matched, the electronic equipment 1 can control the planting operation of the planting equipment, and the phenomenon of misoperation can be avoided.
In at least one embodiment of the present invention, after the voiceprint identity data of the target voice instruction is matched, the electronic device 1 may further perform matching on the corresponding crop and/or the corresponding planting link, so as to ensure that the planting operation is executed more safely. It should be noted that, in other embodiments, the three matching processes may be implemented in combination according to actual needs, and the order, combination manner, and the like of the three matching processes in the combined implementation manner are not limited by the present invention. For example: the electronic device 1 may select all or part of the three matching processes, and perform the processes according to a configuration sequence, where the configuration sequence may be set by a relevant worker according to an actual situation.
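The combined matching just described (some or all of the voiceprint, crop, and planting-link checks, run in an operator-configured order) can be sketched as a small pipeline that stops at the first failing check. The check predicates here are placeholders standing in for the matchers discussed above; the data layout is assumed for illustration.

```python
# Sketch of the configurable matching sequence: run the selected checks in
# order; accept only if every check passes, else report which one failed.
def match_instruction(command, checks):
    """checks: ordered (name, predicate) pairs. Returns (ok, failed_check)."""
    for name, predicate in checks:
        if not predicate(command):
            return False, name   # the caller can raise an alarm naming it
    return True, None

command = {"voiceprint_ok": True, "crop_ok": True, "link_ok": False}
checks = [
    ("voiceprint", lambda c: c["voiceprint_ok"]),
    ("crop", lambda c: c["crop_ok"]),
    ("planting link", lambda c: c["link_ok"]),
]
ok, failed = match_instruction(command, checks)
print(ok, failed)  # -> False planting link
```

Because the checks are an ordered list, the operator can select all or only part of them and rearrange the sequence, as the text allows.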
In at least one embodiment of the invention, the electronic device 1 may also give an alarm, specifically:
when the electronic device 1 cannot determine a planting task instruction matching the target voice instruction as the target planting task instruction, the electronic device 1 gives an alarm.
For example: when the electronic device 1 determines that the planting link corresponding to the target voice instruction is seeding, if the electronic device 1 simultaneously determines that the planting link corresponding to the target voice instruction configured in the instruction operation set is fertilizing and the planting link corresponding to the target voice instruction is not matched with the fertilizing, the electronic device 1 gives an alarm.
Specifically, the alarm may be in the form of an alarm sound, a light flashing, or the like, and the present invention is not limited thereto.
Further, the electronic device 1 may prompt the user for the reason of the alarm, so that the user may issue a correct voice command again.
For example: the electronic device 1 may prompt the user by voice with "planting links do not match, please re-input the instruction, thank you" or the like.
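A minimal sketch of the alarm-and-prompt behaviour described above might look as follows (the function name and the `alert` callback are assumptions for the example; a real device could flash a light or play an alarm sound instead of emitting text):

```python
def check_planting_link(target_link, configured_link, alert):
    # On a mismatch, raise an alarm and tell the user why, so a
    # corrected voice instruction can be issued.
    if target_link != configured_link:
        alert("Planting links do not match, please re-input the "
              "instruction, thank you")
        return False
    return True
```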
And S14, the electronic device 1 controls the planting operation of the planting device according to the target planting task instruction.
In at least one embodiment of the invention, when the voiceprint identity data of the target voice instruction in the instruction operation set changes, the method further comprises:
when a request for changing the voiceprint identity data of the target voice instruction in the instruction operation set is received, the electronic device 1 acquires a voice instruction carrying the updated voiceprint identity data and updates the target voice instruction to that voice instruction.
Thus, when the designated operator corresponding to the target voice instruction changes, the electronic device 1 may acquire the voiceprint identity data of the new operator, replace the voiceprint identity data of the target voice instruction with it, and update the target voice instruction to the instruction input by the new operator, avoiding the situation that the instruction becomes unavailable due to a personnel change.
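The voiceprint-change handling described above can be illustrated with a small sketch (the dictionary layout of the instruction operation set is an assumption made for this example):

```python
def update_voiceprint(instruction_set, command, new_voiceprint_id, new_sample):
    # Replace both the stored voiceprint identity data and the stored
    # voice sample for the target command, so the instruction stays
    # usable after a personnel change.
    entry = instruction_set[command]
    entry["voiceprint_id"] = new_voiceprint_id
    entry["voice_sample"] = new_sample
    return entry
```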
In at least one embodiment of the invention, in order to configure the same target voice command for different crops, the method further comprises:
when an input instruction of a planting task instruction of a target crop is acquired, the electronic device 1 acquires voice instruction data matched with the planting task instruction from the instruction operation set, and the electronic device 1 configures the planting task instruction corresponding to the target crop based on the matched voice instruction data.
Therefore, when the same target voice instruction is to control the planting tasks of different crops, the electronic device 1 can directly acquire the voice instruction data matched with the planting task instruction from the instruction operation set, sparing the relevant operators the redundant workload of recording the same instruction repeatedly.
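A minimal sketch of reusing recorded voice instruction data for a new crop (again, the list-of-dictionaries layout is assumed purely for illustration):

```python
def configure_for_new_crop(instruction_set, task, new_crop):
    # Reuse voice instruction data already recorded for this task on
    # another crop, instead of asking the operator to record it again.
    for entry in instruction_set:
        if entry["task"] == task:
            new_entry = {**entry, "crop": new_crop}  # share the voice data
            instruction_set.append(new_entry)
            return new_entry
    return None  # no matching voice instruction data to reuse
```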
In summary, the present invention can configure the corresponding relationship between the voice command and the planting task command, and generate a command operation set; receiving a target voice; identifying the target voice to obtain a target voice instruction; determining a planting task instruction matched with the target voice instruction to serve as a target planting task instruction; and controlling the planting operation of the planting equipment according to the target planting task instruction. Therefore, the voice operation can be carried out on the planting task of the crops, the operation is convenient and quick, and better operation experience is brought to a user.
Fig. 3 is a functional block diagram of a voice control apparatus according to a preferred embodiment of the present invention. The voice control apparatus 11 includes a configuration unit 110, a receiving unit 111, a recognition unit 112, a determination unit 113, a control unit 114, an acquisition unit 115, an alarm unit 116, and an update unit 117. The module/unit referred to in the present invention refers to a series of computer program segments that can be executed by the processor 13 and that can perform a fixed function, and that are stored in the memory 12. In the present embodiment, the functions of the modules/units will be described in detail in the following embodiments.
The configuration unit 110 configures the corresponding relationship between the voice command and the planting task command, and generates a command operation set.
In at least one embodiment of the present invention, in order to facilitate the subsequent recognition of the received voice by using the instruction operation set to control the planting operation of the planting device 2 communicating with the electronic device 1, the configuration unit 110 configures a corresponding relationship between the voice instruction and the planting task instruction, and generates an instruction operation set.
Preferably, the voice instruction may be custom-set by the user. Specifically, the user may configure the voice command according to technical terms commonly used in the field, so that a relevant operator with basic knowledge of the field can grasp the voice command quickly, avoiding misoperation in which an operator inputs an incorrect voice command because the commands were set in an overly complicated way.
Of course, in other embodiments, the voice command may also include other setting manners, and the invention is not limited thereto.
Preferably, the planting task instructions include, but are not limited to: seeding, fertilizing, insecticide spraying, watering and the like.
In at least one embodiment of the present invention, the configuring unit 110 configures a corresponding relationship between the voice instruction and the planting task instruction, and the generating the instruction operation set includes:
the configuration unit 110 obtains at least one planting task instruction and configures a corresponding voice instruction for each, and the configuration unit 110 merges the configured planting task instructions and their corresponding voice instructions into the instruction operation set.
For example: the configuration unit 110 configures planting task instructions such as tomato fertilization corresponding to a voice instruction fertilization A, the configuration unit 110 configures planting task instructions such as cucumber insecticide spraying corresponding to a voice instruction insecticide spraying B, the configuration unit 110 configures planting task instructions such as lettuce sowing corresponding to a voice instruction sowing C and the like, and the configuration unit 110 merges the corresponding relations into the instruction operation set.
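The configuration step above amounts to merging voice-instruction/planting-task pairs into one lookup structure. A minimal sketch, with the instruction operation set assumed to be a simple mapping for this example:

```python
def build_instruction_set(pairs):
    # Merge each (voice instruction, planting task instruction) pair
    # into one lookup table: the instruction operation set.
    return dict(pairs)

instruction_set = build_instruction_set([
    ("fertilization A", "tomato fertilizing"),
    ("insecticide spraying B", "cucumber insecticide spraying"),
    ("sowing C", "lettuce sowing"),
])
```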
The receiving unit 111 receives the target voice.
In at least one embodiment of the present invention, the receiving unit 111 receives a target voice so as to recognize a target voice instruction from the target voice.
Specifically, the present invention does not limit the manner in which the receiving unit 111 receives the target voice. For example: the receiving unit 111 may receive the target voice or the like through a voice collecting device communicating with the electronic apparatus 1.
The recognition unit 112 recognizes the target voice to obtain a target voice instruction.
In at least one embodiment of the present invention, the recognition unit 112 recognizing the target speech to obtain the target speech instruction involves two stages: a model training stage and a recognition stage.
Specifically, the model training phase comprises two parts: training of an acoustic model and training of a language model. The acoustic model mainly adopts context-dependent triphones as the modeling unit; a speech corpus is established by collecting a large number of speech samples, and the hidden Markov model is trained with algorithms such as Baum-Welch to obtain a stable acoustic model. Before the acoustic model is trained, the recognition unit 112 also needs to preprocess the sound signal and extract stable acoustic features; specifically, the recognition unit 112 may use Mel-Frequency Cepstral Coefficient (MFCC) or Perceptual Linear Prediction (PLP) features for feature extraction. The training of the language model mainly processes text information: the recognition unit 112 first extracts a large amount of text from the specific application scene with a text extraction tool, then establishes a corpus, performs semantic analysis on the corpus, and infers the grammatical structure in the corpus to form a series of grammar rules, i.e., trains the language model.
Further, the recognition stage is a process of decoding the voice signal, and the recognition unit 112 may be based on a decoding process of a hidden markov model and adopt a Viterbi Algorithm (Viterbi Algorithm). First, the recognition unit 112 performs feature extraction on the input speech signal at the front end, and performs acoustic comparison between the obtained feature vector and the acoustic model. Then, the recognition unit 112 selects a candidate word with the closest probability distribution from the pronunciation dictionary, and further performs constraint by using the language model to obtain a final recognition result.
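The Viterbi decoding mentioned above can be shown on a toy hidden Markov model. This is a textbook illustration of the algorithm itself, not the patent's acoustic or language model; all state names and probabilities are illustrative:

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    # V[t][s] = (probability of the best path ending in state s at
    # time t, predecessor state on that path)
    V = [{s: (start_p[s] * emit_p[s][obs[0]], None) for s in states}]
    for t in range(1, len(obs)):
        V.append({})
        for s in states:
            prob, prev = max(
                (V[t - 1][p][0] * trans_p[p][s] * emit_p[s][obs[t]], p)
                for p in states)
            V[t][s] = (prob, prev)
    # Backtrack from the most probable final state.
    state = max(V[-1], key=lambda s: V[-1][s][0])
    path = [state]
    for t in range(len(obs) - 1, 0, -1):
        state = V[t][state][1]
        path.append(state)
    return path[::-1]

# Toy HMM (illustrative numbers only).
states = ("Healthy", "Fever")
observations = ("normal", "cold", "dizzy")
start_p = {"Healthy": 0.6, "Fever": 0.4}
trans_p = {"Healthy": {"Healthy": 0.7, "Fever": 0.3},
           "Fever":   {"Healthy": 0.4, "Fever": 0.6}}
emit_p = {"Healthy": {"normal": 0.5, "cold": 0.4, "dizzy": 0.1},
          "Fever":   {"normal": 0.1, "cold": 0.3, "dizzy": 0.6}}

path = viterbi(observations, states, start_p, trans_p, emit_p)
# path → ['Healthy', 'Healthy', 'Fever']
```

In a real recognizer the hidden states would be triphone HMM states, the observations MFCC or PLP feature vectors, and the transition constraints would come from the pronunciation dictionary and language model.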
It should be noted that, in other embodiments, the recognition unit 112 may also recognize the target speech in other manners, and the invention is not limited herein.
The determination unit 113 determines a planting task instruction matching the target voice instruction as a target planting task instruction.
In at least one embodiment of the present invention, the determining unit 113 determining a planting task instruction matching the target voice instruction as the target planting task instruction includes:
the determining unit 113 determines whether the voiceprint identity data of the target voice instruction is the same as the voiceprint identity data corresponding to the target voice instruction configured in the instruction operation set, and if so, takes the matched planting task instruction as the target planting task instruction.
Specifically, the determining unit 113 determines whether the voiceprint identity data of the target voice instruction is the same as the voiceprint identity data corresponding to the target voice instruction and configured in the instruction operation set, and when the voiceprint identity data of the target voice instruction is the same as the voiceprint identity data corresponding to the target voice instruction and configured in the instruction operation set, the control unit 114 controls the planting operation of the planting device according to the target planting task instruction.
For example: when the determining unit 113 determines that the voiceprint identity data of the target voice instruction belongs to the user D, if the determining unit 113 determines that the voiceprint identity data corresponding to the target voice instruction and configured in the instruction operation set also belongs to the user D, the control unit 114 controls the planting operation of the planting device according to the target planting task instruction.
In this way, the control unit 114 controls the planting operation of the planting device only when the designated person issues the target voice command, ensuring the safety of the planting operation.
In at least one embodiment of the present invention, the determining unit 113 determining a planting task instruction matching the target voice instruction as the target planting task instruction includes:
the determining unit 113 determines whether the crop corresponding to the target voice instruction is the same as the crop corresponding to the target voice instruction configured in the instruction operation set, and if so, takes the matched planting task instruction as the target planting task instruction.
Specifically, the determining unit 113 determines the crop corresponding to the target voice instruction by using a voice recognition technology and then determines whether that crop is the same as the crop corresponding to the target voice instruction configured in the instruction operation set; when the two are the same, the control unit 114 controls the planting operation of the planting device according to the target planting task instruction.
For example: when the target voice command input by the user at the electronic device 1 is "fertilizer application tomato", the determining unit 113 may determine that the crop corresponding to the target voice command is a tomato by using a voice recognition technology, and if the determining unit 113 determines that the crop corresponding to the target voice command and configured in the command operation set is also a tomato at the same time, the control unit 114 controls the planting operation of the planting device according to the target planting task command.
In this way, the control unit 114 controls the planting operation of the planting device only when the crops match, avoiding misoperation.
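The crop-identification step above can be sketched as a simple scan of the recognised text (a stand-in for the semantic analysis a real system would perform; the crop list and function name are illustrative assumptions):

```python
KNOWN_CROPS = ("tomato", "cucumber", "lettuce")  # illustrative list

def extract_crop(recognised_text):
    # A real recognizer would rely on the language model's semantic
    # analysis; a substring scan over known crops stands in for it here.
    text = recognised_text.lower()
    for crop in KNOWN_CROPS:
        if crop in text:
            return crop
    return None  # no known crop mentioned
```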
In at least one embodiment of the present invention, the determining unit 113 determining a planting task instruction matching the target voice instruction as the target planting task instruction includes:
the determining unit 113 determines whether the planting link corresponding to the target voice instruction is the same as the planting link corresponding to the target voice instruction configured in the instruction operation set, and if so, takes the matched planting task instruction as the target planting task instruction.
Specifically, the determining unit 113 determines, by using a voice recognition technology, a planting link corresponding to the target voice instruction, the determining unit 113 determines whether the planting link corresponding to the target voice instruction is the same as a planting link corresponding to the target voice instruction configured in the instruction operation set, and when the planting link corresponding to the target voice instruction is the same as the planting link corresponding to the target voice instruction configured in the instruction operation set, the control unit 114 controls the planting operation of the planting device according to the target planting task instruction.
For example: when the determining unit 113 determines that the planting link corresponding to the target voice instruction is seeding, if the determining unit 113 determines that the planting link corresponding to the target voice instruction configured in the instruction operation set is also seeding, the control unit 114 controls the planting operation of the planting device according to the target planting task instruction.
In this way, the control unit 114 controls the planting operation of the planting device only when the planting links match, avoiding misoperation.
In at least one embodiment of the present invention, after the voiceprint identity data of the target voice instruction is matched, the electronic device 1 may further perform matching on the corresponding crop and/or the corresponding planting link, so as to ensure that the planting operation is executed more safely. It should be noted that, in other embodiments, the three matching processes may be implemented in combination according to actual needs, and the order, combination manner, and the like of the three matching processes in the combined implementation manner are not limited by the present invention. For example: the electronic device 1 may select all or part of the three matching processes, and perform the processes according to a configuration sequence, where the configuration sequence may be set by a relevant worker according to an actual situation.
In at least one embodiment of the present invention, the alarm unit 116 may also issue an alarm when matching fails, specifically:
when the determination unit 113 cannot determine a planting task instruction matching the target voice instruction as a target planting task instruction, the alarm unit 116 issues an alarm.
For example: when the determining unit 113 determines that the planting link corresponding to the target voice instruction is seeding, but the planting link configured for the target voice instruction in the instruction operation set is fertilizing, the two links do not match and the alarm unit 116 gives an alarm.
Specifically, the alarm may be in the form of an alarm sound, a light flashing, or the like, and the present invention is not limited thereto.
Further, the alarm unit 116 may prompt the user for the reason of the alarm, so that the user can issue the correct voice command again.
For example: the alarm unit 116 may prompt the user by voice with "planting links do not match, please re-input the instruction, thank you" or the like.
The control unit 114 controls the planting operation of the planting device according to the target planting task instruction.
In at least one embodiment of the invention, when the voiceprint identity data of the target voice instruction in the instruction operation set changes, the method further comprises:
when the obtaining unit 115 receives a request for changing the voiceprint identity data of the target voice instruction in the instruction operation set, the obtaining unit 115 acquires a voice instruction carrying the updated voiceprint identity data, and the updating unit 117 updates the target voice instruction to that voice instruction.
In this way, when the designated operator corresponding to the target voice instruction changes, the obtaining unit 115 may acquire the voiceprint identity data of the new operator, and the updating unit 117 replaces the voiceprint identity data of the target voice instruction with it and updates the target voice instruction to the instruction input by the new operator, avoiding the situation that the instruction becomes unavailable due to a personnel change.
In at least one embodiment of the invention, in order to configure the same target voice command for different crops, the method further comprises:
when the obtaining unit 115 obtains an input instruction of a planting task instruction for a target crop, the obtaining unit 115 obtains voice instruction data matched with the planting task instruction from the instruction operation set, and the configuration unit 110 configures the planting task instruction corresponding to the target crop based on the matched voice instruction data.
Thus, when the same target voice instruction is to control the planting tasks of different crops, the obtaining unit 115 can directly obtain the voice instruction data matched with the planting task instruction from the instruction operation set, sparing the relevant operators the redundant workload of recording the same instruction repeatedly.
In summary, the present invention can configure the corresponding relationship between the voice command and the planting task command, and generate a command operation set; receiving a target voice; identifying the target voice to obtain a target voice instruction; determining a planting task instruction matched with the target voice instruction to serve as a target planting task instruction; and controlling the planting operation of the planting equipment according to the target planting task instruction. Therefore, the voice operation can be carried out on the planting task of the crops, the operation is convenient and quick, and better operation experience is brought to a user.
Fig. 4 is a schematic structural diagram of an electronic device implementing a voice control method according to a preferred embodiment of the present invention.
The electronic device 1 is a device capable of automatically performing numerical calculation and/or information processing according to a preset or stored instruction, and its hardware includes, but is not limited to, a microprocessor, an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA), a Digital Signal Processor (DSP), an embedded device, and the like.
The electronic device 1 may also be, but not limited to, any electronic product that can perform human-computer interaction with a user through a keyboard, a mouse, a remote controller, a touch panel, or a voice control device, for example, a Personal computer, a tablet computer, a smart phone, a Personal Digital Assistant (PDA), a game machine, an Internet Protocol Television (IPTV), an intelligent wearable device, and the like.
The electronic device 1 may also be a desktop computer, a notebook, a palm computer, a cloud server, or other computing devices.
The Network where the electronic device 1 is located includes, but is not limited to, the internet, a wide area Network, a metropolitan area Network, a local area Network, a Virtual Private Network (VPN), and the like.
In one embodiment of the present invention, the electronic device 1 includes, but is not limited to, a memory 12, a processor 13, and a computer program, such as a voice control program, stored in the memory 12 and executable on the processor 13.
It will be appreciated by a person skilled in the art that the schematic diagram is only an example of the electronic device 1 and does not constitute a limitation of the electronic device 1; it may comprise more or fewer components than shown, some components may be combined, or different components may be used. For example, the electronic device 1 may further comprise an input/output device, a network access device, a bus, etc.
The processor 13 may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, discrete hardware components, etc. The processor 13 is the operation core and control center of the electronic device 1; it is connected to each part of the whole electronic device 1 by various interfaces and lines, and executes the operating system of the electronic device 1 and the various installed application programs, program code, and the like.
The processor 13 executes an operating system of the electronic device 1 and various installed application programs. The processor 13 executes the application program to implement the steps in the various voice control method embodiments described above, such as steps S10, S11, S12, S13, S14 shown in fig. 1.
Alternatively, the processor 13, when executing the computer program, implements the functions of the modules/units in the above device embodiments, for example: configuring a corresponding relation between a voice instruction and a planting task instruction, and generating an instruction operation set; receiving a target voice; identifying the target voice to obtain a target voice instruction; determining a planting task instruction matched with the target voice instruction to serve as a target planting task instruction; and controlling the planting operation of the planting equipment according to the target planting task instruction.
Illustratively, the computer program may be divided into one or more modules/units, which are stored in the memory 12 and executed by the processor 13 to accomplish the present invention. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution process of the computer program in the electronic device 1. For example, the computer program may be divided into a configuration unit 110, a receiving unit 111, a recognition unit 112, a determination unit 113, a control unit 114, an acquisition unit 115, an alarm unit 116, and an update unit 117.
The memory 12 can be used for storing the computer programs and/or modules, and the processor 13 implements various functions of the electronic device 1 by running or executing the computer programs and/or modules stored in the memory 12 and calling data stored in the memory 12. The memory 12 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the data storage area may store data created according to use of the device (such as audio data, a phonebook, etc.), and the like. Further, the memory 12 may include high-speed random access memory and may also include non-volatile memory, such as a hard disk, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, a flash memory card (Flash Card), at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
The memory 12 may be an external memory and/or an internal memory of the electronic device 1. Further, the memory 12 may be a circuit having a memory function without any physical form in the integrated circuit, such as a RAM (Random-Access Memory), a FIFO (First In First Out), and the like. Alternatively, the memory 12 may be a memory in a physical form, such as a memory stick, a TF card (Trans-flash Card), or the like.
The integrated modules/units of the electronic device 1 may be stored in a computer-readable storage medium if they are implemented in the form of software functional units and sold or used as separate products. Based on such understanding, all or part of the flow of the method according to the embodiments of the present invention may also be implemented by a computer program, which may be stored in a computer-readable storage medium, and when the computer program is executed by a processor, the steps of the method embodiments may be implemented.
Wherein the computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, etc. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content of the computer-readable medium may be appropriately increased or decreased as required by legislation and patent practice in the jurisdiction; for example, in some jurisdictions, computer-readable media do not include electrical carrier signals and telecommunications signals in accordance with legislation and patent practice.
Referring to fig. 2, the memory 12 of the electronic device 1 stores a plurality of instructions to implement a voice control method, and the processor 13 can execute the plurality of instructions to implement: configuring a corresponding relation between a voice instruction and a planting task instruction, and generating an instruction operation set; receiving a target voice; identifying the target voice to obtain a target voice instruction; determining a planting task instruction matched with the target voice instruction to serve as a target planting task instruction; and controlling the planting operation of the planting equipment according to the target planting task instruction.
According to a preferred embodiment of the present invention, the processor 13 further executes a plurality of instructions including:
judging whether the voiceprint identity data of the target voice instruction is the same as the voiceprint identity data which is configured in the instruction operation set and corresponds to the target voice instruction;
and if they are the same, taking the matched planting task instruction as the target planting task instruction.
According to a preferred embodiment of the present invention, the processor 13 further executes a plurality of instructions including:
judging whether the crops corresponding to the target voice instructions are the same as the crops corresponding to the target voice instructions and configured in the instruction operation set;
and if they are the same, taking the matched planting task instruction as the target planting task instruction.
According to a preferred embodiment of the present invention, the processor 13 further executes a plurality of instructions including:
judging whether a planting link corresponding to the target voice instruction is the same as a planting link corresponding to the target voice instruction and configured in the instruction operation set;
and if they are the same, taking the matched planting task instruction as the target planting task instruction.
According to a preferred embodiment of the present invention, the processor 13 further executes a plurality of instructions including:
and when the planting task instruction matched with the target voice instruction cannot be determined to serve as the target planting task instruction, giving an alarm.
According to a preferred embodiment of the present invention, the processor 13 further executes a plurality of instructions including:
when a request for changing the voiceprint identity data of the target voice command in the command operation set is received, acquiring a voice command of the updated voiceprint identity data;
and updating the target voice instruction into the voice instruction of the updated voiceprint identity data.
According to a preferred embodiment of the present invention, the processor 13 further executes a plurality of instructions including:
when an input instruction of a planting task instruction of a target crop is acquired, acquiring voice instruction data matched with the planting task instruction from the instruction operation set;
and configuring a planting task instruction corresponding to the target crop based on the matched voice instruction data.
Specifically, the processor 13 may refer to the description of the relevant steps in the embodiment corresponding to fig. 2, and details thereof are not repeated herein.
In the embodiments provided in the present invention, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is only one logical functional division, and other divisions may be realized in practice.
The modules described as separate parts may or may not be physically separate, and parts displayed as modules may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment.
In addition, functional modules in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional module.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof.
The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference signs in the claims shall not be construed as limiting the claim concerned.
Furthermore, it is obvious that the word "comprising" does not exclude other elements or steps, and the singular does not exclude the plural. A plurality of units or means recited in the system claims may also be implemented by one unit or means in software or hardware. Terms such as first and second are used to denote names and do not denote any particular order.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present invention and not to limit them. Although the present invention has been described in detail with reference to the preferred embodiments, those skilled in the art should understand that modifications or equivalent substitutions may be made to the technical solutions of the present invention without departing from the spirit and scope thereof.

Claims (9)

1. A method for voice control, the method comprising:
configuring a corresponding relation between a voice instruction and a planting task instruction, and generating an instruction operation set;
receiving a target voice;
identifying the target voice to obtain a target voice instruction;
determining a planting task instruction matched with the target voice instruction to serve as a target planting task instruction, which comprises: judging whether the planting link corresponding to the target voice instruction is the same as the planting link configured in the instruction operation set for the target voice instruction; and if the two are the same, taking the matched planting task instruction as the target planting task instruction;
and controlling the planting operation of the planting equipment according to the target planting task instruction.
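The flow of claim 1 can be illustrated with a short sketch. This is a minimal illustration, not the patented implementation: the names `InstructionEntry`, `match_task`, and the example commands and planting links are assumptions introduced here.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class InstructionEntry:
    voice_command: str     # recognized voice instruction text
    planting_link: str     # planting link configured for this instruction
    task_instruction: str  # planting task instruction sent to the equipment

# Step 1 of the claim: configure the correspondence between voice
# instructions and planting task instructions (the instruction operation set).
instruction_set = [
    InstructionEntry("start watering", "irrigation", "IRRIGATE_ON"),
    InstructionEntry("stop watering", "irrigation", "IRRIGATE_OFF"),
]

def match_task(voice_command: str, current_link: str) -> Optional[str]:
    """Matching step of the claim: return the target planting task
    instruction, or None when no configured entry matches.

    The match succeeds only when the planting link corresponding to the
    recognized instruction equals the planting link configured in the set.
    """
    for entry in instruction_set:
        if (entry.voice_command == voice_command
                and entry.planting_link == current_link):
            return entry.task_instruction
    return None
```

A caller would then pass the returned task instruction on to the planting equipment, as in the final step of the claim.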
2. The voice control method according to claim 1, wherein the determining a planting task instruction matched with the target voice instruction to serve as a target planting task instruction comprises:
judging whether the voiceprint identity data of the target voice instruction is the same as the voiceprint identity data configured in the instruction operation set for the target voice instruction;
and if the two are the same, taking the matched planting task instruction as the target planting task instruction.
3. The voice control method according to claim 1, wherein the determining a planting task instruction matched with the target voice instruction to serve as a target planting task instruction further comprises:
judging whether the crop corresponding to the target voice instruction is the same as the crop configured in the instruction operation set for the target voice instruction;
and if the two are the same, taking the matched planting task instruction as the target planting task instruction.
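Claims 2 and 3 add two further equality checks to the matching step. A hypothetical sketch of the extended match (field names and example values are assumptions for illustration):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class InstructionEntry:
    voice_command: str     # recognized voice instruction text
    voiceprint_id: str     # voiceprint identity data configured for it
    crop: str              # crop configured for it
    task_instruction: str  # planting task instruction to issue

def match_task(entries, voice_command: str,
               voiceprint_id: str, crop: str) -> Optional[str]:
    """Match only when the voiceprint identity data (claim 2) and the
    crop (claim 3) both agree with what is configured in the set."""
    for e in entries:
        if (e.voice_command == voice_command
                and e.voiceprint_id == voiceprint_id
                and e.crop == crop):
            return e.task_instruction
    # No match: per claim 4, the caller may give an alarm here.
    return None

entries = [
    InstructionEntry("start watering", "user-A", "tomato", "IRRIGATE_ON"),
]
```

With this shape, a command spoken by an unconfigured voiceprint, or issued for the wrong crop, simply fails to match.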
4. The voice control method according to any one of claims 1 to 3, characterized in that the method further comprises:
and giving an alarm when no planting task instruction matched with the target voice instruction can be determined as the target planting task instruction.
5. The voice control method of claim 1, wherein the method further comprises:
when a request for changing the voiceprint identity data of a target voice instruction in the instruction operation set is received, acquiring a voice instruction with updated voiceprint identity data;
and replacing the target voice instruction in the instruction operation set with the voice instruction with the updated voiceprint identity data.
6. The voice control method of claim 1, wherein the method further comprises:
when an input instruction of a planting task instruction of a target crop is acquired, acquiring voice instruction data matched with the planting task instruction from the instruction operation set;
and configuring a planting task instruction corresponding to the target crop based on the matched voice instruction data.
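The maintenance operations of claims 5 and 6 can be sketched over a dictionary-keyed instruction operation set. The structure and all names (`replace_voiceprint`, `configure_for_crop`, the example entries) are illustrative assumptions, not part of the claims:

```python
# Instruction operation set keyed by the recognized command text.
instruction_set = {
    "start watering": {"voiceprint_id": "user-A", "task": "IRRIGATE_ON"},
}

def replace_voiceprint(command: str, new_voiceprint_id: str) -> None:
    """Claim 5: on a change request, replace the stored voice instruction
    with one carrying the updated voiceprint identity data."""
    instruction_set[command]["voiceprint_id"] = new_voiceprint_id

def configure_for_crop(crop: str, command: str) -> dict:
    """Claim 6: configure a planting task instruction for a target crop
    based on the matched voice instruction data in the set."""
    data = instruction_set[command]
    return {"crop": crop, "task": data["task"]}
```

Keying the set by command text keeps both operations O(1) lookups; a real system would also have to handle the command being absent from the set.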
7. A voice control apparatus, characterized in that the apparatus comprises:
the configuration unit is used for configuring the corresponding relation between the voice command and the planting task command and generating a command operation set;
a receiving unit for receiving a target voice;
the recognition unit is used for recognizing the target voice to obtain a target voice instruction;
the determining unit is used for determining a planting task instruction matched with the target voice instruction to serve as a target planting task instruction, which comprises: judging whether the planting link corresponding to the target voice instruction is the same as the planting link configured in the instruction operation set for the target voice instruction; and if the two are the same, taking the matched planting task instruction as the target planting task instruction;
and the control unit is used for controlling the planting operation of the planting equipment according to the target planting task instruction.
8. An electronic device, characterized in that the electronic device comprises:
a processor; and
a memory, wherein instructions stored in the memory are executable by the processor to implement the voice control method of any one of claims 1 to 6.
9. A computer-readable storage medium, wherein instructions stored in the computer-readable storage medium are executable by a processor in an electronic device to implement the voice control method of any one of claims 1 to 6.
CN201810085318.2A 2018-01-29 2018-01-29 Voice control method and device, electronic equipment and computer readable storage medium Active CN108305625B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201810085318.2A CN108305625B (en) 2018-01-29 2018-01-29 Voice control method and device, electronic equipment and computer readable storage medium
PCT/CN2018/088459 WO2019144543A1 (en) 2018-01-29 2018-05-25 Voice control method and apparatus, electronic device, and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810085318.2A CN108305625B (en) 2018-01-29 2018-01-29 Voice control method and device, electronic equipment and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN108305625A CN108305625A (en) 2018-07-20
CN108305625B true CN108305625B (en) 2020-12-18

Family

ID=62866995

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810085318.2A Active CN108305625B (en) 2018-01-29 2018-01-29 Voice control method and device, electronic equipment and computer readable storage medium

Country Status (2)

Country Link
CN (1) CN108305625B (en)
WO (1) WO2019144543A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112309369A (en) * 2020-09-29 2021-02-02 天津工程机械研究院有限公司 Rice transplanter unmanned system and method based on voice recognition

Citations (7)

Publication number Priority date Publication date Assignee Title
WO2010018453A3 (en) * 2008-08-15 2011-04-14 University Of Cape Town System and method for processing electronically generated text
CN105702255A (en) * 2016-03-28 2016-06-22 华智水稻生物技术有限公司 Agricultural data acquisition method, agricultural data acquisition device and mobile terminal
JP2016116525A (en) * 2016-02-09 2016-06-30 小橋工業株式会社 Remote control holder
CN106857199A (en) * 2017-03-01 2017-06-20 深圳春沐源农业科技有限公司 A kind of wireless remote irrigation method and system
CN107193391A (en) * 2017-04-25 2017-09-22 北京百度网讯科技有限公司 The method and apparatus that a kind of upper screen shows text message
CN206620072U (en) * 2017-04-24 2017-11-07 南京师范大学 A kind of seeding and control system
CN107464555A (en) * 2016-06-03 2017-12-12 索尼移动通讯有限公司 Background sound is added to the voice data comprising voice

Family Cites Families (12)

Publication number Priority date Publication date Assignee Title
JPS6368438A (en) * 1986-09-08 1988-03-28 Iseki & Co Ltd Voice integral type processor for farm working machinery or the like
JP2647063B2 (en) * 1995-09-25 1997-08-27 井関農機株式会社 Voice recognition device
DE102004032642A1 (en) * 2004-07-06 2006-02-16 Rabe Agrarsysteme Gmbh & Co. Kg Agricultural machine is controlled by a driver issuing spoken commands interpreted by a controller with speech recognition
CN101477799B (en) * 2009-01-19 2011-02-09 北京农业信息技术研究中心 System for controlling agricultural apparatus by voice and control method
CN102332265B (en) * 2011-06-20 2014-04-16 浙江吉利汽车研究院有限公司 Method for improving voice recognition rate of automobile voice control system
CN103345709A (en) * 2013-07-01 2013-10-09 南通农业职业技术学院 Agricultural informationization service system and information transferring method thereof
DE102015102881A1 (en) * 2015-02-27 2016-09-01 Claas Saulgau Gmbh Control system for an agricultural implement
EP3111738B1 (en) * 2015-07-01 2018-09-19 Kverneland Group Mechatronics BV Method for controlling operation of an agricultural machine and system
CN106601250A (en) * 2015-11-10 2017-04-26 刘芨可 Speech control method and device and equipment
CN106356054A (en) * 2016-11-23 2017-01-25 广西大学 Method and system for collecting information of agricultural products based on voice recognition
CN106683673B (en) * 2016-12-30 2020-11-13 智车优行科技(北京)有限公司 Method, device and system for adjusting driving mode and vehicle
CN107145549B (en) * 2017-04-27 2020-01-14 深圳智高点知识产权运营有限公司 Database cache control method and system


Also Published As

Publication number Publication date
CN108305625A (en) 2018-07-20
WO2019144543A1 (en) 2019-08-01

Similar Documents

Publication Publication Date Title
CN106875941B (en) Voice semantic recognition method of service robot
US20230237328A1 (en) Information processing method and terminal, and computer storage medium
US20190164549A1 (en) Method and apparatus for controlling page
US10134390B2 (en) Electronic device and voice recognition method thereof
CN109885810A (en) Nan-machine interrogation's method, apparatus, equipment and storage medium based on semanteme parsing
CN114627863B (en) Speech recognition method and device based on artificial intelligence
CN109767763B (en) Method and device for determining user-defined awakening words
CN108632653B (en) Voice control method, smart television and computer readable storage medium
CN101470732A (en) Auxiliary word stock generation method and apparatus
CN110765759A (en) Intention identification method and device
CN104866275B (en) Method and device for acquiring image information
CN110110095A (en) A kind of power command text matching technique based on shot and long term memory Recognition with Recurrent Neural Network
CN107463356A (en) The execution method and apparatus of flow of task
CN111310440A (en) Text error correction method, device and system
CN107564522A (en) A kind of intelligent control method and device
CN108257593A (en) A kind of audio recognition method, device, electronic equipment and storage medium
WO2020233381A1 (en) Speech recognition-based service request method and apparatus, and computer device
CN108305625B (en) Voice control method and device, electronic equipment and computer readable storage medium
CN109859747A (en) Voice interactive method, equipment and storage medium
CN112509566A (en) Voice recognition method, device, equipment, storage medium and program product
CN111554270B (en) Training sample screening method and electronic equipment
CN111627445B (en) Matching method and system for sites or personnel
CN111128130B (en) Voice data processing method and device and electronic device
CN111680514A (en) Information processing and model training method, device, equipment and storage medium
CN110263346B (en) Semantic analysis method based on small sample learning, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20220429

Address after: 517000 room 317-1, enterprise service center building, No. 8, Longling Third Road, Longling Industrial Park, Yuancheng District, Heyuan City, Guangdong Province

Patentee after: HEYUAN HONGJIA AGRICULTURAL TECHNOLOGY CO.,LTD.

Address before: 518052 Room 201, building A, 1 front Bay Road, Shenzhen Qianhai cooperation zone, Shenzhen, Guangdong

Patentee before: SHENZHEN SPRINGWOODS HOLDING Co.,Ltd.