CN111007846A - Unmanned ship control method and device, terminal equipment and storage medium - Google Patents

Info

Publication number
CN111007846A
CN111007846A
Authority
CN
China
Prior art keywords
control
information
voice
unmanned ship
data packet
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911082052.7A
Other languages
Chinese (zh)
Inventor
赵继成
张伟斌
侯俊兆
罗朋飞
秦梓荷
Current Assignee
Zhuhai Yunzhou Intelligence Technology Ltd
Original Assignee
Zhuhai Yunzhou Intelligence Technology Ltd
Priority date
Filing date
Publication date
Application filed by Zhuhai Yunzhou Intelligence Technology Ltd filed Critical Zhuhai Yunzhou Intelligence Technology Ltd
Priority to CN201911082052.7A priority Critical patent/CN111007846A/en
Publication of CN111007846A publication Critical patent/CN111007846A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/0206 Control of position or course in two dimensions specially adapted to water vehicles

Abstract

The invention is applicable to the technical field of unmanned ship control, and provides an unmanned ship control method, an unmanned ship control device, a terminal device, and a storage medium. The method comprises the following steps: acquiring voice information uttered by a control person; determining that the voice information is control information; obtaining a recognition result according to the control information and a preset voice database; and generating a control instruction according to the recognition result and controlling the unmanned ship to complete the operation corresponding to the control instruction. By acquiring and recognizing the voice uttered by the control person and obtaining a control instruction that directs the unmanned ship to complete the designated operation, the control instruction is recognized accurately, the accuracy and efficiency of unmanned ship control are improved, the safety of the unmanned ship is improved, and considerable manpower and material resources are saved.

Description

Unmanned ship control method and device, terminal equipment and storage medium
Technical Field
The invention belongs to the technical field of unmanned ship control, and particularly relates to an unmanned ship control method, an unmanned ship control device, terminal equipment and a storage medium.
Background
With the rapid development of unmanned boats, their applications have become increasingly wide. At present, a voice device is arranged on the hull of the unmanned boat, and information interaction and remote control of the boat's normal navigation are realized through voice control. However, the voice device arranged on the hull is affected by severe sea conditions and by electromagnetic interference signals emitted by vessels in the operating environment, so the unmanned boat's voice control commands may be disturbed or wrong commands may be received, creating potential safety hazards for its safe navigation.
Disclosure of Invention
In view of this, embodiments of the present invention provide an unmanned ship control method and device, a terminal device, and a storage medium, to solve the prior-art problem that the voice device arranged on the unmanned ship is disturbed by the external environment, electromagnetic signals, and other factors, so that voice control commands are interfered with or wrong commands are received, posing a potential safety hazard during navigation of the unmanned ship.
A first aspect of an embodiment of the present invention provides an unmanned ship control method, including:
acquiring voice information sent by a control person;
determining the voice information as control information;
obtaining a recognition result according to the control information and a preset voice database;
and generating a control instruction according to the identification result, and controlling the unmanned ship to finish the operation corresponding to the control instruction.
Optionally, determining that the voice information is control information includes:
analyzing the voice information, and judging whether the voice information contains a preset control confirmer and control content;
and if the voice information contains the preset control confirmer and the control content, judging that the voice information is the control information.
Optionally, a control instruction is generated according to the recognition result, and the unmanned ship is controlled to complete an operation corresponding to the control instruction, including:
acquiring a preset control confirmer and control content in the identification result;
and generating a control instruction according to the control content, and controlling the unmanned ship to finish the operation corresponding to the control instruction.
Optionally, after generating the control instruction according to the recognition result, the method further includes:
broadcasting the control instruction;
and if no other control instruction is received or control instruction confirmation information sent by the base station is received after a preset time period, controlling the unmanned ship to complete corresponding operation according to the control instruction.
Optionally, before acquiring the voice information sent by the control personnel, the method further includes:
acquiring the identity identification information of the preset control personnel; wherein the identification information comprises at least one of the following information: fingerprint information, palm print information, iris information and face image information of a control person;
and binding the identity identification information of the control personnel with the corresponding voice data packet.
Optionally, obtaining a recognition result according to the control information and a preset voice database includes:
if the voice information is control information, matching with a voice data packet in a voice database according to the voice information;
if the matched voice data packet exists, acquiring the identity identification information of the control personnel corresponding to the matched voice data packet;
acquiring identity identification information;
and judging whether the identity identification information matches the identity identification information of the control person corresponding to the matched voice data packet;
if the identity identification information matches the identity identification information of the control person corresponding to the matched voice data packet, activating the matched voice data packet;
and identifying the voice information according to the voice data packet to obtain an identification result.
Optionally, before acquiring the voice information sent by the control personnel, the method includes:
acquiring a preset voice control instruction input by a control person;
and correspondingly generating a voice data packet according to the voice control instruction, and storing the voice data packet to the voice database.
A second aspect of an embodiment of the present invention provides an unmanned ship control device, including:
the acquisition module is used for acquiring voice information sent by a control person;
the judging module is used for determining the voice information as control information;
the recognition module is used for obtaining a recognition result according to the control information and a preset voice database;
and the control module is used for generating a control command according to the identification result and controlling the unmanned ship to complete the operation corresponding to the control command.
A third aspect of an embodiment of the present invention provides a terminal device, including: at least one memory; and a processor communicatively connected to the at least one memory, the memory storing a computer program executable by the at least one processor, the at least one processor implementing the steps of the method as described above when executing the computer program.
A fourth aspect of embodiments of the present invention provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the method as described above.
According to the embodiments of the invention, the voice uttered by the control person is acquired and recognized, and a control instruction is obtained to control the unmanned ship to complete the designated operation; accurate recognition of the control instruction improves the accuracy and efficiency of unmanned ship control, improves the safety of the unmanned ship, and saves considerable manpower and material resources.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the embodiments or the prior art descriptions will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without inventive exercise.
Fig. 1 is a schematic flow chart of an unmanned ship control method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of the dynamic time warping algorithm provided in an embodiment of the present invention;
fig. 3 is a schematic flow chart of an unmanned ship control method according to a second embodiment of the present invention;
fig. 4 is a schematic flow chart of an unmanned ship control method according to a third embodiment of the present invention;
fig. 5 is a schematic flow chart of an unmanned ship control method according to a fourth embodiment of the present invention;
fig. 6 is a schematic flow chart of an unmanned ship control method according to a fifth embodiment of the present invention;
fig. 7 is a schematic structural diagram of an unmanned ship control device according to a sixth embodiment of the present invention;
fig. 8 is a schematic diagram of a terminal device according to a seventh embodiment of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood by those skilled in the art, the technical solutions in the embodiments of the present invention will be clearly described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some embodiments of the present invention, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terms "comprises" and "comprising," and any variations thereof, in the description and claims of this invention and the above-described drawings are intended to cover non-exclusive inclusions. For example, a process, method, or system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus. Furthermore, the terms "first," "second," and "third," etc. are used to distinguish between different objects and are not used to describe a particular order.
In order to explain the technical means of the present invention, the following description will be given by way of specific examples.
Example one
As shown in fig. 1, the present embodiment provides an unmanned ship control method, which may be applied to a terminal device such as an unmanned ship control base station. The unmanned ship control method provided by the embodiment comprises the following steps:
and S101, acquiring voice information sent by a control person.
In a specific application, a sound pickup device can be arranged in advance at the control base station to acquire the voice information uttered by a control person; the voice information is transmitted to the control base station through the pickup device and then processed to obtain the corresponding control information.
And S102, determining the voice information as control information.
In a specific application, the preset control information may include a preset control confirmer and control content. The voice information is parsed and recognized; if the parsing result includes the preset control confirmer and the control content, the voice information is control information. If the parsing result lacks either the preset control confirmer or the control content, the voice information is not control information and is not processed further.
And S103, obtaining a recognition result according to the control information and a preset voice database.
In a specific application, if the voice information is judged to be control information according to its parsing result, matching the control information against the preset voice database ensures that the control information was uttered by a control person, avoiding misoperation by non-control persons; the control information is then recognized by a preset algorithm to obtain a recognition result (that is, the corresponding control instruction). The preset algorithm includes, but is not limited to, the Dynamic Time Warping (DTW) algorithm.
As is known to those skilled in the art, recognition and matching of voice information must solve a key problem: two (or more) pronunciations of the same vocabulary by the same user never have exactly the same characteristics. The differences between two pronunciations include, but are not limited to, the intensity of the sound, the offset of the spectrum, and the lengths of the syllables; moreover, there is no linear correspondence between the time axes of the two pronunciations. Therefore, a nonlinear time warping technique that matches the actual situation (such as the dynamic time warping algorithm) must be selected to recognize and match the voice information.
The principle of the dynamic time warping algorithm is shown in fig. 2.
In a two-dimensional rectangular coordinate system, the frame numbers n = 1, 2, …, N of the test template (N an integer greater than 0) are marked on the horizontal axis, and the frame numbers m = 1, 2, …, M of the reference template (M an integer greater than 0) are marked on the vertical axis; vertical and horizontal lines drawn through the marked coordinates form a grid. Each intersection (t_i, r_j) in the grid represents the pairing of a frame (n = i) of the test template with a frame (m = j) of the reference template.
The dynamic time warping algorithm first calculates the distance between the frames of the two templates (that is, solves the frame matching distance matrix), and then finds an optimal matching path in the frame matching distance matrix.
the process of obtaining the best path is as follows: starting from point (1, 1), the local path constraint is as shown on FIG. 2, and the previous lattice points reachable by any point (in, im) include (in-1, im), (in-1, im-l), and (in-1, im-2). And selecting the grid point corresponding to the minimum value in the distances between the (in, im) grid points and the three grid points as a grid point before the point (in, im). At this time, the cumulative distance of the paths is:
D(in,im)=d(T(in),R(im))+min{D(in-1,im),D(in-1,im-1),D(in-1,im-2)};
according to the method, a best matching path can be obtained by starting from the point (l, 1) (making D (1, 1) ═ 0), calculating the accumulated distance of the obtained path, and repeating recursion until the point (N, M) is searched, wherein D (N, M) is the matching distance corresponding to the best matching path.
When the voice recognition is carried out, the test template is matched with all the reference templates, and the voice information corresponding to the minimum value Dmin (N, M) in all the matching distances is the recognition result.
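The DTW recursion described above can be sketched as follows (an illustrative sketch only, not the patent's implementation: frames are reduced to scalar features and the frame matching distance d is taken as an absolute difference):

```python
def dtw_distance(test, ref):
    """Matching distance D(N, M) under the local path constraint above:
    the predecessors of (i, j) are (i-1, j), (i-1, j-1), and (i-1, j-2)."""
    n, m = len(test), len(ref)
    INF = float("inf")
    # D[i][j]: cumulative distance of the best path ending at (i, j), 1-based
    D = [[INF] * (m + 1) for _ in range(n + 1)]

    def d(i, j):  # frame matching distance (absolute difference here)
        return abs(test[i - 1] - ref[j - 1])

    D[1][1] = 0.0  # the recursion starts from (1, 1) with D(1, 1) = 0
    for i in range(2, n + 1):
        for j in range(1, m + 1):
            prev = min(D[i - 1][j],
                       D[i - 1][j - 1],
                       D[i - 1][j - 2] if j > 1 else INF)
            if prev < INF:
                D[i][j] = d(i, j) + prev
    return D[n][m]
```

As stated above, the test template is then matched against every reference template, and the template with the minimum matching distance selects the recognition result.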
In conclusion, the dynamic time warping algorithm recognizes specific speakers, small vocabularies, and isolated words accurately, and is therefore suitable for precise control of unmanned boats; moreover, only the voice of an enrolled person can be recognized by the preset algorithm, so even if a voice control instruction is leaked during a maritime confrontation, the opposing party cannot control one's own unmanned boat.
And S104, generating a control command according to the identification result, and controlling the unmanned ship to complete the operation corresponding to the control command.
In a specific application, the preset control confirmer in the identification result is removed, the control content in the identification result is reserved to generate a corresponding control command, and the control command is sent to the unmanned ship to control the unmanned ship to complete the designated operation corresponding to the control command, so that the unmanned ship can normally operate.
In one embodiment, after step S104, the method further includes:
broadcasting a control instruction;
and if no other control instruction is received or control instruction confirmation information sent by the base station is received after the preset time period, controlling the unmanned ship to complete corresponding operation according to the control instruction.
In the specific application, the control instruction is broadcast so that a control person at the base station can confirm it, ensuring its correctness. If another control instruction is received after the preset time period, the control operation is executed according to that other instruction. For example, upon receiving an instruction to maintain the original operation state, the instruction to maintain the original operation state is sent to the unmanned ship so that it maintains its original operation state.
And if the control instruction confirmation information sent by the base station is received after the preset time period, controlling the unmanned ship to finish corresponding operation according to the control instruction.
And if no other control instruction is received after the preset time period, the control operation is executed according to the control instruction so that the unmanned ship operates normally. The preset time period can be set according to the actual situation; for example, if it is set to 3 s and no other control instruction is received after 3 s, the control operation is executed according to the control instruction so that the unmanned ship operates normally.
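The broadcast-and-confirm window described above can be sketched as follows (an illustrative sketch under assumptions: a `queue.Queue` stands in for the base-station link, the 3 s window follows the example, and `confirm_or_override` and the "CONFIRM" message are invented names):

```python
import queue

def confirm_or_override(command, inbox, timeout_s=3.0):
    """Broadcast `command`, then wait up to `timeout_s` seconds for the
    base station. Returns the command to execute: an overriding
    instruction if one arrives in time, otherwise the original command
    (on silence or on explicit confirmation)."""
    print(f"broadcast: {command}")  # stand-in for the audible broadcast
    try:
        msg = inbox.get(timeout=timeout_s)
    except queue.Empty:
        return command  # no response within the window: execute as-is
    if msg == "CONFIRM":
        return command  # base station confirmed the instruction
    return msg  # another control instruction overrides the original
```

A short timeout keeps the boat responsive while still giving the base station a chance to veto a misrecognized command.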
In this embodiment, the voice uttered by the control person is acquired and recognized, and a control instruction is obtained to control the unmanned ship to complete the designated operation; accurate recognition of the control instruction improves the precision and efficiency of unmanned ship control, improves the safety of the unmanned ship, and saves considerable manpower and material resources.
Example two
As shown in fig. 3, this embodiment is a further description of the method steps in the first embodiment. In this embodiment, step S102 includes:
s1021, analyzing the voice message, and judging whether the voice message contains a preset control confirmation symbol and control content.
In the specific application, the voice information is parsed to obtain a parsing result, and whether the parsing result contains the preset control confirmer and control content is judged. The preset control confirmer is confirmation information used to confirm whether the voice information is a control command, and can be set according to the actual situation. For example, a control command confirmer "0101" is added to the head of the control command (where "0" and "1" may be pronounced "dong" (洞) and "yao" (幺), the standard Chinese radio-telephony readings of the digits) to increase the accuracy of voice recognition and reduce erroneous control of the unmanned ship during ordinary communication among control personnel. Specifically, "0101 left rudder 10" may represent left rudder 10 degrees; "0101 right advance 3" may represent the right throttle pushed to gear 3; "0101 orientation 1322" may represent navigation at a fixed heading of 132.2 degrees; "0101 fixed speed 102" may represent navigation at a fixed speed of 10.2 knots; "0101 full speed forward" may represent both left and right throttles pushed to the maximum forward gear; "0101 full speed reverse" may represent both left and right throttles pushed to the maximum reverse gear; "0101 track 013" may represent tracking the vessel numbered 013 identified by the environment-sensing sensor; "0101 standby" may represent standing by at the current location; "0101 return voyage" may represent returning to the mother vessel or to the vicinity of the base station.
S1022, if the voice message includes the preset control confirmer and the control content, the voice message is determined to be the control message.
In a specific application, if the voice information includes the preset control confirmer and control content, the voice information is determined to be control information. For example, if the voice information is "0101 return," it is confirmed that the voice information carries the preset control confirmer "0101" and the control content "return," and the voice information is determined to be control information.
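The confirmer-and-content check of steps S1021 and S1022 can be sketched as simple prefix parsing (an illustrative sketch; `parse_control` is an invented name, and "0101" is the example confirmer from above):

```python
CONTROL_CONFIRMER = "0101"

def parse_control(utterance):
    """Return the control content if `utterance` carries the preset
    control confirmer, else None (non-control speech is ignored)."""
    text = utterance.strip()
    if not text.startswith(CONTROL_CONFIRMER):
        return None  # no confirmer: not control information
    content = text[len(CONTROL_CONFIRMER):].strip()
    return content or None  # a confirmer alone carries no control content
```

For example, `parse_control("0101 return")` yields `"return"`, while ordinary conversation yields `None` and triggers no further processing.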
According to the embodiment, the control instruction confirmer is preset, whether the voice information is the control information is confirmed according to the preset control instruction confirmer, and further operation is performed according to the confirmation result, so that the safety and reliability of the voice control unmanned ship are improved.
EXAMPLE III
As shown in fig. 4, this embodiment is a further description of the method steps in the first embodiment. In this embodiment, step S104 includes:
and S1041, acquiring a preset control confirmer and control content in the identification result.
In a specific application, since the preset control confirmer is only used to confirm whether the voice information is control information and carries no other substantive meaning, the preset control confirmer is removed and the control content is retained.
And S1042, generating a control instruction according to the control content, and controlling the unmanned ship to complete the operation corresponding to the control instruction.
In a specific application, for example, if the voice information is '0101 return voyage', a return voyage control instruction is generated to control the unmanned ship to complete the return voyage operation.
According to the embodiment, the control command is generated according to the control content by removing the preset control confirmer, so that the precision of the control command is improved, and the time consumed in the command transmission process is shortened.
Example four
As shown in fig. 5, this embodiment is a further description of the method steps in the first embodiment. In this embodiment, before step S101, the method includes:
s201, acquiring a preset voice control instruction input by a controller.
In specific application, a preset voice control instruction input by a control person is obtained. The number of control personnel with voice control commands can be specifically set according to the actual conditions of the operation environment of the boat, the application of the boat, the control authority and the like.
For example, the preset control personnel are set to be the control personnel with the control authority, and the number of the preset control personnel is the number of the control personnel with the control authority.
And S202, correspondingly generating a voice data packet according to the voice control instruction, and storing the voice data packet in a voice database.
In the specific application, according to the voice control instructions input by the preset control persons, voice data packets corresponding in number to the control persons are generated and stored in the preset voice database, so that each control person's voice control instruction is associated with a corresponding voice data packet. After the voice data packets are subsequently bound to the control persons' identity information, each control person corresponds to one piece of identity information and each piece of identity information corresponds to one voice data packet, so that voice data can be verified through the dual verification of voice recognition and identity-information verification, avoiding misoperation. For example, the voice control instruction input by the control person with ID 001 may generate a voice data packet labeled 001, and that voice data packet is bound to the control person's ID.
In one embodiment, before step S101, the method further includes:
acquiring preset identity identification information of control personnel; wherein the identification information comprises at least one of the following information: fingerprint information, palm print information, iris information and face image information of a control person;
and binding the identity identification information of the control personnel with the corresponding voice data packet.
In the specific application, the preset identity recognition information of the control persons is acquired, and each control person's identity recognition information is bound to that person's corresponding voice data packet. The identification information includes, but is not limited to, at least one of the control person's ID, fingerprint information, palm print information, iris information, and face image information. For example, when the fingerprint information of the control person with ID 001 is acquired, the fingerprint information is bound to the voice data packet labeled 001, forming a voice database in which voice data packets and control persons are mapped one to one. If the amount of acquired identification information is less than the preset number of control persons, the acquired identification information is bound to the corresponding voice data packets, and for each voice data packet lacking identification information, prompt information indicating that the control person's identification information has not been acquired is generated and displayed according to other information of that control person (for example, a pre-entered name).
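The enrollment and binding flow of this embodiment can be sketched with a plain dictionary as the voice database (an illustrative data model; the function names, field names, and packet placeholders are all assumptions, not the patent's implementation):

```python
# controller ID -> {"packet": voice data packet, "identity": bound identity info}
voice_db = {}

def enroll(controller_id, voice_packet):
    """S201-S202: store the controller's voice data packet under their ID."""
    voice_db[controller_id] = {"packet": voice_packet, "identity": None}

def bind_identity(controller_id, identity_info):
    """Bind identity information (e.g. a fingerprint) to the enrolled packet.
    Returns False, with a prompt, if no packet is enrolled for this ID."""
    entry = voice_db.get(controller_id)
    if entry is None:
        print(f"identification information not acquired for controller {controller_id}")
        return False
    entry["identity"] = identity_info
    return True
```

With this one-to-one mapping in place, a later command can be checked against both the voice packet and the bound identity.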
The embodiment generates the voice data packets in one-to-one correspondence according to the voice information input by the control personnel so as to perform voice recognition, thereby improving the accuracy and the authenticity of the voice recognition.
EXAMPLE five
As shown in fig. 6, this embodiment is a further description of the method steps in the first embodiment. In this embodiment, step S103 includes:
and S1031, if the voice information is the control information, matching with the voice data packet in the voice database according to the voice information.
In a specific application, if the voice information is identified as the control information, matching is performed according to the voice information and a voice data packet in a voice database, and whether the voice information is sent by a control person with a control authority is judged.
And S1032, if the matched voice data packet exists, acquiring the identity identification information of the controller corresponding to the matched voice data packet.
In specific application, if the corresponding voice data packet is matched according to the voice information, the identity identification information of the control personnel corresponding to the matched voice data packet is obtained, and the next authentication is carried out.
And S1033, acquiring the identification information.
In a specific application, the identification information of a person sending voice information is obtained.
S1034, whether the identity recognition information is matched with the identity recognition information of the control personnel corresponding to the matched voice data packet or not is judged.
In the specific application, whether the identity recognition information is matched with the identity recognition information of the control personnel corresponding to the matched voice data packet is judged. And if not, judging that the voice recognition result is wrong, wherein the person sending the voice information is not a control person with the control authority, or the person sending the voice information is not a control person for performing the control operation.
And S1035, if the identification information is matched with the identification information of the control personnel corresponding to the matched voice data packet, activating the matched voice data packet.
In the specific application, if the identity recognition information (for example, fingerprint information) matches the identity identification information of the control person corresponding to the matched voice data packet, the voice recognition is judged to be correct, the person who uttered the voice information is confirmed to be a control person with control authority, and the matched voice data packet is activated for recognition.
S1036, identifying the voice information according to the voice data packet, and obtaining an identification result.
In a specific application, due to individual pronunciation difference, the voice information can be identified in detail according to the voice data packet, so as to accurately acquire the control information in the voice information.
Through the dual verification of voice recognition and identity-information verification, this embodiment avoids misoperation caused by false triggering and improves the accuracy and security of unmanned boat control.
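The dual-verification flow of steps S1031-S1036 can be sketched as follows (an illustrative sketch; `match_packet` and `recognize` are hypothetical stand-ins for the voice-matching and detailed-recognition stages, passed in as callables):

```python
def authorize_and_recognize(voice_info, identity_info, voice_db,
                            match_packet, recognize):
    """Dual verification: first match the speaker's voice against the
    enrolled packets (S1031), then check the presented identity against
    the identity bound to the matched packet (S1033-S1034); only then
    activate the packet and recognize the command (S1035-S1036)."""
    entry = match_packet(voice_info, voice_db)  # voice matching
    if entry is None:
        return None  # no matching packet: speaker has no control authority
    if identity_info != entry["identity"]:
        return None  # identity mismatch: reject to avoid false triggering
    return recognize(voice_info, entry["packet"])  # detailed recognition
```

Rejecting at either stage means no control instruction is ever generated from an unauthorized or misattributed utterance.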
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present invention.
EXAMPLE six
As shown in fig. 7, the present embodiment provides an unmanned ship control device 100 for performing the method steps of the first embodiment. The unmanned ship control device 100 of the present embodiment includes:
the acquisition module 101 is used for acquiring voice information sent by a control person;
the judging module 102 is configured to determine that the voice information is control information;
the recognition module 103 is used for obtaining a recognition result according to the control information and a preset voice database;
and the control module 104 is used for generating a control instruction according to the recognition result and controlling the unmanned ship to complete the operation corresponding to the control instruction.
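The four modules above form an acquire, judge, recognize, control pipeline. A minimal sketch follows; the confirmation symbol, function names, and string handling are all illustrative assumptions rather than the patent's implementation:

```python
# Minimal sketch of the acquisition -> judging -> recognition -> control
# pipeline formed by modules 101-104. All names and string handling are
# illustrative assumptions.

CONFIRMATION_SYMBOL = "command:"  # assumed preset control confirmation symbol


def acquire(raw_audio: str) -> str:
    """Module 101: acquire the voice information sent by a control person."""
    return raw_audio.strip().lower()


def is_control_info(voice_info: str) -> bool:
    """Module 102: judge whether the voice information is control information,
    i.e. whether it contains the confirmation symbol plus control content."""
    return (voice_info.startswith(CONFIRMATION_SYMBOL)
            and len(voice_info) > len(CONFIRMATION_SYMBOL))


def recognize(voice_info: str) -> str:
    """Module 103: obtain a recognition result (here: strip the symbol)."""
    return voice_info[len(CONFIRMATION_SYMBOL):].strip()


def control(recognition_result: str) -> str:
    """Module 104: generate a control instruction from the result."""
    return f"EXECUTE {recognition_result.upper()}"


voice = acquire("  Command: turn to port  ")
if is_control_info(voice):
    print(control(recognize(voice)))  # EXECUTE TURN TO PORT
```

Keeping the four stages as separate functions mirrors the module division the embodiment describes: each stage can be replaced (e.g. the substring check by a real recognizer) without touching the others.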
In one embodiment, the judging module 102 includes:
the analysis unit is used for analyzing the voice information and judging whether the voice information contains a preset control confirmation symbol and control content;
the first judging unit is used for judging that the voice information is control information if the voice information contains the preset control confirmation symbol and the control content.
In one embodiment, the control module 104 includes:
the first acquisition unit is used for acquiring the preset control confirmation symbol and the control content in the recognition result;
and the generating unit is used for generating a control instruction according to the control content and controlling the unmanned ship to finish the operation corresponding to the control instruction.
In one embodiment, the apparatus 100 further comprises:
the broadcasting unit is used for broadcasting the control instruction;
and the execution unit is used for controlling the unmanned ship to complete the corresponding operation according to the control instruction if no other control instruction is received within a preset time period, or if control instruction confirmation information sent by the base station is received.
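The broadcast-then-execute behavior of these two units amounts to a bounded wait: broadcast the instruction, then execute it unless a different instruction supersedes it within the preset period, with a base-station confirmation ending the wait early. A sketch using a queue as a stand-in for the instruction channel; the timeout value and message strings are assumptions:

```python
import time
from queue import Empty, Queue

# Sketch of broadcast-and-confirm: broadcast the instruction, then execute it
# only if no overriding instruction arrives within the preset period, or if
# the base station confirms it. The queue-based channel is illustrative.


def broadcast_and_execute(instruction: str, inbox: Queue,
                          timeout_s: float = 0.1) -> str:
    print(f"broadcast: {instruction}")  # broadcasting unit
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        try:
            msg = inbox.get(timeout=max(0.0, deadline - time.monotonic()))
        except Empty:
            break  # preset period elapsed with nothing received
        if msg == "CONFIRM":  # base-station confirmation: execute early
            return f"executed: {instruction}"
        return f"superseded by: {msg}"  # another control instruction arrived
    # no other instruction within the preset period: execute as broadcast
    return f"executed: {instruction}"


channel = Queue()
print(broadcast_and_execute("turn to port", channel, timeout_s=0.05))
channel.put("CONFIRM")
print(broadcast_and_execute("turn to port", channel, timeout_s=0.05))
```

The deadline is computed once with `time.monotonic()` so the total wait stays bounded even if several messages arrive during the window.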
In one embodiment, the apparatus 100 further comprises:
the second acquisition module is used for acquiring a preset voice control instruction input by a control person;
and the storage unit is used for correspondingly generating the voice data packet according to the voice control instruction and storing the voice data packet into the voice database.
In one embodiment, the apparatus 100 further comprises:
the third acquisition module is used for acquiring the preset identification information of the control personnel; wherein the identification information comprises at least one of the following information: fingerprint information, palm print information, iris information and face image information of a control person;
and the binding module is used for binding the identity identification information of the control personnel with the corresponding voice data packet.
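Enrolment (the second and third acquisition modules plus the binding module) builds the database that the recognition module later queries. A minimal sketch; the dict storage layout and identifier scheme are illustrative assumptions:

```python
# Sketch of enrolment: generate a voice data packet from a preset voice
# control instruction, then bind it to the control person's identity
# identification information (fingerprint, palm print, iris, face image).
# The dict-based storage layout is an illustrative assumption.

voice_database: dict[str, dict] = {}


def store_voice_packet(person: str, voice_instruction: str) -> str:
    """Second acquisition module + storage unit: create and store a packet."""
    packet_id = f"packet_{person}"
    voice_database[packet_id] = {"instruction": voice_instruction}
    return packet_id


def bind_identity(packet_id: str, identity_info: dict) -> None:
    """Third acquisition module + binding module: attach identity info."""
    voice_database[packet_id]["identity"] = identity_info


pid = store_voice_packet("zhang", "turn to port")
bind_identity(pid, {"fingerprint": "fp_zhang", "iris": "iris_zhang"})
print(voice_database[pid]["identity"]["fingerprint"])  # fp_zhang
```

Binding identity records to packets at enrolment time is what makes the later double verification possible: the recognition module can look up, for any matched packet, exactly whose identity information must be presented.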
In one embodiment, the recognition module 103 includes:
the matching unit is used for matching the voice information against the voice data packets in the voice database if the voice information is control information;
the second acquisition unit is used for acquiring the identity identification information of the control personnel corresponding to the matched voice data packet if the matched voice data packet is obtained;
a third acquisition unit, configured to acquire the identity identification information of the person who sent the voice information;
and the second judging unit is used for judging whether the identity identification information matches the identity identification information of the control personnel corresponding to the matched voice data packet;
the activation unit is used for activating the matched voice data packet if the identity identification information matches the identity identification information of the control personnel corresponding to the matched voice data packet;
and the fourth acquisition unit is used for identifying the voice information according to the voice data packet and acquiring an identification result.
In this embodiment, the voice issued by the control personnel is acquired and recognized to obtain the control instruction, so that the unmanned ship is controlled to complete the specified operation. Accurate recognition of control instructions improves the precision and efficiency of unmanned ship control operations, improves the safety of the unmanned ship, and saves a large amount of manpower and material resources.
EXAMPLE seven
Fig. 8 is a schematic diagram of the terminal device provided in this embodiment. As shown in fig. 8, the terminal device 8 of this embodiment includes: a processor 80, a memory 81, and a computer program 82, such as an unmanned ship control program, stored in the memory 81 and operable on the processor 80. The processor 80, when executing the computer program 82, implements the steps in the various unmanned ship control method embodiments described above, such as steps S101 to S104 shown in fig. 1. Alternatively, the processor 80, when executing the computer program 82, implements the functions of the modules/units in the above-described device embodiments, such as the functions of the modules 101 to 104 shown in fig. 7.
Illustratively, the computer program 82 may be divided into one or more modules/units, which are stored in the memory 81 and executed by the processor 80 to carry out the invention. One or more of the modules/units may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution of the computer program 82 in the terminal device 8. For example, the computer program 82 may be divided into an acquisition module, a determination module, an identification module, and a control module, each module having the following specific functions:
the acquisition module is used for acquiring voice information sent by a control person;
the judging module is used for determining the voice information as the control information;
the recognition module is used for obtaining a recognition result according to the control information and a preset voice database;
and the control module is used for generating a control instruction according to the recognition result and controlling the unmanned ship to complete the operation corresponding to the control instruction.
The terminal device 8 may be a desktop computer, a notebook computer, a palmtop computer, a cloud server, or another computing device. The terminal device may include, but is not limited to, the processor 80 and the memory 81. Those skilled in the art will appreciate that fig. 8 is merely an example of the terminal device 8 and does not constitute a limitation of it; the terminal device may include more or fewer components than shown, combine some components, or use different components. For example, it may also include input/output devices, network access devices, buses, and so on.
The Processor 80 may be a Central Processing Unit (CPU), other general-purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other Programmable logic device, a discrete Gate or transistor logic device, a discrete hardware component, or the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 81 may be an internal storage unit of the terminal device 8, such as a hard disk or internal memory of the terminal device 8. The memory 81 may also be an external storage device of the terminal device 8, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash memory card provided on the terminal device 8. Further, the memory 81 may include both an internal storage unit and an external storage device of the terminal device 8. The memory 81 is used to store the computer program and other programs and data required by the terminal device, and may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other ways. For example, the above-described embodiments of the apparatus/terminal device are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated modules/units, if implemented in the form of software functional units and sold or used as independent products, may be stored in a computer-readable storage medium. Based on this understanding, all or part of the flow of the methods of the above embodiments may also be implemented by a computer program, which may be stored in a computer-readable storage medium; when the computer program is executed by a processor, the steps of the method embodiments may be implemented. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, or some intermediate form. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content of the computer-readable medium may be increased or decreased as required by legislation and patent practice in a given jurisdiction; for example, in some jurisdictions, computer-readable media do not include electrical carrier signals and telecommunications signals.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present invention, and are intended to be included within the scope of the present invention.

Claims (10)

1. An unmanned ship control method is characterized by comprising the following steps:
acquiring voice information sent by a control person;
determining the voice information as control information;
obtaining a recognition result according to the control information and a preset voice database;
and generating a control instruction according to the identification result, and controlling the unmanned ship to finish the operation corresponding to the control instruction.
2. The unmanned ship control method of claim 1, wherein the determining that the voice information is control information comprises:
analyzing the voice information, and judging whether the voice information contains a preset control confirmation symbol and control content;
and if the voice information contains the preset control confirmation symbol and the control content, judging that the voice information is the control information.
3. The unmanned ship control method of claim 2, wherein the generating a control instruction according to the recognition result and controlling the unmanned ship to complete the operation corresponding to the control instruction comprises:
acquiring the preset control confirmation symbol and the control content in the recognition result;
and generating a control instruction according to the control content, and controlling the unmanned ship to finish the operation corresponding to the control instruction.
4. The unmanned ship control method of claim 2, further comprising, after generating a control command based on the recognition result:
broadcasting the control instruction;
and if no other control instruction is received or control instruction confirmation information sent by the base station is received after a preset time period, controlling the unmanned ship to complete corresponding operation according to the control instruction.
5. The unmanned ship control method of claim 1, wherein before the acquiring voice information sent by a control person, the method further comprises:
acquiring the identity identification information of the preset control personnel; wherein the identification information comprises at least one of the following information: fingerprint information, palm print information, iris information and face image information of a control person;
and binding the identity identification information of the control personnel with the corresponding voice data packet.
6. The unmanned ship control method of claim 5, wherein obtaining a recognition result based on the control information and a preset voice database comprises:
if the voice information is control information, matching with a voice data packet in a voice database according to the voice information;
if the matched voice data packet exists, acquiring the identity identification information of the control personnel corresponding to the matched voice data packet;
acquiring the identity identification information of the person sending the voice information;
judging whether the identity identification information is matched with the identity identification information of the control personnel corresponding to the matched voice data packet;
if the identity identification information is matched with the identity identification information of the control personnel corresponding to the matched voice data packet, activating the matched voice data packet;
and identifying the voice information according to the voice data packet to obtain an identification result.
7. The unmanned ship control method of claim 6, wherein before the acquiring voice information sent by a control person, the method further comprises:
acquiring a preset voice control instruction input by a control person;
and correspondingly generating a voice data packet according to the voice control instruction, and storing the voice data packet to the voice database.
8. An unmanned ship control device, comprising:
the acquisition module is used for acquiring voice information sent by a control person;
the judging module is used for determining the voice information as control information;
the recognition module is used for obtaining a recognition result according to the control information and a preset voice database;
and the control module is used for generating a control instruction according to the recognition result and controlling the unmanned ship to complete the operation corresponding to the control instruction.
9. A terminal device, comprising: at least one memory; and at least one processor communicatively connected to the at least one memory, the memory storing a computer program executable by the at least one processor, wherein the at least one processor, when executing the computer program, implements the steps of the method according to any one of claims 1 to 7.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by at least one processor, carries out the steps of the method according to any one of claims 1 to 7.
CN201911082052.7A 2019-11-07 2019-11-07 Unmanned ship control method and device, terminal equipment and storage medium Pending CN111007846A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911082052.7A CN111007846A (en) 2019-11-07 2019-11-07 Unmanned ship control method and device, terminal equipment and storage medium

Publications (1)

Publication Number Publication Date
CN111007846A true CN111007846A (en) 2020-04-14

Family

ID=70111365

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911082052.7A Pending CN111007846A (en) 2019-11-07 2019-11-07 Unmanned ship control method and device, terminal equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111007846A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112034853A (en) * 2020-09-04 2020-12-04 南京凌华微电子科技有限公司 Working method of household monitoring robot

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104865856A (en) * 2015-03-30 2015-08-26 成都好飞机器人科技有限公司 Voice control method for unmanned aerial vehicle
CN107799120A (en) * 2017-11-10 2018-03-13 北京康力优蓝机器人科技有限公司 Service robot identifies awakening method and device
CN108122553A (en) * 2017-12-20 2018-06-05 深圳市道通智能航空技术有限公司 A kind of unmanned aerial vehicle (UAV) control method, apparatus, remote control equipment and UAV system
CN108181899A (en) * 2017-12-14 2018-06-19 北京汽车集团有限公司 Control the method, apparatus and storage medium of vehicle traveling
CN108780301A (en) * 2017-02-13 2018-11-09 深圳市大疆创新科技有限公司 Control method, unmanned plane and the remote control equipment of unmanned plane
CN109979443A (en) * 2017-12-27 2019-07-05 深圳市优必选科技有限公司 A kind of rights management control method and device for robot
CN110033764A (en) * 2019-03-08 2019-07-19 中国科学院深圳先进技术研究院 Sound control method, device, system and the readable storage medium storing program for executing of unmanned plane


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information
Address after: 519080 rooms 311 and 312A, 3 / F, Xiangshan ocean science and technology port, 3888 North Lovers Road, Tangjiawan Town, high tech Zone, Zhuhai City, Guangdong Province
Applicant after: Zhuhai Yunzhou Intelligent Technology Co.,Ltd.
Address before: Room 2 214, teaching area, No.1, software garden road, Tangjiawan Town, Zhuhai City, Guangdong Province
Applicant before: ZHUHAI YUNZHOU INTELLIGENCE TECHNOLOGY Ltd.
RJ01 Rejection of invention patent application after publication
Application publication date: 20200414