CN111681658A - Voice control method and device for vehicle-mounted APP - Google Patents

Voice control method and device for vehicle-mounted APP

Info

Publication number
CN111681658A
Authority
CN
China
Prior art keywords
vehicle
app
instruction
coordinate information
voice
Prior art date
Legal status
Withdrawn
Application number
CN202010507524.5A
Other languages
Chinese (zh)
Inventor
李凯
曾春华
何跃进
Current Assignee
AI Speech Ltd
Original Assignee
AI Speech Ltd
Priority date
Filing date
Publication date
Application filed by AI Speech Ltd
Priority to CN202010507524.5A
Publication of CN111681658A
Withdrawn

Classifications

    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 - Speech recognition
    • G10L 15/22 - Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L 15/26 - Speech to text systems
    • G10L 2015/223 - Execution procedure of a spoken command

Landscapes

  • Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention discloses a voice control method and a voice control apparatus for a vehicle-mounted APP. The voice control method for the vehicle-mounted APP comprises the following steps: in response to the vehicle-mounted APP being started, acquiring at least one controllable area on a display interface of the vehicle-mounted APP; parsing out key words corresponding to the at least one controllable area and coordinate information corresponding to the at least one controllable area; in response to a voice instruction of a user, performing speech recognition on the voice instruction to obtain a corresponding text instruction; performing semantic analysis on the text instruction, and judging, based on the result of the semantic analysis, whether the text instruction matches the key words of the at least one controllable area; if the text instruction matches the key words of any controllable area, acquiring the coordinate information of those words on the interface; and simulating manipulation of the display interface based at least on the coordinate information.

Description

Voice control method and device for vehicle-mounted APP
Technical Field
The invention belongs to the technical field of intelligent vehicle-mounted systems, and in particular relates to a voice control method and device for a vehicle-mounted APP.
Background
In the prior art, the following scheme is generally adopted to realize the voice function of a vehicle-mounted APP: the vehicle-mounted APP provides an SDK (Software Development Kit), and the voice developer then integrates the related voice functions against that SDK.
In the process of implementing the present application, the inventors found that the prior art scheme has at least the following defects: this approach requires the in-vehicle APP to provide an interface to the voice developer, and not all in-vehicle APPs can provide such an interface. Moreover, even if every vehicle-mounted APP could provide an SDK, the implementation usually has to integrate with each APP one by one, which involves a certain technical difficulty and requires modifying the source code of the Android system for adaptation.
Disclosure of Invention
The embodiments of the invention provide a voice control method and a voice control device for a vehicle-mounted APP, which are used to solve at least one of the above technical problems.
In a first aspect, an embodiment of the present invention provides a voice control method for a vehicle-mounted APP, including: in response to the vehicle-mounted APP being started, acquiring at least one controllable area on a display interface of the vehicle-mounted APP; parsing out key words corresponding to the at least one controllable area and coordinate information corresponding to the at least one controllable area; in response to a voice instruction of a user, performing speech recognition on the voice instruction to obtain a corresponding text instruction; performing semantic analysis on the text instruction, and judging, based on the result of the semantic analysis, whether the text instruction matches the key words of the at least one controllable area; if the text instruction matches the key words of any controllable area, acquiring the coordinate information of those words on the interface; and simulating manipulation of the display interface based at least on the coordinate information.
In a second aspect, an embodiment of the present invention provides a voice control apparatus for a vehicle-mounted APP, including: a controllable area obtaining module configured to, in response to the vehicle-mounted APP being started, obtain at least one controllable area on a display interface of the vehicle-mounted APP; a parsing module configured to parse out the key words corresponding to the at least one controllable area and the coordinate information corresponding to the at least one controllable area; a speech recognition module configured to, in response to a voice instruction of a user, perform speech recognition on the voice instruction and obtain a corresponding text instruction; an analysis and matching module configured to perform semantic analysis on the text instruction and judge, based on the result of the semantic analysis, whether the text instruction matches the key words of the at least one controllable area; a coordinate information obtaining module configured to obtain the coordinate information of the words on the interface if the text instruction matches the key words of any controllable area; and a control module configured to simulate manipulation of the display interface based at least on the coordinate information.
In a third aspect, an electronic device is provided, comprising: at least one processor, and a memory communicatively connected to the at least one processor, wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor so as to enable the at least one processor to execute the steps of the voice control method for a vehicle-mounted APP of any embodiment of the invention.
In a fourth aspect, the present invention further provides a computer program product, where the computer program product includes a computer program stored on a non-volatile computer-readable storage medium, where the computer program includes program instructions, and when the program instructions are executed by a computer, the computer executes the steps of the voice control method for an in-vehicle APP according to any embodiment of the present invention.
With the method and apparatus provided by the application, after the vehicle-mounted APP is started, the controllable areas on the display interface of the vehicle-mounted APP can be acquired, together with the key words and coordinate information corresponding to each controllable area; then, after a voice instruction of the user is collected, it is judged whether the instruction matches an operation instruction for a controllable area, and if the matching succeeds, the corresponding coordinate information is found for simulated manipulation. Voice control of the vehicle-mounted APP can therefore be realized without integrating an SDK, so that voice can quickly control the vehicle-mounted APP without integrating with each APP one by one, which improves voice development efficiency and compatibility.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed in the description of the embodiments are briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the present invention, and other drawings can be obtained by those skilled in the art from these drawings without creative effort.
Fig. 1 is a flowchart of a voice control method for a vehicle-mounted APP according to an embodiment of the present invention;
fig. 2 is a flowchart of another speech control method for a vehicle-mounted APP according to an embodiment of the present invention;
fig. 3 is a flowchart of yet another voice control method for a vehicle-mounted APP according to an embodiment of the present invention;
fig. 4 is a flowchart of a specific embodiment of a voice control scheme for a vehicle-mounted APP according to an embodiment of the present invention;
figs. 5-10 are interface diagrams of particular embodiments of voice control schemes for a vehicle-mounted APP according to embodiments of the present invention;
fig. 11 is a block diagram of a voice control apparatus of a vehicle-mounted APP according to an embodiment of the present invention;
fig. 12 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, a flowchart of an embodiment of a voice control method for a vehicle-mounted APP according to the present application is shown. The voice control method of the present embodiment may be applied to voice control of a vehicle-mounted APP, and the present application is not limited herein.
As shown in fig. 1, in step 101, in response to a vehicle-mounted APP starting, at least one controllable region on a display interface of the vehicle-mounted APP is acquired;
in step 102, analyzing to obtain a key word corresponding to the at least one controllable area and coordinate information corresponding to the at least one controllable area;
in step 103, in response to a voice instruction of a user, performing voice recognition on the voice instruction to obtain a corresponding text instruction;
in step 104, performing semantic analysis on the text instruction, and judging whether the text instruction is matched with key words of the at least one controllable area based on a result of the semantic analysis;
in step 105, if the text instruction is matched with the key words in any controllable area, acquiring coordinate information of the words on the interface;
in step 106, manipulation of the display interface is simulated based at least on the coordinate information.
In this embodiment, for step 101, after the vehicle-mounted APP is started, the voice control apparatus of the vehicle-mounted APP first acquires at least one controllable area on the display interface of the vehicle-mounted APP. A controllable area may be a region of the interface that changes after being operated by the user in normal use, for example a button, a text box, or a selection box on the interface. The button may be a common "confirm/deny" button, a "login" button, or a "previous", "next", or "play/pause" button in a music playing interface, and the application is not limited herein. The text box is generally a collection box for collecting information input by the user, such as the "user name" and "password" text boxes in a login interface, and the application is not limited herein. The selection boxes generally include radio boxes and multiple-selection boxes, which are not described in detail herein, and the present application is not limited thereto. The above data can be obtained by analyzing the display interface of the vehicle-mounted APP, which is not repeated herein.
Then, for step 102, the voice control apparatus of the vehicle-mounted APP parses out the key words corresponding to the at least one controllable area and the coordinate information corresponding to the at least one controllable area. After the controllable areas on the interface are found, the key words corresponding to each controllable area are determined so that subsequent voice instructions can be matched against them, and the coordinate information corresponding to each controllable area is determined so that the controllable area can be operated later; in this way the controllable areas are connected with the subsequent voice control, completing the preparation work for voice control. It should be noted that, although step 102 and step 101 are described separately above, they may also be completed in one operation: the key words and corresponding coordinate information of each controllable area may be parsed or acquired while the controllable areas themselves are acquired. For example, in a music playing interface, after each controllable area is obtained, the key words and coordinate information corresponding to each controllable area are parsed at the same time. The key words are not necessarily unique and may cover several similar instructions; for example, "next one", "next song", and "change the song" may all correspond to the same button, which is not described further herein, and the present application is not limited herein.
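As an illustration only, one possible way to realize the acquisition of controllable areas, key words and coordinate information described in steps 101 and 102 on an Android-based vehicle-mounted host is to traverse the accessibility node tree of the foreground APP. The following Kotlin sketch assumes an enabled Android AccessibilityService; the class name AppScanService and the ControllableRegion container are illustrative names introduced here and do not appear in the patent, and the patent itself does not prescribe this mechanism.

import android.accessibilityservice.AccessibilityService
import android.graphics.Rect
import android.view.accessibility.AccessibilityEvent
import android.view.accessibility.AccessibilityNodeInfo

// Illustrative container: the key words of one controllable area and its on-screen coordinates.
data class ControllableRegion(val keyWords: List<String>, val bounds: Rect)

class AppScanService : AccessibilityService() {

    // Walks the view tree of the foreground vehicle-mounted APP and collects buttons,
    // text boxes and selection boxes together with their labels and bounds (steps 101-102).
    fun collectControllableRegions(): List<ControllableRegion> {
        val regions = mutableListOf<ControllableRegion>()
        val root = rootInActiveWindow ?: return regions
        traverse(root, regions)
        return regions
    }

    private fun traverse(node: AccessibilityNodeInfo, out: MutableList<ControllableRegion>) {
        val cls = node.className?.toString() ?: ""
        val isControllable = node.isClickable || node.isEditable ||
            cls.endsWith("Button") || cls.endsWith("EditText") ||
            cls.endsWith("CheckBox") || cls.endsWith("RadioButton")
        if (isControllable) {
            val bounds = Rect()
            node.getBoundsInScreen(bounds)                 // coordinate information of the area
            val label = node.text ?: node.contentDescription
            if (label != null) {
                // The visible label is taken as the key word; synonyms could be added here.
                out.add(ControllableRegion(listOf(label.toString()), bounds))
            }
        }
        for (i in 0 until node.childCount) {
            node.getChild(i)?.let { traverse(it, out) }
        }
    }

    override fun onAccessibilityEvent(event: AccessibilityEvent?) { /* re-scan on window changes if desired */ }
    override fun onInterrupt() {}
}

Each collected region thus carries a visible label as its key words and its screen bounds as its coordinate information, which corresponds to the preparation work described above.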
Then, for step 103, after receiving a voice instruction from the user, the voice control apparatus of the vehicle-mounted APP performs speech recognition on the voice instruction to obtain a corresponding text instruction. Then, for step 104, semantic analysis is performed on the text instruction, and it is judged, based on the result of the semantic analysis, whether the text instruction matches the key words of the at least one controllable area. The voice instruction of the user is thus analyzed through speech recognition and semantic understanding, and then matched against the key words of each controllable area.
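Purely as an illustrative sketch of the matching in steps 103 and 104 (speech recognition itself and full semantic analysis are outside its scope), the recognized text instruction can be compared against the key words of each controllable area, with a small synonym table standing in for semantic understanding. The object name InstructionMatcher and the synonym entries are assumptions made here for illustration; ControllableRegion is the illustrative class from the sketch above.

// Minimal keyword matcher for steps 103-104: a recognized text instruction is compared
// against each area's key words, with a few synonyms mapping to the same on-screen label.
object InstructionMatcher {

    // Several spoken forms may correspond to the same button, e.g. a "next" button.
    private val synonyms = mapOf(
        "next song" to "next",
        "change the song" to "next",
        "previous song" to "previous"
    )

    private fun normalize(text: String): String {
        val t = text.trim().lowercase()
        return synonyms[t] ?: t
    }

    // Returns the first controllable area whose key words match the text instruction, or null.
    fun match(textInstruction: String, regions: List<ControllableRegion>): ControllableRegion? {
        val wanted = normalize(textInstruction)
        return regions.firstOrNull { region ->
            region.keyWords.any { key ->
                val k = key.lowercase()
                k == wanted || wanted.contains(k) || k.contains(wanted)
            }
        }
    }
}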
Then, for step 105, if the text instruction matches the key words of any controllable area, the coordinate information of those words on the interface is acquired. A match with the key words of a controllable area indicates that the current voice instruction is an operation instruction for the interface of the current APP, so the corresponding coordinate information can be acquired. Finally, for step 106, the voice control apparatus of the vehicle-mounted APP can simulate manipulation of the display interface based at least on the coordinate information; the manipulation may include clicking, single selection, multiple selection, or entering words into a corresponding text box, and so on, which is not repeated herein, and the application is not limited herein.
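For the simulated manipulation of steps 105 and 106, one possible realization on Android is to dispatch a short tap gesture at the centre of the matched area's coordinates from an accessibility service (available on API level 24 and later). This is an assumption for illustration; the patent does not prescribe a specific click-injection mechanism.

import android.accessibilityservice.AccessibilityService
import android.accessibilityservice.GestureDescription
import android.graphics.Path
import android.graphics.Rect

// Simulates a tap at the centre of a matched area's bounds (steps 105-106).
// Requires API 24+ and an enabled accessibility service; the function name is illustrative.
fun AccessibilityService.simulateClick(bounds: Rect) {
    val path = Path().apply {
        moveTo(bounds.exactCenterX(), bounds.exactCenterY())
    }
    val gesture = GestureDescription.Builder()
        .addStroke(GestureDescription.StrokeDescription(path, 0L, 50L)) // a 50 ms tap
        .build()
    dispatchGesture(gesture, null, null)
}

For a matched area, calling simulateClick with that area's bounds reproduces the user's tap at the coordinates of the key words.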
With the method of this embodiment, after the vehicle-mounted APP is started, the controllable areas on the display interface of the vehicle-mounted APP are obtained, together with the key words and coordinate information corresponding to each controllable area; then, after the voice instruction of the user is collected, it is judged whether the instruction matches an operation instruction for a controllable area, and if the matching succeeds, the corresponding coordinate information is found for simulated manipulation. Voice control of the vehicle-mounted APP can therefore be achieved without integrating an SDK, so that voice can quickly control the vehicle-mounted APP without integrating with each APP one by one, which improves voice development efficiency and compatibility.
In some optional embodiments, acquiring at least one controllable area on the display interface of the in-vehicle APP includes: acquiring, through view-layer drawing, at least one controllable area on the display interface of the vehicle-mounted APP, where the controllable area includes a button, a text box, and a selection box. In this way, the controllable areas, and subsequently their corresponding key words and coordinates, can be obtained by means of the view layer, so that subsequent voice instructions of the user for the controllable areas can be better handled by simulated manipulation.
In other optional embodiments, if the controllable area is a button, simulating manipulation of the display interface based at least on the coordinate information includes: simulating a click at the corresponding coordinates on the display interface based on the coordinate information. Thus, after the corresponding coordinate information is acquired for a button-type controllable area, the corresponding voice instruction can control the vehicle-mounted APP by simulating a click operation at those coordinates.
In other optional embodiments, if the controllable area is a text box, simulating manipulation of the display interface based at least on the coordinate information includes: simulating input of text information into the corresponding text box on the display interface based on the coordinate information, where the text information is attribute information that is extracted from the text instruction and corresponds to the key words. Thus, for a text-box-type controllable area, after the voice instruction of the user is collected, the conversion from voice instruction to simulated manipulation can be completed by simulating input of the corresponding text information, namely the attribute information corresponding to the key words in the voice instruction of the user, into the text box. For example, after the user says "user name 12345", the key word is "user name" and the text information is "12345"; the coordinate information of the corresponding controllable area (the user name text box) is then found, and the conversion from the voice instruction to manipulation of the corresponding vehicle-mounted APP interface can be completed by simulating input of the text information into the text box.
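A minimal sketch of this text-box case, assuming the accessibility node of the matched text box is available: the attribute information is split off after the key word and written into the node with the standard ACTION_SET_TEXT action (API level 21 and later). The helper names extractAttribute and fillTextBox are illustrative and not taken from the patent.

import android.os.Bundle
import android.view.accessibility.AccessibilityNodeInfo

// For "user name 12345": the key word ("user name") selects the text box and the
// remaining attribute information ("12345") is what gets typed into it.
// The extraction is a deliberately simple illustration of the idea.
fun extractAttribute(textInstruction: String, keyWord: String): String =
    textInstruction.substringAfter(keyWord).trim()

// Simulates entering text into the matched text box node.
fun fillTextBox(node: AccessibilityNodeInfo, text: String) {
    val args = Bundle().apply {
        putCharSequence(AccessibilityNodeInfo.ACTION_ARGUMENT_SET_TEXT_CHARSEQUENCE, text)
    }
    node.performAction(AccessibilityNodeInfo.ACTION_SET_TEXT, args)
}

For the example above, fillTextBox(userNameNode, extractAttribute("user name 12345", "user name")) would enter "12345" into the user name text box, where userNameNode stands for the node of the matched text box.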
In other optional embodiments, if the controllable area is a selection box, simulating manipulation of the display interface based at least on the coordinate information includes: simulating a click at the corresponding coordinates on the display interface based on the coordinate information. For selection boxes, the selection of one or more boxes can be completed according to the voice instruction of the user, which is not limited in this application and is not described here again.
Further referring to fig. 2, it shows a flowchart of another voice control method of an in-vehicle APP according to an embodiment of the present application.
As shown in fig. 2, in step 201, in response to the startup of the vehicle-mounted host, acquiring each vehicle-mounted APP installed on the vehicle-mounted host;
in step 202, responding to a vehicle-mounted APP starting instruction of a user, analyzing the vehicle-mounted APP starting instruction to obtain a vehicle-mounted APP to be started, and judging whether the vehicle-mounted APP to be started belongs to each installed vehicle-mounted APP;
in step 203, if the to-be-started vehicle-mounted APP belongs to the installed vehicle-mounted APPs, the to-be-started vehicle-mounted APP is started.
In this embodiment, for step 201, after the vehicle-mounted host is started, the voice control apparatus of the vehicle-mounted APP acquires each vehicle-mounted APP installed on the vehicle-mounted host. Because a voice instruction is not constrained by what is currently shown on the display interface, the APP that the user wants to start may not appear on the vehicle-mounted display interface at all, so the vehicle-mounted APPs installed on the vehicle-mounted host need to be acquired. Then, for step 202, in response to a vehicle-mounted APP start instruction of the user, the start instruction is parsed to obtain the vehicle-mounted APP to be started, and it is judged whether the vehicle-mounted APP to be started belongs to the installed vehicle-mounted APPs; judging whether the APP to be started already exists determines what can be done next. Finally, for step 203, if the vehicle-mounted APP to be started belongs to the installed vehicle-mounted APPs, the vehicle-mounted APP to be started is started; in the case where it is already installed, the vehicle-mounted APP is simply started directly.
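A minimal sketch of steps 201 to 203, assuming a lookup table from spoken APP names to package names has been built when the installed vehicle-mounted APPs were enumerated; it uses the standard Android PackageManager to check whether the requested APP is installed and, if so, to start it. The function and parameter names are illustrative.

import android.content.Context
import android.content.Intent

// Steps 201-203: check whether the requested vehicle-mounted APP is installed and start it if so.
// Returns false when the APP is not installed, in which case the user can be asked to install it.
fun startRequestedApp(context: Context, spokenName: String, nameToPackage: Map<String, String>): Boolean {
    val pkg = nameToPackage[spokenName] ?: return false
    val launchIntent: Intent? = context.packageManager.getLaunchIntentForPackage(pkg)
    return if (launchIntent != null) {
        launchIntent.addFlags(Intent.FLAG_ACTIVITY_NEW_TASK)
        context.startActivity(launchIntent)   // already installed: start it directly
        true
    } else {
        false                                 // not installed: proceed to the inquiry of fig. 3
    }
}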
With the method of this embodiment, for a vehicle-mounted APP started by a voice instruction, the contactless nature of voice means that the vehicle-mounted APP the user asks to start may not be installed. Therefore, after the start instruction of the user is collected, it is first judged whether the vehicle-mounted APP already exists, and if it exists, the vehicle-mounted APP can be started directly.
Further referring to fig. 3, it shows a flowchart of another voice control method of an in-vehicle APP according to an embodiment of the present application.
In step 301, if the vehicle-mounted APP to be started does not belong to the installed vehicle-mounted APPs, the user is notified that the APP to be started is not installed, and the user is asked whether to install it;
in step 302, in response to an installation instruction of a user, the APP to be started is searched and installed.
With the method of this embodiment, when the vehicle-mounted APP corresponding to the user's voice instruction is not installed on the vehicle-mounted host, the vehicle-mounted APP can be installed for the user, thereby closing the voice control loop from installation through to startup of the vehicle-mounted APP.
The following description is provided to enable those skilled in the art to better understand the present disclosure by describing some of the problems encountered by the inventors in implementing the present disclosure and by describing one particular embodiment of the finally identified solution.
First, when controlling vehicle-mounted APPs by voice, a person skilled in the art generally integrates with the vehicle-mounted APPs one by one. This approach usually involves a certain technical difficulty and requires modifying the source code of the system (e.g., Android) for adaptation, which is cumbersome.
The scheme of the present application controls the vehicle-mounted APP by simulating human operation after a voice instruction is issued.
One specific example of manipulation includes the following steps:
step one, a voice instruction is issued to start the app;
step two, the app is started, and the voice simulated-control center acquires the text, buttons, and input boxes of the app display interface, together with their related coordinates, through view-layer drawing;
step three, when the user speaks a function of the displayed interface to operate, the voice module converts the words into the corresponding action through semantic analysis and sends it to the simulated-control center;
step four, the simulated-control center matches the key words according to the voice instruction to obtain the id of the option to be operated, or its X-axis and Y-axis coordinates;
step five, a click at that position on the UI is simulated according to the X-axis and Y-axis coordinates.
the inventors have also considered the following alternatives in the course of implementing the present application: if need pass through speech control APP, need APP to provide SDK and live the control interface and give pronunciation, pronunciation are integrated. The defects of the scheme are that interfaces are required to be provided one by one for development and butt joint; the advantage is that the interface reliability that APP inside provided is stronger.
Referring to fig. 4, a simplified flow of starting an in-vehicle APP through simulated click control according to a user voice instruction is shown.
With further reference to fig. 5-10, interfaces of some specific examples of the voice-operated in-vehicle APP of the present application are shown.
As shown in fig. 5 and fig. 6, in a specific example, after the user opens a music vehicle-mounted APP, the APP display interface shown in the figure is entered, and each controllable area on the interface can then be acquired through the view, such as "set", "search", "like song", "recently played", "cancel" (the five-pointed star in the figure), "previous", "pause", "next", "switch play mode", and so on. When the voice instruction of the user is recognized as "small speed, next", the voice simulation center acquires the coordinates corresponding to "next" and then performs a click operation at those coordinates, thereby implementing the "next" function. When the user instruction contains keywords related to the vehicle-mounted song list, the vehicle-mounted song list area in fig. 5 can be clicked in a simulated manner, for example at the center coordinates of that area, so that the interface switches to the vehicle-mounted song list interface of fig. 6. In the vehicle-mounted song list interface shown in fig. 6, if a voice instruction of the user to return to the main interface is received, the small triangle in the upper left corner is clicked in a simulated manner to return to the main interface shown in fig. 5, which is not described here again.
As shown in figs. 7-10, in another specific example, for a vehicle rescue APP, after a voice start instruction for the APP is received from the user, the interface shown in fig. 7 is opened. The voice control apparatus of the vehicle-mounted APP first obtains the controllable areas on the interface and the key words and coordinate information corresponding to those areas: for example, in fig. 7, the "trailer", "tire changing", "oil delivery", "dilemma rescue", "release rescue", "plus", and "minus" controllable areas, the corresponding key words "trailer", "tire changing", "oil delivery", "dilemma rescue", "release rescue", "zoom in", "zoom out", and the like, and the corresponding coordinate information. After the user issues the voice instruction "trailer", clicks on "trailer" and "release rescue" can be simulated to enter the contact mobile phone number input interface shown in fig. 8; after the user gives the instruction that "the telephone number is XXX", input of the user's telephone number into the corresponding telephone number text box is simulated, and a click on "release rescue" is then simulated. The subsequent interface manipulation process is similar and is not described here again.
The embodiments of the application provide a method for controlling an intelligent vehicle-mounted APP by voice: the vehicle-mounted APP can be controlled quickly by voice without integrating with each APP one by one, which improves voice development efficiency and compatibility.
Referring to fig. 11, a block diagram of a voice control apparatus for a vehicle-mounted APP according to an embodiment of the present invention is shown.
As shown in fig. 11, the voice control apparatus 1100 of the vehicle-mounted APP includes a controllable area obtaining module 1110, a parsing module 1120, a speech recognition module 1130, an analysis and matching module 1140, a coordinate information obtaining module 1150, and a control module 1160.
The controllable area obtaining module 1110 is configured to, in response to the vehicle-mounted APP being started, obtain at least one controllable area on the display interface of the vehicle-mounted APP; the parsing module 1120 is configured to parse out the key words corresponding to the at least one controllable area and the coordinate information corresponding to the at least one controllable area; the speech recognition module 1130 is configured to, in response to a voice instruction of a user, perform speech recognition on the voice instruction and obtain a corresponding text instruction; the analysis and matching module 1140 is configured to perform semantic analysis on the text instruction and determine, based on the result of the semantic analysis, whether the text instruction matches the key words of the at least one controllable area; the coordinate information obtaining module 1150 is configured to obtain the coordinate information of the words on the interface if the text instruction matches the key words of any controllable area; and the control module 1160 is configured to simulate manipulation of the display interface based at least on the coordinate information.
It should be understood that the modules recited in fig. 11 correspond to various steps in the method described with reference to fig. 1. Thus, the operations and features described above for the method and the corresponding technical effects are also applicable to the modules in fig. 11, and are not described again here.
It should be noted that the modules in the embodiments of the present application are not intended to limit the solution of the present application, and for example, the receiving module may be described as a module that receives a voice recognition request. In addition, the related functional modules may also be implemented by a hardware processor, for example, the receiving module may also be implemented by a processor, which is not described herein again.
In other embodiments, an embodiment of the present invention further provides a non-volatile computer storage medium, where the computer storage medium stores computer-executable instructions, and the computer-executable instructions may execute the voice control method of the vehicle-mounted APP in any of the above method embodiments;
as one embodiment, a non-volatile computer storage medium of the present invention stores computer-executable instructions configured to:
responding to the starting of the vehicle-mounted APP, and acquiring at least one controllable area on a display interface of the vehicle-mounted APP;
analyzing to obtain key words corresponding to the at least one controllable area and coordinate information corresponding to the at least one controllable area;
responding to a voice instruction of a user, and performing voice recognition on the voice instruction to obtain a corresponding text instruction;
performing semantic analysis on the text instruction, and judging whether the text instruction is matched with key words of the at least one controllable area or not based on the result of the semantic analysis;
if the text instruction is matched with the key words in any controllable area, acquiring coordinate information of the words on the interface;
and simulating to control the display interface at least based on the coordinate information.
The non-volatile computer-readable storage medium may include a storage program area and a storage data area, wherein the storage program area may store an operating system and an application program required for at least one function, and the storage data area may store data created from use of the voice control apparatus of the in-vehicle APP, and the like. Further, the non-volatile computer-readable storage medium may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. In some embodiments, the non-volatile computer-readable storage medium optionally includes memory located remotely from the processor, which may be connected to the voice control apparatus of the in-vehicle APP over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
Embodiments of the present invention further provide a computer program product, where the computer program product includes a computer program stored on a non-volatile computer-readable storage medium, and the computer program includes program instructions, where the program instructions, when executed by a computer, cause the computer to execute any one of the above-mentioned voice control methods for an in-vehicle APP.
Fig. 12 is a schematic structural diagram of an electronic device according to an embodiment of the present invention. As shown in fig. 12, the electronic device includes: one or more processors 1210 and a memory 1220; one processor 1210 is taken as an example in fig. 12. The device for the voice control method of the vehicle-mounted APP may further include: an input device 1230 and an output device 1240. The processor 1210, the memory 1220, the input device 1230, and the output device 1240 may be connected by a bus or in other ways; connection by a bus is taken as an example in fig. 12. The memory 1220 is a non-volatile computer-readable storage medium as described above. The processor 1210 executes various functional applications and data processing of the server by running the non-volatile software programs, instructions and modules stored in the memory 1220, thereby implementing the voice control method of the vehicle-mounted APP of the above method embodiment. The input device 1230 may receive input numeric or character information and generate key signal inputs related to user settings and function control of the voice control device of the in-vehicle APP. The output device 1240 may include a display device such as a display screen.
The product can execute the method provided by the embodiment of the invention, and has corresponding functional modules and beneficial effects of the execution method. For technical details that are not described in detail in this embodiment, reference may be made to the method provided by the embodiment of the present invention.
As an embodiment, the electronic device is applied to a voice control device of a vehicle-mounted APP, and includes:
at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to cause the at least one processor to:
responding to the starting of the vehicle-mounted APP, and acquiring at least one controllable area on a display interface of the vehicle-mounted APP;
analyzing to obtain key words corresponding to the at least one controllable area and coordinate information corresponding to the at least one controllable area;
responding to a voice instruction of a user, and performing voice recognition on the voice instruction to obtain a corresponding text instruction;
performing semantic analysis on the text instruction, and judging whether the text instruction is matched with key words of the at least one controllable area or not based on the result of the semantic analysis;
if the text instruction is matched with the key words in any controllable area, acquiring coordinate information of the words on the interface;
and simulating to control the display interface at least based on the coordinate information.
The electronic device of the embodiments of the present application exists in various forms, including but not limited to:
(1) a mobile communication device: such devices are characterized by mobile communications capabilities and are primarily targeted at providing voice, data communications. Such terminals include smart phones (e.g., iphones), multimedia phones, functional phones, and low-end phones, among others.
(2) Ultra mobile personal computer device: the equipment belongs to the category of personal computers, has calculation and processing functions and generally has the characteristic of mobile internet access. Such terminals include: PDA, MID, and UMPC devices, etc., such as ipads.
(3) A portable entertainment device: such devices can display and play multimedia content. Such devices include audio and video players (e.g., ipods), handheld game consoles, electronic books, as well as smart toys and portable car navigation devices.
(4) Servers: similar to a general-purpose computer in architecture, but with higher requirements on processing capability, stability, reliability, security, scalability, manageability and the like, because highly reliable services need to be provided.
(5) And other electronic devices with data interaction functions.
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and the parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (10)

1. A voice control method of a vehicle-mounted APP comprises the following steps:
responding to the starting of the vehicle-mounted APP, and acquiring at least one controllable area on a display interface of the vehicle-mounted APP;
analyzing to obtain key words corresponding to the at least one controllable area and coordinate information corresponding to the at least one controllable area;
responding to a voice instruction of a user, and performing voice recognition on the voice instruction to obtain a corresponding text instruction;
performing semantic analysis on the text instruction, and judging whether the text instruction is matched with key words of the at least one controllable area or not based on the result of the semantic analysis;
if the text instruction is matched with the key words in any controllable area, acquiring coordinate information of the words on the interface;
and simulating to control the display interface at least based on the coordinate information.
2. The method of claim 1, wherein the acquiring at least one controllable area on a display interface of the in-vehicle APP comprises:
acquiring, through view-layer drawing, at least one controllable area on the display interface of the vehicle-mounted APP, wherein the controllable area comprises a button, a text box, and a selection box.
3. The method of claim 2, wherein, if the controllable area is a button, the simulating manipulation of the display interface based at least on the coordinate information comprises:
simulating a click at the corresponding coordinates on the display interface based on the coordinate information.
4. The method of claim 2, wherein, if the controllable area is a text box, the simulating manipulation of the display interface based at least on the coordinate information comprises:
simulating input of text information into the corresponding text box on the display interface based on the coordinate information, wherein the text information is attribute information that is extracted from the text instruction and corresponds to the key words.
5. The method of claim 2, wherein, if the controllable area is a selection box, the simulating manipulation of the display interface based at least on the coordinate information comprises:
simulating a click at the corresponding coordinates on the display interface based on the coordinate information.
6. The method of claim 1, wherein prior to said obtaining at least one manipulable region on a display interface of an in-vehicle APP in response to an in-vehicle APP launch, the method further comprises:
responding to the starting of the vehicle-mounted host, and acquiring each vehicle-mounted APP installed on the vehicle-mounted host;
responding to a vehicle-mounted APP starting instruction of a user, analyzing the vehicle-mounted APP starting instruction to obtain a vehicle-mounted APP to be started, and judging whether the vehicle-mounted APP to be started belongs to each installed vehicle-mounted APP;
and if the vehicle-mounted APP to be started belongs to the installed vehicle-mounted APPs, starting the vehicle-mounted APP to be started.
7. The method of claim 6, wherein after determining whether the onboard APP to be started belongs to the installed onboard APPs, the method further comprises:
if the to-be-started vehicle-mounted APP does not belong to the installed vehicle-mounted APPs, informing a user that the to-be-started APP is not installed, and inquiring whether the user installs the to-be-started APP;
and searching and installing the APP to be started in response to an installation instruction of a user.
8. A speech control device of an on-vehicle APP comprises:
the controllable area obtaining module is configured to respond to the starting of the vehicle-mounted APP and obtain at least one controllable area on a display interface of the vehicle-mounted APP;
the analysis module is configured to analyze the key words corresponding to the at least one controllable area and the coordinate information corresponding to the at least one controllable area;
the voice recognition module is configured to respond to a voice instruction of a user, perform voice recognition on the voice instruction and obtain a corresponding text instruction;
the analysis matching module is configured to perform semantic analysis on the text instruction, and judge whether the text instruction is matched with the key words of the at least one controllable area based on the result of the semantic analysis;
the coordinate information acquisition module is configured to acquire the coordinate information of the words on the interface if the text instruction matches the key words of any controllable area;
and the control module is configured to simulate manipulation of the display interface based at least on the coordinate information.
9. A computer program product comprising a computer program stored on a non-transitory computer readable storage medium, the computer program comprising program instructions which, when executed by a computer, cause the computer to perform the steps of the voice control method of in-vehicle APP of any one of claims 1 to 7.
10. An electronic device, comprising: at least one processor, and a memory communicatively coupled to the at least one processor, wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the steps of the method of any one of claims 1 to 7.
CN202010507524.5A 2020-06-05 2020-06-05 Voice control method and device for vehicle-mounted APP Withdrawn CN111681658A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010507524.5A CN111681658A (en) 2020-06-05 2020-06-05 Voice control method and device for vehicle-mounted APP

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010507524.5A CN111681658A (en) 2020-06-05 2020-06-05 Voice control method and device for vehicle-mounted APP

Publications (1)

Publication Number Publication Date
CN111681658A 2020-09-18

Family

ID=72435192

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010507524.5A Withdrawn CN111681658A (en) 2020-06-05 2020-06-05 Voice control method and device for vehicle-mounted APP

Country Status (1)

Country Link
CN (1) CN111681658A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112397068A (en) * 2020-11-16 2021-02-23 深圳市朗科科技股份有限公司 Voice instruction execution method and storage device
CN114327185A (en) * 2021-12-29 2022-04-12 盯盯拍(深圳)技术股份有限公司 Vehicle screen control method and device, medium and electronic equipment

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106293600A (en) * 2016-08-05 2017-01-04 三星电子(中国)研发中心 A kind of sound control method and system
CN108829371A (en) * 2018-06-19 2018-11-16 Oppo广东移动通信有限公司 interface control method, device, storage medium and electronic equipment
CN110085224A (en) * 2019-04-10 2019-08-02 深圳康佳电子科技有限公司 Intelligent terminal whole process speech control processing method, intelligent terminal and storage medium
CN111199734A (en) * 2018-11-20 2020-05-26 奥迪股份公司 Control method and device of mobile terminal, computer equipment and readable storage medium

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106293600A (en) * 2016-08-05 2017-01-04 三星电子(中国)研发中心 A kind of sound control method and system
CN108829371A (en) * 2018-06-19 2018-11-16 Oppo广东移动通信有限公司 interface control method, device, storage medium and electronic equipment
CN111199734A (en) * 2018-11-20 2020-05-26 奥迪股份公司 Control method and device of mobile terminal, computer equipment and readable storage medium
CN110085224A (en) * 2019-04-10 2019-08-02 深圳康佳电子科技有限公司 Intelligent terminal whole process speech control processing method, intelligent terminal and storage medium

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112397068A (en) * 2020-11-16 2021-02-23 深圳市朗科科技股份有限公司 Voice instruction execution method and storage device
CN112397068B (en) * 2020-11-16 2024-03-26 深圳市朗科科技股份有限公司 Voice instruction execution method and storage device
CN114327185A (en) * 2021-12-29 2022-04-12 盯盯拍(深圳)技术股份有限公司 Vehicle screen control method and device, medium and electronic equipment
CN114327185B (en) * 2021-12-29 2024-02-09 盯盯拍(深圳)技术股份有限公司 Vehicle-mounted screen control method and device, medium and electronic equipment


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 215123 building 14, Tengfei Innovation Park, 388 Xinping street, Suzhou Industrial Park, Suzhou City, Jiangsu Province

Applicant after: Sipic Technology Co.,Ltd.

Address before: 215123 building 14, Tengfei Innovation Park, 388 Xinping street, Suzhou Industrial Park, Suzhou City, Jiangsu Province

Applicant before: AI SPEECH Co.,Ltd.

WW01 Invention patent application withdrawn after publication

Application publication date: 20200918