CN106297791B - Whole-process voice implementation method and system - Google Patents

Whole-process voice implementation method and system Download PDF

Info

Publication number
CN106297791B
CN106297791B
Authority
CN
China
Prior art keywords
application program
voice
whole
information
semantic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201610722144.7A
Other languages
Chinese (zh)
Other versions
CN106297791A (en)
Inventor
卢伟超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
TCL Technology Group Co Ltd
Original Assignee
TCL Technology Group Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by TCL Technology Group Co Ltd filed Critical TCL Technology Group Co Ltd
Priority to CN201610722144.7A priority Critical patent/CN106297791B/en
Publication of CN106297791A publication Critical patent/CN106297791A/en
Application granted granted Critical
Publication of CN106297791B publication Critical patent/CN106297791B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/28 Constructional details of speech recognition systems
    • G10L15/30 Distributed recognition, e.g. in client-server systems, for mobile phones or network applications
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 Handling natural language data
    • G06F40/30 Semantic analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44 Arrangements for executing specific programs
    • G06F9/445 Program loading or initiating
    • G06F9/44505 Configuring for program initiating, e.g. using registry, configuration files
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Human Computer Interaction (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Telephonic Communication Services (AREA)

Abstract

The invention discloses a whole-process voice implementation method and system. The method comprises the following steps: A. opening a client socket when an application program is started, and connecting it to the whole-process voice server socket; B. when the application program is switched to the foreground, submitting its control node information to the whole-process voice server through a reflection mechanism, the whole-process voice server recording the application program currently in the foreground; C. after receiving voice information, the whole-process voice server parsing it and sending the parsed semantic information to the application program over the socket; D. after receiving the semantic information sent by the whole-process voice server, the application program calling the corresponding control response function to perform the corresponding response operation. The whole-process voice implementation method and system have the advantages of good flexibility, high adaptability and good universality.

Description

Whole-process voice implementation method and system
Technical Field
The invention relates to the field of Android voice applications, and in particular to a whole-process voice implementation method and system.
Background
In the prior art, whole-process voice interaction requires that the voice assistant obtain information about the application and interface the user is currently interacting with, as well as the information displayed in that interface, so that during speech recognition and semantic understanding it can preferentially hit functions on the current interface and complete the voice invocation. The voice assistant defines such application and interface information as personalized scene information, which comprises a scene ID and dynamic data. The scene ID is a unique identifier for each interface of each application that needs to support whole-process voice interaction; through the scene ID, the possible function combinations of an application can be defined independently in the voice assistant (for example, an application that simultaneously supports voice collection and voice playback control). The dynamic data is the dynamic information provided by the application interface (for example, the names of all films in a video retrieval list, obtained by dynamic retrieval).
The principle of the prior-art whole-process voice interaction scheme is as follows:
when voice interaction starts, the voice assistant notifies the application program via a Broadcast message, instructing the current application to begin submitting scene information; if the application is displayed in the foreground, it submits its current scene information to the voice assistant via startService. The voice assistant can then invoke the application in three agreed ways: startActivity (start an activity), startService (start a service) and sendBroadcast (send a broadcast); after receiving the invocation instruction, the application completes the function the user requested.
The drawback of this prior-art scheme is that every application that needs to respond to voice instructions must be individually integrated and adapted with the voice assistant, which entails a large workload and poor portability and maintainability. The prior-art whole-process voice scheme therefore suffers from poor flexibility, low adaptability and poor universality.
Accordingly, the prior art is yet to be improved and developed.
Disclosure of Invention
In view of the above deficiencies of the prior art, an object of the present invention is to provide a whole-process voice implementation method and system, so as to solve the problems of poor flexibility, low adaptability and poor universality of the prior-art whole-process voice scheme.
The technical scheme of the invention is as follows:
a whole-process voice implementation method comprises the following steps:
step A, opening a client socket when an application program is started, and connecting it to the whole-process voice server socket;
step B, when the application program is switched to the foreground, submitting its control node information to the whole-process voice server through a reflection mechanism, the whole-process voice server recording the application program currently in the foreground;
step C, after the whole-process voice server receives voice information, parsing it and sending the parsed semantic information to the application program over the socket;
step D, after receiving the semantic information sent by the whole-process voice server, the application program calling the corresponding control response function to perform the corresponding response operation.
In the above whole-process voice implementation method, step C specifically includes:
step C1, performing semantic parsing after the whole-process voice server receives the voice information;
step C2, judging whether the semantic information obtained by parsing matches the data submitted by the application program, and if so, proceeding to step C3;
step C3, sending the parsed semantic information to the application program over the socket.
In the above whole-process voice implementation method, step A further includes:
starting a thread each time the application program is started, which circularly waits for the semantic information sent by the whole-process voice server and performs the corresponding response operation according to the received semantic information.
The whole-process voice implementation method further comprises, before step A:
step A0, adding annotations to the UI controls of the application program in advance.
In the above whole-process voice implementation method, step B specifically includes:
step B1, acquiring the root node of the application program switched to the foreground;
step B2, after acquiring the root node, obtaining the corresponding class name through reflection, and then traversing the root node;
step B3, after traversing the root node, obtaining the control node information of the application program according to the annotations.
A whole-process voice implementation system comprises:
a socket connection module, configured to open a client socket when an application program is started and connect it to the whole-process voice server socket;
a recording module, configured to submit the control node information of the application program to the whole-process voice server through a reflection mechanism when the application program is switched to the foreground, the whole-process voice server recording the application program currently in the foreground;
a voice sending module, configured to parse voice information after the whole-process voice server receives it, and to send the parsed semantic information to the application program over the socket;
an operation response module, configured to call the corresponding control response function after the application program receives the semantic information sent by the whole-process voice server, and to perform the corresponding response operation.
In the above whole-process voice implementation system, the voice sending module specifically includes:
a semantic parsing unit, configured to perform semantic parsing after the whole-process voice server receives the voice information;
a semantic matching unit, configured to judge whether the semantic information obtained by parsing matches the data submitted by the application program, and if so, to pass it to the semantic sending unit;
a semantic sending unit, configured to send the parsed semantic information to the application program over the socket.
In the above whole-process voice implementation system, the socket connection module further includes:
a thread starting unit, configured to start a thread each time the application program is started, which circularly waits for the semantic information sent by the whole-process voice server and performs the corresponding response operation according to the received semantic information.
The whole-process voice implementation system further comprises:
an annotation adding module, configured to add annotations to the UI controls of the application program in advance.
In the above whole-process voice implementation system, the recording module specifically includes:
a root node acquisition unit, configured to acquire the root node of the application program switched to the foreground;
a root node traversing unit, configured to obtain the corresponding class name through reflection after the root node is acquired, and then to traverse the root node;
a control node information acquisition unit, configured to obtain the control node information of the application program according to the annotations after traversing the root node.
Beneficial effects: with the whole-process voice implementation method and system of the invention, a third-party developer does not need to know the requirements of whole-process voice; installing the developed application into a system that supports whole-process voice is enough to obtain whole-process voice support. The method and system therefore have the advantages of good flexibility, high adaptability and good universality.
Drawings
FIG. 1 is a flowchart of a whole-process voice implementation method according to a preferred embodiment of the present invention.
FIG. 2 is a detailed flowchart of step S103 in the method shown in FIG. 1.
FIG. 3 is a block diagram of a whole-process voice implementation system according to a preferred embodiment of the present invention.
FIG. 4 is a block diagram of the specific structure of the voice sending module in the system shown in FIG. 3.
Detailed Description
The present invention provides a whole-process voice implementation method and system. To make the purpose, technical scheme and effects of the invention clearer, the invention is described in further detail below. It should be understood that the specific embodiments described herein merely illustrate the invention and are not intended to limit it.
Referring to FIG. 1, FIG. 1 is a flowchart of a whole-process voice implementation method according to a preferred embodiment of the present invention. As shown in the figure, the method includes:
step S101, opening a client socket when an application program is started, and connecting it to the whole-process voice server socket;
step S102, when the application program is switched to the foreground, submitting its control node information to the whole-process voice server through a reflection mechanism, the whole-process voice server recording the application program currently in the foreground;
step S103, after the whole-process voice server receives voice information, parsing it and sending the parsed semantic information to the application program over the socket;
step S104, after receiving the semantic information sent by the whole-process voice server, the application program calling the corresponding control response function to perform the corresponding response operation.
Because applications in the Android system are mostly built from the system's controls, and those controls are organized in a tree structure by the system, the attribute information of the controls in each application can be obtained and operated on, which makes it possible to control them by voice.
Specifically, step S101 is implemented by modifying the Android system source code: the start entry of each application program, i.e. the code executed each time an application starts, is modified to add two functions. The first opens a corresponding client socket each time the application starts and connects it to the whole-process voice server socket, establishing the socket connection between the two. The second starts a thread that circularly waits for the semantic information sent by the whole-process voice server and performs the corresponding response operation according to the received semantic information.
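The two functions added at the start entry can be sketched as follows. This is a minimal illustration, not the patent's actual code: the line-based protocol and the loopback stand-in for the whole-process voice server are assumptions made for the demo.

```java
import java.io.*;
import java.net.*;
import java.util.concurrent.*;

// Sketch of step S101: on start-up the application opens a client socket to the
// whole-process voice server and spawns a daemon thread that circularly waits for
// lines of semantic information. A loopback server stands in for the real one.
class VoiceClient {
    // Spawn the listener thread; each received semantic line goes into `received`
    // (a real client would call the matching control response function instead).
    static Thread listen(Socket socket, BlockingQueue<String> received) throws IOException {
        BufferedReader in = new BufferedReader(
                new InputStreamReader(socket.getInputStream(), "UTF-8"));
        Thread t = new Thread(() -> {
            try {
                String semantic;
                while ((semantic = in.readLine()) != null) {
                    received.add(semantic);
                }
            } catch (IOException ignored) {
                // socket closed when the application exits
            }
        });
        t.setDaemon(true);
        t.start();
        return t;
    }

    // Round trip on the loopback interface: a stand-in "voice server" accepts the
    // client connection and pushes one parsed semantic instruction.
    static String demoRoundTrip() {
        try (ServerSocket server = new ServerSocket(0)) {          // ephemeral port
            Socket client = new Socket("127.0.0.1", server.getLocalPort());
            Socket accepted = server.accept();
            BlockingQueue<String> received = new LinkedBlockingQueue<>();
            listen(client, received);
            PrintWriter out = new PrintWriter(
                    new OutputStreamWriter(accepted.getOutputStream(), "UTF-8"), true);
            out.println("play_video");                             // parsed semantic info
            return received.poll(5, TimeUnit.SECONDS);
        } catch (Exception e) {
            throw new RuntimeException(e);                         // demo only
        }
    }

    public static void main(String[] args) {
        System.out.println("received semantic: " + demoRoundTrip());
    }
}
```

Keeping the connection open and reading in a loop is what lets the server push semantic information at any time without the application polling.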
In step S102, when the application program is switched to the foreground, its control node information can be obtained through the reflection mechanism and submitted to the whole-process voice server, which records the application program currently in the foreground.
In the Android system, the root node of each application is not PhoneWindow$DecorView but ViewRootImpl; the Activity components of each Android application are connected in series by their PhoneWindow$DecorView and ViewRootImpl nodes, organized into a tree structure, and managed uniformly by WindowManagerImpl. All child nodes under the root node can then be traversed to obtain their relevant information.
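The traversal over the control tree can be illustrated with a toy stand-in, since ViewRootImpl and DecorView are only available on a device; the node class and field names here are invented for illustration.

```java
import java.util.*;

// Toy stand-in for the Android view hierarchy: nodes form a tree, and a
// depth-first walk over the root collects each child node's information,
// as the traversal described above does.
class ViewNode {
    final String id;
    final List<ViewNode> children = new ArrayList<>();

    ViewNode(String id) { this.id = id; }

    ViewNode add(ViewNode child) { children.add(child); return this; }

    // Pre-order traversal of all nodes under (and including) the root.
    static List<String> traverse(ViewNode root) {
        List<String> visited = new ArrayList<>();
        Deque<ViewNode> stack = new ArrayDeque<>();
        stack.push(root);
        while (!stack.isEmpty()) {
            ViewNode node = stack.pop();
            visited.add(node.id);                     // collect the node's information
            for (int i = node.children.size() - 1; i >= 0; i--) {
                stack.push(node.children.get(i));     // visit children left-to-right
            }
        }
        return visited;
    }

    public static void main(String[] args) {
        ViewNode root = new ViewNode("decor")
                .add(new ViewNode("list").add(new ViewNode("item")))
                .add(new ViewNode("button"));
        System.out.println(traverse(root));
    }
}
```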
At this point, however, the attribute information of each node has not yet been obtained, for example a child node's (i.e. an application control's) ID, length, width and height, or whether it responds to key presses and other operations. To solve this, the invention adds annotations to the display part of the Android framework, i.e. the View-related code; in other words, annotations are added to the UI controls of the application program in advance.
That is, before step S101 the method further includes: adding annotations to the UI controls of the application program in advance. The attribute information of a UI control can then be read later through the reflection mechanism, so that when voice information is subsequently input, the corresponding UI control can respond according to the parsed voice instruction.
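A runtime-retained annotation of this kind can be sketched as below. The annotation name, its attributes and the screen class are assumptions for illustration, not the patent's code.

```java
import java.lang.annotation.*;
import java.lang.reflect.Field;

// Sketch of "adding annotations to UI controls in advance": a runtime-retained
// annotation records the attributes the voice server later needs.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.FIELD)
@interface VoiceControl {
    String label();      // spoken text that should hit this control
    String onMatch();    // name of the control response function to call
}

class PlayerScreen {
    @VoiceControl(label = "play", onMatch = "doPlay")
    Object playButton = new Object();    // stands in for an Android View

    // Later, the control node information can be read back through reflection.
    static String describe(Class<?> screen) {
        StringBuilder sb = new StringBuilder();
        for (Field field : screen.getDeclaredFields()) {
            VoiceControl vc = field.getAnnotation(VoiceControl.class);
            if (vc != null) {
                sb.append(field.getName()).append(" -> ")
                  .append(vc.label()).append('/').append(vc.onMatch());
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(describe(PlayerScreen.class));
    }
}
```

Because the annotation is retained at run time, the same metadata is visible to any process that can load the class, which is what makes the later reflective read possible without cooperation from the application developer.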
In addition, step S102 specifically includes:
s201, acquiring a root node of an application program switched to a foreground;
s202, after the root node is obtained, the corresponding class name is obtained through a reflection method, and then the root node is traversed;
and S203, after traversing the root node, acquiring control node information of the application program according to the annotation.
Here, the reflection mechanism refers to the Java facility for loading and inspecting, at run time, classes that were completely unknown at compile time. In other words, a Java program can load a class whose name it only learns at run time, discover its complete structure, create object instances of it, read or set its fields, and invoke its methods.
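The reflection mechanism just described can be sketched in a few lines; `java.lang.StringBuilder` and `toString` stand in for a class name and method obtained only at run time.

```java
import java.lang.reflect.Method;

// Sketch of the reflection mechanism: a class whose name is known only at run
// time is loaded, its structure inspected, an instance created, and a method
// invoked, with no compile-time knowledge of the class.
class ReflectionDemo {
    static String invokeByName(String className, String methodName) {
        try {
            Class<?> cls = Class.forName(className);               // load by runtime name
            Object instance = cls.getDeclaredConstructor().newInstance();
            Method method = cls.getMethod(methodName);             // learn its structure
            return String.valueOf(method.invoke(instance));        // invoke its method
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException(e);                         // demo only
        }
    }

    public static void main(String[] args) {
        // The class name here plays the role of the class name obtained from an
        // application's root node, as in step S202.
        System.out.println(invokeByName("java.lang.StringBuilder", "toString").isEmpty());
    }
}
```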
In step S103, after the whole-process voice server receives the voice information, it parses the information and sends the parsed semantic information to the application program over the socket connection established earlier.
Specifically, as shown in FIG. 2, step S103 includes:
step S301, performing semantic parsing after the whole-process voice server receives the voice information;
step S302, judging whether the semantic information obtained by parsing matches the data submitted by the application program, and if so, proceeding to step S303;
step S303, sending the parsed semantic information to the application program over the socket.
The received voice information is first parsed semantically, and the resulting semantic information is checked against the data submitted by the application program. If there is no match, the flow ends; if there is a match, the semantic information is a valid voice instruction, and the corresponding semantic information is sent to the application program through the established socket connection. The submitted data here is the control node information previously submitted by the application program to the whole-process voice server.
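The matching step above can be sketched as a simple membership check; the control labels here are invented, and a real implementation would match against richer control node information than plain strings.

```java
import java.util.*;

// Sketch of step S302: parsed semantic text is checked against the control node
// information the foreground application submitted earlier, and only a hit is
// forwarded over the socket.
class SemanticMatcher {
    // Returns the instruction to forward, or empty when nothing matches (flow ends).
    static Optional<String> match(Set<String> submittedControls, String parsedSemantic) {
        return submittedControls.contains(parsedSemantic)
                ? Optional.of(parsedSemantic)
                : Optional.empty();
    }

    public static void main(String[] args) {
        Set<String> submitted = new HashSet<>(Arrays.asList("play", "pause", "open settings"));
        System.out.println(match(submitted, "play"));       // valid instruction: forward
        System.out.println(match(submitted, "volume up"));  // no hit: discard
    }
}
```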
In step S104, the application program circularly receives, via the thread created in step S101, the semantic information sent by the whole-process voice server, and then calls the corresponding control response function to perform the corresponding response operation.
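The dispatch from received semantic information to a control response function can be sketched as below; the handler table and the method names are assumptions for illustration, though invoking the function by name via reflection matches the annotation-plus-reflection approach described above.

```java
import java.lang.reflect.Method;
import java.util.*;

// Sketch of step S104's dispatch: the listener thread maps received semantic
// information to a control response function and invokes it reflectively.
class ResponseDispatcher {
    static final List<String> log = new ArrayList<>();

    public void doPlay()  { log.add("playing"); }   // control response functions
    public void doPause() { log.add("paused"); }

    // semantic text -> response-function name, as recorded from control annotations
    static final Map<String, String> handlers = new HashMap<>();
    static {
        handlers.put("play", "doPlay");
        handlers.put("pause", "doPause");
    }

    static boolean respond(String semantic) {
        String methodName = handlers.get(semantic);
        if (methodName == null) return false;                       // unmatched: ignore
        try {
            Method m = ResponseDispatcher.class.getMethod(methodName);
            m.invoke(new ResponseDispatcher());                     // perform the response
            return true;
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException(e);                          // demo only
        }
    }

    public static void main(String[] args) {
        System.out.println(respond("play") + " " + log);
    }
}
```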
With the voice implementation scheme of the invention, a third-party developer does not need to know the concept of whole-process voice at all: simply installing the developed application program into a system that supports whole-process voice yields whole-process voice support, giving wider universality, greater flexibility and higher adaptability.
Based on the above method, the present invention further provides a preferred embodiment of a whole-process voice implementation system, which, as shown in FIG. 3, includes:
a socket connection module 100, configured to open a client socket when an application program is started and connect it to the whole-process voice server socket;
a recording module 200, configured to submit the control node information of the application program to the whole-process voice server through a reflection mechanism when the application program is switched to the foreground, the whole-process voice server recording the application program currently in the foreground;
a voice sending module 300, configured to parse voice information after the whole-process voice server receives it, and to send the parsed semantic information to the application program over the socket;
an operation response module 400, configured to call the corresponding control response function after the application program receives the semantic information sent by the whole-process voice server, and to perform the corresponding response operation.
Further, as shown in FIG. 4, the voice sending module 300 specifically includes:
a semantic parsing unit 310, configured to perform semantic parsing after the whole-process voice server receives the voice information;
a semantic matching unit 320, configured to judge whether the semantic information obtained by parsing matches the data submitted by the application program, and if so, to pass it to the semantic sending unit;
a semantic sending unit 330, configured to send the parsed semantic information to the application program over the socket.
Further, the socket connection module further includes:
a thread starting unit, configured to start a thread each time the application program is started, which circularly waits for the semantic information sent by the whole-process voice server and performs the corresponding response operation according to the received semantic information.
Further, the system further includes:
an annotation adding module, configured to add annotations to the UI controls of the application program in advance, the annotations including the control response function and control attribute information.
The recording module 200 specifically includes:
a root node acquisition unit, configured to acquire the root node of the application program switched to the foreground;
a root node traversing unit, configured to obtain the corresponding class name through reflection after the root node is acquired, and then to traverse the root node;
a control node information acquisition unit, configured to obtain the control node information of the application program according to the annotations after traversing the root node.
The technical details of the above modules and units have been described in the method above and are not repeated here.
In summary, with the whole-process voice implementation method and system provided by the present invention, a third-party developer does not need to know the requirements of whole-process voice; installing the developed application into a system that supports whole-process voice is sufficient to obtain whole-process voice support. The method and system have the advantages of good flexibility, high adaptability and good universality.
It is to be understood that the invention is not limited to the examples described above, but that modifications and variations may be effected thereto by those of ordinary skill in the art in light of the foregoing description, and that all such modifications and variations are intended to be within the scope of the invention as defined by the appended claims.

Claims (6)

1. A whole-process voice implementation method, characterized by comprising the following steps:
step A0, adding annotations to the UI controls of the application program in advance;
step A, opening a client socket when the application program is started, and connecting it to the whole-process voice server socket; starting a thread each time the application program is started, which circularly waits for the semantic information sent by the whole-process voice server and performs the corresponding response operation according to the received semantic information;
step B, when the application program is switched to the foreground, submitting its control node information to the whole-process voice server through a reflection mechanism, the whole-process voice server recording the application program currently in the foreground;
step C, after the whole-process voice server receives voice information, parsing it and sending the parsed semantic information to the application program over the socket;
step D, after receiving the semantic information sent by the whole-process voice server, the application program calling the corresponding control response function to perform the corresponding response operation.
2. The whole-process voice implementation method according to claim 1, wherein step C specifically includes:
step C1, performing semantic parsing after the whole-process voice server receives the voice information;
step C2, judging whether the semantic information obtained by parsing matches the data submitted by the application program, and if so, proceeding to step C3;
step C3, sending the parsed semantic information to the application program over the socket.
3. The whole-process voice implementation method according to any one of claims 1-2, wherein step B specifically includes:
step B1, acquiring the root node of the application program switched to the foreground;
step B2, after acquiring the root node, obtaining the corresponding class name through reflection, and then traversing the root node;
step B3, after traversing the root node, obtaining the control node information of the application program according to the annotations.
4. A whole-process voice implementation system, characterized by comprising:
an annotation adding module, configured to add annotations to the UI controls of the application program in advance;
a socket connection module, configured to open a client socket when an application program is started and connect it to the whole-process voice server socket, and including a thread starting unit configured to start a thread each time the application program is started, which circularly waits for the semantic information sent by the whole-process voice server and performs the corresponding response operation according to the received semantic information;
a recording module, configured to submit the control node information of the application program to the whole-process voice server through a reflection mechanism when the application program is switched to the foreground, the whole-process voice server recording the application program currently in the foreground;
a voice sending module, configured to parse voice information after the whole-process voice server receives it, and to send the parsed semantic information to the application program over the socket;
an operation response module, configured to call the corresponding control response function after the application program receives the semantic information sent by the whole-process voice server, and to perform the corresponding response operation.
5. The whole-process voice implementation system according to claim 4, wherein the voice sending module specifically includes:
a semantic parsing unit, configured to perform semantic parsing after the whole-process voice server receives the voice information;
a semantic matching unit, configured to judge whether the semantic information obtained by parsing matches the data submitted by the application program, and if so, to pass it to the semantic sending unit;
a semantic sending unit, configured to send the parsed semantic information to the application program over the socket.
6. The whole-process voice implementation system according to any one of claims 4-5, wherein the recording module specifically includes:
a root node acquisition unit, configured to acquire the root node of the application program switched to the foreground;
a root node traversing unit, configured to obtain the corresponding class name through reflection after the root node is acquired, and then to traverse the root node;
a control node information acquisition unit, configured to obtain the control node information of the application program according to the annotations after traversing the root node.
CN201610722144.7A 2016-08-25 2016-08-25 Whole-process voice implementation method and system Active CN106297791B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610722144.7A CN106297791B (en) 2016-08-25 2016-08-25 Whole-process voice implementation method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610722144.7A CN106297791B (en) 2016-08-25 2016-08-25 Whole-process voice implementation method and system

Publications (2)

Publication Number Publication Date
CN106297791A CN106297791A (en) 2017-01-04
CN106297791B true CN106297791B (en) 2020-08-18

Family

ID=57616379

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610722144.7A Active CN106297791B (en) 2016-08-25 2016-08-25 Whole-process voice implementation method and system

Country Status (1)

Country Link
CN (1) CN106297791B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107277225B (en) * 2017-05-04 2020-04-24 北京奇虎科技有限公司 Method and device for controlling intelligent equipment through voice and intelligent equipment
CN107507614B (en) * 2017-07-28 2018-12-21 北京小蓦机器人技术有限公司 Method, equipment, system and the storage medium of natural language instructions are executed in conjunction with UI
CN109658934B (en) * 2018-12-27 2020-12-01 苏州思必驰信息科技有限公司 Method and device for controlling multimedia app through voice

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103618732A (en) * 2013-12-05 2014-03-05 用友软件股份有限公司 Execution flow of App engine structure of PaaS and Erlang
CN104243281A (en) * 2014-08-20 2014-12-24 北京比邻在线信息技术有限公司 Voice communication method based on mobile Internet
CN105161106A (en) * 2015-08-20 2015-12-16 深圳Tcl数字技术有限公司 Voice control method of intelligent terminal, voice control device and television system
CN105610605A (en) * 2015-12-18 2016-05-25 成都广达新网科技股份有限公司 Message reverse push method, network management system alarm method and state update method
CN105825851A (en) * 2016-05-17 2016-08-03 Tcl集团股份有限公司 Method and system for speech control based on Android system
CN106098061A (en) * 2016-06-01 2016-11-09 Tcl集团股份有限公司 A kind of voice interactive method based on Android system and device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2001093061A1 (en) * 2000-05-26 2001-12-06 Vocaltec Ltd. Communications protocol


Also Published As

Publication number Publication date
CN106297791A (en) 2017-01-04

Similar Documents

Publication Publication Date Title
CN110543297B (en) Method and apparatus for generating source code
GB2589658A (en) Method and apparatus for running an applet
CN106297791B (en) Whole-process voice implementation method and system
CN111324342B (en) Method, device, medium and electronic equipment for generating interface layer code
CN110825430A (en) API document generation method, device, equipment and storage medium
CN111338944B (en) Remote Procedure Call (RPC) interface testing method, device, medium and equipment
CN111309375A (en) Method, device, medium and electronic equipment for generating remote procedure call toolkit
CN113778897B (en) Automatic test method, device and equipment for interface and storage medium
CN114443905A (en) Interface document updating method and device, electronic equipment and readable storage medium
CN107172013B (en) Data transmission method and system
CN107239265B (en) Binding method and device of Java function and C function
CN109672732B (en) Interface adaptation method, device and system
Hamza et al. TCAIOSC: application code conversion
CN111488151A (en) Method and device for page interaction among Android modules
CN113626321B (en) Bridging test method, device, system and storage medium
CN111209195A (en) Method and device for generating test case
CN110825622A (en) Software testing method, device, equipment and computer readable medium
CN110806967A (en) Unit testing method and device
CN111488268A (en) Dispatching method and dispatching device for automatic test
CN113051173B (en) Method, device, computer equipment and storage medium for arranging and executing test flow
EP4044043A1 (en) Storage process running method and apparatus, database system, and storage medium
CN110825370A (en) Mobile terminal application development method, device and system
CN113761588A (en) Data verification method and device, terminal equipment and storage medium
CN111414161B (en) Method, device, medium and electronic equipment for generating IDL file
CN113448689A (en) Dubbo protocol conversion device and method in operation period

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 516006 TCL technology building, No.17, Huifeng Third Road, Zhongkai high tech Zone, Huizhou City, Guangdong Province

Applicant after: TCL Technology Group Co.,Ltd.

Address before: 516006 Guangdong province Huizhou Zhongkai hi tech Development Zone No. nineteen District

Applicant before: TCL RESEARCH AMERICA Inc.

GR01 Patent grant