CN116069975A - Playback method, recording method and related equipment - Google Patents

Playback method, recording method and related equipment

Info

Publication number
CN116069975A
CN116069975A (application CN202111278494.6A)
Authority
CN
China
Prior art keywords
recording
playback
server
information
semantics
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111278494.6A
Other languages
Chinese (zh)
Inventor
张龙
彭泽宇
邓成瑞
张蕃
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN202111278494.6A priority Critical patent/CN116069975A/en
Priority to PCT/CN2022/125809 priority patent/WO2023071857A1/en
Publication of CN116069975A publication Critical patent/CN116069975A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/78Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/78Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/7867Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using information manually generated, e.g. tags, keywords, comments, title and artist information, manually generated time, location and usage information, user ratings
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/30Semantic analysis
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/76Television signal recording

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Health & Medical Sciences (AREA)
  • Signal Processing (AREA)
  • Library & Information Science (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Television Signal Processing For Recording (AREA)
  • Management Or Editing Of Information On Record Carriers (AREA)

Abstract

The embodiment of the application discloses a playback method, a recording method and related equipment, which are applied to the field of automation. Time information is added to the semantics used to realize automation, and serves as the basis on which a server schedules and distributes a plurality of semantics. After the server acquires the plurality of semantics, the server can distribute them to the corresponding playback devices in chronological order according to the time information of each semantic, so that each playback device executes the semantics from the server in order, and orderly playback is thereby realized. Because the server, rather than a human operator, issues the semantics in sequence, the order in which each playback device in the system executes the semantics cannot become disordered, which ensures that the automation flow of a system comprising a plurality of devices can be realized.

Description

Playback method, recording method and related equipment
Technical Field
The embodiment of the application relates to the field of automation, in particular to a playback method, a recording method and related equipment.
Background
Automation technology is a technology in which one or more devices, instead of a human operator, perform repetitive operations. In general, workflows that are rule-driven and repeatable are potential targets for automation.
In the conventional technology, when a plurality of operations that would otherwise require manual intervention on a device are automated, a user can copy the semantics generated from the plurality of operations into the device, and the device performs the corresponding operations based on the plurality of semantics, thereby realizing automation. When a system composed of a plurality of devices needs to be automated, the user needs to write semantics for each device in the system and copy the semantics corresponding to each device to the corresponding device in the system.
However, because the plurality of devices in the aforementioned system are distributed, each device perceives only the semantics that the user inputs to it, and does not perceive the progress of the other devices in executing their semantics. If there is data or signaling interaction between multiple devices in the system (for example, after one device in the system executes a certain semantic, another device in the system needs to execute another semantic), the order in which the devices execute the semantics can become disordered, making it difficult to implement the automation flow of the system.
Disclosure of Invention
The embodiment of the application provides a playback method, a recording method and related equipment, which are used for realizing an automatic flow of a system comprising a plurality of equipment in a distributed scene.
The playback method provided by the application can also be used in scenarios that involve no recording device and only playback devices.
In a first aspect, the present application provides a playback method for use in a scenario involving recording devices and playback devices. In the method, each recording device in the recording group generates at least one semantic based on user operations on that recording device. The server then receives the at least one semantic from each recording device in the recording group. Each semantic is information describing a user operation on a recording device, and includes time information, object information, and action information. The time information indicates the moment at which the recording device detected the operation, and thus reflects when the user operated the recording device. The object information indicates the object of the operation corresponding to the semantic, that is, which part of the recording device the user operated. The action information indicates the content of the operation, that is, what actions the user performed on the recording device. The server then sends each semantic to the corresponding playback device in the playback group in chronological order according to its time information.
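As a minimal sketch, a semantic with the three kinds of information described above (plus the recording device's indication information, used later for routing) might be modeled as follows; all field names here are invented for illustration and are not the patented format.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Semantic:
    time_ms: int      # time information: when the recording device detected the operation
    obj: str          # object information: which part of the device the user operated
    action: str       # action information: what the user did to that object
    recorder_id: str  # indication information of the recording device that generated it

# Example: a click on a login button recorded at t = 1200 ms on device "rec-1".
s = Semantic(time_ms=1200, obj="login_button", action="click", recorder_id="rec-1")
```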
Alternatively, when the server has received all of the semantics generated by each recording device in the recording group, the server obtains a plurality of semantics. In one example, the plurality of semantics may form a semantic set. In addition, before distributing the semantics, the server needs to sort the semantics in the semantic set to ensure that the semantics it sends to the playback group are complete, which helps ensure that the operations performed by the playback devices in the playback group based on those semantics are complete and error-free.
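The sorting step described above amounts to ordering the collected set by each semantic's time information. A sketch with invented field names:

```python
def order_semantics(semantic_set):
    """Sort a collected semantic set chronologically by its time information,
    as the server does before distribution (illustrative, not the patented method)."""
    return sorted(semantic_set, key=lambda s: s["time_ms"])

# Semantics arrive from different recording devices in arbitrary order.
batch = [
    {"time_ms": 300, "action": "type"},
    {"time_ms": 100, "action": "click"},
    {"time_ms": 200, "action": "scroll"},
]
ordered = order_semantics(batch)
# Distribution then proceeds in the order click, scroll, type.
```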
It should be understood that the at least one semantic of each recording device may be sent by the recording device directly to the server, i.e., the server receives, from each recording device in the recording group, the at least one semantic generated by that device; alternatively, each recording device may send its semantics to a network element that forwards messages (e.g., a proxy network element or a switch), which then sends them to the server. This is not specifically limited here.
In this embodiment, it is proposed that time information be added to the semantics used to realize automation, as the basis on which the server schedules and distributes a plurality of semantics. After the server acquires the plurality of semantics, the server can distribute the aforementioned plurality of semantics to the corresponding playback devices in chronological order according to the time information of each semantic, so that each playback device can execute the semantics from the server in order, thereby realizing orderly playback. In the application, because the server, rather than a human operator, issues the plurality of semantics in sequence, disorder in the order in which each playback device in the system executes the semantics can be avoided, which ensures that the automation flow of a system comprising a plurality of devices can be realized.
In this application, the server must not only determine the order in which to send each of the aforementioned plurality of semantics, but also determine to which playback device in the playback group each semantic should be sent.
In a possible implementation manner, the server stores a first correspondence, which is a correspondence between the indication information of each recording device in the recording group and the indication information of a playback device in the playback group; the first correspondence indicates that the playback device can execute the semantics generated by the corresponding recording device. In other words, the semantics generated by a recording device in the recording group can be run on the playback device in the playback group that has a first correspondence with that recording device. Thus, the server can determine, based on the aforementioned first correspondence, to which playback device in the playback group the semantics generated by each recording device in the recording group should be sent.
It should be appreciated that the foregoing first correspondence may be preset manually, or may be determined by the server according to information of the recording device (e.g., its device type) and information of the playback device (e.g., its device type). The first correspondence may be stored in the server in the form of an array, a table, or another data structure capable of representing the association. This is not specifically limited here.
In this embodiment, a first correspondence is stored in the server, the first correspondence being a correspondence between the indication information of each recording device in the recording group and the indication information of a playback device in the playback group. The server is thus able to determine, based on the first correspondence, to which playback device in the playback group the semantics generated by a particular recording device in the recording group should be sent for execution. This helps ensure both the accuracy and the efficiency of the server when distributing semantics, and avoids the confusion of playback tasks that would result from the server distributing semantics to the wrong playback device.
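The first correspondence described above can be sketched as a simple lookup table from recording-device indication information to playback-device indication information; the identifiers below are invented for illustration.

```python
# First correspondence: semantics recorded on "rec-1" run on "play-A", etc.
first_correspondence = {
    "rec-1": "play-A",
    "rec-2": "play-B",
}

def target_playback_device(recorder_id):
    """Resolve which playback device should execute a semantic
    generated by the given recording device."""
    return first_correspondence[recorder_id]
```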
In a possible implementation manner, the indication information in the first corresponding relationship includes a device identifier and/or a device type. For example, the indication information of the recording device includes a device identification of the recording device and/or a device type of the recording device; the indication information of the playback device includes a device identification of the playback device and/or a device type of the playback device. Alternatively, the indication information may be other information that can distinguish a recording device or a playback device.
Optionally, the device type is used to indicate a software type and/or a hardware type supported by the recording device.
Here, the hardware type refers to the physical structure that the device has. Illustratively, in a user interface (UI) based multi-terminal automation scenario, the hardware types include a touch screen, a keyboard, a mouse, and the like. In a multi-terminal automation scenario involving audio-visual push, the hardware types include cameras, speakers, microphones, touch screens, and the like. In a manufacturing-based automation scenario, the hardware types include distance sensors, infrared thermometers, and the like. The software type refers to a software function that the device can implement by running a program. Illustratively, in an automation scenario based on a UI application, the software type includes the system type, which records what function interface the application calls, and the file type of the UI storage format. The system type mainly includes Web applications that call a container interface such as a browser, Windows desktop applications that call the system interface, and Android/iOS/HarmonyOS native applications. The file types of the UI storage format mainly include application interfaces expressed as a DOM tree, and application interfaces that support storage in JSON, XML, PDF, and similar formats. In a manufacturing-based automation scenario, the software types include supported host-computer types and the application types supported by sensors. In a shortcut-based automation scenario, the software type includes the applications involved in executing a given service, e.g., search engine applications, shopping applications, and video playback applications.
In one possible implementation, the software types supported by the recording device have a non-empty intersection with the software types supported by the corresponding playback device; and/or the hardware types supported by the recording device have a non-empty intersection with the hardware types supported by the corresponding playback device. In other words, the software types supported by the recording device and the playback device are identical or partially overlap, or the hardware types supported by the recording device and the playback device are identical or partially overlap.
In a possible embodiment, each semantic includes the indication information of the recording device that generated it. In this case, the step in which the server sends the semantics to the corresponding playback devices in the playback group in chronological order according to the time information of each semantic comprises: the server sends the semantics to the corresponding playback devices in the playback group in chronological order according to the time information of each semantic, the indication information of the recording device in each semantic, and the first correspondence.
In this embodiment, it is proposed that the indication information of the recording device be set in the semantic, so that the server can determine, from the indication information in the semantic, which recording device in the recording group generated it. Because the server also stores the first correspondence, which maps the indication information of each recording device in the recording group to the indication information of a playback device in the playback group, the server can determine the playback device corresponding to the semantic from the indication information of the recording device in the semantic together with the first correspondence.
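Putting the two mechanisms together, a dispatch loop might sort by time information and route each semantic through the first correspondence; the transport is left abstract, and all names are illustrative.

```python
def dispatch(semantics, first_correspondence, send):
    """Send each semantic to its playback device in chronological order.
    `send(device, semantic)` stands in for the actual server-to-device transport."""
    for sem in sorted(semantics, key=lambda s: s["time_ms"]):
        send(first_correspondence[sem["recorder_id"]], sem)

# Demonstration: record the (device, time) pairs in send order.
sent = []
dispatch(
    [{"time_ms": 2, "recorder_id": "rec-2"}, {"time_ms": 1, "recorder_id": "rec-1"}],
    {"rec-1": "play-A", "rec-2": "play-B"},
    lambda dev, sem: sent.append((dev, sem["time_ms"])),
)
```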
In one possible implementation, the server stores a device list containing information of the recording group, including a device identification of each recording device in the recording group, and information of the playback group, including a device identification of each playback device in the playback group. Before the server receives the at least one semantic from each recording device in the recording group, the method further comprises: the server receives a plurality of registration messages, each registration message including a device identification of the device that sent it; the server then determines, based on the device list and the device identification in the registration message, whether that device is a recording device or a playback device.
In this embodiment, the recording devices and playback devices can provide their information (for example, their device identifications) to the server by way of registration, and the server determines from the pre-stored device list whether the device sending a registration message is a recording device or a playback device. Through this registration-based management, the server manages only the registered recording and playback devices, which improves the efficiency of managing them.
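The classification step described above can be sketched as a lookup against the pre-stored device list; identifiers and field names are invented.

```python
# Pre-stored device list partitioning known identifiers into the two roles.
device_list = {
    "recording": {"rec-1", "rec-2"},
    "playback": {"play-A", "play-B"},
}

def classify(registration):
    """Determine from the device list whether the registering device
    is a recording device, a playback device, or unknown."""
    dev_id = registration["device_id"]
    if dev_id in device_list["recording"]:
        return "recording"
    if dev_id in device_list["playback"]:
        return "playback"
    return "unknown"
```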
In a possible embodiment, the registration message further includes the device type of the device, i.e., the device type of a recording device or the device type of a playback device. The server stores the device type of each recording device in correspondence with its device identification, and stores the device type of each playback device in correspondence with its device identification.
In this embodiment, when the registration message includes a device type (for example, the device type of a recording device or of a playback device), the server additionally stores the device identification and the device type in correspondence, so that the server can conveniently look up the device type of each device, which in turn allows the server to make decisions based on both the device identification and the device type.
In one possible implementation, after the server receives the plurality of registration messages and before the server receives the at least one semantic from each recording device in the recording group, the method further comprises: the server sends a first recording instruction to each recording device in the recording group, the first recording instruction instructing each recording device to start recording. The first recording instruction includes the system clock information of the server, which instructs each recording device in the recording group to perform clock synchronization against the system clock of the server.
In this embodiment, the server controls the recording devices to start recording through the first recording instruction, and carries its own system clock information in that instruction so that each recording device resets its system clock based on the server's system clock. Because the server broadcasts the first recording instruction to all recording devices in the recording group at the same time, every recording device in the group receives it. Each recording device in the recording group can therefore reset its clock to the same system clock (i.e., the system clock of the server), ensuring that the system clocks of the recording devices in the recording group are synchronized.
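One simple way a recording device could apply the server's clock information is to keep an offset between the server clock carried in the first recording instruction and its own local clock; the field name `server_clock` and this offset scheme are assumptions for illustration (a real implementation would also account for transmission delay).

```python
import time

class RecorderClock:
    """Sketch of clock synchronization on a recording device."""

    def __init__(self):
        self.offset = 0.0  # seconds to add to the local clock

    def on_first_recording_instruction(self, instruction):
        # Align the local clock with the server's system clock carried
        # in the first recording instruction.
        self.offset = instruction["server_clock"] - time.time()

    def now(self):
        # Server-aligned time used to timestamp recorded semantics.
        return time.time() + self.offset
```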
Optionally, the first recording instruction further includes at least one recording interface, the at least one recording interface being configured to obtain the original information of at least one operation corresponding to the task type of the recording task, where the original information is the information, detected by the recording device, relating to the user's operation on the recording device. Illustratively, the recording interfaces include the input/output (I/O) interface of a microphone, of a speaker, of a mouse, of a keyboard, and the like, which are not limited here.
In this embodiment, the server can carry the recording interfaces in the first recording instruction, so that the recording device obtains original information only from the recording interfaces listed in the instruction. This allows the server to control recording at the granularity of individual interfaces: the recording device does not acquire information from every interface, which reduces its processing load.
In one possible implementation, after the server sends the first recording instruction to each of the recording devices in the recording group, the method further includes: the server sends a second recording instruction to at least one recording device in the recording group, wherein the second recording instruction is used for indicating the recording device to stop recording.
In this application, when the server sends the semantics among the plurality of semantics (for example, the semantic set) to the corresponding playback devices, the server may send only one semantic at a time to a corresponding playback device, or may send a group of semantics (comprising a plurality of semantics) at a time to a corresponding playback device. Optionally, the server may also use the receipt of a notification message from the playback device as the trigger to send the next semantic (or the next group). The notification message indicates that the playback device has executed the received semantics, which can also be understood as indicating that the playback device has successfully played back the received semantics.
In one possible embodiment, the plurality of semantics includes N semantics, N being an integer greater than 1. The step in which the server sends the plurality of semantics to the corresponding playback devices in the playback group in chronological order according to the time information of each semantic comprises: the server sorts the N semantics according to the time information of each of the N semantics to obtain a first semantic sequence determined by the N semantics; the server sends the i-th semantic in the first semantic sequence to the corresponding playback device in the playback group; the server receives a notification message corresponding to the i-th semantic, the notification message indicating that playback of the i-th semantic succeeded, where i is an integer greater than or equal to 1 and less than or equal to N.
In this embodiment, after the server sends one semantic to a playback device in the playback group, the server waits until it receives the playback-success notification message from that playback device before it sends the next semantic. This helps the playback devices in the playback group execute the received semantics in order, and also allows the server to retransmit a semantic to a playback device when the playback device fails to send the notification message, so that every playback device in the playback group executes its corresponding semantics and the entire playback task proceeds in order.
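The one-at-a-time, acknowledgement-driven sending described above can be sketched as a loop in which each send blocks until the playback-success notification arrives; retransmission on failure is omitted for brevity, and `send_and_wait` is a placeholder for the real transport.

```python
def play_back_sequence(sequence, send_and_wait):
    """Send semantics one at a time; the notification message ("playback
    successful") for semantic i triggers the send of semantic i+1.
    Returns the number of semantics confirmed before the first failure."""
    for i, sem in enumerate(sequence, start=1):
        ok = send_and_wait(sem)  # True once the playback device acknowledges
        if not ok:
            return i - 1
    return len(sequence)
```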
In one possible embodiment, the plurality of semantics includes N semantics, N being an integer greater than 1. The step in which the server sends the plurality of semantics to the corresponding playback devices in the playback group in chronological order according to the time information of each semantic comprises: the server sorts the N semantics according to the time information of each of the N semantics to obtain a first semantic sequence determined by the N semantics; the server divides the first semantic sequence into a plurality of second semantic sequences, each second semantic sequence being a sequence determined by at least one consecutive semantic in the first semantic sequence that is executed by the same playback device; the server sends each second semantic sequence to the corresponding playback device in the playback group.
In this embodiment, it is proposed that the server transmit a set of semantics (i.e., a second semantic sequence) to a playback device in the playback group at a time, the set being executed by a single playback device. After the playback device has executed the aforementioned set of semantics, the server sends the next set of semantics to the corresponding playback device. This helps reduce the signaling overhead between the server and the playback devices. Further, because each semantic contains time information, a playback device can determine the order in which to execute the semantics in the set based on the time information of each semantic in the set. Thus, the playback devices in the playback group can still ensure that the entire playback task proceeds in order.
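Splitting the time-ordered first semantic sequence into second semantic sequences amounts to grouping consecutive semantics destined for the same playback device; a sketch using `itertools.groupby`, with invented field names:

```python
from itertools import groupby

def split_into_second_sequences(first_sequence):
    """Divide the first semantic sequence into runs of consecutive semantics
    executed by the same playback device (each run is a second semantic sequence)."""
    return [
        (device, list(run))
        for device, run in groupby(first_sequence, key=lambda s: s["device"])
    ]

seq = [
    {"device": "play-A", "t": 1},
    {"device": "play-A", "t": 2},
    {"device": "play-B", "t": 3},
    {"device": "play-A", "t": 4},
]
groups = split_into_second_sequences(seq)
# Three runs: two semantics for play-A, one for play-B, then one more for play-A.
```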
In a second aspect, the present application provides a playback method for use in a scenario that involves no recording device and only playback devices. In this method, the semantics may be manually copied to the server. The server obtains a plurality of semantics, each semantic indicating an operation to be performed on a playback device in a playback group, the operations including controlling one playback device in the playback group to interact with another playback device in the playback group. Each semantic includes timing information, object information, and action information. The timing information indicates the order of the semantic relative to the other semantics among the aforementioned plurality. The object information indicates the object of the operation corresponding to the semantic, that is, on which component of the playback device the playback device performs the operation according to the semantic. The action information indicates the content of the operation, that is, which actions the playback device performs at that component according to the semantic. The server then sends the semantics to the corresponding playback devices in the playback group in the order indicated by the timing information of each semantic.
In this embodiment, it is proposed that timing information be added to the semantics used to realize automation, as the basis on which the server schedules and distributes a plurality of semantics. After the server acquires the plurality of semantics, the server can determine the order in which to distribute them according to the timing information of each semantic, and then distribute the plurality of semantics to the corresponding playback devices in the order indicated by the timing information, so that each playback device executes the semantics from the server in sequence, thereby realizing sequential playback. In the method, because the server, rather than a human operator, issues the semantics in sequence, disorder in the order in which each playback device in the system executes the semantics can be avoided, which ensures that the automation flow of a system comprising a plurality of devices can be realized.
In one possible implementation, the server stores indication information for each playback device in the playback group, the indication information for each playback device identifying one playback device in the playback group.
Alternatively, the server may manage each playback device in the playback group by way of registration. For example, each playback device in the playback group provides the server with its indication information by sending a registration message to the server.
In a possible implementation manner, each semantic further includes indication information of a playback device, the indication information in the semantic indicating the playback device to which the semantic corresponds.
In this embodiment, each of the aforementioned semantics includes the indication information of a playback device, which indicates by which playback device the semantic is to be executed, so that the server can distribute each semantic to the corresponding playback device based on the indication information of the playback device in that semantic.
In one possible implementation manner, the step in which the server sends the semantics to the corresponding playback devices in the playback group in the order indicated by the timing information of each semantic comprises: the server sends each semantic to the playback device indicated by its indication information, according to the timing information of each semantic and the indication information of the playback device in each semantic.
In this embodiment, when each semantic includes the indication information of a playback device and the server stores the indication information of each playback device in the playback group, the server can determine the playback device corresponding to each semantic based on the indication information of the playback device in that semantic, and can then send the semantic to the corresponding playback device accordingly.
In one possible implementation, before the server sends the semantics to the corresponding playback devices in the playback group in the order indicated by the timing information of each semantic, the method further comprises: the server acquires a second correspondence, the second correspondence being a correspondence between the timing information and the indication information of a playback device, and indicating that a semantic containing that timing information can be run on the playback device indicated by that indication information. The step in which the server sends the semantics to the corresponding playback devices in the playback group in the order indicated by the timing information of each semantic then comprises: the server sends the semantics to the corresponding playback devices in the playback group, in the order indicated by the timing information, according to the timing information of each semantic and the second correspondence.
In this embodiment, it is proposed that the server may determine to which playback device each semantic should be sent based on the timing information of each semantic. Specifically, a second correspondence can be obtained in the server, where the second correspondence is a correspondence between the timing information and the indication information of the playback device. Thus, the second correspondence can indicate that a semantic containing the timing information should be run on the playback device indicated by the indication information of the playback device. Accordingly, the server may determine the playback device corresponding to each semantic based only on the timing information of the semantic and the second correspondence, without parsing other information in the semantic.
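As an illustrative sketch of this dispatch approach (all names, fields, and data structures below are assumptions for illustration, not part of the claimed method), the server can resolve each semantic's target playback device from the second correspondence alone, without parsing the rest of the semantic:

```python
# Hypothetical sketch: dispatching semantics by timing information alone,
# using the "second correspondence" (timing info -> playback device id).
# All names are illustrative assumptions, not part of the claimed method.

def dispatch_by_timing(semantics, second_correspondence):
    """Return (device_id, semantic) pairs in the order given by timing info.

    semantics: list of dicts, each with at least a "timing" field.
    second_correspondence: dict mapping timing value -> playback device id.
    """
    ordered = sorted(semantics, key=lambda s: s["timing"])
    plan = []
    for sem in ordered:
        # The server only reads the timing field; the rest of the
        # semantic is passed through opaquely to the playback device.
        device_id = second_correspondence[sem["timing"]]
        plan.append((device_id, sem))
    return plan

plan = dispatch_by_timing(
    [{"timing": 2, "object": "btn_send", "action": "press"},
     {"timing": 1, "object": "edit_box", "action": "input"}],
    {1: "phone-021", 2: "pc-022"},
)
# plan[0] pairs "phone-021" with the timing-1 semantic
```

Note that the server never inspects `object` or `action`: this is the point made above, that the second correspondence lets dispatch happen without parsing the semantic body.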
In a possible embodiment, the indication information of the playback device includes a device identification of the playback device and/or a device type of the playback device, where the device type of the playback device is used to indicate a software type and/or a hardware type supported by the playback device. Optionally, the device type is used to indicate a software type and/or a hardware type supported by the recording device.
Here, the hardware type refers to the physical structure that the device has. For example, in a multi-terminal automation scenario based on a user interface (UI), the hardware types include a touch screen, a keyboard, a mouse, and the like. For example, in a multi-terminal automation scenario involving audio and video push, the hardware types include a camera, a speaker, a microphone, a touch screen, and the like. Illustratively, in a manufacturing-based automation scenario, the hardware types include a distance sensor, an infrared thermometer, and the like. In addition, the software type refers to a software function that the device can implement by running a program. Illustratively, in an automation scenario based on a user interface (UI) application, the software type includes a system type recording which function interfaces the application calls and a file type of the UI storage format. The system type mainly includes Web applications calling container interfaces such as browsers, Windows desktop applications calling system interfaces, and Android/iOS/HarmonyOS native applications. The file types of the UI storage format mainly include application interfaces expressed based on a DOM tree, application interfaces supporting JSON-format storage, application interfaces supporting XML-format storage, application interfaces supporting PDF-format storage, and the like. Illustratively, in a manufacturing-based automation scenario, the software types include the supported host computer types and the application types supported by the sensors. Illustratively, in a shortcut-based automation scenario, the software type includes applications related to executing a certain service, for example, search engine applications, shopping applications, and video playback applications.
In a third aspect, the present application provides a recording method, where the recording method is applied to a recording apparatus. Specifically, the recording device generates at least one semantic. Then, the recording device sends the at least one semantic to a server, so that the server sends the semantics, in time order, to the corresponding playback devices in the playback group according to the time information of each semantic.
Each piece of semantic is information which is generated by the recording device and used for describing the operation of a user on the recording device, the semantic comprises time information, object information and action information, the time information is used for indicating the moment when the recording device detects the operation, the object information is used for indicating an object of the operation corresponding to the semantic, and the action information is used for indicating the content of the operation.
In this embodiment, it is proposed that the semantics generated by the recording device carry time information, which is the moment at which the recording device detects the operation. When the semantics generated by each recording device in the recording group carry time information, the semantics from different recording devices can be ordered by the time information, so that the server can know the time sequence in which the semantics were generated by the recording devices in the recording group. In the conventional technology, the semantics generated by the recording device carry no time information, and when the server obtains semantics from different recording devices, the server cannot determine which semantics were generated first and which were generated later. Therefore, the server also cannot distribute the semantics in order. In this application, time information is added to the semantics, so that the server obtains a basis (namely, the time information) for ordering the semantics, which in turn helps the server distribute the semantics in order.
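A minimal sketch of the semantic structure and of ordering semantics from different recording devices by their time information (field names are illustrative assumptions, not prescribed by the application):

```python
# Illustrative sketch: each semantic carries time information (the moment
# the recording device detected the operation), object information, and
# action information. Field names are assumptions for illustration.

from dataclasses import dataclass

@dataclass
class Semantic:
    time_ms: int      # moment the recording device detected the operation
    obj: str          # object the operation acted on (object information)
    action: str       # content of the operation (action information)
    device_id: str    # which recording device generated the semantic

def order_across_devices(*per_device_semantics):
    """Merge semantics from several recording devices into one
    time-ordered sequence -- this is what the time information enables."""
    merged = [s for group in per_device_semantics for s in group]
    return sorted(merged, key=lambda s: s.time_ms)

phone = [Semantic(120, "text_field", "input", "phone-031")]
pc = [Semantic(80, "mail_body", "input", "pc-032"),
      Semantic(200, "send_btn", "press", "pc-032")]
ordered = order_across_devices(phone, pc)
```

Without the `time_ms` field there would be no basis for interleaving the two devices' streams, which is the gap in the conventional technology that the paragraph above describes.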
In one possible implementation, the recording device is one of a plurality of recording devices in a recording group, and the operation includes controlling one recording device in the recording group to interact with another recording device in the recording group.
In this embodiment, it is proposed that each recording device in the recording group will generate semantics based on the foregoing manner, that is, the semantics generated by each recording device in the recording group include time information.
In one possible implementation, the recording device generates at least one semantic item, including: the recording device acquires at least one piece of original information, wherein each piece of original information is information which is detected by the recording device and related to the operation of a user on the recording device, and each piece of original information comprises operation time, an operation object and an operation type; the operation time is the time when the recording device detects the operation of the user on the recording device, the operation object is an object which is detected by the recording device and operated by the user on the recording device, and the operation type is the action type which is detected by the recording device and executed by the user through hardware and/or software supported by the recording device. The recording device determines the operation time as the semantic time information; the recording device determines object information of the semantics according to the operation object; the recording device determines action information of the semantics according to the operation type.
Optionally, the semantics further include a device type of the recording device.
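A hedged sketch of how a recording device might map one piece of original information (operation time, operation object, operation type) onto the three semantic fields, with the device type as an optional addition (all names are illustrative assumptions):

```python
# Illustrative sketch: operation time becomes the time information, the
# operation object becomes the object information, and the operation type
# is turned into action information. Field names are assumptions.

def generate_semantic(original, device_type=None):
    semantic = {
        "time": original["operation_time"],      # time information
        "object": original["operation_object"],  # object information
        "action": original["operation_type"],    # action information
    }
    if device_type is not None:
        semantic["device_type"] = device_type    # optional field
    return semantic

sem = generate_semantic(
    {"operation_time": 120, "operation_object": "edit_box",
     "operation_type": "input"},
    device_type="android_phone",
)
```

In this simple sketch the operation type is copied through unchanged; the mapping-rule abstraction described below would replace that copy with a lookup.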
In a possible implementation manner, the recording device determines action information of the semantics according to the operation type, including: the recording device determines that the action characteristic corresponding to the operation type is action information of the semantic according to the operation type and a mapping rule, wherein the mapping rule comprises a corresponding relation between the action characteristic and a plurality of operation types, and each operation type in the plurality of operation types meets the action characteristic.
In this embodiment, it is proposed that a mapping rule is stored in the recording device, where the mapping rule is used to convert an operation type in original information into action information in semantics. The action information in the semantics is an action feature abstracted based on the operation type, and the action feature can be applied to various systems. The method is beneficial to improving the universality of the semantics, so that the semantics can be executed in different systems.
Illustratively, the plurality of operation types includes a first operation type of pressing on the user interface through a touch screen and a second operation type of pressing on the display screen; the first action feature corresponding to both the first operation type and the second operation type is a press.
Illustratively, the plurality of operation types includes a third operation type of sliding in a first direction on the user interface through a touch screen and a fourth operation type of scrolling the progress bar in the first direction through a mouse; the second action feature corresponding to both the third operation type and the fourth operation type is movement in the first direction.
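A sketch of the mapping rule described above, in which several concrete operation types share one abstract action feature so that a semantic recorded on one kind of hardware can be played back on another (the operation-type and action-feature names are illustrative assumptions):

```python
# Illustrative sketch of a mapping rule: multiple concrete operation
# types map to one abstract action feature, making the resulting action
# information portable across systems. Names are assumptions.

MAPPING_RULE = {
    "touchscreen_press": "press",        # first operation type
    "mouse_click": "press",              # second operation type
    "touchscreen_swipe_up": "move_up",   # third operation type
    "mouse_scroll_up": "move_up",        # fourth operation type
}

def to_action_feature(operation_type):
    """Return the abstract action feature for a concrete operation type."""
    return MAPPING_RULE[operation_type]

# Recording on a touch screen and recording with a mouse yield the same
# action information, which is what makes the semantic generic.
assert to_action_feature("touchscreen_press") == to_action_feature("mouse_click")
```

The many-to-one shape of the table is the essential point: playback only needs to realize the action feature with whatever hardware it has, not reproduce the exact operation type that was recorded.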
In a possible embodiment, before the recording device generates the at least one semantic item, the method further comprises: the recording device sends a registration message to the server, wherein the registration message comprises indication information of the recording device, and the indication information of the recording device comprises a device type of the recording device and a device identification of the recording device.
In one possible implementation, after the recording device sends the registration message to the server, the method further includes:
the recording device receives a first recording instruction from the server, the first recording instruction is used for instructing the recording device to start recording, the first recording instruction comprises system clock information of the server, and the system clock information is used for instructing each recording device in the recording group to perform clock synchronization according to the system clock of the server.
In this embodiment, it is proposed that the recording device is capable of receiving a first recording instruction from the server and starting recording based on the first recording instruction. In addition, since the first recording instruction carries the system clock information of the server, the recording device can reset the system clock of the recording device based on the system clock information of the server. When the server can broadcast the first recording instruction to each recording device in the recording group at the same time, each recording device in the recording group can receive the first recording instruction, and each recording device in the recording group can reset the clock based on the same system clock (i.e. the system clock of the server). Thus, it is advantageous to ensure that the system clocks of the recording devices in the recording group are synchronized.
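A simplified sketch of this clock-synchronization behavior (ignoring network delay, which a real implementation would have to compensate for): each recording device derives an offset from the server clock carried in the first recording instruction and stamps subsequent time information on that shared timeline. All names are illustrative:

```python
# Illustrative sketch: the first recording instruction carries the
# server's system clock; each recording device resets its timeline to
# that clock so that time information is comparable across devices.
# Network delay is ignored here for simplicity.

import time

class RecordingDevice:
    def __init__(self):
        self.clock_offset = 0.0

    def on_first_recording_instruction(self, server_clock):
        # Align the local timeline with the server's system clock.
        self.clock_offset = server_clock - time.monotonic()

    def now(self):
        # Timestamps placed into semantics use the shared clock.
        return time.monotonic() + self.clock_offset

dev_a, dev_b = RecordingDevice(), RecordingDevice()
server_clock = 1000.0
dev_a.on_first_recording_instruction(server_clock)
dev_b.on_first_recording_instruction(server_clock)
# Both devices now report (approximately) the same time.
```

Because both devices were reset from the same broadcast value, their `now()` readings agree, which is what makes the cross-device ordering of semantics by time information meaningful.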
In one possible implementation, after the recording device receives the first recording instruction from the server, the method further includes: the recording device receives a second recording instruction from the server, wherein the second recording instruction is used for instructing the recording device to stop recording.
In a fourth aspect, the present application provides a server. When the server obtains semantics from a recording group, the server includes a transceiver module and a processing module. The transceiver module is configured to receive at least one semantic from each recording device in the recording group, where each semantic is information describing an operation of a user on the recording device, and the semantic includes time information, object information, and action information; the time information is used to indicate the moment when the recording device detects the operation, the object information is used to indicate an object of the operation corresponding to the semantic, and the action information is used to indicate the content of the operation. The processing module is configured to determine the time sequence of the semantics according to the time information of each semantic, and to control the transceiver module to send the semantics to the corresponding playback devices in the playback group according to the time sequence.
In a possible implementation manner, the server further includes a storage module, where the storage module is configured to store a first correspondence, where the first correspondence is a correspondence between indication information of each recording device in the recording group and indication information of a playback device in the playback group, and the first correspondence is used to indicate that the playback device is capable of executing semantics generated by the corresponding recording device.
Optionally, the indication information includes a device identifier and/or a device type, where the device type is used to indicate a software type and/or a hardware type supported by the recording device.
Optionally, a non-empty intersection exists between the software type supported by the recording device and the software type supported by the corresponding playback device; and/or, the hardware type supported by the recording device has a non-empty intersection with the hardware type supported by the corresponding playback device.
In a possible implementation manner, when each semantic includes the indication information of the recording device, the processing module is specifically configured to send the semantic to a playback device corresponding to the playback group according to the time information of each semantic, the indication information of the recording device in each semantic, and the first correspondence, and according to the time sequence.
In one possible implementation, the storage module in the server is further configured to store a list of devices. The device list stores information of a recording group and information of the playback group. Wherein the information of the recording group includes a device identification of each recording device in the recording group, and the information of the playback group includes a device identification of each playback device in the playback group. At this time, the transceiver module in the server is further configured to receive a plurality of registration messages, where each registration message includes a device identifier of a device that sends the registration message; the processing module in the server is further configured to determine that the device is a recording device or a playback device according to the device list and the device identifier of the device in the registration message.
In a possible embodiment, the registration message further includes a device type of the device, the device type of the device including a device type of the recording device and a device type of the playback device. When the registration message further includes a device type of the device, the storage module in the server is further configured to store the device type of the recording device in correspondence with the device identifier of the recording device; and storing the device type of the playback device in correspondence with the device identification of the playback device.
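An illustrative sketch of this registration handling: the server classifies a registering device as a recording device or a playback device by looking up its device identifier in the stored device list, and stores the device type alongside the identifier (all names and message fields are assumptions for illustration):

```python
# Illustrative sketch: the server's device list names the members of the
# recording group and the playback group; an incoming registration
# message is classified by its device identifier. Names are assumptions.

DEVICE_LIST = {
    "recording_group": {"phone-031", "pc-032"},
    "playback_group": {"phone-021", "pc-022"},
}

registry = {}  # device id -> {"role": ..., "device_type": ...}

def on_registration(message):
    dev_id = message["device_id"]
    if dev_id in DEVICE_LIST["recording_group"]:
        role = "recording"
    elif dev_id in DEVICE_LIST["playback_group"]:
        role = "playback"
    else:
        return None  # unknown device: ignore the registration
    # Store the (optional) device type in correspondence with the id.
    registry[dev_id] = {"role": role,
                        "device_type": message.get("device_type")}
    return role

role = on_registration({"device_id": "phone-031",
                        "device_type": {"hardware": ["touch_screen"]}})
```

The device list is consulted only for classification; the device type arriving in the registration message is stored per identifier, matching the correspondence described above.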
In a possible implementation manner, the transceiver module in the server is further configured to send a first recording instruction to each recording device in the recording group, where the first recording instruction is used to instruct each recording device to start recording, and the first recording instruction includes system clock information of the server, where the system clock information is used to instruct each recording device in the recording group to perform clock synchronization according to a system clock of the server.
In a possible implementation manner, the transceiver module in the server is further configured to send a second recording instruction to at least one recording device in the recording group, where the second recording instruction is used to instruct the recording device to stop recording.
It should be noted that, in this embodiment, there are various other embodiments, and specific reference may be made to the specific embodiment of the first aspect and the beneficial effects thereof, which are not described herein.
In a fifth aspect, the present application provides a server, where the server may directly obtain a plurality of manually written semantics to obtain a semantic set. In this case, the server includes an acquisition module, a processing module, and a transceiver module. The acquisition module is configured to acquire the plurality of semantics. The processing module is configured to control the transceiver module to send the semantics to the corresponding playback devices in the playback group according to the sequence indicated by the timing information of each semantic.
Wherein each of the semantics is used to indicate an operation that needs to be performed on a playback device in a playback group, the operation comprising controlling one playback device in the playback group to interact with another playback device in the playback group. In addition, the semantics include timing information, object information, and action information. The timing information is used for indicating the sequence of the semantic relative to other semantic in the plurality of semantic, the object information is used for indicating the object of the operation corresponding to the semantic, and the action information is used for indicating the content of the operation.
In one possible implementation, the server stores indication information for each playback device in the playback group, the indication information for each playback device identifying one playback device in the playback group.
In a possible implementation manner, each semantic item further comprises indication information of the playback device, and the indication information of the playback device in the semantic item is used for indicating the playback device corresponding to the semantic item.
In a possible implementation manner, the processing module is specifically configured to control the transceiver module to send the semantics to the playback device indicated by the indication information of the playback device according to the timing information of each of the semantics and the indication information of the playback device in each of the semantics.
In a possible implementation manner, the obtaining module is further configured to obtain a second correspondence, where the second correspondence includes a correspondence between the timing information and indication information of the playback device, and the second correspondence is used to indicate that semantics including the timing information can be executed in a playback device indicated by the indication information of the playback device. The processing module is further configured to determine a playback device corresponding to each semantic according to the timing information of each semantic and the second corresponding relationship, and control the transceiver module to send the semantic to a playback device corresponding to the playback group according to the sequence indicated by the timing information.
In a possible embodiment, the indication information of the playback device includes a device identification of the playback device and/or a device type of the playback device, where the device type of the playback device is used to indicate a software type and/or a hardware type supported by the playback device.
It should be noted that, in this embodiment, there are various other embodiments, and specific reference may be made to the specific embodiment of the second aspect and the beneficial effects thereof, which are not described herein.
In a sixth aspect, the present application provides a recording apparatus, including a processing module and a transceiver module. The processing module is configured to generate at least one semantic, where each semantic is information generated by the recording device for describing an operation of a user on the recording device, and the semantic includes time information, object information, and action information; the time information is used to indicate the moment when the recording device detects the operation, the object information is used to indicate an object of the operation corresponding to the semantic, and the action information is used to indicate the content of the operation. The transceiver module is configured to send the at least one semantic to the server, so that the server sends the semantics, in time order, to the corresponding playback devices in the playback group according to the time information of each semantic.
In one possible implementation, the recording device is one of a plurality of recording devices in a recording group, and the operation includes controlling one recording device in the recording group to interact with another recording device in the recording group.
Optionally, the semantics further include a device type of the recording device.
In a possible embodiment, the processing module is specifically configured to:
acquiring at least one piece of original information, wherein each piece of original information is information related to the operation of a user on the recording device, detected by the recording device, and comprises operation time, an operation object and an operation type; the operation time is the time when the recording device detects the operation of the user on the recording device, the operation object is an object which is detected by the recording device and operated by the user on the recording device, and the operation type is the action type which is detected by the recording device and executed by the user through hardware and/or software supported by the recording device;
determining the operation time as the time information of the semantic;
determining object information of the semantics according to the operation object;
action information of the semantics is determined according to the operation type.
In a possible implementation manner, the processing module is specifically configured to determine that the action feature corresponding to the operation type is action information of the semantic according to the operation type and a mapping rule, where the mapping rule includes a correspondence between an action feature and a plurality of operation types, and each operation type in the plurality of operation types satisfies the action feature.
In a possible implementation manner, the transceiver module is further configured to send a registration message to the server, where the registration message includes indication information of the recording device, and the indication information of the recording device includes a device type of the recording device and a device identifier of the recording device.
In a possible implementation manner, the transceiver module is further configured to receive a first recording instruction from the server, where the first recording instruction is used to instruct the recording device to start recording, and the first recording instruction includes system clock information of the server, where the system clock information is used to instruct each recording device in the recording group to perform clock synchronization according to a system clock of the server.
In a possible implementation manner, the transceiver module is further configured to receive a second recording instruction from the server, where the second recording instruction is used to instruct the recording device to stop recording.
It should be noted that, in this embodiment, there are various other embodiments, and specific reference may be made to the specific embodiment of the third aspect and the beneficial effects thereof, which are not described herein.
In a seventh aspect, the present application provides a server comprising a processor and a memory; wherein the memory stores a computer program; the processor invokes the computer program to cause the server to perform the method of the first aspect or any of the embodiments of the first aspect or to cause the server to perform the method of the second aspect or any of the embodiments of the second aspect.
In an eighth aspect, the present application provides a recording apparatus comprising a processor and a memory; wherein the memory stores a computer program; the processor invokes the computer program to cause the recording apparatus to perform the method of the third aspect or any implementation of the third aspect.
In a ninth aspect, the present application provides an automated system comprising: a playback device, a server in any one of the fourth aspect and the fourth implementation manner, and a recording device in any one of the sixth aspect and the sixth implementation manner.
In a tenth aspect, the present application provides an automated system comprising: playback device, and server in any of the embodiments of the fourth aspect and the fourth aspect.
In an eleventh aspect, the present application provides a computer program product comprising instructions which, when run on a computer, cause the computer to perform the method as described in the first, second or third aspects and any of the various embodiments of the various aspects.
In a twelfth aspect, embodiments of the present application provide a computer-readable storage medium comprising instructions that, when executed on a computer, cause the computer to perform a method as described in the foregoing first, second or third aspects, and any of the various embodiments of the foregoing aspects.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application.
FIG. 1A is a diagram of a system architecture to which the method of the present application is applicable;
FIG. 1B is a diagram illustrating an example of a scenario in which the method proposed in the present application is applicable;
FIG. 1C is a diagram illustrating another scenario in which the method proposed in the present application is applicable;
FIG. 1D is a diagram illustrating another scenario in which the method proposed in the present application is applicable;
FIG. 2 is a flowchart of a playback method and a recording method in the present application;
FIG. 3 is an exemplary diagram of semantics in the present application;
FIG. 4 is another flow chart of a playback method in the present application;
FIG. 5 is a schematic diagram of one embodiment of a server in the present application;
FIG. 6 is a schematic diagram of another embodiment of a server in the present application;
fig. 7 is a schematic diagram of an embodiment of the device in the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are only some, but not all, of the embodiments of the present application.
The terms "first," "second," "third," "fourth" and the like in the description and in the claims of this application and in the above-described figures, if any, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments described herein may be implemented in other sequences than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
For ease of understanding, some terms relating to embodiments of the present application are described below:
Semantics: an executable file written in a specific descriptive language according to a certain format. The semantics can be recognized by a device, so that the device controls software and/or hardware in the device based on the semantics, thereby causing the device to perform the operation corresponding to the semantics. Generally, one semantic corresponds to one operation. In some scenarios, semantics are also referred to as scripts (Script). In different embodiments of this application, the foregoing semantics may be semantics generated by a recording device or manually written semantics.
Recording: refers to the process by which a device records the user's operations on the device in a computer language (e.g., script, semantics, etc.). For example, the recording flow includes: control lookup → operation execution and recording → semantic generation. In this application, a device that generates semantics in a recording flow is referred to as a recording device, and a group of devices including a plurality of recording devices is referred to as a recording group.
Playback: refers to a process in which a device looks up an object described in a computer language (e.g., script, semantics, etc.) based on that computer language and performs the operation corresponding to the computer language. For example, the playback flow includes: parsing semantics → finding controls → executing operations. In this application, a device that parses semantics in a playback flow is referred to as a playback device, and a group of devices including a plurality of playback devices is referred to as a playback group.
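The playback flow just described (parse semantics → find control → execute operation) can be sketched as follows, with the control lookup reduced to a dictionary for illustration (all names are assumptions):

```python
# Illustrative sketch of the playback flow: parse the semantic, find the
# control it names, and execute the corresponding operation on it.

def playback(semantic, controls):
    """semantic: dict with "object" and "action" fields (parsed semantic);
    controls: mapping from object name to a callable control."""
    control = controls[semantic["object"]]   # find the control
    control(semantic["action"])              # execute the operation

log = []
controls = {"send_btn": lambda action: log.append(("send_btn", action))}
playback({"object": "send_btn", "action": "press"}, controls)
```

A real playback device would resolve the object information against its actual UI tree or hardware rather than a dictionary, but the three-step shape is the same.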
Multi-terminal collaboration: a scenario of multi-device interaction in which a plurality of devices with a distributed design assist each other to complete the same task. The collaboration may be software capability collaboration, hardware capability collaboration, or collaboration of both software and hardware capabilities.
Cross-end migration: transmitting the semantics generated by one device to another device to run, so that the other device can perform the operation corresponding to the semantics.
The following describes the system architecture and application scenarios related to the playback method and the recording method proposed in this application:
as shown in fig. 1A, a system architecture diagram applicable to the playback method and the recording method provided in the present application is shown. The system includes a server 01 and a playback group 02. Wherein the server 01 is a structure newly added in the system in the present application for distributing semantics to the respective playback devices (e.g., playback device 1 and playback device 2) in the playback group 02. And the respective playback devices (e.g., playback device 1 and playback device 2) in the playback group 02 are configured to receive the semantics from the server 01 and perform operations corresponding to the semantics in accordance with the semantics to cooperatively complete a certain task.
Optionally, the system may further comprise a recording group 03. The recording group 03 includes a plurality of recording apparatuses (e.g., recording apparatus 1 and recording apparatus 2) for detecting operations on the recording apparatuses and generating semantics based on the detected operations. Because the server 01 is newly added in the system, the recording devices in the recording group can also send the generated semantics to the server 01, so that the server 01 schedules the semantics generated by the recording devices. It should be appreciated that the number of recording devices in recording group 03 is not necessarily the same as the number of playback devices in playback group 02.
It should be understood that the devices (e.g., the recording device and the playback device) involved in the present application may be mobile terminals (MT) such as mobile phones and tablet computers (Pad); virtual reality (VR) terminal devices and augmented reality (AR) terminal devices; or terminals in other fields with automation requirements, such as industrial control, self-driving, remote medical surgery, smart grid, transportation safety, smart city, smart home, etc. The device involved in this application may also be any other device capable of executing scripts or semantics, or any other device with an automation requirement. The specific technology and specific form adopted by the device are not limited in this application.
In addition, the playback method and the recording method provided by the application are mainly applied to an automation scene involving a plurality of devices.
By way of example, the foregoing automation scenario involving multiple devices may be an automation scenario based on human-machine interaction, for example, as shown in fig. 1B, a scenario in which a mail is edited jointly by a mobile phone and a computer. In this scenario, the mobile phone 031 and the computer 032 serve as recording devices in the recording group 03. The mobile phone 031 converts the user's operation of inputting text or a picture into semantics and sends the semantics generated by the mobile phone 031 to the server 01, and the computer 032 converts the user's operation of inputting text into the mail into semantics and sends the semantics generated by the computer 032 to the server 01. Then, the server 01 distributes the received semantics to the mobile phone 021 and the computer 022 in the playback group 02 for playback, so that the mobile phone 021 and the computer 022 cooperate with each other to complete the mail editing.
For example, the foregoing automation scenario involving multiple devices may be a multi-terminal automation scenario involving audio-visual push. For example, as shown in fig. 1C, the mobile phone 033, as a recording device in the recording group 03, converts a series of operations such as video editing and debugging into semantics and reports the semantics to the server 01. Then, the server 01 distributes the received semantics to the computer 024 and the earphone 023 in the playback group 02 for playback, so that the computer 024 executes the semantics to play back the picture of the video, and the earphone 023 executes the semantics to play back the audio of the video.
For example, the foregoing automation scenario involving multiple devices may be an automation scenario between multiple production devices in the manufacturing domain. For example, as shown in fig. 1D, there may be a scenario in which a certain component is cooperatively manufactured by two production apparatuses. In this scenario, the user records the operations of the two production apparatuses (e.g., production apparatus A and production apparatus B) in virtual software through the computer 034 and reports the generated semantics to the server 01. Then, the server 01 distributes the semantics to the two production apparatuses, so that production apparatus A first performs punching processing on the component, production apparatus B then performs cutting processing on the component, and production apparatus A then performs further punching processing on the component. In this way, the two production apparatuses cooperate with each other to complete the manufacturing process.
It should be understood that the foregoing fig. 1B, 1C, and 1D are merely examples of automation scenarios involving multiple devices to which the playback method and the recording method of the present application may be applied. In practical applications, the playback method and the recording method described above may also be applied to single-device automation scenarios, for example, where the recording group and the playback group each contain only one device: the server 01 collects semantics from the recording device in the recording group, and then distributes the semantics to the playback device in the playback group. Details are not described herein.
As shown in fig. 2, when the playback method proposed in the present application involves a recording apparatus, the server, the recording apparatus, and the playback apparatus will perform the steps of:
step 201a, a recording device sends a registration message to a server; accordingly, the server receives a registration message from the recording device.
Step 201b, the playback device sends a registration message to the server; accordingly, the server receives a registration message from the playback device.
In this embodiment, the recording device can provide the information of the recording device to the server through the registration message, so that the recording device registers with the server, and the server manages the recording device based on the information of the recording device. Similarly, the playback device can provide information of the playback device to the server through the aforementioned registration message, so that the playback device is registered with the server, and further, the server manages the playback device based on the aforementioned information of the playback device.
Since the system of this embodiment includes both recording devices and playback devices, a registration message received by the server may come either from a recording device in the recording group or from a playback device in the playback group. Therefore, the server needs to determine, according to the content of each registration message, whether the device that sent the registration message is a recording device or a playback device.
Specifically, the aforementioned registration message includes a device identification of the device that sent the registration message. The device identification may be information capable of uniquely identifying a device, such as an internet protocol (internet protocol, IP) address, a device factory number, etc. Specifically, if the device sending the registration message is a recording device, the registration message carries a device identifier of the recording device; if the device sending the registration message is a playback device, the registration message carries a device identifier of the playback device. Thus, if the server knows the device identifier of the recording device and the device identifier of the playback device, the server can determine that the sender of the registration message is the recording device or the playback device through the content carried by the registration message.
Optionally, the foregoing registration message further includes a device type of the device. Specifically, when the device sending the registration message is a recording device, the device type carried in the registration message is the device type of the recording device; when the device sending the registration message is a playback device, the device type carried in the registration message is the device type of the playback device.
The device type is used for indicating the hardware type and/or the software type supported by the device.
Where the hardware type refers to the physical structure that the device has. Illustratively, in a User Interface (UI) based multi-terminal automation scenario, the hardware types include a touch screen, a keyboard, a mouse, and the like. For example, in a multi-terminal automation scenario involving audio-visual push, the hardware types include cameras, speakers, microphones, touch screens, and the like. Illustratively, in a manufacturing-based automation scenario, the hardware types include distance sensors, infrared thermometers, and the like.
In addition, the software type refers to a software function that the device can implement by running a program. Illustratively, in a multi-terminal automation scenario based on a user interface (UI), the software type includes a system type, which records which function interfaces the application calls, and a file type of the UI storage format. The system type mainly comprises Web applications that call container interfaces such as a browser, Windows desktop applications that call system interfaces, and Android/iOS/HarmonyOS native applications; the file types of the UI storage format mainly comprise application interfaces expressed based on a DOM tree, application interfaces supporting JSON-format storage, application interfaces supporting XML-format storage, application interfaces supporting PDF-format storage, and the like. Illustratively, in a manufacturing-based automation scenario, the software types include the supported host-computer types and the application types supported by the sensors. Illustratively, in a shortcut-based automation scenario, the software type includes the applications related to executing a certain service, e.g., search engine applications, shopping applications, and video playback applications.
It should be understood that, for different application fields and different application scenarios, the specific implementation of the foregoing hardware type is different, and the specific implementation of the foregoing software type is also different, and the specific application is not limited to the specific implementation of the hardware type and the specific implementation of the software type.
It should be understood that in this embodiment, the server may perform the foregoing steps 201a and 201b multiple times until the server has received the registration messages of all the recording devices in the recording group and the registration messages of all the playback devices in the playback group. In one implementation, the server may receive multiple registration messages in the same time period, e.g., the recording devices in the recording group and the playback devices in the playback group each send a registration message to the server within a certain preset time range. In another implementation, the server may first receive the registration messages from all the recording devices in the recording group, and then, at some later time, receive the registration messages from all the playback devices in the playback group.
Further, each time the server receives a registration message, the server determines, according to the content of the registration message, whether the device that sent the registration message is a recording device or a playback device.
In one possible implementation, the server stores a device list storing information of the recording group and information of the playback group. The information of the recording group comprises the equipment identification of each recording equipment in the recording group, and the information of the playback group comprises the equipment identification of each playback equipment in the playback group. Each time a registration message is received by the server, the server compares the device identification in the registration message with the device identification in the device list (i.e., the device identification of the recording device or the device identification of the playback device). When the equipment identification in the registration message is consistent with the equipment identification of the recording equipment, determining that the equipment for transmitting the registration message is the recording equipment; when the device identification in the registration message coincides with the device identification of the playback device, it is determined that the device that sent the registration message is the playback device.
It should be appreciated that the device list contains the device identification of each recording device in a recording group, and thus the server may determine not only whether the device sending the registration message is a recording device based on the registration message, but also whether the recording devices in the recording group are all registered with the server. Similarly, the device list contains the device identification of each playback device in a playback group, so the server can determine not only whether the device that sent the registration message is a playback device based on the registration message, but also whether all playback devices in the playback group are registered with the server.
By way of example, the list of devices may be as shown in Table 1-1 below:
TABLE 1-1
Recording group 1 LZ-01;LZ-02;LZ-03;
Playback group 1 HF-01;HF-02;HF-03;
In the device list shown in Table 1-1, the recording group 1 contains device identifiers of 3 recording devices, namely "LZ-01", "LZ-02" and "LZ-03", respectively; the playback group 1 contains device identifications of 3 playback devices, "HF-01", "HF-02", and "HF-03", respectively. If the device identifier carried in a registration message received by the server is "LZ-02", the server may determine that the device sending the registration message is a recording device in recording group 1. When the server acquires "LZ-01", "LZ-02", and "LZ-03" from the three registration messages received, respectively, the server may determine that all recording apparatuses in recording group 1 are registered.
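The lookup just described can be sketched as follows, using the identifiers of Table 1-1; the function and variable names are illustrative only:

```python
# Device list stored on the server (identifiers from Table 1-1).
RECORDING_GROUP_1 = {"LZ-01", "LZ-02", "LZ-03"}
PLAYBACK_GROUP_1 = {"HF-01", "HF-02", "HF-03"}

def classify(device_id: str) -> str:
    # Compare the device identification carried in a registration message
    # with the identifiers stored in the device list.
    if device_id in RECORDING_GROUP_1:
        return "recording device"
    if device_id in PLAYBACK_GROUP_1:
        return "playback device"
    return "unknown"

# After three registration messages carrying "LZ-01", "LZ-02", "LZ-03":
registered = {"LZ-01", "LZ-02", "LZ-03"}
all_recorders_registered = RECORDING_GROUP_1 <= registered  # subset test
```

The same subset test against PLAYBACK_GROUP_1 tells the server whether all playback devices in the playback group have registered.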
Optionally, when the registration message received by the server includes a device type of the device (for example, a device type of the recording device and a device type of the playback device), the server may store the device type correspondence in the device list, so that the subsequent process refers to the device type of the recording device and/or the device type of the playback device. Specifically, the server may store the device type of the recording device in correspondence with the device identifier of the recording device, and store the device type of the playback device in correspondence with the device identifier of the playback device.
For example, in a multi-terminal automation scenario based on human-machine interaction, the list of devices may be as shown in tables 1-2 below:
TABLE 1-2
(Table 1-2 is provided as an image in the original publication; it lists, for each device, the device identifier together with the corresponding device type.)
In step 202, the server obtains a first correspondence, where the first correspondence is used to indicate that the playback device can execute the semantics generated by the corresponding recording device.
In this embodiment, the first correspondence is a correspondence between indication information of each recording device in the recording group and indication information of a playback device in the playback group. Wherein the indication information comprises a device identification and/or a device type. Alternatively, the indication information may be other information capable of distinguishing between different recording devices, and other information capable of distinguishing between different playback devices.
In a possible implementation manner, the foregoing indication information is a device identifier, that is, the indication information of the recording device in the first correspondence is a device identifier of the recording device, and the indication information of the playback device in the first correspondence is a device identifier of the playback device. Taking the device identifier in the foregoing table 1-1 as an example, the first correspondence relationship includes: "LZ-01" corresponds to "HF-02"; "LZ-02" corresponds to "HF-03"; "LZ-03" corresponds to "HF-01".
This embodiment may be applied to a scenario in which the device types of the recording devices in the recording group are the same as or similar to the device types of the playback devices in the playback group. In this case, representing the first correspondence by device identifiers helps ensure that each recording device corresponds unambiguously to its playback device, avoiding confusion.
In another possible implementation manner, the foregoing indication information is a device type, that is, the indication information of the recording device in the first correspondence is a device type of the recording device, and the indication information of the playback device in the first correspondence is a device type of the playback device. Taking the device types in the foregoing tables 1-2 as an example, the first correspondence relationship includes: the 'recording device supporting Windows' corresponds to the 'playback device supporting Windows'; the recording equipment supporting Android corresponds to playback equipment supporting Android; the "IOS-enabled recording device" corresponds to the "IOS-enabled playback device". For another example, the first correspondence relationship includes: the recording device supporting IOS corresponds to the playback device supporting Windows; the recording equipment supporting Android corresponds to playback equipment supporting Android; the "Windows-enabled recording device" corresponds to the "IOS-enabled playback device". For another example, the first correspondence relationship includes: the recording device supporting the keyboard and the mouse corresponds to the playback device supporting the keyboard and the mouse; the recording device supporting the touch screen corresponds to the playback device supporting the touch screen; the "recording device supporting a speaker" corresponds to the "playback device supporting a speaker". In practical applications, there are other examples of using the device type to represent the first correspondence, which are not described herein.
In practical applications, the first correspondence may also be represented by at least one hardware type and/or at least one software type of the device types. For example, the first correspondence relationship includes: the recording device supporting the keyboard, the mouse and the xml format corresponds to the playback device supporting the keyboard, the mouse and the xml format; the recording device supporting the loudspeaker and the Android system corresponds to the playback device supporting the loudspeaker and the IOS system; the recording device supporting the touch screen and the Android system corresponds to the playback device supporting the touch screen and the Android system. And in particular are not listed here.
It should be appreciated that the first correspondence may be stored in the server in the form of an array, table, or other data structure capable of representing the association. The specific examples are not limited herein.
In this embodiment, the foregoing first correspondence may be preset by a person, or may be determined by the server according to a device type of the recording device and a device type of the playback device. The following description will be made respectively:
in one possible embodiment, the first correspondence is a preset correspondence. In this case, the step of obtaining the first correspondence by the server may be understood as the server reading the foregoing first correspondence from a storage medium, or the server obtaining the manually set first correspondence through an input/output (I/O) interface.
In another possible embodiment, the first correspondence is determined by the server based on a device type of the recording device and a device type of the playback device. Specifically, the server may obtain the device identifier and the device type of each recording device from the registration message in step 201a, and obtain the device identifier and the device type of each playback device from the registration message in step 201b, so as to obtain a device list including the device identifier of the recording device, the device type of the recording device, the device identifier of the playback device, and the device type of the playback device. Such as the list of devices shown in tables 1-2 above.
In one implementation, the server may determine that recording devices and playback devices supporting the same device type have a first correspondence. It is also understood that there is a non-empty intersection of the software types supported by the recording device and the software types supported by the corresponding playback device (i.e., the playback device having the first correspondence with the recording device); and/or, there is a non-null intersection of the hardware type supported by the recording device and the hardware type supported by the corresponding playback device (i.e., the playback device having the first correspondence with the recording device). For example, the server determines that the recording device supporting the Windows system in the recording group and the playback device supporting the Windows system in the playback group have a first correspondence. For another example, the server determines that a recording device of the recording group that supports a camera and a playback device of the playback group that supports a camera have a first correspondence. For another example, the server determines that a recording device of the recording group supporting speakers and a playback device of the playback group supporting speakers have a first correspondence. The specific examples are not limited herein.
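The non-empty-intersection criterion above can be sketched as a simple set test; the helper name and the example type sets are assumptions of this sketch:

```python
def has_first_correspondence(recorder_types: set, player_types: set) -> bool:
    # A recording device and a playback device have the first correspondence
    # when the types they support have a non-empty intersection.
    return bool(recorder_types & player_types)

# Illustrative device types (hardware and/or software types mixed in one set).
windows_recorder = {"Windows", "keyboard", "mouse"}
windows_player = {"Windows", "keyboard", "mouse"}
android_recorder = {"Android", "touch screen"}
```

In practice the server would run this test over every (recording device, playback device) pair from the device list and record the matching pairs.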
The first correspondence may be shown in the following table 2-1, for example:
TABLE 2-1
(Table 2-1 is provided as an image in the original publication; each of its rows pairs a recording device with the playback device having the first correspondence, together with their device types.)
In the example shown in Table 2-1, a recording device and a playback device located in the same row have the first correspondence. Taking the second row in Table 2-1 as an example, the recording device LZ-01 in the recording group, which supports the Windows system and supports a keyboard and a mouse, and the playback device HF-01 in the playback group, which supports the Windows system and supports a keyboard and a mouse, have the first correspondence. Taking the third row in Table 2-1 as an example, the recording device LZ-02 in the recording group, which supports the Android system and supports a touch screen, and the playback device HF-02 in the playback group, which supports the Android system and supports a touch screen, have the first correspondence. And so on; the remaining rows are not specifically recited herein.
Optionally, if the task list for recording the recording task of the recording device is stored in the server, the non-empty intersection is related to the recording task of the recording device. It is also understood that the type of software supported by the recording device and the type of software supported by the playback device are both related to the recording task; and/or, the hardware type supported by the recording device and the hardware type supported by the playback device are both related to the recording task.
The task list comprises task information of each recording device in the recording group, and the task information comprises task types of the recording devices and hardware types and/or software types required to be supported by the devices. Generally, different task types require the device to support different software types and/or hardware types. Specifically, the server will determine the recording device and the playback device having the first correspondence based on the task type of each task and the hardware type and/or software type that the task type requires device support. For example, for tasks involving photographing, video recording, and the like, it is required that both the recording device and the playback device support cameras. For another example, for tasks involving audio playback, it is required that both the recording device and the playback device support speakers. For another example, for tasks involving voice calls, it is required that both the recording device and the playback device support microphones. And in particular are not listed here.
It should be noted that the device type of the recording device and the device type of the playback device having the first correspondence are not necessarily identical. For example, for tasks involving photographing, video recording, and the like, both the recording device and the playback device are required to support cameras, but it is not necessarily required that both the recording device and the playback device support the same system type. At this time, it may be determined that the recording device supporting the Android system and supporting the camera and the playback device supporting the IOS system and supporting the camera have the first correspondence. For another example, for tasks involving mail editing, both the recording device and the playback device are required to support a certain mailbox application, but it is not necessarily required that both the recording device and the playback device support touch screen input. At this time, a mobile phone supporting the aforementioned mailbox application and supporting the touch screen input may be determined as the recording device, and a computer supporting the aforementioned mailbox application and supporting the keyboard input may be determined as the playback device.
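The task-constrained pairing described above can be sketched as follows; the function name and the example type sets are illustrative, and a photographing task is used as the sample constraint:

```python
def compatible_for_task(recorder: set, player: set, required: set) -> bool:
    # Every hardware/software type the task requires must be supported on
    # both the recording side and the playback side; other types (such as
    # the system type) are free to differ.
    return required <= recorder and required <= player

# Illustrative devices: an Android recorder and an iOS player, both with cameras.
android_camera_phone = {"Android", "camera", "touch screen"}
ios_camera_phone = {"IOS", "camera"}
```

For a photographing task requiring {"camera"} the two devices above are compatible even though their system types differ; for a voice-call task requiring {"microphone"} they are not.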
It should be appreciated that steps 201a, 201b and 202 may all be performed before step 203, i.e. before recording, both the recording device and the playback device are registered with the server and the server obtains the first correspondence. Step 201b and step 202 may also be performed after step 203 and before step 206, i.e. the recording devices in the recording group are first registered to the server and recorded under the instruction of the server, while the playback devices in the playback group only need to be registered to the server before the server distributes semantics.
Step 203, the server sends a first recording instruction to each recording device in the recording group; accordingly, the recording device receives the first recording instruction from the server.
The first recording instruction is used for instructing each recording device to start recording. After receiving the first recording instruction, the recording device starts recording. Here, starting recording refers to the process in which the recording device starts to detect the user's operations on the recording device and generates semantics based on the operations it detects.
Optionally, the first recording instruction includes system clock information of the server, where the system clock information is used to instruct each recording device in the recording group to perform clock synchronization according to a system clock of the server. Specifically, after each recording device receives the system clock information in the first recording instruction, the recording device may reset the system clock. This embodiment is advantageous for ensuring that the system clocks of the individual recording devices in the recording group are synchronized.
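One way such clock synchronization might work is sketched below; the message shape ("server_clock") and the use of a local offset are assumptions of this illustration, not details specified by the application:

```python
import time

class RecordingDevice:
    """Keeps a clock offset so that the time information it generates is
    expressed on the server's system clock."""
    def __init__(self) -> None:
        self.offset = 0.0

    def on_first_recording_instruction(self, instruction: dict) -> None:
        # Synchronize: local monotonic clock + offset == server clock.
        self.offset = instruction["server_clock"] - time.monotonic()

    def now(self) -> float:
        return time.monotonic() + self.offset

device = RecordingDevice()
device.on_first_recording_instruction({"server_clock": 1000.0})
```

Storing an offset, rather than literally resetting the hardware clock, is one common way to realize the "reset the system clock" step on platforms where the application cannot change the system time.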
Optionally, when the task list is stored in the server, and the task information in the task list includes a recording interface corresponding to each task, the first recording instruction further includes at least one recording interface, where the at least one recording interface is related to the task. The recording interface includes, for example, an I/O interface of a microphone, an I/O interface of a speaker, an I/O interface of a mouse, an I/O interface of a keyboard, etc., and is not limited thereto.
At step 204, the recording device generates at least one semantic item.
Wherein the foregoing semantics are information generated by the recording device describing the user's operations on the recording device. It is also understood that the foregoing semantics are an abstract expression of the recording device's operation on the recording device by the user. The semantics include time information, object information, and action information. The time information is used for indicating the moment when the recording device detects the operation, and can reflect when a user operates the recording device. The object information is used for indicating the object of the operation corresponding to the semantics, namely, which part on the recording device is operated by the user. The action information is used to indicate the content of the operation, i.e. which actions the user has operated on the recording device.
In this embodiment, after the recording device receives the first recording instruction, the recording device starts recording and generates semantics. Specifically, the recording device detects an operation of a user on the recording device through a recording interface, obtains original information corresponding to the operation, and generates a semantic item based on one or more pieces of the original information. It should be understood that if the first recording instruction includes at least one recording interface, the recording device only detects the operation of each of the at least one recording interface during recording. For example, if the first recording instruction includes the I/O interface of the mouse and the I/O interface of the keyboard, but does not include the I/O interface of the microphone and the I/O interface of the speaker, the recording device may only obtain information related to the user operation from the I/O interface of the mouse and the I/O interface of the keyboard, but not obtain information from the I/O interface of the microphone and the I/O interface of the speaker during recording.
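The interface-based filtering described above can be sketched as follows; the interface names and event shapes are illustrative only:

```python
# Recording interfaces carried in the first recording instruction
# (here: mouse and keyboard only, no microphone or speaker).
allowed_interfaces = {"mouse", "keyboard"}

# Raw events the device could observe during recording.
raw_events = [
    {"interface": "mouse", "info": "press left key"},
    {"interface": "microphone", "info": "audio frame"},
    {"interface": "keyboard", "info": "key A"},
]

# Keep only the events arriving through the listed recording interfaces.
recorded_events = [e for e in raw_events if e["interface"] in allowed_interfaces]
```

The microphone event is dropped, matching the behavior described above: interfaces not listed in the first recording instruction contribute no original information.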
In addition, each semantic item includes indication information of the recording device, and the indication information of the recording device in the semantic item is used for indicating which recording device the semantic item is generated by, so that the server distributes the semantic item to the corresponding playback device based on the indication information of the recording device in each semantic item.
In this embodiment, when detecting an operation of the user, the recording device may acquire original information corresponding to the operation, and then generate semantics based on the original information. Each piece of original information is information, detected by the recording device, related to the user's operation on the recording device, and includes an operation time, an operation object, and an operation type. The operation time is the time at which the recording device detects the user's operation on the recording device; the operation object is the object on the recording device, detected by the recording device, on which the user operates; the operation type is the type of action performed by the user, as detected by the recording device through the hardware and/or software supported by the recording device. The recording device then determines the aforementioned operation time as the time information of the semantics, determines the object information of the semantics from the operation object, and determines the action information of the semantics according to the operation type.
The recording device stores a mapping rule for determining the action information, where the mapping rule comprises correspondences between action features and a plurality of operation types, and each of the plurality of operation types satisfies its corresponding action feature. Specifically, according to the operation type and the mapping rule, the recording device determines the action feature corresponding to the operation type as the action information of the semantics. For example, if the plurality of operation types in the mapping rule include a first operation type and a second operation type, where the first operation type is that the user presses on the display screen through the touch screen and the second operation type is that the user presses the left key of the mouse, then the first action feature corresponding to both the first operation type and the second operation type is a press-down. For another example, if the plurality of operation types in the mapping rule include a third operation type and a fourth operation type, where the third operation type is sliding on the user interface in a first direction through the touch screen and the fourth operation type is scrolling the progress bar in the first direction through the mouse, then the second action feature corresponding to both the third operation type and the fourth operation type is moving in the first direction.
For ease of understanding, a multi-terminal automation scenario based on a user interface (UI) is taken as an example. If the recording device detects, through the I/O interface of the touch screen, that the user performs a press action at point A with coordinates (x, y) on the touch screen, and the current system time is T1, the recording device acquires the original information {T1; A(x, y); press on touch screen}. Here, "T1" is the operation time, "A(x, y)" is the operation object, and "press on the touch screen" is the operation type. The recording device then determines "T1" as the time information of the semantics, determines the element or picture located at coordinates A(x, y) as the object information, and determines the action information based on the operation type and the mapping rule.
For example, if the mapping rule stored in the recording apparatus at this time is as shown in table 3-1 below, the recording apparatus will determine the action information as "press down".
TABLE 3-1
Operation type | Action feature
Press on the display screen through the touch screen | Press down
Press the left key of the mouse on the display screen | Press down
Slide on the user interface in the first direction through the touch screen | Move in the first direction
Scroll the progress bar in the first direction through the mouse | Move in the first direction
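A mapping rule of this kind can be sketched as a simple lookup table that maps device-specific operation types to device-independent action features. The following is a minimal illustrative sketch; all string keys and identifiers here are assumptions for illustration, not names defined in the filing.

```python
# Sketch of the mapping rule in Table 3-1: each operation type detected by the
# recording device maps to an action feature shared by equivalent operations.
# All names are illustrative assumptions, not identifiers from the filing.
MAPPING_RULE = {
    "press on touch screen": "press_down",
    "press left mouse key": "press_down",
    "slide in first direction via touch screen": "move_in_first_direction",
    "scroll progress bar in first direction via mouse": "move_in_first_direction",
}

def action_feature(operation_type: str) -> str:
    """Return the action feature corresponding to a detected operation type."""
    return MAPPING_RULE[operation_type]
```

Because equivalent operations on different hardware (touch screen vs. mouse) map to the same action feature, a playback device with different hardware from the recording device can still execute the semantics.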
Illustratively, in a multi-terminal automation scenario based on a user interface UI, for example a UI-based front-end test scenario, some semantics generated by the recording device may be as shown in fig. 3. Here, "end_id: LZ-01" means that the device identification of the recording device is "LZ-01". "end_type: Android; file_type: XML" refers to the device type of the recording device, meaning that the recording task of the recording device needs to support the Android system type and the XML file type. "time: T1" represents the time information, meaning that the moment at which the recording device detects the user's operation on the recording device is T1. "object_type: element; object_value: {name: name_string; class: class_string; path: element_path_string}" represents the object information, where "object_type: element" means that the operated object is an element in the user interface, and "object_value: {name: name_string; class: class_string; path: element_path_string}" indicates how the aforementioned element is found in the user interface, which can also be understood as representing the features of the element. Specifically, "name: name_string" indicates that the name of the element in the system is "name_string"; "class: class_string" indicates that the element level of the element in the system is "class_string"; "path: element_path_string" indicates that the search path of the element in the system is "element_path_string". "action_type: point_down; action_value" represents the action information, where "point_down" indicates that the action feature is pressing down, and "action_value" is used to record other information related to the aforementioned action.
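The structure of such a semantic can be sketched as a plain record with the fields quoted above. The dictionary below is only an illustrative sketch mirroring the strings in the text; the nesting and placeholder values are assumptions, not a format prescribed by the filing.

```python
# Illustrative sketch of the semantic shown in fig. 3 as a plain dictionary.
# Field names mirror the strings quoted in the text; values are placeholders.
semantic = {
    "end_id": "LZ-01",                    # device identification of the recording device
    "end_type": {"system": "Android", "file_type": "XML"},
    "time": "T1",                         # moment the operation was detected
    "object_type": "element",             # the operated object is a UI element
    "object_value": {                     # features used to find the element
        "name": "name_string",
        "class": "class_string",
        "path": "element_path_string",
    },
    "action_type": "point_down",          # action feature: press down
    "action_value": None,                 # other information related to the action
}
```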
Illustratively, if a Backus–Naur form (BNF) is defined to represent the semantics generated by the recording device, those semantics may be represented by the paradigm and meaning set forth in Table 4-1 below.
TABLE 4-1
(Table 4-1, giving the BNF paradigm and meaning of the semantics, is presented as an image in the original publication.)
It should be understood that the examples set forth in fig. 3 and Table 4-1 above are but one of many possible examples of this embodiment. In practical applications, the foregoing semantics may also be represented in other manners. This embodiment does not limit the representation of the semantics.
In this embodiment, the recording device generates semantics based on the detected original information until the recording device receives a second recording instruction from the server, where the second recording instruction is used to instruct the recording device to stop recording; or until the recording device completes all operations in the recording task.
Step 205, the recording device sends the at least one semantic meaning to a server; accordingly, the server receives the aforementioned at least one semantic from the recording device.
In this embodiment, after the recording device generates the semantics, the recording device needs to send the semantics to the server. Specifically, the recording device may send each semantic to the server as soon as that semantic is generated; alternatively, the recording device may send all the semantics it generated to the server after it stops recording. This is not limited here.
It should be understood that, when the recording device sends the semantics to the server, the recording device may send the data packet carrying the semantics directly to the server, or the recording device may send the data packet carrying the semantics to a network element that relays messages (for example, a proxy network element or a switch), and the network element then forwards the data packet carrying the semantics to the server. This is not specifically limited here.
It will be appreciated that the server will receive at least one semantic from each recording device in the recording group and will obtain multiple semantics. For example, the plurality of semantics may form a semantic set that includes all semantics generated by each recording device in the recording group. For ease of description, semantic collections are described below as examples.
Step 206, the server sends the semantics, in chronological order according to the time information of each semantic, to the corresponding playback devices in the playback group.
In this step, the server first determines the playback device corresponding to each semantic (i.e., to which playback device in the playback group each semantic should be sent); then, the server parses the time information of each semantic and sorts all the semantics in the semantic set in chronological order; finally, the server sends the semantics in the semantic set to the corresponding playback devices in the playback group in that chronological order.
The server may determine a playback device corresponding to each semantic meaning according to the first correspondence described above. Since each semantic includes the indication information of the recording device and the first correspondence stores the correspondence between the indication information of each recording device in the recording group and the indication information of the playback device in the playback group, the server can determine the playback device corresponding to the semantic based on the indication information of the recording device and the first correspondence. For ease of understanding, taking the first correspondence as an example in table 2-1, if the server determines that the indication information of the recording device carried in the semantic 01 is "LZ-01", the server can determine that the playback device corresponding to the semantic 01 is a playback device whose indication information is "HF-01".
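The lookup through the first correspondence can be sketched as follows. The table content follows the "LZ-01" → "HF-01" example in the text; the function name, the second table entry, and the field name `end_id` are illustrative assumptions.

```python
# Sketch: resolving the target playback device via the first correspondence,
# which maps the indication information of each recording device in the
# recording group to that of a playback device in the playback group.
FIRST_CORRESPONDENCE = {"LZ-01": "HF-01", "LZ-02": "HF-02"}

def playback_device_for(semantic: dict) -> str:
    recording_id = semantic["end_id"]          # indication info of the recording device
    return FIRST_CORRESPONDENCE[recording_id]  # indication info of the playback device
```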
In addition, when the server sends the semantics in the semantic set to the corresponding playback devices, the server may send only one semantic to the corresponding playback device at a time, or may send a group of semantics (including a plurality of semantics) to a corresponding playback device at a time. Optionally, the server may use the receipt of a notification message from the playback device as the trigger for sending the next semantic (or the next group). The notification message is used to indicate that the playback device has performed the received semantics, which can also be understood as indicating that the playback device has successfully played back the received semantics.
In one possible implementation, the semantic set includes N semantics, where N is an integer greater than 1. The server sorts the semantics in the semantic set according to the time information of the N semantics to obtain a first semantic sequence determined by the N semantics. Then, the server sends the ith semantic in the first semantic sequence (where i is an integer greater than or equal to 1 and less than or equal to N) to the corresponding playback device in the playback group. The server then receives a notification message corresponding to the ith semantic, where this notification message is used to indicate that the ith semantic has been played back successfully.
In this embodiment, after the server sends one semantic to a playback device in the playback group, the server needs to receive, from that playback device, a notification message indicating successful playback before it triggers the sending of the next semantic. This helps the playback devices in the playback group execute the received semantics in order, and also allows the server to retransmit a semantic to a playback device that fails to send the notification message, so that each playback device in the playback group can execute its corresponding semantics, thereby ensuring that the entire playback task proceeds in order.
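The one-at-a-time dispatch described in this implementation can be sketched as a loop in which each send is gated by the playback-success notification. `send` and `wait_for_notification` are stand-in callables, not interfaces defined in the filing, and the `"time"` field name is an assumption.

```python
# Sketch of ack-gated dispatch: sort the N semantics by time information to
# form the first semantic sequence, then send the i-th semantic and wait for
# its playback-success notification before sending the (i+1)-th.
def dispatch_one_by_one(semantic_set, send, wait_for_notification):
    first_sequence = sorted(semantic_set, key=lambda s: s["time"])
    for semantic in first_sequence:       # i = 1..N in chronological order
        send(semantic)                    # to the corresponding playback device
        wait_for_notification(semantic)   # blocks until playback succeeded
    return first_sequence
```

A retransmission policy (resending a semantic whose notification never arrives) could be layered on top of `wait_for_notification`, e.g. via a timeout.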
In another possible implementation, the semantic set includes N semantics, where N is an integer greater than 1. The server sorts the semantics in the semantic set according to the time information of the N semantics to obtain a first semantic sequence determined by the N semantics. The server then divides the first semantic sequence into a plurality of second semantic sequences, where each second semantic sequence is a sequence formed by at least one consecutive semantic in the first semantic sequence that is to be performed by the same playback device. Then, the server sequentially sends the second semantic sequences to the corresponding playback devices in the playback group until the server has sent all the second semantic sequences determined based on the foregoing semantic set.
In this embodiment, it is proposed that the server transmits a set of semantics (i.e. a second sequence of semantics) to a playback device in the playback group at a time, the set of semantics being performed by a certain playback device. After the playback device has performed the aforementioned set of semantics, the server sends the next set of semantics to the playback device. The embodiment is beneficial to saving the signaling overhead between the server and the playback device. Further, since time information is contained in each of the semantics, the playback device can determine the order in which the respective semantics in the aforementioned set of semantics are executed based on the time information of the respective semantics in the set of semantics. Thus, the playback devices in the playback group can also ensure that the entire playback task proceeds in order.
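The splitting of the first semantic sequence into second semantic sequences amounts to grouping maximal runs of consecutive semantics destined for the same playback device. A minimal sketch, assuming each semantic carries `"time"` and `"device"` fields (illustrative names):

```python
from itertools import groupby

# Sketch: sort by time information to obtain the first semantic sequence, then
# split it into second semantic sequences, each a maximal run of consecutive
# semantics to be performed by the same playback device.
def second_sequences(semantic_set):
    first_sequence = sorted(semantic_set, key=lambda s: s["time"])
    return [list(run) for _, run in groupby(first_sequence, key=lambda s: s["device"])]
```

Because `groupby` only merges adjacent items with equal keys, a device that appears again later in the timeline starts a new second semantic sequence, preserving the overall chronological order.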
In this embodiment, time information is added to the semantics used for realizing automation, as a basis for the server to schedule and distribute a plurality of semantics. After the server acquires a plurality of semantics (for example, a semantic set), the server can distribute them to the corresponding playback devices in chronological order according to the time information of each semantic, so that each playback device can sequentially execute the semantics from the server, thereby realizing orderly playback. In this application, the server can replace manual operation in issuing the plurality of semantics in sequence, which avoids disorder in the sequence in which the playback devices in the system execute the semantics, and thus ensures that the automated flow of a system comprising a plurality of devices is realized.
As shown in fig. 4, when the playback method proposed in the present application involves only a playback device, the server and the playback device will perform the steps of:
in step 401, a server obtains a plurality of semantics.
The plurality of semantics may be input directly into the server by the user through the I/O interface, or may be read by the server from an external storage medium; this is not limited here. Illustratively, the plurality of semantics may form a semantic set. For ease of description, the semantic set is used as an example below.
Wherein each semantic is used to indicate an operation that needs to be performed on a playback device in a playback group, the operation comprising controlling one playback device in the playback group to interact with another playback device in the playback group.
The semantics include timing information, object information, and action information. The timing information is used to indicate the order of a semantic relative to the other semantics in the semantic set; for example, the timing information may be sequential Arabic numerals such as "1", "2", and "3". The object information is used to indicate the object of the operation corresponding to the semantic, that is, on which component of the playback device the playback device performs the operation according to the semantic. The action information is used to indicate the content of the operation, that is, which actions the playback device performs on that component according to the semantic.
Optionally, each of the foregoing semantics includes indication information of the playback device, and the indication information of the playback device in the semantics is used to indicate which playback device the semantics are to be executed by, so that the server distributes the semantics to the corresponding playback device based on the indication information of the playback device in each of the semantics.
Wherein the indication information of the playback device comprises a device identification of the playback device and/or a device type of the playback device.
In one possible implementation, the indication information of the playback device is a device identification of the playback device. At this time, the device identification of the playback device may be information capable of uniquely identifying one playback device, such as an IP address of the playback device, a factory number of the playback device, or the like.
In another possible implementation, the indication information of the playback device is a device type of the playback device, where the device type is used to indicate the software type and/or hardware type supported by the playback device. The hardware type refers to the physical structure that the device has. For example, in a multi-terminal automation scenario based on a user interface UI, the hardware types include a touch screen, a keyboard, a mouse, and the like. For example, in a multi-terminal automation scenario involving audio and video push, the hardware types include cameras, speakers, microphones, touch screens, and the like. Illustratively, in a manufacturing-based automation scenario, the hardware types include distance sensors, infrared thermometers, and the like. In addition, the software type refers to a software function that the device can implement by running a program. Illustratively, in a multi-terminal automation scenario based on a user interface UI, the software types include the system type, which records which function interfaces the application calls, and the file type of the UI storage format. The system types mainly include Web applications that call container interfaces such as a browser, Windows desktop applications that call the system interface, and Android/iOS/HarmonyOS native applications; the file types of the UI storage format mainly include application interfaces expressed based on a DOM tree, application interfaces supporting JSON-format storage, application interfaces supporting XML-format storage, application interfaces supporting PDF-format storage, and the like. Please refer to the description of the device types in the foregoing step 201a and step 201b; details are not repeated here.
It should be understood that, for different application fields and different application scenarios, the specific implementations of the foregoing hardware type and software type differ; this application does not limit the specific implementations of the hardware type or the software type.
In addition, the semantic representation in this embodiment may refer to the examples described in fig. 3 and table 4-1, and details thereof are not described herein.
In step 402, the server obtains information of a playback device.
In this embodiment, the server may obtain the information of the playback device by receiving a registration message from the playback device. Specifically, the server will receive one or more registration messages, each containing information of the device identification, device type, etc. of the playback device. The server then stores the device identification and device type of each playback device so that the server can obtain information for each playback device during subsequent distribution of semantics.
In step 403, the server sends the semantics to the corresponding playback devices in the playback group according to the sequence indicated by the timing information of each semantic.
In this step, the server will first determine the playback device to which each semantic meaning corresponds (i.e., to which playback device in the playback group each semantic meaning should be sent); then, the server analyzes the time sequence information of each semantic, and sends the semantic in the semantic set to the corresponding playback device in the playback group according to the sequence indicated by the time sequence information.
In this embodiment, the server has a plurality of ways to determine playback devices corresponding to each semantic, and the following descriptions are respectively provided:
in one possible implementation, each semantic includes indication information of a playback device, the indication information of the playback device in the semantic being used to indicate which playback device the semantic is to be executed by. Therefore, the server analyzes the received semantics to obtain the indication information of the playback devices in the semantics, then determines the playback devices corresponding to the semantics based on the indication information of the playback devices in the semantics, and further sends the semantics in the semantics set to the playback devices corresponding to each semantics.
In another possible embodiment, the server can obtain a second correspondence, where the second correspondence includes a correspondence between the timing information and indication information of the playback device, where the second correspondence is used to indicate that semantics including the timing information can be executed in the playback device indicated by the indication information of the playback device. Then, the server may determine a playback device corresponding to each of the semantics according to the timing information and the second correspondence relation of each of the semantics, and transmit the semantics to the corresponding playback devices in the playback group in an order indicated by the timing information.
The second correspondence may be shown in the following table 5-1, for example:
TABLE 5-1
Timing information | Indication information of playback device
1, 3, 5 | HF-01
2, 4, 6 | HF-02
In the example shown in Table 5-1, "HF-01" and "HF-02" represent device identifications of playback devices, and the Arabic numerals represent timing information. The second row indicates that the semantics whose timing information is "1", "3", and "5" should be sent to the playback device with device identification "HF-01" for execution; the third row indicates that the semantics whose timing information is "2", "4", and "6" should be sent to the playback device with device identification "HF-02" for execution.
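The second correspondence can be sketched as a lookup from timing information to the indication information of the playback device. The table content mirrors Table 5-1; the function name and the `"timing"` field name are illustrative assumptions.

```python
# Sketch of the second correspondence in Table 5-1: the timing information of a
# semantic determines which playback device should execute it.
SECOND_CORRESPONDENCE = {1: "HF-01", 3: "HF-01", 5: "HF-01",
                         2: "HF-02", 4: "HF-02", 6: "HF-02"}

def target_of(semantic: dict) -> str:
    """Return the indication information of the playback device for a semantic."""
    return SECOND_CORRESPONDENCE[semantic["timing"]]
```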
In addition, when the server sends the semantics in the semantic set to the corresponding playback devices, the server may send only one semantic to the corresponding playback device at a time, or may send a group of semantics (including a plurality of semantics) to a corresponding playback device at a time. Optionally, the server may use the receipt of a notification message from the playback device as the trigger for sending the next semantic (or the next group). The notification message is used to indicate that the playback device has performed the received semantics, which can also be understood as indicating that the playback device has successfully played back the received semantics. For details, please refer to the related description in step 207; details are not repeated here.
In this embodiment, timing information is added to the semantics used for realizing automation, as a basis for the server to schedule and distribute a plurality of semantics. After the server acquires the plurality of semantics, the server can determine the order of distribution according to the timing information of each semantic and then distribute the plurality of semantics to the corresponding playback devices in the order indicated by the timing information, so that each playback device can sequentially execute the semantics from the server, thereby realizing orderly playback. In this application, the server can replace manual operation in issuing the plurality of semantics in sequence, which avoids disorder in the sequence in which the playback devices in the system execute the semantics, and thus ensures that the automated flow of a system comprising a plurality of devices is realized.
In addition, as shown in fig. 5, the present application further provides a server 50, and fig. 5 is a schematic structural diagram of the server 50 provided in the present application. The server can be a cloud server or a local server. The server may provide services to the user as SaaS (Software as a Service). A user may obtain services of server 50 through web access, application software, plug-ins, open source code or toolkits, etc. on the recording device. It should be understood that the servers in the foregoing embodiments of the method corresponding to fig. 2 or fig. 4 may be based on the structure of the server 50 shown in fig. 5 in this embodiment.
As shown in fig. 5, the server 50 may include a processor 510, a memory 520, and a transceiver 530. Wherein the processor 510 is coupled to the memory 520, the processor 510 is coupled to the transceiver 530.
The transceiver 530 may be referred to as a transceiver unit, a transceiver device, or the like. Alternatively, the device for implementing the receiving function in the transceiver unit may be regarded as a receiving unit, and the device for implementing the transmitting function may be regarded as a transmitting unit; that is, the transceiver unit includes a receiving unit and a transmitting unit. The receiving unit may also be referred to as a receiver, an input port, a receiving circuit, or the like, and the transmitting unit may be referred to as a transmitter, a transmitting circuit, or the like. In this application, the transceiver 530 may receive a registration message (for example, a registration message from a recording device or from a playback device). The transceiver 530 may also send a message, for example a first recording instruction or a second recording instruction, to the recording device.
The aforementioned processor 510 may be a central processing unit (central processing unit, CPU), application-specific integrated circuit (ASIC), programmable logic device (programmable logic device, PLD), or a combination thereof. The PLD may be a complex programmable logic device (complex programmable logic device, CPLD), a field-programmable gate array (field-programmable gate array, FPGA), general-purpose array logic (generic array logic, GAL), or any combination thereof. Processor 510 may refer to one processor or may include multiple processors, and is not limited in this regard.
In addition, the aforementioned memory 520 is mainly used to store software programs and data. The memory 520 may be separate and coupled to the processor 510; alternatively, the memory 520 may be integrated with the processor 510, for example within one or more chips. The memory 520 can store the program code that implements the technical solutions of the embodiments of this application, with execution controlled by the processor 510; the various types of executed computer program code may also be regarded as drivers of the processor 510. The memory 520 may include volatile memory, such as random-access memory (RAM); it may also include non-volatile memory (non-volatile memory), such as read-only memory (ROM), flash memory, a hard disk drive (HDD), or a solid-state drive (SSD); the memory 520 may also include a combination of the above types of memory. The memory 520 may refer to one memory or may include a plurality of memories. Illustratively, the memory 520 is configured to store information of the recording devices (e.g., the device identification and device type of each recording device) and information of the playback devices (e.g., the device identification and device type of each playback device). The memory 520 is further configured to store the first correspondence or the second correspondence described above. The first correspondence is a correspondence between the indication information of each recording device in the recording group and the indication information of a playback device in the playback group, and is used to indicate that the playback device can execute the semantics generated by the corresponding recording device.
The second correspondence includes a correspondence between timing information and indication information of a playback device, the second correspondence being for indicating that semantics including the timing information can be run in the playback device indicated by the indication information of the playback device.
In one implementation, memory 520 has stored therein computer readable instructions comprising a plurality of software modules, such as a transmit module 521, a process module 522, and a receive module 523. Processor 510, upon execution of the various software modules, may perform the corresponding operations as directed by the various software modules. In this embodiment, the operations performed by one software module actually refer to operations performed by the processor 510 according to instructions of the software module.
Specifically, when the server 50 is used to perform the method in the foregoing corresponding embodiment of fig. 2, the main functions of the transmitting module 521, the processing module 522, and the receiving module 523 in the server 50 are as follows:
the receiving module 523 is configured to receive at least one semantic item from each recording device in the recording group. The processing module 522 is configured to determine a time sequence of each semantic item according to the time information of each semantic item, and control the transceiver module to send the semantic item to a corresponding playback device in the playback group according to the time sequence. Each piece of semantic is information describing an operation of a user on the recording device, the semantic comprises time information, object information and action information, the time information is used for indicating the moment when the recording device detects the operation, the object information is used for indicating an object of the operation corresponding to the semantic, and the action information is used for indicating the content of the operation.
In a possible implementation manner, the memory 520 is configured to store a first correspondence between the indication information of each recording device in the recording group and the indication information of the playback device in the playback group, where the first correspondence is used to indicate that the playback device can execute the semantics generated by the corresponding recording device.
Optionally, the indication information includes a device identifier and/or a device type, where the device type is used to indicate a software type and/or a hardware type supported by the recording device.
Optionally, a non-empty intersection exists between the software type supported by the recording device and the software type supported by the corresponding playback device; and/or, the hardware type supported by the recording device has a non-empty intersection with the hardware type supported by the corresponding playback device.
In a possible implementation, when each semantic includes the indication information of the recording device, the processing module 522 is specifically configured to send the semantics, in chronological order, to the corresponding playback devices in the playback group according to the time information of each semantic, the indication information of the recording device in each semantic, and the first correspondence.
In one possible implementation, the memory 520 in the server 50 is also used to store a device list. The device list stores information of the recording group and information of the playback group, where the information of the recording group includes the device identification of each recording device in the recording group, and the information of the playback group includes the device identification of each playback device in the playback group. In this case, the receiving module 523 is further configured to receive a plurality of registration messages, each registration message including the device identification of the device that sent it; the processing module 522 is further configured to determine, according to the device list and the device identification in the registration message, whether that device is a recording device or a playback device.
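The role determination against the device list can be sketched as a membership check. The device identifiers reuse the "LZ-"/"HF-" examples from earlier in the text; the structure and function name are illustrative assumptions.

```python
# Sketch: the device list names the members of the recording group and the
# playback group; a registration message's device identification decides
# which role the registering device has.
DEVICE_LIST = {"recording_group": {"LZ-01", "LZ-02"},
               "playback_group": {"HF-01", "HF-02"}}

def classify(registration: dict) -> str:
    device_id = registration["device_id"]
    if device_id in DEVICE_LIST["recording_group"]:
        return "recording device"
    if device_id in DEVICE_LIST["playback_group"]:
        return "playback device"
    raise ValueError(f"device not in device list: {device_id}")
```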
In a possible embodiment, the registration message further includes a device type of the device, the device type of the device including a device type of the recording device and a device type of the playback device. When the registration message further includes a device type of the device, the storage module in the server is further configured to store the device type of the recording device in correspondence with the device identifier of the recording device; and storing the device type of the playback device in correspondence with the device identification of the playback device.
In a possible implementation manner, the sending module 521 is further configured to send a first recording instruction to each recording device in the recording group, where the first recording instruction is used to instruct each recording device to start recording, and the first recording instruction includes system clock information of the server, where the system clock information is used to instruct each recording device in the recording group to perform clock synchronization according to a system clock of the server.
In a possible implementation manner, the sending module 521 is further configured to send a second recording instruction to at least one recording device in the recording group, where the second recording instruction is used to instruct the recording device to stop recording.
Specifically, when the server 50 is used to perform the method in the foregoing corresponding embodiment of fig. 4, the main functions of the sending module 521, the processing module 522, and the receiving module 523 in the server 50 are as follows: the processing module 522 controls the external interface to obtain a plurality of semantics, and controls the sending module 521 to send the semantics to the corresponding playback devices in the playback group in the order indicated by the timing information of each semantic item. Each semantic item indicates an operation that needs to be performed on a playback device in the playback group, where the operation includes controlling one playback device in the playback group to interact with another playback device in the playback group. In addition, each semantic item includes timing information, object information, and action information: the timing information indicates the order of the semantic item relative to the other semantics among the plurality of semantics, the object information indicates the object of the operation corresponding to the semantic item, and the action information indicates the content of the operation.
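The semantic structure and timing-ordered dispatch described above can be sketched as follows. Field and function names are illustrative assumptions; the patent does not prescribe a concrete data format.

```python
# Hypothetical data shape for a "semantic" and the timing-ordered dispatch
# loop: semantics are sent in the order given by their timing information.
from dataclasses import dataclass

@dataclass
class Semantic:
    timing: int   # timing information: order relative to the other semantics
    obj: str      # object information: target of the operation
    action: str   # action information: content of the operation

def dispatch(semantics, send):
    # Send each semantic item in the order indicated by its timing information.
    for sem in sorted(semantics, key=lambda s: s.timing):
        send(sem)

sent = []
dispatch([Semantic(2, "buttonB", "click"), Semantic(1, "buttonA", "click")],
         sent.append)
print([s.obj for s in sent])  # ['buttonA', 'buttonB']
```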
In one possible implementation, the memory 520 in the server 50 stores indication information for each playback device in the playback group, each of the indication information for identifying one playback device in the playback group.
In a possible implementation manner, each semantic item further comprises indication information of the playback device, and the indication information of the playback device in the semantic item is used for indicating the playback device corresponding to the semantic item.
In a possible implementation manner, the processing module 522 is configured to control the transceiver module to send the semantics to the playback device indicated by the indication information of the playback device according to the timing information of each of the semantics and the indication information of the playback device in each of the semantics.
In one possible implementation, the processing module 522 controls the external interface to obtain a second correspondence, where the second correspondence includes a correspondence between the timing information and the indication information of the playback device, and indicates that a semantic item containing the timing information can be executed on the playback device indicated by the indication information. The processing module 522 is further configured to determine the playback device corresponding to each semantic item according to the timing information of each semantic item and the second correspondence, and to control the sending module 521 to send each semantic item to the corresponding playback device in the playback group in the order indicated by the timing information.
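A minimal sketch of routing via the second correspondence is shown below; the mapping contents and identifiers are assumptions for illustration only.

```python
# Sketch of the second correspondence: a mapping from timing information to
# the indication information of the playback device that should execute the
# semantic item with that timing.
second_correspondence = {
    1: "play-001",  # the semantic item with timing 1 runs on device play-001
    2: "play-002",
    3: "play-001",
}

def route(semantics):
    """Yield (playback_device, semantic) pairs in timing order."""
    for sem in sorted(semantics, key=lambda s: s["timing"]):
        yield second_correspondence[sem["timing"]], sem

pairs = list(route([{"timing": 2, "action": "swipe"}, {"timing": 1, "action": "tap"}]))
print([device for device, _ in pairs])  # ['play-001', 'play-002']
```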
In a possible embodiment, the indication information of the playback device includes a device identification of the playback device and/or a device type of the playback device, where the device type of the playback device is used to indicate a software type and/or a hardware type supported by the playback device.
The rest may refer to the method of the server in the corresponding embodiment of fig. 2 or fig. 4, which is not described herein again.
Illustratively, when the foregoing server 50 is applied in a multi-terminal automation scenario based on a user interface UI, the processing module 522 in the server 50 may be functionally divided into the structures shown in fig. 6. The processing module 522 includes a recording controller 5223, a semantic ordering scheduling module 5222, and a playback controller 5221.
The recording controller 5223 is a functional module for controlling the recording process of the recording apparatus. Before recording is performed, each recording apparatus in the recording group is registered through the recording controller 5223, and the server 50 receives a registration message through the receiving module 523. Wherein the registration message includes a device identification of the recording device and a device type of the recording device. Optionally, the registration message further includes system clock information of the recording device. In the recording process, the recording controller 5223 issues an instruction for starting recording (i.e., a first recording instruction) and an instruction for stopping recording (i.e., a second recording instruction), so as to obtain semantics generated by each recording device in the recording group, and further obtain a semantic set including multiple semantics.
The semantic ordering scheduling module 5222 is the data storage and maintenance structure of the server 50. It sorts the semantics in the semantic set according to their time information and then distributes them in chronological order during the playback stage. In addition, the semantic ordering scheduling module 5222 stores a first correspondence between the indication information of each recording device in the recording group and the indication information of a playback device in the playback group, where the first correspondence indicates that the playback device can execute the semantics generated by the corresponding recording device.
The playback controller 5221 is a functional module that controls the playback process. Before playback begins, each playback device in the playback group is registered through the playback controller 5221, and the server 50 receives the registration message through the receiving module 523. The registration message includes a device identifier of the playback device and a device type of the playback device. Optionally, the registration message further includes system clock information of the playback device. During playback, the playback controller 5221 distributes the semantics in the foregoing semantic set to the respective playback devices in the playback group in chronological order. Optionally, the playback controller 5221 may also collect a notification message returned by a playback device, where the notification message indicates that the playback device has executed a certain semantic item, so that the playback controller 5221 controls the sending module 521 to send the next semantic item.
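The notification-driven pacing of the playback controller can be sketched as below. The callback shape and names are assumptions; this only illustrates the "send one semantic item, wait for its completion notification, send the next" loop described above.

```python
# Illustrative sketch of playback-controller pacing: each semantic item is
# sent only after the previous one's completion notification is received.
from collections import deque

def playback_loop(ordered_semantics, send, wait_for_notification):
    pending = deque(ordered_semantics)
    while pending:
        sem = pending.popleft()
        send(sem)
        wait_for_notification(sem)  # blocks until the device reports completion

log = []
playback_loop(["sem1", "sem2"],
              send=lambda s: log.append(("sent", s)),
              wait_for_notification=lambda s: log.append(("done", s)))
print(log)  # [('sent', 'sem1'), ('done', 'sem1'), ('sent', 'sem2'), ('done', 'sem2')]
```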
As shown in fig. 7, a schematic structural diagram of a device 70 is provided herein. The device 70 may be a recording device or a playback device. The device may be a mobile terminal (MT) such as a mobile phone or a tablet computer (Pad); it may also be a virtual reality (VR) terminal device or an augmented reality (AR) terminal device; or it may be a terminal in another field with automation requirements, such as industrial control, self-driving, remote medical surgery, smart grid, transportation safety, smart city, or smart home. It should be appreciated that the foregoing device 70 may also be any other device that can execute scripts or semantics, or that has an automation requirement. The devices (e.g., the recording device or the playback device) in the foregoing method embodiments of fig. 2 and fig. 4 may each be based on the structure of the device 70 shown in fig. 7.
The device 70 comprises at least one processor 701 and at least one memory 702. It should be appreciated that fig. 7 shows only one processor 701 and one memory 702.
The processor 701 may be a general-purpose central processing unit (CPU), a microprocessor, a network processor (NP), an application-specific integrated circuit, or one or more integrated circuits for controlling program execution in the present application. The processor 701 may be a single-core (single-CPU) processor or a multi-core (multi-CPU) processor, and may refer to one or more devices, circuits, and/or processing cores for processing data (e.g., computer program instructions). Further, the processor 701 may be a single semiconductor chip, or may be integrated with other circuits into a single semiconductor chip; for example, it may form a system-on-a-chip (SoC) with other circuits (such as a codec circuit, a hardware accelerator circuit, or various bus and interface circuits), or it may be integrated into an application-specific integrated circuit (ASIC) as a built-in processor of the ASIC, and the ASIC with the integrated processor may be packaged separately or together with other circuits.
The memory 702 may be a read-only memory (ROM) or another type of static storage device capable of storing static information and instructions, a random access memory (RAM) or another type of dynamic storage device capable of storing information and instructions, or an electrically erasable programmable read-only memory (EEPROM), without limitation. The memory 702 may be separate from but coupled to the processor 701. Alternatively, the memory 702 may be integrated with the processor 701, for example, within one or more chips.
In addition, the memory 702 is further configured to store program code for executing the technical solutions of the embodiments of the present application, for example, the mapping rule described in the foregoing embodiments, which includes a correspondence between an action feature and a plurality of operation types, each of which satisfies the action feature.
The device 70 further comprises a communication interface 703, the communication interface 703 being adapted to communicate with a server such that the device can obtain the first recording instruction or the second recording instruction from the server. The device 70 may also send registration messages to the server via the communication interface 703.
The rest may refer to the methods of the recording device and the playback device in the corresponding embodiments of fig. 2 and fig. 4, which are not described herein.
When the aforementioned device 70 is a recording device, the processor 701 is capable of performing at least the functions of a monitor and an analyzer. The monitor monitors operations on interfaces such as a keyboard/keys, a mouse, and a touch screen through the I/O interface to obtain raw information, and then transmits the raw information to the analyzer. The analyzer generates semantics from the raw information by looking up and analyzing the mapping rules stored in the recording device. For the specific manner in which the recording device generates the semantics, refer to the description of step 205, which is not repeated here.
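The analyzer step can be sketched as below, assuming raw information shaped as (operation time, operation object, operation type) and a mapping rule from action features to the operation types that satisfy them. The rule contents and names are illustrative assumptions, not the patent's actual rules.

```python
# Hypothetical analyzer sketch: raw information is mapped to a semantic item
# by finding the action feature whose operation-type set contains the
# detected operation type (the "mapping rule" of the embodiments).
MAPPING_RULE = {
    # action feature -> operation types that satisfy it
    "click": {"mouse_left_down_up", "touch_tap", "enter_key"},
    "drag":  {"mouse_move_pressed", "touch_swipe"},
}

def analyze(raw: dict) -> dict:
    for feature, op_types in MAPPING_RULE.items():
        if raw["operation_type"] in op_types:
            return {"time": raw["operation_time"],      # time information
                    "object": raw["operation_object"],  # object information
                    "action": feature}                  # action information
    raise ValueError("no action feature matches this operation type")

sem = analyze({"operation_time": 1234, "operation_object": "loginButton",
               "operation_type": "touch_tap"})
print(sem["action"])  # click
```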
When the aforementioned device 70 is a playback device, the processor 701 is capable of performing at least the functions of a parser and a UI driver. The parser converts each received semantic item into an instruction based on the semantic item and the mapping rules, and then transmits the instruction to the UI driver. The UI driver reproduces the UI interaction scenario based on the instruction.
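The parser step on the playback side can be sketched as follows; the local instruction names are assumptions for illustration, showing how a device-specific mapping lets the same semantic item drive different hardware.

```python
# Sketch of a playback-side parser: a received semantic item is converted
# into a concrete instruction for the local UI driver using this device's
# own mapping rule.
LOCAL_MAPPING = {
    # action feature -> instruction this device's UI driver understands
    "click": "touch_tap",
    "drag": "touch_swipe",
}

def parse(semantic: dict) -> dict:
    return {
        "instruction": LOCAL_MAPPING[semantic["action"]],
        "target": semantic["object"],
    }

print(parse({"action": "click", "object": "loginButton"}))
# {'instruction': 'touch_tap', 'target': 'loginButton'}
```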
It will be appreciated that, in implementation, the steps of the foregoing methods may be performed by integrated logic circuitry in hardware in a processor or by instructions in the form of software. The steps of a method disclosed in connection with the embodiments of the present application may be embodied directly as being executed by a hardware processor, or executed by a combination of hardware and software modules in the processor. The software module may be located in a storage medium well known in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory, and the processor reads the information in the memory and performs the steps of the foregoing methods in combination with its hardware. To avoid repetition, a detailed description is not provided here. It should also be understood that the terms "first", "second", "third", "fourth", and the various numerical designations herein are merely for descriptive convenience and are not intended to limit the scope of the embodiments of the present application.
Furthermore, the present application provides a computer program product including one or more computer instructions. When the computer instructions are loaded and executed on a computer, the procedures or functions according to the embodiments of the present application are produced in whole or in part. For example, the server-related method of the foregoing fig. 2 or fig. 4 is implemented; as another example, the method related to the recording device of fig. 2 is implemented. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center in a wired (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless (e.g., infrared, radio, microwave) manner. The computer-readable storage medium may be any available medium accessible to a computer, or a data storage device such as a server or a data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, a hard disk, a magnetic tape), an optical medium (e.g., a digital versatile disc (DVD)), a semiconductor medium (e.g., a solid state disk (SSD)), or the like.
Furthermore, the present application also provides a computer-readable storage medium storing a computer program to be executed by a processor to implement the server-related method as in the previous fig. 2 or fig. 4.
Furthermore, the present application also provides a computer readable storage medium storing a computer program to be executed by a processor to implement a method related to a recording apparatus as in the foregoing fig. 2.
It should be understood that the term "and/or" in this application merely describes an association between associated objects, indicating that three relationships may exist; for example, A and/or B may represent: A alone, both A and B, and B alone. In addition, the character "/" herein generally indicates an "or" relationship between the associated objects.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, which are not repeated herein.
The above embodiments are only for illustrating the technical solution of the present application, and are not limiting thereof; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the corresponding technical solutions.

Claims (28)

1. A playback method, comprising:
a server receives at least one semantic item from each recording device in a recording group, where each semantic item is information describing a user's operation on the recording device, the operation includes controlling one recording device in the recording group to interact with another recording device in the recording group, and the semantic item includes time information, object information, and action information, where the time information indicates the moment at which the recording device detects the operation, the object information indicates the object of the operation corresponding to the semantic item, and the action information indicates the content of the operation;
and the server sends the semantics to the corresponding playback devices in the playback group in chronological order according to the time information of each semantic item.
2. The method of claim 1, wherein the server stores a first correspondence between indication information for each recording device in the recording group and indication information for a playback device in the playback group, the first correspondence being used to indicate that the playback device is capable of executing semantics generated by the corresponding recording device.
3. The method of claim 2, wherein the indication information comprises a device identification and/or a device type, the device type indicating the type of software and/or hardware supported by the recording device.
4. The method of claim 3, wherein the recording device supports software types that have a non-empty intersection with corresponding playback device supported software types; and/or, the hardware type supported by the recording device and the hardware type supported by the corresponding playback device have a non-empty intersection.
5. The method according to any one of claims 2 to 4, wherein each of said semantics includes indication information of said recording apparatus;
the server sends the semantics to the corresponding playback devices in the playback group according to the time information of each piece of semantics and the time sequence, and the method comprises the following steps:
the server sends the semantics to the corresponding playback devices in the playback group according to the time sequence according to the time information of each semantic, the indication information of the recording device in each semantic and the first corresponding relation.
6. The method according to any one of claims 2 to 5, wherein the server stores a device list storing information of a recording group including a device identification of each recording device in the recording group and information of the playback group including a device identification of each playback device in the playback group;
Before the server receives at least one semantic from each recording device in the recording group, the method further comprises:
the server receives a plurality of registration messages, each registration message comprising a device identification of a device that sent the registration message;
and the server determines that the equipment is recording equipment or playback equipment according to the equipment list and the equipment identification of the equipment in the registration message.
7. The method of claim 6, wherein the registration message further comprises a device type of the device, the device type of the device comprising a device type of the recording device and a device type of the playback device;
the method further comprises the steps of:
the server correspondingly stores the equipment type of the recording equipment and the equipment identifier of the recording equipment;
the server stores the device type of the playback device in correspondence with the device identification of the playback device.
8. The method of claim 6 or 7, wherein after the server receives the plurality of registration messages, the method further comprises, before the server receives at least one semantic from each recording device in the recording group:
The server sends a first recording instruction to each recording device in the recording group, wherein the first recording instruction is used for indicating each recording device to start recording, the first recording instruction comprises system clock information of the server, and the system clock information is used for indicating each recording device in the recording group to perform clock synchronization according to the system clock of the server.
9. The method of claim 8, wherein after the server sends a first recording instruction to each of the recording devices in the recording group, the method further comprises:
the server sends a second recording instruction to at least one recording device in the recording group, wherein the second recording instruction is used for indicating the recording device to stop recording.
10. A playback method, comprising:
a server acquires a plurality of semantics, where each semantic item indicates an operation to be performed on a playback device in a playback group, the operation includes controlling one playback device in the playback group to interact with another playback device in the playback group, and the semantic item includes timing information, object information, and action information, where the timing information indicates the order of the semantic item relative to the other semantics among the plurality of semantics, the object information indicates the object of the operation corresponding to the semantic item, and the action information indicates the content of the operation;
and the server sends the semantics to the corresponding playback devices in the playback group in the order indicated by the timing information of each semantic item.
11. The method of claim 10, wherein the server stores indication information for each playback device in the playback group, the indication information for each playback device identifying one playback device in the playback group.
12. The method of claim 11, wherein each of the semantics further includes indication information of the playback device, the indication information of the playback device in the semantics being used to indicate the playback device corresponding to the semantics.
13. The method of claim 12, wherein the server transmitting the semantics to a corresponding playback device in the playback group according to the order indicated by the timing information for each of the semantics, comprising:
the server sends the semantics to the playback device indicated by the indication information of the playback device according to the sequence indicated by the sequence information according to the sequence information of each semantic and the indication information of the playback device in each semantic.
14. The method of claim 11, wherein before the server sends the semantics to a corresponding playback device in the playback group according to the order indicated by the timing information for each of the semantics, the method further comprises:
The server acquires a second corresponding relation, wherein the second corresponding relation comprises a corresponding relation between the time sequence information and the indication information of the playback equipment, and the second corresponding relation is used for indicating that the semantics containing the time sequence information can run in the playback equipment indicated by the indication information of the playback equipment;
the server sends the semantics to the corresponding playback devices in the playback group according to the sequence indicated by the time sequence information of each semantic, and the method comprises the following steps:
and the server sends the semantics to the corresponding playback devices in the playback group according to the sequence indicated by the time sequence information according to the time sequence information of each piece of semantics and the second corresponding relation.
15. The method according to any one of claims 11 to 14, wherein the indication information of the playback device includes a device identification of the playback device and/or a device type of the playback device, the device type of the playback device being used to indicate a software type and/or a hardware type supported by the playback device.
16. A recording method, comprising:
generating at least one semantic item by the recording device, wherein each semantic item is information which is generated by the recording device and used for describing the operation of a user on the recording device, the semantic item comprises time information, object information and action information, the time information is used for indicating the moment when the recording device detects the operation, the object information is used for indicating an object of the operation corresponding to the semantic item, and the action information is used for indicating the content of the operation;
The recording device sends the at least one semantic item to a server, so that the server sends the semantic item to a corresponding playback device in a playback group according to the time information of each semantic item in time sequence.
17. The method of claim 16, wherein the recording device is one of a plurality of recording devices in a recording group, the operations comprising controlling one recording device in the recording group to interact with another recording device in the recording group.
18. The method of claim 16 or 17, wherein the recording device generates at least one semantic item comprising:
the recording device acquires at least one piece of original information, wherein each piece of original information is information which is detected by the recording device and related to the operation of a user on the recording device, and each piece of original information comprises operation time, an operation object and an operation type; the operation time is the time when the recording equipment detects the operation of the user on the recording equipment, the operation object is an object which is detected by the recording equipment and operated by the user on the recording equipment, and the operation type is the action type which is detected by the recording equipment and executed by the user through hardware and/or software supported by the recording equipment;
The recording device determines the operation time as the semantic time information;
the recording device determines the semantic object information according to the operation object;
and the recording equipment determines the action information of the semantics according to the operation type.
19. The method of claim 18, wherein the recording device determining action information of the semantics based on the operation type comprises:
the recording device determines that the action characteristic corresponding to the operation type is action information of the semantic according to the operation type and a mapping rule, wherein the mapping rule comprises a corresponding relation between one action characteristic and a plurality of operation types, and each operation type in the plurality of operation types meets the action characteristic.
20. The method of any of claims 16 to 19, wherein the semantics further comprise a device type of the recording device.
21. The method of any one of claims 16 to 20, wherein before the recording device generates at least one semantic item, the method further comprises:
the recording device sends a registration message to the server, wherein the registration message comprises indication information of the recording device, and the indication information of the recording device comprises a device type of the recording device and a device identifier of the recording device.
22. The method of claim 21, wherein after the recording device sends a registration message to the server, the method further comprises:
the recording device receives a first recording instruction from the server, wherein the first recording instruction is used for indicating the recording device to start recording, the first recording instruction comprises system clock information of the server, and the system clock information is used for indicating each recording device in the recording group to perform clock synchronization according to the system clock of the server.
23. The method of claim 22, wherein after the recording device receives the first recording instruction from the server, the method further comprises:
the recording device receives a second recording instruction from the server, wherein the second recording instruction is used for indicating the recording device to stop recording.
24. A server comprising a processor and a memory;
wherein the memory stores a computer program;
the processor invokes the computer program to cause the server to perform the method of any one of claims 1 to 9 or to perform the method of any one of claims 10 to 15.
25. A recording apparatus comprising a processor and a memory;
wherein the memory stores a computer program;
the processor invokes the computer program to cause the recording device to perform the method of any one of claims 16 to 23.
26. An automated system, comprising:
a playback device, a server configured to perform the method of any one of claims 1 to 9, and a recording device configured to perform the method of any one of claims 16 to 23; or
a playback device and a server configured to perform the method of any one of claims 10 to 15.
27. A computer readable storage medium storing instructions which, when run on a computer, cause the computer to perform the method of any one of claims 1 to 9, or to perform the method of any one of claims 10 to 15, or to perform the method of any one of claims 16 to 23.
28. A computer program product comprising instructions which, when run on a computer, cause the computer to perform the method of any one of claims 1 to 9, or to perform the method of any one of claims 10 to 15, or to perform the method of any one of claims 16 to 23.
CN202111278494.6A 2021-10-30 2021-10-30 Playback method, recording method and related equipment Pending CN116069975A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202111278494.6A CN116069975A (en) 2021-10-30 2021-10-30 Playback method, recording method and related equipment
PCT/CN2022/125809 WO2023071857A1 (en) 2021-10-30 2022-10-18 Playback method, recording method, and related device


Publications (1)

Publication Number Publication Date
CN116069975A true CN116069975A (en) 2023-05-05

Family

ID=86159141

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111278494.6A Pending CN116069975A (en) 2021-10-30 2021-10-30 Playback method, recording method and related equipment

Country Status (2)

Country Link
CN (1) CN116069975A (en)
WO (1) WO2023071857A1 (en)


Also Published As

Publication number Publication date
WO2023071857A1 (en) 2023-05-04

Similar Documents

Publication Publication Date Title
CN111291103B (en) Interface data analysis method and device, electronic equipment and storage medium
CN107957940B (en) Test log processing method, system and terminal
CN111782546B (en) Automatic interface testing method and device based on machine learning
CN116069838A (en) Data processing method, device, computer equipment and storage medium
CN112241362A (en) Test method, test device, server and storage medium
CN114172978A (en) Multi-protocol equipment access method and related device
CN109218338B (en) Information processing system, method and device
CN110874216A (en) Complete code generation method, device, equipment and storage medium
JP2005228183A (en) Program execution method and computer system for executing the program
CN117499287A (en) Web testing method, device, storage medium and proxy server
CN110839079B (en) BI node execution method, device, equipment and medium in workflow system
WO2023077866A1 (en) Multimedia data processing method and apparatus, electronic device, and storage medium
CN116069975A (en) Playback method, recording method and related equipment
CN114064429A (en) Audit log acquisition method and device, storage medium and server
CN111479142B (en) Program content updating method and system based on information release
CN112597119A (en) Method and device for generating processing log and storage medium
CN112579325A (en) Business object processing method and device, electronic equipment and storage medium
CN111353279A (en) Character code conversion method, device and computer storage medium
CN112668061B (en) Electronic equipment and equipment code reporting method thereof
CN117082017B (en) Method and device for managing expansion card of white box switch
CN112579553B (en) Method and apparatus for recording information
CN112787880B (en) Playback data acquisition and flow playback method, device and storage medium
CN113778886B (en) Processing method and device for test cases
CN112181242B (en) Page display method and device
CN116225857A (en) Method, system, equipment and storage medium for detecting state of Spark application

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination