CN114415829B - Cross-platform equipment universal interface implementation method and system

Info

Publication number
CN114415829B
Authority
CN
China
Prior art keywords: somatosensory, requirement, acquiring, key, limb
Prior art date
Legal status
Active
Application number
CN202111634906.5A
Other languages
Chinese (zh)
Other versions
CN114415829A
Inventor
朱伟明
杨尉
王灏宏
Current Assignee
Guangzhou Yingqing Electronic Technology Co ltd
Original Assignee
Guangzhou Yingqing Electronic Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Guangzhou Yingqing Electronic Technology Co ltd
Priority to CN202111634906.5A
Publication of CN114415829A
Application granted
Publication of CN114415829B
Legal status: Active

Classifications

    • G — Physics
    • G06 — Computing; calculating or counting
    • G06F — Electric digital data processing
    • G06F 3/00 — Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 — Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 — Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/017 — Gesture based interaction, e.g. based on a set of recognized hand gestures

Abstract

According to the method and system for implementing a cross-platform device universal interface provided herein, after the operating platform bound to the target application program is determined, the parameter attributes of the corresponding somatosensory generation device can be determined, the received control content can be converted into a control protocol, and a target communication interface can then be determined according to the parameter attributes, so that the control protocol is transmitted to the somatosensory generation device through the target communication interface to achieve targeted control of that device. In the embodiments of the present application, because the control protocol and the target communication interface are both determined by the parameter attributes, and different somatosensory generation devices have different parameter attributes, differentiated device control can be achieved without modifying the target application program, which improves software and hardware development efficiency for different VR, AR and MR technologies.

Description

Cross-platform equipment universal interface implementation method and system
Technical Field
The present application relates to the field of communications technologies, and in particular, to a method and a system for implementing a cross-platform device universal interface.
Background
In recent years, with the continuous maturation of technologies such as Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR), a wide variety of underlying hardware technologies have emerged, each with its own interface standard, which causes great inconvenience to the development of VR, AR, and MR content.
Disclosure of Invention
In order to solve the technical problems in the related art, the application provides a method and a system for realizing a cross-platform device universal interface.
In a first aspect, an embodiment of the present application provides a method for implementing a cross-platform device universal interface, applied to a cross-platform device universal interface implementation system. The method includes: determining an operating platform bound to a target application program, and acquiring parameter attributes of a somatosensory generation device corresponding to the operating platform; converting received control content according to the parameter attributes to obtain a control protocol; and determining a target communication interface based on the parameter attributes, and transmitting the control protocol to the somatosensory generation device through the target communication interface.
In some design schemes that can be implemented independently, the obtaining of the parameter attribute of the motion sensing generation device corresponding to the operating platform includes: acquiring an interactive event feedback content set of the somatosensory generating device, wherein the interactive event feedback content set comprises i groups of interactive event feedback contents with a transfer relationship, and i is an integer not less than 1; acquiring a non-key feedback content set by combining the interaction event feedback content set, wherein the non-key feedback content set comprises i groups of non-key feedback contents with a transfer relationship; based on the interaction event feedback content set, acquiring an interaction event somatosensory requirement set through a first requirement mining unit covered by an interaction event recognition network, wherein the interaction event somatosensory requirement set comprises i interaction event somatosensory requirements; based on the non-key feedback content set, acquiring a non-key somatosensory demand set through a second demand mining unit covered by the interaction event recognition network, wherein the non-key somatosensory demand set comprises i non-key somatosensory demands; based on the interactive event somatosensory requirement set and the non-key somatosensory requirement set, acquiring a quantitative semantic label corresponding to the interactive event feedback content set through a semantic recognition unit covered by the interactive event recognition network; and determining the parameter attribute of the interactive event feedback content set by combining the quantitative semantic label.
In some design solutions that can be implemented independently, the obtaining, by a semantic recognition unit included in the interactivity event recognition network, a quantized semantic tag corresponding to the interactivity event feedback content set based on the interactivity event somatosensory requirement set and the non-key somatosensory requirement set includes: based on the interaction event somatosensory requirement set, acquiring i first description arrays through a first limb attention unit covered by the interaction event identification network, wherein each first description array corresponds to an interaction event somatosensory requirement; based on the non-key somatosensory requirement set, acquiring i second descriptor arrays through a second first limb attention unit covered by the interaction event recognition network, wherein each second descriptor array corresponds to one non-key somatosensory requirement; combining the i first description arrays and the i second description arrays to obtain i target description arrays, wherein each target description array comprises a first description array and a second description array; based on the i target description arrays, acquiring quantitative semantic tags corresponding to the interactive event feedback content set through the semantic recognition unit covered by the interactive event recognition network;
wherein, the acquiring i first description arrays through a first limb attention unit covered by the interactive event recognition network based on the interactive event somatosensory requirement set includes: for each group of interaction event somatosensory requirements in the interaction event somatosensory requirement set, acquiring a first local downsampling somatosensory requirement through a first downsampling thread covered by a first limb attention unit, wherein the first limb attention unit belongs to the interaction event identification network; for each group of interactive event somatosensory requirements in the interactive event somatosensory requirement set, acquiring a first global downsampling somatosensory requirement through a second downsampling thread covered by the first limb attention unit; for each group of interaction event somatosensory requirements in the interaction event somatosensory requirement set, acquiring a first multi-mode somatosensory requirement through a sliding average thread covered by a first limb attention unit based on the first local downsampling somatosensory requirement and the first global downsampling somatosensory requirement; and for each group of interactive events in the interactive event somatosensory requirement set, acquiring a first descriptor array through a second downsampling thread covered by the first limb attention unit based on the first multi-mode somatosensory requirement and the interactive event somatosensory requirement.
In some independently implementable designs, the obtaining i second descriptor arrays from a second first limb unit of interest covered by the interactivity event recognition network based on the set of non-critical somatosensory requirements comprises: for each group of non-key somatosensory requirements in the non-key somatosensory requirement set, acquiring a second local downsampling somatosensory requirement through a first downsampling thread covered by a second first limb attention unit, wherein the second first limb attention unit belongs to the interactive event identification network; for each group of non-key somatosensory requirements in the non-key somatosensory requirement set, acquiring a second global downsampling somatosensory requirement through a second downsampling thread covered by the second first limb attention unit; for each group of non-key somatosensory requirements in the non-key somatosensory requirement set, based on the second local downsampling somatosensory requirements and the second global downsampling somatosensory requirements, acquiring second multi-modal somatosensory requirements through a moving average thread covered by the second first limb attention unit; and for each group of non-key somatosensory requirements in the non-key somatosensory requirement set, based on the second multi-modal somatosensory requirement and the non-key somatosensory requirements, acquiring a second descriptor array through a second down-sampling thread covered by the second first limb attention unit.
In some independently implementable designs, i is an integer greater than 1; the obtaining, by the semantic recognition unit covered by the interactivity event recognition network, a quantized semantic tag corresponding to the interactivity event feedback content set based on the i target description arrays includes: obtaining a multi-modal array of descriptions through a second limb unit of interest covered by the interactivity event recognition network based on the i arrays of target descriptions, wherein the multi-modal array of descriptions is determined in combination with the i arrays of target descriptions and i quantization indexes, each array of target descriptions corresponding to one quantization index; based on the multi-modal description array, acquiring a quantitative semantic label corresponding to the interactive event feedback content set through the semantic recognition unit covered by the interactive event recognition network;
wherein the obtaining of the multi-modal description array through the second limb attention unit covered by the interactivity event recognition network based on the i target description arrays comprises: based on the i target description arrays, acquiring i first periodic description arrays through first periodic threads covered by the second limb attention unit, wherein the second limb attention unit belongs to the interaction event recognition network; acquiring i second periodic description arrays through second periodic threads covered by the second limb attention unit based on the i first periodic description arrays; determining i quantization indices in combination with the i second-stage description arrays, wherein each quantization index corresponds to a target description array; determining the multi-modal description array in combination with the i target description arrays and the i quantization indices.
In a second aspect, an embodiment of the present application further provides a system for implementing a cross-platform device universal interface, including:
the operating platform identification module is used for determining an operating platform bound by a target application program and acquiring the parameter attribute of a somatosensory generation device corresponding to the operating platform;
the communication protocol conversion module is used for converting the received control content according to the parameter attribute to obtain a control protocol;
and the hardware equipment control module is used for determining a target communication interface based on the parameter attribute and transmitting the control protocol to the somatosensory generation device through the target communication interface.
Under some design schemes that can be implemented independently, the operation platform identification module obtains parameter attributes of the motion sensing generation device corresponding to the operation platform, and the parameter attributes include: acquiring an interactive event feedback content set of the somatosensory generating device, wherein the interactive event feedback content set comprises i groups of interactive event feedback contents with a transfer relationship, and i is an integer not less than 1; acquiring a non-key feedback content set by combining the interaction event feedback content set, wherein the non-key feedback content set comprises i groups of non-key feedback contents with a transfer relationship; based on the interactive event feedback content set, acquiring an interactive event somatosensory demand set through a first demand mining unit covered by an interactive event identification network, wherein the interactive event somatosensory demand set comprises i interactive event somatosensory demands; based on the non-key feedback content set, acquiring a non-key somatosensory demand set through a second demand mining unit covered by the interaction event recognition network, wherein the non-key somatosensory demand set comprises i non-key somatosensory demands; based on the interactive event somatosensory requirement set and the non-key somatosensory requirement set, acquiring a quantitative semantic label corresponding to the interactive event feedback content set through a semantic recognition unit covered by the interactive event recognition network; and determining the parameter attribute of the interactive event feedback content set by combining the quantitative semantic label.
In some design schemes that can be implemented independently, the operating platform identification module obtains, based on the interaction event somatosensory requirement set and the non-key somatosensory requirement set, a quantitative semantic label corresponding to the interaction event feedback content set through a semantic identification unit included in the interaction event identification network, and this includes: based on the interaction event somatosensory requirement set, acquiring i first description arrays through a first limb attention unit covered by the interaction event identification network, wherein each first description array corresponds to one interaction event somatosensory requirement; based on the non-key somatosensory requirement set, acquiring i second description arrays through a second first limb attention unit covered by the interaction event recognition network, wherein each second description array corresponds to one non-key somatosensory requirement; combining the i first description arrays and the i second description arrays to obtain i target description arrays, wherein each target description array comprises a first description array and a second description array; and based on the i target description arrays, acquiring quantitative semantic labels corresponding to the interactive event feedback content set through the semantic recognition unit covered by the interactive event recognition network;
the method includes the steps that the operation platform identification module acquires i first description arrays through a first limb attention unit covered by the interaction event identification network based on the interaction event somatosensory requirement set, and includes the following steps: for each group of interaction event somatosensory requirements in the interaction event somatosensory requirement set, acquiring a first local downsampling somatosensory requirement through a first downsampling thread covered by a first limb attention unit, wherein the first limb attention unit belongs to the interaction event recognition network; for each group of interactive event somatosensory requirements in the interactive event somatosensory requirement set, acquiring a first global downsampling somatosensory requirement through a second downsampling thread covered by the first limb attention unit; for each group of interaction event somatosensory requirements in the interaction event somatosensory requirement set, acquiring a first multi-mode somatosensory requirement through a sliding average thread covered by a first limb attention unit based on the first local downsampling somatosensory requirement and the first global downsampling somatosensory requirement; and for each group of interactive events in the interactive event somatosensory requirement set, acquiring a first descriptor array through a second downsampling thread covered by the first limb attention unit based on the first multi-mode somatosensory requirement and the interactive event somatosensory requirement.
In some embodiments, the method further includes acquiring, by the runtime identification module, i second descriptor arrays from a second first limb attention unit covered by the interactivity event identification network based on the set of non-critical somatosensory requirements, where the method includes: for each group of non-key somatosensory requirements in the non-key somatosensory requirement set, acquiring a second local downsampling somatosensory requirement through a first downsampling thread covered by a second first limb attention unit, wherein the second first limb attention unit belongs to the interactive event identification network; for each group of non-key somatosensory requirements in the non-key somatosensory requirement set, acquiring a second global downsampling somatosensory requirement through a second downsampling thread covered by the second first limb attention unit; for each group of non-key somatosensory requirements in the non-key somatosensory requirement set, acquiring a second multi-modal somatosensory requirement through a moving average thread covered by a second first limb attention unit based on the second local downsampling somatosensory requirement and the second global downsampling somatosensory requirement; and for each group of non-key somatosensory requirements in the non-key somatosensory requirement set, based on the second multi-modal somatosensory requirement and the non-key somatosensory requirements, acquiring a second descriptor array through a second down-sampling thread covered by the second first limb attention unit.
In some independently implementable designs, i is an integer greater than 1; the operating platform identification module acquires a quantitative semantic label corresponding to the interactive event feedback content set through the semantic identification unit covered by the interactive event identification network based on the i target description arrays, and includes: obtaining a multi-modal array of descriptions through a second limb unit of interest covered by the interactivity event recognition network based on the i sets of target descriptions, wherein the multi-modal array of descriptions is determined in combination with the i sets of target descriptions and i quantization indexes, each set of target descriptions corresponding to one quantization index; based on the multi-modal description array, acquiring a quantitative semantic label corresponding to the interactive event feedback content set through the semantic recognition unit covered by the interactive event recognition network;
the operating platform recognition module acquires a multi-modal description array through a second limb attention unit covered by the interactive event recognition network based on the i target description arrays, and the multi-modal description array comprises the following steps: based on the i target description arrays, acquiring i first periodic description arrays through first periodic threads covered by the second limb attention unit, wherein the second limb attention unit belongs to the interaction event recognition network; acquiring i second phased description arrays through second phased threads covered by the second limb attention unit based on the i first phased description arrays; determining i quantization indices in conjunction with the i second phased description arrays, wherein each quantization index corresponds to a target description array; determining the multi-modal description array in combination with the i target description arrays and the i quantization indices.
By applying the above embodiments, after the operating platform bound to the target application program is determined, the parameter attributes of the corresponding somatosensory generation device are determined, the control content is converted to obtain a control protocol, and a target communication interface is then determined according to the parameter attributes, so that the control protocol can be transmitted to the somatosensory generation device through the target communication interface to achieve targeted control of that device. In the embodiments of the present application, because the control protocol and the target communication interface are both determined by the parameter attributes, and different somatosensory generation devices have different parameter attributes, differentiated device control can be achieved without modifying the target application program, which improves software and hardware development efficiency for different VR, AR and MR technologies.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and, together with the description, serve to explain the principles of the application.
Fig. 1 is a schematic hardware structure diagram of a cross-platform device universal interface implementation system provided in an embodiment of the present application.
Fig. 2 is a schematic flowchart of a method for implementing a cross-platform device universal interface according to an embodiment of the present application.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the following exemplary examples do not represent all implementations consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the application, as detailed in the appended claims.
It should be noted that the terms "first," "second," and the like in the description and claims of this application and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order.
The method embodiments provided by the embodiments of the present application may be executed in a cross-platform device common interface implementation system, a computer device, or a similar computing device. Taking the example of running on a cross-platform device universal interface implementation system, fig. 1 is a hardware structure block diagram of the cross-platform device universal interface implementation system implementing a cross-platform device universal interface implementation method according to the embodiment of the present application. As shown in fig. 1, the cross-platform device general-purpose interface implementation system 10 may include one or more processors 102 (only one is shown in fig. 1) (the processor 102 may include, but is not limited to, a processing device such as a microprocessor MCU or a programmable logic device FPGA, etc.) and a memory 104 for storing data, and optionally, may further include a transmission device 106 for communication functions. It will be understood by those skilled in the art that the structure shown in fig. 1 is only an illustration, and does not limit the structure of the above-mentioned system for implementing a cross-platform device universal interface. For example, cross-platform device common interface implementation system 10 may also include more or fewer components than shown in FIG. 1, or have a different configuration than shown in FIG. 1.
The memory 104 can be used for storing computer programs, for example, software programs and modules of application software, such as a computer program corresponding to a cross-platform device common interface implementation method in the embodiment of the present application, and the processor 102 executes various functional applications and data processing by running the computer program stored in the memory 104, thereby implementing the above-mentioned method. The memory 104 may include high speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, memory 104 may further include memory located remotely from processor 102, which may be connected to cross-platform device common interface implementing system 10 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 106 is used for receiving or transmitting data via a network. Specific examples of such networks may include wireless networks provided by a communication provider of the cross-platform device universal interface implementation system 10. In one example, the transmission device 106 includes a Network Interface Card (NIC), which can be connected to other network devices through a base station so as to communicate with the internet. In another example, the transmission device 106 may be a Radio Frequency (RF) module, which is used for communicating with the internet in a wireless manner.
Based on this, please refer to fig. 2. Fig. 2 is a schematic flowchart of a cross-platform device universal interface implementation method provided in an embodiment of the present application; the method is applied to a cross-platform device universal interface implementation system and may include the technical solutions described below.
Step 21: determining an operating platform bound to the target application program, and acquiring parameter attributes of the somatosensory generation device corresponding to the operating platform.
In this embodiment of the application, the target application program may be an application running on the somatosensory generation apparatus, and the somatosensory generation apparatus may be a VR device, an AR device, or an MR device.
In some alternative embodiments, the operating platform may be identified through its platform identifier and system flag, and the generic API functions may then be converted according to the features of that platform, so that the generic API functions can run on it.
Further, the parameter attributes may reflect the configuration of different somatosensory generation apparatuses at the device user's somatosensory, visual, and auditory levels (a person skilled in the art may obtain the related configuration parameters from the parameter attributes, which is not described further herein), and different somatosensory generation apparatuses have different parameter attributes.
Therefore, by determining the parameter attributes of different somatosensory generation devices, the embodiments of the present application can convert and transmit the control protocol, achieve targeted device control and interface implementation, and improve subsequent software and hardware development efficiency.
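For illustration only, the following Python sketch shows one way the platform binding and parameter-attribute lookup described above might be organized; the names used here (PLATFORM_REGISTRY, ParameterAttributes, identify_platform, and the example platform identifiers and system flags) are hypothetical and are not defined by the present application.

```python
from dataclasses import dataclass, field
from typing import Dict, Tuple

@dataclass
class ParameterAttributes:
    """Hypothetical per-device configuration, covering the somatosensory,
    visual and auditory levels mentioned above."""
    somatosensory: Dict[str, float] = field(default_factory=dict)
    visual: Dict[str, str] = field(default_factory=dict)
    auditory: Dict[str, str] = field(default_factory=dict)
    interface: str = "ethernet"   # preferred communication interface

# Hypothetical registry mapping (platform identifier, system flag) to the
# parameter attributes of the somatosensory generation device bound to that platform.
PLATFORM_REGISTRY: Dict[Tuple[str, str], ParameterAttributes] = {
    ("steamvr", "win64"): ParameterAttributes(
        somatosensory={"vibration_max": 1.0}, interface="bluetooth"),
    ("openxr", "android"): ParameterAttributes(
        somatosensory={"vibration_max": 0.6}, interface="wifi"),
}

def identify_platform(app_manifest: Dict[str, str]) -> Tuple[str, str]:
    """Determine the operating platform bound by the target application
    from its platform identifier and system flag."""
    return (app_manifest["platform_id"].lower(), app_manifest["system_flag"].lower())

def get_parameter_attributes(platform_key: Tuple[str, str]) -> ParameterAttributes:
    """Look up the parameter attributes of the corresponding device."""
    return PLATFORM_REGISTRY[platform_key]

if __name__ == "__main__":
    key = identify_platform({"platform_id": "SteamVR", "system_flag": "Win64"})
    print(get_parameter_attributes(key))
```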
For some design ideas that can be implemented independently, the parameter attributes may be determined based on a user feedback layer. On this basis, the acquiring of the parameter attributes of the somatosensory generation device corresponding to the operating platform described in step 21 may include the technical solutions described in steps 211 to 216 below.
Step 211, obtaining an interactive event feedback content set of the motion sensing generating device, where the interactive event feedback content set includes i groups of interactive event feedback contents having a transfer relationship, and i is an integer not less than 1.
Step 212, obtaining a non-key feedback content set by combining the interaction event feedback content set, wherein the non-key feedback content set includes i groups of non-key feedback contents having a transfer relationship.
Step 213, based on the interaction event feedback content set, obtaining an interaction event somatosensory requirement set through a first requirement mining unit covered by an interaction event identification network, wherein the interaction event somatosensory requirement set includes i interaction event somatosensory requirements.
Step 214, based on the non-key feedback content set, a non-key somatosensory requirement set is obtained through a second requirement mining unit covered by the interaction event recognition network, wherein the non-key somatosensory requirement set comprises i non-key somatosensory requirements.
Step 215, based on the interactive event somatosensory requirement set and the non-key somatosensory requirement set, obtaining a quantitative semantic label corresponding to the interactive event feedback content set through a semantic recognition unit covered by the interactive event recognition network.
And step 216, determining the parameter attribute of the interactive event feedback content set by combining the quantitative semantic label.
In the embodiments of the present application, the quantitative semantic label can be understood as a classification probability value. Based on this classification probability value, the parameter tag corresponding to the quantitative semantic label can be determined through a preset mapping list, and the parameter attributes of the interactive event feedback content set are then determined from the parameter tag. Because both the interactive event somatosensory requirement set and the non-key somatosensory requirement set are taken into account, the accuracy and reliability of the obtained parameter attributes can be guaranteed.
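As a minimal illustration of this mapping step, the sketch below assumes a hypothetical preset mapping list that buckets the classification probability value into a parameter tag and then into parameter attributes; the thresholds, tag names and attribute fields are invented for the example and are not specified in the present application.

```python
from bisect import bisect_right

# Hypothetical preset mapping list: probability thresholds -> parameter tags.
THRESHOLDS = [0.25, 0.5, 0.75]
PARAMETER_TAGS = ["low_feedback", "basic_feedback", "rich_feedback", "full_feedback"]

# Hypothetical parameter attributes associated with each parameter tag.
PARAMETER_ATTRIBUTE_TABLE = {
    "low_feedback":   {"vibration": False, "thermal": False},
    "basic_feedback": {"vibration": True,  "thermal": False},
    "rich_feedback":  {"vibration": True,  "thermal": True},
    "full_feedback":  {"vibration": True,  "thermal": True, "force": True},
}

def attributes_from_semantic_label(quantitative_semantic_label: float) -> dict:
    """Map a quantitative semantic label (a classification probability in [0, 1])
    to a parameter tag via the preset mapping list, then to the parameter
    attributes of the interactive event feedback content set."""
    tag = PARAMETER_TAGS[bisect_right(THRESHOLDS, quantitative_semantic_label)]
    return PARAMETER_ATTRIBUTE_TABLE[tag]

print(attributes_from_semantic_label(0.62))  # -> {'vibration': True, 'thermal': True}
```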
For some design ideas that can be independently implemented, the obtaining, based on the interaction event somatosensory requirement set and the non-key somatosensory requirement set, of the quantitative semantic label corresponding to the interaction event feedback content set through a semantic recognition unit covered by the interaction event recognition network may be implemented through the following technical scheme: based on the interaction event somatosensory requirement set, acquiring i first description arrays through a first limb attention unit covered by the interaction event recognition network, wherein each first description array corresponds to one interaction event somatosensory requirement; based on the non-key somatosensory requirement set, acquiring i second description arrays through a second first limb attention unit covered by the interaction event recognition network, wherein each second description array corresponds to one non-key somatosensory requirement; combining the i first description arrays and the i second description arrays to obtain i target description arrays, wherein each target description array comprises a first description array and a second description array; and based on the i target description arrays, acquiring the quantitative semantic label corresponding to the interaction event feedback content set through the semantic recognition unit covered by the interaction event recognition network.
further, the obtaining, based on the interaction event somatosensory requirement set, i first descriptor arrays through a first limb attention unit covered by the interaction event recognition network may include the following: for each group of interaction event somatosensory requirements in the interaction event somatosensory requirement set, acquiring a first local downsampling somatosensory requirement through a first downsampling thread covered by a first limb attention unit, wherein the first limb attention unit belongs to the interaction event recognition network; for each group of interactive event somatosensory requirements in the interactive event somatosensory requirement set, acquiring a first global downsampling somatosensory requirement through a second downsampling thread covered by the first limb attention unit; for each group of interaction event somatosensory requirements in the interaction event somatosensory requirement set, acquiring a first multi-mode somatosensory requirement through a sliding average thread covered by a first limb attention unit based on the first local downsampling somatosensory requirement and the first global downsampling somatosensory requirement; and for each group of interaction event somatosensory requirements in the interaction event somatosensory requirement set, acquiring a first descriptor array through a second downsampling thread covered by the first limb attention unit based on the first multi-modal somatosensory requirement and the interaction event somatosensory requirements.
For some design ideas that can be implemented independently, the obtaining i second description arrays through a second first limb attention unit covered by the interaction event recognition network based on the non-key somatosensory requirement set includes: for each group of non-key somatosensory requirements in the non-key somatosensory requirement set, acquiring a second local downsampling somatosensory requirement through a first downsampling thread covered by a second first limb attention unit, wherein the second first limb attention unit belongs to the interactive event identification network; for each group of non-key somatosensory requirements in the non-key somatosensory requirement set, acquiring a second global downsampling somatosensory requirement through a second downsampling thread covered by the second first limb attention unit; for each group of non-key somatosensory requirements in the non-key somatosensory requirement set, acquiring a second multi-modal somatosensory requirement through a moving average thread covered by a second first limb attention unit based on the second local downsampling somatosensory requirement and the second global downsampling somatosensory requirement; and for each group of non-key somatosensory requirements in the non-key somatosensory requirement set, based on the second multi-mode somatosensory requirements and the non-key somatosensory requirements, acquiring a second descriptor array through a second downsampling thread covered by the second first limb attention unit.
In some examples, i is an integer greater than 1. Based on this, the obtaining, by the semantic recognition unit covered by the interaction event recognition network, of the quantitative semantic label corresponding to the interaction event feedback content set based on the i target description arrays may include the following: obtaining a multi-modal description array through a second limb attention unit covered by the interaction event recognition network based on the i target description arrays, wherein the multi-modal description array is determined in combination with the i target description arrays and i quantization indices, and each target description array corresponds to one quantization index; and based on the multi-modal description array, acquiring the quantitative semantic label corresponding to the interaction event feedback content set through the semantic recognition unit covered by the interaction event recognition network.
Still further, the obtaining of the multi-modal description array through the second limb attention unit covered by the interaction event recognition network based on the i target description arrays may include the following: based on the i target description arrays, acquiring i first periodic description arrays through a first periodic thread covered by the second limb attention unit, wherein the second limb attention unit belongs to the interaction event recognition network; acquiring i second periodic description arrays through a second periodic thread covered by the second limb attention unit based on the i first periodic description arrays; determining i quantization indices in combination with the i second periodic description arrays, wherein each quantization index corresponds to one target description array; and determining the multi-modal description array in combination with the i target description arrays and the i quantization indices.
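The following sketch illustrates, under stated assumptions, how the second limb attention unit could derive one quantization index per target description array and fuse the i arrays into a multi-modal description array; the two periodic threads are approximated by fixed linear projections and the indices by a softmax weighting, none of which is prescribed by the present application.

```python
import numpy as np

def second_limb_attention_unit(target_description_arrays: list) -> np.ndarray:
    """Fuse i target description arrays into one multi-modal description array
    using one quantization index per array (illustrative stand-in operations)."""
    X = np.stack(target_description_arrays)            # shape (i, d)
    rng = np.random.default_rng(0)                     # fixed projections for the sketch
    W1 = rng.standard_normal((X.shape[1], X.shape[1]))
    W2 = rng.standard_normal((X.shape[1], 1))

    first_periodic = np.tanh(X @ W1)                   # first periodic thread
    second_periodic = first_periodic @ W2              # second periodic thread, shape (i, 1)

    # One quantization index per target description array (softmax-normalized).
    scores = second_periodic.squeeze(-1)
    quantization_indices = np.exp(scores) / np.exp(scores).sum()

    # Multi-modal description array: index-weighted combination of the i arrays.
    return (quantization_indices[:, None] * X).sum(axis=0)

multi_modal = second_limb_attention_unit([np.random.rand(8) for _ in range(3)])  # i = 3
```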
Step 22: converting the received control content according to the parameter attributes to obtain a control protocol.
In the embodiments of the present application, the control content may be a series of control instructions for the somatosensory generation device, such as vibration, heat generation, and the like, and the control protocol is matched to the somatosensory generation device.
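A minimal sketch of this conversion step is given below; the frame fields (vib, heat) and the JSON serialization are placeholders for whatever device-specific protocol the somatosensory generation device actually expects.

```python
import json

def convert_control_content(control_content: dict, attributes: dict) -> bytes:
    """Convert received control content (e.g. vibration or heat-generation
    commands) into a control protocol frame matched to the target device."""
    frame = {}
    if attributes.get("vibration") and "vibration" in control_content:
        # Clamp the requested strength to what this device supports.
        frame["vib"] = min(control_content["vibration"], attributes.get("vibration_max", 1.0))
    if attributes.get("thermal") and "heat" in control_content:
        frame["heat"] = control_content["heat"]
    # Serialize into the device protocol; JSON stands in for a real binary format.
    return json.dumps(frame).encode("utf-8")

packet = convert_control_content({"vibration": 0.9, "heat": 0.3},
                                 {"vibration": True, "thermal": True, "vibration_max": 0.6})
```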
Step 23: determining a target communication interface based on the parameter attributes, and transmitting the control protocol to the somatosensory generation device through the target communication interface.
In the embodiments of the present application, the target communication interface may be an Ethernet, Bluetooth, 5G, or Wi-Fi interface, or the like.
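The sketch below shows one possible way to choose the target communication interface from the parameter attributes and hand the converted protocol to it; the priority order and the supported_interfaces field are assumptions made for the example, and the real transports (sockets, BLE stacks, cellular modems) are omitted.

```python
def select_target_interface(attributes: dict) -> str:
    """Pick the target communication interface from the parameter attributes.
    The priority order used here is an assumption for illustration only."""
    for candidate in ("ethernet", "wifi", "5g", "bluetooth"):
        if candidate in attributes.get("supported_interfaces", []):
            return candidate
    raise ValueError("no supported communication interface found")

def transmit(packet: bytes, interface: str) -> None:
    """Hand the control protocol frame to the transport bound to the interface."""
    print(f"sending {len(packet)} bytes over {interface}")

transmit(b"\x01\x02", select_target_interface({"supported_interfaces": ["bluetooth", "wifi"]}))
```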
It is understood that step 22 and step 23 can be implemented based on the related art, and will not be further described herein.
Based on the same inventive concept, a cross-platform device universal interface implementation system is also provided, and the system may include the following functional modules: an operating platform identification module, used for determining an operating platform bound to a target application program and acquiring parameter attributes of a somatosensory generation device corresponding to the operating platform; a communication protocol conversion module, used for converting received control content according to the parameter attributes to obtain a control protocol; and a hardware device control module, used for determining a target communication interface based on the parameter attributes and transmitting the control protocol to the somatosensory generation device through the target communication interface.
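For orientation, the skeleton below mirrors the three functional modules listed above in a single Python class; the method signatures and field names are illustrative only and do not correspond to any interface defined by the present application.

```python
class CrossPlatformUniversalInterfaceSystem:
    """Sketch of the three functional modules; the module boundaries follow the
    description above, while the method bodies are placeholders."""

    def __init__(self, platform_registry, transports):
        self.platform_registry = platform_registry   # data for the operating platform identification module
        self.transports = transports                 # transports used by the hardware device control module

    # Operating platform identification module
    def identify(self, target_application: dict) -> dict:
        platform = target_application["platform_id"]
        return self.platform_registry[platform]      # parameter attributes of the bound device

    # Communication protocol conversion module
    def convert(self, control_content: dict, attributes: dict) -> dict:
        return {"cmd": control_content, "profile": attributes}

    # Hardware device control module
    def dispatch(self, protocol: dict, attributes: dict) -> None:
        interface = attributes["interface"]
        self.transports[interface].send(protocol)
```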
In some possible embodiments, the obtaining, by the running platform identification module, a parameter attribute of a motion sensing generation device corresponding to the running platform includes: acquiring an interactive event feedback content set of the somatosensory generating device, wherein the interactive event feedback content set comprises i groups of interactive event feedback contents with a transfer relationship, and i is an integer not less than 1; acquiring a non-key feedback content set by combining the interaction event feedback content set, wherein the non-key feedback content set comprises i groups of non-key feedback contents with a transfer relationship; based on the interactive event feedback content set, acquiring an interactive event somatosensory demand set through a first demand mining unit covered by an interactive event identification network, wherein the interactive event somatosensory demand set comprises i interactive event somatosensory demands; based on the non-key feedback content set, acquiring a non-key somatosensory demand set through a second demand mining unit covered by the interaction event recognition network, wherein the non-key somatosensory demand set comprises i non-key somatosensory demands; based on the interactive event somatosensory requirement set and the non-key somatosensory requirement set, acquiring a quantitative semantic label corresponding to the interactive event feedback content set through a semantic recognition unit covered by the interactive event recognition network; and determining the parameter attribute of the interactive event feedback content set by combining the quantitative semantic label.
In some possible embodiments, the obtaining, by the running platform identification module, a quantized semantic tag corresponding to the interaction event feedback content set by a semantic identification unit included in the interaction event identification network based on the interaction event somatosensory requirement set and the non-key somatosensory requirement set includes: based on the interaction event somatosensory requirement set, acquiring i first description arrays through a first limb attention unit covered by the interaction event identification network, wherein each first description array corresponds to an interaction event somatosensory requirement; based on the non-key somatosensory requirement set, acquiring i second description arrays through a second first limb attention unit covered by the interaction event recognition network, wherein each second description array corresponds to one non-key somatosensory requirement; combining the i first description arrays and the i second description arrays to obtain i target description arrays, wherein each target description array comprises a first description array and a second description array; based on the i target description arrays, acquiring quantitative semantic tags corresponding to the interactive event feedback content set through the semantic recognition unit covered by the interactive event recognition network;
in some possible embodiments, the running platform identifying module obtains, based on the interaction event somatosensory requirement set, i first description arrays through a first limb attention unit covered by the interaction event identification network, including: for each group of interaction event somatosensory requirements in the interaction event somatosensory requirement set, acquiring a first local downsampling somatosensory requirement through a first downsampling thread covered by a first limb attention unit, wherein the first limb attention unit belongs to the interaction event recognition network; for each group of interactive event somatosensory requirements in the interactive event somatosensory requirement set, acquiring a first global downsampling somatosensory requirement through a second downsampling thread covered by the first limb attention unit; for each group of interaction event somatosensory requirements in the interaction event somatosensory requirement set, acquiring a first multi-modal somatosensory requirement through a moving average thread covered by the first limb attention unit based on the first local downsampling somatosensory requirement and the first global downsampling somatosensory requirement; and for each group of interactive events in the interactive event somatosensory requirement set, acquiring a first descriptor array through a second downsampling thread covered by the first limb attention unit based on the first multi-mode somatosensory requirement and the interactive event somatosensory requirement.
In some possible embodiments, the running platform identifying module obtains i second description arrays through a second first limb attention unit covered by the interaction event identification network based on the non-key somatosensory requirement set, including: for each group of non-key somatosensory requirements in the non-key somatosensory requirement set, acquiring a second local downsampling somatosensory requirement through a first downsampling thread covered by a second first limb attention unit, wherein the second first limb attention unit belongs to the interactive event identification network; for each group of non-key somatosensory requirements in the non-key somatosensory requirement set, acquiring a second global downsampling somatosensory requirement through a second downsampling thread covered by the second first limb attention unit; for each group of non-key somatosensory requirements in the non-key somatosensory requirement set, acquiring a second multi-modal somatosensory requirement through a moving average thread covered by a second first limb attention unit based on the second local downsampling somatosensory requirement and the second global downsampling somatosensory requirement; and for each group of non-key somatosensory requirements in the non-key somatosensory requirement set, based on the second multi-modal somatosensory requirement and the non-key somatosensory requirements, acquiring a second descriptor array through a second down-sampling thread covered by the second first limb attention unit.
In some possible embodiments, i is an integer greater than 1; the operation platform identification module acquires the quantitative semantic labels corresponding to the interactive event feedback content set through the semantic identification unit covered by the interactive event identification network based on the i target description arrays, and the operation platform identification module comprises: obtaining a multi-modal array of descriptions through a second limb unit of interest covered by the interactivity event recognition network based on the i arrays of target descriptions, wherein the multi-modal array of descriptions is determined in combination with the i arrays of target descriptions and i quantization indexes, each array of target descriptions corresponding to one quantization index; based on the multi-mode description array, acquiring a quantized semantic label corresponding to the interactive event feedback content set through the semantic recognition unit covered by the interactive event recognition network;
in some possible embodiments, the running platform recognition module obtains a multi-modal description array through a second limb attention unit covered by the interactivity event recognition network based on the i target description arrays, including: based on the i target description arrays, acquiring i first periodic description arrays through first periodic threads covered by the second limb attention unit, wherein the second limb attention unit belongs to the interaction event recognition network; acquiring i second phased description arrays through second phased threads covered by the second limb attention unit based on the i first phased description arrays; determining i quantization indices in conjunction with the i second phased description arrays, wherein each quantization index corresponds to a target description array; determining the multi-modal description array in combination with the i target description arrays and the i quantization indices.
Further, a readable storage medium is provided, on which a program is stored which, when being executed by a processor, carries out the above-mentioned method.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method can be implemented in other ways. The apparatus and method embodiments described above are illustrative only, as the flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, functional modules in the embodiments of the present application may be integrated together to form an independent part, or each module may exist alone, or two or more modules may be integrated to form an independent part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a media service server 10, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes. It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The above description is only a preferred embodiment of the present application and is not intended to limit the present application, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (8)

1. A cross-platform equipment universal interface implementation method is applied to a cross-platform equipment universal interface implementation system, and comprises the following steps:
determining an operation platform bound by a target application program, and acquiring a parameter attribute of a somatosensory generation device corresponding to the operation platform;
converting the received control content according to the parameter attribute to obtain a control protocol;
determining a target communication interface based on the parameter attribute, and transmitting the control protocol to the somatosensory generation device through the target communication interface;
the acquiring of the parameter attribute of the motion sensing generation device corresponding to the running platform comprises the following steps:
acquiring an interactive event feedback content set of the somatosensory generation device, wherein the interactive event feedback content set comprises i groups of interactive event feedback contents with a transfer relationship, and i is an integer not less than 1;
acquiring a non-key feedback content set by combining the interactive event feedback content set, wherein the non-key feedback content set comprises i groups of non-key feedback contents with a transfer relationship;
based on the interactive event feedback content set, acquiring an interactive event somatosensory requirement set through a first requirement mining unit covered by an interactive event recognition network, wherein the interactive event somatosensory requirement set comprises i interactive event somatosensory requirements;
based on the non-key feedback content set, acquiring a non-key somatosensory requirement set through a second requirement mining unit covered by the interactive event recognition network, wherein the non-key somatosensory requirement set comprises i non-key somatosensory requirements;
based on the interactive event somatosensory requirement set and the non-key somatosensory requirement set, acquiring a quantitative semantic label corresponding to the interactive event feedback content set through a semantic recognition unit covered by the interactive event recognition network;
and determining the parameter attribute of the interactive event feedback content set by combining the quantitative semantic label.
2. The method of claim 1, wherein the obtaining, through the semantic recognition unit covered by the interactive event recognition network, of the quantitative semantic label corresponding to the interactive event feedback content set based on the interactive event somatosensory requirement set and the non-key somatosensory requirement set comprises:
based on the interactive event somatosensory requirement set, acquiring i first description arrays through a first limb attention unit covered by the interactive event recognition network, wherein each first description array corresponds to one interactive event somatosensory requirement;
based on the non-key somatosensory requirement set, acquiring i second description arrays through a second first limb attention unit covered by the interactive event recognition network, wherein each second description array corresponds to one non-key somatosensory requirement;
combining the i first description arrays and the i second description arrays to obtain i target description arrays, wherein each target description array comprises a first description array and a second description array;
based on the i target description arrays, acquiring the quantized semantic tag corresponding to the interaction event feedback content set through the semantic recognition unit covered by the interaction event recognition network;
wherein the acquiring of the i first description arrays through the first limb attention unit covered by the interaction event recognition network based on the interaction event somatosensory requirement set comprises:
for each group of interaction event somatosensory requirements in the interaction event somatosensory requirement set, acquiring a first local downsampling somatosensory requirement through a first downsampling thread covered by the first limb attention unit, wherein the first limb attention unit belongs to the interaction event recognition network;
for each group of interaction event somatosensory requirements in the interaction event somatosensory requirement set, acquiring a first global downsampling somatosensory requirement through a second downsampling thread covered by the first limb attention unit;
for each group of interaction event somatosensory requirements in the interaction event somatosensory requirement set, acquiring a first multi-modal somatosensory requirement through a moving average thread covered by the first limb attention unit based on the first local downsampling somatosensory requirement and the first global downsampling somatosensory requirement;
and for each group of interaction event somatosensory requirements in the interaction event somatosensory requirement set, acquiring a first description array through the second downsampling thread covered by the first limb attention unit based on the first multi-modal somatosensory requirement and the interaction event somatosensory requirement.
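
The threads of the first limb attention unit recited in claim 2 can be read as a local pooling, a global pooling, a fusion step, and a final resampling over each group of requirements. The sketch below is one illustrative reading only; the window size, the moving-average weight, and the representation of a somatosensory requirement as a 1-D NumPy vector are assumptions, not claim content. Claim 3 applies the same structure, through a second instance of the unit, to the non-key somatosensory requirements.

```python
# Minimal sketch of the first limb attention unit of claim 2. The window size,
# the moving-average weight alpha, and the use of a 1-D NumPy vector per group
# of somatosensory requirements are assumptions made only for illustration.
import numpy as np

def first_limb_attention_unit(requirement: np.ndarray,
                              local_window: int = 4,
                              alpha: float = 0.5) -> np.ndarray:
    # First downsampling thread: local downsampling somatosensory requirement.
    n = len(requirement) // local_window * local_window
    local = requirement[:n].reshape(-1, local_window).mean(axis=1)

    # Second downsampling thread: global downsampling somatosensory requirement.
    global_ = np.full_like(local, requirement.mean())

    # Moving average thread: fuse both views into the multi-modal requirement.
    multi_modal = alpha * local + (1.0 - alpha) * global_

    # Second downsampling thread again, now combining the multi-modal requirement
    # with the (resampled) original requirement to give the description array.
    resampled = np.interp(np.linspace(0.0, 1.0, len(multi_modal)),
                          np.linspace(0.0, 1.0, len(requirement)), requirement)
    return 0.5 * (multi_modal + resampled)

# One first description array per group of interaction event somatosensory requirements.
first_description_arrays = [first_limb_attention_unit(np.random.rand(64))
                            for _ in range(3)]
```
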
3. The method of claim 2, wherein the acquiring of the i second description arrays through the second first limb attention unit covered by the interaction event recognition network based on the non-key somatosensory requirement set comprises:
for each group of non-key somatosensory requirements in the non-key somatosensory requirement set, acquiring a second local downsampling somatosensory requirement through a first downsampling thread covered by the second first limb attention unit, wherein the second first limb attention unit belongs to the interaction event recognition network;
for each group of non-key somatosensory requirements in the non-key somatosensory requirement set, acquiring a second global downsampling somatosensory requirement through a second downsampling thread covered by the second first limb attention unit;
for each group of non-key somatosensory requirements in the non-key somatosensory requirement set, based on the second local downsampling somatosensory requirement and the second global downsampling somatosensory requirement, acquiring a second multi-modal somatosensory requirement through a moving average thread covered by the second first limb attention unit;
and for each group of non-key somatosensory requirements in the non-key somatosensory requirement set, based on the second multi-modal somatosensory requirement and the non-key somatosensory requirement, acquiring a second description array through the second downsampling thread covered by the second first limb attention unit.
4. The method of claim 2, wherein i is an integer greater than 1;
wherein the acquiring of the quantized semantic tag corresponding to the interaction event feedback content set through the semantic recognition unit covered by the interaction event recognition network based on the i target description arrays comprises:
based on the i target description arrays, acquiring a multi-modal description array through a second limb attention unit covered by the interaction event recognition network, wherein the multi-modal description array is determined in combination with the i target description arrays and i quantization indices, and each target description array corresponds to one quantization index;
based on the multi-modal description array, acquiring the quantized semantic tag corresponding to the interaction event feedback content set through the semantic recognition unit covered by the interaction event recognition network;
wherein the acquiring of the multi-modal description array through the second limb attention unit covered by the interaction event recognition network based on the i target description arrays comprises:
based on the i target description arrays, acquiring i first periodic description arrays through first periodic threads covered by the second limb attention unit, wherein the second limb attention unit belongs to the interaction event recognition network;
acquiring i second periodic description arrays through second periodic threads covered by the second limb attention unit based on the i first periodic description arrays;
determining the i quantization indices in combination with the i second periodic description arrays, wherein each quantization index corresponds to one target description array;
determining the multi-modal description array in combination with the i target description arrays and the i quantization indices.
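
One way to picture the second limb attention unit of claim 4 is as a two-stage scoring of the i target description arrays followed by an index-weighted combination. The sketch below is hypothetical throughout: the mean-centred tanh first stage, the mean/standard-deviation second stage, and the softmax used to obtain the quantization indices are choices made only for illustration, not claim content.

```python
# Minimal sketch of the second limb attention unit of claim 4. The mean-centred
# tanh first stage, the mean/std second stage, and the softmax used to obtain the
# quantization indices are all illustrative assumptions, not claim content.
import numpy as np

def second_limb_attention_unit(target_arrays: list) -> np.ndarray:
    # First periodic threads: one first periodic description array per target array.
    first_periodic = [np.tanh(arr - arr.mean()) for arr in target_arrays]

    # Second periodic threads: condense each first periodic array into two statistics.
    second_periodic = [np.array([arr.mean(), arr.std()]) for arr in first_periodic]

    # One quantization index per target description array (softmax over the scores).
    scores = np.array([s.sum() for s in second_periodic])
    quantization_indices = np.exp(scores) / np.exp(scores).sum()

    # Multi-modal description array: index-weighted combination of the target arrays.
    stacked = np.stack(target_arrays)
    return (quantization_indices[:, None] * stacked).sum(axis=0)

multi_modal_description = second_limb_attention_unit([np.random.rand(32) for _ in range(3)])
```
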
5. A cross-platform device universal interface implementation system, comprising:
an operation platform identification module, configured to determine an operation platform bound by a target application program and to acquire a parameter attribute of a somatosensory generation device corresponding to the operation platform;
a communication protocol conversion module, configured to convert the received control content according to the parameter attribute to obtain a control protocol;
and a hardware equipment control module, configured to determine a target communication interface based on the parameter attribute and to transmit the control protocol to the somatosensory generation device through the target communication interface;
wherein the acquiring, by the operation platform identification module, of the parameter attribute of the somatosensory generation device corresponding to the operation platform comprises:
acquiring an interaction event feedback content set of the somatosensory generation device, wherein the interaction event feedback content set comprises i groups of interaction event feedback contents with a transfer relationship, and i is an integer not less than 1;
acquiring a non-key feedback content set in combination with the interaction event feedback content set, wherein the non-key feedback content set comprises i groups of non-key feedback contents with a transfer relationship;
based on the interaction event feedback content set, acquiring an interaction event somatosensory requirement set through a first requirement mining unit covered by an interaction event recognition network, wherein the interaction event somatosensory requirement set comprises i interaction event somatosensory requirements;
based on the non-key feedback content set, acquiring a non-key somatosensory requirement set through a second requirement mining unit covered by the interaction event recognition network, wherein the non-key somatosensory requirement set comprises i non-key somatosensory requirements;
based on the interaction event somatosensory requirement set and the non-key somatosensory requirement set, acquiring a quantized semantic tag corresponding to the interaction event feedback content set through a semantic recognition unit covered by the interaction event recognition network;
and determining the parameter attribute of the interaction event feedback content set in combination with the quantized semantic tag.
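
The three modules of the system claim map naturally onto three cooperating components. The sketch below shows one hypothetical wiring in Python; the class and method names, the dictionary-based device registry, and the interface callbacks are assumptions, and only the division of responsibilities mirrors claim 5.

```python
# Minimal sketch of the three-module system of claim 5. Class and method names,
# the dictionary-based registry, and the interface callbacks are hypothetical.
import json
from typing import Callable, Dict

class OperationPlatformIdentificationModule:
    def __init__(self, registry: Dict[str, dict]):
        self._registry = registry  # operation platform -> parameter attribute (assumed shape)

    def acquire_parameter_attribute(self, operation_platform: str) -> dict:
        return self._registry[operation_platform]

class CommunicationProtocolConversionModule:
    def convert(self, control_content: dict, parameter_attribute: dict) -> bytes:
        # Conversion rule is assumed: serialise according to the attribute's format field.
        if parameter_attribute.get("format") == "json":
            return json.dumps(control_content).encode("utf-8")
        return repr(control_content).encode("utf-8")

class HardwareEquipmentControlModule:
    def __init__(self, interfaces: Dict[str, Callable[[bytes], None]]):
        self._interfaces = interfaces  # target communication interface name -> transmit callback

    def transmit(self, control_protocol: bytes, parameter_attribute: dict) -> None:
        self._interfaces[parameter_attribute["interface"]](control_protocol)

# Hypothetical wiring for one control round-trip.
registry = {"pc_vr": {"format": "json", "interface": "usb"}}
interfaces = {"usb": lambda payload: print("-> somatosensory device:", payload)}

platform_module = OperationPlatformIdentificationModule(registry)
attribute = platform_module.acquire_parameter_attribute("pc_vr")
protocol = CommunicationProtocolConversionModule().convert({"vibration_level": 3}, attribute)
HardwareEquipmentControlModule(interfaces).transmit(protocol, attribute)
```

In this arrangement, device-specific differences are confined to the parameter attribute registry and the interface map, so swapping the somatosensory generation device changes only data, not module code.
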
6. The system of claim 5, wherein the acquiring, by the operation platform identification module, of the quantized semantic tag corresponding to the interaction event feedback content set through the semantic recognition unit covered by the interaction event recognition network based on the interaction event somatosensory requirement set and the non-key somatosensory requirement set comprises:
based on the interaction event somatosensory requirement set, acquiring i first description arrays through a first limb attention unit covered by the interaction event recognition network, wherein each first description array corresponds to one interaction event somatosensory requirement;
based on the non-key somatosensory requirement set, acquiring i second description arrays through a second first limb attention unit covered by the interaction event recognition network, wherein each second description array corresponds to one non-key somatosensory requirement;
combining the i first description arrays and the i second description arrays to obtain i target description arrays, wherein each target description array comprises a first description array and a second description array;
based on the i target description arrays, acquiring the quantized semantic tag corresponding to the interaction event feedback content set through the semantic recognition unit covered by the interaction event recognition network;
wherein the acquiring, by the operation platform identification module, of the i first description arrays through the first limb attention unit covered by the interaction event recognition network based on the interaction event somatosensory requirement set comprises:
for each group of interaction event somatosensory requirements in the interaction event somatosensory requirement set, acquiring a first local downsampling somatosensory requirement through a first downsampling thread covered by the first limb attention unit, wherein the first limb attention unit belongs to the interaction event recognition network;
for each group of interaction event somatosensory requirements in the interaction event somatosensory requirement set, acquiring a first global downsampling somatosensory requirement through a second downsampling thread covered by the first limb attention unit;
for each group of interaction event somatosensory requirements in the interaction event somatosensory requirement set, acquiring a first multi-modal somatosensory requirement through a moving average thread covered by the first limb attention unit based on the first local downsampling somatosensory requirement and the first global downsampling somatosensory requirement;
and for each group of interaction event somatosensory requirements in the interaction event somatosensory requirement set, acquiring a first description array through the second downsampling thread covered by the first limb attention unit based on the first multi-modal somatosensory requirement and the interaction event somatosensory requirement.
7. The system of claim 6, wherein the acquiring, by the operation platform identification module, of the i second description arrays through the second first limb attention unit covered by the interaction event recognition network based on the non-key somatosensory requirement set comprises:
for each group of non-key somatosensory requirements in the non-key somatosensory requirement set, acquiring a second local downsampling somatosensory requirement through a first downsampling thread covered by the second first limb attention unit, wherein the second first limb attention unit belongs to the interaction event recognition network;
for each group of non-key somatosensory requirements in the non-key somatosensory requirement set, acquiring a second global downsampling somatosensory requirement through a second downsampling thread covered by the second first limb attention unit;
for each group of non-key somatosensory requirements in the non-key somatosensory requirement set, based on the second local downsampling somatosensory requirement and the second global downsampling somatosensory requirement, acquiring a second multi-modal somatosensory requirement through a moving average thread covered by the second first limb attention unit;
and for each group of non-key somatosensory requirements in the non-key somatosensory requirement set, based on the second multi-modal somatosensory requirement and the non-key somatosensory requirement, acquiring a second description array through the second downsampling thread covered by the second first limb attention unit.
8. The system of claim 6, wherein i is an integer greater than 1;
wherein the acquiring, by the operation platform identification module, of the quantized semantic tag corresponding to the interaction event feedback content set through the semantic recognition unit covered by the interaction event recognition network based on the i target description arrays comprises:
based on the i target description arrays, acquiring a multi-modal description array through a second limb attention unit covered by the interaction event recognition network, wherein the multi-modal description array is determined in combination with the i target description arrays and i quantization indices, and each target description array corresponds to one quantization index;
based on the multi-modal description array, acquiring the quantized semantic tag corresponding to the interaction event feedback content set through the semantic recognition unit covered by the interaction event recognition network;
wherein the acquiring, by the operation platform identification module, of the multi-modal description array through the second limb attention unit covered by the interaction event recognition network based on the i target description arrays comprises:
based on the i target description arrays, acquiring i first periodic description arrays through first periodic threads covered by the second limb attention unit, wherein the second limb attention unit belongs to the interaction event recognition network;
acquiring i second periodic description arrays through second periodic threads covered by the second limb attention unit based on the i first periodic description arrays;
determining the i quantization indices in combination with the i second periodic description arrays, wherein each quantization index corresponds to one target description array;
determining the multi-modal description array in combination with the i target description arrays and the i quantization indices.
CN202111634906.5A 2021-12-29 2021-12-29 Cross-platform equipment universal interface implementation method and system Active CN114415829B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111634906.5A CN114415829B (en) 2021-12-29 2021-12-29 Cross-platform equipment universal interface implementation method and system

Publications (2)

Publication Number Publication Date
CN114415829A CN114415829A (en) 2022-04-29
CN114415829B true CN114415829B (en) 2022-08-19

Family

ID=81268754

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111634906.5A Active CN114415829B (en) 2021-12-29 2021-12-29 Cross-platform equipment universal interface implementation method and system

Country Status (1)

Country Link
CN (1) CN114415829B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB0800218D0 (en) * 2005-06-29 2008-02-13 Roku Llc Device and method for aquaculture facilities for exposing marine organisms to light
CN105538307A (en) * 2014-11-04 2016-05-04 宁波弘讯科技股份有限公司 Control device, system and method
CN107491173A (en) * 2017-08-16 2017-12-19 歌尔科技有限公司 A kind of proprioceptive simulation control method and equipment
CN112464052A (en) * 2020-12-22 2021-03-09 游艺星际(北京)科技有限公司 Feedback information processing method, feedback information display device and electronic equipment
CN112818023A (en) * 2021-01-26 2021-05-18 龚世燕 Big data analysis method and cloud computing server in associated cloud service scene

Also Published As

Publication number Publication date
CN114415829A (en) 2022-04-29

Similar Documents

Publication Publication Date Title
US11677812B2 (en) Lightweight IoT information model
CN101061500B (en) The method of dynamic product information, system, equipment and computer program are provided in short-haul connections
CN105429858A (en) Real-time message transmission method among multiple robots
WO2015081808A1 (en) Method and apparatus for data transmission
CN102118430A (en) Cloud federation as a service
CN112527528A (en) Data transmission method, device and storage medium based on message queue
CN108476236B (en) Semantic-based content specification of internet of things data
CN103636273A (en) Method and apparatus for improving reception availability on multi-subscriber identity module devices
CN110297944B (en) Distributed XML data processing method and system
CN101807205B (en) Processing module, device, and method for processing of xml data
CN103532564B (en) Two-dimensional code data encoding method, decoding method, system and intelligent device
EA019680B1 (en) Service access method and system
CN107665237B (en) Data structure classification device, and unstructured data publishing and subscribing system and method
CN109495492A (en) Communication system for intelligent water utilities industry
CN107181794B (en) DICOM network transmission method based on DIMSE message sending and receiving
WO2008156640A2 (en) A method and apparatus for encoding data
CN114415829B (en) Cross-platform equipment universal interface implementation method and system
CN103647763A (en) Mobile terminal advertisement invoking method and system
CN105024923B (en) The method and device that message category based on XMPP extension message is realized
CN116738057A (en) Information recommendation method, device, computer equipment and storage medium
Lee et al. Software architecture for a multi-protocol RFID reader on mobile devices
CN102025755A (en) Method and device for aggregating network resources
Editya et al. Performance of IEEE 802.15.4 and ZigBee protocol on realtime monitoring augmented reality based wireless sensor network system
CN112769741B (en) Message communication method and electronic equipment
CN106131169A (en) A kind of PET controls network communicating system and method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: A Method and System for Implementing a Cross Platform Universal Device Interface

Effective date of registration: 20230317

Granted publication date: 20220819

Pledgee: Bank of China Limited by Share Ltd. Guangzhou Panyu branch

Pledgor: GUANGZHOU YINGQING ELECTRONIC TECHNOLOGY Co.,Ltd.

Registration number: Y2023980035246