CN114722911A - Application operation terminal switching method and device, medium and electronic equipment - Google Patents


Info

Publication number
CN114722911A
CN114722911A (application CN202210259749.2A)
Authority
CN
China
Prior art keywords
terminal
data
segment
sample
sampling
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210259749.2A
Other languages
Chinese (zh)
Inventor
冉光琴
帅朝春
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202210259749.2A priority Critical patent/CN114722911A/en
Publication of CN114722911A publication Critical patent/CN114722911A/en
Pending legal-status Critical Current

Classifications

    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G04G 21/04 Input or output devices integrated in time-pieces using radio waves
    • G06F 18/24 Classification techniques
    • H04M 1/72454 User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to context-related or environment-related conditions
    • H04W 4/80 Services using short range communication, e.g. near-field communication [NFC], radio-frequency identification [RFID] or low energy communication
    • G06F 18/2411 Classification techniques based on the proximity to a decision surface, e.g. support vector machines
    • G06F 18/24323 Tree-organised classifiers
    • H04M 2250/12 Details of telephonic subscriber devices including a sensor for measuring a physical value, e.g. temperature or motion
    • Y02D 30/70 Reducing energy consumption in wireless communication networks

Abstract

The present disclosure provides an application running terminal switching method, an application running terminal switching apparatus, a computer-readable medium, and an electronic device, and relates to the field of communications technologies. The method is applied to a first terminal and a second terminal that have established a communication connection, and includes the following steps: in response to the first terminal running a target application, acquiring first movement data of the first terminal and corresponding second movement data of the second terminal; determining a first terminal posture according to the first movement data, and a second terminal posture according to the second movement data; and when the first terminal posture and the second terminal posture match the switching posture corresponding to the target application, switching the running terminal of the target application to the second terminal over the communication connection. The method and apparatus can switch the running terminal of the target application intelligently, sparing the user complex manual switching operations.

Description

Application operation terminal switching method and device, medium and electronic equipment
Technical Field
The present disclosure relates to the field of communications technologies, and in particular, to an application operation terminal switching method, an application operation terminal switching apparatus, a computer-readable medium, and an electronic device.
Background
In recent years, with the rapid development of wearable devices, wearable devices of various kinds are increasingly linked with devices such as mobile phones to achieve device cooperation. For example, a device such as a bracelet can directly collect the user's exercise data and send it to a mobile phone for analysis; for another example, a smart watch, as a mobile phone accessory, can remotely control the phone to take pictures, receive and reply to messages from the phone, and select and control music playback. However, in the related art, such linkage systems generally require the user to exercise control through manual operation and cannot be controlled intelligently.
Disclosure of Invention
The purpose of the present disclosure is to provide an application running terminal switching method, an application running terminal switching apparatus, a computer-readable medium, and an electronic device, so as to improve, at least to a certain extent, the degree of intelligence of the switching process between linked terminals and spare the user complicated manual switching operations.
According to a first aspect of the present disclosure, there is provided an application running terminal switching method, applied to a first terminal and a second terminal that have established a communication connection, including: in response to the first terminal running a target application, acquiring first movement data of the first terminal and corresponding second movement data of the second terminal; determining a first terminal posture according to the first movement data, and determining a second terminal posture according to the second movement data; and when the first terminal posture and the second terminal posture match the switching posture corresponding to the target application, switching the running terminal of the target application to the second terminal based on the communication connection.
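The decision at the heart of the first aspect can be sketched as follows. This is a minimal illustration only: the function name `should_switch`, the mapping `SWITCH_POSES`, and the pose labels are assumed names, not identifiers from the patent.

```python
# Hypothetical sketch of the claimed switching decision; should_switch,
# SWITCH_POSES and the pose labels are illustrative names, not from the patent.

def should_switch(first_pose, second_pose, target_app, switch_poses):
    """True when both terminal postures match the switching posture
    registered for the target application."""
    expected = switch_poses.get(target_app)
    return expected is not None and (first_pose, second_pose) == expected

# For a call application: watch arm raised + phone lifted triggers the hand-off.
SWITCH_POSES = {"call": ("arm_raised", "phone_lifted")}

print(should_switch("arm_raised", "phone_lifted", "call", SWITCH_POSES))  # True
print(should_switch("stationary", "phone_lifted", "call", SWITCH_POSES))  # False
```

When the function returns True, the running terminal would be switched to the second terminal over the established communication connection.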
According to a second aspect of the present disclosure, there is provided an application running terminal switching apparatus, applied to a first terminal and a second terminal that have established a communication connection, including: a data acquisition module, configured to acquire, in response to the first terminal running a target application, first movement data of the first terminal and corresponding second movement data of the second terminal; a posture determination module, configured to determine a first terminal posture according to the first movement data and a second terminal posture according to the second movement data; and a terminal switching module, configured to switch the running terminal of the target application to the second terminal based on the communication connection when the first terminal posture and the second terminal posture match the switching posture corresponding to the target application.
According to a third aspect of the present disclosure, there is provided a computer-readable medium on which a computer program is stored, the computer program, when executed by a processor, implementing the above-described method.
According to a fourth aspect of the present disclosure, there is provided an electronic device, including: one or more processors; and a memory storing one or more programs that, when executed by the one or more processors, cause the one or more processors to implement the above-described method.
According to the application running terminal switching method provided by the embodiments of the present disclosure, when the first terminal runs a target application, the movement data of the first terminal and of the second terminal, namely the first movement data and the second movement data, are collected. A first terminal posture and a second terminal posture are then determined from the first movement data and the second movement data respectively, and when the two postures match the switching posture corresponding to the target application, the running terminal of the target application is switched to the second terminal based on the communication connection. In the embodiments of the present disclosure, the postures of the first terminal and the second terminal are determined by collecting the movement data of both terminals, and when these postures match the switching posture corresponding to the target application, the running terminal of the target application is switched intelligently, sparing the user complex manual switching operations.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure. It is to be understood that the drawings in the following description are merely exemplary of the disclosure, and that other drawings may be derived from those drawings by one of ordinary skill in the art without the exercise of inventive faculty. In the drawings:
FIG. 1 illustrates a schematic diagram of an exemplary system architecture to which embodiments of the present disclosure may be applied;
FIG. 2 shows a schematic diagram of an electronic device to which embodiments of the present disclosure may be applied;
fig. 3 schematically illustrates a flowchart of an application execution terminal switching method in an exemplary embodiment of the present disclosure;
FIG. 4 schematically illustrates a flow chart of a method of determining a first terminal pose and a second terminal pose in an exemplary embodiment of the present disclosure;
FIG. 5 schematically illustrates a flow chart of a gesture recognition method in an exemplary embodiment of the present disclosure;
FIG. 6 schematically illustrates a flow chart of a method of training a gesture recognition model in an exemplary embodiment of the disclosure;
FIG. 7 schematically illustrates a flow chart of a method of locating the movement endpoint of sample data in an exemplary embodiment of the disclosure;
FIG. 8 schematically illustrates a flow chart of another method of training a gesture recognition model in exemplary embodiments of the present disclosure;
fig. 9 schematically illustrates a flowchart of another application execution terminal switching method in an exemplary embodiment of the present disclosure;
fig. 10 schematically illustrates a composition diagram of an application operation terminal switching device in an exemplary embodiment of the present disclosure.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
Furthermore, the drawings are merely schematic illustrations of the present disclosure and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and a repetitive description thereof will be omitted. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
Fig. 1 is a schematic diagram illustrating a system architecture of an exemplary application environment to which an application execution terminal switching method and apparatus according to an embodiment of the present disclosure may be applied.
As shown in fig. 1, the system architecture 100 may include two or more of the terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 serves as a medium for providing communication links between the terminal devices 101, 102, 103 and the server 105. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few. The terminal devices 101, 102, 103 may be various electronic devices with mobile data acquisition and processing functions, including but not limited to desktop computers, portable computers, smart phones, tablets, smart wearable devices, and the like. It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation. For example, server 105 may be a server cluster comprised of multiple servers, or the like.
The application running terminal switching method provided by the embodiments of the present disclosure is generally executed by the terminal devices 101, 102, and 103; accordingly, the application running terminal switching apparatus is generally disposed in the terminal devices 101, 102, and 103. However, as those skilled in the art will readily understand, the method may also be executed by the server 105, in which case the apparatus may be disposed in the server 105; this exemplary embodiment imposes no particular limitation in this respect. For example, in an exemplary embodiment, any two of the terminal devices 101, 102, and 103 may serve as the first terminal and the second terminal respectively. When the first terminal runs the target application, the first terminal, the second terminal, or the server 105 acts as the executing body: it obtains the first movement data and the corresponding second movement data through the network, determines the first terminal posture and the second terminal posture from them, and, when these postures match the switching posture corresponding to the target application, switches the running terminal of the target application to the second terminal based on the network 104.
The exemplary embodiment of the present disclosure provides an electronic device for implementing an application running terminal switching method, which may be the terminal device 101, 102, 103 or the server 105 in fig. 1. The electronic device comprises at least a processor and a memory for storing executable instructions of the processor, the processor being configured to perform the application execution terminal switching method via execution of the executable instructions.
The following takes the mobile terminal 200 in fig. 2 as an example to illustrate the configuration of the electronic device. It will be appreciated by those skilled in the art that, apart from components specifically intended for mobile use, the configuration in fig. 2 can also be applied to fixed devices. In other embodiments, the mobile terminal 200 may include more or fewer components than shown, may combine some components, may split some components, or may arrange the components differently. The illustrated components may be implemented in hardware, software, or a combination of the two. The interfacing relationship between the components is only schematically illustrated and does not constitute a structural limitation of the mobile terminal 200. In other embodiments, the mobile terminal 200 may also adopt an interfacing manner different from that of fig. 2, or a combination of multiple interfacing manners.
As shown in fig. 2, the mobile terminal 200 may specifically include: a processor 210, an internal memory 221, an external memory interface 222, a Universal Serial Bus (USB) interface 230, a charging management module 240, a power management module 241, a battery 242, an antenna 1, an antenna 2, a mobile communication module 250, a wireless communication module 260, an audio module 270, a speaker 271, a receiver 272, a microphone 273, an earphone interface 274, a sensor module 280, a display 290, a camera module 291, an indicator 292, a motor 293, a button 294, and a Subscriber Identity Module (SIM) card interface 295. The sensor module 280 may include a gyroscope sensor 2801, an acceleration sensor 2802, a depth sensor 2803, and the like.
Processor 210 may include one or more processing units. For example, the processor 210 may include an Application Processor (AP), a modem processor, a Graphics Processing Unit (GPU), an Image Signal Processor (ISP), a controller, a video codec, a Digital Signal Processor (DSP), a baseband processor, and/or a Neural-network Processing Unit (NPU), and the like. The different processing units may be separate devices or may be integrated into one or more processors.
The NPU is a Neural-Network (NN) computing processor. By drawing on the structure of biological neural networks, for example the transfer mode between neurons of the human brain, it processes input information quickly and can also learn continuously by itself. The NPU enables applications such as intelligent recognition on the mobile terminal 200, for example image recognition, face recognition, speech recognition, and text understanding. In some embodiments, the NPU may be configured to perform gesture recognition on the first sampling segment and the second sampling segment based on a gesture recognition model to obtain the first terminal posture and the second terminal posture.
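As a rough stand-in for such a gesture recognition model, the sketch below classifies per-segment features with a nearest-centroid rule. The feature choice (mean acceleration per axis over a sampling segment), the pose labels, and the function names are assumptions for illustration; the patent's classification codes also name support vector machines and tree classifiers as candidate techniques.

```python
import math

def train(samples):
    """samples: {pose_label: [feature_vector, ...]} -> {pose_label: centroid}."""
    model = {}
    for label, vectors in samples.items():
        dim = len(vectors[0])
        model[label] = [sum(v[i] for v in vectors) / len(vectors)
                        for i in range(dim)]
    return model

def predict(model, segment_features):
    """Assign a sampling segment to the pose with the nearest centroid."""
    return min(model, key=lambda label: math.dist(model[label], segment_features))

# Toy features: mean acceleration per axis over a sampling segment (assumed).
model = train({
    "arm_raised": [[0.1, 0.9, 0.2], [0.2, 1.0, 0.1]],
    "stationary": [[0.0, 0.0, 0.0], [0.05, 0.0, 0.05]],
})
print(predict(model, [0.15, 0.95, 0.15]))  # arm_raised
```

A deployed model would be trained offline on labeled movement segments and run on the NPU; this sketch only shows the classify-a-segment interface.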
A memory is provided in the processor 210. The memory may store instructions for implementing six modular functions: detection instructions, connection instructions, information management instructions, analysis instructions, data transmission instructions, and notification instructions, and execution is controlled by processor 210.
The wireless communication function of the mobile terminal 200 may be implemented by the antenna 1, the antenna 2, the mobile communication module 250, the wireless communication module 260, a modem processor, a baseband processor, and the like. Wherein, the antenna 1 and the antenna 2 are used for transmitting and receiving electromagnetic wave signals; the mobile communication module 250 may provide a solution including wireless communication of 2G/3G/4G/5G, etc. applied to the mobile terminal 200; the modem processor may include a modulator and a demodulator; the Wireless communication module 260 may provide a solution for Wireless communication including a Wireless Local Area Network (WLAN) (e.g., a Wireless Fidelity (Wi-Fi) network), Bluetooth (BT), and the like, applied to the mobile terminal 200. In some embodiments, antenna 1 of the mobile terminal 200 is coupled to the mobile communication module 250 and antenna 2 is coupled to the wireless communication module 260, such that the mobile terminal 200 may communicate with networks and other devices via wireless communication techniques. In some embodiments, the communication connection between the first terminal and the second terminal may be established through a wireless communication function.
Internal memory 221 may be used to store computer-executable program code, which includes instructions. The internal memory 221 may include a program storage area and a data storage area. The program storage area may store an operating system and the application programs required by at least one function (such as a sound playing function or an image playing function). The data storage area may store data created during use of the mobile terminal 200 (e.g., audio data, a phonebook, etc.). In addition, the internal memory 221 may include a high-speed random access memory, and may further include a nonvolatile memory, such as at least one magnetic disk storage device, a flash memory device, or a Universal Flash Storage (UFS). The processor 210 performs the various functional applications and data processing of the mobile terminal 200 by executing instructions stored in the internal memory 221 and/or instructions stored in the memory provided in the processor. In some embodiments, the various functional applications of the mobile terminal may be controlled by the processor 210, thereby switching the running terminal of the target application.
The gyroscope sensor 2801 may be used to determine the motion posture of the mobile terminal 200. In some embodiments, the angular velocity of the mobile terminal 200 about three axes (i.e., the x, y, and z axes) may be determined by the gyroscope sensor 2801. The gyroscope sensor 2801 can be used for photographic anti-shake, navigation, somatosensory game scenes, and the like.
The acceleration sensor 2802 may detect the magnitude of the acceleration of the mobile terminal 200 in various directions (generally, along three axes). The magnitude and direction of gravity may be detected when the mobile terminal 200 is stationary. The sensor can also be used to recognize the posture of the electronic device, in applications such as landscape/portrait switching and pedometers.
In some embodiments, the movement data of the mobile terminal may be collected by sensors such as the gyroscope sensor 2801 and the acceleration sensor 2802.
The depth sensor 2803 is used to acquire depth information of the scene. In addition, sensors with other functions, such as a pressure sensor, an air pressure sensor, a magnetic sensor, an acceleration sensor, a distance sensor, a proximity light sensor, a fingerprint sensor, a temperature sensor, a touch sensor, an ambient light sensor, a bone conduction sensor, etc., may be provided in the sensor module 280 according to actual needs.
In the related art, switching the running terminal of a target application generally requires a manual operation on the target application's page. Taking a call application as the target application as an example, when a call is answered through a terminal device such as a smart watch, the call can be switched to the mobile phone by performing a manual operation on the call page.
In view of one or more of the above problems, the present exemplary embodiment provides an application execution terminal switching method. The application operation terminal switching method may be applied to the server 105, and may also be applied to one or more of the terminal devices 101, 102, and 103, which is not particularly limited in this exemplary embodiment. Referring to fig. 3, the application execution terminal switching method may include the following steps S310 to S330:
in step S310, in response to the first terminal running the target application, first movement data of the first terminal and second movement data of the second terminal corresponding to the first movement data are acquired.
The first movement data or the second movement data may include sensor-collected data characterizing a movement attribute of the first terminal or the second terminal; such data may characterize several movement attributes at once or only a single one. For example, the first movement data or the second movement data may include the acceleration collected by an acceleration sensor provided on the first terminal or the second terminal; as another example, it may include the angular velocities about three axes (i.e., the x, y, and z axes) acquired by a gyroscope sensor provided on the first terminal or the second terminal.
In an exemplary embodiment, an established communication connection exists between the first terminal and the second terminal; it may be established based on a communication manner such as Bluetooth, a wireless network, or an eSIM card. The running data of the target application can be transmitted over this connection, thereby realizing the switching of the running terminal. For example, the first terminal and the second terminal may establish the communication connection based on Bluetooth; as another example, they may establish it with one terminal holding a SIM card and the other embedding an eSIM card with the same number.
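The role of the connection in the hand-off can be sketched with an in-process stand-in. Everything here is illustrative: an actual link would run over Bluetooth, a wireless network, or an eSIM channel as described above, and the class, field, and payload names are assumptions.

```python
# Minimal in-process stand-in for the established communication connection.

class Connection:
    def __init__(self):
        self.inbox = []          # messages "received" by the peer terminal

    def send(self, payload):
        self.inbox.append(payload)

def hand_over(running_data, connection):
    """Transfer the target application's running data over the connection so
    the second terminal can resume the application."""
    connection.send({"type": "app_handover", "data": running_data})

link = Connection()
hand_over({"app": "call", "peer": "remote_party", "elapsed_s": 42}, link)
print(link.inbox[0]["data"]["app"])  # call
```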
In step S320, a first terminal pose is determined according to the first movement data, and a second terminal pose is determined according to the second movement data.
The first terminal posture and the second terminal posture respectively characterize the posture expressed by the movement of the first terminal or the second terminal. Taking a call application as the target application, with the first terminal being a smart watch and the second terminal being a mobile phone: when the user is on a call with a remote party through the smart watch and wants to continue the call on the mobile phone, the user can pick up the phone with the hand wearing the watch. At this time, the movement of the smart watch can be decomposed, relative to the user, into a forward movement plus an upward lift, or just an upward lift, while the movement of the mobile phone appears, relative to the user, as a backward movement plus an upward lift, or just an upward lift. The first terminal posture determined from the movement of the smart watch is thus the posture of the user lifting the arm wearing the watch, and the second terminal posture determined from the movement of the mobile phone is the posture of the user picking up the phone to answer. By acquiring the first movement data and the second movement data simultaneously, the postures expressed by the movements of the two terminals can be determined more accurately, avoiding the inaccurate posture recognition and erroneous switching that relying on the movement data of a single terminal might cause.
It should be noted that, for the purpose of determining the first terminal posture and the second terminal posture, the correspondence between the first movement data and the second movement data generally means that the two terminals acquire the data over the same period. That is, assuming the first terminal starts running the target application at the 1st second, the first movement data is the movement data of the first terminal acquired from the 1st second onward, and the corresponding second movement data is the movement data of the second terminal acquired from the 1st second onward.
It should also be noted that, in some embodiments, different correspondences may be set according to the application scenario and the differences between the first terminal and the second terminal. For example, the acquisition periods of the first movement data and the second movement data may be set with a fixed delay between them. That is, assuming the first terminal starts running the target application at the 1st second, the first movement data is the movement data of the first terminal acquired from the 1st second onward; with a fixed delay of 2 seconds, the corresponding second movement data is the movement data of the second terminal acquired from the 3rd second onward.
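The two correspondence rules above, same-period acquisition and acquisition with a fixed delay, can be sketched as window slicing over two sample streams. The streams are plain lists of per-second samples, and the function name and indices are illustrative assumptions.

```python
def aligned_windows(first_stream, second_stream, start, length, delay=0):
    """Pair the first terminal's data from `start` with the second terminal's
    data from `start + delay`, so both windows cover corresponding periods."""
    first = first_stream[start:start + length]
    second = second_stream[start + delay:start + delay + length]
    return first, second

first_data = list(range(10))           # first terminal samples, one per second
second_data = list(range(100, 110))    # second terminal samples

# Same period: both windows start at second 1.
print(aligned_windows(first_data, second_data, start=1, length=3))
# Fixed delay of 2 s: the second terminal's window starts at second 3.
print(aligned_windows(first_data, second_data, start=1, length=3, delay=2))
```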
In an exemplary embodiment, referring to fig. 4, when determining a first terminal pose according to the first movement data and determining a second terminal pose according to the second movement data, the following steps S410 and S420 may be included:
in step S410, when the first movement data satisfies a first change condition and the second movement data satisfies a second change condition, the first movement data and the second movement data are respectively sampled to obtain a first sampling segment and a second sampling segment.
The first change condition and the second change condition are respectively used for determining whether the first terminal or the second terminal moves relative to the user in the current state.
In an exemplary embodiment, the first change condition may include a first variation threshold, and the second change condition may include a second variation threshold. In the sampling process, a first variation at the t-th time may then be calculated based on the first movement data, and a second variation at the t-th time based on the second movement data; when the first variation is greater than or equal to the first variation threshold and the second variation is greater than or equal to the second variation threshold, a first data segment after the t-th time is acquired from the first movement data and a second data segment after the t-th time is acquired from the second movement data; the first data segment and the second data segment are then sampled respectively to obtain the first sampling segment and the second sampling segment.
Specifically, in order to accurately identify whether the first terminal and the second terminal move, after the first movement data and the second movement data are collected, the variation of the movement data at the t-th time may be calculated based on the first movement data and the second movement data, respectively. For example, when the first movement data and the second movement data are data collected by a triaxial acceleration sensor, the triaxial acceleration data at the t-th time may be denoted A(t) = [a_x(t), a_y(t), a_z(t)]. Taking the first n sampling points as the initial state, A(t) may be compared with the first n sampling points one by one and the variation |A(t) - A(t-n)| calculated.
Specifically, after the first variation and the second variation are obtained, whether a data point of the first terminal or the second terminal at the t-th time is a non-stationary data point or not can be judged through a preset variation threshold, and then whether subsequent processing is performed or not is determined. When the first variation is greater than or equal to the first variation threshold and the second variation is greater than or equal to the second variation threshold, that is, when the subsequent processing is required, a first data segment after the t-th time is collected in the first moving data based on the preset data window, and then a second data segment after the t-th time is collected in the second moving data based on the preset data window, so as to determine the first terminal posture and the second terminal posture based on the first data segment and the second data segment.
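The variation test and window collection described above can be sketched as follows. This is a simplified illustration under stated assumptions: the variation is computed only against the point n steps back rather than against all of the first n points, and the threshold, window size, and sample values are invented:

```python
import math

def magnitude(sample):
    # Euclidean norm of a tri-axial sample [ax, ay, az]
    return math.sqrt(sum(v * v for v in sample))

def first_motion_index(data, n, threshold):
    """Return the first index t at which the variation |A(t) - A(t-n)|
    meets the variation threshold (a non-stationary data point),
    or None if the data stays still."""
    for t in range(n, len(data)):
        variation = abs(magnitude(data[t]) - magnitude(data[t - n]))
        if variation >= threshold:
            return t
    return None

def data_segment_after(data, t, window):
    # Collect the data segment after the t-th time based on a preset data window
    return data[t:t + window]

# Still for 5 samples, then a sudden acceleration change.
still = [[0.0, 0.0, 9.8]] * 5
moving = [[4.0, 0.0, 9.8]] * 5
data = still + moving

t = first_motion_index(data, n=2, threshold=0.5)
segment = data_segment_after(data, t, window=4)
```

The same check would run independently on the first and second terminal's data, and the segments are only collected when both thresholds are met.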
It should be noted that the first variation threshold and the second variation threshold may be set differently according to different application scenarios. Specifically, when the movement data includes a plurality of types of data, a first change threshold and a second change threshold may be set for each type of data, and the specific values may be the same or different. For example, when the first terminal and the second terminal are a smart watch and a mobile phone, respectively, and the movement data is data collected by an acceleration sensor, the first change threshold and the second change threshold may be set by taking the acceleration collected while the arm is stationary relative to the user as a constant reference.
In addition, the preset data window can be set differently according to different application scenarios. Specifically, when a plurality of kinds of data are included in the mobile data, a first data window and a second data window may be set for each kind of data, respectively. It is to be added that, in general, in order to ensure the accuracy of the first terminal pose and the second terminal pose, the first data window and the second data window are generally set to be the same size to ensure the correspondence of the first movement data and the second movement data. However, in a specific scenario, the first data window and the second data window may be set to different sizes according to an application scenario.
By calculating the variation and judging the relationship between the variation and the variation threshold, the collection of excessive static data points can be avoided, the data segment containing the motion data can be accurately detected, and the first terminal posture and the second terminal posture can be more accurately determined.
In addition, if the first sampling segment and the second sampling segment are to be processed by a model, then when the first data segment and the second data segment are sampled, the sampling frequency of the sampled data needs to be consistent with the sampling frequency of the sample data input to the model during training, so as to ensure the normal operation of the model.
In step S420, the first terminal pose is determined according to the first sampling segment, and the second terminal pose is determined according to the second sampling segment.
When the first terminal posture and the second terminal posture are determined, the postures can be determined in multiple ways. For example, the determination may be made by a machine learning model, a deep learning model, or the like; for another example, the determination may be made by comparison against preset data. When determining the postures through a model, the first terminal posture and the second terminal posture may be output simultaneously by the same model, or determined separately by different models. When the first terminal posture and the second terminal posture are output by the same model, since it is subsequently judged whether they conform to the switching posture corresponding to the target application, the model output may be set directly to whether the postures conform to the switching posture corresponding to the target application.
In an exemplary embodiment, the first sampling segment and the second sampling segment may be gesture-recognized based on a gesture recognition model to determine the first terminal gesture and the second terminal gesture. The gesture recognition model may include a classification algorithm model constructed based on a classification algorithm such as LR (Logistic Regression), RF (Random Forest), SVM (Support Vector Machine), and the like.
In an exemplary embodiment, referring to fig. 5, performing gesture recognition on the first sampling segment and the second sampling segment based on a gesture recognition model to determine the first terminal gesture and the second terminal gesture may include the following steps S510 to S520:
in step S510, a first feature of the first sampling section and a second feature of the second sampling section are extracted.
In an exemplary embodiment, the first and second features extracted for the first and second sample segments may include time domain features. Wherein the time domain features may include a combination of one or more of the following features:
1. mean time domain features:
the mean of the amplitude of the acceleration sensor and of each of its axes (x, y, z), and the mean of the amplitude of the gyroscope sensor and of each of its axes (x, y, z), giving 6-dimensional features;
standard deviation / maximum-minimum difference: the standard deviation and the maximum-minimum difference corresponding to the amplitudes of the acceleration and gyroscope sensors and to each of their axes, giving 6-dimensional features.
2. Moving variance class:
the variance corresponding to the sensor amplitudes of the accelerometer, gyroscope, etc. and to each axis is calculated as formula (1):

s^2 = (1/N) Σ_{i=1}^{N} (a_i - ā)^2    (1)

where a_i represents the data of each column (x, y, z) in the data collected by sensors such as the accelerometer and gyroscope, and ā is the mean of the column over the window; each column is calculated with the above formula, and the features are 2-dimensional in total.
3. Simple moving average class:
the moving average corresponding to the amplitudes and each axis of sensors such as the accelerometer and gyroscope, calculated as formula (2):

SMA = (1/N) Σ_{k=1}^{N} ( |i_x(k)| + |i_y(k)| + |i_z(k)| )    (2)

where i_x, i_y, i_z respectively represent the data corresponding to the (x, y, z) axes in the data collected by the accelerometer and the gyroscope, and the features are 2-dimensional in total.
4. Energy average class:
the energy average corresponding to the amplitudes and each axis of sensors such as the accelerometer and gyroscope, calculated as formula (3):

E = (1/N) Σ_{i=1}^{N} h_i^2    (3)

where h_i corresponds to the data of each column in (x, y, z) included in the data collected by sensors such as the accelerometer and gyroscope; each column is calculated with the above formula, and the features are 6-dimensional in total.
5. Direction vector moving variance class:
the direction vector moving variance corresponding to sensors such as the accelerometer and gyroscope, calculated as formula (4):

s_p^2 = (1/N) Σ_{k=1}^{N} ( θ_p(k) - θ̄_p )^2,  θ_p(k) = arctan( p_y(k) / p_x(k) ),  p ∈ {a, g}    (4)

where a denotes the acceleration sensor and g denotes the gyroscope; p_x and p_y respectively represent the x-axis and y-axis data of the corresponding sensor, from which the direction angle θ_p is extracted and calculated; the features are 2-dimensional in total.
Each of the above 5 feature classes involves N, where N represents the number of data points in a single sampled segment; its value is related to the time window and the sampling frequency. Specifically, N = sampling frequency × time window. For example, at a sampling frequency of 125 Hz and time windows of 1s, 3s, and 5s, the corresponding data sizes are 125, 375, and 625, respectively. It should be noted that the time window may be set according to the duration of the switching gesture corresponding to the target application. For example, the switching gesture corresponding to the target application is that the arm wearing the smart watch lifts up. In this case, the time window can be set according to the duration of the lifting motion, which then determines the data size of a single sampled segment.
In addition, in addition to the above 5 features, the feature extraction process may also extract more types of features, and the present disclosure is not particularly limited thereto.
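As a concrete illustration, the 5 feature classes above might be computed roughly as follows. This is a minimal sketch: the feature names, the amplitude definition, and the use of an arctangent direction angle for class 5 are assumptions, not the patent's exact formulas:

```python
import math

def mean(xs):
    return sum(xs) / len(xs)

def time_domain_features(ax, ay, az):
    """Extract example time-domain features for one tri-axial stream
    (e.g. an accelerometer window of N = fs x time-window samples)."""
    amp = [math.sqrt(x * x + y * y + z * z) for x, y, z in zip(ax, ay, az)]
    feats = {}
    # 1. mean / standard deviation / max-min difference of amplitude and axes
    for name, col in (("amp", amp), ("x", ax), ("y", ay), ("z", az)):
        m = mean(col)
        feats["mean_" + name] = m
        feats["std_" + name] = math.sqrt(mean([(v - m) ** 2 for v in col]))
        feats["range_" + name] = max(col) - min(col)
    # 2. moving variance, formula (1): (1/N) * sum (a_i - mean)^2
    feats["var_amp"] = mean([(v - mean(amp)) ** 2 for v in amp])
    # 3. simple moving average, formula (2): (1/N) * sum (|ix| + |iy| + |iz|)
    feats["sma"] = mean([abs(x) + abs(y) + abs(z) for x, y, z in zip(ax, ay, az)])
    # 4. energy average, formula (3): (1/N) * sum h_i^2
    feats["energy_amp"] = mean([v * v for v in amp])
    # 5. direction-vector moving variance over an assumed angle atan2(y, x)
    theta = [math.atan2(y, x) for x, y in zip(ax, ay)]
    feats["dir_var"] = mean([(t - mean(theta)) ** 2 for t in theta])
    return feats

# N = sampling frequency x time window, e.g. 125 Hz x 1 s = 125 samples
f = time_domain_features([1.0] * 125, [0.0] * 125, [0.0] * 125)
```

The same extraction would be run on both the first and second sampling segments to obtain the feature pair input to the model.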
In step S520, the first feature and the second feature are input into a gesture recognition model for gesture recognition.
In an exemplary embodiment, after extracting the first feature of the first sampling segment and the second feature of the second sampling segment, the first feature and the second feature may be input into a pre-trained gesture recognition model for gesture recognition to determine the first terminal gesture and the second terminal gesture.
In an exemplary embodiment, referring to fig. 6, the training process of the gesture recognition model may include the following steps S610 to S640:
in step S610, sample data is collected and sample marking is performed on the sample data.
The sample data may include first sample data and second sample data corresponding to the first sample data. The first sample data may include sample data collected based on a sensor provided at the first terminal; the second sample data may include sample data collected based on a sensor provided at the second terminal. It is to be added that, in order to ensure the normal operation of the gesture recognition model, the correspondence relationship existing between the first sample data and the second sample data is consistent with the correspondence relationship between the first movement data and the second movement data.
The sample marking of the sample data may include positive sample marking and negative sample marking, and the specific marks may take the form of numbers, letters, and the like, which is not particularly limited in this disclosure. A positive sample mark identifies sample data that conforms to the switching posture corresponding to the target application; a negative sample mark identifies sample data that does not conform to the switching posture corresponding to the target application. It should be noted that, in some embodiments, when there are many types of negative sample data, each type may be further marked on the basis of the negative sample mark, so that training may be performed according to the different types of negative sample marks during subsequent training.
For example, in an application scenario where a call application needs to be switched from a smart watch to a mobile phone, the following data of the smart watch may be collected as sample data of the smart watch:
class 1: the arm wearing the smart watch picks up the mobile phone and places it beside the ear; positive sample, y = 1;
class 2: the arm wearing the smart watch lifts up but does not pick up the mobile phone; negative sample, y = 0;
class 3: the arm not wearing the smart watch picks up the mobile phone and places it beside the ear; negative sample, y = 0;
class 4: the arm is in other states; negative sample, y = 0.
The above feature extraction is performed while the smart watch is in a call state. Only class 1 is a positive sample, marked y = 1, indicating that the first terminal posture and the second terminal posture conform to the switching posture; the remaining classes are negative samples, marked y = 0, indicating that the first terminal posture and the second terminal posture do not conform to the switching posture.
It should be noted that, when acquiring sample data, in order to avoid the data volume being too large or too small, different sampling frequencies may be selected according to different application scenarios to sample the raw data acquired by the sensors. If sample data acquired at a certain sampling frequency cannot capture the key information of the first terminal posture or the second terminal posture, the sampling frequency may be changed and the data resampled, so that the feature pairs input to the model during gesture recognition model training are extracted from sample data acquired at a suitable sampling frequency. In addition, when the trained gesture recognition model is applied, the sampling frequency used to sample the data segments into sampling segments is kept consistent with the sampling frequency corresponding to the features input to the model during training, so as to ensure the normal operation of the gesture recognition model and the accuracy of its output.
In step S620, positive sample data and negative sample data of a preset sampling ratio are obtained from the sample data based on the sample mark.
The preset sampling ratio can be set differently according to the specific scenario. For example, for the class 1 to class 4 samples above, the ratio of class 1, class 2, class 3, and class 4 may be set to 3:1:1:1, so that the numbers of positive and negative sample data, and of the different types of negative sample data, remain balanced.
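Drawing samples at a preset ratio such as 3:1:1:1 can be sketched as a small stratified-sampling helper. The function name, the tuple layout of a sample, and the totals are illustrative assumptions:

```python
import random

def sample_by_ratio(data_by_class, ratio, total, seed=0):
    """Draw samples so that the classes appear in the preset ratio,
    e.g. ratio {1: 3, 2: 1, 3: 1, 4: 1} for the class 1-4 scheme above."""
    rng = random.Random(seed)
    unit = total / sum(ratio.values())
    picked = []
    for cls, weight in ratio.items():
        k = int(unit * weight)
        picked.extend(rng.sample(data_by_class[cls], k))
    return picked

# Each sample is a (watch segment, phone segment, mark) tuple.
data_by_class = {
    1: [("watch_seg", "phone_seg", 1)] * 30,   # positive samples, y = 1
    2: [("watch_seg", "phone_seg", 0)] * 30,   # negative samples, y = 0
    3: [("watch_seg", "phone_seg", 0)] * 30,
    4: [("watch_seg", "phone_seg", 0)] * 30,
}
train = sample_by_ratio(data_by_class, {1: 3, 2: 1, 3: 1, 4: 1}, total=60)
```

With the 3:1:1:1 ratio, half of the drawn samples are positives and the negative types stay balanced.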
In step S630, a positive sample feature pair of the first sample data and the second sample data in the positive sample data is extracted, and a negative sample feature pair of the first sample data and the second sample data in the negative sample data is extracted.
Since the sample data includes the first sample data and the second sample data, when the feature extraction is performed, the features of the first sample data and the second sample data in the positive sample data and the negative sample data need to be extracted at the same time. Therefore, after feature extraction is performed on positive sample data and negative sample data, a positive sample feature pair and a negative sample feature pair are obtained, and each feature pair comprises a first feature and a second feature.
In step S640, a preset model is trained based on the positive sample feature pair and the negative sample feature pair, so as to obtain a gesture recognition model.
In an exemplary embodiment, after the positive sample feature pair and the negative sample feature pair are obtained, supervised model training may be performed on a preset model based on the positive sample feature pair and the negative sample feature pair to obtain a posture recognition model. It should be noted that, in some embodiments, the model training may also be performed in an unsupervised or semi-supervised mode according to different application scenarios, which is not particularly limited by the present disclosure.
It should be noted that, in the above embodiment, when performing gesture recognition on a switching gesture corresponding to a target application, only whether the first terminal gesture and the second terminal gesture conform to the switching gesture needs to be determined, and therefore, only positive sample marking and negative sample marking need to be performed on marking of sample data (with supervised training). While in other embodiments, it may be desirable to recognize multiple gestures simultaneously. For example, different switching gestures may be set for different target applications in the first terminal or the second terminal, and in order to avoid setting too many models, the same gesture recognition model may be selected for gesture recognition. At this time, when sample marking is performed on sample data, marking of a corresponding gesture can be performed on each sample data according to the number of switching gestures, so that the gesture recognition model can determine a corresponding gesture according to the first feature and the second feature.
In addition, in an exemplary embodiment, when the terminal is switched based on the gesture recognition model, negative sample data for a certain application scene or a certain user may be continuously collected, and then the gesture recognition model is further optimized based on the negative sample data to obtain a personalized gesture recognition model.
In an exemplary embodiment, when sample data is collected, in order to obtain complete movement data, a period of data before and after the terminal movement is usually retained, so as to prevent the movement data from being collected incompletely. On this basis, movement endpoint localization can be performed on the sample data. Specifically, as shown in fig. 7, the method may include the following steps S710 to S720:
in step S710, the mobile end point location is performed on the sample data to determine a data start point and a data end point of the terminal movement in the sample data.
In an exemplary embodiment, an algorithm may be used to perform the positioning of the moving end point to determine a data starting point and a data ending point corresponding to the movement performed by the first terminal or the second terminal when the first terminal pose and the second terminal pose are performed in the sample data. For example, a SWAB (Sliding Window And Bottom-up) algorithm may be employed to determine a data start point And a data end point of movement of the first terminal or the second terminal in the sample data.
In step S720, data in the sample data that is located beyond the data start point and the data end point is deleted, and updated sample data is obtained.
In an exemplary embodiment, after the data start point and the data end point are obtained, the data in the sample data located outside the data start point and the data end point are deleted, leaving the complete data corresponding to the movement executed by the first terminal or the second terminal when performing the first terminal posture and the second terminal posture, thereby realizing accurate segmentation of the complete movement data.
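The endpoint localization and trimming can be illustrated with a deliberately simplified stand-in: instead of the SWAB segmentation named above, the sketch below just takes the first and last samples that deviate from the initial resting level beyond a threshold. All names and values are assumptions:

```python
def locate_motion_endpoints(magnitudes, threshold):
    """Find the data start point and data end point of the movement:
    the first and last samples whose deviation from the initial resting
    level exceeds the threshold (a simplified stand-in for SWAB)."""
    rest = magnitudes[0]
    active = [i for i, v in enumerate(magnitudes) if abs(v - rest) > threshold]
    if not active:
        return None
    return active[0], active[-1]

def trim_sample(sample, start, end):
    # Delete data outside the start/end points -> updated sample data
    return sample[start:end + 1]

mags = [9.8] * 10 + [12.0] * 20 + [9.8] * 10   # rest, motion, rest
start, end = locate_motion_endpoints(mags, threshold=1.0)
updated = trim_sample(mags, start, end)
```

A real implementation would use SWAB's sliding-window plus bottom-up segmentation, which is more robust to gradual transitions than a fixed threshold.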
In addition, in step S410, before the first data segment and the second data segment are respectively sampled, the first data segment and the second data segment may be respectively filtered in advance to remove noise in the first data segment and the second data segment. In particular, the first data segment and the second data segment may be filtered using digital filters. For example, a 4-order Butterworth low-pass IIR digital filter may be used for filtering. In addition, the first data segment and the second data segment may be filtered in other manners, which is not limited in this disclosure.
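A 4th-order Butterworth low-pass IIR filter of the kind mentioned above can be sketched as two cascaded biquad sections. This is an illustrative implementation using the well-known audio-EQ-cookbook biquad coefficients with the two section Q values of the 4th-order Butterworth prototype; the 5 Hz cutoff and 125 Hz sampling rate are assumed, not taken from the patent:

```python
import math

def biquad_lowpass_coeffs(fc, fs, q):
    # Audio-EQ-cookbook (RBJ) low-pass biquad coefficients
    w0 = 2.0 * math.pi * fc / fs
    alpha = math.sin(w0) / (2.0 * q)
    cw = math.cos(w0)
    b0, b1, b2 = (1 - cw) / 2, 1 - cw, (1 - cw) / 2
    a0, a1, a2 = 1 + alpha, -2 * cw, 1 - alpha
    return b0 / a0, b1 / a0, b2 / a0, a1 / a0, a2 / a0

def biquad_filter(x, coeffs):
    # Direct-form-I difference equation
    b0, b1, b2, a1, a2 = coeffs
    y = []
    x1 = x2 = y1 = y2 = 0.0
    for xn in x:
        yn = b0 * xn + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2
        x2, x1, y2, y1 = x1, xn, y1, yn
        y.append(yn)
    return y

def butterworth4_lowpass(x, fc, fs):
    """4th-order Butterworth low-pass as two cascaded biquad sections
    (section Q values 0.5412 and 1.3066 give the Butterworth response)."""
    stage1 = biquad_filter(x, biquad_lowpass_coeffs(fc, fs, 0.54119610))
    return biquad_filter(stage1, biquad_lowpass_coeffs(fc, fs, 1.30656296))

fs, fc = 125.0, 5.0
dc = butterworth4_lowpass([1.0] * 400, fc, fs)                         # constant signal passes
hf = butterworth4_lowpass([(-1.0) ** n for n in range(400)], fc, fs)   # Nyquist-rate noise is removed
```

In practice a library routine (e.g. a standard signal-processing package's Butterworth design) would typically be used instead of hand-rolled coefficients.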
When the gesture recognition model is trained, the sample data may be filtered to remove noise in the sample data before feature extraction is performed on the sample data.
Meanwhile, in step S410, before the first data segment and the second data segment are respectively sampled, the first data segment and the second data segment may also be respectively normalized in advance, so as to avoid individual differences of data. The normalization processing method may include Z-score 0 mean normalization, Min-Max linear function normalization, and the like, which is not limited in this disclosure. Similarly, when the gesture recognition model is trained, before feature extraction is performed on sample data, normalization processing may be performed on the sample data.
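The two normalization methods named above can be sketched directly; the segment values are invented for illustration:

```python
def z_score(xs):
    """Z-score 0-mean normalization: (x - mean) / std."""
    m = sum(xs) / len(xs)
    std = (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5
    return [(x - m) / std for x in xs] if std else [0.0] * len(xs)

def min_max(xs):
    """Min-Max linear function normalization into [0, 1]."""
    lo, hi = min(xs), max(xs)
    return [(x - lo) / (hi - lo) for x in xs] if hi > lo else [0.0] * len(xs)

seg = [2.0, 4.0, 6.0, 8.0]
z = z_score(seg)      # zero mean, unit variance
mm = min_max(seg)     # scaled into [0, 1]
```

Whichever method is chosen, the same normalization must be applied consistently during training and during application of the model.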
In step S330, when the first terminal posture and the second terminal posture conform to the switching posture corresponding to the target application, the operating terminal of the target application is switched to the second terminal based on the communication connection.
In an exemplary embodiment, when the first terminal posture and the second terminal posture conform to the switching posture corresponding to the target application, the running terminal of the target application can be switched to the second terminal based on the communication connection established between the first terminal and the second terminal. The switching of the running terminal of the target application to the second terminal means that the first terminal sends data capable of enabling the target application to complete the current function to the second terminal, so that the target application can continue to complete the current function of the target application at the second terminal. Specifically, in some embodiments, there may be a case where both the first terminal and the second terminal have data required for running the target application, and at this time, only data capable of representing the current state of the target application in the first terminal may be sent to the second terminal; in other embodiments, there may be a case where the second terminal does not have data required for running the target application, and the second terminal supports running of the target application, and at this time, all data supporting completion of the current function of the target application may be sent to the second terminal; in addition, there may be some cases where the second terminal does not support the target application to run, and the handover function cannot be implemented. In addition, in different application scenarios, some auxiliary data can be sent to the second terminal, so as to better support the running of the target application.
For example, still taking the target application as the call application, the first terminal as the smart watch, the second terminal as the mobile phone, and taking the user to call through the smart watch as an example, when it is determined that the first terminal posture and the second terminal posture conform to the switching posture corresponding to the call application, the call number extracted from the smart watch may be sent to the mobile phone terminal, so as to achieve the purpose of switching the call application to the mobile phone. In addition, in the application scene, if the contact information exists, the contact information can be sent to the mobile phone for corresponding display in order to enable the user to make the current call object more clear.
In addition, in an exemplary embodiment, besides terminal switching, specific gestures may be set in the first terminal or the second terminal to link with other devices that have a communication connection. Specifically, when the movement data corresponding to the first terminal is set to match a specific gesture on the first terminal, the second terminal or another terminal may be controlled to execute a corresponding operation. For example, assume that the first terminal is a smart watch, the other terminal is a smart television, the set specific gesture is the smart watch circling counterclockwise, and the corresponding operation is turning down the volume. In this case, if the user circles the arm wearing the smart watch counterclockwise, so that the smart watch circles counterclockwise, the smart television is controlled to turn down the volume. With this setting, device control becomes more intelligent and interconnected, and the need to control devices through a dedicated remote controller or the like is avoided.
The training process and the specific application process of the gesture recognition model are elaborated below, taking LR as the classification algorithm and the above 5 classes of features (24 dimensions in total) as an example:
training process:
Feature extraction is performed on the positive and negative sample data, and q_0 is set to 1 for solving the bias parameter b, which together with the previously extracted features forms the training feature Q = [q_0, q_1, q_2, …, q_24]^T. A classification algorithm model is constructed from the feature pairs extracted from the positive and negative sample data; the training feature pair Q and the sample mark y, y ∈ {0,1}, are input, and the weights, bias parameter, and features are linearly combined to obtain wq + B = w_1q_1 + w_2q_2 + … + w_24q_24 + bq_0. Note that each element of the weight vector W = [w_1, w_2, …, w_24]^T is initialized to 0, the bias parameter b is initialized to 1, and B = bq_0.
Then, wq + B is input into a preset model constructed based on the LR classification algorithm, and the parameter group b and w_1, w_2, …, w_24 that maximizes the likelihood function L(w) is found through maximum likelihood estimation.
The specific process is as follows:
A) taking the logarithm of L(w) gives formula (5), which after formula conversion yields formula (6):

log L(w) = Σ_{i=1}^{m} [ y_i log h_w(q_i) + (1 - y_i) log(1 - h_w(q_i)) ]    (5)

log L(w) = Σ_{i=1}^{m} [ y_i (wq_i + B) - log(1 + e^{wq_i + B}) ]    (6)

where h_w(q_i) = 1 / (1 + e^{-(wq_i + B)}).
B) by the gradient descent method, taking the partial derivative of the likelihood function gives formula (7):

∂ log L(w) / ∂w_k = Σ_{i=1}^{m} ( y_i - h_w(q_i) ) q_ik    (7)

and w_k is updated as formula (8):

w_k := w_k + (α/m) Σ_{i=1}^{m} ( y_i - h_w(q_i) ) q_ik    (8)
where m represents the number of samples; α represents the learning rate, used to control the step size, with an initial value of 0.01 and subsequent adjustment as required; i indexes the i-th sample; q_ik represents the k-th column feature of the i-th sample; w_k represents the parameter corresponding to the k-th column feature; and k ranges over [0, 24].
Starting from the initialization W = [w_1, w_2, …, w_24] = 0 and b = 1, the above formula is iterated until the specified precision is reached, solving for the weight W = [w_1, w_2, …, w_24] and the bias parameter b and obtaining the gesture recognition model.
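The maximum-likelihood training loop and the probability output described above can be sketched as follows. This is a toy illustration with 2-dimensional features rather than the 24-dimensional feature pairs, and the feature values, learning rate, and iteration count are invented:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_lr(features, labels, alpha=0.1, iters=2000):
    """Logistic-regression training by gradient ascent on the log-likelihood:
    w_k <- w_k + (alpha/m) * sum_i (y_i - h_w(q_i)) * q_ik,
    with q_0 = 1 absorbing the bias parameter b."""
    m = len(features)
    q = [[1.0] + list(f) for f in features]   # prepend q0 = 1
    w = [0.0] * len(q[0])                     # weights (w[0] acts as the bias)
    for _ in range(iters):
        preds = [sigmoid(sum(wk * qk for wk, qk in zip(w, qi))) for qi in q]
        for k in range(len(w)):
            grad = sum((labels[i] - preds[i]) * q[i][k] for i in range(m)) / m
            w[k] += alpha * grad
    return w

def predict(w, f):
    # P(y = 1 | q) = 1 / (1 + exp(-(wq + B))); output 1 when it exceeds 0.5
    p = sigmoid(w[0] + sum(wk * fk for wk, fk in zip(w[1:], f)))
    return 1 if p > 0.5 else 0

# Toy feature pairs: positives cluster high, negatives cluster low.
feats = [(2.0, 2.5), (2.2, 2.0), (1.8, 2.2), (0.2, 0.1), (0.0, 0.4), (0.3, 0.2)]
labels = [1, 1, 1, 0, 0, 0]
w = train_lr(feats, labels)
```

In the application phase, the same `predict` step corresponds to computing the probability of y = 1 from the features extracted from the first and second terminals.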
The application process comprises the following steps:
The weight W and bias parameter b obtained from the training process above, together with the feature Q = [q_0, q_1, q_2, …, q_24]^T extracted from the first terminal and the second terminal, form wq + B = w_1q_1 + w_2q_2 + … + w_24q_24 + bq_0, which is input as the parameter of the following formulas to output the probability of y, y ∈ {0,1}, used to judge whether the first terminal posture and the second terminal posture conform to the switching posture of the target application:
where y = 1 indicates that the first terminal posture and the second terminal posture conform to the switching posture, with the corresponding probability calculated as formula (9):
P(y = 1 | q) = 1 / (1 + e^{-(wq + B)})    (9)
y = 0 indicates that the first terminal posture and the second terminal posture do not conform to the switching posture, with the corresponding probability calculated as formula (10):

P(y = 0 | q) = e^{-(wq + B)} / (1 + e^{-(wq + B)})    (10)
The linear output is thus converted through the sigmoid function h(z) = 1 / (1 + e^{-z}), approximating the log-odds of the true mark by the prediction of the linear regression model; when the probability of y = 1 is greater than 0.5, the output is 1, and when it is less than 0.5, the output is 0.
By comparing the probabilities of y being 1 and y being 0, the categories to which the first terminal pose and the second terminal pose belong, i.e., whether the first terminal pose and the second terminal pose correspond to the switching pose, can be output.
When the application scenario is that the call application is switched between the smart watch and the mobile phone, the actual training process can be as shown in fig. 8, and includes the following steps:
step S801, performing analog-to-digital conversion on the acquired raw movement data of the smart watch and the mobile phone and sampling it, each sampling yielding a pair of sample data (comprising sample data corresponding to the smart watch and sample data corresponding to the mobile phone) and the sample mark corresponding to that sample data;
the main sources of the acquired raw movement data may include data from the three-axis acceleration sensors, gyroscope sensors, and the like of the mobile phone and the smart watch, with a sampling frequency of 125 Hz; it should be noted that the three-axis acceleration sensor captures the movement behavior of the terminal but cannot accurately detect changes in the angle of the object, while the gyroscope sensor detects changes in orientation but cannot measure the intensity of the object's movement, so the data of the two sensors need to be combined;
step S803, positioning the moving end point through an algorithm, determining a data starting point and a data ending point corresponding to the movement executed by the first terminal or the second terminal when the first terminal gesture or the second terminal gesture is executed in the sample data, and updating the sample data;
step S805, filtering the sample data to remove noise in the sample data;
step S807, normalizing the sample data to avoid individual difference of the data;
step S809, resampling the sample data; in some embodiments, the terminal may move too fast during the acquisition of the raw movement data. In this case, if the sampling frequency is too low, sample data may be missing or insufficient, so the sample data needs to be resampled. For example, sampling the data stream of the smart watch's acceleration sensor yields a time series a_i(k), i ∈ {x, y, z}; a_i(k) may then undergo sampling frequency conversion, converting the original 125 Hz sampling frequency to 100 Hz. In addition, after data resampling, the movement endpoint localization, filtering, and normalization processes can be executed again;
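The sampling frequency conversion in step S809 can be sketched with linear interpolation. This is an illustrative assumption; a real implementation would more likely use a polyphase or band-limited resampler:

```python
def resample(signal, fs_in, fs_out):
    """Convert a time series a_i(k) from fs_in to fs_out by linear
    interpolation (e.g. from the original 125 Hz to 100 Hz)."""
    duration = (len(signal) - 1) / fs_in
    n_out = int(duration * fs_out) + 1
    out = []
    for k in range(n_out):
        t = k / fs_out * fs_in                 # position in input-sample units
        i = min(int(t), len(signal) - 2)
        frac = t - i
        out.append(signal[i] * (1 - frac) + signal[i + 1] * frac)
    return out

# One second of a linear ramp sampled at 125 Hz -> 100 Hz
x125 = [k / 125 for k in range(126)]           # value equals time in seconds
x100 = resample(x125, 125, 100)
```

After resampling, the endpoint localization, filtering, and normalization steps above would be run again on the converted series.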
step S811, extracting features of the sample data;
step S813, training an LR (logistic regression) model based on the extracted features;
step S815, outputting the posture recognition model.
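Steps S813–S815 can be illustrated with a minimal logistic-regression (LR) trainer written in pure Python; the real model's features, hyperparameters, and training procedure are not disclosed, so everything below is an assumption:

```python
import math

def train_lr(features, labels, lr=0.5, epochs=200):
    """Fit weights w and bias b of an LR model by batch gradient descent."""
    n_dim = len(features[0])
    w, b = [0.0] * n_dim, 0.0
    for _ in range(epochs):
        gw, gb = [0.0] * n_dim, 0.0
        for x, y in zip(features, labels):
            p = 1.0 / (1.0 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b)))
            err = p - y                         # gradient of the log-loss
            gw = [g + err * xi for g, xi in zip(gw, x)]
            gb += err
        w = [wi - lr * g / len(labels) for wi, g in zip(w, gw)]
        b -= lr * gb / len(labels)
    return w, b

def predict(model, x):
    """Probability that the feature pair corresponds to a switching posture."""
    w, b = model
    return 1.0 / (1.0 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b)))

# Toy positive/negative feature pairs (watch feature, phone feature).
X = [[0.9, 0.8], [0.8, 0.9], [0.1, 0.2], [0.2, 0.1]]
y = [1, 1, 0, 0]
model = train_lr(X, y)
print(predict(model, [0.85, 0.85]) > 0.5)   # True: classified as positive
```

An LR model is a reasonable fit for the on-terminal setting described later, since both training and inference are cheap in memory and computation.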
When the application scenario is that the call application is switched between the smart watch and the mobile phone, an actual application process may be as shown in fig. 9, and includes the following steps:
step S901, judging whether the smart watch or the mobile phone is being used for the first time, or whether the smart watch and the mobile phone have been reset;
it should be noted that, to ensure that terminal switching can be achieved, it is generally required that the systems of the smart watch and the mobile phone are compatible with each other (for example, both run the Android system), that a communication connection (for example, a Bluetooth connection) exists between them, and that the user has set the mobile phone or the smart watch to allow the application running terminal to be switched between the two. Meanwhile, to ensure that the target application can be accurately identified, detection of whether the mobile phone and the smart watch are in a call mode needs to be enabled by default.
step S903, when the smart watch or the mobile phone is used for the first time or has been reset, directly loading the original posture recognition model stored in the smart watch or the mobile phone;
it should be noted that, at this time, there is no usage data of the user, so the gesture recognition model has no personalization and is the original gesture recognition model stored by the developer;
step S905, loading the optimized gesture recognition models stored in the smart watch and the mobile phone when the mobile phone or the smart watch is not being used for the first time and neither has been reset;
at this time, because usage data of the user exists, the gesture recognition model can be further optimized based on negative sample data obtained from the user's usage data, yielding a personalized, optimized gesture recognition model.
Step S907, obtaining change thresholds corresponding to the smart watch and the mobile phone respectively;
the first change threshold corresponding to the smart watch and the second change threshold corresponding to the mobile phone are set in advance and stored locally, so they can be read directly from local storage. In addition, in the current application scenario, the first change threshold and the second change threshold may be set to the same value;
step S909, judging whether the smart watch or the mobile phone is in a call state, and whether a Bluetooth connection has been established between the smart watch and the mobile phone;
step S911, when the smart watch or the mobile phone is in a call state and a Bluetooth connection has been established between them, judging whether the first variation corresponding to the smart watch and the second variation corresponding to the mobile phone are both greater than or equal to the variation threshold;
step S913, when the variations generated by the smart watch and the mobile phone are both greater than or equal to the variation threshold, extracting the smart watch data segment and the mobile phone data segment corresponding to the smart watch and the mobile phone, and performing gesture recognition on them based on the previously loaded original or optimized gesture recognition model to determine the first terminal posture and the second terminal posture;
step S915, judging whether the first terminal posture and the second terminal posture conform to the switching posture corresponding to the call application;
step S917, when the first terminal posture and the second terminal posture conform to the switching posture corresponding to the call application, extracting the current call number and switching the call application to the other terminal;
the other terminal is whichever of the smart watch and the mobile phone is not currently running the call application. For example, when a call is currently answered through the smart watch, the other terminal is the mobile phone; when a call is currently answered through the mobile phone, the other terminal is the smart watch;
step S919, acquiring negative sample data from the user's usage data, and further optimizing the locally stored gesture recognition model to obtain a more personalized gesture recognition model.
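The decision chain of steps S909–S917 can be condensed into a sketch; all field and function names are hypothetical, and the posture-recognition result is substituted by a precomputed flag:

```python
def should_switch(state):
    """Return True when every precondition for switching the call is met."""
    if not (state["in_call"] and state["bluetooth_connected"]):
        return False
    if state["watch_variation"] < state["threshold"]:
        return False
    if state["phone_variation"] < state["threshold"]:
        return False
    # In the real flow, the loaded posture recognition model would classify
    # the extracted data segments here; we substitute a precomputed flag.
    return state["postures_match_switch"]

state = {
    "in_call": True,
    "bluetooth_connected": True,
    "watch_variation": 2.4,
    "phone_variation": 1.9,
    "threshold": 1.5,
    "postures_match_switch": True,
}
print(should_switch(state))  # True -> extract the call number and hand over
```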
In summary, the exemplary embodiment provides a way of intelligently switching terminals by recognizing a first terminal posture and a second terminal posture. Through the technical scheme of the present disclosure, recognition of multi-terminal data can reduce unnecessary operations for the user, making terminal switching more convenient and simplifying the steps that would otherwise require manual switching; meanwhile, some cases of manual misoperation during the process are avoided. In addition, the technical scheme of the disclosure can improve the intelligence of terminal switching and the user experience.
In addition, by using lightweight methods such as machine learning, the technical scheme avoids the insufficient memory, increased power consumption, slow response time, and similar problems that would be caused by running computationally heavy methods on the terminal; meanwhile, the gesture recognition model can be optimized according to the user's usage data, making it more personalized and better suited to each application scenario or user.
It is noted that the above-mentioned figures are merely schematic illustrations of processes involved in methods according to exemplary embodiments of the present disclosure, and are not intended to be limiting. It will be readily appreciated that the processes illustrated in the above figures are not intended to indicate or limit the temporal order of the processes. In addition, it is also readily understood that these processes may be performed synchronously or asynchronously, e.g., in multiple modules.
Further, referring to fig. 10, in the embodiment of the present example, an application running terminal switching apparatus 1000 is further provided, which is applied to a first terminal and a second terminal that establish a communication connection; including a data acquisition module 1010, a pose determination module 1020, and a terminal switching module 1030. Wherein:
the data obtaining module 1010 may be configured to obtain first mobile data of a first terminal and second mobile data of a second terminal corresponding to the first mobile data in response to the first terminal running a target application.
The posture determination module 1020 may be configured to determine a first terminal posture from the first movement data and a second terminal posture from the second movement data.
The terminal switching module 1030 may be configured to switch the running terminal of the target application to the second terminal based on the communication connection when the first terminal posture and the second terminal posture conform to the switching posture corresponding to the target application.
In an exemplary embodiment, the posture determination module 1020 may be configured to sample the first movement data and the second movement data respectively to obtain a first sampling segment and a second sampling segment when the first movement data satisfies a first change condition and the second movement data satisfies a second change condition; and to determine the first terminal posture according to the first sampling segment and the second terminal posture according to the second sampling segment.
In an exemplary embodiment, the posture determination module 1020 may be configured to calculate a first variation at a time t based on the first movement data and a second variation at the time t based on the second movement data; when the first variation is greater than or equal to a first variation threshold and the second variation is greater than or equal to a second variation threshold, acquire a first data segment after the time t in the first movement data and a second data segment after the time t in the second movement data; and sample the first data segment and the second data segment respectively to obtain a first sampling segment and a second sampling segment.
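The variation-and-threshold logic described for this module can be sketched as follows; the publication does not define how the variation at time t is computed, so the absolute change in acceleration magnitude is used here as an assumption:

```python
import math

def variation_at(samples, t):
    """Absolute change of acceleration magnitude between samples t-1 and t
    (one hypothetical definition of the 'variation' at time t)."""
    mag = lambda s: math.sqrt(sum(v * v for v in s))
    return abs(mag(samples[t]) - mag(samples[t - 1]))

def segment_after(samples, t, length):
    """Extract the data segment of `length` samples starting at time t."""
    return samples[t:t + length]

data = [(0, 0, 9.8), (0, 0, 9.8), (4, 3, 9.8), (4, 3, 9.8)]
t = 2
if variation_at(data, t) >= 1.0:   # compare with the variation threshold
    seg = segment_after(data, t, 2)
    print(len(seg))                # -> 2
```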
In an exemplary embodiment, the data acquisition module 1010 may be configured to filter and denoise the first data segment and the second data segment, respectively.
In an exemplary embodiment, the data acquisition module 1010 may be configured to normalize the first data segment and the second data segment.
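The filtering and normalization performed by the data acquisition module can be sketched with a moving-average filter and z-score scaling; the concrete filter and normalization scheme are not named in the publication, so both are assumptions:

```python
import statistics

def moving_average(segment, window=3):
    """Simple low-pass filter: average each sample with its neighbours."""
    half = window // 2
    return [
        statistics.fmean(segment[max(0, i - half):i + half + 1])
        for i in range(len(segment))
    ]

def z_normalize(segment):
    """Zero-mean, unit-variance scaling to suppress individual differences."""
    mu = statistics.fmean(segment)
    sigma = statistics.pstdev(segment) or 1.0   # avoid division by zero
    return [(v - mu) / sigma for v in segment]

raw = [9.8, 12.1, 9.7, 15.3, 9.9]
print(z_normalize(moving_average(raw)))
```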
In an exemplary embodiment, the posture determination module 1020 may be configured to perform posture recognition on the first sampling segment and the second sampling segment based on a posture recognition model to determine the first terminal posture and the second terminal posture.
In an exemplary embodiment, the posture determination module 1020 may be configured to extract a first feature of the first sampling segment and a second feature of the second sampling segment, and to input the first feature and the second feature into the posture recognition model for posture recognition.
In an exemplary embodiment, the posture determination module 1020 may be configured to collect sample data and perform sample marking on the sample data, the sample data comprising first sample data collected by the first terminal and corresponding second sample data collected by the second terminal, and the sample marks comprising positive sample marks and negative sample marks; to acquire positive sample data and negative sample data in a preset sampling proportion from the sample data based on the sample marks; to extract a positive sample feature pair of the first sample data and the second sample data in the positive sample data, and a negative sample feature pair of the first sample data and the second sample data in the negative sample data; and to train a preset model based on the positive and negative sample feature pairs to obtain the posture recognition model.
In an exemplary embodiment, the data obtaining module 1010 may be configured to perform mobile endpoint location on sample data to determine a data starting point and a data ending point of the terminal movement in the sample data; and deleting the data outside the data starting point and the data ending point in the sample data to obtain the updated sample data.
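The mobile endpoint location and trimming described above can be sketched with a simple threshold scan over the signal magnitude; the actual endpoint-location algorithm is not disclosed, so the criterion below is an assumption:

```python
def locate_endpoints(magnitudes, threshold):
    """Find the first and last index whose magnitude exceeds the threshold."""
    active = [i for i, m in enumerate(magnitudes) if m >= threshold]
    if not active:
        return None          # no movement detected in this sample window
    return active[0], active[-1]

def trim(samples, endpoints):
    """Delete data outside the movement's start and end points."""
    start, end = endpoints
    return samples[start:end + 1]

mags = [0.1, 0.2, 2.5, 3.1, 2.8, 0.3, 0.1]
ep = locate_endpoints(mags, threshold=1.0)
print(trim(mags, ep))   # -> [2.5, 3.1, 2.8]
```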
The specific details of each module in the above apparatus have been described in detail in the method section, and details that are not disclosed may refer to the method section, and thus are not described again.
As will be appreciated by one skilled in the art, aspects of the present disclosure may be embodied as a system, method, or program product. Accordingly, various aspects of the disclosure may be embodied in the form of: an entirely hardware embodiment, an entirely software embodiment (including firmware, microcode, etc.), or an embodiment combining hardware and software aspects that may all generally be referred to herein as a "circuit," "module," or "system."
Exemplary embodiments of the present disclosure also provide a computer-readable storage medium having stored thereon a program product capable of implementing the above-described method of the present specification. In some possible embodiments, various aspects of the disclosure may also be implemented in the form of a program product including program code for causing a terminal device to perform the steps according to various exemplary embodiments of the disclosure described in the above-mentioned "exemplary methods" section of this specification, when the program product is run on the terminal device, for example, any one or more of the steps in fig. 3 to 9 may be performed.
It should be noted that the computer readable medium shown in the present disclosure may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
Furthermore, program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java or C++ and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., through the internet using an internet service provider).
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is to be limited only by the terms of the appended claims.

Claims (12)

1. An application operation terminal switching method is characterized in that the method is applied to a first terminal and a second terminal which establish communication connection; the method comprises the following steps:
responding to the running of a target application by the first terminal, and acquiring first mobile data of the first terminal and second mobile data of the second terminal corresponding to the first mobile data;
determining a first terminal posture according to the first movement data, and determining a second terminal posture according to the second movement data;
and when the first terminal posture and the second terminal posture accord with the switching posture corresponding to the target application, switching the running terminal of the target application to the second terminal based on the communication connection.
2. The method of claim 1, wherein determining a first terminal posture from the first movement data and a second terminal posture from the second movement data comprises:
when the first mobile data meet a first change condition and the second mobile data meet a second change condition, respectively sampling the first mobile data and the second mobile data to obtain a first sampling section and a second sampling section;
and determining the first terminal attitude according to the first sampling segment, and determining the second terminal attitude according to the second sampling segment.
3. The method of claim 2, wherein the first change condition comprises a first variation threshold, and the second change condition comprises a second variation threshold;
when the first mobile data meets a first change condition and the second mobile data meets a second change condition, respectively sampling the first mobile data and the second mobile data to obtain a first sampling section and a second sampling section, including:
calculating a first variation at a tth time based on the first movement data, and calculating a second variation at the tth time based on the second movement data;
when the first variation is larger than or equal to a first variation threshold and the second variation is larger than or equal to a second variation threshold, acquiring a first data segment after the t moment in the first movement data and acquiring a second data segment after the t moment in the second movement data;
and respectively sampling the first data segment and the second data segment to obtain a first sampling segment and a second sampling segment.
4. The method of claim 3, wherein prior to said separately sampling said first data segment and said second data segment, said method further comprises:
and respectively carrying out filtering and denoising on the first data segment and the second data segment.
5. The method of claim 3, wherein prior to said separately sampling said first data segment and said second data segment, said method further comprises:
and normalizing the first data segment and the second data segment.
6. The method of claim 2, wherein determining the first terminal posture according to the first sampling segment and determining the second terminal posture according to the second sampling segment comprises:
performing gesture recognition on the first sampling segment and the second sampling segment based on a gesture recognition model to determine the first terminal gesture and the second terminal gesture.
7. The method of claim 6, wherein the gesture recognition of the first sampling segment and the second sampling segment based on a gesture recognition model comprises:
extracting a first feature of the first sampling segment and a second feature of the second sampling segment;
and inputting the first characteristic and the second characteristic into the gesture recognition model for gesture recognition.
8. The method of claim 6, further comprising:
collecting sample data, and carrying out sample marking on the sample data; the sample data comprises first sample data acquired by the first terminal and second sample data acquired by the second terminal and corresponding to the first sample data; the sample markers comprise positive sample markers and negative sample markers;
acquiring positive sample data and negative sample data of a preset sampling proportion in the sample data based on the sample mark;
extracting a positive sample feature pair of first sample data and second sample data in the positive sample data, and extracting a negative sample feature pair of the first sample data and the second sample data in the negative sample data;
and training a preset model based on the positive sample characteristic pair and the negative sample characteristic pair to obtain a posture recognition model.
9. The method of claim 8, further comprising:
carrying out mobile end point positioning on the sample data to determine a data starting point and a data ending point of terminal movement in the sample data;
and deleting the data outside the data starting point and the data ending point in the sample data to obtain the updated sample data.
10. An application operation terminal switching device is characterized in that the device is applied to a first terminal and a second terminal which are used for establishing communication connection; the device comprises:
the data acquisition module is used for responding to the running of a target application of the first terminal, and acquiring first mobile data of the first terminal and second mobile data of the second terminal corresponding to the first mobile data;
the attitude determination module is used for determining a first terminal attitude according to the first mobile data and determining a second terminal attitude according to the second mobile data;
and the terminal switching module is used for switching the running terminal of the target application to the second terminal based on the communication connection when the first terminal posture and the second terminal posture accord with the switching posture corresponding to the target application.
11. A computer-readable medium, on which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1 to 9.
12. An electronic device, comprising:
a processor; and
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform the method of any of claims 1-9 via execution of the executable instructions.
CN202210259749.2A 2022-03-16 2022-03-16 Application operation terminal switching method and device, medium and electronic equipment Pending CN114722911A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210259749.2A CN114722911A (en) 2022-03-16 2022-03-16 Application operation terminal switching method and device, medium and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210259749.2A CN114722911A (en) 2022-03-16 2022-03-16 Application operation terminal switching method and device, medium and electronic equipment

Publications (1)

Publication Number Publication Date
CN114722911A true CN114722911A (en) 2022-07-08

Family

ID=82238604

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210259749.2A Pending CN114722911A (en) 2022-03-16 2022-03-16 Application operation terminal switching method and device, medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN114722911A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116755567A (en) * 2023-08-21 2023-09-15 北京中科心研科技有限公司 Equipment interaction method and system based on gesture data, electronic equipment and medium


Similar Documents

Publication Publication Date Title
WO2021082749A1 (en) Action identification method based on artificial intelligence and related apparatus
CN109299315B (en) Multimedia resource classification method and device, computer equipment and storage medium
CN111476783B (en) Image processing method, device and equipment based on artificial intelligence and storage medium
WO2022095674A1 (en) Method and apparatus for operating mobile device
CN115699082A (en) Defect detection method and device, storage medium and electronic equipment
KR20200094732A (en) Method and system for classifying time series data
CN113744286A (en) Virtual hair generation method and device, computer readable medium and electronic equipment
CN112860169A (en) Interaction method and device, computer readable medium and electronic equipment
CN111589138B (en) Action prediction method, device, equipment and storage medium
CN114722911A (en) Application operation terminal switching method and device, medium and electronic equipment
CN111968641A (en) Voice assistant wake-up control method and device, storage medium and electronic equipment
CN111046742A (en) Eye behavior detection method and device and storage medium
CN113766127A (en) Control method and device of mobile terminal, storage medium and electronic equipment
CN113902636A (en) Image deblurring method and device, computer readable medium and electronic equipment
CN113284206A (en) Information acquisition method and device, computer readable storage medium and electronic equipment
CN113821658A (en) Method, device and equipment for training encoder and storage medium
CN111191018B (en) Response method and device of dialogue system, electronic equipment and intelligent equipment
CN112036307A (en) Image processing method and device, electronic equipment and storage medium
CN114662606A (en) Behavior recognition method and apparatus, computer readable medium and electronic device
CN110321829A (en) A kind of face identification method and device, electronic equipment and storage medium
CN111770484B (en) Analog card switching method and device, computer readable medium and mobile terminal
CN112988984B (en) Feature acquisition method and device, computer equipment and storage medium
CN111310701B (en) Gesture recognition method, device, equipment and storage medium
CN114822543A (en) Lip language identification method, sample labeling method, model training method, device, equipment and storage medium
CN113362260A (en) Image optimization method and device, storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination