CN112383664B - Device control method, first terminal device, second terminal device and computer readable storage medium

Device control method, first terminal device, second terminal device and computer readable storage medium

Info

Publication number
CN112383664B
CN112383664B (application CN202011103205.4A)
Authority
CN
China
Prior art keywords
terminal device
information
event
screen
position information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011103205.4A
Other languages
Chinese (zh)
Other versions
CN112383664A (en)
Inventor
胡昶
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd
Priority to CN202011103205.4A
Publication of CN112383664A
Application granted
Publication of CN112383664B

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/0483Interaction with page-structured environments, e.g. book metaphor
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04842Selection of displayed objects or displayed text elements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/0485Scrolling or panning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F3/04883Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/451Execution arrangements for user interfaces
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/436Interfacing a local distribution network, e.g. communicating with another STB or one or more peripheral devices inside the home
    • H04N21/4363Adapting the video or multiplex stream to a specific local network, e.g. a IEEE 1394 or Bluetooth® network
    • H04N21/43637Adapting the video or multiplex stream to a specific local network, e.g. a IEEE 1394 or Bluetooth® network involving a wireless protocol, e.g. Bluetooth, RF or wireless LAN [IEEE 802.11]

Abstract

The application discloses a device control method, a first terminal device and a second terminal device. The method includes the following steps: determining position information of the second terminal device and posture information of the second terminal device; determining position information of the first terminal device and posture information of the first terminal device; determining position information of a selection point based on the position information of the first terminal device, the posture information of the first terminal device, the position information of the second terminal device and the posture information of the second terminal device, wherein the selection point is a point selected by the first terminal device on the plane where the screen of the second terminal device is located; detecting a first operation event corresponding to the selection point; and sending first information to the second terminal device, wherein the first information includes the position information of the selection point and the first operation event corresponding to the selection point. Implementing the method improves the control efficiency of the second terminal device.

Description

Device control method, first terminal device, second terminal device and computer readable storage medium
Technical Field
The present application relates to the field of communications technologies, and in particular, to a device control method, a first terminal device, and a second terminal device.
Background
In practical applications, a user often needs to control a terminal device (such as a television, a computer, or a projection device). Taking a television as an example, there are typically two ways to control it:
In the first way, a conventional remote control is used: the user presses the up, down, left and right keys of the remote control to move a cursor to the target button, and then presses the confirm key of the remote control to select the button where the cursor is located. This operation is inefficient and slow, and degrades the user experience to some extent.
In the second way, the operation interface of the television is projected onto a terminal device such as a mobile phone, and the large screen is controlled through interactive commands. However, after the operation interface of the television is projected onto the mobile phone, the operation buttons in the interface become very small, which hinders quick operation by the user. Therefore, both of the above ways provide low control efficiency for the terminal device.
Disclosure of Invention
The application provides a device control method, a first terminal device and a second terminal device, which help to improve the control efficiency of the second terminal device.
In a first aspect, an embodiment of the present application provides a device control method applied to a first terminal device. The method includes: determining position information of the second terminal device and posture information of the second terminal device; determining position information of the first terminal device and posture information of the first terminal device; determining position information of a selection point based on the position information of the first terminal device, the posture information of the first terminal device, the position information of the second terminal device and the posture information of the second terminal device, wherein the selection point is a point selected by the first terminal device on the plane where the screen of the second terminal device is located; detecting a first operation event corresponding to the selection point; and sending first information to the second terminal device, wherein the first information includes the position information of the selection point and the first operation event corresponding to the selection point.
In the method described in the first aspect, the position information of the selection point on the plane where the screen of the second terminal device is located is determined based on the position information and posture information of the first terminal device and the position information and posture information of the second terminal device. The user can therefore change the position of the selection point on the screen of the second terminal device simply by moving the hand-held first terminal device. The first terminal device can be regarded as a mouse: the user can quickly select a position on the screen of the second terminal device by moving the first terminal device, without moving the selected position by pressing the up, down, left and right keys as on a conventional remote control. Therefore, the method described in the first aspect helps to improve the control efficiency of the second terminal device.
In one possible implementation, the first terminal device may, with a preset time as the period, determine its own position information and posture information, determine the position information of the selection point based on the position information and posture information of the first terminal device and the position information and posture information of the second terminal device, detect the first operation event corresponding to the selection point, and send the first information to the second terminal device. For example, the preset time is 33 ms or 15 ms. The user can thus select a position on the screen of the second terminal device in real time.
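For illustration only, the following is a minimal sketch of such a periodic reporting loop. It assumes a Python runtime and hypothetical helper callbacks (get_own_pose, compute_selection_point, detect_operation_event, send_to_second_device); the message layout and the 33 ms period are likewise assumptions rather than details taken from the patent.

    import time
    from dataclasses import dataclass
    from enum import Enum

    class OperationEvent(Enum):
        NO_EVENT = 0
        PRESS_DOWN = 1
        LIFT_UP = 2

    @dataclass
    class FirstInformation:
        selection_point: tuple      # (x, y) of the selection point in the screen's plane coordinate system
        event: OperationEvent       # first operation event detected in this period
        timestamp_ms: int           # timestamp corresponding to the first operation event

    PERIOD_S = 0.033                # preset time used as the period, e.g. 33 ms (or 0.015 for 15 ms)

    def reporting_loop(get_own_pose, second_device_pose, compute_selection_point,
                       detect_operation_event, send_to_second_device):
        """Periodically determine the selection point and send it, with the current event, to the second device."""
        while True:
            position, posture = get_own_pose()                     # position and posture of the first terminal device
            point = compute_selection_point(position, posture,
                                            *second_device_pose)   # point on the plane of the second device's screen
            event = detect_operation_event()                       # press-down, lift-up, or no event
            send_to_second_device(FirstInformation(point, event, int(time.time() * 1000)))
            time.sleep(PERIOD_S)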
In a possible implementation, the first operation event is a press-down event, a lift-up event, or no event, and the first information further includes a timestamp corresponding to the first operation event. A plurality of such basic operation events can be flexibly combined into gesture events such as tap events, long-press events, or slide events. Therefore, this possible implementation helps to increase the diversity of gestures available for controlling the second terminal device.
In one possible implementation manner, the position information of the first terminal device includes position information of a camera of the first terminal device in a world coordinate system, and the posture information of the first terminal device includes a normal vector of a plane where the camera of the first terminal device is located in the world coordinate system. Determining the position information of the selection point based on the position information of the first terminal device and the posture information of the first terminal device is advantageous for improving the accuracy of the determined position information of the selection point.
In one possible implementation, the position information of the second terminal device includes center position information of a screen of the second terminal device in the world coordinate system and/or position information of one or more vertices of the screen of the second terminal device, and the posture information of the second terminal device includes a normal vector of the screen of the second terminal device in the world coordinate system. Determining the position information of the selection point based on the position information of the second terminal device and the posture information of the second terminal device is advantageous for improving the accuracy of the determined position information of the selection point.
In a possible implementation, the position information of the second terminal device includes center position information of the screen of the second terminal device in the world coordinate system and position information of one or more vertices of the screen of the second terminal device in the world coordinate system. In this case, the position information of the selection point may be determined as follows: the first terminal device determines the position information of the selection point in the world coordinate system based on the position information of the camera of the first terminal device in the world coordinate system, the normal vector, in the world coordinate system, of the plane where the camera of the first terminal device is located, the center position information of the screen of the second terminal device in the world coordinate system, and the normal vector of the screen of the second terminal device in the world coordinate system; the first terminal device then determines the position information of the selection point in a plane coordinate system based on the position information of the one or more vertices of the screen of the second terminal device in the world coordinate system and the position information of the selection point in the world coordinate system. The position information of the selection point included in the first information is specifically the position information of the selection point in the plane coordinate system. Based on this possible implementation, the position information of the selection point can be determined accurately.
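The patent does not spell out the geometry of this step, but one natural reading is a ray-plane intersection followed by a change to the screen's own two-dimensional (plane) coordinate system. The sketch below follows that reading; every function and parameter name is a hypothetical placeholder, not terminology from the patent.

    import numpy as np

    def selection_point_on_screen(cam_pos, cam_normal, screen_center, screen_normal,
                                  top_left, top_right, bottom_left):
        """Return the selection point in the plane coordinate system of the second device's screen.

        cam_pos       : camera position of the first terminal device (world coordinates)
        cam_normal    : normal vector of the plane where the camera lies, i.e. the pointing direction
        screen_center : center of the second terminal device's screen (world coordinates)
        screen_normal : normal vector of the screen of the second terminal device (world coordinates)
        top_left, top_right, bottom_left : vertices of the screen (world coordinates)
        Returns ((u, v), inside) with u, v normalized to [0, 1] across the screen, or None if the
        pointing direction does not hit the screen plane in front of the device.
        """
        p0, d = np.asarray(cam_pos, float), np.asarray(cam_normal, float)
        c, n = np.asarray(screen_center, float), np.asarray(screen_normal, float)

        denom = np.dot(n, d)
        if abs(denom) < 1e-9:
            return None                       # pointing direction is parallel to the screen plane
        t = np.dot(n, c - p0) / denom
        if t <= 0:
            return None                       # the screen plane is behind the first terminal device
        p_world = p0 + t * d                  # selection point in world coordinates

        # Express the intersection in a 2-D coordinate system spanned by the screen edges.
        tl, tr, bl = (np.asarray(vert, float) for vert in (top_left, top_right, bottom_left))
        u_axis, v_axis = tr - tl, bl - tl     # along the top edge and the left edge of the screen
        u = np.dot(p_world - tl, u_axis) / np.dot(u_axis, u_axis)
        v = np.dot(p_world - tl, v_axis) / np.dot(v_axis, v_axis)

        inside = 0.0 <= u <= 1.0 and 0.0 <= v <= 1.0   # whether the point falls within the screen
        return (u, v), inside

The normalized (u, v) pair can then be scaled by the screen resolution of the second terminal device to obtain pixel coordinates, and the inside flag corresponds to the later check of whether the selection point lies within the screen.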
In a possible implementation, the first terminal device may further start the camera and display a camera preview interface, where the camera preview interface includes prompt information prompting the user to move the first terminal device so that the second terminal device is displayed in the camera preview interface. The first terminal device then determines the position information and posture information of the second terminal device based on an image, collected by the camera, that contains the second terminal device. This possible implementation helps the first terminal device successfully detect the position information and posture information of the second terminal device.
In a possible implementation, the first terminal device may further obtain a picture of a target area of the screen of the second terminal device, where the picture of the target area contains the function button where the selection point is located; the first terminal device may also display the picture of the target area. Optionally, the size of the function button contained in the picture of the target area is smaller than a preset size. In this possible implementation, displaying the picture of the target area on the first terminal device helps the user see more clearly which function button of the second terminal device the selection point has selected.
In a possible implementation, the first terminal device may further receive a text information acquisition request sent by the second terminal device; the first terminal device may then display an information input keyboard, obtain text information entered by the user through the information input keyboard, and send the text information to the second terminal device. Based on this possible implementation, text information can be input on the second terminal device through the keyboard of the first terminal device, which improves the convenience of inputting text information on the second terminal device.
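A minimal sketch of how the first terminal device might handle such a request is shown below; the message field names and the show_input_keyboard / send_to_second_device callbacks are assumptions used only for illustration.

    def handle_message_from_second_device(message, show_input_keyboard, send_to_second_device):
        """Handle a text information acquisition request sent by the second terminal device."""
        if message.get("type") == "text_info_request":
            text = show_input_keyboard()          # display the information input keyboard and wait for user input
            send_to_second_device({"type": "text_info", "text": text})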
In a possible implementation, after determining the position information of the selection point, the first terminal device may determine whether the selection point is within the screen of the second terminal device. If the selection point is not within the screen of the second terminal device, the user is prompted that the selection point is outside the screen of the second terminal device. Based on this possible implementation, the validity of the selection point sent to the second terminal device is ensured.
In a second aspect, an embodiment of the present application provides a device control method applied to a second terminal device. The method may include: receiving first information sent by a first terminal device, wherein the first information includes position information of a selection point and a first operation event corresponding to the selection point, and the selection point is a point selected by the first terminal device on the plane where the screen of the second terminal device is located; and performing a target operation based on the first information. The method described in the second aspect helps to improve the control efficiency of the second terminal device.
In a possible implementation manner, the first operation event is a press-down event, a lift-up event, or no event, and the first information further includes a timestamp corresponding to the first operation event. Based on the possible implementation mode, the diversity of gestures for controlling the second terminal device is improved.
In a possible implementation, the first information sent by the first terminal device is received with a preset time as the period. The target operation is then performed based on the first information as follows: determining a second operation event input by the user at a plurality of selection points based on a plurality of first operation events received in a plurality of consecutive periods and the timestamps corresponding to the plurality of first operation events; and performing the target operation based on the second operation event and the position information of the plurality of selection points. Because the plurality of first operation events can be flexibly combined into the second operation event, this possible implementation helps to increase the diversity of gestures for controlling the second terminal device.
In one possible implementation, the second operation event is a click event, a sliding touch event, or a long-press event.
In a possible implementation, the target operation may be performed based on the second operation event and the position information of the plurality of selection points corresponding to the plurality of first operation events as follows: when an input box is selected, if the second operation event is a sliding touch event, the trajectory of the sliding touch event is determined based on the position information of the plurality of selection points corresponding to the plurality of first operation events; text information is then determined based on the trajectory of the sliding touch event and input into the input box. Based on this possible implementation, the user only needs to press on the first terminal device and move the first terminal device while keeping it pressed to input text information into the input box, which makes it convenient to input text information into the input box of the second terminal device.
In a possible implementation manner, after the second terminal device receives the first information sent by the first terminal device, it may also be determined whether the selection point is in the screen of the second terminal device based on the location information of the selection point; if the position of the selection point is not in the screen of the second terminal device, prompting the user to move the second terminal device to enable the position of the selection point to be in the screen of the second terminal device; and if the position of the selection point is in the screen of the second terminal equipment, executing the step of executing the target operation based on the first information. Based on the possible implementation mode, when the position of the selection point is not in the screen of the second terminal device, the user can be prompted in time, and user experience is improved.
In a possible implementation manner, if the position of the selection point is within the screen of the second terminal device, the position of the selection point is highlighted in the screen of the second terminal device. Based on the possible implementation, it is beneficial for the user to observe the position of the selection point, so that the user can make a control operation on the selection point.
In a possible implementation, if the target operation is selecting an input box, a text information input instruction is sent to the first terminal device, and the text information sent by the first terminal device is then received. Based on this possible implementation, the convenience of inputting text information on the second terminal device is improved.
In a possible implementation, the second operation event input by the user at the plurality of selection points may be determined from the plurality of first operation events received in a plurality of consecutive periods and their corresponding timestamps as follows: if three first operation events received in three consecutive periods are, in order of timestamp from earliest to latest, a no-event or lift-up event, a press-down event, and a lift-up event, the second terminal device determines that the user has input a click event at the selection points corresponding to the press-down event and the lift-up event. Based on this possible implementation, a click event can be determined accurately.
In a possible implementation, the second operation event input by the user at the plurality of selection points may also be determined as follows: if the plurality of first operation events received in a plurality of consecutive periods are, in order of timestamp from earliest to latest, a plurality of press-down events followed by a lift-up event, the second terminal device determines that the user has input a continuous touch event at the selection points corresponding to the plurality of first operation events. If the position information of the selection points corresponding to the plurality of first operation events differs, or the distance between any two of these selection points is larger than a preset value, the continuous touch event is a sliding touch event; if the position information of the selection points corresponding to the plurality of first operation events is the same, or the distance between any two of these selection points is smaller than the preset value, the continuous touch event is a long-press event. Based on this possible implementation, a sliding touch event or a long-press event can be determined accurately.
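As a rough illustration of how the two rules above can be combined on the second terminal device, the following sketch classifies the first operation events received in consecutive periods into a click, long-press, or sliding touch event. The MOVE_THRESHOLD constant stands in for the preset distance value mentioned above and is an assumed number, as are all names.

    from math import hypot

    PRESS, LIFT, NONE = "press_down", "lift_up", "no_event"
    MOVE_THRESHOLD = 20.0     # assumed preset value (pixels) separating a long press from a slide

    def classify_second_operation_event(events):
        """Classify a second operation event from first operation events of consecutive periods.

        `events` is a list of (event_type, timestamp_ms, (x, y)) tuples ordered by timestamp.
        Returns "click", "long_press", "slide", or None if the pattern is not yet complete.
        """
        types = [e[0] for e in events]

        # no-event/lift-up, press-down, lift-up in three consecutive periods -> click event
        if len(types) == 3 and types[0] in (NONE, LIFT) and types[1] == PRESS and types[2] == LIFT:
            return "click"

        # several press-down events followed by one lift-up event -> continuous touch event
        if len(types) >= 3 and all(t == PRESS for t in types[:-1]) and types[-1] == LIFT:
            points = [e[2] for e in events]
            max_dist = max(hypot(px - qx, py - qy)
                           for (px, py) in points for (qx, qy) in points)
            return "slide" if max_dist > MOVE_THRESHOLD else "long_press"

        return None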
In a third aspect, an embodiment of the present application provides a first terminal device, where the first terminal device includes a memory and at least one processor. The memory is coupled to the at least one processor and is configured to store computer program code; the computer program code comprises computer instructions which, when executed by the at least one processor, cause the first terminal device to perform the method described in the first aspect or any possible implementation of the first aspect.
In a fourth aspect, an embodiment of the present application provides a second terminal device, where the second terminal device includes a memory and at least one processor. The memory is coupled to the at least one processor and is configured to store computer program code; the computer program code comprises computer instructions which, when executed by the at least one processor, cause the second terminal device to perform the method described in the second aspect or any possible implementation of the second aspect.
In a fifth aspect, an embodiment of the present application provides a computer storage medium, which includes computer instructions, and when the computer instructions are executed on a first terminal device, the first terminal device is caused to perform the method described in the first aspect or any one of the possible implementation manners in the first aspect.
In a sixth aspect, the present application provides a computer storage medium, which includes computer instructions that, when executed on a second terminal device, cause the second terminal device to perform the method described in the second aspect or any one of the possible implementation manners in the second aspect.
In a seventh aspect, embodiments of the present application provide a computer program product, which when run on a computer, causes the computer to execute the method described in the first aspect or any one of the possible implementation manners under the first aspect.
In an eighth aspect, the present application provides a computer program product, which when run on a computer, causes the computer to execute the method as described in the second aspect or any one of the possible implementations under the second aspect.
Drawings
FIG. 1 is a diagram of a system architecture provided by an embodiment of the present application;
fig. 2 is a schematic structural diagram of a first terminal device 100 provided in an embodiment of the present application;
fig. 3 is a block diagram of a software structure of the first terminal device 100 according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of a second terminal device 200 provided in an embodiment of the present application;
fig. 5 is a schematic flowchart of an apparatus control method according to an embodiment of the present application;
fig. 6 is a schematic diagram of position information of a terminal device and posture information of the terminal device provided in an embodiment of the present application;
FIG. 7 is a schematic illustration of a setup interface provided by an embodiment of the present application;
fig. 8 is a schematic diagram of position information of a terminal device, posture information of the terminal device, and a selection point provided in an embodiment of the present application;
FIG. 9 is a schematic diagram of a plane coordinate system n_c according to an embodiment of the present application;
FIG. 10 is a schematic diagram of a text input interface provided by an embodiment of the present application;
fig. 11 is a schematic flowchart of another apparatus control method provided in an embodiment of the present application;
fig. 12 is a schematic diagram of a camera preview interface provided in an embodiment of the present application;
FIG. 13 is a schematic diagram of a pressing area and a prompting area provided by an embodiment of the present application;
FIG. 14 is a schematic diagram of another plane coordinate system n_c provided by an embodiment of the present application.
Detailed Description
The embodiments of the present application will be described below with reference to the drawings.
In order to improve the control efficiency of the second terminal device, the embodiments of the present application provide a device control method, a first terminal device, and a second terminal device. In order to better understand the device control method provided in the embodiment of the present application, a system architecture to which the device control method is applied is described below.
Referring to fig. 1, fig. 1 is a schematic diagram of a system architecture according to an embodiment of the present application. The system architecture 10 includes a first terminal device 100 and a second terminal device 200. Fig. 1 takes as an example a system architecture 10 that includes one first terminal device 100 and one second terminal device 200; of course, the system architecture 10 may also include a plurality of first terminal devices 100 and a plurality of second terminal devices 200, which is not limited in the embodiments of the present application.
The first terminal device 100 may be a mobile phone, a tablet computer, a remote controller, or a wearable electronic device (e.g., a smart watch, AR glasses) with a wireless communication function. The second terminal device 200 may be a terminal device having a display screen; it may be a large-screen device such as a television or a projector, or a device with a smaller screen, such as a smartphone or a tablet computer. The first terminal device 100 is used to control the second terminal device 200, and may control it through Bluetooth, Wi-Fi, or other communication methods.
The structure of the first terminal device 100 is described below. Referring to fig. 2, fig. 2 is a schematic structural diagram of a first terminal device 100 according to an embodiment of the present disclosure.
The first terminal device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a Universal Serial Bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, a button 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a Subscriber Identity Module (SIM) card interface 195, and the like. The sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
It is to be understood that the illustrated structure of the embodiment of the present invention does not constitute a specific limitation to the first terminal device 100. In other embodiments of the present application, the first terminal device 100 may include more or fewer components than shown, or combine certain components, or split certain components, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Processor 110 may include one or more processing units, such as: the processor 110 may include an Application Processor (AP), a modem processor, a Graphics Processing Unit (GPU), an Image Signal Processor (ISP), a controller, a memory, a video codec, a Digital Signal Processor (DSP), a baseband processor, and/or a neural-Network Processing Unit (NPU), etc. The different processing units may be separate devices or may be integrated into one or more processors.
The controller may be a neural center and a command center of the first terminal device 100. The controller can generate an operation control signal according to the instruction operation code and the timing signal to complete the control of instruction fetching and instruction execution.
A memory may also be provided in processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that have just been used or recycled by the processor 110. If the processor 110 needs to reuse the instruction or data, it can be called directly from the memory. Avoiding repeated accesses reduces the latency of the processor 110, thereby increasing the efficiency of the system.
In some embodiments, processor 110 may include one or more interfaces. The interface may include an integrated circuit (I2C) interface, an integrated circuit built-in audio (I2S) interface, a Pulse Code Modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a Mobile Industry Processor Interface (MIPI), a general-purpose input/output (GPIO) interface, a Subscriber Identity Module (SIM) interface, and/or a Universal Serial Bus (USB) interface, etc.
The I2C interface is a bi-directional synchronous serial bus that includes a serial data line (SDA) and a Serial Clock Line (SCL). In some embodiments, processor 110 may include multiple sets of I2C buses. The processor 110 may be coupled to the touch sensor 180K, the charger, the flash, the camera 193, etc. through different I2C bus interfaces, respectively. For example: the processor 110 may be coupled to the touch sensor 180K through an I2C interface, such that the processor 110 and the touch sensor 180K communicate through an I2C bus interface to implement the touch function of the first terminal device 100.
The I2S interface may be used for audio communication. In some embodiments, processor 110 may include multiple sets of I2S buses. The processor 110 may be coupled to the audio module 170 via an I2S bus to enable communication between the processor 110 and the audio module 170. In some embodiments, the audio module 170 may communicate audio signals to the wireless communication module 160 via the I2S interface, enabling answering of calls via a bluetooth headset.
The PCM interface may also be used for audio communication, sampling, quantizing and encoding analog signals. In some embodiments, the audio module 170 and the wireless communication module 160 may communicate through a PCM interface. In some embodiments, the audio module 170 may also transmit audio signals to the wireless communication module 160 through the PCM interface, so as to implement a function of answering a call through a bluetooth headset. Both the I2S interface and the PCM interface may be used for audio communication.
The UART interface is a universal serial data bus used for asynchronous communications. The bus may be a bidirectional communication bus. It converts the data to be transmitted between serial communication and parallel communication. In some embodiments, a UART interface is generally used to connect the processor 110 with the wireless communication module 160. For example: the processor 110 communicates with a bluetooth module in the wireless communication module 160 through a UART interface to implement a bluetooth function. In some embodiments, the audio module 170 may transmit the audio signal to the wireless communication module 160 through a UART interface, so as to realize the function of playing music through a bluetooth headset.
MIPI interfaces may be used to connect processor 110 with peripheral devices such as display screen 194, camera 193, and the like. The MIPI interface includes a Camera Serial Interface (CSI), a Display Serial Interface (DSI), and the like. In some embodiments, the processor 110 and the camera 193 communicate through a CSI interface to implement the shooting function of the first terminal device 100. The processor 110 and the display screen 194 communicate through the DSI interface to implement the display function of the first terminal device 100.
The GPIO interface may be configured by software. The GPIO interface may be configured as a control signal and may also be configured as a data signal. In some embodiments, a GPIO interface may be used to connect the processor 110 with the camera 193, the display 194, the wireless communication module 160, the audio module 170, the sensor module 180, and the like. The GPIO interface may also be configured as an I2C interface, an I2S interface, a UART interface, a MIPI interface, and the like.
The USB interface 130 is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type-C interface, or the like. The USB interface 130 may be used to connect a charger to charge the first terminal device 100, and may also be used to transmit data between the first terminal device 100 and a peripheral device. It can also be used to connect an earphone and play audio through the earphone. The interface may further be used to connect other electronic devices, such as AR devices.
It should be understood that the interface connection relationship between the modules according to the embodiment of the present invention is only an exemplary illustration, and does not constitute a structural limitation on the first terminal device 100. In other embodiments of the present application, the first terminal device 100 may also adopt different interface connection manners or a combination of multiple interface connection manners in the above embodiments.
The charging management module 140 is configured to receive charging input from a charger. The charger may be a wireless charger or a wired charger.
The power management module 141 is used to connect the battery 142, the charging management module 140 and the processor 110. The power management module 141 receives input from the battery 142 and/or the charging management module 140 and supplies power to the processor 110, the internal memory 121, the external memory, the display screen 194, the camera 193, the wireless communication module 160, and the like. In other embodiments, the power management module 141 may also be disposed in the processor 110. In other embodiments, the power management module 141 and the charging management module 140 may be disposed in the same device.
The wireless communication function of the first terminal device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the first terminal device 100 may be used to cover a single or multiple communication bands. Different antennas can also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed as a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 150 may provide a solution including 2G/3G/4G/5G wireless communication applied on the first terminal device 100. The mobile communication module 150 may include at least one filter, a switch, a power amplifier, a Low Noise Amplifier (LNA), and the like. The mobile communication module 150 may receive the electromagnetic wave from the antenna 1, filter, amplify, etc. the received electromagnetic wave, and transmit the electromagnetic wave to the modem processor for demodulation. The mobile communication module 150 may also amplify the signal modulated by the modem processor, and convert the signal into electromagnetic wave through the antenna 1 to radiate the electromagnetic wave. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the processor 110. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the same device as at least some of the modules of the processor 110.
The modem processor may include a modulator and a demodulator. The modulator is used for modulating a low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then passes the demodulated low frequency baseband signal to a baseband processor for processing. The low frequency baseband signal is processed by the baseband processor and then transferred to the application processor. The application processor outputs a sound signal through an audio device (not limited to the speaker 170A, the receiver 170B, etc.) or displays an image or video through the display screen 194. In some embodiments, the modem processor may be a stand-alone device. In other embodiments, the modem processor may be provided in the same device as the mobile communication module 150 or other functional modules, independent of the processor 110.
The wireless communication module 160 may provide a solution for wireless communication applied to the first terminal device 100, including Wireless Local Area Networks (WLANs) (e.g., Wi-Fi networks), Bluetooth (BT), BLE broadcasting, Global Navigation Satellite System (GNSS), Frequency Modulation (FM), Near Field Communication (NFC), Infrared (IR), and the like. The wireless communication module 160 may be one or more devices integrating at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, performs frequency modulation and filtering processing on electromagnetic wave signals, and transmits the processed signals to the processor 110. The wireless communication module 160 may also receive a signal to be transmitted from the processor 110, perform frequency modulation and amplification on the signal, and convert the signal into electromagnetic waves through the antenna 2 to radiate the electromagnetic waves.
In some embodiments, the antenna 1 of the first terminal device 100 is coupled to the mobile communication module 150, and the antenna 2 is coupled to the wireless communication module 160, so that the first terminal device 100 can communicate with a network and other devices through a wireless communication technology. The wireless communication technology may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technologies. The GNSS may include a global positioning system (GPS), a global navigation satellite system (GLONASS), a BeiDou navigation satellite system (BDS), a quasi-zenith satellite system (QZSS), and/or a satellite based augmentation system (SBAS).
The first terminal device 100 implements a display function by the GPU, the display screen 194, and the application processor, etc. The GPU is a microprocessor for image processing, and is connected to the display screen 194 and an application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or alter display information.
The display screen 194 is used to display images, video, and the like. The display screen 194 includes a display panel. The display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini LED, a Micro LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like. In some embodiments, the first terminal device 100 may include 1 or N display screens 194, where N is a positive integer greater than 1.
The first terminal device 100 may implement a photographing function through the ISP, the camera 193, the video codec, the GPU, the display screen 194, the application processor, and the like.
The ISP is used to process the data fed back by the camera 193. For example, when a photo is taken, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electrical signal, and the camera photosensitive element transmits the electrical signal to the ISP for processing and converting into an image visible to naked eyes. The ISP can also carry out algorithm optimization on the noise, brightness and skin color of the image. The ISP can also optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments, the ISP may be provided in camera 193.
The camera 193 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image to the photosensitive element. The photosensitive element may be a Charge Coupled Device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The light sensing element converts the optical signal into an electrical signal, which is then passed to the ISP where it is converted into a digital image signal. And the ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into image signal in standard RGB, YUV and other formats. In some embodiments, the first terminal device 100 may include 1 or N cameras 193, where N is a positive integer greater than 1.
The digital signal processor is used for processing digital signals, and can process digital image signals and other digital signals. For example, when the first terminal device 100 selects a frequency bin, the digital signal processor is configured to perform fourier transform or the like on the frequency bin energy.
Video codecs are used to compress or decompress digital video. The first terminal device 100 may support one or more video codecs. In this way, the first terminal device 100 can play or record video in a plurality of encoding formats, such as: moving Picture Experts Group (MPEG) 1, MPEG2, MPEG3, MPEG4, and the like.
The NPU is a neural-network (NN) computing processor that processes input information quickly by using a biological neural network structure, for example, by using a transfer mode between neurons of a human brain, and can also learn by itself continuously. The NPU may implement applications such as intelligent recognition of the first terminal device 100, for example: image recognition, face recognition, speech recognition, text understanding, and the like.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to extend the storage capability of the first terminal device 100. The external memory card communicates with the processor 110 through the external memory interface 120 to implement a data storage function. For example, files such as music, video, etc. are saved in an external memory card.
The internal memory 121 may be used to store computer-executable program code, which includes instructions. The processor 110 executes various functional applications and data processing of the first terminal device 100 by executing instructions stored in the internal memory 121. The internal memory 121 may include a program storage area and a data storage area. The storage program area may store an operating system, an application program (such as a sound playing function, an image playing function, etc.) required by at least one function, and the like. The storage data area may store data (such as audio data, a phonebook, etc.) created during use of the first terminal device 100, and the like. In addition, the internal memory 121 may include a high-speed random access memory, and may further include a nonvolatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (UFS), and the like.
The first terminal device 100 may implement an audio function through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the earphone interface 170D, and the application processor. Such as music playing, recording, etc.
The audio module 170 is used to convert digital audio information into an analog audio signal output and also to convert an analog audio input into a digital audio signal. The audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be disposed in the processor 110, or some functional modules of the audio module 170 may be disposed in the processor 110.
The speaker 170A, also called a "horn", is used to convert the audio electrical signal into an acoustic signal. The first terminal device 100 can listen to music through the speaker 170A or listen to a handsfree call.
The receiver 170B, also called "earpiece", is used to convert the electrical audio signal into an acoustic signal. When the first terminal apparatus 100 answers a call or voice information, it is possible to answer a voice by bringing the receiver 170B close to the human ear.
The microphone 170C, also referred to as a "mic", is used to convert sound signals into electrical signals. When making a call or sending voice information, the user can input a voice signal into the microphone 170C by speaking close to it. The first terminal device 100 may be provided with at least one microphone 170C. In other embodiments, the first terminal device 100 may be provided with two microphones 170C, which can implement a noise reduction function in addition to collecting sound signals. In other embodiments, the first terminal device 100 may further include three, four or more microphones 170C to collect sound signals, reduce noise, identify sound sources, implement directional recording functions, and so on.
The headphone interface 170D is used to connect a wired headphone. The headset interface 170D may be the USB interface 130, or may be a 3.5mm open mobile electronic device platform (OMTP) standard interface, a cellular telecommunications industry association (cellular telecommunications industry association of the USA, CTIA) standard interface.
The pressure sensor 180A is used for sensing a pressure signal, and converting the pressure signal into an electrical signal. In some embodiments, the pressure sensor 180A may be disposed on the display screen 194.
The gyro sensor 180B may be used to determine the motion attitude of the first terminal device 100. In some embodiments, the angular velocity of the first terminal device 100 about three axes (i.e., x, y, and z axes) may be determined by the gyro sensor 180B. The gyro sensor 180B may be used for photographing anti-shake. The gyroscope sensor 180B may also be used for navigation, somatosensory gaming scenes.
The air pressure sensor 180C is used to measure air pressure. In some embodiments, the first terminal device 100 calculates an altitude from the barometric pressure measured by the barometric pressure sensor 180C, and assists in positioning and navigation.
The magnetic sensor 180D includes a hall sensor. The first terminal device 100 may detect the opening and closing of the folder holster using the magnetic sensor 180D.
The acceleration sensor 180E can detect the magnitude of acceleration of the first terminal device 100 in various directions (generally along three axes). The magnitude and direction of gravity can be detected when the first terminal device 100 is stationary. It can also be used to recognize the posture of the terminal device, and is applied in landscape/portrait switching, pedometers and other applications.
A distance sensor 180F for measuring a distance. The first terminal device 100 may measure the distance by infrared or laser. In some embodiments, shooting a scene, the first terminal device 100 may range using the distance sensor 180F to achieve fast focus.
The proximity light sensor 180G may include, for example, a light emitting diode (LED) and a light detector, such as a photodiode. The light emitting diode may be an infrared light emitting diode. The first terminal device 100 emits infrared light outward through the light emitting diode and detects infrared light reflected from a nearby object using the photodiode, so as to automatically turn off the screen to save power. The proximity light sensor 180G may also be used in a holster mode or a pocket mode to automatically unlock and lock the screen.
The ambient light sensor 180L is used to sense the ambient light level. The first terminal device 100 may adaptively adjust the brightness of the display screen 194 according to the perceived ambient light level. The ambient light sensor 180L may also be used to automatically adjust the white balance when taking a picture. The ambient light sensor 180L may also cooperate with the proximity light sensor 180G to detect whether the first terminal device 100 is in a pocket, in order to prevent a false touch.
The fingerprint sensor 180H is used to collect a fingerprint. The first terminal device 100 may utilize the collected fingerprint characteristics to realize fingerprint unlocking, access to an application lock, fingerprint photographing, fingerprint incoming call answering, and the like.
The temperature sensor 180J is used to detect temperature. In some embodiments, the first terminal device 100 executes a temperature processing strategy using the temperature detected by the temperature sensor 180J.
The touch sensor 180K is also referred to as a "touch panel". The touch sensor 180K may be disposed on the display screen 194, and the touch sensor 180K and the display screen 194 form a touch screen, which is also called a "touch screen". The touch sensor 180K is used to detect a touch operation applied thereto or nearby. The touch sensor can communicate the detected touch operation to the application processor to determine the touch event type. Visual output associated with the touch operation may be provided through the display screen 194. In other embodiments, the touch sensor 180K may be disposed on the surface of the first terminal device 100, different from the position of the display screen 194.
The bone conduction sensor 180M may acquire a vibration signal. In some embodiments, the bone conduction sensor 180M may acquire a vibration signal of the human vocal part vibrating the bone mass.
The keys 190 include a power-on key, a volume key, and the like. The keys 190 may be mechanical keys. Or may be touch keys. The first terminal device 100 may receive a key input, and generate a key signal input related to user setting and function control of the first terminal device 100.
The motor 191 may generate a vibration cue. The motor 191 may be used for incoming call vibration cues as well as for touch vibration feedback. For example, touch operations applied to different applications (e.g., photographing, audio playing, etc.) may correspond to different vibration feedback effects. Touch operations applied to different areas of the display screen 194 may also correspond to different vibration feedback effects of the motor 191. Different application scenarios (such as time reminders, receiving information, alarm clocks, games, etc.) may also correspond to different vibration feedback effects. The touch vibration feedback effect may also support customization.
Indicator 192 may be an indicator light that may be used to indicate a state of charge, a change in charge, or a message, missed call, notification, etc.
The SIM card interface 195 is used to connect a SIM card. The SIM card can be brought into and out of contact with the first terminal device 100 by being inserted into the SIM card interface 195 or being pulled out from the SIM card interface 195. The first terminal device 100 may support 1 or N SIM card interfaces, where N is a positive integer greater than 1. The SIM card interface 195 may support a Nano SIM card, a Micro SIM card, a SIM card, etc. The same SIM card interface 195 can be inserted with multiple cards at the same time. The types of the plurality of cards may be the same or different. The SIM card interface 195 may also be compatible with different types of SIM cards. The SIM card interface 195 may also be compatible with external memory cards. The first terminal device 100 interacts with the network through the SIM card to implement functions such as a call and data communication. In some embodiments, the first terminal device 100 employs eSIM, namely: an embedded SIM card. The eSIM card may be embedded in the first terminal device 100 and cannot be separated from the first terminal device 100.
The software system of the first terminal device 100 may adopt a hierarchical architecture, an event-driven architecture, a micro-core architecture, a micro-service architecture, or a cloud architecture. The embodiment of the present invention takes an Android system with a layered architecture as an example, and exemplarily illustrates a software structure of the first terminal device 100. Fig. 3 is a block diagram of a software configuration of the first terminal device 100 according to the embodiment of the present application. The layered architecture divides the software into several layers, each layer having a clear role and division of labor. The layers communicate with each other through a software interface. In some embodiments, the Android system is divided into four layers, an application layer, an application framework layer, an Android runtime (Android runtime) and system library, and a kernel layer from top to bottom.
The application layer may include a series of application packages. As shown in fig. 3, the application package may include applications such as camera, gallery, calendar, phone call, map, navigation, WLAN, bluetooth, music, video, short message, etc.
The application framework layer provides an Application Programming Interface (API) and a programming framework for the application program of the application layer. The application framework layer includes a number of predefined functions. As shown in FIG. 3, the application framework layers may include a window manager, content provider, view system, phone manager, resource manager, notification manager, and the like.
The window manager is used for managing window programs. The window manager can obtain the size of the display screen, judge whether a status bar exists, lock the screen, intercept the screen and the like.
The content provider is used to store and retrieve data and make it accessible to applications. The data may include video, images, audio, calls made and received, browsing history and bookmarks, phone books, etc.
The view system includes visual controls such as controls to display text, controls to display pictures, and the like. The view system may be used to build applications. The display interface may be composed of one or more views. For example, the display interface including the short message notification icon may include a view for displaying text and a view for displaying pictures.
The phone manager is used to provide a communication function of the first terminal device 100. Such as management of call status (including on, off, etc.).
The resource manager provides various resources for the application, such as localized strings, icons, pictures, layout files, video files, and the like.
The notification manager enables the application to display notification information in the status bar. It can be used to convey notification-type messages, which can disappear automatically after a short stay without user interaction. For example, the notification manager is used to notify of download completion, message alerts, and so on. The notification manager may also present notifications in the form of a chart or scroll-bar text in the status bar at the top of the system, such as notifications of applications running in the background, or notifications that appear on the screen in the form of a dialog window. For example, text information is prompted in the status bar, a prompt tone is sounded, the electronic device vibrates, or an indicator light flashes.
The Android Runtime comprises a core library and a virtual machine. The Android runtime is responsible for scheduling and managing an Android system.
The core library comprises two parts: one part is the functions that the java language needs to call, and the other part is the core library of Android.
The application layer and the application framework layer run in the virtual machine. The virtual machine executes the java files of the application layer and the application framework layer as binary files. The virtual machine is used to perform functions such as object life cycle management, stack management, thread management, security and exception management, and garbage collection.
The system library may include a plurality of functional modules. For example: surface managers (surface managers), Media Libraries (Media Libraries), three-dimensional graphics processing Libraries (e.g., OpenGL ES), 2D graphics engines (e.g., SGL), and the like.
The surface manager is used to manage the display subsystem and provide fusion of 2D and 3D layers for multiple applications.
The media library supports playback and recording of a variety of commonly used audio and video formats, as well as still image files and the like. The media library may support a variety of audio and video encoding formats, such as MPEG4, H.264, MP3, AAC, AMR, JPG, and PNG.
The three-dimensional graphic processing library is used for realizing three-dimensional graphic drawing, image rendering, synthesis, layer processing and the like.
The 2D graphics engine is a drawing engine for 2D drawing.
The kernel layer is a layer between hardware and software. The inner core layer at least comprises a display driver, a camera driver, an audio driver and a sensor driver.
In the embodiment of the present application, the structure of the second terminal device 200 may be the same as that of the first terminal device 100, or the structure of the second terminal device 200 includes only a part of the elements of the first terminal device 100. For example, referring to fig. 4, fig. 4 is a schematic structural diagram of a second terminal device 200 according to an embodiment of the present disclosure. As shown in fig. 4, the second terminal device 200 may include: processor 201, memory 202, wireless communication module 203, display 204, power 205.
The memory 202 may be used, among other things, for storing application code that is executed by the processor 201 to cause the second terminal device 200 to perform the methods in embodiments of the present invention. For further description of the processor 201, the memory 202, the wireless communication module 203, the display 204 and the power supply 205, reference may be made to the above description of the processor, the memory, the wireless communication module, the display and the power supply in the first terminal device 100, which is not repeated herein.
It is to be understood that the structure illustrated in fig. 4 in the embodiment of the present application does not specifically limit the second terminal device 200. The second terminal device 200 may have more or fewer components than shown in fig. 4.
The software system of the second terminal apparatus 200 may be the same as or different from that of the first terminal apparatus 100. For example, the second terminal device 200 includes all or part of an application layer, an application framework layer, an android runtime, a system library, and a kernel layer.
The following describes in detail the device control method, the first terminal device, and the second terminal device provided in the embodiments of the present application:
referring to fig. 5, fig. 5 is a schematic flowchart of an apparatus control method according to an embodiment of the present disclosure. The method execution subjects shown in fig. 5 may be the first terminal device and the second terminal device, or the subjects may be a chip in the first terminal device and a chip in the second terminal device. Fig. 5 illustrates an example of an execution subject of the method, which is a first terminal device and a second terminal device. The same principle is applied to the execution of the device control methods shown in other figures in the embodiments of the present application, and details are not described later. The device control method shown in fig. 5 includes steps 501 to 506. Wherein:
501. the first terminal device determines position information of the second terminal device and attitude information of the second terminal device.
The first terminal device may determine the position information of the second terminal device and the posture information of the second terminal device by using a simultaneous localization and mapping (SLAM) algorithm and a 3D (three-dimensional) object recognition algorithm. SLAM is also known as concurrent mapping and localization (CML). The SLAM algorithm is an important branch of AR technology and is mainly used to determine and track the relative position of the terminal device in the current environment. When the SLAM algorithm starts, a temporary world coordinate system consistent with real dimensions is first established, i.e., 1 meter in the world coordinate system corresponds to 1 meter in the real world. The SLAM algorithm can continuously determine the position and posture (orientation) of the terminal device in the temporary world coordinate system as the terminal device moves. The 3D object recognition technology can recognize a designated 3D object in real time in a video or a camera preview stream through artificial intelligence (AI), and, in combination with the SLAM technology, can recognize and track the position of the 3D object in the real environment in real time. Of course, the first terminal device may also determine the position information of the second terminal device and the posture information of the second terminal device through other algorithms, which is not limited in the embodiment of the present application. For example, the first terminal device may determine the position information of the second terminal device and the posture information of the second terminal device through a collision detection algorithm.
In one possible implementation, the position information of the second terminal device includes center position information of a screen of the second terminal device in the world coordinate system and/or position information of one or more vertices of the screen of the second terminal device, and the posture information of the second terminal device includes a normal vector of the screen of the second terminal device in the world coordinate system. Determining the position information of the selection point based on the position information of the second terminal device and the posture information of the second terminal device improves the accuracy of the determined position information of the selection point. For example, as shown in fig. 6, the first terminal device may define a world coordinate system ω_c by itself through the SLAM algorithm. The position information of the second terminal device includes the center position coordinate p_bs of the second terminal device screen in the world coordinate system ω_c and the coordinates c_bs0, c_bs1, c_bs2, c_bs3 of the four vertices of the second terminal device screen in the world coordinate system ω_c. The posture information of the second terminal device includes the normal vector q_bs of the second terminal device screen in the world coordinate system ω_c.
In another possible implementation, the position information of the second terminal device may also be coordinates of any position of the screen of the second terminal device in a world coordinate system, and the posture information of the second terminal device may also be a normal vector of any plane of the second terminal device in the world coordinate system, which is not limited in the embodiment of the present application.
502. The first terminal device determines position information of the first terminal device and attitude information of the first terminal device.
In the embodiment of the present application, the execution sequence of step 501 and step 502 is not limited. Step 501 may be performed first and then step 502 may be performed, or step 502 may be performed first and then step 501 may be performed, or step 501 and step 502 may be performed simultaneously.
In this embodiment, the first terminal device may perform steps 502 to 505 with a preset time as a period. For example, the preset time may be greater than or equal to 1 msec and less than or equal to 50 msec. The first terminal device may determine the position information of the second terminal device and the posture information of the second terminal device only at an initial stage, and the position information of the second terminal device and the posture information of the second terminal device do not need to be determined with a preset time as a period. For example, if the second terminal device is a large-screen device and the position of the second terminal device does not move in general, the first terminal device does not need to determine the position information of the second terminal device and the posture information of the second terminal device with a preset time period. Of course, if the position of the second terminal device may move at any time, the first terminal device may also determine the position information of the second terminal device and the posture information of the second terminal device with a preset time period.
In the embodiment of the application, the first terminal device may determine the position information of the first terminal device and the posture information of the first terminal device through a SLAM algorithm. Of course, the first terminal device may also determine the position information and the posture information of the first terminal device through other algorithms, which is not limited in the embodiment of the present application.
In the embodiment of the application, the position information of the first terminal device, the posture information of the first terminal device, the position information of the second terminal device and the posture information of the second terminal device are in the same world coordinate system.
In one possible implementation, the position information of the first terminal device includes position information of a camera of the first terminal device in a world coordinate system, and the posture information of the first terminal device includes a normal vector of a plane in which the camera of the first terminal device is located in the world coordinate system. For example, as shown in fig. 6, at time n, the position information of the first terminal device is the coordinate p_n of the first terminal device camera in the world coordinate system ω_c, and the posture information of the first terminal device is the normal vector q_n of the plane where the first terminal device camera is located in the world coordinate system ω_c.
In another possible implementation, the position information of the first terminal device may also be coordinates of any position of the screen of the first terminal device in a world coordinate system, and the posture information of the first terminal device may also be a normal vector of any plane of the first terminal device in the world coordinate system, which is not limited in the embodiment of the present application.
In one possible implementation, the first terminal device and the second terminal device may establish a connection in advance, and after establishing the connection, perform step 501 and step 502.
In one possible implementation, when the first terminal device detects an operation of turning on the device control switch by a user, the first terminal device performs step 501 and step 502. When the first terminal device detects an operation of turning off the device control switch by the user, the first terminal device may stop performing step 501 and step 502. For example, fig. 7 is a schematic diagram of a setting interface of a first terminal device, and as shown in fig. 7, the setting interface of the first terminal device includes a device control switch 701. When the first terminal device detects a user's click or rightward sliding operation of the device control switch 701, the first terminal device starts to execute steps 501 and 502. Subsequently, when the first terminal device detects the user's click or leftward sliding operation on the device control switch 701 again, the first terminal device stops executing steps 501 and 502.
In another possible implementation, a control application for controlling the other terminal devices is installed in the first terminal device. After the first terminal device starts the control application, a second terminal device to be controlled may be selected from the control application, and then step 501 and step 502 are performed. The execution of steps 501 and 502 is stopped when it is detected that the user has closed the control application.
503. The first terminal device determines position information of a selection point based on the position information of the first terminal device, the posture information of the first terminal device, the position information of the second terminal device and the posture information of the second terminal device, wherein the selection point is a point selected by the first terminal device on a plane where a screen of the second terminal device is located.
In the embodiment of the present application, the selection point may be an intersection point of a ray, whose starting point is a certain position on the first terminal device, with the plane where the screen of the second terminal device is located. The ray can use the position of the camera of the first terminal device as its starting point, and the direction of the ray is the same as the normal direction of the plane where the camera of the first terminal device is located. Alternatively, the ray direction may have an angular deviation from the normal direction of the plane where the first terminal device camera is located. For example, fig. 8 illustrates the case where the ray direction is the same as the normal direction of the plane where the first terminal device camera is located.
In one possible implementation, the position information of the second terminal device includes center position information of a screen of the second terminal device in the world coordinate system and position information of one or more vertices of the screen of the second terminal device, and the posture information of the second terminal device includes a normal vector of the screen of the second terminal device in the world coordinate system. The position information of the first terminal device includes position information of a camera of the first terminal device in the world coordinate system, and the posture information of the first terminal device includes a normal vector of the plane where the camera of the first terminal device is located in the world coordinate system. A specific implementation in which the first terminal device determines the position information of the selection point based on the position information of the first terminal device, the posture information of the first terminal device, the position information of the second terminal device, and the posture information of the second terminal device may be as follows: the first terminal device determines the position information of the selection point in the world coordinate system based on the position information of the first terminal device camera in the world coordinate system, the normal vector of the plane where the first terminal device camera is located in the world coordinate system, the center position information of the second terminal device screen in the world coordinate system, and the normal vector of the second terminal device screen in the world coordinate system; the first terminal device then determines the position information of the selection point in a plane coordinate system based on the position information of one or more vertices of the second terminal device screen in the world coordinate system and the position information of the selection point in the world coordinate system. The first information in step 505 includes the position information of the selection point, specifically the position information of the selection point in the plane coordinate system.
For example, as shown in FIG. 8, assume the world coordinate system is ω_c. At time n, the position information of the first terminal device is the coordinate p_n of the first terminal device camera in the world coordinate system ω_c, and the posture information of the first terminal device is the normal vector q_n of the plane where the first terminal device camera is located in the world coordinate system ω_c. The position information of the second terminal device includes the center position coordinate p_bs of the second terminal device screen in the world coordinate system ω_c and the coordinates c_bs0, c_bs1, c_bs2, c_bs3 of the four vertices of the second terminal device screen in the world coordinate system ω_c. The posture information of the second terminal device includes the normal vector q_bs of the second terminal device screen in the world coordinate system ω_c.
At time n, the selection point is the intersection point of a ray, whose starting point is the coordinate p_n of the camera of the first terminal device, with the plane where the screen of the second terminal device is located. The direction of the ray is the same as the normal vector q_n of the first terminal device camera. Suppose that at time n the coordinate of the selection point in the world coordinate system ω_c is i_n. Calculating i_n requires solving the following system of equations:
i_n = p_n + m·q_n    (1)
(i_n - p_bs) · q_bs = 0    (2)
wherein formula (1) is the parameter equation of the ray, with m being the ray parameter, and formula (2) is the equation of the plane where the screen of the second terminal device is located; the product in formula (2) is a dot product operation. Substituting formula (1) into formula (2) and solving for m gives
m = ((p_bs - p_n) · q_bs) / (q_n · q_bs)
so that
i_n = p_n + m·q_n
As shown in FIG. 9, assume that the vertex c_bs0 in the upper left corner of the screen of the second terminal device is taken as the origin, and a plane coordinate system n_c of the plane where the screen of the second terminal device is located is established. In the world coordinate system ω_c, the coordinates of the four vertices of the second terminal device screen are c_bs0, c_bs1, c_bs2, and c_bs3, respectively. The coordinates of the selection point under n_c are (x_in, y_in), where, taking c_bs1 as the vertex adjacent to c_bs0 along the top edge of the screen and c_bs2 as the vertex adjacent to c_bs0 along the left edge:
x_in = ((i_n - c_bs0) · (c_bs1 - c_bs0)) / |c_bs1 - c_bs0|²
y_in = ((i_n - c_bs0) · (c_bs2 - c_bs0)) / |c_bs2 - c_bs0|²
After the first terminal device determines the coordinates (x_in, y_in) of the selection point in the plane coordinate system n_c at time n, the first terminal device sends the coordinates (x_in, y_in) and the first operation event detected at time n to the second terminal device.
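A minimal numerical sketch of this computation is given below (Python with numpy; the function name is illustrative, and c_bs1 and c_bs2 are assumed to be the screen vertices adjacent to the origin vertex c_bs0 along the top and left edges, as in the reconstruction above):

```python
import numpy as np

def selection_point_plane_coords(p_n, q_n, p_bs, q_bs, c_bs0, c_bs1, c_bs2):
    """Intersect the ray starting at the first device's camera position p_n with
    direction q_n against the plane of the second device's screen (center p_bs,
    normal q_bs), and express the intersection in the screen's plane coordinate
    system n_c (origin at c_bs0, coordinates in [0, 1] inside the screen)."""
    denom = np.dot(q_n, q_bs)
    if abs(denom) < 1e-9:              # ray is parallel to the screen plane
        return None
    m = np.dot(p_bs - p_n, q_bs) / denom
    i_n = p_n + m * q_n                # intersection in the world coordinate system
    u = c_bs1 - c_bs0                  # top edge direction of the screen
    v = c_bs2 - c_bs0                  # left edge direction of the screen
    x_in = np.dot(i_n - c_bs0, u) / np.dot(u, u)
    y_in = np.dot(i_n - c_bs0, v) / np.dot(v, v)
    return x_in, y_in
```

For example, with c_bs0 = (0, 0, 0), c_bs1 = (1, 0, 0), c_bs2 = (0, 0.6, 0), p_bs = (0.5, 0.3, 0), q_bs = (0, 0, 1), a camera at p_n = (0.5, 0.3, 2.0) pointing along q_n = (0, 0, -1) gives (0.5, 0.5), the center of the screen.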
504. And the first terminal equipment detects a first operation event corresponding to the selection point.
In the embodiment of the present application, step 504 and step 503 may be performed simultaneously. The first operation event corresponding to the selection point is a first operation event input by the user at the first terminal device when the first terminal device determines the selection point. The first operation event corresponding to the selection point may be understood as the first operation event performed by the user at the selection point.
In one possible implementation, the first operation event may be a press event, a lift event, or no event. Alternatively, the first operation event may be a click event or no event. For example, if the first terminal device determines the position information (x_in, y_in) of the selection point at time n and detects that the user inputs a press event at the first terminal device at time n, the press event corresponds to the selection point at time n. In step 505, the first terminal device sends first information to the second terminal device, the first information including the position information (x_in, y_in) of the selection point and the press event.
505. The first terminal equipment sends first information to the second terminal equipment, wherein the first information comprises position information of the selection point and a first operation event corresponding to the selection point.
506. The second terminal device performs a target operation based on the first information.
In the embodiment of the application, after receiving the first information, the second terminal device executes the target operation based on the first information. Optionally, the second terminal device may receive the first information with the preset time as a period. The target operation may be playing, pausing, fast forwarding, fast rewinding, playing the previous album, playing the next album, playing the previous song, playing the next song, increasing the volume, decreasing the volume, paging down, paging up, selecting an input box, or inputting text information.
In one possible implementation, if the first operation event is a click event, the second terminal device may perform a target operation based on the click event received in the current cycle and the position information of the selection point. For example, if the position information of the selection point is on the play function button, the second terminal device plays the multimedia file.
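As an illustration of this kind of dispatch, a small hit-test sketch follows (the button names and their rectangles in the normalized plane coordinate system are hypothetical, not part of the embodiment):

```python
# Hypothetical layout of function buttons in normalized screen coordinates,
# each as (x0, y0, x1, y1) with the top-left corner of the screen at (0, 0).
BUTTONS = {
    "play":  (0.45, 0.80, 0.55, 0.90),
    "pause": (0.60, 0.80, 0.70, 0.90),
}

def handle_click(x_in, y_in):
    """Perform the target operation for a click event at the selection point."""
    for name, (x0, y0, x1, y1) in BUTTONS.items():
        if x0 <= x_in <= x1 and y0 <= y_in <= y1:
            return f"perform the '{name}' operation"
    return "no function button at the selection point"
```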
In another possible implementation, if the first operation event is a press-down event or a lift-up event or no event, the second terminal device needs to perform the target operation based on the first information received for a plurality of consecutive periods.
In one possible implementation, if the first operation event is a press event, a lift event, or no event, the first information further includes a timestamp corresponding to the first operation event. The timestamp corresponding to the first operation event is the time when the first operation event is detected. For example, if the first terminal device determines the position information (x_in, y_in) of the selection point at time n and detects that the user inputs a press event at the first terminal device at time n, the first information includes the position information (x_in, y_in) of the selection point, the press event, and the time n.
In a possible implementation, the first operation event is a press-down event, a lift-up event, or no event, and the first information further includes a timestamp corresponding to the first operation event. The specific implementation manner of the second terminal device executing the target operation based on the first information is as follows: the second terminal equipment determines a second operation event input by a user at a plurality of selection points based on a plurality of first operation events received in a plurality of continuous periods and timestamps corresponding to the plurality of first operation events; the second terminal device performs a target operation based on the second operation event and the position information of the plurality of selection points. Optionally, the second operation event is a click event, a sliding touch event or a long-press event. Alternatively, the second operation event may also be another event, and the embodiment of the present application is not limited. The plurality of first operation events can be flexibly combined into the second operation event, so that the diversity of gestures for controlling the second terminal device can be improved based on the possible implementation mode.
For example, as shown in table 1 below, if three first operation events received in three consecutive cycles sequentially include one no event/lift event, one press event, and one lift event in the order from the front to the back of the timestamp, the second terminal device determines that the user performed a click event at the selection point corresponding to the press event and the selection point corresponding to the lift event. The second terminal device executes the target operation based on the click event, the position information of the selection point corresponding to the press event, and the position information of the selection point corresponding to the lift event.
As another example, as shown in table 1 below, if the plurality of first operation events received in a plurality of consecutive periods sequentially include a plurality of press events and one lift event in timestamp order, the second terminal device determines that the user has performed a continuous touch event at the selection points corresponding to the plurality of first operation events. If the position information of the selection points corresponding to the first operation events is different, or the distance between any two selection points corresponding to the first operation events is larger than a preset value, the continuous touch event is a sliding touch event; if the position information of the selection points corresponding to the first operation events is the same, or the distance between any two selection points corresponding to the first operation events is smaller than the preset value, the continuous touch event is a long-press event. The second terminal device executes the target operation based on the continuous touch event and the position information of the selection points corresponding to the plurality of first operation events. Optionally, the second terminal device may record a touch trajectory of the continuous touch event based on the position information of the selection points corresponding to the plurality of first operation events, so that the second terminal device may subsequently execute the target operation based on the continuous touch event and the touch trajectory of the continuous touch event. For example, for the second row in Table 1 below, the second terminal device records the touch trajectory of the continuous touch event as (x_in, y_in), (x_in+Δt, y_in+Δt), …. For the third row in Table 1 below, the second terminal device records the touch trajectory of the continuous touch event as …, (x_in-Δt, y_in-Δt), (x_in, y_in).
TABLE 1
First operation events received in consecutive periods (in timestamp order) | Second operation event | Touch trajectory recorded by the second terminal device
no event/lift event, press event, lift event | click event | position of the selection point corresponding to the press event and the lift event
press event, press event, ..., lift event (first press event detected at time n) | continuous touch event (sliding touch event or long-press event) | (x_in, y_in), (x_in+Δt, y_in+Δt), ...
..., press event, press event, lift event (lift event detected at time n) | continuous touch event (sliding touch event or long-press event) | ..., (x_in-Δt, y_in-Δt), (x_in, y_in)
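A sketch of how the second terminal device might turn the first operation events received over consecutive periods into a second operation event, following the rules of Table 1, is shown below (the event labels, the distance threshold separating a sliding touch event from a long-press event, and the function name are assumptions made for the example):

```python
import math

def classify_first_operation_events(events, threshold=0.02):
    """events: list of (timestamp, first_operation_event, point) tuples sorted by
    timestamp, where first_operation_event is 'press', 'lift' or None and point
    is the (x, y) plane coordinate of the corresponding selection point.
    Returns the inferred second operation event and the relevant points."""
    kinds = [kind for _, kind, _ in events]
    points = [point for _, kind, point in events if kind in ("press", "lift")]
    # a single press followed directly by a lift -> click event
    if kinds[-2:] == ["press", "lift"] and kinds.count("press") == 1:
        return "click event", points
    # several presses ending with a lift -> continuous touch event
    if kinds and kinds[-1] == "lift" and kinds.count("press") >= 2:
        max_dist = max(math.dist(a, b) for a in points for b in points)
        if max_dist > threshold:
            return "sliding touch event", points   # points in timestamp order form the trajectory
        return "long-press event", points
    return "no second operation event", points
```

With the sequence no event, press event, lift event at the same position, the sketch returns a click event; with several press events at clearly different positions followed by a lift event, it returns a sliding touch event whose trajectory is the list of points in timestamp order.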
In one possible implementation, a specific implementation manner in which the second terminal device executes the target operation based on the second operation event and the location information of the plurality of selection points corresponding to the plurality of first operation events is as follows: when the input box is selected, if the second operation event is a sliding touch event, the second terminal device determines a trajectory of the sliding touch event based on position information of a plurality of selection points corresponding to the plurality of first operation events, determines text information based on the trajectory of the sliding touch event, and inputs the text information into the input box. Alternatively, the text information may be words or symbols or numbers or letters, etc. Based on the possible implementation mode, a user can input a sliding touch event in the second terminal device only by pressing the first terminal device and holding the first terminal device to move in the process of pressing the first terminal device, and the track of the sliding touch event is determined according to the moving track of the first terminal device. For example, if the trajectory of the first terminal device movement is "O" shaped, the trajectory of the sliding touch event is "O" shaped. The second terminal device may input text information in the input box based on the trajectory of the sliding touch operation. For example, if the trace of the sliding touch event is "O" shaped, the second terminal device may input the letter "O" in the input box. That is to say, based on the possible implementation manner, the user can input the text information in the input box only by pressing the first terminal device and holding the first terminal device to move in the process of pressing the first terminal device, so that the text information can be conveniently input in the input box of the second terminal device.
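To make this concrete, a toy sketch is given below; the thresholds and the two recognized shapes (a horizontal stroke mapped to the text "one", a closed loop mapped to the letter "O") are invented for the example and stand in for a real trajectory-to-text recognition step:

```python
def recognize_text(trajectory):
    """trajectory: (x, y) plane coordinates of the sliding touch event, in time order."""
    xs = [p[0] for p in trajectory]
    ys = [p[1] for p in trajectory]
    width, height = max(xs) - min(xs), max(ys) - min(ys)
    start, end = trajectory[0], trajectory[-1]
    closed = abs(start[0] - end[0]) < 0.02 and abs(start[1] - end[1]) < 0.02
    if closed and width > 0.05 and height > 0.05:
        return "O"          # closed, roughly round loop -> the letter "O"
    if width > 0.05 and height < 0.02:
        return "one"        # single horizontal stroke -> the text "one"
    return None             # shapes outside this toy sketch are not recognized
```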
The solution described in fig. 5 is further illustrated below in a practical example:
The first terminal device and the second terminal device are connected first. After the first terminal device and the second terminal device establish a connection, as shown in fig. 7, the user may turn on the device control switch 701 at the first terminal device. After the first terminal device turns on the device control switch 701, the first terminal device starts the camera and begins to initialize the position information and the posture information of the second terminal device through the SLAM algorithm and the 3D object recognition algorithm. Since the position of the television (the second terminal device in this example) usually does not move, the first terminal device only needs to calculate the position information of the second terminal device and the posture information of the second terminal device once. For example, as shown in fig. 8, the position information of the second terminal device determined by the first terminal device includes the center position coordinate p_bs of the second terminal device screen in the world coordinate system ω_c and the coordinates c_bs0, c_bs1, c_bs2, c_bs3 of the four vertices of the second terminal device screen in the world coordinate system ω_c. The posture information of the second terminal device includes the normal vector q_bs of the second terminal device screen in the world coordinate system ω_c.
The first terminal device also calculates the position information of the first terminal device and the posture information of the first terminal device by using the SLAM algorithm with Δt as the time period. For example, at time n-Δt, the first terminal device determines that the position information of the first terminal device is the coordinate p_n-Δt of the first terminal device camera in the world coordinate system ω_c, and that the posture information of the first terminal device is the normal vector q_n-Δt of the plane where the first terminal device camera is located in the world coordinate system ω_c. Based on the center position coordinate p_bs of the second terminal device screen, the coordinates c_bs0, c_bs1, c_bs2, c_bs3 of the four vertices of the second terminal device screen, the normal vector q_bs of the second terminal device screen, the coordinate p_n-Δt of the first terminal device camera, and the normal vector q_n-Δt of the plane where the first terminal device camera is located, the first terminal device determines the plane coordinates (x_in-Δt, y_in-Δt) of selection point 1 on the plane where the second terminal device screen is located at time n-Δt. The first terminal device detects a first operation event 1 at time n-Δt. The first terminal device sends first information 1 to the second terminal device, the first information 1 including the plane coordinates (x_in-Δt, y_in-Δt) of selection point 1, the first operation event 1, and the timestamp n-Δt.
At time n, the first terminal device determines that the position information of the first terminal device is the coordinate p_n of the first terminal device camera in the world coordinate system ω_c, and that the posture information of the first terminal device is the normal vector q_n of the plane where the first terminal device camera is located in the world coordinate system ω_c. Based on the center position coordinate p_bs of the second terminal device screen, the coordinates c_bs0, c_bs1, c_bs2, c_bs3 of the four vertices of the second terminal device screen, the normal vector q_bs of the second terminal device screen, the coordinate p_n of the first terminal device camera, and the normal vector q_n of the plane where the first terminal device camera is located, the first terminal device determines the plane coordinates (x_in, y_in) of selection point 2 at time n. The first terminal device detects a first operation event 2 at time n. The first terminal device sends first information 2 to the second terminal device, the first information 2 including the plane coordinates (x_in, y_in) of selection point 2, the first operation event 2, and the timestamp n.
At time n+Δt, the first terminal device determines that the position information of the first terminal device is the coordinate p_n+Δt of the first terminal device camera in the world coordinate system ω_c, and that the posture information of the first terminal device is the normal vector q_n+Δt of the plane where the first terminal device camera is located in the world coordinate system ω_c. Based on the center position coordinate p_bs of the second terminal device screen, the coordinates c_bs0, c_bs1, c_bs2, c_bs3 of the four vertices of the second terminal device screen, the normal vector q_bs of the second terminal device screen, the coordinate p_n+Δt of the first terminal device camera, and the normal vector q_n+Δt of the plane where the first terminal device camera is located, the first terminal device determines the plane coordinates (x_in+Δt, y_in+Δt) of selection point 3 at time n+Δt. Assuming that the first terminal device detects a first operation event 3 at time n+Δt, the first terminal device sends first information 3 to the second terminal device, the first information 3 including the plane coordinates (x_in+Δt, y_in+Δt) of selection point 3, the first operation event 3, and the timestamp n+Δt.
Therefore, the user can change the position of the selection point on the plane where the screen of the second terminal device is located by holding the first terminal device for movement.
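A hedged sketch of the per-period work of the first terminal device described in this example follows (slam_pose, detect_first_operation_event, and send_first_information are placeholder callables, and selection_point_plane_coords refers to the helper sketched after step 503):

```python
import time

DELTA_T = 0.02   # period Δt in seconds; assumed to lie between 1 ms and 50 ms

def first_device_loop(slam_pose, detect_first_operation_event, send_first_information,
                      p_bs, q_bs, c_bs0, c_bs1, c_bs2):
    """Each period: estimate the first device's own pose, compute the selection
    point on the plane of the second device's screen, and send the first
    information (point, first operation event, timestamp) to the second device."""
    while True:
        timestamp = time.time()
        p_n, q_n = slam_pose()                     # position and attitude of the first device
        point = selection_point_plane_coords(p_n, q_n, p_bs, q_bs, c_bs0, c_bs1, c_bs2)
        event = detect_first_operation_event()     # 'press', 'lift' or None
        if point is not None:
            send_first_information({"point": point, "event": event, "timestamp": timestamp})
        time.sleep(DELTA_T)
```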
Three different processing situations after the second terminal device receives the first information 1 to the first information 3 are described below:
Assume that the coordinates (x_in-Δt, y_in-Δt) of selection point 1, the coordinates (x_in, y_in) of selection point 2, and the coordinates (x_in+Δt, y_in+Δt) of selection point 3 are different from one another. If the first operation event 1, the first operation event 2, and the first operation event 3 are a press event, a press event, and a lift event in sequence, the second terminal device determines that the user performed a sliding touch operation at selection point 1 to selection point 3.
In one possible implementation, as shown in fig. 10, assume that, before the user performs the sliding touch operation at selection point 1 to selection point 3, the input box displayed by the second terminal device is already in the selected state, that is, the second terminal device is waiting for text information to be input in the input box. The second terminal device determines the trajectory of the sliding touch operation performed at selection point 1 to selection point 3 based on the coordinates (x_in-Δt, y_in-Δt) of selection point 1, the coordinates (x_in, y_in) of selection point 2, and the coordinates (x_in+Δt, y_in+Δt) of selection point 3, and determines the text information input by the user based on the trajectory of the sliding touch operation. Assuming that the trajectory of the sliding touch operation is a horizontal stroke, the second terminal device may determine that the text information input by the user is "one". The second terminal device enters the text information "one" into the text input box. Therefore, the user can input text information in the input box simply by pressing the first terminal device and moving the handheld first terminal device while keeping it pressed, which makes it convenient to input text information in the input box of the second terminal device. Optionally, as shown in fig. 10, the second terminal device may further display the trajectory of the sliding touch operation on the display interface, which helps the user determine whether the movement of the first terminal device is accurate.
In another possible implementation, the second terminal device may also determine the text information not based on the trajectory of the sliding touch operation. For example, if the target operation executed by the second terminal device is selecting the input box, the second terminal device may send a text information input instruction to the first terminal device; after receiving the text information input instruction sent by the second terminal device, the first terminal device displays an information input keyboard, and the user may enter text information through the information input keyboard. After acquiring the text information input by the user through the information input keyboard, the first terminal device sends the text information to the second terminal device. After the second terminal device receives the text information sent by the first terminal device, it inputs the text information in the input box. Based on this possible implementation, the convenience of inputting text information at the second terminal device is improved for the user.
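A minimal sketch of this exchange is shown below (the message types and handler names are invented for illustration; they are not an API from the embodiment):

```python
# On the second terminal device: the target operation "select input box" was performed.
def on_input_box_selected(send_to_first_device):
    send_to_first_device({"type": "text_input_instruction"})

# On the first terminal device: react to the instruction by showing an input keyboard.
def on_message_from_second_device(message, show_input_keyboard, send_to_second_device):
    if message.get("type") == "text_input_instruction":
        text = show_input_keyboard()                       # user types on the keyboard
        send_to_second_device({"type": "text_information", "content": text})

# On the second terminal device: enter the received text into the selected input box.
def on_message_from_first_device(message, input_box):
    if message.get("type") == "text_information":
        input_box.enter_text(message["content"])           # hypothetical input-box object
```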
In another possible implementation, after determining that the user performed the sliding touch operation at selection point 1 to selection point 3, the second terminal device may identify the sliding direction of the sliding touch operation based on the trajectory of the sliding touch operation, and then perform different operations according to different sliding directions. For example, if the slide is to the right, the operation of paging down is performed, and if the slide is to the left, the operation of paging up is performed.
Assume that the coordinates (x_in-Δt, y_in-Δt) of selection point 1, the coordinates (x_in, y_in) of selection point 2, and the coordinates (x_in+Δt, y_in+Δt) of selection point 3 are the same. If the first operation event 1, the first operation event 2, and the first operation event 3 are a press event, a press event, and a lift event in sequence, the second terminal device determines that the user performed a long-press operation at selection point 1 to selection point 3.
For example, if the positions of selection point 1 to selection point 3 are on the right side of the screen of the second terminal device, the second terminal device may fast-forward the multimedia file being played, and if the positions of selection point 1 to selection point 3 are on the left side of the screen of the second terminal device, the second terminal device may rewind the multimedia file being played.
Assume that the coordinates (x_in, y_in) of selection point 2 and the coordinates (x_in+Δt, y_in+Δt) of selection point 3 are the same. If the first operation event 1, the first operation event 2, and the first operation event 3 are no event, a press event, and a lift event in sequence, the second terminal device determines that the user performed a click operation at selection point 2 and selection point 3. For example, assuming that the positions of selection point 2 and selection point 3 are on the play button, the second terminal device may play the multimedia file.
In the method described in fig. 5, the position information of the selection point on the plane where the screen of the second terminal device is located is determined based on the position information of the first terminal device, the posture information of the first terminal device, the position information of the second terminal device, and the posture information of the second terminal device. Therefore, the user can change the position of the selection point in the screen of the second terminal device by holding and moving the first terminal device. The first terminal device can be regarded as a mouse, and a position in the screen of the second terminal device can be quickly selected by moving the first terminal device. There is no need to move a selected position in the screen of the second terminal device by pressing the up, down, left, and right keys as on a conventional remote controller. It can be seen that implementing the method described in fig. 5 is beneficial to improving the control efficiency of the terminal device.
Referring to fig. 11, fig. 11 is a schematic flowchart of another apparatus control method according to an embodiment of the present disclosure. The device control method includes steps 1101 to 1111. Wherein:
1101. the first terminal equipment starts the camera.
In one possible implementation, the first terminal device and the second terminal device may establish a connection in advance, and after establishing the connection, the first terminal device performs step 1101.
In one possible implementation, when the first terminal device detects an operation of the device control switch being turned on by a user, the first terminal device performs step 1101.
1102. The first terminal equipment displays a camera preview interface, the camera preview interface comprises prompt information, and the prompt information is used for prompting that the first terminal equipment is moved to enable the second terminal equipment to be displayed in the camera preview interface.
In the embodiment of the application, the first terminal device displays a camera preview interface after the camera is started. As shown in fig. 12, the camera preview interface includes prompt information to prompt the first terminal device to be moved to cause the second terminal device to be displayed in the camera preview interface. The second terminal device is displayed in the camera preview interface, and the first terminal device can acquire the image of the second terminal device, so that the first terminal device can determine the position information and the posture information of the second terminal device based on the image of the second terminal device.
Optionally, the first terminal device may output the prompt information on the camera preview interface only when detecting that the second terminal device is not displayed in the camera preview interface. Alternatively, the first terminal device may output the prompt information on the camera preview interface only when the second terminal device is controlled for the first time.
1103. The first terminal equipment determines the position information and the posture information of the second terminal equipment based on the image which is collected by the camera and comprises the second terminal equipment.
In the embodiment of the application, after the user moves the first terminal device to enable the second terminal device to be displayed in the camera preview interface, the first terminal device acquires the image comprising the second terminal device. The first terminal equipment determines the position information and the posture information of the second terminal equipment based on the image which is collected by the camera and comprises the second terminal equipment. The first terminal device can specifically determine the position information and the posture information of the second terminal device based on the image including the second terminal device acquired by the camera through the SLAM algorithm and the 3D object recognition algorithm.
Optionally, after the first terminal device determines the position information of the second terminal device and the posture information of the second terminal device, the camera preview interface may be closed, which is beneficial to saving electric power.
Optionally, as shown in fig. 13, after the first terminal device closes the camera preview interface, it may also display a press area. In the press area, the user may input a first operation event. Optionally, as shown in fig. 13, the first terminal device may further prompt the user to select a position in the screen of the second terminal device by pressing and moving the first terminal device.
By executing steps 1101 to 1103, the first terminal device is facilitated to successfully detect the position information and the posture information of the second terminal device.
1104. The first terminal device determines position information of the first terminal device and attitude information of the first terminal device.
The specific implementation manners of steps 1104 to 1107 are the same as those of steps 502 to 505, and are not described herein again.
1105. The first terminal device determines position information of a selection point on the basis of the position information of the first terminal device, the posture information of the first terminal device, the position information of the second terminal device and the posture information of the second terminal device, wherein the selection point is a point selected by the first terminal device on a plane where a screen of the second terminal device is located.
1106. And the first terminal equipment detects a first operation event corresponding to the selection point.
1107. The first terminal equipment sends first information to the second terminal equipment, wherein the first information comprises position information of the selection point and a first operation event corresponding to the selection point.
1108. The second terminal device determines whether the selection point is in a screen of the second terminal device based on the position information of the selection point.
In the embodiment of the application, after the second terminal device receives the first information, whether the selection point is in the screen of the second terminal device is determined based on the position information of the selection point. If the position of the selection point is not within the second terminal device screen, step 1109 is performed. If the position of the selection point is within the second terminal device screen, steps 1110 and 1111 are executed. The steps 1110 and 1111 are not performed sequentially, and the steps 1110 and 1111 may be performed first, or the steps 1111 and 1110 may be performed first, or the steps 1110 and 1111 may be performed simultaneously.
1109. And if the position of the selection point is not in the screen of the second terminal equipment, the second terminal equipment prompts the user to move the first terminal equipment to enable the position of the selection point to be in the screen of the second terminal equipment.
By executing the step 1109, when the position of the selection point is not in the screen of the second terminal device, the second terminal device can prompt the user in time, and the user experience is increased.
1110. And if the position of the selection point is in the screen of the second terminal equipment, the second terminal equipment highlights the position of the selection point in the screen of the second terminal equipment.
In the embodiment of the present application, a specific implementation manner of highlighting a position of the selection point in the screen of the second terminal device may be: displaying a semi-transparent cursor at the position of the selection point in the screen of the second terminal device, or changing the color of the position of the selection point, etc.
By performing step 1110, it is beneficial for the user to observe the location of the selection point so that the user can make a control operation on the selection point.
1111. And if the position of the selection point is in the screen of the second terminal equipment, the second terminal equipment executes the target operation based on the first information.
For example, assume the plane coordinate system n_c is as shown in fig. 14. In the plane coordinate system n_c, the vertex at the upper left corner of the screen of the second terminal device is the origin (0, 0), the coordinates of the top-right vertex are (1, 0), the coordinates of the bottom-left vertex are (0, 1), and the coordinates of the bottom-right vertex are (1, 1). The coordinates of the selection point at time n are (x_in, y_in).
If x_in ≤ 0 and y_in ≤ 0, the second terminal device prompts the user that the intersection point is not in the screen and to move the first terminal device to the lower right;
if 0 < x_in < 1 and y_in ≤ 0, the second terminal device prompts the user that the intersection point is not in the screen and to move the first terminal device downward;
if x_in ≥ 1 and y_in ≤ 0, the second terminal device prompts the user that the intersection point is not in the screen and to move the first terminal device to the lower left;
if x_in ≤ 0 and y_in ≥ 1, the second terminal device prompts the user that the intersection point is not in the screen and to move the first terminal device to the upper right;
if 0 < x_in < 1 and y_in ≥ 1, the second terminal device prompts the user that the intersection point is not in the screen and to move the first terminal device upward;
if x_in ≥ 1 and y_in ≥ 1, the second terminal device prompts the user that the intersection point is not in the screen and to move the first terminal device to the upper left;
if x_in ≤ 0 and 0 < y_in < 1, the second terminal device prompts the user that the intersection point is not in the screen and to move the first terminal device to the right;
if x_in ≥ 1 and 0 < y_in < 1, the second terminal device prompts the user that the intersection point is not in the screen and to move the first terminal device to the left;
if 0 < x_in < 1 and 0 < y_in < 1, the second terminal device displays a semi-transparent cursor at the corresponding position in the screen and performs the target operation based on the coordinates (x_in, y_in) of the selection point and the first operation event.
The principle of determining whether the selection point is in the screen of the second terminal device when any one of the other three vertices is taken as the origin is similar, and details are not repeated here.
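As an illustrative sketch only, the boundary check and prompt selection described in the example above could be written as follows; the function name, return values, and prompt wording are assumptions made for illustration and are not taken from this application.

def handle_selection_point(x_in, y_in, screen_width_px, screen_height_px):
    """Map the normalized selection-point coordinates to either a movement
    prompt (point outside the screen) or a cursor position in pixels."""
    # Classify each axis: -1 = before the screen edge, 0 = inside, 1 = past it.
    col = -1 if x_in <= 0 else (1 if x_in >= 1 else 0)
    row = -1 if y_in <= 0 else (1 if y_in >= 1 else 0)

    if col == 0 and row == 0:
        # 0 < x_in < 1 and 0 < y_in < 1: the selection point is within the screen,
        # so display a semi-transparent cursor at the corresponding pixel position.
        return ("show_cursor", int(x_in * screen_width_px), int(y_in * screen_height_px))

    # Otherwise prompt the user which way to move the first terminal device
    # (origin at the top-left vertex, x grows rightward, y grows downward).
    horizontal = {-1: "to the right", 0: "", 1: "to the left"}[col]
    vertical = {-1: "downward", 0: "", 1: "upward"}[row]
    direction = " and ".join(part for part in (vertical, horizontal) if part)
    return ("prompt", "The selection point is not in the screen; move the first terminal device " + direction)

For instance, handle_selection_point(1.2, -0.1, 1920, 1080) corresponds to the case x_in ≥ 1 and y_in ≤ 0 and returns a prompt asking the user to move the first terminal device downward and to the left, matching the example above.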
In one possible implementation, the first terminal device may further obtain a picture of a target area of the screen of the second terminal device, where the target area includes the function button where the selection point is located, and the first terminal device displays the picture of the target area.
Optionally, the target area may be an area centered on the position of the selection point. Optionally, the picture of the target area may be sent to the first terminal device by the second terminal device, or the target area may be photographed by the first terminal device. The first terminal device may acquire and display the picture of the target area of the screen of the second terminal device when the distance between the first terminal device and the second terminal device is greater than a preset distance. Alternatively, the user may manually trigger the first terminal device to acquire the picture of the target area of the screen of the second terminal device; for example, the user may click a function button in the screen of the first terminal device, and when the first terminal device detects that the user has clicked the function button, the first terminal device acquires the picture of the target area of the screen of the second terminal device. Alternatively, the second terminal device may actively send the picture of the target area to the first terminal device when it detects that the size of the function button where the selection point is located is smaller than a preset size. After acquiring the picture of the target area, the first terminal device displays it. By displaying the picture of the target area, the first terminal device enables the user to see more clearly which function button of the second terminal device is selected by the selection point.
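As an illustrative sketch, the three triggers described above could be expressed as two hypothetical helper functions; the preset values, parameter names, and function names are assumptions for illustration rather than values from this application.

# Assumed threshold values, for illustration only.
PRESET_DISTANCE_M = 2.0        # preset distance between the two devices, in meters
PRESET_BUTTON_SIZE_PX = 48     # preset minimum function-button size, in pixels

def should_request_target_area(distance_m, user_clicked_zoom_button):
    """Trigger on the first terminal device: request the picture of the target
    area when the devices are far apart or when the user asks for it manually."""
    return distance_m > PRESET_DISTANCE_M or user_clicked_zoom_button

def should_push_target_area(button_width_px, button_height_px):
    """Trigger on the second terminal device: push the picture of the target
    area when the function button under the selection point is too small."""
    return min(button_width_px, button_height_px) < PRESET_BUTTON_SIZE_PX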
Embodiments of the present application also provide a computer-readable storage medium having stored therein instructions, which when executed on a computer or processor, cause the computer or processor to perform one or more steps of any one of the methods described above.
The embodiment of the application also provides a computer program product containing instructions. The computer program product, when run on a computer or processor, causes the computer or processor to perform one or more steps of any of the methods described above.
In the above embodiments, the implementation may be realized wholly or partially by software, hardware, firmware, or any combination thereof. When software is used, the implementation may take the form, in whole or in part, of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium, or may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center in a wired manner (for example, coaxial cable, optical fiber, or digital subscriber line) or a wireless manner (for example, infrared, radio, or microwave). The computer-readable storage medium may be any usable medium accessible by a computer, or a data storage device, such as a server or a data center, integrating one or more usable media. The usable medium may be a magnetic medium (for example, a floppy disk, a hard disk, or a magnetic tape), an optical medium (for example, a DVD), a semiconductor medium (for example, a solid state disk (SSD)), or the like.
A person of ordinary skill in the art will appreciate that all or part of the processes of the methods in the above embodiments may be implemented by a computer program instructing related hardware. The program may be stored in a computer-readable storage medium, and when the program is executed, the processes of the above method embodiments may be included. The foregoing storage medium includes any medium capable of storing program code, such as a ROM, a RAM, a magnetic disk, or an optical disc.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
Finally, it should be noted that: the above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present application.

Claims (22)

1. A device control method, applied to a first terminal device, the method comprising:
determining position information of a second terminal device and attitude information of the second terminal device;
determining position information of the first terminal device and attitude information of the first terminal device;
determining position information of a selection point based on the position information of the first terminal device, the posture information of the first terminal device, the position information of the second terminal device and the posture information of the second terminal device, wherein the selection point is a point selected by the first terminal device on a plane where a screen of the second terminal device is located;
detecting a first operation event corresponding to the selection point;
sending first information to the second terminal device, wherein the first information comprises the position information of the selection point and a first operation event corresponding to the selection point;
the position information of the first terminal equipment comprises the position information of a camera of the first terminal equipment in a world coordinate system, and the attitude information of the first terminal equipment comprises a normal vector of a plane where the camera of the first terminal equipment is located in the world coordinate system; the position information of the second terminal device comprises central position information of a screen of the second terminal device and/or position information of one or more vertexes of the screen of the second terminal device in a world coordinate system, and the posture information of the second terminal device comprises a normal vector of the screen of the second terminal device in the world coordinate system.
2. The method of claim 1, wherein the first operation event is a press-down event, a lift-up event, or no event, and wherein the first information further comprises a timestamp corresponding to the first operation event.
3. The method according to claim 1 or 2, characterized in that the method further comprises:
starting a camera;
displaying a camera preview interface, wherein the camera preview interface comprises prompt information, and the prompt information is used for prompting that the first terminal device is moved to enable the second terminal device to be displayed in the camera preview interface;
the determining the position information of the second terminal device and the posture information of the second terminal device includes:
and determining the position information and the posture information of the second terminal equipment based on the image which is acquired by the camera and comprises the second terminal equipment.
4. The method according to claim 1 or 2, characterized in that the method further comprises:
acquiring a picture of a target area of a screen of the second terminal equipment, wherein the picture of the target area comprises a function button where the selection point is located;
and displaying the picture of the target area.
5. A device control method, applied to a second terminal device, the method comprising:
receiving first information sent by a first terminal device, wherein the first information comprises position information of a selection point and a first operation event corresponding to the selection point, and the selection point is a point selected by the first terminal device on a plane where a screen of the second terminal device is located;
determining whether the selection point is in a screen of the second terminal device based on the position information of the selection point;
if the position of the selection point is not in the screen of the second terminal device, prompting a user to move the first terminal device to enable the position of the selection point to be in the screen of the second terminal device;
and if the position of the selection point is in the screen of the second terminal equipment, executing target operation based on the first information.
6. The method of claim 5, wherein the first operation event is a press-down event, a lift-up event, or no event, and wherein the first information further comprises a timestamp corresponding to the first operation event.
7. The method of claim 6, wherein the receiving the first information sent by the first terminal device comprises:
receiving, with a preset time as a period, the first information sent by the first terminal device;
the performing a target operation based on the first information includes:
determining a second operation event input by a user at a plurality of selection points based on a plurality of first operation events received in a plurality of continuous periods and timestamps corresponding to the plurality of first operation events;
performing a target operation based on the second operation event and the location information of the plurality of selection points.
8. The method of claim 7, wherein the second operational event is a click event or a sliding touch event or a long press event.
9. The method according to claim 7 or 8, wherein the performing a target operation based on the second operation event and the location information of the plurality of selection points comprises:
in a case that an input box is selected, if the second operation event is a sliding touch event, determining a trajectory of the sliding touch event based on the position information of the plurality of selection points;
determining text information based on the trajectory of the sliding touch event and inputting the text information in the input box.
10. The method of claim 5, further comprising:
and if the position of the selection point is in the screen of the second terminal equipment, highlighting the position of the selection point in the screen of the second terminal equipment.
11. A first terminal device, wherein the first terminal device comprises a memory and one or more processors; the memory is coupled to the one or more processors; the memory stores computer program code, and the computer program code comprises computer instructions that, when executed by the one or more processors, cause the first terminal device to perform the following operations:
determining position information of a second terminal device and attitude information of the second terminal device;
determining position information of the first terminal device and attitude information of the first terminal device;
determining position information of a selection point based on the position information of the first terminal device, the posture information of the first terminal device, the position information of the second terminal device and the posture information of the second terminal device, wherein the selection point is a point selected by the first terminal device on a plane where a screen of the second terminal device is located;
detecting a first operation event corresponding to the selection point;
sending first information to the second terminal device, wherein the first information comprises the position information of the selection point and a first operation event corresponding to the selection point;
the position information of the first terminal equipment comprises the position information of a camera of the first terminal equipment in a world coordinate system, and the attitude information of the first terminal equipment comprises a normal vector of a plane where the camera of the first terminal equipment is located in the world coordinate system; the position information of the second terminal device comprises central position information of a screen of the second terminal device and/or position information of one or more vertexes of the screen of the second terminal device in a world coordinate system, and the posture information of the second terminal device comprises a normal vector of the screen of the second terminal device in the world coordinate system.
12. The first terminal device according to claim 11, wherein the first operation event is a press-down event, a lift-up event, or no event, and the first information further includes a timestamp corresponding to the first operation event.
13. The first terminal device of claim 11 or 12, wherein the first terminal device further comprises a camera and a display screen, and wherein the computer instructions, when executed by the one or more processors, cause the first terminal device to further perform the following:
starting the camera;
displaying a camera preview interface through the display screen, wherein the camera preview interface comprises prompt information, and the prompt information is used for prompting the first terminal device to move so that the second terminal device is displayed in the camera preview interface;
the determining, by the first terminal device, the position information of the second terminal device and the posture information of the second terminal device includes:
and determining the position information and the posture information of the second terminal equipment based on the image which is acquired by the camera and comprises the second terminal equipment.
14. The first terminal device of claim 11 or 12, wherein the first terminal device further comprises a display screen, and wherein the computer instructions, when executed by the one or more processors, cause the first terminal device to further perform the following operations:
acquiring a picture of a target area of a screen of the second terminal equipment, wherein the picture of the target area comprises a function button where the selection point is located;
and displaying the picture of the target area through the display screen.
15. A second terminal device, wherein the second terminal device comprises a display screen, a memory, and one or more processors; the memory is coupled to the one or more processors; the memory stores computer program code, and the computer program code comprises computer instructions that, when executed by the one or more processors, cause the second terminal device to perform the following operations:
receiving first information sent by a first terminal device, wherein the first information comprises position information of a selection point and a first operation event corresponding to the selection point, and the selection point is a point selected by the first terminal device on a plane where a screen of the second terminal device is located;
determining whether the selection point is in a screen of the second terminal device based on the position information of the selection point;
if the position of the selection point is not in the screen of the second terminal device, prompting a user to move the first terminal device to enable the position of the selection point to be in the screen of the second terminal device;
and if the position of the selection point is in the screen of the second terminal equipment, executing target operation based on the first information.
16. The second terminal device according to claim 15, wherein the first operation event is a press-down event, a lift-up event, or no event, and the first information further includes a timestamp corresponding to the first operation event.
17. The second terminal device according to claim 16, wherein the receiving the first information sent by the first terminal device comprises:
receiving, with a preset time as a period, the first information sent by the first terminal device;
the performing a target operation based on the first information comprises:
determining a second operation event input by a user at a plurality of selection points based on a plurality of first operation events received in a plurality of continuous periods and timestamps corresponding to the plurality of first operation events;
performing a target operation based on the second operation event and the location information of the plurality of selection points.
18. The second terminal device according to claim 17, wherein the second operation event is a click event, a slide touch event, or a long press event.
19. The second terminal device according to claim 17 or 18, wherein the performing a target operation based on the second operation event and the location information of the plurality of selection points comprises:
in a case that an input box is selected, if the second operation event is a sliding touch event, determining a trajectory of the sliding touch event based on the position information of the plurality of selection points;
determining text information based on the trajectory of the sliding touch event and inputting the text information in the input box.
20. The second terminal device of claim 15, wherein the computer instructions, when executed by the one or more processors, cause the second terminal device to further perform the following:
and if the position of the selection point is in the screen of the second terminal equipment, highlighting the position of the selection point in the screen of the second terminal equipment.
21. A computer-readable storage medium comprising instructions that, when executed on a first terminal device, cause the first terminal device to perform the method of any one of claims 1-4.
22. A computer-readable storage medium comprising instructions that, when executed on a second terminal device, cause the second terminal device to perform the method of any of claims 5-10.
CN202011103205.4A 2020-10-15 2020-10-15 Device control method, first terminal device, second terminal device and computer readable storage medium Active CN112383664B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011103205.4A CN112383664B (en) 2020-10-15 2020-10-15 Device control method, first terminal device, second terminal device and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011103205.4A CN112383664B (en) 2020-10-15 2020-10-15 Device control method, first terminal device, second terminal device and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN112383664A CN112383664A (en) 2021-02-19
CN112383664B true CN112383664B (en) 2021-11-19

Family

ID=74581555

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011103205.4A Active CN112383664B (en) 2020-10-15 2020-10-15 Device control method, first terminal device, second terminal device and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN112383664B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112965773A (en) * 2021-03-03 2021-06-15 闪耀现实(无锡)科技有限公司 Method, apparatus, device and storage medium for information display
CN114115691B (en) * 2021-10-27 2023-07-07 荣耀终端有限公司 Electronic equipment and interaction method and medium thereof
CN115016666B (en) * 2021-11-18 2023-08-25 荣耀终端有限公司 Touch processing method, terminal equipment and storage medium
CN117435087A (en) * 2022-07-21 2024-01-23 荣耀终端有限公司 Two-dimensional code identification method, electronic equipment and storage medium

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160098095A1 (en) * 2004-01-30 2016-04-07 Electronic Scripting Products, Inc. Deriving Input from Six Degrees of Freedom Interfaces
CN103365572B (en) * 2012-03-26 2018-07-03 联想(北京)有限公司 The remote control method and electronic equipment of a kind of electronic equipment
CN103517111A (en) * 2012-06-29 2014-01-15 上海广电电子科技有限公司 Television remote control method based on touch remote control device and television system
CN103870149B (en) * 2012-12-18 2017-08-29 联想(北京)有限公司 Data processing method and electronic equipment
CN104035562B (en) * 2014-06-18 2017-03-22 广州市久邦数码科技有限公司 Method and system for mapping three-dimensional desktop touch events
CN106331809A (en) * 2016-08-31 2017-01-11 北京酷云互动科技有限公司 Television control method and television control system
CN106341606A (en) * 2016-09-29 2017-01-18 努比亚技术有限公司 Device control method and mobile terminal
CN106775059A (en) * 2016-11-22 2017-05-31 努比亚技术有限公司 Dual-screen electronic device visualized operation real-time display method and equipment
CN108958569B (en) * 2017-05-19 2023-04-07 腾讯科技(深圳)有限公司 Control method, device, system and terminal of smart television and smart television
GB201807749D0 (en) * 2018-05-13 2018-06-27 Sinden Andrew James A control device for detection
CN113220139B (en) * 2019-07-30 2022-08-02 荣耀终端有限公司 Method for controlling display of large-screen equipment, mobile terminal and first system

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101379455A (en) * 2006-02-03 2009-03-04 松下电器产业株式会社 Input device and its method
CN106375811A (en) * 2016-08-31 2017-02-01 天脉聚源(北京)传媒科技有限公司 Program play control method and device
US10567567B2 (en) * 2017-08-10 2020-02-18 Lg Electronics Inc. Electronic device and method for controlling of the same

Also Published As

Publication number Publication date
CN112383664A (en) 2021-02-19

Similar Documents

Publication Publication Date Title
CN113794800B (en) Voice control method and electronic equipment
CN112130742B (en) Full screen display method and device of mobile terminal
CN112231025B (en) UI component display method and electronic equipment
CN113645351B (en) Application interface interaction method, electronic device and computer-readable storage medium
CN112383664B (en) Device control method, first terminal device, second terminal device and computer readable storage medium
CN111669459B (en) Keyboard display method, electronic device and computer readable storage medium
JP7173670B2 (en) VOICE CONTROL COMMAND GENERATION METHOD AND TERMINAL
EP4195707A1 (en) Function switching entry determining method and electronic device
CN110559645B (en) Application operation method and electronic equipment
CN114185503B (en) Multi-screen interaction system, method, device and medium
CN112068907A (en) Interface display method and electronic equipment
CN113746961A (en) Display control method, electronic device, and computer-readable storage medium
WO2022007707A1 (en) Home device control method, terminal device, and computer-readable storage medium
WO2021238740A1 (en) Screen capture method and electronic device
CN115016697A (en) Screen projection method, computer device, readable storage medium, and program product
CN111249728B (en) Image processing method, device and storage medium
CN114064160A (en) Application icon layout method and related device
CN114090140A (en) Interaction method between devices based on pointing operation and electronic device
CN115032640B (en) Gesture recognition method and terminal equipment
CN113438366A (en) Information notification interaction method, electronic device and storage medium
CN113805825B (en) Method for data communication between devices, device and readable storage medium
CN115268737A (en) Information processing method and device
CN114911400A (en) Method for sharing pictures and electronic equipment
CN114173184A (en) Screen projection method and electronic equipment
CN113448658A (en) Screen capture processing method, graphical user interface and terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant