CN114513689B - Remote control method, electronic equipment and system

Remote control method, electronic equipment and system

Info

Publication number
CN114513689B
CN114513689B
Authority
CN
China
Prior art keywords
electronic device
touch
display interface
electronic equipment
user interface
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011167645.6A
Other languages
Chinese (zh)
Other versions
CN114513689A
Inventor
王姚
钱凯
庄志山
朱爽
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN202011167645.6A
Priority to PCT/CN2021/116179 (published as WO2022088974A1)
Publication of CN114513689A
Application granted
Publication of CN114513689B
Status: Active

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41 Structure of client; Structure of client peripherals
    • H04N21/422 Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
    • H04N21/42204 User interfaces specially adapted for controlling a client device through a remote control device; Remote control devices therefor
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431 Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/47 End-user applications
    • H04N21/485 End-user interface for client configuration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/04817 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, using icons
    • G06F3/0482 Interaction with lists of selectable items, e.g. menus
    • G06F3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/0487 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F3/04883 Interaction techniques based on graphical user interfaces [GUI] using a touch-screen or digitiser, for inputting data by handwriting, e.g. gesture or text

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • User Interface Of Digital Computer (AREA)
  • Position Input By Displaying (AREA)

Abstract

Embodiments of this application provide a remote control method, an electronic device, and a system. A first electronic device includes a camera and has established a wireless connection with a second electronic device. The first electronic device receives a first operation; in response to the first operation, the first electronic device starts the camera and captures a first image with it; the first electronic device displays a first user interface that includes the first image; the first electronic device receives a touch operation on the first user interface and, in response, generates a virtual touch event and sends it to the second electronic device; after receiving the virtual touch event, the second electronic device responds to it by performing a third operation.

Description

Remote control method, electronic equipment and system
Technical Field
The present application relates to the field of terminal technologies, and in particular, to a remote control method, an electronic device, and a system.
Background
With the development of smart televisions, the functions they support keep increasing and the information on their user interfaces keeps getting more complex, so the demands placed on their remote control keep rising. Traditional remote control, which relies on a simple key layout, can no longer adequately support the remote control functions that current smart televisions require, so how to better remotely control a smart television is a problem worth studying.
Because touch technology is now widely used, one current research direction combines touch technology with remote control technology to control smart televisions; compared with traditional remote control, this approach can satisfy more kinds of smart-television remote control requirements. In the related art, a touch panel is added to a traditional remote controller so that the smart television is controlled through a combination of keys and the touch panel; however, this scheme still cannot cover the varied control scenarios of current smart-television services. Alternatively, an application for remotely controlling the smart television is developed on a smartphone: the display content of the smart television is encoded and transmitted to the smartphone, which decodes it and then remotely controls the smart television; because of the large amount of data transmitted, this scheme suffers from severe latency. To address these problems of the related art, this application provides a remote control method that can satisfy more control scenarios of current smart-television services while reducing latency.
Disclosure of Invention
This application provides a remote control method, an electronic device, and a system that satisfy the varied control scenarios of current smart-television services with low latency, thereby improving the accuracy with which a user remotely controls a smart television and improving the user's touch experience.
In a first aspect, an embodiment of this application provides a remote control method applicable to a first electronic device, where the first electronic device includes a camera and has established a wireless connection with a second electronic device. The first electronic device receives a first operation; in response to the first operation, the first electronic device starts the camera; the first electronic device captures a first image with the camera, where the first image includes a display interface area of the second electronic device and the content of that area is the current display interface of the second electronic device; the first electronic device displays a first user interface that includes the first image; the first electronic device receives a touch operation on the first user interface, obtains the touch point coordinates corresponding to that touch operation, and generates a virtual touch event based on the touch point coordinates, where the virtual touch event includes coordinates relative to the current display interface of the second electronic device; and the first electronic device sends the virtual touch event to the second electronic device, so that after receiving the virtual touch event the second electronic device responds to it by performing the operation corresponding to those relative coordinates in its current display interface.
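As a concrete illustration of the payload described above, the following Kotlin sketch models a virtual touch event that carries only coordinates relative to the second device's display interface; the type and field names are assumptions made for this sketch and are not defined by this application.

```kotlin
// Illustrative only: the type and field names are assumptions, not defined by this application.
enum class TouchAction { DOWN, MOVE, UP }

// A virtual touch event as described above: it carries only coordinates relative to the
// second device's current display interface, so the payload stays small compared with
// streaming encoded screen content.
data class VirtualTouchEvent(
    val action: TouchAction,
    val relativeX: Float,               // 0.0..1.0 across the second device's display width
    val relativeY: Float,               // 0.0..1.0 across the second device's display height
    val timestampMillis: Long = System.currentTimeMillis()
)
```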
In this method, the first electronic device captures an image with its camera and displays a user interface built from that image; the user can then perform touch operations on the first electronic device, which generates a virtual touch event from the user's touch operation and sends it to the second electronic device to remotely control it. In this way, the user's touch operations on the first electronic device can satisfy the varied remote control scenarios of the second electronic device. In addition, the data transmitted between the first electronic device and the second electronic device is only the virtual touch event, which is small, so the latency of the data interaction between the two devices is reduced and the user's touch experience is improved.
In a possible implementation, the first electronic device determines that the first operation has been received when an installed application icon used for touch-based remote control is tapped; or when a remote control entry in the notification-bar drop-down interface is tapped; or after the first electronic device receives a voice operation or a gesture operation. The first electronic device therefore offers the user several entry points into the scenario in which virtual touch events are generated, which makes the feature easier to reach and improves the user experience.
In a possible implementation, before receiving a touch operation on the first user interface, the first electronic device identifies the display interface area of the second electronic device: it determines that the area inside the screen frame of the second electronic device in the first user interface is the display interface area of the second electronic device, and the content shown in that area is the current display interface of the second electronic device. Because the captured picture covers a wider range than the second electronic device's screen, touch operations outside the display interface area of the second electronic device are not used to generate virtual touch events; identifying the screen frame of the second electronic device in the first electronic device's display interface therefore improves the efficiency of generating virtual touch events and shortens the processing time of the remote control procedure.
In a possible implementation, the first electronic device determines the area inside the screen frame of the second electronic device as follows: the first electronic device sends an anchor generation instruction to the second electronic device, so that after receiving it the second electronic device generates anchor points on its display interface in response to the instruction; the first electronic device then determines the area inside the screen frame of the second electronic device from the information about those anchor points captured in the first image. By detecting the target anchor points contained in its display interface, the first electronic device can determine the display area of the second electronic device, which improves the efficiency and accuracy of virtual touch event generation and shortens the processing time of the remote control procedure.
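A minimal sketch of the anchor-based step, assuming the anchor marks have already been located in the camera frame by some detection routine and that the screen is viewed roughly head-on (a real implementation would have to correct for perspective):

```kotlin
// Illustrative only: anchor detection in the camera frame is assumed to be done elsewhere
// (e.g. by template matching); a roughly head-on view is assumed, so an axis-aligned
// bounding box is used instead of a perspective-corrected quadrilateral.
data class PointF(val x: Float, val y: Float)
data class Region(val left: Float, val top: Float, val right: Float, val bottom: Float)

// Derives the second device's display interface area from the anchor points detected in the first image.
fun regionFromAnchors(anchors: List<PointF>): Region? {
    if (anchors.size < 4) return null   // expect at least the four corner anchors
    return Region(
        left = anchors.minOf { it.x },
        top = anchors.minOf { it.y },
        right = anchors.maxOf { it.x },
        bottom = anchors.maxOf { it.y }
    )
}
```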
In a possible implementation, after identifying the display interface area of the second electronic device, the first electronic device judges whether the size of that area is smaller than a first threshold; if it is, the first electronic device adjusts the focal length of the camera to a first focal length. To help the user perform more accurate touch operations on the first electronic device's display interface, the focal length of the camera can thus be adjusted intelligently according to how large the second electronic device's display interface appears in the first electronic device's display interface, which makes touch operations on the first electronic device easier and produces more accurate virtual touch events.
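One way this adjustment could look, with the area threshold and zoom step chosen purely for illustration:

```kotlin
// Illustrative only: the area threshold ("first threshold" analogue) and the zoom step
// are assumed values, not taken from this application.
fun nextZoomRatio(
    regionArea: Float,      // area of the detected display interface region, in preview pixels
    previewArea: Float,     // total area of the camera preview, in pixels
    currentZoom: Float,
    minAreaRatio: Float = 0.25f,
    zoomStep: Float = 0.5f,
    maxZoom: Float = 10f
): Float {
    val ratio = if (previewArea > 0f) regionArea / previewArea else 0f
    // Zoom in one step when the second device's screen appears too small to touch accurately.
    return if (ratio < minAreaRatio) minOf(currentZoom + zoomStep, maxZoom) else currentZoom
}
```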
In a possible implementation, after the first electronic device receives the touch operation on the first user interface and before the virtual touch event is generated, the first electronic device obtains at least one touch point coordinate; the first electronic device determines whether the at least one touch point coordinate is within the display interface area of the second electronic device; and the first electronic device generates the virtual touch event only when it determines that the at least one touch point coordinate is within that area. This implementation describes a specific way to generate the virtual event accurately: the first electronic device generates the virtual touch event from the obtained touch point coordinates, which guarantees the accuracy of the generated event.
In a possible implementation, generating the virtual touch event in response to the touch operation on the first user interface is implemented as follows: the first electronic device converts the obtained touch point coordinates corresponding to the touch operation into coordinates relative to the display interface area of the second electronic device, and generates the virtual touch event from those relative coordinates. After obtaining the touch point coordinates of the user operation, the first electronic device converts them into coordinates belonging to the second electronic device according to how the second electronic device appears in the two-dimensional projection on the first electronic device's interface, which guarantees the accuracy of the virtual touch event generated by the first electronic device.
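The two steps above (checking that the touch falls inside the detected screen region, then normalising it into the second device's coordinate space) could look like the following sketch; a plain rectangle mapping is assumed, and all names are hypothetical:

```kotlin
// Illustrative only: a plain rectangle mapping is assumed; a real implementation would
// correct for perspective when the screen is viewed at an angle.
data class TouchPoint(val x: Float, val y: Float)
data class ScreenRegion(val left: Float, val top: Float, val right: Float, val bottom: Float) {
    val width: Float get() = right - left
    val height: Float get() = bottom - top
    fun contains(p: TouchPoint): Boolean = p.x in left..right && p.y in top..bottom
}

// Converts a touch point on the first device's user interface into coordinates relative to
// the second device's current display interface. Returns null when the touch falls outside
// the detected display interface area, in which case no virtual touch event is generated.
fun toRelativeCoordinates(touch: TouchPoint, region: ScreenRegion): Pair<Float, Float>? {
    if (!region.contains(touch) || region.width <= 0f || region.height <= 0f) return null
    val relX = (touch.x - region.left) / region.width
    val relY = (touch.y - region.top) / region.height
    return relX to relY
}
```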
In a possible implementation, the touch operation includes a tap operation and/or a slide operation, and the touch point coordinates corresponding to the touch operation on the first user interface include a single coordinate and/or multiple coordinates. Because the user's operation may be a tap or a slide, the user can perform different operations on the first electronic device, which diversifies the generated virtual touch events, covers more remote control scenarios of the second electronic device, and improves the user experience.
In a possible implementation, the first electronic device is a mobile phone, the second electronic device is a smart television, the camera is the rear camera of the mobile phone, and the current display interface of the second electronic device is a menu interface of the smart television; the first user interface is the display interface of the first electronic device after it enters a remote control mode; the first image is an image that includes the menu interface of the smart television, where the menu interface contains a plurality of controls corresponding to different functions; the display interface area of the second electronic device is the image area of the smart television's menu interface captured by the mobile phone; the touch operation on the first user interface is a tap on one of the controls in the image of the smart television's menu interface; and the operation the second electronic device performs, corresponding to the relative coordinates in its current display interface, is executing the function of that control. This implementation therefore describes one possible scenario for the first and second electronic devices: remotely controlling a smart television through a mobile phone.
In a second aspect, an embodiment of this application further provides an electronic device suitable as the first electronic device, where the first electronic device includes a camera and has established a wireless connection with a second electronic device. The electronic device includes: a touch screen comprising a touch panel and a display screen; one or more processors; a memory; a plurality of applications; and one or more computer programs stored in the memory, the one or more computer programs comprising instructions that, when executed by the first electronic device, cause the first electronic device to perform the following steps: receiving a first operation; in response to the first operation, starting the camera; capturing a first image with the camera, where the first image includes a display interface area of the second electronic device and the content of that area is the current display interface of the second electronic device; displaying a first user interface that includes the first image; receiving a touch operation on the first user interface; in response to that touch operation, obtaining the corresponding touch point coordinates and generating a virtual touch event based on them, where the virtual touch event includes coordinates relative to the current display interface of the second electronic device; and sending the virtual touch event to the second electronic device, so that after receiving it the second electronic device responds by performing the operation corresponding to those relative coordinates in its current display interface.
In a possible implementation, when the instructions are executed by the first electronic device and cause it to receive a first operation, the first electronic device specifically performs: displaying a first application icon and receiving an operation on the first application icon; or receiving a first voice operation; or receiving a first gesture operation.
In a possible implementation, when the instructions are executed by the first electronic device, they further cause it to perform: before receiving a touch operation on the first user interface, determining that the area inside the screen frame of the second electronic device in the first user interface is the display interface area of the second electronic device.
In a possible implementation, when the instructions are executed by the first electronic device and cause it to determine the area inside the screen frame of the second electronic device, the first electronic device specifically performs: sending an anchor generation instruction to the second electronic device, so that after receiving it the second electronic device generates anchor points on its display interface in response to the instruction; and determining the area inside the screen frame of the second electronic device from the information about those anchor points captured in the first image.
In a possible implementation, when the instructions are executed by the first electronic device and cause it to identify the display interface area of the second electronic device, the first electronic device further performs: judging whether the size of the display interface area of the second electronic device is smaller than a first threshold; and if it is, adjusting the focal length of the camera to a first focal length.
In a possible implementation, when the instructions are executed by the first electronic device, they further cause it to perform, after receiving the touch operation on the first user interface and before generating the virtual touch event: obtaining at least one touch point coordinate; determining whether the at least one touch point coordinate is within the display interface area of the second electronic device; and generating the virtual touch event in response to determining that the at least one touch point coordinate is within that area.
In a possible implementation, when the instructions are executed by the first electronic device and cause it to generate a virtual touch event based on the touch point coordinates, the first electronic device specifically performs: converting the obtained touch point coordinates corresponding to the touch operation on the first user interface into coordinates relative to the current display interface of the second electronic device; and generating the virtual touch event from those relative coordinates.
In a possible implementation, the touch operation includes a tap operation and/or a slide operation, and the touch point coordinates corresponding to the touch operation on the first user interface include a single coordinate and/or multiple coordinates.
In a possible implementation, the first electronic device is a mobile phone, the second electronic device is a smart television, the camera is the rear camera of the mobile phone, and the current display interface of the second electronic device is a menu interface of the smart television; the first user interface is the display interface of the first electronic device after it enters a remote control mode; the first image is an image that includes the menu interface of the smart television, where the menu interface contains a plurality of controls corresponding to different functions; the display interface area of the second electronic device is the image area of the smart television's menu interface captured by the mobile phone; the touch operation on the first user interface is a tap on one of the controls in the image of the smart television's menu interface; and the operation the second electronic device performs, corresponding to the relative coordinates in its current display interface, is executing the function of that control.
It should be noted that the beneficial effects of the designs of the electronic device provided in the second aspect are the same as those of the corresponding possible implementations of the first aspect, and are not repeated here.
In a third aspect, an embodiment of this application provides a remote control system that includes a first electronic device and a second electronic device, where the first electronic device includes a camera and has established a wireless connection with the second electronic device. The first electronic device is configured to: receive a first operation; in response to the first operation, start the camera; capture a first image with the camera, where the first image includes a display interface area of the second electronic device and the content of that area is the current display interface of the second electronic device; display a first user interface that includes the first image; receive a touch operation on the first user interface; in response to that touch operation, obtain the corresponding touch point coordinates and generate a virtual touch event based on them, where the virtual touch event includes coordinates relative to the current display interface of the second electronic device; and send the virtual touch event to the second electronic device. The second electronic device is configured to receive the virtual touch event and, in response to it, perform a third operation.
In a fourth aspect, embodiments of the present application also provide a remote control device comprising a module/unit for performing the method of any one of the possible implementations of the first aspect. These modules/units may be implemented by hardware, or may be implemented by hardware executing corresponding software.
In a fifth aspect, a computer readable storage medium is provided, the computer readable storage medium storing a computer program (which may also be referred to as code, or instructions) which, when run on a computer, causes the computer to perform the method of any one of the possible implementations of the first aspect.
In a sixth aspect, there is provided a computer program product comprising: a computer program (which may also be referred to as code, or instructions) which, when executed, causes a computer to perform the method of any one of the possible implementations of the first aspect.
In a seventh aspect, a graphical user interface on an electronic device is also provided, where the electronic device has a display screen, one or more memories, and one or more processors configured to execute one or more computer programs stored in the one or more memories, and the graphical user interface includes the graphical user interface displayed by the electronic device when it performs the method of any possible implementation of the first aspect of the embodiments of this application.
Drawings
Fig. 1a is a schematic diagram of an application scenario provided in an embodiment of the present application;
fig. 1b is a schematic diagram of an application scenario provided in an embodiment of the present application;
fig. 2 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of an android operating system according to an embodiment of the present application;
fig. 4a is a first application scenario diagram of a remote control method according to an embodiment of the present application;
fig. 4b is a second application scenario diagram of a remote control method according to an embodiment of the present application;
fig. 4c is a third application scenario diagram of a remote control method according to an embodiment of the present application;
fig. 5a is a schematic flow chart of a remote control method according to an embodiment of the present application;
fig. 5b is a schematic diagram of anchor point generation according to an embodiment of the present application;
fig. 6a is a first schematic diagram of coordinate conversion according to an embodiment of the present application;
fig. 6b is a second schematic diagram of coordinate conversion according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of a remote control device according to an embodiment of the present application.
Detailed Description
With the rapid development of society, mobile terminal devices such as mobile phones are becoming increasingly widespread. A mobile phone not only provides communication, but also offers strong processing capability, storage capability, photographing functions, data editing functions, and the like. It can therefore serve not only as a communication tool but also as the user's mobile database, and it can provide a mobile computing environment that applies predefined processing to received data and then outputs an instruction with a control function, such as the virtual touch event for remote control sent by the first electronic device in this application. Based on the portability and touch capability of a mobile terminal device, virtual touch events can be sent from it to an electronic device that needs to be remotely controlled, such as a smart television, which suits a wide range of possible remote control scenarios.
As described in the background, in the related art a touch panel can be added to a traditional remote controller; although this combines touch technology with remote control technology to cover more kinds of smart-television remote control requirements, the touch operation remains imprecise, for example fast-forward or rewind of a video cannot be controlled accurately, so the varied control scenarios of a smart television still cannot be satisfied. Alternatively, an application for remotely controlling the smart television is developed on a smartphone: the current display content of the smart television is encoded into image data and sent to the smartphone, which decodes the image data and displays it on its touch panel; after a touch operation is performed on the smartphone's touch panel, the display content containing the touch operation is encoded again and fed back to the smart television, thereby letting the smartphone remotely control the smart television. However, this remote control approach responds sluggishly, which makes the control inaccurate and the user experience poor.
In view of this, the present application provides a remote control method based on the principles of augmented reality and image tracking. Using the image capture and presentation capability of a first electronic device, the camera included in the first electronic device obtains a captured image that contains the display interface area of a second electronic device; the user's touch operation on the first electronic device is converted into a virtual remote control operation on the second electronic device, and the virtual touch event that the first electronic device generates from that operation is sent to the second electronic device, thereby remotely controlling the second electronic device.
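To make the order of operations concrete, the following sketch strings the steps together on the first device; it is illustrative only, with the detection and sending steps injected as functions, and every name is an assumption made for this sketch.

```kotlin
// Illustrative only: all names are assumptions. The sketch shows the order of operations
// described above, with detection and transmission supplied from outside.
data class Point(val x: Float, val y: Float)
data class Rect(val left: Float, val top: Float, val right: Float, val bottom: Float)

fun handleTouchOnFirstDevice(
    touch: Point,
    detectScreenRegion: () -> Rect?,                           // display interface area of the second device in the captured image
    sendVirtualTouchEvent: (relX: Float, relY: Float) -> Unit  // transmission over the established wireless connection
): Boolean {
    val region = detectScreenRegion() ?: return false
    val width = region.right - region.left
    val height = region.bottom - region.top
    val inside = touch.x in region.left..region.right && touch.y in region.top..region.bottom
    if (!inside || width <= 0f || height <= 0f) return false   // touches outside the screen area generate no event
    sendVirtualTouchEvent((touch.x - region.left) / width, (touch.y - region.top) / height)
    return true
}
```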
The embodiments of this application can be applied to electronic devices such as mobile phones, tablet computers, wearable devices (e.g., watches, bracelets, helmets, and headphones), vehicle-mounted devices, augmented reality (AR)/virtual reality (VR) devices, notebook computers, ultra-mobile personal computers (UMPC), netbooks, personal digital assistants (PDA), and smart home devices (e.g., smart televisions, smart projectors, smart speakers, and smart cameras). It will be appreciated that the embodiments of this application do not limit the specific type of electronic device in any way.
Applications (apps) with various functions, such as WeChat, mailbox, microblog, video, smart life, and smart remote control, may be installed in the above electronic device. The embodiments of this application focus on how an app installed on the first electronic device for sending virtual touch events generates a virtual touch event.
The technical solutions of the embodiments of the present application will be clearly and thoroughly described below with reference to the accompanying drawings. The terminology used in the description of the embodiments of the application is for the purpose of describing particular embodiments of the application only and is not intended to be limiting of the application.
It should be noted that, in the embodiments of this application, "at least one" means one or more, and "a plurality of" means two or more. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used in the description of the application is for the purpose of describing particular embodiments only and is not intended to limit the application. It should be understood that, unless otherwise indicated, "/" in this application means "or"; for example, A/B may represent A or B. "And/or" in this application merely describes an association between associated objects and indicates that three relationships may exist: for example, "A and/or B" may mean that A exists alone, A and B exist together, or B exists alone.
It should also be noted that, in the embodiments of this application, the terms "first", "second", and the like are used only to distinguish one description from another and are not to be understood as indicating or implying relative importance or order. A feature defined with "first" or "second" may explicitly or implicitly include one or more such features. In describing the embodiments of this application, words such as "exemplary" or "for example" are used to mean serving as an example, illustration, or description; any embodiment or design described as "exemplary" or "for example" should not be construed as preferred or advantageous over other embodiments or designs. Rather, such words are intended to present related concepts in a concrete fashion.
Referring to fig. 1a, an application scenario diagram of a remote control method according to an embodiment of this application is provided. As shown in fig. 1a, the application scenario may include a first electronic device 101 and a second electronic device 102. The first electronic device 101 and the second electronic device 102 may access the same local area network or different local area networks. An example in which the first electronic device 101 and the second electronic device 102 access the same local area network is that they establish wireless connections with the same wireless access point: for example, both may access the same wireless fidelity (Wi-Fi) hotspot, or both may connect to the same Bluetooth beacon via the Bluetooth protocol. As another example, the first electronic device 101 and the second electronic device 102 may trigger a communication connection through a near field communication (NFC) tag and transmit encrypted information through the Bluetooth module for identity authentication; after authentication succeeds, data is transmitted in a point-to-point (P2P) manner.
In implementation, the first electronic device 101 may act as the sending client: after generating a virtual touch event based on the user's touch operation, it sends the virtual touch event to the second electronic device 102. A first possible way for the first electronic device 101 to enter the user interface for generating virtual touch events is shown in fig. 1a: the interface displayed by the first electronic device 101 is a mobile phone home screen containing a plurality of app icons, including a smart remote control app icon used for generating virtual touch events. After the first electronic device 101 detects that the smart remote control app icon in the home screen of fig. 1a has been tapped, it jumps to the user interface for generating virtual touch events, generates a virtual touch event according to the next detected user touch operation, and sends the generated virtual touch event to the second electronic device 102 through the network or through the short-range communication connection established between the first electronic device 101 and the second electronic device 102, so that the second electronic device 102 performs the corresponding operation in response to the virtual touch event, as sketched after this paragraph. In some embodiments, the first electronic device 101 may be a portable electronic device that also includes other functions such as a personal digital assistant and/or a music player, for example a mobile phone, a tablet computer, or a wearable device with wireless communication capability (e.g., a smart watch); the second electronic device 102 may be an electronic device such as a smart television or a smart projector, which is not specifically limited in the embodiments of this application.
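A minimal sketch of the sending step over an already-established connection in the shared local network; the port number and the line-based JSON framing are assumptions made for illustration, not part of this application.

```kotlin
import java.io.PrintWriter
import java.net.Socket

// Illustrative only: the port number and the line-based JSON framing are assumptions; the
// description only requires that the small virtual-touch-event payload be sent over the
// wireless connection established between the two devices.
fun sendVirtualTouchEvent(host: String, action: String, relX: Float, relY: Float, port: Int = 48080) {
    Socket(host, port).use { socket ->
        val payload = """{"action":"$action","x":$relX,"y":$relY}"""
        // autoFlush = true so the single-line payload is pushed out immediately
        PrintWriter(socket.getOutputStream(), true).println(payload)
    }
}
```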
A second possible way for the first electronic device 101 to enter the user interface for generating virtual touch events is shown in fig. 1b: the interface displayed by the first electronic device 101 may also be the notification-bar drop-down interface of the mobile phone, which contains a remote control entry for remotely controlling a second electronic device such as a smart television or a smart projector. When the remote control entry in the notification-bar drop-down interface of fig. 1b is tapped, the first electronic device 101 detects that it has been triggered and jumps to the user interface for generating virtual touch events; the remaining principles are the same as in the first possible implementation and are not repeated here. The specific way in which the first electronic device 101 generates the virtual touch event is described below.
Exemplary embodiments of electronic devices to which the embodiments of this application may be applied include, but are not limited to, portable electronic devices running various operating systems. The portable electronic device may also be another portable electronic device, such as a laptop computer with a touch-sensitive surface, for example a touch panel.
Referring to fig. 2, an electronic device 200 may be a first electronic device 101 and/or a second electronic device 102 in an embodiment of the present application, where the embodiment of the present application uses the first electronic device 101 as the electronic device 200 as an example, and the electronic device 200 provided in the embodiment of the present application is described. It will be appreciated by those skilled in the art that the electronic device 200 shown in fig. 2 is merely an example and does not constitute a limitation of the electronic device 200, and that the electronic device 200 may have more or fewer components than shown in the figures, may combine two or more components, or may have different configurations of components. The various components shown in fig. 2 may be implemented in hardware, software, or a combination of hardware and software, including one or more signal processing and/or application specific integrated circuits.
The electronic device 200 may include a processor 210, an external memory interface 220, an internal memory 221, an antenna 1, an antenna 2, a mobile communication module 251, a wireless communication module 252, a sensor module 280, a camera 293, a display 294, and the like. Wherein the sensor module 280 may include a gyroscope sensor 280A, a touch sensor 280K (of course, the electronic device 200 may also include other sensors such as a temperature sensor, a pressure sensor, a distance sensor, a magnetic sensor, an ambient light sensor, a barometric sensor, a bone conduction sensor, etc., not shown).
The processor 210 may run the remote control method provided by the embodiments of this application, so that the varied remote control functions for controlling a smart television through the first electronic device can be satisfied while the control accuracy is guaranteed, which improves the user experience. The processor 210 may include different devices; for example, when a CPU and a GPU are integrated, the CPU and the GPU may cooperate to perform the remote control method according to an embodiment of this application, with part of the algorithm executed by the CPU and another part executed by the GPU, so as to obtain higher processing efficiency.
The display 294 may display photos, videos, web pages, or files, etc. In an embodiment of the present application, the display 294 may display a mobile phone main interface of the first electronic device 101 as shown in fig. 1a, or a notification bar drop-down interface as shown in fig. 1 b. When the processor 210 detects a touch event of a finger (or a stylus, etc.) of a user with respect to a certain application icon, a user interface of an application corresponding to the application icon is opened in response to the touch event, and the user interface of the application is displayed on the display screen 294.
The camera 293 (a front camera or a rear camera, or one camera that serves as both) is used to capture still images or video. For example, if the electronic device 200 is the first electronic device 101 shown in fig. 1a and fig. 1b, the camera of the first electronic device 101 is used to capture images containing the display interface area of the second electronic device 102.
Internal memory 221 may be used to store computer executable program code that includes instructions. The processor 210 executes various functional applications of the electronic device 200 and data processing by executing instructions stored in the internal memory 221. The internal memory 221 may also store one or more computer programs corresponding to the virtual touch event generation algorithm provided in the embodiments of the present application. When the code of the virtual touch event generation algorithm stored in the internal memory 221 is executed by the processor 210, the processor 210 may perform the generation of the virtual touch event and transmit it to the second electronic device 102 through the mobile communication module 251 or the wireless communication module 252.
Of course, the codes of the virtual touch event generation algorithm provided by the embodiment of the application can also be stored in the external memory. In this case, the processor 210 may run a code of a virtual touch event generation algorithm stored in the external memory through the external memory interface 220, and the processor 210 may run generation of the virtual touch event and transmit it to the second electronic device 102 through the mobile communication module 251 or the wireless communication module 252.
The function of the sensor module 280 is described below.
The gyro sensor 280A may be used to determine a motion gesture of the electronic device 200. In some embodiments, the angular velocity of electronic device 200 about three axes (i.e., x, y, and z axes) may be determined by gyro sensor 280A. I.e., gyro sensor 280A may be used to detect the current motion state of electronic device 200, such as shaking or being stationary. In the embodiment of the application, if the gyroscope sensor 280A detects that the electronic device 200 is in a shaking state, the electronic device 200 can timely analyze and identify the real-time image captured by the camera 293, so as to avoid the inaccurate generation of the virtual touch event caused by shaking.
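A minimal sketch of such a shake check on the gyroscope's angular-velocity reading; the threshold is an illustrative value, not one specified by this application.

```kotlin
import kotlin.math.sqrt

// Illustrative only: the threshold is an assumed value. When a shake is detected, the
// frame captured by the camera can be re-analysed before a virtual touch event is generated.
fun isShaking(angularVelocity: FloatArray, thresholdRadPerSec: Float = 0.8f): Boolean {
    require(angularVelocity.size >= 3) { "expected angular velocity around the x, y and z axes" }
    val (x, y, z) = angularVelocity
    return sqrt(x * x + y * y + z * z) > thresholdRadPerSec
}
```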
The touch sensor 280K, also referred to as a "touch panel". The touch sensor 280K may be disposed on the display screen 294, and the touch sensor 280K and the display screen 294 form a touch screen, which is also referred to as a "touch screen". The touch sensor 280K is configured to detect a touch operation acting on or near the touch sensor, for example, a user touch operation for generating a virtual touch event in an embodiment of the present application. The touch sensor may communicate the detected touch operation to the application processor to determine the touch event type. Visual output related to the touch operation may be provided through the display 294. In other embodiments, the touch sensor 280K may also be disposed on the surface of the electronic device 200 at a different location than the display 294.
Illustratively, the user taps the icon of the smart remote control on the mobile phone home screen shown in fig. 1a through the touch sensor 280K, which triggers the processor 210 to start the smart remote control application, display the user interface for generating virtual touch events through the display 294, and turn on the camera 293.
The wireless communication function of the electronic device 200 can be realized by the antenna 1, the antenna 2, the mobile communication module 251, the wireless communication module 252, a modem processor, a baseband processor, and the like. In the embodiment of the present application, the interaction of information such as virtual touch events can be implemented between the first electronic device 101 and the second electronic device 102 through the wireless communication function of the electronic device 200.
The wireless communication module 252 may provide solutions for wireless communication including wireless local area network (wireless local area networks, WLAN) (e.g., wireless fidelity (wireless fidelity, wi-Fi) network), bluetooth (BT), global navigation satellite system (global navigation satellite system, GNSS), frequency modulation (frequency modulation, FM), NFC, infrared (IR), etc., as applied on the electronic device 200.
It should be understood that in practical applications, the electronic device 200 may include more or fewer components than shown in fig. 2, and embodiments of the present application are not limited.
In order to implement the functions in the method provided by the embodiment of the present application, the electronic device 200 may include a hardware structure and/or a software module, where the functions are implemented in the form of a hardware structure, a software module, or a hardware structure plus a software module. Some of the functions described above are performed in a hardware configuration, a software module, or a combination of hardware and software modules, depending on the specific application of the solution and design constraints.
The software system of the electronic device may employ a layered architecture, an event-driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture. The embodiments of this application illustrate the software structure of the electronic device with an Android system that adopts a layered architecture. Fig. 3 shows a block diagram of the software structure of the Android system provided by an embodiment of this application. The layered architecture divides the software into several layers, each with a clear role and division of labour, and the layers communicate with each other through software interfaces. In some embodiments, the Android system is divided into five layers, from top to bottom: the application layer, the application framework layer, the Android runtime and system libraries, the hardware abstraction layer, and the kernel layer.
The application layer is the top layer of the operating system and may include a series of application packages. As shown in fig. 3, the application layer may include a native application of the operating system, which may include a User Interface (UI), a camera, a short message, a call, etc., and a third party application, which may include a map, a smart life, a smart remote control, etc. The application mentioned below may be a native application of an operating system installed when the electronic device leaves the factory, or may be a third party application downloaded from a network or acquired from other electronic devices by a user during use of the electronic device.
In some embodiments of the present application, the application layer may be used to present an editing interface, where the editing interface may be used by the first electronic device, in an App such as the smart remote control App concerned in the present application, to implement the user's operation on a virtual touch event generated for the second electronic device. The editing interface may be, for example, a control interface of the smart remote control App displayed on the touch screen of the first electronic device, for example, the user interface displayed on the first electronic device shown in (1-2) in fig. 5b. The user interface displays screen information, captured by the camera of the first electronic device, that includes the real-time display interface of the second electronic device, so that a remote control operation performed on the control interface of the first electronic device implements a virtual remote control operation on the second electronic device, and further implements corresponding operations such as manipulation or setting changes on the display interface of the second electronic device.
The application framework layer provides an application programming interface and programming framework for the application of the application layer. The application framework layer may also include some predefined functions. As shown in FIG. 3, the application framework layer may include a window manager, a content provider, a view system, a telephony manager, a resource manager, a notification manager, and the like. In some embodiments of the present application, the application framework layer is mainly responsible for invoking a service interface in communication with the hardware abstraction layer, so as to transfer a virtual touch event generation request to the hardware abstraction layer, where the virtual touch event generation request includes a predefined program of a virtual touch event generation service, and is used for implementing generation of various virtual touch events required by the second electronic device in the present application; and is also responsible for managing the user name and password of login authentication, and the like. For example, the virtual touch event generation service may include various modules required for managing the generation of virtual touch events involved in the embodiments of the present application. For example, the virtual touch event generation service includes a target detection module, a coordinate conversion module, a WI-FI service, and the like.
The target detection module is used for detecting, in the control interface of the smart remote control App opened on the first electronic device, the display interface area of the remotely controlled second electronic device, so as to achieve more accurate interface control of the second electronic device. Fig. 5b is a schematic diagram of anchor point generation according to an embodiment of the present application. For example, as shown in fig. 5b, a screen frame of the second electronic device (for example, a smart TV) is detected from the display interface of the first electronic device (for example, a mobile phone) shown in (1-2) in fig. 5b, so as to determine the display interface area of the smart TV.
The coordinate conversion module is used for determining a coordinate point sequence of the touch operation after the first electronic device detects the touch operation of the user in the opened intelligent remote control App, screening out coordinate points in a display interface area of the second electronic device, and then converting the screened coordinate points, so that the coordinate point sequence generated by the touch operation of the user in the opened intelligent remote control App is converted into a corresponding coordinate point sequence in an interface on the second electronic device.
The WI-FI service is used for guaranteeing information interaction between the first electronic equipment and the second electronic equipment, so that a virtual touch event generated by the first electronic equipment is sent to the second electronic equipment, and virtual remote control operation of the second electronic equipment is further achieved.
The hardware abstraction layer (hardware abstraction layer, HAL) supports the application framework layer and is an important link connecting the application framework layer and the kernel layer; it can provide services to developers through the application framework layer. For example, the function of the virtual touch event generation service in the embodiment of the present application may be implemented by configuring a first process at the hardware abstraction layer, and the first process may be a sub-process constructed separately in the hardware abstraction layer. The first process may include modules such as a virtual touch event generation service configuration interface and a virtual touch event generation service controller. The virtual touch event generation service configuration interface is the service interface that communicates with the application framework layer.
The kernel layer may be a Linux kernel layer, which is an abstraction layer between hardware and software. The kernel layer is provided with a plurality of drivers related to the first electronic device, including at least a display driver and a camera driver, and may further include an audio driver, a Bluetooth driver, a WI-FI driver, and the like, to which embodiments of the application are not limited in any way.
With reference to the description of the hardware framework of the electronic device in fig. 2 and the description of the software framework of the electronic device in fig. 3, the working principles of the software and hardware of the electronic device 200 are illustrated below for a remote control application scenario.
When the touch sensor 280K receives a touch operation, a corresponding hardware interrupt is issued to the kernel layer. The kernel layer processes the touch operation into an original input event (including information such as touch coordinates and a timestamp of the touch operation); the original input event is, for example, a user touch event in the subsequent embodiments of the present application. The original input event is stored at the kernel layer. The application framework layer acquires the original input event from the kernel layer and identifies the control corresponding to the input event. Taking the touch operation as a single-click operation, and taking the control corresponding to the single-click operation as the control of the application icon of the smart remote control App as an example: when the smart remote control App is started, it calls an interface of the application framework layer, and further starts the camera driver by calling the kernel layer, so that the camera 293 captures an image containing the display interface of the second electronic device, and the display driver displays the image on the display screen 294 of the first electronic device as real-time image information, where the real-time image information includes the image captured by the camera.
In the following description, a mobile phone will be described as an example. It should be understood that, the hardware architecture of the mobile phone may be shown in fig. 2, the software architecture may be shown in fig. 3, where a software program and/or a module corresponding to the software architecture in the mobile phone may be stored in the memory 140, and the processor 130 may execute the software program and the application stored in the memory 140 to execute the flow of the remote control method provided by the embodiment of the present application. For ease of understanding, terms that may be referred to in the following examples are explained:
(1) User touch event: indicates a touch operation performed by the user on the first electronic device. The user touch event includes a touch point coordinate or a touch point coordinate sequence of the touch operation. For example, if the touch operation is a click operation, the user touch event includes a touch point coordinate; if the touch operation is a sliding operation, the user touch event includes a touch point coordinate sequence (the touch point coordinate sequence at least includes a sliding start position coordinate and a sliding end coordinate, and may further include a sliding distance, a sliding direction, etc.). The touch operation includes, but is not limited to, a click operation, a sliding operation, a long-press operation, a double-click operation, an operation of clicking a designated control on the screen, and the like.
(2) Virtual touch event: indicates the virtual touch operation on the second electronic device into which the first electronic device converts a user touch event, so that the second electronic device can execute a corresponding operation according to the virtual touch event. The virtual touch event includes a relative coordinate point or a relative coordinate sequence of the virtual touch operation, as illustrated by the sketch following these term explanations. The specific coordinate conversion implementation is described hereinafter and is not detailed here.
(3) Two-dimensional projection interface: indicates the two-dimensional display of the second electronic device on the display interface of the first electronic device; the two-dimensional projection interface is the interface obtained after two-dimensional projection is performed according to the position of the second electronic device in three-dimensional space.
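By way of illustration only, the two kinds of events defined in (1) and (2) might be represented as the minimal data structures sketched below in Kotlin; the type and field names (TouchPoint, UserTouchEvent, VirtualTouchEvent, and so on) are assumptions made for this sketch and are not data structures defined by the present application.

```kotlin
// Illustrative sketch only; names and fields are assumptions, not structures defined herein.

// A single touch point on the first electronic device, in its own display coordinates.
data class TouchPoint(val x: Float, val y: Float)

// User touch event: the raw operation detected on the first electronic device.
data class UserTouchEvent(
    val type: String,               // e.g. "click", "slide", "long_press", "double_click"
    val points: List<TouchPoint>,   // one point for a click; a coordinate sequence for a slide
    val timestampMs: Long
)

// Virtual touch event: the converted operation sent to the second electronic device,
// carrying relative coordinates on the second device's screen instead of raw phone coordinates.
data class VirtualTouchEvent(
    val type: String,
    val relativePoints: List<TouchPoint>,  // relative coordinate point (or sequence) on the second device
    val timestampMs: Long
)
```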
It should be understood that in embodiments of the present application, "at least one" means one or more, and "a plurality" means two or more. "And/or" describes an association relationship of associated objects and indicates that three relationships may exist; for example, A and/or B may indicate: A alone, both A and B, and B alone, where A and B may be singular or plural. The character "/" generally indicates that the associated objects are in an "or" relationship. "At least one (item) of" the following or similar expressions refers to any combination of these items, including any combination of single items or plural items. For example, at least one (item) of a, b or c may represent: a, b, c, a and b, a and c, b and c, or a, b and c, where a, b and c may each be singular or plural.
In the embodiments of the present application, "a plurality" means two or more.
In addition, it should be understood that in the description of the present application, the words "first," "second," and the like are used merely for distinguishing between the descriptions and not for indicating or implying any relative importance or order.
In addition, in the embodiment of the present application, "terminal," "first electronic device," "electronic device," "mobile phone," etc. may be used in combination, that is, may refer to various devices that may be used to implement the embodiment of the present application.
For ease of understanding, in the embodiments described below, the first electronic device is taken as an example of a smart phone, but the application is not limited to the smart phone; any electronic device capable of implementing a touch operation can be used as the first electronic device in the application. In addition, the second electronic device is taken as an example of a smart television, but the application is not limited to the smart television; any electronic device that needs to be remotely controlled can be used as the second electronic device in the application, for example, the second electronic device may also be a smart projector, and the like.
The following presents a simplified summary of several embodiments of the application in order to provide a better understanding of such embodiments.
In order to facilitate understanding of the remote control method provided by the present application, the interface processing effects that can be achieved by using the remote control method provided by the present application will be described below by referring to the user interfaces shown in fig. 4a to 4 c. Including the following several possible scenarios:
scene 1: referring to the schematic view of the interface processing effect shown in fig. 4a, in which the display interface of the second electronic device shown in fig. 4a (1-1) is shown as a program top page and is a selection interface under a drama category, based on the foregoing description embodiment, the user may open the smart remote control App in a manner shown in fig. 1a and fig. 1b, and then the smart remote control App triggers the mobile phone camera to open, and the user captures, based on the camera of the mobile phone, the content displayed on the current smart television display interface as a real-time image of the mobile phone interface display, specifically, the content displayed on the display interface of the first electronic device shown in fig. 4a (1-2) (shown as an example of the mobile phone in fig. 4 a).
At this time, the user performs a clicking operation on "television play 2", after receiving the clicking operation, the mobile phone determines the coordinates of the touch point of the clicking position in the mobile phone display interface, performs coordinate conversion on the coordinates of the touch point, obtains the corresponding relative coordinates on the smart television, and generates a virtual touch event including the obtained relative coordinates on the smart television and sends the virtual touch event to the smart television.
After the smart television receives the virtual touch event, it parses the virtual touch event to obtain the relative coordinate, and determines that the position on its display interface corresponding to the relative coordinate is the poster picture of "TV play 2", so it can be determined that the user's touch is intended to play TV play 2. The smart television responds to the virtual touch event, plays TV play 2, and changes the display interface to the interface shown in (2-1) in fig. 4a, which is the opening picture of TV play 2.
The implementation of triggering the interface for generating the virtual touch event in scenarios 2 to 3 introduced below is similar to the implementation procedure of scenario 1 introduced above, so it is not repeated in the following description.
Scene 2: referring to the schematic diagram of the interface processing effect shown in fig. 4b, in which (1-1) in fig. 4b is the display interface of the smart tv, and (1-2) in fig. 4b is the display interface of the mobile phone. At this time, the user slides the screen from bottom to top in the right half area on the display interface of the mobile phone, after the mobile phone receives the sliding operation of the user through the touch panel, a plurality of touch point coordinates corresponding to the sliding operation of the user are obtained, the plurality of touch point coordinates comprise a sliding starting coordinate and a sliding ending coordinate, the sliding starting coordinate and the sliding ending coordinate are respectively subjected to coordinate conversion, the corresponding relative sliding starting coordinate and relative sliding ending coordinate on the intelligent television are obtained, and a virtual touch event comprising the obtained corresponding relative sliding starting coordinate and relative sliding ending coordinate on the intelligent television is generated and sent to the intelligent television.
After the intelligent television receives the virtual touch event, analyzing the virtual touch event to obtain a corresponding relative sliding starting coordinate and a corresponding relative sliding ending coordinate on the intelligent television, calculating a sliding distance between the corresponding relative sliding starting coordinate and the corresponding relative sliding ending coordinate on the intelligent television, and then determining the corresponding volume according to the corresponding relation between different sliding distances and volume values stored in advance.
If the smart television determines that the corresponding relative sliding start coordinate and relative sliding end coordinate on the smart television are two coordinates obtained by sliding from bottom to top and located in the right half display area of the display interface, it can be determined that the virtual touch event is intended to increase the volume, and the smart television increases the current volume by the determined amount by calling the corresponding volume adjustment control; conversely, if the smart television determines that the corresponding relative sliding start coordinate and relative sliding end coordinate on the smart television are two coordinates obtained by sliding from top to bottom, it can be determined that the virtual touch event is intended to decrease the volume, and the smart television decreases the current volume by the determined amount by calling the corresponding volume adjustment control.
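As a hedged illustration of the slide-to-volume decision described above, the following Kotlin sketch assumes that the parsed virtual touch event provides a relative slide start and end coordinate (with the y axis pointing upward, as in the coordinate axes of fig. 6a and 6b), that only slides in the right half of the screen control the volume, and that a pre-stored mapping from slide distance to volume value is available; all names and the exact mapping are assumptions of the sketch.

```kotlin
// Illustrative sketch of the smart TV's slide-to-volume decision; names and the
// distance-to-volume mapping are assumptions, not the application's defined behaviour.
data class RelativePoint(val x: Float, val y: Float)

fun handleVolumeSlide(
    start: RelativePoint,
    end: RelativePoint,
    screenWidth: Float,                   // width of the second device's screen frame
    distanceToVolumeStep: (Float) -> Int  // pre-stored correspondence between slide distance and volume value
): Int {
    // Only slides located in the right half of the display area are treated as volume control.
    val inRightHalf = start.x > screenWidth / 2 && end.x > screenWidth / 2
    if (!inRightHalf) return 0

    val distance = kotlin.math.abs(end.y - start.y)
    val step = distanceToVolumeStep(distance)

    // Bottom-to-top slide (positive y direction) increases the volume, top-to-bottom decreases it.
    return if (end.y > start.y) +step else -step
}
```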
In addition, after the smart television determines that the sliding operation is to adjust the volume, the smart television may also display the volume adjustment status on its display interface, for example as a volume pop-up window, which may be the volume adjustment display bar shown in (1-1) in fig. 4b; the volume adjustment display bar is then displayed synchronously on the display interface of the mobile phone. The user may adjust the current volume through the volume adjustment display bar displayed on the mobile phone: the mobile phone sends the distance information and dragging direction of the user dragging the volume adjustment display bar to the smart television through a virtual touch event; the smart television parses the distance information and dragging direction included in the virtual touch event, determines the volume corresponding to the parsed distance information according to the pre-stored relationship between different distances and volume values, and determines, according to the determined volume and the dragging direction, whether to increase or decrease the volume of the currently playing program, where the amount of increase or decrease may be the determined volume.
Scene 3: referring to the interface processing effect schematic diagram shown in fig. 4c, the (1-1) smart tv in fig. 4c displays a play picture of the tv show 2 along the scenario described in scenario 1. At this time, after the user performs the clicking operation on the display interface of the mobile phone shown in (1-2) in fig. 4c, through the processing of the mobile phone and the smart television with the same principle as that of the scene 1, the smart television may also determine that the clicking operation of the user on the mobile phone is to call the play information control, and then the smart television responds according to the virtual touch operation and then displays the display interface of the smart television shown in (2-1) in fig. 4c, that is, displays a "<" control for returning, a "pause" control, a control representing a play progress bar, and the like in the display interface of the smart television.
Further, the user may click the "<" control for returning in the mobile phone display interface shown in (2-2) in fig. 4c. After the smart television parses the virtual touch event and determines that the clicking operation performed on the mobile phone corresponds to the "<" control, it invokes a callback of that control, so that the smart television exits the playing picture of TV play 2 and changes its display interface to the program home page, that is, the display interface of the smart television shown in (1-1) in fig. 4a. Similarly, the user can click the pause control on the display interface of the mobile phone, and the smart television will pause the currently playing picture; or the user can drag the playback progress bar on the mobile phone display interface, so that the smart television controls the playback progress of the current program according to the dragging direction and dragging distance of the user, and displays the playing picture corresponding to the position to which the progress bar is adjusted.
It should be noted that the above are only a few possible scenarios provided by the present application, and the present application is not limited to them: any virtual touch event that the second electronic device could implement through a touch operation on its own display screen can be implemented by the user performing a user operation on the display interface of the first electronic device, so that the user can conveniently perform remote control operations to generate virtual touch events for the second electronic device, thereby improving the user experience. For example, in addition to the click operation and the sliding operation mentioned above, a long-press operation may be included, for example, to implement double-speed playback of the program currently played by the second electronic device; a double-click operation may, for example, be used to implement a pause/resume operation on the currently playing program of the second electronic device; a multi-finger operation event may, for example, implement operations such as zooming in/out on the display interface of the second electronic device; and other operations may also be implemented.
Based on the foregoing description of the interface processing effect that can be achieved by using the method provided by the present application, the implementation process of the remote control method provided by the present application is described below, so as to explain how to use the method provided by the present application to achieve the interface processing effect of fig. 4a to 4c that is described above, so that various virtual touch events can be generated by the first electronic device, and the requirements of various control scenarios of the second electronic device can be satisfied. Referring to fig. 5a, a process flow diagram of a remote control method according to an embodiment of the present application includes the following steps:
s501: the first electronic device acquires real-time image information containing a display interface area of the second electronic device through the camera device.
Specifically, after the processor in the first electronic device detects that the smart remote control App installed in the first electronic device has been triggered by the user, it controls the camera device of the first electronic device to be opened, and the user can operate the camera device to control the first electronic device to capture a picture that includes the display interface of the second electronic device.
That is, the processor is first required to determine that the first electronic device is in the scene of the user interface for generating the virtual touch event, and after determining that the first electronic device is in the scene, the processor may generate a call instruction for driving the image capturing apparatus and send the call instruction to the image capturing apparatus, so that the image capturing apparatus of the first electronic device is turned on to be in an operating state after receiving the call instruction.
One possible implementation of determining that the device is in the scene of a user interface for generating a virtual touch event is: if the processor in the first electronic device detects, through the touch sensor 280K, a click operation of the user on the touch panel for a designated application icon (the designated application being an App that implements the remote control function, for example the smart remote control or smart life App included in fig. 1a), it may be determined that the first electronic device is in the scene of a user interface for generating a virtual touch event; the camera device is then triggered to turn on, and it is determined that the purpose of the real-time image information captured by the camera device is to generate the virtual touch event. The clicking operation may also be implemented as the user clicking a remote control in the pull-down display interface of the notification bar of the first electronic device (for example, the remote control icon control of the smart television or smart projection included in fig. 1b). Therefore, in order to provide various embodiments for determining the virtual touch event generation scene, trigger entries for the scene of the user interface for generating the virtual touch event may be preset in various display interfaces of the first electronic device, so that the user can conveniently perform remote control operations.
Further, another possible implementation of determining a scene at a user interface for generating a virtual touch event is: the processor in the first electronic device may also determine that it is in the context of a user interface for generating a virtual touch event after receiving a voice control instruction from a user to launch an application implementing a remote control function via microphone 270C. For example, after receiving a voice control instruction of opening the intelligent remote control sent by a user through a microphone, the processor triggers display of an intelligent remote control App editing interface, so that the first electronic device is in a scene of a user interface for generating a virtual touch event.
In order to realize remote control of the second electronic device based on the first electronic device, after it is determined that the first electronic device is in the scene of a user interface for generating a virtual touch event, the user aims the camera device of the first electronic device at the second electronic device, so that the first electronic device receives the display interface of the second electronic device contained in the real-time image information captured by the camera device, and displays it synchronously in real time on the interface of the first electronic device. For example, (1) in fig. 5b shows the framing range of the mobile phone, and (1-2) in fig. 5b shows the display interface within that framing range; the real-time image information displayed on the mobile phone interface includes the front appearance of the smart television and the display interface of the smart television.
In the above embodiment, the purpose of the capturing performed by the first electronic device is to realize remote control of the second electronic device, so the real-time image information captured by the camera device does not need to be stored; instead, real-time synchronous display of the captured real-time image information can be realized through the preview capability of the first electronic device. This saves storage space of the first electronic device, improves its processing efficiency, and reduces the time delay in the generation of the virtual touch event, avoiding the problem of poor hand-following responsiveness (i.e., the processing time for the second electronic device to execute the corresponding operation in response to the user's touch operation is so long that the user perceives a slow operation response of the second electronic device). In implementation, the first electronic device directly transmits the real-time image information captured by the camera device to the display driver, so that real-time synchronous display of the real-time image information on the user interface at the application layer is realized through the display driver, that is, the real-time image information is displayed.
S502: and the first electronic equipment identifies the display interface area belonging to the second electronic equipment from the real-time image information.
Because the real-time image information captured by the first electronic device generally includes a larger area range than the area where the display interface of the second electronic device is located, and the touch operation of the user outside the area of the display interface of the second electronic device is irrelevant to the generation of the virtual touch event, in order to monitor the more accurate virtual touch operation of the second electronic device, when the method is implemented, the first electronic device may first analyze each frame of image in the captured real-time image information by using Object Detection (Object Detection) and Object Tracking (Object Tracking) technologies, so as to identify and track the screen frame of the second electronic device from each frame of image of the real-time image information.
The screen frame is used for determining a display interface area belonging to the second electronic equipment, namely, an area inside the screen frame is the display interface area of the second electronic equipment. Then, the first electronic device screens out the touch operation in the screen frame, and filters out the touch operation which does not belong to the screen frame. The screen frame of the second electronic device is identified and used as a screening condition of the first electronic device on the user touch event received on the touch panel, so that the accuracy and the data processing efficiency of generating the virtual touch event can be better improved.
In addition, through the recognition and tracking of the screen frame of the second electronic device, the anti-shake function on the first electronic device can be further realized, each frame of image in the captured real-time image information can be analyzed and recognized during implementation, and the content of the display area in the screen frame is locked based on the recognized screen frame, so that the problem of blurring of the display area of the second electronic device displayed in the real-time image information caused by shake of the first electronic device is avoided.
In order to improve the user's overall impression of the display interface of the second electronic device displayed in the first electronic device, it is necessary to avoid the problem that, when the user holds the first electronic device to capture the second electronic device, the touch operation on the first electronic device is inconvenient because the distance is too far, or the display interface of the first electronic device cannot completely cover the display interface of the second electronic device because the distance is too near. Therefore, if the first electronic device determines that the size of the display interface area of the second electronic device in the real-time image is smaller than a first threshold, the focal length of the first electronic device is adjusted. The pre-configured display range proportion may be, for example, that the area occupied by the display interface of the second electronic device is two thirds of the area occupied by the display screen of the first electronic device.
For example, if the display proportion of the display interface of the second electronic device on the display interface of the first electronic device is too small because the distance is too far, which is not conducive to user operation, then when the current display proportion is determined to be one half against the pre-configured display range proportion of two thirds, the displayed real-time image may be enlarged based on the detected screen frame of the second electronic device, so that the display proportion of the real-time image in the display screen of the first electronic device reaches two thirds. Similarly, if the display interface of the second electronic device cannot be fully displayed on the screen of the first electronic device because the distance is too near, the processor may call the wide-angle lens of the camera device to capture a real-time image of a wider range, so that the display interface of the second electronic device is displayed completely on the screen of the first electronic device and can further be displayed at the preset display proportion.
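A minimal sketch of the proportion check and zoom adjustment described above, assuming the detected screen frame is available as four ordered corners, the target proportion is the two-thirds value used in the example, and the camera exposes an adjustable zoom ratio; the area computation and all names are assumptions of the sketch rather than an interface defined by the present application.

```kotlin
// Illustrative sketch; the 2/3 target proportion comes from the example above, and the
// zoom factor is an assumed abstraction rather than a real camera API.
data class Quad(val corners: List<Pair<Float, Float>>)  // detected screen frame of the second device

fun quadArea(q: Quad): Float {
    // Shoelace formula over the four corners (assumed ordered around the frame).
    var area = 0f
    for (i in q.corners.indices) {
        val (x1, y1) = q.corners[i]
        val (x2, y2) = q.corners[(i + 1) % q.corners.size]
        area += x1 * y2 - x2 * y1
    }
    return kotlin.math.abs(area) / 2f
}

fun adjustZoom(frame: Quad, phoneScreenArea: Float, currentZoom: Float): Float {
    val target = 2f / 3f                                  // pre-configured display range proportion
    val proportion = quadArea(frame) / phoneScreenArea
    // Scale the zoom so the frame area approaches the target proportion;
    // sqrt because area grows with the square of the linear zoom factor.
    return currentZoom * kotlin.math.sqrt(target / proportion)
}
```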
The first electronic device may identify the screen frame of the second electronic device according to the following embodiments:
in one possible implementation, the object detection module in the application framework layer in the first electronic device identifies a screen frame of the second electronic device according to a pre-trained object detection model. The implementation of the first electronic device training the target detection model may be: the first electronic device learns the characteristics of the screen frame of the second electronic device by taking a large amount of frame images of real-time image information as training samples and taking the screen frame of the second electronic device as a training target, and finally obtains the target detection model by taking the screen frame of the second electronic device as output.
When the method is implemented, after receiving the real-time image information, the target detection module takes the real-time image information as input of the pre-trained target detection model, then recognizes the screen frame of the second electronic device through the target detection model, finally outputs the recognized screen frame of the second electronic device, and further determines the display interface area of the second electronic device according to the screen frame of the second electronic device.
In another possible implementation manner, if the recognition result of the screen frame by the object detection model is poor, for example, in a dim light environment with poor light, it is difficult to recognize the screen frame of the second electronic device from the real-time image information; or, if the second electronic device is an intelligent projection, the projection interface of the intelligent projection may not have an obvious screen frame, so it may be difficult to accurately identify the screen frame through the pre-trained object detection model. In this scenario, when implementing, the first electronic device may send an instruction for generating an anchor point to the second electronic device through interaction with the second electronic device, so that the second electronic device generates anchor points that are convenient for the first electronic device to detect on four corners of the display interface (or the projection interface of the intelligent projection). After the second electronic device generates the anchor points, the first electronic device detects four anchor points (for example, A, B, C, D points of four screen frame corner points in the display interface of the second electronic device shown in fig. 5b (1-1)) included in the real-time image information through the target detection module, and then the screen frame of the second electronic device can also be determined according to the A, B, C, D anchor points.
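Assuming the four anchor points A, B, C, D have been detected as image coordinates, one hedged way to arrange them into the corners of the screen frame is the simple heuristic sketched below (image coordinates with the y axis pointing downward); this ordering heuristic and the names used are assumptions of the sketch, not the target detection algorithm of the present application.

```kotlin
// Illustrative sketch; the corner-ordering heuristic is an assumption, not the
// application's target detection algorithm.
data class Pt(val x: Float, val y: Float)
data class ScreenFrame(val topLeft: Pt, val topRight: Pt, val bottomRight: Pt, val bottomLeft: Pt)

fun frameFromAnchors(anchors: List<Pt>): ScreenFrame {
    require(anchors.size == 4) { "expected the four anchor points A, B, C, D" }
    // In image coordinates (y pointing down): the top-left corner has the smallest x+y,
    // the bottom-right the largest x+y; the top-right has the largest x-y, the bottom-left the smallest.
    val topLeft = anchors.minByOrNull { it.x + it.y }!!
    val bottomRight = anchors.maxByOrNull { it.x + it.y }!!
    val topRight = anchors.maxByOrNull { it.x - it.y }!!
    val bottomLeft = anchors.minByOrNull { it.x - it.y }!!
    return ScreenFrame(topLeft, topRight, bottomRight, bottomLeft)
}
```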
S503: the first electronic equipment receives touch operation of a user on the touch panel, and obtains a first touch coordinate point (or sequence) after screening, so as to obtain a user touch event.
In implementation, after receiving a touch operation of a user on a touch panel through the touch sensor 280K, a processor in the first electronic device obtains a touch coordinate point (or sequence) corresponding to the touch operation. In a possible implementation manner, if the touch operation is a click operation, the processor processes the touch event into a user touch event according to information such as coordinates of a touch point and a time stamp of the click operation. In another possible implementation manner, if the touch operation is a sliding operation, a plurality of touch coordinate points of the sliding operation, that is, a touch coordinate sequence, is obtained, where the touch coordinate sequence at least includes a touch sliding start coordinate and a touch sliding end coordinate, determines a sliding distance and a sliding direction of the sliding operation, and further processes the sliding distance, the sliding direction, the timestamp and other information into a user touch event according to the touch sliding start coordinate, the touch sliding end coordinate, the sliding distance, the sliding direction, the timestamp and other information. In other possible embodiments, if the touch operation is a long click operation, the processor processes the information such as coordinates of a touch point of the long click operation, a long time of the long click operation, and the like into a user touch event; or if the touch operation is a multi-finger operation event, the processor processes the touch operation event into a user touch event according to the information such as the coordinates of the touch points of the fingers, the time stamp and the like; other possible user operations are processed based on the same principle to obtain the user touch event, and the application is not described herein again.
Based on the screen frame of the second electronic device identified in step S502, the first electronic device screens out, from the user's touch operation, the first touch coordinate point (or sequence) belonging to the screen frame region, so as to select the virtual remote control operation that the user performs on the first electronic device within the display area of the second electronic device, and then generates a virtual touch event according to the first touch coordinate point (or sequence); the first electronic device ignores the second touch coordinate point (or sequence) that does not belong to the screen frame region, and does not generate a virtual touch event from it. By detecting the screen frame of the second electronic device and then screening the touch coordinate points (or sequences) corresponding to user touch events according to the detected screen frame, user touch events outside the screen frame area of the second electronic device can be ignored, which reduces the amount of data to be calculated when generating virtual touch events and improves the accuracy of virtual touch event generation.
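A minimal sketch of this screening step, assuming the detected screen frame is available as an ordered convex quadrilateral and using a standard same-side (cross-product) inside test; the names are assumptions of the sketch.

```kotlin
// Illustrative sketch of filtering touch points against the detected screen frame.
data class P(val x: Float, val y: Float)

// Cross product of (b - a) x (p - a); its sign tells which side of edge a->b the point p lies on.
private fun cross(a: P, b: P, p: P): Float =
    (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x)

// True if p lies inside (or on) the convex quadrilateral whose corners are given in order.
fun insideFrame(p: P, corners: List<P>): Boolean {
    var positive = false
    var negative = false
    for (i in corners.indices) {
        val c = cross(corners[i], corners[(i + 1) % corners.size], p)
        if (c > 0) positive = true
        if (c < 0) negative = true
    }
    return !(positive && negative)   // all edges on the same side -> inside
}

// Keep only the first touch coordinate points that fall inside the second device's display area.
fun screenTouchPoints(points: List<P>, frameCorners: List<P>): List<P> =
    points.filter { insideFrame(it, frameCorners) }
```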
S504: the first electronic device converts the first touch coordinate point (or sequence) into a relative coordinate point (or sequence) of the second electronic device, and generates a virtual touch event.
Specifically, the display interface of the second electronic device included in the real-time image displayed on the interface of the first electronic device is essentially a two-dimensional projection of the second electronic device in the three-dimensional space obtained by capturing by the image capturing device, and since there is any arbitrary capturing angle of the first electronic device, the two-dimensional projection of the second electronic device displayed on the interface of the first electronic device may be a trapezoid, for example, a schematic diagram of a screen frame of the second electronic device shown in (1) in fig. 6a and (1) in fig. 6 b. It can be understood that the touch coordinates of the user touch operation received on the first electronic device cannot be in one-to-one correspondence with the coordinates on the second electronic device, so, in order to ensure the accuracy of the virtual touch event generated by the first electronic device, after the first electronic device obtains the first touch coordinate point (or sequence), the first touch coordinate point (or sequence) can be converted into a corresponding touch relative coordinate point (or sequence) on the second electronic device through the principle of augmented reality technology, and the first electronic device processes and generates the virtual touch event according to the touch relative coordinate point (or sequence), so that the second electronic device can obtain the virtual touch operation on the second electronic device according to the relative coordinate point (or sequence) after receiving and analyzing the virtual touch event, thereby obtaining the accurate touch coordinate on the second electronic device.
In the implementation process of the coordinate conversion, it is assumed that the four corner coordinates of the screen frame of the second electronic device in the display interface of the first electronic device are respectively represented by (x1, y1), (x2, y2), (x3, y3) and (x4, y4), the touch coordinate point of a user touch event (assumed to be a click operation) on the display interface of the first electronic device is represented by (x, y) (if the user touch event is a sliding operation, the touch coordinate points may be represented by a touch coordinate sequence; in this embodiment, only one coordinate point is taken as an example, and the coordinate conversion of the other coordinate points in the coordinate sequence is the same and is not described again later), and the relative coordinate point of the virtual touch operation on the display interface of the second electronic device after conversion is represented by (x', y'). A group of vertical frames of the second electronic device is represented by lines L1 and L3, and a group of horizontal frames of the second electronic device is represented by lines L2 and L4; since the screen frame of the second electronic device is a rectangle, the group of vertical frames are parallel to each other and the group of horizontal frames are parallel to each other in three-dimensional space. The following description, taken in conjunction with fig. 6a to 6b, describes exemplary embodiments of the coordinate conversion performed by the first electronic device through the coordinate conversion module, including the following several possible scenarios:
In the scene 1, in real-time image information shot by the first electronic device, the screen frames of the second electronic device are displayed as a group of vertical frames parallel to each other on the two-dimensional projection interface. For example, referring to lines L1 and L3 shown in (1) in fig. 6a, line L1 is a left vertical frame of the screen frame of the second electronic device, line L3 is a right vertical frame, and lines L1 and L3 are two parallel lines on the display interface of the first electronic device.
In the implementation process of determining the relative coordinates (x ', y') of the virtual touch event, the first electronic device determines x 'and y', respectively.
The value of the virtual touch event coordinate x ' depends on the relative distance between (x ', y ') on the second electronic device and any one of the vertical frames in the three-dimensional space. For example, referring to the illustration shown in fig. 6a, a set of vertical borders of the second electronic device in a two-dimensional projection plane, such as lines L1, L3 are shown parallel to each other, so that the relative distance between (x ', y') and any vertical border in the same position in three-dimensional space can be obtained by analogy from the relative distance between (x, y) and any vertical border on the display interface of the first electronic device (i.e., the two-dimensional projection interface of the second electronic device). Taking a relative distance between a touch point and a left vertical frame as an example, a virtual touch event coordinate x' on the second electronic device may be obtained according to the following formula 1:
x' = w × (x − x1) / (x2 − x1)    (formula 1)

where w is the width of the screen frame of the second electronic device, x is the abscissa of the user touch event coordinate on the first electronic device, x1 is the abscissa of the left screen frame of the second electronic device in the display interface of the first electronic device, and x2 is the abscissa of the right screen frame.
It should be noted that, the size information of the second electronic device (for example, the width information including the screen frame of the second electronic device and the height information related to the subsequent embodiment) may be requested by the first electronic device to the second electronic device when the first electronic device first establishes a communication connection with the second electronic device, or the second electronic device actively sends the size information to the first electronic device. Then, after the first electronic device obtains the size information of the second electronic device, the size information of the second electronic device may be stored locally, so as to be convenient for subsequent use. In addition, in addition to the size information of the second electronic device, the first electronic device may further obtain related information of other second electronic devices, such as model information of the second electronic device, and for example, before the first electronic device generates the virtual touch event, the first electronic device may determine the size information of the second electronic device according to the obtained model information of the second electronic device, where the first electronic device locally stores a correspondence between the model information of the second electronic device and the size information, or the first electronic device may locally perform network query to determine the size information of the second electronic device.
The value of the virtual touch event coordinate y' depends on the relative distance between (x', y') and any horizontal frame of the second electronic device in three-dimensional space. Since the group of horizontal frames among the screen frames displayed on the display interface of the first electronic device in scene 1 are not parallel to each other (for example, the lines L2 and L4 in (1) of fig. 6a are not parallel to each other), the relative distance between (x, y) and any horizontal frame on the display interface of the first electronic device (i.e. the two-dimensional projection interface of the second electronic device) cannot be used for the analogy. Therefore, y' cannot be determined according to the embodiment of determining x' above.
Although the group of horizontal frames of the second electronic device are shown non-parallel to each other on the two-dimensional projection interface, the group of screen frames of the second electronic device in the horizontal direction are essentially parallel in three-dimensional space. On this basis, in implementation, the distance relation between the point (x, y) on the two-dimensional projection plane and the screen frame of the second electronic device can be deduced in reverse through the point (x, y), the line L2 (the upper horizontal frame in the two-dimensional projection interface) and the line L4 (the lower horizontal frame in the two-dimensional projection interface). Specifically, the lines L2 and L4 may be extended by using the three-dimensional projection principle, so as to obtain the intersection point (x5, y5) of the two horizontal frames in the two-dimensional projection plane, as shown in (3) in fig. 6a; then, the intersection point (x5, y5) and (x, y) are connected to obtain the intersection point (x6, y6) of this connection line and the line L1 (the left vertical frame in the two-dimensional projection interface). It can be understood that, according to the three-dimensional projection principle, the relative distance between the point (x', y') in three-dimensional space and the horizontal frame at the same position can be obtained by analogy through the proportion of either sub-line segment after L1 is divided into the sub-line segments L1a and L1b with the intersection point (x6, y6) as the dividing point in the two-dimensional projection interface. In one possible implementation manner, taking the proportion of the sub-line segment L1b relative to L1 as an example, the virtual touch event coordinate y' on the second electronic device may be obtained according to the following formula 2:
y' = h × (y6 − y1) / (y3 − y1)    (formula 2)

where h is the height of the screen frame of the second electronic device, y6 is the ordinate of the intersection point obtained on the left screen frame L1 on the first electronic device according to the above embodiment, y3 is the ordinate of the intersection point of the left screen frame L1 and the lower screen frame L4, and y1 is the ordinate of the intersection point of the left screen frame L1 and the upper screen frame L2.
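Under the assumptions of scene 1 (the vertical frames are parallel in the projection while the horizontal frames are not), the conversion can be sketched in Kotlin as follows. The proportional formulas are taken from the descriptions of formula 1 and formula 2 above, while the line-intersection helper and all names are assumptions of the sketch rather than a reference implementation of the present application.

```kotlin
// Illustrative sketch of the scene-1 conversion; formulas reconstructed from the
// descriptions of formula 1 and formula 2, all names are assumptions.
data class Pt2(val x: Float, val y: Float)

// Intersection of the infinite lines (a1,a2) and (b1,b2); returns null if they are parallel.
fun intersect(a1: Pt2, a2: Pt2, b1: Pt2, b2: Pt2): Pt2? {
    val d = (a2.x - a1.x) * (b2.y - b1.y) - (a2.y - a1.y) * (b2.x - b1.x)
    if (d == 0f) return null
    val t = ((b1.x - a1.x) * (b2.y - b1.y) - (b1.y - a1.y) * (b2.x - b1.x)) / d
    return Pt2(a1.x + t * (a2.x - a1.x), a1.y + t * (a2.y - a1.y))
}

/**
 * Scene 1: the vertical borders L1, L3 are parallel in the projection.
 * tl, tr, bl, br: corners (x1,y1), (x2,y2), (x3,y3), (x4,y4) of the detected frame;
 * touch: the first touch coordinate point (x, y) on the phone;
 * w, h: width and height of the second device's screen frame.
 */
fun convertScene1(tl: Pt2, tr: Pt2, bl: Pt2, br: Pt2, touch: Pt2, w: Float, h: Float): Pt2? {
    // Formula 1: x' is proportional to the distance from the left vertical border.
    val xPrime = w * (touch.x - tl.x) / (tr.x - tl.x)

    // Extend the horizontal borders L2 (tl-tr) and L4 (bl-br) to their intersection (x5, y5).
    val vanish = intersect(tl, tr, bl, br) ?: return null
    // Connect (x5, y5) with the touch point and intersect with the left border L1 to obtain (x6, y6).
    val p6 = intersect(vanish, touch, tl, bl) ?: return null
    // Formula 2: y' is proportional to the position of (x6, y6) along L1.
    val yPrime = h * (p6.y - tl.y) / (bl.y - tl.y)

    return Pt2(xPrime, yPrime)
}
```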
And in the scene 2, in the real-time image information shot by the first electronic equipment, the screen frames of the second electronic equipment are displayed as a group of horizontal frames which are parallel to each other on the two-dimensional projection interface. For example, referring to lines L2 and L4 shown in (1) in fig. 6b, line L2 is an upper screen frame of the second electronic device, line L4 is a lower screen frame, and lines L2 and L4 are two parallel lines on the display interface of the first electronic device.
In the implementation process of determining the relative coordinates (x ', y') of the virtual touch event, the first electronic device determines x 'and y', respectively.
The value of the virtual touch event coordinate x ' depends on the relative distance between (x ', y ') on the second electronic device and any one of the vertical frames in the three-dimensional space. Since a set of vertical borders in the screen borders displayed on the display interface of the first electronic device in scene 2 are not parallel to each other, e.g. the lines L1, L3 shown in fig. 6b are shown not parallel to each other, x 'in scene 2 cannot be determined according to the embodiment of determining x' in scene 1, and x 'in scene 2 can be determined based on the same principle as the embodiment of determining y' in scene 1.
In implementation, the three-dimensional projection principle can be used to extend the lines L1 and L3 so as to obtain the intersection point (x5', y5') of the two vertical frames in the two-dimensional projection plane, as shown in (3) in fig. 6b; then, the intersection point (x5', y5') and (x, y) are connected to obtain the intersection point (x6', y6') of this connection line and the line L2 (the upper horizontal frame in the two-dimensional projection interface). It can be understood that, according to the three-dimensional projection principle, the relative distance between the point (x', y') in three-dimensional space and the vertical frame at the same position can be obtained by analogy through the proportion of either sub-line segment after L2 is divided into the sub-line segments L2a and L2b with the intersection point (x6', y6') as the dividing point in the two-dimensional projection interface. In one possible implementation, taking the proportion of the sub-line segment L2b relative to L2 as an example, the virtual touch event coordinate x' on the second electronic device may be obtained according to the following formula 3:

x' = w × (x6' − x1) / (x2 − x1)    (formula 3)

where w is the width of the screen frame of the second electronic device, x6' is the abscissa of the intersection point obtained on the upper screen frame L2 on the first electronic device according to the above embodiment, x2 is the abscissa of the intersection point of the upper screen frame L2 and the right screen frame L3, and x1 is the abscissa of the intersection point of the left screen frame L1 and the upper screen frame L2.
The value of the virtual touch event coordinate y ' depends on the relative distance between (x ', y ') on the second electronic device and any horizontal frame in the three-dimensional space. Illustratively, referring to what is shown in FIG. 6b, lines L2, L4 are shown parallel to each other, so that y 'under scene 2 is determined based on the same principle as the embodiment in which x' is determined in scene 1. Taking a relative distance between a touch point and an upper horizontal frame as an example, a virtual touch event coordinate y' on the second electronic device may be obtained according to the following formula 4:
y' = h × (y − y1) / (y2 − y1)    (formula 4)

where h is the height of the screen frame of the second electronic device, y is the ordinate of the user touch event coordinate on the first electronic device, y1 is the ordinate of the upper screen frame of the second electronic device in the display interface of the first electronic device, and y2 is the ordinate of the lower screen frame.
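Similarly, under the assumptions of scene 2 (the horizontal frames are parallel in the projection while the vertical frames are not), a hedged sketch reusing the Pt2 type and the intersect helper from the scene-1 sketch above is:

```kotlin
// Illustrative sketch of the scene-2 conversion; formulas reconstructed from the
// descriptions of formula 3 and formula 4. Reuses Pt2 and intersect(...) from the scene-1 sketch.
fun convertScene2(tl: Pt2, tr: Pt2, bl: Pt2, br: Pt2, touch: Pt2, w: Float, h: Float): Pt2? {
    // Extend the vertical borders L1 (tl-bl) and L3 (tr-br) to their intersection (x5', y5').
    val vanish = intersect(tl, bl, tr, br) ?: return null
    // Connect (x5', y5') with the touch point and intersect with the upper border L2 to obtain (x6', y6').
    val p6 = intersect(vanish, touch, tl, tr) ?: return null
    // Formula 3: x' is proportional to the position of (x6', y6') along L2.
    val xPrime = w * (p6.x - tl.x) / (tr.x - tl.x)

    // Formula 4: y' is proportional to the distance from the upper horizontal border.
    val yPrime = h * (touch.y - tl.y) / (bl.y - tl.y)

    return Pt2(xPrime, yPrime)
}
```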
Scene 3: in the real-time image information captured by the first electronic device, the group of horizontal frames and the group of vertical frames of the screen frame of the second electronic device are both displayed on the two-dimensional projection interface as not parallel to each other.
Illustratively, in connection with the contents shown in fig. 6a to 6b, a group of horizontal frames that are not parallel is shown as lines L2, L4 in fig. 6a, and a group of vertical frames that are not parallel is shown as lines L1, L3 in fig. 6b. In a scene in which neither the group of vertical frames nor the group of horizontal frames is displayed as parallel, the virtual touch event coordinate x' on the second electronic device in scene 3 is determined according to the same principle as that of determining x' in scene 2, and the virtual touch event coordinate y' on the second electronic device in scene 3 is determined according to the same principle as that of determining y' in scene 1; the specific embodiments will not be described herein.
Scene 4: if the real-time image information shot by the first electronic device is displayed as a group of horizontal frames and a group of vertical frames on the two-dimensional projection interface, the screen frames of the second electronic device are parallel to each other.
Illustratively, in connection with the contents shown in fig. 6a to 6b, a group of parallel horizontal frames is shown as lines L2, L4 in fig. 6b, and a group of parallel vertical frames is shown as lines L1, L3 in fig. 6a. In a scene in which both the group of vertical frames and the group of horizontal frames are displayed as parallel, x' in scene 4 is determined according to the same principle as that of the embodiment for determining x' in scene 1, and y' in scene 4 is determined according to the same principle as that of the embodiment for determining y' in scene 2; the detailed embodiments will not be described herein.
S505: the first electronic device establishes a communication connection with the second electronic device.
In order to ensure that the remote control operation of the first electronic device on the second electronic device can be realized, the first electronic device establishes a communication connection with the second electronic device. In one possible implementation, the first electronic device and the second electronic device may be connected to the same local area network, so as to establish the communication connection. By way of example, a wireless communication channel may be established through the Wi-Fi P2P technology, which is characterized by low latency. In another possible implementation, the first electronic device and the second electronic device may be connected through short-range communication.
It should be noted that the execution timing of S505 is not limited in the present application. For example, the first electronic device may establish the communication connection with the second electronic device before determining that the virtual touch event is generated; for instance, the first electronic device and the second electronic device may always be connected to the same local area network, that is, the communication connection is always maintained. Alternatively, the communication connection between the first electronic device and the second electronic device may be established when the first electronic device, as described in the foregoing embodiment, needs to send the anchor point generation instruction to the second electronic device. It will be appreciated that the communication connection between the first electronic device and the second electronic device may be established whenever the first electronic device needs to interact with the second electronic device.
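Wi-Fi P2P and other short-range channel setup are platform-specific. Purely as an illustrative assumption, the sketch below stands in for the same-LAN channel with a plain TCP connection and disables Nagle buffering so that each small virtual touch event is flushed immediately; the address and port are placeholders, not part of the embodiment.

```python
import socket

TV_ADDR = ("192.168.1.50", 9000)   # placeholder address of the second electronic device on the LAN

def connect_to_second_device():
    sock = socket.create_connection(TV_ADDR, timeout=3)
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)  # keep per-event latency low
    return sock
```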
S506: the first electronic device sends the virtual touch event to the second electronic device; wherein the virtual touch event carries the relative coordinate point (or sequence).
In the embodiment of the present application in which the touch coordinate sequence contained in the touch event is transmitted to the second electronic device, the amount of transmitted data is small, so the time delay generated by the data transmission is low. The problem of poor hand-following responsiveness caused by the technical scheme of encoding and decoding the whole display interface of the second electronic device for transmission can therefore be well solved.
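To illustrate why the transmitted data volume is small, the following sketch (an assumption, continuing the TCP illustration above) serializes a virtual touch event that carries only the relative coordinate point (or sequence); the field names action and points are invented for the example.

```python
import json

def send_virtual_touch_event(sock, rel_points, action="tap"):
    # Only the relative coordinates travel over the connection, so the message
    # stays on the order of tens of bytes, unlike streaming the whole interface.
    event = {"action": action, "points": [[round(x, 1), round(y, 1)] for x, y in rel_points]}
    payload = (json.dumps(event) + "\n").encode("utf-8")  # newline-delimited framing (assumption)
    sock.sendall(payload)
    return len(payload)  # roughly 50 bytes for a single tap

# Example: send_virtual_touch_event(sock, [(1920.0, 1080.0)])
```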
S507: the second electronic device parses the virtual touch event and executes the corresponding operation according to the relative coordinate point (or sequence).
After receiving the virtual touch event sent by the first electronic device, the second electronic device performs corresponding processing through its operating system. Since the second electronic device is also an electronic device, its hardware architecture may also be as shown in fig. 2, and its software architecture may be as shown in fig. 3. The software program and/or the module corresponding to the software architecture in the second electronic device may be stored in the memory 140, and the processor 130 may run the software program and the application stored in the memory 140 to execute the flow of the remote control method provided by the embodiment of the present application.
In specific implementation, after receiving and parsing the relative coordinate point (or sequence) in the virtual touch event sent by the first electronic device, the second electronic device determines the touch position of the relative coordinate point (or sequence) on its display interface and identifies the control related to the touch position; the corresponding control is then operated, so that the second electronic device executes the corresponding operation according to the virtual touch event.
For example, suppose the user's touch event is a bottom-to-top sliding operation in the right half of the screen frame of the second electronic device as displayed in the display interface of the first electronic device. After parsing the coordinate sequence from the received virtual touch event, the second electronic device determines that the sequence includes a sliding start coordinate and a sliding end coordinate, that both coordinates lie in the right half of its display interface, and that the sliding direction is the positive direction of the y axis (assuming the coordinate axes are established according to the landscape display interface, as in the manner shown in fig. 6a and fig. 6b). The second electronic device thereby determines that the touch event sent by the first electronic device is intended to increase the volume of the currently playing program, determines that the control to be invoked for the virtual touch event is the volume control, and performs a callback operation on the virtual touch event through the volume control, so as to achieve the setting of "increasing the volume".
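As a hedged sketch of the second device's handling of a received virtual touch event, the code below parses the relative coordinate sequence, hit-tests the registered controls, and invokes the matching callback (here, a volume control reacting to an upward swipe in the right half of the screen). The control registry, callback names and direction test are illustrative assumptions, not the device's real UI framework.

```python
import json

SCREEN_W, SCREEN_H = 3840, 2160   # assumed resolution of the second electronic device

class VolumeControl:
    def __init__(self, region):          # region: (left, top, right, bottom) in screen pixels
        self.region = region
    def contains(self, x, y):
        left, top, right, bottom = self.region
        return left <= x <= right and top <= y <= bottom
    def on_swipe_up(self):
        print("increase the volume")     # callback corresponding to the virtual touch event
    def on_tap(self, x, y):
        print(f"tap at ({x}, {y})")

def handle_virtual_touch_event(raw, controls):
    event = json.loads(raw)
    (x0, y0), (_, y1) = event["points"][0], event["points"][-1]
    for control in controls:            # identify the control related to the touch position
        if control.contains(x0, y0):
            if event["action"] == "swipe" and y1 > y0:   # direction test follows the assumed axes of figs. 6a/6b
                control.on_swipe_up()
            else:
                control.on_tap(x0, y0)
            break

handle_virtual_touch_event(
    '{"action": "swipe", "points": [[3000, 500], [3000, 1500]]}',
    [VolumeControl((SCREEN_W / 2, 0, SCREEN_W, SCREEN_H))])   # right half of the screen
```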
According to the embodiments described above, the present application enables the first electronic device to remotely control the second electronic device by generating a virtual touch event for it. In addition, after the communication connection is established between the first electronic device and the second electronic device, the transmitted data only comprises the virtual touch event obtained from the touch coordinate point (or sequence) of the user touch event, so the transmitted data volume is small and the time delay is low. The control accuracy of the first electronic device over the second electronic device can therefore be improved, and the requirements of multiple remote control scenes of the second electronic device can be met.
In the embodiments of the present application described above, the method provided in the embodiments of the present application is described with the electronic device as the execution subject. In order to implement the functions of the method provided by the embodiments of the present application, the first electronic device may include a hardware structure and/or a software module, and implement the functions in the form of a hardware structure, a software module, or a hardware structure plus a software module. Whether a certain function among the functions described above is performed by a hardware structure, a software module, or a combination of the two depends on the specific application and the design constraints of the technical solution.
Based on the above embodiments, the embodiments of the present application provide a remote control device, which is applied to a first electronic apparatus and is used to implement a remote control method provided by the embodiments of the present application. Referring to fig. 7, the apparatus 700 includes: a transceiver unit 701 and a processing unit 702. The transceiver unit 701 is configured to receive a first operation; the processing unit 702 is configured to start the image capturing apparatus in response to receiving the first operation; acquiring a first image by using the camera device, wherein the first image comprises a display interface area of the second electronic equipment, and the content in the display interface area of the second electronic equipment is the current display interface of the second electronic equipment; displaying a first user interface, wherein the first user interface comprises the first image; the transceiver unit 701 is configured to receive a touch operation for the first user interface; the processing unit 702 is configured to, in response to receiving the touch operation for the first user interface, obtain touch point coordinates corresponding to the touch operation for the first user interface, and generate a virtual touch event based on the touch point coordinates, where the virtual touch event includes relative coordinates in a current display interface of the second electronic device; and sending the virtual touch event to the second electronic device through the transceiver unit 701, so that after the second electronic device receives the virtual touch event, the operation corresponding to the relative coordinates in the current display interface of the second electronic device is executed in response to the virtual touch event.
In one possible design, the transceiver unit 701 is configured to receive the first operation, specifically: a first application icon is displayed on the first electronic device, and the transceiver unit 701 receives an operation for the first application icon; alternatively, the transceiver unit 701 receives a first voice operation; alternatively, the transceiver unit 701 receives a first gesture operation.
In a possible design, the processing unit 702 is further configured to determine, before the transceiver unit 701 receives the touch operation for the first user interface, that the area inside the screen frame of the second electronic device in the first user interface is the display interface area of the second electronic device.
In a possible design, the processing unit 702 is further configured to, when determining the area inside the screen frame of the second electronic device: transmitting an anchor generation instruction to the second electronic device through the transceiver unit 701, so that after the second electronic device receives the anchor generation instruction, an anchor is generated on a display interface in response to the anchor generation instruction; the processing unit 702 is configured to determine an area inside the screen frame of the second electronic device according to the acquired information of the anchor point in the first image.
In one possible design, the processing unit 702 is further configured to, after identifying the display interface area of the second electronic device: judging whether the size of the display interface area of the second electronic equipment is smaller than a first threshold value; and if the size of the display interface area of the second electronic device is smaller than a first threshold value, the first electronic device adjusts the focal length of the image pickup device to a first focal length.
In a possible design, after the transceiver unit 701 receives the touch operation for the first user interface, before generating the virtual touch event, the processing unit 702 is further configured to: acquiring at least one touch point coordinate; determining whether the at least one touch point coordinate is within a display interface area of the second electronic device; and in response to the first electronic device determining that the at least one touch point coordinate is within the display interface region of the second electronic device, the first electronic device generates the virtual touch event.
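Because the screen frame of the second electronic device generally appears in the preview as a general quadrilateral, the check that a touch point lies within the display interface area can be done with a point-in-polygon test. The sketch below is an illustrative assumption (an even-odd ray-casting test with invented names), not the embodiment's own implementation.

```python
def point_in_quad(point, quad):
    """quad: the four frame vertices in order; point: a touch coordinate in the preview."""
    x, y = point
    inside = False
    for i in range(4):
        (x1, y1), (x2, y2) = quad[i], quad[(i + 1) % 4]
        if (y1 > y) != (y2 > y):                       # edge straddles the horizontal ray
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

frame = [(200, 150), (1000, 150), (1000, 600), (200, 600)]   # detected frame vertices in the preview
print(point_in_quad((600, 375), frame))   # True: the touch lies inside the display interface area
print(point_in_quad((50, 50), frame))     # False: outside, so no virtual touch event is generated
```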
In one possible design, the processing unit 702 generates a virtual touch event based on the touch point coordinates, specifically for: converting the obtained touch point coordinates corresponding to the touch operation of the first user interface into relative coordinates in the current display interface of the second electronic device; and generating the virtual touch event according to the relative coordinates in the current display interface of the second electronic equipment.
In one possible design, the touch operation includes a click operation and/or a slide operation, and the touch point coordinates corresponding to the touch operation for the first user interface include a single coordinate and/or a plurality of coordinates.
In one possible design, the first electronic device is a mobile phone, the second electronic device is a smart television, the camera device is a rear camera of the mobile phone, and the current display interface of the second electronic device is a menu interface of the smart television; the first user interface is a display interface of the first electronic equipment after entering a remote control mode; the first image is an image comprising a menu interface of the intelligent television, the menu interface of the intelligent television comprises a plurality of controls, and the controls correspond to different functions; the display interface area of the second electronic device is an image area of the menu interface of the intelligent television, which is acquired by the mobile phone; the touch operation aiming at the first user interface is a click operation aiming at one of the plurality of controls in the image of the menu interface of the intelligent television in the first user interface; and the second electronic equipment executes the operation corresponding to the relative coordinates in the current display interface of the second electronic equipment to execute the function corresponding to one of the plurality of controls in the image of the menu interface of the intelligent television.
From the foregoing description of the embodiments, it will be apparent to those skilled in the art that, for convenience and brevity of description, only the division into the above functional modules is given as an example; in practical application, the above functions may be allocated to different functional modules as needed, that is, the internal structure of the apparatus may be divided into different functional modules to implement all or part of the functions described above. For the specific working processes of the systems, devices and units described above, reference may be made to the corresponding processes in the foregoing method embodiments, which are not repeated here.
The functional units in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the embodiments of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to perform all or part of the steps of the method described in the embodiments of the present application. The aforementioned storage medium includes: a flash memory, a removable hard disk, a read-only memory, a random access memory, a magnetic disk, an optical disk, and the like.
The foregoing is merely a specific implementation of the embodiment of the present application, but the protection scope of the embodiment of the present application is not limited to this, and any changes or substitutions within the technical scope disclosed in the embodiment of the present application should be covered in the protection scope of the embodiment of the present application. Therefore, the protection scope of the embodiments of the present application shall be subject to the protection scope of the claims.

Claims (19)

1. A remote control method, which is applicable to a first electronic device, wherein the first electronic device comprises an image pickup device, and the first electronic device and a second electronic device establish wireless connection, and the remote control method is characterized by comprising the following steps:
the first electronic device receives a first operation;
in response to receiving the first operation, the first electronic device starts the camera;
the first electronic equipment acquires a first image by using the camera device, wherein the first image comprises a display interface area of the second electronic equipment, and the content in the display interface area of the second electronic equipment is the current display interface of the second electronic equipment;
the first electronic device displays a first user interface, wherein the first user interface comprises the first image;
The first electronic device receives touch operation aiming at the first user interface;
in response to receiving the touch operation for the first user interface, the first electronic device obtains touch point coordinates corresponding to the touch operation for the first user interface, converts the touch point coordinates into relative coordinates in a current display interface of the second electronic device, and generates a virtual touch event based on the relative coordinates, wherein the virtual touch event comprises the relative coordinates;
the first electronic device sends the virtual touch event to the second electronic device, so that after the second electronic device receives the virtual touch event, the operation corresponding to the relative coordinates in the current display interface of the second electronic device is executed in response to the virtual touch event;
the converting the touch point coordinate into a relative coordinate in the current display interface of the second electronic device includes:
when a group of vertical frames of the second electronic equipment in the first user interface are parallel to each other, determining a relative abscissa based on the actual width of the screen frame of the second electronic equipment and the abscissa in the touch point coordinates; or when a group of vertical frames of the second electronic device in the first user interface are not parallel to each other, determining a relative abscissa based on an actual width of a screen frame of the second electronic device, a first intersection abscissa, a first vertex abscissa, and a second vertex abscissa; the first intersection point is an intersection point of a first connecting line and a first horizontal frame, and the first connecting line is a connecting line between an intersection point of extension lines of the group of vertical frames and the coordinates of the touch point; the first vertex and the second vertex are two vertices of the first horizontal frame;
When a group of horizontal frames of the second electronic equipment in the first user interface are parallel to each other, determining a relative ordinate based on the actual height of the screen frame of the second electronic equipment and the ordinate in the touch point coordinates; or when a group of horizontal frames of the second electronic device in the first user interface are not parallel to each other, determining a relative ordinate based on the actual height of the screen frame of the second electronic device, a second intersection ordinate, a third vertex ordinate and a fourth vertex ordinate; the second intersection point is an intersection point of a second connecting line and the first vertical frame, and the second connecting line is a connecting line between an intersection point of extension lines of the group of horizontal frames and the touch point coordinate; the third vertex and the fourth vertex are two vertices of the first vertical frame.
2. The method of claim 1, wherein the first electronic device receives a first operation comprising:
the first electronic device displays a first application icon, and the first electronic device receives an operation aiming at the first application icon; or alternatively, the process may be performed,
the first electronic device receives a first voice operation; or alternatively, the process may be performed,
The first electronic device receives a first gesture operation.
3. The method of claim 1, wherein prior to the first electronic device receiving a touch operation for the first user interface, the method further comprises: and the first electronic device determines the area inside the screen frame of the second electronic device in the first user interface as the display interface area of the second electronic device.
4. The method of claim 3, wherein the first electronic device determining an area inside a bezel of the second electronic device screen comprises:
the first electronic equipment sends an anchor point generating instruction to the second electronic equipment, so that after the second electronic equipment receives the anchor point generating instruction, an anchor point is generated on a display interface in response to the anchor point generating instruction;
and the first electronic device determines the area inside the screen frame of the second electronic device according to the acquired information of the anchor point in the first image.
5. The method of claim 3, wherein after the first electronic device identifies the display interface area of the second electronic device, the method further comprises:
Judging whether the size of the display interface area of the second electronic equipment is smaller than a first threshold value;
and if the size of the display interface area of the second electronic device is smaller than a first threshold value, the first electronic device adjusts the focal length of the image pickup device to a first focal length.
6. The method of claim 3 or 4, wherein after the first electronic device receives a touch operation for the first user interface, before generating the virtual touch event, the method further comprises:
the first electronic device acquires at least one touch point coordinate;
the first electronic device determines whether the at least one touch point coordinate is within a display interface area of the second electronic device;
and in response to the first electronic device determining that the at least one touch point coordinate is within the display interface region of the second electronic device, the first electronic device generates the virtual touch event.
7. The method of any of claims 1 to 5, wherein the generating a virtual touch event based on the touch point coordinates comprises:
the first electronic device converts the acquired touch point coordinates corresponding to the touch operation of the first user interface into relative coordinates in a current display interface of the second electronic device;
And the first electronic device generates the virtual touch event according to the relative coordinates in the current display interface of the second electronic device.
8. The method according to any one of claims 1 to 5, wherein the touch operation comprises a click operation and/or a slide operation, and the touch point coordinates corresponding to the touch operation for the first user interface comprise a single coordinate and/or a plurality of coordinates.
9. A method according to any one of claims 1 to 5, comprising:
the first electronic equipment is a mobile phone, the second electronic equipment is an intelligent television, the camera device is a rear camera of the mobile phone, and the current display interface of the second electronic equipment is a menu interface of the intelligent television;
the first user interface is a display interface of the first electronic equipment after entering a remote control mode;
the first image is an image comprising a menu interface of the intelligent television, the menu interface of the intelligent television comprises a plurality of controls, and the controls correspond to different functions;
the display interface area of the second electronic device is an image area of the menu interface of the intelligent television, which is acquired by the mobile phone;
The touch operation aiming at the first user interface is a click operation aiming at one of the plurality of controls in the image of the menu interface of the intelligent television in the first user interface;
and the second electronic equipment executes the operation corresponding to the relative coordinates in the current display interface of the second electronic equipment to execute the function corresponding to one of the plurality of controls in the image of the menu interface of the intelligent television.
10. An electronic device, corresponding to a first electronic device, the first electronic device and a second electronic device establishing a wireless connection, the first electronic device comprising:
an image pickup device;
the touch screen comprises a touch panel and a display screen;
one or more processors;
a memory;
a plurality of applications;
and one or more computer programs, wherein the one or more computer programs are stored in the memory, the one or more computer programs comprising instructions that, when executed by the first electronic device, cause the first electronic device to perform the steps of:
Receiving a first operation;
in response to receiving the first operation, activating the image capturing apparatus;
acquiring a first image by using the camera device, wherein the first image comprises a display interface area of the second electronic equipment, and the content in the display interface area of the second electronic equipment is the current display interface of the second electronic equipment;
displaying a first user interface, wherein the first user interface comprises the first image;
receiving a touch operation for the first user interface;
in response to receiving the touch operation for the first user interface, acquiring touch point coordinates corresponding to the touch operation for the first user interface, converting the touch point coordinates into relative coordinates in a current display interface of the second electronic device, and generating a virtual touch event based on the relative coordinates, wherein the virtual touch event comprises the relative coordinates;
sending the virtual touch event to the second electronic equipment, so that after the second electronic equipment receives the virtual touch event, responding to the virtual touch event, and executing an operation corresponding to the relative coordinates in the current display interface of the second electronic equipment;
The converting the touch point coordinate into a relative coordinate in the current display interface of the second electronic device includes:
when a group of vertical frames of the second electronic equipment in the first user interface are parallel to each other, determining a relative abscissa based on the actual width of the screen frame of the second electronic equipment and the abscissa in the touch point coordinates; or when a group of vertical frames of the second electronic device in the first user interface are not parallel to each other, determining a relative abscissa based on an actual width of a screen frame of the second electronic device, a first intersection abscissa, a first vertex abscissa, and a second vertex abscissa; the first intersection point is an intersection point of a first connecting line and a first horizontal frame, and the first connecting line is a connecting line between an intersection point of extension lines of the group of vertical frames and the coordinates of the touch point; the first vertex and the second vertex are two vertices of the first horizontal frame;
when a group of horizontal frames of the second electronic equipment in the first user interface are parallel to each other, determining a relative ordinate based on the actual height of the screen frame of the second electronic equipment and the ordinate in the touch point coordinates; or when a group of horizontal frames of the second electronic device in the first user interface are not parallel to each other, determining a relative ordinate based on the actual height of the screen frame of the second electronic device, a second intersection ordinate, a third vertex ordinate and a fourth vertex ordinate; the second intersection point is an intersection point of a second connecting line and the first vertical frame, and the second connecting line is a connecting line between an intersection point of extension lines of the group of horizontal frames and the touch point coordinate; the third vertex and the fourth vertex are two vertices of the first vertical frame.
11. The electronic device of claim 10, wherein the first electronic device receives a first operation comprising:
displaying a first application icon, and receiving an operation aiming at the first application icon; or alternatively, the process may be performed,
receiving a first voice operation; or alternatively, the process may be performed,
a first gesture operation is received.
12. The electronic device of claim 10, wherein the instructions, when executed by the first electronic device, cause the first electronic device to further perform: before receiving a touch operation for the first user interface, determining that the area inside the screen frame of the second electronic device in the first user interface is the display interface area of the second electronic device.
13. The electronic device of claim 12, wherein the first electronic device determines an area inside a screen bezel of the second electronic device, specifically comprising:
the first electronic equipment sends an anchor point generating instruction to the second electronic equipment, so that after the second electronic equipment receives the anchor point generating instruction, an anchor point is generated on a display interface in response to the anchor point generating instruction;
and the first electronic device determines the area inside the screen frame of the second electronic device according to the acquired information of the anchor point in the first image.
14. The electronic device of claim 12, wherein the instructions, when executed by the first electronic device, cause the first electronic device to identify a display interface area of the second electronic device, further perform:
judging whether the size of the display interface area of the second electronic equipment is smaller than a first threshold value;
and if the size of the display interface area of the second electronic equipment is smaller than a first threshold value, adjusting the focal length of the image pickup device to a first focal length.
15. The electronic device of claim 12 or 13, wherein the instructions, when executed by the first electronic device, cause the first electronic device to, after receiving a touch operation for the first user interface, generate the virtual touch event, further perform:
acquiring at least one touch point coordinate;
determining whether the at least one touch point coordinate is within a display interface area of the second electronic device;
and generating the virtual touch event in response to the first electronic device determining that the at least one touch point coordinate is in the display interface area of the second electronic device.
16. The electronic device of any of claims 10-14, wherein the instructions, when executed by the first electronic device, cause the first electronic device to perform generating a virtual touch event based on the touch point coordinates, specifically perform:
Converting the obtained touch point coordinates corresponding to the touch operation of the first user interface into relative coordinates in the current display interface of the second electronic device;
and generating the virtual touch event according to the relative coordinates in the current display interface of the second electronic equipment.
17. The electronic device of any one of claims 10-14, wherein the touch operation comprises a click operation and/or a slide operation, and the touch point coordinates corresponding to the touch operation for the first user interface comprise a single coordinate and/or a plurality of coordinates.
18. The electronic device of any one of claims 10 to 14, comprising:
the first electronic equipment is a mobile phone, the second electronic equipment is an intelligent television, the camera device is a rear camera of the mobile phone, and the current display interface of the second electronic equipment is a menu interface of the intelligent television;
the first user interface is a display interface of the first electronic equipment after entering a remote control mode;
the first image is an image comprising a menu interface of the intelligent television, the menu interface of the intelligent television comprises a plurality of controls, and the controls correspond to different functions;
The display interface area of the second electronic device is an image area of the menu interface of the intelligent television, which is acquired by the mobile phone;
the touch operation aiming at the first user interface is a click operation aiming at one of the plurality of controls in the image of the menu interface of the intelligent television in the first user interface;
and the second electronic equipment executes the operation corresponding to the relative coordinates in the current display interface of the second electronic equipment to execute the function corresponding to one of the plurality of controls in the image of the menu interface of the intelligent television.
19. A remote control system including a first electronic device including an image pickup apparatus and a second electronic device establishing wireless connection, comprising:
the first electronic device is configured to receive a first operation;
in response to receiving the first operation, the first electronic device starts the camera;
the first electronic equipment acquires a first image by using the camera device, wherein the first image comprises a display interface area of the second electronic equipment, and the content in the display interface area of the second electronic equipment is the current display interface of the second electronic equipment;
The first electronic device is used for displaying a first user interface, and the first user interface comprises the first image;
the first electronic device is used for receiving touch operation aiming at the first user interface;
in response to receiving the touch operation for the first user interface, the first electronic device obtains touch point coordinates corresponding to the touch operation for the first user interface, converts the touch point coordinates into relative coordinates in a current display interface of the second electronic device, and generates a virtual touch event based on the relative coordinates, wherein the virtual touch event comprises the relative coordinates;
the first electronic device sends the virtual touch event to the second electronic device;
the second electronic equipment receives the virtual touch event;
responding to the received virtual touch event, and executing an operation corresponding to the relative coordinates in the current display interface of the second electronic device by the second electronic device;
the converting the touch point coordinate into a relative coordinate in the current display interface of the second electronic device includes:
when a group of vertical frames of the second electronic equipment in the first user interface are parallel to each other, determining a relative abscissa based on the actual width of the screen frame of the second electronic equipment and the abscissa in the touch point coordinates; or when a group of vertical frames of the second electronic device in the first user interface are not parallel to each other, determining a relative abscissa based on an actual width of a screen frame of the second electronic device, a first intersection abscissa, a first vertex abscissa, and a second vertex abscissa; the first intersection point is an intersection point of a first connecting line and a first horizontal frame, and the first connecting line is a connecting line between an intersection point of extension lines of the group of vertical frames and coordinates of the touch point; the first vertex and the second vertex are two vertices of the first horizontal frame;
When a group of horizontal frames of the second electronic equipment in the first user interface are parallel to each other, determining a relative ordinate based on the actual height of the screen frame of the second electronic equipment and the ordinate in the touch point coordinates; or when a group of horizontal frames of the second electronic device in the first user interface are not parallel to each other, determining a relative ordinate based on the actual height of the screen frame of the second electronic device, a second intersection ordinate, a third vertex ordinate and a fourth vertex ordinate; the second intersection point is an intersection point of a second connecting line and the first vertical frame, and the second connecting line is a connecting line between an intersection point of extension lines of the group of horizontal frames and coordinates of the touch point; the third vertex and the fourth vertex are two vertices of the first vertical frame.
CN202011167645.6A 2020-10-27 2020-10-27 Remote control method, electronic equipment and system Active CN114513689B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202011167645.6A CN114513689B (en) 2020-10-27 2020-10-27 Remote control method, electronic equipment and system
PCT/CN2021/116179 WO2022088974A1 (en) 2020-10-27 2021-09-02 Remote control method, electronic device and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011167645.6A CN114513689B (en) 2020-10-27 2020-10-27 Remote control method, electronic equipment and system

Publications (2)

Publication Number Publication Date
CN114513689A CN114513689A (en) 2022-05-17
CN114513689B true CN114513689B (en) 2023-09-12

Family

ID=81381838

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011167645.6A Active CN114513689B (en) 2020-10-27 2020-10-27 Remote control method, electronic equipment and system

Country Status (2)

Country Link
CN (1) CN114513689B (en)
WO (1) WO2022088974A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114895834A (en) * 2022-05-30 2022-08-12 四川启睿克科技有限公司 Display method of intelligent household equipment control page
CN115167752A (en) * 2022-06-28 2022-10-11 华人运通(上海)云计算科技有限公司 Single-screen system and control method thereof

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN201278211Y (en) * 2008-09-08 2009-07-22 Tcl集团股份有限公司 Remote controller with touch screen and camera
CN103945251A (en) * 2014-04-03 2014-07-23 上海斐讯数据通信技术有限公司 Remote control system and mobile terminal
US20150181278A1 (en) * 2013-12-24 2015-06-25 Samsung Electronics Co., Ltd. Display apparatus and display method thereof
CN110866495A (en) * 2019-11-14 2020-03-06 杭州睿琪软件有限公司 Bill image recognition method, bill image recognition device, bill image recognition equipment, training method and storage medium

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI702843B (en) * 2012-02-15 2020-08-21 立視科技股份有限公司 Television system operated with remote touch control
CN103491444B (en) * 2012-06-14 2016-09-21 腾讯科技(深圳)有限公司 Image interaction method and system and the display device of correspondence
CN104639962B (en) * 2015-02-02 2018-05-08 惠州Tcl移动通信有限公司 A kind of method and system for realizing TV touch control
CN104703008A (en) * 2015-02-04 2015-06-10 中新科技集团股份有限公司 Method for controlling television through mobile phone
CN106331809A (en) * 2016-08-31 2017-01-11 北京酷云互动科技有限公司 Television control method and television control system

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN201278211Y (en) * 2008-09-08 2009-07-22 Tcl集团股份有限公司 Remote controller with touch screen and camera
US20150181278A1 (en) * 2013-12-24 2015-06-25 Samsung Electronics Co., Ltd. Display apparatus and display method thereof
CN103945251A (en) * 2014-04-03 2014-07-23 上海斐讯数据通信技术有限公司 Remote control system and mobile terminal
CN110866495A (en) * 2019-11-14 2020-03-06 杭州睿琪软件有限公司 Bill image recognition method, bill image recognition device, bill image recognition equipment, training method and storage medium

Also Published As

Publication number Publication date
CN114513689A (en) 2022-05-17
WO2022088974A1 (en) 2022-05-05

Similar Documents

Publication Publication Date Title
CN110661917B (en) Display method and electronic equipment
WO2022100237A1 (en) Screen projection display method and related product
CN112394895B (en) Picture cross-device display method and device and electronic device
CN112558825A (en) Information processing method and electronic equipment
WO2019174628A1 (en) Photographing method and mobile terminal
US9742995B2 (en) Receiver-controlled panoramic view video share
US20220398059A1 (en) Multi-window display method, electronic device, and system
KR20190014638A (en) Electronic device and method for controlling of the same
CN112527174B (en) Information processing method and electronic equipment
CN112527222A (en) Information processing method and electronic equipment
CN114513689B (en) Remote control method, electronic equipment and system
CN112866773B (en) Display equipment and camera tracking method in multi-person scene
WO2022028537A1 (en) Device recognition method and related apparatus
CN112328941A (en) Application screen projection method based on browser and related device
WO2023030099A1 (en) Cross-device interaction method and apparatus, and screen projection system and terminal
CN110086998B (en) Shooting method and terminal
WO2022042769A2 (en) Multi-screen interaction system and method, apparatus, and medium
CN115643485A (en) Shooting method and electronic equipment
CN113825002A (en) Display device and focus control method
WO2023231697A1 (en) Photographing method and related device
CN112822544A (en) Video material file generation method, video synthesis method, device and medium
CN114079691B (en) Equipment identification method and related device
WO2022105793A1 (en) Image processing method and device
CN115484387A (en) Prompting method and electronic equipment
CN113485596A (en) Virtual model processing method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant