CN114513689A - Remote control method, electronic equipment and system - Google Patents

Remote control method, electronic equipment and system

Info

Publication number
CN114513689A
Authority
CN
China
Prior art keywords
electronic device
display interface
touch
electronic equipment
interface
Prior art date
Legal status
Granted
Application number
CN202011167645.6A
Other languages
Chinese (zh)
Other versions
CN114513689B (en)
Inventor
王姚
钱凯
庄志山
朱爽
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN202011167645.6A priority Critical patent/CN114513689B/en
Priority to PCT/CN2021/116179 priority patent/WO2022088974A1/en
Publication of CN114513689A publication Critical patent/CN114513689A/en
Application granted granted Critical
Publication of CN114513689B publication Critical patent/CN114513689B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41 Structure of client; Structure of client peripherals
    • H04N21/422 Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
    • H04N21/42204 User interfaces specially adapted for controlling a client device through a remote control device; Remote control devices therefor
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/04817 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance using icons
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/0482 Interaction with lists of selectable items, e.g. menus
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F3/04883 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41 Structure of client; Structure of client peripherals
    • H04N21/422 Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431 Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47 End-user applications
    • H04N21/485 End-user interface for client configuration

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • User Interface Of Digital Computer (AREA)
  • Position Input By Displaying (AREA)

Abstract

The embodiment of the application provides a remote control method, an electronic device, and a system. The first electronic device includes a camera device and establishes a wireless connection with the second electronic device. The first electronic device receives a first operation; in response to receiving the first operation, the first electronic device starts the camera device; the first electronic device captures a first image with the camera device; the first electronic device displays a first user interface, where the first user interface includes the first image; the first electronic device receives a touch operation on the first user interface; and in response to receiving the touch operation on the first user interface, the first electronic device generates a virtual touch event and sends it to the second electronic device, so that after receiving the virtual touch event, the second electronic device executes a third operation in response to the virtual touch event.

Description

Remote control method, electronic equipment and system
Technical Field
The present application relates to the field of terminal technologies, and in particular, to a remote control method, an electronic device, and a system.
Background
With the development of smart televisions, the functions they support keep increasing and the information in their user interfaces keeps growing more complex, so the demands placed on remote control of smart televisions are rising accordingly. Traditional remote control, which operates the smart television through a simple key structure, can no longer adequately support the remote control functions that current smart televisions require, so how to better realize remote control of a smart television is a problem worth studying.
Given the popularity of touch technology, a current research direction is to combine touch technology with remote control technology to realize remote control of the smart television; compared with traditional remote control, this approach can satisfy more kinds of remote control requirements of the smart television. In the related art, a touch panel can be added to a traditional remote controller so that the smart television is controlled through a combination of keys and the touch panel; however, this scheme still cannot meet the requirements of the many control scenarios of current smart television services. Alternatively, an application for remotely controlling the smart television can be developed on a smartphone: the display content of the smart television is encoded and transmitted to the smartphone, which decodes it and then remotely controls the smart television; the repeated encoding, transmission, and decoding of image data, however, introduces a noticeable time delay. Therefore, to address these problems in the related art, the present application provides a remote control method that can meet the needs of more kinds of operation scenarios of current smart television services while reducing the time delay.
Disclosure of Invention
The application provides a remote control method, an electronic device, and a system, which meet the requirements of the various control scenarios of current smart television services, have low time delay, and improve the accuracy with which a user remotely controls a smart television, thereby improving the user's touch operation experience.
In a first aspect, an embodiment of the present application provides a remote control method applied to a first electronic device, where the first electronic device includes a camera device and establishes a wireless connection with a second electronic device. The first electronic device receives a first operation; in response to receiving the first operation, the first electronic device starts the camera device; the first electronic device acquires a first image with the camera device, where the first image includes a display interface area of the second electronic device, and the content in the display interface area of the second electronic device is the current display interface of the second electronic device; the first electronic device displays a first user interface, where the first user interface includes the first image; the first electronic device receives a touch operation for the first user interface, acquires touch point coordinates corresponding to the touch operation for the first user interface, and generates a virtual touch event based on the touch point coordinates, where the virtual touch event includes relative coordinates in the current display interface of the second electronic device; and the first electronic device sends the virtual touch event to the second electronic device, so that after receiving the virtual touch event, the second electronic device responds to it and executes the operation corresponding to the relative coordinates in its current display interface.
In the method, the first electronic device captures an image with its camera device and displays a user interface based on that captured image; the user can then perform a touch operation on the first electronic device, so that the first electronic device generates a virtual touch event from the user's touch operation and sends it to the second electronic device, thereby remotely controlling the second electronic device. In this way the user can satisfy a wide variety of remote control scenarios for the second electronic device simply through touch operations on the first electronic device. In addition, the data transmitted between the first electronic device and the second electronic device is a virtual touch event, which is small in volume, so the time delay of data interaction between the two devices is reduced and the user's touch experience is improved.
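To make the data flow concrete, the following is a minimal sketch of what such a virtual touch event might contain and how the sending side could assemble it. The field names, the relative-coordinate range of 0..1, and the line-based encoding are assumptions for illustration; the embodiments only require that the event describe the touch in the coordinate space of the second electronic device's current display interface.

```kotlin
// Hypothetical virtual touch event carrying coordinates relative to the
// second device's display interface. Field names and encoding are assumptions.
data class VirtualTouchEvent(
    val action: String,      // e.g. "down", "move", "up"
    val relativeX: Double,   // 0.0 .. 1.0 across the TV's display interface
    val relativeY: Double,   // 0.0 .. 1.0 down the TV's display interface
    val timestampMs: Long
)

// A tiny line-based encoding; any serialization (JSON, protobuf, ...) would do.
fun encode(event: VirtualTouchEvent): String =
    "${event.action},${event.relativeX},${event.relativeY},${event.timestampMs}"

fun main() {
    val event = VirtualTouchEvent("down", 0.42, 0.37, System.currentTimeMillis())
    println(encode(event)) // this small payload is what would be sent to the second device
}
```

Because only such a small event is transmitted, rather than encoded screen content, the amount of data exchanged per interaction stays small, which is the source of the low-latency property described above.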
According to one possible implementation manner, the first electronic device determines that a first operation is received when an application icon installed on the first electronic device for performing touch operation is clicked; or when a remote control in the notification bar pull-down interface is clicked; or after the first electronic device receives a voice operation or a gesture operation. In this way, the first electronic device offers the user multiple entries into the scenario in which virtual touch events are generated, which provides convenience and improves the user experience.
In one possible implementation, before the first electronic device receives a touch operation for the first user interface, the first electronic device identifies the display interface area of the second electronic device. Specifically, the first electronic device determines that the area inside the screen frame of the second electronic device in the first user interface is the display interface area of the second electronic device, and the content displayed in that area is the current display interface of the second electronic device. The image captured by the first electronic device usually covers a wider region than the second electronic device's screen, and on the first electronic device's display interface only touch operations inside the display interface area of the second electronic device are used to generate virtual touch events. Determining the second electronic device's screen by identifying its screen frame in the first electronic device's display interface therefore improves the efficiency of generating virtual touch events and reduces the processing time of the remote control process.
In one possible implementation manner, the first electronic device may determine the area inside the screen frame of the second electronic device as follows: the first electronic device sends an anchor point generation instruction to the second electronic device, so that after receiving the instruction the second electronic device generates anchor points on its display interface in response to it; the first electronic device then determines the area inside the screen frame of the second electronic device from the information of the anchor points it finds in the first image. By detecting the anchor points, the first electronic device can determine a plurality of target anchor points in its display interface and derive the display area of the second electronic device from the region they delimit, which improves the efficiency and accuracy of generating virtual touch events and reduces the processing time of the remote control process.
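The geometric part of this step can be illustrated with a small sketch: given the pixel positions of the detected anchor points in the first image (assumed here to mark the corners of the second device's screen), the display interface area is taken as their bounding rectangle. The anchor detection itself and the axis-aligned simplification (ignoring perspective distortion) are assumptions of this sketch, not details fixed by the embodiments.

```kotlin
data class Point(val x: Double, val y: Double)
data class Rect(val left: Double, val top: Double, val right: Double, val bottom: Double)

// Derive the display interface area of the second device from detected anchor points.
// Assumes the anchors mark the corners of the screen frame and that perspective
// distortion is small enough for an axis-aligned bounding box to be acceptable.
fun displayAreaFromAnchors(anchors: List<Point>): Rect {
    require(anchors.size >= 4) { "expected at least four anchor points" }
    return Rect(
        left = anchors.minOf { it.x },
        top = anchors.minOf { it.y },
        right = anchors.maxOf { it.x },
        bottom = anchors.maxOf { it.y }
    )
}
```

The Point and Rect types defined here are reused in the sketches that follow.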
In a possible implementation manner, after the first electronic device identifies the display interface area of the second electronic device, whether the size of the display interface area of the second electronic device is smaller than a first threshold value is judged; if the size of the display interface area of the second electronic device is smaller than a first threshold, the first electronic device adjusts the focal length of the camera device to a first focal length. Therefore, in order to facilitate the user to perform more accurate touch operation on the display interface of the first electronic device, the focal length of the camera device can be intelligently adjusted according to the display size of the display interface of the second electronic device included in the display interface of the first electronic device, so that the user can perform touch operation through the first electronic device, and more accurate virtual remote control events are generated.
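A sketch of this threshold check is shown below; the concrete threshold value, the target zoom factor, the interpretation of "size" as a fraction of the preview, and the adjustZoom callback are placeholders rather than values taken from the embodiments (the Rect type is reused from the earlier sketch).

```kotlin
// Hypothetical focal-length (zoom) adjustment: if the second device's display
// interface area occupies too small a fraction of the camera preview, zoom in
// so that touch input can be placed more precisely.
fun maybeZoomIn(
    displayArea: Rect,
    previewWidth: Double,
    previewHeight: Double,
    minAreaFraction: Double = 0.25,   // the "first threshold"; value assumed
    adjustZoom: (Double) -> Unit      // camera-specific zoom callback, assumed
) {
    val areaWidth = displayArea.right - displayArea.left
    val areaHeight = displayArea.bottom - displayArea.top
    val areaFraction = (areaWidth * areaHeight) / (previewWidth * previewHeight)
    if (areaFraction < minAreaFraction) {
        adjustZoom(2.0)               // move to the "first focal length"; value assumed
    }
}
```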
In a possible implementation manner, after the first electronic device receives a touch operation for the first user interface and before it generates the virtual touch event, the first electronic device obtains at least one touch point coordinate; the first electronic device determines whether the at least one touch point coordinate is within the display interface area of the second electronic device; and in response to determining that the at least one touch point coordinate is within the display interface area of the second electronic device, the first electronic device generates the virtual touch event. This implementation manner specifies how the virtual touch event is generated accurately: because the first electronic device generates the event only from touch point coordinates that lie within the display interface area, the accuracy of the generated virtual touch event is ensured.
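A minimal sketch of this check, reusing the Point and Rect types from above:

```kotlin
// A touch point contributes to a virtual touch event only if it lies inside the
// detected display interface area of the second device.
fun isInsideDisplayArea(p: Point, area: Rect): Boolean =
    p.x in area.left..area.right && p.y in area.top..area.bottom

// Keep only the touch points that fall inside the display interface area.
fun filterTouchPoints(points: List<Point>, area: Rect): List<Point> =
    points.filter { isInsideDisplayArea(it, area) }
```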
In one possible implementation manner, in response to receiving the touch operation for the first user interface, the first electronic device converts the acquired touch point coordinates corresponding to that touch operation into relative coordinates within the display interface area of the second electronic device, and generates the virtual touch event from those relative coordinates in the current display interface of the second electronic device. In other words, after obtaining the touch point coordinates of the user operation, the first electronic device converts them, based on how the second electronic device appears in the two-dimensional projection on the first electronic device's interface, into coordinates that belong to the second electronic device, which ensures the accuracy of the virtual touch event generated by the first electronic device.
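The conversion can be sketched as a simple normalization into the display interface area (again reusing Point and Rect); a full implementation would additionally correct for perspective, which this sketch deliberately ignores.

```kotlin
// Convert a touch point given in the phone's preview coordinates into a relative
// coordinate (0..1) inside the second device's display interface area.
// Assumes the screen appears as an axis-aligned rectangle in the preview.
fun toRelativeCoordinate(p: Point, area: Rect): Point {
    val relX = (p.x - area.left) / (area.right - area.left)
    val relY = (p.y - area.top) / (area.bottom - area.top)
    return Point(relX.coerceIn(0.0, 1.0), relY.coerceIn(0.0, 1.0))
}
```

On the receiving side, the second device can scale such relative coordinates by its own screen resolution to locate the control that should respond.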
In one possible implementation manner, the touch operation includes a click operation and/or a slide operation, and the touch point coordinates corresponding to the touch operation for the first user interface include a single coordinate and/or a plurality of coordinates. That is, the user operation may be a click or a slide, and the user can perform different operations on the first electronic device, which diversifies the virtual touch events that can be generated, satisfies the various remote control scenarios of the second electronic device, and improves the user experience.
According to one possible implementation manner, the first electronic device is a mobile phone, the second electronic device is a smart television, the camera device is a rear camera of the mobile phone, and a current display interface of the second electronic device is a menu interface of the smart television; the first user interface is a display interface of the first electronic device after entering a remote control mode; the first image is an image comprising a menu interface of the intelligent television, the menu interface of the intelligent television comprises a plurality of controls, and the controls correspond to different functions; the display interface area of the second electronic device is an image area of a menu interface of the smart television acquired by the mobile phone; the touch operation aiming at the first user interface is a click operation aiming at one of the plurality of controls in the image of the menu interface of the smart television in the first user interface; the second electronic device executes the operation corresponding to the relative coordinate in the current display interface of the second electronic device, and the second electronic device executes the function corresponding to one of the plurality of controls in the image of the menu interface of the smart television. Based on this, the implementation mode provides a possible scene of the first electronic device and the second electronic device, namely, a scene of realizing remote control of the smart television through the mobile phone.
In a second aspect, an embodiment of the present application further provides an electronic device, which is adapted to a first electronic device, where the first electronic device includes an image capture apparatus, and a wireless connection is established between the first electronic device and a second electronic device, and the electronic device includes: the touch screen comprises a touch panel and a display screen; one or more processors; a memory; a plurality of application programs; and one or more computer programs, wherein the one or more computer programs are stored in the memory, the one or more computer programs comprising instructions which, when executed by the first electronic device, cause the first electronic device to perform the steps of: receiving a first operation; in response to receiving the first operation, starting the camera device; acquiring a first image by using the camera device, wherein the first image comprises a display interface area of the second electronic equipment, and the content in the display interface area of the second electronic equipment is a current display interface of the second electronic equipment; displaying a first user interface, wherein the first user interface comprises the first image; receiving a touch operation directed to the first user interface; in response to receiving the touch operation aiming at the first user interface, acquiring touch point coordinates corresponding to the touch operation aiming at the first user interface, and generating a virtual touch event based on the touch point coordinates, wherein the virtual touch event comprises relative coordinates in a current display interface of the second electronic equipment; and sending the virtual touch event to the second electronic device, so that after the second electronic device receives the virtual touch event, the second electronic device responds to the virtual touch event and executes an operation corresponding to the relative coordinate in the current display interface of the second electronic device.
A possible implementation manner is that, when the instruction is executed by the first electronic device, the first electronic device is caused to specifically execute, when receiving a first operation, that: displaying a first application icon, and receiving an operation aiming at the first application icon; or receiving a first voice operation; alternatively, a first gesture operation is received.
A possible implementation, when the instructions are executed by the first electronic device, cause the first electronic device to further perform: before receiving a touch operation for the first user interface, determining that an area inside a screen frame of the second electronic device in the first user interface is a display interface area of the second electronic device.
A possible implementation manner is that, when the instruction is executed by the first electronic device, the first electronic device is caused to specifically execute, when determining an area inside a screen frame of the second electronic device, the following steps: sending an anchor point generating instruction to the second electronic equipment, so that after the second electronic equipment receives the anchor point generating instruction, an anchor point is generated on a display interface in response to the anchor point generating instruction; and determining an area inside a screen frame of the second electronic equipment according to the acquired information of the anchor point in the first image.
A possible implementation manner is that, when the instruction is executed by the first electronic device, the first electronic device further performs, after identifying the display interface area of the second electronic device: judging whether the size of a display interface area of the second electronic equipment is smaller than a first threshold value or not; and if the size of the display interface area of the second electronic equipment is smaller than a first threshold value, adjusting the focal length of the camera device to a first focal length.
A possible implementation manner is that, when the instructions are executed by the first electronic device, the instructions cause the first electronic device to, after receiving a touch operation for the first user interface, further execute, before generating the virtual touch event: acquiring at least one touch point coordinate; determining whether the at least one touch point coordinate is within a display interface area of the second electronic device; generating the virtual touch event in response to the first electronic device determining that the at least one touch point coordinate is within a display interface area of the second electronic device.
A possible implementation manner is that, when the instruction is executed by the first electronic device, the first electronic device executes, when generating a virtual touch event based on the touch point coordinates, specifically: converting the acquired touch point coordinate corresponding to the touch operation aiming at the first user interface into a relative coordinate in a current display interface of the second electronic equipment; and generating the virtual touch event according to the relative coordinates in the current display interface of the second electronic equipment.
In one possible implementation manner, the touch operation includes a click operation and/or a slide operation, and the touch point coordinates corresponding to the touch operation for the first user interface include a single coordinate and/or a plurality of coordinates.
According to one possible implementation manner, the first electronic device is a mobile phone, the second electronic device is a smart television, the camera device is a rear camera of the mobile phone, and a current display interface of the second electronic device is a menu interface of the smart television; the first user interface is a display interface of the first electronic device after entering a remote control mode; the first image is an image comprising a menu interface of the intelligent television, the menu interface of the intelligent television comprises a plurality of controls, and the controls correspond to different functions; the display interface area of the second electronic equipment is an image area of a menu interface of the smart television acquired by the mobile phone; the touch operation aiming at the first user interface is a click operation aiming at one of the plurality of controls in the image of the menu interface of the smart television in the first user interface; the second electronic device executes the operation corresponding to the relative coordinate in the current display interface of the second electronic device, and the second electronic device executes the function corresponding to one of the plurality of controls in the image of the menu interface of the smart television.
It should be noted that, for beneficial effects of each design of the electronic device provided in the second aspect of the embodiment of the present application, please refer to beneficial effects of any one of the possible designs of the first aspect, which is not described herein again.
In a third aspect, an embodiment of the present application provides a remote control system, including a first electronic device and a second electronic device, where the first electronic device includes an image pickup apparatus, the first electronic device and the second electronic device establish a wireless connection, and the first electronic device is configured to receive a first operation; in response to receiving the first operation, the first electronic equipment starts the camera device; the first electronic equipment acquires a first image by using the camera device, wherein the first image comprises a display interface area of the second electronic equipment, and the content in the display interface area of the second electronic equipment is the current display interface of the second electronic equipment; the first electronic equipment is used for displaying a first user interface, and the first user interface comprises the first image; the first electronic device is used for receiving touch operation aiming at the first user interface; in response to receiving the touch operation aiming at the first user interface, the first electronic device acquires touch point coordinates corresponding to the touch operation aiming at the first user interface and generates a virtual touch event based on the touch point coordinates, wherein the virtual touch event comprises relative coordinates in a current display interface of the second electronic device; the first electronic device sends the virtual touch event to the second electronic device; the second electronic device receives the virtual touch event; in response to receiving the virtual touch event, the second electronic device performs a third operation.
In a fourth aspect, embodiments of the present application further provide a remote control device, which includes a module/unit that performs the method in any one of the possible implementation manners of the first aspect. These modules/units may be implemented by hardware, or by hardware executing corresponding software.
In a fifth aspect, a computer-readable storage medium is provided, which stores a computer program (which may also be referred to as code or instructions) that, when executed on a computer, causes the computer to perform the method of any one of the possible implementations of the first aspect.
In a sixth aspect, there is provided a computer program product comprising: computer program (also called code, or instructions), which when executed, causes a computer to perform the method of any of the possible implementations of the first aspect described above.
In a seventh aspect, a graphical user interface on an electronic device is further provided, where the electronic device has a display screen, one or more memories, and one or more processors, and the one or more processors are configured to execute one or more computer programs stored in the one or more memories, and the graphical user interface includes a graphical user interface displayed when the electronic device executes any of the possible implementations of the first aspect of the embodiments of the present application.
Drawings
Fig. 1a is a schematic view of an application scenario provided in an embodiment of the present application;
fig. 1b is a schematic view of an application scenario provided in the embodiment of the present application;
fig. 2 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure;
fig. 3 is a schematic structural diagram of an android operating system according to an embodiment of the present application;
fig. 4a is an application scenario diagram of a remote control method according to an embodiment of the present application;
fig. 4b is a second application scenario diagram of a remote control method according to an embodiment of the present application;
fig. 4c is a third application scenario diagram of a remote control method according to an embodiment of the present application;
fig. 5a is a schematic flowchart of a remote control method according to an embodiment of the present application;
fig. 5b is a schematic diagram of anchor point generation provided in the embodiment of the present application;
FIG. 6a is a schematic diagram of coordinate transformation provided in an embodiment of the present application;
FIG. 6b is a second schematic diagram of coordinate transformation provided in the embodiment of the present application;
fig. 7 is a schematic structural diagram of a remote control device according to an embodiment of the present application.
Detailed Description
With the rapid development of society, mobile terminal devices such as mobile phones are becoming more and more popular. A mobile phone not only has a communication function but also strong processing capability, storage capability, a photographing function, a data editing function, and so on. Beyond being a communication tool, the mobile phone acts as the user's mobile database and provides a mobile computing environment that can apply predefined processing to received data and output instructions with a control function, such as the virtual touch event that the first electronic device sends to realize remote control. Therefore, based on the convenience and touch capability of the mobile terminal device, virtual touch events can be sent through it to an electronic device that needs to be remotely controlled, such as a smart television, and this approach can be applied to a variety of possible remote control scenarios.
Based on the description in the background art, the related art considers adding a touch panel to a conventional remote controller to satisfy more types of remote control requirements of the smart television; however, this approach suffers from inaccurate touch operation, for example, precise control of fast forward or rewind of a playing video cannot be achieved, so it cannot meet the requirements of the smart television's many control scenarios. Alternatively, an application for remotely controlling the smart television is developed on the smartphone: the current display content of the smart television is encoded into image data and sent to the smartphone, where it is decoded and displayed on the smartphone's touch panel; after a touch operation is performed on the smartphone's touch panel, the display content containing the touch operation is encoded and fed back to the smart television, so that the smart television is remotely controlled by the smartphone. This scheme, however, requires repeated encoding, transmission, and decoding of image data, which increases the amount of transmitted data and introduces a noticeable time delay.
In view of this, the present application provides a remote control method, based on the principles of an augmented reality technology and an image tracking technology, using the image capturing and presenting capabilities of a first electronic device, using an image capturing apparatus included in the first electronic device to obtain a captured image, where the captured image includes a display interface area of a second electronic device, and by converting a touch operation of a user on the first electronic device into a virtual remote control operation on the second electronic device, and sending a virtual touch event generated by the first electronic device based on the virtual remote control operation to the second electronic device, thereby implementing remote control on the second electronic device.
The embodiments of the present application may be applied to electronic devices such as a mobile phone, a tablet computer, a wearable device (e.g., a watch, a bracelet, a helmet, a headset, etc.), an in-vehicle device, an Augmented Reality (AR)/Virtual Reality (VR) device, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a Personal Digital Assistant (PDA), a smart home device (e.g., a smart television, a smart projector, a smart speaker, a smart camera, etc.), and the like. It is understood that the embodiment of the present application does not set any limit to the specific type of the electronic device.
Applications (apps) with various functions, such as apps for wechat, mailbox, microblog, video, smart life, intelligent remote control and the like, can be installed in the electronic device. In the embodiment of the application, attention is focused on how an App installed in the first electronic device and used for sending the virtual touch event generates the virtual touch event.
The technical solutions in the embodiments of the present application will be described in detail and clearly with reference to the accompanying drawings. The terminology used in the description of the embodiments herein is for the purpose of describing particular embodiments herein only and is not intended to be limiting of the application.
In the present embodiment, "at least one" means one or more, and "a plurality" means two or more. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used in the description of the present application is for the purpose of describing particular embodiments only and is not intended to be limiting of the present application. It is to be understood that, in this application, unless otherwise indicated, "/" indicates an "or" relationship between the associated objects; for example, A/B may represent A or B. In the present application, "and/or" is merely an association relationship describing associated objects and indicates that three relationships may exist; for example, "A and/or B" may represent: A alone, both A and B, and B alone.
It should be noted that in the embodiments of the present application, the terms "first", "second", and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or order. The features defined as "first" and "second" may explicitly or implicitly include one or more of the features. In the description of the embodiments of the present application, words such as "exemplary" or "for example" are used to indicate examples, illustrations or illustrations. Any embodiment or design described herein as "exemplary" or "e.g.," is not necessarily to be construed as preferred or advantageous over other embodiments or designs. Rather, use of the word "exemplary" or "such as" is intended to present concepts related in a concrete fashion.
Referring to fig. 1a, an application scenario diagram of a remote control method according to an embodiment of the present application is shown. As shown in fig. 1a, the application scenario may include: a first electronic device 101 and a second electronic device 102. The first electronic device 101 and the second electronic device 102 may access the same local area network or different local area networks. The example that the first electronic device 101 and the second electronic device 102 access the same local area network may specifically be: the first electronic device 101 and the second electronic device 102 establish a wireless connection with the same wireless access point. For example, the first electronic device 101 and the second electronic device 102 access the same wireless fidelity (WI-FI) hotspot, and for example, the first electronic device 101 and the second electronic device 102 may also access the same bluetooth beacon through a bluetooth protocol. For another example, the first electronic device 101 and the second electronic device 102 may also trigger a communication connection through a Near Field Communication (NFC) tag, and transmit encrypted information through a bluetooth module to perform identity authentication. After the authentication is successful, data transmission is performed in a point-to-point (P2P) manner.
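Purely as an illustration of the data path once such a connection exists, the sketch below pushes one encoded virtual touch event to the second device over a plain TCP socket on the shared local area network. The port number, the newline framing, and the choice of TCP over the Bluetooth or P2P paths mentioned above are assumptions, not details of the embodiments.

```kotlin
import java.net.Socket

// Minimal sketch: send one encoded virtual touch event to the second device
// over the shared LAN. Port number and newline framing are assumptions.
fun sendEventOverLan(peerAddress: String, payload: String, port: Int = 49152) {
    Socket(peerAddress, port).use { socket ->
        socket.getOutputStream().bufferedWriter().apply {
            write(payload)
            newLine()
            flush()
        }
    }
}
```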
In implementation, the first electronic device 101 may serve as a sending client, and after generating the virtual touch event based on the touch operation of the user, send the virtual touch event to the second electronic device 102. As shown in fig. 1a, in a first possible implementation manner that the first electronic device 101 enters a user interface for generating a virtual touch event, a displayed interface of the first electronic device 101 is a mobile phone main interface including multiple App icons, and the mobile phone main interface includes an intelligent remote control App icon for generating the virtual touch event. A user clicks an intelligent remote control App icon included in the main interface of the mobile phone shown in fig. 1a, after detecting that the intelligent remote control App icon is triggered, the first electronic device 101 jumps to a user interface for generating a virtual touch event, generates the virtual touch event according to the user touch operation detected again, and sends the generated virtual touch event to the second electronic device 102 through a network or through a near field communication connection established between the first electronic device 101 and the second electronic device 102, so that the second electronic device 102 executes a corresponding operation in response to the virtual touch event. In some embodiments, the first electronic device 101 may be a portable electronic device that also includes other functionality, such as personal digital assistant and/or music player functionality, such as an electronic device like a cell phone, a tablet, a wearable device with wireless communication functionality (e.g., a smart watch), etc.; the second electronic device 102 may be an electronic device such as a smart television and a smart projector, which is not specifically limited in this embodiment.
Referring to fig. 1b, the interface displayed by the first electronic device 101 may also be a notification bar drop-down interface of a mobile phone, where the notification bar drop-down interface includes a remote control for remotely controlling a second electronic device, such as a smart television or a smart projector. A user clicks a remote control included in a drop-down interface of a notification bar in the first electronic device 101 in fig. 1b, and after detecting that the remote control is triggered, the first electronic device 101 jumps to a user interface for generating a virtual touch event. The specific implementation of the first electronic device 101 generating the virtual touch event will be described later, and will not be described herein.
Exemplary embodiments of an electronic device to which embodiments of the present application may be applied include, but are not limited to, portable electronic devices carrying [operating-system trademarks shown as images in the original publication] or other operating systems. The portable electronic device may also be other portable electronic devices such as laptop computers (Laptop) with touch-sensitive surfaces (e.g., touch panels), etc.
Referring to fig. 2, an electronic device 200 may be the first electronic device 101 and/or the second electronic device 102 in the embodiment of the present application, and the electronic device 200 provided in the embodiment of the present application is described herein by taking the first electronic device 101 as the electronic device 200 as an example. It will be understood by those skilled in the art that the electronic device 200 shown in fig. 2 is merely an example and does not constitute a limitation of the electronic device 200, and that the electronic device 200 may have more or fewer components than those shown, may combine two or more components, or may have a different configuration of components. The various components shown in fig. 2 may be implemented in hardware, software, or a combination of hardware and software, including one or more signal processing and/or application specific integrated circuits.
The electronic device 200 may include a processor 210, an external memory interface 220, an internal memory 221, an antenna 1, an antenna 2, a mobile communication module 251, a wireless communication module 252, a sensor module 280, a camera 293, and a display screen 294, etc. The sensor module 280 may include a gyroscope sensor 280A and a touch sensor 280K (of course, the electronic device 200 may further include other sensors, such as a temperature sensor, a pressure sensor, a distance sensor, a magnetic sensor, an ambient light sensor, an air pressure sensor, a bone conduction sensor, etc., which are not shown in the figure).
The processor 210 may operate the remote control method provided in the embodiment of the present application, so as to meet the requirements of various remote control functions for remotely controlling the smart television through the first electronic device on the basis of ensuring the control accuracy, thereby improving the user experience. The processor 210 may include different devices, such as an integrated CPU and a GPU, and the CPU and the GPU may cooperate to execute the remote control method provided by the embodiment of the present application, for example, a part of the algorithm in the remote control method is executed by the CPU, and another part of the algorithm is executed by the GPU, so as to obtain faster processing efficiency.
Display screen 294 may display a photograph, video, web page, or file, etc. In this embodiment, the display screen 294 may display a main interface of a mobile phone of the first electronic device 101 shown in fig. 1a, or a drop-down interface of a notification bar shown in fig. 1 b. When the processor 210 detects a touch event of a finger (or a stylus, etc.) of a user with respect to an application icon, in response to the touch event, a user interface of an application corresponding to the application icon is opened and displayed on the display screen 294.
The camera 293 (front camera or rear camera, or one camera both as front camera and rear camera) is used for capturing still images or video, for example, if the electronic device 200 is the first electronic device 101 as shown in fig. 1a and 1b, the camera of the first electronic device 101 is used for capturing images including the display interface area of the second electronic device 102.
The internal memory 221 may be used to store computer-executable program code, which includes instructions. The processor 210 executes various functional applications of the electronic device 200 and data processing by executing instructions stored in the internal memory 221. The internal memory 221 may further store one or more computer programs corresponding to the virtual touch event generation algorithm provided in the embodiment of the present application. When the code of the virtual touch event generation algorithm stored in the internal memory 221 is executed by the processor 210, the processor 210 may perform the generation of the virtual touch event and transmit to the second electronic device 102 through the mobile communication module 251 or the wireless communication module 252.
Of course, the code of the virtual touch event generation algorithm provided in the embodiment of the present application may also be stored in the external memory. In this case, the processor 210 may execute the code of the virtual touch event generation algorithm stored in the external memory through the external memory interface 220, and the processor 210 may execute the generation of the virtual touch event and transmit to the second electronic device 102 through the mobile communication module 251 or the wireless communication module 252.
The function of the sensor module 280 is described below.
The gyro sensor 280A may be used to determine the motion pose of the electronic device 200. In some embodiments, the angular velocity of the electronic device 200 about three axes (i.e., x, y, and z axes) may be determined by the gyroscope sensor 280A. I.e., the gyro sensor 280A may be used to detect the current motion state of the electronic device 200, such as shaking or standing still. In this embodiment, if the electronic device 200 is detected to be in a shaking state by the gyroscope sensor 280A, the electronic device 200 may analyze and identify real-time images captured by the camera 293 in time, so as to avoid the problem of inaccurate generation of virtual touch events caused by shaking.
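A toy illustration of how the gyroscope reading might gate this behavior is given below; the angular-velocity threshold and the idea of flagging the device as shaking from the magnitude of the three-axis reading are assumptions about one possible implementation.

```kotlin
import kotlin.math.sqrt

// Toy shake detector: treat the device as shaking when the magnitude of the
// angular velocity (rad/s) reported by the gyroscope exceeds a threshold.
// The threshold value is an assumption for illustration.
fun isShaking(wx: Double, wy: Double, wz: Double, thresholdRadPerSec: Double = 1.5): Boolean =
    sqrt(wx * wx + wy * wy + wz * wz) > thresholdRadPerSec
```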
The touch sensor 280K is also referred to as a "touch panel". The touch sensor 280K may be disposed on the display screen 294, and the touch sensor 280K and the display screen 294 form a touch screen, which is also called a "touch screen". The touch sensor 280K is used to detect a touch operation acting thereon or nearby, for example, a user touch operation used to generate a virtual touch event in the embodiment of the present application. The touch sensor can communicate the detected touch operation to the application processor to determine a touch event type. Visual output related to touch operations may be provided through the display screen 294. In other embodiments, the touch sensor 280K can be disposed on a surface of the electronic device 200 at a different location than the display screen 294.
Illustratively, the user clicks an icon of the smart remote control in the main interface of the mobile phone shown in fig. 1a through the touch sensor 280K, the trigger processor 210 starts the smart remote control application, displays the jumped user interface for generating the virtual touch event through the display screen 294, and triggers the camera 293 to be turned on.
The wireless communication function of the electronic device 200 may be implemented by the antenna 1, the antenna 2, the mobile communication module 251, the wireless communication module 252, the modem processor, the baseband processor, and the like. In this embodiment of the application, information interaction such as a virtual touch event can be achieved between the first electronic device 101 and the second electronic device 102 through a wireless communication function of the electronic device 200.
The wireless communication module 252 may provide a solution for wireless communication applied to the electronic device 200, including Wireless Local Area Networks (WLANs) (e.g., wireless fidelity (Wi-Fi) networks), Bluetooth (BT), Global Navigation Satellite System (GNSS), Frequency Modulation (FM), NFC, Infrared (IR), and the like.
It should be understood that in practical applications, the electronic device 200 may include more or less components than those shown in fig. 2, and the embodiment of the present application is not limited thereto.
In order to implement the functions in the method provided by the embodiment of the present application, the electronic device 200 may include a hardware structure and/or a software module, and the functions are implemented in the form of a hardware structure, a software module, or a hardware structure and a software module. Whether any of the above-described functions is implemented as a hardware structure, a software module, or a hardware structure plus a software module depends upon the particular application and design constraints imposed on the technical solution.
The software system of the electronic device may employ a layered architecture, an event-driven architecture, a micro-core architecture, a micro-service architecture, or a cloud architecture. The embodiment of the application takes an Android system with a layered architecture as an example, and illustrates a software structure of an electronic device. Fig. 3 shows a software structure block diagram of an Android system provided in the embodiment of the present application. The layered architecture divides the software into several layers, each layer having a clear role and division of labor. The layers communicate with each other through a software interface. In some embodiments, the Android system is divided into five layers, an application layer, an application framework (framework) layer, an Android runtime (Android runtime) and system library, a hardware abstraction layer, and a kernel layer from top to bottom.
The application layer is the top layer of the operating system and may include a series of application packages. As shown in fig. 3, the application layer may include a native application of the operating system and a third-party application, wherein the native application of the operating system may include a User Interface (UI), a camera, a short message, a call, and the like, and the third-party application may include a map, a smart life, an intelligent remote control, and the like. The application mentioned below may be a native application of an operating system installed when the electronic device is shipped from a factory, or may be a third-party application downloaded from a network or acquired from another electronic device by a user during use of the electronic device.
In some embodiments of the present application, the application layer may be configured to implement presentation of an editing interface, where the editing interface may be used for enabling a user to implement an operation of a virtual touch event generated for a second electronic device in an App, such as an intelligent remote control, that is focused on by the present application. For example, the editing interface may be a control interface of an intelligent remote control App displayed on a touch screen of a first electronic device, for example, a user interface displayed on the first electronic device shown in (1-2) in fig. 5b, where the user interface displays picture information of a real-time display interface of a second electronic device, which is shot by the first electronic device using an image pickup device, so that a virtual remote control operation for the second electronic device is realized by implementing a remote control operation on the control interface of the first electronic device, and further, corresponding operations such as controlling or setting change of the display interface of the second electronic device are realized.
The application framework layer provides an application programming interface and a programming framework for the application of the application layer. The application framework layer may also include some predefined functions. As shown in FIG. 3, the application framework layers may include a window manager, content provider, view system, phone manager, resource manager, notification manager, and the like. In some embodiments of the present application, the application framework layer is mainly responsible for invoking a service interface for communicating with the hardware abstraction layer, so as to transfer a virtual touch event generation request to the hardware abstraction layer, where the virtual touch event generation request includes predefined programming of a virtual touch event generation service, and is used for generating various virtual touch events required by the second electronic device in the present application; and also takes charge of managing the user name and password of login authentication, and the like. For example, the virtual touch event generation service may include various modules required for managing generation of virtual touch events, which are involved in the embodiments of the present application. For example, the virtual touch event generation service includes a target detection module, a coordinate conversion module, a WI-FI service, and the like.
The target detection module is used for executing detection and remote control on a display interface area of the second electronic equipment in a control interface of the intelligent remote control App opened from the first electronic equipment, so that more accurate interface control on the second electronic equipment is realized. Fig. 5b is a schematic diagram of anchor point generation according to an embodiment of the present application. For example, as shown in fig. 5b, a screen frame of a second electronic device (e.g., a smart television) is detected from a display interface of a first electronic device (e.g., a mobile phone) shown in (1-2) of fig. 5b, so as to determine a display interface area of the smart television.
The coordinate conversion module is used for determining a coordinate point sequence of touch operation after the first electronic device detects the touch operation of the user in the opened intelligent remote control App, screening out coordinate points belonging to the display interface area of the second electronic device, and then converting coordinates of the screened coordinate points, so that the coordinate point sequence generated by the touch operation of the user in the opened intelligent remote control App is converted into a corresponding coordinate point sequence in the interface of the second electronic device.
The WI-FI service is used for guaranteeing information interaction between the first electronic equipment and the second electronic equipment, so that a virtual touch event generated by the first electronic equipment is sent to the second electronic equipment, and further virtual remote control operation on the second electronic equipment is achieved.
A Hardware Abstraction Layer (HAL) is a support for an application framework layer, and is an important link for connecting the application framework layer and a kernel layer, and can provide services for developers through the application framework layer. For example, the function of the virtual touch event generation service in the embodiment of the present application may be implemented by configuring a first process at a hardware abstraction layer, where the first process may be a sub-process separately constructed in the hardware abstraction layer. The first process may include modules such as a virtual touch event generation service configuration interface, a virtual touch event generation service controller, and the like. The virtual touch event generation service configuration interface is a service interface which communicates with the application framework layer.
The kernel layer may be a Linux kernel layer, which is an abstraction layer between hardware and software. The kernel layer is provided with a plurality of driving programs related to the first electronic equipment, and at least comprises a display driver and a camera driver; driving a camera; audio driving; driving by Bluetooth; WI-FI drive, etc., which the embodiments of the present application do not limit at all.
With reference to the description of the hardware framework of the electronic device in fig. 2 and the description of the software framework of the electronic device in fig. 3, the following describes an exemplary operation principle of the software and hardware of the electronic device 200 for a remote control application scenario.
When the touch sensor 280K receives a touch operation, a corresponding hardware interrupt is sent to the kernel layer. The kernel layer processes the touch operation into an original input event (including information such as touch coordinates and a time stamp of the touch operation), where the original input event is, for example, a user touch event in the following embodiments of the present application. The raw input events are stored at the kernel layer. And the application program framework layer acquires the original input event from the kernel layer and identifies the control corresponding to the input event. Taking the example that the touch operation is a click operation and the control corresponding to the click operation is the control of the application icon of the intelligent remote control App, the intelligent remote control App calls the interface of the application framework layer when being started, then starts the camera drive by calling the kernel layer, captures the image containing the display interface of the second electronic device through the camera 293, and displays the image as real-time image information on the display screen 294 of the first electronic device through the display drive, wherein the real-time image information includes the image captured by the camera.
In the following description, a mobile phone will be taken as an example for description. It should be understood that the hardware architecture of the mobile phone can be as shown in fig. 2, and the software architecture can be as shown in fig. 3, wherein a software program and/or a module corresponding to the software architecture in the mobile phone can be stored in the memory 140, and the processor 130 can execute the software program stored in the memory 140 and the flow used to execute the remote control method provided by the embodiment of the present application. For convenience of understanding, terms that may be referred to in the following embodiments are explained below:
(1) a user touch event: the touch control method includes the steps that touch control operation performed on first electronic equipment by a user is indicated, and a touch control point coordinate point or a touch control point coordinate sequence of the touch control operation is included in a user touch control event, for example, if the touch control operation is a click operation, the touch control point coordinate point is included in the user touch control event; if the touch operation is a sliding operation, the user touch event includes a touch point coordinate sequence (the touch point coordinate sequence at least includes a sliding start position coordinate, a sliding end coordinate, or further includes a sliding distance and a sliding direction, etc.). The touch operation includes, but is not limited to, a click operation, a slide operation, a long press operation, a double-click operation, a click operation of a screen designation control, and the like, which is not limited herein.
(2) Virtual touch event: the method and the device for the touch control of the second electronic device are used for indicating the first electronic device to convert into a virtual touch control operation aiming at the second electronic device according to a user touch control event so as to enable the second electronic device to execute corresponding operation according to the virtual touch control event, wherein the virtual touch control event comprises a relative coordinate point or a relative coordinate sequence of the virtual touch control operation. The relative coordinate points (or sequences) are obtained by the first electronic device after coordinate conversion is performed on the touch point coordinate points (or sequences) according to coordinate positions of four corner points of a screen frame of the second electronic device in the two-dimensional projection interface of the first electronic device, and a specific coordinate conversion implementation is introduced later and will not be described in detail herein.
(3) Two-dimensional projection interface: the two-dimensional projection interface is an interface obtained after two-dimensional projection is carried out according to the position relation of the second electronic equipment in the three-dimensional space.
It should be understood that "at least one" in the embodiments of the present application means one or more, "a plurality" means two or more. "and/or" describes the association relationship of the associated objects, meaning that there may be three relationships, e.g., a and/or B, which may mean: a alone, both A and B, and B alone, where A, B may be singular or plural. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship. "at least one of the following" or similar expressions refer to any combination of these items, including any combination of the singular or plural items. For example, at least one (one) of a, b, or c, may represent: a, b, c, a and b, a and c, b and c, or a, b and c, wherein a, b and c can be single or multiple.
The embodiments of the present application relate to a plurality of numbers greater than or equal to two.
In addition, it is to be understood that the terms first, second, etc. in the description of the present application are used for distinguishing between the descriptions and not necessarily for describing a sequential or chronological order.
In addition, in this embodiment of the present application, a "terminal", "first electronic device", "electronic device", "mobile phone", and the like may be used in combination, that is, refer to various devices that may be used to implement this embodiment of the present application.
For convenience of understanding, in the embodiments described below, the first electronic device is taken as a smart phone as an example, but the present application is not limited to the smart phone, and any electronic device capable of implementing touch operation can be taken as the first electronic device in the present application; in addition, the second electronic device is taken as an example of an intelligent television, but the present application is not limited to an intelligent television, and any electronic device that needs to be remotely controlled may be taken as the second electronic device in the present application, and for example, the second electronic device may also be an intelligent projector or the like.
The following describes several scenarios of the embodiments of the present application to facilitate better understanding of the embodiments of the present application.
In order to facilitate understanding of the remote control method provided by the present application, the following describes an interface processing effect that can be achieved by using the remote control method provided by the present application, with reference to the user interfaces shown in fig. 4a to 4 c. The following possible scenarios are included:
scene 1: referring to the schematic diagram of the interface processing effect shown in fig. 4a, where the display interface of the second electronic device (in fig. 4a, a smart tv is taken as an example) shown in (1-1) in fig. 4a displays a program home page, and is a selection interface under the category of dramas, based on the foregoing embodiment, a user may turn on a smart remote control App in the manner shown in fig. 1a and 1b, and then the smart remote control App triggers a mobile phone camera to turn on, and the user captures, based on the mobile phone camera, content displayed on the current smart tv display interface as a real-time image displayed on the mobile phone interface, specifically, content displayed on the display interface of the first electronic device (in fig. 4a, a mobile phone is taken as an example) shown in (1-2) in fig. 4 a.
At this time, the user executes a click operation on the "tv series 2", after receiving the click operation, the mobile phone determines a coordinate of a touch point of the click position in the display interface of the mobile phone, performs coordinate conversion on the coordinate of the touch point to obtain a corresponding relative coordinate on the smart tv, generates a virtual touch event including the obtained relative coordinate on the smart tv, and sends the virtual touch event to the smart tv.
After the smart television receives the virtual touch event, after the virtual touch event is analyzed to obtain the relative coordinate, the position of the relative coordinate on the display interface of the smart television is determined to be the position of the poster picture of the television play 2, so that it can be determined that the user plays the television play 2 in a touch manner, the smart television plays the television play 2 in response to the virtual touch event, and the display interface is changed to the interface displayed by the smart television in (2-1) in fig. 4a, and the display interface is the start playing picture of the television play 2.
Based on the introduced implementation process of the scenario 1, the following introduced scenarios 2 to 3 trigger entering of the interface for generating the virtual touch event are similar in implementation manner, and therefore details are not repeated in the following introduction.
Scene 2: referring to the schematic view of the interface processing effect shown in fig. 4b, (1-1) is a display interface of the smart television, and in fig. 4b, (1-2) is a display interface of the mobile phone. At this time, the user slides the screen from bottom to top in the right half area on the display interface of the mobile phone, after the mobile phone receives the sliding operation of the user through the touch panel, the mobile phone obtains a plurality of touch point coordinates corresponding to the sliding operation of the user, the plurality of touch point coordinates include a sliding start coordinate and a sliding end coordinate, coordinate conversion is respectively performed on the sliding start coordinate and the sliding end coordinate to obtain a corresponding relative sliding start coordinate and a corresponding relative sliding end coordinate on the smart television, and a virtual touch event including the obtained relative sliding start coordinate and the obtained relative sliding end coordinate on the smart television is generated and sent to the smart television.
After the smart television receives the virtual touch event, the virtual touch event is analyzed to obtain a corresponding relative sliding starting coordinate and a corresponding relative sliding ending coordinate on the smart television, a sliding distance between the corresponding sliding starting coordinate and the corresponding sliding ending coordinate on the relative smart television is calculated, and then the corresponding volume is determined according to the pre-stored corresponding relationship between different sliding distances and volume.
If the intelligent television determines that the corresponding relative sliding start coordinate and the corresponding relative sliding end coordinate on the intelligent television are two coordinates obtained by sliding from bottom to top on the display screen of the intelligent television and are positioned in the right half display area of the display interface, the intelligent television can determine that the virtual touch event is for increasing the volume, and the intelligent television can increase the current volume by calling the corresponding volume adjusting control to determine the volume; on the contrary, if the smart television determines that the corresponding relative sliding start coordinate and the corresponding relative sliding end coordinate on the smart television are two coordinates obtained by sliding from top to bottom on the display screen of the smart television, the smart television can determine that the virtual touch event is for reducing the volume, and the smart television reduces the current volume by calling the corresponding volume adjusting control to reduce the determined volume.
In addition, after the smart tv determines that the sliding operation is to adjust the volume, the volume adjustment condition may be displayed on the display interface of the smart tv, for example, the volume popup window displayed in (1-1) in fig. 4b may be a volume adjustment display bar, so that the volume adjustment display bar may be simultaneously displayed on the display interface of the mobile phone, so that the user may adjust the current volume through the volume adjustment display bar displayed on the display interface of the mobile phone, the mobile phone may also send distance information and dragging direction dragged by the user to the volume adjustment display bar to the smart tv through a virtual touch event, the smart tv analyzes distance information and dragging direction included in the virtual touch event, then determines the volume corresponding to the analyzed distance information according to the relationship between different distances and different volumes, and drags the display interface according to the determined volume and dragging direction, and determining whether to increase or decrease the volume of the currently played program, wherein the increased or decreased volume can be the determined volume.
Scene 3: with the scenario described in scenario 1, referring to the schematic diagram of the interface processing effect shown in fig. 4c, (1-1) the smart tv displays a playing picture of the series 2. At this time, after the user performs the click operation on the display interface of the mobile phone as shown in (1-2) in fig. 4c, through the processing of the mobile phone and the smart television with the same principle as that of the scene 1, the smart television may also determine that the click operation of the user on the mobile phone is for invoking a play information control, and then the smart television displays the display interface of the smart television as shown in (2-1) in fig. 4c after responding according to the virtual touch operation, that is, displays a "<" control, a "| |" control of pause, a control representing a play progress bar, and the like for returning in the display interface of the smart television.
Further, the user may further continue to click the "<" control for returning in the display interface of the mobile phone shown in (2-2) in fig. 4c, and through the processing of the mobile phone and the smart television of the same principle as in scene 1 or 2, after the smart television analyzes the virtual touch event, it is determined that the mobile phone performs the click operation on the "<" control, and the control is recalled, so that the smart television performs the operation of exiting the play screen of the drama 2, and the display interface of the smart television is changed to the display interface of the program home page, that is, the display interface of the smart television shown in (1-1) in fig. 4 a. Similarly, the user can click the "| |" control on the display interface of the mobile phone, and the smart television can execute the operation of pausing the current playing picture; or, the user may perform an operation of dragging the play progress bar on the display interface of the mobile phone, and the smart television may control the play progress of the current program according to the dragging direction and the dragging distance of the user, and display the current display interface as a play picture corresponding to the position to which the play progress bar is adjusted.
It should be noted that the content introduced above is several possible scenarios provided by the present application, but the present application is not limited to the several scenarios mentioned above, and the virtual touch event that can be implemented by the second electronic device through the touch operation performed on its own display screen can be implemented by the user executing the user operation on the display interface of the first electronic device, so that the user can conveniently perform the remote control operation to generate the virtual touch event for the second electronic device, thereby improving the user experience. For example, besides the above-mentioned clicking operation and sliding operation, a long-click operation may be included, for example, the method may be used to implement double-speed playing of a currently playing program of the second electronic device; a double-click operation, which may be used to implement, for example, a pause/replay operation of a currently playing program of the second electronic device; the multi-finger operation event may be, for example, a zoom-in/zoom-out operation on a display interface of the second electronic device, or other realizable operations.
Based on the foregoing description of the interface processing effect that can be achieved by using the method provided by the present application, an implementation process of the remote control method provided by the present application is described below to describe how to achieve the interface processing effect described in fig. 4a to 4c by using the method provided by the present application, so that various virtual touch events can be generated by the first electronic device, and the requirements of various control scenarios of the second electronic device can be met. Referring to fig. 5a, a schematic processing flow diagram of a remote control method according to an embodiment of the present application is shown, including the following steps:
s501: the first electronic equipment acquires real-time image information of a display interface area containing the second electronic equipment through the camera device.
Specifically, after detecting that the intelligent remote control App installed in the first electronic device is triggered by the user, the processor in the first electronic device may control the camera of the first electronic device to be turned on, so that the user may operate the camera to control the first electronic device to capture a picture, where the picture includes the display interface of the second electronic device.
That is to say, the processor is first required to determine that the first electronic device is in a scene of a user interface for generating a virtual touch event, and after determining that the first electronic device is in the scene, the processor may generate a call instruction for driving the image pickup device, and send the call instruction to the image pickup device, so that the image pickup device of the first electronic device is turned on to be in an operating state after receiving the call instruction.
Among the possible implementations of determining a scenario at a user interface for generating a virtual touch event are: if the processor in the first electronic device detects, through the touch sensor 280K, a click operation of a user on a touch panel for a designated application icon (the designated application is an App for implementing a remote control function, for example, an App for smart remote control, smart life, and the like included in fig. 1 a), it may be determined that the first electronic device is in a scene of a user interface for generating a virtual touch event, and then the camera is triggered to turn on, and it is determined that the real-time image information captured by the camera serves to generate the virtual touch event. The clicking operation may also be implemented as the user clicking a remote control in the first electronic device notification bar drop-down display interface (for example, a remote control icon control in the smart projection and the smart television included in fig. 1 b). Therefore, in order to provide various embodiments for determining a virtual touch event generation scene, a trigger entry for triggering a scene of a user interface for generating a virtual touch event may be preset in various display interfaces of the first electronic device, so as to facilitate a user to perform a remote control operation.
In addition, another possible implementation of determining a scenario at a user interface for generating a virtual touch event is: the processor in the first electronic device may further determine that the first electronic device is in a scene of the user interface for generating the virtual touch event after receiving a voice control instruction used by the user to start the application implementing the remote control function through the microphone 270C. For example, after receiving a voice control instruction of "opening the intelligent remote control" sent by a user through a microphone, the processor triggers display of an intelligent remote control App editing interface, so that the first electronic device is in a scene of a user interface for generating a virtual touch event.
In order to realize remote control operation of a second electronic device based on a first electronic device, after the first electronic device is determined to be in a scene of a user interface used for generating a virtual touch event, a user aims a camera device of the first electronic device at the second electronic device to shoot, so that the first electronic device receives a display interface of the second electronic device contained in real-time image information shot by the camera device, and the real-time image information is synchronously displayed on the interface of the first electronic device in real time. For example, as shown in fig. 5b (1), which is a shooting area range of the mobile phone, and in fig. 5b (1-2), which is a display interface of an area range shot by the smart phone, the real-time image information displayed on the interface of the smart phone includes a front appearance of the smart television and a display interface of the smart television.
It should be noted that, in the foregoing embodiment, since the function of shooting by the first electronic device is to implement remote control on the second electronic device, the real-time image information captured by the camera device does not need to be stored, but can be implemented to implement real-time synchronous display on the real-time image information captured by the camera device through the preview capability of the first electronic device, so as to save the storage space of the first electronic device, improve the processing efficiency of the first electronic device, reduce the time delay in the virtual touch event generation process, and further avoid the problem of poor tracking performance (i.e., the processing time for the second electronic device to perform corresponding operations is long for the touch operations of the user, so that the user is aware of the operation reactions of the second electronic device) caused by the time delay. When the method is implemented, the first electronic equipment directly transmits the real-time image information shot by the camera device to the display driver, so that the real-time synchronous display of the real-time image information on the user interface of the application program layer is realized through the display driver, namely, the real-time image information is displayed.
S502: and the first electronic equipment identifies a display interface area belonging to the second electronic equipment from the real-time image information.
Because the real-time image information captured by the first electronic device generally includes an area range larger than a range in which the display interface of the second electronic device is located, and the touch operation of the user outside the area of the display interface of the second electronic device is not related to the generation of the virtual touch event, in order to implement more accurate monitoring of the virtual touch operation of the second electronic device, when the first electronic device is implemented, an Object Detection (Object Detection) and an Object Tracking (Object Tracking) technology may be first adopted to analyze each frame image in the captured real-time image information, so as to identify and track a screen frame of the second electronic device from each frame image of the real-time image information.
The screen frame is used for determining a display interface area belonging to a second electronic device, namely, an area inside the screen frame is the display interface area of the second electronic device. Then, the first electronic device screens out touch operations belonging to the screen frame, and filters out touch operations not belonging to the screen frame. By identifying the screen frame of the second electronic device, the screen frame is used as a screening condition of the first electronic device for the user touch event received on the touch panel, so that the accuracy of generating the virtual touch event and the data processing efficiency can be better improved.
In addition, the anti-shaking function on the first electronic equipment can be further realized by identifying and tracking the screen frame of the second electronic equipment, each frame of image in the captured real-time image information can be analyzed and identified during implementation, and the content of the display area belonging to the screen frame can be locked based on the identified screen frame, so that the problem that the display area of the second electronic equipment displayed in the real-time image information is fuzzy due to shaking of the first electronic equipment is avoided.
Moreover, in order to improve the sense of unity of the display interface of the second electronic device displayed in the first electronic device of the user, and avoid the problem that when the user holds the first electronic device to capture the second electronic device by hand, the touch operation on the first electronic device is not convenient due to too far distance, or the display interface of the first electronic device cannot completely cover the display interface of the second electronic device due to too close distance, in implementation, the real-time image can be displayed on the display screen 294 after the processor of the first electronic device receives the real-time image information captured by the camera device, and the display scale is intelligently adjusted based on the displayed real-time image by taking the identified screen frame of the second electronic device as a reference according to the pre-configured display range scale. Specifically, if the first electronic device determines that the size of the display interface area of the second electronic device in the live image is smaller than a first threshold, the focal length of the first electronic device is adjusted. The preset display range ratio may be, for example, that the area range occupied by the display interface of the second electronic device is two thirds of the area range of the display screen of the first electronic device.
For example, if the display ratio of the display interface of the second electronic device on the display interface of the first electronic device is too small due to the too long distance, which is not beneficial to the operation of the user, based on the preset display range ratio of two thirds, when the current display ratio is determined to be one half, the range size of the displayed real-time image may be enlarged with the screen frame of the first electronic device as a reference, so that the display ratio of the real-time image in the display screen of the first electronic device reaches two thirds. Similarly, if the display interface of the second electronic device cannot be completely displayed on the screen of the first electronic device due to the too close distance, the processor may capture a real-time image with a wider range by calling the wide-angle lens of the camera device, so as to satisfy that the display interface of the second electronic device is completely displayed on the screen of the first electronic device and can be further displayed to a preset display scale.
The implementation method for identifying the screen frame of the second electronic device by the first electronic device may include the following steps:
in one possible implementation, the object detection module in the application framework layer in the first electronic device identifies the screen frame of the second electronic device according to a pre-trained object detection model. The first electronic device may train the target detection model in an embodiment of: the first electronic device learns the characteristics of the screen frame of the second electronic device by taking a large number of frame images of real-time image information as training samples and taking the screen frame of the second electronic device as a training target, and finally, the screen frame of the second electronic device is taken as output to obtain the target detection model.
In implementation, after receiving the real-time image information, the target detection module takes the real-time image information as the input of the pre-trained target detection model, then recognizes the screen frame of the second electronic device through the target detection model, and finally outputs the recognized screen frame of the second electronic device, thereby determining the display interface area of the second electronic device according to the screen frame of the second electronic device.
In another possible implementation manner, if the result of recognizing the screen frame by the target detection model is poor, for example, in a dim light environment with poor light, it is difficult to recognize the screen frame of the second electronic device from the real-time image information; or, if the second electronic device is an intelligent projection, the projection interface of the intelligent projection may not have an obvious screen frame, and therefore it may be difficult to accurately identify the screen frame through the pre-trained target detection model. In this scenario, in implementation, the first electronic device may send an anchor point generation instruction to the second electronic device through interaction with the second electronic device, so that the second electronic device generates anchor points at four corners of the display interface (or the projection interface of the smart projection), which are convenient for the first electronic device to detect. After the second electronic device generates the anchor points, the first electronic device detects four anchor points included in the live image information through the target detection module (for example, A, B, C, D points of four screen frame corner positions in the display interface of the second electronic device shown in fig. 5b (1-1)), and then the screen frame of the second electronic device can also be determined according to A, B, C, D anchor points.
S503: the first electronic device receives touch operation of a user on the touch panel, and obtains a first touch coordinate point (or sequence) after screening to obtain a user touch event.
In implementation, after receiving a touch operation of a user on the touch panel through the touch sensor 280K, the processor in the first electronic device acquires a touch coordinate point (or a sequence) corresponding to the touch operation. In one possible implementation, if the touch operation is a click operation, the processor processes the touch event into a user touch event according to information such as coordinates and a timestamp of a touch point of the click operation. In another possible implementation, if the touch operation is a sliding operation, a plurality of touch coordinate points of the sliding operation, that is, a touch coordinate sequence, are obtained, where the touch coordinate sequence at least includes a touch sliding start coordinate and a touch sliding end coordinate, and determines a sliding distance and a sliding direction of the sliding operation, and further, the touch coordinate sequence is processed into a user touch event according to information such as the touch sliding start coordinate, the touch sliding end coordinate, the sliding distance, the sliding direction, and a timestamp. In other possible embodiments, if the touch operation is a long-press click operation, the processor processes the information into a user touch event according to the coordinates of the touch point of the long-press click operation, the length of the long-press click operation and the like; or if the touch operation is a multi-finger operation event, the processor processes the touch operation into a user touch event according to the touch point coordinates, the time stamps and other information of each finger; other possible user operations are processed based on the same principle to obtain the user touch event, which is not described herein again.
Based on the screen frame of the second electronic device identified in S502, screening a first touch coordinate point (or sequence) in the user touch operation, which belongs to the screen frame region, so as to screen out a virtual remote control operation executed by the user on the first electronic device in a display region of the second electronic device, and further generate a virtual touch event according to the first touch coordinate point (or sequence); and the first electronic equipment ignores the second touch coordinate point (or sequence) which does not belong to the screen frame area, and further does not generate a virtual touch event according to the second touch coordinate point (or sequence). By detecting the screen frame of the second electronic device and then screening the touch coordinate points (or sequences) corresponding to the user touch event according to the detected screen frame, the user touch event outside the screen frame area of the second electronic device can be ignored, so that the calculation data amount during the generation of the virtual touch event is reduced, and the accuracy of the generation of the virtual touch event can be improved.
S504: the first electronic device converts the first touch coordinate point (or sequence) into a relative coordinate point (or sequence) of the second electronic device and generates a virtual touch event.
Specifically, the display interface of the second electronic device included in the live image displayed on the interface of the first electronic device is essentially a two-dimensional projection of the second electronic device in a three-dimensional space captured by the imaging device, and since there is an arbitrary angle of capture of the first electronic device, the two-dimensional projection of the second electronic device displayed on the interface of the first electronic device may be a trapezoid, for example, a schematic diagram of a screen frame of the second electronic device shown in fig. 6a (1) and fig. 6b (1). It can be understood that the touch coordinates of the user touch operation received on the first electronic device and the coordinates on the second electronic device cannot correspond to each other one to one, therefore, in order to ensure the accuracy of generating the virtual touch event by the first electronic device, after the first electronic device obtains the first touch coordinate point (or sequence), the first touch coordinate point (or sequence) may be converted into a corresponding touch relative coordinate point (or sequence) on the second electronic device by principles of augmented reality technology, and the first electronic device generates a virtual touch event according to the touch relative coordinate point (or sequence), therefore, after the second electronic device receives and analyzes the virtual touch event, the virtual touch operation on the second electronic device can be obtained according to the relative coordinate point (or sequence), and therefore accurate touch coordinates on the second electronic device are obtained.
In the implementation process of the coordinate conversion, it is assumed that coordinates of four corner points of a screen frame of the second electronic device in the display interface of the first electronic device are respectively represented by (x1, y1), (x2, y2), (x3, y3), and (x4, y4), and a touch coordinate point of a user touch event (assumed to be a click operation) on the display interface of the first electronic device is represented by (x, y) (if the user touch event is a slide operation, the touch coordinate point can be represented by a touch coordinate sequence, in this embodiment, only one coordinate point is taken as an example, coordinate conversion manners of other coordinate points in the coordinate sequence are the same, and details are not described later), and a relative coordinate point of a virtual touch operation on the display interface of the second electronic device after the conversion is represented by (x ', y'). And a set of vertical frames of the second electronic device is represented by lines L1 and L3, a set of parallel frames of the second electronic device is represented by lines L2 and L4, and based on that the frame of the screen of the second electronic device is a rectangle, it can be obtained that the vertical frames of the set are parallel to each other and the horizontal frames of the set are parallel to each other in the three-dimensional space. The following describes, with reference to fig. 6a to 6b, an exemplary embodiment of the coordinate transformation performed by the coordinate transformation module by the first electronic device, which includes the following possible scenarios:
scene 1, in the real-time image information shot by the first electronic device, the screen frames of the second electronic device are displayed as a group of vertical frames which are parallel to each other on the two-dimensional projection interface. Illustratively, referring to lines L1, L3 shown in fig. 6a (1), line L1 is the left vertical frame of the screen frame of the second electronic device, line L3 is the right vertical frame, and lines L1, L3 are two parallel lines on the display interface of the first electronic device.
In an implementation process of determining the relative coordinates (x ', y') of the virtual touch event, the first electronic device determines x 'and y', respectively.
The value of the virtual touch event coordinate x ' depends on the relative distance between (x ', y ') and any vertical frame on the second electronic device in the three-dimensional space. For example, referring to the content shown in fig. 6a, a set of vertical frames of the second electronic device, such as lines L1 and L3, are shown as being parallel to each other in the two-dimensional projection plane, so that the relative distance between (x ', y') and the vertical frame at the same position in the three-dimensional space can be obtained by analogy with the relative distance relationship between (x, y) and any vertical frame on the display interface of the first electronic device (i.e., the two-dimensional projection interface of the second electronic device). In a possible implementation manner, taking the relative distance between the touch point and the left vertical frame as an example, the virtual touch event coordinate x' on the second electronic device can be obtained according to the following formula 1:
Figure BDA0002746280490000181
wherein w in the formula is a width of a screen frame of the second electronic device, x in the formula is an abscissa in coordinates of a user touch event on the first electronic device, x1 is an abscissa of a left screen frame of the second electronic device in a display interface of the first electronic device, and x2 is an abscissa of a right screen frame.
It should be noted that the size information of the second electronic device (for example, including the width information of the screen frame of the second electronic device and the height information involved in the following embodiments) may be requested by the first electronic device from the second electronic device when the first electronic device establishes a communication connection with the second electronic device for the first time, or may be actively sent by the second electronic device to the first electronic device. Then, after the first electronic device acquires the size information of the second electronic device, the size information of the second electronic device may be stored locally for subsequent use. In addition to the size information of the second electronic device, the first electronic device may further obtain information related to other second electronic devices, such as model information of the second electronic device, for example, before the first electronic device generates the virtual touch event, the first electronic device may also determine the size information of the second electronic device according to the obtained model information of the second electronic device, where the first electronic device locally stores a corresponding relationship between the model information and the size information of the second electronic device, or the first electronic device may locally perform network query to determine the size information of the second electronic device.
The value of the virtual touch event coordinate y ' depends on the relative distance between (x ', y ') and any horizontal frame on the second electronic device in the three-dimensional space, and since a group of horizontal frames in the screen frame displayed on the display interface of the first electronic device in the scene 1 are not parallel to each other, for example, lines L2 and L4 in (1) of fig. 6a are displayed as being not parallel to each other, the relative distance between (x ', y ') and the horizontal frame at the same position in the three-dimensional space cannot be obtained through the relative distance relationship between (x, y) and any horizontal frame on the display interface of the first electronic device (i.e., the two-dimensional projection interface of the second electronic device), and so on. Therefore, y 'cannot be determined according to the above embodiment of determining x'.
Although the set of horizontal borders of the second electronic device are shown as being non-parallel to each other on the two-dimensional projection interface, the set of screen borders in the horizontal direction of the second electronic device in three-dimensional space are essentially parallel. On the basis, when implemented, the distance relationship between (x ', y') and the screen frame of the second electronic device in the three-dimensional space can be reversely deduced through a point (x, y) on the two-dimensional projection plane, a line L2 (an upper horizontal frame in the two-dimensional projection interface) and a line L4 (a lower screen frame in the two-dimensional projection interface). Specifically, the three-dimensional projection principle may be adopted, and the lines L2 and L4 are extended, so as to obtain the intersection point (x5, y5) of the two horizontal frames in the two-dimensional projection plane, as shown in (3) in fig. 6 a; then, the intersection points (x5, y5) and (x, y) are connected to obtain intersection points (x6, y6) of the connection line and a line L1 (a left vertical frame in the two-dimensional projection interface). It can be understood that, according to the three-dimensional projection principle, the relative distance between the midpoint (x ', y') in the three-dimensional space and the horizontal frame at the same position can be analogized by dividing the L1 into any sub-line segment behind the sub-line segments L1a and L1b by taking the intersection point (x6, y6) as the dividing point in the two-dimensional projection interface relative to the L1. In one possible implementation manner, taking the proportion of the sub-line segment L1b relative to L1 as an example, the virtual touch event coordinate y' on the second electronic device can be implemented according to the following formula 2:
Figure BDA0002746280490000182
wherein h in the formula is the height of the screen frame of the second electronic device, y6 in the formula is the intersection point of the left screen frame L1 on the first electronic device according to the above embodiment, y3 is the ordinate of the intersection point of the left screen frame L1 and the lower screen frame L4, and y1 is the ordinate of the intersection point of the left screen frame L1 and the upper screen frame L2.
Scene 2, in the real-time image information shot by the first electronic device, the screen frames of the second electronic device are displayed as a group of horizontal frames which are parallel to each other on the two-dimensional projection interface. Illustratively, referring to lines L2 and L4 shown in (1) in fig. 6b, line L2 is an upper screen frame of the second electronic device, line L4 is a lower screen frame, and lines L2 and L4 are two parallel lines on the display interface of the first electronic device.
In an implementation process of determining the relative coordinates (x ', y') of the virtual touch event, the first electronic device determines x 'and y', respectively.
The value of the virtual touch event coordinate x ' depends on the relative distance between (x ', y ') and any vertical frame on the second electronic device in the three-dimensional space. Since a set of vertical borders among the borders of the screen displayed on the display interface of the first electronic device under scene 2 are not parallel to each other, e.g. lines L1, L3 shown in fig. 6b are displayed as not parallel to each other, x 'under scene 2 cannot be determined according to the embodiment of determining x' in scene 1, and x 'under scene 2 can be determined based on the same principle as the embodiment of determining y' in scene 1.
Specifically, the three-dimensional projection principle may be adopted to extend the lines L1 and L3, so as to obtain the intersection points (x5 ', y 5') of the two vertical frames in the two-dimensional projection plane, as shown in (3) in fig. 6 b; then, the intersection points (x5 ', y 5') and (x, y) are connected, and an intersection point (x6 ', y 6') of the connection line and a line L2 (an upper horizontal frame in the two-dimensional projection interface) is obtained. It can be understood that according to the three-dimensional projection principle, the relative distance between the midpoint (x ', y') in the three-dimensional space and the vertical frame at the same position can be analogized by dividing the L2 into the proportion of any sub-line segment behind the sub-line short L2a and L2b relative to the L2 by taking the intersection point (x6 ', y 6') as a dividing point in the two-dimensional projection interface. In one possible implementation, taking the proportion of the sub-line segment L2b relative to L2 as an example, the virtual touch event coordinate x' on the second electronic device can be implemented according to the following disclosure
Formula 3 gives:
Figure BDA0002746280490000191
wherein w in the formula is the width of the screen frame of the second electronic device, x 6' in the formula is the intersection point of the upper screen frame L2 on the first electronic device according to the above embodiment, x2 is the abscissa of the intersection point of the upper screen frame L2 and the right screen frame L3, and x1 is the abscissa of the intersection point of the left screen frame L1 and the upper screen frame L2.
The value of the virtual touch event coordinate y ' depends on the relative distance between (x ', y ') on the second electronic device and any horizontal frame in the three-dimensional space. Illustratively, referring to the depiction in FIG. 6b, lines L2, L4 are shown as being parallel to each other, thus determining y 'in scene 2 based on the same principles as the implementation of determining x' in scene 1. In one possible implementation manner, taking the relative distance between the touch point and the upper horizontal border as an example, the virtual touch event coordinate y' on the second electronic device can be obtained according to the following formula 4:
Figure BDA0002746280490000192
wherein h in the formula is the height of the screen frame of the second electronic device, y in the formula is the ordinate in the coordinates of the user touch event on the first electronic device, y1 is the ordinate of the upper screen frame of the second electronic device in the display interface of the first electronic device, and y2 is the ordinate of the lower screen frame.
Scene 3, in the real-time image information shot by the first electronic equipment, the screen frame of the second electronic equipment is displayed on the two-dimensional projection interface as a group of horizontal frames and a group of vertical frames which are not parallel to each other.
Illustratively, in connection with what is shown in FIGS. 6a through 6b, a set of horizontal borders are shown as lines L2, L4 in FIG. 6a, and a set of vertical borders are shown as lines L1, L3 in FIG. 6 b. If the group of vertical frames or the group of horizontal frames are displayed in a scene which is not parallel to each other, determining a virtual touch event coordinate x 'on the second electronic equipment in the scene 3 according to the same principle of the implementation mode of determining x' in the scene 2; and determining the virtual touch event coordinate y 'on the second electronic device in the scene 3 according to the same principle as the embodiment for determining y' in the scene 1, which is not described herein again in detail.
Scene 4: if the real-time image information shot by the first electronic equipment, the screen frame of the second electronic equipment is displayed on the two-dimensional projection interface as a group of horizontal frames and a group of vertical frames which are parallel to each other.
Illustratively, and illustratively, in connection with what is shown in FIGS. 6a through 6b, a set of horizontal borders are shown as lines L2, L4 in FIG. 6b, and a set of vertical borders are shown as lines L1, L3 in FIG. 6 a. If a group of vertical frames or a group of horizontal frames are displayed in a scene parallel to each other, determining x 'in a scene 4 according to the same principle of the implementation of determining x' in the scene 1, and the detailed description of the implementation is omitted here; similarly, y 'in the scene 4 is determined according to the same principle as the embodiment for determining y' in the scene 2, and the detailed description of the embodiment is omitted here.
S505: the first electronic device establishes a communication connection with the second electronic device.
In order to ensure that the first electronic device can remotely control the second electronic device, the first electronic device establishes a communication connection with the second electronic device when the remote control is implemented. In one possible implementation, the first electronic device and the second electronic device may access the same local area network to establish a communication connection. Illustratively, a wireless communication channel may be established by Wi-Fi P2P technology, and such a channel has the characteristic of low delay. In another possible implementation, the first electronic device may establish a near-field communication connection with the second electronic device.
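For illustration only, a minimal connection sketch, assuming both devices are already on the same local area network and the second electronic device listens on a known TCP port; the port number and the use of a plain TCP socket are assumptions of this example, and Wi-Fi P2P setup and device discovery are not shown.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

// Sketch of S505: open a low-latency channel from the first device to the second device.
public final class RemoteControlChannel {
    private static final int CONTROL_PORT = 50005;    // assumed port, for illustration only
    private static final int CONNECT_TIMEOUT_MS = 2000;

    public static Socket connect(String secondDeviceAddress) throws IOException {
        Socket socket = new Socket();
        socket.setTcpNoDelay(true);                    // disable Nagle's algorithm to keep per-event latency low
        socket.connect(new InetSocketAddress(secondDeviceAddress, CONTROL_PORT), CONNECT_TIMEOUT_MS);
        return socket;
    }
}
```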
It should be noted that the execution timing of S505 is not limited in this application. For example, the first electronic device may establish a communication connection with the second electronic device before the virtual touch event generation scenario arises, for instance when the first electronic device and the second electronic device are always connected to the same local area network, that is, the communication connection is always maintained. Alternatively, the communication connection may be established when the first electronic device described in the foregoing embodiment needs to send an anchor point generation instruction to the second electronic device. It can be understood that the communication connection between the first electronic device and the second electronic device can be established whenever the first electronic device needs to interact with the second electronic device.
S506: the first electronic equipment sends the virtual touch event to the second electronic equipment; wherein the relative coordinate point (or sequence) is carried in the virtual touch event.
In this implementation, only the touch coordinate sequence contained in the touch event is transmitted to the second electronic device. Because the amount of transmitted data is small, the time delay generated by data transmission is low, which well resolves the poor responsiveness of technical solutions that encode, transmit, and decode the entire display interface of the second electronic device.
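For illustration only, a sketch of how such a lightweight transmission might look on the first electronic device; the line-based wire format used here is an assumption of this example, not a format defined by this application.

```java
import java.io.IOException;
import java.io.OutputStreamWriter;
import java.io.PrintWriter;
import java.net.Socket;
import java.nio.charset.StandardCharsets;
import java.util.List;

// Sketch of S506: each virtual touch event carries only a short relative-coordinate
// sequence, a few dozen bytes rather than an encoded video frame.
public final class VirtualTouchEventSender {

    public static void send(Socket socket, List<double[]> relativeCoordinates) throws IOException {
        PrintWriter out = new PrintWriter(
                new OutputStreamWriter(socket.getOutputStream(), StandardCharsets.UTF_8), true);
        for (double[] point : relativeCoordinates) {
            out.printf("%.2f,%.2f%n", point[0], point[1]);  // one relative (x', y') pair per line
        }
        out.println("END");                                  // marks the end of one virtual touch event
    }
}
```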
S507: the second electronic device parses the virtual touch event and executes the corresponding operation according to the relative coordinate point (or sequence).
After receiving the virtual touch event sent by the first electronic device, the second electronic device performs the corresponding processing through its own operating system. Since the second electronic device is also an electronic device of the kind described above, its hardware architecture may likewise be as shown in fig. 2 and its software architecture as shown in fig. 3. A software program and/or module corresponding to the software architecture of the second electronic device may be stored in the memory 140, and the processor 130 may execute the software program and/or module stored in the memory 140 to perform the remote control method provided in the embodiments of the present application.
In implementation, after receiving and parsing the relative coordinate point (or sequence) in the virtual touch event sent by the first electronic device, the second electronic device determines the touch position of the relative coordinate point (or sequence) on its own display interface and identifies the control to which the touch position relates; it then operates the corresponding control so that the second electronic device executes the corresponding operation according to the virtual touch event.
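For illustration only, a receiver-side sketch of this hit-testing step; the control registry and the rectangular bounds representation are assumptions made for this example rather than details of the second electronic device's operating system.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of S507: map the received relative coordinate onto the display interface and
// identify the control whose bounds contain it.
public final class VirtualTouchEventHandler {

    /** Axis-aligned control bounds in the second device's own screen coordinates. */
    record Bounds(double left, double top, double right, double bottom) {
        boolean contains(double x, double y) {
            return x >= left && x <= right && y >= top && y <= bottom;
        }
    }

    private final Map<String, Bounds> controls = new LinkedHashMap<>(); // registration order decides precedence

    void registerControl(String name, Bounds bounds) {
        controls.put(name, bounds);
    }

    /** Returns the name of the control hit by the relative coordinate, or null if none matches. */
    String hitTest(double x, double y) {
        for (Map.Entry<String, Bounds> entry : controls.entrySet()) {
            if (entry.getValue().contains(x, y)) {
                return entry.getKey();
            }
        }
        return null;
    }
}
```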
For example, suppose the user's touch event is a bottom-up sliding operation on the right half of the screen frame of the second electronic device displayed in the display interface of the first electronic device. After parsing the coordinate sequence from the received virtual touch event, the second electronic device determines that the sequence includes a sliding start coordinate and a sliding end coordinate, that both coordinates lie in the right half of the display interface, and that the sliding direction is the positive direction of the y axis (assuming the coordinate axes are established according to the landscape display interface, as in the manner shown in FIGS. 6a and 6b). It thereby determines that the touch event sent by the first electronic device is intended to increase the volume of the currently played program, identifies the volume control as the control to be invoked by the virtual touch event, and executes the callback operation of the virtual touch event through the volume control, thereby increasing the volume.
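For illustration only, a sketch of this gesture interpretation; the right-half test and the sign convention follow the coordinate axes of FIGS. 6a and 6b, and the method and parameter names are assumptions of this example.

```java
// Sketch of classifying the worked example above: a slide that starts and ends in the
// right half of the screen and moves in the positive y direction is read as "volume up".
public final class GestureInterpreter {

    /**
     * @param start       sliding start coordinate (x', y') on the second device
     * @param end         sliding end coordinate (x', y') on the second device
     * @param screenWidth width of the second device's screen
     */
    static boolean isVolumeUpSwipe(double[] start, double[] end, double screenWidth) {
        boolean inRightHalf = start[0] > screenWidth / 2 && end[0] > screenWidth / 2;
        boolean positiveYDirection = end[1] > start[1];   // bottom-up slide per FIGS. 6a and 6b
        return inRightHalf && positiveYDirection;
    }
}
```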
According to the embodiments described above, the present application enables the first electronic device to generate a virtual touch event for the second electronic device. In addition, after the first electronic device establishes a communication connection with the second electronic device, the transmitted data contains only the virtual touch event obtained from the touch coordinate point (or sequence) of the user's touch event. The amount of transmitted data is therefore small and the time delay is low, which better improves the accuracy with which the first electronic device controls the second electronic device and meets the requirements of various remote control scenarios of the second electronic device.
In the embodiments provided in the present application, the method is described from the perspective of an electronic device acting as the execution subject. To implement the functions of the method provided in the embodiments of the present application, the first electronic device may include a hardware structure and/or a software module, and implement the functions in the form of a hardware structure, a software module, or a combination of a hardware structure and a software module. Whether a given function is implemented as a hardware structure, a software module, or a hardware structure plus a software module depends on the particular application and the design constraints imposed on the technical solution.
Based on the foregoing embodiments, an embodiment of the present application provides a remote control apparatus, which is applied to a first electronic device and is used to implement the remote control method provided by the embodiments of the present application. Referring to fig. 7, the apparatus 700 includes a transceiver unit 701 and a processing unit 702. The transceiver unit 701 is configured to receive a first operation. The processing unit 702 is configured to: start the camera device in response to receiving the first operation; acquire a first image by using the camera device, wherein the first image comprises a display interface area of the second electronic device, and the content in the display interface area of the second electronic device is the current display interface of the second electronic device; and display a first user interface, wherein the first user interface comprises the first image. The transceiver unit 701 is further configured to receive a touch operation for the first user interface. The processing unit 702 is configured to, in response to receiving the touch operation for the first user interface, obtain touch point coordinates corresponding to the touch operation for the first user interface and generate a virtual touch event based on the touch point coordinates, wherein the virtual touch event includes relative coordinates in the current display interface of the second electronic device. The virtual touch event is sent to the second electronic device through the transceiver unit 701, so that after receiving the virtual touch event, the second electronic device executes, in response to the virtual touch event, the operation corresponding to the relative coordinates in its current display interface.
In one possible design, when receiving the first operation, the transceiver unit 701 is specifically configured to: receive an operation for a first application icon displayed on the first electronic device; or receive a first voice operation; or receive a first gesture operation.
In one possible design, before the transceiver unit 701 receives the touch operation for the first user interface, the processing unit 702 is further configured to determine that the area inside the screen frame of the second electronic device in the first user interface is the display interface area of the second electronic device.
In one possible design, when determining the area inside the screen frame of the second electronic device, the processing unit 702 is specifically configured to: send an anchor point generation instruction to the second electronic device through the transceiver unit 701, so that after receiving the anchor point generation instruction, the second electronic device generates an anchor point on its display interface in response to the instruction; and determine the area inside the screen frame of the second electronic device according to the acquired information of the anchor point in the first image.
In one possible design, after identifying the display interface area of the second electronic device, the processing unit 702 is further configured to: determine whether the size of the display interface area of the second electronic device is smaller than a first threshold; and if the size of the display interface area of the second electronic device is smaller than the first threshold, adjust the focal length of the camera device to a first focal length.
In one possible design, after the transceiver unit 701 receives the touch operation for the first user interface and before the virtual touch event is generated, the processing unit 702 is further configured to: acquire at least one touch point coordinate; determine whether the at least one touch point coordinate is within the display interface area of the second electronic device; and, in response to determining that the at least one touch point coordinate is within the display interface area of the second electronic device, generate the virtual touch event.
In one possible design, when generating the virtual touch event based on the touch point coordinates, the processing unit 702 is specifically configured to: convert the acquired touch point coordinates corresponding to the touch operation for the first user interface into relative coordinates in the current display interface of the second electronic device; and generate the virtual touch event according to the relative coordinates in the current display interface of the second electronic device.
In a possible design, the touch operation includes a click operation and/or a slide operation, and the touch point coordinates corresponding to the touch operation for the first user interface include a single coordinate and/or a plurality of coordinates.
In one possible design, the first electronic device is a mobile phone, the second electronic device is a smart television, the camera device is a rear camera of the mobile phone, and a current display interface of the second electronic device is a menu interface of the smart television; the first user interface is a display interface of the first electronic device after entering a remote control mode; the first image is an image comprising a menu interface of the intelligent television, the menu interface of the intelligent television comprises a plurality of controls, and the controls correspond to different functions; the display interface area of the second electronic device is an image area of a menu interface of the smart television acquired by the mobile phone; the touch operation aiming at the first user interface is a click operation aiming at one of the plurality of controls in the image of the menu interface of the smart television in the first user interface; the second electronic device executes the operation corresponding to the relative coordinate in the current display interface of the second electronic device, and the second electronic device executes the function corresponding to one of the plurality of controls in the image of the menu interface of the smart television.
Through the above description of the embodiments, it is clear to those skilled in the art that, for convenience and simplicity of description, the foregoing division of the functional modules is merely used as an example, and in practical applications, the above function distribution may be completed by different functional modules according to needs, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the above described functions. For the specific working processes of the system, the apparatus and the unit described above, reference may be made to the corresponding processes in the foregoing method embodiments, and details are not described here again.
Each functional unit in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solutions of the embodiments of the present application, in essence, or the part thereof contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes: flash memory, removable hard drive, read-only memory, random access memory, magnetic disk, optical disc, and the like.
The above description is only a specific implementation of the embodiments of the present application, but the scope of the embodiments of the present application is not limited thereto, and any changes or substitutions within the technical scope disclosed in the embodiments of the present application should be covered by the scope of the embodiments of the present application. Therefore, the protection scope of the embodiments of the present application shall be subject to the protection scope of the claims.

Claims (19)

1. A remote control method is applied to a first electronic device, the first electronic device comprises a camera device, and the first electronic device and a second electronic device establish wireless connection, and the method comprises the following steps:
the first electronic equipment receives a first operation;
in response to receiving the first operation, the first electronic equipment starts the camera device;
the first electronic equipment acquires a first image by using the camera device, wherein the first image comprises a display interface area of the second electronic equipment, and the content in the display interface area of the second electronic equipment is the current display interface of the second electronic equipment;
the first electronic equipment displays a first user interface, wherein the first user interface comprises the first image;
the first electronic device receives touch operation aiming at the first user interface;
in response to receiving the touch operation aiming at the first user interface, the first electronic device acquires touch point coordinates corresponding to the touch operation aiming at the first user interface and generates a virtual touch event based on the touch point coordinates, wherein the virtual touch event comprises relative coordinates in a current display interface of the second electronic device;
the first electronic device sends the virtual touch event to the second electronic device, so that after the second electronic device receives the virtual touch event, the second electronic device responds to the virtual touch event and executes an operation corresponding to the relative coordinate in the current display interface of the second electronic device.
2. The method of claim 1, wherein the first electronic device receives a first operation comprising:
the first electronic equipment displays a first application icon, and receives an operation aiming at the first application icon; or,
the first electronic equipment receives a first voice operation; or,
the first electronic device receives a first gesture operation.
3. The method of claim 1, wherein prior to the first electronic device receiving the touch operation for the first user interface, the method further comprises: the first electronic device determines that an area inside a screen frame of the second electronic device in the first user interface is a display interface area of the second electronic device.
4. The method of claim 3, wherein the first electronic device determining the area inside the screen frame of the second electronic device comprises:
the first electronic device sends an anchor point generating instruction to the second electronic device, so that after the second electronic device receives the anchor point generating instruction, an anchor point is generated on a display interface in response to the anchor point generating instruction;
and the first electronic equipment determines an area inside a screen frame of the second electronic equipment according to the acquired information of the anchor point in the first image.
5. The method of claim 3, wherein after the first electronic device identifies the display interface area of the second electronic device, the method further comprises:
judging whether the size of a display interface area of the second electronic equipment is smaller than a first threshold value or not;
if the size of the display interface area of the second electronic device is smaller than a first threshold, the first electronic device adjusts the focal length of the camera device to a first focal length.
6. The method of claim 3 or 4, wherein after the first electronic device receives the touch operation directed to the first user interface and before the virtual touch event is generated, the method further comprises:
the first electronic equipment acquires at least one touch point coordinate;
the first electronic device determines whether the at least one touch point coordinate is within a display interface area of the second electronic device;
in response to the first electronic device determining that the at least one touch point coordinate is within the display interface area of the second electronic device, the first electronic device generates the virtual touch event.
7. The method of any of claims 1-6, wherein generating the virtual touch event based on the touch point coordinates comprises:
the first electronic equipment converts the acquired touch point coordinate corresponding to the touch operation of the first user interface into a relative coordinate in a current display interface of the second electronic equipment;
and the first electronic equipment generates the virtual touch event according to the relative coordinates in the current display interface of the second electronic equipment.
8. The method according to any one of claims 1 to 7, wherein the touch operation comprises a click operation and/or a slide operation, and the touch point coordinates corresponding to the touch operation for the first user interface comprise a single coordinate and/or a plurality of coordinates.
9. The method according to any one of claims 1 to 8, comprising:
the first electronic equipment is a mobile phone, the second electronic equipment is an intelligent television, the camera device is a rear camera of the mobile phone, and a current display interface of the second electronic equipment is a menu interface of the intelligent television;
the first user interface is a display interface of the first electronic device after entering a remote control mode;
the first image is an image comprising a menu interface of the intelligent television, the menu interface of the intelligent television comprises a plurality of controls, and the controls correspond to different functions;
the display interface area of the second electronic device is an image area of a menu interface of the smart television acquired by the mobile phone;
the touch operation aiming at the first user interface is a click operation aiming at one of the plurality of controls in the image of the menu interface of the smart television in the first user interface;
the second electronic device executes the operation corresponding to the relative coordinate in the current display interface of the second electronic device, and the second electronic device executes the function corresponding to one of the plurality of controls in the image of the menu interface of the smart television.
10. An electronic device, corresponding to a first electronic device, the first electronic device and a second electronic device establishing a wireless connection, wherein the first electronic device comprises:
a camera device;
the touch screen comprises a touch panel and a display screen;
one or more processors;
a memory;
a plurality of application programs;
and one or more computer programs, wherein the one or more computer programs are stored in the memory, the one or more computer programs comprising instructions which, when executed by the first electronic device, cause the first electronic device to perform the steps of:
receiving a first operation;
in response to receiving the first operation, starting the camera device;
acquiring a first image by using the camera device, wherein the first image comprises a display interface area of the second electronic equipment, and the content in the display interface area of the second electronic equipment is a current display interface of the second electronic equipment;
displaying a first user interface, wherein the first user interface comprises the first image;
receiving a touch operation directed to the first user interface;
in response to receiving the touch operation aiming at the first user interface, acquiring touch point coordinates corresponding to the touch operation aiming at the first user interface, and generating a virtual touch event based on the touch point coordinates, wherein the virtual touch event comprises relative coordinates in a current display interface of the second electronic equipment;
and sending the virtual touch event to the second electronic device, so that after the second electronic device receives the virtual touch event, the second electronic device responds to the virtual touch event and executes an operation corresponding to the relative coordinate in the current display interface of the second electronic device.
11. The electronic device of claim 10, wherein the first electronic device receives a first operation comprising:
displaying a first application icon, and receiving an operation aiming at the first application icon; or,
receiving a first voice operation; or,
a first gesture operation is received.
12. The electronic device of claim 10, wherein the instructions, when executed by the first electronic device, cause the first electronic device to further perform: determining an area inside a screen frame of the second electronic device in the first user interface as a display interface area of the second electronic device before receiving a touch operation for the first user interface.
13. The electronic device according to claim 12, wherein the determining, by the first electronic device, the area inside the screen frame of the second electronic device specifically includes:
the first electronic device sends an anchor point generating instruction to the second electronic device, so that after the second electronic device receives the anchor point generating instruction, an anchor point is generated on a display interface in response to the anchor point generating instruction;
and the first electronic equipment determines an area inside a screen frame of the second electronic equipment according to the acquired information of the anchor point in the first image.
14. The electronic device of claim 12, wherein the instructions, when executed by the first electronic device, cause the first electronic device to identify the display interface region of the second electronic device to further perform:
judging whether the size of a display interface area of the second electronic equipment is smaller than a first threshold value or not;
and if the size of the display interface area of the second electronic equipment is smaller than a first threshold value, adjusting the focal length of the camera device to a first focal length.
15. The electronic device of claim 12 or 13, wherein the instructions, when executed by the first electronic device, cause the first electronic device to, after receiving a touch operation directed to the first user interface, further perform, before generating the virtual touch event:
acquiring at least one touch point coordinate;
determining whether the at least one touch point coordinate is within a display interface area of the second electronic device;
generating the virtual touch event in response to the first electronic device determining that the at least one touch point coordinate is within a display interface area of the second electronic device.
16. The electronic device of any of claims 10-15, wherein the instructions, when executed by the first electronic device, cause the first electronic device to perform, when generating a virtual touch event based on the touch point coordinates, specifically:
converting the acquired touch point coordinate corresponding to the touch operation aiming at the first user interface into a relative coordinate in a current display interface of the second electronic equipment;
and generating the virtual touch event according to the relative coordinates in the current display interface of the second electronic equipment.
17. The electronic device according to any one of claims 10 to 16, wherein the touch operation includes a click operation and/or a slide operation, and the touch point coordinates corresponding to the touch operation for the first user interface include a single coordinate and/or a plurality of coordinates.
18. The electronic device of any of claims 10-17, comprising:
the first electronic equipment is a mobile phone, the second electronic equipment is an intelligent television, the camera device is a rear camera of the mobile phone, and a current display interface of the second electronic equipment is a menu interface of the intelligent television;
the first user interface is a display interface of the first electronic device after entering a remote control mode;
the first image is an image comprising a menu interface of the intelligent television, the menu interface of the intelligent television comprises a plurality of controls, and the controls correspond to different functions;
the display interface area of the second electronic equipment is an image area of a menu interface of the smart television acquired by the mobile phone;
the touch operation aiming at the first user interface is a click operation aiming at one of the plurality of controls in the image of the menu interface of the smart television in the first user interface;
the second electronic device executes the operation corresponding to the relative coordinate in the current display interface of the second electronic device, and the second electronic device executes the function corresponding to one of the plurality of controls in the image of the menu interface of the smart television.
19. A remote control system, comprising a first electronic device and a second electronic device, wherein the first electronic device comprises a camera device, and the first electronic device and the second electronic device establish a wireless connection, wherein:
the first electronic equipment is used for receiving a first operation;
in response to receiving the first operation, the first electronic equipment starts the camera device;
the first electronic equipment acquires a first image by using the camera device, wherein the first image comprises a display interface area of the second electronic equipment, and the content in the display interface area of the second electronic equipment is the current display interface of the second electronic equipment;
the first electronic equipment is used for displaying a first user interface, and the first user interface comprises the first image;
the first electronic device is used for receiving touch operation aiming at the first user interface;
in response to receiving the touch operation aiming at the first user interface, the first electronic device acquires touch point coordinates corresponding to the touch operation aiming at the first user interface and generates a virtual touch event based on the touch point coordinates, wherein the virtual touch event comprises relative coordinates in a current display interface of the second electronic device;
the first electronic device sends the virtual touch event to the second electronic device;
the second electronic device receives the virtual touch event;
and responding to the received virtual touch event, and executing operation corresponding to the relative coordinates in the current display interface of the second electronic equipment by the second electronic equipment.
CN202011167645.6A 2020-10-27 2020-10-27 Remote control method, electronic equipment and system Active CN114513689B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202011167645.6A CN114513689B (en) 2020-10-27 2020-10-27 Remote control method, electronic equipment and system
PCT/CN2021/116179 WO2022088974A1 (en) 2020-10-27 2021-09-02 Remote control method, electronic device and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011167645.6A CN114513689B (en) 2020-10-27 2020-10-27 Remote control method, electronic equipment and system

Publications (2)

Publication Number Publication Date
CN114513689A true CN114513689A (en) 2022-05-17
CN114513689B CN114513689B (en) 2023-09-12

Family

ID=81381838

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011167645.6A Active CN114513689B (en) 2020-10-27 2020-10-27 Remote control method, electronic equipment and system

Country Status (2)

Country Link
CN (1) CN114513689B (en)
WO (1) WO2022088974A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024114069A1 (en) * 2022-11-29 2024-06-06 京东方科技集团股份有限公司 Multi-device cooperative control method, display device, and system

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114895834A (en) * 2022-05-30 2022-08-12 四川启睿克科技有限公司 Display method of intelligent household equipment control page
CN115167752A (en) * 2022-06-28 2022-10-11 华人运通(上海)云计算科技有限公司 Single-screen system and control method thereof

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN201278211Y (en) * 2008-09-08 2009-07-22 Tcl集团股份有限公司 Remote controller with touch screen and camera
CN103945251A (en) * 2014-04-03 2014-07-23 上海斐讯数据通信技术有限公司 Remote control system and mobile terminal
US20150181278A1 (en) * 2013-12-24 2015-06-25 Samsung Electronics Co., Ltd. Display apparatus and display method thereof
CN110866495A (en) * 2019-11-14 2020-03-06 杭州睿琪软件有限公司 Bill image recognition method, bill image recognition device, bill image recognition equipment, training method and storage medium

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI702843B (en) * 2012-02-15 2020-08-21 立視科技股份有限公司 Television system operated with remote touch control
CN103491444B (en) * 2012-06-14 2016-09-21 腾讯科技(深圳)有限公司 Image interaction method and system and the display device of correspondence
CN104639962B (en) * 2015-02-02 2018-05-08 惠州Tcl移动通信有限公司 A kind of method and system for realizing TV touch control
CN104703008A (en) * 2015-02-04 2015-06-10 中新科技集团股份有限公司 Method for controlling television through mobile phone
CN106331809A (en) * 2016-08-31 2017-01-11 北京酷云互动科技有限公司 Television control method and television control system


Also Published As

Publication number Publication date
WO2022088974A1 (en) 2022-05-05
CN114513689B (en) 2023-09-12

Similar Documents

Publication Publication Date Title
WO2020177583A1 (en) Image cropping method and electronic device
WO2021023021A1 (en) Display method and electronic device
WO2022100237A1 (en) Screen projection display method and related product
CN114513689B (en) Remote control method, electronic equipment and system
CN112394895B (en) Picture cross-device display method and device and electronic device
US10802663B2 (en) Information processing apparatus, information processing method, and information processing system
CN112558825A (en) Information processing method and electronic equipment
US20220398059A1 (en) Multi-window display method, electronic device, and system
WO2019174628A1 (en) Photographing method and mobile terminal
CN112527174B (en) Information processing method and electronic equipment
KR20190014638A (en) Electronic device and method for controlling of the same
CN110554816A (en) Interface generation method and equipment
US10152137B2 (en) Using natural movements of a hand-held device to manipulate digital content
CN112398855B (en) Method and device for transferring application contents across devices and electronic device
CN112527222A (en) Information processing method and electronic equipment
WO2022037463A1 (en) Function switching entry determining method and electronic device
WO2022028537A1 (en) Device recognition method and related apparatus
WO2021185374A1 (en) Image capturing method and electronic device
CN113825002A (en) Display device and focus control method
WO2023231697A1 (en) Photographing method and related device
CN112822544A (en) Video material file generation method, video synthesis method, device and medium
CN114510186A (en) Cross-device control method and device
WO2022228259A1 (en) Target tracking method and related apparatus
CN114079691B (en) Equipment identification method and related device
WO2022105793A1 (en) Image processing method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant