CN114564162A - Data transmission method, electronic equipment, system and storage medium - Google Patents


Info

Publication number
CN114564162A
CN114564162A
Authority
CN
China
Prior art keywords
information
preset gesture
equipment
display screen
picture
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011368862.1A
Other languages
Chinese (zh)
Inventor
刘超
牛翔宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd
Priority to CN202011368862.1A
Publication of CN114564162A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14: Digital output to display device; Cooperation and interconnection of the display device with other functional units
    • G06F3/1454: Digital output to display device; Cooperation and interconnection of the display device with other functional units involving copying of the display data of a local workstation or window to a remote workstation or window so that an actual copy of the data is displayed simultaneously on two or more displays, e.g. teledisplay
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017: Gesture based interaction, e.g. based on a set of recognized hand gestures
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43: Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431: Generation of visual interfaces for content selection or interaction; Content or additional data rendering

Abstract

Embodiments of the present application provide a data transmission method, an electronic device, a system, and a storage medium. They relate to the technical field of multi-device interaction and can improve the convenience of data transmission. The method comprises the following steps: the third device displays an environment picture captured by its camera; after detecting a first preset gesture and the first device in the environment picture, the third device displays a floating picture in which the files in the display interface of the first device appear; the third device detects, in the environment picture, a second preset gesture and the file it selects, and then a third preset gesture and the second device; the third device sends first information, which includes information of the selected file, to the second device; after receiving the first information, the second device sends a file acquisition request carrying the information of the selected file to the first device; and after receiving the file acquisition request, the first device sends the selected file to the second device.

Description

Data transmission method, electronic equipment, system and storage medium
Technical Field
The embodiments of the present application relate to the field of multi-device interaction, and in particular to a data transmission method, an electronic device, a system, and a storage medium.
Background
With the development of intelligent terminals, people use more and more electronic devices in daily life and at work. For example, at home, a user may want to transmit a video from a tablet computer to a smart television for playback; at work, a user may want to transfer a drawing from a shared large-screen device to a personal computer.
However, the device that outputs the data and the device that receives it may be inconvenient to move: either may be charging, fixed in place, or in use by another user. In such scenarios, it is inconvenient for the user to operate the output device or the input device directly to transfer data from one to the other.
Disclosure of Invention
The embodiments of the present application provide a data transmission method, an electronic device, a system, and a storage medium that improve the convenience of data transmission.
To achieve the above objective, the following technical solutions are adopted:
in a first aspect, an embodiment of the present application provides a data transmission method, which is applied to a system including a first device, a second device, and a third device, and the method includes:
the third device displays, on its display screen, an environment picture captured by its camera;
the third device detects, in the environment picture, a first preset gesture and the first device selected by it, where a display interface is shown on the display screen of the first device, and at least one object is displayed in that interface, an object being a file or a folder;
in response to the first preset gesture, the third device displays a floating picture on its display screen, and the objects in the display interface of the first device appear in the floating picture;
the third device detects a second preset gesture in the environment picture and detects, in the floating picture, the object to be transmitted that the second preset gesture selects;
the third device detects, in the environment picture, a third preset gesture and the second device selected by it;
in response to the third preset gesture, the third device sends first information to the second device, the first information including information of the object to be transmitted;
after receiving the first information, the second device sends an object acquisition request to the first device; the request carries the information of the object to be transmitted and instructs the first device to send that object to the second device;
after receiving the object acquisition request, the first device sends the object to be transmitted to the second device.
With the data transmission method provided by the embodiments of the present application, an object on the first device can be transmitted to the second device, and during the transfer the user only needs to trigger the transmission with gestures. In application scenarios where the first device and the second device are inconvenient to move, this improves the convenience of data transmission between them.
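The three-step hand-off above can be sketched as an in-process simulation. All class and field names (FirstDevice, handle_object_request, and so on) are illustrative assumptions for exposition; the patent does not prescribe any API.

```python
# Hypothetical simulation of the three-device hand-off: the third device
# forwards the selected object's information to the second device, which
# then requests the object itself from the first device.
from dataclasses import dataclass, field

@dataclass
class FirstDevice:
    # Objects (files/folders) currently shown in the display interface.
    objects: dict  # name -> payload bytes

    def handle_object_request(self, name: str) -> bytes:
        # On receiving the object acquisition request, send the object.
        return self.objects[name]

@dataclass
class SecondDevice:
    received: dict = field(default_factory=dict)

    def handle_first_information(self, first_dev: FirstDevice, info: dict):
        # After receiving the first information, request the object from
        # the first device and store the returned payload.
        name = info["object_name"]
        self.received[name] = first_dev.handle_object_request(name)

@dataclass
class ThirdDevice:
    def send_first_information(self, second_dev, first_dev, object_name: str):
        # In response to the third preset gesture, forward the selected
        # object's information to the second device.
        second_dev.handle_first_information(first_dev, {"object_name": object_name})

first = FirstDevice(objects={"report.pdf": b"%PDF-1.4 ..."})
second = SecondDevice()
third = ThirdDevice()
third.send_first_information(second, first, "report.pdf")
print(second.received["report.pdf"])  # b'%PDF-1.4 ...'
```

Note that the payload never passes through the third device; it only brokers the selection, which is what makes the gesture-only interaction possible.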
In a possible implementation manner of the first aspect, in response to the third preset gesture, the sending, by the third device, the first information to the second device includes:
in response to the third preset gesture, the third device sends second information to the first device; the second information indicates that the selection of the object to be transmitted is finished and instructs the first device to send its network configuration information to the third device;
the third device receives the network configuration information sent by the first device and sends the first information to the second device, the first information carrying the information of the object to be transmitted and the network configuration information of the first device;
before the second device sends the object acquisition request to the first device, the method further comprises:
the second device receives the first information sent by the third device and establishes a network connection with the first device based on the network configuration information of the first device carried in it.
In the embodiments of the present application, even if no network connection has been established in advance between the first device and the second device, the network configuration information of the first device can be passed to the second device through the third device during the transfer, so that a network connection is established between the first device and the second device and network communication is available for the data transmission.
In a possible implementation manner of the first aspect, in response to the first preset gesture, the third device displays a floating picture on a display screen of the third device, including:
in response to the first preset gesture, the third device sends a first request to the first device; the first request is used to acquire information of the objects in the display interface of the first device, which comprises at least one of the following: an icon of an object, a name of an object, and coordinates of an object in the display interface of the first device;
the third device acquires the information of the objects in the display interface of the first device and generates the floating picture based on that information;
the third device displays the floating picture on its display screen.
In a possible implementation manner of the first aspect, in response to the first preset gesture, the sending, by the third device, the first request to the first device includes:
in response to the first preset gesture, the third device detects the coverage ratio between the hand of the first preset gesture in the environment picture and the display screen of the first device in the environment picture;
when the coverage ratio is not within a preset range, the third device sends the first request to the first device.
In the embodiments of the present application, when the user's hand in the environment picture on the display screen of the third device differs greatly in size from the objects displayed on the display screen of the first device, the user cannot accurately select the object to be transmitted. To avoid this, the third device can acquire from the first device the information of the objects displayed on its screen and generate a floating picture sized to the user's hand, so that the user can select the object to be transmitted in the floating picture.
In a possible implementation manner of the first aspect, after the third device detects the coverage ratio between the hand of the first preset gesture in the environment picture and the display screen of the first device in the environment picture, the method further comprises:
when the coverage ratio is within the preset range, the third device detects, in the environment picture, a second preset gesture and the object to be transmitted that it selects.
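The coverage-ratio branch can be sketched as follows. The patent does not define the "preset range", so the 0.05 to 0.5 bounds here are assumed values, and the bounding-box comparison is one plausible way to compute the ratio.

```python
# Illustrative sketch of the coverage-ratio test: compare the area of the
# hand's bounding box with the area of the first device's screen region,
# both measured in environment-picture pixel coordinates.

def box_area(box):
    # box = (x1, y1, x2, y2)
    x1, y1, x2, y2 = box
    return max(0, x2 - x1) * max(0, y2 - y1)

def needs_floating_picture(hand_box, screen_box, preset_range=(0.05, 0.5)):
    """Return True when the hand/screen coverage ratio falls outside the
    preset range, i.e. the third device should request object information
    from the first device and render a floating picture sized to the hand."""
    ratio = box_area(hand_box) / box_area(screen_box)
    lo, hi = preset_range
    return not (lo <= ratio <= hi)

# A hand that dwarfs a distant screen: select via the floating picture.
print(needs_floating_picture((0, 0, 300, 300), (100, 100, 200, 150)))  # True
# A hand comfortably proportioned to the screen: select directly in the
# environment picture, no floating picture needed.
print(needs_floating_picture((0, 0, 100, 100), (0, 0, 400, 300)))      # False
```

When the ratio is inside the range, the method falls through to direct selection in the environment picture, matching the branch described above.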
In one possible implementation manner of the first aspect, the detecting, by the third device, of the object to be transmitted selected in the environment picture by the second preset gesture includes:
the third device identifies the position or area selected by the second preset gesture in the environment picture;
the third device identifies, in the environment picture, the information of the object to be transmitted corresponding to that position or area, the information comprising at least one of the following: an icon of the object, a name of the object, and a position of the object on the display screen of the first device.
In the embodiments of the present application, when the size difference between the user's hand and the objects displayed on the display screen of the first device is small enough for the hand to select the object to be transmitted accurately, the floating picture is not displayed, and the information of the object corresponding to the position or area indicated by the hand is determined directly in the environment picture.
In a possible implementation manner of the first aspect, the detecting, by the third device, of the object to be transmitted selected in the floating picture by the second preset gesture includes:
the third device identifies the position or area selected by the second preset gesture in the environment picture;
the third device converts that position or area into the position or area selected in the floating picture, according to the positional relationship between the floating picture and the environment picture displayed on the display screen of the third device;
the third device obtains the information of the object to be transmitted corresponding to the position or area selected in the floating picture.
In this embodiment of the application, since the second preset gesture is detected in the environment picture, the position obtained for it is a position in the environment picture, while the object to be transmitted is located in the floating picture. The position of the second preset gesture therefore needs to be converted from environment-picture coordinates to floating-picture coordinates before the object selected in the floating picture can be determined.
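The coordinate conversion described above amounts to a simple affine mapping. The rectangle at which the floating picture is drawn and the interface size it represents are assumed parameters; the patent only states that the conversion follows the positional relationship between the two pictures.

```python
# Sketch of converting a fingertip position detected in the environment
# picture into a position inside the floating picture.

def env_to_float(point, float_rect, float_size):
    """Map a point (x, y) in environment-picture coordinates into the
    floating picture's own coordinate system.

    float_rect: (left, top, width, height) of the floating picture as
                drawn over the environment picture.
    float_size: (width, height) of the floating picture's internal
                coordinate space (e.g. the first device's interface size).
    Returns None when the point lies outside the floating picture.
    """
    x, y = point
    left, top, w, h = float_rect
    if not (left <= x <= left + w and top <= y <= top + h):
        return None
    fw, fh = float_size
    return ((x - left) * fw / w, (y - top) * fh / h)

# Fingertip at (350, 260) over a 400x300 floating picture drawn at
# (200, 140), mapped into an assumed 1920x1080 interface coordinate space.
print(env_to_float((350, 260), (200, 140, 400, 300), (1920, 1080)))  # (720.0, 432.0)
```

Once the point is in floating-picture coordinates, the selected object can be found by hit-testing it against the object coordinates received from the first device.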
In a possible implementation manner of the first aspect, first image identification information is displayed on a display screen of the first device, and the first image identification information carries first network information of the first device;
before the third device sends the first request to the first device, the method further comprises:
the third device detects, in the environment picture, the first image identification information displayed on the display screen of the first device;
the third device recognizes the first image identification information and obtains the first network information of the first device carried in it;
the third device establishes a first connection with the first device based on the first network information of the first device.
In the embodiments of the present application, even if no network connection has been established between the third device and the first device, the first connection can be established based on the first image identification information displayed on the display screen of the first device, and the information of the selected object to be transmitted can then be transmitted over the first connection.
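The idea of carrying network information inside on-screen image identification information can be sketched as a simple encode/decode pair. The patent mentions "ripple information" as one form of visual encoding; here a plain JSON payload stands in for whatever pattern (ripple, QR-like code) the devices actually render, and the ip/port/token fields are assumed for illustration.

```python
# Hedged sketch: the first device encodes its network information into
# the payload of its on-screen pattern; the third device decodes the
# payload after recognizing the pattern in the environment picture.
import json

def encode_network_info(ip: str, port: int, token: str) -> str:
    # Payload the first device would render into its on-screen pattern.
    return json.dumps({"ip": ip, "port": port, "token": token})

def decode_network_info(payload: str) -> dict:
    # What the third device recovers after detecting and recognizing the
    # image identification information in the environment picture.
    return json.loads(payload)

payload = encode_network_info("192.168.1.20", 8443, "abc123")
info = decode_network_info(payload)
print(info["ip"], info["port"])  # 192.168.1.20 8443
```

The same scheme covers the second image identification information on the second device's screen; only the decoded endpoint differs.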
In a possible implementation manner of the first aspect, second image identification information is displayed on a display screen of the second device, and the second image identification information carries second network information of the second device;
before the third device sends the first information to the second device, the method further includes:
the third device detects, in the environment picture, the second image identification information displayed on the display screen of the second device;
the third device recognizes the second image identification information and obtains the second network information of the second device carried in it;
the third device establishes a second connection with the second device based on the second network information of the second device.
In this embodiment, even if the second device and the third device do not establish a network connection in advance, the third device may establish a second connection with the second device according to the content displayed on the display screen of the second device, and transmit the network configuration information of the first device through the second connection.
In one possible implementation manner of the first aspect, the first image recognition information includes ripple information, and the second image recognition information includes ripple information.
In one possible implementation form of the first aspect, the second information comprises the information of the object to be transmitted,
and the sending, by the first device, of the object to be transmitted to the second device after receiving the object acquisition request includes:
after receiving the object acquisition request, the first device compares the information of the object to be transmitted in the request with the information of the object to be transmitted in the second information sent by the third device, and sends the object to be transmitted to the second device when they match.
In this embodiment of the present application, when the third device transmits the network configuration information of the first device to the second device, it may also transmit the information of the object to be transmitted, so that the first device and the second device can authenticate each other based on that information; after the authentication succeeds, the first device sends the object to be transmitted to the second device.
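This verification step can be sketched as a comparison on the first device's side. The fields compared (name and on-screen position) are assumptions; the patent only requires that the object information in the request match the object information previously received from the third device.

```python
# Sketch of the first device's check: send the object only when the
# object information in the acquisition request matches the "second
# information" recorded earlier from the third device.

def authorize_and_send(second_info: dict, request: dict, objects: dict):
    """Return the object's payload when the request matches the second
    information; otherwise refuse (return None)."""
    same_object = (
        request.get("name") == second_info.get("name")
        and request.get("position") == second_info.get("position")
    )
    if not same_object:
        return None
    return objects.get(request["name"])

objects = {"photo.png": b"\x89PNG..."}
second_info = {"name": "photo.png", "position": (3, 1)}
print(authorize_and_send(second_info, {"name": "photo.png", "position": (3, 1)}, objects))
print(authorize_and_send(second_info, {"name": "other.txt", "position": (0, 0)}, objects))  # None
```

The check ensures that only the second device that actually received the first information from the third device can obtain the selected object.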
In a possible implementation manner of the first aspect, when the first device sends the object to be transmitted to the second device, the method further includes:
the first device sends transmission information to the third device, the transmission information comprising information of the file currently being transmitted and the transmission progress;
after receiving the transmission information, the third device displays it on its display screen.
In the embodiments of the present application, to give the user a better experience and let the user follow the current transmission progress in real time, the first device may also send transmission information to the third device; the third device displays it, and the user learns the progress from the transmission information displayed on the third device.
In a possible implementation manner of the first aspect, when the third device is an augmented reality device, the first, second, and third preset gestures are gestures made by the user's hand within the field of view of the camera of the augmented reality device;
when the third device is not an augmented reality device, the first, second, and third preset gestures are touch gestures made by the user's hand on the display screen of the third device.
In a second aspect, an embodiment of the present application provides a data transmission method, which is applied to a third device, and the method includes:
the third device displays, on its display screen, an environment picture captured by its camera;
the third device detects, in the environment picture, a first preset gesture and the first device selected by it, where a display interface is shown on the display screen of the first device, and at least one object is displayed in that interface, an object being a file or a folder;
in response to the first preset gesture, the third device displays a floating picture on its display screen, and the objects in the display interface of the first device appear in the floating picture;
the third device detects a second preset gesture in the environment picture and detects, in the floating picture, the object to be transmitted that the second preset gesture selects;
the third device detects, in the environment picture, a third preset gesture and the second device selected by it;
in response to the third preset gesture, the third device sends first information to the second device; the first information includes the information of the object to be transmitted and instructs the second device to send an object acquisition request to the first device, and the object acquisition request instructs the first device to send the object to be transmitted to the second device.
In a possible implementation manner of the second aspect, in response to the third preset gesture, the sending, by the third device, the first information to the second device includes:
in response to the third preset gesture, the third device sends second information to the first device; the second information indicates that the selection of the object to be transmitted is finished and instructs the first device to send its network configuration information to the third device;
the third device receives the network configuration information sent by the first device and sends the first information to the second device; the first information carries the information of the object to be transmitted and the network configuration information of the first device, and instructs the second device to establish a network connection with the first device based on that network configuration information.
In a possible implementation manner of the second aspect, in response to the first preset gesture, the third device displays a floating picture on a display screen of the third device, including:
in response to the first preset gesture, the third device sends a first request to the first device; the first request is used to acquire information of the objects in the display interface of the first device, which comprises at least one of the following: an icon of an object, a name of an object, and coordinates of an object in the display interface of the first device;
the third device acquires the information of the objects in the display interface of the first device and generates the floating picture based on that information;
the third device displays the floating picture on its display screen.
In a possible implementation manner of the second aspect, in response to the first preset gesture, the third device sends a first request to the first device, including:
in response to the first preset gesture, the third device detects the coverage ratio between the hand of the first preset gesture in the environment picture and the display screen of the first device in the environment picture;
when the coverage ratio is not within the preset range, the third device sends the first request to the first device.
In a possible implementation manner of the second aspect, after the third device detects the coverage ratio between the hand of the first preset gesture in the environment picture and the display screen of the first device in the environment picture, the method further comprises:
when the coverage ratio is within the preset range, the third device detects, in the environment picture, a second preset gesture and the object to be transmitted that it selects.
In one possible implementation manner of the second aspect, the detecting, by the third device, of the object to be transmitted selected in the environment picture by the second preset gesture includes:
the third device identifies the position or area selected by the second preset gesture in the environment picture;
the third device identifies, in the environment picture, the information of the object to be transmitted corresponding to that position or area, the information comprising at least one of the following: an icon of the object, a name of the object, and a position of the object on the display screen of the first device.
In a possible implementation manner of the second aspect, the detecting, by the third device, of the object to be transmitted selected in the floating picture by the second preset gesture includes:
the third device identifies the position or area selected by the second preset gesture in the environment picture;
the third device converts that position or area into the position or area selected in the floating picture, according to the positional relationship between the floating picture and the environment picture displayed on the display screen of the third device;
the third device obtains the information of the object to be transmitted corresponding to the position or area selected in the floating picture.
In a possible implementation manner of the second aspect, first image identification information is displayed on a display screen of the first device, and the first image identification information carries first network information of the first device;
before the third device sends the first request to the first device, the method further comprises:
the third device detects, in the environment picture, the first image identification information displayed on the display screen of the first device;
the third device recognizes the first image identification information and obtains the first network information of the first device carried in it;
the third device establishes a first connection with the first device based on the first network information of the first device.
In a possible implementation manner of the second aspect, second image identification information is displayed on a display screen of the second device, and the second image identification information carries second network information of the second device;
before the third device sends the first information to the second device, the method further includes:
the third device detects, in the environment picture, the second image identification information displayed on the display screen of the second device;
the third device recognizes the second image identification information and obtains the second network information of the second device carried in it;
the third device establishes a second connection with the second device based on the second network information of the second device.
In one possible implementation of the second aspect, the first image identification information includes ripple information, and the second image identification information includes ripple information.
In a possible implementation manner of the second aspect, after the third device sends the first information to the second device, the method further includes:
the third device receives transmission information sent by the first device, the transmission information comprising information of the file currently being transmitted and the transmission progress;
the third device displays the transmission information on its display screen.
In a third aspect, an embodiment of the present application provides an electronic device, including:
an environment picture display module, configured to display, on the display screen of the third device, an environment picture captured by the camera of the third device;
a gesture detection module, configured to detect, in the environment picture, a first preset gesture and the first device selected by it, where a display interface is shown on the display screen of the first device, at least one object is displayed in the interface, and an object is a file or a folder;
a floating picture display module, configured to display, in response to the first preset gesture, a floating picture on the display screen of the third device, where the objects in the display interface of the first device appear in the floating picture;
the gesture detection module is further configured to detect a second preset gesture in the environment picture and to detect, in the floating picture, the object to be transmitted that the second preset gesture selects;
the gesture detection module is further configured to detect, in the environment picture, a third preset gesture and the second device selected by it;
a first information sending module, configured to send, in response to the third preset gesture, first information to the second device, where the first information includes the information of the object to be transmitted and instructs the second device to send an object acquisition request to the first device, and the object acquisition request instructs the first device to send the object to be transmitted to the second device.
In a fourth aspect, an electronic device is provided, comprising a processor configured to execute a computer program stored in a memory to implement the method of any implementation of the second aspect of the present application.
In a fifth aspect, a data transmission system is provided, which includes: a first device, a second device and a third device,
the third device is the electronic device provided in the fourth aspect;
the second device is used for sending an object acquisition request to the first device after receiving the first information, wherein the object acquisition request carries the information of the object to be transmitted, and the object acquisition request is used for instructing the first device to send the object to be transmitted to the second device;
the first device is used for sending the object to be transmitted to the second device after receiving the object acquisition request.
In a sixth aspect, a chip system is provided, which comprises a processor coupled to a memory, the processor executing a computer program stored in the memory to implement the method of any one of the second aspect of the present application.
In a seventh aspect, a computer-readable storage medium is provided, which stores a computer program, and when the computer program is executed by one or more processors, the method of any one of the second aspect of the present application is implemented.
In an eighth aspect, embodiments of the present application provide a computer program product, which, when run on a device, causes the device to perform any one of the methods of the second aspect.
It can be understood that, for the beneficial effects of the second aspect to the eighth aspect, reference may be made to the related description of the first aspect, and details are not repeated herein.
Drawings
Fig. 1 is a schematic view of an application scenario of a data transmission method according to an embodiment of the present application;
Fig. 2 is a schematic diagram of a hardware structure of an electronic device executing a data transmission method according to an embodiment of the present application;
Fig. 3 is a schematic process diagram of a data transmission method according to an embodiment of the present application;
Figs. 4(a) and 4(b) are schematic diagrams illustrating a device A displaying image identification information in a data transmission method according to an embodiment of the present application;
Fig. 5 is a schematic diagram of a floating interface displayed by a device C in a data transmission method according to an embodiment of the present application;
Fig. 6 is a schematic view of a scene in which a user selects a file by clicking in a floating interface displayed by a device C according to an embodiment of the present application;
Fig. 7 is a schematic view of a scene in which a user selects a file by circle selection in a floating interface displayed by a device C according to an embodiment of the present application;
Fig. 8 is a schematic diagram illustrating a manner in which a selected file is marked in a floating interface of a device C according to an embodiment of the present application;
Fig. 9 is a schematic diagram illustrating a third preset gesture according to an embodiment of the present application;
Figs. 10(a) and 10(b) are schematic diagrams of a display interface of a device C corresponding to the third preset gesture shown in Fig. 9 according to an embodiment of the present application;
Fig. 11 is a schematic diagram illustrating a data transmission progress according to an embodiment of the present application;
Fig. 12 is a schematic view of an application scenario of another data transmission method according to an embodiment of the present application;
Figs. 13(a) and 13(b) are schematic diagrams of the display screen of a device A and the hand of a user according to an embodiment of the present application;
Fig. 14 is a schematic diagram illustrating a manner of calculating a coverage ratio between a hand and the display screen of a device A according to an embodiment of the present application;
Figs. 15(a) and 15(b) are schematic diagrams illustrating two ways in which a device C displays a floating interface according to an embodiment of the present application;
Fig. 16 is a schematic block diagram of functional modules of a device C according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that in the embodiments of the present application, "one or more" means one, two, or more than two; "and/or" describes the association relationship of the associated objects, indicating that three relationships may exist. For example, "A and/or B" may represent: A alone, both A and B, or B alone, where A and B may be singular or plural. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship.
Furthermore, in the description of the present application and the appended claims, the terms "first," "second," "third," and the like are used for distinguishing between descriptions and not necessarily for describing or implying relative importance.
Reference throughout this specification to "one embodiment" or "some embodiments," or the like, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," or the like, in various places throughout this specification are not necessarily all referring to the same embodiment, but rather "one or more but not all embodiments" unless specifically stated otherwise. The terms "comprising," "including," "having," and variations thereof mean "including, but not limited to," unless expressly specified otherwise.
The data transmission method provided by the embodiment of the application can be applied to the application scenario shown in fig. 1. As shown in fig. 1, a computer serves as the data output device, and a file to be transmitted exists on its desktop; the user wants to transfer the file from the computer to a smart screen serving as the data input device. The user can effect the transfer of the file from the computer to the smart screen through AR glasses serving as the control device.
The electronic device serving as the output device may be an electronic device with a display screen, for example, the computer in fig. 1. The display screen of the computer in fig. 1 may display the file to be transferred. The electronic device serving as the control device is an electronic device with a camera and a display screen, for example, the AR glasses in fig. 1; other electronic devices, such as a mobile phone or a tablet computer, are also possible. Taking the AR glasses as the control device as an example, the camera of the AR glasses can acquire an environment picture, and the acquired environment picture can be shown on the display screen of the AR glasses, which may be the lenses of the AR glasses or a screen of the AR glasses. When the user wearing the AR glasses changes position or posture, the computer, the smart screen, and the user's gesture can appear in the environment picture acquired by the camera of the AR glasses. Based on the user's gestures, the AR glasses can determine that the computer is the output device, determine the file to be transmitted, determine that the smart screen is the input device, and trigger the transfer of the file from the computer to the smart screen.
In the above description of the application scenario shown in fig. 1, the information interaction process among the computer as the output device, the AR glasses as the control device, and the smart screen as the input device can be referred to the specific description in the following embodiments.
The embodiment of the application provides a data transmission method, which is applicable to a system consisting of an output device, a control device, and an input device. The electronic devices in the system may be: a mobile phone, a tablet computer, a wearable device, a vehicle-mounted device, a smart sound box, a smart screen, an augmented reality (AR)/virtual reality (VR) device, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a personal digital assistant (PDA), or another electronic device. The specific types of the output device, the control device, and the input device are not limited in the embodiments of the present application.
Fig. 2 shows a schematic structural diagram of an electronic device. The electronic device 200 may include a processor 210, an external memory interface 220, an internal memory 221, a Universal Serial Bus (USB) interface 230, a charging management module 240, a power management module 241, a battery 242, an antenna 2, a wireless communication module 260, an audio module 270, a speaker 270A, a receiver 270B, a microphone 270C, an earphone interface 270D, a sensor module 280, keys 290, a motor 291, an indicator 292, a camera 293, a display screen 294, and the like. The sensor module 280 may include a pressure sensor 280A, a gyroscope sensor 280B, an acceleration sensor 280E, a distance sensor 280F, a temperature sensor 280J, an ambient light sensor 280L, and the like.
It is to be understood that the illustrated structure of the embodiment of the present application does not specifically limit the electronic device 200. In other embodiments of the present application, the electronic device 200 may include more or fewer components than shown, or combine certain components, or split certain components, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Processor 210 may include one or more processing units, such as: the processor 210 may include an Application Processor (AP), a modem processor, a Graphics Processing Unit (GPU), an Image Signal Processor (ISP), a controller, a memory, a video codec, a Digital Signal Processor (DSP), a baseband processor, and/or a neural-Network Processing Unit (NPU), etc. The different processing units may be separate devices or may be integrated into one or more processors. For example, the processor 210 is configured to execute the data transmission method in the embodiment of the present application.
Wherein the controller may be a neural center and a command center of the electronic device 200. The controller can generate an operation control signal according to the instruction operation code and the timing signal to complete the control of instruction fetching and instruction execution.
A memory may also be provided in processor 210 for storing instructions and data. In some embodiments, the memory in the processor 210 is a cache memory. The memory may hold instructions or data that have just been used or recycled by processor 210. If the processor 210 needs to reuse the instruction or data, it may be called directly from memory. Avoiding repeated accesses reduces the latency of the processor 210, thereby increasing the efficiency of the system.
In some embodiments, processor 210 may include one or more interfaces. The interface may include an integrated circuit (I2C) interface, an integrated circuit built-in audio (I2S) interface, a Pulse Code Modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a Mobile Industry Processor Interface (MIPI), a general-purpose input/output (GPIO) interface, a Subscriber Identity Module (SIM) interface, and/or a Universal Serial Bus (USB) interface, etc.
The I2C interface is a bi-directional synchronous serial bus that includes a serial data line (SDA) and a Serial Clock Line (SCL). In some embodiments, processor 210 may include multiple sets of I2C buses. The processor 210 may be coupled to the charger, the flash, the camera 293, etc. through different I2C bus interfaces.
The I2S interface may be used for audio communication. In some embodiments, processor 210 may include multiple sets of I2S buses. Processor 210 may be coupled to audio module 270 via an I2S bus to enable communication between processor 210 and audio module 270.
The PCM interface may also be used for audio communication, sampling, quantizing and encoding analog signals. In some embodiments, audio module 270 and wireless communication module 260 may be coupled by a PCM bus interface.
The UART interface is a universal serial data bus used for asynchronous communications. The bus may be a bidirectional communication bus. It converts the data to be transmitted between serial communication and parallel communication.
In some embodiments, a UART interface is generally used to connect the processor 210 with the wireless communication module 260. For example: the processor 210 communicates with the bluetooth module in the wireless communication module 260 through the UART interface to implement the bluetooth function. In some embodiments, the audio module 270 may transmit the audio signal to the wireless communication module 260 through a UART interface, so as to implement the function of playing music through a bluetooth headset.
The MIPI interface may be used to connect the processor 210 with peripheral devices such as the display screen 294, the camera 293, and the like. The MIPI interface includes a Camera Serial Interface (CSI), a Display Serial Interface (DSI), and the like. In some embodiments, processor 210 and camera 293 communicate via a CSI interface to implement the capture functionality of electronic device 200. The processor 210 and the display screen 294 communicate through the DSI interface to implement a display function of the electronic device 200.
The GPIO interface may be configured by software. The GPIO interface may be configured as a control signal and may also be configured as a data signal. In some embodiments, a GPIO interface may be used to connect processor 210 with camera 293, display 294, wireless communication module 260, audio module 270, sensor module 280, and the like. The GPIO interface may also be configured as an I2C interface, an I2S interface, a UART interface, a MIPI interface, and the like.
The USB interface 230 is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type-C interface, or the like. The USB interface 230 may be used to connect a charger to charge the electronic device 200, and may also be used to transmit data between the electronic device 200 and a peripheral device. It may also be used to connect earphones and play audio through the earphones.
It should be understood that the interfacing relationship between the modules illustrated in the embodiments of the present application is only an illustration, and does not limit the structure of the electronic device 200. In other embodiments of the present application, the electronic device 200 may also adopt different interface connection manners or a combination of multiple interface connection manners in the above embodiments.
The charge management module 240 is configured to receive a charging input from a charger. The charger may be a wireless charger or a wired charger. In some wired charging embodiments, the charging management module 240 may receive charging input from a wired charger via the USB interface 230. In some wireless charging embodiments, the charging management module 240 may receive a wireless charging input through a wireless charging coil of the electronic device 200. The charging management module 240 may also supply power to the electronic device through the power management module 241 while charging the battery 242.
The power management module 241 is used to connect the battery 242, the charging management module 240 and the processor 210. The power management module 241 receives input from the battery 242 and/or the charging management module 240, and provides power to the processor 210, the internal memory 221, the external memory, the display 294, the camera 293, and the wireless communication module 260. The power management module 241 may also be used to monitor parameters such as battery capacity, battery cycle number, battery state of health (leakage, impedance), etc.
In some other embodiments, the power management module 241 may also be disposed in the processor 210. In other embodiments, the power management module 241 and the charging management module 240 may be disposed in the same device.
The wireless communication function of the electronic device 200 may be implemented by the antenna 2, the wireless communication module 260, and the like.
The antenna 2 is used for transmitting and receiving electromagnetic wave signals. The antenna in the electronic device 200 may be used to cover a single or multiple communication bands.
The wireless communication module 260 may provide a solution for wireless communication applied to the electronic device 200, including Wireless Local Area Networks (WLANs) (e.g., wireless fidelity (Wi-Fi) networks), bluetooth (bluetooth, BT), Global Navigation Satellite System (GNSS), Frequency Modulation (FM), Near Field Communication (NFC), Infrared (IR), and the like. The wireless communication module 260 may be one or more devices integrating at least one communication processing module. The wireless communication module 260 receives electromagnetic waves via the antenna 2, performs frequency modulation and filtering processing on electromagnetic wave signals, and transmits the processed signals to the processor 210. The wireless communication module 260 may also receive a signal to be transmitted from the processor 210, frequency-modulate and amplify the signal, and convert the signal into electromagnetic waves via the antenna 2 to radiate the electromagnetic waves.
In some embodiments, the antenna 2 and the wireless communication module 260 are coupled such that the electronic device 200 may communicate with networks and other devices through wireless communication techniques. The wireless communication technologies may include BT, GNSS, WLAN, NFC, FM, and/or IR technologies, among others. GNSS may include Global Positioning System (GPS), global navigation satellite system (GLONASS), beidou satellite navigation system (BDS), quasi-zenith satellite system (QZSS), and/or Satellite Based Augmentation System (SBAS).
The electronic device 200 implements display functions via the GPU, the display screen 294, and the application processor. The GPU is a microprocessor for image processing, and is connected to the display screen 294 and an application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 210 may include one or more GPUs that execute program instructions to generate or alter display information.
The display screen 294 is used to display images, video, and the like. The display screen 294 includes a display panel. The display panel may adopt a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), and the like. In some embodiments, the electronic device 200 may include 1 or N display screens 294, N being a positive integer greater than 1.
The electronic device 200 may implement a shooting function through the ISP, the camera 293, the video codec, the GPU, the display screen 294, and the application processor.
The ISP is used to process the data fed back by the camera 293. For example, when a user takes a picture, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, an optical signal is converted into an electric signal, and the camera photosensitive element transmits the electric signal to the ISP for processing and converting the electric signal into an image visible to the naked eye. The ISP can also carry out algorithm optimization on the noise, brightness and skin color of the image. The ISP can also optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments, the ISP may be provided in camera 293.
The camera 293 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image to the photosensitive element. The photosensitive element may be a Charge Coupled Device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The light sensing element converts the optical signal into an electrical signal, which is then passed to the ISP where it is converted into a digital image signal. And the ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into an image signal in a standard RGB, YUV and other formats. In some embodiments, electronic device 200 may include 1 or N cameras 293, N being a positive integer greater than 1.
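The last conversion step mentioned above, in which the DSP turns the digital image signal into a standard RGB image, can be illustrated with the common full-range BT.601 formula. This generic formula is assumed here for illustration only; the patent does not specify which conversion the DSP actually uses.

```python
def yuv_to_rgb(y, u, v):
    # Full-range BT.601: one common YUV -> RGB mapping a DSP might apply.
    # Inputs and outputs are 8-bit values in [0, 255]; U and V are
    # centered around 128.
    def clamp(x):
        return max(0, min(255, round(x)))
    r = y + 1.402 * (v - 128)
    g = y - 0.344136 * (u - 128) - 0.714136 * (v - 128)
    b = y + 1.772 * (u - 128)
    return clamp(r), clamp(g), clamp(b)
```

For a neutral pixel (U = V = 128) the chroma terms vanish and R = G = B = Y, which is a quick sanity check on the coefficients.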
The digital signal processor is used for processing digital signals, and can process digital image signals and other digital signals. For example, when the electronic device 200 selects a frequency bin, the digital signal processor is used to perform fourier transform or the like on the frequency bin energy.
Video codecs are used to compress or decompress digital video. The electronic device 200 may support one or more video codecs. In this way, the electronic device 200 may play or record video in a variety of encoding formats, such as: moving Picture Experts Group (MPEG) 1, MPEG2, MPEG3, MPEG4, and the like.
The NPU is a neural-network (NN) computing processor that processes input information quickly by using a biological neural network structure, for example, by using a transfer mode between neurons of a human brain, and can also learn by itself continuously. The NPU can implement applications such as intelligent recognition of the electronic device 200, for example: image recognition, face recognition, speech recognition, text understanding, and the like.
In the embodiment of the present application, the NPU or another processor may be configured to perform operations such as face detection, face tracking, face feature extraction, and image clustering on face images in videos stored in the electronic device 200; to perform operations such as face detection and face feature extraction on face images in pictures stored in the electronic device 200; and to cluster the pictures stored in the electronic device 200 according to the face features of the pictures and the clustering results of the face images in the videos.
The external memory interface 220 may be used to connect an external memory card, such as a Micro SD card, to extend the memory capability of the electronic device 200. The external memory card communicates with the processor 210 through the external memory interface 220 to implement a data storage function. For example, files such as music, video, etc. are saved in an external memory card.
Internal memory 221 may be used to store computer-executable program code, which includes instructions. The processor 210 executes various functional applications of the electronic device 200 and data processing by executing instructions stored in the internal memory 221. The internal memory 221 may include a program storage area and a data storage area. The storage program area may store an operating system, and an application program (such as a sound playing function, an image playing function, etc.) required by at least one function. The data storage area may store data (e.g., audio data, phone book, etc.) created during use of the electronic device 200.
In addition, the internal memory 221 may include a high-speed random access memory, and may further include a nonvolatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (UFS), and the like.
Electronic device 200 may implement audio functions via audio module 270, speaker 270A, receiver 270B, microphone 270C, headset interface 270D, and an application processor, among other things. Such as music playing, recording, etc.
Audio module 270 is used to convert digital audio signals to analog audio signals for output and also to convert analog audio inputs to digital audio signals. Audio module 270 may also be used to encode and decode audio signals. In some embodiments, the audio module 270 may be disposed in the processor 210, or some functional modules of the audio module 270 may be disposed in the processor 210.
The speaker 270A, also called a "horn", is used to convert electrical audio signals into sound signals. The electronic device 200 may listen to music through the speaker 270A.
The receiver 270B, also called "earpiece", is used to convert the electrical audio signal into an acoustic signal. When the electronic apparatus 200 receives a call or voice information, it is possible to receive voice by placing the receiver 270B close to the human ear.
The microphone 270C, also referred to as a "microphone," is used to convert acoustic signals into electrical signals. When transmitting voice information, the user can input a voice signal to the microphone 270C by speaking near the microphone 270C through the mouth. The electronic device 200 may be provided with at least one microphone 270C. In other embodiments, the electronic device 200 may be provided with two microphones 270C, so as to implement a noise reduction function in addition to collecting sound signals.
The earphone interface 270D is used to connect wired earphones. The earphone interface 270D may be the USB interface 230, or may be a 3.5 mm open mobile electronic device platform (OMTP) standard interface, or a Cellular Telecommunications Industry Association (CTIA) standard interface.
The pressure sensor 280A is used to sense a pressure signal, which can be converted into an electrical signal. In some embodiments, the pressure sensor 280A may be disposed on the display screen 294. The pressure sensor 280A can be of a wide variety of types, such as a resistive pressure sensor, an inductive pressure sensor, a capacitive pressure sensor, and the like. The capacitive pressure sensor may be a sensor comprising at least two parallel plates having an electrically conductive material. When a force acts on the pressure sensor 280A, the capacitance between the electrodes changes. The electronic device 200 determines the intensity of the pressure from the change in capacitance. When a touch operation is applied to the display screen 294, the electronic apparatus 200 detects the intensity of the touch operation based on the pressure sensor 280A. The electronic apparatus 200 may also calculate the touched position from the detection signal of the pressure sensor 280A.
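The capacitance change described above follows the parallel-plate formula C = εrε0A/d: pressing the screen narrows the plate gap d, so the capacitance rises, and the device maps that change to a pressure level. A small sketch with hypothetical dimensions (the actual sensor geometry is not given in the application):

```python
VACUUM_PERMITTIVITY = 8.854e-12  # epsilon_0, in F/m

def parallel_plate_capacitance(eps_r, area_m2, gap_m):
    # C = eps_r * eps_0 * A / d for two parallel conductive plates.
    return eps_r * VACUUM_PERMITTIVITY * area_m2 / gap_m

# Hypothetical geometry: a press reduces the plate gap by 20%,
# so the capacitance rises by 25%; the electronic device infers
# the touch pressure from a change of this kind.
c_rest = parallel_plate_capacitance(3.0, 1e-4, 1.0e-4)
c_pressed = parallel_plate_capacitance(3.0, 1e-4, 0.8e-4)
```

Because C is inversely proportional to the gap, the relative capacitance change directly encodes how far the plates were pushed together.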
The gyro sensor 280B may be used to determine the motion pose of the electronic device 200. In some embodiments, the angular velocity of the electronic device 200 about three axes (i.e., x, y, and z axes) may be determined by the gyroscope sensor 280B. The gyro sensor 280B may be used for photographing anti-shake. Illustratively, when the shutter is pressed, the gyro sensor 280B detects a shake angle of the electronic device 200, calculates a distance to be compensated for the lens module according to the shake angle, and allows the lens to counteract the shake of the electronic device 200 through a reverse motion, thereby achieving anti-shake. The gyro sensor 280B may also be used for navigation, somatosensory gaming scenes.
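The compensation distance mentioned above can be modeled geometrically: for a shake angle θ and lens focal length f, the image shifts by roughly f·tan(θ), and the lens module moves that distance in the opposite direction. This is a simplified model assumed for illustration; the actual anti-shake algorithm is not specified in the application.

```python
import math

def ois_compensation_mm(focal_length_mm, shake_angle_deg):
    # Image displacement for a small rotational shake: x ~= f * tan(theta).
    # The lens module is driven by this amount in the reverse direction
    # to cancel the shake.
    return focal_length_mm * math.tan(math.radians(shake_angle_deg))
```

With a hypothetical 4 mm focal length, a 0.5-degree shake calls for roughly 0.035 mm of lens travel, which matches the sub-millimeter strokes of typical OIS actuators.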
The acceleration sensor 280E may detect the magnitude of acceleration of the electronic device 200 in various directions (typically three axes). The magnitude and direction of gravity may be detected when the electronic device 200 is stationary. It can also be used to recognize the posture of the electronic device, and is applied in applications such as landscape/portrait switching and pedometers.
A distance sensor 280F is used for measuring a distance. The electronic device 200 may measure the distance by infrared or laser. In some embodiments, in a shooting scene, the electronic device 200 may utilize the distance sensor 280F to measure distance so as to achieve fast focusing.
The ambient light sensor 280L is used to sense the ambient light level. The electronic device 200 may adaptively adjust the brightness of the display screen 294 based on the perceived ambient light level. The ambient light sensor 280L may also be used to automatically adjust the white balance when taking a picture. The ambient light sensor 280L may also cooperate with the proximity light sensor 280G to detect whether the electronic device 200 is in a pocket to prevent inadvertent contact.
The temperature sensor 280J is used to detect temperature. In some embodiments, the electronic device 200 implements a temperature processing strategy using the temperature detected by the temperature sensor 280J. For example, when the temperature reported by the temperature sensor 280J exceeds a threshold, the electronic device 200 reduces the performance of a processor located near the temperature sensor 280J, so as to reduce power consumption and implement thermal protection. In other embodiments, the electronic device 200 heats the battery 242 when the temperature is below another threshold, to avoid abnormal shutdown of the electronic device 200 due to low temperature. In other embodiments, when the temperature is below a further threshold, the electronic device 200 boosts the output voltage of the battery 242 to avoid an abnormal shutdown due to low temperature.
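The three temperature thresholds described above amount to a simple policy table. A sketch with hypothetical threshold values (the application does not specify concrete temperatures):

```python
def thermal_action(temp_c, hot=45.0, cold=0.0, very_cold=-10.0):
    # Threshold-based thermal policy; the threshold values here are
    # hypothetical, chosen only for illustration.
    if temp_c > hot:
        return "throttle_processor"     # reduce performance near the sensor
    if temp_c < very_cold:
        return "boost_battery_voltage"  # avoid low-temperature shutdown
    if temp_c < cold:
        return "heat_battery"
    return "normal"
```

Ordering the checks from the extremes inward ensures each temperature maps to exactly one action.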
The keys 290 include a power-on key, a volume key, etc. The keys 290 may be mechanical keys. Or may be touch keys. The electronic apparatus 200 may receive a key input, and generate a key signal input related to user setting and function control of the electronic apparatus 200.
The motor 291 may generate vibration cues. Indicator 292 may be an indicator light that may be used to indicate a state of charge or a change in charge, or to indicate a message, a missed call, a notification, and the like.
The embodiment of the present application does not particularly limit the specific structure of the execution subject of the data transmission method, as long as it can communicate by running a program in which code of the data transmission method provided by the embodiment of the present application is recorded. For example, the execution subject of the data transmission method provided by the embodiment of the present application may be a functional module in an electronic device that is capable of calling and executing a program, or a communication apparatus, such as a chip, applied to an electronic device.
Fig. 3 is a schematic view of an application scenario of the data transmission method according to an embodiment of the present application. In this application scenario, a user needs to transfer a file from device A (the output device) to device B (the input device). The data transmission method in this application scenario may be performed according to the three steps shown in fig. 3:
Step 1: device C (the control device) acquires, from device A, a list of the objects displayed on the display interface of device A, where the objects include files and folders.
Step 2: device C determines the list of objects to be transmitted based on the object list displayed by device A, and sends the list of objects to be transmitted to device B.
Step 3: after device A successfully authenticates the list of objects to be transmitted sent by device B, device A sends the objects in the list to device B.
The above division of the steps and their order in the data transmission method are only one example; in practical applications, other steps may also exist. Of course, the above three steps may serve as the framework steps, with the other steps as sub-steps within them.
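As a rough illustration only, the three framework steps can be sketched as follows. All function names, device representations, and message shapes here are hypothetical, not part of the embodiment:

```python
# Illustrative sketch of the three framework steps; all names are hypothetical.

def step1_get_object_list(device_a_screen):
    # Device C obtains the list of objects (files and folders)
    # currently shown on device A's display interface.
    return list(device_a_screen)

def step2_select_objects(object_list, selected_names):
    # Device C determines the objects to be transmitted and
    # forwards that sub-list to device B.
    return [obj for obj in object_list if obj in selected_names]

def step3_transfer(to_transmit, authenticated):
    # Device A sends the objects only after it successfully
    # authenticates the list forwarded by device B.
    return to_transmit if authenticated else []

screen = ["file1.doc", "file2.doc", "folder1"]
chosen = step2_select_objects(step1_get_object_list(screen), {"file1.doc"})
received = step3_transfer(chosen, authenticated=True)  # → ["file1.doc"]
```

The sketch only captures the data flow (A → C → B → A, then A → B); the connection setup and gesture handling described below are omitted.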
As an example, the implementation process of step 1 to step 3 is described in detail below in conjunction with the user's gestures.
Preparation work 1: the user first sets device A and device B to a particular operating mode, which may be named the AR operating mode. In practical applications, other names may be used, which is not limited in this application.
Taking device A in the AR operating mode as an example, device A's Bluetooth is on and discoverable. Meanwhile, a ripple may be displayed on the display screen of device A, where the ripple carries Bluetooth information of device A, for example, the Bluetooth name of device A, and may further include the MAC address of device A. In this application scenario, if a Bluetooth connection needs to be established with device A, device A needs to be searched for. When other electronic devices have the same Bluetooth name as device A, it may be further determined, based on the MAC address, which of them is device A.
In the embodiment of the application, Bluetooth is used as the data transmission mode between device A and device C. In practical applications, if device A and device C transmit data using another data transmission method, the ripple displayed on the display screen of device A may carry information related to that method.
The purpose of the ripple displayed on the display screen of device A is to enable the camera of device C to decode, from the collected image containing the ripple, the information corresponding to the data transmission mode of device A. Therefore, the display screen of device A may display image identification information such as a ripple, or may display image identification information corresponding to other codes such as bar codes and two-dimensional codes, with the information related to the data transmission method between device A and device C carried in that image identification information.
Of course, in practical applications, besides carrying the information related to the data transmission method of device A in image identification information, other methods capable of conveying this information at short range may be used. The embodiment of the present application does not limit this.
Referring to fig. 4(a), the ripple displayed on the display screen of device A may be generated from the Bluetooth information of device A according to a preset ripple generation rule. Referring to fig. 4(b), the two-dimensional code displayed on the display screen of device A may be generated from the Bluetooth information of device A according to a preset two-dimensional code generation rule.
In practical applications, after device A enters the AR operating mode, device A may generate the image identification information from its Bluetooth information and display it. Alternatively, device A may generate the image identification information in advance from its Bluetooth information, store it in the form of a picture, and, after entering the AR operating mode, retrieve the picture from its storage space and display it on its display screen. When displayed, the image identification information or the corresponding picture may be shown on the uppermost display interface of the display screen of device A.
By way of example, the display screen of device A may display a folder, and when the folder is displayed full screen, the image identification information is displayed on top of the display interface corresponding to the folder.
The two ways of displaying the Bluetooth information of device A shown in fig. 4(a) and fig. 4(b) are only examples; in practical applications, other ways may also be used, for example, device A may display the ripple carrying its Bluetooth information full screen. The display of the ripple does not affect the display of other content on the display screen of device A.
For the case that device B is in the AR operating mode, reference may be made to the above description of device A in the AR operating mode, and details are not repeated here. For convenience of distinction, the image identification information displayed on the display screen of device A may be denoted as first image identification information, and the data transmission mode information of device A carried in it may be denoted as first network information of device A. Similarly, the image identification information displayed on the display screen of device B may be denoted as second image identification information, and the data transmission mode information of device B carried in it may be denoted as second network information of device B.
Preparation work 2: the user operates device A so that objects are displayed on the display screen of device A, where the displayed objects include the objects to be transmitted.
For example, if a file to be transmitted in device A is stored on the desktop of device A, the desktop may serve as the display interface currently shown on the display screen of device A; that is, the file to be transmitted is displayed on the display screen of device A.
If the file to be transferred in device A is stored in device A's D-disk/folder 1 and is named "transfer file.doc", then the display interface currently shown on the display screen of device A is the interface corresponding to D-disk/folder 1, and the content displayed in that interface includes the icon and name of "transfer file.doc". Of course, other files or folders may also exist in the interface corresponding to folder 1.
Suppose the files to be transferred in device A are stored both in D-disk/folder 1 of device A, named "transfer file 1.doc", and on the desktop of device A, named "transfer file 2.doc". In this case, "transfer file 1.doc" and "transfer file 2.doc" can be made to appear on the display screen of device A at the same time. For example, the display interface corresponding to folder 1 may display "transfer file 1.doc" in a non-full-screen mode, so that the folder 1 interface does not cover "transfer file 2.doc" displayed on the desktop. Thus, the two files can be displayed simultaneously on the display screen of device A.
When a plurality of files to be transmitted in device A are all stored in D-disk/folder 1 of device A, and every file stored in folder 1 is a file to be transmitted, the display screen of device A can display the D-disk interface in which folder 1 appears, so that folder 1 serves as one object. Of course, the display screen of device A may instead display the files in folder 1, so that each file in folder 1 serves as one object. However, if a file that does not need to be transferred is also stored in folder 1, and the display screen of the electronic device shows the D-disk interface so that folder 1 serves as one object, the file that does not need to be transferred may also be transferred to device B.
Specifically, how the display screen of the electronic device displays the objects to be transmitted can be set by the user based on the storage locations of the files to be transmitted. In the subsequent embodiments of the present application, the data transmission process is described using a file as the object; this does not mean that the subsequent embodiments apply only to file transmission and not to folder transmission. In practical applications, data transmission may take a file as the object or a folder as the object, and of course may take files and folders as objects at the same time.
Step 1: device C obtains, from device A, the file list displayed on the display interface of device A. This can be divided into the following sub-steps:
Step 1.1: the user sets device C to the AR fast transmission mode (the name of this mode is only an example; in practical applications, other names may be used). After device C enters the AR fast transmission mode, the camera of device C is turned on and collects the environment picture. The display screen of device C displays the environment picture acquired by the camera; that is, the display interface shown on the display screen of device C includes the environment picture acquired by the camera.
Of course, in practical applications, when device C is provided with a depth camera, the camera of device C may collect other environment data besides the environment picture, for example, information about targets in the environment picture (the depth of a target, three-dimensional information of a target, and so on). The display interface of device C is then generated from the environment data (the environment picture, the depth of targets in it, three-dimensional information of targets, and so on) collected by the camera of device C. In this case, the display interface shown on the display device of device C includes an environment picture with a three-dimensional stereoscopic effect.
When device C is a head-mounted AR device, for example AR glasses, the display screen of device C may be a lens of the AR glasses (for example, the environment picture is projected onto the lens), or may be a screen provided in the AR glasses for displaying the environment picture.
When device C is a non-head-mounted AR device having a camera and a display, such as a mobile phone, the environment picture may be displayed on the display of the mobile phone.
Step 1.2: the user adjusts the position and/or angle of device C so that the ripple displayed on the display screen of device A appears within the viewing angle of the camera of device C. A ripple image then exists in the environment picture collected by device C, and device C decodes the collected ripple image according to a preset ripple decoding rule to obtain the Bluetooth information of device A.
It should be noted that the ripple generation rule used when device A generates the ripple from its Bluetooth information corresponds to the ripple decoding rule used when device C decodes the ripple to obtain the Bluetooth information.
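To illustrate the requirement that the generation rule and the decoding rule correspond, here is a minimal sketch in which base64 text stands in for the actual ripple pattern; the payload format is an assumption, not the embodiment's actual encoding:

```python
import base64
import json

def encode_pattern(bt_name, mac):
    # Hypothetical "generation rule": serialize the Bluetooth
    # information and encode it into a pattern payload.
    payload = json.dumps({"name": bt_name, "mac": mac})
    return base64.b64encode(payload.encode("utf-8"))

def decode_pattern(pattern):
    # Matching "decoding rule": exactly inverts the generation rule,
    # so decode(encode(x)) == x.
    payload = base64.b64decode(pattern).decode("utf-8")
    return json.loads(payload)

pattern = encode_pattern("DeviceA", "AA:BB:CC:DD:EE:FF")
info = decode_pattern(pattern)
```

The same correspondence would hold for a two-dimensional-code generation rule and its decoder: whatever transform produces the displayed image, the reader must apply its inverse.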
Step 1.3: device C establishes a Bluetooth connection with device A based on the identified Bluetooth information of device A. For ease of description, the Bluetooth connection between device A and device C may be referred to as the first Bluetooth connection.
Step 1.4: the user adjusts the position of the hand (and may also adjust the position and/or angle of device C) so that the user's hand and device A appear simultaneously within the viewing angle of the camera of device C. As before, the display interface shown on the display screen of device C is the environment picture acquired in real time by the camera of device C; therefore, device A and the user's hand appear simultaneously in the environment picture shown on the display interface of device C. The user's hand makes a first preset gesture (for example, the user's palm pushes toward device A in the display interface shown on the display screen of device C), which triggers device C to send a first request to device A over the first Bluetooth connection, where the first request is used to obtain the list of all visible files in the current display interface of device A. The file list may include at least one of the following pieces of information about each file: name, icon (or thumbnail), position coordinates in the display interface of the first device, and so on.
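For illustration, a file list carrying the fields mentioned above (name, icon or thumbnail, position coordinates) might be represented as follows; the field names and values are assumptions, not the embodiment's actual format:

```python
# Hypothetical structure for the file list returned in response to
# the first request; field names are illustrative only.
file_list = [
    {"name": "file1.doc", "icon": "icon_doc.png", "pos": (120, 80)},
    {"name": "file2.doc", "icon": "icon_doc.png", "pos": (240, 80)},
    {"name": "folder1",   "icon": "icon_dir.png", "pos": (360, 80)},
]

def visible_names(manifest):
    # Convenience accessor: the names of all visible files.
    return [entry["name"] for entry in manifest]
```

Device C only needs enough information to reconstruct the layout of device A's screen (for the floating interface generated later); the actual file contents are not transferred at this stage.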
Referring to fig. 5, the display interface of device C contains device A and the user's hand at the same time; the user makes the first preset gesture (the palm pushes toward device A), which triggers device C to send the first request to device A over the first Bluetooth connection, so as to obtain the list of all visible files in the current display interface of device A.
Of course, in practical applications, instead of triggering device C to acquire the file list displayed on the display screen of device A through the first preset gesture, device C may acquire the file list from device A directly after establishing the Bluetooth connection with device A in step 1.3.
In this embodiment of the present application, network accounts may also be set for device A and device C in advance; that is, on device A, a first account is used to log in to a preset server or the application program corresponding to the preset server, and on device C, a second account is used to log in to the preset server or the corresponding application program. When the first account and the second account are the same account, device C has the authority to control device A; that is, device C may control device A to display other display interfaces, so as to obtain the file lists in those other display interfaces.
When either device A or device C is not logged in to the preset server, or the first account and the second account are different accounts, device C cannot control device A, and device C can only obtain the file list currently displayed on the display screen of device A.
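The account check described above can be sketched as follows (a simplification; in practice the account comparison would be performed or verified by the preset server, and `None` here stands in for "not logged in"):

```python
def can_control(first_account, second_account):
    # Device C may control device A only when both devices are logged
    # in to the preset server with the same account.
    if first_account is None or second_account is None:
        return False  # at least one device is not logged in
    return first_account == second_account

# Same account on both devices: device C may control device A.
# Different accounts, or one side not logged in: device C may only
# read the file list currently displayed on device A's screen.
```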
Step 2: device C determines the list of files to be transmitted based on the file list displayed by device A, and sends the list of files to be transmitted to device B. This can be divided into the following sub-steps:
Step 2.1: device C generates a floating interface based on the received file list displayed by device A, and device C displays the floating interface.
In this step, the floating interface may correspond to a virtual picture generated based on the received file list of device A. For example, a virtual picture may be generated from the name of each file in the file list, the coordinates of each file on the display screen, and the icon of each file, where the name, position, and coordinates of each file in the virtual picture all correspond to the file displayed on the display screen of device A. The interface corresponding to the virtual picture is denoted as the floating interface. Since the floating interface is generated from the file list that device A determined from the files displayed in its display interface, the files displayed in the display interface of device A are also displayed in the floating interface. Reference may be made to the files displayed on the display screen of device A and the files displayed in the floating interface in fig. 5.
Before device C displays the floating interface, the content shown in the display interface of device C is the environment picture acquired by the camera of device C. After device C displays the floating interface, the content shown in the display interface of device C includes not only the environment picture captured by the camera but also the virtual picture generated from the file list. For convenience of distinction, the interface that displays the environment picture within the display interface of device C may be denoted as the environment interface. That is, after the display screen of device C displays the floating interface, the display interface of device C includes an environment interface and a floating interface, with the floating interface displayed on top of the environment interface.
Referring to fig. 5, the display interface of device C includes an environment interface and a floating interface, where the environment interface is displayed full screen; that is, the environment picture collected by the camera of device C is displayed full screen, and the floating interface is displayed on top of the environment interface. If the user moves the position and/or angle of device C, the position of the floating interface on the display screen of device C may remain unchanged, and the virtual picture in the floating interface remains unchanged. The position of the environment interface on the display screen of device C also does not change; however, the environment picture in the environment interface changes (for example, the positions of device A and the hand change). In addition, when the floating interface is displayed on top of the environment interface, its shape, size, and position may be preset.
Of course, to improve the user experience, the position of the floating interface may be allowed to change. For example, the floating interface may move with the movement of the user's eyeballs, so that it is displayed in the direction of the user's gaze; or the floating interface may move with the movement of the gesture, so that after the user's hand appears within the viewing angle of the camera of device C, the floating interface is located in the area corresponding to the user's hand.
Of course, the display interface of the device C shown in fig. 5 is only used as an example, and in practical application, other display manners may also be used, which is not limited in this application.
In practical applications, the floating interface is displayed on the display screen of device C; however, from the perspective of the user's vision, the floating interface appears like an interface displayed at some distance in front. When the user's hand subsequently makes a gesture within the field of view of the camera of device C, it is, from the user's visual perspective, as if the hand were operating on the floating interface. Of course, such a visual effect requires the participation of certain sensors on device C.
For example, a depth sensor is disposed on device C. The depth sensor may measure the distance between the user's hand and device C, and device C generates the floating interface based on that distance, so that the distance between the floating interface and device C perceived by the user wearing device C is consistent with the distance between the user's hand and device C. In this way, the visual impression of the user wearing the AR glasses is that the hand is operating on a floating interface displayed at a certain distance in front of the user.
Step 2.2: the user adjusts the position of the hand (or adjusts the position and/or angle of device C) so that the user's hand appears within the viewing angle of the camera of device C and is located in the area where the floating interface sits in the display interface of device C.
Referring to fig. 6, three files are displayed on the display screen of device A: file 1, file 2, and file 3. A floating interface containing file 1, file 2, and file 3 is displayed on the display screen of device C (only the floating interface relevant to the current application scenario is shown in fig. 6). To the user wearing device C (AR glasses), the floating interface appears like an interface floating in front. The user's hand operates within the viewing angle of the camera of device C, and the camera may collect an environment picture including the hand; when the hand lies within the area of the floating interface in the environment picture, it is as if the hand were operating on the floating interface. Specifically, referring to the floating interface shown in fig. 6, the hand in the environment picture (only the hand in the environment picture is shown in the figure) is located in the area where the floating interface sits.
Step 2.3: the user makes a second preset gesture, within the field of view of the camera of device C and in the area corresponding to the floating interface, to select at least one file in the floating interface.
The second preset gesture may be a pointing (click) operation of the user's finger, as in the gesture made by the user's hand in the area of the floating interface shown in fig. 6; each file clicked by the user's finger in the floating interface serves as a file to be transmitted. The second preset gesture may also be a circling operation of the user's finger; reference may be made to the circling operation shown in fig. 7. The user's finger may perform multiple circling operations in the floating interface, and every file circled by the finger serves as a file to be transmitted.
To avoid a mismatch between the file the user wants to transmit and the file actually selected through the gesture due to deviations in the gesture, the file determined by the gesture may be marked: for example, the selected file is displayed in highlighted form, a special mark is placed at its upper right corner, or it is displayed in a color different from other files. Of course, when the user mistakenly selects a file that should not be transmitted, the user can also restore it to the unselected state through the second preset gesture: when a file is in the unselected state and the second preset gesture points to it (clicks it or circles it), the file switches from unselected to selected; when the file is in the selected state and the second preset gesture points to it (clicks it or circles it), the file switches from selected to unselected.
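The toggle behavior of the second preset gesture (a click or circle flips a file between selected and unselected) can be sketched as:

```python
def toggle_selection(selected, filename):
    # A pointed-to file flips state: unselected -> selected,
    # selected -> unselected (letting the user correct mis-selections).
    updated = set(selected)
    if filename in updated:
        updated.discard(filename)
    else:
        updated.add(filename)
    return updated

selection = toggle_selection(set(), "file1.doc")      # select file1.doc
selection = toggle_selection(selection, "file2.doc")  # select file2.doc
selection = toggle_selection(selection, "file1.doc")  # deselect file1.doc
```

Pointing at the same file twice returns the selection to its prior state, which is exactly the correction behavior described above.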
Referring to FIG. 8, FIG. 8 illustrates a labeling of selected files in the scene shown in FIG. 7.
Of course, in practical applications, after the user selects a file in the floating interface, the name, icon, or position of the currently selected file may also be sent to device A, and the display screen of device A may synchronously mark the selected file.
In this embodiment of the present application, the floating interface is generated from the file list acquired from device A; therefore, device C stores the information about the file corresponding to each position in the floating interface. That is, the position of each file in the floating interface is determined, and the position of the floating interface on the display screen of device C may also be determined, so the position of each file on the display screen of device C may be determined. The position (for a click operation) or area (for a circling operation) selected by the user's gesture on the display screen of device C can likewise be determined. Thus, the position or area the gesture selects in the environment picture may be determined first, and then, based on the position of the floating interface within the environment picture, converted into the position or area the gesture selects in the floating interface. From that position or area, the position of the selected file is determined, along with the icon and name of the corresponding file, and the file at that position is taken as a file to be transmitted.
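The coordinate conversion just described — from the gesture's position on device C's screen to a position inside the floating interface, and from there to a file — might be sketched as follows; the layout values, field names, and tolerance radius are all assumptions:

```python
def to_floating_coords(gesture_xy, floating_origin):
    # Convert a point in the environment picture to coordinates local
    # to the floating interface, given where the floating interface
    # sits in the environment picture.
    gx, gy = gesture_xy
    ox, oy = floating_origin
    return (gx - ox, gy - oy)

def hit_test(local_xy, manifest, radius=40):
    # Find the file whose stored position lies within a tolerance
    # radius of the gesture's local position.
    lx, ly = local_xy
    for entry in manifest:
        px, py = entry["pos"]
        if abs(px - lx) <= radius and abs(py - ly) <= radius:
            return entry["name"]
    return None

manifest = [{"name": "file1.doc", "pos": (120, 80)}]
local = to_floating_coords((320, 180), floating_origin=(200, 100))
picked = hit_test(local, manifest)  # → "file1.doc"
```

A circling operation would use the same conversion, hit-testing every file position that falls inside the circled region rather than a single point.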
Step 2.4: the user makes a third preset gesture within the field of view of the camera of device C. The third preset gesture indicates that file selection is finished and, at the same time, determines the input device for the files.
For example, the third preset gesture may be a grab-and-move operation or a long-press-and-move operation, where the device at the release position of the third preset gesture is the input device for the files.
Referring to fig. 9, the third preset gesture is a grab-and-move gesture, where the initial position of the gesture is the position of a selected file in the floating interface, and the release position of the gesture is the position of the input device.
Fig. 9 shows the scene in which the user performs the third preset gesture, and fig. 10(a) and 10(b) show the display interfaces on the display screen of device C in the scene corresponding to fig. 9. Referring to fig. 10(a): because the user has adjusted the position and/or angle of device C, device B, which is to serve as the input device, appears within the viewing angle of the camera of device C, so the display interface of device C contains an environment interface including device B and a floating interface on top of it. The user moves the hand so that the hand collected within the camera's viewing angle appears in the area of the floating interface, makes a grabbing gesture there, and then moves the hand to the area of device B as shown by the dotted line. Referring to fig. 10(b): after the hand collected within the camera's viewing angle appears in the area of device B, the user opens the palm to perform a releasing action; at this point the grab-and-move gesture ends, and device B is the input device.
In practical applications, the selected files may move with the movement of the user's hand; that is, after the user's hand moves to the area of device B, the selected files follow the hand to the display screen of device B. Alternatively, the selected files may not move with the hand; that is, after the user's hand starts moving, the selected files keep their original positions on the display screen of device C.
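The grab-move-release flow of the third preset gesture can be sketched as a small state machine; the event names and region labels are illustrative, not the embodiment's actual recognizer:

```python
def run_grab_gesture(events, selected_files):
    # events: sequence of (action, region) pairs, where action is one
    # of "grab", "move", "release" and region names where the hand is.
    # Returns (input_device_region, files_to_transmit) if the gesture
    # completes, else None.
    grabbed = False
    for action, region in events:
        if action == "grab" and region == "floating_interface":
            grabbed = True  # gesture starts on a selected file
        elif action == "release" and grabbed:
            # The region where the hand releases identifies the input device.
            return region, list(selected_files)
    return None

result = run_grab_gesture(
    [("grab", "floating_interface"),
     ("move", "device_b"),
     ("release", "device_b")],
    ["file1.doc"],
)  # → ("device_b", ["file1.doc"])
```

A release without a preceding grab in the floating-interface area yields no result, matching the requirement that the gesture start at a selected file.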
Step 2.5: after device C detects the third preset gesture in the environment interface (or environment picture), device C sends a file-selection-finished message to device A; the message carries the list of files to be transmitted and may be denoted as the second message. After device A receives the file-selection-finished message, device A sends the network configuration information of device A to device C.
The network configuration information of device A indicates the network transmission modes supported by device A. For example, when device A has a Bluetooth module and a Wi-Fi module and can access the Internet, the network configuration information of device A includes: the Bluetooth information of device A (including authentication information), the Wi-Fi P2P connection information of device A (including authentication information), and the Internet access links of the selected files (including authentication information).
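For illustration only, the network configuration information of device A might be structured as follows; every field name, token, and link here is a hypothetical placeholder, not a format defined by the embodiment:

```python
# Hypothetical layout of device A's network configuration information:
# one entry per supported transmission mode, each with its own
# authentication information.
network_config_a = {
    "bluetooth": {"mac": "AA:BB:CC:DD:EE:FF", "auth": "bt-token"},
    "wifi_p2p":  {"ssid": "DIRECT-A", "auth": "p2p-token"},
    "internet":  {"links": ["https://example.invalid/file1.doc"],
                  "auth": "url-token"},
}

def supported_modes(config):
    # The transmission modes device B can choose from when it later
    # requests the files from device A.
    return sorted(config.keys())
```

Since device B receives this structure (forwarded by device C in step 2.7), it can pick whichever mode it also supports and present the matching authentication information to device A.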
The list of files to be transmitted is the information about the files the user selected in the floating interface through gestures; its contents include: the name of each file to be transferred, its icon, or its position in the floating interface (which may also be understood as its position on the display screen of device A).
Step 2.6: after device C detects the third preset gesture in the environment interface (or environment picture), it may also determine, from the release position of the third preset gesture, that the input device is device B. At this point, device C detects the ripple information on the screen of device B and recognizes the Bluetooth information of device B from that ripple information.
Since device B is already present on the display interface of device C, device C can capture the ripple image on the screen of device B.
The content of the ripple displayed by the device B in this step can refer to the related description of the content of the ripple displayed by the device a in preparation work 1, and is not described herein again.
In practical applications, the third preset gesture may instead be implemented as two gestures: one gesture indicates the end of file selection and triggers device C to execute step 2.5; the other gesture selects the input device and triggers device C to execute step 2.6.
In the embodiment of the present application, the fact that one third preset gesture may trigger device C to perform both step 2.5 and step 2.6 is only an example, and does not limit the number of gestures used to trigger device C to perform step 2.5 and step 2.6.
In practical applications, when the display screen of device B appears within the viewing angle of the camera of device C, device C may establish a connection with device B based on the ripple displayed on the display screen of device B; once device B is determined to be the input device through the third preset gesture, device C may perform information interaction over the established connection.
Step 2.7: device C establishes a Bluetooth connection with device B based on the identified Bluetooth information of device B, and device C sends the list of files to be transmitted and the network configuration information of device A to device B over the established Bluetooth connection.
For ease of distinction, the bluetooth connection between device a and device C is named first bluetooth connection and the bluetooth connection between device C and device B is named second bluetooth connection.
The precondition for establishing the first Bluetooth connection is as follows: device C captures, through its camera, the ripple on the screen of device A that carries the Bluetooth information of device A. The first Bluetooth connection is used for information interaction between device A and device C; for example, device A transmits its network configuration information to device C, device A transmits information of the files displayed on its display screen to device C, and device C transmits information of the selected files to device A.
The precondition for establishing the second Bluetooth connection is as follows: device C captures, through its camera, the ripple on the screen of device B that carries the Bluetooth information of device B. The second Bluetooth connection is used for information interaction between device C and device B; for example, device C transmits the network configuration information of device A and the list of files to be transmitted to device B.
It should be noted that, in practical applications, the network connection between device A and device C is not limited to a Bluetooth connection; for ease of distinction, the network connection between device A and device C may be denoted as the first connection. Likewise, the network connection between device B and device C is not limited to a Bluetooth connection, and may be denoted as the second connection.
The message sent by device C to device B over the established Bluetooth connection is denoted as the first message; the first message carries the list of files to be transmitted and the network configuration information of device A.
Step 3: after device A successfully verifies the list of files to be transmitted sent by device B, device A sends the files in the list to device B. This step specifically comprises the following substeps:
Step 3.1: device B establishes a network connection with device A based on the network configuration information of device A, and sends a file acquisition request to device A over the network connection, where the file acquisition request carries the list of files to be transmitted.
Device B may initiate a network connection request to device A based on the network configuration information of device A and carry the list of files to be transmitted in the network connection request. Alternatively, device B may first establish the network connection with device A based on the network configuration information of device A, and after the connection is successfully established, send the list of files to be transmitted over it to request device A to send the files in the list to device B.
In practical applications, which of the two approaches is adopted may be determined according to the specific situation.
In addition, the network configuration information of device A may include information corresponding to a plurality of network connection modes. Priorities may therefore be set for these modes, for example: priority of the Bluetooth connection mode > priority of the Wi-Fi P2P connection mode > priority of the Internet link mode.
When device B fails to establish a connection with device A using the highest-priority network connection mode in the network configuration information of device A, it may select the network connection mode corresponding to the next priority in the network configuration information to establish the connection with device A.
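The priority-based fallback described above can be sketched as follows; the function name and the mode names in the configuration list are illustrative assumptions for this sketch, not part of the embodiment.

```python
def connect_with_fallback(network_config, try_connect):
    """Try each connection mode of device A in descending priority order.

    network_config: list of mode names ordered by priority, e.g.
    ["bluetooth", "wifi_p2p", "internet_link"] (illustrative names).
    try_connect: callable that attempts one mode and returns True on success.
    """
    for mode in network_config:
        if try_connect(mode):
            return mode  # highest-priority mode that succeeded
    return None  # every mode failed


# Example: the Bluetooth attempt fails, so device B falls back to Wi-Fi P2P.
config = ["bluetooth", "wifi_p2p", "internet_link"]
chosen = connect_with_fallback(config, lambda mode: mode == "wifi_p2p")
```

In a real implementation, `try_connect` would wrap the actual pairing or association attempt for each mode and time out on failure.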
The precondition for establishing the network connection between device A and device B is as follows: device A sends its network configuration information to device C over the first Bluetooth connection, and device C sends the network configuration information of device A to device B over the second Bluetooth connection.
Step 3.2: after device A receives the file acquisition request sent by device B, device A compares the list of files to be transmitted in the file acquisition request with the list of files to be transmitted sent by device C; when the two lists are consistent, device A sends the files in the list to device B over the network connection between device B and device A.
Referring to fig. 11, in the process of sending the files in the list to device B, device A may send the file currently being transmitted and the total progress of the current transmission to device C over the first Bluetooth connection between device A and device C. Device C then displays the file currently being transmitted (file 1 in fig. 11) and the total progress of the current transmission (58% in fig. 11) through its display interface. Of course, after the transfer is completed, device C may also display a completion prompt. In practical applications, only the total progress of the current transmission may be displayed, without displaying the file currently being transmitted.
If the file currently being transmitted is displayed, then even when the data transfer takes a folder as its object, the individual files inside the folder may be displayed as the files being transmitted.
It should be noted that the content shown in fig. 11 is a display interface, which may be denoted as a progress interface; the progress interface may also be a floating interface, displayed in a floating manner above the environment interface.
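The progress report that device A sends to device C can be sketched as a small message builder; the field names and the whole-percent rounding rule are assumptions chosen for illustration.

```python
def make_progress_message(current_file, bytes_sent, bytes_total):
    """Build the progress report device A sends to device C over the
    first Bluetooth connection: current file plus total progress."""
    percent = int(100 * bytes_sent / bytes_total)  # truncate to a whole percent
    return {"current_file": current_file, "total_progress": f"{percent}%"}


# Example matching fig. 11: "file 1" at 58% total progress.
msg = make_progress_message("file 1", 58, 100)
```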
In the application scenarios shown in fig. 3 to fig. 11, device C is an AR glasses as an example, and the user selects the files to be transmitted in the floating interface as an example. In practice, other modes are also possible; reference may be made to the following description of the application scenario.
Because the picture displayed on the display screen of device C may be the environment picture captured by the camera of device C, the user can, by adjusting the position of the hand (or the position and/or angle of device C), make both the hand and the screen of device A appear within the viewing angle of the camera of device C at the same time. In the environment picture of device C, the user's hand can then select files on the display screen of device A, completing the selection of the files to be transmitted without generating the floating interface described above.
As an example, referring to fig. 12, the user wears device C, and device A and the user's hand are within the field of view of the camera of device C. The environment picture captured by the camera of device C generates the display interface of device C; when no floating interface exists, the content displayed in the display interface of device C is the environment picture captured by the camera, that is, the display interface of device C is the environment interface of device C. When device A and the user's hand appear within the viewing angle of the camera of device C, they are also displayed in the display interface of device C. In fig. 12, only the user's hand is shown; if the user's arm and other objects in the environment are also within the field of view of the camera of device C, they may likewise appear in the display interface of device C.
As shown in fig. 12, the picture displayed in the display interface of device C is the environment picture collected within the viewing angle of the camera of device C.
Within the viewing angle of the camera, the user selects a file on the screen of device A as a file to be transmitted with a mid-air gesture. For the specific selection manner, reference may be made to the description of step 2.3, and details are not repeated here. The difference from step 2.3 is that in step 2.3 the user selects a file on the floating interface with a mid-air gesture, whereas in this application scenario the user selects a file on the display interface of device A with a mid-air gesture. Of course, whether the user's hand selects a file on the display interface of device A in the environment picture or on the floating interface, device C determines the selected file based on the gesture and the position of the hand in the display interface of the display screen of device C.
When the distance between device C and device A is fixed, if the user's hand is closer to device C, the area of device A occluded by the hand within the viewing angle of the camera (that is, in the display interface of device C) becomes larger; if the user's hand is farther from device C, the occluded area becomes smaller. However, when the user wears device C, the hand can only move within a limited range in front of device C. Therefore, the following situation may occur: in the display interface of device C, the user's hand is too large or too small relative to the display screen of device A, making it inconvenient for the user to select the files to be transmitted.
Referring to (a) in fig. 13, the user's hand is too large relative to the screen of device A in the display interface of device C (for example, when device A is a computer far away from device C); referring to (b) in fig. 13, the user's hand is too small relative to the screen of device A in the display interface of device C (for example, when device A is a large-screen device close to device C). To highlight the contrast, device A in fig. 13(a) and fig. 13(b) is a computer as an example.
To enable the user to accurately select a file displayed on the display screen of device A, whether the user can make an accurate selection may be judged by the coverage ratio of the user's hand to the display screen of device A.
As an example, the area of the hand and the area of the display screen of device A in the display interface of device C may be identified, and the coverage ratio of the hand to the display screen of device A may be determined from these two areas.
Alternatively, in practical applications, the display screen of device A in the display interface of device C may be divided into a plurality of sub-regions of equal area, and the ratio of the number of sub-regions in which the hand is present to the total number of sub-regions may be used as the coverage ratio of the hand to the display screen of device A.
Taking fig. 14 as an example, if the number of sub-regions in which the hand is present is 3 (marked 1, 2, and 3 in fig. 14) and the total number of sub-regions is 28, then 3/28 is the coverage ratio of the hand to the display screen of device A.
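The sub-region counting above can be sketched as follows; the representation of sub-regions as integer indices is an assumption chosen to match the 28-cell example of fig. 14.

```python
def coverage_ratio(hand_cells, total_cells):
    """Coverage ratio of the hand to device A's display screen, computed as
    (number of sub-regions containing the hand) / (total sub-regions)."""
    return len(set(hand_cells)) / total_cells  # set() ignores duplicate reports


# Fig. 14 example: the hand is present in sub-regions 1, 2 and 3 of 28.
ratio = coverage_ratio({1, 2, 3}, 28)  # 3/28
```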
Of course, in practical applications, the following methods may also be adopted: a ripple is displayed full-screen on the display screen of device A, and the ratio of the area in which no ripple is displayed, within the display screen of device A as shown in the display interface of device C, to the area of the display screen of device A in the display interface of device C is used as the coverage ratio of the hand to the display screen of device A. Alternatively, with the ripple displayed full-screen on the display screen of device A, device A in the display interface of device C is again divided into a plurality of sub-regions of equal area, each sub-region carrying a sequential code, and the ratio of the number of codes of sub-regions covered by the hand to the total number of sub-region codes is used as the coverage ratio of the hand to the display screen of device A.
In practice, other ways may also be used to evaluate the relative proportion of the hand to the display screen of device A in the display interface of device C.
When the coverage ratio is within a preset range, the proportion of the user's hand to the display screen of device A in the display interface of device C is appropriate, and the hand can accurately select a file displayed on the display screen of device A. The preset range is the numerical range from a first preset value to a second preset value, where the minimum value of the preset range is the first preset value and the maximum value is the second preset value.
It should be noted that, in this case, since no floating interface needs to be generated, device C may refrain from requesting the file list on the display screen of device A from device A, and may instead identify the position (for a click gesture) or the area (for a circle-selection gesture) selected by the second preset gesture in the environment picture. Based on that position or area, the icon of the selected file in the display interface of device C is determined by an image segmentation method, and the name of the selected file is determined by image character recognition. Of course, in practical applications, other manners may also be adopted; for example, the position or area corresponding to the second preset gesture may be determined based on the codes of the ripples on the display screen of device A in the display interface of device C, so as to obtain the icon and name of the file corresponding to that position or area. Then, when device C sends the message of ending file selection to device A, the message carries the list of files to be transmitted, and the list includes one or more of the obtained file name, file position, and file icon.
As an example, the position of the user's gesture in the environment picture of device C may be determined. When the second preset gesture is a click-selection gesture, the click position may be determined, and based on it, the file icon corresponding to the position and the position of the icon in the environment picture may be segmented out. The text content corresponding to the position may be recognized as the file name by an image character recognition method, and the position of the display screen of device A in the environment picture may be determined, so that the position of the file icon within the display screen of device A is obtained. The list of files to be transmitted can be determined in the above manner.
When the coverage ratio is smaller than the first preset value, the user's hand in the display interface of device C is too small relative to the display screen of device A, which can also be understood as the display screen of device A being too large. In this case, a floating interface may be displayed that is smaller than the display screen of device A in the display interface of device C; reference may be made to (a) in fig. 15.
When the coverage ratio is larger than the second preset value, the user's hand in the display interface of device C is too large relative to the display screen of device A, which can also be understood as the display screen of device A being too small. In this case, a floating interface may be displayed that is larger than the display screen of device A in the display interface of device C; reference may be made to (b) in fig. 15.
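The three-way decision above can be sketched as follows; the function name, the threshold values, and the returned labels are illustrative assumptions.

```python
def choose_selection_mode(coverage, first_preset, second_preset):
    """Decide how device C lets the user select files, based on the coverage
    ratio of the hand to device A's screen in device C's display interface."""
    if coverage < first_preset:
        # Hand too small relative to the screen: show a floating interface
        # smaller than device A's screen, as in fig. 15(a).
        return "floating_smaller_than_screen"
    if coverage > second_preset:
        # Hand too large relative to the screen: show a floating interface
        # larger than device A's screen, as in fig. 15(b).
        return "floating_larger_than_screen"
    # Coverage within the preset range: select directly in the environment picture.
    return "select_in_environment_picture"


mode = choose_selection_mode(0.4, first_preset=0.2, second_preset=0.6)
```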
For the display mode of the floating interface in fig. 15(a) and fig. 15(b), reference may be made to the description of step 1.4 to step 2.1, and details are not repeated here. The display mode of the floating interface in this application scenario differs from that in step 1.4 as follows: in step 1.4, the user triggers device C to display the floating interface through the first preset gesture, whereas in this application scenario, device C is triggered to display the floating interface when the coverage ratio of the user's hand to the display screen of device A in the display interface of device C is not within the preset range.
As another embodiment of the present application, the above application scenarios all take device C as an AR device worn by the user as an example. In practical applications, device C may not be an AR device worn by the user; for example, device C may also be a mobile phone, a tablet computer, or the like. The environment picture collected by the camera of device C is displayed on the display screen of device C. When the environment picture is displayed on the display screen of device C, the user may press or click the location area of device A in the environment picture displayed on the display screen of device C (the display screen being a touch screen) to trigger device C to request, from device A, the file list displayed on the display screen of device A. (In practical applications, device C may also send the request for the file list displayed on the display screen of device A after device C and device A establish the first connection.) A virtual interface may also be displayed on the display screen of device C; since the virtual interface is displayed with a floating effect above the environment interface, it may likewise be denoted as a floating interface. The second preset gesture of the user may then be a touch operation performed on the display screen of device C, for example, clicking or circling with a finger on the display screen of device C to select a file as a file to be transmitted. The third preset gesture of the user may be a long-press-and-move operation on the display screen of device C: for example, the location area of any one of the selected files on the floating interface of device C serves as the starting point of the long-press-and-move gesture, and the location area of device B in the environment interface serves as its ending point.
It can be understood from this embodiment that, when device C is an augmented reality device, the first preset gesture (when present), the second preset gesture, and the third preset gesture are gestures made by the user's hand within the viewing angle of the camera of device C;
when device C is not an augmented reality device, the first preset gesture, the second preset gesture, and the third preset gesture are all touch gestures made by the user's hand on the display screen of device C.
In the above embodiment, the system composed of multiple devices implements the following function: controlling, through device C, the transmission of a file on device A to device B. In practical applications, however, the function implemented by the system may also be: device C controls device A to realize a certain function, such as setting an alarm clock or playing a video. The function may also be: device C controls device A to realize a control function associated with device B. For example, device C controls device A (mobile phone 1) and device B (mobile phone 2) to synchronously play the currently playing song, or device C controls device B (mobile phone 1) and device A (mobile phone 2) to synchronously play the currently playing song. Alternatively, device C controls device A (a smart speaker) to synchronously play the audio of the video currently played by device B (a mobile phone), or device C controls device B (a smart speaker) to synchronously play the audio of the video currently played by device A (a mobile phone).
As another embodiment of the present application, the floating interface displayed on the display screen of device C may display not only the files displayed on the display screen of device A but also any other content on the display screen of device A.
As an example, the floating interface of device C may replicate the display interface of the display screen of device A. For instance, after the user makes the first preset gesture, what device A sends to device C is no longer the file list displayed on its display screen, but a screenshot of the display interface, or the icons and positions of all triggerable controls in the display interface. Device C generates the floating interface based on the received screenshot or the icons and positions of the controls. After the user makes a gesture in the area of the floating interface, device C sends the position of the gesture within the floating interface to device A; upon receiving the position, device A generates an instruction on its display screen based on the position, and the instruction acts on the control corresponding to that position, thereby realizing control of device A by device C. In addition, after the content displayed on the display screen of device A changes as a result of the instruction acting on the control, device A sends the new screenshot of its display interface, or the icons and positions of all triggerable controls currently displayed, to device C. Correspondingly, the content displayed in the floating interface of device C changes, and the user can continue to make gestures in the area of the floating interface to control device A through device C.
In this way, the user can make gestures with a hand in the area corresponding to the floating interface in the display interface of device C to operate the display interface of the display screen of device A, thereby realizing control of device A.
The above description takes as an example the process in which a user realizes data transmission using a data transmission system composed of device A, device B, and device C. The following describes the data transmission procedure from the product perspective of the data transmission system composed of device A, device B, and device C. The third device in the following embodiments may be device C in the above embodiments, the second device may be device B in the above embodiments, and the first device may be device A in the above embodiments.
In an embodiment of the data transmission method provided by the present application, the third device displays, through its display screen, the environment picture collected by its camera;
the third device detects, in the environment picture, a first preset gesture and the first device selected by the first preset gesture, where a display interface is displayed on the display screen of the first device and at least one object is displayed in the display interface;
in response to the first preset gesture, the third device displays a floating picture on its display screen, where objects from the display interface of the first device are present in the floating picture;
the third device detects a second preset gesture in the environment picture, and detects the object to be transmitted selected by the second preset gesture in the floating picture;
the third device detects, in the environment picture, a third preset gesture and the second device selected by the third preset gesture;
in response to the third preset gesture, the third device sends first information to the second device, where the first information includes information of the object to be transmitted;
after receiving the first information, the second device sends an object acquisition request to the first device, where the object acquisition request carries the information of the object to be transmitted and is used to instruct the first device to send the object to be transmitted to the second device;
after receiving the object acquisition request, the first device sends the object to be transmitted to the second device.
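The message flow above can be illustrated with a minimal in-memory simulation; the class and field names are assumptions for this sketch, and the connections are reduced to direct method calls.

```python
class FirstDevice:
    """Stands in for the first device (device A), which holds the objects."""
    def __init__(self, objects):
        self.objects = objects

    def handle_object_request(self, names):
        # Send the requested objects to the requester (the second device).
        return {n: self.objects[n] for n in names if n in self.objects}


class SecondDevice:
    """Stands in for the second device (device B), which receives objects."""
    def __init__(self):
        self.received = {}

    def handle_first_information(self, first_device, first_information):
        # The first information carries the information of the objects to be
        # transmitted; forward it to the first device as an acquisition request.
        names = first_information["objects"]
        self.received = first_device.handle_object_request(names)


class ThirdDevice:
    """Stands in for the third device (device C), which detects the gestures."""
    def on_third_preset_gesture(self, second_device, first_device, selected):
        # In response to the third preset gesture, send the first information.
        second_device.handle_first_information(first_device, {"objects": selected})


a = FirstDevice({"file 1": b"data-1", "file 2": b"data-2"})
b = SecondDevice()
c = ThirdDevice()
c.on_third_preset_gesture(b, a, ["file 1"])  # b now holds "file 1"
```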
In another embodiment, in response to the third preset gesture, the third device sends the first information to the second device as follows:
in response to the third preset gesture, the third device sends second information to the first device, where the second information indicates that the selection of the object to be transmitted is finished and instructs the first device to send its network configuration information to the third device;
the third device receives the network configuration information of the first device sent by the first device, and sends the first information to the second device, where the first information carries the information of the object to be transmitted and the network configuration information of the first device;
before the second device sends the object acquisition request to the first device, the method further comprises:
the second device receives the first information sent by the third device, and establishes a network connection with the first device based on the network configuration information of the first device carried in the first information.
In another embodiment, in response to the first preset gesture, the third device displays the floating picture on its display screen as follows:
in response to the first preset gesture, the third device sends a first request to the first device, where the first request is used to acquire information of the objects in the display interface of the first device, and this information includes at least one of the following: the icon of an object, the name of the object, and the coordinates of the object in the display interface of the first device;
the third device acquires the information of the objects in the display interface of the first device and generates the floating picture based on that information;
the third device displays the floating picture on its display screen.
In another embodiment, in response to the first preset gesture, the third device sends the first request to the first device as follows:
in response to the first preset gesture, the third device detects the coverage ratio of the hand in the first preset gesture in the environment picture to the display screen of the first device in the environment picture;
when this coverage ratio is not within the preset range, the third device sends the first request to the first device.
In another embodiment, after the third device detects the coverage ratio of the hand in the first preset gesture in the environment picture to the display screen of the first device in the environment picture, the method further comprises:
when this coverage ratio is within the preset range, the third device detects, in the environment picture, the second preset gesture and the object to be transmitted selected by the second preset gesture in the environment picture.
In another embodiment, the third device detects the object to be transmitted selected in the environment picture by the second preset gesture as follows:
the third device identifies the position or area selected by the second preset gesture in the environment picture;
the third device identifies the information of the object to be transmitted corresponding to that position or area in the environment picture, where the information of the object to be transmitted includes at least one of the following: the icon of the object, the name of the object, and the position of the object on the display screen of the first device.
In another embodiment, the third device detects the object to be transmitted selected in the floating picture by the second preset gesture as follows:
the third device identifies the position or area selected by the second preset gesture in the environment picture;
the third device converts the position or area selected by the second preset gesture in the environment picture into the position or area selected by the second preset gesture in the floating picture, according to the positional relationship between the floating picture displayed on the display screen of the third device and the environment picture;
the third device obtains the information of the object to be transmitted corresponding to the position or area selected in the floating picture by the second preset gesture.
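The environment-picture-to-floating-picture conversion can be sketched as a simple coordinate translation, assuming the floating picture occupies an axis-aligned rectangle within the display screen of the third device; the rectangle layout is an assumption of this sketch.

```python
def env_to_floating(point, floating_rect):
    """Convert a gesture position in the environment picture into coordinates
    inside the floating picture.

    point: (x, y) in environment-picture pixels.
    floating_rect: (left, top, width, height) of the floating picture within
    the display screen of the third device (assumed axis-aligned).
    Returns the position relative to the floating picture's top-left corner,
    or None when the gesture falls outside the floating picture.
    """
    x, y = point
    left, top, width, height = floating_rect
    if not (left <= x < left + width and top <= y < top + height):
        return None
    return (x - left, y - top)


# Example: a click at (150, 120) inside a floating picture placed at (100, 100),
# 200 wide and 100 tall, maps to (50, 20) in floating-picture coordinates.
pos = env_to_floating((150, 120), (100, 100, 200, 100))
```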
In another embodiment, first image identification information is displayed on the display screen of the first device, and the first image identification information carries first network information of the first device;
before the third device sends the first request to the first device, the method further comprises:
the third device detects, in the environment picture, the first image identification information displayed on the display screen of the first device;
the third device recognizes the first image identification information and obtains the first network information of the first device carried in it;
the third device establishes a first connection with the first device based on the first network information of the first device.
In another embodiment, second image identification information is displayed on the display screen of the second device, and the second image identification information carries second network information of the second device;
before the third device sends the first information to the second device, the method further comprises:
the third device detects, in the environment picture, the second image identification information displayed on the display screen of the second device;
the third device recognizes the second image identification information and obtains the second network information of the second device carried in it;
the third device establishes a second connection with the second device based on the second network information of the second device.
In another embodiment, the first image identification information comprises ripple information, and the second image identification information comprises ripple information.
In another embodiment, the second information comprises the information of the object to be transmitted,
and the step of the first device sending the object to be transmitted to the second device after receiving the object acquisition request includes:
after receiving the object acquisition request, the first device compares the information of the object to be transmitted in the object acquisition request with the information of the object to be transmitted in the second information sent by the third device, and sends the object to be transmitted to the second device.
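The comparison step can be sketched as follows. The check serves the request only when every requested object appears in the selection the third device reported; the `name` field is an illustrative identifier, since the patent leaves the matching criterion open:

```python
def verify_object_request(requested, selected):
    """Compare the object acquisition request against the second information:
    return True only if every object named in the second device's request
    was among the objects the third device reported as selected."""
    selected_names = {obj["name"] for obj in selected}
    return all(obj["name"] in selected_names for obj in requested)
```

A request naming an object outside the reported selection would thus be rejected before any transfer starts.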
In another embodiment, when the first device sends the object to be transmitted to the second device, the method further includes:
the first device sends transmission information to the third device, wherein the transmission information comprises: information about the object currently being transmitted and the transmission progress;
and after receiving the transmission information, the third device displays it on the display screen of the third device.
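A minimal sketch of the transmission information the first device might report; the structure and field names are assumptions, since the patent only says it carries the current object's information and the progress:

```python
from dataclasses import dataclass

@dataclass
class TransferStatus:
    object_name: str   # object currently being transmitted (illustrative field)
    bytes_sent: int
    bytes_total: int

    def progress(self) -> float:
        """Transmission progress as a fraction in [0, 1], for display
        on the third device's screen."""
        if self.bytes_total == 0:
            return 1.0
        return min(self.bytes_sent / self.bytes_total, 1.0)
```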
In another embodiment, when the third device is an augmented reality device, the first preset gesture, the second preset gesture and the third preset gesture are gestures made by the user's hand within the field of view of the camera of the augmented reality device;
when the third device is not an augmented reality device, the first preset gesture, the second preset gesture and the third preset gesture are touch gestures made by the user's hand on the display screen of the third device.
Having described the data transmission method from the perspective of the data transmission system as a product, the data transmission process is now described from the perspective of the third device acting as the control device.
The data transmission method provided by the embodiment of the application comprises the following steps:
the third device displays, on its display screen, an environment picture captured by its camera;
the third device detects, in the environment picture, a first preset gesture and the first device selected by the first preset gesture, wherein a display interface is shown on the display screen of the first device and at least one object is displayed in the display interface;
in response to the first preset gesture, the third device displays a floating picture on its display screen, wherein the floating picture contains the objects in the display interface of the first device;
the third device detects a second preset gesture in the environment picture and detects, in the floating picture, the object to be transmitted selected by the second preset gesture;
the third device detects, in the environment picture, a third preset gesture and the second device selected by the third preset gesture;
and in response to the third preset gesture, the third device sends first information to the second device, wherein the first information comprises the information of the object to be transmitted, the first information is used to instruct the second device to send an object acquisition request to the first device, and the object acquisition request is used to instruct the first device to send the object to be transmitted to the second device.
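The three-gesture sequence above can be sketched as a small state machine on the control device: each preset gesture advances one phase, and out-of-order gestures are ignored. Phase and gesture names are illustrative:

```python
from enum import Enum, auto

class Phase(Enum):
    IDLE = auto()
    SOURCE_SELECTED = auto()   # first preset gesture: first (source) device chosen
    OBJECT_SELECTED = auto()   # second preset gesture: object chosen in the floating picture
    DONE = auto()              # third preset gesture: first information sent to second device

def next_phase(phase: Phase, gesture: str) -> Phase:
    """Advance the control-device state machine; only the expected gesture
    for the current phase has any effect."""
    transitions = {
        (Phase.IDLE, "first"): Phase.SOURCE_SELECTED,
        (Phase.SOURCE_SELECTED, "second"): Phase.OBJECT_SELECTED,
        (Phase.OBJECT_SELECTED, "third"): Phase.DONE,
    }
    return transitions.get((phase, gesture), phase)
```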
As another embodiment of the present application, in response to a third preset gesture, the third device sends first information to the second device, including:
in response to the third preset gesture, the third device sends second information to the first device, wherein the second information indicates that selection of the object to be transmitted is complete and instructs the first device to send its network configuration information to the third device;
the third device receives the network configuration information of the first device sent by the first device, and then sends the first information to the second device, wherein the first information carries the information of the object to be transmitted and the network configuration information of the first device, and instructs the second device to establish a network connection with the first device based on that network configuration information.
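In this embodiment the third device acts as a relay for the handshake: it collects the first device's network configuration and bundles it with the selection. A sketch, with purely illustrative field names (the patent does not fix a message format):

```python
def build_first_information(objects, source_netconf):
    """Assemble the first information the third device sends to the second
    device: the selected objects plus the first device's network
    configuration, so the second device can connect to the source directly."""
    return {
        "objects": objects,                        # information of objects to be transmitted
        "source_network_config": source_netconf,   # lets the second device reach the first
    }
```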
As another embodiment of the present application, in response to the first preset gesture, the third device displays a floating picture on a display screen of the third device, including:
in response to the first preset gesture, the third device sends a first request to the first device, wherein the first request is used to acquire information of the objects in the display interface of the first device, and the information of an object in the display interface of the first device comprises at least one of the following: the icon of the object, the name of the object, and the coordinates of the object in the display interface of the first device;
the third device acquires the information of the objects in the display interface of the first device and generates a floating picture based on that information;
the third device displays the floating picture on its display screen.
As another embodiment of the present application, in response to the first preset gesture, the third device sends a first request to the first device, including:
in response to the first preset gesture, the third device detects the coverage ratio between the hand making the first preset gesture in the environment picture and the display screen of the first device in the environment picture;
and when that coverage ratio is not within a preset range, the third device sends the first request to the first device.
As another embodiment of the present application, after the third device detects the coverage ratio between the hand making the first preset gesture in the environment picture and the display screen of the first device in the environment picture, the method further comprises:
when that coverage ratio is within the preset range, the third device detects a second preset gesture in the environment picture and detects, in the environment picture, the object to be transmitted selected by the second preset gesture.
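The coverage-ratio branch can be sketched with bounding boxes: the ratio is the fraction of the first device's on-screen area overlapped by the hand region, and it decides whether selection happens directly in the environment picture or in a requested floating picture. The box representation and the threshold values are assumptions for illustration:

```python
def coverage_ratio(hand_box, screen_box):
    """Fraction of the first device's screen area covered by the hand
    region; both boxes are (x, y, w, h) in environment-picture pixels."""
    hx, hy, hw, hh = hand_box
    sx, sy, sw, sh = screen_box
    # Width and height of the intersection rectangle (0 if disjoint).
    ix = max(0, min(hx + hw, sx + sw) - max(hx, sx))
    iy = max(0, min(hy + hh, sy + sh) - max(hy, sy))
    return (ix * iy) / (sw * sh) if sw * sh else 0.0

def selection_mode(ratio, preset_range=(0.2, 1.0)):  # range values are assumptions
    """Within the preset range: the hand is over the screen, so select in
    the environment picture; otherwise fall back to the floating picture."""
    lo, hi = preset_range
    return "environment" if lo <= ratio <= hi else "floating"
```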
As another embodiment of the present application, the detecting, by the third device, the object to be transmitted in the environment screen, which is selected in the environment screen by the second preset gesture, includes:
the third device identifies the position or area selected by the second preset gesture in the environment picture;
the third device identifies the information of the object to be transmitted corresponding to that position or area in the environment picture, wherein the information of the object to be transmitted comprises at least one of the following: an icon of the object, a name of the object, and a position of the object on the display screen of the first device.
As another embodiment of the present application, the detecting, by the third device, the object to be transmitted in the floating screen, which is selected in the floating screen by the second preset gesture, includes:
the third device identifies the position or area selected by the second preset gesture in the environment picture;
the third device converts the position or area selected by the second preset gesture in the environment picture into the corresponding position or area in the floating picture, according to the positional relationship between the floating picture and the environment picture displayed on the display screen of the third device;
and the third device obtains the information of the object to be transmitted corresponding to the position or area selected by the second preset gesture in the floating picture.
As another embodiment of the present application, first image identification information is displayed on a display screen of a first device, where the first image identification information carries first network information of the first device;
before the third device sends the first request to the first device, the method further comprises:
the third device detects, in the environment picture, the first image identification information displayed on the display screen of the first device;
the third device identifies the first image identification information and obtains the first network information of the first device carried by it;
the third device establishes a first connection with the first device based on the first network information of the first device.
As another embodiment of the present application, second image identification information is displayed on a display screen of the second device, and the second image identification information carries second network information of the second device;
before the third device sends the first information to the second device, the method further includes:
the third device detects, in the environment picture, the second image identification information displayed on the display screen of the second device;
the third device identifies the second image identification information to obtain the second network information of the second device carried by it;
the third device establishes a second connection with the second device based on the second network information of the second device.
As another embodiment of the present application, the first image identification information comprises ripple information, and the second image identification information comprises ripple information.
As another embodiment of the present application, after the third device sends the first information to the second device, the method further includes:
the third device receives transmission information sent by the first device, wherein the transmission information comprises: information about the file currently being transmitted and the transmission progress;
and the third device displays the transmission information on its display screen.
As another embodiment of the application, when the third device is an augmented reality device, the first preset gesture, the second preset gesture and the third preset gesture are gestures made by the user's hand within the field of view of the camera of the augmented reality device;
when the third device is not an augmented reality device, the first preset gesture, the second preset gesture and the third preset gesture are touch gestures made by the user's hand on the display screen of the third device.
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an execution order; the execution order of the processes should be determined by their functions and internal logic, and does not constitute any limitation on the implementation of the embodiments of the present application.
In the embodiment of the present application, the third device (for example, the device C in the above embodiments) may be divided into functional modules according to the above method examples; for example, each functional module may correspond to one function, or two or more functions may be integrated into one processing module. The integrated module may be implemented in the form of hardware or in the form of a software functional module. It should be noted that the division of modules in the embodiment of the present application is schematic and is only one kind of logical function division; there may be other division manners in actual implementation. The following description takes the case of one functional module per function as an example:
Referring to fig. 16, the third device 1600 includes:
an environment picture display module 1601, configured to display, on the display screen of the third device, an environment picture captured by the camera of the third device;
a gesture detection module 1602, configured to detect, in the environment picture, a first preset gesture and the first device selected by the first preset gesture, wherein a display interface is shown on the display screen of the first device and at least one object is displayed in the display interface;
a floating picture display module 1603, configured to display, in response to the first preset gesture, a floating picture on the display screen of the third device, wherein the floating picture contains the objects in the display interface of the first device;
the gesture detection module 1602 is further configured to detect a second preset gesture in the environment picture and to detect, in the floating picture, the object to be transmitted selected by the second preset gesture;
the gesture detection module 1602 is further configured to detect, in the environment picture, a third preset gesture and the second device selected by the third preset gesture;
and a first information sending module 1604, configured to send, in response to the third preset gesture, first information to the second device, wherein the first information comprises the information of the object to be transmitted, the first information is used to instruct the second device to send an object acquisition request to the first device, and the object acquisition request is used to instruct the first device to send the object to be transmitted to the second device.
As another embodiment, the first information sending module 1604 is further configured to:
in response to the third preset gesture, send second information to the first device, wherein the second information indicates that selection of the object to be transmitted is complete and instructs the first device to send its network configuration information to the third device;
and receive the network configuration information of the first device sent by the first device and send first information to the second device, wherein the first information carries the information of the object to be transmitted and the network configuration information of the first device, and instructs the second device to establish a network connection with the first device based on that network configuration information.
As another example, the floating picture display module 1603 is further configured to:
in response to the first preset gesture, send a first request to the first device, wherein the first request is used to acquire information of the objects in the display interface of the first device, and the information of an object comprises at least one of the following: the icon of the object, the name of the object, and the coordinates of the object in the display interface of the first device;
acquire the information of the objects in the display interface of the first device and generate a floating picture based on that information;
and display the floating picture on the display screen of the third device.
As another example, the floating picture display module 1603 is further configured to:
in response to the first preset gesture, detect the coverage ratio between the hand making the first preset gesture in the environment picture and the display screen of the first device in the environment picture;
and when that coverage ratio is not within the preset range, send the first request to the first device.
As another embodiment, the gesture detection module 1602 is further configured to:
when the coverage ratio between the hand making the first preset gesture in the environment picture and the display screen of the first device in the environment picture is within the preset range, detect a second preset gesture in the environment picture and detect, in the environment picture, the object to be transmitted selected by the second preset gesture.
As another embodiment, the gesture detection module 1602 is further configured to:
identify the position or area selected by the second preset gesture in the environment picture;
and identify the information of the object to be transmitted corresponding to that position or area in the environment picture, wherein the information of the object to be transmitted comprises at least one of the following: an icon of the object, a name of the object, and a position of the object on the display screen of the first device.
As another embodiment, the gesture detection module 1602 is further configured to:
identify the position or area selected by the second preset gesture in the environment picture;
convert the position or area selected by the second preset gesture in the environment picture into the corresponding position or area in the floating picture, according to the positional relationship between the floating picture and the environment picture displayed on the display screen of the third device;
and obtain the information of the object to be transmitted corresponding to the position or area selected by the second preset gesture in the floating picture.
As another embodiment, first image identification information is displayed on a display screen of the first device, and the first image identification information carries first network information of the first device;
floating picture display module 1603 is further configured to:
before sending the first request to the first device, detect, in the environment picture, the first image identification information displayed on the display screen of the first device;
identify the first image identification information and acquire the first network information of the first device carried by it;
and establish a first connection with the first device based on the first network information of the first device.
As another embodiment, second image identification information is displayed on a display screen of the second device, and the second image identification information carries second network information of the second device;
the first information sending module 1604 is further configured to:
before sending the first information to the second device, detect, in the environment picture, the second image identification information displayed on the display screen of the second device;
identify the second image identification information and acquire the second network information of the second device carried by it;
and establish a second connection with the second device based on the second network information of the second device.
As another embodiment, the first image identification information comprises ripple information, and the second image identification information comprises ripple information.
As another embodiment, third device 1600 further includes:
a transmission progress display module, configured to receive, after the first information is sent to the second device, transmission information sent by the first device, wherein the transmission information comprises: information about the file currently being transmitted and the transmission progress;
and to display the transmission information on the display screen of the third device.
As another embodiment, when the third device is an augmented reality device, the first preset gesture, the second preset gesture and the third preset gesture are gestures made by the user's hand within the field of view of the camera of the augmented reality device;
when the third device is not an augmented reality device, the first preset gesture, the second preset gesture and the third preset gesture are touch gestures made by the user's hand on the display screen of the third device.
It should be noted that the information interaction and execution processes between the modules in the third device are based on the same concept as the method embodiments; for their specific functions and technical effects, reference may be made to the method embodiment section, and they are not repeated here.
It will be clear to those skilled in the art that, for convenience and simplicity of description, the above division of functional modules is merely an example; in practical applications, the functions may be allocated to different functional modules as needed, that is, the internal structure of the third device may be divided into different functional modules to perform all or part of the functions described above. The functional modules in the embodiments may be integrated into one processing module, each module may exist alone physically, or two or more modules may be integrated into one module, and the integrated module may be implemented in the form of hardware or of a software functional module. In addition, the specific names of the functional modules are only used to distinguish them from one another and do not limit the protection scope of the application. For the specific working process of the modules in the third device, reference may be made to the corresponding process in the foregoing method embodiments, which is not repeated here.
The embodiments of the present application further provide a computer-readable storage medium, where a computer program is stored, and when the computer program is executed by a processor, the steps in the above-mentioned method embodiments may be implemented.
Embodiments of the present application further provide a computer program product, which when run on a first device, enables the first device to implement the steps in the foregoing method embodiments.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, all or part of the processes in the methods of the embodiments described above may be implemented by a computer program, which may be stored in a computer-readable storage medium and, when executed by a processor, implements the steps of the above method embodiments. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, or some intermediate form. The computer-readable medium may include at least: any entity or apparatus capable of carrying the computer program code to the first device, a recording medium, a computer memory, a read-only memory (ROM), a random-access memory (RAM), an electrical carrier signal, a telecommunications signal, and a software distribution medium, for example a USB flash drive, a removable hard disk, a magnetic disk, or an optical disc. In certain jurisdictions, in accordance with legislation and patent practice, the computer-readable medium may not be an electrical carrier signal or a telecommunications signal.
An embodiment of the present application further provides a chip system, where the chip system includes a processor, the processor is coupled to the memory, and the processor executes a computer program stored in the memory to implement the steps of any of the method embodiments of the present application. The chip system may be a single chip or a chip module composed of a plurality of chips.
In the above embodiments, the description of each embodiment has its own emphasis, and reference may be made to the related description of other embodiments for parts that are not described or recited in any embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and method steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the technical solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The above embodiments are only used to illustrate the technical solutions of the present application, and not to limit the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (28)

1. A data transmission method, applied to a system including a first device, a second device, and a third device, the method comprising:
the third device displays, on a display screen of the third device, an environment picture captured by a camera of the third device;
the third device detects, in the environment picture, a first preset gesture and the first device selected by the first preset gesture, wherein a display interface is displayed on a display screen of the first device, at least one object is displayed in the display interface, and the objects comprise files and folders;
in response to the first preset gesture, the third device displays a floating picture on a display screen of the third device, wherein the floating picture contains the objects in the display interface of the first device;
the third device detects a second preset gesture in the environment picture and detects, in the floating picture, the object to be transmitted selected by the second preset gesture;
the third device detects, in the environment picture, a third preset gesture and the second device selected by the third preset gesture;
in response to the third preset gesture, the third device sends first information to the second device, wherein the first information comprises information of the object to be transmitted;
after receiving the first information, the second device sends an object acquisition request to the first device, wherein the object acquisition request carries the information of the object to be transmitted and is used for instructing the first device to send the object to be transmitted to the second device;
and after receiving the object acquisition request, the first device sends the object to be transmitted to the second device.
2. The method of claim 1, wherein said sending, by the third device, first information to the second device in response to the third preset gesture comprises:
in response to the third preset gesture, the third device sends second information to the first device, wherein the second information is used for indicating that selection of the object to be transmitted is complete and for instructing the first device to send network configuration information of the first device to the third device;
the third device receives the network configuration information of the first device sent by the first device, and sends the first information to the second device, wherein the first information carries the information of the object to be transmitted and the network configuration information of the first device;
before the second device sends an object obtaining request to the first device, the method further includes:
and the second device receives the first information sent by the third device and establishes a network connection with the first device based on the network configuration information of the first device carried in the first information.
3. The method of claim 2, wherein the third device displaying a floating screen on a display screen of the third device in response to the first preset gesture comprises:
in response to the first preset gesture, the third device sends a first request to the first device, wherein the first request is used for acquiring information of the objects in the display interface of the first device, and the information of an object in the display interface of the first device comprises at least one of the following: an icon of the object, a name of the object, and coordinates of the object in the display interface of the first device;
the third device acquires the information of the objects in the display interface of the first device and generates a floating picture based on that information;
and the third device displays the floating picture on the display screen of the third device.
4. The method of claim 3, wherein said sending, by the third device, a first request to the first device in response to the first preset gesture comprises:
in response to the first preset gesture, the third device detects the coverage ratio between the hand making the first preset gesture in the environment picture and the display screen of the first device in the environment picture;
and when the coverage ratio is not within a preset range, the third device sends a first request to the first device.
5. The method of claim 4, wherein after the third device detects the coverage ratio between the hand making the first preset gesture in the environment picture and the display screen of the first device in the environment picture, the method further comprises:
when the coverage ratio is within the preset range, the third device detects a second preset gesture in the environment picture and detects, in the environment picture, the object to be transmitted selected by the second preset gesture.
6. The method of claim 5, wherein the third device detecting the object to be transmitted selected in the environmental screen by the second preset gesture in the environmental screen comprises:
the third device identifies the position or area selected by the second preset gesture in the environment picture;
the third device identifies information of the object to be transmitted corresponding to the position or area in the environment picture, wherein the information of the object to be transmitted comprises at least one of the following: an icon of the object, a name of the object, and a position of the object on the display screen of the first device.
7. The method of any of claims 3 to 6, wherein the third device detecting the object to be transmitted selected in the floating picture by the second preset gesture in the floating picture comprises:
the third device identifies a position or an area selected by the second preset gesture in the environment picture;
the third device converts the position or the area selected by the second preset gesture in the environment picture into the position or the area selected by the second preset gesture in the floating picture according to the position relation between the floating picture and the environment picture displayed on the display screen of the third device;
and the third device obtains the information of the object to be transmitted corresponding to the position or the area selected in the floating picture by the second preset gesture.
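Claims 7 and 20 convert a selection made in the environment picture into floating-picture coordinates using the position relation between the two pictures. A minimal sketch of that conversion, assuming both pictures are axis-aligned rectangles and the floating picture's placement within the environment picture is known (all identifiers are illustrative, not part of the claims):

```python
def env_to_floating(point, floating_rect, floating_size):
    """Convert a point selected in the environment picture into
    floating-picture coordinates.

    point:         (x, y) in environment-picture pixels
    floating_rect: (left, top, width, height) of the floating picture
                   as displayed inside the environment picture
    floating_size: (width, height) of the floating picture itself
    """
    x, y = point
    left, top, w, h = floating_rect
    fw, fh = floating_size
    if not (left <= x <= left + w and top <= y <= top + h):
        return None  # selection falls outside the floating picture
    # Scale from the displayed rectangle into the floating picture's own pixels.
    return ((x - left) * fw / w, (y - top) * fh / h)
```

The returned coordinates can then be matched against object positions in the floating picture to obtain the information of the object to be transmitted.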
8. The method of any one of claims 3 to 7, wherein first image identification information is displayed on a display screen of the first device, the first image identification information carrying first network information of the first device;
before the third device sends the first request to the first device, the method further includes:
the third device detects first image identification information displayed on a display screen of the first device in the environment picture;
the third device identifies the first image identification information to obtain first network information of the first device carried by the first image identification information;
the third device establishes a first connection with the first device based on the first network information of the first device.
9. The method of claim 8, wherein second image identification information is displayed on a display screen of the second device, the second image identification information carrying second network information of the second device;
before the third device sends the first information to the second device, the method further includes:
the third device detects second image identification information displayed on a display screen of the second device in the environment picture;
the third device identifies the second image identification information to obtain second network information of the second device carried by the second image identification information;
the third device establishes a second connection with the second device based on second network information of the second device.
10. The method of claim 9, wherein the first image identification information includes ripple information and the second image identification information includes ripple information.
11. The method according to any of claims 2 to 10, characterized in that the second information comprises information of the object to be transmitted,
after receiving the object acquisition request, the first device sends the object to be transmitted to the second device, including:
after the first device receives the object acquisition request, the first device sends the object to be transmitted to the second device only after verifying that the information of the object to be transmitted in the object acquisition request matches the information of the object to be transmitted in the second information sent by the third device.
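Claim 11 has the first device release the object only when the object information in the acquisition request agrees with the second information it received from the third device. A hedged sketch of such a check (the dictionary layout and field names are assumptions, not from the patent):

```python
def request_matches_selection(acquisition_request, second_info):
    """Return True only if the object the second device asks for is the
    same object the third device reported as selected. Comparing both
    the name and the on-screen position guards against a stale or
    mistaken request referencing a different object."""
    requested = acquisition_request.get("object", {})
    selected = second_info.get("object", {})
    return (requested.get("name") == selected.get("name")
            and requested.get("position") == selected.get("position"))
```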
12. The method of any of claims 1 to 11, wherein when the first device sends the object to be transmitted to the second device, the method further comprises:
the first device sends transmission information to the third device, wherein the transmission information comprises: information and transmission progress of a file currently being transmitted;
and after the third device receives the transmission information, the third device displays the transmission information through a display screen of the third device.
13. The method of any one of claims 1 to 12, wherein when the third device is an augmented reality device, the first preset gesture, the second preset gesture, and the third preset gesture are gestures of a user's hand within a viewing angle of a camera of the augmented reality device;
when the third device is not the augmented reality device, the first preset gesture, the second preset gesture and the third preset gesture are touch gestures of the hand of the user on the display screen of the third device.
14. A method of data transmission, comprising:
the third device displays, through a display screen of the third device, an environment picture acquired by a camera of the third device;
the third device detects a first preset gesture and a first device selected by the first preset gesture in the environment picture, wherein a display interface is displayed on a display screen of the first device, at least one object is displayed in the display interface, and the object comprises a file and a folder;
in response to the first preset gesture, the third device displays a floating picture on a display screen of the third device, wherein an object in a display interface of the first device is presented in the floating picture;
the third device detects a second preset gesture in the environment picture and detects an object to be transmitted, which is selected in the floating picture by the second preset gesture, in the floating picture;
the third device detects a third preset gesture and a second device selected by the third preset gesture in the environment picture;
in response to the third preset gesture, the third device sends first information to the second device, wherein the first information comprises information of the object to be transmitted, the first information is used for indicating the second device to send an object acquisition request to the first device, and the object acquisition request is used for indicating the first device to send the object to be transmitted to the second device.
15. The method of claim 14, wherein said sending, by the third device, first information to the second device in response to the third preset gesture comprises:
in response to the third preset gesture, the third device sends second information to the first device, wherein the second information is used for indicating that selection of the object to be transmitted is complete, and the second information is used for indicating the first device to send network configuration information of the first device to the third device;
the third device receives the network configuration information of the first device sent by the first device; and the third device sends first information to the second device, wherein the first information carries the information of the object to be transmitted and the network configuration information of the first device, and the first information is used for indicating the second device to establish a network connection with the first device based on the network configuration information of the first device carried in the first information.
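Claim 15 describes the "first information" as a single payload carrying both the selected object's metadata and the first device's network configuration, so the second device can connect to the first device and name the object it wants. A sketch of assembling such a payload (the JSON layout and field names are illustrative assumptions):

```python
import json

def build_first_information(object_info, first_device_net_config):
    """Assemble the 'first information' of claim 15: the selected
    object's metadata plus the first device's network configuration.
    The concrete wire format is not specified by the patent; JSON is
    used here purely for illustration."""
    return json.dumps({
        "type": "first_information",
        "object": object_info,                     # e.g. icon, name, coordinates
        "network_config": first_device_net_config  # e.g. address and port
    })
```

On receipt, the second device would parse this payload, open a connection using `network_config`, and issue its object acquisition request for `object`.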
16. The method of claim 14 or 15, wherein the third device displaying a floating screen on a display screen of the third device in response to the first preset gesture comprises:
in response to the first preset gesture, the third device sends a first request to the first device, where the first request is used to acquire information of an object in the display interface of the first device, and the information of the object in the display interface of the first device includes at least one of the following items: the icon of the object, the name of the object and the coordinates of the object in the display interface of the first device;
the third device acquires the information of the object in the display interface of the first device, and generates a floating picture based on the information of the object in the display interface of the first device;
and the third device displays the floating picture on a display screen of the third device.
17. The method of claim 16, wherein said sending, by the third device, a first request to the first device in response to the first preset gesture comprises:
in response to the first preset gesture, the third device detects the coverage ratio of a hand in the first preset gesture in the environment picture to a display screen of the first device in the environment picture;
and when the coverage ratio of the hand in the first preset gesture to the display screen of the first device in the environment picture is not within a preset range, the third device sends a first request to the first device.
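Claims 4 and 17 gate the first request on the coverage ratio between the hand in the first preset gesture and the first device's display screen as seen in the environment picture. One plausible reading is the fraction of the detected screen region overlapped by the hand's bounding box; a sketch under that assumption (box conventions and names are illustrative):

```python
def coverage_ratio(hand_box, screen_box):
    """Fraction of the first device's display screen (as detected in the
    environment picture) covered by the hand's bounding box. Boxes are
    (left, top, right, bottom) in environment-picture pixels."""
    hl, ht, hr, hb = hand_box
    sl, st, sr, sb = screen_box
    # Width and height of the rectangle where the two boxes intersect.
    overlap_w = max(0, min(hr, sr) - max(hl, sl))
    overlap_h = max(0, min(hb, sb) - max(ht, st))
    screen_area = (sr - sl) * (sb - st)
    return (overlap_w * overlap_h) / screen_area if screen_area else 0.0
```

Per the claims, a ratio outside the preset range would trigger the first request, while a ratio inside it would instead route to second-gesture detection in the environment picture.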
18. The method of claim 17, wherein after the third device detects the coverage ratio of a hand in the first preset gesture in the environment picture to a display screen of the first device in the environment picture, the method further comprises:
when the coverage ratio of a hand in the first preset gesture to a display screen of the first device in the environment picture is within a preset range, the third device detects a second preset gesture and an object to be transmitted selected in the environment picture by the second preset gesture in the environment picture.
19. The method of claim 18, wherein the third device detecting the object to be transmitted selected in the environment picture by the second preset gesture in the environment picture comprises:
the third device identifies a position or an area selected by the second preset gesture in the environment picture;
the third device identifies information of an object to be transmitted corresponding to the position or the area in the environment picture, wherein the information of the object to be transmitted includes at least one of the following items: an icon of an object, a name of an object, and a location of an object on a display screen of the first device.
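Claims 6 and 19 resolve the position selected by the second preset gesture to a particular object's information. A minimal hit-test sketch, assuming the region each object's icon occupies on the first device's display screen (as seen in the environment picture) is already known (all names are illustrative):

```python
def object_at(position, objects):
    """Resolve the position selected by the second preset gesture to the
    object whose icon region contains it.

    position: (x, y) in environment-picture pixels
    objects:  mapping from object name to its (left, top, right, bottom)
              icon region in the same coordinate space
    """
    x, y = position
    for name, (left, top, right, bottom) in objects.items():
        if left <= x <= right and top <= y <= bottom:
            return name
    return None  # the gesture did not land on any known object
```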
20. The method of any of claims 16 to 19, wherein the third device detecting the object to be transmitted selected in the floating picture by the second preset gesture in the floating picture comprises:
the third device identifies a position or an area selected by the second preset gesture in the environment picture;
the third device converts the position or the area selected by the second preset gesture in the environment picture into the position or the area selected by the second preset gesture in the floating picture according to the position relation between the floating picture and the environment picture displayed on the display screen of the third device;
and the third device obtains the information of the object to be transmitted corresponding to the position or the area selected in the floating picture by the second preset gesture.
21. The method of any one of claims 16 to 20, wherein a display screen of the first device displays first image identification information, the first image identification information carrying first network information of the first device;
before the third device sends the first request to the first device, the method further includes:
the third device detects first image identification information displayed on a display screen of the first device in the environment picture;
the third device identifies the first image identification information to obtain first network information of the first device carried by the first image identification information;
the third device establishes a first connection with the first device based on the first network information of the first device.
22. The method of claim 21, wherein a display screen of the second device displays second image identification information, the second image identification information carrying second network information of the second device;
before the third device sends the first information to the second device, the method further includes:
the third device detects second image identification information displayed on a display screen of the second device in the environment picture;
the third device identifies the second image identification information to obtain second network information of the second device carried by the second image identification information;
the third device establishes a second connection with the second device based on second network information of the second device.
23. The method of claim 22, wherein the first image identification information includes ripple information and the second image identification information includes ripple information.
24. The method of any of claims 14 to 23, further comprising, after the third device sends the first information to the second device:
the third device receives transmission information sent by the first device, wherein the transmission information comprises: information and transmission progress of a file currently being transmitted;
and the third device displays the transmission information through a display screen of the third device.
25. The method of any one of claims 14 to 24, wherein when the third device is an augmented reality device, the first preset gesture, the second preset gesture, and the third preset gesture are gestures of a user's hand within a viewing angle of a camera of the augmented reality device;
when the third device is not the augmented reality device, the first preset gesture, the second preset gesture and the third preset gesture are touch gestures of the hand of the user on the display screen of the third device.
26. An electronic device, characterized in that the electronic device comprises a processor configured to execute a computer program stored in a memory to implement the method according to any one of claims 14 to 25.
27. A data transmission system comprising a first device, a second device and a third device, wherein the third device is an electronic device as claimed in claim 26;
the second device is configured to send an object acquisition request to the first device after receiving the first information, where the object acquisition request carries information of the object to be transmitted, and the object acquisition request is used to instruct the first device to send the object to be transmitted to the second device;
and the first device is configured to send the object to be transmitted to the second device after receiving the object acquisition request.
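The system of claim 27 chains three roles: the third device names the selected object to the second device, the second device turns that into an acquisition request, and the first device serves the object. A sketch of that flow with in-process calls standing in for the network hops (class and method names are illustrative, not from the patent):

```python
class FirstDevice:
    """Holds the objects and releases one on a valid acquisition request."""
    def __init__(self, objects):
        self.objects = objects  # name -> object contents

    def handle_acquisition_request(self, object_name):
        return self.objects.get(object_name)  # None if unknown

class SecondDevice:
    """Turns received 'first information' into an object acquisition request."""
    def receive_first_information(self, first_device, first_info):
        return first_device.handle_acquisition_request(first_info["object"])

class ThirdDevice:
    """Sends the first information naming the selected object."""
    def send_first_information(self, second_device, first_device, object_name):
        return second_device.receive_first_information(
            first_device, {"object": object_name})
```

In the claimed system the connections between devices would be real network links (established via the network configuration information of claims 15 and 21–22); the call chain here only illustrates the ordering of the three messages.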
28. A computer-readable storage medium storing a computer program which, when executed by a processor, implements the method of any one of claims 14 to 25.
CN202011368862.1A 2020-11-27 2020-11-27 Data transmission method, electronic equipment, system and storage medium Pending CN114564162A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011368862.1A CN114564162A (en) 2020-11-27 2020-11-27 Data transmission method, electronic equipment, system and storage medium

Publications (1)

Publication Number Publication Date
CN114564162A true CN114564162A (en) 2022-05-31

Family

ID=81711457

Country Status (1)

Country Link
CN (1) CN114564162A (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160378294A1 (en) * 2015-06-24 2016-12-29 Shawn Crispin Wright Contextual cursor display based on hand tracking
CN107870722A (en) * 2017-09-27 2018-04-03 努比亚技术有限公司 Document transmission method, mobile terminal and the computer-readable recording medium of terminal room
US20180196510A1 (en) * 2017-01-10 2018-07-12 International Business Machines Corporation Method of instant sharing invoked from wearable devices

Similar Documents

Publication Publication Date Title
US11785329B2 (en) Camera switching method for terminal, and terminal
JP7391102B2 (en) Gesture processing methods and devices
CN109766066B (en) Message processing method, related device and system
CN110244893B (en) Operation method for split screen display and electronic equipment
CN115866121B (en) Application interface interaction method, electronic device and computer readable storage medium
CN111443884A (en) Screen projection method and device and electronic equipment
CN110764673A (en) Method for scrolling screen capture and electronic equipment
CN110633043A (en) Split screen processing method and terminal equipment
CN111466112A (en) Image shooting method and electronic equipment
CN111742539B (en) Voice control command generation method and terminal
CN112671976A (en) Control method of electronic equipment and electronic equipment
CN112383664B (en) Device control method, first terminal device, second terminal device and computer readable storage medium
CN110559645B (en) Application operation method and electronic equipment
CN113934330A (en) Screen capturing method and electronic equipment
CN113746961A (en) Display control method, electronic device, and computer-readable storage medium
CN113010076A (en) Display element display method and electronic equipment
WO2020221062A1 (en) Navigation operation method and electronic device
CN111492678B (en) File transmission method and electronic equipment
WO2022007707A1 (en) Home device control method, terminal device, and computer-readable storage medium
CN115147451A (en) Target tracking method and device thereof
CN113391775A (en) Man-machine interaction method and equipment
CN110942426B (en) Image processing method, device, computer equipment and storage medium
CN115393676A (en) Gesture control optimization method and device, terminal and storage medium
CN114564162A (en) Data transmission method, electronic equipment, system and storage medium
CN113821129A (en) Display window control method and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination