WO2022100219A1 - Data transfer method and related apparatus - Google Patents

Data transfer method and related apparatus

Info

Publication number
WO2022100219A1
Authority
WO
WIPO (PCT)
Prior art keywords
target
devices
source device
gesture
image data
Prior art date
Application number
PCT/CN2021/115822
Other languages
English (en)
French (fr)
Inventor
杨俊拯
万宏
何�轩
钟卫东
Original Assignee
Guangdong OPPO Mobile Telecommunications Corp., Ltd.
Priority date
Filing date
Publication date
Application filed by Guangdong OPPO Mobile Telecommunications Corp., Ltd.
Publication of WO2022100219A1 publication Critical patent/WO2022100219A1/zh

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 65/00: Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L 65/60: Network streaming of media packets
    • H04L 65/75: Media network packet handling
    • H04L 65/764: Media network packet handling at the destination
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/80: Camera processing pipelines; Components thereof

Definitions

  • the present application relates to the field of communication technologies, and in particular, to a data transfer method and related apparatus.
  • the current solutions include using Ultra Wide Band (UWB) sensors or Near Field Communication (NFC) to transfer streaming media information between devices.
  • NFC tags can be installed on both the source device and the target device; after the NFC tags of the source device and the target device are brought into close contact, the screen played on the source device can be transferred to the target device for playback. Alternatively, UWB sensors can be installed on the source device and the target device, and communication between the UWB sensors is used to find each other's position, so that the source device can cast its screen to the target device.
  • the embodiments of the present application provide a data transfer method and a related device, which can automatically determine the source device and the target device from a long distance in combination with image recognition, which greatly improves the user experience.
  • an embodiment of the present application provides a data transfer method, the method includes:
  • the camera modules of the N devices are controlled to collect the first image data, where N is a natural number greater than or equal to 2;
  • the source device is controlled to transfer target content to the target device.
  • an embodiment of the present application provides a data transfer method, the method comprising:
  • the source device outputs the target content
  • the source device is a sending device of the target content determined by the identification device according to the first data generated by N devices
  • N is a natural number greater than or equal to 2
  • the N devices include the identification device, the source device, and the target device, any two of the N devices can communicate with each other, and the identification device is any one of the N devices;
  • the source device sends the target content to the target device, where the target device is a receiving device of the target content determined by the identification device according to the second data generated by the N devices.
  • an embodiment of the present application provides a data transfer system
  • the data transfer system includes N devices, N is a natural number greater than or equal to 2, and any two of the N devices can communicate with each other; the N devices include a trigger device, an identification device, a source device, and a target device, where the trigger device is any one of the N devices, the identification device is any one of the N devices, the source device is any one of the N devices except the target device, and the target device is any one of the N devices except the source device;
  • the triggering device configured to turn on the camera modules of the N devices according to the corresponding triggering instruction
  • the identifying device configured to determine the source device according to the first data generated by the N devices, and determine the target device according to the second data generated by the N devices;
  • the source device is used for transferring the target content to the target device.
  • embodiments of the present application provide an electronic device, including an application processor, a memory, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the application processor, and the programs include instructions for performing the steps in the method according to any one of the first aspect of the embodiments of the present application.
  • an embodiment of the present application provides a computer storage medium, where the computer storage medium stores a computer program, the computer program includes program instructions, and the program instructions, when executed by a processor, cause the processor to execute The method according to any one of the first aspects of the embodiments of the present application.
  • an embodiment of the present application provides a computer program product, where the computer program product includes a non-transitory computer-readable storage medium storing a computer program, and the computer program is operable to cause a computer to execute some or all of the steps described in any method of the first aspect of the embodiments of the present application.
  • the computer program product may be a software installation package.
  • FIG. 1A is a schematic diagram of a system architecture of a data transfer method provided by an embodiment of the present application.
  • FIG. 1B is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
  • FIG. 1C is a schematic structural diagram of another electronic device provided by an embodiment of the present application.
  • FIG. 2A is a schematic diagram of a triggering manner provided by an embodiment of the present application.
  • FIG. 2B is a schematic diagram of another triggering manner provided by an embodiment of the present application.
  • FIG. 2C is a schematic diagram of a first gesture for determining a source device according to an embodiment of the present application.
  • FIG. 2D is a schematic flowchart of determining a source device according to an embodiment of the present application.
  • FIG. 2E is a schematic diagram of a second gesture for determining a target device according to an embodiment of the present application.
  • FIG. 2F is a schematic flowchart of determining a target device according to an embodiment of the present application.
  • FIG. 2G is a schematic interface diagram of a data transfer mode provided by an embodiment of the present application.
  • FIG. 2H is a schematic interface diagram of another data transfer mode provided by an embodiment of the present application.
  • FIG. 3 is a schematic flowchart of a data transfer method according to an embodiment of the present application.
  • FIG. 5 is a block diagram of functional units of a data transfer apparatus provided by an embodiment of the present application.
  • FIG. 6 is a block diagram of functional units of another data transfer apparatus according to an embodiment of the present application.
  • FIG. 1A is a system architecture diagram of a data transfer method provided by an embodiment of the present application.
  • the system architecture in the embodiment of the present application includes a device group 110 and a device management system 120 , wherein the above-mentioned device group 110 includes N devices, where N is a natural number greater than or equal to 2, any two devices in the device group 110 can communicate with each other, and the device management system 120 may be a cloud system for managing all device information under the same user ID.
  • the device group 110 and the device management system 120 can communicate with each other, and any device in the device group 110 can obtain a list of all devices in the group and their specific IP addresses from the device management system.
  • the devices in the user's home can belong to the same user account; when the devices in the device group 110 need to communicate with each other, any device in the device group 110 can obtain the IP address of the corresponding device through the device management system 120 for communication.
  • the specific communication mechanism may be that the device group 110 forms the same local area network, or that the device group 110 is connected to the same Wi-Fi, which is not specifically limited here.
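As an illustration, the device-list and IP-address lookup described above can be sketched as follows. The class and method names (DeviceManagementSystem, register, device_list, lookup) are hypothetical, not part of the application; a real cloud registry would add authentication and persistence.

```python
class DeviceManagementSystem:
    """Cloud-side registry keyed by user ID, mapping device names to IP
    addresses, as described for the device management system 120.
    All names here are illustrative assumptions."""

    def __init__(self):
        self._registry = {}  # user_id -> {device_name: ip_address}

    def register(self, user_id, device_name, ip_address):
        """Record a device under the given user account."""
        self._registry.setdefault(user_id, {})[device_name] = ip_address

    def device_list(self, user_id):
        """Return all devices registered under the same user ID."""
        return dict(self._registry.get(user_id, {}))

    def lookup(self, user_id, device_name):
        """Return the IP address a peer device should connect to."""
        return self._registry.get(user_id, {}).get(device_name)


dms = DeviceManagementSystem()
dms.register("user-1", "phone", "192.168.1.10")
dms.register("user-1", "tv", "192.168.1.20")
print(dms.lookup("user-1", "tv"))  # 192.168.1.20
```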
  • the above-mentioned device group 110 may include a trigger device 111 , an identification device 112 , a source device 113 and a target device 114 .
  • the triggering device 111 may be any one of the N devices
  • the identifying device 112 may be any one of the N devices
  • the source device 113 may be any one of the N devices except the target device 114, and the target device 114 may be any one of the N devices except the source device 113.
  • the above trigger device 111 can simultaneously serve as the identification device 112, as the source device 113, or as the target device 114, or as both the identification device 112 and the source device 113, or as both the identification device 112 and the target device 114; alternatively, the trigger device 111 is any one of the N devices other than the identification device 112, the source device 113, and the target device 114.
  • any device in the device group can be used as the trigger device 111 and the identification device 112.
  • "Trigger" and "identify" are merely functional distinctions made to avoid confusion and do not mean that the two roles must be performed by different devices.
  • the identification device 112 may be the device with the strongest computing power in the device group 110 .
  • the above trigger device 111 may turn on its own camera module after receiving the trigger instruction from the user, and send a first camera instruction to the N-1 devices other than itself in the device group 110, where the first camera instruction causes the N-1 devices to turn on their respective camera modules and start collecting images.
  • the identification device 112 can determine the source device 113 according to the first data generated by the N devices, and determine the target device 114 according to the second data generated by the N devices, where the first data may be the captured first image data or the first gesture confidence, and the second data may be the captured second image data or the second gesture confidence.
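The selection logic described above can be sketched as follows: each device reports a confidence that its camera saw the first gesture (to pick the source) and later the second gesture (to pick the target), and the identification device takes the highest-confidence device. The threshold value, gesture names, and function names are assumptions for illustration, not details from the application.

```python
GESTURE_THRESHOLD = 0.6  # assumed minimum confidence for a valid detection

def pick_device(confidences):
    """confidences: {device_id: gesture confidence in [0, 1]}.

    Return the device with the highest confidence, or None if no device
    exceeds the threshold (i.e., no device saw the gesture clearly)."""
    device_id, best = max(confidences.items(), key=lambda kv: kv[1])
    return device_id if best >= GESTURE_THRESHOLD else None

# Illustrative first/second gesture confidences reported by three devices.
first = {"phone": 0.92, "tv": 0.10, "tablet": 0.30}
second = {"phone": 0.05, "tv": 0.88, "tablet": 0.15}

source = pick_device(first)    # "phone" becomes the source device
target = pick_device(second)   # "tv" becomes the target device
```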
  • the above-mentioned source device 113 may be used to transfer the target content to the target device 114 .
  • the source device 113 sends a second camera instruction to the N-1 devices other than itself, where the second camera instruction enables the N-1 devices to use their respective camera modules to start collecting images for the second time.
  • the above-mentioned target device 114 can be used to receive and display target content from the source device 113.
  • the target content can be streaming media information, including but not limited to any one or any combination of images, text, video, and audio.
  • the electronic device in the embodiments of the present application will be described below. It can be understood that the structures illustrated in the embodiments of the present application do not constitute a specific limitation on the electronic device. In other embodiments of the present application, the electronic device may include more or less components than shown, or combine some components, or separate some components, or arrange different components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
  • N electronic devices may form a device group 110, and the electronic devices may include various handheld devices with wireless communication functions, vehicle-mounted devices, wearable devices, computing devices, or other processing devices connected to wireless modems, as well as various forms of user equipment (UE), mobile stations (MS), terminal devices, and so on.
  • Any electronic device 200 in this embodiment of the present application may include: a camera module 210 , a trigger sensor 220 , a circulation system 230 and a redirection system 240 .
  • the above-mentioned circulation system 230 includes a gesture recognition unit 231, a trigger recognition unit 232, a control unit 233 and a special effect unit 234.
  • the above-mentioned gesture recognition unit 231 is used to receive the image data from the camera module 210, perform gesture recognition, and determine the source device 113 and the target device 114 according to the gesture;
  • the above-mentioned trigger recognition unit 232 is used to receive the trigger instruction from the trigger sensor 220 and to confirm whether the trigger instruction complies with the preset trigger condition; when the trigger condition is met, the control unit 233 is notified to send an instruction to the other electronic devices 200 to enable them to turn on their camera modules 210;
  • the control unit 233 can also be used to control the special effect unit 234 to generate special effects,
  • the special effect represents data transfer from the source device to the target device.
  • the special effect unit will only generate the relevant transfer special effect when the electronic device 200 is used as the source device 113 or the target device 114 .
  • the above-mentioned circulation system 230 is mainly used to determine whether data transfer is required, to determine the source device 113 and the target device 114, and to generate the relevant data transfer effects.
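The trigger path of the circulation system can be sketched roughly as follows: the trigger recognition unit checks a sensor event against a preset condition and, on a match, the control unit sends the first camera instruction to every peer. The trigger condition, message format, and function names are illustrative assumptions.

```python
TRIGGER_CONDITION = {"type": "shake", "min_strength": 2.0}  # assumed condition

def trigger_matches(event, condition=TRIGGER_CONDITION):
    """Trigger recognition unit: does the sensor event satisfy the
    preset trigger condition?"""
    return (event.get("type") == condition["type"]
            and event.get("strength", 0.0) >= condition["min_strength"])

def on_trigger(event, peers, send):
    """Control unit: on a valid trigger, instruct every peer device to
    open its camera module (the first camera instruction).
    Returns True if the instruction was broadcast."""
    if not trigger_matches(event):
        return False
    for peer in peers:
        send(peer, {"cmd": "open_camera"})
    return True

# Usage: record what would be sent over the network.
sent = []
on_trigger({"type": "shake", "strength": 3.1},
           peers=["tv", "tablet"],
           send=lambda peer, msg: sent.append((peer, msg["cmd"])))
# sent == [("tv", "open_camera"), ("tablet", "open_camera")]
```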
  • the aforementioned camera module 210 may include a camera array, which is not specifically limited herein.
  • the above trigger sensor 220 may include a gravity sensor, an optical sensor, an acoustic sensor, etc., which are not specifically limited herein.
  • the redirection system 240 described above is mainly used to control the specific implementation of data transfer, such as screen projection, after the circulation system 230 determines the source device 113 and the target device 114.
  • FIG. 1C is a schematic structural diagram of another electronic device provided in the embodiments of the present application.
  • the electronic device 300 may include a processor 310 (e.g., a system on chip, SoC), an external memory interface 320, an internal memory 321, a universal serial bus (USB) interface 330, a charging management module 340, a power management module 341, a battery 342, an antenna 1, an antenna 2, a mobile communication module 350, a wireless communication module 360, an audio module 370, a speaker 370A, a receiver 370B, a microphone 370C, a sensor module 380, a key 390, a motor 391, an indicator 392, a camera 393, a display screen 394, a subscriber identification module (SIM) card interface 395, and so on.
  • the sensor module 380 may include a pressure sensor 380A, a gyroscope sensor 380B, an air pressure sensor 380C, a magnetic sensor 380D, an acceleration sensor 380E, a distance sensor 380F, a proximity light sensor 380G, a fingerprint sensor 380H, a temperature sensor 380J, a touch sensor 380K, an ambient light sensor 380L, and the like.
  • the wireless communication function of the electronic device 300 can be implemented by the antenna 1, the antenna 2, the mobile communication module 350, the wireless communication module 360, the modulation and demodulation processor, the baseband processor, and the like.
  • Antenna 1 and Antenna 2 are used to transmit and receive electromagnetic wave signals.
  • Each antenna in electronic device 300 may be used to cover a single or multiple communication frequency bands. Different antennas can also be reused to improve antenna utilization.
  • the antenna 1 can be multiplexed as a diversity antenna of the wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
  • the mobile communication module 350 may provide a wireless communication solution including 2G/3G/4G/5G/6G, etc. applied on the electronic device 300 .
  • the mobile communication module 350 may include at least one filter, switch, power amplifier, low noise amplifier (LNA), and the like.
  • the mobile communication module 350 can receive electromagnetic waves from the antenna 1, filter and amplify the received electromagnetic waves, and transmit them to the modulation and demodulation processor for demodulation.
  • the mobile communication module 350 can also amplify the signal modulated by the modulation and demodulation processor, and then convert it into electromagnetic waves for radiation through the antenna 1 .
  • at least part of the functional modules of the mobile communication module 350 may be provided in the processor 340 .
  • at least part of the functional modules of the mobile communication module 350 may be provided in the same device as at least part of the modules of the processor 340 .
  • the modem processor may include a modulator and a demodulator.
  • the modulator is used to modulate the low frequency baseband signal to be sent into a medium and high frequency signal.
  • the demodulator is used to demodulate the received electromagnetic wave signal into a low frequency baseband signal. Then the demodulator transmits the demodulated low-frequency baseband signal to the baseband processor for processing.
  • the low frequency baseband signal is processed by the baseband processor and passed to the application processor.
  • the application processor outputs sound signals through audio devices (not limited to the speaker 370A, the receiver 370B, etc.), or displays images or videos through the display screen 394 .
  • the modem processor may be a stand-alone device.
  • the modulation and demodulation processor may be independent of the processor 340, and may be provided in the same device as the mobile communication module 350 or other functional modules.
  • the wireless communication module 360 can provide wireless communication solutions applied on the electronic device 300, including wireless local area network (WLAN) (such as wireless fidelity, Wi-Fi, networks), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), and infrared (IR) technology.
  • the wireless communication module 360 may be one or more devices integrating at least one communication processing module.
  • the wireless communication module 360 receives electromagnetic waves via the antenna 2 , frequency modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 340 .
  • the wireless communication module 360 can also receive the signal to be sent from the processor 340 , perform frequency modulation on it, amplify it, and convert it into electromagnetic waves for radiation through the antenna 2 .
  • the antenna 1 of the electronic device 300 is coupled with the mobile communication module 350, and the antenna 2 is coupled with the wireless communication module 360, so that the electronic device 300 can communicate with the network and other devices through wireless communication technology.
  • the wireless communication technologies may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time division-synchronous code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technology, etc.
  • the GNSS may include the global positioning system (GPS), the global navigation satellite system (GLONASS), the BeiDou navigation satellite system (BDS), the quasi-zenith satellite system (QZSS), and/or satellite based augmentation systems (SBAS).
  • the charging management module 340 is used to receive charging input from the charger.
  • the charger may be a wireless charger or a wired charger.
  • the charging management module 340 may receive charging input from the wired charger through the USB interface 330 .
  • the charging management module 340 may receive wireless charging input through the wireless charging coil of the electronic device 300 . While the charging management module 340 is charging the battery 342 , it can also supply power to the electronic device through the power management module 341 .
  • the power management module 341 is used to connect the battery 342 , the charging management module 340 and the processor 340 .
  • the power management module 341 receives input from the battery 342 and/or the charge management module 340, and supplies power to the processor 340, the internal memory 321, the external memory, the display screen 394, the camera 393, and the wireless communication module 360.
  • the power management module 341 can also be used to monitor battery capacity, battery cycle times, battery health status (leakage, impedance) and other parameters.
  • the power management module 341 may also be provided in the processor 340 .
  • the power management module 341 and the charging management module 340 may also be provided in the same device.
  • the electronic device 300 implements a display function through a GPU, a display screen 394, an application processor, and the like.
  • the GPU is a microprocessor for image processing, and is connected to the display screen 394 and the application processor.
  • the GPU is used to perform mathematical and geometric calculations for graphics rendering.
  • Processor 340 may include one or more GPUs that execute program instructions to generate or alter display information.
  • Display screen 394 is used to display images, videos, and the like.
  • Display screen 394 includes a display panel.
  • the display panel can be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a MiniLED, a MicroLED, a Micro-OLED, a quantum dot light-emitting diode (QLED), and so on.
  • the electronic device 300 may include 1 or N display screens 394 , where N is a natural number greater than 1.
  • the display screen 394 may be used to display red dots or numbered red dots on the icons of each APP to prompt the user that there is a new message to be processed.
  • the electronic device 300 can realize the shooting function through the ISP, the camera 393, the video codec, the GPU, the display screen 394, and the application processor.
  • the ISP is used to process the data fed back by the camera 393 .
  • when the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electrical signal, and the camera photosensitive element transmits the electrical signal to the ISP for processing, converting it into an image visible to the naked eye.
  • ISP can also perform algorithm optimization on image noise, brightness, and skin tone.
  • ISP can also optimize the exposure, color temperature and other parameters of the shooting scene.
  • the ISP may be located in the camera 393 .
  • Camera 393 is used to capture still images or video.
  • the object is projected through the lens to generate an optical image onto the photosensitive element.
  • the photosensitive element can be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor.
  • the photosensitive element converts the optical signal into an electrical signal, and then transmits the electrical signal to the ISP to convert it into a digital image signal.
  • the ISP outputs the digital image signal to the DSP for processing.
  • DSP converts digital image signals into standard RGB, YUV and other formats of image signals.
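The application does not specify which conversion matrix the DSP uses; as a hedged illustration, the standard full-range BT.601 YUV-to-RGB conversion for a single sample looks like this (coefficients per BT.601; the function name is illustrative).

```python
def yuv_to_rgb(y, u, v):
    """Convert one full-range BT.601 YUV sample (8-bit, chroma centered
    at 128) to an 8-bit RGB triple."""
    r = y + 1.402 * (v - 128)
    g = y - 0.344136 * (u - 128) - 0.714136 * (v - 128)
    b = y + 1.772 * (u - 128)
    clamp = lambda x: max(0, min(255, round(x)))  # keep values in [0, 255]
    return clamp(r), clamp(g), clamp(b)

print(yuv_to_rgb(128, 128, 128))  # neutral gray -> (128, 128, 128)
```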
  • the electronic device 300 may include 1 or N cameras 393 , where N is a natural number greater than 1.
  • the external memory interface 320 can be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the electronic device 300.
  • the external memory card communicates with the processor 340 through the external memory interface 320 to implement the data storage function, for example, saving files such as music and videos in the external memory card.
  • Internal memory 321 may be used to store computer executable program code, which includes instructions.
  • the processor 340 executes various functional applications and data processing of the electronic device 300 by executing the instructions stored in the internal memory 321 .
  • the internal memory 321 may include a storage program area and a storage data area.
  • the storage program area can store an operating system, an application program required for at least one function (such as a sound playback function, an image playback function, etc.), and the like.
  • the storage data area may store data (such as audio data, phone book, etc.) created during the use of the electronic device 300 and the like.
  • the internal memory 321 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, universal flash storage (UFS), and the like.
  • the internal memory 321 may be used to store the data of each APP message, and may also be used to store the red dot elimination policy corresponding to each APP.
  • the electronic device 300 may implement audio functions through an audio module 370, a speaker 370A, a receiver 370B, a microphone 370C, an application processor, and the like. Such as music playback, recording, etc.
  • the audio module 370 is used for converting digital audio information into analog audio signal output, and also for converting analog audio input into digital audio signal. Audio module 370 may also be used to encode and decode audio signals. In some embodiments, the audio module 370 may be provided in the processor 340 , or some functional modules of the audio module 370 may be provided in the processor 340 .
  • The speaker 370A, also referred to as a "horn", is used to convert audio electrical signals into sound signals.
  • the electronic device 300 can listen to music through the speaker 370A, or listen to a hands-free call.
  • The receiver 370B, also referred to as an "earpiece", is used to convert audio electrical signals into sound signals.
  • when answering a call, the receiver 370B can be placed close to the human ear to hear the voice.
  • The microphone 370C, also called a "mic" or "mouthpiece", is used to convert sound signals into electrical signals.
  • when making a sound, the user can speak with the mouth close to the microphone 370C to input the sound signal into the microphone 370C.
  • the electronic device 300 may be provided with at least one microphone 370C. In other embodiments, the electronic device 300 may be provided with two microphones 370C, which can implement a noise reduction function in addition to collecting sound signals. In other embodiments, the electronic device 300 may further be provided with three, four or more microphones 370C to collect sound signals, reduce noise, identify sound sources, and implement directional recording functions.
  • the pressure sensor 380A is used to sense pressure signals, and can convert the pressure signals into electrical signals.
  • the pressure sensor 380A may be provided on the display screen 394 .
  • the capacitive pressure sensor may be composed of at least two parallel plates of conductive material. When a force is applied to the pressure sensor 380A, the capacitance between the electrodes changes.
  • the electronic device 300 determines the intensity of the pressure according to the change in capacitance. When a touch operation acts on the display screen 394, the electronic device 300 detects the intensity of the touch operation according to the pressure sensor 380A.
  • the electronic device 300 may also calculate the touched position according to the detection signal of the pressure sensor 380A.
  • touch operations acting on the same touch position but with different touch operation intensities may correspond to different operation instructions. For example, when a touch operation whose intensity is less than the first pressure threshold acts on the short message application icon, the instruction for viewing the short message is executed. When a touch operation with a touch operation intensity greater than or equal to the first pressure threshold acts on the short message application icon, the instruction to create a new short message is executed.
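The pressure-dependent dispatch described above can be sketched as follows; the threshold value and function name are illustrative assumptions, not values from the application.

```python
FIRST_PRESSURE_THRESHOLD = 0.5  # assumed, normalized pressure reading

def on_message_icon_touch(pressure):
    """Map the intensity of a touch on the short message application
    icon to an operation instruction, per the example in the text."""
    if pressure < FIRST_PRESSURE_THRESHOLD:
        return "view_message"        # light touch: view the short message
    return "create_new_message"      # firm press: create a new short message
```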
  • the gyro sensor 380B may be used to determine the motion attitude of the electronic device 300 .
  • the gyro sensor 380B may determine the angular velocity of the electronic device 300 about three axes (i.e., the x, y, and z axes).
  • the gyro sensor 380B can be used for image stabilization.
  • the gyroscope sensor 380B detects the shaking angle of the electronic device 300, calculates the distance to be compensated by the lens module according to the angle, and allows the lens to counteract the shaking of the electronic device 300 through reverse motion to achieve anti-shake.
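The anti-shake compensation described above can be approximated with a simple small-angle model, where the lens offset needed to cancel a shake is roughly the focal length times the tangent of the shake angle. This model is an assumption for illustration; real OIS controllers are considerably more involved.

```python
import math

def lens_compensation_mm(focal_length_mm, shake_angle_deg):
    """Approximate distance the lens module must move to counteract a
    shake of the given angle (simple pinhole model, an assumption)."""
    return focal_length_mm * math.tan(math.radians(shake_angle_deg))

offset = lens_compensation_mm(4.0, 0.5)  # ~0.035 mm for a 0.5 degree shake
```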
  • the gyro sensor 380B can also be used for navigation and somatosensory game scenarios.
  • Air pressure sensor 380C is used to measure air pressure. In some embodiments, the electronic device 300 calculates the altitude from the air pressure value measured by the air pressure sensor 380C to assist in positioning and navigation.
  • Magnetic sensor 380D includes a Hall sensor.
  • the electronic device 300 can detect the opening and closing of the flip holster using the magnetic sensor 380D.
  • the electronic device 300 can detect the opening and closing of the flip according to the magnetic sensor 380D. Further, according to the detected opening and closing state of the leather case or the opening and closing state of the flip cover, characteristics such as automatic unlocking of the flip cover are set.
  • the acceleration sensor 380E can detect the magnitude of the acceleration of the electronic device 300 in various directions (generally three axes).
  • the magnitude and direction of gravity can be detected when the electronic device 300 is stationary. It can also be used to identify the posture of electronic devices, and can be used in applications such as horizontal and vertical screen switching, pedometers, etc.
  • the electronic device 300 can measure the distance through infrared or laser. In some embodiments, when shooting a scene, the electronic device 300 can use the distance sensor 380F to measure the distance to achieve fast focusing.
  • Proximity light sensor 380G may include, for example, light emitting diodes (LEDs) and light detectors, such as photodiodes.
  • the light emitting diodes may be infrared light emitting diodes.
  • the electronic device 300 emits infrared light to the outside through the light emitting diode.
  • Electronic device 300 uses photodiodes to detect infrared reflected light from nearby objects. When sufficient reflected light is detected, it can be determined that there is an object near the electronic device 300 . When insufficient reflected light is detected, the electronic device 300 may determine that there is no object near the electronic device 300 .
  • the electronic device 300 can use the proximity light sensor 380G to detect that the user holds the electronic device 300 close to the ear to talk, so as to automatically turn off the screen to save power.
  • the proximity light sensor 380G can also be used in holster mode and pocket mode to automatically unlock and lock the screen.
  • the ambient light sensor 380L is used to sense ambient light brightness.
  • the electronic device 300 can adaptively adjust the brightness of the display screen 394 according to the perceived ambient light brightness.
  • the ambient light sensor 380L can also be used to automatically adjust the white balance when taking pictures.
  • the ambient light sensor 380L can also cooperate with the proximity light sensor 380G to detect whether the electronic device 300 is in the pocket to prevent accidental touch.
  • the fingerprint sensor 380H is used to collect fingerprints.
  • the electronic device 300 can use the collected fingerprint characteristics to realize fingerprint unlocking, accessing application locks, taking photos with fingerprints, answering incoming calls with fingerprints, and the like.
  • the temperature sensor 380J is used to detect the temperature.
  • the electronic device 300 utilizes the temperature detected by the temperature sensor 380J to execute a temperature processing strategy. For example, when the temperature reported by the temperature sensor 380J exceeds a threshold value, the electronic device 300 may reduce the performance of the processor located near the temperature sensor 380J in order to reduce power consumption and implement thermal protection.
  • the electronic device 300 when the temperature is lower than another threshold, the electronic device 300 heats the battery 342 to avoid abnormal shutdown of the electronic device 300 caused by the low temperature.
  • the electronic device 300 boosts the output voltage of the battery 342 to avoid abnormal shutdown caused by low temperature.
  • Touch sensor 380K is also known as a "touch panel".
  • the touch sensor 380K may be disposed on the display screen 394, and the touch sensor 380K and the display screen 394 form a touch screen, also called a "touch screen”.
  • the touch sensor 380K is used to detect a touch operation on or near it.
  • the touch sensor can pass the detected touch operation to the application processor to determine the type of touch event.
  • Visual output related to touch operations may be provided through display screen 394 .
  • the touch sensor 380K may also be disposed on the surface of the electronic device 300 , which is different from the location where the display screen 394 is located.
  • the bone conduction sensor 380M can acquire vibration signals.
  • the bone conduction sensor 380M can acquire the vibration signal of the vibrating bone mass of the human voice.
  • the bone conduction sensor 380M can also contact the pulse of the human body and receive the blood pressure beating signal.
  • the bone conduction sensor 380M can also be disposed in the earphone, combined with the bone conduction earphone.
  • the audio module 370 can analyze the voice signal based on the vibration signal of the vocal vibration bone block obtained by the bone conduction sensor 380M, so as to realize the voice function.
  • the application processor can analyze the heart rate information based on the blood pressure beat signal obtained by the bone conduction sensor 380M, and realize the function of heart rate detection.
  • the keys 390 include a power-on key, a volume key, and the like. The keys 390 may be mechanical keys or touch keys.
  • the electronic device 300 may receive key inputs and generate key signal inputs related to user settings and function control of the electronic device 300 .
  • Motor 391 can generate vibrating cues.
  • the motor 391 can be used for incoming call vibration alerts, and can also be used for touch vibration feedback.
  • touch operations acting on different applications can correspond to different vibration feedback effects.
  • the motor 391 can also correspond to different vibration feedback effects for touch operations on different areas of the display screen 394 .
  • Different application scenarios (for example, time reminders, receiving messages, alarm clocks, and games) can also correspond to different vibration feedback effects.
  • the touch vibration feedback effect can also support customization.
  • the indicator 392 may be an indicator light, which may be used to indicate the charging status and battery level changes, and may also be used to indicate messages, missed calls, notifications, and the like.
  • the SIM card interface 395 is used to connect a SIM card.
  • the SIM card can be connected to and separated from the electronic device 300 by being inserted into or pulled out of the SIM card interface 395.
  • the electronic device 300 may support 1 or N SIM card interfaces, where N is a natural number greater than 1.
  • the SIM card interface 395 can support Nano SIM card, Micro SIM card, SIM card and so on.
  • multiple cards can be inserted into the same SIM card interface 395 at the same time.
  • the types of the plurality of cards may be the same or different.
  • the SIM card interface 395 can also be compatible with different types of SIM cards.
  • the SIM card interface 395 is also compatible with external memory cards.
  • the electronic device 300 interacts with the network through the SIM card to implement functions such as calls and data communication.
  • the electronic device 300 employs an eSIM, i.e., an embedded SIM card.
  • the eSIM card can be embedded in the electronic device 300 and cannot be separated from the electronic device 300 .
  • the data transfer method in the embodiment of the present application can be divided into a trigger stage, a source device determination stage, a target device determination stage, and a data transfer stage. The following description will be made in order.
  • in the trigger stage, all devices in the device group detect the user's trigger instruction. Different devices may have different trigger conditions. When a trigger instruction meets a device's trigger condition, that device turns on its own camera module and sends an instruction to the other devices; after receiving the instruction, the other devices also turn on their camera modules.
  • the trigger condition of the mobile phone may be that the body changes from a horizontal state to a vertical state within a preset time, such as 2 seconds, while facing the user. Then, when the user changes the mobile phone from the horizontal state to the vertical state within 2 seconds with the screen facing the user, the mobile phone can determine that it has received a trigger instruction that meets its own trigger condition, turn on its own camera, and simultaneously send an instruction to the other devices to turn on their cameras.
  • the trigger condition of the mobile phone may be shaking of the body; then, after the user picks up the mobile phone and shakes it, the mobile phone can determine that it has received a trigger instruction that meets its own trigger condition, turn on its own camera, and simultaneously send an instruction to the other devices to turn on their cameras.
  • the triggering device in the above triggering stage can be any device in the device group, and different devices can have different triggering conditions.
  • the above examples are just for easy understanding and do not represent limitations to the present application.
  • the device can be prevented from keeping the camera module turned on all the time, and the camera module can be turned on only when the user has a data transfer requirement, which saves the power consumption of the device.
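The horizontal-to-vertical trigger condition described above can be sketched as a simple check over orientation samples. This is a minimal illustration, not the patented implementation: the pitch thresholds, the sample format, and the `facing_user` flag are all assumptions.

```python
PITCH_HORIZONTAL = 20.0  # degrees; below this the body is treated as "horizontal" (assumed threshold)
PITCH_VERTICAL = 70.0    # degrees; above this the body is treated as "vertical" (assumed threshold)
TRIGGER_WINDOW = 2.0     # seconds; the preset time from the example above

def met_trigger_condition(samples):
    """samples: list of (timestamp_s, pitch_degrees, facing_user) tuples
    in chronological order.

    Returns True if the body went from horizontal to vertical within the
    preset window while the screen faced the user."""
    horizontal_at = None
    for ts, pitch, facing_user in samples:
        if pitch < PITCH_HORIZONTAL:
            horizontal_at = ts  # remember the most recent horizontal moment
        elif (pitch > PITCH_VERTICAL and facing_user
              and horizontal_at is not None
              and ts - horizontal_at <= TRIGGER_WINDOW):
            return True
    return False
```

On a real device the samples would come from the accelerometer/gyro sensors described earlier; when the function returns True, the device would turn on its camera module and notify the other devices in the group.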
  • there are two ways to determine the source device: first, each device sends the image data it collects to the identification device; second, each device performs recognition processing on the image data it collects to obtain its own first gesture confidence, and then all first gesture confidences are sent to the identification device for sorting.
  • the identification device may be any device in the device group, such as the device with the strongest computing power, and the identification device may determine that the device corresponding to the first gesture confidence with the highest confidence is the source device.
  • the first gesture may be a palm-grasping action. As shown in FIG. 2C, in order to reduce the amount of calculation during recognition, the first gesture confidence can be divided into the confidence that a palm is recognized and the confidence that a fist is recognized after the palm.
  • the recognition device can use a neural network model to recognize the image data collected by each device, and determine that the device corresponding to the image data in which the first gesture most likely appears is the source device; details are not repeated here.
  • S1 detect whether a hand image appears in the first preset period; if a hand image is detected, go to S3; if no hand image is detected, go to S2.
  • S3 Determine the confidence that the hand image is a palm gesture, then enter the second preset period.
  • S4 Detect whether a hand image appears in the second preset period; if a hand image is detected, go to S6; if no hand image is detected, go to S5.
  • S6 Determine the confidence level that the hand image is a fist gesture.
  • S7 Send the confidence level of the palm gesture and the confidence level of the fist gesture to the recognition device for sorting processing.
  • the identification device can select the device corresponding to the image with the highest comprehensive confidence as the source device.
  • the confidence can be determined according to the offset angle of the gesture and the proportion of the frame the gesture occupies. For the real source device, the recognized gesture generally has no large offset angle and occupies the largest proportion of the frame; therefore, the first gesture confidence can be determined according to these rules.
  • the second method can reduce the calculation amount of the recognition device and improve the recognition efficiency.
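The two-phase detection in S1–S7 and the sorting on the identification device can be sketched as follows. The combination rule (multiplying the two phase confidences) and the report format are assumptions for illustration; in the described method the confidences would come from a neural network model.

```python
def first_gesture_confidence(palm_conf, fist_conf):
    """Combine the two phase confidences of the first gesture
    (palm recognized, then fist recognized after the palm).

    palm_conf / fist_conf: confidences in [0, 1] reported for the first
    and second preset periods; None means no hand image was detected in
    that period."""
    if palm_conf is None or fist_conf is None:
        return 0.0
    # Simple comprehensive confidence: both phases must score well.
    return palm_conf * fist_conf

def pick_source_device(reports):
    """reports: {device_id: (palm_conf, fist_conf)} gathered by the
    identification device from all N devices.

    Returns the device id with the highest comprehensive first-gesture
    confidence, i.e. the source device."""
    ranked = sorted(
        reports.items(),
        key=lambda kv: first_gesture_confidence(*kv[1]),
        reverse=True,
    )
    return ranked[0][0]
```

With the second method, each device would compute `first_gesture_confidence` locally and only the scalar confidences would be sent to the identification device for sorting, which is what reduces the identification device's calculation load.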
  • the source device may generate prompt information, where the prompt information is used to inform the user that the device is about to perform data transfer.
  • the prompt information can be presented in any manner, such as text, image, video, or voice, which is not specifically limited here. By generating the prompt information, the user can be made aware of the state of the source device, which makes it convenient for the user to specify the target device through a gesture.
  • different prompt information can be generated according to the difference between the source device and the target device.
  • a special effect interface can be generated on the mobile phone.
  • the special effects include, but are not limited to, interface zooming, text annotations, and voice broadcasts, which are not specifically limited here.
  • different prompt information may be generated according to the different data types currently to be transferred by the source device; the data types may include screen-casting applications, display applications, interactive applications, and the like, which are not specifically limited here.
  • the source device will send an instruction to prepare to receive the second gesture to other devices other than itself.
  • the source device can obtain the information of all devices in the device group through the device management system, and then send, to the devices other than itself, an instruction to prepare to receive the second gesture; the instruction makes the other devices extend the on-time of their camera modules, that is, enter the third preset period and the fourth preset period.
  • the devices other than the source device in the device group start to collect images in the third preset period and the fourth preset period.
  • the identification device may be any device in the device group, such as the device with the strongest computing power, and the identification device may determine that the device corresponding to the second gesture confidence with the highest confidence is the target device.
  • the second gesture can be an action of spreading a palm.
  • the second gesture confidence can be divided into the confidence that a fist is recognized and the confidence that a palm is recognized after the fist.
  • the recognition device can use the neural network model to recognize the image data collected by each device, and determine the device corresponding to the image data in which the second gesture most likely appears as the target device; details are not repeated here.
  • S8 Detect whether a hand image appears in the third preset period; if a hand image is detected, go to S10; if no hand image is detected, go to S9.
  • S13 Determine the confidence level that the hand image is a palm gesture.
  • S14 Send the confidence level of the palm gesture and the confidence level of the fist gesture to the recognition device for sorting processing.
  • the identification device can select the device corresponding to the image with the highest comprehensive confidence as the target device.
  • the confidence can be determined according to the offset angle of the gesture and the proportion of the frame the gesture occupies. For the real target device, the recognized gesture generally has no large offset angle and occupies the largest proportion of the frame, so the second gesture confidence can be determined according to these rules.
  • the device does not need to be equipped with additional hardware, and can perform data transfer over a long distance, which greatly improves the user experience.
  • the camera modules of all devices can be turned off, and the specific data transfer process can be started.
  • the source device can first cancel the prompt information and then execute different data transfer strategies according to the currently running data type. For example, if the current data type is a screen-casting or display application, the source device can capture the currently displayed interface and then generate an interface-transfer special effect to transfer the interface to the target device for display; after the transfer is completed, the source device can either keep displaying or stop displaying. If the current data type is an interactive application and the target device has an application compatible with the application currently running on the source device, the source device can generate an interface-transfer special effect to transfer the interface to the target device, and the target device displays the interface synchronously; after the transfer is completed, the source device closes the application.
  • the source device can generate the special effect that the interface to be transferred moves to the target device, and the target device can also generate the interface to be transferred synchronously and enter the target device.
  • As shown in FIG. 2H, the source device can gradually shrink the interface to be transferred until it disappears, while the target device synchronously and gradually enlarges the interface to be transferred until it is displayed normally.
  • the source device and the target device can be automatically determined according to the user's gesture, so as to realize long-distance data transfer, which greatly improves the user's experience.
  • the protection scope of the data transfer method in the embodiments of the present application is described below.
  • FIG. 3 is a schematic flowchart of a data transfer method provided by an embodiment of the present application, which specifically includes the following steps:
  • Step 301 in response to the trigger instruction, control the camera modules of the N devices to collect first image data.
  • N is a natural number greater than or equal to 2.
  • the N devices can be devices in the family and share the same user account.
  • the first image data may be an image or a video composed of a set of images.
  • Step 302 Determine a source device among the N devices based on the first image data.
  • Step 303 Control the camera modules of N-1 devices other than the source device to collect second image data.
  • the second image data may be an image or a video composed of a set of images.
  • Step 304 Determine a target device among the N-1 devices based on the second image data.
  • Step 305 controlling the source device to transfer the target content to the target device.
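Steps 301–305 above can be sketched end to end as follows. The `collect`, `recognize_first`, `recognize_second`, and `transfer` callables are hypothetical placeholders for the camera modules, the gesture recognition model, and the transfer logic; only the control flow reflects the method.

```python
def data_transfer(devices, collect, recognize_first, recognize_second, transfer):
    """devices: list of N (N >= 2) device ids.
    collect(device) -> image data from that device's camera module.
    recognize_first / recognize_second(image) -> confidence in [0, 1]
    that the first / second gesture appears in the image.
    transfer(source, target) -> transfers the target content.

    Returns the (source, target) pair that was used."""
    # Step 301: in response to the trigger instruction, all N devices
    # collect first image data.
    first_images = {d: collect(d) for d in devices}
    # Step 302: the device whose image most likely shows the first
    # gesture is determined as the source device.
    source = max(first_images, key=lambda d: recognize_first(first_images[d]))
    # Step 303: the N-1 devices other than the source collect second image data.
    second_images = {d: collect(d) for d in devices if d != source}
    # Step 304: the device whose image most likely shows the second
    # gesture is determined as the target device.
    target = max(second_images, key=lambda d: recognize_second(second_images[d]))
    # Step 305: the source device transfers the target content to the target.
    transfer(source, target)
    return source, target
```

Because the source device is excluded before step 303, it can never be chosen as its own target, which matches the N-1 restriction in the claims.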
  • the determining a source device among the N devices based on the first image data includes:
  • the device corresponding to the first gesture confidence degree with the highest confidence degree is the source device.
  • the determining a source device among the N devices based on the first image data includes:
  • the first gesture confidence degree is sorted, and the device corresponding to the first gesture confidence degree with the highest confidence degree is determined as the source device.
  • the determining a target device among the N-1 devices based on the second image set includes:
  • the device corresponding to the second gesture confidence degree with the highest confidence degree is the target device.
  • the determining a target device among the N-1 devices based on the second image set includes:
  • the second gesture confidence degree is sorted, and the device corresponding to the second gesture confidence degree with the highest confidence degree is determined as the target device.
  • controlling the source device to transfer the target content to the target device includes:
  • the target content is transferred to the target device for display according to a preset transfer rule corresponding to the application type.
  • FIG. 4 is a schematic flowchart of another data transfer method provided by an embodiment of the present application, which specifically includes the following steps:
  • Step 401 the source device outputs the target content.
  • the source device is a sending device of the target content determined by the identification device according to the first data generated by N devices, N is a natural number greater than or equal to 2, and the N devices include the identification device, the The source device and the target device, and any two devices in the N devices can communicate with each other, and the identification device is any one of the N devices; the target content can be text, image, video, audio, etc. Any one or any combination of streaming media information.
  • the first data may be first image data or first gesture confidence.
  • Step 402 the source device sends the target content to the target device.
  • the target device is a receiving device of the target content determined by the identification device according to the second data generated by the N devices.
  • the second data may be second image data or second gesture confidence.
  • the identification device is the source device; there are the following possible embodiments:
  • the method before the source device outputs the target content, the method further includes:
  • the first data includes first image data; the source device determines that it is the sending device of the target content through the following steps:
  • the source device collects first image data through a camera module, and receives first image data from N-1 devices other than the source device;
  • the source device performs a first recognition process on the first image data, and determines a first gesture confidence level of the first image data;
  • the source device selects the first gesture confidence with the highest confidence
  • the source device determines that it is the device corresponding to the first gesture confidence with the highest confidence.
  • the method before the source device outputs the target content, the method further includes:
  • the first data includes a first gesture confidence level; the source device determines itself as a sending device of the target content through the following steps:
  • the source device performs first processing on the first image data collected by itself to obtain a first gesture confidence, and receives first gesture confidences from N-1 devices other than the source device;
  • the source device performs sorting processing on the confidence of the first gesture
  • the source device determines that it is the device corresponding to the first gesture confidence level with the highest confidence level.
  • the method before the source device sends the target content to the target device, the method further includes:
  • the second data includes second image data; the source device determines that the target device is a sink device of the target content through the following steps:
  • the source device receives second image data from N-1 devices other than the source device;
  • the source device performs a second recognition process on the second image data, and determines a second gesture confidence level of the second image data;
  • the source device selects the second gesture confidence with the highest confidence
  • the source device determines that the device corresponding to the second gesture confidence degree with the highest confidence degree is the target device.
  • the method before the source device sends the target content to the target device, the method further includes:
  • the second data includes a second gesture confidence; the source device determines that the target device is a receiving device of the target content by performing the following steps:
  • the source device receives second gesture confidences from N-1 devices other than the source device;
  • the source device performs sorting processing on the confidence of the second gesture
  • the source device determines that the device corresponding to the second gesture confidence degree with the highest confidence degree is the target device.
  • the identification device is the target device; there are the following possible embodiments:
  • the method before the source device outputs the target content, the method further includes:
  • the first data includes first image data; the target device determines that the source device is a sending device of the target content through the following steps:
  • the target device collects first image data through a camera module, and receives first image data from N-1 devices other than the target device;
  • the target device performs a first recognition process on the first image data, and determines a first gesture confidence level of the first image data;
  • the target device selects the first gesture confidence with the highest confidence
  • the target device determines that the device corresponding to the first gesture confidence degree with the highest confidence degree is the source device.
  • the method before the source device outputs the target content, the method further includes:
  • the first data includes a first gesture confidence; the target device determines that the source device is a sending device of the target content by performing the following steps:
  • the target device performs first processing on the first image data collected by itself to obtain a first gesture confidence, and receives first gesture confidences from N-1 devices other than the target device;
  • the target device performs sorting processing on the confidence of the first gesture
  • the target device determines that the device corresponding to the first gesture confidence with the highest confidence is the source device.
  • the method before the source device sends the target content to the target device, the method further includes:
  • the second data includes second image data; the target device determines that it is a receiving device of the target content through the following steps:
  • the target device collects second image data through a camera module, and receives second image data from the source device and N-2 devices other than itself;
  • the target device performs a second recognition process on the second image data, and determines a second gesture confidence level of the second image data;
  • the target device selects the second gesture confidence with the highest confidence
  • the target device determines that it is the device corresponding to the second gesture confidence level with the highest confidence level.
  • the method before the source device sends the target content to the target device, the method further includes:
  • the second data includes a second gesture confidence level; the target device determines that it is a receiving device of the target content through the following steps:
  • the target device performs second processing on the second image data collected by itself to obtain a second gesture confidence, and receives the second gesture confidence from the source device and N-2 devices other than itself ;
  • the target device performs sorting processing on the confidence level of the second gesture
  • the target device determines that it is the device corresponding to the second gesture confidence level with the highest confidence level.
  • the identification device is a first device
  • the first device is any one of the N devices except the source device and the target device
  • the method before the source device outputs the target content, the method further includes:
  • the first data includes first image data; the first device determines that the source device is a sending device of the target content by performing the following steps:
  • the first device collects first image data through a camera module, and receives first image data from N-1 devices other than the first device;
  • the first device performs a first recognition process on the first image data, and determines a first gesture confidence level of the first image data;
  • the first device selects the first gesture confidence with the highest confidence
  • the first device determines that the device corresponding to the first gesture confidence with the highest confidence is the source device.
  • the method before the source device outputs the target content, the method further includes:
  • the first data includes a first gesture confidence; the first device determines that the source device is a sending device of the target content by performing the following steps:
  • the first device performs first processing on the first image data collected by itself to obtain a first gesture confidence, and receives first gesture confidences from N-1 devices other than the first device;
  • the first device performs sorting processing on the confidence of the first gesture
  • the first device determines that the device corresponding to the first gesture confidence with the highest confidence is the source device.
  • the method before the source device sends the target content to the target device, the method further includes:
  • the second data includes second image data; the first device determines that the target device is a receiving device of the target content through the following steps:
  • the first device collects second image data through a camera module, and receives second image data from the source device and N-2 devices other than itself;
  • the first device performs a second recognition process on the second image data, and determines a second gesture confidence level of the second image data;
  • the first device selects the second gesture confidence with the highest confidence
  • the first device determines that the device corresponding to the second gesture confidence degree with the highest confidence degree is the target device.
  • the method before the source device sends the target content to the target device, the method further includes:
  • the second data includes a second gesture confidence; the first device determines that the target device is a receiving device of the target content by performing the following steps:
  • the first device performs second processing on the second image data collected by itself to obtain a second gesture confidence, and receives second gesture confidences from the source device and the N-2 devices other than itself;
  • the first device performs sorting processing on the confidence of the second gesture
  • the first device determines that the device corresponding to the second gesture confidence degree with the highest confidence degree is the target device.
  • the method before the source device outputs the target content, the method further includes:
  • the trigger device turns on its own camera module according to the trigger instruction, and sends a first camera instruction to at least one device, where the first camera instruction is used to enable the device to turn on its own camera module, and the trigger device is any one of the N devices.
  • the trigger device turns on its own camera module according to the trigger instruction, and sends a first camera instruction to at least one device, including:
  • when the trigger device receives a trigger instruction that meets the preset trigger condition, it turns on its own camera module and obtains the current working states of the N-1 devices other than itself, where the working states include a running state and a non-running state;
  • the triggering device selects the device in the running state from the N-1 devices;
  • the trigger device sends a first imaging instruction to the device in the running state.
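The selection of running devices described above can be sketched as follows. `get_state` and `send` are hypothetical placeholders for the device management system query and the transport, and the state strings are assumptions.

```python
def send_first_camera_instruction(trigger_device, devices, get_state, send):
    """The trigger device obtains the working states of the N-1 other
    devices and sends the first camera instruction only to those in the
    running state.

    get_state(device) -> "running" or "non-running".
    send(device, message) -> hypothetical transport call."""
    others = [d for d in devices if d != trigger_device]
    running = [d for d in others if get_state(d) == "running"]
    for d in running:
        send(d, "FIRST_CAMERA_INSTRUCTION")
    return running
```

Skipping devices in the non-running state avoids waking devices that cannot participate, consistent with the power-saving goal of the trigger stage.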
  • the method before the source device sends the target content to the target device, the method further includes:
  • the source device sends a second imaging instruction to the N-1 devices other than itself, where the second imaging instruction is used to make the N-1 devices maintain the ON state of the camera module.
  • the images collected by all devices can be recognized to determine the source device and the target device, and the transfer of streaming media information between devices that meets the user's needs can be realized without additional hardware, which greatly improves the user's interaction experience.
  • the electronic device includes corresponding hardware structures and/or software modules for executing each function.
  • the present application can be implemented in hardware or in the form of a combination of hardware and computer software. Whether a function is performed by hardware or computer software driving hardware depends on the specific application and design constraints of the technical solution. Skilled artisans may implement the described functionality using different methods for each particular application, but such implementations should not be considered beyond the scope of this application.
  • the electronic device may be divided into functional units according to the foregoing method examples.
  • each functional unit may be divided corresponding to each function, or two or more functions may be integrated into one processing unit.
  • the above-mentioned integrated units may be implemented in the form of hardware, or may be implemented in the form of software functional units. It should be noted that the division of units in the embodiments of the present application is schematic, and is only a logical function division, and other division methods may be used in actual implementation.
  • the device includes:
  • the first collection unit 510 is configured to control the camera modules of N devices to collect the first image data in response to the trigger instruction, where N is a natural number greater than or equal to 2;
  • a source device determining unit 520 configured to determine a source device in the N devices based on the first image data;
  • a second acquisition unit 530 configured to control the camera modules of the N-1 devices other than the source device to acquire second image data;
  • a target device determining unit 540 configured to determine a target device in the N-1 devices based on the second image data;
  • the transfer control unit 550 is configured to control the source device to transfer the target content to the target device.
  • FIG. 6 is a block diagram of the functional units of another data transfer apparatus provided by an embodiment of the present application; the apparatus includes:
  • the content output unit 610 is configured to enable a source device to output target content, where the source device is a sending device of the target content determined by the identification device according to the first data generated by the N devices, where N is a natural number greater than or equal to 2,
  • the N devices include the identification device, the source device, and the target device, and any two devices in the N devices can communicate with each other, and the identification device is any one of the N devices;
  • a content transfer unit 620 configured to make the source device send the target content to the target device, where the target device is the receiving device of the target content determined by the identification device according to the second data generated by the N devices.
  • Embodiments of the present application further provide a computer storage medium, wherein the computer storage medium stores a computer program for electronic data exchange, and the computer program causes the computer to execute part or all of the steps of any method described in the above method embodiments; the above computer includes an electronic device.
  • Embodiments of the present application further provide a computer program product, where the computer program product includes a non-transitory computer-readable storage medium storing a computer program, and the computer program is operable to cause a computer to execute any one of the method embodiments described above. some or all of the steps of the method.
  • the computer program product may be a software installation package, and the above computer includes an electronic device.
  • the disclosed apparatus may be implemented in other manners.
  • the device embodiments described above are only illustrative; for example, the division of the above-mentioned units is only a logical function division, and other division methods may be used in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the shown or discussed mutual coupling or direct coupling or communication connection may be through some interfaces, indirect coupling or communication connection of devices or units, and may be in electrical or other forms.
  • the units described above as separate components may or may not be physically separated, and components shown as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution in this embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
  • the above-mentioned integrated units may be implemented in the form of hardware, or may be implemented in the form of software functional units.
  • the above-mentioned integrated units if implemented in the form of software functional units and sold or used as independent products, may be stored in a computer-readable memory.
  • in essence, the technical solution of the present application, or the part of it that contributes to the prior art, or all or part of the technical solution, can be embodied in the form of a software product; the computer software product is stored in a memory and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to execute all or part of the steps of the methods described in the embodiments of the present application.
  • the aforementioned memory includes media that can store program code, such as a USB flash drive, read-only memory (ROM, Read-Only Memory), random access memory (RAM, Random Access Memory), removable hard disk, magnetic disk, or optical disk.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present application provides a data transfer method and related devices. First, in response to a trigger instruction, the camera modules of N devices are controlled to collect first image data, where N is a natural number greater than or equal to 2; next, a source device among the N devices is determined based on the first image data; then, the camera modules of the N-1 devices other than the source device are controlled to collect second image data; then, a target device among the N-1 devices is determined based on the second image data; finally, the source device is controlled to transfer target content to the target device. The images collected by all devices can be recognized to determine the source device and the target device, so streaming media can be transferred between devices according to the user's needs without additional hardware, greatly improving the user's interaction experience.

Description

Data transfer method and related apparatus
This application claims priority to the earlier application No. 202011283401.4, entitled "Data transfer method and related apparatus", filed on November 16, 2020, the content of which is incorporated herein by reference.
Technical Field
The present application relates to the field of communication technologies, and in particular to a data transfer method and related apparatus.
Background
With the development of technology, the era of the Internet of Everything has arrived. In some application scenarios, a user needs to transfer streaming media from one device to another. Current solutions include using Ultra Wide Band (UWB) sensors or Near Field Communication (NFC) to transfer streaming media between devices. For example, NFC tags can be installed on both the source device and the target device; after the NFC tags of the two devices come into close contact, the picture playing on the source device is transferred to the target device for playback. Alternatively, UWB sensors can be installed on the source device and the target device, and communication between the UWB sensors is used to locate each other so that the source device can cast its screen to the target device.
However, the above methods depend on hardware, have many limitations, and require close-range triggering, resulting in a poor user experience.
Summary
Embodiments of the present application provide a data transfer method and related apparatus, which, combined with image recognition, can automatically determine the source device and the target device from a distance, greatly improving the user experience.
In a first aspect, an embodiment of the present application provides a data transfer method, the method including:
in response to a trigger instruction, controlling the camera modules of N devices to collect first image data, where N is a natural number greater than or equal to 2;
determining a source device among the N devices based on the first image data;
controlling the camera modules of the N-1 devices other than the source device to collect second image data;
determining a target device among the N-1 devices based on the second image data;
controlling the source device to transfer target content to the target device.
In a second aspect, an embodiment of the present application provides a data transfer method, the method including:
a source device outputting target content, where the source device is the sending device of the target content determined by an identification device according to first data generated by N devices, N is a natural number greater than or equal to 2, the N devices include the identification device, the source device, and a target device, any two of the N devices can communicate with each other, and the identification device is any one of the N devices;
the source device sending the target content to the target device, where the target device is the receiving device of the target content determined by the identification device according to second data generated by the N devices.
In a third aspect, an embodiment of the present application provides a data transfer system. The data transfer system includes N devices, where N is a natural number greater than or equal to 2, and any two of the N devices can communicate with each other. The N devices include a trigger device, an identification device, a source device, and a target device; the trigger device is any one of the N devices, the identification device is any one of the N devices, the source device is any one of the N devices other than the target device, and the target device is any one of the N devices other than the source device;
the trigger device is configured to turn on the camera modules of the N devices according to a corresponding trigger instruction;
the identification device is configured to determine the source device according to first data generated by the N devices, and to determine the target device according to second data generated by the N devices;
the source device is configured to transfer target content to the target device.
In a fourth aspect, an embodiment of the present application provides an electronic device, including an application processor, a memory, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the application processor, and the programs include instructions for performing the steps of any method of the first aspect of the embodiments of the present application.
In a fifth aspect, an embodiment of the present application provides a computer storage medium storing a computer program, the computer program including program instructions that, when executed by a processor, cause the processor to perform any method of the first aspect of the embodiments of the present application.
In a sixth aspect, an embodiment of the present application provides a computer program product, where the computer program product includes a non-transitory computer-readable storage medium storing a computer program, and the computer program is operable to cause a computer to perform some or all of the steps described in any method of the first aspect of the embodiments of the present application. The computer program product may be a software installation package.
Brief Description of the Drawings
To explain the technical solutions in the embodiments of the present application or in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
FIG. 1A is a schematic diagram of a system architecture for a data transfer method provided by an embodiment of the present application;
FIG. 1B is a schematic structural diagram of an electronic device provided by an embodiment of the present application;
FIG. 1C is a schematic structural diagram of another electronic device provided by an embodiment of the present application;
FIG. 2A is a schematic diagram of a trigger manner provided by an embodiment of the present application;
FIG. 2B is a schematic diagram of another trigger manner provided by an embodiment of the present application;
FIG. 2C is a schematic diagram of a first gesture for determining a source device provided by an embodiment of the present application;
FIG. 2D is a schematic flowchart of determining a source device provided by an embodiment of the present application;
FIG. 2E is a schematic diagram of a second gesture for determining a target device provided by an embodiment of the present application;
FIG. 2F is a schematic flowchart of determining a target device provided by an embodiment of the present application;
FIG. 2G is a schematic interface diagram of a data transfer manner provided by an embodiment of the present application;
FIG. 2H is a schematic interface diagram of another data transfer manner provided by an embodiment of the present application;
FIG. 3 is a schematic flowchart of a data transfer method provided by an embodiment of the present application;
FIG. 4 is a schematic flowchart of another data transfer method provided by an embodiment of the present application;
FIG. 5 is a block diagram of the functional units of a data transfer apparatus provided by an embodiment of the present application;
FIG. 6 is a block diagram of the functional units of another data transfer apparatus provided by an embodiment of the present application.
Detailed Description
To enable those skilled in the art to better understand the solutions of the present application, the technical solutions in the embodiments of the present application are described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present application without creative effort fall within the protection scope of the present application.
The terms "first", "second", etc. in the specification, claims, and drawings of the present application are used to distinguish different objects, not to describe a particular order. In addition, the terms "include" and "have" and any variations thereof are intended to cover non-exclusive inclusion. For example, a process, method, software, product, or device that includes a series of steps or units is not limited to the listed steps or units, but optionally also includes steps or units that are not listed, or optionally also includes other steps or units inherent to the process, method, product, or device.
Reference to an "embodiment" herein means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the present application. The appearance of this phrase in various places in the specification does not necessarily refer to the same embodiment, nor to independent or alternative embodiments mutually exclusive of other embodiments. Those skilled in the art understand, explicitly and implicitly, that the embodiments described herein may be combined with other embodiments. The embodiments of the present application are described in detail below with reference to the drawings.
The embodiments of the present application are introduced comprehensively below from three aspects: the software and hardware operating environment (Part 1), the overall method flow and key technical implementation (Part 2), and the scope of claimed protection (Part 3).
Part 1: the software and hardware operating environment of the technical solution disclosed in the present application is introduced as follows.
As shown in FIG. 1A, FIG. 1A is a system architecture diagram of a data transfer method provided by an embodiment of the present application. The system architecture in this embodiment includes a device group 110 and a device management system 120. The device group 110 includes N devices, where N is a natural number greater than or equal to 2, and any two devices in the device group 110 can communicate with each other. The device management system 120 may be a cloud system used to manage the information of all devices under the same user identity. The device group 110 and the device management system 120 can communicate with each other, and any device in the device group 110 can obtain from the device management system the list of all devices in the device group along with their specific IP addresses. For example, the devices in a user's home may belong to the same user account; when devices in the device group 110 need to communicate with each other, any device in the device group 110 can obtain the IP address of the corresponding device through the device management system 120 and then communicate. As for the specific communication mechanism, the device group 110 may form a single local area network, or the devices may connect to the same Wi-Fi network; this is not specifically limited here.
The device group 110 may include a trigger device 111, an identification device 112, a source device 113, and a target device 114.
It should be noted that the trigger device 111 may be any one of the N devices, the identification device 112 may be any one of the N devices, the source device 113 may be any one of the N devices other than the target device 114, and the target device may be any one of the N devices other than the source device 113. It can be understood that the trigger device 111 may also serve as the identification device 112, or as the source device 113, or as the target device 114, or as both the identification device 112 and the source device 113, or as both the identification device 112 and the target device; alternatively, the trigger device 111 may be any one of the N devices other than the identification device 112, the source device 113, and the target device 114. In short, except that the source device 113 and the target device 114 cannot be the same device, any device in the device group can serve as the trigger device 111 and the identification device 112; "trigger" and "identification" are merely functional descriptions used to avoid confusion and do not imply different devices. In a preferred embodiment, the identification device 112 may be the device with the strongest computing power in the device group 110.
After receiving a trigger instruction from the user, the trigger device 111 can turn on its own camera module and send a first camera instruction to the N-1 devices in the device group 110 other than itself; the first camera instruction causes the N-1 devices to turn on their respective camera modules and start collecting images.
The identification device 112 can determine the source device 113 according to first data generated by the N devices, and determine the target device 114 according to second data generated by the N devices; the first data may be captured first image data or a first gesture confidence, and the second data may be captured second image data or a second gesture confidence.
The source device 113 can be used to transfer the target content to the target device 114. In one embodiment, after the identification device 112 determines the source device 113, the source device 113 sends a second camera instruction to the N-1 devices other than itself; the second camera instruction causes the N-1 devices to use their respective camera modules to start a second round of image collection.
The target device 114 can be used to receive the target content from the source device 113 and present it; the target content may be streaming media, including but not limited to any one or any combination of images, text, video, and audio.
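As a rough illustration of the device-group lookup described above, the sketch below models an account-keyed registry in the spirit of the device management system 120; all names (`DeviceManagementSystem`, `DeviceInfo`, `register`, `list_devices`) are illustrative assumptions, since the document specifies no API.

```python
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class DeviceInfo:
    device_id: str
    ip_address: str
    state: str = "running"  # "running" or "non-running"


class DeviceManagementSystem:
    """Cloud-side registry keyed by user account (a sketch, not the real protocol)."""

    def __init__(self) -> None:
        self._accounts: Dict[str, List[DeviceInfo]] = {}

    def register(self, account: str, device: DeviceInfo) -> None:
        # Devices in one home share the same account, per the architecture above.
        self._accounts.setdefault(account, []).append(device)

    def list_devices(self, account: str) -> List[DeviceInfo]:
        # Any device in the group can fetch the full list, including IP addresses.
        return list(self._accounts.get(account, []))
```

A device wanting to reach a peer would call `list_devices` with its own account and pick the peer's `ip_address` from the result.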
The electronic device in the embodiments of the present application is described below. It can be understood that the structure illustrated in the embodiments of the present application does not constitute a specific limitation on the electronic device. In other embodiments of the present application, the electronic device may include more or fewer components than shown, combine some components, split some components, or use a different component arrangement. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
As shown in FIG. 1B, FIG. 1B is a schematic structural diagram of an electronic device provided by an embodiment of the present application. N electronic devices can form the device group 110. The electronic device may include various handheld devices with wireless communication capability, in-vehicle devices, wearable devices, computing devices, or other processing devices connected to a wireless modem, as well as various forms of user equipment (UE), mobile stations (MS), terminal devices, and so on. Any electronic device 200 in the embodiments of the present application may include: a camera module 210, a trigger sensor 220, a flow system 230, and a redirection system 240.
The flow system 230 includes a gesture recognition unit 231, a trigger recognition unit 232, a control unit 233, and a special-effect unit 234. The gesture recognition unit 231 is configured to receive image data from the camera module 210, perform gesture recognition, and determine the source device 113 and the target device 114 according to the gestures. The trigger recognition unit 232 is configured to receive a trigger instruction from the trigger sensor 220 and check whether the trigger instruction meets the preset trigger condition; when it does, the trigger recognition unit notifies the control unit 233 to send an instruction to the other electronic devices 200 so that they turn on their camera modules 210. The control unit 233 can also control the special-effect unit 234 to generate a special effect indicating that the source device is transferring data to the target device; it can be understood that the special-effect unit generally generates the transfer effect only when the electronic device 200 serves as the source device 113 or the target device 114. In summary, the flow system 230 is mainly used to determine whether data transfer is needed, determine the source device 113 and the target device 114, and generate the related data transfer effects.
The camera module 210 may include a camera array, which is not specifically limited here.
The trigger sensor 220 may include a gravity sensor, an optical sensor, an acoustic sensor, and so on, which is not specifically limited here.
The redirection system 240 is mainly used to control the specific implementation of the data transfer, such as screen casting, after the flow system 230 has determined the source device 113 and the target device 114.
上述电子设备主要从功能单元方面进行了说明,下面结合图1C从硬件结构方面对本申请实施例中常见的电子设备形态进行说明,图1C为本申请实施例提供的另一种电子设备的结构示意图,包括系统级芯片310,外部存储器接口320,内部存储器321,通用串行总线(universal serial bus,USB)接口330,充电管理模块340,电源管理模块341,电池342,天线1,天线2,移动通信模块350,无线通信模块360,音频模块370,扬声器370A,受话器370B,麦克风370C,传感器模块380,按键390,马达391,指示器392,摄像头393,显示屏394,以及用户标识模块(subscriber identification module,SIM)卡接口395等。其中传感器模块380可以包括压力传感器380A,陀螺仪传感器380B,气压传感器380C,磁传感器380D,加速度传感器380E,距离传感器380F,接近光传感器380G,指纹传感器380H,温度传感器380J,触摸传感器380K,环境光传感器380L,骨传导传感器380M等。
电子设备300的无线通信功能可以通过天线1,天线2,移动通信模块350,无线通信模块360,调制解调处理器以及基带处理器等实现。
天线1和天线2用于发射和接收电磁波信号。电子设备300中的每个天线可用于覆盖单个或多个通信频带。不同的天线还可以复用,以提高天线的利用率。例如:可以将天线1复用为无线局域网的分集天线。在另外一些实施例中,天线可以和调谐开关结合使用。
移动通信模块350可以提供应用在电子设备300上的包括2G/3G/4G/5G/6G等无线通信的解决方案。移动通信模块350可以包括至少一个滤波器,开关,功率放大器,低噪声放大器(low noise amplifier,LNA)等。移动通信模块350可以由天线1接收电磁波,并对接收的电磁波进行滤波,放大等处理,传送至调制解调处理器进行解调。移动通信模块350还可以对经调制解调处理器调制后的信号放大,经天线1转为电磁波辐射出去。在一些实施例中,移动通信模块350的至少部分功能模块可以被设置于处理器340中。在一些实施例中,移动通信模块350的至少部分功能模块可以与处理器340的至少部分模块被设置在同一个器件中。
调制解调处理器可以包括调制器和解调器。其中,调制器用于将待发送的低频基带信号调制成中高频信号。解调器用于将接收的电磁波信号解调为低频基带信号。随后解调器将解调得到的低频基带信号传送至基带处理器处理。低频基带信号经基带处理器处理后,被传递给应用处理器。应用处理器通过音频设备(不限于扬声器370A,受话器370B等)输出声音信号,或通过显示屏394显示图像或视频。在一些实施例中,调制解调处理器可以是独立的器件。在另一些实施例中,调制解调处理器可以独立于处理器340,与移动通信模块350或其他功能模块设置在同一个器件中。
无线通信模块360可以提供应用在电子设备300上的包括无线局域网(wireless local area networks,WLAN)(如无线保真(wireless fidelity,Wi-Fi)网络),蓝牙(bluetooth,BT),全球导航卫星系统(global navigation satellite system,GNSS),调频(frequency modulation,FM),近距 离无线通信技术(near field communication,NFC),红外技术(infrared,IR)等无线通信的解决方案。无线通信模块360可以是集成至少一个通信处理模块的一个或多个器件。无线通信模块360经由天线2接收电磁波,将电磁波信号调频以及滤波处理,将处理后的信号发送到处理器340。无线通信模块360还可以从处理器340接收待发送的信号,对其进行调频,放大,经天线2转为电磁波辐射出去。
在一些实施例中,电子设备300的天线1和移动通信模块350耦合,天线2和无线通信模块360耦合,使得电子设备300可以通过无线通信技术与网络以及其他设备通信。所述无线通信技术可以包括全球移动通讯系统(global system for mobile communications,GSM),通用分组无线服务(general packet radio service,GPRS),码分多址接入(code division multiple access,CDMA),宽带码分多址(wideband code division multiple access,WCDMA),时分码分多址(time-division code division multiple access,TD-SCDMA),长期演进(long term evolution,LTE),BT,GNSS,WLAN,NFC,FM,和/或IR技术等。所述GNSS可以包括全球卫星定位系统(global positioning system,GPS),全球导航卫星系统(global navigation satellite system,GLONASS),北斗卫星导航系统(beidou navigation satellite system,BDS),准天顶卫星系统(quasi-zenith satellite system,QZSS)和/或星基增强系统(satellite based augmentation systems,SBAS)。
充电管理模块340用于从充电器接收充电输入。其中,充电器可以是无线充电器,也可以是有线充电器。在一些有线充电的实施例中,充电管理模块340可以通过USB接口330接收有线充电器的充电输入。在一些无线充电的实施例中,充电管理模块340可以通过电子设备300的无线充电线圈接收无线充电输入。充电管理模块340为电池342充电的同时,还可以通过电源管理模块341为电子设备供电。
电源管理模块341用于连接电池342,充电管理模块340与处理器340。电源管理模块341接收电池342和/或充电管理模块340的输入,为处理器340,内部存储器321,外部存储器,显示屏394,摄像头393,和无线通信模块360等供电。电源管理模块341还可以用于监测电池容量,电池循环次数,电池健康状态(漏电,阻抗)等参数。在其他一些实施例中,电源管理模块341也可以设置于处理器340中。在另一些实施例中,电源管理模块341和充电管理模块340也可以设置于同一个器件中。
电子设备300通过GPU,显示屏394,以及应用处理器等实现显示功能。GPU为图像处理的微处理器,连接显示屏394和应用处理器。GPU用于执行数学和几何计算,用于图形渲染。处理器340可包括一个或多个GPU,其执行程序指令以生成或改变显示信息。
显示屏394用于显示图像,视频等。显示屏394包括显示面板。显示面板可以采用液晶显示屏(liquid crystal display,LCD),有机发光二极管(organic light-emitting diode,OLED),有源矩阵有机发光二极体或主动矩阵有机发光二极体(active-matrix organic light emitting diode的,AMOLED),柔性发光二极管(flex light-emitting diode,FLED),Miniled,MicroLed,Micro-oLed,量子点发光二极管(quantum dot light emitting diodes,QLED)等。在一些实施例中,电子设备300可以包括1个或N个显示屏394,N为大于1的自然数。本申请实施例中,显示屏394可用于在各个APP的图标上显示红点或数量红点,用于提示用户有新消息待处理。
电子设备300可以通过ISP,摄像头393,视频编解码器,GPU,显示屏394以及应用处理器等实现拍摄功能。
ISP用于处理摄像头393反馈的数据。例如,拍照时,打开快门,光线通过镜头被传递到摄像头感光元件上,光信号转换为电信号,摄像头感光元件将所述电信号传递给ISP处理,转化为肉眼可见的图像。ISP还可以对图像的噪点,亮度,肤色进行算法优化。ISP还可以对拍摄场景的曝光,色温等参数优化。在一些实施例中,ISP可以设置在摄像头393中。
摄像头393用于捕获静态图像或视频。物体通过镜头生成光学图像投射到感光元件。感光元件可以是电荷耦合器件(charge coupled device,CCD)或互补金属氧化物半导体(complementary  metal-oxide-semiconductor,CMOS)光电晶体管。感光元件把光信号转换成电信号,之后将电信号传递给ISP转换成数字图像信号。ISP将数字图像信号输出到DSP加工处理。DSP将数字图像信号转换成标准的RGB,YUV等格式的图像信号。在一些实施例中,电子设备300可以包括1个或N个摄像头393,N为大于1的自然数。
外部存储器接口320可以用于连接外部存储卡,例如Micro SD卡,实现扩展电子设备300的存储能力。外部存储卡通过外部存储器接口320与处理器340通信,实现数据存储功能。例如将音乐,视频等文件保存在外部存储卡中。
内部存储器321可以用于存储计算机可执行程序代码,所述可执行程序代码包括指令。处理器340通过运行存储在内部存储器321的指令,从而执行电子设备300的各种功能应用以及数据处理。内部存储器321可以包括存储程序区和存储数据区。其中,存储程序区可存储操作系统,至少一个功能所需的应用程序(比如声音播放功能,图像播放功能等)等。存储数据区可存储电子设备300使用过程中所创建的数据(比如音频数据,电话本等)等。此外,内部存储器321可以包括高速随机存取存储器,还可以包括非易失性存储器,例如至少一个磁盘存储器件,闪存器件,通用闪存存储器(universal flash storage,UFS)等。本申请实施例中,内部存储器321可以用于存储各个APP消息的数据,还可用于存储各个APP对应的红点消除策略。
电子设备300可以通过音频模块370,扬声器370A,受话器370B,麦克风370C,以及应用处理器等实现音频功能。例如音乐播放,录音等。
音频模块370用于将数字音频信息转换成模拟音频信号输出,也用于将模拟音频输入转换为数字音频信号。音频模块370还可以用于对音频信号编码和解码。在一些实施例中,音频模块370可以设置于处理器340中,或将音频模块370的部分功能模块设置于处理器340中。
扬声器370A,也称“喇叭”,用于将音频电信号转换为声音信号。电子设备300可以通过扬声器370A收听音乐,或收听免提通话。
受话器370B,也称“听筒”,用于将音频电信号转换成声音信号。当电子设备300接听电话或语音信息时,可以通过将受话器370B靠近人耳接听语音。
麦克风370C,也称“话筒”,“传声器”,用于将声音信号转换为电信号。当拨打电话或发送语音信息时,用户可以通过人嘴靠近麦克风370C发声,将声音信号输入到麦克风370C。电子设备300可以设置至少一个麦克风370C。在另一些实施例中,电子设备300可以设置两个麦克风370C,除了采集声音信号,还可以实现降噪功能。在另一些实施例中,电子设备300还可以设置三个,四个或更多麦克风370C,实现采集声音信号,降噪,还可以识别声音来源,实现定向录音功能等。
压力传感器380A用于感受压力信号,可以将压力信号转换成电信号。在一些实施例中,压力传感器380A可以设置于显示屏394。压力传感器380A的种类很多,如电阻式压力传感器,电感式压力传感器,电容式压力传感器等。电容式压力传感器可以是包括至少两个具有导电材料的平行板。当有力作用于压力传感器380A,电极之间的电容改变。电子设备300根据电容的变化确定压力的强度。当有触摸操作作用于显示屏394,电子设备300根据压力传感器380A检测所述触摸操作强度。电子设备300也可以根据压力传感器380A的检测信号计算触摸的位置。在一些实施例中,作用于相同触摸位置,但不同触摸操作强度的触摸操作,可以对应不同的操作指令。例如:当有触摸操作强度小于第一压力阈值的触摸操作作用于短消息应用图标时,执行查看短消息的指令。当有触摸操作强度大于或等于第一压力阈值的触摸操作作用于短消息应用图标时,执行新建短消息的指令。
陀螺仪传感器380B可以用于确定电子设备300的运动姿态。在一些实施例中,可以通过陀螺仪传感器380B确定电子设备300围绕三个轴(即,x,y和z轴)的角速度。陀螺仪传感器380B可以用于拍摄防抖。示例性的,当按下快门,陀螺仪传感器380B检测电子设备300抖动的角度,根据角度计算出镜头模组需要补偿的距离,让镜头通过反向运动抵消电子设备300的抖动,实现防抖。陀螺仪传感器380B还可以用于导航,体感游戏场景。
气压传感器380C用于测量气压。在一些实施例中,电子设备300通过气压传感器380C测得 的气压值计算海拔高度,辅助定位和导航。
磁传感器380D包括霍尔传感器。电子设备300可以利用磁传感器380D检测翻盖皮套的开合。在一些实施例中,当电子设备300是翻盖机时,电子设备300可以根据磁传感器380D检测翻盖的开合。进而根据检测到的皮套的开合状态或翻盖的开合状态,设置翻盖自动解锁等特性。
加速度传感器380E可检测电子设备300在各个方向上(一般为三轴)加速度的大小。当电子设备300静止时可检测出重力的大小及方向。还可以用于识别电子设备姿态,应用于横竖屏切换,计步器等应用。
距离传感器380F,用于测量距离。电子设备300可以通过红外或激光测量距离。在一些实施例中,拍摄场景,电子设备300可以利用距离传感器380F测距以实现快速对焦。
接近光传感器380G可以包括例如发光二极管(LED)和光检测器,例如光电二极管。发光二极管可以是红外发光二极管。电子设备300通过发光二极管向外发射红外光。电子设备300使用光电二极管检测来自附近物体的红外反射光。当检测到充分的反射光时,可以确定电子设备300附近有物体。当检测到不充分的反射光时,电子设备300可以确定电子设备300附近没有物体。电子设备300可以利用接近光传感器380G检测用户手持电子设备300贴近耳朵通话,以便自动熄灭屏幕达到省电的目的。接近光传感器380G也可用于皮套模式,口袋模式自动解锁与锁屏。
环境光传感器380L用于感知环境光亮度。电子设备300可以根据感知的环境光亮度自适应调节显示屏394亮度。环境光传感器380L也可用于拍照时自动调节白平衡。环境光传感器380L还可以与接近光传感器380G配合,检测电子设备300是否在口袋里,以防误触。
指纹传感器380H用于采集指纹。电子设备300可以利用采集的指纹特性实现指纹解锁,访问应用锁,指纹拍照,指纹接听来电等。
温度传感器380J用于检测温度。在一些实施例中,电子设备300利用温度传感器380J检测的温度,执行温度处理策略。例如,当温度传感器380J上报的温度超过阈值,电子设备300执行降低位于温度传感器380J附近的处理器的性能,以便降低功耗实施热保护。在另一些实施例中,当温度低于另一阈值时,电子设备300对电池342加热,以避免低温导致电子设备300异常关机。在其他一些实施例中,当温度低于又一阈值时,电子设备300对电池342的输出电压执行升压,以避免低温导致的异常关机。
触摸传感器380K,也称“触控面板”。触摸传感器380K可以设置于显示屏394,由触摸传感器380K与显示屏394组成触摸屏,也称“触控屏”。触摸传感器380K用于检测作用于其上或附近的触摸操作。触摸传感器可以将检测到的触摸操作传递给应用处理器,以确定触摸事件类型。可以通过显示屏394提供与触摸操作相关的视觉输出。在另一些实施例中,触摸传感器380K也可以设置于电子设备300的表面,与显示屏394所处的位置不同。
骨传导传感器380M可以获取振动信号。在一些实施例中,骨传导传感器380M可以获取人体声部振动骨块的振动信号。骨传导传感器380M也可以接触人体脉搏,接收血压跳动信号。在一些实施例中,骨传导传感器380M也可以设置于耳机中,结合成骨传导耳机。音频模块370可以基于所述骨传导传感器380M获取的声部振动骨块的振动信号,解析出语音信号,实现语音功能。应用处理器可以基于所述骨传导传感器380M获取的血压跳动信号解析心率信息,实现心率检测功能。
按键390包括开机键,音量键等。按键390可以是机械按键。也可以是触摸式按键。电子设备300可以接收按键输入,产生与电子设备300的用户设置以及功能控制有关的键信号输入。
马达391可以产生振动提示。马达391可以用于来电振动提示,也可以用于触摸振动反馈。例如,作用于不同应用(例如拍照,音频播放等)的触摸操作,可以对应不同的振动反馈效果。作用于显示屏394不同区域的触摸操作,马达391也可对应不同的振动反馈效果。不同的应用场景(例如:时间提醒,接收信息,闹钟,游戏等)也可以对应不同的振动反馈效果。触摸振动反馈效果还可以支持自定义。
指示器392可以是指示灯,可以用于指示充电状态,电量变化,也可以用于指示消息,未接来 电,通知等。
SIM卡接口395用于连接SIM卡。SIM卡可以通过插入SIM卡接口395,或从SIM卡接口395拔出,实现和电子设备300的接触和分离。电子设备300可以支持1个或N个SIM卡接口,N为大于1的自然数。SIM卡接口395可以支持Nano SIM卡,Micro SIM卡,SIM卡等。同一个SIM卡接口395可以同时插入多张卡。所述多张卡的类型可以相同,也可以不同。SIM卡接口395也可以兼容不同类型的SIM卡。SIM卡接口395也可以兼容外部存储卡。电子设备300通过SIM卡和网络交互,实现通话以及数据通信等功能。在一些实施例中,电子设备300采用eSIM,即:嵌入式SIM卡。eSIM卡可以嵌在电子设备300中,不能和电子设备300分离。
Part 2: the overall flow and key technologies of the data transfer method disclosed in the embodiments of the present application are introduced as follows.
The data transfer method in the embodiments of the present application can be divided into a trigger stage, a source-device determination stage, a target-device determination stage, and a data transfer stage, described in turn below.
Trigger stage: all devices in the device group detect the user's trigger instruction; different devices may have different trigger conditions. When a trigger instruction meets a device's trigger condition, that device turns on its own camera module and sends an instruction to the other devices; on receiving the instruction, the other devices also turn on their camera modules.
Specifically, taking a mobile phone as the trigger device as an example, as shown in FIG. 2A, the phone's trigger condition may be that within a preset time, e.g., 2 seconds, the body changes from a horizontal orientation to a vertical orientation facing the user. When the user turns the phone from horizontal to vertical within 2 seconds with the screen facing them, the phone can determine that it has received a trigger instruction meeting its own trigger condition, turn on its own camera, and send the other devices an instruction to turn on their cameras.
In a possible embodiment, as shown in FIG. 2B, the phone's trigger condition may be shaking of the body; after the user picks up the phone and shakes it, the phone can determine that it has received a trigger instruction meeting its own trigger condition, turn on its own camera, and send the other devices an instruction to turn on their cameras.
It can be understood that the trigger device in the trigger stage may be any device in the device group, and different devices may have different trigger conditions; the above examples are merely illustrative and do not limit the present application.
Through the trigger-stage flow, devices avoid keeping their camera modules on all the time; the camera module is turned on only when the user has a data transfer need, saving device power.
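A minimal sketch of the FIG. 2A trigger condition (horizontal to vertical within about 2 seconds), assuming timestamped pitch readings in degrees from an orientation sensor; the function name and thresholds are illustrative, not taken from the document.

```python
def trigger_fired(samples, window_s=2.0, flat_deg=20.0, upright_deg=70.0):
    """Decide whether the 'raise the phone' trigger condition is met.

    samples: list of (timestamp_s, pitch_deg) readings, oldest first.
    Fires when the device goes from roughly horizontal (pitch <= flat_deg)
    to roughly vertical (pitch >= upright_deg) within window_s seconds.
    """
    for i, (t0, p0) in enumerate(samples):
        if p0 > flat_deg:          # not lying flat at this sample, skip
            continue
        for t1, p1 in samples[i + 1:]:
            if t1 - t0 > window_s:  # too slow, outside the preset time
                break
            if p1 >= upright_deg:   # reached vertical in time
                return True
    return False
```

In a real device, face detection from the front camera would additionally confirm the screen faces the user, which this sketch omits.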
Source-device determination stage: all devices in the device group start collecting images within a preset period. After collection, two processing methods are possible: first, each device sends the image data it collected to the identification device; second, each device performs recognition processing on its own collected image data to obtain its own first gesture confidence, and then all first gesture confidences are sent to the identification device for ranking. The identification device may be any device in the device group, such as the device with the strongest computing power; the identification device can determine that the device corresponding to the highest first gesture confidence is the source device.
The first gesture may be a palm-grasping action, as shown in FIG. 2C. During recognition, to reduce computation, the first gesture confidence can be split into the confidence of recognizing a palm and, after the palm is recognized, the confidence of recognizing a fist.
Under the first processing method, the identification device may use a neural network model to recognize the image data collected by each device and determine that the device whose image data is most likely to contain the first gesture is the source device, which is not elaborated here.
Under the second processing method, for ease of understanding, taking the processing flow of any one device as an example, as shown in FIG. 2D, after entering the source-device determination stage:
S1: detect whether a hand image appears within the first preset period; if a hand image is detected, go to S3; otherwise, go to S2.
S2: repeatedly detect in each clock cycle whether a hand image appears; if a hand image is detected before the first preset period ends, go to S3; if no hand image is detected by the end of the first preset period, set the first gesture confidence to 0.
S3: determine the confidence that the hand image is a palm gesture; enter the second preset period.
S4: detect whether a hand image appears within the second preset period; if a hand image is detected, go to S6; otherwise, go to S5.
S5: repeatedly detect in each clock cycle whether a hand image appears; if a hand image is detected before the second preset period ends, go to S6; if no hand image is detected by the end of the second preset period, set the first gesture confidence to 0.
S6: determine the confidence that the hand image is a fist gesture.
S7: send the palm-gesture confidence and the fist-gesture confidence to the identification device for ranking.
The identification device can then select the device corresponding to the image with the highest combined confidence as the source device.
The confidence can be determined from the angle of the gesture and the proportion of the frame the gesture occupies: for the true source device, the recognized gesture generally has little angular offset and usually occupies the largest share of the frame, so the first gesture confidence can be determined according to these rules.
It can be seen that the second method reduces the computation load of the identification device and improves recognition efficiency.
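The two-phase confidence computation of S1-S7 (a palm window followed by a fist window, with a confidence of 0 if either phase sees no hand) might be sketched as follows; `detect` stands in for the unspecified on-device recognizer, and averaging the two phase confidences is only one possible way to combine them.

```python
def first_gesture_confidence(frames, detect, period=30):
    """Two-phase confidence for the 'grab' gesture (palm, then fist).

    frames: per-tick camera frames covering two preset periods.
    detect(frame, label) -> confidence in [0, 1] that `label`
    ("palm" or "fist") appears in the frame; a stand-in for the
    recognizer, which the document does not specify.
    """
    def best(window, label):
        # Repeated per-tick detection within a preset period (S1/S2, S4/S5).
        return max((detect(f, label) for f in window), default=0.0)

    palm = best(frames[:period], "palm")        # first preset period (S3)
    if palm == 0.0:
        return 0.0                              # no hand in period one
    fist = best(frames[period:2 * period], "fist")  # second period (S6)
    if fist == 0.0:
        return 0.0                              # no hand in period two
    return (palm + fist) / 2.0                  # one way to combine phases
```

Each device would run this locally and send only the resulting score to the identification device (S7), which is what keeps the identification device's workload low.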
It should be noted that after the source device is determined, the source device can generate prompt information that informs the user this device is about to perform a data transfer. The prompt information can be presented as text, image, video, voice, or any combination thereof, which is not specifically limited here. By generating the prompt information, the user knows the state of the source device, making it convenient for the user to designate the target device with a gesture.
In a possible embodiment, different prompt information can be generated depending on the source and target devices. For example, for a phone-to-TV transfer, a special-effect interface can be generated on the phone, including but not limited to scaling the display interface, adding text annotations, voice announcements, etc., which is not specifically limited here.
In a possible embodiment, different prompt information can be generated depending on the type of data the source device currently needs to transfer; the data types may include screen-casting applications, display applications, interactive applications, etc., which is not specifically limited here.
Also, the source device sends the other devices an instruction to prepare to receive the second gesture. Specifically, the source device can obtain the information of all devices in the device group through the device management system and then send the devices other than itself an instruction to prepare to receive the second gesture; this instruction causes the other devices to extend the on-time of their camera modules, i.e., to enter the third and fourth preset periods.
Target-device determination stage: the devices in the device group other than the source device start collecting images within the third and fourth preset periods. After collection, two processing methods are possible: first, each device sends the image data it collected to the identification device; second, each device performs recognition processing on its own collected image data to obtain its own second gesture confidence, and then all second gesture confidences are sent to the identification device for ranking. The identification device may be any device in the device group, such as the device with the strongest computing power; the identification device can determine that the device corresponding to the highest second gesture confidence is the target device.
The second gesture may be a palm-opening action, as shown in FIG. 2E. During recognition, to reduce computation, the second gesture confidence can be split into the confidence of recognizing a fist and, after the fist is recognized, the confidence of recognizing a palm.
Under the first processing method, the identification device may use a neural network model to recognize the image data collected by each device and determine that the device whose image data is most likely to contain the second gesture is the target device, which is not elaborated here.
Under the second processing method, for ease of understanding, taking the processing flow of any one device as an example, as shown in FIG. 2F, after entering the target-device determination stage:
S8: detect whether a hand image appears within the third preset period; if a hand image is detected, go to S10; otherwise, go to S9.
S9: repeatedly detect in each clock cycle whether a hand image appears; if a hand image is detected before the third preset period ends, go to S10; if no hand image is detected by the end of the third preset period, set the second gesture confidence to 0.
S10: determine the confidence that the hand image is a fist gesture; enter the fourth preset period.
S11: detect whether a hand image appears within the fourth preset period; if a hand image is detected, go to S13; otherwise, go to S12.
S12: repeatedly detect in each clock cycle whether a hand image appears; if a hand image is detected before the fourth preset period ends, go to S13; if no hand image is detected by the end of the fourth preset period, set the second gesture confidence to 0.
S13: determine the confidence that the hand image is a palm gesture.
S14: send the palm-gesture confidence and the fist-gesture confidence to the identification device for ranking.
The identification device can then select the device corresponding to the image with the highest combined confidence as the target device.
Similarly, the confidence can be determined from the angle of the gesture and the proportion of the frame the gesture occupies: for the device the gesture is actually aimed at, the recognized gesture generally has little angular offset and usually occupies the largest share of the frame, so the second gesture confidence can be determined according to these rules.
It can be seen that determining the source and target devices through gestures requires no additional hardware on the devices and allows data transfer from a distance, greatly improving the user experience.
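The confidence rule stated above (smaller gesture offset angle and larger share of the frame give higher confidence) and the identification device's ranking step could look roughly like this; the exact weighting is an assumption, since the document only names the two criteria.

```python
def score_gesture(angle_deg, area_fraction):
    """Heuristic gesture confidence: a head-on gesture (small offset
    angle) that fills more of the frame scores higher. The linear
    weighting is illustrative; the document only gives the criteria."""
    angle_term = max(0.0, 1.0 - abs(angle_deg) / 90.0)
    return angle_term * area_fraction


def pick_device(confidences):
    """confidences: {device_id: confidence}. Returns the device with
    the highest reported confidence, as the identification device does
    when ranking the scores it receives."""
    return max(confidences, key=confidences.get)
```

With these two pieces, each device would report `score_gesture(...)` for its own frames, and the identification device would call `pick_device` once per stage (source, then target).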
Data transfer stage: after both the source device and the target device are determined, the camera modules of all devices can be turned off and the specific data transfer flow begins.
In a possible embodiment, the source device may first cancel the prompt information and then execute different data transfer strategies according to the type of data currently running. For example, if the current data type is a screen-casting or display application, the source device can capture the currently displayed interface and generate an interface-transfer effect to move the interface to the target device for display; after the transfer, the source device may either keep displaying or stop displaying. If the current data type is an interactive application and the target device has an application compatible with the one currently running on the source device, the source device can generate an interface-transfer effect to move the interface to the target device, the target device displays the interface synchronously, and after the transfer the source device closes the application.
The data transfer effects in the embodiments of the present application are described with reference to FIGS. 2G and 2H. As shown in FIG. 2G, the source device can generate an effect of the to-be-transferred interface moving toward the target device, and the target device synchronously generates an effect of the interface entering the target device. As shown in FIG. 2H, the source device can gradually shrink the to-be-transferred interface until it disappears, while the target device synchronously enlarges it until it is displayed normally.
It can be understood that there can be many kinds of data transfer effects; the above examples are given for ease of understanding and do not limit the data transfer manner of the present application.
Through the above method, the source device and the target device can be determined automatically according to the user's gestures, realizing long-distance data transfer and greatly improving the user experience.
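The per-application-type transfer strategies above can be sketched as a small dispatch function; the device methods (`capture_screen`, `supports`, `open`, etc.) are hypothetical names used for illustration, not a real API.

```python
def transfer(source, target, app_type):
    """Transfer policy sketch. Screen-casting / display applications:
    snapshot the current interface and show it on the target (the source
    may keep displaying). Interactive applications: hand off only if the
    target has a compatible app, then close the app on the source."""
    if app_type in ("cast", "display"):
        frame = source.capture_screen()
        target.show(frame)
        return "mirrored"
    if app_type == "interactive":
        if not target.supports(source.current_app()):
            return "unsupported"
        target.open(source.current_app())
        source.close_current_app()
        return "moved"
    raise ValueError(f"unknown app type: {app_type}")
```

The return value here is just a status label for the caller; in the described system the same branch would also drive the matching transfer special effect on both devices.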
Part 3: the scope of claimed protection of the data transfer method in the embodiments of the present application is as follows.
Referring to FIG. 3, FIG. 3 is a schematic flowchart of a data transfer method provided by an embodiment of the present application, which specifically includes the following steps:
Step 301: in response to a trigger instruction, control the camera modules of N devices to collect first image data.
Here, N is a natural number greater than or equal to 2. The N devices may be devices in a home sharing the same user account. The first image data may be an image or a video composed of a set of images.
Step 302: determine a source device among the N devices based on the first image data.
Step 303: control the camera modules of the N-1 devices other than the source device to collect second image data.
Here, the second image data may be an image or a video composed of a set of images.
Step 304: determine a target device among the N-1 devices based on the second image data.
Step 305: control the source device to transfer target content to the target device.
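Steps 301-305 can be summarized in one orchestration sketch, with `capture` and `rank` standing in for the camera and gesture-recognition stages that the method leaves unspecified:

```python
def data_transfer(devices, capture, rank):
    """End-to-end flow of steps 301-305 (a sketch, not a real API).

    devices: the N device identifiers.
    capture(device, phase): confidence/data collected by that device
    in round 1 or round 2; rank(results) -> the chosen device.
    """
    first = {d: capture(d, phase=1) for d in devices}   # step 301
    source = rank(first)                                # step 302
    rest = [d for d in devices if d != source]
    second = {d: capture(d, phase=2) for d in rest}     # step 303
    target = rank(second)                               # step 304
    return source, target  # step 305 then transfers source -> target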
In a possible embodiment, determining the source device among the N devices based on the first image data includes:
performing first recognition processing on the first image data collected by the N devices to obtain the first gesture confidences of the first image data;
determining that the device corresponding to the highest first gesture confidence is the source device.
In a possible embodiment, determining the source device among the N devices based on the first image data includes:
receiving the first gesture confidences generated by the N devices according to the first image data;
ranking the first gesture confidences, and determining that the device corresponding to the highest first gesture confidence is the source device.
In a possible embodiment, determining the target device among the N-1 devices based on the second image set includes:
performing second recognition processing on the second image set collected by the N-1 devices to obtain the second gesture confidences of the second image set;
determining that the device corresponding to the highest second gesture confidence is the target device.
In a possible embodiment, determining the target device among the N-1 devices based on the second image set includes:
receiving the second gesture confidences generated by the N-1 devices according to the second image data;
ranking the second gesture confidences, and determining that the device corresponding to the highest second gesture confidence is the target device.
In a possible embodiment, controlling the source device to transfer the target content to the target device includes:
obtaining the application type of the target content;
transferring the target content to the target device for display according to the preset transfer rule corresponding to the application type.
Referring to FIG. 4, FIG. 4 is a schematic flowchart of another data transfer method provided by an embodiment of the present application, which specifically includes the following steps:
Step 401: a source device outputs target content.
Here, the source device is the sending device of the target content determined by an identification device according to first data generated by N devices, where N is a natural number greater than or equal to 2; the N devices include the identification device, the source device, and a target device; any two of the N devices can communicate with each other; and the identification device is any one of the N devices. The target content may be streaming media composed of any one or any combination of text, images, video, and audio. The first data may be first image data or a first gesture confidence.
Step 402: the source device sends the target content to the target device.
Here, the target device is the receiving device of the target content determined by the identification device according to second data generated by the N devices. The second data may be second image data or a second gesture confidence.
所述识别设备为所述源设备的情况下;存在如下可能的实施例:
在一个可能的实施例中,所述源设备输出目标内容之前,所述方法还包括:
所述第一数据包括第一图像数据;所述源设备通过如下步骤确定自身为所述目标内容的发送设备:
所述源设备通过摄像模块采集第一图像数据,以及,接收来自所述源设备之外的N-1个设备的第一图像数据;
所述源设备对所述第一图像数据进行第一识别处理,确定所述第一图像数据的第一手势置信度;
所述源设备筛选出置信度最高的第一手势置信度;
所述源设备确定自身为所述置信度最高的第一手势置信度对应的设备。
在一个可能的实施例中,所述源设备输出目标内容之前,所述方法还包括:
所述第一数据包括第一手势置信度;所述源设备通过如下步骤确定自身为所述目标内容的发送设备:
所述源设备对自身采集的第一图像数据进行第一处理,得到第一手势置信度,以及,接收来自所述源设备之外的N-1个设备的第一手势置信度;
所述源设备对所述第一手势置信度进行排序处理;
所述源设备确定自身为置信度最高的第一手势置信度对应的设备。
在一个可能的实施例中,所述源设备向所述目标设备发送所述目标内容之前,所述方法还包括:
所述第二数据包括第二图像数据;所述源设备通过如下步骤确定所述目标设备为所述目标内容的接收设备:
所述源设备接收来自所述源设备之外的N-1个设备的第二图像数据;
所述源设备对所述第二图像数据进行第二识别处理,确定所述第二图像数据的第二手势置信度;
所述源设备筛选出置信度最高的第二手势置信度;
所述源设备确定所述置信度最高的第二手势置信度对应的设备为所述目标设备。
在一个可能的实施例中,所述源设备向所述目标设备发送所述目标内容之前,所述方法还包括:
所述第二数据包括第二手势置信度;所述源设备通过如下步骤确定所述目标设备为所述目标内容的接收设备:
所述源设备接收来自所述源设备之外的N-1个设备的第二手势置信度;
所述源设备对所述第二手势置信度进行排序处理;
所述源设备确定置信度最高的第二手势置信度对应的设备为所述目标设备。
所述识别设备为所述目标设备的情况下;存在如下可能的实施例:
在一种可能的实施例中,所述源设备输出目标内容之前,所述方法还包括:
所述第一数据包括第一图像数据;所述目标设备通过如下步骤确定所述源设备为所述目标内容的发送设备:
所述目标设备通过摄像模块采集第一图像数据,以及,接收来自所述目标设备之外的N-1个设备的第一图像数据;
所述目标设备对所述第一图像数据进行第一识别处理,确定所述第一图像数据的第一手势置信度;
所述目标设备筛选出置信度最高的第一手势置信度;
所述目标设备确定所述置信度最高的第一手势置信度对应的设备为所述源设备。
在一个可能的实施例中,所述源设备输出目标内容之前,所述方法还包括:
所述第一数据包括第一手势置信度;所述目标设备通过如下步骤确定所述源设备为所述目标内容的发送设备:
所述目标设备对自身采集的第一图像数据进行第一处理,得到第一手势置信度,以及,接收来自所述目标设备之外的N-1个设备的第一手势置信度;
所述目标设备对所述第一手势置信度进行排序处理;
所述目标设备确定置信度最高的第一手势置信度对应的设备为所述源设备。
在一个可能的实施例中,所述源设备向所述目标设备发送所述目标内容之前,所述方法还包括:
所述第二数据包括第二图像数据;所述目标设备通过如下步骤确定自身为所述目标内容的接收设备:
所述目标设备通过摄像模块采集第二图像数据,以及,接收来自所述源设备和自身之外的N-2个设备的第二图像数据;
所述目标设备对所述第二图像数据进行第二识别处理,确定所述第二图像数据的第二手势置信度;
所述目标设备筛选出置信度最高的第二手势置信度;
所述目标设备确定自身为所述置信度最高的所述第一手势置信度对应的设备。
在一个可能的实施例中,所述源设备向所述目标设备发送所述目标内容之前,所述方法还包括:
所述第二数据包括第二手势置信度;所述目标设备通过如下步骤确定自身为所述目标内容的接收设备:
所述目标设备对自身采集的第二图像数据进行第二处理,得到第二手势置信度,以及,接收来自所述源设备和自身之外的N-2个设备的第二手势置信度;
所述目标设备对所述第二手势置信度进行排序处理;
所述目标设备确定自身为置信度最高的第二手势置信度对应的设备。
所述识别设备为第一设备,所述第一设备为所述N个设备中所述源设备和所述目标设备之外的任意一个设备的情况下,存在如下可能的实施例:
在一个可能的实施例中,所述源设备输出目标内容之前,所述方法还包括:
所述第一数据包括第一图像数据;所述第一设备通过如下步骤确定所述源设备为所述目标内容的发送设备:
所述第一设备通过摄像模块采集第一图像数据,以及,接收来自所述第一设备之外的N-1个设备的第一图像数据;
所述第一设备对所述第一图像数据进行第一识别处理,确定所述第一图像数据的第一手势置信 度;
所述第一设备筛选出置信度最高的第一手势置信度;
所述第一设备确定所述置信度最高的第一手势置信度对应的设备为所述源设备。
在一个可能的实施例中,所述源设备输出目标内容之前,所述方法还包括:
所述第一数据包括第一手势置信度;所述第一设备通过如下步骤确定所述源设备为所述目标内容的发送设备:
所述第一设备对自身采集的第一图像数据进行第一处理,得到第一手势置信度,以及,接收来自所述第一设备之外的N-1个设备的第一手势置信度;
所述第一设备对所述第一手势置信度进行排序处理;
所述第一设备确定置信度最高的第一手势置信度对应的设备为所述源设备。
在一个可能的实施例中,所述源设备向所述目标设备发送所述目标内容之前,所述方法还包括:
所述第二数据包括第二图像数据;所述第一设备通过如下步骤确定所述目标设备为所述目标内容的接收设备:
所述第一设备通过摄像模块采集第二图像数据,以及,接收来自所述源设备和自身之外的N-2个设备的第二图像数据;
所述第一设备对所述第二图像数据进行第二识别处理,确定所述第二图像数据的第二手势置信度;
所述第一设备筛选出置信度最高的第二手势置信度;
所述第一设备确定所述置信度最高的第二手势置信度对应的设备为所述目标设备。
在一个可能的实施例中,所述源设备向所述目标设备发送所述目标内容之前,所述方法还包括:
所述第二数据包括第二手势置信度;所述第一设备通过如下步骤确定所述目标设备为所述目标内容的接收设备:
所述第一设备对自身采集的第二图像数据进行第二处理,得到第二手势置信度,以及,接收来自所述源设备和自身之外的N-2个设备的第二手势置信度;
所述第一设备对所述第二手势置信度进行排序处理;
所述第一设备确定置信度最高的第二手势置信度对应的设备为所述目标设备。
在一个可能的实施例中,所述源设备输出目标内容之前,所述方法还包括:
触发设备根据触发指令开启自身的摄像模块,以及,向至少一个设备发送第一摄像指令,所述第一摄像指令用于使设备开启自身的摄像模块,所述触发设备为所述N个设备中的任意一个。
进一步的,所述触发设备根据触发指令开启自身的摄像模块,以及,向至少一个设备发送第一摄像指令,包括:
所述触发设备在接收到符合预设触发条件的触发指令时,开启自身的摄像模块,以及,获取自身之外的N-1个设备当前的工作状态,所述工作状态包括运行状态和非运行状态;
所述触发设备从所述N-1个设备中筛选出处于所述运行状态的设备;
所述触发设备向处于所述运行状态的设备发送第一摄像指令。
在一个可能的实施例中,所述源设备向所述目标设备发送所述目标内容之前,所述方法还包括:
所述源设备向自身之外的N-1个设备发送第二摄像指令,所述第二摄像指令用于使所述N-1个设备维持摄像模块的开启状态。
可见,通过上述方法,可以对所有设备采集到的图像进行识别来确定源设备和目标设备,无需搭载额外的硬件,就可以实现符合用户需求的设备之间的流媒体信息的转移,大大提升了用户的交互体验。
上述主要从方法侧执行过程的角度对本申请实施例的方案进行了介绍。可以理解的是,电子设备为了实现上述功能,其包含了执行各个功能相应的硬件结构和/或软件模块。本领域技术人员应该 很容易意识到,结合本文中所提供的实施例描述的各示例的单元及算法步骤,本申请能够以硬件或硬件和计算机软件的结合形式来实现。某个功能究竟以硬件还是计算机软件驱动硬件的方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用使用不同方法来实现所描述的功能,但是这种实现不应认为超出本申请的范围。
本申请实施例可以根据上述方法示例对电子设备进行功能单元的划分,例如,可以对应各个功能划分各个功能单元,也可以将两个或两个以上的功能集成在一个处理单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。需要说明的是,本申请实施例中对单元的划分是示意性的,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式。
在采用对应各个功能划分各个功能模块的情况下,下面结合图5对本申请实施例中的一种数据转移装置进行详细说明,图5为本申请实施例提供的一种数据转移装置500的功能单元组成框图,所述装置包括:
第一采集单元510,用于响应于触发指令,控制N个设备的摄像模块采集第一图像数据,N为大于或等于2的自然数;
源设备确定单元520,用于基于所述第一图像数据确定所述N个设备中的源设备;
第二采集单元530,用于控制所述源设备之外的N-1个设备的摄像模块采集第二图像数据;
目标设备确定单元540,用于基于所述第二图像数据确定所述N-1个设备中的目标设备;
转移控制单元550,用于控制所述源设备将目标内容转移至所述目标设备。
在采用对应各个功能划分各个功能模块的情况下,下面结合图6对本申请实施例中的另一种数据转移装置进行详细说明,图6为本申请实施例提供的另一种数据转移装置600的功能单元组成框图,所述装置包括:
内容输出单元610,用于使源设备输出目标内容,所述源设备为识别设备根据N个设备所生成的第一数据确定的所述目标内容的发送设备,N为大于或等于2的自然数,所述N个设备包括所述识别设备、所述源设备和目标设备,且所述N个设备中任意两个设备都可相互通信,所述识别设备为所述N个设备中的任意一个;
内容转移单元620,用于使所述源设备向所述目标设备发送所述目标内容,所述目标设备为所述识别设备根据所述N个设备所生成的第二数据确定的所述目标内容的接收设备。
其中,上述方法实施例涉及的各步骤的所有相关内容均可以援引到对应功能模块的功能描述,在此不再赘述。
可以理解的是,由于方法实施例与装置实施例为相同技术构思的不同呈现形式,因此,本申请中方法实施例部分的内容应同步适配于装置实施例部分,此处不再赘述。上述数据转移装置500和数据转移装置600均可执行上述实施例包括的全部的数据转移方法。
本申请实施例还提供一种计算机存储介质,其中,该计算机存储介质存储用于电子数据交换的计算机程序,该计算机程序使得计算机执行如上述方法实施例中记载的任一方法的部分或全部步骤,上述计算机包括电子设备。
本申请实施例还提供一种计算机程序产品,上述计算机程序产品包括存储了计算机程序的非瞬时性计算机可读存储介质,上述计算机程序可操作来使计算机执行如上述方法实施例中记载的任一方法的部分或全部步骤。该计算机程序产品可以为一个软件安装包,上述计算机包括鱼群检测设备。
需要说明的是,对于前述的各方法实施例,为了简单描述,故将其都表述为一系列的动作组合,但是本领域技术人员应该知悉,本申请并不受所描述的动作顺序的限制,因为依据本申请,某些步骤可以采用其他顺序或者同时进行。其次,本领域技术人员也应该知悉,说明书中所描述的实施例均属于优选实施例,所涉及的动作和模块并不一定是本申请所必须的。
在上述实施例中,对各个实施例的描述都各有侧重,某个实施例中没有详述的部分,可以参见其他实施例的相关描述。
在本申请所提供的几个实施例中,应该理解到,所揭露的装置,可通过其它的方式实现。例如, 以上所描述的装置实施例仅仅是示意性的,例如上述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性或其它的形式。
上述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本申请各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。
上述集成的单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储器中。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的全部或部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储器中,包括若干指令用以使得一台计算机设备(可为个人计算机、服务器或者网络设备等)执行本申请各个实施例上述方法的全部或部分步骤。而前述的存储器包括:U盘、只读存储器(ROM,Read-Only Memory)、随机存取存储器(RAM,Random Access Memory)、移动硬盘、磁碟或者光盘等各种可以存储程序代码的介质。
本领域普通技术人员可以理解上述实施例的各种方法中的全部或部分步骤是可以通过程序来指令相关的硬件来完成,该程序可以存储于一计算机可读存储器中,存储器可以包括:闪存盘、只读存储器(英文:Read-Only Memory,简称:ROM)、随机存取器(英文:Random Access Memory,简称:RAM)、磁盘或光盘等。
以上对本申请实施例进行了详细介绍,本文中应用了具体个例对本申请的原理及实施方式进行了阐述,以上实施例的说明只是用于帮助理解本申请的方法及其核心思想;同时,对于本领域的一般技术人员,依据本申请的思想,在具体实施方式及应用范围上均会有改变之处,综上所述,本说明书内容不应理解为对本申请的限制。

Claims (26)

  1. 一种数据转移方法,其特征在于,所述方法包括:
    响应于触发指令,控制N个设备的摄像模块采集第一图像数据,N为大于或等于2的自然数;
    基于所述第一图像数据确定所述N个设备中的源设备;
    控制所述源设备之外的N-1个设备的摄像模块采集第二图像数据;
    基于所述第二图像数据确定所述N-1个设备中的目标设备;
    控制所述源设备将目标内容转移至所述目标设备。
  2. 根据权利要求1所述的方法,其特征在于,所述基于所述第一图像数据确定所述N个设备中的源设备,包括:
    对所述N个设备采集到的第一图像数据进行第一识别处理,得到所述第一图像数据的第一手势置信度;
    确定置信度最高的第一手势置信度对应的设备为所述源设备。
  3. 根据权利要求1所述的方法,其特征在于,所述基于所述第一图像数据确定所述N个设备中的源设备,包括:
    接收所述N个设备根据所述第一图像数据生成的第一手势置信度;
    对所述第一手势置信度进行排序处理,确定置信度最高的第一手势置信度对应的设备为所述源设备。
  4. 根据权利要求1所述的方法,其特征在于,所述基于所述第二图像集确定所述N-1个设备中的目标设备,包括:
    对所述N-1个设备采集到的第二图像集进行第二识别处理,得到所述第二图像集的第二手势置信度;
    确定置信度最高的第二手势置信度对应的设备为所述目标设备。
  5. 根据权利要求1所述的方法,其特征在于,所述基于所述第二图像集确定所述N-1个设备中的目标设备,包括:
    接收所述N-1个设备根据所述第二图像数据生成的第二手势置信度;
    对所述第二手势置信度进行排序处理,确定置信度最高的第二手势置信度对应的设备为所述目标设备。
  6. 根据权利要求1所述的方法,其特征在于,所述控制所述源设备将目标内容转移至所述目标设备,包括:
    获取所述目标内容的应用类型;
    根据所述应用类型对应的预设转移规则将所述目标内容转移至所述目标设备上进行显示。
  7. 一种数据转移方法,其特征在于,所述方法包括:
    源设备输出目标内容,所述源设备为识别设备根据N个设备所生成的第一数据确定的所述目标内容的发送设备,N为大于或等于2的自然数,所述N个设备包括所述识别设备、所述源设备和目标设备,且所述N个设备中任意两个设备都可相互通信,所述识别设备为所述N个设备中的任意一个;
    所述源设备向所述目标设备发送所述目标内容,所述目标设备为所述识别设备根据所述N个设备所生成的第二数据确定的所述目标内容的接收设备。
  8. 根据权利要求7所述的方法,其特征在于,所述识别设备为所述源设备;所述源设备输出目标内容之前,所述方法还包括:
    所述第一数据包括第一图像数据;所述源设备通过如下步骤确定自身为所述目标内容的发送设备:
    所述源设备通过摄像模块采集第一图像数据,以及,接收来自所述源设备之外的N-1个设备的第一图像数据;
    所述源设备对所述第一图像数据进行第一识别处理,确定所述第一图像数据的第一手势置信度;
    所述源设备筛选出置信度最高的第一手势置信度;
    所述源设备确定自身为所述置信度最高的第一手势置信度对应的设备。
  9. 根据权利要求7所述的方法,其特征在于,所述识别设备为所述源设备;所述源设备输出目标内容之前,所述方法还包括:
    所述第一数据包括第一手势置信度;所述源设备通过如下步骤确定自身为所述目标内容的发送设备:
    所述源设备对自身采集的第一图像数据进行第一处理,得到第一手势置信度,以及,接收来自所述源设备之外的N-1个设备的第一手势置信度;
    所述源设备对所述第一手势置信度进行排序处理;
    所述源设备确定自身为置信度最高的第一手势置信度对应的设备。
  10. 根据权利要求7所述的方法,其特征在于,所述识别设备为所述源设备;所述源设备向所述目标设备发送所述目标内容之前,所述方法还包括:
    所述第二数据包括第二图像数据;所述源设备通过如下步骤确定所述目标设备为所述目标内容的接收设备:
    所述源设备接收来自所述源设备之外的N-1个设备的第二图像数据;
    所述源设备对所述第二图像数据进行第二识别处理,确定所述第二图像数据的第二手势置信度;
    所述源设备筛选出置信度最高的第二手势置信度;
    所述源设备确定所述置信度最高的第二手势置信度对应的设备为所述目标设备。
  11. 根据权利要求7所述的方法,其特征在于,所述识别设备为所述源设备;所述源设备向所述目标设备发送所述目标内容之前,所述方法还包括:
    所述第二数据包括第二手势置信度;所述源设备通过如下步骤确定所述目标设备为所述目标内容的接收设备:
    所述源设备接收来自所述源设备之外的N-1个设备的第二手势置信度;
    所述源设备对所述第二手势置信度进行排序处理;
    所述源设备确定置信度最高的第二手势置信度对应的设备为所述目标设备。
  12. 根据权利要求7所述的方法,其特征在于,所述识别设备为所述目标设备;所述源设备输出目标内容之前,所述方法还包括:
    所述第一数据包括第一图像数据;所述目标设备通过如下步骤确定所述源设备为所述目标内容的发送设备:
    所述目标设备通过摄像模块采集第一图像数据,以及,接收来自所述目标设备之外的N-1个设备的第一图像数据;
    所述目标设备对所述第一图像数据进行第一识别处理,确定所述第一图像数据的第一手势置信度;
    所述目标设备筛选出置信度最高的第一手势置信度;
    所述目标设备确定所述置信度最高的第一手势置信度对应的设备为所述源设备。
  13. 根据权利要求7所述的方法,其特征在于,所述识别设备为所述目标设备;所述源设备输出目标内容之前,所述方法还包括:
    所述第一数据包括第一手势置信度;所述目标设备通过如下步骤确定所述源设备为所述目标内容的发送设备:
    所述目标设备对自身采集的第一图像数据进行第一处理,得到第一手势置信度,以及,接收来自所述目标设备之外的N-1个设备的第一手势置信度;
    所述目标设备对所述第一手势置信度进行排序处理;
    所述目标设备确定置信度最高的第一手势置信度对应的设备为所述源设备。
  14. 根据权利要求7所述的方法,其特征在于,所述识别设备所述目标设备;所述源设备向所述目标设备发送所述目标内容之前,所述方法还包括:
    所述第二数据包括第二图像数据;所述目标设备通过如下步骤确定自身为所述目标内容的接收设备:
    所述目标设备通过摄像模块采集第二图像数据,以及,接收来自所述源设备和自身之外的N-2个设备的第二图像数据;
    所述目标设备对所述第二图像数据进行第二识别处理,确定所述第二图像数据的第二手势置信度;
    所述目标设备筛选出置信度最高的第二手势置信度;
    所述目标设备确定自身为所述置信度最高的所述第一手势置信度对应的设备。
  15. The method according to claim 7, wherein the recognition device is the target device; before the source device sends the target content to the target device, the method further comprises:
    the second data comprises a second gesture confidence; the target device determines that it is the receiving device of the target content through the following steps:
    the target device performs second processing on the second image data collected by itself to obtain a second gesture confidence, and receives second gesture confidences from the N-2 devices other than the source device and itself;
    the target device sorts the second gesture confidences;
    the target device determines that it is the device corresponding to the highest second gesture confidence.
  16. The method according to claim 7, wherein the recognition device is a first device, and the first device is any one of the N devices other than the source device and the target device; before the source device outputs the target content, the method further comprises:
    the first data comprises first image data; the first device determines that the source device is the sending device of the target content through the following steps:
    the first device collects first image data through a camera module, and receives first image data from the N-1 devices other than the first device;
    the first device performs first recognition processing on the first image data to determine a first gesture confidence of the first image data;
    the first device screens out the highest first gesture confidence;
    the first device determines that the device corresponding to the highest first gesture confidence is the source device.
  17. The method according to claim 7, wherein the recognition device is a first device, and the first device is any one of the N devices other than the source device and the target device; before the source device outputs the target content, the method further comprises:
    the first data comprises a first gesture confidence; the first device determines that the source device is the sending device of the target content through the following steps:
    the first device performs first processing on the first image data collected by itself to obtain a first gesture confidence, and receives first gesture confidences from the N-1 devices other than the first device;
    the first device sorts the first gesture confidences;
    the first device determines that the device corresponding to the highest first gesture confidence is the source device.
  18. The method according to claim 7, wherein the recognition device is a first device, and the first device is any one of the N devices other than the source device and the target device; before the source device sends the target content to the target device, the method further comprises:
    the second data comprises second image data; the first device determines that the target device is the receiving device of the target content through the following steps:
    the first device collects second image data through a camera module, and receives second image data from the N-2 devices other than the source device and itself;
    the first device performs second recognition processing on the second image data to determine a second gesture confidence of the second image data;
    the first device screens out the highest second gesture confidence;
    the first device determines that the device corresponding to the highest second gesture confidence is the target device.
  19. The method according to claim 7, wherein the recognition device is a first device, and the first device is any one of the N devices other than the source device and the target device; before the source device sends the target content to the target device, the method further comprises:
    the second data comprises a second gesture confidence; the first device determines that the target device is the receiving device of the target content through the following steps:
    the first device performs second processing on the second image data collected by itself to obtain a second gesture confidence, and receives second gesture confidences from the N-2 devices other than the source device and itself;
    the first device sorts the second gesture confidences;
    the first device determines that the device corresponding to the highest second gesture confidence is the target device.
  20. The method according to claim 7, wherein before the source device sends the target content to the target device, the method further comprises:
    the source device generates prompt information, where the prompt information is used to prompt the user that the target content is about to be transferred.
  21. The method according to claim 7, wherein before the source device outputs the target content, the method further comprises:
    a trigger device turns on its own camera module according to a trigger instruction, and sends a first camera instruction to at least one device, where the first camera instruction is used to cause a device to turn on its own camera module, and the trigger device is any one of the N devices.
  22. The method according to claim 21, wherein the trigger device turning on its own camera module according to the trigger instruction and sending the first camera instruction to at least one device comprises:
    upon receiving a trigger instruction that meets a preset trigger condition, the trigger device turns on its own camera module and obtains the current working states of the N-1 devices other than itself, the working states including a running state and a non-running state;
    the trigger device screens out, from the N-1 devices, the devices in the running state;
    the trigger device sends the first camera instruction to the devices in the running state.
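The screening step in claim 22 amounts to a simple filter over the peers' reported working states. A sketch follows, under the assumption that each peer reports a boolean running flag; `Peer` and `devices_to_instruct` are illustrative names, and the transport used to actually send the first camera instruction is out of scope here.

```python
from dataclasses import dataclass

@dataclass
class Peer:
    """One of the N-1 devices other than the trigger device."""
    device_id: str
    running: bool  # True = running state, False = non-running state

def devices_to_instruct(peers: list[Peer]) -> list[str]:
    """Screen out the peers in the running state (claim 22, step 2);
    only these receive the first camera instruction."""
    return [p.device_id for p in peers if p.running]
```

Skipping non-running devices keeps the instruction fan-out limited to peers that can actually open a camera, which is the point of querying the working states first.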
  23. The method according to claim 7, wherein before the source device sends the target content to the target device, the method further comprises:
    the source device sends a second camera instruction to the N-1 devices other than itself, where the second camera instruction is used to keep the camera modules of the N-1 devices turned on.
  24. A data transfer system, wherein the data transfer system comprises N devices, N being a natural number greater than or equal to 2, any two of the N devices being able to communicate with each other, the N devices comprising a trigger device, a recognition device, a source device and a target device, the trigger device being any one of the N devices, the recognition device being any one of the N devices, the source device being any one of the N devices other than the target device, and the target device being any one of the N devices other than the source device;
    the trigger device is configured to turn on the camera modules of the N devices according to a corresponding trigger instruction;
    the recognition device is configured to determine the source device according to first data generated by the N devices, and to determine the target device according to second data generated by the N devices;
    the source device is configured to transfer target content to the target device.
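Reading claim 24 end to end, the two elections compose: the source is the device with the highest first-gesture confidence, and the target is the remaining device with the highest second-gesture confidence. A sketch of that composition follows; the function and parameter names are a reading of the claim, not the patent's API, and the confidence maps stand in for the first and second data generated by the N devices.

```python
def elect_roles(first_conf: dict[str, float],
                second_conf: dict[str, float]) -> tuple[str, str]:
    """Return (source_id, target_id): source from the first-round
    confidences, target from the second-round confidences of the
    devices other than the elected source."""
    source = max(first_conf, key=first_conf.get)
    candidates = {d: c for d, c in second_conf.items() if d != source}
    target = max(candidates, key=candidates.get)
    return source, target
```

Excluding the elected source from the second round mirrors the claim's constraint that the target device is any of the N devices other than the source device.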
  25. An electronic device, comprising an application processor, a memory, and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the application processor, and the programs comprise instructions for performing the steps of the method according to any one of claims 1-6 or 7-23.
  26. A computer storage medium, wherein the computer storage medium stores a computer program, the computer program comprising program instructions which, when executed by a processor, cause the processor to perform the method according to any one of claims 1-6 or 7-23.
PCT/CN2021/115822 2020-11-16 2021-08-31 Data transfer method and related apparatus WO2022100219A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011283401.4 2020-11-16
CN202011283401.4A CN112272191B (zh) 2020-11-16 2020-11-16 Data transfer method and related apparatus

Publications (1)

Publication Number Publication Date
WO2022100219A1 true WO2022100219A1 (zh) 2022-05-19

Family

ID=74340753

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/115822 WO2022100219A1 (zh) 2020-11-16 2021-08-31 Data transfer method and related apparatus

Country Status (2)

Country Link
CN (1) CN112272191B (zh)
WO (1) WO2022100219A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024002636A1 (en) * 2022-06-28 2024-01-04 International Business Machines Corporation Dynamic resource allocation method for sensor-based neural networks using shared confidence intervals

Families Citing this family (1)

Publication number Priority date Publication date Assignee Title
CN112272191B (zh) * 2020-11-16 2022-07-12 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Data transfer method and related apparatus

Citations (5)

Publication number Priority date Publication date Assignee Title
EP2667567A1 * 2012-05-24 2013-11-27 BlackBerry Limited System and Method for Sharing Data Across Multiple Electronic Devices
CN104123191A * 2014-07-31 2014-10-29 Beijing Zhigu Ruituo Tech Co., Ltd. Task migration control method, apparatus and system
US20160112501A1 * 2012-02-29 2016-04-21 Google Inc. Transferring Device States Between Multiple Devices
CN111459718A * 2020-03-31 2020-07-28 Gree Electric Appliances, Inc. of Zhuhai Multi-terminal system, data backup method therefor, and storage medium
CN112272191A * 2020-11-16 2021-01-26 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Data transfer method and related apparatus

Family Cites Families (22)

Publication number Priority date Publication date Assignee Title
US8059111B2 * 2008-01-21 2011-11-15 Sony Computer Entertainment America Llc Data transfer using hand-held device
US8756532B2 * 2010-01-21 2014-06-17 Cisco Technology, Inc. Using a gesture to transfer an object across multiple multi-touch devices
US8849633B2 * 2010-10-29 2014-09-30 Accuray Incorporated Method and apparatus for selecting a tracking method to use in image guided treatment
US20120198353A1 * 2011-01-28 2012-08-02 Microsoft Corporation Transferring data using a physical gesture
CN102831404B * 2012-08-15 2016-01-13 Shenzhen Institutes of Advanced Technology Gesture detection method and system
CN102866777A * 2012-09-12 2013-01-09 ZTE Corporation Method, playback device and system for transferring playback of digital media content
CN102938825B * 2012-11-12 2016-03-23 Xiaomi Inc. Method and apparatus for taking photos and videos
US9910499B2 * 2013-01-11 2018-03-06 Samsung Electronics Co., Ltd. System and method for detecting three dimensional gestures to initiate and complete the transfer of application data between networked devices
EP2891957B1 * 2014-01-07 2019-03-06 Samsung Electronics Co., Ltd Computing system with command-sense mechanism and method of operation thereof
CN204013873U * 2014-04-15 2014-12-10 Shenzhen Taifeng Network Equipment Co., Ltd. Security monitoring system
CN104008580A * 2014-05-09 2014-08-27 Shenzhen Lingdong Feiyang Technology Co., Ltd. System and method for starting 360-degree panoramic video recording when a vehicle is struck
US10176198B1 * 2016-05-09 2019-01-08 A9.Com, Inc. Techniques for identifying visually similar content
CN106375703A * 2016-09-14 2017-02-01 Beijing Xiaomi Mobile Software Co., Ltd. Video communication method and apparatus
CN106507286A * 2016-11-22 2017-03-15 Beijing Sankuai Online Technology Co., Ltd. File transmission method, source device and non-source device
CN106993161A * 2017-04-12 2017-07-28 Hefei Cailai Technology Co., Ltd. Remote wireless video surveillance system
CN106982330A * 2017-05-15 2017-07-25 Meizu Technology Co., Ltd. Camera control method and apparatus, computer device, and readable storage medium
JP6924901B2 * 2017-10-14 2021-08-25 Huawei Technologies Co., Ltd. Photographing method and electronic apparatus
CN108038452B * 2017-12-15 2020-11-03 Xiamen Reconova Information Technology Co., Ltd. Fast gesture detection and recognition method for home appliances based on local image enhancement
US11017217B2 * 2018-10-09 2021-05-25 Midea Group Co., Ltd. System and method for controlling appliances using motion gestures
CN110319882B * 2019-06-24 2021-04-20 China Road and Bridge Corporation Monitoring system for slope hazards along highways
CN111062313A * 2019-12-13 2020-04-24 Goertek Inc. Image recognition method and apparatus, monitoring system, and storage medium
CN116320782B * 2019-12-18 2024-03-26 Honor Device Co., Ltd. Control method, electronic device, computer-readable storage medium, and chip


Also Published As

Publication number Publication date
CN112272191B (zh) 2022-07-12
CN112272191A (zh) 2021-01-26


Legal Events

Date Code Title Description
NENP Non-entry into the national phase
Ref country code: DE
122 Ep: pct application non-entry in european phase
Ref document number: 21890747
Country of ref document: EP
Kind code of ref document: A1