WO2022017209A1 - Electronic device and distributed system

Electronic device and distributed system

Info

Publication number
WO2022017209A1
Authority
WO
WIPO (PCT)
Prior art keywords
display screen
microcontroller
processor
user
body
Prior art date
Application number
PCT/CN2021/105741
Other languages
French (fr)
Chinese (zh)
Inventor
朱欣
罗育峰
张文铿
Original Assignee
Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd. (华为技术有限公司)
Publication of WO2022017209A1 publication Critical patent/WO2022017209A1/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 4/00 Services specially adapted for wireless communication networks; Facilities therefor
    • H04W 4/30 Services specially adapted for particular environments, situations or purposes
    • H04W 4/38 Services specially adapted for particular environments, situations or purposes for collecting sensor information
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0487 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F 3/0488 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser, using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/14 Digital output to display device; Cooperation and interconnection of the display device with other functional units
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/14 Digital output to display device; Cooperation and interconnection of the display device with other functional units
    • G06F 3/1454 Digital output to display device; Cooperation and interconnection of the display device with other functional units involving copying of the display data of a local workstation or window to a remote workstation or window so that an actual copy of the data is displayed simultaneously on two or more displays, e.g. teledisplay
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 4/00 Services specially adapted for wireless communication networks; Facilities therefor
    • H04W 4/70 Services for machine-to-machine communication [M2M] or machine type communication [MTC]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 4/00 Services specially adapted for wireless communication networks; Facilities therefor
    • H04W 4/80 Services using short range communication, e.g. near-field communication [NFC], radio-frequency identification [RFID] or low energy communication
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 52/00 Power management, e.g. TPC [Transmission Power Control], power saving or power classes
    • H04W 52/02 Power saving arrangements
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 52/00 Power management, e.g. TPC [Transmission Power Control], power saving or power classes
    • H04W 52/02 Power saving arrangements
    • H04W 52/0209 Power saving arrangements in terminal devices
    • H04W 52/0225 Power saving arrangements in terminal devices using monitoring of external events, e.g. the presence of a signal
    • H04W 52/0248 Power saving arrangements in terminal devices using monitoring of external events, e.g. the presence of a signal, dependent on the time of the day, e.g. according to expected transmission activity
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 52/00 Power management, e.g. TPC [Transmission Power Control], power saving or power classes
    • H04W 52/02 Power saving arrangements
    • H04W 52/0209 Power saving arrangements in terminal devices
    • H04W 52/0261 Power saving arrangements in terminal devices managing power supply demand, e.g. depending on battery level
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 76/00 Connection management
    • H04W 76/10 Connection setup
    • H04W 76/14 Direct-mode setup
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 30/00 Reducing energy consumption in communication networks
    • Y02D 30/70 Reducing energy consumption in communication networks in wireless communication networks

Definitions

  • the present application relates to the field of terminal technologies, and in particular, to an electronic device and a distributed system.
  • The intelligent terminal device is usually equipped with a full-featured chip system, so that it has the complete ability to run applications (APPs), display images, and interact with the user.
  • The embodiments of the present application provide an electronic device and a distributed system, which can reduce the volume and weight of the electronic device, achieve longer battery life, make the device easier for the user to hold, and improve the user experience.
  • An embodiment of the present application provides an electronic device. The electronic device includes a detachable first body and second body, a processor, a microcontroller, and a display screen. The microcontroller and the display screen are arranged on the first body, and the processor is arranged on the second body. The first body and the second body establish a data connection in a wired or wireless manner. The processor is used to generate first image data and send the first image data to the microcontroller; the microcontroller is used to output the first image data to the display screen for display.
  • The first body includes the display screen and the microcontroller, and is mainly used to realize interactive functions of the electronic device, such as image display; the second body includes the processor, and is mainly used to realize the computing functions of the electronic device.
  • Since some components are removed from both the first body and the second body, each body can be smaller and thinner and is easier for the user to hold. In daily use, the user only needs to hold the first body to use the common functions of the electronic device, while the second body can be placed in a backpack or pocket without being held, thereby improving the user experience.
  • In addition, because some components are removed from both the first body and the second body, when their thickness is the same as that of a traditional electronic device, a larger-capacity battery can be accommodated, providing longer battery life.
  • The first body includes a touch panel, and the touch panel is used to detect the user's touch input on the display screen; the microcontroller is used to send the touch input to the processor; the processor is used to generate the first image data according to the touch input.
  • the first body includes keys, and the keys are used to detect a user's key input; the microcontroller is used to send the key input to the processor; the processor is used to generate the first image data according to the key input.
  • the first body includes a sensor module; the microcontroller is configured to send sensor data collected by the sensor module to the processor; and the processor is configured to generate the first image data according to the sensor data.
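  • As an illustration of the input-forwarding pattern described in the items above (the microcontroller forwards touch, key, and sensor input over the data connection, and the processor answers with first image data), here is a minimal Python sketch. The event names, the queue-based link, and the print-based display are hypothetical stand-ins for illustration only and are not part of the patent.

```python
from dataclasses import dataclass
from queue import Queue

@dataclass
class InputEvent:
    kind: str      # "touch", "key" or "sensor"
    payload: dict  # e.g. {"x": 120, "y": 655} for a touch event

class Microcontroller:
    def __init__(self, to_processor: Queue, to_display):
        self.to_processor = to_processor  # stands in for the wired/wireless link
        self.to_display = to_display      # callable standing in for the display screen

    def on_input(self, event: InputEvent):
        self.to_processor.put(event)      # forward touch/key/sensor input

    def on_image_data(self, frame: str):
        self.to_display(frame)            # output first image data for display

class Processor:
    def __init__(self, from_mcu: Queue, mcu: Microcontroller):
        self.from_mcu = from_mcu
        self.mcu = mcu

    def run_once(self):
        event = self.from_mcu.get()                         # input from the first body
        frame = f"UI frame updated for {event.kind} input"  # generate first image data
        self.mcu.on_image_data(frame)                       # send it back for display

# Usage: simulate one round trip.
link = Queue()
mcu = Microcontroller(link, to_display=print)
cpu = Processor(link, mcu)
mcu.on_input(InputEvent("touch", {"x": 120, "y": 655}))
cpu.run_once()   # prints: UI frame updated for touch input
```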
  • The first body includes a camera, and the camera is used to collect second image data; the microcontroller is used to send the second image data to the display screen for display; the microcontroller is also used to encode the second image data and send it to the second body.
  • The first body and the second body further include wireless communication modules; a wireless communication module includes a Bluetooth module and/or a Wi-Fi module; the first body and the second body establish a data connection through the Bluetooth module and/or the Wi-Fi module.
  • The first body and the second body further include external interfaces; when a cable is used to connect the external interfaces of the first body and the second body, the first body and the second body establish a wired data connection.
  • The processor is configured to acquire display screen parameters of the display screen, where the display screen parameters include the resolution and/or refresh rate of the display screen; the processor adjusts the first image data according to the display screen parameters, where the adjustment includes performing at least one of scaling, cropping, frame insertion, and compression on the first image data; the processor then sends the adjusted first image data to the microcontroller.
  • the microcontroller is used for broadcasting a beacon frame, and the beacon frame includes display screen parameters; the processor is used for acquiring display screen parameters from the beacon frame.
  • the processor is configured to send a first request message to the microcontroller; the microcontroller is configured to send display screen parameters to the processor according to the first request message.
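  • To make the parameter exchange and adaptation above concrete, the following Python sketch shows one possible flow: the processor obtains the display screen parameters either from a broadcast beacon frame or via a request message, and then decides how to adjust the first image data. The field names, thresholds, and the 1080x2340 / 60 Hz example values are assumptions for illustration only.

```python
from dataclasses import dataclass

@dataclass
class DisplayParams:
    width: int
    height: int
    refresh_hz: int

def request_params_from_mcu() -> DisplayParams:
    # Placeholder for the first request message / response exchange with the MCU.
    return DisplayParams(1080, 2340, 60)   # assumed example values

def get_display_params(beacon: dict | None = None) -> DisplayParams:
    """Parse a broadcast beacon frame if one was heard, else ask the MCU directly."""
    if beacon is not None:
        return DisplayParams(beacon["width"], beacon["height"], beacon["refresh_hz"])
    return request_params_from_mcu()

def adapt_frame(frame_size: tuple[int, int], src_fps: int, params: DisplayParams) -> list[str]:
    """Choose which of scaling, cropping, frame insertion or compression to apply."""
    ops = []
    if frame_size != (params.width, params.height):
        ops.append(f"scale/crop to {params.width}x{params.height}")
    if src_fps < params.refresh_hz:
        ops.append(f"insert frames up to {params.refresh_hz} Hz")
    ops.append("compress before sending to the microcontroller")
    return ops

# Example: a 720x1280, 30 fps source adapted for an assumed 1080x2340, 60 Hz screen.
print(adapt_frame((720, 1280), 30, get_display_params()))
```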
  • the microcontroller controls the first body to enter a first sleep state, wherein the first sleep state includes that the first body turns off the display screen and locks the screen, and maintains a data connection with the second body through the Wi-Fi module and the Bluetooth module.
  • The microcontroller controls the first body to enter a second sleep state, where the second sleep state includes that the first body turns off the display screen and locks the screen, and maintains a data connection with the second body through a Wi-Fi module or a Bluetooth module.
  • The microcontroller controls the first body to enter a third sleep state, where the third sleep state includes that the first body turns off the display screen and locks the screen, and disconnects the data connection from the second body.
  • The device sleep method provided by the embodiments of the present application sets three sleep states for the first body according to how long the user has not operated the first body, and releases the data connection between the two bodies step by step. This not only allows the first body to keep synchronizing data with the second body during short-term sleep, so that it is ready for the user to use again, but also allows the first body to disconnect the data connection from the second body during long-term sleep to save power and prolong battery life.
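  • A minimal sketch of this graded sleep policy follows, written as a decision function over idle time. The idle-time thresholds are assumptions for illustration; the patent only defines the three states and the order in which the connection is released.

```python
from enum import Enum

class SleepState(Enum):
    FIRST = 1    # display off, screen locked; keep both Wi-Fi and Bluetooth links
    SECOND = 2   # display off, screen locked; keep only the Wi-Fi or Bluetooth link
    THIRD = 3    # display off, screen locked; disconnect from the second body

def sleep_state_for(idle_seconds: float) -> SleepState | None:
    """Return the target sleep state for a given idle time (thresholds are assumed)."""
    if idle_seconds < 60:
        return None               # user active recently: stay awake
    if idle_seconds < 600:
        return SleepState.FIRST
    if idle_seconds < 3600:
        return SleepState.SECOND
    return SleepState.THIRD

# Example: after 20 minutes without user operation, the first body would be in the
# second sleep state under these assumed thresholds.
print(sleep_state_for(20 * 60))   # SleepState.SECOND
```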
  • Multiple sensors are distributed on both sides of the display screen, and some or all of them are used to detect whether the first body is touched by the user and whether it is picked up by the user. The microcontroller is used to control the first body to establish a data connection with the second body when the first body is touched by the user in the third sleep state. The processor is configured to obtain the display screen parameters of the display screen after the first body and the second body establish the data connection, and to generate the first image data according to the display screen parameters. The microcontroller is used to send a first wake-up message to the processor when the first body is lifted by the user; the processor is used to turn on the display screen according to the first wake-up message and send the first image data to the microcontroller.
  • In this way, when the first body detects the user's touch, it already starts interacting with the second body to prepare for wake-up, for example by establishing the data connection and generating the image data, so that the first body can wake up quickly once it detects that the user has lifted it, which improves the user experience.
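  • The two-stage wake-up above can be sketched as a small event handler, shown below. The class names, the Link and SecondBodyProcessor stubs, and the "first wake-up message" string are hypothetical illustrations of the touch-then-lift sequence, not the patent's actual interfaces.

```python
class Link:
    """Toy stand-in for the wired or wireless data connection to the second body."""
    def connect(self):
        print("data connection to the second body established")
    def send(self, msg: str):
        print(f"sent '{msg}' to the second body")

class SecondBodyProcessor:
    """Toy stand-in for the processor on the second body."""
    def __init__(self):
        self.frame = None
    def prefetch_display_params_and_render(self):
        # Acquire the display screen parameters, then generate the first image data.
        self.frame = "pre-rendered frame matching the display parameters"
    def turn_on_display_and_send_frame(self):
        print("display on, pushing:", self.frame)

class FirstBodyWakeup:
    """Hypothetical event handler for the first body in the third sleep state."""
    def __init__(self, link: Link, cpu: SecondBodyProcessor):
        self.link, self.cpu = link, cpu
    def on_touch(self):                 # sensors report that the user touched the body
        self.link.connect()             # re-establish the data connection early
        self.cpu.prefetch_display_params_and_render()
    def on_lift(self):                  # sensors report that the user lifted the body
        self.link.send("first wake-up message")
        self.cpu.turn_on_display_and_send_frame()

wake = FirstBodyWakeup(Link(), SecondBodyProcessor())
wake.on_touch()   # connection and pre-rendering happen before the user looks at the screen
wake.on_lift()    # the screen turns on immediately with the prepared frame
```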
  • An embodiment of the present application provides a distributed system, including: a plurality of first bodies and a second body as provided in the first aspect and its implementation manners; the plurality of first bodies and the second body establish data connections in a wired or wireless manner.
  • An embodiment of the present application provides a chip system, where the chip system includes a microcontroller for supporting the above-mentioned first body in implementing the functions involved in the above aspects and implementation manners, such as displaying image data on the display screen.
  • an embodiment of the present application provides a chip system, where the chip system includes a processor for supporting the above-mentioned second body to implement the functions involved in the above-mentioned aspects and implementation manners, such as generating image data.
  • The embodiments of the present application further provide a computer-readable storage medium storing instructions which, when run on the microcontroller of the first body, enable the microcontroller to implement the functions of the first body involved in the above aspects and their implementation manners.
  • The embodiments of the present application further provide a computer-readable storage medium storing instructions which, when run on the processor of the second body, enable the processor to implement the functions of the second body involved in the above aspects and their implementation manners.
  • FIG. 1 is a system structure block diagram of a current intelligent terminal device;
  • FIG. 2 is a schematic diagram of a scenario in which two terminal devices currently share data;
  • FIG. 3 is a schematic diagram of a scenario in which multiple terminal devices currently interact;
  • FIG. 4 is a schematic structural diagram of the first body;
  • FIG. 5 is a schematic structural diagram of the second body;
  • FIG. 6 is a schematic diagram of the physical connection between the first body and the second body;
  • FIG. 7 is a schematic diagram of the combined use of the first body and the second body in a separated state;
  • FIG. 8 is another schematic diagram of the combined use of the first body and the second body in a separated state;
  • FIG. 10 is a logical block diagram of data interaction when the first body and the second body are used in combination;
  • FIG. 11 is a schematic diagram of the thicknesses of the first body, the second body, and a conventional mobile phone;
  • FIG. 12 is a schematic diagram of the first body and the second body establishing a wireless data connection through a Wi-Fi link;
  • FIG. 13 is a schematic diagram of the second body wirelessly charging the first body;
  • FIG. 14 is a schematic diagram of a device form of the first body and the second body;
  • FIG. 15 is a schematic diagram of another device form of the first body and the second body;
  • FIG. 16 is a schematic diagram of another device form of the first body and the second body;
  • FIG. 17 is a schematic diagram of another device form of the first body and the second body;
  • FIG. 18 is a schematic diagram of the second body displaying a UI image on the display screen of the first body;
  • FIG. 19 is a schematic diagram of a usage scenario of the camera and the microphone of the second body;
  • FIG. 20 is a schematic diagram of another usage scenario of the camera and the microphone of the second body;
  • FIG. 21 is a schematic diagram of the speakers of the first body and the second body playing audio together;
  • FIG. 22 is a flowchart of a shooting control method provided by an embodiment of the present application;
  • FIG. 23 is a schematic diagram of a scene in which the first body controls the second body to take pictures;
  • FIG. 24 is a schematic diagram of multi-focus shooting performed by the second body;
  • FIG. 25 is a schematic diagram of the outline of a person displayed by the second body on the display screen of the first body;
  • FIG. 26 is a schematic diagram of a distributed system;
  • FIG. 27 is a schematic diagram of a usage scenario of a distributed system;
  • FIG. 29 is a schematic diagram of another usage scenario of the distributed system;
  • FIG. 30 is a flowchart of an image adaptation method provided by an embodiment of the present application;
  • FIG. 31 is an example diagram of an image adaptation method provided by an embodiment of the present application;
  • FIG. 32 is a flowchart of a device sleep method provided by an embodiment of the present application;
  • FIG. 34 is a flowchart of a device wake-up method provided by an embodiment of the present application;
  • FIG. 35 is a schematic diagram of the sensor distribution of the first body provided by an embodiment of the present application.
  • Smart terminal devices include, for example, mobile phones, smart TVs, wearable devices (such as smart watches and wristbands), tablet computers, smart screens, smart projectors, smart speakers, virtual reality (VR) devices, intelligent driving computing platforms, and the like.
  • a variety of intelligent terminal devices provide users with various services in the fields of learning, entertainment, life, health management, office, and travel.
  • Users can use the various services and functions provided by the mobile phone at any time and place, such as making calls, instant messaging, video chatting, taking pictures, positioning and navigation, mobile payment, online shopping, mobile office, multimedia services, and the like.
  • the smart terminal device is usually equipped with a full-featured chip system.
  • A full-featured chip system here means that the smart terminal device has the complete ability to run applications (APPs), display images, and interact with the user.
  • FIG. 1 is a block diagram of the system structure of the current intelligent terminal equipment.
  • An intelligent terminal device with a full-featured chip system, as shown in Figure 1, needs to include at least the following parts:
  • a central processing unit (central processing unit, CPU) 101 includes a CPU based on an ARM architecture, a CPU based on an X86 architecture, or a CPU based on a MIPS architecture.
  • the CPU 101 is mainly used to run application programs, execute related program instructions, process data, and the like.
  • a graphics processor (graphic processing unit, GPU) 102 is used to perform graphics operations to render images that the application or game needs to display on the display screen.
  • The video codec 103 is used to compress (encode) or decompress (decode) digital videos in different formats, so as to facilitate storage, playback, or transmission in smart terminal devices, where common digital video formats may include, for example, the H.264 and H.265 standards of advanced video coding (AVC).
  • the display subsystem (DSS) 104 may include, for example, a display controller (DISPC), etc.
  • the DSS 104 can perform operations such as overlaying and transforming images such as interfaces and status bars of different applications, and finally output to display on the display.
  • The wireless communication module 105 may include one or more chip devices, such as a baseband processor, a modem, a Wi-Fi chip, and a Bluetooth chip, and is used to implement at least one of the cellular network, Wi-Fi, and Bluetooth data connection functions of the intelligent terminal device. It can be understood that, in order to transmit and receive the above wireless signals, the intelligent terminal device is further provided with corresponding components such as an antenna and a power amplifier.
  • The peripheral interface 106 may include a mobile industry processor interface display serial interface (MIPI-DSI) and a DisplayPort (DP) interface for supporting the display screen in displaying images, a mobile industry processor interface camera serial interface (MIPI-CSI) for supporting cameras, and a serial peripheral interface (SPI) and an inter-integrated circuit (I2C) bus interface for supporting sensors, and the like.
  • The size of the camera should not be too large; otherwise it not only increases the weight of the device but also compresses the battery space, which limits further improvement of camera performance.
  • Figures 2 and 3 illustrate two use cases where there are deficiencies.
  • FIG. 2 is a schematic diagram of a scenario in which two terminal devices currently share data.
  • Suppose the user has two smart terminal devices A and B. On a certain day, the user takes device A out to work and leaves device B at home. While out of the office, the user uses device A to process some documents, but some remain unfinished. When the user returns home and wants to continue processing these documents on device B, the documents in device A may need to be transferred to device B in one of the following ways: 1. use a third-party application to transfer the documents from device A to device B; 2. send the documents from device A to device B through functions such as one-touch transfer between devices A and B of a specific model; 3. sync the documents from device A to device B through cloud storage sharing.
  • the above methods all require the user to perform multiple operations on two devices, such as starting the corresponding third-party application, sending documents, receiving documents, saving documents to cloud storage, downloading documents from cloud storage, etc.
  • the above operations are not friendly enough for the user; and, if the device A is not around the user, the above methods cannot be implemented.
  • FIG. 3 is a scene diagram of interaction between multiple terminal devices at present.
  • The embodiments of the present application provide an electronic device that is composed of a first body and a second body which can be detachably arranged. Below, the specific structures of the first body and the second body will be described.
  • FIG. 4 is a schematic structural diagram of the first body.
  • The first body may include: a display screen 201, a touch panel 202, a camera 203, a fingerprint identification module 204, a Bluetooth module 205, a wireless fidelity (Wi-Fi) module 206, a global navigation satellite system (GNSS) module 207, an earpiece 208, a speaker 209, a microphone 210, a motor 211, a power management module 212, a battery 213, keys 214, a microcontroller (MCU) 215, and the like.
  • The structure shown in the embodiments of the present application does not constitute a specific limitation on the first body. The first body may mainly include modules or devices related to the user interaction function, and some modules or devices used to realize the data connection function.
  • The above-mentioned modules or devices related to user interaction functions may include, for example, modules or devices that convey information to the user, such as the display screen 201, the earpiece 208, and the motor 211, as well as modules or devices that receive information input by the user, such as the camera 203, the microphone 210, and the fingerprint recognition module 204.
  • the first body may not include modules or devices for realizing the main computing capabilities of the electronic device, such as: application processors, graphics processors, modems, and the like.
  • The first body may include more or fewer components than those shown in FIG. 4, or combine some components, or split some components, or have a different arrangement of components.
  • the illustrated components may be implemented in hardware, software, or a combination of software and hardware.
  • the display screen 201 is used to display images, videos and the like.
  • the display includes a display panel.
  • The display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a MiniLED, a MicroLED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like.
  • the first body may include 1 or N display screens 201 , where N is a positive integer greater than 1.
  • the display screen 201 may have different shapes, such as a flat shape, a hyperboloid shape, a quadrangular shape, a waterfall screen shape, a foldable shape, a scroll shape, and the like.
  • the display screen 201 may also have different sizes, such as small-sized display screens of mobile phones and wearable devices, medium-sized display screens of tablet computers, personal computers, etc., and large-sized display screens such as televisions.
  • The touch panel 202 and the display screen 201 can form a touch screen, also called a touch-controlled screen, and the touch panel 202 is used to detect a touch operation on or near the display screen 201.
  • the touch panel 202 can transmit the detected touch operation to its own microcontroller 215 or to the processor of the second body to determine the content of the touch operation, thereby providing visual output related to the touch operation through the display screen 201 .
  • the touch panel 202 can also be disposed on the surface of the first body, which is different from the position where the display screen 201 is located.
  • the camera 203 is used to collect image data, such as taking photos and videos.
  • the camera 203 may include a lens and a photosensitive element.
  • the object generates an optical image through the lens and projects it to the photosensitive element of the camera.
  • The photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor.
  • the photosensitive element is used to convert the optical signal into an electrical signal, and transmit the electrical signal to an image signal processor (ISP) to convert it into a digital image signal.
  • the ISP outputs the digital image signal to a digital signal processor (DSP) for processing.
  • The DSP converts the digital image signal into an image signal in a standard format such as RGB, RYYB, or YUV.
  • the ISP and the DSP may preferably be arranged in the second body, or may be arranged in the first body.
  • the first body may include 1 or N cameras 203 , where N is a positive integer greater than 1.
  • the fingerprint identification module 204 is used to collect user fingerprints, and the first body can use the collected fingerprint characteristics to realize functions such as fingerprint unlocking, access application lock, and fingerprint payment.
  • the fingerprint identification module 204 can be implemented by different fingerprint identification technologies, such as ultrasonic fingerprint identification technology, optical fingerprint identification technology, capacitive fingerprint identification technology, and the like.
  • the Bluetooth module 205 is used to realize the function of wireless data connection between the first body and the second body based on the Bluetooth protocol, and is used to realize the function of the wireless data connection between the first body and other electronic devices based on the Bluetooth protocol.
  • the Bluetooth module 205 preferably supports the Bluetooth low energy protocol to reduce power consumption.
  • The Wi-Fi module 206 is used to realize the wireless data connection between the first body and the second body based on the IEEE 802.11 standard, and to realize the wireless data connection between the first body and an access point (AP) device based on the IEEE 802.11 standard, so that the first body can access the Internet through the AP.
  • the above-mentioned 802.11 standards may include, for example, 802.11ac, 802.11ax, 802.11ad, 802.11ay standards, and the like.
  • the Wi-Fi module 206 may also be used to implement a Wi-Fi Direct (Wi-Fi Direct) data connection or a FlashLinQ data connection between the first body and the second body or other electronic devices.
  • the GNSS module 207 is used to provide location services for the first body, so as to realize functions such as positioning and navigation.
  • The GNSS module 207 may provide one or more location services based on the global positioning system (GPS), the global navigation satellite system (GLONASS), the BeiDou navigation satellite system (BDS), the quasi-zenith satellite system (QZSS), and/or satellite-based augmentation systems (SBAS).
  • the earpiece 208 is used to convert audio electrical signals into sound signals.
  • When answering a call, the earpiece 208 can be brought close to the ear to listen to the voice.
  • the speaker 209 is used to convert audio electrical signals into sound signals.
  • the user can listen to audio through speaker 209, or listen to a hands-free call.
  • the first body may include two or more speakers 209 to achieve stereo effect.
  • The microphone 210 is used to convert sound signals into electrical signals. The user can speak close to the microphone 210 to input a sound signal into it.
  • the first body may be provided with at least one microphone 210 .
  • the first body may be provided with two microphones 210, which can implement a noise reduction function in addition to collecting sound signals.
  • three, four or more microphones 210 may be provided on the first body to collect sound signals, reduce noise, identify sound sources, and implement directional recording functions.
  • the motor 211 is used to generate vibration, which can be used for vibration prompting of incoming calls and messages, and can also be used for touch vibration feedback.
  • the vibration of the motor 211 can be transmitted to the user, so that the user can feel the interactive feedback brought by the motor 211 .
  • For example, the motor 211 may vibrate when the user taps the screen; when the user presses the fingerprint recognition area of the display screen to unlock the device, the motor 211 may vibrate at the moment of unlocking. Vibration feedback effects may also differ for different application scenarios (e.g., time reminders, receiving messages, alarm clocks, games, etc.).
  • the power management module 212 is used for managing the charging and discharging strategy of the battery 213 of the first body and monitoring the battery status.
  • the first body may receive the charging input of the wired charger through the external interface 218 or the like.
  • the first body can receive wireless charging input through a built-in wireless charging coil.
  • the first body may not include a processor, so the display screen 201 is the main energy-consuming device of the first body.
  • the battery 213 may be a small capacity battery.
  • The keys 214 include a power key, a volume key, and the like. The keys 214 may be mechanical keys or touch keys.
  • the first body can receive key input, and generate key signal input related to user settings and function control of the first body.
  • The microcontroller 215 is used to provide a small part of the computing power of the electronic device, sufficient for the interactive functions of the first body. For example, the microcontroller 215 may include a video decoder module for decoding the encoded video data sent by the second body and outputting it to the display screen 201 for display as images; the microcontroller 215 may also include an audio decoder module for decoding the encoded audio data sent by the second body and outputting it to the earpiece 208 or the speaker 209 to play sound.
  • Sensor module 216 may include one or more sensors, such as: proximity light sensor 216A, ambient light sensor 216B, gravity sensor 216C, compass 216D, gyroscope 216E, and the like.
  • the sensor module 216 collects information related to device posture, user behavior, and interaction through one or more sensors, and implements various functions independently or in cooperation with other devices or modules. For example: when the user uses the camera 203 to shoot, the gyroscope 216E can detect the shaking of the first body, and calculate the compensation value of the lens according to the shaking, so that the lens can offset the shaking of the first body through the reverse motion to realize anti-shake;
  • The gyroscope 216E can also be used in navigation and somatosensory gaming scenarios.
  • the proximity light sensor 216A can detect whether there is an object near the first body, so as to realize functions such as automatically turning on the screen when taking it out of a pocket or bag, and automatically turning off the display screen 201 when talking close to the ear.
  • The first body may further include memory, including volatile memory and non-volatile memory, for buffering or saving data generated by each device or module of the first body during operation, as well as for saving the program instructions required for the operation of devices or modules such as the microcontroller.
  • FIG. 5 is a schematic structural diagram of the second body.
  • the second body may include: a processor 301 , a camera 302 , a flash 303 , a laser ranging sensor 304 , a power management module 305 , a battery 306 , a Bluetooth module 307 , a Wi-Fi module 308 , and a GNSS module 309 , a radio frequency module 310, a card slot 311, an external interface 312, a button 313, a near-field communication (NFC) module 314, a microphone 315, a speaker 316, and the like.
  • The structure illustrated in the embodiments of the present application does not constitute a specific limitation on the second body. The second body may mainly include modules or devices related to computing capabilities, such as the processor 301, and modules or devices used to realize the data connection function.
  • the second body may not include modules or devices for realizing the main interactive capabilities of the electronic device, such as a display screen, a touch panel, a fingerprint identification module, a sensor module, and the like.
  • The second body may include more or fewer components than those shown in FIG. 5, or combine some components, or split some components, or have a different arrangement of components.
  • the illustrated components may be implemented in hardware, software, or a combination of software and hardware.
  • the processor 301 undertakes the main computing power of the electronic device.
  • the processor 301 may include an application processor (application processor, AP), a baseband processor, a graphics processor (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller , video codec, digital signal processor (digital signal processor, DSP), and/or neural network processor (neural-network processing unit, NPU), etc.
  • Different processors 301 may be independent devices, or may be integrated in one or more processors 301; for example, some or all of the above-mentioned processors 301 may be integrated in a system on a chip (SoC).
  • the sensor size, pixel size and pixel resolution of the camera 302 in the second body can be larger, so the camera 302 can be used as the main camera for taking high-quality pictures.
  • the camera 203 can be used as an auxiliary camera, and is used in scenarios that do not require high shooting quality, such as self-portraits, video chats, and scanning of two-dimensional codes.
  • The second body may include two or more cameras 302, and different cameras 302 may have different specifications, such as a macro camera, a telephoto camera, and a wide-angle camera, to provide more shooting modes.
  • the flash 303 and the laser ranging sensor 304 are used to cooperate with the camera to achieve better shooting quality.
  • the flash 303 is used to turn on when the camera 302 focuses or exposes, and plays a role of supplementary light, so as to improve the shooting quality.
  • the flash 303 can be a dual-color temperature flash, including two light sources with different color temperatures, so as to have a better fill light effect and make the white balance of the shooting more accurate.
  • the laser ranging sensor 304 is used to measure the distance between the camera 302 and the object to be photographed, so as to realize the phase detection auto focus (PDAF) function of the camera 302 .
  • the power management module 305 is used for managing the charging and discharging strategy of the battery 306 of the second body and monitoring the battery status.
  • the second body may receive the charging input of the wired charger through the external interface 312 or the like.
  • the second body can receive wireless charging input through a built-in wireless charging coil.
  • The battery 306 in the second body mainly supplies power to the processor 301 and other devices in the second body. Since the processor 301 is the main energy-consuming device in the electronic device, the battery 306 may be a high-capacity battery.
  • the Wi-Fi module 308 is used to realize the function of wireless data connection between the second body and the first body based on the IEEE 802.11 standard, and is used to realize the connection between the second body and the access point device AP based on the IEEE 802.11 standard The function of wireless data connection enables the second body to access the Internet through the AP.
  • the above-mentioned 802.11 standards may include, for example, 802.11ac, 802.11ax, 802.11ad, 802.11ay standards, and the like.
  • the Wi-Fi module 308 may also be used for the second body and the first body to implement a Wi-Fi Direct data connection, or a FlashLinQ data connection.
  • the GNSS module 309 is used to provide location services for the second body to implement functions such as positioning and navigation.
  • The GNSS module 309 may provide one or more location services based on the global positioning system (GPS), the global navigation satellite system (GLONASS), the BeiDou navigation satellite system (BDS), the quasi-zenith satellite system (QZSS), and/or satellite-based augmentation systems (SBAS).
  • the radio frequency module 310 together with the baseband processor and the like, realizes the cellular mobile data connection function of the electronic device.
  • the radio frequency module 310 may include a radio frequency circuit, a radio frequency power amplifier, an antenna, etc., and is used to implement the transmission and reception of electromagnetic wave signals of a cellular mobile network and/or a Wi-Fi network.
  • The cellular mobile data connection functions of the electronic device may include: global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), 5th generation new radio (5G NR), and the like.
  • the card slot 311 may include a SIM card slot, and may also include a TransFlash (TF) card slot.
  • the second body may contain only a SIM card slot.
  • the second body may include a SIM card slot and a TF card slot at the same time.
  • the external interface 312 may include a USB interface, a 3.5mm headphone jack, and the like.
  • the external interface 312 is used to connect external devices such as data cables and earphones, and realize functions such as charging, data transmission, and audio playback.
  • the keys 313 include a power-on key, a volume key, and the like.
  • The keys 313 may be mechanical keys or touch keys.
  • the second body can receive key input and generate key signal input related to user settings and function control of the second body.
  • The NFC module 314 is used to implement various NFC-based functions, such as NFC mobile payment, swiping public transportation cards, simulating access control cards, and reading or writing data from smart cards (IC cards), identification cards, and the like.
  • the second body may be provided with at least one microphone 315 .
  • the second body may be provided with two microphones 315, which can implement a noise reduction function in addition to collecting sound signals.
  • three, four or more microphones 315 may be set on the second body to collect sound signals, reduce noise, identify sound sources, and implement directional recording functions.
  • Speakers 316 are used to convert audio electrical signals into sound signals. The user can listen to audio through speaker 316, or listen to a hands-free call.
  • the second body may include two or more speakers 316 to achieve stereo effect.
  • the first body and the second body can be used in combination to realize all the functions of the electronic device.
  • The first body and the second body can be physically connected to form a whole for combined use.
  • FIG. 6 is a schematic diagram of the physical connection between the first body and the second body.
  • both the first body 100 and the second body 200 may be designed in the form of a mobile phone.
  • the first body 100 may include a display screen 201 , a touch panel, a fingerprint identification module, and other related devices for user interaction, as well as other devices as shown in FIG. 4 .
  • The first body 100 does not include some or all of the components of the second body 200, for example, it does not include a processor, a large-capacity battery, a radio frequency module, a main camera, etc. Therefore, compared with a traditional mobile phone, the first body 100 can be thinner, smaller, and lighter.
  • The second body 200 may include computing-related devices such as a processor, as well as a large-capacity battery, a radio frequency module, a main camera, and the other devices shown in FIG. 5, and the second body 200 may not include user-interaction devices such as a display screen, a touch panel, and a fingerprint recognition module. Therefore, compared with a traditional mobile phone, the second body 200 can also be thinner, smaller, and lighter.
  • first body 100 and the second body 200 may form a whole in the form of a mobile phone by being fastened up and down.
  • The first body 100 and the second body 200 may have the same or similar length and width, so that the two have stronger integrity after being fastened together.
  • the first body 100 and the second body 200 may further include a fixing structure.
  • The first body 100 and the second body 200 may respectively include magnetic attraction devices at corresponding positions. When the first body 100 and the second body 200 are close to each other, the magnetic attraction devices attract each other to fix the first body 100 and the second body 200.
  • Alternatively, the first body 100 and the second body 200 may include buckle structures that cooperate with each other. When the first body 100 and the second body 200 are fastened together, the buckle structures lock to fix the first body 100 to the second body 200.
  • Alternatively, one of the first body 100 and the second body 200 may be provided with a back clip structure, and the sub-device provided with the back clip structure clamps the other sub-device to fix the first body 100 and the second body 200.
  • When the first body 100 and the second body 200 are used in combination, some data interaction may be required between the first body 100 and the second body 200. Therefore, the first body 100 and the second body 200 may also be provided with data connection interfaces that cooperate with each other.
  • the data connection interface may be disposed on the buckling surface of the first body 100 and the second body 200.
  • The data connection interface may include a pair of mutually matched male and female terminals 217 and 317, with the male terminal 217 arranged on one sub-device and the female terminal 317 arranged on the other sub-device. When the male terminal 217 and the female terminal 317 are plugged together, a data transmission channel between the first body 100 and the second body 200 is established.
  • the data connection interface can be designed based on a specific interface protocol, such as: PCI Express protocol (PCI-E for short), Thunderbolt and the like.
  • Alternatively, the data connection interface may be an optical interface for transmitting and receiving optical signals. When the optical interfaces are connected, an optical signal transmission channel is established between the first body 100 and the second body 200, enabling the first body 100 and the second body 200 to transmit data based on optical signals.
  • The first body 100 and the second body 200 may also be used in combination when they are separated from each other.
  • FIG. 7 is a schematic diagram of the combined use of the first body 100 and the second body 200 in a separated state.
  • a wired data connection can be established between the first body 100 and the second body 200 through a cable 401 .
  • the cable 401 may be a cable that transmits data through electrical signals or short-distance millimeter wave technology. Then, both the first body 100 and the second body 200 are provided with an external interface for docking the terminals of the cable 401 .
  • the external interfaces 218 and 312 can be correspondingly PCI-E interfaces, Thunderbolt interfaces, USB interfaces, and the like.
  • Alternatively, the cable 401 may be an optical fiber, and data is transmitted through an optical signal. In this case, the first body 100 and the second body 200 are both provided with optical interfaces. Depending on the type of optical fiber connector, the optical interface may correspondingly be an SC-type optical fiber interface, an FC-type optical fiber interface, or the like.
  • FIG. 8 is another schematic diagram of the combined use of the first body 100 and the second body 200 in a separated state.
  • the first body 100 and the second body 200 can establish a wireless data connection.
  • both the first body 100 and the second body 200 include a Wi-Fi module, and the first body 100 and the second body 200 can establish a wireless data connection through the Wi-Fi module.
  • the first body 100 and the second body 200 can be data connected to an access point (AP) device 402 through their respective Wi-Fi modules, and a wireless data connection is established through data forwarding of the AP device 402;
  • the first body 100 and the second body 200 can establish a Wi-Fi Direct data connection through the Wi-Fi module, so that a wireless data connection can also be established without other devices.
  • both the first body 100 and the second body 200 include a Bluetooth module, and the first body 100 and the second body 200 can establish a wireless data connection through the Bluetooth module.
  • Alternatively, the first body 100 includes a Wi-Fi module and can access the Internet through the Wi-Fi module, while the second body 200 includes a radio frequency module and a baseband processor, has a cellular mobile data connection function, and can access the Internet through the base station 404; therefore, the first body 100 and the second body 200 can establish a wireless data connection based on the Internet.
  • Alternatively, both the first body 100 and the second body 200 have a cellular mobile data connection function, so the first body 100 and the second body 200 can establish a direct data connection based on the 3GPP device-to-device (D2D) protocol using the cellular mobile data connection function, without relying on other devices.
  • FIGS. 9 and 10 are logical block diagrams of data interaction when the first body and the second body are used in combination. Below, based on FIG. 9 and FIG. 10 and in combination with some usage scenarios, the data interaction logic between the first body and the second body in the wired and wireless data connection states will be exemplarily described.
  • Scenario 1: the first body plays image data provided by the second body.
  • When a wired data connection is used, the processor of the second body can encode the image data to be displayed and output it to its PCI-E interface, and the encoded image data is sent to the first body through the PCI-E cable; the first body receives the encoded image data through its PCI-E interface and hands it over to the microcontroller of the first body for processing; the microcontroller has a built-in video decoder module, which decodes the encoded image data and sends the decoded image data to the display screen through the MIPI-DSI interface for display.
  • When a wireless data connection is used, the processor of the second body can encode the image data to be displayed and output it to its Wi-Fi module; the Wi-Fi module of the second body sends the encoded image data to the Wi-Fi module of the first body through the Wi-Fi link established based on Wi-Fi Direct or an AP device; the Wi-Fi module of the first body receives the encoded image data and hands it over to the microcontroller (MCU) of the first body for processing; the MCU has a built-in video decoder module, which decodes the encoded image data and sends the decoded image data to the display screen through the MIPI-DSI interface for display.
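  • The end-to-end path in Scenario 1 (encode on the second body, transport over the wired or wireless link, decode on the MCU, output over MIPI-DSI) can be sketched as a small pipeline, shown below. The zlib codec and the in-memory queue are toy stand-ins for the real video codec and the PCI-E/Wi-Fi transport, used only to make the data flow concrete.

```python
import zlib
from queue import Queue

link = Queue()   # stands in for the PCI-E cable or the Wi-Fi link

def second_body_send(frame_pixels: bytes) -> None:
    encoded = zlib.compress(frame_pixels)   # processor-side video encoder (toy codec)
    link.put(encoded)                       # output to the PCI-E interface / Wi-Fi module

def mipi_dsi_write(frame: bytes) -> None:
    # Placeholder for the MIPI-DSI write to the display screen.
    print(f"displaying a {len(frame)}-byte frame via MIPI-DSI")

def first_body_receive_and_display() -> None:
    encoded = link.get()                    # received by the PCI-E interface / Wi-Fi module
    decoded = zlib.decompress(encoded)      # MCU's built-in video decoder module
    mipi_dsi_write(decoded)                 # hand the frame to the display screen

second_body_send(b"\x00" * (1080 * 2340 * 4))   # one dummy RGBA frame
first_body_receive_and_display()
```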
  • Scenario 2: the first body sends image data to the second body.
  • The camera driver running on the microcontroller (MCU) reads the image data captured by the camera through the MIPI-CSI interface; the image data is encoded by the video encoder inside the MCU and then output to the second body.
  • When the first body and the second body are connected by a wired data connection, the microcontroller can output the encoded image data to its PCI-E interface, so that the encoded image data is sent to the PCI-E interface of the second body through the PCI-E cable; as shown in FIG. 10(b), when the first body and the second body are connected by a wireless data connection, the microcontroller can output the encoded image data to its Wi-Fi module, and the Wi-Fi module of the first body sends the encoded image data to the Wi-Fi module of the second body through the Wi-Fi link established based on Wi-Fi Direct or an AP device.
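  • For symmetry with Scenario 1, the following sketch illustrates the Scenario 2 direction: the MCU reads a camera frame (the MIPI-CSI read is stubbed out here), previews it locally on the display screen, encodes it, and pushes it to the second body. Again, zlib and a queue are assumed stand-ins for the real encoder and transport.

```python
import zlib
from queue import Queue

link_to_second_body = Queue()

def read_camera_frame() -> bytes:
    # Placeholder for the MIPI-CSI read performed by the MCU's camera driver.
    return b"\x10" * (1280 * 720 * 2)   # dummy preview frame

def first_body_capture_step() -> None:
    frame = read_camera_frame()
    print(f"previewing a {len(frame)}-byte frame on the first body's display screen")
    link_to_second_body.put(zlib.compress(frame))   # MCU video encoder, then send

def second_body_receive() -> None:
    frame = zlib.decompress(link_to_second_body.get())
    print(f"second body received a {len(frame)}-byte frame for processing or storage")

first_body_capture_step()
second_body_receive()
```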
  • FIG. 11 is a schematic diagram of the thickness of the first body 100 , the second body 200 and a conventional mobile phone.
  • Since the first body 100 includes only some of the components needed to realize all functions of the electronic device, and does not include a processor, a large-capacity battery, a radio frequency module, a main camera, etc., then, compared with a traditional mobile phone with a display screen of the same size, the thickness B1 of the first body 100 can be smaller and its weight lighter, making it convenient for the user to hold.
  • The second body 200 likewise includes only some of the components needed to realize all functions of the electronic device, and does not include user-interaction components such as a display screen, a touch panel, and a fingerprint identification module; therefore, compared with a traditional mobile phone with the same length, width, and battery capacity, the thickness B2 of the second body 200 can also be smaller.
  • In addition, since the second body does not need to be held frequently by the user, its thickness and weight do not have to be optimized for grip. The thickness of the second body 200 can therefore be further increased to B3 to accommodate a larger-capacity battery 306 and improve the battery life of the second body 200.
  • the first body and the second body can cooperate to realize all functions of the mobile phone.
  • the first body 100 and the second body 200 establish a wireless data connection through a Wi-Fi link.
  • the second body 200 can be placed in a clothing pocket or a backpack 403, and the user only needs to hold the first body 100 and operate on it to make calls, send and receive messages, use APPs, play media, perform motion monitoring, make fingerprint payments, take selfies and so on, without taking out the second body 200; these operations cover most scenarios in which users use mobile phones.
  • the user only needs to hold the second body 200 in a few scenarios, as shown in the figure.
  • when the user wants to take high-quality photos or videos, the user can take the second body 200 out of the pocket or backpack 403 and use it to complete the shooting; after the shooting is completed, the second body 200 can be put back into the pocket or backpack 403.
  • the design of the second body 200 does not need to consider device thickness, volume, grip or weight, which affect the user experience when a device is held; therefore, the second body 200 can be made thicker, accommodate a larger-capacity battery, and achieve longer battery life. In this way, even if the power consumption of the second body 200 increases because it supports the 5G network, its battery life is not shortened and may even be longer.
  • the first body 100 may further include a wireless charging coil 219, and the wireless charging coil 219 is coupled to the battery 213 of the first body 100 through the power management module of the first body 100.
  • the second body 200 may also be provided with a wireless charging coil 318, and the wireless charging coil 318 is coupled to the battery 306 of the second body 200 through the power management module of the second body 200. In this way, when the first body 100 and the second body 200 are buckled together into a whole, the power management module of the second body 200 can obtain the battery level of the first body 100 from the power management module of the first body 100.
  • the power management module of the second body 200 can then enable the wireless charging function and, using electromagnetic induction between the coils, use the electric energy in the battery of the second body 200 to charge the battery 213 of the first body 100, thereby prolonging the battery life of the first body 100.
  • the first body and the second body in the embodiments of the present application may also take other forms, some of which are exemplarily described below with reference to further drawings.
  • the first body 100 may be a device in the form of a mobile phone
  • the second body 200 may be a device in the form of a handheld game console.
  • the second body 200 includes buttons 320 and/or joysticks 321 for implementing a game control input function, and a structure for fixing the first body 100 .
  • the structure for fixing the first body 100 may be, for example, a groove 319 provided on the body of the second body 200, and the shape of the groove 319 matches the shape of the first body 100, so that the first body 100 can be embedded in the groove 319 to form a whole with the second body 200.
  • based on the structure shown in the figure, when the first body 100 is separated from the second body 200, the first body 100 can, according to user operations, make calls, send and receive messages, use APPs, play media, perform motion monitoring, make fingerprint payments, take selfies and so on; when the first body 100 and the second body 200 form a whole, they can jointly realize the functions of a handheld game console.
  • the second body 200 is held by the user, and is used for receiving the game control input generated by the user through the buttons 320 and the joystick 321, and the first body 100 is used for displaying the game screen.
  • the first body 100 may be a device in the form of a mobile phone, and the second body 200 may be a smart speaker device.
  • the processor of the second body 200 can send encoded audio data to the first body 100 through a Wi-Fi link or the like; after the first body 100 receives the encoded audio data, the audio decoder in its MCU decodes it and outputs it to the speaker of the first body 100 for playback.
  • when the second body 200 detects the approach of the first body 100 through the NFC module 314, the laser ranging sensor or the like (for example, the user places the first body 100 on the second body 200), the second body 200 takes over the audio playback from the first body 100 to switch between the different audio playback modes; in a specific implementation, the second body 200 stops sending encoded audio data to the first body 100, and at the same time the processor of the second body 200 decodes the encoded audio data and outputs it to the speaker 316 of the second body 200 for playback.
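  • The hand-over described above can be summarized with a small routing sketch; the following Python example is a hedged illustration only, with AudioRouter and its methods invented for the sketch rather than taken from the embodiment.

```python
# Illustrative sketch of the audio hand-over: while the first body is away,
# encoded audio is streamed to it; once NFC/ranging detects the first body
# nearby, streaming stops and playback moves to the second body's speaker 316.

class AudioRouter:
    def __init__(self):
        self.play_locally = False

    def on_proximity_changed(self, first_body_nearby: bool) -> None:
        # e.g. triggered by the NFC module 314 or the laser ranging sensor
        self.play_locally = first_body_nearby

    def route(self, encoded_chunk: bytes) -> str:
        if self.play_locally:
            # decode on the second body's processor, output to speaker 316
            return "decoded and played on second-body speaker"
        # otherwise keep streaming the encoded chunk to the first body's MCU
        return "streamed to first body for MCU decoding"

router = AudioRouter()
print(router.route(b"chunk0"))          # first body still remote
router.on_proximity_changed(True)       # user places the first body on the speaker
print(router.route(b"chunk1"))          # playback taken over locally
```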
  • the first body 100 may be in the form of a wearable device, such as a smart watch, a smart bracelet, a VR device, etc.
  • the second body may be any device that can be put into a clothes pocket or the backpack 403; its device form is not specifically limited in this embodiment of the present application.
  • the first body 100 and the second body 200 can establish a low-power Bluetooth data connection through their respective Bluetooth modules to exchange data.
  • the MCU of the first body 100 can collect the user's physiological data, such as blood flow and cardiac electrical signals, through the various sensors it is equipped with, and send the data to the second body 200 through the Bluetooth module; after the Bluetooth module of the second body 200 receives the physiological data, the second body 200 processes it to obtain information such as the user's heart rate and electrocardiogram, and sends the result back to the first body 100 for display, or uploads it to the cloud through the Wi-Fi module or the cellular mobile data connection function so that the user can access the data on other electronic devices. In this way, the first body 100 does not need to perform calculation and analysis on the data collected by the sensors, so power consumption is reduced and battery life is prolonged.
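  • As a hedged illustration of the kind of processing offloaded to the second body, the sketch below derives an average heart rate from R-R intervals forwarded by the first body; the interval values and function name are made up for the example.

```python
# Sketch: the first body only forwards raw samples (here, R-R intervals derived
# from the cardiac electrical signal); the second body computes the heart rate
# and sends the result back for display. Example data only.

from statistics import mean

def heart_rate_bpm(rr_intervals_s: list[float]) -> float:
    """Average heart rate in beats per minute from R-R intervals in seconds."""
    return 60.0 / mean(rr_intervals_s)

raw_from_first_body = [0.82, 0.80, 0.79, 0.81, 0.83]   # forwarded over Bluetooth LE
print(f"heart rate ~ {heart_rate_bpm(raw_from_first_body):.0f} bpm")  # sent back for display
```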
  • the first body 100 may be a large-screen display device, such as a television, a monitor, etc.
  • the second electronic device may be, for example, a smart camera.
  • the second body 200 and the first body 100 may be distributed at different positions, and the second body 200 may also be fixed on the first body 100 .
  • the image data displayed on the display screen of the first body 100 is provided by the second body 200 .
  • the second body 200 is further configured to receive and process control instructions transmitted by the user through a remote control or voice, run an application program, and display a corresponding interface on the display screen of the first body 100 .
  • the second body 200 can be woken up by a remote-control signal or by voice, where the voice command for waking up the second body 200 can be a default voice command or a user-defined voice command.
  • for example, the voice command is "Xiaoyi Xiaoyi".
  • the second body 200 can learn the user's voice characteristics by having the user read "Xiaoyi Xiaoyi" aloud a few times; in this way, whenever the second body 200 subsequently detects that the user says "Xiaoyi Xiaoyi", it wakes itself up.
  • the processor of the second body 200 can turn on the camera of the second body 200 to capture an image, and encode the image and send it to the first body 100.
  • the microcontroller of the first body 100 decodes the image and sends it to the display screen 201 for display.
  • a user interface (UI) image 501 may be generated according to parameters such as the resolution and refresh rate of the display screen of the first fuselage, and the UI image may include, for example, a status bar, a camera application interface, etc.
  • the status bar may include information such as battery power, network status, and time of the second body;
  • the camera application program interface may include multiple virtual buttons or options, for example: the working mode of the camera (eg, professional shooting, video recording mode, Photo mode, portrait mode, night scene mode, panorama mode, micro-recording Vlog mode, slow motion mode, time-lapse photography, etc.), photo settings (such as: high dynamic range imaging HDR switch, artificial intelligence AI mode switch, filters, etc.), etc.
  • after the processor of the second body generates the UI image, the UI image, the image 502 collected by the camera, and the images of other users received through the network can be layered; for example, the UI image is superimposed on the image collected by the camera to obtain a video stream, and the video stream is then encoded and sent to the microcontroller of the first body.
  • after the microcontroller of the first body receives the video stream, it decodes the video stream and sends it to the display screen for display.
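  • A minimal compositing sketch is given below; it assumes the Pillow imaging library purely for illustration (the application does not name a library), and the drawn status bar and shutter button merely stand in for UI image 501.

```python
# Sketch of the layering step: a translucent UI layer is superimposed on the
# camera frame 502 before the combined frame is encoded and sent to the first body.

from PIL import Image, ImageDraw

def compose_preview(camera_frame: Image.Image) -> Image.Image:
    frame = camera_frame.convert("RGBA")
    ui = Image.new("RGBA", frame.size, (0, 0, 0, 0))
    draw = ImageDraw.Draw(ui)
    # semi-transparent status bar across the top (battery, network, time would go here)
    draw.rectangle([0, 0, frame.width, 40], fill=(0, 0, 0, 120))
    # a round shutter "button" near the bottom of the camera application interface
    cx, cy, r = frame.width // 2, frame.height - 80, 30
    draw.ellipse([cx - r, cy - r, cx + r, cy + r], fill=(255, 255, 255, 200))
    return Image.alpha_composite(frame, ui)

if __name__ == "__main__":
    fake_camera_frame = Image.new("RGB", (1280, 720), (30, 90, 60))  # stand-in for image 502
    compose_preview(fake_camera_frame).save("preview_with_ui.png")
```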
  • the user can use the remote control to issue an operation command to the second body to select different camera working modes or different photographing settings, etc., so as to improve the user experience.
  • the camera 302 of the second body 200 can be arranged on a rotating mechanism 322, and the rotating mechanism 322 can make the camera 302 rotate within a certain angle range, preferably, it can rotate 360 degrees.
  • the microphone 315 of the second body 200 may be a multi-microphone array.
  • the second body 200 turns on the camera 302 to capture images, and the processor of the second body 200 encodes the captured images and then sends them to the first body 100 for display, and also sends them to the remote device of the other call party for display.
  • the multi-microphone array collects the user's voice information, and transmits the voice information to the processor for analysis and processing.
  • based on the analysis result, the processor determines the azimuth of the speaking user and sends a rotation instruction to the rotation mechanism 322 to control the rotation of the camera 302, so that the camera 302 is always aimed at the speaking user and the user always stays at the center of the image captured by the camera 302.
  • the microphone 315 (for example, a multi-microphone array) can simultaneously collect the voice information of multiple users speaking at the same time.
  • the processor of the second body 200 can identify the voice information of each user from the voice information of the multiple users by voiceprint recognition or timbre recognition, and determine the azimuth of each user separately.
  • the second body 200 may further include a laser ranging sensor 304.
  • the laser ranging sensor 304 may be arranged facing the same direction as the camera 302; when multiple users are talking at the same time, the laser ranging sensor 304 can measure the distance of each user relative to the camera 302 and send the ranging results to the processor.
  • the processor determines whether the direction of the camera 302 can be adjusted according to the orientation of each user, the distance relative to the camera 302, and the viewing angle of the camera 302, so that the above-mentioned multiple users are simultaneously within the field of view of the camera 302;
  • if the direction of the camera 302 can be adjusted so that the multiple users are all within its field of view, the processor sends a rotation instruction to the rotation mechanism 322 to control the rotation of the camera 302 so that the multiple users appear in the field of view of the camera 302 at the same time; if the multiple users cannot all be located in the field of view of the camera 302 at the same time, the processor sends a rotation instruction to the rotation mechanism 322 to control the rotation of the camera 302 so that the camera 302 is aimed at the user closest to the camera 302 and that user is located at the center of the field of view of the camera 302.
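  • The camera-aiming decision can be illustrated with the short sketch below; the azimuth and distance values, the field-of-view figure and the helper name are assumptions made for the example.

```python
# Sketch of the decision: given each speaker's azimuth relative to the camera and
# the camera's horizontal field of view, decide whether one orientation can cover
# everyone; if not, aim at the closest speaker.

def choose_camera_target(users, fov_deg: float):
    """users: list of (azimuth_deg, distance_m) tuples for the people speaking."""
    azimuths = [a for a, _ in users]
    spread = max(azimuths) - min(azimuths)
    if spread <= fov_deg:
        # everyone fits: point the camera at the middle of the group
        return ("group", (max(azimuths) + min(azimuths)) / 2.0)
    # otherwise aim at the user closest to the camera
    closest_azimuth = min(users, key=lambda u: u[1])[0]
    return ("closest", closest_azimuth)

print(choose_camera_target([(-20.0, 2.1), (15.0, 1.4), (35.0, 3.0)], fov_deg=78.0))
print(choose_camera_target([(-60.0, 2.1), (50.0, 1.4)], fov_deg=78.0))
```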
  • the rotation mechanism 322 may have its own angular coordinate system; the angular coordinate may take the camera 302 facing directly toward the front of the display screen of the first body 100 as 0°, and the clockwise or counterclockwise direction of rotation of the rotation mechanism 322 as the positive direction of the coordinate system.
  • the rotation command of the processor may be an angle command, for example: 30°, so that the rotation mechanism 322 rotates to a position of 30°.
  • the rotation command may further include at least one speed command and one stop command, where the speed command is used to instruct the angular speed at which the rotation mechanism 322 rotates.
  • the rotation mechanism 322 starts to rotate when it receives a speed command, and if a new speed command is received during the rotation, the angular velocity is adjusted according to the new speed command; when the rotation mechanism 322 receives a stop command, it stops rotating.
  • the processor may first send a speed command of 10° per second to the rotation mechanism 322, and then send a stop command to the rotation mechanism 322 after 3 seconds.
  • the processor of the second body 200 may, based on the change in the user's azimuth over a period of time before the current moment, predict the user's next azimuth change trend, determine from the prediction the angular velocity at which the rotation mechanism 322 should rotate next, and generate and send a corresponding speed command to the rotation mechanism 322.
  • the camera 302 can rotate in real time following the change of the user's orientation, and the user is always kept in the center of the camera 302's field of view.
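  • The speed-command generation described above can be sketched as follows; the command format and deadband threshold are illustrative assumptions, and the trend prediction is reduced to the two most recent azimuth samples.

```python
# Sketch: estimate the angular velocity from the latest azimuth samples and emit
# a speed command for the rotation mechanism 322, or a stop command when the
# user is essentially stationary.

def predict_angular_velocity(samples):
    """samples: list of (t_seconds, azimuth_deg); returns deg/s from the last two points."""
    (t0, a0), (t1, a1) = samples[-2], samples[-1]
    return (a1 - a0) / (t1 - t0)

def next_command(samples, deadband_deg_per_s: float = 1.0):
    v = predict_angular_velocity(samples)
    if abs(v) < deadband_deg_per_s:
        return {"type": "stop"}                 # user is essentially stationary
    return {"type": "speed", "deg_per_s": round(v, 1)}

history = [(0.0, 10.0), (0.5, 13.0), (1.0, 17.0)]   # azimuth drifting to one side
print(next_command(history))                    # {'type': 'speed', 'deg_per_s': 8.0}
print(next_command([(0.0, 20.0), (0.5, 20.2)])) # {'type': 'stop'}
```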
  • both the first body and the second body may include one or more speakers, and the speaker 209 of the first body and the speaker 316 of the second body may be used simultaneously to output a stereo effect.
  • the processor 301 of the second body may decode the audio of a part of the channels and output it to the speaker 316 of the second body for playback;
  • the processor 301 of the second body can also send the audio of another part of the channel to the first body, which is decoded by the microcontroller 215 of the first body and then sent to the speaker 209 of the first body for playback.
  • for example, both the first body and the second body include two speakers, each speaker may include at least one loudspeaker, and the audio resource played by the second body includes at least four channels, for example: a left channel, a right channel, a center channel and a subwoofer channel.
  • the processor 301 can send the audio of the center channel and the subwoofer channel to the first body; the microcontroller 215 of the first body decodes the audio of the center channel and sends it to one speaker 209 of the first body for playback, and decodes the audio of the subwoofer channel and sends it to another speaker 209 of the first body for playback.
  • the processor 301 can decode the audio of the left channel and send it to the speaker 316 on the left side of the second body for playback, and decode the audio of the right channel and send it to the speaker 316 on the right side of the second body for playback. Therefore, the speakers 209 of the first body and the speakers 316 of the second body cooperate with each other to realize stereo effect output, which improves the user's audio-visual experience.
  • the left and right channels can be used to play background music and ambient sound effects;
  • the subwoofer channel can be used to play sound effects such as drums;
  • the center channel can be used to play the voices of speaking characters, providing clear audio layering and a good audio-visual experience.
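  • The channel routing in this example can be summarized by the sketch below; the routing-table layout and channel names are assumptions for illustration only.

```python
# Sketch: the second body keeps the left/right channels for its own speakers 316
# and forwards the center and subwoofer channels to the first body's speakers 209.

LOCAL_SPEAKERS = {"left": "second-body speaker 316 (left)",
                  "right": "second-body speaker 316 (right)"}
REMOTE_SPEAKERS = {"center": "first-body speaker 209 (#1)",
                   "subwoofer": "first-body speaker 209 (#2)"}

def route_channels(channels: dict[str, bytes]) -> dict[str, str]:
    routing = {}
    for name in channels:
        if name in LOCAL_SPEAKERS:
            routing[name] = f"decode locally -> {LOCAL_SPEAKERS[name]}"
        elif name in REMOTE_SPEAKERS:
            routing[name] = f"send encoded to first body -> {REMOTE_SPEAKERS[name]}"
        else:
            routing[name] = "downmix into left/right"   # fallback for extra channels
    return routing

for ch, dest in route_channels({"left": b"", "right": b"", "center": b"", "subwoofer": b""}).items():
    print(f"{ch:9s} -> {dest}")
```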
  • when the second body is a smart camera, it can also be used as a monitoring camera: when someone enters the monitored area, the second body can use its microphone array to lock onto the person's position, keep the person within the captured picture, send an alarm to the first body, and transmit the real-time captured image to the first body.
  • the embodiment of the present application provides a shooting control method, and the steps of the method are described in detail below with reference to a specific application scenario:
  • when the user is out alone and wants to take a picture of himself, he can take out the first body 100 and the second body 200 and place the second body 200 at a certain position, so that the camera 302 of the second body 200 faces the scene to be photographed; then, the user can tap the camera application icon 503 on the display screen 201 of the first body 100 to launch the camera application.
  • the microcontroller of the first body sends an instruction to start the camera application to the processor of the second body.
  • the processor of the second body starts the camera application and turns on the camera of the second body.
  • the processor encodes the image data collected by the camera in real time and sends it to the first body.
  • after the microcontroller of the first body receives the encoded image data, it decodes the data and sends it to the display screen of the first body for display, so that the user can see, in real time on the display screen of the first body, the image captured by the camera of the second body.
  • the first body and the second body can establish a connection and transmit data through one or more of cellular mobile network, Wi-Fi Direct, Bluetooth, etc.
  • when the connection is established through Wi-Fi Direct, Bluetooth or the like, the first body and the second body can establish a connection anywhere without being restricted by geographical location; for example, when the user is exploring or climbing in the wild, the first body and the second body can still be connected and used normally even if there is no communication base station nearby.
  • the user can adjust the relative position of himself and the second body 200 according to the image displayed on the display screen 201 , so that the user appears at an appropriate position within the viewing range of the camera 302 .
  • the user can click the photo button 504 on the display screen 201 .
  • the microcontroller of the first body sends a photographing instruction to the processor of the second body.
  • the processor of the second fuselage starts to count down the photographing, for example, 10 seconds, 5 seconds, etc.
  • the duration of the photographing countdown may be a preset default value or a value preset by the user.
  • the user may perform preparation activities for taking pictures, such as posing for a pose or putting the first body in a clothing pocket or a backpack, and so on.
  • when the countdown ends, the processor controls the camera to expose and take pictures, and sends the taken pictures to the first body for display on the display screen of the first body.
  • the processor may control the camera to take multiple pictures continuously, so as to avoid the user blinking or moving when taking a single picture, which affects the picture quality.
  • the above-mentioned multiple photos may also be photos with different focal points.
  • the processor of the second body 200 can analyze the images collected by the camera 302 based on artificial intelligence technologies such as deep neural networks, so as to identify the character part and the background part.
  • the processor can control the camera 302 to focus at different focal points and take at least one picture after focusing on each point, so that the user can select a favorite picture from multiple pictures, which improves the user experience.
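  • A hedged sketch of this focus-bracketing behaviour is shown below; the Camera class stands in for the real camera driver of the second body and is not an interface defined in this application.

```python
# Sketch: after the scene analysis separates the person from the background, the
# camera is focused on each region in turn and at least one picture is captured
# per focus point, giving the user several pictures to choose from.

class Camera:
    def focus_at(self, region: str) -> None:
        print(f"focusing on {region}")
    def capture(self) -> str:
        return "jpeg-bytes"

def focus_bracket(camera: Camera, regions: list[str], shots_per_region: int = 1) -> list[tuple[str, str]]:
    """Returns (region, picture) pairs the user can later choose from."""
    pictures = []
    for region in regions:          # e.g. ["person", "background"] from the scene analysis
        camera.focus_at(region)
        for _ in range(shots_per_region):
            pictures.append((region, camera.capture()))
    return pictures

print(focus_bracket(Camera(), ["person", "background"], shots_per_region=2))
```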
  • the first body and the second body may cooperate with each other to realize the function of automatic composition.
  • the processor of the second body can analyze the images collected by the camera to identify the close-range content, such as trees, roads, mountains and buildings, and the distant content, such as the sky, the sun and the moon; then, according to the distribution of these contents in the image, the processor determines a suitable position in the framed scene for the person to appear and generates a character outline 506 at that position. As shown in FIG. 25, the character outline 506 and the image data captured by the camera are sent together to the microcontroller of the first body.
  • after receiving the character outline 506 and the image data collected by the camera, the microcontroller of the first body decodes them and sends them to the display screen 201 for display; in this way, the user can adjust his position relative to the second body according to the character outline 506, for example by positioning himself within the character outline 506, thereby obtaining high-quality pictures and improving the user experience.
  • the processor of the second body can send a countdown prompt sound to the microcontroller of the first body after the countdown starts.
  • the microcontroller of the first body can decode the received countdown sound and output it to the speaker of the first body for playback.
  • the countdown prompt sound may be a beep sound played by pulses, or a countdown number broadcast by voice or the like.
  • the user can know from the countdown prompt sound when the photo-taking countdown will end, so that he can strike a pose before the countdown ends and wait for the photo to be taken, which improves the user experience.
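  • The countdown flow can be illustrated by the following sketch; the prompt function, burst size and use of time.sleep are placeholders for the real timer and audio path.

```python
# Sketch: tick once per second, send a beep/number prompt to the first body for
# playback, then trigger a burst capture when the countdown reaches zero.

import time

def send_prompt(seconds_left: int) -> None:
    print(f"prompt to first body: {seconds_left}")   # beep or spoken number

class Camera:
    def capture(self) -> str:
        return "photo"

def countdown_and_capture(camera: Camera, seconds: int = 5, burst: int = 3) -> list[str]:
    for remaining in range(seconds, 0, -1):
        send_prompt(remaining)
        time.sleep(1)
    return [camera.capture() for _ in range(burst)]  # continuous shots to avoid blinks

print(countdown_and_capture(Camera(), seconds=3, burst=2))
```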
  • the second body 200 may also establish data connections with multiple first bodies 100 (A to N) simultaneously by means of wireless or wired data connections to form a distributed system, where N is a positive integer, and the multiple first bodies 100 (A to N) share the data resources and network resources of the second body 200, so as to realize data synchronization among the multiple first bodies 100 (A to N).
  • the distributed system consists of two first bodies and one second body.
  • the two first bodies are referred to here as first body A and first body B, respectively.
  • the user carries the first fuselage A to go out to work, and places the first fuselage B at home.
  • the first body A and the second body maintain a data connection through a cellular mobile network or a Wi-Fi network; the user uses the first body A to remotely create and edit documents in the second body, and these documents are all stored in the second body, while the second body generates the editing page of the document and sends it to the first body A for display.
  • the user can use the first fuselage B to directly open the above-mentioned document located in the second fuselage, and continue the editing work.
  • the second body generates the editing page of the document and sends it to the first body B for display.
  • the operation of sending a document from device A to device B as shown in FIG. 2 is eliminated, so that there is no need to share any data between the first fuselage A and the first fuselage B, which simplifies user operations and improves user experience.
  • an embodiment of the present application further provides a device verification method.
  • when the NFC modules of the first body and the second body detect that they are close to each other, an initial connection is established.
  • the processor of the second body can then acquire the device information of the first body, such as the device model, the media access control (MAC) address and the international mobile equipment identity (IMEI).
  • the processor of the second fuselage generates a dedicated seed password for the first fuselage, and the seed password can be generated randomly or in a specific manner.
  • the processor of the second body uses the seed password to encrypt the device information of the first body, and uses the encryption result as the master password for the data stored in the NFC module of the first body, where encrypting the device information of the first body with the seed password may optionally be implemented in various ways, such as the advanced encryption standard (AES) algorithm.
  • the processor of the second body can assign an exclusive security token and a dynamic check code, such as a cyclic redundancy check (CRC) code, to the first body, and use the master password to encrypt the token and the dynamic check code and write the resulting encrypted data into the NFC module of the first body.
  • the NFC module of the second body reads the encrypted data in the NFC module of the first body and hands it to the processor of the second body for decryption. After the processor of the second body decrypts the encrypted data using the master password, it performs a consistency check on the decrypted token and dynamic check code, as well as the device model, MAC address and/or IMEI of the first body.
  • if the consistency check passes, the processor of the second body generates the editing page of the document and sends it to the first body for display, realizing the document synchronization function; if the consistency check fails, the second body disconnects from the first body, and the processor of the second body adds the current token, the dynamic check code and all the device information of the first body to a blacklist.
  • the processor of the second body is also configured to generate a new dynamic check code, generate new encrypted data from the token and the new check code, and write it into the NFC module of the first body to replace the original encrypted data.
  • the first body and the second body thus use dynamic verification for device verification. Since the dynamic check code changes, if an illegal device steals the encrypted data stored in the NFC module of the first body and tries to verify itself with the second body, the processor of the second body will detect that the dynamic check code from the illegal device is inconsistent with the current dynamic check code, so the illegal device is added to the blacklist and cannot establish a connection with the second body, which improves the security of file synchronization.
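  • The verification flow can be sketched compactly as below; this is an assumption-laden illustration in which the third-party cryptography package's Fernet construction (AES-based) stands in for the AES step named above, zlib.crc32 provides the dynamic check code, and the JSON field layout is invented for the example.

```python
# Sketch: derive a master secret from the seed and device info, store an encrypted
# token + dynamic check code in the first body's NFC module, then verify and
# rotate the check code on the second body.

import base64, hashlib, json, os, zlib
from cryptography.fernet import Fernet

def master_password(seed: bytes, device_info: dict) -> Fernet:
    # derive the master secret from the seed password and the device information
    material = hashlib.sha256(seed + json.dumps(device_info, sort_keys=True).encode()).digest()
    return Fernet(base64.urlsafe_b64encode(material))

def provision(fernet: Fernet, device_info: dict) -> tuple[bytes, int]:
    token = os.urandom(16).hex()
    check = zlib.crc32(os.urandom(8))            # dynamic check code
    blob = fernet.encrypt(json.dumps({"token": token, "crc": check, **device_info}).encode())
    return blob, check                           # blob is written into the first body's NFC module

def verify(fernet: Fernet, blob: bytes, expected_crc: int, device_info: dict) -> bool:
    fields = json.loads(fernet.decrypt(blob))
    return fields["crc"] == expected_crc and all(fields[k] == v for k, v in device_info.items())

info = {"model": "first-body", "mac": "XX-XX-XX-XX-XX"}
f = master_password(os.urandom(16), info)
nfc_blob, current_crc = provision(f, info)
print(verify(f, nfc_blob, current_crc, info))       # True: consistency check passes
print(verify(f, nfc_blob, current_crc ^ 1, info))   # False: stale check code -> blacklist
```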
  • the distributed system consists of at least two first bodies and one second body.
  • the at least two first bodies are referred to here as first body A to first body N, respectively.
  • the second body may be, for example, a smart camera
  • the first body A may be, for example, a large-screen display device
  • the first body B may be, for example, a mobile phone
  • the first body N may be, for example, a tablet computer
  • the second body can be fixed on the first body A and establish a wired data connection with it; the first body B to the first body N can be held by different local family members and can each establish a wireless data connection with the second body.
  • on the one hand, the second body can receive the video image Vr from the remote home device through the network and distribute the video image Vr to the first body A to the first body N; the first body A to the first body N can use their own microcontrollers to decode the received video image Vr and display it on their display screens.
  • on the other hand, the cameras of the first body B to the first body N can respectively capture the video images Vb to Vn of the family members holding them; these video images Vb to Vn are encoded by the respective first bodies and sent to the second body.
  • after the second body receives the video images Vb to Vn, it can decode, crop, scale and arrange them, superimpose them with the video image Va collected by the second body itself to obtain a video image Vs, and finally encode Vs and send it through the network to the remote home device.
  • in this way, the user of the remote home device can see multiple local family members in one video image Vs at the same time, and these local family members can be located in different places or different rooms of the home without having to gather in front of the second body, which greatly improves the user experience of multi-person video calls.
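  • The arrange-and-superimpose step can be illustrated with the sketch below, which assumes the Pillow imaging library and a simple fixed grid; cropping and letterboxing are omitted for brevity.

```python
# Sketch: scale the decoded frames Va..Vn to equal tiles and paste them into one
# canvas Vs, which would then be encoded and sent to the remote home device.

from PIL import Image

def compose_grid(frames: list[Image.Image], cols: int = 2, tile: tuple[int, int] = (640, 360)) -> Image.Image:
    rows = -(-len(frames) // cols)                      # ceiling division
    canvas = Image.new("RGB", (cols * tile[0], rows * tile[1]), "black")
    for i, frame in enumerate(frames):
        scaled = frame.resize(tile)                     # crop/letterbox logic omitted
        canvas.paste(scaled, ((i % cols) * tile[0], (i // cols) * tile[1]))
    return canvas

if __name__ == "__main__":
    # stand-ins for Va (local camera) and Vb..Vn received from the first bodies
    frames = [Image.new("RGB", (1280, 720), c) for c in ("gray", "navy", "teal", "maroon")]
    compose_grid(frames).save("vs_composite.png")
```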
  • in order that the display screen of each first body presents the best display effect, the following should be considered: the second body may establish data connections with different first bodies, and the display screen resolutions and refresh rates of these first bodies are likely to differ; if the second body sends images of the same resolution and frame rate to all of these first bodies, the display screen resolution and refresh rate of some of the first bodies will inevitably fail to match the resolution and frame rate of the received images, which degrades the display effect.
  • the embodiment of the present application also provides an image adaptation method. As shown in FIG. 30 , the method may include the following steps:
  • Step S101 the second fuselage acquires the display screen parameters of the first fuselage.
  • the parameters of the display screen may include at least one of a resolution and a refresh rate of the display screen.
  • step S101 may occur during the process of establishing a data connection between the first body and the second body.
  • the microcontroller of the first body can carry the display screen parameters in the beacon frame broadcast by the Wi-Fi module.
  • the processor of the second fuselage can search for beacon frames through the Wi-Fi module to obtain display parameters.
  • step S101 may occur after the first body establishes a data connection with the second body.
  • the processor of the second body may send a request message for acquiring parameter information to the microcontroller of the first body, and after receiving the request message, the microcontroller of the first body sends the display screen parameters to the processor of the second body.
  • the second body may maintain a display screen parameter list, which records the display screen parameters of the first bodies known to the second body.
  • when the second body connects to a first body whose display screen parameters are unknown, the processor of the second body can save the newly obtained display screen parameters of that first body in the display screen parameter list.
  • when the second body connects to a first body whose display screen parameters are already recorded, the processor of the second body can obtain the known display screen parameters of that first body directly from the parameter list.
  • the display parameter list may be in the form shown in Table 1 below:
  • Table 1

| Device ID | Display resolution | Display refresh rate |
| --- | --- | --- |
| XX-XX-XX-XX-XX | 1920×1080 (1080P) | 60 Hz |
| YY-YY-YY-YY | 2340×1080 | 90 Hz |
| ZZ-ZZ-ZZ-ZZ-ZZ | 1280×720 (720P) | 60 Hz |
| ... | ... | ... |
  • the device ID may be, for example, the media access control (MAC) address, the service set identifier (SSID), the international mobile equipment identity (IMEI) of the first body, or other information that can uniquely identify the first body.
  • Display resolution refers to the number of rows × columns of display pixels.
  • Display refresh rate refers to the number of image frames per second the display can display, and 60 hertz (Hz) means the display can display up to 60 frames per second. Among them, the resolution of 1920 ⁇ 1080 and the refresh rate of 60Hz can also be represented by 1920 ⁇ 1080@60Hz.
  • Step S102 the second body generates an image to be displayed according to the parameters of the display screen.
  • at the stage of drawing the image, the processor of the second body can directly draw an image whose resolution and refresh rate match the display screen of the first body; if the image to be displayed is media content such as a video or picture, the processor of the second body can perform adjustment operations such as scaling, cropping, frame insertion and compression on the image to convert it into an image that matches the display screen resolution and refresh rate of the first body.
  • Step S103 the second body sends the adjusted image to the first body for display.
  • the second body establishes data connections with the first body A, the first body B and the first body C, where the display screen parameters of the first body A are 1920×1080@60Hz, those of the first body B are 2340×1080@90Hz, and those of the first body C are 1280×720@60Hz.
  • when the second body needs to send a video whose original quality is 1920×1080@60Hz to the three first bodies for display, then for the first body A, since its display screen parameters are the same as the original quality of the video, the second body does not need to process the video and can send it directly to the first body A for display.
  • for the first body B, the second body can add a black background of 210×1080 pixels on each of the left and right sides of the video, so that the resolution of the video is expanded to 2340×1080@60Hz, and then send the video to the first body B for display.
  • for the first body C, the second body can compress the resolution of the video to 1280×720@60Hz and then send the video to the first body C for display. It can be seen that the second body can make different adjustments to the video to be displayed according to the display screen parameters of each first body, so as to ensure that the video is adapted to the display screen parameters of each first body.
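  • The three adjustments in this example can be reproduced with the short sketch below (Pillow is again assumed only for illustration): pass the frame through unchanged, pad it with 210-pixel black bars on each side, or scale it down.

```python
# Worked sketch of the adaptation rules: compare the source frame size with the
# target display parameters, then pass through, pad with black bars, or scale down.

from PIL import Image

def adapt(frame: Image.Image, target_w: int, target_h: int) -> Image.Image:
    w, h = frame.size
    if (w, h) == (target_w, target_h):
        return frame                                        # first body A: send as-is
    if target_w >= w and target_h >= h:
        canvas = Image.new("RGB", (target_w, target_h), "black")
        canvas.paste(frame, ((target_w - w) // 2, (target_h - h) // 2))
        return canvas                                       # first body B: 210-pixel bars left/right
    return frame.resize((target_w, target_h))               # first body C: compress to 1280x720

src = Image.new("RGB", (1920, 1080), "white")               # stand-in for a 1920x1080@60Hz video frame
for name, (tw, th) in {"A": (1920, 1080), "B": (2340, 1080), "C": (1280, 720)}.items():
    print(f"first body {name}: {adapt(src, tw, th).size}")
```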
  • an embodiment of the present application also provides a device dormancy method. As shown in FIG. 32, the method may include the following steps:
  • Step S201 in the case that the first body and the second body have established a data connection through Wi-Fi and Bluetooth, if the first body does not obtain a user operation within the first preset time period, the first body enters the first sleep state.
  • the first body turns off the display screen and enters a lock screen state, but maintains a data connection with the second body through Wi-Fi and Bluetooth.
  • the processor of the second body turns off some of its physical cores and keeps only a few low-power physical cores for performing background tasks, so as to save power.
  • the "user operation" may include, for example, the user picking up the first body, pressing a button of the first body, unlocking the first body, and performing swipe and tap operations on the display screen while the first body is unlocked. These user operations can be sensed through the sensors, buttons and touch panel of the first body and transmitted to the microcontroller of the first body. If the microcontroller does not acquire a user operation within the first preset time period, the microcontroller may control the first body to enter the first sleep state.
  • Step S202 after the first body enters the first sleep state, if the first body does not acquire a user operation within the second preset time period, the first body enters the second sleep state.
  • the microcontroller may control the first body to enter the second sleep state.
  • the first body keeps the display screen off and remains in the lock screen state, and disconnects either the Wi-Fi data connection or the Bluetooth data connection with the second body, so that the first body and the second body maintain a data connection through only one of Wi-Fi and Bluetooth, to further save power.
  • Step S203 after the first body enters the second sleep state, if the first body does not acquire a user operation within the third preset time period, the first body enters the third sleep state.
  • the microcontroller may control the first body to enter the third sleep state.
  • the first body keeps the display screen off and remains in the lock screen state, and disconnects both the Wi-Fi and Bluetooth data connections with the second body, so that no data connection is maintained between the first body and the second body, to further save power.
  • Step S301 in the case where a data connection is established between the first body and the second body through Wi-Fi or Bluetooth, if the first body does not obtain a user operation within a first preset time period, the first body enters the first sleep state.
  • the first body turns off the display screen and enters the lock screen state, but maintains the data connection with the second body.
  • Step S302 after the first body enters the first sleep state, if the first body does not acquire a user operation within the second preset time period, the first body enters the third sleep state.
  • the first body keeps the display screen off and the lock screen state, and disconnects the data connection with the second body, so as to further save energy consumption.
  • the device sleep method provided by the embodiment of the present application sets three sleep states for the first body according to how long the user has not operated the first body, and disconnects the data connections between the first body and the second body level by level; this enables the first body to keep synchronizing data with the second body during short-term sleep, so that it is ready when the user returns, and to disconnect from the second body during long-term sleep, so as to save power and prolong battery life.
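  • The three-level policy can be summarized by the small state machine below; the timeout values are placeholders for the first, second and third preset durations, and keeping Bluetooth as the single remaining link in the second sleep state is only one possible choice.

```python
# Sketch of steps S201-S203: map accumulated idle time to a sleep state and the
# set of data links kept alive with the second body.

IDLE_THRESHOLDS_S = {"sleep1": 30, "sleep2": 300, "sleep3": 1800}   # example values only

def sleep_state(idle_seconds: float) -> dict:
    if idle_seconds < IDLE_THRESHOLDS_S["sleep1"]:
        return {"state": "awake", "links": {"wifi", "bluetooth"}}
    if idle_seconds < IDLE_THRESHOLDS_S["sleep2"]:
        return {"state": "sleep1", "screen": "off", "links": {"wifi", "bluetooth"}}
    if idle_seconds < IDLE_THRESHOLDS_S["sleep3"]:
        return {"state": "sleep2", "screen": "off", "links": {"bluetooth"}}   # keep only one link
    return {"state": "sleep3", "screen": "off", "links": set()}               # drop all links

for idle in (5, 60, 600, 7200):
    print(idle, sleep_state(idle))
```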
  • an embodiment of the present application also provides a device wake-up method. As shown in FIG. 34 , the method may include the following steps:
  • Step S401 when the first body detects in the sleep state that it is touched by the user, it establishes a data connection with the second body.
  • the first body may be equipped with an ultrasonic sensor, a pressure sensor, a capacitive sensor, an inductive sensor, a millimeter-wave sensor, or other sensors and sensor arrays thereof to detect user touch.
  • an ultrasonic sensor can measure the distance of an object by emitting ultrasonic waves and receiving the reflected echoes, so it can sense the user's palm approaching or touching the first body; a capacitive sensor produces a capacitance change when an external conductor approaches, so it can also sense the user's palm approaching or touching the first body; an inductive sensor produces an inductance change when an external conductor approaches, so it too can sense the user's palm approaching or touching the first body.
  • the microcontroller can control the first body to establish a data connection with the second body.
  • FIG. 35 is a schematic diagram of sensor distribution of the first fuselage provided by an embodiment of the present application.
  • the first body 100 may include multiple sensors, such as sensor S1, sensor S2, sensor S3, sensor S4 to sensor SN; these sensors may form a sensor array composed of sensors of the same type, or they may be different types of sensors.
  • the sensors S1 to SN can be evenly distributed on both sides of the display screen of the first body, so that when the user approaches or touches the first body from any direction, they can be monitored by the sensors.
  • depending on the sleep state of the first body, step S401 can be implemented in different ways, for example:
  • if the first body already maintains data connections with the second body through both Wi-Fi and Bluetooth, step S401 can be omitted;
  • if the first body maintains a data connection with the second body through only one of Wi-Fi and Bluetooth, the microcontroller can enable the other data connection method, so that the first body and the second body establish data connections through Wi-Fi and Bluetooth at the same time;
  • if no data connection is currently maintained, the microcontroller can enable the Wi-Fi and Bluetooth modules, so that the first body and the second body establish data connections through Wi-Fi and Bluetooth at the same time.
  • Step S402 the second body acquires the display screen parameters of the first body, and generates image data according to the display screen parameters.
  • for example, after the first body and the second body establish a data connection, the processor of the second body may send a request message to the microcontroller of the first body to obtain the display screen parameters, and generate image data according to the display screen parameters.
  • alternatively, the processor of the second body may acquire the display screen parameters from the beacon frame broadcast by the first body during the process of establishing the data connection between the first body and the second body, and generate the first image data according to the display screen parameters.
  • Step S403 when the first body detects that it is lifted by the user, it sends a wake-up message to the second body.
  • the first body may include an acceleration sensor and/or a gyro sensor.
  • the acceleration sensor may be used to measure the acceleration of the first body
  • the gyro sensor may be used to measure the angular acceleration of the first body.
  • the acceleration sensor and/or the gyroscope sensor can transmit the measured acceleration and/or angular acceleration to the microcontroller, so that the microcontroller can judge whether the first body has been lifted by the user; if the first body has been lifted by the user, the microcontroller can send a wake-up message to the processor of the second body.
  • likewise, when the microcontroller detects that a button of the first body is pressed, it can also send a wake-up message to the processor of the second body.
  • Step S404 the second body lights up the display screen according to the wake-up message, and sends the image data to the display screen for display.
  • after the processor of the second body receives the wake-up message, it can send a screen-on message and the generated image data to the microcontroller of the first body; after the microcontroller obtains the screen-on message and the image data, it lights up the display screen according to the screen-on message, decodes the image data and sends it to the display screen for display.
  • in this way, when the first body detects the user's touch, it already starts the interaction with the second body required for waking up and obtains the image data to be displayed once the screen is lit, so that the first body can wake up quickly when it detects that the user has lifted it, which improves the user experience.
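  • The touch-then-lift sequence of steps S401 to S404 can be sketched as the event flow below; the classes, messages and display parameter string are illustrative placeholders rather than interfaces defined in this application.

```python
# Sketch: a touch re-establishes the link and lets the second body pre-render the
# image, so the later "lifted" event only needs to light the screen.

class SecondBody:
    def connect(self):
        print("data connection established")
    def prepare_image(self, display_params: str) -> str:
        return f"lock-screen frame for {display_params}"
    def wake(self):
        print("wake message received, sending screen-on message")

class FirstBodyMCU:
    def __init__(self, second_body: SecondBody):
        self.second_body = second_body
        self.pending_image = None

    def on_touch(self):                     # ultrasonic/capacitive/inductive sensor fires
        self.second_body.connect()
        self.pending_image = self.second_body.prepare_image(display_params="1920x1080@60Hz")

    def on_lift(self):                      # accelerometer/gyroscope detects lifting
        self.second_body.wake()
        print("screen on, showing:", self.pending_image)

mcu = FirstBodyMCU(SecondBody())
mcu.on_touch()   # user's palm approaches/touches the first body
mcu.on_lift()    # user picks it up: screen lights immediately with the prefetched image
```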
  • Embodiments of the present application provide a chip system, where the chip system includes a microcontroller, which is used to support the above-mentioned first body to implement the functions involved in the above-mentioned embodiments, such as displaying image data on a display screen.
  • An embodiment of the present application provides a chip system, where the chip system includes a processor for supporting the second body to implement the functions involved in the foregoing embodiments, such as generating image data.
  • Embodiments of the present application also provide a computer-readable storage medium in which instructions are stored; when the instructions are run on the microcontroller of the first body, the microcontroller is enabled to implement the functions of the first body involved in the above aspects and their implementations.
  • Embodiments of the present application further provide a computer-readable storage medium in which instructions are stored; when the instructions are run on the processor of the second body, the processor is enabled to implement the functions of the second body involved in the above aspects and their implementations.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Telephone Function (AREA)

Abstract

The present application provides an electronic device and a distributed system. The electronic device comprises: a first body and a second body which are separable, a processor, a microcontroller, and a display screen; the microcontroller and the display screen are provided on the first body, and the processor is provided on the second body; the first body and the second body establish a data connection in a wired or wireless manner; the processor is configured to generate first image data and send the first image data to the microcontroller; the microcontroller is configured to output the first image data to the display screen for display. Compared with conventional electronic devices, both the first body and the second body omit some components, so each body is smaller and thinner, convenient for a user to hold, and able to accommodate a battery of larger capacity. In daily use, the user only needs to hold the first body to use the common functions of the electronic device, while the second body may be placed in a backpack or pocket instead of being held, thus improving the user experience.

Description

An electronic device and distributed system
This application claims priority to the Chinese patent application No. 202010704193.4, filed with the China National Intellectual Property Administration on July 21, 2020 and entitled "An Electronic Device and Distributed System", the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the field of terminal technologies, and in particular, to an electronic device and a distributed system.
Background
With the development of semiconductor technology, wireless data connection (wireless communication) technology and Internet technology, intelligent terminal devices are entering people's lives in ever greater numbers and have gradually become a necessity. In order to realize these various services and functions, an intelligent terminal device is usually equipped with a full-featured chip system, giving it the complete ability to run applications (APPs), display images and interact with the user.
In recent years, in order to improve the user experience and meet the needs of more scenarios, the performance of intelligent terminal devices has been improved continuously. Taking a mobile phone as an example, the improvement is mainly reflected in: processors with more advanced architectures and more transistors to increase computing performance; camera modules with larger and more numerous sensors to improve photographing performance; and larger, higher-resolution screens to improve the viewing experience. However, as the performance of intelligent terminal devices improves, their power consumption also rises, so they need larger-capacity batteries to ensure battery life; and the larger the battery capacity, the larger and heavier the terminal device, which seriously affects the experience of holding it.
Moreover, with the gradual popularization of the fifth-generation mobile communication technology (5th generation mobile networks, 5G), the power consumption of terminal devices will increase further, posing an even greater challenge to designs that must balance size, weight, battery life and appearance.
Summary of the Invention
The embodiments of the present application provide an electronic device and a distributed system, which can reduce the volume and weight of the electronic device while achieving longer battery life, making the device easier for the user to hold and improving the user experience.
To achieve the above purpose, the embodiments of the present application provide the following technical solutions:
In a first aspect, an embodiment of the present application provides an electronic device, including: a first body and a second body that are separable, a processor, a microcontroller and a display screen; the microcontroller and the display screen are arranged on the first body, and the processor is arranged on the second body; the first body and the second body establish a data connection in a wired or wireless manner; the processor is configured to generate first image data and send the first image data to the microcontroller; and the microcontroller is configured to output the first image data to the display screen for display.
According to the electronic device provided by the embodiments of the present application, the first body includes the display screen and the microcontroller and is mainly used to implement interactive functions of the electronic device such as image display, while the second body includes the processor and is mainly used to implement the computing functions of the electronic device. Compared with a traditional electronic device, the first body and the second body each omit some components, so each body can be smaller and thinner and more convenient for the user to hold. In daily use, the user only needs to hold the first body to use the common functions of the electronic device, and the second body can be placed in a backpack or pocket without being held, which improves the user experience. In addition, since both bodies omit some components, when their thickness is the same as that of a traditional electronic device, they can accommodate larger-capacity batteries and achieve longer battery life.
In an implementation, the first body includes a touch panel, and the touch panel is used to detect the user's touch input on the display screen; the microcontroller is configured to send the touch input to the processor; and the processor is configured to generate the first image data according to the touch input.
In an implementation, the first body includes buttons, and the buttons are used to detect the user's button input; the microcontroller is configured to send the button input to the processor; and the processor is configured to generate the first image data according to the button input.
In an implementation, the first body includes a sensor module; the microcontroller is configured to send the sensor data collected by the sensor module to the processor; and the processor is configured to generate the first image data according to the sensor data.
In an implementation, the first body includes a camera, and the camera is used to collect second image data; the microcontroller is configured to send the second image data to the display screen for display; and the microcontroller is further configured to encode the second image data and send it to the second body.
In an implementation, the first body and the second body further include wireless communication modules; the wireless communication modules include a Bluetooth module and/or a Wi-Fi module; and the first body and the second body establish a data connection through the Bluetooth module and/or the Wi-Fi module.
In an implementation, the first body and the second body further include external interfaces; when a cable is used to connect the external interfaces of the first body and the second body, the first body and the second body establish a wired data connection.
In an implementation, the processor is configured to acquire display screen parameters of the display screen, where the display screen parameters include the resolution and/or refresh rate of the display screen; the processor adjusts the first image data according to the display screen parameters, where the adjustment includes at least one of scaling, cropping, frame insertion and compression of the first image data; and the processor sends the adjusted first image data to the microcontroller.
In an implementation, the microcontroller is configured to broadcast a beacon frame, and the beacon frame includes the display screen parameters; the processor is configured to acquire the display screen parameters from the beacon frame.
In an implementation, the processor is configured to send a first request message to the microcontroller; and the microcontroller is configured to send the display screen parameters to the processor according to the first request message.
在一种实现方式中,在第一机身与第二机身通过Wi-Fi模块和蓝牙模块建立数据连接的情况下,如果微控制器在第一预设时长内没有获取到用户操作,则微控制器控制第一机身进入第一休眠状态,其中,第一休眠状态包括第一机身熄灭显示屏并且锁屏,以及与第二机身通过Wi-Fi模块和蓝牙模块保持数据连接。In an implementation manner, when the first body and the second body establish a data connection through the Wi-Fi module and the Bluetooth module, if the microcontroller does not acquire the user operation within the first preset time period, the The microcontroller controls the first body to enter a first sleep state, wherein the first sleep state includes that the first body turns off the display screen and locks the screen, and maintains a data connection with the second body through the Wi-Fi module and the Bluetooth module.
在一种实现方式中,在第一机身进入第一休眠状态之后,如果微控制器在第二预设时长内没有获取到用户操作,则微控制器控制第一机身进入到第二休眠状态,其中,第二休眠状态包括第一机身熄灭显示屏并且锁屏,以及与第二机身通过Wi-Fi模块或蓝牙模块保持数据连接。In an implementation manner, after the first body enters the first sleep state, if the microcontroller does not acquire a user operation within the second preset time period, the microcontroller controls the first body to enter the second sleep state The second sleep state includes that the first body turns off the display screen and locks the screen, and maintains a data connection with the second body through a Wi-Fi module or a Bluetooth module.
In an implementation manner, after the first body enters the second sleep state, if the microcontroller does not detect a user operation within a third preset duration, the microcontroller controls the first body to enter a third sleep state, where the third sleep state includes the first body turning off the display screen and locking the screen, and disconnecting the data connection with the second body.
It can be understood that the device sleep method provided by the embodiments of the present application sets three sleep states for the first body according to the length of time for which the user has not operated the first body, and disconnects the data connection between the first body and the second body level by level. This allows the first body to keep synchronizing data with the second body during a short sleep, so that it is ready when the user uses it again, and also allows the first body to disconnect from the second body during a long sleep, saving power and prolonging battery life.
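A minimal sketch of this three-level sleep policy is given below, assuming concrete example values for the three preset durations and a cumulative idle-time rule; which single link is kept in the second sleep state is also an assumption, since the description above only states that the Wi-Fi module or the Bluetooth module keeps the connection.

```c
/* Minimal sketch of the three-level sleep policy; the state names, the durations
   and the "keep Bluetooth in the second state" choice are assumptions. */
#include <stdio.h>
#include <stdbool.h>

typedef enum { AWAKE = 0, SLEEP_1, SLEEP_2, SLEEP_3 } sleep_state_t;

typedef struct {
    sleep_state_t state;
    bool wifi_connected;   /* Wi-Fi link to the second body */
    bool bt_connected;     /* Bluetooth link to the second body */
} first_body_t;

/* Example values for the first, second and third preset durations (seconds). */
enum { T1 = 30, T2 = 300, T3 = 3600 };

/* Called periodically by the microcontroller with the time since the last
   user operation; sleep deepens as the idle time keeps growing. */
void update_sleep_state(first_body_t *b, unsigned idle_seconds)
{
    if (idle_seconds >= T1 + T2 + T3) {
        b->state = SLEEP_3;          /* screen off, locked, data connection dropped */
        b->wifi_connected = false;
        b->bt_connected = false;
    } else if (idle_seconds >= T1 + T2) {
        b->state = SLEEP_2;          /* screen off, locked, keep only one link */
        b->wifi_connected = false;   /* assumption: keep the lower-power Bluetooth link */
    } else if (idle_seconds >= T1) {
        b->state = SLEEP_1;          /* screen off, locked, keep Wi-Fi and Bluetooth */
    } else {
        b->state = AWAKE;
    }
}

int main(void)
{
    first_body_t b = { AWAKE, true, true };
    unsigned samples[] = { 10, 40, 400, 5000 };
    for (unsigned i = 0; i < sizeof samples / sizeof samples[0]; i++) {
        update_sleep_state(&b, samples[i]);
        printf("idle=%us -> state=%d wifi=%d bt=%d\n",
               samples[i], (int)b.state, b.wifi_connected, b.bt_connected);
    }
    return 0;
}
```

Using cumulative thresholds mirrors the description above: each deeper sleep state is only reached from the previous one as the idle time keeps growing.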
In an implementation manner, a plurality of sensors are distributed on both sides of the display screen, and some or all of the plurality of sensors are used to detect whether the first body is touched by the user and whether it is picked up by the user; the microcontroller is configured to, when the first body is touched by the user in the third sleep state, control the first body to establish a data connection with the second body; the processor is configured to, after the first body and the second body establish the data connection, acquire the display screen parameters of the display screen and generate the first image data according to the display screen parameters; the microcontroller is configured to send a first wake-up message to the processor when the first body is lifted by the user; and the processor is configured to, according to the first wake-up message, light up the display screen and send the first image data to the microcontroller.
It can be understood that, in the device wake-up method provided by the embodiments of the present application, as soon as the first body detects the user's touch it starts the interaction with the second body required for waking up and obtains the image data to be displayed once the screen is lit, so that the first body can wake up quickly when it detects that the user has lifted it, improving the user experience.
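The two-stage wake-up path can be sketched as follows; the split of work between the microcontroller and the processor is summarized only in comments, and every function and field name here is illustrative rather than part of this application.

```c
/* Illustrative sketch of the two-stage wake-up path: a touch starts the
   reconnection and frame preparation early, a lift actually lights the screen. */
#include <stdio.h>
#include <stdbool.h>

typedef struct {
    bool connected;    /* data connection to the second body re-established */
    bool frame_ready;  /* first image data prepared by the processor */
    bool screen_on;
} wake_ctx_t;

/* Stage 1: the user touches the first body while it is in the third sleep state. */
void on_touch(wake_ctx_t *c)
{
    if (!c->connected) {
        c->connected = true;     /* MCU re-establishes the Wi-Fi/Bluetooth link */
        c->frame_ready = true;   /* processor reads the panel parameters and renders */
    }
}

/* Stage 2: the user lifts the device; the MCU sends the first wake-up message. */
void on_lift(wake_ctx_t *c)
{
    if (c->connected && c->frame_ready)
        c->screen_on = true;     /* processor lights the screen and pushes the frame */
}

int main(void)
{
    wake_ctx_t c = { false, false, false };
    on_touch(&c);   /* picking the device up normally begins with a touch */
    on_lift(&c);
    printf("connected=%d frame_ready=%d screen_on=%d\n",
           c.connected, c.frame_ready, c.screen_on);
    return 0;
}
```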
In a second aspect, an embodiment of the present application provides a distributed system, including a plurality of first bodies and one second body as provided in the first aspect of the embodiments of the present application and its respective implementation manners; the plurality of first bodies and the second body establish data connections in a wired or wireless manner.
In a third aspect, an embodiment of the present application provides a chip system, where the chip system includes a microcontroller configured to support the above first body in implementing the functions involved in the above aspects and their implementation manners, for example, displaying image data on the display screen.
第四方面,本申请实施例提供了一种芯片系统,该芯片系统包括处理器,用于支持上述第二机身实现上述各方面及其实现方式所涉及的功能,例如在生成图像数据。In a fourth aspect, an embodiment of the present application provides a chip system, where the chip system includes a processor for supporting the above-mentioned second body to implement the functions involved in the above-mentioned aspects and implementation manners, such as generating image data.
In a fifth aspect, the embodiments of the present application further provide a computer-readable storage medium storing instructions which, when run on the microcontroller of the first body, cause the microcontroller to implement the functions of the first body involved in the above aspects and their implementation manners.
In a sixth aspect, the embodiments of the present application further provide a computer-readable storage medium storing instructions which, when run on the processor of the second body, cause the processor to implement the functions of the second body involved in the above aspects and their implementation manners.
Description of Drawings
FIG. 1 is a block diagram of the system structure of a current smart terminal device;
FIG. 2 is a schematic diagram of a scenario in which two terminal devices currently share data;
FIG. 3 is a scene diagram of multiple terminal devices currently interacting;
FIG. 4 is a schematic structural diagram of the first body;
FIG. 5 is a schematic structural diagram of the second body;
FIG. 6 is a schematic diagram of the physical connection between the first body and the second body;
FIG. 7 is a schematic diagram of the first body and the second body used in combination in a separated state;
FIG. 8 is another schematic diagram of the first body and the second body used in combination in a separated state;
FIG. 9 is a logical block diagram of data interaction when the first body and the second body are used in combination;
FIG. 10 is a logical block diagram of data interaction when the first body and the second body are used in combination;
FIG. 11 is a schematic diagram of the thicknesses of the first body, the second body, and a mobile phone of a traditional form;
FIG. 12 is a schematic diagram of the first body and the second body establishing a wireless data connection through a Wi-Fi link;
FIG. 13 is a schematic diagram of the second body wirelessly charging the first body;
FIG. 14 is a schematic diagram of one device form of the first body and the second body;
FIG. 15 is a schematic diagram of another device form of the first body and the second body;
FIG. 16 is a schematic diagram of another device form of the first body and the second body;
FIG. 17 is a schematic diagram of another device form of the first body and the second body;
FIG. 18 is a schematic diagram of the second body displaying a UI image on the display screen of the first body;
FIG. 19 is a schematic diagram of one usage scenario of the camera and the microphone of the second body;
FIG. 20 is a schematic diagram of another usage scenario of the camera and the microphone of the second body;
FIG. 21 is a schematic diagram of the speakers of the first body and the second body playing audio together;
FIG. 22 is a flowchart of a shooting control method provided by an embodiment of the present application;
FIG. 23 is a schematic diagram of a scene in which the first body controls the second body to take a photo;
FIG. 24 is a schematic diagram of the second body performing multi-focus shooting;
FIG. 25 is a schematic diagram of the second body displaying the outline of a person on the display screen of the first body;
FIG. 26 is a schematic diagram of a distributed system;
FIG. 27 is a schematic diagram of one usage scenario of the distributed system;
FIG. 28 is a flowchart of a device verification method provided by an embodiment of the present application;
FIG. 29 is a schematic diagram of another usage scenario of the distributed system;
FIG. 30 is a flowchart of an image adaptation method provided by an embodiment of the present application;
FIG. 31 is an example diagram of an image adaptation method provided by an embodiment of the present application;
FIG. 32 is a flowchart of a device sleep method provided by an embodiment of the present application;
FIG. 33 is a flowchart of another device sleep method provided by an embodiment of the present application;
FIG. 34 is a flowchart of a device wake-up method provided by an embodiment of the present application;
FIG. 35 is a schematic diagram of the sensor distribution of the first body provided by an embodiment of the present application.
Detailed Description
随着半导体技术、无线数据连接技术(wireless communication)和互联网技术的发展,智能终端设备越来越多地走进人们的生活中,逐渐成为人们生活中的必需品。常见的智能终端设备例如包括手机、智能电视、可穿戴设备(如:智能手表、手环)、平板电脑、智能电视、智慧屏、智能投影仪、智能音箱、虚拟现实(virtual reality,VR)设备、智能驾驶计算平台等。各种各样的智能终端设备在学习、娱乐、生活、健康管理、办公、出行等领域为用户提供了多方面的服务。以用户随身携带并且使用最广泛的手机为例,用户可以不限时间不限地点地使用手机享受的各类服务和功能,例如拨打电话、即时数据连接、视频聊天、拍照摄像、定位导航、移动支付、网络购物、移动办公、多媒体服务等。With the development of semiconductor technology, wireless data connection technology (wireless communication) and Internet technology, more and more intelligent terminal devices have entered people's lives and have gradually become a necessity in people's lives. Common smart terminal devices include, for example, mobile phones, smart TVs, wearable devices (such as smart watches, wristbands), tablet computers, smart TVs, smart screens, smart projectors, smart speakers, and virtual reality (VR) devices , intelligent driving computing platform, etc. A variety of intelligent terminal devices provide users with various services in the fields of learning, entertainment, life, health management, office, and travel. Taking the most widely used mobile phone carried by users as an example, users can use the various services and functions enjoyed by the mobile phone at any time and place, such as making calls, instant data connection, video chat, taking pictures, positioning and navigation, mobile Payment, online shopping, mobile office, multimedia services, etc.
In order to realize the above services and functions, a smart terminal device is usually equipped with a full-featured chip system. Here, "full-featured" means that, relying on its own chip system, the smart terminal device has the complete ability to run application programs (APPs), display images, and interact with the user.
图1是目前智能终端设备的系统结构框图。一般来说,具备全功能的芯片系统的智能终端设备如图1所示至少需要包括以下部分:FIG. 1 is a block diagram of the system structure of the current intelligent terminal equipment. Generally speaking, an intelligent terminal device with a full-featured chip system, as shown in Figure 1, needs to include at least the following parts:
中央处理器(central processing unit,CPU)101,包括基于ARM架构的CPU、基于 X86架构的CPU或者基于MIPS架构的CPU等。CPU 101主要用于运行应用程序,执行相关的程序指令和处理数据等。A central processing unit (central processing unit, CPU) 101 includes a CPU based on an ARM architecture, a CPU based on an X86 architecture, or a CPU based on a MIPS architecture. The CPU 101 is mainly used to run application programs, execute related program instructions, process data, and the like.
图形处理器(graphic processing unit,GPU)102,用于执行绘图运算工作,以渲染应用程序或者游戏需要在显示屏上显示的图像。A graphics processor (graphic processing unit, GPU) 102 is used to perform graphics operations to render images that the application or game needs to display on the display screen.
The video codec 103 is used to compress (encode) or decompress (decode) digital video in different formats, so as to facilitate storage, playback, or transmission on the smart terminal device. Common digital video formats may include, for example, the H.264 and H.265 standard formats of advanced video coding (AVC).
显示子系统(display subsystem,DSS)104,例如可以包括显示控制器(display controller,DISPC)等,DSS 104可以对不同应用程序的界面、状态栏等图像进行叠加、变换等操作,并最终输出到显示屏进行显示。The display subsystem (DSS) 104 may include, for example, a display controller (DISPC), etc. The DSS 104 can perform operations such as overlaying and transforming images such as interfaces and status bars of different applications, and finally output to display on the display.
The wireless communication module 105 may include one or more chip devices, such as a baseband processor, a modem, a Wi-Fi chip, and a Bluetooth chip, and is used to provide the smart terminal device with at least one data connection function among cellular network, Wi-Fi, Bluetooth, and the like. It can be understood that, in order to transmit the above wireless data connection signals, the smart terminal device is further provided with corresponding components such as antennas and power amplifiers.
The peripheral interface 106 may include, for example, a mobile industry processor interface-display serial interface specification (MIPI-DSI) interface and a DisplayPort (DP) interface for supporting image display on the display screen, a mobile industry processor interface-camera serial interface specification (MIPI-CSI) interface for supporting a camera, and a serial peripheral interface (SPI) interface and an inter-integrated circuit (I2C) bus interface for supporting sensors, etc.
近年来,为了提高用户的使用体验,满足更多场景的需求,智能终端设备的性能不断提升。以手机为例,其性能的提升主要体现在:采用了架构更先进、晶体管数量更多的处理器以提升计算性能;采用了传感器尺寸更大、数量更多的摄像头模组以提升拍照性能;采用尺寸更大、分辨率更高的屏幕以提升用户观感等。然而,随着智能终端设备性能的提升,其功耗也越来越高,因此智能终端设备需要配备容量更大的电池以保证续航,而电池容量越大,终端设备的体积就越大,重量也越重,对于这样一种主要由用户手持操作的设备来说,过大的尺寸和过重的重量会十分影响用户使用体验。In recent years, in order to improve the user experience and meet the needs of more scenarios, the performance of smart terminal devices has been continuously improved. Taking a mobile phone as an example, its performance improvement is mainly reflected in: using a processor with a more advanced architecture and more transistors to improve computing performance; using a larger sensor size and more camera modules to improve camera performance; Use larger, higher-resolution screens to enhance user perception, etc. However, with the improvement of the performance of intelligent terminal equipment, its power consumption is also getting higher and higher, so intelligent terminal equipment needs to be equipped with a battery with a larger capacity to ensure battery life. It is also heavier. For such a device that is mainly operated by a user by hand, the excessive size and excessive weight will greatly affect the user experience.
Moreover, with the gradual popularization of the fifth-generation mobile data connection technology (5th generation mobile networks, 5G), more and more smart terminal devices will support 5G networks. To this end, a smart terminal device needs to add components such as a 5G chip, 5G antennas, and power amplifiers. However, adding these components also brings some new problems. For example: (1) The power consumption of the smart terminal device will further increase; to guarantee battery life, a battery with a larger capacity must be used, which further increases the size and weight of the device and affects the user experience. (2) The increase in power consumption also places higher demands on the heat dissipation design inside the body; if the heat dissipation capability is not sufficient for the device's high power consumption, the performance of the device will be restricted, affecting the user experience. (3) The addition of 5G antennas also brings greater challenges to the antenna design inside the device: the antenna design must not excessively increase the volume of the device while still allowing the antennas to perform well, and if the antenna design is poor, antenna performance and supported frequency bands are very likely to be sacrificed.
In addition, in current smart terminal devices, because size, weight, battery life, and appearance must all be taken into account, the camera cannot be too large either; otherwise it would not only increase the weight of the device but also squeeze the space available for the battery, which limits further improvement of the photographing performance.
另外,目前的智能终端设备在一些实际使用场景中也存在着不足。图2和图3示出了两种存在不足的使用场景。In addition, the current smart terminal devices also have deficiencies in some actual usage scenarios. Figures 2 and 3 illustrate two use cases where there are deficiencies.
图2是目前两台终端设备进行数据共享的场景示意图。FIG. 2 is a schematic diagram of a scenario in which two terminal devices currently share data.
As shown in FIG. 2, it is assumed that the user has two smart terminal devices A and B; on a certain day, the user takes device A out to work and leaves device B at home. While working away from home, the user processes some documents on device A, but some documents remain unfinished. Then, after the user returns home, if the user wants to continue processing these documents on device B, the documents in device A may need to be transferred to device B in one of the following ways: 1. sending the documents in device A to device B through a third-party application; 2. sending the documents in device A to device B through functions such as one-touch transfer between devices A and B of specific models; 3. synchronizing the documents in device A to device B through cloud storage sharing. However, all of the above methods require the user to perform multi-step operations on the two devices, such as launching the corresponding third-party application, sending documents, receiving documents, saving documents to cloud storage, and downloading documents from cloud storage, which is not friendly to users unfamiliar with these operations; moreover, if device A is not with the user, none of the above methods can be carried out.
图3是目前多台终端设备进行互动的场景图。FIG. 3 is a scene diagram of interaction between multiple terminal devices at present.
如图3所示,当用户1在家中使用终端设备A与用户2进行网络视频通话时,如果其他家庭成员用户3也想加入视频通话,那么用户3就需要来到终端设备A的摄像头面前,才能够使得用户2看到用户3的视频画面。这样,如果用户3希望与用户2继续进行视频通话,就不能回到自己的房间做其他事情,影响了用户3的使用体验。As shown in Figure 3, when user 1 uses terminal device A to make an online video call with user 2 at home, if user 3, another family member, also wants to join the video call, then user 3 needs to come to the camera of terminal device A. Only then can user 2 see the video screen of user 3 . In this way, if user 3 wishes to continue the video call with user 2, he cannot go back to his room to do other things, which affects user 3's use experience.
To solve the above problems and improve the user experience, the embodiments of the present application provide an electronic device composed of a first body and a second body that can be detachably arranged. The specific structures of the first body and the second body are described below with reference to the accompanying drawings.
FIG. 4 is a schematic structural diagram of the first body. As shown in FIG. 4, the first body may include: a display screen 201, a touch panel 202, a camera 203, a fingerprint identification module 204, a Bluetooth module 205, a wireless fidelity (Wi-Fi) module 206, a global navigation satellite system (GNSS) module 207, an earpiece 208, a speaker 209, a microphone 210, a motor 211, a power management module 212, a battery 213, keys 214, a microcontroller unit (MCU) 215, a sensor module 216, and the like.
It can be understood that the structure illustrated in the embodiments of the present application does not constitute a specific limitation on the first body. The first body may mainly include modules or devices related to user interaction functions, as well as some modules or devices used to implement data connection functions. The modules or devices related to user interaction functions may include, for example, modules or devices that convey information to the user, such as the display screen 201, the microphone 210, and the motor 211, as well as modules or devices that receive information input by the user, such as the camera 203, the earpiece 208, and the fingerprint identification module 204. Apart from these, the first body may not include the modules or devices that provide the main computing capabilities of the electronic device, such as an application processor, a graphics processor, or a modem.
在本申请另一些实施例中,第一机身可以包括比图4所示更多或更少的部件,或者组合某些部件,或者拆分某些部件,或者不同的部件布置。图示的部件可以以硬件,软件或软件和硬件的组合实现。In other embodiments of the present application, the first fuselage may include more or less components than those shown in FIG. 4 , or combine some components, or separate some components, or arrange different components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The display screen 201 is used to display images, videos, and the like. The display screen includes a display panel. The display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a MiniLED, a MicroLED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like. In some embodiments, the first body may include 1 or N display screens 201, where N is a positive integer greater than 1. In some embodiments, the display screen 201 may have different forms, such as a flat form, a dual-curved form, a quad-curved form, a waterfall-screen form, a foldable form, or a rollable form. In some embodiments, the display screen 201 may also have different sizes, such as the small display screens of mobile phones and wearable devices, the medium-sized display screens of tablet computers and personal computers, and the large display screens of televisions.
触控面板202可以与显示屏201组成触摸屏,也称触控屏,触控面板202用于检测作用于显示屏201之上或附近的触摸操作。触控面板202可以将检测到的触摸操作传递给自身的微控制器215或者传递给第二机身的处理器,以确定触摸操作的内容,从而通过显示屏201提供与触摸操作相关的视觉输出。在另一些实施例中,触控面板202也可以设置于第一机身的表面,与显示屏201所处的位置不同。The touch panel 202 and the display screen 201 can form a touch screen, also called a touch screen, and the touch panel 202 is used to detect a touch operation on or near the display screen 201 . The touch panel 202 can transmit the detected touch operation to its own microcontroller 215 or to the processor of the second body to determine the content of the touch operation, thereby providing visual output related to the touch operation through the display screen 201 . In other embodiments, the touch panel 202 can also be disposed on the surface of the first body, which is different from the position where the display screen 201 is located.
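Purely as an illustration of this routing choice, a detected touch event might be dispatched either to the local microcontroller or to the processor of the second body; the routing rule below (handle locally when the screen was off, otherwise forward) is an assumption made for the example, not a rule stated in this application.

```c
/* Hypothetical routing of a detected touch: handled by the local MCU when the
   screen is off, otherwise forwarded to the processor of the second body. */
#include <stdio.h>
#include <stdbool.h>

typedef struct { int x, y; bool screen_was_off; } touch_event_t;

void handle_touch(const touch_event_t *e)
{
    if (e->screen_was_off)
        printf("MCU handles touch at (%d,%d) locally, e.g. to wake the screen\n",
               e->x, e->y);
    else
        printf("forward touch at (%d,%d) to the second body's processor\n",
               e->x, e->y);
}

int main(void)
{
    touch_event_t t1 = { 120, 480, true  };
    touch_event_t t2 = { 300, 900, false };
    handle_touch(&t1);
    handle_touch(&t2);
    return 0;
}
```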
摄像头203用于采集图像数据,例如拍摄照片和视频等。摄像头203可以包括镜头和感光元件,物体通过镜头生成光学图像投射到摄像头感光元件,感光元件可以是电荷耦合器件(charge coupled device,CCD)或互补金属氧化物半导体(complementary metal-oxide-semiconductor,CMOS)光电晶体管。感光元件用于把光信号转换成电信号,将电信号传递给图像信号处理器(image signal processor,ISP)转换成数字图像信号。ISP将数字图像信号输出到数字信号处理器(digital signal processor,DSP)加工处理。DSP将数字图像信号转换成标准的GB,RYYB,YUV等格式的图像信号。ISP和DSP优选可以设置在第二机身中,也可以设置在第一机身中。在一些实施例中,第一机身可以包括1个或N个摄像头203,N为大于1的正整数。The camera 203 is used to collect image data, such as taking photos and videos. The camera 203 may include a lens and a photosensitive element. The object generates an optical image through the lens and projects it to the photosensitive element of the camera. The photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS). ) phototransistor. The photosensitive element is used to convert the optical signal into an electrical signal, and transmit the electrical signal to an image signal processor (ISP) to convert it into a digital image signal. The ISP outputs the digital image signal to a digital signal processor (DSP) for processing. DSP converts digital image signals into standard GB, RYYB, YUV and other formats of image signals. The ISP and the DSP may preferably be arranged in the second body, or may be arranged in the first body. In some embodiments, the first body may include 1 or N cameras 203 , where N is a positive integer greater than 1.
指纹识别模块204用于采集用户指纹,第一机身可以利用采集的指纹特性实现指纹解锁、访问应用锁、指纹支付等功能。指纹识别模块204可以采用不同的指纹识别技术实现,例如:超声波指纹识别技术、光学指纹识别技术、电容指纹识别技术等。The fingerprint identification module 204 is used to collect user fingerprints, and the first body can use the collected fingerprint characteristics to realize functions such as fingerprint unlocking, access application lock, and fingerprint payment. The fingerprint identification module 204 can be implemented by different fingerprint identification technologies, such as ultrasonic fingerprint identification technology, optical fingerprint identification technology, capacitive fingerprint identification technology, and the like.
蓝牙模块205用于实现第一机身与第二机身基于蓝牙协议的无线数据连接的功能,以及,用于实现第一机身与其他电子设备基于蓝牙协议的无线数据连接的功能。蓝牙模块205优选支持低功耗蓝牙协议,以降低功耗。The Bluetooth module 205 is used to realize the function of wireless data connection between the first body and the second body based on the Bluetooth protocol, and is used to realize the function of the wireless data connection between the first body and other electronic devices based on the Bluetooth protocol. The Bluetooth module 205 preferably supports the Bluetooth low energy protocol to reduce power consumption.
Wi-Fi模块206用于实现第一机身与第二机身基于IEEE的802.11标准的无线数据连接的功能,以及,用于实现第一机身与接入点设备AP基于IEEE的802.11标准的无线数据连接的功能,使第一机身通过AP接入到互联网。上述802.11标准例如可以包括802.11ac,802.11ax,802.11ad,802.11ay标准等。在一些实施例中,Wi-Fi模块206还可以用于第一机身与第二机身或者其他电子设备实现Wi-Fi直连(Wi-Fi Direct)数据连接,或者FlashLinQ数据连接。The Wi-Fi module 206 is used to realize the function of wireless data connection between the first body and the second body based on the IEEE 802.11 standard, and is used to realize the function of the first body and the access point device AP based on the IEEE 802.11 standard. The function of wireless data connection enables the first body to access the Internet through the AP. The above-mentioned 802.11 standards may include, for example, 802.11ac, 802.11ax, 802.11ad, 802.11ay standards, and the like. In some embodiments, the Wi-Fi module 206 may also be used to implement a Wi-Fi Direct (Wi-Fi Direct) data connection or a FlashLinQ data connection between the first body and the second body or other electronic devices.
GNSS模块207用于为第一机身提供位置服务,以实现定位和导航等功能。在一些实施例中,GNSS模块207可以提供包括全球卫星定位系统(global positioning system,GPS),全球导航卫星系统(global navigation satellite system,GLONASS),北斗卫星导航系统(beidou navigation satellite system,BDS),准天顶卫星系统(quasi-zenith satellite system,QZSS)和/或星基增强系统(satellite based augmentation systems,SBAS)在内 的一种或者多种位置服务。The GNSS module 207 is used to provide location services for the first body, so as to realize functions such as positioning and navigation. In some embodiments, the GNSS module 207 may provide a system including a global positioning system (GPS), a global navigation satellite system (GLONASS), a Beidou navigation satellite system (BDS), One or more location services including quasi-zenith satellite system (QZSS) and/or satellite based augmentation systems (SBAS).
听筒208用于将音频电信号转换成声音信号。当用户使用第一机身接听电话或语音信息时,可以将听筒208靠近人耳接听语音。The earpiece 208 is used to convert audio electrical signals into sound signals. When the user uses the first body to answer a call or a voice message, the receiver 208 can be close to the human ear to answer the voice.
扬声器209用于将音频电信号转换为声音信号。用户可以通过扬声器209收听音频,或收听免提通话。在一些实施例中,第一机身可以包括两个或者两个以上的扬声器209,以实现立体声效果。The speaker 209 is used to convert audio electrical signals into sound signals. The user can listen to audio through speaker 209, or listen to a hands-free call. In some embodiments, the first body may include two or more speakers 209 to achieve stereo effect.
麦克风210也称“话筒”,用于将声音信号转换为电信号。当拨打电话或发送语音信息时,用户可以通过人嘴靠近麦克风210发声,将声音信号输入到麦克风210。第一机身可以设置至少一个麦克风210。在另一些实施例中,第一机身可以设置两个麦克风210,除了采集声音信号,还可以实现降噪功能。在另一些实施例中,第一机身还可以设置三个,四个或更多麦克风210,实现采集声音信号,降噪,还可以识别声音来源,实现定向录音功能等。The microphone 210, also referred to as a "microphone", is used to convert sound signals into electrical signals. When making a call or sending a voice message, the user can make a sound by approaching the microphone 210 through a human mouth, and input the sound signal into the microphone 210 . The first body may be provided with at least one microphone 210 . In other embodiments, the first body may be provided with two microphones 210, which can implement a noise reduction function in addition to collecting sound signals. In other embodiments, three, four or more microphones 210 may be provided on the first body to collect sound signals, reduce noise, identify sound sources, and implement directional recording functions.
马达211用于产生振动,可以用于来电、信息的振动提示,也可以用于触摸振动反馈。当用户与第一机身接触时,马达211的振动可以传递给用户,使用户能够感受到由马达211带来的交互反馈。例如,当用户点击显示屏的输入法界面打字时,马达211可以随着用户的点击而振动;当用户按压显示屏的指纹识别区域解锁设备时,马达211可以在解锁瞬间产生振动。在一些实施例中,不同的应用场景(例如:时间提醒,接收信息,闹钟,游戏等)也可以对应不同的振动反馈效果。The motor 211 is used to generate vibration, which can be used for vibration prompting of incoming calls and messages, and can also be used for touch vibration feedback. When the user is in contact with the first body, the vibration of the motor 211 can be transmitted to the user, so that the user can feel the interactive feedback brought by the motor 211 . For example, when the user clicks on the input method interface of the display screen to type, the motor 211 may vibrate with the user's click; when the user presses the fingerprint recognition area of the display screen to unlock the device, the motor 211 may vibrate at the moment of unlocking. In some embodiments, different application scenarios (eg, time reminder, receiving information, alarm clock, game, etc.) may also correspond to different vibration feedback effects.
The power management module 212 is used to manage the charging and discharging strategy of the battery 213 of the first body and to monitor the battery status. In some embodiments, the first body may receive the charging input of a wired charger through the external interface 218 or the like. In other embodiments, the first body may receive a wireless charging input through a built-in wireless charging coil. In the embodiments of the present application, the first body may not include a processor, so the display screen 201 is the main energy-consuming device of the first body; since the power consumption of the display screen 201 is usually far lower than that of a processor, the battery 213 of the first body may be a small-capacity battery.
按键214包括开机键,音量键等。按键214可以是机械按键。也可以是触摸式按键。第一机身可以接收按键输入,产生与第一机身的用户设置以及功能控制有关的键信号输入。The keys 214 include a power-on key, a volume key, and the like. Keys 214 may be mechanical keys. It can also be a touch key. The first body can receive key input, and generate key signal input related to user settings and function control of the first body.
微控制器215,用于实现电子设备的少部分计算能力,以满足第一机身的交互功能。例如:微控制器215可以包括视频解码器模块,用于将第二机身发送的编码的视频数据进行解码,并输出到显示屏201显示成图像;微控制器215还可以包括音频解码器模块,用于将第二机身发送的编码的音频数据进行解码,并输出到听筒208或者扬声器209以播放声音。The microcontroller 215 is used to realize a small part of the computing power of the electronic device to satisfy the interactive function of the first body. For example: the microcontroller 215 may include a video decoder module for decoding the encoded video data sent by the second body, and output it to the display screen 201 for display as an image; the microcontroller 215 may also include an audio decoder module , which is used to decode the encoded audio data sent by the second body, and output it to the earpiece 208 or the speaker 209 to play sound.
传感器模块216可以包括一种或者多种传感器,例如:接近光传感器216A、环境光传感器216B、重力传感器216C、指南针216D、陀螺仪216E等。传感器模块216通过一种或者多种传感器采集与设备姿态、用户行为、以及交互相关的信息,独立地或者与其他器件或者模块配合实现多种功能。例如:当用户使用摄像头203拍摄时,陀螺仪216E可以检测第一机身的抖动,根据抖动计算出镜头的补偿值,使镜头通过反向运动抵消第一机身的抖动,实现防抖;陀螺仪216E还可以用于导航,体感游戏场景。又例如,接近光传感器216A可以检测第一机身附近是否有物体,以实现从口袋或者包中拿出自动亮屏、贴近耳朵通话自动熄灭显示屏201等功能。Sensor module 216 may include one or more sensors, such as: proximity light sensor 216A, ambient light sensor 216B, gravity sensor 216C, compass 216D, gyroscope 216E, and the like. The sensor module 216 collects information related to device posture, user behavior, and interaction through one or more sensors, and implements various functions independently or in cooperation with other devices or modules. For example: when the user uses the camera 203 to shoot, the gyroscope 216E can detect the shaking of the first body, and calculate the compensation value of the lens according to the shaking, so that the lens can offset the shaking of the first body through the reverse motion to realize anti-shake; The instrument 216E can also be used for navigation, somatosensory game scenarios. For another example, the proximity light sensor 216A can detect whether there is an object near the first body, so as to realize functions such as automatically turning on the screen when taking it out of a pocket or bag, and automatically turning off the display screen 201 when talking close to the ear.
In addition, the first body may further include memory, including volatile memory and non-volatile memory, for buffering or saving data generated by the devices or modules of the first body during operation, and for saving the program instructions required for the operation of the microcontroller and other devices or modules.
图5是第二机身的结构示意图。如图5所示,第二机身可以包括:处理器301、摄像头302、闪光灯303、激光测距传感器304、电源管理模块305、电池306、蓝牙模块307、Wi-Fi模块308、GNSS模块309、射频模块310、卡槽311、外部接口312、按键313、近距离无线数据连接(near-field communication,NFC)模块314、麦克风315、扬声器316等。FIG. 5 is a schematic structural diagram of the second fuselage. As shown in FIG. 5 , the second body may include: a processor 301 , a camera 302 , a flash 303 , a laser ranging sensor 304 , a power management module 305 , a battery 306 , a Bluetooth module 307 , a Wi-Fi module 308 , and a GNSS module 309 , a radio frequency module 310, a card slot 311, an external interface 312, a button 313, a near-field communication (NFC) module 314, a microphone 315, a speaker 316, and the like.
It can be understood that the structure illustrated in the embodiments of the present application does not constitute a specific limitation on the second body. The second body may mainly include modules or devices related to computing capabilities, such as the processor 301, as well as modules or devices used to implement data connection functions. Apart from these, the second body may not include the modules or devices that provide the main interaction capabilities of the electronic device, such as a display screen, a touch panel, a fingerprint identification module, or a sensor module.
在本申请另一些实施例中,第二机身可以包括比图5所示更多或更少的部件,或者组合某些部件,或者拆分某些部件,或者不同的部件布置。图示的部件可以以硬件,软件或软件和硬件的组合实现。In other embodiments of the present application, the second fuselage may include more or less components than those shown in FIG. 5 , or combine some components, or separate some components, or arrange different components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 301 provides the main computing power of the electronic device. In some embodiments, the processor 301 may include an application processor (AP), a baseband processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a video codec, a digital signal processor (DSP), and/or a neural-network processing unit (NPU), etc. The different processors 301 may be independent devices, or may be integrated into one or more processors 301; for example, some or all of the above processors 301 may be integrated into a system on a chip (SoC).
Different from the camera 203 in the first body, the camera 302 in the second body may have a larger sensor size, pixel size, and pixel resolution. Therefore, the camera 302 can be used as the main camera for taking high-quality pictures, while the camera 203 can be used as an auxiliary camera for scenarios that do not demand high shooting quality, such as selfies, video chats, and scanning QR codes. In some embodiments, the second body may include two or more cameras 302, and the different cameras 302 may have different specifications, such as a macro camera, a telephoto camera, or a wide-angle camera, so as to provide more shooting modes.
闪光灯303和激光测距传感器304用于配合摄像头实现更好的拍摄质量。闪光灯303用于在摄像头302对焦或者曝光时开启,起到补光作用,以提升拍摄质量。在一些实施例中,闪光灯303可以时双色温闪光灯,包括两颗色温不同的光源,以具备更好的补光效果,使拍摄的白平衡更准确。激光测距传感器304用于测量摄像头302与被拍摄物体之间的距离,以实现摄像头302的相位对焦功能(phase detection auto focus,PDAF)。The flash 303 and the laser ranging sensor 304 are used to cooperate with the camera to achieve better shooting quality. The flash 303 is used to turn on when the camera 302 focuses or exposes, and plays a role of supplementary light, so as to improve the shooting quality. In some embodiments, the flash 303 can be a dual-color temperature flash, including two light sources with different color temperatures, so as to have a better fill light effect and make the white balance of the shooting more accurate. The laser ranging sensor 304 is used to measure the distance between the camera 302 and the object to be photographed, so as to realize the phase detection auto focus (PDAF) function of the camera 302 .
电源管理模块305用于管理第二机身的电池306充放电策略和监测电池状态。在一些实施例中,第二机身可以通过外部接口312等接收有线充电器的充电输入。在另一些实施例中,第二机身可以通过内置的无线充电线圈接收无线充电输入。本申请实施例中,第二机身中的电池306主要为第二机身的处理器301和其他器件供电,由于处理器301属于电子设备中的主要耗能器件,因此第二机身中的电池306可以是大容量电池。The power management module 305 is used for managing the charging and discharging strategy of the battery 306 of the second body and monitoring the battery status. In some embodiments, the second body may receive the charging input of the wired charger through the external interface 312 or the like. In other embodiments, the second body can receive wireless charging input through a built-in wireless charging coil. In the embodiment of the present application, the battery 306 in the second body mainly supplies power to the processor 301 and other devices in the second body. Since the processor 301 is the main energy-consuming device in the electronic device, the Battery 306 may be a high-capacity battery.
Wi-Fi模块308用于实现第二机身与第一机身基于IEEE的802.11标准的无线数据连接的功能,以及,用于实现第二机身与接入点设备AP基于IEEE的802.11标准的无线数据连接的功能,使第二机身通过AP接入到互联网。上述802.11标准例如可以包括 802.11ac,802.11ax,802.11ad,802.11ay标准等。在一些实施例中,Wi-Fi模块308还可以用于第二机身与第一机身实现Wi-Fi Direct数据连接,或者FlashLinQ数据连接。The Wi-Fi module 308 is used to realize the function of wireless data connection between the second body and the first body based on the IEEE 802.11 standard, and is used to realize the connection between the second body and the access point device AP based on the IEEE 802.11 standard The function of wireless data connection enables the second body to access the Internet through the AP. The above-mentioned 802.11 standards may include, for example, 802.11ac, 802.11ax, 802.11ad, 802.11ay standards, and the like. In some embodiments, the Wi-Fi module 308 may also be used for the second body and the first body to implement a Wi-Fi Direct data connection, or a FlashLinQ data connection.
GNSS模块309用于为第二机身提供位置服务,以实现定位和导航等功能。在一些实施例中,GNSS模块309可以提供包括全球卫星定位系统GPS,全球导航卫星系统GLONASS,北斗卫星导航系统BDS,准天顶卫星系统QZSS和/或星基增强系统SBAS在内的一种或者多种位置服务。The GNSS module 309 is used to provide location services for the second body to implement functions such as positioning and navigation. In some embodiments, the GNSS module 309 may provide one or more including Global Positioning System GPS, Global Navigation Satellite System GLONASS, Beidou Satellite Navigation System BDS, Quasi-Zenith Satellite System QZSS and/or Satellite Based Augmentation System SBAS Multiple location services.
The radio frequency module 310, together with the baseband processor and the like, implements the cellular mobile data connection function of the electronic device. The radio frequency module 310 may include a radio frequency circuit, a radio frequency power amplifier, antennas, and the like, and is used to transmit and receive the electromagnetic wave signals of the cellular mobile network and/or the Wi-Fi network. Generally speaking, the cellular mobile data connection function of the electronic device may include: global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), fifth-generation mobile data connection network new radio (5G NR), and the like.
卡槽311可以包括SIM卡槽,还可以包括TransFlash(TF)卡槽。在一个实施例中,第二机身可以仅包含SIM卡槽。在另一个实施例中,第二机身可以同时包含SIM卡槽和TF卡槽。The card slot 311 may include a SIM card slot, and may also include a TransFlash (TF) card slot. In one embodiment, the second body may contain only a SIM card slot. In another embodiment, the second body may include a SIM card slot and a TF card slot at the same time.
外部接口312可以包括USB接口、3.5mm耳机接口等。外部接口312用于连接数据线、耳机等外部设备,实现充电、数据传输和音频播放等功能。The external interface 312 may include a USB interface, a 3.5mm headphone jack, and the like. The external interface 312 is used to connect external devices such as data cables and earphones, and realize functions such as charging, data transmission, and audio playback.
按键313包括开机键,音量键等。按键313可以是机械按键。也可以是触摸式按键。第二机身可以接收按键输入,产生与第二机身的用户设置以及功能控制有关的键信号输入。The keys 313 include a power-on key, a volume key, and the like. The keys 313 may be mechanical keys. It can also be a touch key. The second body can receive key input and generate key signal input related to user settings and function control of the second body.
NFC模块314,用于实现各种基于NFC的功能,例如:NFC移动支付、刷公共交通卡、模拟门禁卡、读取或者写入智能卡(IC card)、身份识别卡(Identification Card)的数据等。The NFC module 314 is used to implement various NFC-based functions, such as: NFC mobile payment, swiping public transportation cards, simulating access control cards, reading or writing data from smart cards (IC cards), and identification cards (Identification Card), etc. .
第二机身可以设置至少一个麦克风315。在另一些实施例中,第二机身可以设置两个麦克风315,除了采集声音信号,还可以实现降噪功能。在另一些实施例中,第二机身还可以设置三个,四个或更多麦克风315,实现采集声音信号,降噪,还可以识别声音来源,实现定向录音功能等。The second body may be provided with at least one microphone 315 . In other embodiments, the second body may be provided with two microphones 315, which can implement a noise reduction function in addition to collecting sound signals. In other embodiments, three, four or more microphones 315 may be set on the second body to collect sound signals, reduce noise, identify sound sources, and implement directional recording functions.
扬声器316用于将音频电信号转换为声音信号。用户可以通过扬声器316收听音频,或收听免提通话。在一些实施例中,第二机身可以包括两个或者两个以上的扬声器316,以实现立体声效果。 Speakers 316 are used to convert audio electrical signals into sound signals. The user can listen to audio through speaker 316, or listen to a hands-free call. In some embodiments, the second body may include two or more speakers 316 to achieve stereo effect.
第一机身和第二机身可以组合使用,以实现电子设备的全部功能。The first body and the second body can be used in combination to realize all the functions of the electronic device.
在一个实施例中,第一机身与第二机身可以物理连接成一个整体,以实现组合使用。In one embodiment, the first fuselage and the second fuselage can be physically connected to form a whole to realize combined use.
图6是第一机身与第二机身物理连接的示意图。FIG. 6 is a schematic diagram of the physical connection between the first body and the second body.
As shown in FIG. 6, both the first body 100 and the second body 200 may be designed in the form of a mobile phone. The first body 100 may include the display screen 201, a touch panel, a fingerprint identification module, and other devices related to user interaction, as well as the other devices shown in FIG. 4. The first body 100 does not contain some or all of the devices in the second body 200; for example, it does not contain a processor, a large-capacity battery, a radio frequency module, or a main camera. Therefore, compared with a traditional mobile phone, the first body 100 can be thinner, smaller, and lighter; when the user holds the first body 100 alone, it feels better in the hand, lighter and more portable, and holding it for a long time does not tire the hand. The second body 200 may include computing-related devices such as a processor, as well as a large-capacity battery, a radio frequency module, a main camera, and the other devices shown in FIG. 5, and the second body 200 may not include the display screen 201, a touch panel, a fingerprint identification module, or other devices related to user interaction. Therefore, compared with a traditional mobile phone, the second body 200 can also be thinner, smaller, and lighter.
As further shown in FIG. 6, the first body 100 and the second body 200 can be snapped together, one on top of the other, to form a whole in the form of a mobile phone. The first body 100 and the second body 200 may have the same or similar length and width, so that the combined whole appears more integrated after they are snapped together.
In addition, in order to keep the first body 100 and the second body 200 fixed to each other after they are snapped together, the first body 100 and/or the second body 200 may further include a fixing structure. In one embodiment, the first body 100 and the second body 200 may each include magnetic attraction devices at corresponding positions; when the first body 100 and the second body 200 come close to each other, the magnetic attraction devices attract each other and fix the first body 100 to the second body 200. In one embodiment, the first body 100 and the second body 200 may include mating snap-fit structures; when the first body 100 and the second body 200 are snapped together, the snap-fit structures lock and fix the first body 100 to the second body 200. In one embodiment, one of the first body 100 and the second body 200 may be provided with a back-clip structure; when the first body 100 and the second body 200 are snapped together, the sub-device provided with the back-clip structure clamps the other sub-device, fixing the first body 100 to the second body 200.
It can be understood that, when the first body 100 and the second body 200 are used in combination, some data interaction may be required between the first body 100 and the second body 200. To achieve this, the first body 100 and the second body 200 may also be provided with mating data connection interfaces.
In one implementation, the data connection interface may be arranged on the mating faces of the first body 100 and the second body 200. For example, the data connection interface may include a pair of mating male terminals 217 and female terminals 317, with the male terminals 217 arranged on one of the sub-devices and the female terminals 317 arranged on the other sub-device; when the first body 100 and the second body 200 are snapped together, the male terminals 217 and the female terminals 317 plug into each other, establishing a data transmission channel between the first body 100 and the second body 200. In some embodiments, the data connection interface may be designed based on a specific interface protocol, such as the PCI Express protocol (PCI-E for short) or the Thunderbolt interface protocol. In another implementation, the data connection interface may be an optical interface for transmitting and receiving optical signals; when the first body 100 and the second body 200 are snapped together, the optical interfaces are mated, establishing an optical signal transmission channel between the first body 100 and the second body 200, so that the first body 100 and the second body 200 can transmit data based on optical signals.
In one embodiment, the first body 100 and the second body 200 can also be used in combination while they are separated from each other.
图7是第一机身100与第二机身200在分离状态下组合使用的一个示意图。FIG. 7 is a schematic diagram of the combined use of the first body 100 and the second body 200 in a separated state.
As shown in FIG. 7, the first body 100 and the second body 200 can establish a wired data connection through a cable 401. In one implementation, the cable 401 may be an electrical cable that transmits data through electrical signals or short-range millimeter-wave technology; in this case, the first body 100 and the second body 200 are each provided with external interfaces 218 and 312 for mating with the terminals of the cable 401, and depending on the terminal protocol type of the cable, the external interfaces 218 and 312 may correspondingly be PCI-E interfaces, Thunderbolt interfaces, USB interfaces, or the like. In another implementation, the cable 401 may be an optical fiber that transmits data through optical signals; in this case, the first body 100 and the second body 200 are each provided with an optical interface, and depending on the type of the optical fiber connector, the optical interface may correspondingly be an SC-type optical fiber interface, an FC-type optical fiber interface, or the like.
图8是第一机身100与第二机身200在分离状态下组合使用的另一个示意图。FIG. 8 is another schematic diagram of the combined use of the first body 100 and the second body 200 in a separated state.
As shown in FIG. 8, the first body 100 and the second body 200 can establish a wireless data connection. In one implementation, the first body 100 and the second body 200 each include a Wi-Fi module, and the first body 100 and the second body 200 can establish a wireless data connection through the Wi-Fi modules. In a specific implementation, the first body 100 and the second body 200 may connect to an access point (AP) device 402 through their respective Wi-Fi modules and establish a wireless data connection through the data forwarding of the AP device 402; alternatively, the first body 100 and the second body 200 may establish a Wi-Fi Direct data connection through their Wi-Fi modules, so that a wireless data connection can be established without any other device. In another implementation, the first body 100 and the second body 200 each include a Bluetooth module, and the first body 100 and the second body 200 can establish a wireless data connection through the Bluetooth modules. In another implementation, the first body 100 includes a Wi-Fi module and can access the Internet through the Wi-Fi module, while the second body 200 includes a radio frequency module and a baseband processor, has a cellular mobile data connection function, and can access the Internet through the base station 404; in this case, the first body 100 and the second body 200 can establish a wireless data connection over the Internet. In another embodiment, both the first body 100 and the second body 200 have the cellular mobile data connection function, so the first body 100 and the second body 200 can establish a direct data connection based on the 3GPP device-to-device (D2D) protocol, without relying on any other device.
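For illustration, a first body supporting several of the link types listed above might pick one in a fixed preference order, as in the following sketch; the ordering and the availability flags are assumptions made for the example and are not prescribed by this application.

```c
/* Illustrative link selection over the connection types described above;
   the preference order is an assumption. */
#include <stdio.h>
#include <stdbool.h>

typedef enum {
    LINK_NONE = 0,
    LINK_WIFI_DIRECT,    /* direct Wi-Fi link, no extra device needed */
    LINK_WIFI_AP,        /* relayed through an access point device */
    LINK_BLUETOOTH,      /* low power, low bandwidth */
    LINK_CELLULAR_D2D    /* 3GPP device-to-device direct connection */
} link_t;

link_t choose_link(bool wifi_direct_ok, bool ap_reachable,
                   bool bt_paired, bool d2d_available)
{
    if (wifi_direct_ok) return LINK_WIFI_DIRECT;
    if (ap_reachable)   return LINK_WIFI_AP;
    if (bt_paired)      return LINK_BLUETOOTH;
    if (d2d_available)  return LINK_CELLULAR_D2D;
    return LINK_NONE;
}

int main(void)
{
    printf("chosen link = %d\n", (int)choose_link(false, true, true, false));
    return 0;
}
```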
FIG. 9 and FIG. 10 are logical block diagrams of data interaction when the first body and the second body are used in combination. Based on FIG. 9 and FIG. 10 and in combination with some usage scenarios, the data interaction logic between the first body and the second body in the wired and wireless data connection states is described below by way of example.
Scenario 1: the first body plays image data provided by the second body.
As shown in FIG. 9(a), when the first body and the second body use a wired data connection, the processor of the second body can encode the image data to be displayed and output it to its PCI-E interface, so that the encoded image data is sent to the first body through the PCI-E cable. The first body receives the encoded image data through its PCI-E interface and hands it to the microcontroller of the first body for processing. The microcontroller has a built-in video decoder module, which decodes the encoded image data and sends the decoded image data to the display screen through the MIPI-DSI interface for display.
As shown in FIG. 9(b), when the first body and the second body use a wireless data connection, the processor of the second body can encode the image data to be displayed and output it to its Wi-Fi module. The Wi-Fi module of the second body sends the encoded image data to the Wi-Fi module of the first body over a Wi-Fi link established by Wi-Fi Direct or through an AP device. The Wi-Fi module of the first body receives the encoded image data and hands it to the microcontroller (MCU) of the first body for processing; the MCU's built-in video decoder module decodes the encoded image data and sends the decoded image data to the display screen through the MIPI-DSI interface for display.
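For illustration only, the receive-decode-display loop on the first body's microcontroller described above can be sketched as follows. This is a minimal sketch, not an implementation from the application; the link, decoder and MIPI-DSI objects are hypothetical placeholders.

```python
def receive_encoded_frame(link):
    """Placeholder: read one encoded frame from the PCI-E or Wi-Fi link."""
    return link.read_frame()

def display_loop(link, decoder, mipi_dsi):
    """Receive-decode-display loop running on the first body's microcontroller."""
    while True:
        encoded = receive_encoded_frame(link)   # encoded frame sent by the second body
        frame = decoder.decode(encoded)         # MCU's built-in video decoder module
        mipi_dsi.write(frame)                   # push the raw frame to the display over MIPI-DSI
```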
Scenario 2: the first body sends image data to the second body.
As shown in FIG. 10, when the camera of the first body is turned on, the camera driver running on the microcontroller (MCU) reads the image data captured by the camera through the MIPI-CSI interface; the image data is encoded by the video encoder inside the MCU and then output to the second body. As shown in FIG. 10(a), when a wired data connection is used, the microcontroller outputs the encoded image data to its PCI-E interface, so that the data is sent through the PCI-E cable to the PCI-E interface of the second body. As shown in FIG. 10(b), when a wireless data connection is used, the microcontroller outputs the encoded image data to its Wi-Fi module, and the Wi-Fi module of the first body sends the encoded image data to the Wi-Fi module of the second body over a Wi-Fi link established by Wi-Fi Direct or through an AP device.
It can be seen that data can be transmitted between the first body and the second body in both the wired and the wireless data connection state, so that the two bodies can cooperate to realize all functions of the electronic device.
FIG. 11 is a schematic diagram of the thicknesses of the first body 100, the second body 200 and a mobile phone of a conventional form. As shown in FIG. 11, the first body 100 contains only some of the components used to realize all functions of the electronic device and does not contain a processor, a large-capacity battery, a radio frequency module, a main camera or the like; therefore, compared with a conventional mobile phone with the same display screen size, the thickness B1 of the first body 100 is smaller and its weight is lighter, making it easier for the user to hold. Likewise, the second body 200 contains only some of the components used to realize all functions of the electronic device and does not contain user-interaction components such as a display screen, a touch panel or a fingerprint identification module; therefore, compared with a conventional mobile phone of the same length, width and battery capacity, the thickness B2 of the second body 200 is also smaller. In addition, since the second body does not need to be held frequently by the user, its thickness and weight need not be designed for easy holding, so the thickness of the second body 200 can be further increased to B3 to accommodate a larger-capacity battery 306 and extend the battery life of the second body 200.
In one embodiment, the first body and the second body can cooperate to realize all functions of a mobile phone.
As one implementation, as shown in FIG. 12, the first body 100 and the second body 200 establish a wireless data connection through a Wi-Fi link. When the user uses the electronic device of this application, as shown in FIG. 12(a), the second body 200 can be placed in a clothing pocket or a backpack 403; the user only needs to hold the first body 100 and perform interactive operations on it to make calls, send and receive messages, use apps, play media, monitor exercise, pay by fingerprint, take selfies and so on, without taking out the second body 200. These operations cover most scenarios in which a user uses a mobile phone. Correspondingly, the user needs to hold the second body 200 only in a small number of scenarios: as shown in FIG. 12(b), when the user wants to shoot high-quality photos or videos, the user can take the second body 200 out of the pocket or backpack 403, complete the shooting with the second body 200, and then put it back into the pocket or backpack 403. It follows that, since the user does not need to hold the second body 200 in most scenarios, the design of the second body 200 need not consider factors that affect the handheld experience such as device thickness, volume, grip feel and weight; the second body 200 can therefore be thicker and accommodate a larger-capacity battery to provide longer battery life. In this way, even if the power consumption of the second body 200 increases because it supports a 5G network, the battery life is not shortened and may even be longer.
In one embodiment, as shown in FIG. 13, the first body 100 may further include a wireless charging coil 219, which is coupled to the battery 213 of the first body 100 through the power management module of the first body 100. Correspondingly, the second body 200 may also be provided with a wireless charging coil 318, which is coupled to the battery 306 of the second body 200 through the power management module of the second body 200. In this way, when the first body 100 and the second body 200 are buckled together into a whole, the power management module of the second body 200 can obtain the battery level of the first body 100 from the power management module of the first body 100. If the battery level of the first body 100 is lower than a certain threshold, for example 100% or 50%, the power management module of the second body 200 enables the wireless charging function and, using electromagnetic induction between the coils, uses the electric energy in the battery of the second body 200 to charge the battery 213 of the first body 100, thereby prolonging the battery life of the first body 100.
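A minimal sketch of this charging decision is given below, assuming the second body's power management module can read the first body's battery level; the threshold value and helper names are illustrative assumptions only.

```python
CHARGE_THRESHOLD = 0.5   # assumed example: charge whenever the first body is below 50%

def update_wireless_charging(first_body_level, coil):
    """Enable the second body's coil 318 while the first body's battery 213 is below the threshold."""
    if first_body_level < CHARGE_THRESHOLD:
        coil.enable()    # electromagnetic induction transfers energy to coil 219 / battery 213
    else:
        coil.disable()   # stop charging once the threshold is reached
```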
In addition to the mobile phone form, the first body and the second body of the embodiments of this application may take many other forms. Some of the achievable forms are described below by way of example with reference to further drawings.
In one embodiment, as shown in FIG. 14, the first body 100 may be a device in the form of a mobile phone, and the second body 200 may be a device in the form of a handheld game console controller. The second body 200 includes buttons 320 and/or a joystick 321 for implementing the game control input function, as well as a structure for fixing the first body 100. For example, the fixing structure may be a groove 319 provided on the second body 200 whose shape matches that of the first body 100, so that the first body 100 can be embedded in the groove 319 and form a whole with the second body 200. Based on the structure shown in FIG. 14, when the first body 100 is separated from the second body 200, the first body 100 implements functions such as making calls, sending and receiving messages, using apps, playing media, monitoring exercise, fingerprint payment and taking selfies according to the user's operations. When the first body 100 and the second body 200 form a whole, they can jointly implement the functions of a handheld game console: the second body 200 is held by the user and receives the game control input generated by the user through the buttons 320 and the joystick 321, while the first body 100 displays the game screen.
In one embodiment, as shown in FIG. 15, the first body 100 may be a device in the form of a mobile phone, and the second body 200 may be a smart speaker device. Based on the structure shown in FIG. 15, when the first body 100 is separated from the second body 200, if the first body 100 receives a user operation to play audio, the first body 100 plays the audio with its own speaker. In a specific implementation, the processor of the second body 200 can send the encoded audio data to the first body 100 over a Wi-Fi link or the like; after receiving the encoded audio data, the first body 100 decodes it with the audio decoder in its MCU and outputs it to the speaker of the first body 100 for playback. When the second body 200 detects, through the NFC module 314, the laser ranging sensor or the like, that the first body 100 is approaching (for example, the user places the first body 100 on the second body 200), the second body 200 takes over the audio playback of the first body 100, switching between different audio playback modes. In a specific implementation, the second body 200 stops sending encoded audio data to the first body 100, and the processor of the second body 200 decodes the encoded audio data and outputs it to the speaker 316 of the second body 200 for playback.
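The playback handover described above can be sketched as follows. This is an illustrative sketch under the assumption of a single proximity flag derived from the NFC module 314 or the laser ranging sensor; the link, decoder and speaker objects are hypothetical placeholders.

```python
def route_audio(first_body_is_near, encoded_audio, wifi_link, decoder, speaker_316):
    """Switch playback between the first body (streamed) and the second body (local)."""
    if first_body_is_near:
        wifi_link.stop_streaming()                       # second body takes over playback
        speaker_316.play(decoder.decode(encoded_audio))  # decode and play on speaker 316
    else:
        wifi_link.send(encoded_audio)                    # first body's MCU decodes and plays it
```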
In one embodiment, as shown in FIG. 16, the first body 100 may take the form of a wearable device, such as a smart watch, a smart band or a VR device, and the second body may take any form that can be put into a clothing pocket or a backpack 403, which is not specifically limited in the embodiments of this application. Taking the case where the first body 100 is a smart watch as an example, based on the structure shown in FIG. 16, the first body 100 and the second body 200 can establish a Bluetooth Low Energy data connection through their respective Bluetooth modules to exchange data. The MCU of the first body 100 can collect the user's physiological data, such as blood flow and cardiac electrical signals, through the various sensors it is equipped with, and send the data to the second body 200 through the Bluetooth module. After the Bluetooth module of the second body 200 receives the physiological data, the second body 200 processes it to obtain information such as the user's heart rate and electrocardiogram, and sends the results back to the first body 100 for display, or uploads them to the cloud through the Wi-Fi module or the cellular mobile data function so that the user can share the data on other electronic devices. In this way, the first body 100 does not need to compute and analyze the data collected by the sensors, which saves power and prolongs battery life.
In one embodiment, as shown in FIG. 17, the first body 100 may be a large-screen display device, such as a television or a monitor, and the second body 200 may be, for example, a smart camera. The second body 200 may be placed at a different position from the first body 100, or may be fixed on the first body 100. The image data displayed on the display screen of the first body 100 is provided by the second body 200. The second body 200 is further configured to receive and process control instructions transmitted by the user through a remote control or by voice, run application programs, and display the corresponding interfaces on the display screen of the first body 100.
In one implementation, as shown in FIG. 17, the second body 200 can be woken up by a remote control signal or by voice, where the voice instruction used to wake up the second body 200 may be a default voice instruction or a user-defined voice instruction. For example, when the voice instruction is "Xiaoyi Xiaoyi", the user can, when using the second body 200 for the first time, read "Xiaoyi Xiaoyi" aloud several times so that the second body 200 remembers the user's voice characteristics; thereafter, when the second body 200 detects that the user says "Xiaoyi Xiaoyi", it wakes itself up. After the second body 200 wakes up, its processor can turn on the camera of the second body 200 to capture images, encode the images and send them to the first body 100. After receiving the encoded images, the microcontroller of the first body 100 decodes them and sends them to the display screen 201 for display.
In one implementation, as shown in FIG. 18, after waking up, the second body can also take over the signal source of the display screen of the first body, so that all images displayed on the display screen of the first body are provided by the second body. In a specific implementation, after waking up, the second body can generate a user interface (UI) image 501 according to parameters of the display screen of the first body such as its resolution and refresh rate. The UI image may include, for example, a status bar and a camera application interface. For example, the status bar may include information such as the battery level, network status and time of the second body; the camera application interface may include multiple virtual buttons or options, for example the camera's working mode (such as professional shooting, video recording mode, photo mode, portrait mode, night mode, panorama mode, Vlog mode, slow-motion mode, time-lapse photography, etc.) and photographing settings (such as a high dynamic range (HDR) imaging switch, an artificial intelligence (AI) mode switch, filters, etc.). Further, after generating the UI image, the processor of the second body can superimpose the UI image, the image 502 captured by the camera and images of other users received through the network as layers, for example superimposing the UI image on the image captured by the camera, to obtain a video stream; it then encodes the video stream and sends it to the microcontroller of the first body. After receiving the video stream, the microcontroller of the first body decodes it and sends it to the display screen for display. In this way, the user can, according to the UI image shown on the display screen, use the remote control to issue operation instructions to the second body to select different camera working modes or photographing settings, which improves the user experience.
As further shown in FIG. 19, the camera 302 of the second body 200 may be mounted on a rotating mechanism 322, which allows the camera 302 to rotate within a certain angular range, preferably up to 360 degrees. In addition, the microphone 315 of the second body 200 may be a multi-microphone array. When the user makes a video call, the second body 200 turns on the camera 302 to capture images, and the processor of the second body 200 encodes the captured images and sends them to the first body 100 for display, as well as to the remote device of the other party in the call for display. When a user speaks, the multi-microphone array collects the user's voice information and passes it to the processor for analysis; the processor determines the direction of the speaking user from the voice information collected by the array and, according to that direction, issues a rotation instruction to the rotating mechanism 322 to control the rotation of the camera 302, so that the camera 302 always points at the speaking user and keeps that user at the center of the images captured by the camera 302.
As further shown in FIG. 20, when multiple users speak at the same time (here "at the same time" may mean that the speaking periods of at least two users overlap), the microphone 315 (for example a multi-microphone array) can simultaneously collect the voice information of multiple users. The processor of the second body 200 can separate each user's voice information from the collected voice information by voiceprint recognition or timbre recognition, and determine the direction of each user. In addition, the second body 200 may further include a laser ranging sensor 304, which may be arranged in the same direction as the camera 302; when multiple users speak at the same time, the laser ranging sensor 304 can measure the distance of each user relative to the camera 302 and send the distance measurement results to the processor. Next, the processor determines, according to each user's direction, the distance relative to the camera 302 and the field of view of the camera 302, whether the direction of the camera 302 can be adjusted so that all of these users are within the field of view of the camera 302 at the same time. If so, the processor sends a rotation instruction to the rotating mechanism 322 to control the rotation of the camera 302 so that all of these users appear in the field of view of the camera 302 at the same time; if not, the processor sends a rotation instruction to the rotating mechanism 322 so that the camera 302 points at the user closest to the camera 302 and keeps that user at the center of the field of view of the camera 302.
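The framing decision described above can be illustrated with the following sketch, which takes each speaker's bearing (in the rotating mechanism's angular coordinate system) and distance and either centers the group if it fits in the field of view or centers the nearest speaker. The field-of-view value and the input format are assumptions for illustration, not values from the application.

```python
def choose_camera_angle(speakers, fov_deg=78.0):
    """speakers: list of (bearing_deg, distance_m); returns the bearing to rotate to."""
    bearings = [b for b, _ in speakers]
    span = max(bearings) - min(bearings)               # wrap-around at 360 deg ignored in this sketch
    if span <= fov_deg:
        return (max(bearings) + min(bearings)) / 2.0   # the whole group fits: aim at its center
    nearest_bearing, _ = min(speakers, key=lambda s: s[1])
    return nearest_bearing                             # otherwise center the closest speaker

# Example: speakers at 10 deg (2 m) and 60 deg (3.5 m) both fit in a 78 deg FOV -> 35.0
print(choose_camera_angle([(10.0, 2.0), (60.0, 3.5)]))
```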
In one implementation, the rotating mechanism 322 may have its own angular coordinate system, in which the direction in which the camera 302 faces straight ahead of the display screen of the first body 100 is the 0° angle, and either the clockwise or the counterclockwise direction of rotation of the rotating mechanism 322 is taken as the positive direction of the coordinate system. Based on this angular coordinate system, the rotation instruction of the processor may be an angle instruction, for example 30°, which causes the rotating mechanism 322 to rotate to the 30° position.
In another implementation, the rotation instruction may include at least one speed instruction and one stop instruction, where the speed instruction indicates the angular velocity at which the rotating mechanism 322 rotates. The rotating mechanism 322 starts rotating when it receives a speed instruction; if a new speed instruction is received during rotation, the angular velocity is adjusted according to the new instruction; when the rotating mechanism 322 receives a stop instruction, it stops rotating. For example, when the camera needs to be rotated from 0° to 30°, the processor can first send a speed instruction of 10° per second to the rotating mechanism 322 and then, 3 seconds later, send a stop instruction to the rotating mechanism 322.
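The speed/stop variant above amounts to timing the stop instruction from the required angular travel. The sketch below shows this under the assumption of a simple command interface on the mechanism object; the method names are illustrative, not defined by the application.

```python
import time

def rotate_to(mechanism, current_deg, target_deg, speed_deg_per_s=10.0):
    """Send a speed instruction, wait for the required time, then send a stop instruction."""
    delta = target_deg - current_deg
    if delta == 0:
        return
    direction = 1.0 if delta > 0 else -1.0
    mechanism.send_speed(direction * speed_deg_per_s)   # start rotating
    time.sleep(abs(delta) / speed_deg_per_s)            # e.g. 30 deg at 10 deg/s -> 3 s
    mechanism.send_stop()                               # stop at the target angle
```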
In some other implementations, when the processor of the second body 200 determines from the voice information collected by the microphone array that the user is speaking while walking, the processor can predict the user's upcoming change in direction from the changes in the user's direction over a period of time before the current moment, determine the angular velocity at which the rotating mechanism 322 should rotate next according to the predicted trend, and generate a corresponding speed instruction to send to the rotating mechanism 322. In this way, when the user talks while walking, the camera 302 can rotate in real time following the change in the user's direction and always keep the user at the center of the camera's field of view.
In one embodiment, as shown in FIG. 21, both the first body and the second body may include one or more speakers, and the speaker 209 of the first body and the speaker 316 of the second body may be used at the same time to output a stereo effect. In a specific implementation, when the audio resource played by the second body contains multiple channels, the processor 301 of the second body can decode the audio of some of the channels and output it to the speaker 316 of the second body for playback; the processor 301 can also send the audio of the other channels to the first body, where it is decoded by the microcontroller 215 of the first body and then sent to the speaker 209 of the first body for playback. For example, the first body and the second body each contain two speakers, each speaker may contain at least one driver, and the audio resource played by the second body contains at least four channels, for example a left channel, a right channel, a center channel and a subwoofer channel. The processor 301 can then send the audio of the center channel and the subwoofer channel to the first body; the microcontroller 215 of the first body decodes the center-channel audio and sends it to one speaker 209 of the first body for playback, and decodes the subwoofer-channel audio and sends it to the other speaker 209 for playback. In addition, the processor 301 can decode the left-channel audio and send it to the left speaker 316 of the second body for playback, and decode the right-channel audio and send it to the right speaker 316 for playback. In this way, the speakers 209 of the first body and the speakers 316 of the second body cooperate to output a stereo effect, improving the user's audio-visual experience: for example, when the user watches a movie, the left and right channels can play background music and ambient sound effects, the subwoofer channel can play sound effects such as drums, and the center channel can play the characters' speech, producing clear layering and a good audio-visual experience.
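A minimal sketch of this channel split, assuming the encoded streams arrive as a channel-keyed mapping and that the link, decoder and speaker objects are placeholder interfaces (none of these names come from the application):

```python
def distribute_channels(channels, first_body_link, decoder, left_316, right_316):
    """channels: dict with encoded 'left', 'right', 'center' and 'sub' streams."""
    # Center and subwoofer channels go to the first body, whose MCU 215 decodes them
    # and plays them on its two speakers 209.
    first_body_link.send({"center": channels["center"], "sub": channels["sub"]})
    # Left and right channels are decoded by processor 301 and played on speakers 316.
    left_316.play(decoder.decode(channels["left"]))
    right_316.play(decoder.decode(channels["right"]))
```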
In addition, in other embodiments, when the second body is a smart camera, it can be used as a surveillance camera: when someone enters the monitored area, the second body can use its microphone array to lock onto the person's position, continuously capture images of the person, send an alarm to the first body, and transmit the captured images to the first body in real time.
When the first body and the second body are used together to shoot videos or photos, the embodiments of this application provide a shooting control method. The steps of the method are described below with reference to a specific application scenario.
As shown in FIG. 22, when the user is out alone and wants to take a photo of himself or herself, the user can take out the first body 100 and the second body 200 and place the second body 200 at a certain position so that the camera 302 of the second body 200 faces the scenery to be photographed. The user can then tap the camera application icon 503 on the display screen 201 of the first body 100 to start the camera application.
Next, as shown in FIG. 23, when the display screen of the first body detects that the user taps the camera application icon, the microcontroller of the first body sends an instruction to start the camera application to the processor of the second body. After receiving the instruction, the processor of the second body starts the camera application and turns on the camera of the second body. The processor then encodes the image data captured by the camera in real time and sends it to the first body. After receiving the encoded image data, the microcontroller of the first body decodes it and sends it to the display screen of the first body for display, so that the user can see the images captured by the camera of the second body on the display screen of the first body in real time. The first body and the second body can establish the connection and transmit data through one or more of a cellular mobile network, Wi-Fi Direct, Bluetooth and the like. When the connection is established through Wi-Fi Direct, Bluetooth or the like, the first body and the second body can connect at any location without being constrained by geography; for example, when the user is exploring in the wild or mountain climbing, the first body and the second body can still connect and be used normally even if there is no communication base station at such locations.
Next, as shown in FIG. 22, the user can, according to the image displayed on the display screen 201, adjust his or her position relative to the second body 200 so as to appear at a suitable position within the viewfinder range of the camera 302. When the user has adjusted his or her position and is ready to take the photo, the user can tap the photo button 504 on the display screen 201.
Next, as shown in FIG. 23, when the display screen of the first body detects that the user taps the photo button, the microcontroller of the first body sends a photographing instruction to the processor of the second body. After receiving the photographing instruction, the processor of the second body starts a photographing countdown, for example 10 seconds or 5 seconds; the duration of the countdown may be a preset default value or a value preset by the user. During the countdown, the user can prepare for the photo, for example striking a pose or putting the first body into a clothing pocket or backpack. When the countdown ends, the processor controls the camera to expose and take the photo, and sends the photo to the first body for display on the display screen of the first body.
In one implementation, after the countdown ends, the processor can control the camera to take multiple photos in succession, to avoid the photo quality being affected by the user blinking or moving during a single shot. In addition, to avoid an inappropriately chosen focus point making the photo too bright or too dark, blurry, or incorrectly white-balanced, the multiple photos may also be photos with different focus points. In a specific implementation, as shown in FIG. 24, during the photographing countdown, the processor of the second body 200 can analyze the images collected by the camera 302 based on artificial intelligence techniques such as deep neural networks to identify the person part and the background part, and select at least one focus point in each, for example one on the person's face, one on the person's body, one at a near point of the background and one at a far point of the background. After the countdown ends, the processor can control the camera 302 to focus at each of the different focus points and take at least one photo after focusing at each point, so that the user can choose his or her favorite photo from the multiple photos, improving the user experience.
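The countdown-then-burst flow described above can be sketched as follows. The camera and analyzer interfaces are hypothetical placeholders, and the countdown duration is only an example; this is a sketch of the idea, not the application's implementation.

```python
import time

def countdown_and_capture(camera, analyzer, countdown_s=10):
    """Pick focus points during the countdown, then shoot one photo per focus point."""
    preview = camera.preview_frame()
    focus_points = analyzer.select_focus_points(preview)  # e.g. face, body, near/far background
    time.sleep(countdown_s)                                # the user poses during the countdown
    photos = []
    for point in focus_points:
        camera.focus(point)                                # refocus before each shot
        photos.append(camera.capture())
    return photos                                          # sent to the first body for the user to choose from
```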
In one implementation, to improve shooting quality, the first body and the second body can cooperate to implement an automatic composition function. After turning on the camera, the processor of the second body can analyze the images collected by the camera to identify foreground content such as trees, roads, mountains and buildings, as well as distant content such as the sky, the sun and the moon. The processor then determines, according to the distribution of this content in the image, a suitable position for the person to appear in the framed scene, and generates a person outline 506 at that position, as shown in FIG. 25. The person outline 506 and the image data collected by the camera are sent together to the microcontroller of the first body, which decodes them and sends them to the display screen 201 for display. In this way, the user can adjust his or her position relative to the second body according to the person outline 506, for example moving into the outline 506, so as to take a high-quality picture, which improves the user experience.
In one implementation, as shown in FIG. 22, to let the user know when the photographing countdown will end, the processor of the second body can, after starting the countdown, send a countdown prompt sound to the microcontroller of the first body. The microcontroller of the first body can decode the received countdown prompt sound and output it to the speaker of the first body for playback. For example, the countdown prompt sound may be a pulsed beep or spoken countdown numbers. In this way, the user can know from the prompt sound when the countdown will end, strike a pose before it ends and wait for the photo to be taken, which improves the user experience.
In some embodiments of this application, as shown in FIG. 26, the second body 200 can also establish data connections with multiple first bodies 100 (A to N) at the same time, by wireless or wired data connection, forming a distributed system, where N is a positive integer. The multiple first bodies 100 (A to N) share the data resources and network resources of the second body 200, so as to achieve data synchronization among the multiple first bodies 100 (A to N). The application scenarios and beneficial effects of the distributed system shown in FIG. 26 are described in detail below with reference to further drawings.
In one embodiment, as shown in FIG. 27, the distributed system consists of two first bodies and one second body; for ease of description, the two first bodies are referred to here as first body A and first body B. Based on FIG. 27, in one application scenario, the user carries first body A when going out to work and leaves first body B at home. While the user is out, first body A and the second body maintain a data connection over a cellular mobile network or a Wi-Fi network, and the user uses first body A to remotely create and edit documents on the second body; these documents are stored on the second body, and the second body generates the document editing pages and sends them to first body A for display. When the user returns home, the user can use first body B to directly open the documents located on the second body and continue editing; at this point the second body generates the editing pages and sends them to first body B for display. This eliminates the operation of sending a document from device A to device B as shown in FIG. 2, so that no data needs to be shared between first body A and first body B, which simplifies user operations and improves the user experience.
To improve the security of synchronizing documents or other data between the second body and multiple first bodies, an embodiment of this application further provides a device verification method.
In a specific implementation, when the user places the first body on the second body for the first time, the NFC modules of the first body and the second body detect that they are close to each other and establish an initial connection. After that, as shown in FIG. 28, the processor of the second body can obtain the device information of the first body, such as the device model, the media access control (MAC) address and the international mobile equipment identity (IMEI). The processor of the second body then generates a dedicated seed password for the first body, which may be generated randomly or in a specific manner. Next, the processor of the second body encrypts the device information of the first body with the seed password and uses the encryption result as the master password for storing data in the NFC module of the first body; the encryption of the device information with the seed password can optionally be implemented in various ways, for example with the Advanced Encryption Standard (AES) algorithm. Next, the processor of the second body can assign to the first body an exclusive security token and a dynamic check code, for example a cyclic redundancy check (CRC) code, encrypt the token and the check code with the master password, and write the encrypted data into the NFC module of the first body.
When the user places the first body on the second body again, the NFC module of the second body reads the encrypted data in the NFC module of the first body and hands it to the processor of the second body for decryption. After decrypting the encrypted data with the master password, the processor of the second body performs a consistency check on the decrypted token and dynamic check code, together with the device model, media access control address and/or international mobile equipment identity of the first body. If the consistency check passes, the processor of the second body generates the document editing page and sends it to the first body for display, realizing the document synchronization function. If the consistency check fails, the second body disconnects from the first body, and the processor of the second body adds the current token, the dynamic check code and all device information of the first body to a blacklist. In addition, after the consistency check passes, the processor of the second body also generates a new dynamic check code, generates new encrypted data from the token and the new check code, and writes it into the NFC module of the first body to replace the original encrypted data.
In this way, the first body and the second body perform device verification by means of dynamic verification. Because the dynamic check code changes, if an unauthorized device steals the encrypted data stored in the NFC module of the first body and attempts to verify itself with the second body, the processor of the second body will detect that the dynamic check code from the unauthorized device is inconsistent with the current dynamic check code, and will add the unauthorized device to the blacklist, so that the unauthorized device cannot establish a connection with the second body, which improves the security of file synchronization.
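The provisioning and verification flow above can be sketched as follows. The `encrypt`/`decrypt` callables stand in for the actual cipher (the application mentions AES as one option), and the key sizes, field layout and tag interface are illustrative assumptions rather than the application's exact format.

```python
import os, zlib

def provision(nfc_tag, device_info, encrypt):
    """Initial pairing: derive the master password and write token + check code to the tag."""
    seed = os.urandom(16)                                   # per-device seed password
    master = encrypt(seed, device_info.encode())            # master password = enc(seed, model/MAC/IMEI)
    token = os.urandom(16)                                  # exclusive security token
    check = zlib.crc32(os.urandom(8))                       # initial dynamic check code
    nfc_tag.write(encrypt(master, token + check.to_bytes(4, "big")))
    return master, token, check                             # kept by the second body

def verify(nfc_tag, master, token, check, encrypt, decrypt):
    """Later taps: check token and dynamic code, then rotate the code."""
    data = decrypt(master, nfc_tag.read())
    if data[:16] != token or int.from_bytes(data[16:20], "big") != check:
        return None                                         # mismatch: disconnect and blacklist
    new_check = zlib.crc32(os.urandom(8))                   # new dynamic check code
    nfc_tag.write(encrypt(master, token + new_check.to_bytes(4, "big")))
    return new_check                                        # stored by the second body for the next tap
```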
In one embodiment, as shown in FIG. 29, the distributed system consists of at least two first bodies and one second body; for ease of description, the first bodies are referred to here as first body A, first body B, ..., first body N. In a home scenario, the second body may be, for example, a smart camera, first body A may be a large-screen display device, first body B may be a mobile phone, and first body N may be a tablet computer. The second body may, for example, be fixed on first body A and establish a wired data connection with it; first bodies B to N may be held by different local family members and may establish wireless data connections with the second body. When two families make an interactive video call, on the one hand the second body can receive a video image Vr from the remote home device over the network and distribute it to first bodies A to N, each of which can decode the received video image Vr with its own microcontroller and show it on its display screen. On the other hand, the cameras of first bodies B to N can capture video images Vb to Vn of the family members holding them; these video images are encoded by the respective first bodies and sent to the second body. After receiving the video images Vb to Vn, the second body can decode, crop, scale and arrange them and superimpose them with the video image Va captured by the second body itself to obtain a video image Vs; finally, the second body encodes Vs and sends it over the network to the remote home device. The user of the remote home device can thus see multiple local family members in a single video image Vs, and these local family members can be in different positions and different rooms of the home without having to gather in front of the second body, which greatly improves the user experience of multi-person video calls.
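One possible way the second body could arrange the local streams into a single frame Vs is a simple grid layout, sketched below. The grid layout, canvas size and the `blank`/`scale`/`paste` helpers are assumptions for illustration; the application does not prescribe a particular arrangement.

```python
import math

def compose_vs(local_frames, blank, scale, paste, canvas_w=1920, canvas_h=1080):
    """local_frames: decoded frames [Va, Vb, ..., Vn]; returns the composed frame Vs."""
    n = len(local_frames)
    cols = math.ceil(math.sqrt(n))
    rows = math.ceil(n / cols)
    cell_w, cell_h = canvas_w // cols, canvas_h // rows
    canvas = blank(canvas_w, canvas_h)                      # empty output frame
    for i, frame in enumerate(local_frames):
        x, y = (i % cols) * cell_w, (i // cols) * cell_h
        paste(canvas, scale(frame, cell_w, cell_h), x, y)   # crop/scale each stream, then arrange
    return canvas                                           # encoded and sent to the remote device as Vs
```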
It can be understood that, when the image displayed by the first body comes from the second body, the display screen of the first body achieves the best display effect if the resolution and frame rate of the image sent by the second body match the resolution and refresh rate of the display screen of the first body. In practical application scenarios, however, the second body may establish data connections with different first bodies, and the display screens of different first bodies are likely to have different resolutions and refresh rates. If the second body sends images of a single uniform resolution and frame rate to all of these first bodies, there will inevitably be some first bodies whose display resolution and refresh rate do not match the received images, which affects the display effect. To solve this problem, an embodiment of this application further provides an image adaptation method, which, as shown in FIG. 30, may include the following steps.
Step S101: the second body obtains the display screen parameters of the first body.
The display screen parameters may include at least one of the resolution and the refresh rate of the display screen.
In one embodiment, step S101 may take place while the first body and the second body are establishing the data connection. For example, when the first body and the second body use a wireless data connection such as Wi-Fi or Bluetooth, the microcontroller of the first body can carry the display screen parameters in the beacon frames broadcast by the Wi-Fi module, and the processor of the second body can search for the beacon frames through its Wi-Fi module to obtain the display screen parameters.
In one embodiment, step S101 may take place after the first body and the second body have established the data connection. For example, after the data connection is established, the processor of the second body can send a request message for the parameter information to the microcontroller of the first body, and after receiving the request message, the microcontroller of the first body sends the display screen parameters to the processor of the second body.
In one embodiment, the second body can maintain a display parameter list for recording the display screen parameters of first bodies already known to the second body. In this way, when an unknown first body establishes a data connection with the second body, the processor of the second body can save the display screen parameters of this unknown first body into the display parameter list; when a known first body establishes a data connection with the second body, the processor of the second body can obtain the display screen parameters of this known first body directly from the list.
For example, the display parameter list may take the form shown in Table 1 below:
Device ID       | Display resolution | Display refresh rate
XX-XX-XX-XX-XX  | 1920×1080 (1080P)  | 60 Hz
YY-YY-YY-YY-YY  | 2340×1080          | 90 Hz
ZZ-ZZ-ZZ-ZZ-ZZ  | 1280×720 (720P)    | 60 Hz
...             | ...                | ...
Table 1
The device ID may be, for example, the media access control (MAC) address, the Service Set Identifier (SSID) or the international mobile equipment identity (IMEI) of the first body, or any other information that can uniquely identify the first body. The display resolution refers to the number of pixel columns × rows of the display screen. The display refresh rate refers to the number of image frames the display screen can show per second; 60 hertz (Hz) means the display can show at most 60 frames per second. A resolution of 1920×1080 with a refresh rate of 60 Hz can also be written as 1920×1080@60Hz.
Step S102: the second body generates the image to be displayed according to the display screen parameters.
In a specific implementation, if the image to be displayed is non-media content such as an operating system UI or an application interface, the processor of the second body can, at the drawing stage, directly draw an image that matches the resolution and refresh rate of the display screen of the first body. If the image to be displayed is media content such as a video or a picture, the processor of the second body can scale, crop, interpolate frames, compress or otherwise adjust the image to convert it into an image that matches the resolution and refresh rate of the display screen of the first body.
Step S103: the second body sends the adjusted image to the first body for display.
The effect achieved by steps S101 to S103 is described below with an example. As shown in FIG. 31, the second body establishes data connections with first body A, first body B and first body C, where the display screen parameters of first body A are 1920×1080@60Hz, those of first body B are 2340×1080@90Hz, and those of first body C are 1280×720@60Hz. When the second body needs to send a video whose original quality is 1920×1080@60Hz to these three first bodies for display: for first body A, since its display parameters are the same as the original quality of the video, the second body sends the video directly to first body A without processing; for first body B, the second body can add a black background of size 210×1080 on each of the left and right sides of the video, expanding its resolution to 2340×1080@60Hz, and then send the video to first body B for display; for first body C, the second body can compress the resolution of the video to 1280×720@60Hz and then send it to first body C for display. It can be seen that the second body can adjust the video to be displayed differently according to the display screen parameters of different first bodies, ensuring that the video fits the display screen parameters of each first body, achieving the best display effect and improving the user experience.
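The per-device adjustment in the FIG. 31 example can be sketched as the small decision below; the actual pixel operations (padding, scaling) are left out, and the function merely describes which adjustment would be applied. This is an illustration of the example, not code from the application.

```python
def adapt(frame_w, frame_h, target_w, target_h):
    """Describe how a frame of frame_w x frame_h is adjusted for a target display."""
    if (frame_w, frame_h) == (target_w, target_h):
        return "pass-through"                                    # first body A: 1920x1080
    if target_h == frame_h and target_w > frame_w:
        pad = (target_w - frame_w) // 2
        return f"pad a {pad}x{frame_h} black bar on each side"   # first body B: 2340x1080
    return f"scale to {target_w}x{target_h}"                     # first body C: 1280x720

print(adapt(1920, 1080, 2340, 1080))   # pad a 210x1080 black bar on each side
print(adapt(1920, 1080, 1280, 720))    # scale to 1280x720
```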
Further, to save power, an embodiment of this application also provides a device sleep method. As shown in FIG. 32, the method may include the following steps:
Step S201: when the first body and the second body have established data connections over both Wi-Fi and Bluetooth, if the first body does not detect a user operation within a first preset duration, the first body enters a first sleep state.
In the first sleep state, the first body turns off the display screen and enters a lock-screen state, but keeps its Wi-Fi and Bluetooth data connections with the second body. In addition, in the first sleep state, the processor of the second body shuts down some of its physical cores and keeps only a few low-power cores for background tasks, to save power.
In steps S201-S203, a "user operation" may include, for example, the user picking up the first body, pressing a button of the first body, unlocking the first body, or performing swipe and tap operations on the display screen while the first body is unlocked. These user operations can be sensed by the sensors, buttons, and touch panel of the first body and passed to the microcontroller of the first body; if the microcontroller does not detect a user operation within the first preset duration, it can control the first body to enter the first sleep state.
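As a purely illustrative sketch (the event names and the timer interface are assumptions, not part of the embodiment), the microcontroller can treat every sensed user operation as a reset of an inactivity timer:

import time

USER_OPERATIONS = {"pick_up", "button_press", "unlock", "swipe", "tap"}

class InactivityTimer:
    def __init__(self):
        self.last_operation = time.monotonic()

    def on_event(self, event):
        # Events are forwarded by the sensors, buttons, and touch panel.
        if event in USER_OPERATIONS:
            self.last_operation = time.monotonic()

    def idle_seconds(self):
        return time.monotonic() - self.last_operation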
Step S202: after the first body enters the first sleep state, if the first body does not detect a user operation within a second preset duration, the first body enters a second sleep state.
In a specific implementation, after the first body enters the first sleep state, if the microcontroller does not detect a user operation within the second preset duration, the microcontroller can control the first body to enter the second sleep state.
In the second sleep state, the first body keeps the display screen off and the screen locked, and disconnects either the Wi-Fi data connection or the Bluetooth data connection with the second body, so that the first body and the second body stay connected over only one of Wi-Fi and Bluetooth, further saving power.
Step S203: after the first body enters the second sleep state, if the first body does not detect a user operation within a third preset duration, the first body enters a third sleep state.
In a specific implementation, after the first body enters the second sleep state, if the microcontroller does not detect a user operation within the third preset duration, the microcontroller can control the first body to enter the third sleep state.
In the third sleep state, the first body keeps the display screen off and the screen locked, and disconnects both the Wi-Fi and Bluetooth data connections with the second body, so that no data connection remains between the first body and the second body, further saving power.
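The three-tier behaviour of steps S201-S203 can be summarised as a timeout ladder; the preset durations below are hypothetical placeholders, not values from the embodiment:

T1, T2, T3 = 30, 300, 3600  # first, second, and third preset durations, in seconds (assumed)

def sleep_state(idle_seconds):
    # Map cumulative idle time to the states described in steps S201-S203.
    if idle_seconds < T1:
        return "awake"      # display on, Wi-Fi and Bluetooth connected
    if idle_seconds < T1 + T2:
        return "sleep_1"    # display off, screen locked, Wi-Fi and Bluetooth kept
    if idle_seconds < T1 + T2 + T3:
        return "sleep_2"    # display off, screen locked, only Wi-Fi or Bluetooth kept
    return "sleep_3"        # display off, screen locked, no data connection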
It should be added that, if the first body and the second body had established a data connection in only one way before the first body entered a sleep state, the device sleep method can be simplified to steps S301-S302, as shown in FIG. 33:
Step S301: when the first body and the second body have established a data connection over Wi-Fi or Bluetooth, if the first body does not detect a user operation within a first preset duration, the first body enters the first sleep state.
In the first sleep state, the first body turns off the display screen and enters the lock-screen state, but keeps the data connection with the second body.
Step S302: after the first body enters the first sleep state, if the first body does not detect a user operation within a second preset duration, the first body enters the third sleep state.
In the third sleep state, the first body keeps the display screen off and the screen locked, and disconnects the data connection with the second body, further saving power.
It can be understood that the device sleep method provided in the embodiments of this application defines three sleep states for the first body according to how long the user has not operated it, and disconnects the data connection between the first body and the second body level by level. This allows the first body to keep its data synchronized with the second body during a short sleep, so that the user can resume use quickly, and also allows the data connection with the second body to be dropped during a long sleep, saving power and extending battery life.
Further, to speed up waking the first body, an embodiment of this application also provides a device wake-up method. As shown in FIG. 34, the method may include the following steps:
Step S401: when the first body, in a sleep state, detects that it is touched by the user, it establishes a data connection with the second body.
In a specific implementation, the first body may be equipped with ultrasonic sensors, pressure sensors, capacitive sensors, inductive sensors, millimeter-wave sensors, or other sensors or sensor arrays to detect the user's touch. For example, an ultrasonic sensor can measure the distance to an object by emitting ultrasonic waves and receiving the reflected echoes, and can therefore sense the user's palm approaching or touching the first body; a capacitive sensor produces a capacitance change when an external conductor approaches, so it can also sense the user's palm approaching or touching the first body; an inductive sensor produces an inductance change when an external conductor approaches, so it can likewise sense the user's palm approaching or touching the first body. In this way, when a sensor detects that the first body is touched, the microcontroller can control the first body to establish a data connection with the second body.
FIG. 35 is a schematic diagram of the sensor layout of the first body provided by an embodiment of this application. As shown in FIG. 35, to detect the user's touch, the first body 100 may include multiple sensors, for example sensor S1, sensor S2, sensor S3, sensor S4 to sensor SN. These sensors may form a sensor array of identical sensors, or they may be different sensors. Sensors S1 to SN can be evenly distributed on both sides of the display screen of the first body, so that the sensors can detect the user approaching or touching the first body from any direction.
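A minimal sketch, assuming normalized readings and a hypothetical threshold, of how the sensor array S1 to SN could be polled for a touch:

def is_touched(readings, threshold=0.5):
    # readings: one normalized proximity/contact value per sensor S1..SN.
    return any(value >= threshold for value in readings)

# is_touched([0.0, 0.1, 0.8, 0.0]) -> True: the sensor nearest the user's palm fires.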
It should also be noted that step S401 can be implemented differently depending on the sleep state, for example:
When the first body is in the first sleep state, step S401 can be omitted, because the first body keeps its Wi-Fi and Bluetooth data connections with the second body.
When the first body is in the second sleep state, the first body and the second body keep a data connection over only one of Wi-Fi and Bluetooth; therefore, if the first body is touched by the user, the microcontroller can enable the other connection method, so that the first body and the second body are connected over both Wi-Fi and Bluetooth at the same time.
When the first body is in the third sleep state, if the first body is touched by the user, the microcontroller can enable the Wi-Fi and Bluetooth modules, so that the first body and the second body establish data connections over both Wi-Fi and Bluetooth at the same time.
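For illustration only (the module objects and their connect() method are assumptions), the touch handling of step S401 could branch on the current sleep state as follows:

def on_touch(sleep_state, wifi, bluetooth):
    if sleep_state == "sleep_1":
        return                              # both links are still up; nothing to do
    if sleep_state == "sleep_2":
        # Exactly one link is up; bring the other one back.
        (bluetooth if wifi.connected else wifi).connect()
    elif sleep_state == "sleep_3":
        wifi.connect()                      # no link is up; bring both back
        bluetooth.connect()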
Step S402: the second body acquires the display screen parameters of the first body and generates image data according to the display screen parameters.
In one implementation, after the first body and the second body have established a data connection, the processor of the second body can send a request message to the microcontroller of the first body to acquire the display screen parameters of the display screen, and then generate the image data according to the display screen parameters.
In another implementation, while the data connection between the first body and the second body is being established, the processor of the second body can obtain the display screen parameters of the display screen from a beacon frame broadcast by the first body, and then generate the first image data according to the display screen parameters.
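Purely as an assumption-laden sketch (the payload layout is invented for illustration and is not the beacon or information-element format of the embodiment), carrying and reading display parameters in a broadcast payload might look like:

import json

def build_beacon_payload(device_id, width, height, refresh_hz):
    # Hypothetical vendor-specific payload broadcast by the first body's microcontroller.
    return json.dumps({"id": device_id, "display": f"{width}x{height}@{refresh_hz}Hz"}).encode()

def read_display_params(payload):
    info = json.loads(payload.decode())
    return info["id"], info["display"]

# read_display_params(build_beacon_payload("AA:BB:CC:DD:EE:FF", 1920, 1080, 60))
# -> ('AA:BB:CC:DD:EE:FF', '1920x1080@60Hz')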
Step S403: when the first body detects that it is lifted by the user, it sends a wake-up message to the second body.
In a specific implementation, the first body may include an acceleration sensor and/or a gyroscope sensor. The acceleration sensor can measure the acceleration of the first body, and the gyroscope sensor can measure the angular acceleration of the first body. The acceleration sensor and/or gyroscope sensor can pass the measured acceleration and/or angular acceleration to the microcontroller so that the microcontroller can determine whether the first body has been lifted by the user; if the first body has been lifted by the user, the microcontroller can send a wake-up message to the processor of the second body. In addition, when the microcontroller detects that a button of the first body has been pressed, it can also send a wake-up message to the processor of the second body.
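A minimal sketch, with hypothetical thresholds, of how acceleration and angular-rate samples could be turned into a lift decision:

def is_lifted(accel_mps2, gyro_dps, accel_threshold=1.5, gyro_threshold=30.0):
    ax, ay, az = accel_mps2
    # Deviation of the measured acceleration magnitude from gravity (~9.8 m/s^2).
    deviation = abs((ax * ax + ay * ay + az * az) ** 0.5 - 9.8)
    # Declare a lift if either the acceleration or the rotation rate changes noticeably.
    return deviation > accel_threshold or max(abs(g) for g in gyro_dps) > gyro_threshold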
Step S404: the second body lights up the display screen according to the wake-up message and sends the image data to the display screen for display.
In a specific implementation, after the processor of the second body receives the wake-up message, it can send a screen-on message and the generated image data to the microcontroller of the first body; after receiving the screen-on message and the image data, the microcontroller of the first body can light up the display screen according to the screen-on message, and decode the image data and send it to the display screen for display.
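Tying steps S401-S404 together, a schematic event sequence (with invented object interfaces, purely for illustration) could read:

def wake_sequence(first_body, second_body):
    first_body.wait_for_touch()                              # S401: touch sensed while asleep
    first_body.connect(second_body)                          #       re-establish the data connection
    params = second_body.request_display_params(first_body)  # S402: fetch display parameters
    image = second_body.render(params)                       #       pre-generate the image data
    first_body.wait_for_lift()                               # S403: lift or button press
    second_body.receive_wake_message()                       #       wake-up message to the processor
    first_body.light_screen()                                # S404: screen-on message arrives
    first_body.display(image)                                #       decode and show the image data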
It can be understood that, in the device wake-up method provided in the embodiments of this application, the first body starts the interaction with the second body required for wake-up as soon as it detects the user's touch, obtaining in advance the image data to be displayed once the screen is lit. As a result, the first body can wake up quickly when it detects that it is lifted by the user, improving the user experience.
An embodiment of this application provides a chip system. The chip system includes a microcontroller and is configured to support the first body in implementing the functions involved in the foregoing embodiments, for example displaying image data on the display screen.
An embodiment of this application provides a chip system. The chip system includes a processor and is configured to support the second body in implementing the functions involved in the foregoing embodiments, for example generating image data.
An embodiment of this application further provides a computer-readable storage medium. The computer-readable storage medium stores instructions that, when run on the microcontroller of the first body, cause the microcontroller to implement the functions of the first body involved in the foregoing aspects and their implementations.
An embodiment of this application further provides a computer-readable storage medium. The computer-readable storage medium stores instructions that, when run on the processor of the second body, cause the processor to implement the functions of the second body involved in the foregoing aspects and their implementations.
The foregoing specific implementations further describe in detail the objectives, technical solutions, and beneficial effects of the embodiments of this application. It should be understood that the foregoing is merely specific implementations of the embodiments of this application and is not intended to limit the protection scope of the embodiments of this application. Any modification, equivalent replacement, or improvement made on the basis of the technical solutions of the embodiments of this application shall fall within the protection scope of the embodiments of this application.

Claims (15)

  1. An electronic device, characterized by comprising:
    a first body and a second body that are separably arranged, a processor, a microcontroller, and a display screen;
    the microcontroller and the display screen are arranged in the first body, and the processor is arranged in the second body;
    the first body and the second body establish a data connection in a wired or wireless manner;
    the processor is configured to generate first image data and send the first image data to the microcontroller; and
    the microcontroller is configured to output the first image data to the display screen for display.
  2. The electronic device according to claim 1, wherein
    the first body comprises a touch panel, and the touch panel is configured to detect a touch input of a user on the display screen;
    the microcontroller is configured to send the touch input to the processor; and
    the processor is configured to generate the first image data according to the touch input.
  3. The electronic device according to claim 1 or 2, wherein
    the first body comprises a button, and the button is configured to detect a key input of the user;
    the microcontroller is configured to send the key input to the processor; and
    the processor is configured to generate the first image data according to the key input.
  4. The electronic device according to any one of claims 1-3, wherein
    the first body comprises a sensor module;
    the microcontroller is configured to send sensor data collected by the sensor module to the processor; and
    the processor is configured to generate the first image data according to the sensor data.
  5. The electronic device according to any one of claims 1-4, wherein
    the first body comprises a camera, and the camera is configured to collect second image data;
    the microcontroller is configured to send the second image data to the display screen for display; and
    the microcontroller is further configured to encode the second image data and send the encoded second image data to the second body.
  6. The electronic device according to any one of claims 1-5, wherein
    the first body and the second body each further comprise a wireless communication module;
    the wireless communication module comprises a Bluetooth module and/or a Wi-Fi module; and
    the first body and the second body establish a data connection through the Bluetooth module and/or the Wi-Fi module.
  7. The electronic device according to any one of claims 1-5, wherein
    the first body and the second body each further comprise an external interface; and
    when a cable is used to connect the external interfaces of the first body and the second body, the first body and the second body establish a wired data connection.
  8. The electronic device according to any one of claims 1-7, wherein
    the processor is configured to acquire display screen parameters of the display screen, and the display screen parameters comprise a resolution and/or a refresh rate of the display screen;
    the processor adjusts the first image data according to the display screen parameters, and the adjustment comprises at least one of scaling, cropping, frame interpolation, and compression of the first image data; and
    the processor sends the adjusted first image data to the microcontroller.
  9. The electronic device according to claim 8, wherein
    the microcontroller is configured to broadcast a beacon frame, and the beacon frame comprises the display screen parameters; and
    the processor is configured to obtain the display screen parameters from the beacon frame.
  10. The electronic device according to claim 8, wherein
    the processor is configured to send a first request message to the microcontroller; and
    the microcontroller is configured to send the display screen parameters to the processor according to the first request message.
  11. The electronic device according to claim 6, wherein, when the first body and the second body have established a data connection through the Wi-Fi module and the Bluetooth module, if the microcontroller does not detect a user operation within a first preset duration, the microcontroller controls the first body to enter a first sleep state, wherein the first sleep state comprises: the first body turning off the display screen and locking the screen, and maintaining the data connection with the second body through the Wi-Fi module and the Bluetooth module.
  12. The electronic device according to claim 11, wherein, after the first body enters the first sleep state, if the microcontroller does not detect a user operation within a second preset duration, the microcontroller controls the first body to enter a second sleep state, wherein the second sleep state comprises: the first body turning off the display screen and locking the screen, and maintaining a data connection with the second body through the Wi-Fi module or the Bluetooth module.
  13. The electronic device according to claim 12, wherein, after the first body enters the second sleep state, if the microcontroller does not detect a user operation within a third preset duration, the microcontroller controls the first body to enter a third sleep state, wherein the third sleep state comprises: the first body turning off the display screen and locking the screen, and disconnecting the data connection with the second body.
  14. The electronic device according to claim 13, wherein
    a plurality of sensors are distributed on both sides of the display screen, and the plurality of sensors are configured to detect whether the first body is touched by the user and whether the first body is picked up by the user;
    the microcontroller is configured to control the first body to establish a data connection with the second body when the first body is touched by the user in the third sleep state;
    the processor is configured to acquire the display screen parameters of the display screen after the first body and the second body establish the data connection, and to generate the first image data according to the display screen parameters;
    the microcontroller is configured to send a first wake-up message to the processor when the first body is lifted by the user; and
    the processor is configured to light up the display screen according to the first wake-up message and send the first image data to the microcontroller.
  15. A distributed system, characterized by comprising: a plurality of first bodies of the electronic device according to any one of claims 1-14, and one second body of the electronic device according to any one of claims 1-14, wherein the plurality of first bodies each establish a data connection with the second body in a wired or wireless manner.
PCT/CN2021/105741 2020-07-21 2021-07-12 Electronic device and distributed system WO2022017209A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010704193.4A CN114040349B (en) 2020-07-21 2020-07-21 Electronic equipment and distributed system
CN202010704193.4 2020-07-21

Publications (1)

Publication Number Publication Date
WO2022017209A1 true WO2022017209A1 (en) 2022-01-27

Family

ID=79728478

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/105741 WO2022017209A1 (en) 2020-07-21 2021-07-12 Electronic device and distributed system

Country Status (2)

Country Link
CN (1) CN114040349B (en)
WO (1) WO2022017209A1 (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20050113482A (en) * 2004-05-29 2005-12-02 엘지전자 주식회사 Separable fortable terminal for image communication
CN201717907U (en) * 2010-05-05 2011-01-19 中兴通讯股份有限公司 Terminal, terminal body and screen body of terminal
CN202059455U (en) * 2011-04-18 2011-11-30 中兴通讯股份有限公司 Movement terminal
CN105430606A (en) * 2015-12-30 2016-03-23 惠州Tcl移动通信有限公司 Separate terminal and control method thereof
CN107070478A (en) * 2017-03-31 2017-08-18 深圳市沃特沃德股份有限公司 Separate type communication equipment and mobile host
CN110881254A (en) * 2018-09-06 2020-03-13 Oppo广东移动通信有限公司 Portable electronic device
CN111866233A (en) * 2019-04-29 2020-10-30 北京小米移动软件有限公司 Terminal device

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8640210B2 (en) * 2011-09-01 2014-01-28 Microsoft Corporation Distributed computer systems with time-dependent credentials
CN103049179A (en) * 2012-12-03 2013-04-17 东莞宇龙通信科技有限公司 Control method of mobile terminal and mobile terminal screen state
CN103019521B (en) * 2012-12-31 2016-01-27 广东欧珀移动通信有限公司 A kind of mobile terminal puts out screen method and device
CN103685762B (en) * 2013-12-30 2016-01-06 宇龙计算机通信科技(深圳)有限公司 A kind of mobile terminal and control method thereof
KR101802588B1 (en) * 2015-08-07 2017-12-28 주식회사 엘지씨엔에스 Mutual authentication method between mutual authentication devices based on session key and token, mutual authentication devices
CN105848268B (en) * 2016-05-25 2019-04-19 努比亚技术有限公司 A kind of method and terminal reducing power consumption
CN106445080A (en) * 2016-11-02 2017-02-22 珠海格力电器股份有限公司 Sleeping control method, control device and terminal equipment
CN106534176B (en) * 2016-12-08 2019-06-14 西安交大捷普网络科技有限公司 Secure storage method of data under a kind of cloud environment
CN110109574A (en) * 2019-06-12 2019-08-09 南京东屋电气有限公司 A kind of connector that double touch sensors wake up
US11543873B2 (en) * 2019-09-27 2023-01-03 Intel Corporation Wake-on-touch display screen devices and related methods
CN111126533B (en) * 2020-01-08 2023-06-23 牛津(海南)区块链研究院有限公司 Identity authentication method and device based on dynamic password and dynamic token


Also Published As

Publication number Publication date
CN114040349B (en) 2024-04-09
CN114040349A (en) 2022-02-11

Similar Documents

Publication Publication Date Title
WO2021164556A1 (en) Wireless earphone case and system
WO2020168965A1 (en) Method for controlling electronic device having folding screen, and electronic device
CN112449332B (en) Bluetooth connection method and electronic equipment
US20220191668A1 (en) Short-Distance Information Transmission Method and Electronic Device
JP7324311B2 (en) Audio and video playback method, terminal, and audio and video playback device
WO2021104008A1 (en) Method for displaying folding screen and related apparatus
WO2020244623A1 (en) Air-mouse mode implementation method and related device
WO2021017909A1 (en) Method, electronic device and system for realizing functions through nfc tag
WO2020259542A1 (en) Control method for display apparatus, and related device
WO2020168968A1 (en) Control method for electronic device having foldable screen, and electronic device
WO2021043219A1 (en) Bluetooth reconnection method and related apparatus
WO2021104104A1 (en) Energy-efficient display processing method, and apparatus
CN113691842A (en) Cross-device content projection method and electronic device
WO2021208723A1 (en) Full-screen display method and apparatus, and electronic device
CN115278377A (en) Method for continuously playing multimedia content between devices
WO2022068539A1 (en) Communicational connection establishment method and system, electronic device, and storage medium
WO2022007944A1 (en) Device control method, and related apparatus
WO2022143156A1 (en) Encrypted call method and apparatus, terminal, and storage medium
CN111492678B (en) File transmission method and electronic equipment
WO2021227942A1 (en) Information sharing method, electronic devices and system
WO2022017209A1 (en) Electronic device and distributed system
WO2022095752A1 (en) Frame demultiplexing method, electronic device and storage medium
CN115525366A (en) Screen projection method and related device
WO2022206771A1 (en) Screen projection method, electronic device, and system
WO2022252980A1 (en) Method for screen sharing, related electronic device, and system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21846453

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21846453

Country of ref document: EP

Kind code of ref document: A1