WO2022037479A1 - Photographing method and photographing system - Google Patents

Photographing method and photographing system Download PDF

Info

Publication number
WO2022037479A1
Authority
WO
WIPO (PCT)
Prior art keywords
electronic device
information
sensor
wearable device
image
Prior art date
Application number
PCT/CN2021/112362
Other languages
French (fr)
Chinese (zh)
Inventor
Liu Liang (刘亮)
Original Assignee
Huawei Technologies Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd.
Publication of WO2022037479A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60: Control of cameras or camera modules
    • H04N 23/61: Control of cameras or camera modules based on recognised objects
    • H04N 23/611: Control based on recognised objects where the recognised objects include parts of the human body
    • H04N 23/62: Control of parameters via user interfaces
    • H04N 23/63: Control of cameras or camera modules by using electronic viewfinders
    • H04N 23/631: Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • H04N 23/632: Graphical user interfaces [GUI] for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters
    • H04N 23/66: Remote control of cameras or camera parts, e.g. by remote control devices
    • H04N 23/661: Transmitting camera control signals through networks, e.g. control via the Internet

Definitions

  • the present application relates to the field of electronic technology, and in particular, to a photographing method and a photographing system.
  • the electronic device is further configured to: display a shooting preview interface, where the shooting preview interface includes a shooting button, and the first operation includes an input operation acting on the shooting button.
  • An application scenario is provided here.
  • the electronic device receives the first operation on the shooting preview interface, and triggers the electronic device to shoot pictures/videos.
  • the first wearable device is further configured to: receive the second operation, and instruct the electronic device to turn on the camera in response to the second operation; the electronic device is further configured to display a shooting preview interface, where the shooting preview interface displays a preview image collected by the camera; the first operation includes an operation acting on the shooting preview interface.
  • the second operation may include, but is not limited to, operations such as clicking, double-clicking, long-pressing, and sliding. The second operation acts on the first wearable device and is used to trigger the first wearable device to instruct the electronic device to turn on the camera, thereby taking pictures/videos.
  • the preset facial image is associated with the first wearable device; the facial information may be preset by the user in the electronic device, uploaded to the electronic device in the form of an image or video, or preset by the user in the first wearable device and then provided to the electronic device, which is not limited in this application.
  • Similarity matching is performed between the preset facial image and one or more characters in the multimedia file, and if the preset facial image is successfully matched with one of the characters, the photographing device displays at least part of the first information near the character.
  • the correspondence between the user and the first information is strengthened, and the user corresponding to the first information can be seen intuitively, which can improve the user experience.
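As an illustration of this matching-and-overlay step, here is a minimal Kotlin sketch. It assumes face detection and feature extraction have already produced an embedding for each detected face; the `Face` class, the 0.8 similarity threshold, and the text placement are illustrative assumptions, not details specified by the application.

```kotlin
// Sketch only: match the preset facial image against faces found in a multimedia frame
// and overlay part of the first information near the matched character.
data class Face(val boundingBox: android.graphics.Rect, val embedding: FloatArray)

fun cosineSimilarity(a: FloatArray, b: FloatArray): Float {
    var dot = 0f; var na = 0f; var nb = 0f
    for (i in a.indices) { dot += a[i] * b[i]; na += a[i] * a[i]; nb += b[i] * b[i] }
    return dot / (kotlin.math.sqrt(na) * kotlin.math.sqrt(nb) + 1e-6f)
}

fun overlayInfoNearMatchedFace(
    canvas: android.graphics.Canvas,
    faces: List<Face>,             // faces detected in the multimedia file
    presetEmbedding: FloatArray,   // embedding of the preset facial image
    firstInfo: String,             // e.g. "heart rate 72 bpm"
    threshold: Float = 0.8f
) {
    val paint = android.graphics.Paint().apply { textSize = 36f }
    val match = faces.maxByOrNull { cosineSimilarity(it.embedding, presetEmbedding) }
        ?.takeIf { cosineSimilarity(it.embedding, presetEmbedding) >= threshold }
    if (match != null) {
        // Display at least part of the first information near the matched character.
        canvas.drawText(firstInfo, match.boundingBox.left.toFloat(),
            (match.boundingBox.top - 12).toFloat(), paint)
    }
    // If no character matches, nothing is displayed, which protects the privacy of others in the frame.
}
```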
  • the electronic device is further configured to: before receiving the first operation, display a shooting preview interface, where the shooting preview interface includes a preview image captured by a camera; the electronic device is further configured to display at least part of the second information on the preview image, where the second information corresponds to the second sensor data detected by the first wearable device.
  • the second information is biometric information.
  • the electronic device displays at least part of the second information on the preview image, so that the biometric information is displayed on the preview interface in real time and the user can view the biometric information in real time. In addition, the image or video file obtained by the electronic device may include more biometric information, where the biometric information is derived from sensor data collected by different sensors.
  • the electronic device is further configured to: in response to the preview image not including the preset face image, output a first prompt, where the first prompt is used to prompt the user to align the face.
  • the electronic device performs similarity matching between the preset facial image and one or more characters in the picture. If the matching is unsuccessful, it means that the user of the first wearable device is not included in the preview image, and the electronic device outputs prompt information to prompt the user to change the shooting angle and aim at the face. This avoids the situation in which the user of the first wearable device is not included in the photographed multimedia file, and improves the user experience.
  • the system further includes a second wearable device; the electronic device is further configured to establish a connection with the second wearable device; the second wearable device is configured to detect fourth sensor data through at least one sensor; wherein , the first information also corresponds to the fourth sensor data.
  • This method describes that when the electronic device establishes a connection with two wearable devices (the first wearable device and the second wearable device), the electronic device obtains the first information, which corresponds to the sensor data of both wearable devices (the first sensor data and the fourth sensor data). The same applies when the electronic device establishes connections with more than two wearable devices.
  • This method associates the biometric information with the information of the picture/video during picture/video generation, makes the feature recognition of the picture/video more accurate, and provides a new picture/video format, so that the electronic device saves pictures/videos with biometric information, which facilitates subsequent classification of the stored pictures/videos according to the biometric information.
  • establishing a connection between the electronic device and the first wearable device includes: in response to the electronic device entering a preset shooting mode, establishing a connection between the electronic device and the first wearable device. This method describes the timing of establishing the connection between the electronic device and the first wearable device.
  • the electronic device displays a shooting preview interface.
  • When the electronic device detects a user operation of entering the preset shooting mode, the electronic device automatically turns on Bluetooth and automatically establishes a Bluetooth connection with the first wearable device.
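A minimal Kotlin sketch of this timing on Android, assuming the required Bluetooth permissions are already granted; the "watch" name filter and the GATT callback are placeholders for illustration, not behaviour specified by the application.

```kotlin
// Sketch only: on entering the preset shooting mode, check the Bluetooth state and
// reconnect to a previously bonded wearable device.
import android.bluetooth.BluetoothGattCallback
import android.bluetooth.BluetoothManager
import android.content.Context

fun onPresetShootingModeEntered(context: Context, gattCallback: BluetoothGattCallback) {
    val adapter = (context.getSystemService(Context.BLUETOOTH_SERVICE) as BluetoothManager).adapter
    if (adapter == null || !adapter.isEnabled) {
        // Recent Android versions require user consent to enable Bluetooth, so a real
        // implementation would launch BluetoothAdapter.ACTION_REQUEST_ENABLE here.
        return
    }
    // Reconnect to a wearable that has previously been paired with the electronic device.
    val wearable = adapter.bondedDevices.firstOrNull { it.name?.contains("watch", true) == true }
    wearable?.connectGatt(context, /* autoConnect = */ true, gattCallback)
}
```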
  • the electronic device displays the multimedia file and at least part of the first information, specifically including: in response to the multimedia file including a preset face image, the electronic device displays the multimedia file and at least part of the first information, where the preset face image corresponds to the first wearable device.
  • Information of one or more facial images is stored in the electronic device. The electronic device determines the preset facial image corresponding to the first wearable device through the identity information of the first wearable device, and performs similarity matching between the preset facial image and one or more characters in the multimedia file. If the preset facial image is successfully matched with one of the characters, it means that the character in the multimedia file is the user of the first wearable device; at this time, the electronic device displays at least part of the first information, otherwise it is not displayed, which protects user privacy.
  • Similarity matching is performed between the preset facial image and one or more characters in the multimedia file, and if the preset facial image is successfully matched with one of the characters, the photographing device displays at least part of the first information near the character.
  • the correspondence between the user and the first information is strengthened, and the user corresponding to the first information can be seen intuitively, which can improve the user experience.
  • Before the electronic device receives the first operation, the method further includes: the electronic device displays a shooting preview interface, and the shooting preview interface includes a preview image captured by a camera; and the electronic device displays at least part of the second information on the preview image, where the second information corresponds to the second sensor data detected by the first wearable device.
  • the second information is the biometric information displayed on the preview interface.
  • the electronic device displays at least part of the second information on the preview image, so that the biometric information is displayed on the preview interface in real time, and the user can view the biometric information in real time.
  • the method further includes: in response to the preview image not including the preset face image, the electronic device outputs a first prompt, where the first prompt is used to prompt the user to aim at the face.
  • the electronic device performs similarity matching between the preset facial image and one or more characters in the picture. If the matching is unsuccessful, it means that the user of the first wearable device is not included in the preview image, and the electronic device outputs prompt information to prompt the user to change the shooting angle and aim at the face. This avoids the situation in which the user of the first wearable device is not included in the photographed multimedia file, and improves the user experience.
  • the method further includes: establishing a connection between the electronic device and the second wearable device; the second wearable device is configured to detect fourth sensor data through at least one sensor, and the first information also corresponds to the fourth sensor data .
  • This method describes that when the electronic device establishes a connection with two wearable devices (the first wearable device and the second wearable device), the electronic device obtains the first information, which corresponds to the sensor data of both wearable devices (the first sensor data and the fourth sensor data). The same applies when the electronic device establishes connections with more than two wearable devices.
  • FIG. 1 is a system diagram provided by an embodiment of the present application;
  • FIG. 3 is a software architecture diagram of an electronic device provided by an embodiment of the present application.
  • FIG. 4 is a schematic structural diagram of a wearable device according to an embodiment of the present application.
  • FIGS. 5a-5e are interface schematic diagrams of a group of shooting methods provided by an embodiment of the present application.
  • FIGS. 6a-6b are interface schematic diagrams of another group of shooting methods provided by an embodiment of the present application.
  • FIGS. 9a-9c are interface schematic diagrams of another group of shooting methods provided by an embodiment of the present application.
  • FIG. 10 is a schematic interface diagram of another set of shooting methods provided by an embodiment of the present application.
  • FIGS. 11a-11c are interface schematic diagrams of another group of shooting methods provided by an embodiment of the present application.
  • FIG. 12 is a schematic interface diagram of another group of shooting methods provided by an embodiment of the present application.
  • FIGS. 13a-13b are interface schematic diagrams of another group of shooting methods provided by an embodiment of the present application.
  • FIGS. 16a-16b are interface schematic diagrams of another group of shooting methods provided by an embodiment of the present application.
  • FIGS. 17a-17b are interface schematic diagrams of another group of shooting methods provided by an embodiment of the present application.
  • FIGS. 19a-19b are technical schematic diagrams of a shooting method provided by an embodiment of the present application.
  • FIGS. 20a-20b are method flowcharts of a shooting method provided by an embodiment of the present application.
  • FIG. 21 is a method flowchart of another shooting method provided by an embodiment of the present application.
  • FIG. 22 is another system diagram provided by an embodiment of the present application.
  • FIGS. 23-24 are method flowcharts of still another photographing method provided by an embodiment of the present application.
  • The terms "first" and "second" are only used for descriptive purposes, and should not be construed as indicating or implying relative importance or implying the number of indicated technical features. Therefore, a feature defined as "first" or "second" may explicitly or implicitly include one or more of the features. In the description of the embodiments of the present application, unless otherwise specified, "multiple" means two or more.
  • the electronic equipment/user equipment involved in the embodiments of the present application may be a mobile phone, tablet computer, desktop computer, laptop, notebook computer, ultra-mobile personal computer (UMPC), handheld computer, netbook, personal digital assistant (PDA), virtual reality device, portable Internet device, data storage device, camera, or wearable device (for example, a wireless headset, smart watch, smart bracelet, smart glasses, head-mounted display (HMD), electronic clothing, electronic bracelet, electronic necklace, electronic accessory, electronic tattoo, or smart mirror), and so on.
  • the embodiment of the present application provides a shooting method, which is applied to a system including at least an electronic device 100 and a wearable device 201.
  • the electronic device 100 establishes a connection with the wearable device 201.
  • The wearable device 201 detects the user's biometrics through sensors, such as heart rate, blood pressure, exercise posture, etc.
  • the electronic device 100 associates the biometrics with the captured pictures/videos, or associates the biometrics with a character in the captured pictures/videos, where the character corresponds to the biometrics.
  • the electronic device 100 generates a picture/video including biometric information indicating the user's biometrics.
  • FIG. 1 exemplarily shows a system diagram provided by the present application.
  • the system may include an electronic device 100 and one or more wearable devices (eg, wearable device 201 , wearable device 202 ).
  • the electronic device 100 and the wearable device 201 (wearable device 202 ) may be connected by wireless communication.
  • the connection can be established by at least one of the following wireless connection manners: Bluetooth (blue tooth, BT), near field communication (near field communication, NFC), wireless fidelity (wireless fidelity, WiFi), or WiFi direct connection.
  • the electronic device 100 may be connected with a plurality of different types of wearable devices.
  • the electronic device 100 can connect a smart watch and a wireless earphone through Bluetooth at the same time.
  • the electronic device 100 and the wearable device 201 are connected via Bluetooth as an exemplary illustration.
  • the electronic device 100 is an electronic device having an imaging function, such as a mobile phone, a tablet, or a camera.
  • Wearable devices 201 include wireless headphones, smart watches, smart bracelets, smart glasses, smart rings, smart sports shoes, virtual reality display devices, smart headbands, electronic clothing, electronic bracelets, electronic necklaces, electronic accessories, electronic tattoos, or smart mirrors etc.
  • the wearable device 201 can detect the user's health state information, exercise state information, emotional state information, and the like through sensors.
  • Health status information includes heart rate, blood pressure, blood sugar, EEG, ECG, EMG, body temperature and other information; exercise status information includes walking, running, cycling, swimming, badminton, skating, surfing, dancing and other common sports postures, and can also include some more fine-grained motion gestures, such as forehand, backhand, Latin dance, mechanical dance, etc.; emotional state information includes tension, anxiety, sadness, stress, excitement, joy, etc.
  • the electronic device 100 and the wearable device 201 are connected through Bluetooth.
  • the electronic device 100 obtains the user's biometric information, which corresponds to the user's health state information, motion state information, emotional state information, etc. detected by the wearable device 201 through sensors.
  • the wearable device 201 detects heart rate data through a heart rate sensor and blood pressure data through a blood pressure sensor; the biometric information may be the heart rate data and blood pressure data themselves, or information such as "heart rate normal" and "blood pressure normal" derived from the heart rate data and blood pressure data.
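A minimal Kotlin sketch of how such derived status strings could be produced from raw readings. The reference ranges used below are common textbook values chosen for illustration; the patent does not specify any thresholds.

```kotlin
// Sketch only: turn raw heart-rate / blood-pressure readings into the kind of
// biometric information described above ("heart rate normal", "blood pressure normal").
data class BiometricInfo(val heartRateBpm: Int, val systolic: Int, val diastolic: Int) {
    val heartRateStatus: String
        get() = if (heartRateBpm in 60..100) "heart rate normal" else "heart rate abnormal"
    val bloodPressureStatus: String
        get() = if (systolic in 90..139 && diastolic in 60..89) "blood pressure normal"
                else "blood pressure abnormal"
}

// The wearable (or the electronic device) may report either the raw numbers or only the
// derived status strings.
val info = BiometricInfo(heartRateBpm = 72, systolic = 118, diastolic = 76)
// info.heartRateStatus == "heart rate normal"; info.bloodPressureStatus == "blood pressure normal"
```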
  • the electronic device associates the biometric information with the information of the picture/video, and saves the picture/video with the biometric information.
  • the electronic device can quickly and accurately query and filter pictures/videos with different characteristics.
  • the sensors of the wearable device can provide more intrinsic features, and the feature recognition of pictures/videos can be more accurate.
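As one way to picture the query/filter step, here is a Kotlin sketch that assumes the biometric information was written into each JPEG's EXIF UserComment field when the picture was saved (see the tagging sketch later in this document). The androidx ExifInterface library is real; the tag layout and directory path are assumptions.

```kotlin
// Sketch only: filter saved pictures by a biometric feature stored in EXIF metadata.
import androidx.exifinterface.media.ExifInterface
import java.io.File

fun picturesWithFeature(galleryDir: File, feature: String): List<File> =
    galleryDir.listFiles { f -> f.extension.equals("jpg", ignoreCase = true) }
        .orEmpty()
        .filter { file ->
            val comment = ExifInterface(file.absolutePath)
                .getAttribute(ExifInterface.TAG_USER_COMMENT)
            comment?.contains(feature, ignoreCase = true) == true
        }

// Example: quickly query all pictures tagged with the motion posture "running".
// val runningShots = picturesWithFeature(File("/storage/emulated/0/DCIM/Camera"), "running")
```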
  • FIG. 2 shows a schematic structural diagram of the electronic device 100 .
  • Taking the electronic device 100 as an example, it should be understood that the electronic device 100 shown in FIG. 2 is only an example; the electronic device 100 may have more or fewer components than those shown in FIG. 2, two or more components may be combined, or different component configurations are possible.
  • the various components shown in the figures may be implemented in hardware, software, or a combination of hardware and software, including one or more signal processing and/or application specific integrated circuits.
  • the electronic device 100 may include: a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, a headphone jack 170D, a sensor module 180, buttons 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a subscriber identification module (SIM) card interface 195, and so on.
  • the sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, etc.
  • the structures illustrated in the embodiments of the present application do not constitute a specific limitation on the electronic device 100 .
  • the electronic device 100 may include more or fewer components than shown, or combine some components, or separate some components, or arrange different components.
  • the illustrated components may be implemented in hardware, software, or a combination of software and hardware.
  • the processor 110 may include one or more processing units. For example, the processor 110 may include an application processor (application processor, AP), a modem processor, a graphics processor (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller, a memory, a video codec, a digital signal processor (digital signal processor, DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc. Different processing units may be independent devices, or may be integrated in one or more processors. In some embodiments, the electronic device 100 may also include one or more processors 110.
  • the controller can generate an operation control signal according to the instruction operation code and timing signal, and complete the control of fetching and executing instructions.
  • a memory may also be provided in the processor 110 for storing instructions and data.
  • the memory in processor 110 is cache memory. This memory may hold instructions or data that have just been used or recycled by the processor 110 . If the processor 110 needs to use the instruction or data again, it can be called directly from the memory. Repeated accesses are avoided and the latency of the processor 110 is reduced, thereby increasing the efficiency of the system.
  • the wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modulation and demodulation processor, the baseband processor, and the like.
  • Antenna 1 and Antenna 2 are used to transmit and receive electromagnetic wave signals.
  • Each antenna in electronic device 100 may be used to cover a single or multiple communication frequency bands. Different antennas can also be reused to improve antenna utilization.
  • the mobile communication module 150 may provide wireless communication solutions including 2G/3G/4G/5G etc. applied on the electronic device 100 .
  • the mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (LNA) and the like.
  • the mobile communication module 150 can receive electromagnetic waves from the antenna 1, filter and amplify the received electromagnetic waves, and transmit them to the modulation and demodulation processor for demodulation.
  • the mobile communication module 150 can also amplify the signal modulated by the modulation and demodulation processor, and then turn it into an electromagnetic wave for radiation through the antenna 1 .
  • at least part of the functional modules of the mobile communication module 150 may be provided in the processor 110 .
  • at least part of the functional modules of the mobile communication module 150 may be provided in the same device as at least part of the modules of the processor 110 .
  • the antenna 1 of the electronic device 100 is coupled with the mobile communication module 150, and the antenna 2 is coupled with the wireless communication module 160, so that the electronic device 100 can communicate with the network and other devices through wireless communication technology.
  • the wireless communication technology may include the 5th generation (5G) system, the new radio (NR) system, the global system for mobile communications (GSM), the general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technologies, etc.
  • the GNSS may include a global positioning system (GPS), a global navigation satellite system (GLONASS), a BeiDou navigation satellite system (BDS), a quasi-zenith satellite system (QZSS), and/or a satellite-based augmentation system (SBAS).
  • a Bluetooth (BT) module and a WLAN module included in the wireless communication module 160 can transmit signals to detect or scan for devices near the electronic device 100, so that the electronic device 100 can discover nearby devices by using wireless communication technologies such as Bluetooth or WLAN, establish wireless communication connections with nearby devices, and share data with nearby devices through the aforementioned connections.
  • the Bluetooth (BT) module can provide a solution including one or more Bluetooth communications in classic Bluetooth (Bluetooth 2.1) or Bluetooth low energy (Bluetooth low energy, BLE).
  • the WLAN module can provide a solution including one or more WLAN communications in Wi-Fi direct, Wi-Fi LAN or Wi-Fi softAP.
  • the wireless communication solution provided by the mobile communication module 150 may enable the electronic device to communicate with a device (such as a server) in the network; the WLAN wireless communication solution provided by the wireless communication module 160 may likewise enable the electronic device to communicate with a device (such as a server) in a network, and to communicate with a cloud device through that device (such as a server). In this way, the electronic device can discover the cloud device and transmit data to the cloud device.
  • the electronic device 100 implements a display function through a GPU, a display screen 194, an application processor, and the like.
  • the GPU is a microprocessor for image processing, and is connected to the display screen 194 and the application processor.
  • the GPU is used to perform mathematical and geometric calculations for graphics rendering.
  • Processor 110 may include one or more GPUs that execute program instructions to generate or alter display information.
  • Display screen 194 is used to display images, videos, and the like.
  • the display screen 194 may be a flat display screen, a curved display screen, or a folding screen.
  • the folding screen is in a folding state, and the folding screen at least includes a first display area and a second display area. Wherein, the light emitting surfaces of the first display area and the second display area are different.
  • the first display area is located in the first area of the folding screen, and the second display area is located in the second area of the folding screen.
  • the angle between the first area and the second area is greater than or equal to 0 degrees and less than 180 degrees.
  • Display screen 194 includes a display panel.
  • the display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a MiniLED, a MicroLED, a Micro-OLED, a quantum dot light-emitting diode (QLED), and so on.
  • the electronic device 100 may include one or N display screens 194 , where N is a positive integer greater than one.
  • the shapes of the display screens may be different.
  • one of the displays may be a folding display and the other display may be a flat display.
  • one of the displays could be a color display and the other could be a black and white display.
  • the electronic device 100 can realize the shooting function through the ISP, the camera 193, the video codec, the GPU, the display screen 194 and the application processor.
  • the ISP is used to process the data fed back by the camera 193 .
  • the shutter is opened, the light is transmitted to the camera photosensitive element through the lens, the light signal is converted into an electrical signal, and the camera photosensitive element transmits the electrical signal to the ISP for processing, and converts it into an image visible to the naked eye.
  • ISP can also perform algorithm optimization on image noise, brightness, and skin tone.
  • ISP can also optimize the exposure, color temperature and other parameters of the shooting scene.
  • the ISP may be provided in the camera 193 .
  • Camera 193 is used to capture still images or video.
  • the object is projected through the lens to generate an optical image onto the photosensitive element.
  • the photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor.
  • the photosensitive element converts the optical signal into an electrical signal, and then transmits the electrical signal to the ISP to convert it into a digital image signal.
  • the ISP outputs the digital image signal to the DSP for processing.
  • DSP converts digital image signals into standard RGB, YUV and other formats of image signals.
  • the electronic device 100 may include 1 or N cameras 193 , where N is a positive integer greater than 1.
  • the camera 193 may be a 3D camera, and the electronic device 100 may implement a camera function through the 3D camera, ISP, video codec, GPU, display screen 194, application processor AP, neural network processor NPU, and the like.
  • the 3D cameras can be used to capture color image data as well as depth data of the subject.
  • the ISP can be used to process the color image data captured by the 3D camera. For example, when taking a photo, the shutter is opened, the light is transmitted to the camera photosensitive element through the lens, the light signal is converted into an electrical signal, and the camera photosensitive element transmits the electrical signal to the ISP for processing, and converts it into an image visible to the naked eye.
  • ISP can also perform algorithm optimization on image noise, brightness, and skin tone. ISP can also optimize the exposure, color temperature and other parameters of the shooting scene.
  • the ISP may be provided in the 3D camera.
  • the 3D camera may be composed of a color camera module and a 3D sensing module.
  • the photosensitive element converts the optical signal into an electrical signal, and then transmits the electrical signal to the ISP to convert it into a digital image signal.
  • the ISP outputs the digital image signal to the DSP for processing.
  • DSP converts digital image signals into standard RGB, YUV and other formats of image signals.
  • the 3D sensing module may be a time of flight (TOF) 3D sensing module or a structured light (structured light) 3D sensing module.
  • the structured light 3D sensing is an active depth sensing technology, and the basic components of the structured light 3D sensing module may include an infrared (IR) emitter, an IR camera module, and the like.
  • the working principle of the structured light 3D sensing module is to first project a light spot of a specific pattern onto the object to be photographed, then receive the light coding of the light spot pattern on the surface of the object, compare its similarities and differences with the originally projected light spot, and use the principle of triangulation to calculate the three-dimensional coordinates of the object.
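To make the triangulation principle concrete, here is a heavily simplified Kotlin sketch. Real structured-light modules calibrate and decode projected patterns; the numbers in the example are arbitrary illustrative values.

```kotlin
// Sketch only: depth and 3D coordinates from focal length, baseline and observed disparity.
data class Point3(val x: Double, val y: Double, val z: Double)

fun triangulate(
    fPixels: Double,      // focal length of the IR camera, expressed in pixels
    baselineM: Double,    // distance between IR projector and IR camera, in metres
    disparityPx: Double,  // shift of the received spot relative to the projected pattern, in pixels
    uPx: Double, vPx: Double,    // pixel coordinates of the spot
    cxPx: Double, cyPx: Double   // principal point of the IR camera
): Point3 {
    val z = fPixels * baselineM / disparityPx   // depth from similar triangles
    val x = (uPx - cxPx) * z / fPixels          // back-project to camera coordinates
    val y = (vPx - cyPx) * z / fPixels
    return Point3(x, y, z)
}

// Example: f = 580 px, baseline = 7.5 cm, disparity = 29 px  ->  z = 580 * 0.075 / 29 = 1.5 m
```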
  • the digital signal processor is used to process digital signals; in addition to digital image signals, it can also process other digital signals. For example, when the electronic device 100 selects a frequency point, the digital signal processor is used to perform a Fourier transform on the frequency point energy, and so on.
  • the electronic device 100 may implement audio functions through an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, an application processor, and the like. Such as music playback, recording, etc.
  • the air pressure sensor 180C is used to measure air pressure.
  • the acceleration sensor 180E can detect the magnitude of the acceleration of the electronic device 100 in various directions (generally three axes).
  • the magnitude and direction of gravity can be detected when the electronic device 100 is stationary. It can also be used to identify the posture of electronic devices, and can be used in applications such as horizontal and vertical screen switching, pedometers, etc.
  • Proximity light sensor 180G may include, for example, light emitting diodes (LEDs) and light detectors, such as photodiodes.
  • the electronic device 100 can use the proximity light sensor 180G to detect that the user holds the electronic device 100 close to the ear to talk, so as to automatically turn off the screen to save power.
  • the fingerprint sensor 180H is used to collect fingerprints.
  • the electronic device 100 can use the collected fingerprint characteristics to realize fingerprint unlocking, accessing application locks, taking pictures with fingerprints, answering incoming calls with fingerprints, and the like.
  • the software system of the electronic device 100 may adopt a layered architecture, an event-driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture.
  • the embodiments of the present application take an Android system with a layered architecture as an example to exemplarily describe the software structure of the electronic device 100 .
  • the Android system is only a system example of the electronic device 100 in the embodiment of the present application, and the present application may also be applicable to other types of operating systems, such as IOS, windows, etc., which is not limited in the present application.
  • the following only takes the Android system as an example of the operating system of the electronic device 100 .
  • the application framework layer provides an application programming interface (application programming interface, API) and a programming framework for applications in the application layer.
  • the application framework layer includes some predefined functions.
  • the application framework layer may include window managers, content providers, view systems, telephony managers, resource managers, notification managers, and the like.
  • a system library can include multiple functional modules, for example: an image processing module, a video processing module, a surface manager, media libraries, a 3D graphics processing library (for example, OpenGL for Embedded Systems (OpenGL ES)), a 2D graphics engine (for example, Skia Graphics Library (SGL)), etc.
  • the image processing module is used to encode, decode and render the image, so that the application can display the image on the display screen.
  • the conversion of image formats and the generation of image files can be realized.
  • the 3D graphics processing library is used to implement 3D graphics drawing, image rendering, compositing, and layer processing.
  • the kernel layer is the layer between hardware and software.
  • the kernel layer contains at least display drivers, camera drivers, audio drivers, and sensor drivers.
  • FIG. 4 exemplarily shows a schematic structural diagram of the wearable device 201 provided by the present application.
  • the processor 102 may be used to read and execute computer readable instructions.
  • the processor 102 may mainly include a controller, an arithmetic unit and a register.
  • the controller is mainly responsible for instruction decoding, and sends out control signals for the operations corresponding to the instructions.
  • the arithmetic unit is mainly responsible for performing fixed-point or floating-point arithmetic operations, shift operations, and logical operations, and can also perform address operations and conversions.
  • Registers are mainly responsible for saving register operands and intermediate operation results temporarily stored during instruction execution.
  • the hardware architecture of the processor 102 may be an application specific integrated circuit (Application Specific Integrated Circuits, ASIC) architecture, a MIPS architecture, an ARM architecture, an NP architecture, or the like.
  • Memory 103 is coupled to processor 102 for storing at least one of various software programs or sets of instructions.
  • memory 103 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic disk storage devices, flash memory devices, or other non-volatile solid-state storage devices.
  • the memory 103 can store an operating system, such as an embedded operating system such as uCOS, VxWorks, RTLinux, and the like.
  • Memory 103 may also store communication programs that may be used to communicate with electronic device 100, one or more servers, or additional devices.
  • the wireless communication processing module 104 may include one or more of a Bluetooth (BT) communication processing module 104A, a WLAN communication processing module 104B.
  • one or more of the Bluetooth (BT) communication processing module and the WLAN communication processing module may also transmit signals, such as broadcast Bluetooth signals or beacon signals, so that other devices (e.g., the electronic device 100) can discover the wearable device 201, establish a wireless communication connection with it, and communicate with it through one or more wireless communication technologies such as Bluetooth or WLAN.
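A minimal Kotlin sketch of the wearable's side of this discovery step, assuming an Android-based wearable with the BLE advertising permission granted. The advertised UUID shown is the standard Bluetooth SIG Heart Rate service, used here only as an example.

```kotlin
// Sketch only: the wearable device 201 advertising over BLE so the electronic device 100 can discover it.
import android.bluetooth.BluetoothAdapter
import android.bluetooth.le.AdvertiseCallback
import android.bluetooth.le.AdvertiseData
import android.bluetooth.le.AdvertiseSettings
import android.os.ParcelUuid
import java.util.UUID

fun startAdvertising(adapter: BluetoothAdapter, callback: AdvertiseCallback) {
    val advertiser = adapter.bluetoothLeAdvertiser ?: return  // null if BLE advertising is unsupported
    val settings = AdvertiseSettings.Builder()
        .setAdvertiseMode(AdvertiseSettings.ADVERTISE_MODE_LOW_POWER)
        .setConnectable(true)
        .build()
    val data = AdvertiseData.Builder()
        .setIncludeDeviceName(true)
        .addServiceUuid(ParcelUuid(UUID.fromString("0000180d-0000-1000-8000-00805f9b34fb"))) // Heart Rate service
        .build()
    advertiser.startAdvertising(settings, data, callback)
}
```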
  • the mobile communication processing module 105 can provide wireless communication solutions including 2G/3G/4G/5G etc. applied on the electronic device 100 .
  • the mobile communication processing module 105 may include a circuit-switched module ("CS" module) for performing cellular communications and a packet-switched module ("PS" module) for performing data communications.
  • the mobile communication processing module 105 may communicate with other devices (such as servers) through the fourth generation mobile communication technology (4th generation mobile networks) or the fifth generation mobile communication technology (5th generation mobile networks).
  • the touch screen 106, also known as a touch panel, is an inductive liquid crystal display device that can receive input signals such as touches, and can be used to display images, videos, and the like.
  • the touch screen 106 can use a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, an active-matrix organic light-emitting diode (AMOLED) display, a flexible light-emitting diode (FLED) display, a quantum dot light-emitting diode (QLED) display, and so on.
  • The sensor module 107 may include a motion sensor 107A, a biosensor 107B, an environmental sensor 107C, and the like. Among them:
  • the motion sensor 107A is a component that converts a non-electrical change (e.g., speed, pressure) into an electrical change. It may include at least one of the following: an acceleration sensor, a gyroscope sensor, a geomagnetic sensor (also known as an electronic compass sensor), or an atmospheric pressure sensor. Among them, the acceleration sensor can detect the magnitude of acceleration in various directions (generally three axes, i.e., the x, y and z axes). Gyroscope sensors can be used to determine motion attitude. Electronic compass sensors can be used to measure direction and enable or assist navigation. The atmospheric pressure sensor is used to measure air pressure. In some embodiments, the altitude change of the location can be calculated from the weak air pressure change during movement, and the accuracy can be kept within 10 cm while moving up the height of a 10-story building, so that movements ranging from rock climbing to small stair climbs can be monitored.
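As a numeric illustration of how an altitude change follows from a pressure change, here is a Kotlin sketch using the standard-atmosphere formula (the same relation implemented by Android's SensorManager.getAltitude()). The sea-level reference pressure is an assumption.

```kotlin
// Sketch only: estimate altitude change from two barometric pressure readings.
import kotlin.math.pow

fun altitudeMetres(pressureHpa: Float, seaLevelHpa: Float = 1013.25f): Float =
    44330f * (1f - (pressureHpa / seaLevelHpa).pow(1f / 5.255f))

fun altitudeChange(previousHpa: Float, currentHpa: Float): Float =
    altitudeMetres(currentHpa) - altitudeMetres(previousHpa)

// Near sea level a drop of roughly 0.12 hPa corresponds to about +1 m of climb, so
// resolving centimetre-level stair or climbing movements requires a very stable pressure sensor.
```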
  • the motion sensor 107A can measure the user's activity, such as running steps, speed, swimming laps, cycling distance, exercise posture (such as playing ball, swimming, running) and the like.
  • The biosensor 107B is an instrument that is sensitive to biological substances and converts their concentration into electrical signals for detection. It is an analysis tool or system composed of immobilized, biologically sensitive materials used as recognition elements (including enzymes, antibodies, antigens, microorganisms, cells, tissues, nucleic acids and other biologically active substances), appropriate physicochemical transducers (such as oxygen electrodes, photosensitive tubes, field-effect transistors, piezoelectric crystals, etc.), and signal amplification devices. A biosensor functions as both a receiver and a converter.
  • the biosensor 107B may include at least one of the following: a blood sugar sensor, a blood pressure sensor, an electrocardiogram sensor, an electromyography sensor, a body temperature sensor, a brain wave sensor, etc. The main functions of these sensors include health and medical monitoring, entertainment, and the like.
  • the blood sugar sensor is used to measure blood sugar.
  • Blood pressure sensors are used to measure blood pressure.
  • ECG sensors, for example, use silver nanowires to monitor electrophysiological signals, such as electrocardiograms.
  • EMG sensors are used to monitor EMG.
  • Body temperature sensors are used to measure body temperature, and brain wave sensors are used to monitor brain waves.
  • various physiological indicators of the user can be measured by the biosensor 107B, and the wearable device 201 can calculate the health status of the user according to the physiological indicators.
  • the biosensor 107B may also include a heart rate sensor and a galvanic (electrodermal) sensor. Among them:
  • the heart rate sensor can track the user's exercise intensity, different exercise training modes, etc. by detecting the user's heart rate, and can calculate the user's sleep cycle, sleep quality and other health data.
  • When light strikes the skin, the light reflected back through the skin tissue is received by the photosensitive sensor and converted into an electrical signal, which is then converted into a digital signal; the heart rate can then be measured according to the absorbance of the blood.
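A Kotlin sketch of the principle: counting peaks in a window of the reflected-light (PPG) signal to estimate heart rate. Real firmware filters the signal and rejects motion artifacts; the peak criterion here is deliberately crude and purely illustrative.

```kotlin
// Sketch only: estimate heart rate in beats per minute from a PPG sample window.
fun estimateHeartRateBpm(samples: FloatArray, sampleRateHz: Float): Int {
    val mean = samples.average().toFloat()
    var peaks = 0
    for (i in 1 until samples.size - 1) {
        val isLocalMax = samples[i] > samples[i - 1] && samples[i] >= samples[i + 1]
        if (isLocalMax && samples[i] > mean) peaks++   // crude peak: local maximum above the mean
    }
    val windowSeconds = samples.size / sampleRateHz
    return ((peaks / windowSeconds) * 60f).toInt()     // beats per second -> beats per minute
}
```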
  • Galvanic sensors are used to measure the user's arousal, which is closely linked to the user's attention and engagement, and is usually equipped on some devices that can monitor sweat levels.
  • the skin resistance and conductance of the human body change with the changes in the function of the skin sweat glands, and these measurable skin galvanic changes are called electrodermal activity (EDA).
  • the wearable device 201 measures the psychologically induced sweat gland activity through the electrodermal sensor to determine the user's psychological activity, such as the user's mood index. For example, feeling happy, nervous, fearful, stressed, etc.
  • the environmental sensor 107C may include at least one of the following: an air temperature and humidity sensor, a rain sensor, a light sensor, a wind speed and direction sensor, a particle sensor, and the like.
  • the environmental sensor 107C can detect air quality, such as the degree of haze, indoor formaldehyde concentration, PM2.5 detection, and so on. In this application, weather changes, air humidity, air quality, etc. can be measured by the environmental sensor 107C.
  • the structure shown in FIG. 4 does not constitute a specific limitation on the wearable device 201 .
  • the wearable device 201 may include more or fewer components than shown, or combine some components, or separate some components, or arrange different components.
  • the illustrated components may be implemented in hardware, software, or a combination of software and hardware.
  • the user wears the wearable device, and the electronic device establishes a connection with the wearable device.
  • When the user triggers the electronic device to start the camera to take a picture, the electronic device obtains the picture information and obtains the biometric information.
  • the biometric information corresponds to sensor data detected by the wearable device through at least one sensor.
  • the sensor data involved in this application includes, but is not limited to, data detected by at least one of the motion sensor 107A, the biosensor 107B, and the environmental sensor 107C mentioned above.
  • the above-mentioned biometric information may be sent to the electronic device after the wearable device performs analysis and processing based on the sensor data; alternatively, the wearable device may send the sensor data to the electronic device, and the electronic device analyzes and processes the sensor data.
  • the electronic device associates the biometric information with the picture information, or associates the biometric information with a person in the photographed picture, and the person corresponds to the biometric information.
  • the electronic device generates a picture file with biometric information, and in response to the user viewing the picture file, the electronic device may display the biometric information or display a portion of the biometric information.
  • a smartphone is taken as an example of the above-mentioned electronic device, and an implementation form of the photographing method provided by the present application on the display interface of the smartphone is exemplarily described.
  • Method 1: Start the camera of the electronic device and select the labeling mode.
  • The camera application is application software used by the electronic device to take pictures.
  • the camera application can be started, and the electronic device calls at least one camera to shoot.
  • Figure 5a illustrates an exemplary user interface on an electronic device for displaying a list of applications.
  • Figure 5a includes a status bar 201 and a display interface 202, wherein the status bar 201 may include: one or more signal strength indicators 203 of mobile communication signals (also known as cellular signals), one or more signal strength indicators 207 of wireless fidelity (Wi-Fi) signals, a Bluetooth indicator 208, a battery status indicator 209, and a time indicator 211.
  • When the Bluetooth module of the electronic device is in an on state (i.e., the electronic device supplies power to the Bluetooth module), the Bluetooth indicator 208 is displayed on the display interface of the electronic device.
  • the display interface 202 displays a plurality of application icons.
  • the display interface 202 includes an application icon of the camera 205 .
  • When the electronic device detects a user operation acting on the application icon of the camera 205, the electronic device displays the application interface provided by the camera application.
  • Figure 5b shows a possible user interface provided by a camera application.
  • the application interface of the camera 205 is shown in FIG. 5b .
  • the application interface may include: a display area 30 , a flash icon 301 , a setting icon 302 , a mode selection area 303 , a gallery icon 304 , a shooting icon 305 , and a switching icon 306 .
  • the display content of the display area 30 is the preview display interface of the image captured by the camera currently used by the electronic device.
  • the camera currently used by the electronic device may be the default camera set by the camera application, and the camera currently used by the electronic device may also be the camera used when the camera application was closed last time.
  • the flash icon 301 can be used to indicate the working status of the flash.
  • Setting icon 302: when a user operation acting on the setting icon 302 is detected, in response to the operation, the electronic device can display other shortcut functions, such as adjusting the resolution, delayed shooting (also known as timer shooting, which controls when the picture is taken), shooting mute, voice-activated photo, smile capture (when the camera detects a smile feature, it automatically focuses on the smile), and other functions.
  • the mode selection area 303 is used to provide different shooting modes. According to the different shooting modes selected by the user, the cameras and shooting parameters enabled by the electronic device are also different.
  • An annotation mode 303A, a night scene mode 303B, a photographing mode 303C, a video recording mode 303D, and more 303E may be included.
  • the icon of the photographing mode 303C is marked to prompt the user that the current mode is the photographing mode. Among them:
  • Labeling mode 303A: in this mode, when the electronic device detects a user operation for taking pictures/videos, the electronic device obtains the image information currently collected by the camera and obtains biometric information, which is detected by the wearable device through its sensors.
  • the electronic device fuses and encodes the biometric information and the image information obtained by shooting, and generates a picture/video file with the biometric information.
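The patent does not fix a container format for this fusion; one possible realization, sketched below in Kotlin, writes the biometric information into the JPEG's EXIF UserComment after the photo is saved (XMP or a sidecar file would work equally well). When two wearable devices are connected, the information of both users could be concatenated into the same field.

```kotlin
// Sketch only: attach the biometric information to the generated picture file via EXIF.
import androidx.exifinterface.media.ExifInterface

fun tagPictureWithBiometrics(jpegPath: String, biometricInfo: String) {
    val exif = ExifInterface(jpegPath)
    // e.g. biometricInfo = "user1: heart rate 72 bpm, blood pressure normal, posture running"
    exif.setAttribute(ExifInterface.TAG_USER_COMMENT, biometricInfo)
    exif.saveAttributes()   // rewrites the metadata block of the existing JPEG in place
}
```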
  • the biometric information may be information such as the user's heart rate data, blood pressure data, etc.; it may also be information such as normal heart rate and normal blood pressure.
  • If the electronic device has not turned on Bluetooth, when the electronic device detects the user operation 307 acting on the labeling mode 303A, in response to the user operation 307 the electronic device automatically turns on Bluetooth and automatically searches for connectable devices; the connection is then established according to the user's choice, or the electronic device automatically establishes a connection with a Bluetooth device with which a connection has previously been established.
  • the electronic device may simultaneously receive sensor data or biometric information from the wearable devices of two users, perform fusion coding of the two pieces of biometric information with the image information obtained by shooting, and thereby associate the biometric information of the two users in one picture/video file.
  • the icon of the photographing mode 303C in the mode selection area 303 of the electronic device is no longer marked, but the annotation mode 303A is marked.
  • the night scene mode 303B can improve the detail rendering ability of bright and dark parts, control noise, and present more picture details.
  • the photographing mode 303C is suitable for most photographing scenes, and can automatically adjust photographing parameters according to the current environment.
  • Video recording mode 303D: used to shoot a video.
  • More 303E: when a user operation acting on More 303E is detected, in response to the operation, the electronic device may display other selection modes, such as a panorama mode (the electronic device automatically stitches multiple continuously taken photos into one photo to achieve the effect of expanding the viewing angle of the picture), an HDR mode (the electronic device automatically takes three photos in succession with underexposure, normal exposure and overexposure, and selects the best parts to combine into one photo), and so on.
  • the photographing device/electronic device can enter the corresponding mode.
  • the image displayed in the display area 30 is the image processed in the current mode.
  • the mode icons in the mode selection area 303 are not limited to virtual icons, and can also be selected through physical buttons deployed on the photographing device/electronic device, so that the photographing device enters the corresponding mode.
  • Gallery icon 304: when a user operation acting on the gallery icon 304 is detected, in response to the operation, the electronic device may enter the gallery of the electronic device, and the gallery may include photos and videos that have been taken.
  • the gallery icon 304 may be displayed in different forms. For example, after the electronic device saves the image currently captured by the camera, the gallery icon 304 displays a thumbnail of the image.
  • Shooting icon 305: when a user operation (such as a touch operation, voice operation, gesture operation, etc.) acting on the shooting icon 305 is detected, in response to the operation, the electronic device acquires the image currently displayed in the display area 30 and saves it in the gallery.
  • the gallery can be entered through a user operation (eg, touch operation, gesture operation, etc.) on the gallery icon 304 .
  • the switch icon 306 can be used to switch between the front camera and the rear camera.
  • the shooting direction of the front camera is the same as the display direction of the screen of the electronic device used by the user, and the shooting direction of the rear camera is opposite to the display direction of the screen of the electronic device used by the user. If the display area 30 currently displays the image captured by the rear camera, when a user operation acting on the switch icon 306 is detected, the display area 30 displays the image captured by the front camera in response to the operation. If the display area 30 currently displays the image captured by the front camera, when a user operation acting on the switch icon 306 is detected, the display area 30 displays the image captured by the rear camera in response to the operation.
  • FIG. 5c exemplarily shows an application interface corresponding to the annotation mode 303A.
  • the icon of the smart label 303A in the mode selection area 303 is marked, indicating that the current mode is the label mode.
  • prompt information may also be displayed, which is used to prompt the user that the current electronic device is connected to the wearable device.
  • the text "connected to a wearable device” is displayed in the prompt area 308, indicating that the current electronic device has been connected to a wearable device, and the wearable device may be an earphone, a watch, a bracelet, glasses, or the like.
  • the text "not connected to the wearable device” is displayed in the prompt area 308, prompting the user that the current electronic device is not connected to the wearable device.
  • the connection method may be a short-distance connection method such as a Bluetooth connection or a WiFi connection.
  • the content in the prompt area 308 may prompt the user using the electronic device, so that the user wearing the wearable device is within the shooting area.
  • the text "Please confirm that the user wearing the wearable device is within the shooting range of the lens" is displayed in the prompt area 308 .
  • user operations include but are not limited to operations such as clicks, shortcut keys, gestures, floating touch, and voice commands.
  • FIG. 5d shows yet another possible user interface provided by the camera application.
  • the application interface of the camera 205 is shown in FIG. 5d .
  • the application interface 31 includes a label icon 310 .
  • when the electronic device detects a user operation acting on the annotation icon 310, in response to the operation, the electronic device activates the annotation function.
  • the labeling function can be enabled in any shooting mode in the mode selection area 303 .
  • when the electronic device is in the photographing mode 303C and a user operation acting on the labeling icon 310 is detected, in response to the operation, the electronic device starts the labeling function in the photographing mode 303C.
  • when the electronic device is in the night scene mode 303B and a user operation acting on the labeling icon 310 is detected, in response to the operation, the electronic device starts the labeling function in the night scene mode 303B.
  • FIG. 5e exemplarily shows the application interface after the labeling function is activated.
  • the labeling icon 310 is marked, indicating that the labeling function is currently activated.
  • the electronic device obtains pictures/videos with biometric information, and shares the pictures/videos with biometric information through short messages, social software, video calls, etc.
  • the camera is activated through the first application icon, and the shooting mode of the camera is the labeling mode by default.
  • the display interface 202 of FIG. 6a displays a plurality of application icons.
  • the display interface 202 includes the application icon of the smart annotation 212; if the user wants to activate the annotation mode, the user triggers the application icon of the smart annotation 212 through a user operation.
  • the electronic device displays the application interface of the smart annotation 212 in response to the user operation.
  • if the electronic device does not have Bluetooth enabled, then when the electronic device detects a user operation acting on the smart annotation 212, in response to the user operation, the electronic device automatically enables Bluetooth, automatically searches for connectable Bluetooth devices, and establishes a connection according to the user's selection, or automatically establishes a connection with a Bluetooth device that was previously connected.
  • FIG. 6b exemplarily shows a possible application interface provided by the smart annotation 212 .
  • the application interface may include: a display area 40 , a flash icon 401 , a setting icon 402 , a gallery icon 403 , a shooting icon 404 , a switching icon 405 , and a prompt area 406 .
  • the flash icon 401, setting icon 402, gallery icon 403, shooting icon 404, switching icon 405, and prompt area 406 provided by the embodiment shown in FIG. 6b are similar to those of the foregoing embodiments, so for their implementation, reference may be made to the corresponding descriptions of the flash icon, setting icon, gallery icon, shooting icon 305, switching icon 306, and prompt area 308 above, which will not be repeated here.
  • the labeling mode can also be started through the wearable device APP in the electronic device.
  • the smart wearable 214 is an application for managing and interacting with one or more wearable devices of one or more types, including function management, permission management, and the like.
  • the application interface of the smart wearable 214 may include function controls of multiple wearable devices.
  • the electronic device and the wearable device are paired and connected, and the user selects to enter the user interface of the corresponding wearable device.
  • the electronic device detects the user operation for starting the labeling mode and starts the labeling mode accordingly.
  • the electronic device can then obtain the biometric information corresponding to the sensor data of the wearable device.
  • FIG. 5c and FIG. 6b exemplarily show the application interface in the annotation mode.
  • Figure 7a also provides a possible application interface.
  • the display area 40 may further include a preview icon 407, where the preview icon 407 is used to trigger the electronic device to acquire biometric information.
  • when the electronic device detects a user operation on the preview icon 407, the electronic device sends a request message to the wearable device; after the wearable device receives the request message, it sends sensor data or biometric information to the electronic device, and the electronic device displays at least part of the biometric information on the display screen in real time based on the received sensor data or biometric information.
  • specifically, after the wearable device receives the request message, the wearable device sends sensor data to the electronic device, and the electronic device determines the biometric information based on the received sensor data and displays at least part of the biometric information on the display screen in real time; or, after the wearable device receives the request message, the wearable device determines the biometric information based on the sensor data and sends the biometric information to the electronic device, and the electronic device displays at least part of the biometric information on the display screen in real time based on the received biometric information.
  • alternatively, when the electronic device detects a user operation on the smart annotation 212 in FIG. 6a, the electronic device displays the interface shown in FIG. 7a and sends a request message to the wearable device; after the wearable device receives the request message, it sends sensor data or biometric information to the electronic device.
  • the preview icon 407 is used to trigger the electronic device to display at least part of the biometric information on the display screen in real time based on the acquired sensor data or biometric information.
  • when the electronic device detects a user operation on the preview icon 407, the electronic device displays the interface shown in FIG. 7b; the display area 40 includes a preview area 408, and the preview area 408 displays information about the user Xiao A.
  • for example, the preview area 408 shows the health status of Xiao A, such as that Xiao A's heart rate is normal; it also shows the exercise status of Xiao A, such as that Xiao A is running.
  • the preview area 408 may also include information such as Xiao A's blood pressure, blood sugar, whether the exercise posture is standard, and emotions (eg, happy, nervous, sad).
  • the electronic device in response to a user operation, may directly display the application interface of FIG. 7b without being triggered by the preview icon 407 in FIG. 7a.
  • when the electronic device detects the user operation that activates the annotation mode, the electronic device sends a request message to the wearable device; after the wearable device receives the request message, it sends sensor data or biometric information to the electronic device, and the electronic device displays at least part of the biometric information on the display screen in real time, as shown in Figure 7b.
  • the user in the labeling mode or before entering the labeling mode, can configure the biometric information on the side of the electronic device, and select to obtain specific biometric information as required.
  • the electronic device displays the setting interface.
  • the electronic device may display the setting interface 60 of the annotation mode as shown in FIG. 8a in response to the user operation on the setting icon 302 (or the setting icon 402).
  • the electronic device displays the interface 70 of the wearable device as shown in FIG. 8b in response to the user operation for the option 602 .
  • the setting interface 70 includes my device 701 and other devices 702, wherein my device 701 includes the device that the electronic device is currently connected to and the devices that have been connected before; for example, Xiao A's watch is the device the electronic device is currently connected to, while Xiao B's watch indicates that the electronic device and Xiao B's watch have been connected before but are not currently connected.
  • other devices 702 refers to the connectable but not yet connected devices found by the electronic device through Bluetooth searching, for example, Xiao A's earphones and Xiao C's watch.
  • the circle of the icon 801 in the heart rate column is on the left at this time, indicating that the biometric information of the picture/video currently captured by the electronic device does not include heart rate information.
  • the circle of the icon 801 moves to the right, indicating that the biometric information of the picture/video currently captured by the electronic device includes heart rate information.
  • the electronic device can automatically display the facial image 7031 associated with Xiao A's watch; after that, the interface shown in Figure 8d is displayed directly.
  • the display content of the display area of the application interface 1000 is the image captured by the camera currently used by the electronic device.
  • the camera currently used by the electronic device may be the default camera set by the camera application, and the camera currently used by the electronic device may also be the camera used when the camera application was closed last time.
  • the wearable device detects sensor data through one or more sensors, and sends biometric information to the electronic device based on the configuration on the side of the wearable device.
  • the wearable device sends biometric information to the electronic device, and the biometric information corresponds to the blood pressure data, including blood pressure information (eg, normal blood pressure).
  • when the electronic device detects the user operation that triggers taking a picture, the electronic device fuses and encodes the currently collected picture information with the biometric information to generate a picture with the biometric information.
  • the corresponding sensor data may be, for example, heart rate, blood pressure, blood sugar, exercise state, emotional state, and the like.
  • the callout icon 412 may be used to control the display and hiding of the preview area 408, wherein the callout icon 412 may indicate the display state and the hidden state in the form of display brightness or color.
  • the electronic device can acquire the biometric information, and based on the acquired biometric information, perform fusion coding with the captured image frame.
  • FIG. 11a exemplarily shows a gallery interface 1100, and the gallery interface 1100 displays a plurality of pictures.
  • FIG. 11 b shows a picture viewing interface 1200 , including a display area 1201 , a share icon 1202 , an edit icon 1203 , a delete icon 1204 , a more icon 1205 and an annotation area 1206 .
  • FIG. 12 exemplarily shows yet another gallery interface 1400 .
  • the gallery interface 1400 divides different picture sets for different types of pictures, and the division method can be divided according to the biometric information of the pictures.
  • Figure 12 includes a picture set with intelligent annotation, in which the pictures/videos all carry biometric information; it also includes picture sets distinguished by person, such as the picture set of Xiao A and the picture set of Xiao B, where the picture set of Xiao A contains only pictures/videos of Xiao A and the picture set of Xiao B contains only pictures/videos of Xiao B; and it includes picture sets distinguished by motion state, such as the picture set of running and the picture set of playing badminton.
  • the cursor 1210 can be hidden in the picture viewing interface 1200.
  • the user can click the display area of Xiao A in the picture 1101 to view the biometric information of Xiao A in the picture 1101, and click the display area of Xiao A again to hide the biometric information, which is not limited in this application.
  • Fig. 15b exemplarily shows a video playing interface 1600.
  • the video playing interface 1600 includes a progress bar 1601 for indicating the progress of the video playing.
  • the title of the video "Xiao A running" indicates the motion state of the user in the video, and the title of the video may also include information such as the user's health state and emotional state.
  • the embodiment of the present application provides a picture file format with biometric information.
  • the electronic device fuses and encodes the biometric information and the picture information to generate the picture file format.
  • Fig. 19a exemplarily shows a picture file format with biometric information.
  • the basic data structure of the picture file format includes two types: "segment" and compressed-encoded image data.
  • if the sensor data or biometric information obtained by the electronic device indicates that the user is running, a 0x00 field is written into the field indicating the biometric information of the image; if the sensor data or biometric information obtained by the electronic device indicates that the user's heart rate is 60 beats/min, the field 0x80 0x3C is written into the field indicating the biometric information of the image (0x3C is 60 in hexadecimal); and so on.
  • the IFD in the image file format also includes fields such as identity information, scene information, etc.
  • the identity information includes the device name of the wearable device, the device account (such as a Huawei account), a custom user name, etc.
  • the scene information includes the scene in the picture, which the electronic device comprehensively determines by identifying the scene in the image and the geographic location information when the picture is taken, such as a park, a bar, a lakeside, a museum, and the like.
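  • As a non-limiting illustration of the field layout described above, the following Python sketch packs the example biometric bytes (0x00 for running, 0x80 0x3C for a heart rate of 60 beats/min) and wraps them in a JPEG application segment; the segment marker 0xEB, the "BIOM" identifier, and the helper names are assumptions for illustration only, not part of the embodiment.

```python
import struct

# Field encoding described above: 0x00-0x7F motion postures, 0x80-0x9F vital signs.
MOTION_RUNNING = 0x00
VITAL_HEART_RATE = 0x80

def biometric_payload(motion=None, heart_rate_bpm=None):
    """Pack biometric information into the byte layout sketched in the description."""
    data = bytearray()
    if motion is not None:
        data.append(motion)                                 # e.g. 0x00 = running
    if heart_rate_bpm is not None:
        data += bytes([VITAL_HEART_RATE, heart_rate_bpm])   # e.g. 0x80 0x3C = 60 bpm
    return bytes(data)

def app_segment(payload, marker=0xEB, identifier=b"BIOM\x00"):
    """Wrap the payload in a JPEG application segment (APPn) so that it can sit
    alongside the compressed image data as an extra segment."""
    body = identifier + payload
    length = len(body) + 2                   # the JPEG length field counts itself
    return bytes([0xFF, marker]) + struct.pack(">H", length) + body

segment = app_segment(biometric_payload(motion=MOTION_RUNNING, heart_rate_bpm=60))
print(segment.hex())   # ffeb000a42494f4d0000803c
```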
  • the embodiment of the present application also provides a video frame format with biometric information.
  • a complete video is composed of multiple video frames, and one video frame corresponds to one picture.
  • the electronic device fuses and encodes the biometric information and the video frame information to generate the video frame format.
  • FIG. 19b exemplarily shows a video frame format with biometric information.
  • the video frame format includes supplemental enhancement information (SEI), a sequence parameter set (SPS), a picture parameter set (PPS), and the compression-encoded video data sequence (VCL data).
  • a set of global parameters of a coded video sequence are stored in the SPS.
  • the coded video sequence is the sequence formed after the pixel data of the frames of the original video has been encoded into structured units.
  • the parameters on which the encoded data of each frame depends are stored in the PPS.
  • SEI is part of the code stream and provides a way to add extra information to the video code stream; SEI information can be inserted during the generation and transmission of the video content, and the inserted information travels through the transmission link to the electronic device together with the rest of the video content.
  • the SEI includes fields such as the network abstraction layer (NAL) unit type, the SEI type, and the SEI length.
  • the video frame format shown in FIG. 19b is an exemplary video frame format provided by the present application, wherein the position of the biometric information field in the video frame format is not limited by the present application.
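  • As a rough sketch of how the biometric field could ride in an SEI message, the following Python code builds a simplified H.264-style SEI NAL unit using the standard user_data_unregistered payload (type 5); the UUID value and helper names are assumptions, and emulation-prevention bytes are omitted for brevity.

```python
import uuid

# Assumed application-specific UUID identifying the biometric payload.
BIOMETRIC_UUID = uuid.UUID("7fb646e0-0000-4c4f-9d54-42494f4d4554")

def sei_user_data(payload: bytes) -> bytes:
    """Build a simplified SEI NAL unit (user_data_unregistered, payload type 5)
    carrying the biometric bytes; emulation-prevention bytes are omitted."""
    body = BIOMETRIC_UUID.bytes + payload
    sei = bytes([0x05])                   # payload type 5 = user_data_unregistered
    size = len(body)
    while size >= 255:                    # payload size is coded as a run of 0xFF bytes
        sei += b"\xff"
        size -= 255
    sei += bytes([size]) + body
    return bytes([0x06]) + sei + b"\x80"  # NAL header (type 6 = SEI) + RBSP trailing bits

nal = sei_user_data(bytes([0x00, 0x80, 0x3C]))   # running, heart rate 60 beats/min
print(nal.hex())
```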
  • the wearable device and the photographing device obtain each other's connection information (such as hardware information, interface information, identity information, etc.) through Bluetooth.
  • the shooting device can obtain the sensor data of the wearable device after the shooting function is turned on, and the wearable device can synchronize some functions of the shooting device.
  • for example, the wearable device can actively trigger the shooting device to start the shooting function, and the wearable device can view the picture/video files in the shooting device, and so on.
  • Step S102 The photographing device detects a user operation that triggers the function of photographing a picture.
  • alternatively, the user operation may be a user operation on the wearable device: the wearable device detects a user operation that triggers the picture-taking function and sends a corresponding instruction to the photographing device through Bluetooth, and the photographing device then triggers the picture-taking function.
  • for example, the user starts the camera application by clicking the application icon 901 in Fig. 9a and triggers the picture-taking function by clicking the icon 1003 in Fig. 9b; the wearable device sends a picture-taking instruction to the photographing device through Bluetooth, so that the photographing device triggers the photographing.
  • the user operation that triggers the function of taking pictures may also be referred to as the first operation.
  • after the photographing device detects a user operation that triggers the picture-taking function, it sends a request message to the wearable device, where the request message is used to request acquisition of sensor data or biometric information.
  • the request message includes the requested data type, data collection method, and data collection interval.
  • the requested data type may be: health state, exercise state, emotional state, and the like.
  • the health status includes heart rate, blood pressure, blood sugar, EEG, ECG, EMG, body temperature, etc.
  • exercise status includes walking, running, cycling, swimming, playing badminton, skating, surfing, dancing and other common sports postures. It can also include some more fine-grained motion gestures, such as: forehand, backhand, Latin dance, mechanical dance, etc.; emotional states include tension, anxiety, sadness, stress, excitement, joy, etc.
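  • The request message can be pictured as a small structure carrying the requested data types, the collection method, and the collection interval; the following Python sketch is illustrative only, and the field names are assumptions.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class SensorRequest:
    """Request message sent by the photographing device (field names are illustrative)."""
    data_types: List[str]          # e.g. ["heart_rate", "motion_posture", "emotion"]
    collection_mode: str           # "single" for photos, "continuous" for video
    collection_interval_s: float   # ignored (set to an invalid value) for single collection

# Taking a picture: one-shot collection of motion posture and heart rate.
photo_request = SensorRequest(["motion_posture", "heart_rate"], "single", -1)

# Shooting a video: continuous collection, one sample per second.
video_request = SensorRequest(["motion_posture", "heart_rate"], "continuous", 1.0)
```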
  • the wearable device parses the received request message, and obtains the data type, data collection method and data collection interval required by the shooting device. According to the request message, the wearable device sends sensor data or biometric information to the photographing device. Exemplarily, for taking pictures, the data collection method is single collection, the data collection interval is set to an invalid value, and the wearable device sends data to the shooting device once. If the data type in the request message includes motion posture and heart rate, the wearable device sends the sensor data or biometric information obtained by the motion sensor and the heart rate sensor to the photographing device.
  • the sensor data is raw data, that is, data directly detected by the sensors; for example, the sensor data may include the user's heart rate data obtained by the wearable device through the heart rate sensor; data such as the user's movement amplitude, angle, and speed obtained through the motion sensor; data such as the user's skin resistance and conductance obtained through the skin sensor; and data such as the user's blood sugar, blood pressure, and body temperature obtained through the biosensor.
  • the fields 0x00-0x7F represent motion postures, and this range can include up to 128 motion postures, for example, 0x00 represents running, 0x01 represents walking, 0x02 represents swimming, and so on; the fields 0x80-0x9F represent vital signs, and this range can include up to 32 kinds of vital signs, such as 0x80 for heart rate, 0x81 for blood pressure, 0x82 for blood sugar, and so on; the fields 0xA0-0xAF indicate basic personal information, and this range can include up to 16 kinds of basic personal information, such as 0xA0 for height, 0xA1 for age, 0xA2 for gender, and so on.
  • if the shooting device wants to obtain the motion posture, and the wearable device detects that the data obtained by the motion sensor indicates that the user is running, the wearable device writes 0x00 into the data packet to be sent, indicating that the user is running. If the shooting device wants to obtain the heart rate, and the wearable device detects that the data obtained by the heart rate sensor is 60 beats/min, the wearable device writes the data bytes indicating "heart rate is 60 beats/min" into the data packet to be sent.
  • the wearable device can write data bytes indicating "normal heart rate" into the data packet to be sent.
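  • The field encoding above can be exercised with a short sketch on the wearable-device side; the dictionaries, the optional "heart rate normal" status byte, and the normal range used here are illustrative assumptions rather than values from the embodiment.

```python
MOTION_CODES = {"running": 0x00, "walking": 0x01, "swimming": 0x02}              # 0x00-0x7F
VITAL_CODES = {"heart_rate": 0x80, "blood_pressure": 0x81, "blood_sugar": 0x82}  # 0x80-0x9F

def encode_motion(posture: str) -> bytes:
    return bytes([MOTION_CODES[posture]])

def encode_heart_rate(bpm: int, send_raw: bool = True) -> bytes:
    """Send either the raw value (0x80 followed by the bpm byte) or, when configured,
    a derived conclusion byte such as "heart rate normal" (illustrative encoding)."""
    if send_raw:
        return bytes([VITAL_CODES["heart_rate"], bpm])   # 60 bpm -> 0x80 0x3C
    status = 0x01 if 60 <= bpm <= 100 else 0x00          # illustrative "normal" range
    return bytes([VITAL_CODES["heart_rate"], status])

packet = encode_motion("running") + encode_heart_rate(60)
print(packet.hex())   # 00803c
```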
  • the wearable device parses the received request message, and according to the request message, the wearable device sends sensor data or biometric information to the photographing device according to the timestamp.
  • the time stamp is the moment when the electronic device detects a user operation that triggers the function of taking pictures, and the wearable device acquires sensor data at this moment, and sends the sensor data or biometric information to the photographing device. In this way, the wearable device can provide more accurate information to the photographing device.
  • Step S105 Fusion coding the captured picture information and the biometric information to generate a picture with the biometric information.
  • after receiving the sensor data or biometric information, the photographing device fuses and encodes the captured picture information and the biometric information to generate a picture with biometric information.
  • for the format of the picture, reference may be made to the picture format shown in Figure 19a above.
  • the biometric information corresponds to sensor data.
  • the display content of the biometric information may be a simple sentence, or may be detailed and complete information.
  • the display content, display position and display form of the biometric information in the picture are not limited in this application.
  • the electronic device receives sensor data, determines biometric information based on the sensor data, and fuses and encodes the captured picture information with the biometric information; alternatively, the electronic device directly receives the biometric information and fuses and encodes the captured picture information with the biometric information.
  • for example, if the sensor data obtained by the photographing device is the user's heart rate data, the biometric information can be whether the user's heart rate is within the normal range; if the sensor data obtained by the photographing device is the user's current exercise posture, such as walking, running, cycling, swimming, or playing badminton, the biometric information can be whether the user's movement posture is standard; if the sensor data obtained by the photographing device is information such as the user's current mood index and stress index, the biometric information can be whether the user is in a happy mood; and so on.
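  • A minimal sketch of this device-side derivation, assuming placeholder thresholds and field names (none of which are specified by the embodiment), could look as follows.

```python
def derive_biometrics(sensor: dict) -> dict:
    """Turn raw sensor readings into the kinds of conclusions described above.
    All thresholds are illustrative placeholders, not values from the embodiment."""
    result = {}
    if "heart_rate_bpm" in sensor:
        result["heart_rate_normal"] = 60 <= sensor["heart_rate_bpm"] <= 100
    if "posture_score" in sensor:
        # Whether the movement posture is standard; the score is assumed to come
        # from the wearable device's motion analysis.
        result["posture_standard"] = sensor["posture_score"] >= 0.7
    if "mood_index" in sensor:
        result["mood_happy"] = sensor["mood_index"] > 0.5
    return result

print(derive_biometrics({"heart_rate_bpm": 60, "posture_score": 0.9}))
# {'heart_rate_normal': True, 'posture_standard': True}
```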
  • the generated picture can be referred to as shown in Figure 11b above.
  • the biometric information can be displayed in a fixed position in the picture, such as the upper part of the picture.
  • the biometric information in the picture can be hidden at the beginning of viewing, and then displayed by triggering the control.
  • the biometric information is displayed by triggering the cursor 1210.
  • biometric information is viewed in the user interface 1300 in Figure 11c by triggering the icon 1205.
  • the electronic device can also obtain the identity information of the wearable device, such as the device name, device account (such as a Huawei account), and user-defined user name; the identity information and the picture information are fused and encoded to generate a picture file with identity information.
  • the electronic device can also obtain the scene information when the picture is taken.
  • the scene information includes the scene in the picture, which the electronic device comprehensively determines by identifying the scene in the image and the geographic location information when taking the picture, such as a park, a bar, a lakeside, a museum, and the like.
  • the scene information and the picture information are fused and encoded to generate a picture file with the scene information.
  • the biometric information is obtained by combining sensor data and picture information.
  • from the sensor data, information such as the user's health status, movement status, and emotional status can be obtained.
  • from the picture information, information such as the scene and the user's movement posture can be obtained.
  • the sensor data received by the photographing device includes the user's heart rate data of 60 beats/min, and the movement posture of running; the photographing device performs image analysis on the captured picture information, and concludes that the user's movement posture in the picture is running.
  • the filming location is a park.
  • the final biometric information obtained by the shooting device includes the user running in the park with a heart rate of 60 beats/min.
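  • The worked example above (heart rate 60 beats/min, running posture, park scene) can be expressed as a small fusion step; the function name and the output string format below are assumptions for illustration only.

```python
def fuse_description(sensor: dict, image_analysis: dict) -> str:
    """Combine wearable-provided sensor data with what is recognized in the picture."""
    parts = []
    posture = image_analysis.get("posture") or sensor.get("posture")
    scene = image_analysis.get("scene")
    if posture and scene:
        parts.append(f"{posture} in the {scene}")
    elif posture:
        parts.append(posture)
    if "heart_rate_bpm" in sensor:
        parts.append(f"heart rate {sensor['heart_rate_bpm']} beats/min")
    return ", ".join(parts)

# Worked example from the description: heart rate 60 beats/min, running, shot in a park.
print(fuse_description({"heart_rate_bpm": 60, "posture": "running"},
                       {"posture": "running", "scene": "park"}))
# running in the park, heart rate 60 beats/min
```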
  • information of one or more facial images is stored in the photographing device; the photographing device determines the preset facial image corresponding to the wearable device through the identity information of the wearable device, associates the preset facial image with the picture, performs similarity matching between the preset facial image and one or more of the characters in the picture, and, if the preset facial image is successfully matched with one of the characters, displays at least part of the biometric information near that character.
  • the preset facial image is associated with the wearable device; the preset facial image may be preset by the user in the shooting device, uploaded to the shooting device in the form of an image or video, or preset by the user in the wearable device, which then provides it to the photographing device; this is not limited in this application.
  • the wearable device is Huawei Watch 1, and the user name bound to the Huawei Watch 1 is Xiao A, and the photographing device determines the facial information of Xiao A.
  • the photographing device can determine the facial information of Xiao A by looking up the binding relationship between the user and the face in the address book; it can also determine the facial information of Xiao A through the binding relationship between the wearable device and the facial information (for example, in FIG. 8c, the facial information of Xiao A is uploaded through the icon 802 for associating a facial image, and the facial information of Xiao A is the facial image 7031); and so on.
  • the photographing device performs image recognition on the picture, identifies one or more faces in the picture, and performs similarity matching between Xiao A's facial information and the one or more faces in the picture. If the similarity between Xiao A's facial information and one of the faces in the picture is greater than the threshold, the biometric information is displayed near that face. Referring to FIG. 13b, the biometric information of the picture in FIG. 13b is displayed near the person, indicating the person described by the biometric information.
  • otherwise, the photographing device may output a prompt message (the first prompt), for example, "there is no user A in the picture".
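  • A hedged sketch of this matching step is shown below; it assumes some face detector/embedder supplies embeddings and bounding boxes, and the cosine-similarity threshold is a placeholder rather than a value from the embodiment.

```python
import numpy as np

SIMILARITY_THRESHOLD = 0.8   # illustrative threshold, not a value from the embodiment

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def place_label(preset_embedding, faces_in_picture, biometric_text):
    """faces_in_picture: list of (embedding, bounding_box) pairs produced by any face
    detector/embedder (placeholders, not a specific library).
    Returns the box near which the label should be drawn, or None, in which case
    the first prompt ("there is no user A in the picture") would be shown."""
    best_box, best_score = None, 0.0
    for embedding, box in faces_in_picture:
        score = cosine_similarity(preset_embedding, embedding)
        if score > best_score:
            best_box, best_score = box, score
    if best_score >= SIMILARITY_THRESHOLD:
        return best_box, biometric_text   # draw the label near this face
    return None                           # trigger the first prompt instead
```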
  • the present application further shows a method flow for photographing a picture, as shown in FIG. 20b.
  • Step S101a Establish a connection between the wearable device and the photographing device.
  • for a specific description, reference may be made to the description of step S101, which will not be repeated here.
  • Step S102a The photographing device sends a request message, where the request message is used to request acquisition of sensor data or biometric information.
  • after the wearable device and the photographing device are connected, the photographing device sends a request message to the wearable device, where the request message is used to request acquisition of sensor data or biometric information.
  • for example, when the photographing device detects a user operation for enabling the annotation mode, the photographing device sends a request message to the wearable device.
  • the photographing device detects a user operation that triggers acquisition of sensor data or biometric information, and the photographing device sends a request message to the wearable device.
  • the photographing device detects a user operation for the icon 407, and the photographing device sends a request message to the wearable device.
  • for a specific description of the request message, reference may be made to the description of step S103, which will not be repeated here.
  • Step S103a The wearable device sends sensor data or biometric information to the photographing device.
  • the specific description of this step can refer to the description of step S104. Additionally, the photographing device displays at least part of the biometric information on the photographing interface.
  • the image captured by the camera in real time is displayed in the display area 40, and the preview area 408 is displayed in the display area 40.
  • the display content in the preview area 408 is at least part of the biometric information for the user to view in real time.
  • the biometric information corresponds to sensor data.
  • the shooting interface may be referred to as a shooting preview interface, and the shooting preview interface includes a preview image collected by a camera.
  • the photographing device displays at least part of the biometric information on the preview image, and the at least part of the biometric information displayed on the preview image may be referred to as second information, and the second information corresponds to the second sensor data.
  • the information of one or more facial images and the corresponding relationship between each facial image and the wearable device are stored in the photographing device.
  • the photographing device determines a preset facial image corresponding to the wearable device from one or more facial images through the identity information of the wearable device, and matches the preset facial image with one or more characters in the picture. If the image is successfully matched with one of the characters in the picture, the photographing device displays at least part of the biometric information near the character. For example, the preview area 408 in FIG. 7b may be displayed near the matched characters.
  • the photographing device can output a prompt message (the first prompt), such as "There is no user A in the picture, please point the camera at user A".
  • Step S104a The photographing device detects a user operation that triggers the function of photographing a picture.
  • Step S105a Fusion coding the captured picture information and biometric information to generate a picture with biometric information corresponding to sensor data.
  • for a specific description of this step, reference may be made to the description of step S105.
  • in the foregoing method, the photographing device obtains sensor data or biometric information from the wearable device after detecting a user operation that triggers the picture-taking function; in the method shown in FIG. 20b, by contrast, the photographing device can obtain the sensor data or biometric information from the wearable device after the annotation mode is activated, and display at least part of the biometric information in real time on the shooting preview interface, so as to achieve a preview effect of the biometric information and improve the user experience.
  • alternatively, the shooting device can obtain sensor data or biometric information from the wearable device after starting the annotation mode and after receiving a user operation, and display at least part of the biometric information in real time on the shooting preview interface; the user can freely control the display and hiding of the biometric information, which improves the user experience.
  • the user may trigger a picture taking function on the wearable device, and the wearable device detects the user operation triggering the photographing function, and sends a picture taking instruction and sensor data (or biometric information) to the photographing device.
  • after receiving the image capture instruction and the sensor data (or biometric information), the photographing device obtains the image information collected by the current camera, fuses and encodes the captured image information with the biometric information, and generates a picture with biometric information.
  • Biometric information corresponds to sensor data. For a specific description of the sensor data and biometric information, reference may be made to the above step S104.
  • the wearable device starts the labeling mode.
  • the wearable device sends a picture-taking instruction to the photographing device, and provides sensor data or biometric information to the photographing device according to the data type selected by the user in FIG. 9c.
  • the photographing device receives the instruction to take a picture, obtains the picture information collected by the current camera, and fuses and encodes the photographed picture information with the biometric information to generate a picture with biometric information.
  • the present application further provides a method flow for shooting video. Please refer to FIG. 21 , which shows a flowchart of a method for shooting video.
  • Step S201 Establish a connection between the wearable device and the photographing device.
  • for a specific description, reference may be made to the description of step S101, which will not be repeated here.
  • Step S202 The photographing device detects a user operation that triggers the function of photographing a video.
  • the shooting device detects a user operation that triggers the shooting video function, triggers the shooting video function, and obtains the video frame information currently collected by the camera.
  • the user operation may be a touch operation, a voice operation, a hovering gesture operation, etc., which is not limited herein.
  • the photographing device detects a user operation that triggers the function of photographing a video in the annotation mode, and the specific content may refer to the aforementioned UI embodiment.
  • alternatively, the user operation may be a user operation on the wearable device: the wearable device detects a user operation that triggers the video-shooting function and sends a corresponding instruction to the shooting device through Bluetooth, and the shooting device then triggers the video-shooting function.
  • Step S203 The photographing device sends a request message, where the request message is used to request acquisition of sensor data or biometric information.
  • after the photographing device detects a user operation that triggers the video-shooting function, it sends a request message to the wearable device, where the request message is used to request acquisition of sensor data or biometric information.
  • the request message includes the data type supported by the photographing device, the data collection method, and the data collection interval.
  • the data collection method is generally continuous collection, and the data collection interval can be set to 1 second. That is, the shooting device acquires sensor data or biometric information every 1 second during the process of shooting a video.
  • the user may trigger a video capture function on a wearable device, and the wearable device detects a user operation that triggers the video capture function, and sends a video capture instruction to the capture device.
  • the shooting device receives the shooting video instruction, and sends a request message to the wearable device.
  • the user triggers the video capture function on the wearable device
  • the wearable device detects the user operation that triggers the video capture function, sends the video capture instruction to the capture device, and sends sensor data or biometric information to the capture device.
  • step S203 does not need to be performed, and the wearable device sends sensor data or biometric information to the photographing device based on the configuration on the side of the wearable device.
  • Step S204 The wearable device periodically sends sensor data or biometric information.
  • the wearable device parses the received request message, and obtains the data type, data collection method and data collection interval required by the shooting device. According to the request message, the wearable device periodically sends sensor data or biometric information to the photographing device. Exemplarily, for video shooting, the data collection method is continuous collection, and the data collection interval is set to 1 second. Then the wearable device sends sensor data or biometric information to the photographing device every 1 second. For a specific description of the sensor data and biometric information, reference may be made to the above step S104.
  • Step S205: each time the photographing device receives sensor data or biometric information, it fuses and encodes the captured video information with the sensor data or biometric information to generate image frames with biometric information corresponding to the sensor data.
  • since a video is composed of multiple image frames, each image frame generated by the photographing device has corresponding biometric information, and the biometric information of an image frame corresponds to the sensor data. Exemplarily, the photographing device receives sensor data every 1 second and generates 24 image frames per second; the biometric information of the 24 image frames within a given second is then based on the sensor data received at the end of that second.
  • the photographing device acquires sensor data once in the 5th second, and the photographing device fuses and encodes 24 frames of images from the 4th to 5th second in the video information with the sensor data to generate image frames with biometric information;
  • the shooting device obtains sensor data once in the 6th second, and the shooting device fuses and encodes 24 frames of images in the video information between the 5th second and the 6th second with the sensor data to generate image frames with biometric information; and so on.
  • the photographing device fuses and encodes the biometric information and the video information according to the timestamp.
  • the wearable device periodically sends sensor data or biometric information, and the sensor data or biometric information has a time stamp, indicating the time when the sensor data or biometric information corresponds to the video information.
  • the wearable device can provide more accurate information to the photographing device, avoiding the mismatch between the biometric information and the video content due to information transmission delay.
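  • The timing behaviour described above can be sketched as pairing each video frame with the sensor sample received at the end of its one-second interval; the function below is illustrative only, and its names are assumptions.

```python
import bisect

def annotate_frames(frame_timestamps, sensor_samples):
    """frame_timestamps: seconds since recording started, e.g. 24 per second.
    sensor_samples: (timestamp, biometric_bytes) pairs received about once per second.
    Following the example above, the frames captured within an interval are fused
    with the sample received at the end of that interval."""
    sample_times = [t for t, _ in sensor_samples]
    annotated = []
    for ft in frame_timestamps:
        i = bisect.bisect_left(sample_times, ft)   # first sample at or after the frame
        annotated.append((ft, sensor_samples[i][1] if i < len(sensor_samples) else None))
    return annotated

# The 24 frames between the 4th and 5th second are all tagged with the sample
# received at the 5th second.
frames = [4 + (k + 1) / 24 for k in range(24)]
samples = [(4.0, bytes([0x00, 0x80, 0x3C])), (5.0, bytes([0x00, 0x80, 0x3E]))]
print(annotate_frames(frames, samples)[0][1].hex())   # 00803e
```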
  • the biometric information is obtained by combining sensor data and video information.
  • from the sensor data, information such as the user's health status, motion status, and emotional status can be obtained.
  • from the video information, information such as the scene and the user's movement posture can be obtained.
  • the sensor data received by the shooting device includes the user's heart rate data of 60 beats/min, and the exercise posture of running; the shooting device performs image analysis on the captured video information, and concludes that the user's exercise posture in the video is running.
  • the filming location is a park.
  • the shooting device obtains the final biometric information of the image frames in the video, including the user running in the park and the heart rate of 60 beats/min.
  • Step S206 The photographing device detects a user operation that triggers stopping of photographing the video.
  • the shooting device detects a user operation that triggers the stop of shooting the video, and stops shooting the video.
  • the user operation may be a touch operation, a voice operation, a hovering gesture operation, etc., which is not limited herein.
  • the shooting device detects a user operation that triggers the function of stopping video shooting in the annotation mode, and the specific content may refer to the aforementioned UI embodiment.
  • the user may trigger the wearable device to stop shooting video, and the wearable device detects a user operation that triggers the stop of shooting video, and sends an instruction to stop shooting video to the shooting device.
  • the shooting device receives the instruction to stop shooting video, and stops shooting video.
  • Step S207 The photographing device sends a request message to the wearable device for stopping acquiring sensor data or biometric information.
  • after the photographing device detects a user operation that triggers stopping the video shooting, it sends a request message to the wearable device, where the request message is used to stop acquiring sensor data or biometric information.
  • the wearable device receives the request message and stops sending sensor data or biometric information to the photographing device.
  • the user may trigger the stop of shooting video on the wearable device
  • the wearable device detects the user operation that triggers stopping the video shooting, sends the instruction to stop shooting video to the shooting device, and stops sending sensor data or biometric information to the shooting device.
  • Step S208 The photographing device generates and saves a video.
  • after the shooting device detects a user operation that triggers stopping the video shooting, it generates and saves the shot video.
  • for the format of the video, reference may be made to the video format shown in FIG. 19b.
  • the video contains biometric information, and the biometric information corresponds to the sensor data.
  • the generated video can refer to the above-mentioned Figure 16a or Figure 16b.
  • the video also includes biometric information, and the display content of the biometric information can be a simple sentence, or it can be detailed and complete information.
  • the display content, display position and display form of the biometric information are not limited in this application.
  • the electronic device can also obtain the identity information of the wearable device, such as the device name, device account (such as a Huawei account), and custom user name; the identity information and the video frame information are fused and encoded to generate a video file with identity information.
  • the electronic device can also obtain scene information when the video is shot.
  • the scene information includes the scene in the video frame, which the electronic device comprehensively determines by identifying the scene in the video frame and the geographic location information when shooting the video, such as a park, a bar, a lakeside, a museum, and the like.
  • the scene information and video frame information are fused and encoded to generate a video file with scene information.
  • information of one or more facial images is stored in the photographing device; the photographing device determines the preset facial image corresponding to the wearable device through the identity information of the wearable device, and associates the preset facial image with the video.
  • similarity matching is performed between the preset facial image and one or more characters in the image frames, and if the preset facial image is successfully matched with one of the characters, the photographing device displays at least part of the biometric information near that character. Referring to Fig. 18a or Fig. 18b above, the biometric information of the video in Fig. 18a or Fig. 18b is displayed above the character, indicating the character described by the biometric information.
  • the preset facial image is associated with the wearable device, wherein the preset facial image can be preset by the user in the shooting device, or uploaded to the shooting device in the form of an image or video, or the user can It is preset in the wearable device, and the wearable device is then provided to the photographing device, which is not limited in this application.
  • the photographing device can obtain sensor data or biometric information from the wearable device after starting the annotation mode, and display at least part of the biometric information in real time on the preview interface of the shooting, so as to achieve the preview effect of the biometric information and improve the user experience.
  • alternatively, the shooting device can obtain sensor data or biometric information from the wearable device after starting the annotation mode and after receiving a user operation, and display at least part of the biometric information in real time on the shooting preview interface; the user can freely control the display and hiding of the biometric information, which improves the user experience.
  • the biometric information may be displayed on the shooting interface in real time for the user to view in real time.
  • the present application also provides a shooting system, which can connect one shooting device with multiple wearable devices of the same type; the shooting device obtains the sensor data or biometric information of the multiple wearable devices of the same type, so as to perform feature recognition for multiple users in the captured pictures/videos.
  • the photographing system shown in FIG. 22 includes a photographing device 101 , a plurality of wearable devices 201 of the same type, and a third device 301 .
  • the photographing device 101 and multiple wearable devices 201 of the same type establish connections through the third device 301 . in,
  • the photographing device 101 is an electronic device with a camera function, such as a mobile phone, a tablet, a camera, and the like.
  • the wearable device 201 includes wireless earphones, smart watches, smart bracelets, smart glasses, electronic clothing, electronic bracelets, electronic necklaces, electronic accessories, electronic tattoos, smart mirrors, and the like.
  • the third device 301 may be a relay device, such as a Bluetooth repeater or hub; the Bluetooth repeater is connected to the photographing device 101 and the multiple wearable devices 201 of the same type through Bluetooth. The third device 301 can also be a cloud server that is connected to the photographing device 101 and the multiple wearable devices 201 through a mobile communication module.
  • the photographing device 101 can establish connections with multiple wearable devices 201 of the same type.
  • the wearable device and the photographing device may also establish a connection with the third device 301 through other wireless communication methods such as WiFi.
  • the third device 301 may also be a router with data processing and computing functions.
  • FIG. 23 shows a flowchart of a method for photographing a picture.
  • the devices involved in the flow chart of the method include n wearable devices, a photographing device and a third device, where n is a positive integer.
  • the method includes:
  • the connection method includes, but is not limited to, wireless communication methods such as Bluetooth (BT), near field communication (NFC), wireless fidelity (WiFi), WiFi direct connection, and a network.
  • pairing using Bluetooth will be described as an example.
  • the n wearable devices and the photographing device obtain each other's connection information (such as hardware information, interface information, identity information, etc.) through the third device.
  • after the connection is established, the camera device can obtain sensor data or biometric information of the wearable devices, and the wearable devices can synchronize some functions of the camera device, for example, actively trigger the camera device to turn on the shooting function, view the pictures/video files in the camera device, and so on.
  • Step S302 The photographing device detects a user operation that triggers the function of photographing a picture.
  • for a specific description of this step, reference may be made to the description of step S103.
  • the difference from step S103 is that the photographing device sends a request message to the third device.
  • Step S307 The photographing device fuses and encodes the photographed picture information and the biometric information to generate a picture with the biometric information, and the biometric information corresponds to the sensor data.
  • Figure 14b includes two pieces of biometric information and two characters, and each piece of biometric information is displayed near a different character (the biometric information of Xiao A running is displayed above the person wearing the number 12 clothes, and the biometric information of Xiao B running is displayed above the person wearing the number 8 clothes); the character described by each piece of biometric information is indicated by its display position.
  • the preset facial image is associated with the wearable device; the preset facial image may be preset by the user in the shooting device, uploaded to the shooting device in the form of an image or video, or preset by the user in the wearable device, which then provides it to the photographing device; this is not limited in this application.
  • the photographing device can obtain sensor data or biometric information from the wearable device after starting the annotation mode, and display at least part of the biometric information in real time on the preview interface of the shooting, so as to achieve the preview effect of the biometric information and improve the user experience.
  • alternatively, the shooting device can obtain sensor data or biometric information from the wearable device after starting the annotation mode and after receiving a user operation, and display at least part of the biometric information in real time on the shooting preview interface; the user can freely control the display and hiding of the biometric information, which improves the user experience.
  • the photographing device and the n wearable devices are connected through the third device; when the photographing device detects a user operation that triggers the picture-taking function in the labeling mode, it obtains the picture information and, at the same time, sends a request message to the wearable devices through the third device.
  • the photographing device receives the sensor data or biometric information of the n wearable devices, and fuses and encodes the n pieces of sensor data or biometric information with the picture information to generate a picture with biometric information, where the biometric information corresponds to the n pieces of sensor data.
  • for a specific description of this step, reference may be made to the description of step S203; different from step S203, the photographing device sends the request message to the third device.
  • the specific description of this step can refer to the description of step S204.
  • the difference from step S204 is that n wearable devices send sensor data or biometric information to the third device, and the sensor data or biometric information also includes the identity information of the wearable device.
  • the identity information can uniquely represent a wearable device.
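  • A minimal sketch of the third device's role, assuming an in-memory relay and illustrative device identifiers, is shown below; the class and field names are not from the embodiment.

```python
from dataclasses import dataclass

@dataclass
class TaggedSample:
    device_id: str     # identity information that uniquely identifies a wearable device
    payload: bytes     # sensor data or biometric information bytes

class RelayDevice:
    """Minimal third-device sketch: fan the request out to every wearable device and
    fan the identity-tagged replies back in to the photographing device."""
    def __init__(self, wearables):
        self.wearables = wearables   # dict: device_id -> callable returning payload bytes

    def forward_request(self, request):
        replies = []
        for device_id, read_sensor in self.wearables.items():
            replies.append(TaggedSample(device_id, read_sensor(request)))
        return replies               # delivered back to the photographing device

relay = RelayDevice({
    "watch-A": lambda req: bytes([0x00, 0x80, 0x3C]),   # Xiao A: running, 60 bpm
    "watch-B": lambda req: bytes([0x00, 0x80, 0x46]),   # Xiao B: running, 70 bpm
})
for sample in relay.forward_request({"data_types": ["motion_posture", "heart_rate"]}):
    print(sample.device_id, sample.payload.hex())
```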
  • Step S409 The photographing device sends a request message to the third device for stopping acquiring sensor data or biometric information.
  • the photographing device sends a request message to the third device.
  • Step S410 The third device forwards the request message to the n wearable devices respectively.
  • the third device After receiving the request message sent by the photographing device, the third device forwards the request message to the n wearable devices.
  • the request message is used to stop acquiring sensor data or biometric information, and the wearable device receives the request message and stops sending sensor data or biometric information.
  • Step S411 The photographing device generates and saves a video.
  • the specific description of this step can refer to the description of step S208.
  • the generated video can refer to the above-mentioned Figures 18a and 18b.
  • the display content of the biometric information can be a simple sentence or a detailed and complete information.
  • the display content, display position and display form of the biometric information are not limited in this application.
  • the method provided by this application completes the feature recognition of image frames in the video during video generation, without requiring subsequent feature recognition and extraction on the saved video, which saves hardware resources; and, combined with the sensor data of the wearable device, more accurate and richer feature recognition can be performed on the video.
  • the second information mentioned in the embodiment of the present application may be biometric information displayed on the preview interface.
  • Scenario 1: When the first user wants to use a mobile phone to shoot a dance video of the second user, the first user can select the labeling mode to shoot the second user.
  • The smart watch is worn on the second user's hand, and the mobile phone is connected with the second user's smart watch.
  • The first user selects the labeling mode and uses the mobile phone to shoot the second user; the pictures/videos captured at this time carry biometric information.
  • The biometric information can indicate the health status of the second user during the dance, from which the second user's degree of fatigue and physical stress can be further determined; it can also indicate the second user's movement posture during the dance, from which it can be further determined whether the posture is standard; and so on.
  • Before shooting, the user can configure the mobile phone or smart watch as needed, and the configuration determines the content of the biometric information. For example, if the user enables options such as heart rate information, exercise state information, and emotional state information on the mobile phone, the biometric information of the captured dance video includes the second user's heart rate information, exercise state information, emotional state information, and the like.
  • Scenario 2: When the first user wants to use the mobile phone to take a picture of the second user playing in the park, the first user can select the labeling mode to take the picture of the second user.
  • The smart watch is worn on the second user's hand, and the mobile phone is connected with the second user's smart watch.
  • The first user selects the labeling mode and uses the mobile phone to shoot the second user; the pictures/videos captured at this time carry biometric information.
  • The biometric information may indicate the second user's movement posture and emotional state (for example, happy or excited), and may also indicate the current environmental conditions (for example, air quality, air temperature, and humidity).
  • Because the biometric information relates to the second user, it can be displayed near the second user in the picture, so that the biometric information is matched with the second user in the picture.
  • Scenario 3: Shooting training pictures/videos of multiple users in professional training venues or fitness venues.
  • Multiple smart watches are respectively worn on the hands of the multiple users, the smart watches are connected with a third device, and the shooting device is also connected with the third device. After the connections are established, the shooting device is used to shoot the multiple users, and the pictures/videos obtained at this time carry biometric information.
  • The biometric information can indicate each user's heart rate, blood pressure, blood sugar, exercise posture, and other information, from which it can be further determined whether the exercise posture is standard and whether the user is ready for a higher training intensity. It can be used for a coach's guidance on movements and for monitoring training fatigue.
  • Embodiments of the present application also provide a computer-readable storage medium.
  • The methods described in the foregoing method embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, the functions may be stored on, or transmitted as one or more instructions or code over, a computer-readable medium.
  • Computer-readable media include both computer storage media and communication media, and also include any medium that can transfer a computer program from one place to another.
  • A storage medium may be any available medium that can be accessed by a computer.
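As an illustrative sketch of the flow described in the bullets above (the request message relayed through the third device, the per-wearable identity information, and the fusion of the n pieces of sensor data or biometric information with the picture information), the following Python code models the interaction. All class, field, and function names (SensorReading, PictureInfo, ThirdDeviceRelay, fuse) are assumptions introduced for this sketch and are not taken from the embodiments.

```python
# Sketch only: names below are illustrative assumptions, not from the disclosure.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class SensorReading:
    """Sensor data reported by one wearable device, tagged with its identity."""
    wearable_id: str        # uniquely identifies the wearable device
    heart_rate: int         # example value derived from the sensor data
    motion_posture: str


@dataclass
class PictureInfo:
    """Picture information captured by the photographing device."""
    file_name: str
    biometrics: List[SensorReading] = field(default_factory=list)


class ThirdDeviceRelay:
    """Models the third device sitting between the photographing device and n wearables."""

    def __init__(self, wearables: Dict[str, List[SensorReading]]):
        self._wearables = wearables     # per-wearable streams of readings
        self._streaming = True

    def forward_request(self) -> List[SensorReading]:
        """Forward the request message and collect the latest reading of each wearable."""
        return [readings[-1] for readings in self._wearables.values() if readings]

    def forward_stop_request(self) -> None:
        """Forward the stop request so that each wearable stops sending data."""
        self._streaming = False


def fuse(picture: PictureInfo, readings: List[SensorReading]) -> PictureInfo:
    """Fuse-encode the n readings with the picture information."""
    picture.biometrics.extend(readings)
    return picture


if __name__ == "__main__":
    relay = ThirdDeviceRelay({
        "watch-1": [SensorReading("watch-1", 92, "jumping")],
        "watch-2": [SensorReading("watch-2", 101, "squatting")],
    })
    photo = fuse(PictureInfo("training_0001.jpg"), relay.forward_request())
    relay.forward_stop_request()
    print(photo)
```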

Abstract

Disclosed in the present application are a photographing method and a photographing system. The photographing method is applied to the photographing system. The photographing system comprises an electronic device and a first wearable device. A connection is established between the electronic device and the first wearable device. The electronic device receives a first operation, and in response to the first operation, obtains a multimedia file acquired by a camera, the multimedia file comprising an image file, a video file, and the like; the first wearable device detects first sensor data by means of at least one sensor; the electronic device obtains first information, which corresponds to the first sensor data. The electronic device stores the multimedia file, which is associated with the first information. The first information comprises biometric information of a user. The electronic device associates the biometric information with the image/video information and performs more precise feature identification on the image/video, facilitating subsequent classification of the stored images/videos according to the biometric information.

Description

一种拍摄方法和拍摄系统A shooting method and shooting system
本申请要求于2020年8月19日提交中国专利局、申请号为202010839365.9、申请名称为“一种拍摄方法和拍摄系统”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。This application claims the priority of the Chinese patent application with the application number 202010839365.9 and the application title "A Shooting Method and Shooting System" filed with the China Patent Office on August 19, 2020, the entire contents of which are incorporated into this application by reference .
技术领域technical field
本申请涉及电子技术领域,尤其涉及一种拍摄方法和拍摄系统。The present application relates to the field of electronic technology, and in particular, to a photographing method and a photographing system.
背景技术Background technique
网络数字信息化时代,消费者的资源(如图片、视频)呈指数级增长,对资源的存储和使用提出了挑战。通常会涉及对指定类型资源的搜索。例如,通过搜索引擎,根据预设特征搜索图片。又例如,用户对其电子设备上保存的资源库进行查找,达到浏览和分享的目的。如:通过手机相册搜索指定特征的图片,小孩跳舞、踢球图片等等。In the era of network digital information, consumers' resources (such as pictures and videos) are increasing exponentially, which poses challenges to the storage and use of resources. Usually involves a search for a resource of a specified type. For example, through a search engine, search for pictures based on preset characteristics. For another example, the user searches the resource library stored on the electronic device to achieve the purpose of browsing and sharing. Such as: searching pictures of specified characteristics through mobile phone albums, pictures of children dancing, playing football, etc.
然而,特征的提取和标注依赖大量的硬件资源,尤其当资源量较大时,计算时间也呈几何倍数增长;并且现有的图像识别技术仅能对一些外在的特征进行识别和提取,还没有技术公开如何根据人物的心理、生理特征对图像分类。However, feature extraction and labeling depend on a lot of hardware resources, especially when the amount of resources is large, the computing time also increases exponentially; and the existing image recognition technology can only identify and extract some external features, and also No technology discloses how to classify images according to the psychological and physiological characteristics of characters.
发明内容SUMMARY OF THE INVENTION
本申请实施例提供了一种拍摄方法和拍摄系统,实现了在图片/视频生成的过程中将生物特征信息与图片/视频的信息相关联,对图片/视频做出更精准的特征识别。The embodiments of the present application provide a shooting method and a shooting system, which realize the association of biometric information with the information of the pictures/videos in the process of generating pictures/videos, and make more accurate feature recognition for the pictures/videos.
第一方面,本申请提供了一种拍摄系统,其特征在于,包括:电子设备和第一穿戴设备,电子设备包括摄像头;其中,电子设备,用于与第一穿戴设备建立连接;电子设备,还用于接收第一操作;电子设备,还用于响应于第一操作,获取摄像头采集的多媒体文件;第一穿戴设备,用于通过至少一个传感器检测第一传感器数据;电子设备,还用于获取第一信息,第一信息和第一传感器数据对应;电子设备,还用于保存多媒体文件,多媒体文件和第一信息关联。其中,多媒体文件包括图片文件、视频文件等。第一操作可以包括但不限于如点击、双击、长按、滑动等操作,第一操作用于触发电子设备拍摄图片/视频。电子设备和第一穿戴设备建立连接;电子设备响应于第一操作,通过摄像头采集图片/视频。电子设备获取第一穿戴设备通过传感器检测的用户的生物特征信息(例如心率、血压、运动姿势等),生物特征信息又称为第一信息。电子设备将该生物特征信息与拍摄的图片/视频相关联,或者将该生物特征信息与拍摄的图片/视频中的人物相关联,该人物与该生物特征信息是对应的。电子设备生成包括生物特征信息的图片/视频,该生物特征信息指示了用户的生物特征。通过本申请提供的一种拍摄系统,在图片/视频生成的过程中,电子设备与第一穿戴设备进行信息交互,电子设备获取生物特征信息,电子设备将生物特征信息与图片/视频的信息相关联,对图片/视频做出了更精准的特征识别,电子设备保存带有生物特征信息的图片/视频,方便后续根据生物特征信息对存储的图片/视频进行分类。In a first aspect, the present application provides a shooting system, which is characterized by comprising: an electronic device and a first wearable device, and the electronic device includes a camera; wherein the electronic device is used to establish a connection with the first wearable device; the electronic device, The electronic device is also used for receiving the first operation; the electronic device is also used for acquiring the multimedia file collected by the camera in response to the first operation; the first wearable device is used for detecting the first sensor data through at least one sensor; the electronic device is also used for The first information is acquired, and the first information corresponds to the first sensor data; the electronic device is further configured to save a multimedia file, and the multimedia file is associated with the first information. The multimedia files include picture files, video files, and the like. The first operation may include, but is not limited to, operations such as clicking, double-clicking, long-pressing, sliding, and the like, and the first operation is used to trigger the electronic device to take pictures/videos. The electronic device establishes a connection with the first wearable device; the electronic device captures pictures/videos through the camera in response to the first operation. The electronic device acquires the user's biometric information (eg, heart rate, blood pressure, exercise posture, etc.) detected by the first wearable device through the sensor, and the biometric information is also referred to as first information. The electronic device associates the biometric information with the photographed picture/video, or associates the biometric information with a person in the photographed picture/video, and the person corresponds to the biometric information. The electronic device generates a picture/video including biometric information indicative of the user's biometrics. Through the shooting system provided by the present application, in the process of picture/video generation, the electronic device and the first wearable device perform information interaction, the electronic device obtains biometric information, and the electronic device correlates the biometric information with the information of the picture/video The electronic device saves the pictures/videos with biometric information, which facilitates the subsequent classification of the stored pictures/videos according to the biometric information.
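For illustration, the interaction described in this aspect (establishing a connection, receiving the first operation, acquiring the multimedia file, obtaining the first information, and saving them in association) can be sketched in Python as follows; the names Phone, Wearable, establish_connection, and first_operation are assumptions made for the sketch only and do not come from the embodiments:

```python
# Sketch only: Phone, Wearable and the method names are illustrative assumptions.
from dataclasses import dataclass
from typing import Optional, Tuple


@dataclass
class Wearable:
    """First wearable device: detects first sensor data and derives first information."""
    heart_rate_bpm: int = 96

    def detect_first_sensor_data(self) -> dict:
        return {"heart_rate": self.heart_rate_bpm}

    def first_information(self) -> dict:
        # In one implementation the wearable computes the biometric information
        # itself; in another the electronic device does so from the raw data.
        data = self.detect_first_sensor_data()
        return {"health_state": "normal" if data["heart_rate"] < 120 else "strained"}


@dataclass
class Phone:
    """Electronic device that includes a camera."""
    connected: Optional[Wearable] = None

    def establish_connection(self, wearable: Wearable) -> None:
        self.connected = wearable

    def first_operation(self) -> Tuple[bytes, dict]:
        """For example, a tap on the shoot button: capture a multimedia file,
        obtain the first information, and return them for associated saving."""
        multimedia_file = b"<camera frame bytes>"
        first_info = self.connected.first_information()
        return multimedia_file, first_info


if __name__ == "__main__":
    phone = Phone()
    phone.establish_connection(Wearable())
    file_bytes, info = phone.first_operation()
    print(info)
```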
在一种可能的实现方式中,第一穿戴设备,还用于根据第一传感器数据确定出第一信息;向电子设备发送第一信息。第一信息为生物特征信息。这种方式描述了电子设备获取第一信息的一种方式,第一穿戴设备通过一个或多个传感器检测传感器数据(第一传感器数据),第一穿戴设备基于该第一传感器数据确定第一信息,第一穿戴设备将该第一信息发送给电子设备,电子设备获取到第一信息。第一穿戴设备对第一传感器数据进行处理,省去了电子设 备处理第一传感器数据的时间和资源。In a possible implementation manner, the first wearable device is further configured to determine the first information according to the first sensor data; and send the first information to the electronic device. The first information is biometric information. This approach describes a way for the electronic device to acquire the first information. The first wearable device detects sensor data (first sensor data) through one or more sensors, and the first wearable device determines the first information based on the first sensor data. , the first wearable device sends the first information to the electronic device, and the electronic device obtains the first information. The first wearable device processes the first sensor data, which saves time and resources for the electronic device to process the first sensor data.
在一种可能的实现方式中,第一穿戴设备,还用于向电子设备发送第一传感器数据;电子设备,还用于根据第一传感器数据确定出第一信息。第一信息为生物特征信息。这种方式描述了电子设备获取第一信息的又一种方式,第一穿戴设备通过一个或多个传感器检测传感器数据(第一传感器数据),将该第一传感器数据发送给电子设备,电子设备基于获取的第一传感器数据确定生物特征信息。In a possible implementation manner, the first wearable device is further configured to send the first sensor data to the electronic device; and the electronic device is further configured to determine the first information according to the first sensor data. The first information is biometric information. This method describes another method for the electronic device to obtain the first information. The first wearable device detects sensor data (first sensor data) through one or more sensors, and sends the first sensor data to the electronic device. The electronic device Biometric information is determined based on the acquired first sensor data.
在一种可能的实现方式中,电子设备,具体用于:响应于第一操作,获取第一信息,第一信息和第一传感器数据对应。这种方式描述了电子设备获取第一信息的时机。第一操作包括触发电子设备拍摄图片/视频的用户操作。在电子设备接收到第一操作后,电子设备基于该第一操作,获取到第一信息。电子设备对第一传感器数据进行处理,省去了第一穿戴设备处理第一传感器数据的时间和资源。In a possible implementation manner, the electronic device is specifically configured to: in response to the first operation, acquire first information, where the first information corresponds to the first sensor data. This way describes the timing when the electronic device obtains the first information. The first operation includes a user operation that triggers the electronic device to take pictures/videos. After the electronic device receives the first operation, the electronic device acquires the first information based on the first operation. The electronic device processes the first sensor data, which saves time and resources for the first wearable device to process the first sensor data.
在一种可能的实现方式中,电子设备还用于:响应于第一操作,向第一穿戴设备发送第一请求消息;第一穿戴设备,具体用于:响应于第一请求消息,向电子设备发送第一信息。这种方式描述了电子设备响应于第一操作,获取第一信息的一种方式。在电子设备接收到第一操作后,电子设备基于该第一操作,向第一穿戴设备请求获取生物特征信息(第一信息),第一穿戴设备将该第一信息发送给电子设备,电子设备获取到第一信息。In a possible implementation manner, the electronic device is further configured to: in response to the first operation, send a first request message to the first wearable device; the first wearable device is specifically configured to: in response to the first request message, send a first request message to the electronic device The device sends the first information. This manner describes a manner in which the electronic device acquires the first information in response to the first operation. After the electronic device receives the first operation, the electronic device requests the first wearable device to obtain biometric information (first information) based on the first operation, the first wearable device sends the first information to the electronic device, and the electronic device Obtain the first information.
在一种可能的实现方式中,电子设备还用于:响应于第一操作,向第一穿戴设备发送第一请求消息;第一穿戴设备,具体用于:响应于第一请求消息,向电子设备发送第一传感器数据。这种方式描述了电子设备响应于第一操作,获取第一信息的又一种方式。在电子设备接收到第一操作后,电子设备基于该第一操作,向第一穿戴设备请求获取传感器数据(第一传感器数据),电子设备基于获取的第一传感器数据确定生物特征信息(第一信息)。In a possible implementation manner, the electronic device is further configured to: in response to the first operation, send a first request message to the first wearable device; the first wearable device is specifically configured to: in response to the first request message, send a first request message to the electronic device The device sends first sensor data. This manner describes another manner in which the electronic device acquires the first information in response to the first operation. After the electronic device receives the first operation, the electronic device requests the first wearable device to acquire sensor data (first sensor data) based on the first operation, and the electronic device determines biometric information (first sensor data) based on the acquired first sensor data information).
在一种可能的实现方式中,电子设备,还用于:显示拍摄预览界面,拍摄预览界面中包括拍摄按钮,第一操作包括作用于拍摄按钮的输入操作。这里提供了一种应用场景,电子设备在拍摄预览界面接收到第一操作,触发电子设备拍摄图片/视频。In a possible implementation manner, the electronic device is further configured to: display a shooting preview interface, where the shooting preview interface includes a shooting button, and the first operation includes an input operation acting on the shooting button. An application scenario is provided here. The electronic device receives the first operation on the shooting preview interface, and triggers the electronic device to shoot pictures/videos.
在一种可能的实现方式中,多媒体文件的属性信息中包括第一信息。电子设备保存多媒体文件,将该多媒体文件和第一信息进行融合编码,用户可以通过查看该多媒体文件的属性信息查看到第一信息。In a possible implementation manner, the attribute information of the multimedia file includes the first information. The electronic device saves the multimedia file, performs fusion coding on the multimedia file and the first information, and the user can check the first information by checking the attribute information of the multimedia file.
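A minimal sketch of this behavior, assuming the attribute information is written as a JSON sidecar next to the multimedia file (an actual device might instead write EXIF or container metadata, which is not shown here):

```python
# Sketch only: a JSON sidecar stands in for the multimedia file's attribute information.
import json
from pathlib import Path


def save_with_first_information(image_bytes: bytes, first_info: dict,
                                stem: str, out_dir: str = ".") -> Path:
    """Save the multimedia file together with its associated first information."""
    out = Path(out_dir)
    image_path = out / f"{stem}.jpg"
    attr_path = out / f"{stem}.attributes.json"
    image_path.write_bytes(image_bytes)
    # The biometric (first) information becomes part of the file's attribute
    # information, so the user can view it when inspecting the file details.
    attr_path.write_text(json.dumps({"first_information": first_info}, indent=2),
                         encoding="utf-8")
    return image_path


if __name__ == "__main__":
    save_with_first_information(b"\xff\xd8 ... jpeg payload ... \xff\xd9",
                                {"heart_rate": 98, "emotion": "happy"},
                                stem="dance_0001")
```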
在一种可能的实现方式中,电子设备,用于与第一穿戴设备建立连接包括:响应于电子设备进入预设拍摄模式,电子设备与第一穿戴设备建立连接。这种方式描述了电子设备与第一穿戴设备建立连接的时机,在预设拍摄模式下,电子设备显示拍摄预览界面。当电子设备检测到进入预设拍摄模式的用户操作时,电子设备自动开启蓝牙,与第一穿戴设备自动建立蓝牙连接。In a possible implementation manner, the electronic device for establishing a connection with the first wearable device includes: in response to the electronic device entering a preset shooting mode, establishing a connection between the electronic device and the first wearable device. This method describes the timing of establishing the connection between the electronic device and the first wearable device. In the preset shooting mode, the electronic device displays a shooting preview interface. When the electronic device detects a user operation entering the preset shooting mode, the electronic device automatically turns on Bluetooth, and automatically establishes a Bluetooth connection with the first wearable device.
在一种可能的实现方式中,第一穿戴设备,还用于:接收第二操作;第一穿戴设备,还用于响应于第二操作,指示电子设备开启摄像头;第一电子设备,还用于显示拍摄预览界面,拍摄预览界面显示摄像头采集的预览图像;第一操作包括作用于拍摄预览界面的操作。这种方式描述了在第一穿戴设备侧触发拍摄图片/视频的一种方式。其中,第二操作可以包括但不限于如点击、双击、长按、滑动等操作,第二操作作用在第一穿戴设备上,用于触发第一穿戴设备指示电子设备开启摄像头,从而拍摄图片/视频。In a possible implementation manner, the first wearable device is further configured to: receive the second operation; the first wearable device is further configured to instruct the electronic device to turn on the camera in response to the second operation; the first electronic device is further configured to use For displaying a shooting preview interface, the shooting preview interface displays a preview image collected by the camera; the first operation includes an operation acting on the shooting preview interface. This way describes a way of triggering the shooting of pictures/videos on the side of the first wearable device. Wherein, the second operation may include, but is not limited to, operations such as clicking, double-clicking, long-pressing, sliding, etc. The second operation acts on the first wearable device and is used to trigger the first wearable device to instruct the electronic device to turn on the camera, thereby taking pictures/ video.
在一种可能的实现方式中,电子设备,还用于显示多媒体文件和至少部分第一信息。用户可以进入图库查看多媒体文件,电子设备显示多媒体文件和至少部分第一信息,第一信息指示了用户的生物特征信息。可选的,电子设备的显示界面上显示多媒体文件和部分第一信 息,用户可以通过查看详细信息等方式查看全部第一信息。In a possible implementation manner, the electronic device is further configured to display the multimedia file and at least part of the first information. The user can enter the gallery to view the multimedia file, and the electronic device displays the multimedia file and at least part of the first information, where the first information indicates the biometric information of the user. Optionally, the display interface of the electronic device displays the multimedia file and part of the first information, and the user can view all the first information by viewing the detailed information or the like.
在一种可能的实现方式中,电子设备,还用于显示多媒体文件和至少部分第一信息,包括:电子设备,还用于响应于多媒体文件中包括预设面部图像,显示多媒体文件和至少部分第一信息;预设面部图像与第一穿戴设备对应。电子设备中存有一个或多个面部图像的信息,电子设备通过第一穿戴设备的身份信息确定出第一穿戴设备对应的预设面部图像,将预设面部图像与多媒体文件中的一个或多个人物进行相似度匹配,若预设面部图像与其中一个人物匹配成功,则表示多媒体文件中的人物为第一穿戴设备的用户,这时电子设备才显示至少部分第一信息,否则不显示,保护了用户的隐私性。In a possible implementation manner, the electronic device is further configured to display the multimedia file and at least part of the first information, including: the electronic device is further configured to display the multimedia file and at least part of the first information in response to the preset facial image included in the multimedia file The first information; the preset facial image corresponds to the first wearable device. Information of one or more facial images is stored in the electronic device, the electronic device determines the preset facial image corresponding to the first wearable device through the identity information of the first wearable device, and associates the preset facial image with one or more of the multimedia files. The personal characters are matched for similarity. If the preset facial image is successfully matched with one of the characters, it means that the character in the multimedia file is the user of the first wearable device. At this time, the electronic device displays at least part of the first information, otherwise it will not be displayed. User privacy is protected.
其中,预设面部图像与第一穿戴设备具有关联关系,其中面部信息可以是用户预设在电子设备中,也可以是以图像或视频的方式上传到电子设备中,也可以是用户预设在第一穿戴设备中,第一穿戴设备再提供给电子设备,本申请不作限制。The preset facial image is associated with the first wearable device, and the facial information may be preset by the user in the electronic device, or uploaded to the electronic device in the form of an image or video, or may be preset by the user in the electronic device. In the first wearable device, the first wearable device is then provided to the electronic device, which is not limited in this application.
在一种可能的实现方式中,电子设备,还用于显示多媒体文件和至少部分第一信息,包括:电子设备,还用于响应于多媒体文件中包括第一面部图像和第二面部图像,且第一面部图像与预设面部图像匹配,在多媒体文件的第一区域显示至少部分第一信息;其中,预设面部图像与第一穿戴设备对应;第一区域与第一面部图像的显示区域的距离,小于第一区域与第二面部图像的显示区域的距离。这种方式描述了确定至少部分第一信息在多媒体文件上的显示位置的一种方式。将预设面部图像与多媒体文件中的一个或多个人物进行相似度匹配,若预设面部图像与其中一个人物匹配成功,则拍摄设备将至少部分第一信息显示在该人物附近。在显示效果上加强了用户和第一信息的对应性,可以直观的看出第一信息对应的用户,能够提升用户体验。In a possible implementation manner, the electronic device is further configured to display the multimedia file and at least part of the first information, including: the electronic device is further configured to respond to the multimedia file including the first facial image and the second facial image, And the first facial image matches the preset facial image, and at least part of the first information is displayed in the first area of the multimedia file; wherein, the preset facial image corresponds to the first wearable device; The distance of the display area is smaller than the distance between the first area and the display area of the second facial image. This approach describes a way of determining the display position of at least part of the first information on the multimedia file. Similarity matching is performed between the preset facial image and one or more characters in the multimedia file, and if the preset facial image is successfully matched with one of the characters, the photographing device displays at least part of the first information near the character. In terms of display effect, the correspondence between the user and the first information is strengthened, and the user corresponding to the first information can be seen intuitively, which can improve the user experience.
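The placement rule described here (the first area is closer to the display area of the matched facial image than to that of any other facial image) can be sketched as follows; the Box structure and the face coordinates are assumed inputs from an unspecified face detector:

```python
# Sketch only: Box and the coordinates are assumed outputs of some face detector.
import math
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class Box:
    x: float
    y: float
    w: float
    h: float

    def center(self) -> Tuple[float, float]:
        return (self.x + self.w / 2, self.y + self.h / 2)


def distance(point: Tuple[float, float], box: Box) -> float:
    bx, by = box.center()
    return math.hypot(point[0] - bx, point[1] - by)


def place_label(matched: Box, others: List[Box], offset: float = 10.0) -> Tuple[float, float]:
    """Pick a display position for the biometric label next to the matched face."""
    cx, cy = matched.center()
    candidate = (matched.x + matched.w + offset, cy)   # just to the right of the face
    # Keep the candidate only if it is nearer to the matched face than to every
    # other face; otherwise fall back to a point directly above the matched face.
    if all(distance(candidate, matched) < distance(candidate, o) for o in others):
        return candidate
    return (cx, matched.y - offset)


if __name__ == "__main__":
    first_face = Box(100, 120, 80, 80)    # face matching the preset facial image
    second_face = Box(400, 110, 90, 90)   # another face in the same frame
    print(place_label(first_face, [second_face]))
```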
在一种可能的实现方式中,电子设备,还用于:在接收第一操作之前,显示拍摄预览界面,拍摄预览界面上包括摄像头采集的预览图像;电子设备,还用于在预览图像上显示至少部分第二信息,第二信息和第一穿戴设备检测出的第二传感器数据对应。第二信息为生物特征信息。电子设备在预览图像上显示至少部分第二信息,实现了在预览界面上实时显示生物特征信息,用户可以实时查看到生物特征信息。即使得电子设备所获取的图像或者视频文件中可以包括更多的生物特征信息,且生物特征信息来源于不同的传感器所采集的传感器数据。In a possible implementation manner, the electronic device is further configured to: before receiving the first operation, display a shooting preview interface, where the shooting preview interface includes a preview image captured by a camera; the electronic device is further configured to display on the preview image At least part of the second information corresponds to the second sensor data detected by the first wearable device. The second information is biometric information. The electronic device displays at least part of the second information on the preview image, so that the biometric information is displayed on the preview interface in real time, and the user can view the biometric information in real time. Even more biometric information may be included in the image or video file acquired by the electronic device, and the biometric information is derived from sensor data collected by different sensors.
在一种可能的实现方式中,电子设备,还用于在预览图像上显示至少部分第二信息,包括:电子设备,还用于响应于预览图像中包括预设面部图像,显示预览图像和至少部分第二信息,预设面部图像与第一穿戴设备对应。电子设备中存有一个或多个面部图像的信息,电子设备通过第一穿戴设备的身份信息确定出第一穿戴设备对应的预设面部图像,将预设面部图像与预览图像上的一个或多个人物进行相似度匹配,若预设面部图像与其中一个人物匹配成功,则表示预览图像上的人物为第一穿戴设备的用户,这时电子设备才显示至少部分第二信息,否则不显示,保护了用户的隐私性。In a possible implementation manner, the electronic device is further configured to display at least part of the second information on the preview image, including: the electronic device is further configured to display the preview image and at least a preset face image in response to the preview image. Part of the second information, the preset facial image corresponds to the first wearable device. Information of one or more facial images is stored in the electronic device, the electronic device determines the preset facial image corresponding to the first wearable device through the identity information of the first wearable device, and compares the preset facial image with one or more of the facial images on the preview image. The personal characters are matched for similarity. If the preset facial image is successfully matched with one of the characters, it means that the character on the preview image is the user of the first wearable device. At this time, the electronic device displays at least part of the second information, otherwise it will not be displayed. User privacy is protected.
在一种可能的实现方式中,电子设备,还用于在预览图像上显示至少部分第二信息,包括:电子设备,还用于响应于预览图像中包括第三面部图像和第四面部图像,且第三面部图像与预设面部图像匹配,在预览图像的第二区域显示至少部分第二信息;其中,预设面部图像与第一穿戴设备对应;第二区域与第三面部图像的显示区域的距离,小于第二区域与第四面部图像的显示区域的距离。这种方式描述了确定至少部分第二信息在预览图像上的显示位置的一种方式。将预设面部图像与预览图像中的一个或多个人物进行相似度匹配,若预设面部图像与其中一个人物匹配成功,则拍摄设备将至少部分生物特征信息显示在预览图像中该人物附近。在显示效果上加强了用户和第二信息的对应性,可以直观的看出第二信息在预览 图像上对应的用户,提升用户体验。In a possible implementation manner, the electronic device is further configured to display at least part of the second information on the preview image, including: the electronic device is further configured to respond that the preview image includes the third facial image and the fourth facial image, And the third facial image matches the preset facial image, and at least part of the second information is displayed in the second area of the preview image; wherein the preset facial image corresponds to the first wearable device; the second area corresponds to the display area of the third facial image The distance is smaller than the distance between the second area and the display area of the fourth face image. This approach describes a way of determining the display position of at least part of the second information on the preview image. The preset facial image is matched with one or more characters in the preview image for similarity, and if the preset facial image is successfully matched with one of the characters, the photographing device displays at least part of the biometric information near the character in the preview image. In the display effect, the correspondence between the user and the second information is strengthened, and the user corresponding to the second information on the preview image can be intuitively seen, thereby improving the user experience.
在一种可能的实现方式中,电子设备,还用于:响应于预览图像中不包括预设面部图像,输出第一提示,第一提示用于提示用户对准面部。电子设备将预设面部图像与图片中的一个或多个人物进行相似度匹配,若匹配不成功,则说明预览图像中不包括第一穿戴设备的用户,电子设备输出提示信息提示用户将拍摄角度对准面部,避免了拍摄出的多媒体文件中不包括第一穿戴设备的用户的情况,提升用户体验。In a possible implementation manner, the electronic device is further configured to: in response to the preview image not including the preset face image, output a first prompt, where the first prompt is used to prompt the user to align the face. The electronic device performs similarity matching between the preset facial image and one or more characters in the picture. If the matching is unsuccessful, it means that the user of the first wearable device is not included in the preview image, and the electronic device outputs prompt information to prompt the user to change the shooting angle. Aiming at the face avoids the situation that the user of the first wearable device is not included in the photographed multimedia file, and improves the user experience.
在一种可能的实现方式中,第一信息包括以下至少一项:健康状态信息、运动状态信息、或情绪状态信息。In a possible implementation manner, the first information includes at least one of the following: health state information, exercise state information, or emotional state information.
在一种可能的实现方式中,第一传感器数据包括通过至少一种传感器检测的数据,至少一种传感器至少包括以下至少一项:加速度传感器、陀螺仪传感器、地磁传感器、大气压传感器、心率传感器、血压传感器、心电传感器、肌电传感器、体温传感器、皮电传感器、空气温湿度传感器、光照传感器、或骨传导传感器。In a possible implementation manner, the first sensor data includes data detected by at least one sensor, and the at least one sensor includes at least one of the following: an acceleration sensor, a gyroscope sensor, a geomagnetic sensor, an atmospheric pressure sensor, a heart rate sensor, Blood pressure sensor, ECG sensor, EMG sensor, body temperature sensor, skin electrical sensor, air temperature and humidity sensor, light sensor, or bone conduction sensor.
在一种可能的实现方式中,系统还包括第二穿戴设备;电子设备,还用于与第二穿戴设备建立连接;第二穿戴设备,用于通过至少一个传感器检测出第四传感器数据;其中,第一信息还与第四传感器数据对应。这种方式描述了电子设备与两个穿戴设备(第一穿戴设备和第二穿戴设备)一起建立连接的情况下,电子设备获取到第一信息,该第一信息与第一穿戴设备和第二穿戴设备的传感器数据(第一传感器数据和第四传感器数据)均对应。其中,电子设备与两个以上的穿戴设备一起建立连接同理。In a possible implementation manner, the system further includes a second wearable device; the electronic device is further configured to establish a connection with the second wearable device; the second wearable device is configured to detect fourth sensor data through at least one sensor; wherein , the first information also corresponds to the fourth sensor data. This method describes that when the electronic device establishes a connection with two wearable devices (the first wearable device and the second wearable device), the electronic device obtains the first information, which is related to the first wearable device and the second wearable device. The sensor data (the first sensor data and the fourth sensor data) of the wearable device are all corresponding. The same is true for establishing a connection between the electronic device and two or more wearable devices.
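A brief sketch of deriving first information that corresponds to both the first sensor data and the fourth sensor data of two connected wearable devices; the field names are assumptions for illustration only:

```python
# Sketch only: field names are illustrative.
from typing import Dict


def derive_first_information(first_sensor_data: Dict[str, float],
                             fourth_sensor_data: Dict[str, float]) -> Dict[str, float]:
    """First information corresponding to the sensor data of both wearable devices."""
    merged: Dict[str, float] = {}
    merged.update({f"wearable1.{key}": value for key, value in first_sensor_data.items()})
    merged.update({f"wearable2.{key}": value for key, value in fourth_sensor_data.items()})
    return merged


if __name__ == "__main__":
    print(derive_first_information({"heart_rate": 95.0},
                                   {"skin_temperature": 36.6}))
```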
第二方面,一种拍摄方法,应用于包括摄像头的电子设备,方法包括:电子设备与第一穿戴设备建立连接;电子设备接收第一操作;电子设备响应于第一操作,获取摄像头采集的多媒体文件;电子设备获取第一信息,第一信息和第一穿戴设备的至少一个传感器检测的第一传感器数据对应;电子设备保存多媒体文件,多媒体文件和第一信息关联。其中,多媒体文件包括图片文件、视频文件等。第一操作可以包括但不限于如点击、双击、长按、滑动等操作,第一操作用于触发电子设备拍摄图片/视频。电子设备响应于第一操作,通过摄像头采集图片/视频。电子设备获取第一穿戴设备通过传感器检测的用户的生物特征信息(例如心率、血压、运动姿势等),电子设备将该生物特征信息与拍摄的图片/视频相关联,或者将该生物特征与拍摄的图片/视频中的人物相关联,该人物与该生物特征是对应的。生物特征信息又称为第一信息。电子设备生成包括生物特征信息的图片/视频,该生物特征信息指示了用户的生物特征。这种方法在图片/视频生成的过程中将生物特征信息与图片/视频的信息相关联,对图片/视频做出了更精准的特征识别,并且提供了一种新的图片/视频格式,使得电子设备保存带有生物特征信息的图片/视频,方便后续根据生物特征信息对存储的图片/视频进行分类。In a second aspect, a shooting method is applied to an electronic device including a camera. The method includes: establishing a connection between the electronic device and a first wearable device; the electronic device receives a first operation; the electronic device responds to the first operation and acquires multimedia collected by the camera. file; the electronic device acquires first information, and the first information corresponds to first sensor data detected by at least one sensor of the first wearable device; the electronic device saves a multimedia file, and the multimedia file is associated with the first information. The multimedia files include picture files, video files, and the like. The first operation may include, but is not limited to, operations such as clicking, double-clicking, long-pressing, sliding, and the like, and the first operation is used to trigger the electronic device to take pictures/videos. The electronic device captures pictures/videos through the camera in response to the first operation. The electronic device acquires the user's biometric information (such as heart rate, blood pressure, exercise posture, etc.) detected by the first wearable device through the sensor, and the electronic device associates the biometric information with the photographed picture/video, or associates the biometric information with the photographed image/video. associated with the person in the picture/video of , the person corresponds to the biometric. The biometric information is also referred to as the first information. The electronic device generates a picture/video including biometric information indicative of the user's biometrics. This method associates the biometric information with the information of the picture/video in the process of picture/video generation, makes more accurate feature recognition for the picture/video, and provides a new picture/video format, making The electronic device saves pictures/videos with biometric information, which facilitates subsequent classification of the stored pictures/videos according to the biometric information.
在一种可能的实现方式中,电子设备获取第一信息,包括:电子设备获取第一传感器数据;电子设备基于第一传感器数据确定第一信息。这种方式描述了电子设备获取第一信息的一种方式,第一信息为生物特征信息。第一穿戴设备通过一个或多个传感器检测传感器数据(第一传感器数据),将该第一传感器数据发送给电子设备,电子设备基于获取的第一传感器数据确定生物特征信息。电子设备对第一传感器数据进行处理,省去了第一穿戴设备处理第一传感器数据的时间和资源。In a possible implementation manner, acquiring the first information by the electronic device includes: acquiring the first sensor data by the electronic device; and determining the first information based on the first sensor data by the electronic device. This way describes a way for the electronic device to obtain the first information, where the first information is biometric information. The first wearable device detects sensor data (first sensor data) through one or more sensors, sends the first sensor data to the electronic device, and the electronic device determines biometric information based on the acquired first sensor data. The electronic device processes the first sensor data, which saves time and resources for the first wearable device to process the first sensor data.
在一种可能的实现方式中,电子设备获取第一信息,包括:电子设备获取第一穿戴设备基于第一传感器数据确定的第一信息。这种方式描述了电子设备获取第一信息的又一种方式,第一信息为生物特征信息。第一穿戴设备通过一个或多个传感器检测传感器数据(第一传感 器数据),第一穿戴设备基于该第一传感器数据确定第一信息,第一穿戴设备将该第一信息发送给电子设备,电子设备获取到第一信息。第一穿戴设备对第一传感器数据进行处理,省去了电子设备处理第一传感器数据的时间和资源。In a possible implementation manner, acquiring the first information by the electronic device includes: the electronic device acquiring the first information determined by the first wearable device based on the first sensor data. This method describes another method for the electronic device to obtain the first information, where the first information is biometric information. The first wearable device detects sensor data (first sensor data) through one or more sensors, the first wearable device determines first information based on the first sensor data, the first wearable device sends the first information to the electronic device, and the electronic The device obtains the first information. The first wearable device processes the first sensor data, which saves time and resources for the electronic device to process the first sensor data.
在一种可能的实现方式中,电子设备获取第一信息,包括:响应于第一操作,电子设备获取第一信息。这种方式描述了电子设备获取第一信息的时机,第一操作包括触发电子设备拍摄图片/视频的用户操作。在电子设备接收到第一操作后,电子设备基于该第一操作,获取到第一信息。In a possible implementation manner, the electronic device acquiring the first information includes: in response to the first operation, the electronic device acquiring the first information. This method describes the timing when the electronic device acquires the first information, and the first operation includes a user operation that triggers the electronic device to capture a picture/video. After the electronic device receives the first operation, the electronic device acquires the first information based on the first operation.
在一种可能的实现方式中,响应于第一操作,电子设备获取第一信息,包括:响应于第一操作,电子设备向第一穿戴设备发送第一请求,第一请求用于请求获取第一穿戴设备的至少一个传感器检测的第一传感器数据;电子设备获取第一信息,第一信息和第一传感器数据对应。这种方式描述了电子设备响应于第一操作,获取第一信息的一种方式。在电子设备接收到第一操作后,电子设备基于该第一操作,向第一穿戴设备发送请求,以获取传感器数据(第一传感器数据),电子设备基于获取的第一传感器数据确定生物特征信息(第一信息)。In a possible implementation manner, acquiring the first information by the electronic device in response to the first operation includes: in response to the first operation, the electronic device sends a first request to the first wearable device, where the first request is used to request to acquire the first information. First sensor data detected by at least one sensor of a wearable device; the electronic device acquires first information, and the first information corresponds to the first sensor data. This manner describes a manner in which the electronic device acquires the first information in response to the first operation. After the electronic device receives the first operation, based on the first operation, the electronic device sends a request to the first wearable device to acquire sensor data (first sensor data), and the electronic device determines biometric information based on the acquired first sensor data (first message).
在一种可能的实现方式中,响应于第一操作,电子设备获取第一信息,包括:响应于第一操作,电子设备向第一穿戴设备发送第一请求,第一请求用于请求获取第一穿戴设备基于第一传感器数据确定的第一信息;电子设备获取第一信息,第一信息和第一传感器数据对应。这种方式描述了电子设备响应于第一操作,获取第一信息的又一种方式。在电子设备接收到第一操作后,电子设备基于该第一操作,向第一穿戴设备发送请求,以获取生物特征信息(第一信息),第一穿戴设备将该第一信息发送给电子设备,电子设备获取到第一信息。In a possible implementation manner, acquiring the first information by the electronic device in response to the first operation includes: in response to the first operation, the electronic device sends a first request to the first wearable device, where the first request is used to request to acquire the first information. A wearable device determines the first information based on the first sensor data; the electronic device obtains the first information, and the first information corresponds to the first sensor data. This manner describes another manner in which the electronic device acquires the first information in response to the first operation. After the electronic device receives the first operation, the electronic device sends a request to the first wearable device based on the first operation to obtain biometric information (first information), and the first wearable device sends the first information to the electronic device , the electronic device obtains the first information.
在一种可能的实现方式中,电子设备接收第一操作包括:电子设备显示拍摄预览界面,拍摄预览界面中包括拍摄按钮;电子设备接收第一操作,第一操作包括作用于拍摄按钮的输入操作。这里提供了一种应用场景,电子设备在拍摄预览界面接收到第一操作,触发电子设备拍摄图片/视频。In a possible implementation manner, the electronic device receiving the first operation includes: the electronic device displays a shooting preview interface, and the shooting preview interface includes a shooting button; the electronic device receives a first operation, and the first operation includes an input operation acting on the shooting button . An application scenario is provided here. The electronic device receives the first operation on the shooting preview interface, and triggers the electronic device to shoot pictures/videos.
在一种可能的实现方式中,多媒体文件的属性信息中包括第一信息。电子设备保存多媒体文件,将该多媒体文件和第一信息进行融合编码,用户可以通过查看该多媒体文件的属性信息查看到第一信息。In a possible implementation manner, the attribute information of the multimedia file includes the first information. The electronic device saves the multimedia file, performs fusion coding on the multimedia file and the first information, and the user can check the first information by checking the attribute information of the multimedia file.
在一种可能的实现方式中,电子设备与第一穿戴设备建立连接,包括:响应于电子设备进入预设拍摄模式,电子设备与第一穿戴设备建立连接。这种方式描述了电子设备与第一穿戴设备建立连接的时机,在预设拍摄模式下,电子设备显示拍摄预览界面。当电子设备检测到进入预设拍摄模式的用户操作时,电子设备自动开启蓝牙,与第一穿戴设备自动建立蓝牙连接。In a possible implementation manner, establishing a connection between the electronic device and the first wearable device includes: in response to the electronic device entering a preset shooting mode, establishing a connection between the electronic device and the first wearable device. This method describes the timing of establishing the connection between the electronic device and the first wearable device. In the preset shooting mode, the electronic device displays a shooting preview interface. When the electronic device detects a user operation entering the preset shooting mode, the electronic device automatically turns on Bluetooth, and automatically establishes a Bluetooth connection with the first wearable device.
在一种可能的实现方式中,方法还包括:电子设备显示多媒体文件和至少部分第一信息。用户可以进入图库查看多媒体文件,电子设备显示多媒体文件和至少部分第一信息,第一信息指示了用户的生物特征信息。可选的,电子设备的显示界面上显示多媒体文件和部分第一信息,用户可以通过查看详细信息等方式查看全部第一信息。In a possible implementation manner, the method further includes: the electronic device displays the multimedia file and at least part of the first information. The user can enter the gallery to view the multimedia file, and the electronic device displays the multimedia file and at least part of the first information, where the first information indicates the biometric information of the user. Optionally, the display interface of the electronic device displays the multimedia file and part of the first information, and the user can view all the first information by viewing the detailed information or the like.
在一种可能的实现方式中,电子设备显示多媒体文件和至少部分第一信息,具体包括:响应于多媒体文件中包括预设面部图像,电子设备显示多媒体文件和至少部分第一信息;预设面部图像与第一穿戴设备对应。电子设备中存有一个或多个面部图像的信息,电子设备通过第一穿戴设备的身份信息确定出第一穿戴设备对应的预设面部图像,将预设面部图像与多媒体文件中的一个或多个人物进行相似度匹配,若预设面部图像与其中一个人物匹配成功,则表示多媒体文件中的人物为第一穿戴设备的用户,这时电子设备才显示至少部分第一信息, 否则不显示,保护了用户的隐私性。In a possible implementation manner, the electronic device displays the multimedia file and at least part of the first information, specifically including: in response to the multimedia file including a preset face image, the electronic device displays the multimedia file and at least part of the first information; preset face image The image corresponds to the first wearable device. Information of one or more facial images is stored in the electronic device, the electronic device determines the preset facial image corresponding to the first wearable device through the identity information of the first wearable device, and associates the preset facial image with one or more of the multimedia files. The personal characters are matched for similarity. If the preset facial image is successfully matched with one of the characters, it means that the character in the multimedia file is the user of the first wearable device. At this time, the electronic device displays at least part of the first information, otherwise it is not displayed. User privacy is protected.
其中,预设面部图像与第一穿戴设备具有关联关系,其中面部信息可以是用户预设在电子设备中,也可以是以图像或视频的方式上传到电子设备中,也可以是用户预设在第一穿戴设备中,第一穿戴设备再提供给电子设备,本申请不作限制。The preset facial image is associated with the first wearable device, and the facial information may be preset by the user in the electronic device, or uploaded to the electronic device in the form of an image or video, or may be preset by the user in the electronic device. In the first wearable device, the first wearable device is then provided to the electronic device, which is not limited in this application.
在一种可能的实现方式中,电子设备显示多媒体文件和至少部分第一信息,具体包括:响应于多媒体文件中包括第一面部图像和第二面部图像,且第一面部图像与预设面部图像匹配,电子设备在多媒体文件的第一区域显示至少部分第一信息;其中,预设面部图像与第一穿戴设备对应;第一区域与第一面部图像的显示区域的距离,小于第一区域与第二面部图像的显示区域的距离。这种方式描述了确定至少部分第一信息在多媒体文件上的显示位置的一种方式。将预设面部图像与多媒体文件中的一个或多个人物进行相似度匹配,若预设面部图像与其中一个人物匹配成功,则拍摄设备将至少部分第一信息显示在该人物附近。在显示效果上加强了用户和第一信息的对应性,可以直观的看出第一信息对应的用户,能够提升用户体验。In a possible implementation manner, the electronic device displays the multimedia file and at least part of the first information, which specifically includes: in response to the multimedia file including the first facial image and the second facial image, and the first facial image and the preset The facial image is matched, and the electronic device displays at least part of the first information in the first area of the multimedia file; wherein, the preset facial image corresponds to the first wearable device; the distance between the first area and the display area of the first facial image is smaller than the first area. The distance of an area from the display area of the second facial image. This approach describes a way of determining the display position of at least part of the first information on the multimedia file. Similarity matching is performed between the preset facial image and one or more characters in the multimedia file, and if the preset facial image is successfully matched with one of the characters, the photographing device displays at least part of the first information near the character. In terms of display effect, the correspondence between the user and the first information is strengthened, and the user corresponding to the first information can be seen intuitively, which can improve the user experience.
在一种可能的实现方式中,电子设备接收第一操作,之前还包括:电子设备显示拍摄预览界面,拍摄预览界面上包括摄像头采集的预览图像;电子设备在预览图像上显示至少部分第二信息,第二信息和第一穿戴设备检测出的第二传感器数据对应。第二信息为预览界面上显示的生物特征信息。电子设备在预览图像上显示至少部分第二信息,实现了在预览界面上实时显示生物特征信息,用户可以实时查看到生物特征信息。In a possible implementation manner, before the electronic device receives the first operation, it further includes: the electronic device displays a shooting preview interface, and the shooting preview interface includes a preview image captured by a camera; and the electronic device displays at least part of the second information on the preview image , the second information corresponds to the second sensor data detected by the first wearable device. The second information is the biometric information displayed on the preview interface. The electronic device displays at least part of the second information on the preview image, so that the biometric information is displayed on the preview interface in real time, and the user can view the biometric information in real time.
在一种可能的实现方式中,电子设备在预览图像上显示至少部分第二信息,具体包括:响应于预览图像中包括预设面部图像,电子设备显示预览图像和至少部分第二信息,预设面部图像与第一穿戴设备对应。电子设备中存有一个或多个面部图像的信息,电子设备通过第一穿戴设备的身份信息确定出第一穿戴设备对应的预设面部图像,将预设面部图像与预览图像上的一个或多个人物进行相似度匹配,若预设面部图像与其中一个人物匹配成功,则表示预览图像上的人物为第一穿戴设备的用户,这时电子设备才显示至少部分第二信息,否则不显示,保护了用户的隐私性。In a possible implementation manner, the electronic device displays at least part of the second information on the preview image, which specifically includes: in response to the preview image including a preset facial image, the electronic device displays the preview image and at least part of the second information, preset The facial image corresponds to the first wearable device. Information of one or more facial images is stored in the electronic device, the electronic device determines the preset facial image corresponding to the first wearable device through the identity information of the first wearable device, and compares the preset facial image with one or more of the facial images on the preview image. The personal characters are matched for similarity. If the preset facial image is successfully matched with one of the characters, it means that the character on the preview image is the user of the first wearable device. At this time, the electronic device displays at least part of the second information, otherwise it will not be displayed. User privacy is protected.
在一种可能的实现方式中,电子设备在预览界面上显示至少部分第二信息,具体包括:响应于预览图像中包括第三面部图像和第四面部图像,且第三面部图像与预设面部图像匹配,电子设备在预览图像的第二区域显示至少部分第二信息;其中,预设面部图像与第一穿戴设备对应;第二区域与第三面部图像的显示区域的距离,小于第二区域与第四面部图像的显示区域的距离。这种方式描述了确定至少部分第二信息在预览图像上的显示位置的一种方式。将预设面部图像与预览图像中的一个或多个人物进行相似度匹配,若预设面部图像与其中一个人物匹配成功,则拍摄设备将至少部分生物特征信息显示在预览图像中该人物附近。在显示效果上加强了用户和第二信息的对应性,可以直观的看出第二信息在预览图像上对应的用户,提升用户体验。In a possible implementation manner, the electronic device displays at least part of the second information on the preview interface, which specifically includes: in response to the preview image including the third facial image and the fourth facial image, and the third facial image is the same as the preset face Image matching, the electronic device displays at least part of the second information in the second area of the preview image; the preset facial image corresponds to the first wearable device; the distance between the second area and the display area of the third facial image is smaller than the second area The distance from the display area of the fourth face image. This approach describes a way of determining the display position of at least part of the second information on the preview image. The preset facial image is matched with one or more characters in the preview image for similarity, and if the preset facial image is successfully matched with one of the characters, the photographing device displays at least part of the biometric information near the character in the preview image. In terms of display effect, the correspondence between the user and the second information is strengthened, and the user corresponding to the second information on the preview image can be intuitively seen, thereby improving the user experience.
在一种可能的实现方式中,方法还包括:响应于预览图像中不包括预设面部图像,电子设备输出第一提示,第一提示用于提示用户对准面部。电子设备将预设面部图像与图片中的一个或多个人物进行相似度匹配,若匹配不成功,则说明预览图像中不包括第一穿戴设备的用户,电子设备输出提示信息提示用户将拍摄角度对准面部,避免了拍摄出的多媒体文件中不包括第一穿戴设备的用户的情况,提升用户体验。In a possible implementation manner, the method further includes: in response to the preview image not including the preset face image, the electronic device outputs a first prompt, where the first prompt is used to prompt the user to aim at the face. The electronic device performs similarity matching between the preset facial image and one or more characters in the picture. If the matching is unsuccessful, it means that the user of the first wearable device is not included in the preview image, and the electronic device outputs prompt information to prompt the user to change the shooting angle. Aiming at the face avoids the situation that the user of the first wearable device is not included in the photographed multimedia file, and improves the user experience.
在一种可能的实现方式中,第一信息包括以下至少一项:健康状态信息、运动状态信息、或情绪状态信息。In a possible implementation manner, the first information includes at least one of the following: health state information, exercise state information, or emotional state information.
在一种可能的实现方式中,第一传感器数据包括通过至少一种传感器检测的数据,至少一种传感器至少包括以下至少一项:加速度传感器、陀螺仪传感器、地磁传感器、大气压传感器、心率传感器、血压传感器、心电传感器、肌电传感器、体温传感器、皮电传感器、空气温湿度传感器、光照传感器、或骨传导传感器中。In a possible implementation manner, the first sensor data includes data detected by at least one sensor, and the at least one sensor includes at least one of the following: an acceleration sensor, a gyroscope sensor, a geomagnetic sensor, an atmospheric pressure sensor, a heart rate sensor, Blood pressure sensor, ECG sensor, EMG sensor, body temperature sensor, skin electric sensor, air temperature and humidity sensor, light sensor, or bone conduction sensor.
在一种可能的实现方式中,方法还包括:电子设备与第二穿戴设备建立连接;第二穿戴设备用于通过至少一个传感器检测出第四传感器数据,第一信息还与第四传感器数据对应。这种方式描述了电子设备与两个穿戴设备(第一穿戴设备和第二穿戴设备)一起建立连接的情况下,电子设备获取到第一信息,该第一信息与第一穿戴设备和第二穿戴设备的传感器数据(第一传感器数据和第四传感器数据)均对应。其中,电子设备与两个以上的穿戴设备一起建立连接同理。In a possible implementation manner, the method further includes: establishing a connection between the electronic device and the second wearable device; the second wearable device is configured to detect fourth sensor data through at least one sensor, and the first information also corresponds to the fourth sensor data . This method describes that when the electronic device establishes a connection with two wearable devices (the first wearable device and the second wearable device), the electronic device obtains the first information, which is related to the first wearable device and the second wearable device. The sensor data (the first sensor data and the fourth sensor data) of the wearable device are all corresponding. The same is true for establishing a connection between the electronic device and two or more wearable devices.
第三方面,本申请提供了一种电子设备,包括:一个或多个处理器、一个或多个存储器;该一个或多个存储与一个或多个处理器耦合;该一个或多个存储器用于存储计算机程序代码,该计算机程序代码包括计算机指令;当该计算机指令在该处理器上运行时,使得该电子设备执行上述任一方面任一种可能的实现方式中的拍摄方法。In a third aspect, the present application provides an electronic device, comprising: one or more processors and one or more memories; the one or more memories are coupled with the one or more processors; the one or more memories are used for The computer program code is stored in the computer program code, and the computer program code includes computer instructions; when the computer instructions are executed on the processor, the electronic device enables the electronic device to execute the photographing method in any possible implementation manner of any of the above aspects.
第四方面,本申请实施例提供了一种计算机存储介质,包括计算机指令,当计算机指令在电子设备上运行时,使得通信装置执行上述任一方面任一项可能的实现方式中的拍摄方法。In a fourth aspect, the embodiments of the present application provide a computer storage medium, including computer instructions, when the computer instructions are executed on the electronic device, the communication apparatus can execute the photographing method in any possible implementation manner of any of the foregoing aspects.
第五方面,本申请提供一种芯片系统,芯片系统应用于包括存储器、显示屏和传感器的电子设备;芯片系统包括:一个或多个接口电路和一个或者多个处理器;接口电路和处理器通过线路互联;接口电路用于从存储器接收信号,并向处理器发送信号,信号包括存储器中存储的计算机指令;当处理器执行计算机指令时,电子设备执行第一方面及第一方面任一项可能的实现方式中的任务处理的方法。In a fifth aspect, the present application provides a chip system, which is applied to an electronic device including a memory, a display screen and a sensor; the chip system includes: one or more interface circuits and one or more processors; the interface circuit and the processor interconnected by lines; the interface circuit is used to receive signals from the memory and send signals to the processor, where the signals include computer instructions stored in the memory; when the processor executes the computer instructions, the electronic device executes any one of the first aspect and the first aspect Possible implementations of task handling methods.
第六方面,本申请实施例提供了一种计算机程序产品,当计算机程序产品在计算机上运行时,使得计算机执行上述任一方面任一项可能的实现方式中的拍摄方法。In a sixth aspect, an embodiment of the present application provides a computer program product that, when the computer program product runs on a computer, enables the computer to execute the photographing method in any of the possible implementations of any one of the foregoing aspects.
Description of Drawings
FIG. 1 is a system diagram provided by an embodiment of the present application;
FIG. 2 is a schematic structural diagram of an electronic device provided by an embodiment of the present application;
FIG. 3 is a software architecture diagram of an electronic device provided by an embodiment of the present application;
FIG. 4 is a schematic structural diagram of a wearable device provided by an embodiment of the present application;
FIG. 5a to FIG. 5e are a group of schematic interface diagrams of a photographing method provided by an embodiment of the present application;
FIG. 6a to FIG. 6b are another group of schematic interface diagrams of a photographing method provided by an embodiment of the present application;
FIG. 7a to FIG. 7b are another group of schematic interface diagrams of a photographing method provided by an embodiment of the present application;
FIG. 8a to FIG. 8d are another group of schematic interface diagrams of a photographing method provided by an embodiment of the present application;
FIG. 9a to FIG. 9c are another group of schematic interface diagrams of a photographing method provided by an embodiment of the present application;
FIG. 10 is another schematic interface diagram of a photographing method provided by an embodiment of the present application;
FIG. 11a to FIG. 11c are another group of schematic interface diagrams of a photographing method provided by an embodiment of the present application;
FIG. 12 is another schematic interface diagram of a photographing method provided by an embodiment of the present application;
FIG. 13a to FIG. 13b are another group of schematic interface diagrams of a photographing method provided by an embodiment of the present application;
FIG. 14a to FIG. 14b are another group of schematic interface diagrams of a photographing method provided by an embodiment of the present application;
FIG. 15a to FIG. 15b are another group of schematic interface diagrams of a photographing method provided by an embodiment of the present application;
FIG. 16a to FIG. 16b are another group of schematic interface diagrams of a photographing method provided by an embodiment of the present application;
FIG. 17a to FIG. 17b are another group of schematic interface diagrams of a photographing method provided by an embodiment of the present application;
FIG. 18a to FIG. 18b are another group of schematic interface diagrams of a photographing method provided by an embodiment of the present application;
FIG. 19a to FIG. 19b are technical schematic diagrams of a photographing method provided by an embodiment of the present application;
FIG. 20a to FIG. 20b are flowcharts of a photographing method provided by an embodiment of the present application;
FIG. 21 is a flowchart of another photographing method provided by an embodiment of the present application;
FIG. 22 is another system diagram provided by an embodiment of the present application;
FIG. 23 and FIG. 24 are flowcharts of still another photographing method provided by an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application are described clearly and in detail below with reference to the accompanying drawings. In the description of the embodiments of the present application, unless otherwise specified, "/" means "or"; for example, A/B may mean A or B. "And/or" in this text merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may indicate the following three cases: only A exists, both A and B exist, and only B exists. In addition, in the description of the embodiments of the present application, "a plurality of" means two or more.
The terms "first" and "second" below are used for descriptive purposes only and should not be understood as indicating or implying relative importance or implicitly indicating the number of the technical features concerned. Therefore, a feature defined with "first" or "second" may explicitly or implicitly include one or more of the features. In the description of the embodiments of the present application, unless otherwise specified, "a plurality of" means two or more.
The electronic device/user device involved in the embodiments of the present application may be a mobile phone, a tablet computer, a desktop computer, a laptop computer, a notebook computer, an ultra-mobile personal computer (UMPC), a handheld computer, a netbook, a personal digital assistant (PDA, also known as a palmtop computer), a virtual reality device, a portable Internet device, a data storage device, a camera, or a wearable device (for example, a wireless headset, a smart watch, a smart band, smart glasses, a head-mounted display (HMD), electronic clothing, an electronic bracelet, an electronic necklace, an electronic accessory, an electronic tattoo, or a smart mirror), and so on.
An embodiment of the present application provides a photographing method, applied to a system that includes at least an electronic device 100 and a wearable device 201, where the electronic device 100 establishes a connection with the wearable device 201. When the electronic device 100 captures a picture/video, it obtains the biometric feature of the user (for example, heart rate, blood pressure, or exercise posture) detected by the wearable device 201 through its sensors. The electronic device 100 associates the biometric feature with the captured picture/video, or with the person in the captured picture/video to whom that biometric feature corresponds. The electronic device 100 then generates a picture/video that includes biometric information, where the biometric information indicates the biometric feature of the user. This method associates the biometric information with the picture/video information while the picture/video is being generated, enables more accurate feature recognition of the picture/video, and provides a new picture/video format, so that the electronic device 100 can save pictures/videos carrying biometric information and subsequently classify the stored pictures/videos according to that information. To implement the solutions involved in the embodiments of the present application, the number of electronic devices and the number of wearable devices may each be one or more, which is not limited in the present application.
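As an illustration of the kind of association described above, the following is a minimal sketch, not part of the claimed method; the class and field names (MediaRecord, BiometricInfo, and so on) are hypothetical and only show how biometric information obtained from a wearable device could be modeled alongside a captured picture/video and, optionally, a recognized person in it:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical data model: one media file plus the biometric information
// captured at (roughly) the same moment, optionally tied to a person in the frame.
public class MediaRecord {
    public final String filePath;          // e.g. "/sdcard/DCIM/IMG_0001.jpg"
    public final long captureTimeMillis;   // shutter time on the electronic device
    public final List<BiometricInfo> biometrics = new ArrayList<>();

    public MediaRecord(String filePath, long captureTimeMillis) {
        this.filePath = filePath;
        this.captureTimeMillis = captureTimeMillis;
    }

    public static class BiometricInfo {
        public final String sourceDeviceId; // which wearable device reported it
        public final String personId;       // optional: person in the picture the data belongs to
        public final Integer heartRateBpm;  // raw or derived values, all optional
        public final String motionState;    // e.g. "running", "forehand stroke"
        public final String emotionState;   // e.g. "excited"

        public BiometricInfo(String sourceDeviceId, String personId,
                             Integer heartRateBpm, String motionState, String emotionState) {
            this.sourceDeviceId = sourceDeviceId;
            this.personId = personId;
            this.heartRateBpm = heartRateBpm;
            this.motionState = motionState;
            this.emotionState = emotionState;
        }
    }
}
```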
FIG. 1 exemplarily shows a system diagram provided by the present application.
As shown in FIG. 1, the system may include an electronic device 100 and one or more wearable devices (for example, a wearable device 201 and a wearable device 202). The electronic device 100 and the wearable device 201 (or the wearable device 202) may be connected through wireless communication, for example, through at least one of the following wireless connection manners: Bluetooth (BT), near field communication (NFC), wireless fidelity (Wi-Fi), or Wi-Fi Direct.
Optionally, the electronic device 100 may be connected to a plurality of wearable devices of different types. For example, the electronic device 100 may be connected to a smart watch and a wireless headset through Bluetooth at the same time.
In the embodiments of the present application, the case in which the electronic device 100 and the wearable device 201 are connected through Bluetooth is used as an example for description.
The electronic device 100 is an electronic device with a photographing function, for example, a mobile phone, a tablet, or a camera.
The wearable device 201 includes a wireless headset, a smart watch, a smart band, smart glasses, a smart ring, smart sports shoes, a virtual reality display device, a smart headband, electronic clothing, an electronic bracelet, an electronic necklace, an electronic accessory, an electronic tattoo, a smart mirror, or the like.
The wearable device 201 can detect the user's health state information, motion state information, emotional state information, and the like through sensors. The health state information includes information such as heart rate, blood pressure, blood glucose, electroencephalogram, electrocardiogram, electromyogram, and body temperature. The motion state information includes common motion types and postures such as walking, running, cycling, swimming, playing badminton, skating, surfing, and dancing, and may also include finer-grained motion postures, for example, a forehand stroke, a backhand stroke, Latin dancing, or popping. The emotional state information includes tension, anxiety, sadness, stress, excitement, joy, and the like.
The electronic device 100 and the wearable device 201 are connected through Bluetooth. When capturing a picture/video, the electronic device 100 obtains biometric information of the user, where the biometric information corresponds to the user's health state information, motion state information, emotional state information, and the like detected by the wearable device 201 through its sensors. For example, the wearable device 201 detects heart rate data through a heart rate sensor and blood pressure data through a blood pressure sensor; the biometric information may then be the heart rate data and blood pressure data themselves, or information such as "heart rate normal" and "blood pressure normal" derived from that data.
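The derived form of the biometric information could, for example, be produced by simple threshold rules on the raw readings. The sketch below is only illustrative; the thresholds and method names are assumptions, not values prescribed by this application:

```java
// Illustrative derivation of "heart rate normal" / "blood pressure normal" style labels
// from raw sensor readings reported by the wearable device.
public final class BiometricDerivation {

    // Assumed resting-range thresholds; a real implementation would also consider
    // context such as age, activity level, and medical guidance.
    public static String describeHeartRate(int bpm) {
        if (bpm < 60)  return "heart rate low (" + bpm + " bpm)";
        if (bpm > 100) return "heart rate high (" + bpm + " bpm)";
        return "heart rate normal (" + bpm + " bpm)";
    }

    public static String describeBloodPressure(int systolic, int diastolic) {
        if (systolic >= 140 || diastolic >= 90) return "blood pressure high";
        if (systolic < 90 || diastolic < 60)    return "blood pressure low";
        return "blood pressure normal";
    }

    public static void main(String[] args) {
        System.out.println(describeHeartRate(72));          // heart rate normal (72 bpm)
        System.out.println(describeBloodPressure(118, 76)); // blood pressure normal
    }
}
```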
The electronic device associates the biometric information with the picture/video information and saves the picture/video carrying the biometric information. By classifying such pictures/videos with biometric information, the electronic device can quickly and accurately query and filter pictures/videos with different characteristics. In addition, the sensors of the wearable device can provide more intrinsic features, so the feature recognition of pictures/videos is more accurate.
First, the exemplary electronic device 100 provided in the following embodiments of the present application is introduced.
FIG. 2 shows a schematic structural diagram of the electronic device 100.
The following describes the embodiments in detail by taking the electronic device 100 as an example. It should be understood that the electronic device 100 shown in FIG. 2 is merely an example; the electronic device 100 may have more or fewer components than those shown in FIG. 2, may combine two or more components, or may have a different component configuration. The various components shown in the figure may be implemented in hardware, software, or a combination of hardware and software, including one or more signal processing and/or application-specific integrated circuits.
The electronic device 100 may include: a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, a headset jack 170D, a sensor module 180, a button 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a subscriber identification module (SIM) card interface 195, and the like. The sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, a barometric pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
It can be understood that the structure illustrated in the embodiments of the present application does not constitute a specific limitation on the electronic device 100. In other embodiments of the present application, the electronic device 100 may include more or fewer components than shown, combine some components, split some components, or have a different component arrangement. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 may include one or more processing units. For example, the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), and the like. Different processing units may be independent devices or may be integrated in one or more processors. In some embodiments, the electronic device 100 may also include one or more processors 110.
The controller may generate an operation control signal according to an instruction operation code and a timing signal, and complete the control of fetching and executing instructions.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache. This memory may store instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs to use the instructions or data again, they can be called directly from this memory. Repeated accesses are thus avoided and the waiting time of the processor 110 is reduced, which improves the efficiency of the system.
The charging management module 140 is configured to receive a charging input from a charger, where the charger may be a wireless charger or a wired charger. The power management module 141 is configured to connect the battery 142 and the charging management module 140 to the processor 110. The power management module 141 receives input from the battery 142 and/or the charging management module 140, and supplies power to the processor 110, the internal memory 121, the external memory, the display screen 194, the camera 193, the wireless communication module 160, and the like.
The wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modem processor, the baseband processor, and the like.
The antenna 1 and the antenna 2 are configured to transmit and receive electromagnetic wave signals. Each antenna in the electronic device 100 may be used to cover a single communication frequency band or multiple communication frequency bands. Different antennas may also be multiplexed to improve antenna utilization.
The mobile communication module 150 may provide wireless communication solutions applied to the electronic device 100, including 2G/3G/4G/5G. The mobile communication module 150 may include at least one filter, a switch, a power amplifier, a low noise amplifier (LNA), and the like. The mobile communication module 150 may receive electromagnetic waves through the antenna 1, filter and amplify the received electromagnetic waves, and transmit them to the modem processor for demodulation. The mobile communication module 150 may also amplify a signal modulated by the modem processor and convert it into an electromagnetic wave for radiation through the antenna 1. In some embodiments, at least some functional modules of the mobile communication module 150 may be provided in the processor 110. In some embodiments, at least some functional modules of the mobile communication module 150 and at least some modules of the processor 110 may be provided in the same device.
The modem processor may include a modulator and a demodulator. The modulator is configured to modulate a low-frequency baseband signal to be sent into a medium- or high-frequency signal. The demodulator is configured to demodulate a received electromagnetic wave signal into a low-frequency baseband signal and then transmit the demodulated low-frequency baseband signal to the baseband processor for processing. After being processed by the baseband processor, the low-frequency baseband signal is passed to the application processor. The application processor outputs a sound signal through an audio device (not limited to the speaker 170A, the receiver 170B, and the like), or displays an image or video through the display screen 194. In some embodiments, the modem processor may be an independent device. In other embodiments, the modem processor may be independent of the processor 110 and provided in the same device as the mobile communication module 150 or other functional modules.
The wireless communication module 160 may provide wireless communication solutions applied to the electronic device 100, including wireless local area network (WLAN) (for example, a wireless fidelity (Wi-Fi) network), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), and infrared (IR) technologies. The wireless communication module 160 may be one or more devices integrating at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, performs frequency modulation and filtering on the electromagnetic wave signals, and sends the processed signals to the processor 110. The wireless communication module 160 may also receive a signal to be sent from the processor 110, perform frequency modulation and amplification on it, and convert it into an electromagnetic wave for radiation through the antenna 2. Exemplarily, the wireless communication module 160 may include a Bluetooth module, a Wi-Fi module, and the like.
In some embodiments, the antenna 1 of the electronic device 100 is coupled to the mobile communication module 150, and the antenna 2 is coupled to the wireless communication module 160, so that the electronic device 100 can communicate with networks and other devices through wireless communication technologies. The wireless communication technologies may include the 5th generation (5G) mobile communication system, the New Radio (NR) system, the global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technologies, and the like. The GNSS may include the global positioning system (GPS), the global navigation satellite system (GLONASS), the BeiDou navigation satellite system (BDS), the quasi-zenith satellite system (QZSS), and/or satellite based augmentation systems (SBAS).
Optionally, in some embodiments, the Bluetooth (BT) module and the WLAN module included in the wireless communication module 160 may transmit signals to detect or scan for devices near the electronic device 100, so that the electronic device 100 can discover nearby devices by using wireless communication technologies such as Bluetooth or WLAN, establish wireless communication connections with the nearby devices, and share data with the nearby devices through those connections. The Bluetooth (BT) module may provide a Bluetooth communication solution including one or more of Classic Bluetooth (Bluetooth 2.1) or Bluetooth Low Energy (BLE). The WLAN module may provide a WLAN communication solution including one or more of Wi-Fi Direct, Wi-Fi LAN, or Wi-Fi softAP.
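On an Android-based electronic device, the discovery step described above could, for example, be driven by the framework's classic Bluetooth discovery API. The following is only a simplified sketch: runtime permission checks and error handling are omitted, and the application-specific handling of a found wearable device is left as a placeholder.

```java
import android.bluetooth.BluetoothAdapter;
import android.bluetooth.BluetoothDevice;
import android.content.BroadcastReceiver;
import android.content.Context;
import android.content.Intent;
import android.content.IntentFilter;

// Sketch of discovering nearby devices (e.g. the wearable device 201) over classic Bluetooth.
public class WearableDiscovery {

    public void startDiscovery(Context context) {
        BluetoothAdapter adapter = BluetoothAdapter.getDefaultAdapter();
        if (adapter == null || !adapter.isEnabled()) {
            return; // no Bluetooth support, or Bluetooth is turned off
        }

        // Each discovered device is delivered via an ACTION_FOUND broadcast.
        BroadcastReceiver receiver = new BroadcastReceiver() {
            @Override
            public void onReceive(Context ctx, Intent intent) {
                BluetoothDevice device = intent.getParcelableExtra(BluetoothDevice.EXTRA_DEVICE);
                if (device != null) {
                    // Application-specific: decide whether this is the wearable device
                    // to connect to, e.g. by its name or a known address.
                }
            }
        };
        context.registerReceiver(receiver, new IntentFilter(BluetoothDevice.ACTION_FOUND));

        adapter.startDiscovery(); // asynchronous scan; results arrive in the receiver above
    }
}
```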
Optionally, in some embodiments, the wireless communication solution provided by the mobile communication module 150 may enable the electronic device to communicate with a device (such as a server) in a network, and the WLAN wireless communication solution provided by the wireless communication module 160 may also enable the electronic device to communicate with a device (such as a server) in a network and, through that device (such as a server), communicate with a cloud device. In this way, the electronic device can discover the cloud device and transmit data to the cloud device.
The electronic device 100 implements a display function through the GPU, the display screen 194, the application processor, and the like. The GPU is a microprocessor for image processing and connects the display screen 194 and the application processor. The GPU is configured to perform mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
The display screen 194 is configured to display images, videos, and the like. The display screen 194 may be a flat display screen, a curved display screen, or a foldable screen. When the display screen 194 is a foldable screen in the folded state, the foldable screen includes at least a first display area and a second display area, whose light-emitting surfaces are different. The first display area is located in a first region of the foldable screen, and the second display area is located in a second region of the foldable screen. When the foldable screen is in the folded state, the included angle between the first region and the second region is greater than or equal to 0 degrees and less than 180 degrees.
The display screen 194 includes a display panel. The display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like. In some embodiments, the electronic device 100 may include 1 or N display screens 194, where N is a positive integer greater than 1. When the electronic device 100 includes two or more display screens, the forms of the display screens may differ. For example, one display screen may be a foldable display and another may be a flat display; or one display screen may be a color display and another may be a black-and-white display.
The electronic device 100 may implement a photographing function through the ISP, the camera 193, the video codec, the GPU, the display screen 194, the application processor, and the like.
The ISP is configured to process data fed back by the camera 193. For example, when a photo is taken, the shutter is opened, light is transmitted through the lens to the photosensitive element of the camera, the optical signal is converted into an electrical signal, and the photosensitive element of the camera passes the electrical signal to the ISP for processing, which converts it into an image visible to the naked eye. The ISP may also perform algorithm optimization on the noise, brightness, and skin tone of the image, and may optimize parameters such as the exposure and color temperature of the photographing scene. In some embodiments, the ISP may be provided in the camera 193.
The camera 193 is configured to capture a still image or a video. An optical image of an object is generated through the lens and projected onto the photosensitive element. The photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal, and then passes the electrical signal to the ISP, which converts it into a digital image signal. The ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into an image signal in a standard format such as RGB or YUV. In some embodiments, the electronic device 100 may include 1 or N cameras 193, where N is a positive integer greater than 1.
The camera 193 may be a 3D camera, and the electronic device 100 may implement the photographing function through the 3D camera, the ISP, the video codec, the GPU, the display screen 194, the application processor (AP), the neural-network processing unit (NPU), and the like.
The 3D camera may be used to capture color image data and depth data of the photographed object. The ISP may be used to process the color image data captured by the 3D camera. For example, when a photo is taken, the shutter is opened, light is transmitted through the lens to the photosensitive element of the camera, the optical signal is converted into an electrical signal, and the photosensitive element passes the electrical signal to the ISP for processing, which converts it into an image visible to the naked eye. The ISP may also perform algorithm optimization on the noise, brightness, and skin tone of the image, and may optimize parameters such as the exposure and color temperature of the photographing scene. In some embodiments, the ISP may be provided in the 3D camera.
Optionally, in some embodiments, the 3D camera may be composed of a color camera module and a 3D sensing module.
Optionally, in some embodiments, the photosensitive element of the camera in the color camera module may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal, and then passes the electrical signal to the ISP, which converts it into a digital image signal. The ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into an image signal in a standard format such as RGB or YUV.
Optionally, in some embodiments, the 3D sensing module may be a time of flight (TOF) 3D sensing module or a structured light 3D sensing module. Structured light 3D sensing is an active depth sensing technology, and the basic components of a structured light 3D sensing module may include an infrared (IR) emitter, an IR camera module, and the like. The working principle of the structured light 3D sensing module is to first project a light spot pattern of a specific shape onto the photographed object, then receive the light coding of the spot pattern on the surface of the object, compare it with the originally projected pattern, and calculate the three-dimensional coordinates of the object by the principle of triangulation. The three-dimensional coordinates include the distance between the electronic device 100 and the photographed object. TOF 3D sensing is also an active depth sensing technology, and the basic components of a TOF 3D sensing module may include an infrared (IR) emitter, an IR camera module, and the like. The working principle of the TOF 3D sensing module is to calculate the distance (that is, the depth) between the TOF 3D sensing module and the photographed object from the round-trip time of the infrared light, so as to obtain a 3D depth-of-field map.
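For reference, the depth calculations mentioned above reduce to well-known textbook relations; the symbols below are generic and are not defined by this application, and the exact formulas used by a given module are implementation-specific.

```latex
% TOF: c is the speed of light, \Delta t the measured round-trip time of the IR pulse
d_{\mathrm{TOF}} = \frac{c \, \Delta t}{2}

% Triangulation (structured light / stereo form): f is the focal length of the IR camera,
% B the baseline between emitter and camera, \delta the observed disparity of the pattern
Z = \frac{f \, B}{\delta}
```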
The digital signal processor is configured to process digital signals; in addition to digital image signals, it may process other digital signals. For example, when the electronic device 100 selects a frequency point, the digital signal processor is used to perform a Fourier transform and the like on the frequency point energy.
The video codec is configured to compress or decompress digital video. The electronic device 100 may support one or more video codecs, so that the electronic device 100 can play or record videos in multiple encoding formats, for example, moving picture experts group (MPEG)-1, MPEG-2, MPEG-3, and MPEG-4.
The NPU is a neural-network (NN) computing processor. By drawing on the structure of biological neural networks, for example the transfer mode between neurons in the human brain, it processes input information quickly and can also learn continuously by itself. Applications such as intelligent cognition of the electronic device 100, for example image recognition, facial recognition, speech recognition, and text understanding, can be implemented through the NPU.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the electronic device 100. The external memory card communicates with the processor 110 through the external memory interface 120 to implement a data storage function, for example, saving files such as music, photos, and videos in the external memory card.
The internal memory 121 may be used to store computer-executable program code, where the executable program code includes instructions. By running the instructions stored in the internal memory 121, the processor 110 executes various functional applications and data processing of the electronic device 100. The internal memory 121 may include a program storage area and a data storage area. The program storage area may store an operating system, an application required by at least one function (such as a sound playback function or an image playback function), and the like. The data storage area may store data (such as audio data and a phone book) created during use of the electronic device 100. In addition, the internal memory 121 may include a high-speed random access memory, and may also include a non-volatile memory, for example at least one magnetic disk storage device, a flash memory device, or a universal flash storage (UFS).
The electronic device 100 may implement audio functions, such as music playback and recording, through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the headset jack 170D, the application processor, and the like.
The audio module 170 is configured to convert digital audio information into an analog audio signal output, and also to convert an analog audio input into a digital audio signal. The audio module 170 may also be used to encode and decode audio signals. The speaker 170A, also referred to as a "loudspeaker", is configured to convert an audio electrical signal into a sound signal. The electronic device 100 can play music or handle a hands-free call through the speaker 170A. The receiver 170B, also referred to as an "earpiece", is configured to convert an audio electrical signal into a sound signal. When the electronic device 100 answers a call or plays a voice message, the voice can be heard by placing the receiver 170B close to the ear. The microphone 170C, also referred to as a "mic" or "mouthpiece", is configured to convert a sound signal into an electrical signal. When making a call or sending a voice message, the user can speak close to the microphone 170C to input the sound signal into the microphone 170C. The electronic device 100 may be provided with at least one microphone 170C.
The headset jack 170D is configured to connect a wired headset. The headset jack 170D may be the USB interface 130, or may be a 3.5 mm open mobile terminal platform (OMTP) standard interface or a cellular telecommunications industry association of the USA (CTIA) standard interface.
The pressure sensor 180A is configured to sense a pressure signal and may convert the pressure signal into an electrical signal.
The gyroscope sensor 180B may be configured to determine the motion attitude of the electronic device 100.
The barometric pressure sensor 180C is configured to measure air pressure.
The magnetic sensor 180D includes a Hall sensor. The electronic device 100 may detect the opening and closing of a flip cover by using the magnetic sensor 180D.
The acceleration sensor 180E can detect the magnitude of the acceleration of the electronic device 100 in various directions (generally along three axes). When the electronic device 100 is stationary, the magnitude and direction of gravity can be detected. The acceleration sensor may also be used to identify the posture of the electronic device, and is applied to functions such as switching between landscape and portrait modes and pedometers.
The distance sensor 180F is configured to measure distance. The electronic device 100 may measure distance by infrared or laser.
The proximity light sensor 180G may include, for example, a light-emitting diode (LED) and a light detector such as a photodiode. The electronic device 100 may use the proximity light sensor 180G to detect that the user is holding the electronic device 100 close to the ear during a call, so as to automatically turn off the screen and save power.
The ambient light sensor 180L is configured to sense the ambient light brightness. The electronic device 100 may adaptively adjust the brightness of the display screen 194 according to the perceived ambient light brightness. The ambient light sensor 180L may also be used to automatically adjust the white balance when taking photos, and may cooperate with the proximity light sensor 180G to detect whether the electronic device 100 is in a pocket, so as to prevent accidental touches.
The fingerprint sensor 180H is configured to collect fingerprints. The electronic device 100 may use the collected fingerprint characteristics to implement fingerprint unlocking, application lock access, fingerprint-based photographing, fingerprint-based call answering, and the like.
The temperature sensor 180J is configured to detect temperature.
The touch sensor 180K is also referred to as a "touch panel". The touch sensor 180K may be disposed on the display screen 194; the touch sensor 180K and the display screen 194 form a touchscreen, also referred to as a "touch screen". The touch sensor 180K is configured to detect a touch operation performed on or near it. The touch sensor may pass the detected touch operation to the application processor to determine the type of touch event. Visual output related to the touch operation may be provided through the display screen 194.
The bone conduction sensor 180M may acquire a vibration signal. In some embodiments, the bone conduction sensor 180M may acquire a vibration signal of the vibrating bone mass of the human vocal part. The bone conduction sensor 180M may also contact the human pulse and receive a blood pressure beating signal. In some embodiments, the bone conduction sensor 180M may also be disposed in a headset, forming a bone conduction headset. The audio module 170 may parse out a voice signal based on the vibration signal of the vocal-part vibrating bone mass acquired by the bone conduction sensor 180M, so as to implement a voice function. The application processor may parse heart rate information based on the blood pressure beating signal acquired by the bone conduction sensor 180M, so as to implement a heart rate detection function.
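The heart rate derivation mentioned above can be as simple as averaging the intervals between detected pulse peaks. The sketch below assumes that peak timestamps (in milliseconds) have already been extracted from the beating signal; the peak-detection step itself is implementation-specific and omitted.

```java
// Estimate heart rate (beats per minute) from timestamps of detected pulse peaks.
public final class HeartRateEstimator {

    // peakTimesMillis: ascending timestamps of pulse peaks, e.g. derived from the
    // bone conduction sensor's blood pressure beating signal.
    public static double estimateBpm(long[] peakTimesMillis) {
        if (peakTimesMillis.length < 2) {
            throw new IllegalArgumentException("need at least two peaks");
        }
        long totalIntervalMs = peakTimesMillis[peakTimesMillis.length - 1] - peakTimesMillis[0];
        double meanIntervalMs = (double) totalIntervalMs / (peakTimesMillis.length - 1);
        return 60_000.0 / meanIntervalMs; // beats per minute
    }

    public static void main(String[] args) {
        long[] peaks = {0, 820, 1650, 2470, 3300}; // peaks ~825 ms apart -> roughly 73 bpm
        System.out.println(estimateBpm(peaks));
    }
}
```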
The buttons 190 include a power button, a volume button, and the like. The buttons 190 may be mechanical buttons or touch buttons. The electronic device 100 may receive button input and generate key signal input related to user settings and function control of the electronic device 100.
The motor 191 may generate a vibration prompt. The motor 191 may be used for an incoming-call vibration prompt or for touch vibration feedback. For example, touch operations acting on different applications (such as photographing and audio playback) may correspond to different vibration feedback effects. Touch operations acting on different areas of the display screen 194 may also correspond to different vibration feedback effects of the motor 191. Different application scenarios (for example, time reminders, receiving messages, alarm clocks, and games) may also correspond to different vibration feedback effects. The touch vibration feedback effect may also be customized.
The indicator 192 may be an indicator light, which may be used to indicate the charging status and power changes, and may also be used to indicate messages, missed calls, notifications, and the like.
The SIM card interface 195 is configured to connect a SIM card. The SIM card can be brought into contact with or separated from the electronic device 100 by being inserted into or removed from the SIM card interface 195. The electronic device 100 may support 1 or N SIM card interfaces, where N is a positive integer greater than 1. The SIM card interface 195 may support a Nano-SIM card, a Micro-SIM card, a SIM card, and the like. Multiple cards can be inserted into the same SIM card interface 195 at the same time, and the types of the cards may be the same or different. The SIM card interface 195 is also compatible with different types of SIM cards and with external memory cards. The electronic device 100 interacts with the network through the SIM card to implement functions such as calls and data communication. In some embodiments, the electronic device 100 uses an eSIM, that is, an embedded SIM card. The eSIM card may be embedded in the electronic device 100 and cannot be separated from the electronic device 100.
The software system of the electronic device 100 may adopt a layered architecture, an event-driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture. The embodiments of the present application take the Android system with a layered architecture as an example to describe the software structure of the electronic device 100. The Android system is merely one example of a system for the electronic device 100 in the embodiments of the present application; the present application is also applicable to other types of operating systems, such as iOS and Windows, which is not limited here. The following only uses the Android system as an example of the operating system of the electronic device 100.
FIG. 3 is a block diagram of the software structure of the electronic device 100 according to an embodiment of the present application.
The layered architecture divides the software into several layers, and the layers communicate with each other through software interfaces. In some embodiments, the Android system is divided into four layers, which are, from top to bottom, the application layer, the application framework layer, the Android runtime and system libraries, and the kernel layer.
The application layer may include a series of application packages.
As shown in FIG. 3, the application packages may include applications such as Camera, Gallery, Calendar, Phone, Maps, Navigation, WLAN, Bluetooth, Music, Video, and Messaging.
The application framework layer provides an application programming interface (API) and a programming framework for the applications in the application layer. The application framework layer includes some predefined functions.
As shown in FIG. 3, the application framework layer may include a window manager, a content provider, a view system, a phone manager, a resource manager, a notification manager, and the like.
The window manager is used to manage window programs. The window manager can obtain the size of the display screen, determine whether there is a status bar, lock the screen, take screenshots, and so on.
The content provider is used to store and retrieve data and make the data accessible to applications. The data may include videos, images, audio, calls made and received, browsing history and bookmarks, the phone book, and the like.
The view system includes visual controls, such as controls for displaying text and controls for displaying pictures. The view system may be used to build applications. A display interface may consist of one or more views. For example, a display interface including a message notification icon may include a view for displaying text and a view for displaying pictures.
The phone manager is used to provide the communication functions of the electronic device 100, for example, management of the call status (including connecting, hanging up, and the like).
The resource manager provides various resources for applications, such as localized strings, icons, pictures, layout files, and video files.
The notification manager enables applications to display notification information in the status bar. It can be used to convey notification-type messages, which can disappear automatically after a short stay without user interaction. For example, the notification manager is used to notify download completion, provide message reminders, and the like. The notification manager may also present notifications in the status bar at the top of the system in the form of charts or scroll bar text, such as notifications of applications running in the background, or present notifications on the screen in the form of a dialog window. For example, text information is prompted in the status bar, a prompt tone is played, the electronic device vibrates, or the indicator light flashes.
The Android runtime includes core libraries and a virtual machine. The Android runtime is responsible for the scheduling and management of the Android system.
The core libraries include two parts: one part is the function libraries that the Java language needs to call, and the other part is the core libraries of Android.
The application layer and the application framework layer run in the virtual machine. The virtual machine executes the Java files of the application layer and the application framework layer as binary files. The virtual machine is used to perform functions such as object lifecycle management, stack management, thread management, security and exception management, and garbage collection.
The system libraries may include multiple functional modules, for example, an image processing module, a video processing module, a surface manager, media libraries, a three-dimensional graphics processing library (for example, OpenGL for Embedded Systems (OpenGL ES)), and a 2D graphics engine (for example, the Skia Graphics Library (SGL)).
The image processing module is used for processes such as encoding, decoding, and rendering images, so that applications can display images on the display screen. It can implement image format conversion, image file generation, and the like.
The video processing module is used for processes such as encoding, decoding, and rendering video frames, so that applications can display videos on the display screen. It can implement video format conversion, video file generation, and the like.
The surface manager is used to manage the display subsystem and provides the fusion of 2D and 3D layers for multiple applications.
The media libraries support playback and recording of multiple commonly used audio and video formats, as well as still image files. The media libraries can support multiple audio and video encoding formats, for example, Moving Picture Experts Group 4 (MPEG-4), Advanced Video Coding (MPEG-4 Part 10 AVC/H.264), MPEG Audio Layer 3 (MP3), Advanced Audio Coding (AAC), Adaptive Multi-Rate (AMR), Joint Photographic Experts Group (JPEG/JPG), and Portable Network Graphics (PNG).
The three-dimensional graphics processing library is used to implement three-dimensional graphics drawing, image rendering, compositing, layer processing, and the like.
The 2D graphics engine is a drawing engine for 2D drawing.
The kernel layer is the layer between hardware and software. The kernel layer contains at least a display driver, a camera driver, an audio driver, and a sensor driver.
The software system shown in FIG. 3 involves applications that use the photographing capability (such as Gallery and the file manager), the application framework layer that provides the WLAN service and the Bluetooth service, and the kernel and lower layers that provide the WLAN and Bluetooth capabilities and the basic communication protocols.
The following exemplarily describes the workflow of the software and hardware of the electronic device 100 in conjunction with a photo capture scenario.
When the touch sensor 180K receives a touch operation, a corresponding hardware interrupt is sent to the kernel layer. The kernel layer processes the touch operation into a raw input event (including information such as the touch coordinates and the timestamp of the touch operation). The raw input event is stored at the kernel layer. The application framework layer obtains the raw input event from the kernel layer and identifies the control corresponding to the input event. Taking the case in which the control corresponding to the touch operation is the camera application icon as an example, the camera application calls an interface of the application framework layer to start the camera service corresponding to the camera application; the camera service then starts the camera driver by calling the kernel layer, and captures a still image or video through the camera 193.
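At the application layer, the entry point of this workflow is simply the tap on the shutter control; the sketch below illustrates that step only. The two helper calls, captureStillImage() and requestBiometricsFromWearable(), are hypothetical placeholders for the camera-service and Bluetooth interactions described here, not APIs defined by this application.

```java
import android.view.View;

// Sketch of the application-layer side of the capture workflow: when the shutter
// button is tapped, start the image capture and, in parallel, request the
// biometric information from the connected wearable device.
public class ShutterButtonHandler implements View.OnClickListener {

    @Override
    public void onClick(View shutterButton) {
        long captureTime = System.currentTimeMillis();
        captureStillImage(captureTime);              // handled by the camera service / camera driver
        requestBiometricsFromWearable(captureTime);  // handled over the Bluetooth connection
    }

    private void captureStillImage(long captureTime) {
        // Placeholder: trigger the camera capture via the camera framework.
    }

    private void requestBiometricsFromWearable(long captureTime) {
        // Placeholder: send a "get sensor data / biometric information" request
        // to the wearable device 201 and wait for its response.
    }
}
```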
The camera service synchronously calls the kernel layer to start the Bluetooth driver, sends a request message to the connected wearable device through the Bluetooth antenna, and receives the sensor data or biometric information sent by the wearable device based on the request message. The camera service then calls the image processing module or the video processing module to write the biometric information into the image or into the image frames of the video, and generates a picture file or a video file carrying the biometric information.
In this application, picture files and video files may be referred to as multimedia files.
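One possible way to write the biometric information into a generated picture file on Android, shown only as an illustrative sketch rather than the method this application prescribes, is to store it as Exif metadata of the JPEG file; the JSON-like payload string and the choice of tag below are assumptions.

```java
import androidx.exifinterface.media.ExifInterface;
import java.io.IOException;

// Sketch: attach biometric information to a generated picture file as Exif metadata.
public final class BiometricTagger {

    public static void tagPicture(String jpegPath, String biometricJson) throws IOException {
        ExifInterface exif = new ExifInterface(jpegPath);
        // Reuse the user-comment tag to carry the biometric payload, e.g.
        // {"heartRate":72,"motion":"running","person":"user_A"}
        exif.setAttribute(ExifInterface.TAG_USER_COMMENT, biometricJson);
        exif.saveAttributes(); // rewrites the file's metadata in place
    }
}
```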
FIG. 4 exemplarily shows a schematic structural diagram of the wearable device 201 provided by the present application.
As shown in FIG. 4, the wearable device 201 may include a processor 102, a memory 103, a wireless communication processing module 104, a mobile communication processing module 105, a touch display screen 106, and a sensor module 107. These components may be connected through a bus, where:
The processor 102 may be used to read and execute computer-readable instructions. In a specific implementation, the processor 102 may mainly include a controller, an arithmetic unit, and registers. The controller is mainly responsible for instruction decoding and sends out control signals for the operations corresponding to the instructions. The arithmetic unit is mainly responsible for performing fixed-point or floating-point arithmetic operations, shift operations, and logical operations, and can also perform address operations and conversions. The registers are mainly responsible for saving register operands, intermediate operation results, and the like temporarily stored during instruction execution. In a specific implementation, the hardware architecture of the processor 102 may be an application-specific integrated circuit (ASIC) architecture, a MIPS architecture, an ARM architecture, an NP architecture, or the like.
In some embodiments, the processor 102 may be used to parse signals received by the wireless communication processing module 104 and the wired LAN communication processing module 116, such as a request message sent by the electronic device 100 to obtain sensor data or biometric information, or a request message sent by the electronic device 100 to stop obtaining sensor data or biometric information. The processor 102 may also be used to perform corresponding analysis and processing according to the information collected by the sensor module 107, for example analyzing the user's health state, motion state, emotional state, and so on.
存储器103与处理器102耦合,用于存储各种软件程序或多组指令中的至少一种。具体实现中,存储器103可包括高速随机存取的存储器,并且也可包括非易失性存储器,例如一个或多个磁盘存储设备、闪存设备或其他非易失性固态存储设备。存储器103可以存储操作系统,例如uCOS、VxWorks、RTLinux等嵌入式操作系统。存储器103还可以存储通信程序,该通信程序可用于与电子设备100,一个或多个服务器,或附加设备进行通信。Memory 103 is coupled to processor 102 for storing at least one of various software programs or sets of instructions. In specific implementations, memory 103 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic disk storage devices, flash memory devices, or other non-volatile solid-state storage devices. The memory 103 can store an operating system, such as an embedded operating system such as uCOS, VxWorks, RTLinux, and the like. Memory 103 may also store communication programs that may be used to communicate with electronic device 100, one or more servers, or additional devices.
The wireless communication processing module 104 may include one or more of a Bluetooth (BT) communication processing module 104A and a WLAN communication processing module 104B.
In some embodiments, one or more of the Bluetooth (BT) communication processing module and the WLAN communication processing module may listen to signals transmitted by other devices (such as the electronic device 100), for example probe requests and scan signals, and may send response signals, for example probe responses and scan responses, so that the other devices (such as the electronic device 100) can discover the wearable device 201, establish a wireless communication connection with it, and communicate with it through one or more wireless communication technologies among Bluetooth or WLAN.
In other embodiments, one or more of the Bluetooth (BT) communication processing module and the WLAN communication processing module may also transmit signals, for example broadcast Bluetooth signals or beacon signals, so that other devices (such as the electronic device 100) can discover the wearable device 201, establish a wireless communication connection with it, and communicate with it through one or more wireless communication technologies among Bluetooth or WLAN.
The mobile communication processing module 105 may provide wireless communication solutions applied on the electronic device 100, including 2G/3G/4G/5G, and is used to perform cellular communication and/or data communication. For example, the mobile communication processing module 105 may include a circuit-switched module ("CS" module) for performing cellular communication and a packet-switched module ("PS" module) for performing data communication. In this application, the mobile communication processing module 105 may communicate with other devices (such as a server) through the fourth-generation mobile communication technology (4G) or the fifth-generation mobile communication technology (5G).
The touch screen 106, also known as a touch panel, is a touch-sensitive liquid crystal display device that can receive input signals such as touches and can be used to display images, videos, and the like. The touch screen 106 may use a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, an active-matrix organic light-emitting diode (AMOLED) display, a flexible light-emitting diode (FLED) display, a quantum dot light-emitting diode (QLED) display, or the like.
The sensor module 107 may include a motion sensor 107A, a biosensor 107B, an environmental sensor 107C, and the like. Among them:
The motion sensor 107A is a component that converts changes in a non-electrical quantity (such as speed or pressure) into changes in an electrical quantity. It may include at least one of the following: an acceleration sensor, a gyroscope sensor, a geomagnetic sensor (also called an electronic compass sensor), or an atmospheric pressure sensor. The acceleration sensor can detect the magnitude of acceleration in various directions (generally along three axes, namely the x, y, and z axes). The gyroscope sensor can be used to determine a motion posture. The electronic compass sensor can be used to measure direction and to implement or assist navigation. The atmospheric pressure sensor is used to measure air pressure; in some embodiments, the altitude change of the current location can be calculated from small air pressure changes during movement, and the accuracy can be kept within 10 cm over the height of a 10-story building, so that activities ranging from rock climbing down to climbing stairs can be monitored.
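As a worked illustration of the altitude calculation mentioned above, barometric pressure is commonly converted to an altitude estimate with the international barometric formula (the same relation exposed by Android's SensorManager.getAltitude()); the sketch below applies it to two pressure readings. It is illustrative only and is not taken from the application.

```kotlin
import kotlin.math.pow

// Illustrative only: convert barometric pressure (hPa) to an altitude estimate
// (metres) using the international barometric formula. seaLevelHpa defaults to
// standard sea-level pressure.
fun pressureToAltitude(pressureHpa: Double, seaLevelHpa: Double = 1013.25): Double =
    44330.0 * (1.0 - (pressureHpa / seaLevelHpa).pow(1.0 / 5.255))

// Relative height change between two readings, which is what the wearable would
// track while the user climbs stairs or a rock wall.
fun heightChange(startHpa: Double, endHpa: Double): Double =
    pressureToAltitude(endHpa) - pressureToAltitude(startHpa)
```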
In this application, the motion sensor 107A can measure the user's activity, such as the number of running steps, speed, swimming laps, cycling distance, and exercise posture (for example playing ball, swimming, or running).
The biosensor 107B is an instrument that is sensitive to biological substances and converts their concentration into an electrical signal for detection. It is an analysis tool or system composed of an immobilized biologically sensitive material serving as a recognition element (including biologically active substances such as enzymes, antibodies, antigens, microorganisms, cells, tissues, and nucleic acids), an appropriate physicochemical transducer (such as an oxygen electrode, a photosensitive tube, a field-effect transistor, or a piezoelectric crystal), and a signal amplification device. A biosensor has the functions of both a receiver and a converter. The biosensor 107B may include at least one of the following: a blood glucose sensor, a blood pressure sensor, an electrocardiogram (ECG) sensor, an electromyography (EMG) sensor, a body temperature sensor, a brain wave sensor, and the like. The main functions implemented by these sensors include health and medical monitoring, entertainment, and so on.
The blood glucose sensor is used to measure blood glucose. The blood pressure sensor is used to measure blood pressure. The ECG sensor monitors electrophysiological signals such as an electrocardiogram, for example by using silver nanowires. The EMG sensor is used to monitor an electromyogram. The body temperature sensor is used to measure body temperature, and the brain wave sensor is used to monitor brain waves. In this application, various physiological indicators of the user (such as blood glucose, blood pressure, body temperature, and ECG) can be measured by the biosensor 107B, and the wearable device 201 can infer the user's health status from these physiological indicators.
The biosensor 107B may further include a heart rate sensor and a galvanic skin response sensor. Among them:
The heart rate sensor can track the user's exercise intensity and different exercise training modes by detecting the user's heart rate, and can derive health data such as the user's sleep cycle and sleep quality. When light is directed at the skin, the light reflected back through the skin tissue is received by a photosensitive sensor and converted into an electrical signal, which is then converted into a digital signal; the heart rate can then be calculated from the light absorbance of the blood.
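The paragraph above describes photoplethysmography in prose; a minimal sketch of the last step, turning a buffer of reflected-light samples into a beats-per-minute estimate by counting peaks, is shown below. The peak-detection heuristic and the sampling-rate parameter are assumptions for illustration, not values from the application.

```kotlin
// Illustrative sketch: estimate heart rate (beats per minute) from a buffer of
// reflected-light (PPG) samples by counting local maxima above a simple
// adaptive threshold.
fun estimateHeartRateBpm(samples: DoubleArray, sampleRateHz: Double): Double {
    if (samples.size < 3 || sampleRateHz <= 0.0) return 0.0
    val mean = samples.average()
    val peakLevel = samples.maxOrNull() ?: return 0.0
    val threshold = mean + (peakLevel - mean) * 0.5  // assumed heuristic
    var peaks = 0
    for (i in 1 until samples.size - 1) {
        val localMax = samples[i] > samples[i - 1] && samples[i] >= samples[i + 1]
        if (localMax && samples[i] > threshold) peaks++
    }
    val minutes = samples.size / sampleRateHz / 60.0
    return peaks / minutes
}
```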
The galvanic skin response sensor is used to measure the user's arousal level, which is closely linked to the user's attention and engagement, and is usually provided on devices that can monitor sweat levels. The skin resistance and conductance of the human body change with the activity of the skin's sweat glands, and these measurable electrical changes of the skin are called electrodermal activity (EDA).
In the embodiments of this application, the wearable device 201 measures psychologically induced sweat gland activity through the galvanic skin response sensor to determine the user's psychological activity, such as the user's mood index, for example feeling happy, nervous or fearful, or under pressure.
The environmental sensor 107C may include at least one of the following: an air temperature and humidity sensor, a rainfall sensor, a light sensor, a wind speed and direction sensor, a particulate matter sensor, and the like. The environmental sensor 107C can detect air quality, for example the degree of haze, the indoor formaldehyde concentration, and PM2.5. In this application, weather changes, air humidity, air quality, and the like can be measured by the environmental sensor 107C.
It can be understood that the structure shown in FIG. 4 does not constitute a specific limitation on the wearable device 201. Optionally, in other embodiments of this application, the wearable device 201 may include more or fewer components than shown, or combine some components, or split some components, or have a different component arrangement. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
In some specific scenarios, the user wears the wearable device, and the electronic device establishes a connection with the wearable device. The user triggers the electronic device to start the camera to take a picture; the electronic device obtains the picture information and also obtains biometric information. The biometric information corresponds to sensor data detected by the wearable device through at least one sensor. The sensor data involved in this application includes, but is not limited to, data detected by at least one of the motion sensor 107A, the biosensor 107B, and the environmental sensor 107C mentioned above.
Optionally, the biometric information may be sent to the electronic device after the wearable device performs analysis and processing based on the sensor data; alternatively, the biometric information may be obtained by the electronic device after it receives the sensor data sent by the wearable device and performs analysis and processing based on that sensor data.
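Whichever side performs the analysis, the derivation step itself might look roughly like the sketch below, which maps raw readings to the kind of descriptions this application uses as examples ("heart rate normal", "running"); the thresholds and type names are assumptions made for illustration.

```kotlin
// Illustrative only: turn raw sensor readings into human-readable biometric
// information. The thresholds and wording are assumed for the sketch.
data class SensorReadings(
    val heartRateBpm: Int? = null,
    val systolicMmHg: Int? = null,
    val stepsPerMinute: Int? = null
)

fun deriveBiometricInfo(r: SensorReadings): List<String> {
    val info = mutableListOf<String>()
    r.heartRateBpm?.let {
        info += if (it in 60..100) "heart rate normal" else "heart rate $it bpm"
    }
    r.systolicMmHg?.let {
        info += if (it < 130) "blood pressure normal" else "blood pressure elevated"
    }
    r.stepsPerMinute?.let {
        if (it > 120) info += "exercise state: running"
    }
    return info
}
```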
The electronic device associates the biometric information with the picture information, or associates the biometric information with the person in the captured picture to whom that biometric information corresponds. The electronic device generates a picture file carrying the biometric information, and in response to the user viewing the picture file, the electronic device may display the biometric information or part of it.
In the following, a smartphone is taken as an example of the above electronic device to exemplarily describe how the photographing method provided by this application is presented on the display interface of the smartphone.
First, starting the annotation mode is introduced.
Manner 1: start the camera of the electronic device and select the annotation mode.
The camera application is application software of the electronic device used for taking pictures. When the user wants to shoot an image or a video, the camera application can be started, and the electronic device calls at least one camera for shooting. FIG. 5a shows an exemplary user interface on the electronic device for displaying a list of applications. FIG. 5a includes a status bar 201 and a display interface 202. The status bar 201 may include one or more signal strength indicators 203 of mobile communication signals (also called cellular signals), one or more signal strength indicators 207 of wireless fidelity (Wi-Fi) signals, a Bluetooth indicator 208, a battery status indicator 209, and a time indicator 211. When the Bluetooth module of the electronic device is on (that is, the electronic device supplies power to the Bluetooth module), the Bluetooth indicator 208 is displayed on the display interface of the electronic device.
The display interface 202 displays a plurality of application icons, including the application icon of the camera 205. When the electronic device detects a user operation acting on the application icon of the camera 205, the electronic device displays the application interface provided by the camera application.
Referring to FIG. 5b, FIG. 5b shows a possible user interface provided by the camera application. The application interface of the camera 205 is shown in FIG. 5b and may include: a display area 30, a flash icon 301, a setting icon 302, a mode selection area 303, a gallery icon 304, a shooting icon 305, and a switching icon 306.
The display area 30 is a preview display interface for the image captured by the camera currently used by the electronic device. The camera currently used by the electronic device may be the default camera set by the camera application, or it may be the camera that was in use the last time the camera application was closed.
The flash icon 301 may be used to indicate the working status of the flash.
The setting icon 302: when a user operation acting on the setting icon 302 is detected, in response to the operation the electronic device may display other shortcut functions, for example adjusting the resolution, timed shooting (also called delayed shooting, which controls when the photo is actually taken), silent shooting, voice-controlled shooting, and smile capture (when the camera detects a smiling face, it automatically focuses on the smile).
The mode selection area 303 is used to provide different shooting modes. Depending on the shooting mode selected by the user, the camera enabled by the electronic device and the shooting parameters differ. The mode selection area may include an annotation mode 303A, a night scene mode 303B, a photographing mode 303C, a video recording mode 303D, and more 303E. In FIG. 5b, the icon of the photographing mode 303C is marked to indicate that the current mode is the photographing mode. Among them:
In the annotation mode 303A, when the electronic device detects a user operation for shooting a picture/video, the electronic device obtains the image information currently captured by the camera and obtains biometric information. The biometric information corresponds to the sensor data detected by the wearable device through its sensors (for example heart rate data detected by the heart rate sensor, or blood pressure data detected by the blood pressure sensor). The electronic device fuses and encodes the biometric information with the captured image information to generate a picture/video file carrying the biometric information. For example, the biometric information may be information such as the user's heart rate data and blood pressure data, or it may be information such as "heart rate normal" and "blood pressure normal".
Optionally, in some embodiments, if Bluetooth is not enabled on the electronic device, when the electronic device detects a user operation 307 acting on the annotation mode 303A, in response to the user operation 307 the electronic device automatically enables Bluetooth, automatically searches for connectable Bluetooth devices, and establishes a connection according to the user's selection, or automatically establishes a connection with a Bluetooth device with which a connection was previously established.
Optionally, in some embodiments, the electronic device may simultaneously receive sensor data or biometric information from the wearable devices of two users, fuse and encode the two pieces of biometric information with the captured image information, and associate the biometric information of the two users in one picture/video file.
When a user operation acting on the annotation mode 303A is detected, in response to the operation the icon of the photographing mode 303C in the mode selection area 303 of the electronic device is no longer marked, and the annotation mode 303A is marked instead.
The night scene mode 303B can improve the rendering of detail in bright and dark areas, control noise, and present more picture detail. The photographing mode 303C suits most shooting scenes and can automatically adjust shooting parameters according to the current environment. The video recording mode 303D is used to shoot a video. More 303E: when a user operation acting on more 303E is detected, in response to the operation the electronic device may display other selectable modes, for example a panorama mode (automatic stitching, in which the electronic device stitches multiple continuously shot photos into one photo to widen the field of view) and an HDR mode (automatic continuous shooting of three photos at underexposure, normal exposure, and overexposure, with the best parts selected and combined into one photo), and so on.
When a user operation acting on the application icon of any mode in the mode selection area 303 (for example the annotation mode 303A, the night scene mode 303B, the photographing mode 303C, the video recording mode 303D, the panorama mode, the HDR mode, and so on) is detected, in response to the operation the photographing device/electronic device can enter the corresponding mode. Correspondingly, the image displayed in the display area 30 is the image processed in the current mode.
The mode icons in the mode selection area 303 are not limited to virtual icons; a mode may also be selected through a physical button deployed on the photographing device/electronic device, so that the photographing device enters the corresponding mode.
The gallery icon 304: when a user operation acting on the gallery icon 304 is detected, in response to the operation the electronic device may enter the gallery of the electronic device, and the gallery may include photos and videos that have been taken. The gallery icon 304 may be displayed in different forms; for example, after the electronic device saves the image currently captured by the camera, a thumbnail of that image is displayed in the gallery icon 304.
The shooting icon 305: when a user operation (for example a touch operation, a voice operation, or a gesture operation) acting on the shooting icon 305 is detected, in response to the operation the electronic device obtains the image currently displayed in the display area 30 and saves it in the gallery. The gallery can be entered through a user operation (for example a touch operation or a gesture operation) on the gallery icon 304.
The switching icon 306 can be used to switch between the front camera and the rear camera. The shooting direction of the front camera is the same as the display direction of the screen of the electronic device used by the user, and the shooting direction of the rear camera is opposite to the display direction of that screen. If the display area 30 currently displays the image captured by the rear camera, when a user operation acting on the switching icon 306 is detected, in response to the operation the display area 30 displays the image captured by the front camera. If the display area 30 currently displays the image captured by the front camera, when a user operation acting on the switching icon 306 is detected, in response to the operation the display area 30 displays the image captured by the rear camera.
As shown in FIG. 5c, FIG. 5c exemplarily shows the application interface corresponding to the annotation mode 303A. The icon of the annotation mode 303A (smart annotation) in the mode selection area 303 is marked, indicating that the current mode is the annotation mode.
Optionally, as shown in FIG. 5c, the display area 30 may also display prompt information used to inform the user whether the electronic device is currently connected to a wearable device. For example, the text "Connected to a wearable device" displayed in the prompt area 308 indicates that the electronic device is currently connected to a wearable device, which may be an earphone, a watch, a bracelet, glasses, and so on. As another example, the text "Not connected to a wearable device" displayed in the prompt area 308 informs the user that the electronic device is currently not connected to a wearable device. The connection may be a short-range connection such as a Bluetooth connection or a Wi-Fi connection.
Optionally, in some embodiments, the content in the prompt area 308 may prompt the user operating the electronic device to keep the user wearing the wearable device within the shooting area. For example, the text "Please confirm that the user wearing the wearable device is within the shooting range of the lens" is displayed in the prompt area 308.
In this application, user operations include but are not limited to operations such as taps, shortcut keys, gestures, hovering touch, and voice commands.
Optionally, referring to FIG. 5d, FIG. 5d shows yet another possible user interface provided by the camera application. The application interface of the camera 205 is shown in FIG. 5d. The difference from FIG. 5b is that the application interface 31 includes an annotation icon 310. When the electronic device detects a user operation acting on the annotation icon 310, in response to the operation the electronic device activates the annotation function. In this way, the annotation function can be enabled in any shooting mode in the mode selection area 303.
When the electronic device is in the photographing mode 303C and detects a user operation acting on the annotation icon 310, in response to the operation the electronic device activates the annotation function in the photographing mode 303C. When the electronic device is in the night scene mode 303B and detects a user operation acting on the annotation icon 310, in response to the operation the electronic device activates the annotation function in the night scene mode 303B.
As shown in FIG. 5e, FIG. 5e exemplarily shows the application interface after the annotation function is activated. The annotation icon 310 is marked, indicating that the annotation function is currently active.
The embodiments of this application are not limited to starting the camera through the application icon of the camera 205. For example, the camera may also be started through a text message, the shooting function of social software, a video call, or the like, and the annotation function may then be enabled. With the annotation function enabled, the electronic device obtains pictures/videos carrying biometric information and shares the pictures/videos carrying biometric information through text messages, social software, video calls, and so on.
Manner 2: start the camera through a first application icon, with the shooting mode of the camera defaulting to the annotation mode.
"Smart annotation" is only an example; other names may also be used. As shown in FIG. 6a, the display interface 202 of FIG. 6a displays a plurality of application icons, including the application icon of the smart annotation 212. If the user wants to start the annotation mode, the application icon of the smart annotation 212 is triggered through a user operation. In response to the user operation, the electronic device displays the application interface of the smart annotation 212.
In some embodiments, if Bluetooth is not enabled on the electronic device, when the electronic device detects a user operation acting on the smart annotation 212, in response to the user operation the electronic device automatically enables Bluetooth, automatically searches for connectable Bluetooth devices, and establishes a connection according to the user's selection, or automatically establishes a connection with a Bluetooth device with which a connection was previously established.
Referring to FIG. 6b, FIG. 6b exemplarily shows a possible application interface provided by the smart annotation 212. The application interface may include: a display area 40, a flash icon 401, a setting icon 402, a gallery icon 403, a shooting icon 404, a switching icon 405, and a prompt area 406.
It should be noted that, based on the same inventive concept, the flash icon 401, the setting icon 402, the gallery icon 403, the shooting icon 404, the switching icon 405, and the prompt area 406 provided by the embodiment shown in FIG. 6b solve problems on principles similar to those of the embodiment in FIG. 5b. Therefore, for the implementation of the flash icon 401, the setting icon 402, the gallery icon 403, the shooting icon 404, the switching icon 405, and the prompt area 406 in FIG. 6b, reference may be made to the corresponding descriptions of the flash icon 301, the setting icon 302, the gallery icon 304, the shooting icon 305, the switching icon 306, and the prompt area 308 in FIG. 5b, and details are not repeated here.
Optionally, in some embodiments, the annotation mode may also be started through a wearable device APP in the electronic device, for example the application icon of the smart wearable 214 in the display interface 202 of FIG. 6a. The smart wearable 214 is an application for managing and interacting with one wearable device, or with one or more types of wearable devices, covering function management, permission management, and so on. The application interface of the smart wearable 214 may include function controls of multiple wearable devices. The electronic device is paired and connected with the wearable device, and the user selects the user interface of the corresponding wearable device. In that user interface, the electronic device detects a user operation for starting the annotation mode and thereby starts the annotation mode, and the electronic device can obtain the biometric information corresponding to the sensor data of that wearable device.
The above two manners describe different ways for the electronic device to enable the annotation mode and the corresponding display interfaces. Both FIG. 5c and FIG. 6b exemplarily show the application interface in the annotation mode. Optionally, FIG. 7a also provides a possible application interface.
As shown in FIG. 7a, compared with FIG. 6b, the display area 40 may further include a preview icon 407, which is used to trigger the electronic device to obtain biometric information. When the electronic device detects a user operation on the preview icon 407, the electronic device sends a request message to the wearable device. After receiving the request message, the wearable device sends sensor data or biometric information to the electronic device, and the electronic device displays at least part of the biometric information on the display screen in real time based on the obtained sensor data or biometric information. Specifically, after receiving the request message, the wearable device sends sensor data to the electronic device, and the electronic device determines the biometric information based on the received sensor data and displays at least part of it on the display screen in real time; alternatively, after receiving the request message, the wearable device determines the biometric information based on the sensor data and sends the biometric information to the electronic device, and the electronic device displays at least part of it on the display screen in real time based on the received biometric information.
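The application does not define the format of the request message or of the data returned by the wearable device. The sketch below is a hypothetical message pair covering the two alternatives just described (raw sensor data versus already-derived biometric information); all names and fields are illustrative assumptions, not a disclosed protocol.

```kotlin
// Hypothetical message pair for the exchange described above.
data class BiometricRequest(
    val requestedTypes: Set<String>,   // e.g. "heartRate", "bloodPressure"
    val wantRawSensorData: Boolean     // true: return raw samples; false: return derived info
)

sealed class BiometricResponse {
    data class RawSensorData(val samples: Map<String, List<Double>>) : BiometricResponse()
    data class DerivedInfo(val descriptions: List<String>) : BiometricResponse()
}

// Wearable-side handling: either forward the raw samples or analyse them first,
// matching the two alternatives described in the paragraph above.
fun handleRequest(
    req: BiometricRequest,
    readings: Map<String, List<Double>>
): BiometricResponse {
    return if (req.wantRawSensorData) {
        BiometricResponse.RawSensorData(readings.filterKeys { it in req.requestedTypes })
    } else {
        val hr = readings["heartRate"].orEmpty().average()  // NaN if no samples
        val label = when {
            hr.isNaN() -> "heart rate unavailable"
            hr in 60.0..100.0 -> "heart rate normal"
            else -> "heart rate ${hr.toInt()} bpm"
        }
        BiometricResponse.DerivedInfo(listOf(label))
    }
}
```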
Optionally, in some embodiments, when the electronic device detects a user operation on the smart annotation 212 in FIG. 6a, the electronic device displays the interface shown in FIG. 7a and sends a request message to the wearable device; after receiving the request message, the wearable device sends sensor data or biometric information to the electronic device. The preview icon 407 is then used to trigger the electronic device to display at least part of the biometric information on the display screen in real time based on the obtained sensor data or biometric information.
For example, as shown in FIG. 7b, when the electronic device detects a user operation on the preview icon 407, the electronic device displays the interface shown in FIG. 7b. The display area 40 includes a preview area 408, which displays the health state of the user Little A, for example that Little A's heart rate is normal, and also displays Little A's exercise state, for example that Little A is running. The preview area 408 may also include information such as Little A's blood pressure and blood glucose, whether the exercise posture is correct, and emotion (for example happy, nervous, or sad).
Optionally, in a possible embodiment, the electronic device may directly display the application interface of FIG. 7b in response to a user operation, without being triggered through the preview icon 407 in FIG. 7a. When the electronic device detects a user operation that starts the annotation mode, the electronic device sends a request message to the wearable device; after receiving the request message, the wearable device sends sensor data or biometric information to the electronic device, and the electronic device displays at least part of the biometric information on the display screen in real time based on the obtained sensor data or biometric information, as shown in FIG. 7b.
The above embodiments provide possible application interfaces of the smart annotation: the electronic device displays the application interface of the annotation mode 303A in response to the user operation 307 (as shown in FIG. 5c); or the electronic device displays the application interface of the smart annotation 212 in response to a user operation (as shown in FIG. 6b); or it displays the application interface shown in FIG. 7a or FIG. 7b; and so on.
Optionally, in the annotation mode or before entering the annotation mode, the user can configure the biometric information on the electronic device side and choose to obtain specific biometric information as required.
The user taps the setting icon in the above application interface, and the electronic device displays the setting interface. Exemplarily, in the above application interface, the electronic device may display the setting interface 60 of the annotation mode shown in FIG. 8a in response to a user operation on the setting icon 302 (or the setting icon 402).
As shown in FIG. 8a, the setting interface 60 includes functions such as setting the resolution, timed shooting, silent shooting, voice-controlled shooting, and smile capture. The setting interface 60 also includes an area bar for the wearable device 601, through which the wearable device can be specifically configured.
In response to a user operation on the option 602, the electronic device displays the wearable device interface 70 shown in FIG. 8b. The interface 70 includes my devices 701 and other devices 702. My devices 701 includes the device the electronic device is currently connected to and devices it has been connected to before; for example, Little A's watch is the device the electronic device is currently connected to, while Little B's watch indicates that the electronic device and Little B's watch were once connected but are not connected at present. Other devices 702 refers to connectable devices found by the electronic device through Bluetooth search that have not been connected before, for example Little A's earphones and Little C's watch.
The user can tap the area 703 to enter the specific configuration interface of Little A's watch; in response to the user operation on the area 703, the electronic device may enter the specific configuration interface 80 of Little A's watch shown in FIG. 8c. As shown in FIG. 8c, the configuration interface 80 includes a plurality of configuration options, such as heart rate, blood pressure, blood glucose, exercise state, and emotional state. The icon 801 can be described as a switch: when the circle in the icon 801 is on the left, it indicates off; when the circle is on the right, it indicates on; the off state and the on state can be switched by a tap operation. For example, as shown in FIG. 8c, the circle of the icon 801 in the heart rate row is on the left, indicating that the biometric information of the picture/video currently shot by the electronic device does not include heart rate information; when the electronic device detects a tap operation on the icon 801, the circle of the icon 801 moves to the right, indicating that the biometric information of the picture/video currently shot by the electronic device includes heart rate information.
The configuration interface 80 also includes an icon 802 for associating a facial image. In response to a user operation on the icon 802, the electronic device obtains the facial image information of the user wearing Little A's watch and associates Little A's watch with that facial image information. The facial image information may be an image uploaded by the user: the user taps the icon 802 for associating a facial image and uploads Little A's facial image; alternatively, after the icon 802 is tapped, Little A's facial image may be captured through the camera of the electronic device, and Little A's facial image is associated with Little A's watch.
As shown in FIG. 8d, after the electronic device successfully associates Little A's watch with the facial image, the facial image 7031 is displayed in the area 703. The facial image 7031 indicates the facial image of the user wearing Little A's watch. The electronic device can recognize Little A in a picture through facial recognition technology.
It can be understood that when Little A's watch is connected to the electronic device again, the electronic device can automatically display the facial image 7031 associated with Little A's watch in FIG. 8d; that is, after the electronic device detects the user operation on the option 602 in FIG. 8a, it directly displays the interface shown in FIG. 8d. Optionally, before the electronic device and Little A's watch establish a connection again, when the electronic device finds Little A's watch in a search, the electronic device may automatically display the facial image associated with Little A's watch.
When the user taps the area 703 in FIG. 8d, the electronic device displays the interface 80 shown in FIG. 8c in response to the tap operation. At this point, the icon 802 can be used to change the facial image associated with Little A's watch. In response to a user operation on the icon 802, the electronic device associates Little A's watch with the most recently received facial image information.
Optionally, in some embodiments, the icon 802 may also be used to add a facial image associated with Little A's watch. In response to a user operation on the icon 802, the electronic device associates Little A's watch with the received facial image information without affecting the previously added associated facial image; Little A's watch can be associated with multiple pieces of facial image information.
The above embodiments describe the process of configuring the annotation mode on the electronic device and the corresponding user interfaces. In another implementation of this application, the annotation mode may also be configured through the wearable device, that is, the wearable device that has established a connection with the electronic device.
As shown in FIG. 9a, FIG. 9a exemplarily shows the main interface 90 of the wearable device, which displays a plurality of application icons, including the application icon of the camera 901. When the wearable device detects a user operation acting on the application icon of the camera 901, the wearable device displays the application interface provided by the camera application.
Referring to FIG. 9b, FIG. 9b shows an application interface 1000 provided by the camera application. The application interface 1000 of the camera 901 is shown in FIG. 9b and may include: an annotation icon 1001, a gallery icon 1002, a shooting icon 1003, and a switching icon 1004. Based on the same inventive concept, the gallery icon 1002, the shooting icon 1003, and the switching icon 1004 provided by the embodiment shown in FIG. 9b solve problems on principles similar to those of the embodiment in FIG. 5b. Therefore, for the implementation of the gallery icon 1002, the shooting icon 1003, and the switching icon 1004 in FIG. 9b, reference may be made to the corresponding descriptions of the gallery icon 304, the shooting icon 305, and the switching icon 306 in FIG. 5b, and details are not repeated here.
The display content of the display area of the application interface 1000 is the image captured by the camera currently used by the electronic device. The camera currently used by the electronic device may be the default camera set by the camera application, or it may be the camera that was in use the last time the camera application was closed.
The annotation icon 1001 indicates that the user can start the annotation mode and configure the annotation mode. When the wearable device detects a user operation on the annotation icon 1001, it displays the configuration interface shown in FIG. 9c. FIG. 9c is used to configure the annotation mode; the configuration options include heart rate, blood pressure, blood glucose, exercise state, emotional state, and so on. The icon 1005 indicates whether the electronic device obtains heart rate information. As shown in FIG. 9c, the circle in the icon 1005 is on the left, indicating that the biometric information of the picture/video currently shot by the electronic device does not include heart rate information; when the wearable device detects a tap operation on the icon 1005, the circle in the icon 1005 moves to the right, indicating that the biometric information of the picture/video currently shot by the electronic device includes heart rate information. The icon 1007 indicates whether the electronic device obtains blood pressure information; in FIG. 9c, the circle in the icon 1007 is on the right, indicating that the biometric information of the picture/video currently shot by the electronic device includes blood pressure information.
After the user completes the configuration, the user taps the confirm icon 1006. The wearable device detects the user operation on the confirm icon 1006, completes the configuration of the annotation mode, and enables the annotation mode. After the annotation mode is enabled, the user can trigger the shooting icon 1003 on the application interface 1000 to shoot a picture or a video. The wearable device sends sensor data to the electronic device. In some embodiments, if the user triggers shooting on the wearable device side, the type of data sent is governed by the configuration on the wearable device side; if the user triggers shooting on the electronic device side, the type of data sent is governed by the configuration on the electronic device side. Taking the configuration of FIG. 9c as an example, the wearable device sends the blood pressure data detected by the blood pressure sensor to the electronic device.
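A minimal sketch of how such a configuration could gate which sensor data is sent is given below; the flag names echo the options shown in FIG. 8c and FIG. 9c, while the data structures and filtering logic are assumptions for illustration.

```kotlin
// Hypothetical annotation-mode configuration; the field names are assumptions.
data class AnnotationConfig(
    val heartRate: Boolean = false,
    val bloodPressure: Boolean = false,
    val bloodGlucose: Boolean = false,
    val exerciseState: Boolean = false,
    val emotionalState: Boolean = false
)

// Pick which side's configuration governs (wearable-triggered vs. device-triggered
// shooting, as described above), then filter the collected sensor data before sending.
fun selectDataToSend(
    triggeredOnWearable: Boolean,
    wearableConfig: AnnotationConfig,
    deviceConfig: AnnotationConfig,
    collected: Map<String, List<Double>>   // e.g. "bloodPressure" -> samples
): Map<String, List<Double>> {
    val cfg = if (triggeredOnWearable) wearableConfig else deviceConfig
    val enabled = mutableSetOf<String>()
    if (cfg.heartRate) enabled += "heartRate"
    if (cfg.bloodPressure) enabled += "bloodPressure"
    if (cfg.bloodGlucose) enabled += "bloodGlucose"
    if (cfg.exerciseState) enabled += "exerciseState"
    if (cfg.emotionalState) enabled += "emotionalState"
    return collected.filterKeys { it in enabled }
}
```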
Optionally, in some embodiments, the wearable device detects sensor data through one or more sensors and, based on the configuration on the wearable device side, sends biometric information to the electronic device. Taking the configuration of FIG. 9c as an example, the wearable device sends biometric information corresponding to the blood pressure data to the electronic device, including blood pressure information (for example, "blood pressure normal").
Optionally, in some implementations, when the wearable device detects a tap operation on the annotation icon 1001, it directly starts the annotation mode, and the wearable device sends sensor data or biometric information to the electronic device according to the configuration information; when the wearable device detects a long-press or double-tap operation on the annotation icon 1001, it displays the configuration interface shown in FIG. 9c.
The above embodiments provide possible setting interfaces of the annotation mode: the electronic device displays the setting interface of the annotation mode (as shown in FIG. 8a) in response to a user operation on the setting icon 302 (or the setting icon 402); or the wearable device displays the setting interface of the annotation mode (as shown in FIG. 9c) in response to a user operation on the annotation icon 1001; and so on.
After the configuration is completed, the electronic device takes pictures in the annotation mode. In the interface shown in FIG. 7b, the content displayed in the preview area 408 changes with the configuration. For example, if the user chooses to enable heart rate and exercise state in the configuration interface 80 shown in FIG. 8c, the content displayed in the preview area 408 may include heart rate information (such as heart rate data or "heart rate normal") and exercise state information (such as running).
When the electronic device detects a user operation that triggers taking a picture, the electronic device fuses and encodes the currently captured picture information with the biometric information to generate a picture carrying the biometric information, where the biometric information corresponds to sensor data collected by the sensors of the wearable device, such as the user's heart rate, blood pressure, blood glucose, exercise state, and emotional state.
Unlike taking a picture, shooting a video is a process that lasts for a period of time. When the electronic device detects a user operation that triggers shooting a video, this application provides an exemplary user interface. As shown in FIG. 10, FIG. 10 shows a user interface 45 for shooting a video with the electronic device, where the icon 411 in the user interface 45 indicates that the current shooting mode is video shooting. The display area of the user interface 45 may include a timing area 410, a preview area 408, and an annotation icon 412. The timing area 410 times the length of the video being shot in real time. The preview area 408 displays information such as the user's health state, exercise state, and emotional state in real time; for example, it displays the health state of the user Little A (the user of the wearable device connected to the electronic device), whose heart rate is normal, and also displays Little A's exercise state, for example that Little A is running. The content displayed in the preview area 408 may be displayed according to the user's configuration on the electronic device side or the wearable device side, and may be updated in real time as the electronic device obtains biometric information.
The annotation icon 412 is used to control turning the annotation mode on and off; the annotation icon 412 may indicate the on state and the off state through display brightness, color, or the like.
For example, while the user is shooting a video, when the annotation icon 412 is in the bright state, it indicates that the current shooting mode is the annotation mode: the user interface 45 displays the preview area 408, and the electronic device fuses and encodes the obtained biometric information with the captured image frames. When the annotation icon 412 is in the dark state, it indicates that the current shooting mode is not the annotation mode (for example, a normal mode): the user interface 45 does not display the preview area 408, and the electronic device does not fuse and encode the obtained biometric information with the captured image frames, or the electronic device no longer obtains biometric information. That is, during video recording the user can control turning the annotation mode on and off through user operations on the annotation icon 412. With the annotation mode on, the electronic device fuses and encodes the continuously captured picture information with the biometric information to generate image frames carrying the biometric information, and a series of such image frames is combined into a continuous video.
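The application does not state how the biometric information is attached to individual image frames during fusion encoding. One hedged possibility, sketched below, is to record a timestamped biometric track alongside the captured frames while the annotation mode is on, to be combined with the video afterwards; all names are illustrative assumptions.

```kotlin
// Illustrative only: keep a timestamped biometric track alongside the recorded
// frames while the annotation mode is on. How that track is finally multiplexed
// into the video file is not specified by the application and is left abstract.
data class BiometricSample(val timestampMs: Long, val descriptions: List<String>)

class BiometricTrackRecorder {
    private val samples = mutableListOf<BiometricSample>()

    // Toggled by the annotation icon 412 during recording.
    var annotationOn: Boolean = true

    fun onFrameCaptured(frameTimestampMs: Long, latestInfo: List<String>) {
        if (annotationOn && latestInfo.isNotEmpty()) {
            samples += BiometricSample(frameTimestampMs, latestInfo)
        }
    }

    fun finish(): List<BiometricSample> = samples.toList()
}
```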
可选的,在一些实施例中,标注图标412可以用于控制预览区408的显示和隐藏,其中,标注图标412可以通过显示亮度或颜色等形式指示显示状态和隐藏状态。在这种实施例中, 不管预览区408是显示或隐藏,电子设备都可以获取生物特征信息,并基于获取的生物特征信息,与拍摄的图像帧进行融合编码。Optionally, in some embodiments, the callout icon 412 may be used to control the display and hiding of the preview area 408, wherein the callout icon 412 may indicate the display state and the hidden state in the form of display brightness or color. In this embodiment, regardless of whether the preview area 408 is displayed or hidden, the electronic device can acquire the biometric information, and based on the acquired biometric information, perform fusion coding with the captured image frame.
In the above annotation mode, the electronic device takes pictures/records videos and generates pictures/videos carrying biometric information. The captured pictures/videos can be viewed in the gallery. When the electronic device detects a user operation on the gallery icon, the electronic device displays the application interface of the gallery.
As shown in FIG. 11a, FIG. 11a exemplarily shows a gallery interface 1100, and the gallery interface 1100 displays a plurality of pictures. The user taps a picture 1101 taken in the annotation mode to enter a picture viewing interface. As shown in FIG. 11b, FIG. 11b shows a picture viewing interface 1200, which includes a display area 1201, a share icon 1202, an edit icon 1203, a delete icon 1204, a more icon 1205, and an annotation area 1206.
The display area 1201 displays the picture 1101. The share icon 1202 can be used to trigger a picture sharing function to share the picture 1101 with other devices or applications. The edit icon 1203 can be used to trigger editing functions for the picture 1101, such as rotating, cropping, adding filters, and blurring. The delete icon 1204 can be used to trigger deletion of the picture 1101. The more icon 1205 can be used to trigger the opening of more functions related to the picture 1101.
Optionally, in some embodiments, the display areas of the share icon 1202, the edit icon 1203, the delete icon 1204, and the more icon 1205 may be collectively referred to as a menu area. The menu area is optional and can be hidden in the picture viewing interface 1200. For example, the user may tap the display area 1201 once to hide the menu area and tap the display area 1201 again to show it, which is not limited in this application.
The annotation area 1206 displays the biometric information of the picture 1101. The text displayed in the annotation area 1206 includes, but is not limited to, descriptions related to heart rate, blood pressure, blood glucose, exercise state, emotional state, and the like. For example, the text displayed in the annotation area 1206 may be "User A is running", or "User A's heart rate is normal, the exercise state is running, and the emotional state is happy". The text displayed in the annotation area 1206 may also include an evaluation of the user's exercise posture, for example, whether the posture is standard; it may also include suggestions for the user, for example, when the user's mood index is low, a message hoping the user will be a little happier every day; it may further include recommendations pushed according to the user's state, for example, when the user's running posture is not standard, the electronic device searches the network for pictures or videos of the correct running posture and recommends that the user watch related learning videos or picture materials. This application does not limit this. It can be understood that the annotation area 1206 can be displayed anywhere in the picture viewing interface 1200 and in any form.
The user can view the detailed information of the picture 1101 through the more icon 1205. As shown in FIG. 11c, when the electronic device detects a user operation on the more icon 1205, a user interface 1300 is displayed. The user interface 1300 in FIG. 11c exemplarily displays the detailed information of user A, including, for example, information such as normal blood pressure, normal blood glucose, normal heart rate, an exercise state of running, an emotional state of happy, and a geographic state of a certain park, as well as specific numerical information such as blood pressure data, blood glucose data, heart rate data, and a mood index.
In some embodiments, FIG. 12 exemplarily shows yet another gallery interface 1400. As shown in FIG. 12, the gallery interface 1400 arranges different types of pictures into different picture sets, and the pictures may be grouped according to their biometric information. For example, FIG. 12 includes a smart-annotation picture set in which all pictures/videos carry biometric information; picture sets grouped by person, such as a picture set of user A containing only pictures/videos of user A and a picture set of user B containing only pictures/videos of user B; and picture sets grouped by exercise state, such as a running picture set and a badminton picture set. Optionally, picture sets grouped by emotional state may also be included, such as a happy picture set and a sad picture set. In this way, the user can select the picture set to view with the biometric features as the criterion.
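As an illustrative aside, grouping pictures into such sets can be sketched as a simple classification over the decoded biometric fields. The field names ("person", "exercise") and the file names below are assumptions of the sketch, not part of this application.

```python
from collections import defaultdict

# Pictures whose biometric information has already been decoded into dictionaries.
pictures = [
    {"file": "IMG_001.jpg", "person": "user A", "exercise": "running"},
    {"file": "IMG_002.jpg", "person": "user B", "exercise": "running"},
    {"file": "IMG_003.jpg", "person": "user A", "exercise": "badminton"},
]

def group_by(items, key):
    """Return picture sets keyed by one biometric field."""
    sets = defaultdict(list)
    for pic in items:
        sets[pic[key]].append(pic["file"])
    return dict(sets)

print(group_by(pictures, "person"))    # picture sets per person
print(group_by(pictures, "exercise"))  # picture sets per exercise state
```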
In some embodiments, FIG. 13a exemplarily shows yet another picture viewing interface 1201. As shown in FIG. 13a, the display area of the picture viewing interface 1201 displays the picture 1101. Compared with the picture viewing interface 1200, the picture viewing interface 1201 does not include an annotation area but includes a cursor 1210. The cursor 1210 prompts the user that the biometric information of the picture can be selected for viewing. As shown in FIG. 13a and FIG. 13b, when the electronic device detects a user operation acting on the cursor 1210, an annotation area 1211 of the picture 1101 is displayed, and the text in the annotation area 1211 may include at least part of the biometric information, such as the exercise state, emotional state, and health state. In FIG. 13a, "User A is running" indicates the exercise state of user A.
The cursor 1210 is displayed near user A, indicating that the biometric information represented by the cursor 1210 is the information of user A. The annotation area 1211 is displayed near user A, indicating that the person described by the text in the annotation area 1211 is user A. When the picture includes two or more people, this approach can accurately show which specific person the biometric information describes.
Optionally, the cursor 1210 may be hidden in the picture viewing interface 1201. The user can tap the display area of user A in the picture 1101 to view the biometric information of user A in the picture 1101, and tap the display area of user A again to hide the biometric information of user A, which is not limited in this application.
As shown in FIG. 14a, the picture displayed in the display area 1201 in FIG. 14a includes two people, and the annotation area 1206 includes the biometric information of two users ("User A is running" and "User B is running"). In other words, the electronic device can simultaneously receive the sensor data or biometric information of the wearable devices of two users and display the biometric information of both users in one picture.
In some embodiments, the biometric information may be displayed near the corresponding person. As shown in FIG. 14b, the biometric information "User A is running" is displayed near the person wearing jersey number 12, while the biometric information "User B is running" is displayed near the person wearing jersey number 8. If the wearable device is associated with a facial image, the electronic device can identify the user in the image based on the facial image information through facial recognition technology, match the user corresponding to the wearable device with a person in the image, and display the biometric information near the corresponding person in the image.
The foregoing embodiments describe the picture viewing interfaces of the electronic device. Next, application interfaces for viewing videos carrying biometric information are introduced.
As shown in FIG. 15a, FIG. 15a exemplarily shows a video viewing interface 1500. The video viewing interface 1500 includes one or more videos, where the title of each video may be automatically named by the electronic device according to the biometric information of the video. For example, "User A running" indicates that the user in the video is user A and that user A's exercise state is running. The electronic device detects a user operation on the video "User A running" and plays that video.
As shown in FIG. 15b, FIG. 15b exemplarily shows a video playback interface 1600. The video playback interface 1600 includes a progress bar 1601 for indicating the playback progress of the video. The video title "User A running" indicates the exercise state of the user in the video; the video title may also include information such as the user's health state and emotional state.
In some embodiments, the video playback interface 1600 displays at least part of the biometric information in real time during video playback. Because a video is composed of multiple image frames, and each image frame carries biometric information, the electronic device can display at least part of the biometric information of each image frame in real time according to the playback progress. As shown in FIG. 16a and FIG. 16b, compared with FIG. 15b, FIG. 16a and FIG. 16b further include an annotation area 1603 and an annotation area 1604. The text in the annotation area 1603 and the annotation area 1604 may include descriptions related to the user's health state, exercise state, emotional state, and the like; it may also include an evaluation of the user's exercise posture, for example, whether the posture is standard; it may also include suggestions for the user, for example, when the user's mood index is low, a message hoping the user will be a little happier every day; it may further include recommendations pushed according to the user's state, for example, when the user's running posture is not standard, a suggestion to watch the video at a given link to learn; and so on.
It can be seen from FIG. 16a and FIG. 16b that, in the same video, the content in the annotation area 1603 and the annotation area 1604 can change as the video playback progresses. That is, the video playback interface 1600 can display at least part of the biometric information in real time.
In some embodiments, the at least part of the biometric information displayed in real time in the video playback interface is only part of the biometric information. If the user wants to view the complete biometric information, the user can pause the video to view the detailed information. As shown in FIG. 17a, the progress bar 1605 currently indicates a paused state (paused at 1 minute 41 seconds). When the electronic device detects a user operation on the more icon 1205, the complete biometric information of the current image frame (the image frame at 1 minute 41 seconds) is displayed, as shown in FIG. 17b. The detailed information in FIG. 17b is the complete biometric information of the image frame at 1 minute 41 seconds of the video in FIG. 17a.
In some embodiments, the biometric information may be displayed near the corresponding person in real time. As shown in FIG. 18a, the video played in the video playback interface 1600 in FIG. 18a is a video of "User A and user B running", which includes two people and the biometric information of two users. The biometric information "User A's posture is standard" is displayed near the person wearing jersey number 12, while the biometric information "User B's posture is standard" is displayed near the person wearing jersey number 8. If the wearable device is associated with a facial image, the electronic device can identify the user in the image based on the facial image information through facial recognition technology, match the user corresponding to the wearable device with a person in the image, and display the biometric information near the corresponding person in the image.
It can also be seen from FIG. 16a and FIG. 16b that, in the same video, the displayed content in the area 1603 and the area 1604 can change as the video playback progresses. That is, the video playback interface 1600 can display biometric information in real time in a multi-user scenario.
The foregoing describes the display interfaces and method procedures involved in this application. It can be understood that the foregoing content on associating biometric information with pictures/videos relies on fusion-encoding the biometric information with the picture/video information to obtain a corresponding new picture/video carrying the biometric information. The following describes the specific technical implementation and principles, involved in the embodiments of this application, of fusion-encoding biometric information with picture/video information to generate a picture/video file carrying the biometric information.
(1) Picture file format carrying biometric information.
An embodiment of this application provides a picture file format carrying biometric information. The electronic device fusion-encodes the biometric information with the picture information to generate a file in this picture file format. As shown in FIG. 19a, FIG. 19a exemplarily shows a picture file format carrying biometric information. The basic data structure of the picture file format includes two major types: "segments" and compression-encoded image data. The "segments" include a field identifying the start of image (SOI), fields of image identification information (for example, APP1 and APP2), a field defining a quantization table (DQT), a field defining a Huffman table (DHT), a start of frame (SOF) field, a start of scan (SOS) field, a field identifying the end of image (EOI), and the like.
There may be one or more fields of image identification information, for example, APP1 and APP2, and each field of image identification information defines attribute information of the image. The attribute information of the image includes various kinds of information related to the shooting conditions at the time of capture, such as the aperture, shutter, and date and time, as well as information such as the camera brand and model, color coding, sound recorded at the time of shooting, and global positioning system (GPS) information.
Specifically, the APP1 field includes a segment identifier, segment characters, a segment length, and tag image file format (TIFF) data. The TIFF data contains the attribute information described above. A TIFF file can contain multiple images, each image has its own image file directory (IFD) and a series of tags, and multiple compression algorithms can be used.
Taking IFD0 as an example, it includes multiple data entries (for example, DE1 and DE2), each of which has an identifying tag, and different tags indicate different attribute information. For example, the DE1 field may indicate the capture time of the image, the DE2 field may indicate the camera model, the field with Tag=0x8825 may indicate the GPS information of the image, and the field with Tag=0x8769 may indicate the biometric information of the image (exercise posture, heart rate, blood pressure, and so on).
Exemplarily, in the field with Tag=0x8769, the values 0x00–0x7F are defined to represent exercise postures, so this field can include up to 128 exercise postures, for example, 0x00 for running, 0x01 for walking, 0x02 for swimming, and so on; the values 0x80–0x9F represent vital signs, so this field can include up to 32 vital signs, for example, 0x80 for heart rate, 0x81 for blood pressure, 0x82 for blood glucose, and so on; the values 0xA0–0xAF represent basic personal information, so this field can include up to 16 kinds of basic personal information, for example, 0xA0 for height, 0xA1 for age, 0xA2 for gender, and so on.
If the sensor data or biometric information obtained by the electronic device indicates that the user is running, the value 0x00 is written into the field indicating the biometric information of the image; if the sensor data or biometric information obtained by the electronic device indicates that the user's heart rate is 60 beats per minute, the bytes 0x80 0x3C (0x3C is 60 in hexadecimal) are written into the field indicating the biometric information of the image; and so on.
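For illustration, one possible way to assemble the payload of such a Tag=0x8769 entry is sketched below. Only the code values quoted above (0x00 for running, 0x80 for heart rate, 0x3C for 60) come from the text; the function name, the payload layout, and the assumption that each vital sign is followed by a single value byte are simplifications of the sketch, not definitions from this application.

```python
# Code tables mirroring the examples in the text.
POSTURE_CODES = {"running": 0x00, "walking": 0x01, "swimming": 0x02}
VITAL_CODES   = {"heart_rate": 0x80, "blood_pressure": 0x81, "blood_glucose": 0x82}

def build_biometric_payload(posture=None, vitals=None) -> bytes:
    """Assemble the byte payload of the biometric information field."""
    payload = bytearray()
    if posture is not None:
        payload.append(POSTURE_CODES[posture])   # e.g. 0x00 for running
    for name, value in (vitals or {}).items():
        payload.append(VITAL_CODES[name])        # e.g. 0x80 for heart rate
        payload.append(value & 0xFF)             # e.g. 0x3C for 60 bpm
    return bytes(payload)

payload = build_biometric_payload(posture="running", vitals={"heart_rate": 60})
assert payload == bytes([0x00, 0x80, 0x3C])
```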
In some embodiments, the IFD in this picture file format further includes fields such as identity information and scene information. The identity information includes the device name of the wearable device, the device account (for example, a Huawei account), a user-defined user name, and the like. The scene information includes the scene in the picture that the electronic device comprehensively determines by recognizing the scene in the image and using the geographic location information at the time the picture was taken, for example, a park, a bar, a lakeside, or a museum.
It can be understood that the picture format shown in FIG. 19a is an exemplary picture format provided by this application, and the position of the biometric information field in the picture format is not limited in this application.
(2) Video frame format carrying biometric information.
An embodiment of this application further provides a video frame format carrying biometric information. A complete video is composed of multiple video frames, and one video frame corresponds to one picture. The electronic device fusion-encodes the biometric information with the video frame information to generate a frame in this video frame format. As shown in FIG. 19b, FIG. 19b exemplarily shows a video frame format carrying biometric information. The video frame format includes supplemental enhancement information (SEI), a sequence parameter set (SPS), a picture parameter set (PPS), and a compression-encoded video data sequence (VCL data). The SPS stores a set of global parameters of a coded video sequence, where the coded video sequence is the sequence of structures obtained by encoding the pixel data of the frames of the original video. The parameters on which the encoded data of each frame depends are stored in the PPS.
SEI belongs to the bitstream level and provides a method of adding extra information to the video bitstream. SEI information can be inserted during the generation and transmission of the video content. The inserted information reaches the electronic device through the transmission link together with the other video content. The SEI includes fields such as the network abstraction layer unit (NAL) type, the SEI type, and the SEI length.
The SEI further includes a field indicating biometric information, where the biometric information includes exercise posture, heart rate, blood pressure, and the like. Exemplarily, the values 0x00–0x7F are defined to represent exercise postures, so this field can include up to 128 exercise postures, for example, 0x00 for running, 0x01 for walking, 0x02 for swimming, and so on; the values 0x80–0x9F represent vital signs, so this field can include up to 32 vital signs, for example, 0x80 for heart rate, 0x81 for blood pressure, 0x82 for blood glucose, and so on; the values 0xA0–0xAF represent basic personal information, so this field can include up to 16 kinds of basic personal information, for example, 0xA0 for height, 0xA1 for age, 0xA2 for gender, and so on.
If the sensor data or biometric information obtained by the electronic device indicates that the user is running, the value 0x00 is written into the field indicating the biometric information of the image; if the sensor data or biometric information obtained by the electronic device indicates that the user's heart rate is 60 beats per minute, the bytes 0x80 0x3C (0x3C is 60 in hexadecimal) are written into the field indicating the biometric information of the image; and so on.
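For illustration, the following sketch wraps such a biometric payload in an SEI-like message. The NAL unit type 0x06 and the 0xFF-run length coding follow common H.264 SEI conventions, but the SEI payload type value, the single-byte type field, and the omission of emulation-prevention bytes are simplifying assumptions of the sketch; this application does not specify the exact SEI layout beyond the fields named above.

```python
def build_sei_message(payload: bytes,
                      nal_type: int = 0x06,          # SEI NAL unit type in H.264
                      sei_payload_type: int = 0x05   # user-data payload type, assumed
                      ) -> bytes:
    """Wrap a biometric payload in a simplified SEI-style message."""
    msg = bytearray([nal_type, sei_payload_type])
    # SEI size coding: a 0xFF byte for each full 255, then the remainder.
    size = len(payload)
    while size >= 255:
        msg.append(0xFF)
        size -= 255
    msg.append(size)
    msg.extend(payload)
    msg.append(0x80)  # RBSP trailing bits
    return bytes(msg)

sei = build_sei_message(bytes([0x00, 0x80, 0x3C]))   # running, heart rate 60
```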
In some embodiments, the SEI further includes fields such as identity information and scene information. The identity information includes the device name of the wearable device, the device account (for example, a Huawei account), a user-defined user name, and the like. The scene information includes the scene in the picture that the electronic device comprehensively determines by recognizing the scene in the image and using the geographic location information at the time the picture was taken, for example, a park, a bar, a lakeside, or a museum.
It can be understood that the video frame format shown in FIG. 19b is an exemplary video frame format provided by this application, and the position of the biometric information field in the video frame format is not limited in this application.
It can be understood that the embodiments, method procedures, and related technical principles mentioned above can be organically combined to obtain other new embodiments, which is not limited in this application.
Based on the foregoing technical principles and the feature annotation system shown in FIG. 3, the following first introduces, with reference to examples, the method procedure for taking pictures provided by this application. Referring to FIG. 20a, FIG. 20a shows a flowchart of a method for taking pictures. The devices involved in the method flowchart include wearable devices and a photographing device. The photographing device includes an electronic device 100 having a photographing function, and the wearable devices exemplarily include a wearable device 201 and a wearable device 202, and may further include more devices. The method includes the following steps.
Step S101: The wearable device establishes a connection with the photographing device.
The wearable device and the photographing device establish a connection, and the connection manner is not limited to wireless communication manners such as Bluetooth (BT), near field communication (NFC), wireless fidelity (Wi-Fi), Wi-Fi Direct, and a network. In the embodiments of this application, pairing using Bluetooth is described as an example.
Optionally, in the process of establishing the connection, the wearable device and the photographing device obtain each other's connection information (for example, hardware information, interface information, and identity information) through Bluetooth. After the photographing function is enabled, the photographing device can obtain the sensor data of the wearable device, and the wearable device can synchronize some functions of the photographing device. For example, the wearable device can synchronously output notification reminders from the photographing device (for example, incoming call reminders and new message reminders), the wearable device can actively trigger the photographing device to enable the photographing function, the wearable device can view the picture/video files in the photographing device, and so on.
Optionally, in some embodiments, when the photographing device detects a user operation for enabling the annotation mode, the photographing device automatically turns on Bluetooth and automatically establishes a Bluetooth connection with the wearable device.
Step S102: The photographing device detects a user operation that triggers the picture-taking function.
The photographing device detects a user operation that triggers the picture-taking function, triggers the picture-taking function, and obtains the picture information currently captured by the camera. The user operation may be a touch operation, a voice operation, a hover gesture operation, or the like, which is not limited here. In this application, the photographing device detects, in the annotation mode, the user operation that triggers the picture-taking function; for details, refer to the foregoing UI embodiments. For example, the user can trigger the photographing device to take a picture through a user operation on the icon 305 in FIG. 5c or the icon 404 in FIG. 6b, and the photographing device detects the user operation and triggers the picture-taking function.
Optionally, in some embodiments, the user operation may be a user operation on the wearable device. The wearable device detects the user operation that triggers the picture-taking function and sends it to the photographing device through Bluetooth, and the photographing device detects the user operation and triggers the picture-taking function. Referring to FIG. 9a, the user starts the camera application by tapping the application icon 901 in FIG. 9a and triggers the picture-taking function by tapping the icon 1003 in FIG. 9b; the wearable device sends a picture-taking instruction to the photographing device through Bluetooth, so that the photographing device triggers the taking of the picture. The details are not repeated here. In this application, the user operation that triggers the picture-taking function may also be referred to as a first operation.
Step S103: The photographing device sends a request message, where the request message is used to request sensor data or biometric information.
After detecting the user operation that triggers the picture-taking function, the photographing device sends a request message to the wearable device, where the request message is used to request sensor data or biometric information.
Optionally, the request message includes the requested data types, the data collection manner, and the data collection interval. The requested data types may be health state, exercise state, emotional state, and the like. The health state includes heart rate, blood pressure, blood glucose, electroencephalogram, electrocardiogram, electromyography, body temperature, and so on; the exercise state includes common types of exercise postures such as walking, running, cycling, swimming, playing badminton, skating, surfing, and dancing, and may also include some finer-grained exercise postures, for example, forehand stroke, backhand stroke, Latin dance, and robot dance; the emotional state includes tension, anxiety, sadness, stress, excitement, joy, and so on.
Optionally, the data collection manner may be divided into single collection and continuous collection. For taking pictures, the data collection manner may generally be single collection.
The data collection interval is related to the data collection manner. When the data collection manner is single collection, the data collection interval is an invalid value, that is, the photographing device only needs to obtain the data sent by the wearable device once. When the data collection manner is continuous collection, the data collection interval is a preset interval, that is, the photographing device obtains the data sent by the wearable device once every preset interval.
The data types requested by the photographing device can be configured by the user on the photographing device side; for the specific configuration manner, refer to the foregoing UI embodiments. For example, as shown in FIG. 8c, the configuration interface 80 includes multiple configuration options, each corresponding to one data type, and the photographing device sends the request message to the wearable device according to the data types selected by the user. For example, exercise posture, heart rate, blood pressure, blood glucose, and emotional state are each represented by one byte; if the photographing device wants to obtain the exercise posture, the byte representing the exercise posture is set to 1, and otherwise it is set to 0. The same applies to the other data types.
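For illustration, a possible request-message layout along these lines is sketched below: one flag byte per data type, followed by a collection-manner byte and a collection interval. The byte order, the use of 0xFFFF as the invalid interval value, and the field widths are assumptions of the sketch, not definitions from this application.

```python
import struct

DATA_TYPES = ["exercise_posture", "heart_rate", "blood_pressure",
              "blood_glucose", "emotional_state"]

SINGLE, CONTINUOUS = 0, 1

def build_request(selected, manner=SINGLE, interval_ms=0) -> bytes:
    """One flag byte per data type, then manner (1 byte) and interval (2 bytes)."""
    flags = bytes(1 if t in selected else 0 for t in DATA_TYPES)
    # For single collection the interval is an invalid value (0xFFFF here).
    interval = 0xFFFF if manner == SINGLE else interval_ms
    return flags + struct.pack(">BH", manner, interval)

# Request exercise posture and heart rate, collected once:
req = build_request({"exercise_posture", "heart_rate"})
assert req[:5] == bytes([1, 1, 0, 0, 0])
```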
Optionally, in some embodiments, the user may trigger the picture-taking function on the wearable device. The wearable device detects the user operation that triggers the picture-taking function and sends a picture-taking instruction to the photographing device. The photographing device receives the picture-taking instruction, obtains the picture information currently captured by the camera, and sends the request message to the wearable device. In this application, the user operation that the wearable device detects as triggering the picture-taking function may be referred to as a second operation.
Step S104: The wearable device sends the sensor data or biometric information to the photographing device.
The wearable device parses the received request message and obtains the data types, data collection manner, and data collection interval required by the photographing device. According to the request message, the wearable device sends the sensor data or biometric information to the photographing device. Exemplarily, for taking pictures, the data collection manner is single collection and the data collection interval is set to an invalid value, so the wearable device sends the data to the photographing device once. If the data types in the request message include exercise posture and heart rate, the wearable device sends the sensor data or biometric information obtained through the motion sensor and the heart rate sensor to the photographing device.
Optionally, the sensor data is raw data, that is, data directly detected by the sensors. For example, the sensor data may include the user's heart rate data obtained by the wearable device through a heart rate sensor; data such as the user's movement amplitude, angle, and speed obtained through a motion sensor; data such as the user's skin resistance and conductance obtained through a galvanic skin sensor; and data such as the user's blood glucose, blood pressure, and body temperature obtained through biosensors.
Optionally, the biometric information is data obtained by processing the sensor data, that is, further data calculated or inferred from the raw data. For example, the biometric information may be that the wearable device infers from the heart rate data that the user's current heart rate is within the normal range; infers the user's current exercise posture, such as walking, running, cycling, swimming, or playing badminton, from the data obtained by the motion sensor; infers the user's current mood index, stress index, and so on from the data obtained by the galvanic skin sensor; or infers the user's current health state from the data obtained by the biosensors.
Exemplarily, the values 0x00–0x7F represent exercise postures, and this field can include up to 128 exercise postures, for example, 0x00 for running, 0x01 for walking, 0x02 for swimming, and so on; the values 0x80–0x9F represent vital signs, and this field can include up to 32 vital signs, for example, 0x80 for heart rate, 0x81 for blood pressure, 0x82 for blood glucose, and so on; the values 0xA0–0xAF represent basic personal information, and this field can include up to 16 kinds of basic personal information, for example, 0xA0 for height, 0xA1 for age, 0xA2 for gender, and so on. If the photographing device wants to obtain the exercise posture, and the wearable device detects that the data obtained by the motion sensor indicates that the user is running, the wearable device writes 0x00 into the data packet to be sent, indicating that the user is running. If the photographing device wants to obtain the heart rate, and the wearable device detects that the data obtained by the heart rate sensor is 60 beats per minute, the wearable device writes the data bytes indicating "heart rate is 60 beats per minute" into the data packet to be sent, for example, 0x80 0x3C (0x3C is 60 in hexadecimal). Optionally, since a heart rate of 60 beats per minute is within the normal range, the wearable device may instead write data bytes indicating "heart rate is normal" into the data packet to be sent.
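For illustration, the wearable-device side of this encoding can be sketched as follows: given the requested data types and the current sensor readings, the corresponding code bytes are appended to the packet to be sent. The reading values and the dictionary structure are assumptions of the sketch; only the code values themselves (0x00, 0x80, 0x3C) come from the text above.

```python
def encode_response(requested, readings) -> bytes:
    """Translate requested sensor readings into the code bytes of the packet."""
    out = bytearray()
    if "exercise_posture" in requested:
        out.append({"running": 0x00, "walking": 0x01,
                    "swimming": 0x02}[readings["posture"]])
    if "heart_rate" in requested:
        out.append(0x80)                        # code for heart rate
        out.append(readings["heart_rate"] & 0xFF)
    return bytes(out)

packet = encode_response({"exercise_posture", "heart_rate"},
                         {"posture": "running", "heart_rate": 60})
assert packet == bytes([0x00, 0x80, 0x3C])
```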
Optionally, in some embodiments, the wearable device parses the received request message and obtains the data types, data collection manner, and data collection interval required by the photographing device. According to the request message, the wearable device obtains the sensor data from a preset time period earlier and sends that sensor data, or the biometric information obtained based on that sensor data, to the photographing device. The preset time period may be 1 second or 0.5 seconds, which is not limited here. Because the moment at which the wearable device obtains the sensor data always lags behind the moment at which the electronic device detects the user operation that triggers the picture-taking function, the lag value can be obtained through experiments and set as the preset time period, so that the wearable device can provide accurate sensor data or biometric information to the photographing device and reduce the error.
Optionally, in some embodiments, the wearable device parses the received request message and, according to the request message, sends the sensor data or biometric information to the photographing device according to a timestamp. The timestamp is the moment at which the electronic device detects the user operation that triggers the picture-taking function; the wearable device obtains the sensor data at that moment and sends the sensor data or biometric information to the photographing device. In this way, the wearable device can provide more accurate information to the photographing device.
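For illustration, both variants (a preset lag and an explicit capture timestamp) can be sketched as selecting, from a short history of timestamped samples kept on the wearable device, the sample closest to the target moment. The history structure and the 0.5-second default lag are assumptions of the sketch.

```python
def select_sample(history, capture_ts=None, preset_lag_s=0.5):
    """history: list of (timestamp_seconds, sample_dict), oldest first.

    If a capture timestamp is given, return the sample closest to it;
    otherwise fall back to the sample from a preset time period earlier."""
    target = capture_ts if capture_ts is not None else history[-1][0] - preset_lag_s
    return min(history, key=lambda item: abs(item[0] - target))[1]

history = [(9.0, {"heart_rate": 58}), (9.5, {"heart_rate": 59}), (10.0, {"heart_rate": 60})]
assert select_sample(history, capture_ts=9.6) == {"heart_rate": 59}
assert select_sample(history) == {"heart_rate": 59}   # 10.0 - 0.5 = 9.5
```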
Step S105: Fusion-encode the captured picture information with the biometric information to generate a picture carrying the biometric information.
After receiving the sensor data or biometric information, the photographing device fusion-encodes the captured picture information with the biometric information to generate a picture carrying the biometric information, where the biometric information corresponds to the sensor data. For the format of the picture, refer to the picture format shown in FIG. 19a above. The displayed content of the biometric information may be a simple sentence or detailed, complete information. The displayed content, display position, and display form of the biometric information in the picture are not limited in this application.
Optionally, the electronic device receives the sensor data, determines the biometric information based on the sensor data, and fusion-encodes the captured picture information with the biometric information; or, optionally, the electronic device receives the biometric information and, based on the biometric information, fusion-encodes the captured picture information with the biometric information.
For example, if the sensor data obtained by the photographing device is the user's heart rate data, the biometric information may be whether the user's heart rate is within the normal range; if the sensor data obtained by the photographing device is the user's current exercise posture, such as walking, running, cycling, swimming, or playing badminton, the biometric information may be whether the user's exercise posture is standard; if the sensor data obtained by the photographing device is information such as the user's current mood index and stress index, the biometric information may be whether the user is in a happy mood; and so on.
The generated picture may be as shown in FIG. 11b above. When the user views a picture carrying biometric information, the biometric information may be displayed at a fixed position in the picture, for example, in the upper part of the picture. The biometric information in the picture may be hidden when viewing begins and then displayed by triggering a control. For example, in FIG. 13a, the biometric information is displayed by triggering the cursor 1210. As another example, in FIG. 11b, the biometric information is viewed in the user interface 1300 in FIG. 11c by triggering the icon 1205.
Optionally, in some embodiments, the electronic device may also obtain the identity information of the wearable device, for example, information such as the device name, the device account (for example, a Huawei account), and a user-defined user name, and fusion-encode the identity information of the wearable device with the picture information to generate a picture file carrying the identity information. The electronic device may also obtain scene information at the time the picture is taken, where the scene information includes the scene in the picture that the electronic device comprehensively determines by recognizing the scene in the image and using the geographic location information at the time the picture was taken, for example, a park, a bar, a lakeside, or a museum. The scene information is fusion-encoded with the picture information to generate a picture file carrying the scene information.
Optionally, in some embodiments, the biometric information is obtained by combining the sensor data and the picture information. Information such as the user's health state, exercise state, and emotional state can be derived from the sensor data, and information such as the scene and the user's exercise posture can be derived from the picture information; the photographing device combines the sensor data and the picture information to derive the final biometric information of the picture. For example, the sensor data received by the photographing device includes the user's heart rate of 60 beats per minute and an exercise posture of running; the photographing device performs image analysis on the captured picture information and determines that the user's exercise posture in the picture is running and that the shooting location is a park. Combining the sensor data and the picture information, the photographing device derives the final biometric information, which includes that the user is running in the park with a heart rate of 60 beats per minute.
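For illustration, such a combination step can be sketched as merging the wearable-device readings with the results of image analysis into one record. How the scene and posture are extracted from the image is outside the sketch, and the field names are assumptions.

```python
def combine(sensor, image_analysis):
    """Merge sensor-derived and image-derived information into one biometric record."""
    info = dict(sensor)                              # e.g. posture and heart rate from the wearable device
    info["scene"] = image_analysis.get("scene")      # e.g. "park" from scene recognition + location
    # Flag the case where the posture seen in the image disagrees with the sensor data.
    if image_analysis.get("posture") and image_analysis["posture"] != sensor.get("posture"):
        info["posture_check"] = "image and sensor disagree"
    return info

final = combine({"posture": "running", "heart_rate": 60},
                {"posture": "running", "scene": "park"})
# -> {"posture": "running", "heart_rate": 60, "scene": "park"}
```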
Optionally, in some embodiments, the photographing device stores the information of one or more facial images. The photographing device determines, based on the identity information of the wearable device, the preset facial image corresponding to the wearable device, and performs similarity matching between the preset facial image and one or more people in the picture. If the preset facial image is successfully matched with one of the people, the photographing device displays at least part of the biometric information near that person.
The preset facial image is associated with the wearable device. The preset facial image may be preset by the user in the photographing device, uploaded to the photographing device in the form of an image or video, or preset by the user in the wearable device, with the wearable device then providing it to the photographing device, which is not limited in this application.
For example, the wearable device is Huawei Watch 1, the user name bound to the Huawei Watch 1 is user A, and the photographing device determines the facial information of user A. The photographing device may determine the facial information of user A by looking up the binding relationship between users and faces in the contacts, or through the binding relationship between the wearable device and facial information (for example, in FIG. 8c, the facial information of user A is uploaded through the icon 802 for associating a facial image, and the facial information of user A is the facial image 7031), and so on. The photographing device performs image recognition on the picture, identifies one or more faces in the picture, and performs similarity matching between the facial information of user A and the one or more faces in the picture. If the similarity between the facial information of user A and one of the faces in the picture is greater than a threshold, the biometric information is displayed near that face. Referring to FIG. 13b, the biometric information of the picture in FIG. 13b is displayed near the person, indicating the person described by the biometric information.
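For illustration, the matching and placement step can be sketched as follows: each detected face is compared with the preset facial image, and the annotation is placed at the bounding box of the best match whose similarity exceeds a threshold. The similarity function, the 0.8 threshold, and the bounding-box format are assumptions of the sketch; this application only requires that the similarity exceed some threshold.

```python
def place_annotation(preset_face, detected_faces, similarity, threshold=0.8):
    """detected_faces: list of (face_features, bounding_box). Returns the bounding
    box of the best match, or None if no face exceeds the threshold."""
    best_box, best_score = None, threshold
    for features, box in detected_faces:
        score = similarity(preset_face, features)
        if score > best_score:
            best_box, best_score = box, score
    return best_box   # None -> output a prompt such as "User A is not in the picture"

# Usage with a toy similarity function over two-element feature vectors:
faces = [([0.1, 0.9], (40, 60, 120, 200)), ([0.8, 0.2], (300, 50, 380, 210))]
box = place_annotation([0.82, 0.21], faces,
                       lambda a, b: 1 - sum(abs(x - y) for x, y in zip(a, b)) / 2)
assert box == (300, 50, 380, 210)
```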
Optionally, if the preset facial image is not successfully matched with any person in the picture, that is, the photographing device detects that the preset facial image is not in the picture currently captured by the photographing device, the photographing device may output a prompt message (a first prompt), for example, "User A is not in the picture".
Optionally, in some embodiments, in addition to the method procedure shown in FIG. 20a above, this application further shows another method procedure for taking a picture, as shown in FIG. 20b.
Step S101a: The wearable device establishes a connection with the photographing device. For a specific description, refer to the description of step S101, which is not repeated here.
Step S102a: The photographing device sends a request message, where the request message is used to request sensor data or biometric information.
After the wearable device and the photographing device establish a connection, the photographing device sends a request message to the wearable device, where the request message is used to request sensor data or biometric information.
Optionally, when the photographing device detects a user operation for enabling the annotation mode, the photographing device sends the request message to the wearable device.
Optionally, in the annotation mode, the photographing device detects a user operation that triggers the acquisition of sensor data or biometric information, and the photographing device sends the request message to the wearable device. Referring to FIG. 7a, in FIG. 7a the photographing device detects a user operation on the icon 407, and the photographing device sends the request message to the wearable device.
For a specific description of the request message, refer to the description of step S103, which is not repeated here.
Step S103a: The wearable device sends the sensor data or biometric information to the photographing device.
For a specific description of this step, refer to the description of step S104. In addition:
In some embodiments, after obtaining the sensor data or biometric information sent by the wearable device, the photographing device displays at least part of the biometric information on the shooting interface. Referring to FIG. 7b, the image captured by the camera in real time is displayed in the display area 40 together with the preview area 408, and the content displayed in the preview area 408 is at least part of the biometric information, for the user to view in real time. The biometric information corresponds to the sensor data. In the embodiments of this application, this shooting interface may be referred to as a shooting preview interface, and the shooting preview interface includes the preview image captured by the camera. The photographing device displays at least part of the biometric information on the preview image; the at least part of the biometric information displayed on the preview image may be referred to as second information, and the second information corresponds to second sensor data.
In some embodiments, the photographing device stores the information of one or more facial images and the correspondence between each facial image and a wearable device. The photographing device determines, from the one or more facial images based on the identity information of the wearable device, the preset facial image corresponding to the wearable device, and performs similarity matching between the preset facial image and one or more people in the picture. If the preset facial image is successfully matched with one of the people in the picture, the photographing device displays at least part of the biometric information near that person. For example, the preview area 408 in FIG. 7b may be displayed near the matched person.
Optionally, if the target facial information is not successfully matched with any person in the picture, that is, the photographing device detects that the preset facial image is not in the picture currently captured by the photographing device, the photographing device may output a prompt message (a first prompt), for example, "User A is not in the picture; please point the camera at user A".
Step S104a: The photographing device detects a user operation that triggers the picture-taking function. For a specific description, refer to the description of step S102, which is not repeated here.
Step S105a: Fusion-encode the captured picture information with the biometric information to generate a picture carrying the biometric information, where the biometric information corresponds to the sensor data. For a specific description, refer to the description of step S105, which is not repeated here.
In the embodiment described in FIG. 20a above, the photographing device obtains sensor data or biometric information from the wearable device only after detecting the user operation that triggers the picture-taking function. In contrast, in the method shown in FIG. 20b, the photographing device can obtain sensor data or biometric information from the wearable device as soon as the annotation mode is enabled and display at least part of the biometric information in real time on the shooting preview interface, achieving a preview effect for the biometric information and improving the user experience. Alternatively, after the annotation mode is enabled and a user operation is received, the photographing device may obtain sensor data or biometric information from the wearable device and display at least part of the biometric information in real time on the shooting preview interface, so that the user can freely control the display and hiding of the biometric information, improving the user experience.
可选的,在一些实施例中,用户可以是在穿戴设备上触发拍摄图片功能,穿戴设备检测到触发拍摄功能的用户操作,向拍摄设备发送拍摄图片指令以及传感器数据(或生物特征信息)。拍摄设备接收到拍摄图片指令以及传感器数据(或生物特征信息)后,获取当前摄像头采集的图片信息,将拍摄到的图片信息与生物特征信息进行融合编码,生成带有生物特征信息的图片,该生物特征信息与传感器数据对应。关于传感器数据和生物特征信息的具体描述可以参考上述步骤S104。Optionally, in some embodiments, the user may trigger a picture taking function on the wearable device, and the wearable device detects the user operation triggering the photographing function, and sends a picture taking instruction and sensor data (or biometric information) to the photographing device. After receiving the image capture instruction and sensor data (or biometric information), the photographing device obtains the image information collected by the current camera, fuses and encodes the captured image information and biometric information, and generates a picture with biometric information. Biometric information corresponds to sensor data. For a specific description of the sensor data and biometric information, reference may be made to the above step S104.
举例来说,参考图9c,用户在如图9c中所示的配置界面中选择想要的数据类型,配置完成后,穿戴设备开启标注模式。穿戴设备响应于针对界面1000中图标1003的用户操作,向拍摄设备发送拍摄图片指令,以及根据用户在图9c中选择的数据类型,向拍摄设备提供传感器数据或生物特征信息。拍摄设备接收到拍摄图片指令,获取当前摄像头采集的图片信息,将拍摄到的图片信息与或生物特征信息进行融合编码,生成带有生物特征信息的图片。For example, referring to Fig. 9c, the user selects the desired data type in the configuration interface as shown in Fig. 9c, and after the configuration is completed, the wearable device starts the labeling mode. In response to the user's operation on the icon 1003 in the interface 1000, the wearable device sends a picture-taking instruction to the photographing device, and provides sensor data or biometric information to the photographing device according to the data type selected by the user in FIG. 9c. The photographing device receives an instruction to take a picture, obtains the picture information collected by the current camera, and fuses and encodes the photographed picture information with or biometric information to generate a picture with biometric information.
Optionally, the present application further provides a method procedure for shooting a video. Referring to FIG. 21, FIG. 21 shows a flowchart of a method for shooting a video.
Step S201: The wearable device and the photographing device establish a connection. For details, reference may be made to the description of step S101, which is not repeated here.
Step S202: The photographing device detects a user operation that triggers the video-shooting function.
The photographing device detects a user operation that triggers the video-shooting function, triggers the video-shooting function, and obtains the video frame information currently collected by the camera. The user operation may be a touch operation, a voice operation, a hovering gesture operation, or the like, which is not limited here. In the present application, the photographing device detects the user operation that triggers the video-shooting function in the annotation mode; for details, reference may be made to the foregoing UI embodiments.
Optionally, in some embodiments, the user operation may be a user operation on the wearable device. The wearable device detects the user operation that triggers the video-shooting function and sends it to the photographing device through Bluetooth; the photographing device detects the user operation and triggers the video-shooting function.
Step S203: The photographing device sends a request message, where the request message is used to request acquisition of sensor data or biometric information.
After detecting the user operation that triggers the video-shooting function, the photographing device sends a request message to the wearable device, where the request message is used to request acquisition of sensor data or biometric information.
The request message includes the data types supported by the photographing device, the data collection method, and the data collection interval. For details of this part, reference may be made to the description of step S103 above. The difference from step S103 is that, for video shooting, the data collection method is generally continuous collection, and the data collection interval may be set to 1 second; that is, during video shooting the photographing device obtains sensor data or biometric information once every second.
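The application does not specify an over-the-air format for this request message. Under that caveat, the following sketch merely illustrates what the three fields could look like if carried as JSON over the Bluetooth link; all field names and values are assumptions.

```python
# Hypothetical encoding of the request message described above; the field names
# and values are illustrative assumptions, not a format defined by this application.
import json

def build_request(data_types, collection_mode, interval_s):
    return json.dumps({
        "supported_data_types": data_types,     # e.g. ["heart_rate", "motion_posture"]
        "collection_mode": collection_mode,     # "single" for pictures, "continuous" for video
        "collection_interval_s": interval_s,    # 1 second in the video example above
    }).encode("utf-8")                          # bytes ready to send over the Bluetooth link

video_request = build_request(["heart_rate", "motion_posture"], "continuous", 1)
print(video_request)
```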
Optionally, in some embodiments, the user may trigger the video-shooting function on the wearable device. The wearable device detects the user operation that triggers the video-shooting function and sends a video-shooting instruction to the photographing device. After receiving the video-shooting instruction, the photographing device sends the request message to the wearable device.
In some embodiments, the user triggers the video-shooting function on the wearable device. The wearable device detects the user operation that triggers the video-shooting function, sends a video-shooting instruction to the photographing device, and also sends sensor data or biometric information to the photographing device. In this case, step S203 does not need to be performed; based on the configuration on the wearable device side, the wearable device sends sensor data or biometric information to the photographing device.
Step S204: The wearable device periodically sends sensor data or biometric information.
The wearable device parses the received request message and obtains the data types, data collection method, and data collection interval required by the photographing device. According to the request message, the wearable device periodically sends sensor data or biometric information to the photographing device. For example, for video shooting, the data collection method is continuous collection and the data collection interval is set to 1 second; the wearable device then sends sensor data or biometric information to the photographing device every second. For details of the sensor data and biometric information, reference may be made to step S104 above.
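A minimal sketch of the wearable-side loop implied by step S204 follows; the sensor read and the Bluetooth send are stubbed stand-ins for the device's real interfaces, and the reported values are invented for the example.

```python
# Sketch of the wearable device's periodic reporting loop (step S204). The sensor
# read and Bluetooth send are stubbed out; a real device would replace them.
import json, time

def read_sensors():
    # Stub: a real wearable would sample its heart-rate / motion sensors here.
    return {"heart_rate": 60, "motion_posture": "running", "timestamp": time.time()}

def send_to_camera(payload: bytes):
    # Stub: a real wearable would write this to the Bluetooth channel.
    print("sent", payload)

def report_loop(interval_s: float, cycles: int) -> None:
    """Send one sensor sample every interval_s seconds, for a fixed number of cycles."""
    for _ in range(cycles):
        send_to_camera(json.dumps(read_sensors()).encode("utf-8"))
        time.sleep(interval_s)

report_loop(interval_s=1.0, cycles=3)   # matches the 1-second interval in the text
```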
Step S205: Each time the photographing device receives sensor data or biometric information, it fusion-encodes the captured video information with the biometric information to generate image frames carrying the biometric information, and the biometric information corresponds to the sensor data.
Each time the photographing device receives sensor data or biometric information, it fusion-encodes the captured video information with the biometric information. Since a video is composed of multiple image frames, each image frame generated by the photographing device has corresponding biometric information, and the biometric information of an image frame corresponds to the sensor data. For example, if the photographing device receives sensor data once every second and generates 24 image frames per second, then the biometric information of the 24 image frames within a given one-second interval is obtained from the sensor data received at the end of that interval, that is, the next sampling instant. For instance, the photographing device obtains sensor data at the 5th second and fusion-encodes the 24 frames of the video between the 4th and 5th second with that sensor data to generate image frames carrying biometric information; at the 6th second it obtains sensor data again and fusion-encodes the 24 frames between the 5th and 6th second with that sensor data; and so on.
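The mapping between sensor samples and frame ranges in this example can be expressed as a small helper. The 24 fps figure is taken from the text; everything else in the sketch is an illustrative assumption.

```python
# Sketch: assign each 1-second sensor sample to the 24 frames of the preceding second,
# as in the example above (the sample at t = 5 s tags the frames of seconds 4-5).
FPS = 24

def frames_for_sample(sample_second: int, fps: int = FPS) -> range:
    """Frame indices tagged by the sensor sample received at sample_second."""
    return range((sample_second - 1) * fps, sample_second * fps)

def tag_frames(samples: dict) -> dict:
    """samples: {second: biometric_info}. Returns {frame_index: biometric_info}."""
    tagged = {}
    for second, info in samples.items():
        for frame in frames_for_sample(second):
            tagged[frame] = info
    return tagged

tags = tag_frames({5: {"heart_rate": 60}, 6: {"heart_rate": 62}})
print(tags[96], tags[143])   # frames 96-119 -> second-5 sample, frames 120-143 -> second-6 sample
```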
Optionally, in some embodiments, the photographing device fusion-encodes the biometric information with the video information according to timestamps. For example, the wearable device periodically sends sensor data or biometric information that carries a timestamp, indicating the point in the video information to which the sensor data or biometric information corresponds. In this way, the wearable device can provide more accurate information to the photographing device, avoiding a mismatch between the biometric information and the video content caused by transmission delay.
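Under this timestamp-based variant, alignment reduces to mapping each sample's timestamp onto a frame index relative to the recording start. The sketch below is one straightforward way to do that; the clock values and the 24 fps rate are assumptions for the example.

```python
# Sketch: align timestamped sensor samples to video frames so that transmission
# delay does not shift the biometric information onto the wrong frames.
FPS = 24

def frame_index(sample_ts: float, recording_start_ts: float, fps: int = FPS) -> int:
    """Frame that was being captured when the sample was taken on the wearable."""
    return max(0, int((sample_ts - recording_start_ts) * fps))

start = 1_000_000.0                              # hypothetical recording start time (seconds)
sample = {"timestamp": 1_000_002.5, "heart_rate": 61}
print(frame_index(sample["timestamp"], start))   # -> frame 60, even if the sample arrived late
```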
Optionally, in some embodiments, the biometric information is obtained by combining the sensor data with the video information. Information such as the user's health state, motion state, and emotional state can be derived from the sensor data, while information such as the scene and the user's motion posture can be derived from the video information; the photographing device combines the sensor data and the video information to derive the biometric information of the picture. For example, the sensor data received by the photographing device includes the user's heart rate of 60 beats per minute and a motion posture of running; the photographing device performs image analysis on the captured video information and determines that the user's motion posture in the video is running and that the shooting location is a park. Combining the sensor data and the video information, the photographing device determines that the final biometric information of the image frames in the video includes that the user is running in a park with a heart rate of 60 beats per minute.
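A toy illustration of that combination step follows. The rule used to merge the two sources is invented for the example and is not specified by the application.

```python
# Sketch: merge wearable sensor data with scene/posture cues derived from the video
# into a single biometric description, mirroring the "running in a park, 60 bpm" example.
def combine(sensor: dict, video_analysis: dict) -> dict:
    info = {"heart_rate_bpm": sensor.get("heart_rate")}
    # Prefer agreement between the two sources for the motion posture.
    if sensor.get("motion_posture") == video_analysis.get("motion_posture"):
        info["activity"] = sensor["motion_posture"]
    else:
        info["activity"] = video_analysis.get("motion_posture", sensor.get("motion_posture"))
    info["scene"] = video_analysis.get("scene")
    info["summary"] = f'{info["activity"]} in a {info["scene"]}, heart rate {info["heart_rate_bpm"]} bpm'
    return info

print(combine({"heart_rate": 60, "motion_posture": "running"},
              {"motion_posture": "running", "scene": "park"}))
```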
Step S206: The photographing device detects a user operation that triggers stopping the video shooting.
The photographing device detects the user operation that triggers stopping the video shooting and stops shooting the video. The user operation may be a touch operation, a voice operation, a hovering gesture operation, or the like, which is not limited here. In the present application, the photographing device detects the user operation that triggers stopping the video-shooting function in the annotation mode; for details, reference may be made to the foregoing UI embodiments.
Optionally, in some embodiments, the user may trigger the stop of video shooting on the wearable device. The wearable device detects the user operation that triggers stopping the video shooting and sends a stop-video-shooting instruction to the photographing device. The photographing device receives the instruction and stops shooting the video.
Step S207: The photographing device sends a request message to the wearable device for stopping the acquisition of sensor data or biometric information.
After detecting the user operation that triggers stopping the video-shooting function, the photographing device sends a request message to the wearable device, where the request message is used to stop the acquisition of sensor data or biometric information. The wearable device receives the request message and stops sending sensor data or biometric information to the photographing device.
Optionally, in some embodiments, the user may trigger the stop of video shooting on the wearable device. The wearable device detects the user operation that triggers stopping the video shooting, sends a stop-video-shooting instruction to the photographing device, and stops sending sensor data or biometric information to the photographing device.
Step S208: The photographing device generates and saves the video.
After detecting the user operation that triggers stopping the video-shooting function, the photographing device generates and saves the shot video. For the format of the video, reference may be made to the video format shown in FIG. 19b above; the video carries biometric information, and the biometric information corresponds to the sensor data.
The generated video may be as shown in FIG. 16a or FIG. 16b above. When the user views the video, the video also includes biometric information; the displayed content of the biometric information may be a simple sentence or detailed, complete information. The display content, display position, and display form of the biometric information are not limited in this application.
Optionally, in some embodiments, the electronic device may further obtain the identity information of the wearable device, such as the device name, the device account (for example, a Huawei account), or a custom user name, and fusion-encode the identity information of the wearable device with the video frame information to generate a video file carrying the identity information. The electronic device may also obtain scene information at the time of shooting the video; the scene information includes the scene in the video frames that the electronic device comprehensively determines by recognizing the scene in the video frames and using the geographic location information at the time of shooting, for example a park, a bar, a lakeside, or a museum. The scene information is fusion-encoded with the video frame information to generate a video file carrying the scene information.
Optionally, in some embodiments, the photographing device stores information of one or more facial images. The photographing device determines, through the identity information of the wearable device, the preset facial image corresponding to the wearable device, and performs similarity matching between the preset facial image and one or more persons in the image frames of the video. If the preset facial image is successfully matched with one of the persons, the photographing device displays at least part of the biometric information near that person. Referring to FIG. 18a or FIG. 18b above, the biometric information of the video in FIG. 18a or FIG. 18b is displayed above the person, indicating the person described by the biometric information.
Optionally, the preset facial image is associated with the wearable device. The preset facial image may be preset by the user in the photographing device, uploaded to the photographing device in the form of an image or a video, or preset by the user in the wearable device and then provided by the wearable device to the photographing device; this is not limited in this application.
It should be noted that the photographing device may obtain sensor data or biometric information from the wearable device as soon as the annotation mode is started, and display at least part of the biometric information in real time on the shooting preview interface, providing a preview of the biometric information and improving the user experience. Alternatively, after the annotation mode is started, the photographing device may obtain sensor data or biometric information from the wearable device only after receiving a user operation, and display at least part of the biometric information in real time on the shooting preview interface, so that the user can freely control whether the biometric information is shown or hidden, which also improves the user experience.
Optionally, in some embodiments, during the process of shooting a video, the biometric information may be displayed on the shooting interface in real time for the user to view.
The methods for shooting pictures and videos described above are applicable to the system shown in FIG. 1. Optionally, the present application further provides a shooting system in which one photographing device can be connected to multiple wearable devices of the same type; the photographing device obtains the sensor data or biometric information of the multiple wearable devices of the same type and performs feature recognition for multiple users in the captured pictures/videos. As shown in FIG. 22, the shooting system includes a photographing device 101, multiple wearable devices 201 of the same type, and a third device 301. The photographing device 101 and the multiple wearable devices 201 of the same type establish connections through the third device 301, where:
The photographing device 101 is an electronic device with a camera function, such as a mobile phone, a tablet, or a camera. The wearable devices 201 include wireless earphones, smart watches, smart bands, smart glasses, electronic clothing, electronic bracelets, electronic necklaces, electronic accessories, electronic tattoos, smart mirrors, and the like.
The third device 301 may be a relay device, for example a Bluetooth repeater (hub); the Bluetooth repeater is connected to the photographing device 101 and the multiple wearable devices 201 of the same type through Bluetooth. The third device 301 may also be a cloud server, connected to the photographing device 101 and the multiple wearable devices 201 through a mobile communication module. Through the third device 301, the photographing device 101 can establish connections with multiple wearable devices 201 of the same type. Optionally, in a possible implementation, the wearable devices and the photographing device may also establish connections with the third device 301 through other wireless communication methods such as WiFi; in this case, the third device 301 may also be a router with data processing and computing capabilities.
Based on the foregoing technical principles and the shooting system shown in FIG. 22, the following describes, with examples, the method procedure for shooting pictures provided by this application. Referring to FIG. 23, FIG. 23 shows a flowchart of a method for shooting a picture. The devices involved in the flowchart include n wearable devices, a photographing device, and a third device, where n is a positive integer. The method includes:
Step S301: The n wearable devices establish connections with the third device, and the photographing device establishes a connection with the third device.
The connection may be established by wireless communication methods including, but not limited to, Bluetooth (BT), near field communication (NFC), wireless fidelity (WiFi), WiFi Direct, and a network. In the embodiments of the present application, pairing using Bluetooth is described as an example.
During connection establishment, the n wearable devices and the photographing device obtain each other's connection information (for example, hardware information, interface information, identity information, and so on) through the third device. The photographing device can obtain the sensor data or biometric information of the wearable devices, and the wearable devices can synchronize some functions of the photographing device; for example, a wearable device can synchronously output notification reminders from the photographing device (such as incoming-call reminders and new-message reminders), actively trigger the photographing device to enable the shooting function, view the pictures/video files in the photographing device, and so on.
Step S302: The photographing device detects a user operation that triggers the picture-taking function. For details, reference may be made to the description of step S102, which is not repeated here.
Step S303: The photographing device sends a request message to the third device, where the request message is used to request acquisition of sensor data or biometric information.
For details of this step, reference may be made to the description of step S103. The difference from step S103 is that the photographing device sends the request message to the third device.
Step S304: The third device forwards the request message to the n wearable devices.
After receiving the request message sent by the photographing device, the third device forwards the request message to the n wearable devices.
Step S305: The wearable devices send sensor data or biometric information, and the sensor data or biometric information includes the identity information of the wearable device.
For details of this step, reference may be made to the description of step S104. The difference from step S104 is that the wearable devices send the sensor data or biometric information to the third device, and the sensor data or biometric information also includes the identity information of the wearable device; this identity information can uniquely identify a wearable device.
Step S306: After determining that the sensor data or biometric information of all connected wearable devices has been completely received, the third device sends it to the photographing device.
The third device receives the sensor data or biometric information sent by the wearable devices. After the third device has received the sensor data or biometric information of the n wearable devices, it sends the sensor data or biometric information of the n wearable devices to the photographing device. The sensor data or biometric information of the n wearable devices includes the identity information of the wearable devices, and the third device determines, according to the acquired identity information, whether the sensor data or biometric information of all connected wearable devices has been received.
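The completeness check performed by the third device in steps S305 and S306 can be pictured as a small aggregator keyed on device identity. The sketch below only illustrates that idea; the field name "device_id" and the device identifiers are invented for the example.

```python
# Sketch of the third device's aggregation logic: buffer samples keyed by wearable
# identity and forward the batch only once every connected device has reported.
class Aggregator:
    def __init__(self, connected_ids):
        self.expected = set(connected_ids)   # identities of all connected wearables
        self.buffer = {}

    def on_sample(self, sample: dict):
        """sample must contain a 'device_id' field plus the sensor/biometric payload."""
        self.buffer[sample["device_id"]] = sample
        if self.expected.issubset(self.buffer):
            batch, self.buffer = list(self.buffer.values()), {}
            return batch                     # ready to forward to the photographing device
        return None                          # still waiting for at least one wearable

agg = Aggregator(["watch-A", "watch-B"])
print(agg.on_sample({"device_id": "watch-A", "heart_rate": 60}))   # None: watch-B missing
print(agg.on_sample({"device_id": "watch-B", "heart_rate": 75}))   # full batch forwarded
```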
Step S307: The photographing device fusion-encodes the captured picture information with the biometric information to generate a picture carrying the biometric information, and the biometric information corresponds to the sensor data.
For details of this step, reference may be made to the description of step S105. The difference from step S105 is that, since the sensor data or biometric information received by the photographing device comes from n wearable devices, the biometric information correspondingly includes the biometric information of n users.
The generated picture may be as shown in FIG. 14a above. FIG. 14a shows a picture-viewing interface, and the viewed picture includes two persons. When the user views the picture, the picture also includes biometric information, which includes the biometric information of the two persons: "User A is running" and "User B is running". The displayed content of the biometric information may be a simple sentence or detailed, complete information. The display content, display position, and display form of the biometric information are not limited in this application.
Optionally, in some embodiments, the photographing device stores information of one or more facial images. The photographing device determines, through the identity information of a wearable device, the preset facial image corresponding to that wearable device, and performs similarity matching between the preset facial image and one or more persons in the picture. If the preset facial image is successfully matched with one of the persons, the photographing device displays at least part of the biometric information near that person. The photographing device obtains the identity information of n wearable devices and may perform the above matching process at most n times. Referring to FIG. 14b, FIG. 14b includes two pieces of biometric information and two persons, and each piece of biometric information is displayed near a different person ("User A is running" is displayed above the person wearing the number 12 shirt, and "User B is running" is displayed above the person wearing the number 8 shirt); the display position of each piece of biometric information indicates the person it describes.
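One way to picture the per-user placement described here is as an assignment from wearable identities, via their preset faces, to detected faces. The sketch below is again purely illustrative: the embeddings, thresholds, device identifiers, and data layout are all assumptions.

```python
# Sketch: for n wearable devices, map each device's preset face to the most similar
# detected face and attach that device's biometric text just above the matched face.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)) or 1.0)

def assign_overlays(devices, detections, threshold=0.8):
    """devices: {device_id: {'face': embedding, 'info': text}}
    detections: list of {'box': (x, y, w, h), 'embedding': [...]}
    Returns {device_id: (x, y)} overlay anchors for matched devices only."""
    anchors = {}
    for dev_id, dev in devices.items():
        best = max(detections, key=lambda d: cosine(d['embedding'], dev['face']), default=None)
        if best and cosine(best['embedding'], dev['face']) >= threshold:
            x, y, w, h = best['box']
            anchors[dev_id] = (x, max(0, y - 40))
    return anchors

devices = {"watch-A": {"face": [1.0, 0.0], "info": "User A is running"},
           "watch-B": {"face": [0.0, 1.0], "info": "User B is running"}}
faces = [{"box": (100, 80, 60, 60), "embedding": [0.95, 0.05]},
         {"box": (320, 90, 60, 60), "embedding": [0.05, 0.97]}]
print(assign_overlays(devices, faces))
```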
The preset facial image is associated with the wearable device. The preset facial image may be preset by the user in the photographing device, uploaded to the photographing device in the form of an image or a video, or preset by the user in the wearable device and then provided by the wearable device to the photographing device; this is not limited in this application.
It should be noted that the photographing device may obtain sensor data or biometric information from the wearable devices as soon as the annotation mode is started, and display at least part of the biometric information in real time on the shooting preview interface, providing a preview of the biometric information and improving the user experience. Alternatively, after the annotation mode is started, the photographing device may obtain sensor data or biometric information from the wearable devices only after receiving a user operation, and display at least part of the biometric information in real time on the shooting preview interface, so that the user can freely control whether the biometric information is shown or hidden, which also improves the user experience.
In the embodiments of the present application, the photographing device and the n wearable devices establish connections through the third device. In the annotation mode, the photographing device detects a user operation that triggers the picture-taking function and, while obtaining the picture information, sends a request message to the wearable devices through the third device. The photographing device receives the sensor data or biometric information of the n wearable devices, fusion-encodes the n pieces of biometric information with the picture information, and generates a picture carrying the biometric information, where the biometric information corresponds to the n pieces of sensor data. It can be seen that in the approach provided by this application, the annotation of the picture is completed during picture generation, without subsequent feature extraction on the saved picture, which saves hardware resources; moreover, the sensor data of the wearable devices can be combined to annotate the picture with more accurate and richer features.
Next, this application further provides a method procedure for shooting a video. Referring to FIG. 24, FIG. 24 shows a flowchart of a method for shooting a video.
Step S401: The n wearable devices establish connections with the third device, and the photographing device establishes a connection with the third device. For details, reference may be made to the description of step S301, which is not repeated here.
Step S402: The photographing device detects a user operation that triggers the video-shooting function. For details, reference may be made to the description of step S202, which is not repeated here.
Step S403: The photographing device sends a request message to the third device, where the request message is used to request acquisition of sensor data or biometric information.
For details of this step, reference may be made to the description of step S203. The difference from step S203 is that the photographing device sends the request message to the third device.
Step S404: The third device forwards the request message to each of the n wearable devices.
After receiving the request message sent by the photographing device, the third device forwards the request message to the n wearable devices.
Step S405: The wearable devices periodically send sensor data or biometric information, and the sensor data or biometric information includes the identity information of the wearable device.
For details of this step, reference may be made to the description of step S204. The difference from step S204 is that the n wearable devices send the sensor data or biometric information to the third device, and the sensor data or biometric information also includes the identity information of the wearable device, which can uniquely identify a wearable device.
Step S406: After determining that the sensor data or biometric information of all connected wearable devices within one cycle has been completely received, the third device sends it to the photographing device.
Since the n wearable devices periodically send sensor data or biometric information to the third device, the third device receives n pieces of sensor data or biometric information within one cycle. The sensor data or biometric information of the n wearable devices includes the identity information of the wearable devices, and the third device determines, according to the acquired identity information, whether the sensor data or biometric information of all connected wearable devices has been received. If the sensor data or biometric information of the n wearable devices has been received, it is sent to the photographing device.
Step S407: Each time the photographing device receives sensor data or biometric information, it fusion-encodes the captured video information with the biometric information to generate image frames carrying the biometric information, and the biometric information corresponds to the sensor data.
For details, reference may be made to the description of step S205, which is not repeated here.
Step S408: The photographing device detects a user operation that triggers stopping the video shooting. For details, reference may be made to the description of step S206, which is not repeated here.
Step S409: The photographing device sends a request message to the third device for stopping the acquisition of sensor data or biometric information. For details, reference may be made to the description of step S207; the difference from step S207 is that the photographing device sends the request message to the third device.
Step S410: The third device forwards the request message to each of the n wearable devices.
After receiving the request message sent by the photographing device, the third device forwards the request message to the n wearable devices. The request message is used to stop the acquisition of sensor data or biometric information; the wearable devices receive the request message and stop sending sensor data or biometric information.
Step S411: The photographing device generates and saves the video.
For details of this step, reference may be made to the description of step S208. The difference from step S208 is that the generated video may be as shown in FIG. 18a and FIG. 18b above: when the user views the video, the biometric information is displayed as the video playback progresses, and the displayed content may be a simple sentence or detailed, complete information. The display content, display position, and display form of the biometric information are not limited in this application.
In the embodiments of the present application, the photographing device and the n wearable devices establish connections through the third device. In the smart annotation mode, the photographing device detects a user operation that triggers the video-shooting function and, while obtaining the video information, sends a request message to the wearable devices through the third device. The photographing device receives, through the third device, the sensor data or biometric information of the n wearable devices, fusion-encodes the n pieces of biometric information with the image frames, and generates image frames carrying biometric information as the video is shot, where the biometric information corresponds to the sensor data. It can be seen that in the approach provided by this application, feature recognition of the image frames in the video is completed during video generation, without subsequent feature recognition and extraction on the saved video, which saves hardware resources; moreover, the sensor data of the wearable devices can be combined to perform more accurate and richer feature recognition on the video.
The user operations mentioned in the embodiments of this application include, but are not limited to, operations such as tapping, double-tapping, long-pressing, sliding, hovering gestures, and voice commands.
The first operation mentioned in the embodiments of this application may be a user operation that triggers the picture-taking function or a user operation that triggers the video-shooting function, for example the user operation that triggers the picture-taking function described in steps S102 and S302 above, or the user operation that triggers the video-shooting function described in steps S202 and S402 above.
The first information mentioned in the embodiments of this application may be the biometric information that is fusion-encoded with the picture/video file.
The second information mentioned in the embodiments of this application may be the biometric information displayed on the preview interface.
With regard to the above method procedures, three application scenarios to which the embodiments of this application are applicable are briefly introduced below by way of example.
Scenario 1: a home environment, shooting dance pictures/videos, yoga pictures/videos, and the like of a specific user.
If User 1 wants to use a mobile phone to shoot a dance video for User 2, User 1 can select the annotation mode to shoot for User 2. A smart watch is worn on User 2's wrist, and the mobile phone establishes a connection with the smart watch on User 2's wrist. After the connection is successful, User 1 selects the annotation mode and uses the mobile phone to shoot for User 2; the pictures/videos captured in this way carry biometric information. The biometric information can indicate User 2's health state during the dance, from which User 2's degree of fatigue, physical stress, and so on can be further determined; the biometric information can also indicate User 2's motion posture during the dance, from which it can be further determined whether the posture is correct; and so on.
Before shooting, User 1 can perform configuration on the mobile phone or the smart watch as needed, and the configured content determines the content of the biometric information. For example, if User 1 enables options such as heart-rate information, motion-state information, and emotional-state information on the mobile phone, the biometric information of the captured dance video includes User 2's heart-rate information, motion-state information, emotional-state information, and so on.
Since the home scene is relatively simple, the video will most likely include only User 2, and the biometric information indicates information about User 2; the biometric information of the video can therefore be displayed at a fixed position on the display screen of the mobile phone.
Scenario 2: an outdoor activity scene, shooting pictures/videos of a specific user among multiple people.
If User 1 wants to use a mobile phone to take pictures of User 2 at play in a park, User 1 can select the annotation mode to shoot for User 2. A smart watch is worn on User 2's wrist, and the mobile phone establishes a connection with the smart watch on User 2's wrist. After the connection is successful, User 1 selects the annotation mode and uses the mobile phone to shoot for User 2; the pictures/videos captured in this way carry biometric information. The biometric information can indicate User 2's motion posture, emotional state (for example, happy or excited), and so on, and can also indicate the environmental conditions at the time (for example, air quality, air temperature and humidity, and so on).
Since there are many visitors in the park and the scene is relatively complex, other visitors may appear in the pictures that User 1 takes of User 2. The biometric information indicates information about User 2, so the biometric information of the picture can be displayed near User 2, matching the biometric information with User 2 in the picture.
Scenario 3: professional training venues and fitness venues, shooting training pictures/videos of multiple users among multiple people.
In scenarios such as fitness venues and training rooms, multiple trainees usually exercise together, and the coach needs to know the exercise status of every trainee. The photographing device therefore needs to be connected to multiple wearable devices, for example through the system architecture shown in FIG. 22, in which the photographing device and the multiple wearable devices establish connections through the third device.
Multiple smart watches are respectively worn on the wrists of multiple users; the smart watches establish connections with the third device, and the photographing device establishes a connection with the third device. After the connections are successful, the photographing device is used to shoot the multiple users, and the pictures/videos captured in this way carry biometric information. The biometric information can indicate information such as the users' heart rate, blood pressure, blood glucose, and exercise posture, from which it can be further determined whether the exercise posture is correct and whether a user is fit to increase training intensity. This can be used by a coach for movement guidance and training-fatigue monitoring of the exercising users.
Since Scenario 3 involves multiple users and the biometric information indicates information about multiple users, in the picture/video the biometric information of different trainees can be displayed near the corresponding users respectively, matching each user's biometric information with that user in the picture. This makes it convenient to view the biometric information of different users in a targeted manner.
The embodiments of the present application further provide a computer-readable storage medium. The methods described in the foregoing method embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. If implemented in software, the functions may be stored on or transmitted over a computer-readable medium as one or more instructions or code. Computer-readable media may include computer storage media and communication media, and may also include any medium that can transfer a computer program from one place to another. A storage medium may be any available medium that can be accessed by a computer.
The embodiments of the present application further provide a computer program product. The methods described in the foregoing method embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. If implemented in software, they may be implemented in whole or in part in the form of a computer program product. A computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the procedures or functions described in the foregoing method embodiments are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, a network device, user equipment, or another programmable apparatus.
The foregoing embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, they may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the procedures or functions described in the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted through the computer-readable storage medium. The computer-readable storage medium may be any available medium accessible to a computer, or a data storage device such as a server or a data center that integrates one or more available media. The available media may be magnetic media (for example, floppy disks, hard disks, or magnetic tapes), optical media (for example, DVDs), semiconductor media (for example, solid state disks (SSDs)), and the like.
The steps in the methods of the embodiments of this application may be reordered, combined, or deleted according to actual needs.
The foregoing embodiments are merely intended to describe the technical solutions of this application, not to limit them. Although this application has been described in detail with reference to the foregoing embodiments, a person of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be equivalently replaced; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of this application.

Claims (41)

  1. 一种拍摄系统,其特征在于,包括:电子设备和第一穿戴设备,所述电子设备包括摄像头;其中,A shooting system, comprising: an electronic device and a first wearable device, the electronic device comprising a camera; wherein,
    所述电子设备,用于与所述第一穿戴设备建立连接;the electronic device for establishing a connection with the first wearable device;
    所述电子设备,还用于接收第一操作;The electronic device is further configured to receive the first operation;
    所述电子设备,还用于响应于所述第一操作,获取所述摄像头采集的多媒体文件;The electronic device is further configured to acquire the multimedia file collected by the camera in response to the first operation;
    所述第一穿戴设备,用于通过至少一个传感器检测第一传感器数据;the first wearable device for detecting first sensor data through at least one sensor;
    所述电子设备,还用于获取第一信息,所述第一信息和所述第一传感器数据对应;The electronic device is further configured to acquire first information, where the first information corresponds to the first sensor data;
    所述电子设备,还用于保存所述多媒体文件,所述多媒体文件和所述第一信息关联。The electronic device is further configured to save the multimedia file, where the multimedia file is associated with the first information.
  2. 根据权利要求1所述的系统,其特征在于,所述第一穿戴设备,还用于:The system according to claim 1, wherein the first wearable device is further configured to:
    根据所述第一传感器数据确定出所述第一信息;determining the first information according to the first sensor data;
    向所述电子设备发送所述第一信息。The first information is sent to the electronic device.
  3. 根据权利要求1所述的系统,其特征在于,所述第一穿戴设备,还用于向所述电子设备发送所述第一传感器数据;The system according to claim 1, wherein the first wearable device is further configured to send the first sensor data to the electronic device;
    所述电子设备,还用于根据所述第一传感器数据确定出所述第一信息。The electronic device is further configured to determine the first information according to the first sensor data.
  4. 根据权利要求1所述的系统,其特征在于,所述电子设备,具体用于:响应于所述第一操作,获取所述第一信息。The system according to claim 1, wherein the electronic device is specifically configured to acquire the first information in response to the first operation.
  5. 根据权利要求2所述的系统,其特征在于,所述电子设备还用于:响应于所述第一操作,向所述第一穿戴设备发送第一请求消息;The system according to claim 2, wherein the electronic device is further configured to: in response to the first operation, send a first request message to the first wearable device;
    所述第一穿戴设备,具体用于:响应于所述第一请求消息,向所述电子设备发送所述第一信息。The first wearable device is specifically configured to: in response to the first request message, send the first information to the electronic device.
  6. 根据权利要求3所述的系统,其特征在于,所述电子设备还用于:响应于所述第一操作,向所述第一穿戴设备发送第二请求消息;The system according to claim 3, wherein the electronic device is further configured to: in response to the first operation, send a second request message to the first wearable device;
    所述第一穿戴设备,具体用于:响应于所述第二请求消息,向所述电子设备发送所述第一传感器数据。The first wearable device is specifically configured to: in response to the second request message, send the first sensor data to the electronic device.
  7. 根据权利要求1所述的系统,其特征在于,所述电子设备,还用于:显示拍摄预览界面,所述拍摄预览界面中包括拍摄按钮,所述第一操作包括作用于所述拍摄按钮的输入操作。The system according to claim 1, wherein the electronic device is further configured to: display a shooting preview interface, wherein the shooting preview interface includes a shooting button, and the first operation includes an action acting on the shooting button. Enter an action.
  8. 根据权利要求1所述的系统,其特征在于,所述多媒体文件的属性信息中包括所述第一信息。The system according to claim 1, wherein the attribute information of the multimedia file includes the first information.
  9. 根据权利要求1所述的系统,其特征在于,所述电子设备,用于与所述第一穿戴设备建立连接包括:响应于所述电子设备进入预设拍摄模式,所述电子设备与所述第一穿戴设备建立连接。The system according to claim 1, wherein the electronic device, for establishing a connection with the first wearable device comprises: in response to the electronic device entering a preset shooting mode, the electronic device is connected to the first wearable device. The first wearable device establishes a connection.
  10. 根据权利要求1所述的系统,其特征在于,所述第一穿戴设备,还用于:接收第二操作;The system according to claim 1, wherein the first wearable device is further configured to: receive a second operation;
    所述第一穿戴设备,还用于响应于所述第二操作,指示所述电子设备开启所述摄像头;The first wearable device is further configured to instruct the electronic device to turn on the camera in response to the second operation;
    所述第一电子设备,还用于显示拍摄预览界面,所述拍摄预览界面显示所述摄像头采集的预览图像;所述第一操作包括作用于所述拍摄预览界面的操作。The first electronic device is further configured to display a shooting preview interface, where the shooting preview interface displays a preview image collected by the camera; the first operation includes an operation acting on the shooting preview interface.
  11. 根据权利要求1所述的系统,其特征在于,所述电子设备,还用于显示所述多媒体文件和至少部分所述第一信息。The system according to claim 1, wherein the electronic device is further configured to display the multimedia file and at least part of the first information.
  12. 根据权利要求11所述的系统,其特征在于,所述电子设备,还用于显示所述多媒体文件和至少部分所述第一信息,包括:The system according to claim 11, wherein the electronic device is further configured to display the multimedia file and at least part of the first information, comprising:
    所述电子设备,还用于响应于所述多媒体文件中包括预设面部图像,显示所述多媒体文件和至少部分所述第一信息;所述预设面部图像与所述第一穿戴设备对应。The electronic device is further configured to display the multimedia file and at least part of the first information in response to the multimedia file including a preset facial image; the preset facial image corresponds to the first wearable device.
  13. 根据权利要求11所述的系统,其特征在于,所述电子设备,还用于显示所述多媒体文件和至少部分所述第一信息,包括:The system according to claim 11, wherein the electronic device is further configured to display the multimedia file and at least part of the first information, comprising:
    所述电子设备,还用于响应于所述多媒体文件中包括第一面部图像和第二面部图像,且所述第一面部图像与预设面部图像匹配,在所述多媒体文件的第一区域显示至少部分所述第一信息;其中,所述预设面部图像与所述第一穿戴设备对应;所述第一区域与所述第一面部图像的显示区域的距离,小于所述第一区域与所述第二面部图像的显示区域的距离。The electronic device is further configured to respond that the multimedia file includes a first facial image and a second facial image, and the first facial image matches a preset facial image, in the first facial image of the multimedia file. area displays at least part of the first information; wherein the preset facial image corresponds to the first wearable device; the distance between the first area and the display area of the first facial image is smaller than the first The distance of an area from the display area of the second facial image.
  14. 根据权利要求1所述的系统,其特征在于,所述电子设备,还用于:在接收所述第一操作之前,显示拍摄预览界面,所述拍摄预览界面上包括摄像头采集的预览图像;The system according to claim 1, wherein the electronic device is further configured to: before receiving the first operation, display a shooting preview interface, wherein the shooting preview interface includes a preview image collected by a camera;
    所述电子设备,还用于在所述预览图像上显示至少部分第二信息,所述第二信息和所述第一穿戴设备检测出的第二传感器数据对应。The electronic device is further configured to display at least part of second information on the preview image, where the second information corresponds to second sensor data detected by the first wearable device.
  15. The system according to claim 14, wherein the electronic device being further configured to display at least part of the second information on the preview image comprises:
    the electronic device is further configured to: in response to the preview image including a preset facial image, display the preview image and at least part of the second information, wherein the preset facial image corresponds to the first wearable device.
  16. The system according to claim 14, wherein the electronic device being further configured to display at least part of the second information on the preview image comprises:
    the electronic device is further configured to: in response to the preview image including a third facial image and a fourth facial image, and the third facial image matching the preset facial image, display at least part of the second information in a second area of the preview image, wherein the preset facial image corresponds to the first wearable device, and a distance between the second area and a display area of the third facial image is smaller than a distance between the second area and a display area of the fourth facial image.
  17. The system according to claim 15, wherein the electronic device is further configured to: in response to the preview image not including the preset facial image, output a first prompt, wherein the first prompt is used to prompt the user to aim the camera at a face.
  18. The system according to any one of claims 1-17, wherein the first information includes at least one of the following:
    health state information, exercise state information, or emotional state information.
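Claims 18 and 37 leave open how the health, exercise, or emotional state is derived from the sensor data. Purely as a hypothetical illustration (the thresholds and labels below are invented for this sketch, not taken from the application), a minimal mapping from heart-rate and acceleration readings to such state labels might look like this:

```python
# Hypothetical sketch: deriving the "first information" of claim 18 from raw
# wearable sensor readings. All thresholds and labels are illustrative only.
from typing import Dict

def derive_state_info(heart_rate_bpm: float, accel_magnitude_g: float) -> Dict[str, str]:
    """Map raw sensor data to coarse exercise/health/emotional state labels."""
    if accel_magnitude_g > 1.5:
        exercise = "running"
    elif accel_magnitude_g > 1.1:
        exercise = "walking"
    else:
        exercise = "at rest"

    if heart_rate_bpm > 140:
        health = "high heart rate"
    elif heart_rate_bpm < 50:
        health = "low heart rate"
    else:
        health = "normal heart rate"

    # A crude proxy: an elevated heart rate while at rest is read as excitement.
    emotion = "excited" if (exercise == "at rest" and heart_rate_bpm > 100) else "calm"

    return {"exercise_state": exercise, "health_state": health, "emotional_state": emotion}

print(derive_state_info(heart_rate_bpm=112.0, accel_magnitude_g=1.0))
# {'exercise_state': 'at rest', 'health_state': 'normal heart rate', 'emotional_state': 'excited'}
```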
  19. The system according to any one of claims 1-17, wherein the first sensor data includes data detected by at least one sensor, and the at least one sensor includes at least one of the following:
    an acceleration sensor, a gyroscope sensor, a geomagnetic sensor, an atmospheric pressure sensor, a heart rate sensor, a blood pressure sensor, an electrocardiogram sensor, an electromyography sensor, a body temperature sensor, a galvanic skin response sensor, an air temperature and humidity sensor, a light sensor, or a bone conduction sensor.
  20. The system according to claim 1, wherein the system further comprises a second wearable device;
    the electronic device is further configured to establish a connection with the second wearable device;
    the second wearable device is configured to detect fourth sensor data through at least one sensor; and
    the first information further corresponds to the fourth sensor data.
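Claim 20 has the first information correspond to sensor data from two wearable devices. A minimal sketch of one way the readings could be combined is shown below; the merge-by-averaging policy and all field names are assumptions made for the example, not a statement of how the claimed system works.

```python
# Illustrative sketch (names invented) of composing the "first information" of
# claim 20 from sensor data reported by two connected wearable devices: each
# device contributes its own readings and the result is one combined record
# associated with the captured multimedia file.
from typing import Dict, List

def combine_first_info(device_readings: List[Dict[str, float]]) -> Dict[str, float]:
    """Merge per-device sensor readings; overlapping keys are averaged."""
    sums: Dict[str, float] = {}
    counts: Dict[str, int] = {}
    for readings in device_readings:
        for key, value in readings.items():
            sums[key] = sums.get(key, 0.0) + value
            counts[key] = counts.get(key, 0) + 1
    return {key: sums[key] / counts[key] for key in sums}

first_wearable = {"heart_rate_bpm": 96.0, "skin_temp_c": 33.1}
second_wearable = {"heart_rate_bpm": 100.0, "blood_pressure_sys": 118.0}
print(combine_first_info([first_wearable, second_wearable]))
# {'heart_rate_bpm': 98.0, 'skin_temp_c': 33.1, 'blood_pressure_sys': 118.0}
```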
  21. A photographing method, applied to an electronic device including a camera, wherein the method comprises:
    the electronic device establishes a connection with a first wearable device;
    the electronic device receives a first operation;
    in response to the first operation, the electronic device acquires a multimedia file collected by the camera;
    the electronic device acquires first information, wherein the first information corresponds to first sensor data detected by at least one sensor of the first wearable device; and
    the electronic device saves the multimedia file, wherein the multimedia file is associated with the first information.
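To make the sequence of steps in claim 21 concrete, the sketch below walks through the same flow with the camera and the wearable link replaced by stubs. Every function, file name, and field here is an illustrative assumption rather than an API of any real device or of the claimed system.

```python
# A minimal, hypothetical sketch of the flow in claim 21. The camera and the
# wearable-device link are stubbed out; all names are invented for this sketch.
import json
import time
from pathlib import Path

def connect_wearable(device_id: str) -> dict:
    """Stand-in for establishing a connection with the first wearable device."""
    return {"device_id": device_id, "connected": True}

def capture_multimedia(save_dir: Path) -> Path:
    """Stand-in for the camera capturing a photo when the first operation arrives."""
    photo = save_dir / f"IMG_{int(time.time())}.jpg"
    photo.write_bytes(b"\xff\xd8\xff\xd9")  # placeholder JPEG start/end markers
    return photo

def acquire_first_info(connection: dict) -> dict:
    """Stand-in for reading sensor-derived information from the wearable."""
    return {"heart_rate_bpm": 98, "exercise_state": "walking"}

def save_with_association(photo: Path, first_info: dict) -> Path:
    """Associate the first information with the multimedia file (sidecar JSON here)."""
    sidecar = photo.with_suffix(".json")
    sidecar.write_text(json.dumps({"file": photo.name, "first_info": first_info}))
    return sidecar

if __name__ == "__main__":
    conn = connect_wearable("wearable-1")           # step 1: establish connection
    photo_path = capture_multimedia(Path("."))      # steps 2-3: first operation + capture
    info = acquire_first_info(conn)                 # step 4: acquire first information
    print(save_with_association(photo_path, info))  # step 5: save with association
```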
  22. The method according to claim 21, wherein the electronic device acquiring the first information comprises:
    the electronic device acquires the first sensor data; and
    the electronic device determines the first information based on the first sensor data.
  23. The method according to claim 21, wherein the electronic device acquiring the first information comprises:
    the electronic device acquires the first information determined by the first wearable device based on the first sensor data.
  24. The method according to claim 21, wherein the electronic device acquiring the first information comprises:
    in response to the first operation, the electronic device acquires the first information.
  25. The method according to claim 22, wherein in response to the first operation, the electronic device acquiring the first information comprises:
    in response to the first operation, the electronic device sends a first request message to the first wearable device, wherein the first request message is used to request the first sensor data detected by the at least one sensor of the first wearable device; and
    the electronic device acquires the first information, wherein the first information corresponds to the first sensor data.
  26. The method according to claim 23, wherein in response to the first operation, the electronic device acquiring the first information comprises:
    in response to the first operation, the electronic device sends a second request message to the first wearable device, wherein the second request message is used to request the first information determined by the first wearable device based on the first sensor data; and
    the electronic device acquires the first information, wherein the first information corresponds to the first sensor data.
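Claims 25 and 26 differ only in what the request message asks the wearable for: raw sensor data in one case, the already-derived first information in the other. The toy exchange below illustrates that distinction; the message format, the request-type strings, and the wearable's replies are all invented for this sketch.

```python
# Toy illustration of the two request messages in claims 25 and 26: the phone
# either asks the wearable for raw sensor data (first request message) or for
# the already-derived first information (second request message).
import json

class FakeWearable:
    """Stands in for the first wearable device on the other end of the link."""
    def handle(self, message: str) -> str:
        request = json.loads(message)
        if request["type"] == "GET_SENSOR_DATA":   # claim 25: raw first sensor data
            return json.dumps({"heart_rate_bpm": 97, "accel_g": [0.0, 0.1, 1.0]})
        if request["type"] == "GET_FIRST_INFO":    # claim 26: device-derived first information
            return json.dumps({"exercise_state": "walking", "emotional_state": "calm"})
        return json.dumps({"error": "unknown request"})

def request_from_wearable(wearable: FakeWearable, request_type: str) -> dict:
    reply = wearable.handle(json.dumps({"type": request_type}))
    return json.loads(reply)

wearable = FakeWearable()
print(request_from_wearable(wearable, "GET_SENSOR_DATA"))  # phone derives the first info itself
print(request_from_wearable(wearable, "GET_FIRST_INFO"))   # wearable derives it instead
```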
  27. The method according to claim 21, wherein the electronic device receiving the first operation comprises:
    the electronic device displays a shooting preview interface, wherein the shooting preview interface includes a shooting button; and
    the electronic device receives the first operation, wherein the first operation includes an input operation acting on the shooting button.
  28. The method according to claim 21, wherein the attribute information of the multimedia file includes the first information.
  29. The method according to claim 21, wherein the electronic device establishing a connection with the first wearable device comprises:
    in response to the electronic device entering a preset shooting mode, the electronic device establishes a connection with the first wearable device.
  30. The method according to claim 21, wherein the method further comprises:
    the electronic device displays the multimedia file and at least part of the first information.
  31. The method according to claim 30, wherein the electronic device displaying the multimedia file and at least part of the first information specifically comprises:
    in response to the multimedia file including a preset facial image, the electronic device displays the multimedia file and at least part of the first information, wherein the preset facial image corresponds to the first wearable device.
  32. The method according to claim 30, wherein the electronic device displaying the multimedia file and at least part of the first information specifically comprises:
    in response to the multimedia file including a first facial image and a second facial image, and the first facial image matching a preset facial image, the electronic device displays at least part of the first information in a first area of the multimedia file, wherein the preset facial image corresponds to the first wearable device, and a distance between the first area and a display area of the first facial image is smaller than a distance between the first area and a display area of the second facial image.
  33. The method according to claim 21, wherein before the electronic device receives the first operation, the method further comprises:
    the electronic device displays a shooting preview interface, wherein the shooting preview interface includes a preview image collected by the camera; and
    the electronic device displays at least part of second information on the preview image, wherein the second information corresponds to second sensor data detected by the first wearable device.
  34. The method according to claim 33, wherein the electronic device displaying at least part of the second information on the preview image specifically comprises:
    in response to the preview image including a preset facial image, the electronic device displays the preview image and at least part of the second information, wherein the preset facial image corresponds to the first wearable device.
  35. The method according to claim 33, wherein the electronic device displaying at least part of the second information on the preview interface specifically comprises:
    in response to the preview image including a third facial image and a fourth facial image, and the third facial image matching the preset facial image, the electronic device displays at least part of the second information in a second area of the preview image, wherein the preset facial image corresponds to the first wearable device, and a distance between the second area and a display area of the third facial image is smaller than a distance between the second area and a display area of the fourth facial image.
  36. The method according to claim 34, wherein the method further comprises:
    in response to the preview image not including the preset facial image, the electronic device outputs a first prompt, wherein the first prompt is used to prompt the user to aim the camera at a face.
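Claims 33-36 describe the preview-time behaviour: overlay part of the second information near the recognized preset face, and prompt the user when that face is absent. The following sketch stubs out face detection and matching with plain data; all names and the overlay/prompt strings are assumptions for illustration only.

```python
# Hypothetical sketch of the preview-time behaviour in claims 33-36: if the
# preset face (the wearer of the first wearable device) is found in the preview
# image, overlay part of the second information near it; otherwise output the
# first prompt asking the user to aim at a face.
from typing import List, Optional, Tuple

Face = Tuple[str, Tuple[int, int]]   # (identity label, face-center position in pixels)

def render_preview_overlay(faces: List[Face],
                           preset_identity: str,
                           second_info: str) -> str:
    matched: Optional[Face] = next((f for f in faces if f[0] == preset_identity), None)
    if matched is None:
        return "PROMPT: please aim the camera at a face"    # claim 36's first prompt
    x, y = matched[1]
    return f"OVERLAY '{second_info}' near ({x}, {y - 60})"  # claims 34/35: near the matched face

faces_in_preview = [("user_a", (320, 400)), ("stranger", (700, 380))]
print(render_preview_overlay(faces_in_preview, "user_a", "HR 98 bpm, walking"))
print(render_preview_overlay([("stranger", (700, 380))], "user_a", "HR 98 bpm, walking"))
```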
  37. The method according to any one of claims 21-36, wherein the first information includes at least one of the following:
    health state information, exercise state information, or emotional state information.
  38. The method according to any one of claims 21-36, wherein the first sensor data includes data detected by at least one sensor, and the at least one sensor includes at least one of the following:
    an acceleration sensor, a gyroscope sensor, a geomagnetic sensor, an atmospheric pressure sensor, a heart rate sensor, a blood pressure sensor, an electrocardiogram sensor, an electromyography sensor, a body temperature sensor, a galvanic skin response sensor, an air temperature and humidity sensor, a light sensor, or a bone conduction sensor.
  39. The method according to claim 21, wherein the method further comprises:
    the electronic device establishes a connection with a second wearable device, wherein the second wearable device is configured to detect fourth sensor data through at least one sensor, and the first information further corresponds to the fourth sensor data.
  40. An electronic device, comprising: one or more processors, one or more memories, and at least one camera, wherein the one or more memories are coupled to the one or more processors; the one or more memories are configured to store computer program code, and the computer program code includes computer instructions; and when the computer instructions are run on the one or more processors, the electronic device is caused to perform the photographing method according to any one of claims 21-39.
  41. A computer-readable storage medium, comprising computer instructions, wherein when the computer instructions are run on an electronic device, the electronic device is caused to perform the photographing method according to any one of claims 21-39.
PCT/CN2021/112362 2020-08-19 2021-08-12 Photographing method and photographing system WO2022037479A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010839365.9A CN114079730B (en) 2020-08-19 2020-08-19 Shooting method and shooting system
CN202010839365.9 2020-08-19

Publications (1)

Publication Number Publication Date
WO2022037479A1 true WO2022037479A1 (en) 2022-02-24

Family

ID=80281788

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/112362 WO2022037479A1 (en) 2020-08-19 2021-08-12 Photographing method and photographing system

Country Status (2)

Country Link
CN (1) CN114079730B (en)
WO (1) WO2022037479A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115439307A (en) * 2022-08-08 2022-12-06 荣耀终端有限公司 Style conversion method, style conversion model generation method, and style conversion system

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116955662A (en) * 2022-04-14 2023-10-27 华为技术有限公司 Media file management method, device, equipment and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020113757A1 (en) * 2000-12-28 2002-08-22 Jyrki Hoisko Displaying an image
CN1510903A * 2002-11-25 2004-07-07 伊斯曼柯达公司 Image method and system
US20070124292A1 (en) * 2001-10-30 2007-05-31 Evan Kirshenbaum Autobiographical and other data collection system
CN101169955A (en) * 2006-10-27 2008-04-30 三星电子株式会社 Method and apparatus for generating meta data of content
CN105830066A (en) * 2013-12-19 2016-08-03 微软技术许可有限责任公司 Tagging images with emotional state information
CN107320114A (en) * 2017-06-29 2017-11-07 京东方科技集团股份有限公司 Shooting processing method, system and its equipment detected based on brain wave

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9189682B2 (en) * 2014-02-13 2015-11-17 Apple Inc. Systems and methods for sending digital images
US9594403B2 (en) * 2014-05-05 2017-03-14 Sony Corporation Embedding biometric data from a wearable computing device in metadata of a recorded image
JP6379424B2 (en) * 2014-10-20 2018-08-29 シャープ株式会社 Image recording device

Also Published As

Publication number Publication date
CN114079730B (en) 2023-09-12
CN114079730A (en) 2022-02-22

Similar Documents

Publication Publication Date Title
WO2020211701A1 (en) Model training method, emotion recognition method, related apparatus and device
WO2020078299A1 (en) Method for processing video file, and electronic device
WO2020151387A1 (en) Recommendation method based on user exercise state, and electronic device
WO2020259452A1 (en) Full-screen display method for mobile terminal, and apparatus
WO2021244457A1 (en) Video generation method and related apparatus
WO2021104485A1 (en) Photographing method and electronic device
WO2020029306A1 (en) Image capture method and electronic device
US20220176200A1 (en) Method for Assisting Fitness and Electronic Apparatus
WO2022095788A1 (en) Panning photography method for target user, electronic device, and storage medium
CN113645351A (en) Application interface interaction method, electronic device and computer-readable storage medium
WO2021258814A1 (en) Video synthesis method and apparatus, electronic device, and storage medium
WO2021052139A1 (en) Gesture input method and electronic device
WO2022042766A1 (en) Information display method, terminal device, and computer readable storage medium
WO2020192761A1 (en) Method for recording user emotion, and related apparatus
WO2022037479A1 (en) Photographing method and photographing system
WO2022012418A1 (en) Photographing method and electronic device
WO2022007707A1 (en) Home device control method, terminal device, and computer-readable storage medium
CN114444000A (en) Page layout file generation method and device, electronic equipment and readable storage medium
WO2023029916A1 (en) Annotation display method and apparatus, terminal device, and readable storage medium
WO2022206764A1 (en) Display method, electronic device, and system
WO2022152174A1 (en) Screen projection method and electronic device
WO2022166435A1 (en) Picture sharing method and electronic device
EP4195073A1 (en) Content recommendation method, electronic device and server
WO2021147483A1 (en) Data sharing method and apparatus
CN114995715A (en) Control method of floating ball and related device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21857574

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21857574

Country of ref document: EP

Kind code of ref document: A1