WO2017084428A1 - Information processing method, electronic device and computer storage medium - Google Patents

Information processing method, electronic device and computer storage medium

Info

Publication number
WO2017084428A1
Authority
WO
WIPO (PCT)
Prior art keywords
light image
information
user
visible light
temperature
Application number
PCT/CN2016/099295
Other languages
English (en)
Chinese (zh)
Inventor
戴向东 (DAI Xiangdong)
Original Assignee
努比亚技术有限公司 (Nubia Technology Co., Ltd.)
Application filed by 努比亚技术有限公司 (Nubia Technology Co., Ltd.)
Publication of WO2017084428A1


Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00: Measuring for diagnostic purposes; Identification of persons
    • A61B 5/0059: Measuring for diagnostic purposes using light, e.g. diagnosis by transillumination, diascopy, fluorescence
    • A61B 5/0077: Devices for viewing the surface of the body, e.g. camera, magnifying lens
    • A61B 5/0082: Measuring using light adapted for particular medical purposes
    • A61B 5/0084: Measuring using light adapted for introduction into the body, e.g. by catheters
    • A61B 5/0086: Measuring using light for introduction into the body, using infrared radiation
    • A61B 5/44: Detecting, measuring or recording for evaluating the integumentary system, e.g. skin, hair or nails
    • A61B 5/441: Skin evaluation, e.g. for skin disorder diagnosis

Definitions

  • the embodiments of the present invention relate to the field of information technologies, and in particular, to an information processing method, an electronic device, and a computer storage medium.
  • embodiments of the present invention are expected to provide an information processing method, an electronic device, and a computer storage medium, which can at least partially solve the above problems.
  • a first aspect of the embodiments of the present invention provides an information processing method, where the method includes: collecting the user's face by using infrared light to form an infrared light image, and collecting the user's face by using visible light to form a visible light image; analyzing the infrared light image to determine temperature information of the user's face, and analyzing the visible light image to determine skin color information of the user's face; and combining the temperature information and the skin color information to analyze the health status information of the user.
  • the analyzing the health status information of the user by using the temperature information and the skin color information comprises: analyzing the temperature information and the skin color information by using a machine learning algorithm to obtain the health status information of the user.
  • the method further includes: performing algorithm training using sample data as input data of the learning machine to obtain a training algorithm, the sample data including sample temperature information and sample skin color information; verifying the training algorithm by using test data to obtain a verification result, the test data including test temperature information and test skin color information; and, if the verification result indicates that the training algorithm meets a preset condition, determining that the training algorithm is the machine learning algorithm.
  • the method further includes: using the visible light image to locate the distribution positions of the parts of the user's face; and the analyzing the infrared light image to determine temperature information of the user's face includes: combining the distribution positions and the infrared light image to determine the temperature value of each organ of the user's face and the temperature differences between the organs.
  • the combining the distribution positions and the infrared light image to determine the temperature value of each organ of the user's face and the temperature differences between the organs includes: extracting the pixel value of a specified organ in the infrared light image; converting the pixel value into a temperature value; and calculating the temperature differences between the organs based on the obtained temperature values.
  • the analyzing the visible light image to determine skin color information of the user's face includes: extracting the color value of the pixels at the location of the user's face from the visible light image to obtain the skin color information.
  • the method further includes: acquiring a predetermined parameter that affects the acquisition forming the visible light image; and correcting the color value according to the predetermined parameter, the corrected color value being the skin color information.
  • the acquiring a predetermined parameter that affects the acquisition forming the visible light image includes acquiring a color temperature value of the acquisition unit that forms the visible light image; the color value is corrected according to the color temperature value to obtain the corrected color value.
  • the acquiring a predetermined parameter that affects the acquisition forming the visible light image includes acquiring an ambient light value of the environment in which the visible light image is formed; the color value is corrected according to the ambient light value to obtain the corrected color value.
  • the collecting the user's face by using infrared light to form the infrared light image and collecting the user's face by using visible light to form the visible light image includes: separately acquiring the infrared light image and the visible light image by using a binocular acquisition unit.
  • a second aspect of the embodiments of the present invention provides an electronic device, where the electronic device includes: an acquisition unit configured to collect the user's face by using infrared light to form an infrared light image and to collect the user's face by using visible light to form a visible light image; an analyzing unit configured to analyze the infrared light image to determine temperature information of the user's face and to analyze the visible light image to determine skin color information of the user's face; and an obtaining unit configured to combine the temperature information and the skin color information to analyze the health status information of the user.
  • the obtaining unit is configured to analyze the temperature information and the skin color information by using a machine learning algorithm to obtain health status information of the user.
  • the electronic device further includes: a training unit configured to perform algorithm training using the sample data as input data of the learning machine to obtain a training algorithm, the sample data including sample temperature information and sample skin color information; a verification unit configured to verify the training algorithm by using test data to obtain a verification result, the test data including test temperature information and test skin color information; and a determining unit configured to determine that the training algorithm is the machine learning algorithm if the verification result indicates that the training algorithm meets a preset condition.
  • the electronic device further includes: a positioning unit configured to locate the distribution positions of the parts of the user's face by using the visible light image; the analyzing unit is further configured to combine the distribution positions and the infrared light image to determine the temperature value of each organ of the user's face and the temperature differences between the organs.
  • the analyzing unit is configured to extract a pixel value of a specified organ in the infrared light image; convert the pixel value into a temperature value; and calculate a temperature difference between the organs according to the obtained temperature value .
  • the analyzing unit is configured to extract a color value of a pixel at a position where the user's face is located from the visible light image to obtain the skin color information.
  • the electronic device further includes:
  • an acquiring unit configured to acquire a predetermined parameter that affects the acquisition forming the visible light image;
  • the analyzing unit is further configured to correct the color value according to the predetermined parameter, the corrected color value being the skin color information.
  • according to the foregoing solution, the acquiring unit is configured to acquire a color temperature value of the acquisition unit that forms the visible light image;
  • the analyzing unit is configured to correct the color value according to the color temperature value to obtain the corrected color value.
  • the acquiring unit is a binocular acquisition unit configured to separately acquire the infrared light image and the visible light image at the same time.
  • the embodiment of the invention further provides a computer storage medium, wherein the computer storage medium stores computer executable instructions, and the computer executable instructions are used to execute the information processing method of any of the foregoing items.
  • the information processing method, the electronic device, and the computer storage medium according to the embodiments of the present invention can collect an infrared light image and a visible light image of a user's face, detect the temperature information of the user's face from the infrared light image, and determine the skin color information of the user's face from the visible light image; the user's health status information is then jointly determined based on the temperature information and the skin color information. In this way, the electronic device can be used to easily monitor the user's health status, better utilizing the hardware and software resources of existing electronic devices and improving the intelligence of the electronic device and user satisfaction.
  • FIG. 1 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
  • FIG. 2 is a schematic structural diagram of a communication system to which an electronic device can be applied according to an embodiment of the present invention
  • FIG. 3 is a schematic flowchart diagram of a first information processing method according to an embodiment of the present disclosure
  • FIG. 4 is a schematic flowchart of a second information processing method according to an embodiment of the present invention.
  • FIG. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
  • FIG. 6 is a schematic flowchart diagram of a third information processing method according to an embodiment of the present disclosure.
  • FIG. 7 is a schematic diagram of an effect of superimposing a visible light image and an infrared light image according to an embodiment of the present invention.
  • FIG. 8 is a schematic diagram of a correspondence between a user's face and a body organ according to an embodiment of the present invention.
  • FIG. 9 is a schematic flowchart of a training learning machine according to an embodiment of the present invention.
  • FIG. 10 is a schematic diagram showing an axis representation of a degree of health according to an embodiment of the present invention.
  • FIG. 11 is a schematic flowchart diagram of a fourth information processing method according to an embodiment of the present invention.
  • the information processing method described in this embodiment can be applied to various types of electronic devices.
  • the electronic device in this embodiment may include various types of mobile terminals or fixed terminals.
  • the mobile terminal can be implemented in various forms.
  • the terminal described in the present invention may include mobile terminals such as a mobile phone, a smart phone, a notebook computer, a digital broadcast receiver, a PDA (Personal Digital Assistant), a PAD (tablet computer), a PMP (Portable Multimedia Player), and a navigation device, as well as fixed terminals such as a digital TV and a desktop computer.
  • in the following description, it is assumed that the terminal is a mobile terminal.
  • those skilled in the art will appreciate that configurations in accordance with embodiments of the present invention can be applied to fixed type terminals in addition to components that are specifically for mobile purposes.
  • FIG. 1 is a schematic diagram showing the hardware structure of a mobile terminal embodying various embodiments of the present invention.
  • the mobile terminal 100 may include a wireless communication unit 110, an A/V (Audio/Video) input unit 120, a user input unit 130, an output unit 150, a memory 160, an interface unit 170, a controller 180, a power supply unit 190, and the like.
  • Figure 1 illustrates a mobile terminal having various components, but it should be understood that not all illustrated components are required to be implemented. More or fewer components can be implemented instead. The elements of the mobile terminal will be described in detail below.
  • Wireless communication unit 110 typically includes one or more components that permit radio communication between mobile terminal 100 and a wireless communication system or network.
  • the wireless communication unit may include at least one of a broadcast receiving module 111, a mobile communication module 112, a wireless internet module 113, a short-range communication module 114, and a location information module 115.
  • the broadcast receiving module 111 receives a broadcast signal and/or broadcast associated information from an external broadcast management server via a broadcast channel.
  • the broadcast channel can include a satellite channel and/or a terrestrial channel.
  • the broadcast management server may be a server that generates and transmits a broadcast signal and/or broadcast associated information, or a server that receives a previously generated broadcast signal and/or broadcast associated information and transmits it to the terminal.
  • the broadcast signal may include a TV broadcast signal, a radio broadcast signal, a data broadcast signal, and the like.
  • the broadcast signal may further include a broadcast signal combined with a TV or radio broadcast signal.
  • the broadcast associated information may also be provided via a mobile communication network, and in this case, the broadcast associated information may be received by the mobile communication module 112.
  • the broadcast signal may exist in various forms, for example, it may exist in the form of Digital Multimedia Broadcasting (DMB) Electronic Program Guide (EPG), Digital Video Broadcasting Handheld (DVB-H) Electronic Service Guide (ESG), and the like.
  • the broadcast receiving module 111 can receive a signal broadcast by using various types of broadcast systems.
  • the broadcast receiving module 111 can receive digital broadcasts by using digital broadcast systems such as Digital Multimedia Broadcasting-Terrestrial (DMB-T), Digital Multimedia Broadcasting-Satellite (DMB-S), Digital Video Broadcasting-Handheld (DVB-H), the Media Forward Link Only (MediaFLO) data broadcast system, Integrated Services Digital Broadcasting-Terrestrial (ISDB-T), and the like.
  • the broadcast receiving module 111 can be constructed to be suitable for various broadcast systems that provide broadcast signals, as well as the above-described digital broadcast systems.
  • the broadcast signal and/or broadcast associated information received via the broadcast receiving module 111 may be stored in the memory 160 (or other type of storage medium).
  • the mobile communication module 112 transmits the radio signals to and/or receives radio signals from at least one of a base station (e.g., an access point, a Node B, etc.), an external terminal, and a server.
  • Such radio signals may include voice call signals, video call signals, or various types of data transmitted and/or received in accordance with text and/or multimedia messages.
  • the wireless internet module 113 supports wireless internet access of the mobile terminal.
  • the module can be internally or externally coupled to the terminal.
  • the wireless Internet access technologies involved in the module may include WLAN (Wireless LAN, Wi-Fi), Wibro (Wireless Broadband), Wimax (Worldwide Interoperability for Microwave Access), HSDPA (High Speed Downlink Packet Access), and the like.
  • the short range communication module 114 is a module for supporting short range communication.
  • Some examples of short-range communication technologies include Bluetooth™, Radio Frequency Identification (RFID), Infrared Data Association (IrDA), Ultra Wideband (UWB), ZigBee™, and the like.
  • the A/V input unit 120 is for receiving an audio or video signal.
  • the A/V input unit 120 may include a camera 121 and a microphone 122. The camera 121 processes image data of still pictures or video obtained by an image capturing device in a video capturing mode or an image capturing mode.
  • the processed image frame can be displayed on the display unit 151.
  • the image frames processed by the camera 121 may be stored in the memory 160 (or other storage medium) or transmitted via the wireless communication unit 110, and two or more cameras 121 may be provided according to the configuration of the mobile terminal.
  • the microphone 122 can receive sound (audio data) via a microphone in an operation mode of a telephone call mode, a recording mode, a voice recognition mode, and the like, and can process such sound as audio data.
  • in the telephone call mode, the processed audio (voice) data can be converted into a format transmittable to a mobile communication base station via the mobile communication module 112 and output.
  • the microphone 122 can implement various types of noise cancellation (or suppression) algorithms to cancel (or suppress) noise or interference generated during the process of receiving and transmitting audio signals.
  • the user input unit 130 may generate key input data according to a command input by the user to control various operations of the mobile terminal.
  • the user input unit 130 allows the user to input various types of information, and may include a keyboard, a dome switch, a touch pad (e.g., a touch-sensitive component that detects changes in resistance, pressure, capacitance, etc. due to contact), a scroll wheel, a rocker, and the like. In particular, when the touch pad is superposed on the display unit 151 in the form of a layer, a touch screen can be formed.
  • the interface unit 170 serves as an interface through which at least one external device can connect with the mobile terminal 100.
  • the external device may include a wired or wireless headset port, an external power (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, a headphone port, and the like.
  • the identification module may be a chip that stores various information for verifying the user's authority to use the mobile terminal 100, and may include a User Identity Module (UIM), a Subscriber Identity Module (SIM), a Universal Subscriber Identity Module (USIM), and the like. In addition, the device having the identification module may take the form of a smart card; thus, the identification device may be connected to the mobile terminal 100 via a port or other connection means.
  • the interface unit 170 can be configured to receive input from an external device (e.g., data information, power, etc.) and transmit the received input to one or more components within the mobile terminal 100, or can be used to transfer data between the mobile terminal and an external device.
  • in addition, when the mobile terminal 100 is connected to an external base, the interface unit 170 may serve as a path through which power is supplied from the base to the mobile terminal 100, or as a path through which various command signals input from the base are transmitted to the mobile terminal.
  • various command signals or power input from the base can be used as signals for identifying whether the mobile terminal is accurately mounted on the base.
  • Output unit 150 is configured to provide an output signal (eg, an audio signal, a video signal, an alarm signal, a vibration signal, etc.) in a visual, audio, and/or tactile manner.
  • the output unit 150 may include a display unit 151, an audio output module 152, and the like.
  • the display unit 151 can display information processed in the mobile terminal 100. For example, when the mobile terminal 100 is in a phone call mode, the display unit 151 can display a user interface (UI) or a graphical user interface (GUI) related to a call or other communication (eg, text messaging, multimedia file download, etc.). When the mobile terminal 100 is in a video call mode or an image capturing mode, the display unit 151 may display a captured image and/or a received image, a UI or GUI showing a video or image and related functions, and the like.
  • the display unit 151 can function as an input device and an output device.
  • the display unit 151 may include at least one of a liquid crystal display (LCD), a thin film transistor LCD (TFT-LCD), an organic light emitting diode (OLED) display, a flexible display, a three-dimensional (3D) display, and the like.
  • Some of these displays may be configured to be transparent to allow a user to view from the outside, which may be referred to as a transparent display, and a typical transparent display may be, for example, a TOLED (Transparent Organic Light Emitting Diode) display or the like.
  • the mobile terminal 100 may include two or more display units (or other display devices); for example, the mobile terminal may include an external display unit (not shown) and an internal display unit (not shown).
  • the touch screen can be used to detect touch input pressure as well as touch input position and touch input area.
  • the audio output module 152 may convert audio data received by the wireless communication unit 110 or stored in the memory 160 into an audio signal and output it as sound when the mobile terminal is in a call signal receiving mode, a call mode, a recording mode, a voice recognition mode, a broadcast receiving mode, and the like.
  • the audio output module 152 can provide audio output (eg, call signal reception sound, message reception sound, etc.) associated with a particular function performed by the mobile terminal 100.
  • the audio output module 152 can include a speaker, a buzzer, and the like.
  • the memory 160 may store a software program or the like for processing and control operations performed by the controller 180, or may temporarily store data (for example, a phone book, a message, a still image, a video, etc.) that has been output or is to be output. Moreover, the memory 160 can store data regarding vibrations and audio signals of various manners that are output when a touch is applied to the touch screen.
  • the memory 160 may include at least one type of storage medium including a flash memory, a hard disk, a multimedia card, a card type memory (eg, SD or DX memory, etc.), a random access memory (RAM), a static random access memory ( SRAM), read only memory (ROM), electrically erasable programmable read only memory (EEPROM), programmable read only memory (PROM), magnetic memory, magnetic disk, optical disk, and the like.
  • the mobile terminal 100 can cooperate with a network storage device that performs a storage function of the memory 160 through a network connection.
  • the controller 180 typically controls the overall operation of the mobile terminal. For example, the controller 180 performs the control and processing associated with voice calls, data communications, video calls, and the like.
  • the controller 180 may include a multimedia module 181 for reproducing (or playing back) multimedia data, which may be constructed within the controller 180 or may be configured to be separate from the controller 180.
  • the controller 180 may perform a pattern recognition process to recognize a handwriting input or a picture drawing input performed on the touch screen as a character or an image.
  • the power supply unit 190 receives external power or internal power under the control of the controller 180 and provides appropriate power required to operate the various components and components.
  • the various embodiments described herein can be implemented in a computer readable medium using, for example, computer software, hardware, or any combination thereof.
  • the embodiments described herein may be implemented using at least one of application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), processors, controllers, microcontrollers, microprocessors, and electronic units designed to perform the functions described herein; in some cases, such an embodiment may be implemented in the controller 180.
  • implementations such as procedures or functions may be implemented with separate software modules that permit the execution of at least one function or operation.
  • the software code can be implemented by a software application (or program) written in any suitable programming language, which can be stored in the memory 160 and executed by the controller 180.
  • the mobile terminal has been described in terms of its function.
  • a slide type mobile terminal among various types of mobile terminals, such as folding type, bar type, swing type, and slide type mobile terminals, will be described as an example. However, the present invention can be applied to any type of mobile terminal and is not limited to the slide type mobile terminal.
  • the mobile terminal 100 as shown in FIG. 1 may be configured to operate using a communication system such as a wired and wireless communication system and a satellite-based communication system that transmits data via frames or packets.
  • Such communication systems may use different air interfaces and/or physical layers.
  • air interfaces used by communication systems include, for example, Frequency Division Multiple Access (FDMA), Time Division Multiple Access (TDMA), Code Division Multiple Access (CDMA), Universal Mobile Telecommunications System (UMTS) (in particular, Long Term Evolution (LTE)), Global System for Mobile Communications (GSM), and the like.
  • the following description relates to a CDMA communication system, but such teachings are equally applicable to other types of systems.
  • a CDMA wireless communication system can include a plurality of mobile terminals 100, a plurality of base stations (BS) 270, a base station controller (BSC) 275, and a mobile switching center (MSC) 280.
  • the MSC 280 is configured to interface with a public switched telephone network (PSTN) 290.
  • the MSC 280 is also configured to form an interface with the BSC 275, which is coupled to the base station 270 via a backhaul line.
  • the backhaul line can be constructed in accordance with any of a number of well known interfaces including, for example, E1/T1, ATM, IP, PPP, Frame Relay, HDSL, ADSL, or xDSL. It will be appreciated that the system as shown in FIG. 2 can include multiple BSCs 275.
  • Each BS 270 can serve one or more partitions (or regions), each of which is covered by a multi-directional antenna or an antenna directed to a particular direction radially away from the BS 270. Alternatively, each partition may be covered by two or more antennas for diversity reception. Each BS 270 can be configured to support multiple frequency allocations, and each frequency allocation has a particular frequency spectrum (eg, 1.25 MHz, 5 MHz, etc.).
  • BS 270 may also be referred to as a Base Transceiver Subsystem (BTS) or other equivalent terminology.
  • the term "base station” can be used to generally refer to a single BSC 275 and at least one BS 270.
  • a base station can also be referred to as a "cell station.”
  • each partition of a particular BS 270 may also be referred to as a cell station.
  • a broadcast transmitter (BT) 295 transmits a broadcast signal to the mobile terminal 100 operating within the system.
  • a broadcast receiving module 111 as shown in FIG. 1 is provided at the mobile terminal 100 to receive a broadcast signal transmitted by the BT 295.
  • a Global Positioning System (GPS) satellite 300 helps locate at least one of the plurality of mobile terminals 100.
  • a plurality of satellites 300 are depicted, but it is understood that useful positioning information can be obtained using any number of satellites.
  • the GPS module 115 as shown in Figure 1 is typically configured to cooperate with the satellite 300 to obtain desired positioning information. Instead of GPS tracking technology or in addition to GPS tracking technology, other techniques that can track the location of the mobile terminal can be used. Additionally, at least one GPS satellite 300 can selectively or additionally process satellite DMB transmissions.
  • BS 270 receives reverse link signals from various mobile terminals 100.
  • Mobile terminal 100 typically participates in calls, messaging, and other types of communications.
  • Each reverse link signal received by a particular base station 270 is processed within a particular BS 270.
  • the obtained data is forwarded to the relevant BSC 275.
  • the BSC 275 provides call resource allocation and mobility management functions, including the coordination of soft handover processes between BSs 270.
  • the BSC 275 also routes the received data to the MSC 280, which provides additional routing services for interfacing with the PSTN 290.
  • PSTN 290 interfaces with MSC 280, which forms an interface with BSC 275, and BSC 275 controls BS 270 accordingly to transmit forward link signals to mobile terminal 100.
  • this embodiment provides an information processing method, where the method includes:
  • Step S110: collect the user's face by using infrared light to form an infrared light image, and collect the user's face by using visible light to form a visible light image;
  • Step S120: analyze the infrared light image to determine temperature information of the user's face, and analyze the visible light image to determine skin color information of the user's face;
  • Step S130: combine the temperature information and the skin color information to analyze the health status information of the user.
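  • As an illustration of how steps S110 to S130 fit together, the following minimal Python sketch wires the three steps into one pipeline. The helper names, the stand-in random frames, and the linear pixel-to-temperature mapping are assumptions for demonstration only, not details fixed by this embodiment.

```python
# Minimal sketch of steps S110-S130, assuming 8-bit grayscale infrared frames
# and 8-bit color visible light frames as NumPy arrays.
import numpy as np

def analyze_infrared(ir_image, t_min=28.0, t_max=40.0):
    # Step S120 (part 1): map IR pixel intensities to a mean facial temperature.
    # The linear mapping over an assumed t_min..t_max range is illustrative.
    return t_min + ir_image.mean() / 255.0 * (t_max - t_min)

def analyze_visible(rgb_image):
    # Step S120 (part 2): mean per-channel color value as crude skin color info.
    return rgb_image.reshape(-1, 3).mean(axis=0)

def analyze_health(temperature, skin_color):
    # Step S130: combine both cues; a trained classifier would replace this rule.
    return "needs attention" if temperature > 37.5 or skin_color[0] < 80 else "normal"

# Step S110 stand-ins for the infrared and visible light captures.
ir = np.random.randint(0, 256, (120, 160), dtype=np.uint8)
vis = np.random.randint(0, 256, (120, 160, 3), dtype=np.uint8)
print(analyze_health(analyze_infrared(ir), analyze_visible(vis)))
```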
  • This embodiment can be applied to the foregoing electronic device, such as a mobile terminal, such as a mobile phone, a tablet computer, or a wearable device.
  • in step S110, the user's face is captured by using infrared light to form the infrared light image.
  • the infrared light image includes an image of the user's face, such as a user's facial features.
  • the visible light image is also formed by collecting the user's face with visible light in step S110.
  • the infrared light image is analyzed in step S120, and the temperature information of the user's face can be perceived by the infrared image sensor according to the user's facial radiation information.
  • the temperature information herein may include temperature values for various locations of the user's face. Further, by calculation, information such as a temperature difference at each position of the user's face can be known.
  • the visible light image will also be analyzed in step S120 to obtain skin color information at various positions of the user's face.
  • the temperature information and the skin color information of the user's face can reflect the health status of the user.
  • the user's health status information is obtained based on the temperature information and the skin color information. In the present embodiment, the temperature value and/or the temperature difference will be analyzed, and the user's health status information will be obtained based on Chinese medicine or Western medicine theory.
  • the skin color information herein may include the depth of the skin color, the uniformity of the skin color, and the color tone of the skin color. Obviously, the skin color information of the user's face can reflect the user's physical health.
  • the temperature information and the skin color information of the user's face are used to jointly diagnose the user's health state; since the analysis covers at least these two dimensions, more accurate health status information can be obtained.
  • when the information processing method described in this embodiment is applied to an electronic device such as a mobile phone, tablet computer, or wearable device, the user can easily obtain his or her own health status information by using an electronic device such as a mobile phone, notebook, or tablet computer carried with them, thus easily monitoring their own health status. This greatly improves the utilization of software and hardware resources in electronic devices such as mobile phones and tablet computers, as well as the intelligence of these devices and user satisfaction.
  • the step S130 may include analyzing the temperature information and the skin color information by using a machine learning algorithm to obtain health status information of the user.
  • before performing the analysis, the machine learning algorithm obtains characteristic parameters for different health states by analyzing and learning from a large amount of data; in this embodiment, the temperature information and the skin color information can be matched against these feature parameters to accurately determine the health status information of the user.
  • the feature parameter may be data sent from a network server or a medical health detection platform or the like.
  • the method in the embodiment further includes: forming the learning machine algorithm before analyzing the temperature information and the skin color information by using a learning machine algorithm.
  • forming the learning machine algorithm may include the following steps:
  • Step S210: perform algorithm training using sample data as input data of the learning machine to obtain a training algorithm;
  • the sample data includes sample temperature information and sample skin color information;
  • Step S220: verify the training algorithm by using test data to obtain a verification result; the test data includes test temperature information and test skin color information;
  • Step S230: if the verification result indicates that the training algorithm meets the preset condition, determine that the training algorithm is the machine learning algorithm.
  • the sample data in this embodiment may include sample skin color information, sample temperature information, and the corresponding health state information; by training the learning machine with the sample data, the learning machine can obtain the functional relationship between the skin color and temperature information and the health state information. The corresponding functional relationship can be the training algorithm, i.e., a candidate machine learning algorithm.
  • the test data is also used for verification.
  • the test skin color information and the test temperature information in the test data are processed as input by the training algorithm to obtain an output result; the output result is compared with the test health status information in the test data, which yields the correctness of the training algorithm on each piece of test data.
  • if the correctness meets the preset condition, the training algorithm is used as the machine learning algorithm for performing subsequent acquisition of the user's health state information.
  • the functional relationships here can be represented by various parameters, which are not exemplified here.
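  • A compact sketch of steps S210 to S230 under stated assumptions is given below: each feature row combines temperature and skin color values, synthetic data stands in for real samples, an SVM stands in for the learning machine, and a 90% verification accuracy is an assumed preset condition.

```python
# Illustrative train/verify/accept loop for steps S210-S230 (scikit-learn).
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_sample = rng.normal(size=(200, 5))   # sample temperature + skin color features
y_sample = (X_sample[:, 0] + X_sample[:, 3] > 0).astype(int)   # synthetic labels
X_test = rng.normal(size=(50, 5))      # test temperature + skin color features
y_test = (X_test[:, 0] + X_test[:, 3] > 0).astype(int)

training_algorithm = SVC().fit(X_sample, y_sample)    # S210: algorithm training
accuracy = training_algorithm.score(X_test, y_test)   # S220: verification
if accuracy >= 0.9:                                   # S230: assumed preset condition
    machine_learning_algorithm = training_algorithm
    print(f"training algorithm accepted (accuracy {accuracy:.2f})")
else:
    print(f"training algorithm rejected (accuracy {accuracy:.2f}); retrain")
```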
  • the method further includes: using the visible light image to locate the distribution position of each organ of the user's face, such as the forehead, nose, cheeks, tongue, and other parts; the step S120 then includes combining the distribution positions with the infrared light image to determine the temperature value of each organ of the user's face and the temperature differences between the organs.
  • each part of the user's face can correspond to a part of the user's body; thus the electronic device can directly give health status information for each part of the body, so that the user can use the electronic device to monitor his or her own health status information. A rough localization sketch follows below.
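  • As a rough sketch of locating the distribution positions from the visible light image, the snippet below uses OpenCV's stock Haar face detector and then carves the detected face box into approximate organ regions; the input path and the fractional region boxes are assumptions, not coordinates given by this embodiment.

```python
# Locate the face in the visible light image, then approximate organ regions.
import cv2

img = cv2.imread("face.jpg")                     # assumed input image path
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
for (x, y, w, h) in cascade.detectMultiScale(gray, 1.1, 5):
    regions = {                                  # assumed fractional layout
        "forehead":    (x + w // 4,     y,              w // 2, h // 5),
        "nose":        (x + 2 * w // 5, y + 2 * h // 5, w // 5, h // 4),
        "left_cheek":  (x + w // 8,     y + h // 2,     w // 4, h // 5),
        "right_cheek": (x + 5 * w // 8, y + h // 2,     w // 4, h // 5),
    }
    print(regions)  # distribution positions, reusable on the aligned IR image
```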
  • the step S120 may include: extracting the pixel value of a specified organ in the infrared light image; converting the pixel value into a temperature value; and calculating the temperature differences between the organs based on the obtained temperature values. Since parameters such as the wavelength components and intensity of the infrared radiation are related to the temperature of the face, the pixel value of the corresponding organ in the infrared image can be extracted and converted into a temperature value; in this way the temperature values of the face can be known and the temperature differences between different organs calculated, which is easy to implement.
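  • The following sketch shows one way the pixel-to-temperature conversion and the organ temperature differences could be computed, assuming a sensor whose 8-bit output is linear over a known temperature range; a real device would use the manufacturer's radiometric calibration instead.

```python
# Convert IR pixel values to temperatures per organ region, then difference them.
import numpy as np

def region_temperature(ir_image, region, t_min=28.0, t_max=40.0):
    x, y, w, h = region                     # region from the visible light image
    mean_pixel = ir_image[y:y + h, x:x + w].mean()
    return t_min + mean_pixel / 255.0 * (t_max - t_min)  # assumed linear mapping

ir = np.random.randint(0, 256, (240, 320), dtype=np.uint8)  # stand-in IR frame
organs = {"forehead": (120, 20, 80, 40), "nose": (140, 100, 40, 40)}  # assumed boxes
temps = {name: region_temperature(ir, box) for name, box in organs.items()}
diff = abs(temps["forehead"] - temps["nose"])               # temperature difference
print(temps, f"difference = {diff:.2f} C")
```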
  • the analyzing the visible light image to determine the skin color information of the user's face includes: extracting, from the visible light image, the color value of the pixels at the location where the user's face is located to obtain the skin color information. The color value is extracted from the visible light image in this embodiment.
  • the method further includes: acquiring a predetermined parameter that affects the acquisition forming the visible light image; the step S120 may include: correcting the color value according to the predetermined parameter, the corrected color value being the skin color information.
  • the acquiring a predetermined parameter that affects the acquisition forming the visible light image may include acquiring a color temperature value of the acquisition unit that forms the visible light image; the step S120 may then include: correcting the color value according to the color temperature value to obtain the corrected color value.
  • the acquiring a predetermined parameter that affects the acquisition forming the visible light image may include acquiring an ambient illumination value of the environment in which the visible light image is formed; the step S120 may then include: correcting the color value according to the ambient illumination value to obtain the corrected color value.
  • the ambient illumination value herein may include values such as the ambient light brightness value and color value; the color can be corrected according to the ambient light illumination to restore the original skin color of the collected face, improving the accuracy of the extracted skin color information and thereby obtaining more accurate health status information.
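  • A minimal correction sketch under stated assumptions: each channel of the extracted color value is scaled by the ratio between a reference white and the measured ambient light color (a von Kries-style diagonal correction). The embodiment does not fix a formula; the ambient values here are placeholders for what a light sensor or white balance metadata would supply.

```python
# Correct the extracted color value for the ambient illumination.
import numpy as np

def correct_color(color_value, ambient_rgb, reference=(255.0, 255.0, 255.0)):
    # Per-channel gains that would map the ambient light back to neutral white.
    gain = np.asarray(reference) / np.maximum(np.asarray(ambient_rgb, float), 1.0)
    return np.clip(np.asarray(color_value, float) * gain, 0, 255)

raw_skin = [190, 150, 130]   # color value extracted from the face pixels
ambient = [220, 235, 255]    # bluish ambient light (high color temperature)
print(correct_color(raw_skin, ambient))  # corrected color value = skin color info
```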
  • the step S110 may include separately acquiring the infrared light image and the visible light image by using a binocular acquisition unit.
  • the binocular acquisition unit here can correspond to various binocular cameras.
  • the binocular camera here can be a camera capable of collecting infrared light and visible light, can form a visible light image based on visible light, and can form an infrared light image based on infrared light.
  • using the binocular acquisition unit, the infrared light image and the visible light image can be collected in the shortest time, which reduces the response delay and improves the response rate of the electronic device.
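  • The sketch below grabs a visible frame and an infrared frame back-to-back from a binocular rig exposed as two video devices and superimposes them as in FIG. 7. The device indices are assumptions, and a real rig would also need a one-time geometric calibration so the two views align.

```python
# Capture one visible and one infrared frame from a binocular rig, then overlay.
import cv2

visible_cam = cv2.VideoCapture(0)   # assumed visible light camera index
ir_cam = cv2.VideoCapture(1)        # assumed infrared camera index
ok_v, visible = visible_cam.read()
ok_i, infrared = ir_cam.read()
if ok_v and ok_i:
    infrared = cv2.resize(infrared, (visible.shape[1], visible.shape[0]))
    overlay = cv2.addWeighted(visible, 0.6, infrared, 0.4, 0)  # superimposed view
    cv2.imwrite("overlay.png", overlay)
visible_cam.release()
ir_cam.release()
```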
  • the method in this embodiment further includes: outputting suggestion information according to the health status information.
  • the suggestion information in this embodiment may be pre-stored information mapped with the health status information, or suggestion information that is received from other electronic devices and mapped to the health status information. In this way, the electronic device can be easily used to determine its own health status information, and then the state of the diet, work and the like can be adjusted according to the suggested information.
  • the embodiment provides an electronic device, where the electronic device includes:
  • the collecting unit 310 is configured to collect the user's face by using infrared light to form an infrared light image, and to collect the user's face by using visible light to form a visible light image;
  • the analyzing unit 320 is configured to analyze the infrared light image, determine temperature information of the user's face, and analyze the visible light image to determine skin color information of the user's face;
  • the obtaining unit 330 is configured to analyze the health status information of the user by combining the temperature information and the skin color information.
  • the electronic device in this embodiment may be the foregoing mobile terminal, such as a mobile phone, a tablet computer, or a wearable device.
  • the collecting unit 310 may correspond to a visible light sensor and an infrared light sensor; the infrared light sensor can collect infrared light to form the infrared light image.
  • the visible light sensor can collect visible light to form a visible light image.
  • the specific structures of the analysis unit 320 and the obtaining unit 330 correspond to a processor or processing circuit inside the electronic device.
  • the processor can include an application processor, a microprocessor, a digital signal processor or a programmable array, and the like.
  • the processing circuit can include a structure such as an application specific integrated circuit.
  • the analyzing unit 320 and the obtaining unit 330 may be integrated corresponding to the same processor or processing circuit, or may respectively correspond to different processors or processing circuits.
  • the obtaining unit is configured to analyze the health status information of the user by combining the temperature information and the skin color information.
  • the electronic device in this embodiment obtains the temperature information and the skin color information of the user's face by collecting the infrared light image and the visible light image, and obtains the user's health state information by analyzing them, thereby improving the intelligence of the electronic device and user satisfaction, so that users can easily obtain their health status information by capturing their own faces with the electronic device.
  • the temperature information and the skin color information are referenced at the same time, and the reference quantity for forming the health status information is increased, and the accuracy of the health status information is improved.
  • the obtaining unit 330 is configured to analyze the temperature information and the skin color information by using a machine learning algorithm to obtain health status information of the user.
  • the machine learning algorithm is used to analyze the temperature information and the skin color information to obtain the health state information.
  • the machine learning algorithm is configured to analyze a large amount of data in advance to obtain various characteristic parameters for characterizing different health states, and matches the temperature information and the skin color information against these feature parameters to obtain the health state information, thereby making the health state information easy to acquire while ensuring its high accuracy.
  • the electronic device further includes:
  • a training unit configured to perform algorithm training using the sample data as input data of the learning machine to obtain a training algorithm;
  • the sample data includes sample temperature information and sample skin color information;
  • a verification unit configured to verify the training algorithm by using test data to obtain a verification result;
  • the test data includes test temperature information and test skin color information;
  • a determining unit configured to determine that the training algorithm is the machine learning algorithm if the verification result indicates that the training algorithm meets a preset condition.
  • the training unit in this embodiment may include various types of learning machines.
  • the specific structure of the verification unit and the determination unit may correspond to a processor or a processing circuit.
  • the processor or processing circuitry may implement the various functions of the various units described above by executing the executable instructions.
  • the electronic device further includes: a positioning unit configured to locate the distribution positions of the parts of the user's face by using the visible light image; the analyzing unit 320 is further configured to combine the distribution positions and the infrared light image to determine the temperature value of each organ of the user's face and the temperature differences between the organs.
  • the positioning unit in this embodiment may include a coordinate positioning device or the like, and can determine the distribution position of each organ of the user's face through analysis of the visible light image.
  • the analysis unit 320 combines the distribution position and the infrared light image to determine the temperature value and temperature difference of each organ.
  • the temperature value and the temperature difference will be used as temperature information as the basis for obtaining the health status information.
  • such an electronic device avoids the cumbersome operation of positioning organs directly in the infrared light image and, at the same time, can improve the accuracy of the temperature information, thereby further improving the accuracy of the health status information.
  • the analyzing unit 320 is configured to extract a pixel value of a specified organ in the infrared light image; convert the pixel value into a temperature value; and calculate the organ according to the obtained temperature value The temperature difference between them.
  • the analyzing unit 320 is further configured to extract a color value of a pixel at a location where the user's face is located from the visible light image to obtain the skin color information.
  • the electronic device further includes: an acquiring unit configured to acquire a predetermined parameter that affects the acquisition to form the visible light image; the analyzing unit 320 is further configured to correct the color according to the predetermined parameter a value; the corrected color value is the skin color information.
  • the acquiring unit is configured to acquire a color temperature parameter of the acquisition unit that forms the visible light image;
  • the analyzing unit 320 is configured to correct the color value according to the color temperature value to obtain the corrected color value.
  • the acquiring unit is configured to acquire an ambient light value for collecting the visible light image
  • the analyzing unit 320 is configured to correct the color value according to the ambient light value to obtain the corrected color value.
  • the collection unit 310 is a binocular acquisition unit configured to separately acquire the infrared light image and the visible light image at the same time.
  • the binocular acquisition unit can simultaneously collect infrared light images and visible light images, which can reduce the time taken for collecting images of the user's face, improve the response rate of the electronic device, and reduce the response delay.
  • the electronic device further includes: an output unit configured to output suggestion information according to the health status information.
  • the output unit in this embodiment may correspond to a display output unit or an audio output unit.
  • the display output unit may include various types of display screens.
  • the display screen may include a liquid crystal display, an electronic ink display, a projection display, or an organic light emitting diode (OLED) display.
  • the audio output unit may include a speaker or an audio output circuit or the like.
  • the output unit in this embodiment can output suggestion information, give the user a suggestion to maintain or restore the health status, and improve the intelligence of the electronic device and the user satisfaction.
  • the embodiment of the present invention further provides a computer storage medium, where the computer storage medium stores computer executable instructions, and the computer executable instructions are used to execute the information processing method of any of the foregoing items, for example, the methods shown in FIG. 3 and FIG. 4.
  • the computer storage medium may include various storage media such as an optical disk, a magnetic tape, a removable hard disk, a flash memory, and the like, and may be a non-transitory storage medium.
  • this example provides a method for acquiring health status information, including:
  • Step S410: Acquire infrared and visible light facial binocular images; the facial binocular image herein may be understood as an overlay of the infrared light image and the visible light image;
  • Step S420: Model the facial health data;
  • Step S430: Acquire health feature classifier parameters for identifying faces according to a machine learning algorithm;
  • Step S440: Perform health level detection based on the health feature classification parameters, and output a health suggestion.
  • in step S410, the infrared spectrum image of the face is captured by the infrared camera. The infrared image sensor can sense the temperature information of an object according to its heat radiation, so the temperature information of the face is obtained. However, the imaging principle of the infrared camera differs from that of the visible light camera: the facial brightness and color details available from a visible light camera are lost, making it difficult to locate the facial features from the infrared image alone.
  • the visible light image of the face is simultaneously captured by the visible light camera, and the skin color information of the face is acquired.
  • the binocular system thus composed can simultaneously acquire the skin color information and temperature information of the facial features.
  • step S420 a machine learning algorithm is used to perform a learning analysis on a large amount of face data, and a health feature model for identifying a face is learned.
  • the main purpose is to simulate the inspection mode of Chinese medicine: the key facial features are used as the classifier's feature input, and through learning and training on big data the key health feature parameters are obtained, yielding a classifier for testing facial health images.
  • key features of the classification modeling include: the temperature of various facial organs such as the forehead, nose, cheeks, and tongue; the temperature differences between them; and the color characteristics of the facial organs, classified according to the basic color statistics of Chinese medicine into yellow, white, red, black, green, and other colors, each color being divided into light, medium, and deep.
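  • To make the feature construction concrete, the snippet below assembles one sample's feature vector from organ temperatures, pairwise temperature differences, and color/depth categories; the numeric encodings and example values are assumptions used only to illustrate the modeling.

```python
# Build one health feature vector: temperatures, differences, color categories.
import numpy as np

COLORS = ["yellow", "white", "red", "black", "green"]
DEPTHS = {"light": 0, "medium": 1, "deep": 2}

def feature_vector(organ_temps, organ_colors):
    temps = list(organ_temps.values())
    diffs = [abs(a - b) for i, a in enumerate(temps) for b in temps[i + 1:]]
    colors = [COLORS.index(c) * 3 + DEPTHS[d] for c, d in organ_colors.values()]
    return np.array(temps + diffs + colors, dtype=float)

v = feature_vector(
    {"forehead": 36.4, "nose": 35.1, "cheeks": 36.0, "tongue": 36.8},
    {"forehead": ("yellow", "light"), "nose": ("red", "medium"),
     "cheeks": ("white", "light"), "tongue": ("red", "deep")})
print(v)   # one row of the health feature vector matrix
```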
  • the temperature and color information of these facial organs can be used to infer the health of other parts of the human body, as shown in Fig. 8. Therefore, analyzing the feature information of the facial organs can assess the condition of each part of the human body.
  • Figure 8 indicates the facial regions that respectively reflect the shoulder, lung, throat, and liver of the user's body. In a specific implementation, other facial regions can also reflect the health status of other parts of the user's body; these are omitted from Figure 8.
  • in step S430, through a large amount of data input and machine learning classification training, the computer acquires the health feature parameters used to identify other people's faces.
  • the user captures facial infrared and visible light images by self-timer and inputs them as a test image into the face health classifier. The health classifier analyzes the health of the input test image according to the feature parameters obtained by offline learning and gives the user a health data analysis.
  • a flow chart of the machine learning algorithm for facial health is given in Figure 9. As shown in FIG. 9, the information processing method in this example may include:
  • Step S1: Input face image health degree training data; these face image health degree training data can be the sample data.
  • Step S2: Extract the color and temperature features of the various organs of the face;
  • Step S3: Input the color and temperature features into a classifier such as an AdaBoost classifier or an SVM.
  • AdaBoost is an iterative algorithm that trains different weak classifiers on the same training set and then combines these weak classifiers to form a stronger final classifier (this final classifier is a strong classifier).
  • SVM is an abbreviation of Support Vector Machine, which is a support vector machine classifier.
  • Step S4: Acquire the facial health degree feature classification parameters.
  • Step S5: Form face image health degree detection data based on the health degree feature classification parameters. Next, it is determined whether the training requirement is met: if not, return to step S3; if the training requirement is met, the actual face image health degree can be detected.
  • Step S6: Input the actual face image health degree detection data, where the face image health degree detection data may be temperature information acquired from the infrared light image and/or skin color information detected from the visible light image.
  • The actual face image health degree detection data here may correspond to detection samples.
  • Step S7: analyze the measured result.
  • Step S8: if the analysis result obtained in step S7 does not satisfy the requirement, return to the algorithm design flow, improve the algorithm, and go back to step S2.
  • Step S9: if the analysis result obtained in step S7 satisfies the requirement, the algorithm design is complete.
  • Steps S6 to S7 may be performed repeatedly. If the accuracy of the analysis results on the actual face image health degree detection data reaches a specified threshold, the requirement may be considered met; otherwise, it is not satisfied.
  • The analysis results here may include the health status information.
  • The face image health degree training data input in step S1 is the sample data used for machine learning training. The following describes how this sample data is produced.
  • Figure 10 shows the axis of the health value. A person's health is scored from 0 to 100: a score below 60 indicates that the user is in a sub-health state, while a score of 60 or above indicates that the user is in a healthy state.
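  • In code, this 0 to 100 axis with its threshold at 60 reduces to a one-line labeling rule; the function below is a minimal sketch of that convention.

```python
def health_label(score: float) -> str:
    """Map a 0-100 health value onto the two states described above."""
    if not 0 <= score <= 100:
        raise ValueError("health value must lie on the 0-100 axis")
    return "sub-health" if score < 60 else "healthy"
```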
  • The main classification features are the color and temperature of the facial skin; here, the skin color features and temperature features of the nose tip of each sample subject are extracted as feature vectors.
  • The temperature characteristic can be converted into a corresponding temperature value according to the pixel values of the infrared image. The color information can be obtained by establishing a color mapping table: a color value is derived from the color information of the visible light image against a basic color table of yellow, white, red, black, and blue-green, and the color depth of the region is determined from the magnitude of the color value and classified as light, medium, or deep. This yields the color characteristics of the sample, from which a health feature vector matrix can be established as follows. (Note: the values in the feature vector matrix are used to illustrate the method and deviate from actual measurement data.)
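  • A minimal sketch of how such a feature vector might be assembled is given below; the linear pixel-to-temperature mapping, the concrete color table, and the depth thresholds are all assumptions made for illustration.

```python
import numpy as np

def pixel_to_temperature(pixel_value: int,
                         t_min: float = 30.0,
                         t_max: float = 40.0) -> float:
    """Assume a linear mapping from an 8-bit infrared pixel to deg C."""
    return t_min + (pixel_value / 255.0) * (t_max - t_min)

# Basic color table named in the text (index used as a feature code).
BASE_COLORS = ["yellow", "white", "red", "black", "blue-green"]

def color_depth_code(color_value: float) -> int:
    """Bin a color value into light (1) / medium (2) / deep (3);
    the threshold values are assumptions."""
    if color_value < 85:
        return 1
    if color_value < 170:
        return 2
    return 3

def feature_row(ir_pixel: int, color_index: int, color_value: float):
    """One sample row: nose-tip temperature plus encoded color feature."""
    return [pixel_to_temperature(ir_pixel),
            color_index,
            color_depth_code(color_value)]

# Two invented samples forming a tiny health feature vector matrix.
feature_matrix = np.array([feature_row(180, 2, 90),
                           feature_row(210, 0, 200)])
print(feature_matrix)
```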
  • The popular AdaBoost classifier is adopted here: its theory is mature, and it has been applied effectively in pattern recognition and classification tasks such as face detection and recognition. AdaBoost allows the designer to keep adding new weak classifiers until a predetermined, sufficiently small error rate is reached.
  • Each training sample is given a weight indicating the probability that it will be selected into the training set of a component classifier. If a sample has been classified accurately, its probability of selection is reduced when the next training set is constructed; conversely, if a sample is classified incorrectly, its weight is increased. In this way, the AdaBoost classifier can focus on the samples that are harder to classify.
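  • For reference, the reweighting described above corresponds, in the standard AdaBoost formulation (a textbook addition, not spelled out in the original), to:

```latex
% Weight of sample i after round t, with weak classifier h_t, label y_i,
% weighted error rate eps_t, vote weight alpha_t, and normalizer Z_t:
w_i^{(t+1)} = \frac{w_i^{(t)} \exp\left( -\alpha_t \, y_i \, h_t(x_i) \right)}{Z_t},
\qquad
\alpha_t = \frac{1}{2} \ln \frac{1 - \varepsilon_t}{\varepsilon_t}
```

  • A correctly classified sample (where y_i h_t(x_i) = +1) thus has its weight shrunk by a factor e^(-alpha_t), while a misclassified one has it grown by e^(alpha_t), which matches the behavior just described.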
  • The samples are divided into training samples and test samples: the training samples are mainly used for classifier learning, while the test samples are mainly used to check whether the learned classification parameters meet the requirements.
  • The training samples are sent to the classifier, where iterative feature extraction, feature parameter comparison, iterative calculation of the feature parameter classification thresholds, and sample reclassification are performed. With the resulting parameters, feature vector extraction and feature parameter reclassification are then carried out on the test samples, and finally the correct rate and error rate of the sample decisions are obtained.
  • If the correct rate and the error rate satisfy the design requirements, for example if the probability of correct classification is above 95%, then classifier learning is complete; otherwise, if the correct rate on the test results is below 95%, the parameter settings of the classifier should be re-adjusted, the number of samples increased, or new feature attributes added.
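  • The acceptance test described here amounts to measuring accuracy on held-out samples and comparing it against the 95% target; the sketch below shows this with scikit-learn, where the 70/30 split ratio is an assumption.

```python
from sklearn.ensemble import AdaBoostClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

def classifier_meets_requirement(X, y, target: float = 0.95, seed: int = 0):
    """Train on the training split, test on the held-out split, and report
    whether the correct-classification rate reaches the design target."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.3, random_state=seed)  # 70/30 split assumed
    clf = AdaBoostClassifier(n_estimators=50).fit(X_tr, y_tr)
    accuracy = accuracy_score(y_te, clf.predict(X_te))
    # If below target: re-tune parameters, add samples, or add features.
    return accuracy >= target, accuracy
```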
  • The above classification only completes the learning and testing process on a limited sample set. A successful classifier also needs to be tested on actual data in the actual test process, with the learned parameters applied to feature data extracted from that actual data.
  • The user's test data produced by the classifier is compared with standard healthy face data to give the user a current health level value, so that the user has an intuitive understanding of the health data. It is also compared with the user's previous test results to analyze whether the user's health level is declining or rising. Finally, based on this analysis of the health data, health advice is given to the user.
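  • One minimal way to realize the comparison with earlier results is sketched below; the flat list of past scores and the wording of the verdicts are assumptions made for illustration.

```python
def health_trend(current: float, previous: list[float]) -> str:
    """Compare the current health value with the user's history and say
    whether the health level is rising, declining, or unchanged."""
    if not previous:
        return "no earlier results to compare against"
    last = previous[-1]
    if current > last:
        return "health level is rising"
    if current < last:
        return "health level is declining"
    return "health level is unchanged"

print(health_trend(72.0, [65.0, 68.0]))  # -> health level is rising
```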
  • This example provides an information processing method, including:
  • Step S11: acquire facial image data; this step may correspond to acquiring the infrared light image and the visible light image in the foregoing embodiment.
  • Step S12: facial feature analysis, which may be equivalent to extracting the temperature information and the skin color information in the foregoing embodiment.
  • Step S13: feature selection, where one or more features can be selected for analysis.
  • Step S14: feature classification learning.
  • Step S15: acquire the feature classification learning parameters.
  • Step S16: input actual face data.
  • Step S17: obtain the test results for the actual face data.
  • Step S18: compare and analyze the test results.
  • The test results here may correspond to the health status information in the foregoing embodiment. This health status information is compared with the health status information in a mapping relationship, where the mapping relationship may be one between health status information and health suggestions.
  • Step S19: give a health suggestion.
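  • The mapping relationship used in steps S18 and S19 can be held as a simple lookup from health status to advice; the sketch below and its suggestion texts are illustrative assumptions, not the patent's actual mapping.

```python
# Hypothetical mapping from health status information to health suggestions.
SUGGESTIONS = {
    "healthy": "Keep your current routine and re-test periodically.",
    "sub-health": "Consider more rest and a follow-up examination.",
}

def health_suggestion(status: str) -> str:
    """Step S19: look the detected status up in the mapping relationship."""
    return SUGGESTIONS.get(status, "No suggestion available for this status.")

print(health_suggestion("sub-health"))
```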
  • The apparatus and method disclosed in the foregoing embodiments may be implemented in other manners.
  • The device embodiments described above are merely illustrative. The division into units is only a logical functional division; in actual implementation there may be other divisions, for example multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed.
  • The coupling, direct coupling, or communication connections between the components shown or discussed may be indirect coupling or communication connections through interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
  • The units described above as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
  • Each functional unit in the embodiments of the present invention may be integrated into one processing module, each unit may serve separately as one unit, or two or more units may be integrated into one unit; the integrated unit can be implemented in the form of hardware, or in the form of hardware plus software functional units.
  • The foregoing program may be stored in a computer-readable storage medium; when executed, the program performs the steps of the above method embodiments.
  • The foregoing storage medium includes: a removable storage device, a read-only memory (ROM), or a random access memory (RAM).

Abstract

Disclosed are an information processing method, an electronic device, and a computer storage medium. The method comprises: using infrared light to capture a user's face to form an infrared light image, and using visible light to capture the user's face to form a visible light image (S110); analyzing the infrared light image to determine temperature information about the user's face, and analyzing the visible light image to determine skin color information about the user's face (S120); and analyzing health status information about this user in combination with the temperature information and the skin color information (S130).
PCT/CN2016/099295 2015-11-17 2016-09-19 Procédé de traitement d'informations, dispositif électronique et support d'informations informatique WO2017084428A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201510797277.6A CN105455781A (zh) 2015-11-17 2015-11-17 信息处理方法及电子设备
CN201510797277.6 2015-11-17

Publications (1)

Publication Number Publication Date
WO2017084428A1 true WO2017084428A1 (fr) 2017-05-26

Family

ID=55594301

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/099295 WO2017084428A1 (fr) 2015-11-17 2016-09-19 Procédé de traitement d'informations, dispositif électronique et support d'informations informatique

Country Status (2)

Country Link
CN (1) CN105455781A (fr)
WO (1) WO2017084428A1 (fr)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105455781A (zh) * 2015-11-17 2016-04-06 努比亚技术有限公司 信息处理方法及电子设备
KR102375177B1 (ko) * 2016-04-22 2022-03-16 핏스킨 인코포레이티드 전자 디바이스를 사용한 피부 분석을 위한 시스템 및 방법
CN108074647A (zh) * 2016-11-15 2018-05-25 深圳大森智能科技有限公司 一种健康数据采集方法和装置
US10762635B2 (en) * 2017-06-14 2020-09-01 Tusimple, Inc. System and method for actively selecting and labeling images for semantic segmentation
CN108241433B (zh) * 2017-11-27 2019-03-12 王国辉 疲劳度解析平台
CN110909566A (zh) * 2018-09-14 2020-03-24 奇酷互联网络科技(深圳)有限公司 健康分析方法、移动终端和计算机可读存储介质
CN110312033B (zh) * 2019-06-17 2021-02-02 Oppo广东移动通信有限公司 电子装置、信息推送方法及相关产品
CN111337142A (zh) * 2020-04-07 2020-06-26 北京迈格威科技有限公司 体温修正方法、装置及电子设备

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030064356A1 (en) * 2001-10-01 2003-04-03 Gilles Rubinstenn Customized beauty tracking kit
US8489539B2 (en) * 2009-10-05 2013-07-16 Elc Management, Llc Computer-aided diagnostic systems and methods for determining skin compositions based on traditional chinese medicinal (TCM) principles
CN204362181U (zh) * 2014-12-05 2015-05-27 北京蚁视科技有限公司 同时采集红外光图像和可见光图像的图像采集装置
CN104434038B (zh) * 2014-12-15 2017-02-08 无限极(中国)有限公司 对采集到的肤质数据进行处理的方法、装置及系统
CN104618709B (zh) * 2015-01-27 2017-05-03 天津大学 一种双双目红外与可见光融合立体成像系统
CN104825136B (zh) * 2015-05-26 2018-08-10 高也陶 传统中医面区色部信息采集及分析系统和方法

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1825075A (zh) * 2005-02-25 2006-08-30 安捷伦科技有限公司 检测热异常的系统和方法
US20130116591A1 (en) * 2011-11-04 2013-05-09 Alan C. Heller Systems and devices for real time health status credentialing
WO2014141084A1 (fr) * 2013-03-14 2014-09-18 Koninklijke Philips N.V. Dispositif et procédé de détermination de signes vitaux d'un sujet
WO2015169634A1 (fr) * 2014-05-07 2015-11-12 Koninklijke Philips N.V. Dispositif, système et procédé pour extraire des informations physiologiques
CN105455781A (zh) * 2015-11-17 2016-04-06 努比亚技术有限公司 信息处理方法及电子设备

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108553081A (zh) * 2018-01-03 2018-09-21 京东方科技集团股份有限公司 一种基于舌苔图像的诊断系统
US10755415B2 (en) 2018-04-27 2020-08-25 International Business Machines Corporation Detecting and monitoring a user's photographs for health issues
US10755414B2 (en) 2018-04-27 2020-08-25 International Business Machines Corporation Detecting and monitoring a user's photographs for health issues
WO2020171554A1 (fr) * 2019-02-19 2020-08-27 Samsung Electronics Co., Ltd. Procédé et appareil pour mesurer la température corporelle à l'aide d'une caméra
CN110196103A (zh) * 2019-06-27 2019-09-03 Oppo广东移动通信有限公司 温度测量方法及相关设备
CN111027489B (zh) * 2019-12-12 2023-10-20 Oppo广东移动通信有限公司 图像处理方法、终端及存储介质
CN111027489A (zh) * 2019-12-12 2020-04-17 Oppo广东移动通信有限公司 图像处理方法、终端及存储介质
CN113008404A (zh) * 2021-02-22 2021-06-22 深圳市商汤科技有限公司 温度测量方法及装置、电子设备和存储介质
CN112950732A (zh) * 2021-02-23 2021-06-11 北京三快在线科技有限公司 一种图像生成方法、装置、存储介质及电子设备
CN112950732B (zh) * 2021-02-23 2022-04-01 北京三快在线科技有限公司 一种图像生成方法、装置、存储介质及电子设备
CN115984126A (zh) * 2022-12-05 2023-04-18 北京拙河科技有限公司 一种基于输入指令的光图像修正方法及装置
CN117152397A (zh) * 2023-10-26 2023-12-01 慧医谷中医药科技(天津)股份有限公司 一种基于热成像投影的三维人脸成像方法及系统
CN117152397B (zh) * 2023-10-26 2024-01-26 慧医谷中医药科技(天津)股份有限公司 一种基于热成像投影的三维人脸成像方法及系统

Also Published As

Publication number Publication date
CN105455781A (zh) 2016-04-06

Similar Documents

Publication Publication Date Title
WO2017084428A1 (fr) Procédé de traitement d'informations, dispositif électronique et support d'informations informatique
CN108629747B (zh) 图像增强方法、装置、电子设备及存储介质
CN105354838B (zh) 图像中弱纹理区域的深度信息获取方法及终端
CN109191410A (zh) 一种人脸图像融合方法、装置及存储介质
US20210343041A1 (en) Method and apparatus for obtaining position of target, computer device, and storage medium
WO2017140182A1 (fr) Procédé et appareil de synthèse d'image et support d'informations
CN106878588A (zh) 一种视频背景虚化终端及方法
CN110650379B (zh) 视频摘要生成方法、装置、电子设备及存储介质
CN108900790A (zh) 视频图像处理方法、移动终端及计算机可读存储介质
CN106447641A (zh) 图像生成装置及方法
CN107018331A (zh) 一种基于双摄像头的成像方法及移动终端
CN109167910A (zh) 对焦方法、移动终端及计算机可读存储介质
CN106791416A (zh) 一种背景虚化的拍摄方法及终端
CN106534696A (zh) 一种对焦装置和方法
CN106603931A (zh) 一种双目拍摄方法及装置
US20230076109A1 (en) Method and electronic device for adding virtual item
CN106851113A (zh) 一种基于双摄像头的拍照方法及移动终端
CN110072061A (zh) 一种交互式拍摄方法、移动终端及存储介质
WO2023151472A1 (fr) Procédé et appareil d'affichage d'image, terminal et support de stockage
CN106506778A (zh) 一种拨号装置及方法
CN108419009A (zh) 图像清晰度增强方法和装置
CN113542610A (zh) 拍摄方法、移动终端及存储介质
CN106385573A (zh) 一种图片处理方法及终端
CN106713640A (zh) 一种亮度调节方法和设备
CN112069951A (zh) 视频片段提取方法、视频片段提取装置及存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16865606

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16865606

Country of ref document: EP

Kind code of ref document: A1