WO2017084428A1 - Information processing method, electronic device and computer storage medium - Google Patents

Information processing method, electronic device and computer storage medium

Info

Publication number
WO2017084428A1
WO2017084428A1 (PCT/CN2016/099295)
Authority
WO
WIPO (PCT)
Prior art keywords
light image
information
user
visible light
temperature
Prior art date
Application number
PCT/CN2016/099295
Other languages
French (fr)
Chinese (zh)
Inventor
戴向东
Original Assignee
努比亚技术有限公司
Priority date
Filing date
Publication date
Application filed by 努比亚技术有限公司
Publication of WO2017084428A1

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00: Measuring for diagnostic purposes; Identification of persons
    • A61B 5/0059: Measuring for diagnostic purposes using light, e.g. diagnosis by transillumination, diascopy, fluorescence
    • A61B 5/0077: Devices for viewing the surface of the body, e.g. camera, magnifying lens
    • A61B 5/0082: Measuring using light, adapted for particular medical purposes
    • A61B 5/0084: Measuring using light, adapted for particular medical purposes, for introduction into the body, e.g. by catheters
    • A61B 5/0086: Measuring using light, adapted for particular medical purposes, for introduction into the body, using infrared radiation
    • A61B 5/44: Detecting, measuring or recording for evaluating the integumentary system, e.g. skin, hair or nails
    • A61B 5/441: Skin evaluation, e.g. for skin disorder diagnosis

Definitions

  • the embodiments of the present invention relate to the field of information technologies, and in particular, to an information processing method, an electronic device, and a computer storage medium.
  • embodiments of the present invention are expected to provide an information processing method, an electronic device, and a computer storage medium, which can at least partially solve the above problems.
  • a first aspect of the embodiments of the present invention provides an information processing method, where the method includes: collecting the user's face with infrared light to form an infrared light image and collecting the user's face with visible light to form a visible light image; analyzing the infrared light image to determine temperature information of the user's face and analyzing the visible light image to determine skin color information of the user's face; and combining the temperature information and the skin color information to analyze the health status information of the user.
  • the analyzing the health status information of the user by using the temperature information and the skin color information comprises: analyzing the temperature information and the skin color information by using a machine learning algorithm to obtain the health status information of the user.
  • the method further includes: performing algorithm training using sample data as input data of the learning machine to obtain a training algorithm, the sample data including sample temperature information and sample skin color information; verifying the training algorithm by using test data to obtain a verification result, the test data including test temperature information and test skin color information; and, if the verification result indicates that the training algorithm meets a preset condition, determining that the training algorithm is the machine learning algorithm.
  • the method further includes: using the visible light image to locate the distribution positions of the parts of the user's face; and the analyzing the infrared light image to determine temperature information of the user's face includes: combining the distribution positions and the infrared light image to determine a temperature value of each organ of the user's face and a temperature difference between the organs.
  • the combining the distribution positions and the infrared light image to determine the temperature value of each organ of the user's face and the temperature difference between the organs includes: extracting the pixel values of a specified organ from the infrared light image; converting the pixel values into temperature values; and calculating the temperature difference between the organs based on the obtained temperature values.
  • the analyzing the health status information of the user by combining the temperature information and the skin color information includes: extracting, from the visible light image, the color values of the pixels at the location of the user's face to obtain the skin color information.
  • the method further includes: acquiring a predetermined parameter that affects the acquisition forming the visible light image.
  • the extracting the color values of the pixels at the location of the user's face from the visible light image to obtain the skin color information includes: correcting the color values according to the predetermined parameter, wherein the corrected color values are the skin color information.
  • the acquiring the predetermined parameter that affects the acquisition forming the visible light image includes: acquiring a color temperature value of the acquisition unit that forms the visible light image; and the correcting the color values according to the predetermined parameter includes: correcting the color values according to the color temperature value to obtain the corrected color values.
  • the acquiring the predetermined parameter that affects the acquisition forming the visible light image includes: acquiring an ambient illumination value under which the visible light image is collected; and the correcting the color values according to the predetermined parameter includes: correcting the color values according to the ambient illumination value to obtain the corrected color values.
  • the collecting the user's face with infrared light to form the infrared light image and collecting the user's face with visible light to form the visible light image includes: separately acquiring the infrared light image and the visible light image by using a binocular acquisition unit.
  • a second aspect of the embodiments of the present invention provides an electronic device, where the electronic device includes: an acquisition unit configured to collect the user's face with infrared light to form an infrared light image and to collect the user's face with visible light to form a visible light image; an analyzing unit configured to analyze the infrared light image to determine temperature information of the user's face and to analyze the visible light image to determine skin color information of the user's face; and an obtaining unit configured to combine the temperature information and the skin color information to analyze the health status information of the user.
  • the obtaining unit is configured to analyze the temperature information and the skin color information by using a machine learning algorithm to obtain health status information of the user.
  • the electronic device further includes: a training unit configured to perform algorithm training using sample data as input data of the learning machine to obtain a training algorithm, the sample data including sample temperature information and sample skin color information; a verification unit configured to verify the training algorithm by using test data to obtain a verification result, the test data including test temperature information and test skin color information; and a determining unit configured to determine that the training algorithm is the machine learning algorithm if the verification result indicates that the training algorithm meets a preset condition.
  • the electronic device further includes: a positioning unit configured to locate the distribution positions of the parts of the user's face by using the visible light image; the analyzing unit is further configured to combine the distribution positions and the infrared light image to determine the temperature value of each organ of the user's face and the temperature difference between the organs.
  • the analyzing unit is configured to extract a pixel value of a specified organ in the infrared light image; convert the pixel value into a temperature value; and calculate a temperature difference between the organs according to the obtained temperature value .
  • the analyzing unit is configured to extract a color value of a pixel at a position where the user's face is located from the visible light image to obtain the skin color information.
  • the electronic device further includes:
  • An acquiring unit configured to acquire a predetermined parameter that affects the acquisition to form the visible light image
  • the analyzing unit is further configured to correct the color value according to the predetermined parameter; the corrected color value is the skin color information.
  • according to the foregoing solution, the acquiring unit is configured to acquire a color temperature value of the acquisition unit that forms the visible light image;
  • the analyzing unit is configured to correct the color value according to the color temperature value to obtain the corrected color value.
  • the acquiring unit is a binocular acquisition unit configured to separately acquire the infrared light image and the visible light image at the same time.
  • the embodiment of the invention further provides a computer storage medium, wherein the computer storage medium stores computer executable instructions, and the computer executable instructions are used to perform the information processing method of any one of the foregoing items.
  • the information processing method, the electronic device and the computer storage medium according to the embodiments of the present invention are capable of collecting an infrared light image and a visible light image of a user's face, detecting the temperature information of the user's face from the infrared light image and determining the skin color information of the user's face from the visible light image; the user's health status information is then determined jointly from the temperature information and the skin color information. In this way, the electronic device can be used to easily monitor the user's health status, the hardware and software resources of existing electronic devices are better utilized, and the intelligence of the electronic device and user satisfaction are improved.
  • FIG. 1 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
  • FIG. 2 is a schematic structural diagram of a communication system to which an electronic device can be applied according to an embodiment of the present invention
  • FIG. 3 is a schematic flowchart diagram of a first information processing method according to an embodiment of the present disclosure
  • FIG. 4 is a schematic flowchart of a second information processing method according to an embodiment of the present invention.
  • FIG. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
  • FIG. 6 is a schematic flowchart diagram of a third information processing method according to an embodiment of the present disclosure.
  • FIG. 7 is a schematic diagram of an effect of superimposing a visible light image and an infrared light image according to an embodiment of the present invention.
  • FIG. 8 is a schematic diagram of a correspondence between a user's face and a body organ according to an embodiment of the present invention.
  • FIG. 9 is a schematic flowchart of a training learning machine according to an embodiment of the present invention.
  • FIG. 10 is a schematic diagram showing an axis representation of a degree of health according to an embodiment of the present invention.
  • FIG. 11 is a schematic flowchart diagram of a fourth information processing method according to an embodiment of the present invention.
  • the information processing method described in this embodiment can be applied to various types of electronic devices.
  • the electronic device in this embodiment may include various types of mobile terminals or fixed terminals.
  • the mobile terminal can be implemented in various forms.
  • the terminal described in the present invention may include mobile terminals such as a mobile phone, a smart phone, a notebook computer, a digital broadcast receiver, a PDA (Personal Digital Assistant), a PAD (tablet computer), a PMP (Portable Multimedia Player), a navigation device, and the like, and fixed terminals such as a digital TV, a desktop computer, and the like.
  • in the following description, it is assumed that the terminal is a mobile terminal.
  • those skilled in the art will appreciate that configurations in accordance with embodiments of the present invention can also be applied to fixed-type terminals, except for components used specifically for mobile purposes.
  • FIG. 1 is a schematic diagram showing the hardware structure of a mobile terminal embodying various embodiments of the present invention.
  • the mobile terminal 100 may include a wireless communication unit 110, an A/V (Audio/Video) input unit 120, a user input unit 130, an output unit 150, a memory 160, an interface unit 170, a controller 180, a power supply unit 190, and the like.
  • Figure 1 illustrates a mobile terminal having various components, but it should be understood that not all illustrated components are required to be implemented. More or fewer components can be implemented instead. The elements of the mobile terminal will be described in detail below.
  • Wireless communication unit 110 typically includes one or more components that permit radio communication between mobile terminal 100 and a wireless communication system or network.
  • the wireless communication unit may include at least one of a broadcast receiving module 111, a mobile communication module 112, a wireless internet module 113, a short-range communication module 114, and a location information module 115.
  • the broadcast receiving module 111 receives a broadcast signal and/or broadcast associated information from an external broadcast management server via a broadcast channel.
  • the broadcast channel can include a satellite channel and/or a terrestrial channel.
  • the broadcast management server may be a server that generates and transmits a broadcast signal and/or broadcast associated information, or a server that receives a previously generated broadcast signal and/or broadcast associated information and transmits it to the terminal.
  • the broadcast signal may include a TV broadcast signal, a radio broadcast signal, a data broadcast signal, and the like.
  • the broadcast signal may further include a data broadcast signal combined with a TV or radio broadcast signal.
  • the broadcast associated information may also be provided via a mobile communication network, and in this case, the broadcast associated information may be received by the mobile communication module 112.
  • the broadcast signal may exist in various forms, for example, it may exist in the form of Digital Multimedia Broadcasting (DMB) Electronic Program Guide (EPG), Digital Video Broadcasting Handheld (DVB-H) Electronic Service Guide (ESG), and the like.
  • the broadcast receiving module 111 can receive a signal broadcast by using various types of broadcast systems.
  • the broadcast receiving module 111 can receive digital broadcasts by using digital broadcasting systems such as Digital Multimedia Broadcasting-Terrestrial (DMB-T), Digital Multimedia Broadcasting-Satellite (DMB-S), Digital Video Broadcasting-Handheld (DVB-H), the MediaFLO (forward link only) data broadcasting system, Integrated Services Digital Broadcasting-Terrestrial (ISDB-T), and the like.
  • the broadcast receiving module 111 may be constructed to be suitable for various broadcast systems providing broadcast signals as well as the above-described digital broadcast systems.
  • the broadcast signal and/or broadcast associated information received via the broadcast receiving module 111 may be stored in the memory 160 (or other type of storage medium).
  • the mobile communication module 112 transmits the radio signals to and/or receives radio signals from at least one of a base station (e.g., an access point, a Node B, etc.), an external terminal, and a server.
  • Such radio signals may include voice call signals, video call signals, or various types of data transmitted and/or received in accordance with text and/or multimedia messages.
  • the wireless internet module 113 supports wireless internet access of the mobile terminal.
  • the module can be internally or externally coupled to the terminal.
  • the wireless Internet access technologies involved in the module may include WLAN (Wireless LAN) (Wi-Fi), Wibro (Wireless Broadband), Wimax (Worldwide Interoperability for Microwave Access), HSDPA (High Speed Downlink Packet Access), etc. .
  • the short range communication module 114 is a module for supporting short range communication.
  • Some examples of short-range communication technologies include Bluetooth™, Radio Frequency Identification (RFID), Infrared Data Association (IrDA), Ultra Wideband (UWB), ZigBee™, and the like.
  • the A/V input unit 120 is for receiving an audio or video signal.
  • the A/V input unit 120 may include a camera 121 and a microphone 122. The camera 121 processes image data of still pictures or video obtained by an image capturing device in a video capturing mode or an image capturing mode.
  • the processed image frame can be displayed on the display unit 151.
  • the image frames processed by the camera 121 may be stored in the memory 160 (or other storage medium) or transmitted via the wireless communication unit 110, and two or more cameras 121 may be provided according to the configuration of the mobile terminal.
  • the microphone 122 can receive sound (audio data) via a microphone in an operation mode of a telephone call mode, a recording mode, a voice recognition mode, and the like, and can process such sound as audio data.
  • the processed audio (voice) data can be converted into a format that can be transmitted to a mobile communication base station via the mobile communication module 112 for output in the case of the telephone call mode.
  • the microphone 122 can implement various types of noise cancellation (or suppression) algorithms to cancel (or suppress) noise or interference generated during the process of receiving and transmitting audio signals.
  • the user input unit 130 may generate key input data according to a command input by the user to control various operations of the mobile terminal.
  • the user input unit 130 allows the user to input various types of information, and may include a keyboard, a dome switch, a touch pad (e.g., a touch sensitive component that detects changes in resistance, pressure, capacitance, etc. due to contact), a scroll wheel, a rocker, and the like.
  • in particular, when the touch pad is superimposed on the display unit 151 in the form of a layer, a touch screen can be formed.
  • the interface unit 170 serves as an interface through which at least one external device can connect with the mobile terminal 100.
  • the external device may include a wired or wireless headset port, an external power (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, a headphone port, and the like.
  • the identification module may store various information for verifying the user of the mobile terminal 100 and may include a User Identity Module (UIM), a Subscriber Identity Module (SIM), a Universal Subscriber Identity Module (USIM), and the like.
  • the device having the identification module may take the form of a smart card, and thus the identification device may be connected with the mobile terminal 100 via a port or other connection means.
  • the interface unit 170 can be configured to receive input from an external device (e.g., data information, power, etc.) and transmit the received input to one or more components within the mobile terminal 100, or can be used to transfer data between the mobile terminal and an external device.
  • the interface unit 170 may serve as a path through which power is supplied from the base to the mobile terminal 100, or as a path through which various command signals input from the base are transmitted to the mobile terminal.
  • Various command signals or power input from the base can be used as signals for identifying whether the mobile terminal is accurately mounted on the base.
  • Output unit 150 is configured to provide an output signal (eg, an audio signal, a video signal, an alarm signal, a vibration signal, etc.) in a visual, audio, and/or tactile manner.
  • the output unit 150 may include a display unit 151, an audio output module 152, and the like.
  • the display unit 151 can display information processed in the mobile terminal 100. For example, when the mobile terminal 100 is in a phone call mode, the display unit 151 can display a user interface (UI) or a graphical user interface (GUI) related to a call or other communication (eg, text messaging, multimedia file download, etc.). When the mobile terminal 100 is in a video call mode or an image capturing mode, the display unit 151 may display a captured image and/or a received image, a UI or GUI showing a video or image and related functions, and the like.
  • the display unit 151 can function as an input device and an output device.
  • the display unit 151 may include at least one of a liquid crystal display (LCD), a thin film transistor LCD (TFT-LCD), an organic light emitting diode (OLED) display, a flexible display, a three-dimensional (3D) display, and the like.
  • Some of these displays may be configured to be transparent to allow a user to view from the outside, which may be referred to as a transparent display, and a typical transparent display may be, for example, a TOLED (Transparent Organic Light Emitting Diode) display or the like.
  • the mobile terminal 100 may include two or more display units (or other display devices), for example, the mobile terminal may include an external display unit (not shown) and an internal display unit (not shown) .
  • the touch screen can be used to detect touch input pressure as well as touch input position and touch input area.
  • the audio output module 152 may convert audio data received by the wireless communication unit 110 or stored in the memory 160 into an audio signal and output it as sound when the mobile terminal is in a call signal receiving mode, a call mode, a recording mode, a voice recognition mode, a broadcast receiving mode, and the like.
  • the audio output module 152 can provide audio output (eg, call signal reception sound, message reception sound, etc.) associated with a particular function performed by the mobile terminal 100.
  • the audio output module 152 can include a speaker, a buzzer, and the like.
  • the memory 160 may store a software program or the like for processing and control operations performed by the controller 180, or may temporarily store data (for example, a phone book, a message, a still image, a video, etc.) that has been output or is to be output. Moreover, the memory 160 can store data regarding vibrations and audio signals of various manners that are output when a touch is applied to the touch screen.
  • the memory 160 may include at least one type of storage medium including a flash memory, a hard disk, a multimedia card, a card type memory (eg, SD or DX memory, etc.), a random access memory (RAM), a static random access memory ( SRAM), read only memory (ROM), electrically erasable programmable read only memory (EEPROM), programmable read only memory (PROM), magnetic memory, magnetic disk, optical disk, and the like.
  • the mobile terminal 100 can cooperate with a network storage device that performs a storage function of the memory 160 through a network connection.
  • the controller 180 typically controls the overall operation of the mobile terminal. For example, the controller 180 performs the control and processing associated with voice calls, data communications, video calls, and the like.
  • the controller 180 may include a multimedia module 181 for reproducing (or playing back) multimedia data, which may be constructed within the controller 180 or may be configured to be separate from the controller 180.
  • the controller 180 may perform a pattern recognition process to recognize a handwriting input or a picture drawing input performed on the touch screen as a character or an image.
  • the power supply unit 190 receives external power or internal power under the control of the controller 180 and provides appropriate power required to operate the various components and components.
  • the various embodiments described herein can be implemented in a computer readable medium using, for example, computer software, hardware, or any combination thereof.
  • the embodiments described herein may be implemented by using at least one of application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), processors, controllers, microcontrollers, microprocessors, and electronic units designed to perform the functions described herein; in some cases, such embodiments may be implemented in the controller 180.
  • implementations such as procedures or functions may be implemented with separate software modules that permit the execution of at least one function or operation.
  • the software code can be implemented by a software application (or program) written in any suitable programming language, which can be stored in the memory 160 and executed by the controller 180.
  • the mobile terminal has been described in terms of its function.
  • a slide type mobile terminal among various types of mobile terminals, such as folding type, bar type, swing type, and slide type mobile terminals, will be described below as an example; however, the present invention can be applied to any type of mobile terminal and is not limited to the slide type mobile terminal.
  • the mobile terminal 100 as shown in FIG. 1 may be configured to operate using a communication system such as a wired and wireless communication system and a satellite-based communication system that transmits data via frames or packets.
  • Such communication systems may use different air interfaces and/or physical layers.
  • air interfaces used by communication systems include, for example, Frequency Division Multiple Access (FDMA), Time Division Multiple Access (TDMA), Code Division Multiple Access (CDMA), Universal Mobile Telecommunications System (UMTS) (in particular, Long Term Evolution (LTE)), Global System for Mobile Communications (GSM), and the like.
  • the following description relates to a CDMA communication system, but such teachings are equally applicable to other types of systems.
  • a CDMA wireless communication system can include a plurality of mobile terminals 100, a plurality of base stations (BS) 270, a base station controller (BSC) 275, and a mobile switching center (MSC) 280.
  • the MSC 280 is configured to interface with a public switched telephone network (PSTN) 290.
  • the MSC 280 is also configured to form an interface with the BSCs 275, which can be coupled to the base stations 270 via backhaul lines.
  • the backhaul line can be constructed in accordance with any of a number of well known interfaces including, for example, E1/T1, ATM, IP, PPP, Frame Relay, HDSL, ADSL, or xDSL. It will be appreciated that the system as shown in FIG. 2 can include multiple BSCs 275.
  • Each BS 270 can serve one or more partitions (or regions), each of which is covered by a multi-directional antenna or an antenna directed to a particular direction radially away from the BS 270. Alternatively, each partition may be covered by two or more antennas for diversity reception. Each BS 270 can be configured to support multiple frequency allocations, and each frequency allocation has a particular frequency spectrum (eg, 1.25 MHz, 5 MHz, etc.).
  • BS 270 may also be referred to as a Base Transceiver Subsystem (BTS) or other equivalent terminology.
  • the term "base station” can be used to generally refer to a single BSC 275 and at least one BS 270.
  • a base station can also be referred to as a "cell station.”
  • each partition of a particular BS 270 may be referred to as a plurality of cellular stations.
  • a broadcast transmitter (BT) 295 transmits a broadcast signal to the mobile terminal 100 operating within the system.
  • a broadcast receiving module 111 as shown in FIG. 1 is provided at the mobile terminal 100 to receive a broadcast signal transmitted by the BT 295.
  • a Global Positioning System (GPS) satellite 300 helps locate at least one of the plurality of mobile terminals 100.
  • a plurality of satellites 300 are depicted, but it is understood that useful positioning information can be obtained using any number of satellites.
  • the GPS module 115 as shown in Figure 1 is typically configured to cooperate with the satellite 300 to obtain desired positioning information. Instead of GPS tracking technology or in addition to GPS tracking technology, other techniques that can track the location of the mobile terminal can be used. Additionally, at least one GPS satellite 300 can selectively or additionally process satellite DMB transmissions.
  • BS 270 receives reverse link signals from various mobile terminals 100.
  • Mobile terminal 100 typically participates in calls, messaging, and other types of communications.
  • Each reverse link signal received by a particular base station 270 is processed within a particular BS 270.
  • the obtained data is forwarded to the relevant BSC 275.
  • the BSC 275 provides call resource allocation and mobility management functions, including coordination of soft handover processes between the BSs 270.
  • the BSC 275 also routes the received data to the MSC 280, which provides additional routing services for interfacing with the PSTN 290.
  • PSTN 290 interfaces with MSC 280, which forms an interface with BSC 275, and BSC 275 controls BS 270 accordingly to transmit forward link signals to mobile terminal 100.
  • this embodiment provides an information processing method, where the method includes:
  • Step S110: collecting the user's face with infrared light to form an infrared light image, and collecting the user's face with visible light to form a visible light image;
  • Step S120: analyzing the infrared light image to determine temperature information of the user's face, and analyzing the visible light image to determine skin color information of the user's face;
  • Step S130: combining the temperature information and the skin color information to analyze the health status information of the user.
  • This embodiment can be applied to the foregoing electronic device, such as a mobile terminal, such as a mobile phone, a tablet computer, or a wearable device.
  • the infrared light image is formed by capturing the user's face with infrared light in step S110.
  • the infrared light image includes an image of the user's face, such as a user's facial features.
  • the visible light image is also formed by collecting the user's face with visible light in step S110.
  • the infrared light image is analyzed in step S120, and the temperature information of the user's face can be perceived by the infrared image sensor according to the user's facial radiation information.
  • the temperature information herein may include temperature values for various locations of the user's face. Further, by calculation, information such as a temperature difference at each position of the user's face can be known.
  • the visible light image will also be analyzed in step S120 to obtain skin color information at various positions of the user's face.
  • the temperature information and the skin color information of the user's face can reflect the health status of the user.
  • the user's health status information is obtained based on the temperature information and the skin color information; in the present embodiment, the temperature value and/or the temperature difference are analyzed, and the user's health status information is obtained based on Chinese medicine or Western medicine theory.
  • the skin color information herein may include depth information of the skin color, uniformity information of the skin color, and color tone information of the skin color; obviously, the skin color information of the user's face can reflect the user's physical health.
  • the temperature information and the skin color information of the user's face are used to jointly diagnose the health state of the user; since at least two dimensions of analysis are included, more accurate health state information can be obtained.
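To make the two-image, two-feature flow described above more concrete, the following Python sketch outlines how such a pipeline might be wired together. It is an illustration only: the linear pixel-to-temperature calibration, the whole-image skin-colour average, and the pre-trained classifier passed in are assumptions, not details specified by the application.

```python
import numpy as np

def analyze_health(ir_image: np.ndarray, vis_image: np.ndarray, classifier):
    """Minimal sketch of steps S110-S130: derive temperature and skin-colour
    features from the two facial images and feed them to a trained model."""
    # Step S120a: map infrared pixel intensities to temperature values
    # (a simple linear calibration is assumed here purely for illustration).
    temperature_map = 30.0 + 0.05 * ir_image.astype(np.float32)

    # Step S120b: take the mean colour of the visible-light image as a
    # stand-in for the skin-colour information of the face region.
    skin_color = vis_image.reshape(-1, 3).mean(axis=0)

    # Step S130: combine both cues into one feature vector and classify.
    features = np.concatenate([[temperature_map.mean(), temperature_map.std()],
                               skin_color])
    return classifier.predict(features.reshape(1, -1))
```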
  • when the information processing method described in this embodiment is applied to an electronic device such as a mobile phone, a tablet computer, or a wearable device, the user can use an electronic device carried along, such as a mobile phone, a notebook, or a tablet computer, to easily obtain his or her own health status information and thus easily monitor his or her health status, which greatly improves the utilization of software and hardware resources in electronic devices such as mobile phones and tablet computers, the intelligence of these devices, and user satisfaction.
  • the step S130 may include analyzing the temperature information and the skin color information by using a machine learning algorithm to obtain health status information of the user.
  • the temperature information and the skin color information are analyzed by using a machine learning algorithm to obtain health status information of the user.
  • the machine learning algorithm obtains characteristic parameters for different health states by analyzing and learning a large amount of data before performing the analysis; in this embodiment, the temperature information and the skin color information may be matched against these characteristic parameters to accurately determine the health status information of the user.
  • the feature parameter may be data sent from a network server or a medical health detection platform or the like.
  • the method in the embodiment further includes: forming the learning machine algorithm before analyzing the temperature information and the skin color information by using a learning machine algorithm.
  • forming the learning machine algorithm may include the following steps:
  • Step S210 performing algorithm training using sample data as input data of the learning machine, and obtaining a training algorithm;
  • the sample data includes sample temperature information and sample skin color information;
  • Step S220: verifying the training algorithm by using test data and obtaining a verification result; the test data includes test temperature information and test skin color information;
  • Step S230 If the verification result indicates that the training algorithm meets the preset condition, determine that the training algorithm is the machine learning algorithm.
  • the sample data in this embodiment may include sample skin color information, sample temperature information, and corresponding health state information; by training the learning machine with the sample data, the learning machine can obtain the functional relationship between the skin color information and temperature information on the one hand and the health state information on the other; this functional relationship can be the training algorithm, that is, a candidate machine learning algorithm.
  • the test data is also used for verification.
  • the test skin color information and the test temperature information in the test data are processed by the training algorithm as input, and an output result is obtained; the output result is compared with the test health status information in the test data, so that the correctness of the training algorithm on each piece of test data can be obtained.
  • the training algorithm is then used as the machine learning algorithm for performing subsequent acquisition of the user's health state information.
  • the functional relationships here can be represented by various parameters, which are not exemplified here.
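A minimal sketch of steps S210 to S230, assuming scikit-learn is available and using a support vector machine as the learner; the accuracy threshold standing in for the "preset condition" is an illustrative choice, not one named by the application.

```python
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

def build_learning_algorithm(sample_X, sample_y, test_X, test_y,
                             required_accuracy=0.9):
    """Sketch of steps S210-S230: train on sample data, verify on test data,
    and accept the trained algorithm only if it meets a preset condition."""
    # Step S210: algorithm training with the sample data as learner input.
    candidate = SVC(kernel="rbf").fit(sample_X, sample_y)

    # Step S220: verification with the held-out test data.
    accuracy = accuracy_score(test_y, candidate.predict(test_X))

    # Step S230: keep the training algorithm only if the preset condition holds.
    if accuracy >= required_accuracy:
        return candidate
    return None  # otherwise retrain with more data or different settings
```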
  • in this embodiment, the method further includes: using the visible light image to locate the distribution position of each organ of the user's face, specifically parts such as the forehead, the nose, the cheeks, and the tongue; the step S120 then includes: combining the distribution positions with the infrared light image to determine the temperature value of each organ of the user's face and the temperature difference between the organs.
  • each part of the user's face can correspond to a part of the user's body, so the electronic device can directly give health status information for each body part, allowing the user to use the electronic device to monitor his or her own health status.
  • the step S120 may include: extracting the pixel values of a specified organ from the infrared light image; converting the pixel values into temperature values; and calculating the temperature difference between the organs based on the obtained temperature values.
  • since the wavelength components and the intensity of the infrared radiation are related to the temperature of the face, the pixel values of the corresponding organ in the infrared image can be extracted and then converted between pixel value and temperature value; this conversion can be used to obtain the temperature values of the face and to calculate the temperature difference between different organs, and it is easy to implement, as sketched below.
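The pixel-to-temperature conversion and the organ-to-organ temperature differences might look like the following sketch; the linear gain/offset calibration and the rectangular organ regions are placeholder assumptions, not values from the application.

```python
import itertools
import numpy as np

def organ_temperatures(ir_image, organ_regions, gain=0.05, offset=30.0):
    """Sketch: convert infrared pixel values of specified organ regions into
    temperature values and compute the pairwise temperature differences.
    A linear pixel-to-temperature calibration (gain/offset) is assumed."""
    temps = {}
    for organ, (y0, y1, x0, x1) in organ_regions.items():
        pixels = ir_image[y0:y1, x0:x1].astype(np.float32)
        temps[organ] = offset + gain * pixels.mean()

    diffs = {(a, b): temps[a] - temps[b]
             for a, b in itertools.combinations(temps, 2)}
    return temps, diffs

# Hypothetical organ regions located from the visible-light image
# (row/column bounds in pixels), purely for illustration.
regions = {"forehead": (10, 40, 30, 90), "nose": (60, 80, 50, 70),
           "left_cheek": (60, 90, 20, 45), "right_cheek": (60, 90, 75, 100)}
```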
  • the analyzing the health status information of the user by combining the temperature information and the skin color information includes: extracting, from the visible light image, the color values of the pixels at the location of the user's face to obtain the skin color information. In this embodiment, the color values are extracted from the visible light image.
  • the method further includes: acquiring a predetermined parameter that affects the acquisition forming the visible light image; the step S120 may include: correcting the color values according to the predetermined parameter, wherein the corrected color values are the skin color information.
  • the acquiring the predetermined parameter may include acquiring the color temperature value of the acquisition unit that forms the visible light image; the step S120 may then include: correcting the color values according to the color temperature value to obtain the corrected color values.
  • the acquiring the predetermined parameter may also include acquiring an ambient illumination value under which the visible light image is formed; the step S120 may then include: correcting the color values according to the ambient illumination value to obtain the corrected color values.
  • the ambient illumination value herein may include values such as the ambient light brightness value and color value; correcting according to the ambient illumination restores the original skin color of the collected face, improves the accuracy of the extracted skin color information, and thereby yields more accurate health status information, as illustrated in the sketch below.
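A rough sketch of the colour extraction and correction described above; the per-channel colour-temperature gains and the ambient-light gain are placeholder calibration factors rather than values given in the application.

```python
import numpy as np

def corrected_skin_color(vis_image, face_mask, color_temp_gain=(1.0, 1.0, 1.0),
                         ambient_gain=1.0):
    """Sketch: extract the colour values of the pixels at the face location
    and correct them with predetermined acquisition parameters.
    The gains are hypothetical calibration factors, not defined values."""
    face_pixels = vis_image[face_mask].astype(np.float32)   # N x 3 (RGB)
    raw_color = face_pixels.mean(axis=0)

    # Correct for the colour temperature of the acquisition unit, then for
    # the measured ambient illumination, to recover the original skin colour.
    corrected = raw_color * np.asarray(color_temp_gain) / ambient_gain
    return np.clip(corrected, 0, 255)
```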
  • the step S110 may include separately acquiring the infrared light image and the visible light image by using a binocular acquisition unit.
  • the binocular acquisition unit here can correspond to various binocular cameras.
  • the binocular camera here can be a camera capable of collecting infrared light and visible light, can form a visible light image based on visible light, and can form an infrared light image based on infrared light.
  • the binocular acquisition unit is used for processing, and the infrared light image and the visible light image can be collected in the shortest time, which can reduce the response delay and improve the response rate of the electronic device.
  • the method in this embodiment further includes: outputting suggestion information according to the health status information.
  • the suggestion information in this embodiment may be pre-stored information mapped with the health status information, or suggestion information that is received from other electronic devices and mapped to the health status information. In this way, the electronic device can be easily used to determine its own health status information, and then the state of the diet, work and the like can be adjusted according to the suggested information.
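As one way to realise the pre-stored mapping mentioned above, a simple lookup table could be used; the categories and suggestion texts below are invented examples, not content from the application.

```python
# Illustrative pre-stored mapping from health-status categories to advice.
SUGGESTIONS = {
    "healthy": "Keep your current diet, exercise and sleep schedule.",
    "sub-healthy": "Consider more rest, lighter meals and regular exercise.",
}

def suggestion_for(status: str) -> str:
    """Return the suggestion mapped to a health status, if one is stored."""
    return SUGGESTIONS.get(status, "No suggestion available for this status.")
```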
  • the embodiment provides an electronic device, where the electronic device includes:
  • the collecting unit 310 is configured to collect the user's face with infrared light to form an infrared light image and to collect the user's face with visible light to form a visible light image;
  • the analyzing unit 320 is configured to analyze the infrared light image, determine temperature information of the user's face, and analyze the visible light image to determine skin color information of the user's face;
  • the obtaining unit 330 is configured to analyze the health status information of the user by combining the temperature information and the skin color information.
  • the electronic device in this embodiment may be the foregoing mobile terminal, such as a mobile phone, a tablet computer, or a wearable device.
  • the collecting unit 310 may correspond to a visible light sensor and an infrared light sensor, and the infrared light sensor can collect infrared light to form the infrared light image.
  • the visible light sensor can collect visible light to form a visible light image.
  • the specific structures of the analysis unit 320 and the obtaining unit 330 correspond to a processor or processing circuit inside the electronic device.
  • the processor can include an application processor, a microprocessor, a digital signal processor or a programmable array, and the like.
  • the processing circuit can include a structure such as an application specific integrated circuit.
  • the analyzing unit 320 and the obtaining unit 330 may be integrated corresponding to the same processor or processing circuit, or may respectively correspond to different processors or processing circuits.
  • the obtaining unit is configured to analyze the health status information of the user by combining the temperature information and the skin color information.
  • the electronic device in the embodiment obtains the temperature information and the skin color information of the user's face by collecting the infrared light image and the visible light image, and obtains the health state information of the user by analyzing the temperature information, thereby improving the intelligence of the electronic device. And user satisfaction, so that users can easily obtain their health status information by collecting their own faces with electronic devices.
  • the temperature information and the skin color information are referenced at the same time, and the reference quantity for forming the health status information is increased, and the accuracy of the health status information is improved.
  • the obtaining unit 330 is configured to analyze the temperature information and the skin color information by using a machine learning algorithm to obtain health status information of the user.
  • the machine learning algorithm is used to analyze the temperature information and the skin color information to obtain the health state information.
  • the machine learning algorithm analyzes a large amount of data in advance to obtain characteristic parameters for characterizing different health states; the health state information is then obtained by matching the temperature information and the skin color information against these characteristic parameters, which makes the health state information easy to acquire while ensuring its high accuracy.
  • the electronic device further includes:
  • a training unit configured to perform algorithm training using the sample data as input data of the learning machine to obtain a training algorithm;
  • the sample data includes sample temperature information and sample skin color information;
  • a verification unit configured to verify the training algorithm by using test data to obtain a verification result;
  • the test data includes test temperature information and test skin color information;
  • a determining unit configured to determine that the training algorithm is the machine learning algorithm if the verification result indicates that the training algorithm meets a preset condition.
  • the training unit in this embodiment may include various types of learning machines.
  • the specific structure of the verification unit and the determination unit may correspond to a processor or a processing circuit.
  • the processor or processing circuitry may implement the various functions of the various units described above by executing the executable instructions.
  • the electronic device further includes: a positioning unit configured to locate the distribution position of each part of the user's face by using the visible light image; the analyzing unit 320 is further configured to combine the distribution positions and the infrared light image to determine the temperature value of each organ of the user's face and the temperature difference between the organs.
  • the positioning unit in this embodiment may include a coordinate positioning device or the like, and can determine the distribution position of each organ on the user's face through analysis of the visible light image.
  • the analysis unit 320 combines the distribution position and the infrared light image to determine the temperature value and temperature difference of each organ.
  • the temperature value and the temperature difference will be used as temperature information as the basis for obtaining the health status information.
  • such an electronic device avoids the cumbersome operation of locating organs directly in the infrared light image and, at the same time, can improve the accuracy of the temperature information, thereby further improving the accuracy of the health state information.
  • the analyzing unit 320 is configured to extract the pixel values of a specified organ from the infrared light image, convert the pixel values into temperature values, and calculate the temperature difference between the organs according to the obtained temperature values.
  • the analyzing unit 320 is further configured to extract a color value of a pixel at a location where the user's face is located from the visible light image to obtain the skin color information.
  • the electronic device further includes: an acquiring unit configured to acquire a predetermined parameter that affects the acquisition forming the visible light image; the analyzing unit 320 is further configured to correct the color values according to the predetermined parameter, wherein the corrected color values are the skin color information.
  • the acquiring unit is configured to acquire the color temperature value of the acquisition unit that forms the visible light image;
  • the analyzing unit 320 is configured to correct the color value according to the color temperature value to obtain the corrected color value.
  • the acquiring unit is configured to acquire an ambient light value for collecting the visible light image
  • the analyzing unit 320 is configured to correct the color value according to the ambient light value to obtain the corrected color value.
  • the collecting unit 310 is a binocular acquisition unit configured to separately acquire the infrared light image and the visible light image at the same time.
  • the binocular acquisition unit can simultaneously collect infrared light images and visible light images, which can reduce the time taken for collecting images of the user's face, improve the response rate of the electronic device, and reduce the response delay.
  • the electronic device further includes: an output unit configured to output suggestion information according to the health status information.
  • the output unit in this embodiment may correspond to a display output unit or an audio output unit.
  • the display output unit may include various types of display screens.
  • the display screen may include a liquid crystal display, an electronic ink display, a projection display, or an organic light emitting diode (OLED) display.
  • the audio output unit may include a speaker or an audio output circuit or the like.
  • the output unit in this embodiment can output suggestion information, give the user a suggestion to maintain or restore the health status, and improve the intelligence of the electronic device and the user satisfaction.
  • the embodiment of the present invention further provides a computer storage medium, where the computer storage medium stores computer executable instructions, and the computer executable instructions are used to perform the information processing method of any one of the foregoing items, for example, the methods shown in FIG. 3 and FIG. 4.
  • the computer storage medium may include various storage media such as an optical disk, a magnetic tape, a mobile hard disk, a flash memory, and the like, and may be a non-transitory storage medium.
  • this example provides a method for acquiring health status information, including:
  • Step S410: acquiring an infrared and visible light facial binocular image; the facial binocular image herein may be understood as an overlay of the infrared light image and the visible light image;
  • Step S420: modeling the facial health data;
  • Step S430: acquiring, according to a machine learning algorithm, health feature classifier parameters for identifying other people's faces;
  • Step S440: performing health level detection based on the health feature classification parameters, and outputting a health suggestion.
  • in step S410, an infrared spectrum image of the face is captured by the infrared camera; the infrared image sensor can sense the temperature information of an object according to its heat radiation, so the temperature information of the face is obtained. However, the imaging principle of the infrared camera differs from that of the visible light camera: the brightness and color details captured by a visible light camera are lost, which makes it difficult to locate the facial features from the infrared image alone.
  • the visible light image of the face is simultaneously captured by the visible light camera, and the skin color information of the face is acquired.
  • the binocular system thus composed can simultaneously acquire the skin color information and temperature information of the facial features.
  • in step S420, a machine learning algorithm is used to perform learning analysis on a large amount of face data, and a health feature model for identifying faces is learned.
  • the main purpose is to simulate the observation mode of Chinese medicine: the key facial features are used as the feature input of a classifier, and through learning and training on big data the key health feature parameters are obtained, yielding a classifier for testing facial health images.
  • key features of the classification modeling include: the temperatures of the various organs of the face, such as the forehead, nose, cheeks, and tongue; the temperature differences between them; and the color characteristics of the facial organs, which, according to the basic color statistics of Chinese medicine, include yellow, white, red, black, green and other colors, each color being graded as light, medium or deep.
  • the temperature and color information of these facial organs can be used to infer the health of other parts of the human body, as shown in Fig. 8; therefore, analyzing the characteristic information of the facial organs makes it possible to judge the condition of each part of the human body.
  • in Fig. 8, the facial regions that reflect the shoulder, lung, throat, and liver of the user's body are respectively indicated; in a specific implementation, other facial regions can also reflect the health status of other parts of the user's body, which are omitted from Fig. 8.
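To illustrate the colour features used for the classification modelling described above, the sketch below grades an organ's colour into one of the five base colours with a light/medium/deep depth; the hue and lightness thresholds are illustrative assumptions, not values given in the example.

```python
import colorsys

BASE_COLORS = ["red", "yellow", "green", "white", "black"]

def categorize_color(rgb):
    """Map an (R, G, B) skin-colour value (0-255) to a base colour plus a
    light/medium/deep depth grade. Thresholds are illustrative only."""
    r, g, b = (c / 255.0 for c in rgb)
    h, l, s = colorsys.rgb_to_hls(r, g, b)

    if l < 0.2:
        base = "black"            # very dark regions
    elif s < 0.15:
        base = "white"            # low saturation reads as pale/white
    elif h < 0.09 or h > 0.9:
        base = "red"
    elif h < 0.2:
        base = "yellow"
    else:
        base = "green"

    depth = "light" if l > 0.66 else "medium" if l > 0.33 else "deep"
    return base, depth
```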
  • in step S430, through a large amount of input data and machine learning classification training, the computer acquires the health feature parameters used to identify other people's faces.
  • the user captures a facial infrared and visible light image by taking a self-portrait, and the user's facial image is input as a test image into the facial health classifier.
  • the health classifier analyzes the health of the currently input test image according to the characteristic parameters obtained from offline learning, and gives the user's health data analysis.
  • a flow chart of the machine learning algorithm for facial health is given in FIG. 9. As shown in FIG. 9, the information processing method in this example may include:
  • Step S1 Input face image health degree training data. These face image health training data can be sample data.
  • Step S2: extracting the color and temperature characteristics of the various organs of the face.
  • Step S3: inputting the color and temperature characteristics to a classifier, such as an AdaBoost classifier or an SVM.
  • Adaboost is an iterative algorithm to train different weak classifiers for the same training set, and then combine these weak classifiers to form a stronger final classifier (this final classifier is a strong classifier) .
  • SVM is an abbreviation of Support Vector Machine, which is a support vector machine classifier.
  • Step S4 Acquire a facial health degree feature classification parameter.
  • Step S5: forming face image health degree detection data based on the health degree feature classification parameters. Next, it is determined whether the training requirement is met; if not, the process returns to step S3, and if the training requirement is met, the health degree of actual face images can be detected.
  • Step S6: Input the actual face image health degree detection data, where this detection data may be the temperature information acquired from the infrared light image and/or the skin color information detected from the visible light image.
  • The actual face image health degree detection data here may correspond to the detection samples.
  • Step S7: Analyze the measured results.
  • Step S8: If the analysis result obtained in step S7 does not satisfy the requirement, return to the algorithm design flow, improve the algorithm, and go back to step S2.
  • Step S9: If the analysis result obtained in step S7 satisfies the requirement, the algorithm design is completed.
  • Steps S6 to S7 may be performed repeatedly. If the accuracy of the analysis results on the actual face image health degree detection data reaches a specified threshold, the requirement may be considered met; otherwise, it is not satisfied.
  • The analysis results here may include the health status information.
  • The face image health degree training data input in step S1 is the sample data used for training the learning machine. The following describes how this sample data is produced.
  • FIG. 10 shows a health value axis. A person's health is scored from 0 to 100: if the score is below 60, the corresponding user is considered to be in a sub-health state, and if the score is 60 or above, the user is considered to be in a healthy state.
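  • A trivial sketch of the scoring rule described for FIG. 10 (the 0-100 scale and the 60-point boundary come from the text above; the function name is merely illustrative):

```python
def health_state(score: float) -> str:
    """Map a 0-100 health score to the state described for FIG. 10."""
    if not 0 <= score <= 100:
        raise ValueError("health score must lie in [0, 100]")
    return "sub-health" if score < 60 else "healthy"

print(health_state(45))  # sub-health
print(health_state(72))  # healthy
```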
  • The main classification features are the skin color and the temperature of the facial skin; here, the skin color features and temperature features of the nose tip of each sample person are extracted as the feature vector.
  • The temperature feature can be converted into a corresponding temperature value according to the pixel values of the infrared image. The color information can be obtained by establishing a color mapping table: a color value is obtained from the image color information of the color image, a basic color table of yellow, white, red, black and blue is established, and the color depth of the region (light, medium or deep) is determined according to the magnitude of the color value. In this way the color characteristics of the sample are obtained, and a health feature vector matrix can then be established as follows. (Note: the values in the feature vector matrix are only used to illustrate the method and deviate from the actual measurement data.)
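  • The feature vector matrix itself, the pixel-to-temperature calibration, and the color table are not reproduced in the text, so the following is only an assumed sketch of how the nose-tip temperature and color-depth features might be assembled into a feature vector; the linear calibration constants, thresholds and patch values are placeholders.

```python
import numpy as np

# Placeholder linear calibration: IR pixel value (0-255) -> temperature in deg C.
IR_GAIN, IR_OFFSET = 0.05, 30.0

# Placeholder color-depth thresholds on a per-channel color value (0-255).
DEPTH_THRESHOLDS = {"light": 85, "medium": 170}   # above 170 -> "deep"
DEPTH_CODE = {"light": 1, "medium": 2, "deep": 3}

def pixel_to_temperature(ir_pixel_value: float) -> float:
    """Convert an infrared image pixel value to an approximate temperature."""
    return IR_GAIN * ir_pixel_value + IR_OFFSET

def color_depth(color_value: float) -> str:
    """Classify a color value as light, medium, or deep."""
    if color_value <= DEPTH_THRESHOLDS["light"]:
        return "light"
    if color_value <= DEPTH_THRESHOLDS["medium"]:
        return "medium"
    return "deep"

def nose_tip_feature_vector(ir_patch: np.ndarray, rgb_patch: np.ndarray) -> np.ndarray:
    """Build a small feature vector [temperature, R, G, B, depth_code] for the nose tip."""
    temperature = pixel_to_temperature(ir_patch.mean())
    mean_rgb = rgb_patch.reshape(-1, 3).mean(axis=0)
    depth = DEPTH_CODE[color_depth(mean_rgb.max())]
    return np.array([temperature, *mean_rgb, depth])

# Made-up 8x8 patches standing in for the nose-tip regions of the two images.
ir_patch = np.full((8, 8), 140.0)
rgb_patch = np.full((8, 8, 3), [180.0, 140.0, 120.0])
print(nose_tip_feature_vector(ir_patch, rgb_patch))
```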
  • AdaBoost classifier
  • The popular AdaBoost classifier is adopted here; its theory is mature, and it has been practiced effectively in pattern recognition and classification tasks such as face detection and recognition.
  • AdaBoost classification allows the designer to keep adding new weak classifiers until a predetermined, sufficiently small error rate is reached.
  • Each training sample is given a weight indicating the probability that it will be selected into the training set of the next classifier. If a sample has been classified accurately, its probability of being selected when constructing the next training set is reduced; conversely, if a sample is not classified correctly, its weight is increased. In this way, the AdaBoost classifier focuses on the samples that are more difficult to classify.
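  • Purely to illustrate the re-weighting idea just described, the following is a textbook discrete AdaBoost update on one-dimensional threshold stumps, not code from this disclosure; the temperature values and labels are made up.

```python
import numpy as np

def train_adaboost_stumps(x, y, n_rounds=5):
    """Minimal discrete AdaBoost on 1-D features with threshold stumps.

    x: shape (n,) feature values; y: labels in {-1, +1}.
    Returns a list of (threshold, polarity, alpha) weak classifiers.
    """
    n = len(x)
    weights = np.full(n, 1.0 / n)          # every sample starts with equal weight
    ensemble = []
    for _ in range(n_rounds):
        best = None
        # Pick the threshold/polarity stump with the lowest weighted error.
        for threshold in x:
            for polarity in (+1, -1):
                pred = np.where(x < threshold, polarity, -polarity)
                err = weights[pred != y].sum()
                if best is None or err < best[0]:
                    best = (err, threshold, polarity, pred)
        err, threshold, polarity, pred = best
        err = np.clip(err, 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)
        # Misclassified samples get larger weights; correctly classified ones get smaller weights.
        weights *= np.exp(-alpha * y * pred)
        weights /= weights.sum()
        ensemble.append((threshold, polarity, alpha))
    return ensemble

def predict(ensemble, x):
    total = sum(alpha * np.where(x < t, p, -p) for t, p, alpha in ensemble)
    return np.sign(total)

x = np.array([35.9, 36.1, 36.3, 37.2, 37.5, 37.8])   # e.g. nose-tip temperatures (made up)
y = np.array([1, 1, 1, -1, -1, -1])                   # +1 healthy, -1 sub-healthy (made up)
model = train_adaboost_stumps(x, y)
print(predict(model, x))
```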
  • The samples are divided into training samples and test samples.
  • The training samples are mainly used for classifier learning.
  • The test samples are mainly used to check whether the learned classification parameters meet the requirements.
  • The training samples are sent to the classifier, where iterative feature extraction, feature parameter comparison, iterative calculation of the feature parameter classification thresholds, and sample reclassification are performed.
  • Using the result parameters calculated by these processes, feature vector extraction and feature-parameter-based reclassification are then performed on the test samples, and finally the correct rate and the error rate of the sample decisions are obtained.
  • If the correct rate and the error rate satisfy the design requirements, for example if the probability of correct classification is above 95%, then the classifier learning is complete; otherwise, if the correct rate on the test results is lower than 95%, the parameter settings of the classifier should be re-adjusted, the number of samples should be increased, or new feature attributes should be added.
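  • A hedged sketch of this acceptance check (scikit-learn is assumed; the synthetic feature data are placeholders, while the 95% figure itself comes from the text above):

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for labelled face-health feature vectors (temperature + color features).
healthy = rng.normal(loc=[36.2, 0.4, 1.0], scale=0.2, size=(100, 3))
sub_healthy = rng.normal(loc=[37.3, 0.9, 2.5], scale=0.2, size=(100, 3))
X = np.vstack([healthy, sub_healthy])
y = np.array([1] * 100 + [0] * 100)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

clf = AdaBoostClassifier(n_estimators=50).fit(X_train, y_train)
correct_rate = accuracy_score(y_test, clf.predict(X_test))

if correct_rate >= 0.95:
    print(f"classifier accepted (correct rate {correct_rate:.2%})")
else:
    print(f"correct rate {correct_rate:.2%} < 95%: adjust parameters, add samples, or add features")
```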
  • In the actual test process, the above classification only completes the learning and testing process on a limited sample set.
  • A successful classifier also needs to be tested on actual data, and the result parameters calculated by these processes are applied to the feature data extracted from the actual data.
  • The user test value given by the classifier is compared with standard healthy face data to give the user a current health level value, so that the user has an intuitive understanding of the health data; it is also compared with the user's previous test results to analyze whether the user's health level is declining or rising. Finally, based on this analysis of the health data, corresponding health advice is given for the user's health.
  • This example provides an information processing method, including:
  • Step S11: Acquire facial image data; this step may correspond to acquiring an infrared light image and a visible light image in the foregoing embodiments.
  • Step S12: Facial feature analysis; this may correspond to extracting the temperature information and the skin color information in the foregoing embodiments.
  • Step S13: Feature selection, where one or more features can be selected for analysis.
  • Step S14: Feature classification learning.
  • Step S15: Acquire the feature classification learning parameters.
  • Step S16: Input actual face data.
  • Step S17: Obtain the test results for the actual face data.
  • Step S18: Compare and analyze the test results.
  • The test results here may correspond to the health status information in the foregoing embodiments.
  • The health status information here is compared with the health status information in a mapping relationship.
  • The mapping relationship here may be a mapping relationship between health status information and health suggestions.
  • Step S19: Give a health suggestion.
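  • As a final illustrative sketch of steps S18-S19: only the idea of mapping health status to suggestions and comparing with the user's previous results comes from the text; the status labels, suggestion wording and function name below are placeholders.

```python
# Hypothetical mapping from a coarse health status to a suggestion (step S19).
HEALTH_SUGGESTIONS = {
    "healthy": "Keep your current routine and re-test periodically.",
    "sub-health": "Consider more rest, a balanced diet, and a follow-up check.",
}

def report(current_score, previous_score=None):
    """Compare the current health level with the previous one and attach a suggestion."""
    status = "sub-health" if current_score < 60 else "healthy"
    lines = [f"current health level: {current_score:.0f} ({status})"]
    if previous_score is not None:
        if current_score > previous_score:
            trend = "rising"
        elif current_score < previous_score:
            trend = "declining"
        else:
            trend = "stable"
        lines.append(f"compared with the previous test ({previous_score:.0f}), your health level is {trend}")
    lines.append("suggestion: " + HEALTH_SUGGESTIONS[status])
    return "\n".join(lines)

print(report(current_score=55, previous_score=68))
```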
  • The disclosed apparatus and method may be implemented in other manners.
  • The device embodiments described above are merely illustrative.
  • The division of units is only a logical functional division; in actual implementation there may be other division manners, for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed.
  • The coupling, direct coupling, or communication connections between the components shown or discussed may be indirect coupling or communication connections through some interfaces, devices or units, and may be electrical, mechanical or in other forms.
  • The units described above as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • Each functional unit in the embodiments of the present invention may be integrated into one processing module, each unit may serve as a separate unit, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware, or in the form of hardware plus software functional units.
  • The foregoing program may be stored in a computer-readable storage medium; when the program is executed, it performs the steps of the foregoing method embodiments.
  • The foregoing storage medium includes: a removable storage device, a read-only memory (ROM), and a random access memory (RAM).

Abstract

An information processing method, an electronic device and a computer storage medium. The method comprises: using infrared light to collect a user face to form an infrared light image, and using visible light to collect the user face to form a visible light image (S110); analysing the infrared light image to determine temperature information about the user face, and analysing the visible light image to determine skin colour information about the user face (S120); and analysing health status information about this user in conjunction with the temperature information and the skin colour information (S130).

如图3所示,本实施例提供一种信息处理方法,所述方法包括:As shown in FIG. 3, this embodiment provides an information processing method, where the method includes:
步骤S110:利用红外光采集用户面部形成红外光图像并利用可见光采集所述用户面部形成可见光图像;Step S110: collecting infrared light images by using the infrared light to collect the infrared light image of the user's face and collecting the visible light image by using the visible light;
步骤S120分析所述红外光图像,确定所述用户面部的温度信息并分析所述可见光图像,确定所述用户面部的肤色信息;Step S120: analyzing the infrared light image, determining temperature information of the user's face, and analyzing the visible light image to determine skin color information of the user's face;
步骤S130:结合所述温度信息及所述肤色信息,分析所述用户的健康状态信息。Step S130: Combine the temperature information and the skin color information to analyze the health status information of the user.
本实施例可应用于前述的电子设备中,具体如移动终端中,例如手机、平板电脑或可穿戴式设备等。This embodiment can be applied to the foregoing electronic device, such as a mobile terminal, such as a mobile phone, a tablet computer, or a wearable device.
在步骤S110中利用红外光采集用户面部形成红外光图像。通常该红外光图像包括所述用户面部,例如用户的五官等图像。在步骤S110中还利用可见光采集用户面部形成可见光图像。The infrared light is captured by the user's face using infrared light in step S110. Typically the infrared light image includes an image of the user's face, such as a user's facial features. The visible light image is also formed by collecting the user's face with visible light in step S110.
在步骤S120中分析所述红外光图像,利用红外图像传感器根据用户面部辐射信息可以感知用户面部的温度信息。这里的温度信息可包括用户面部各个位置的温度值。再通过计算可以知道所述用户面部各个位置的温度差等信息。在步骤S120还将分析所述可见光图像,获得用户面部各个位置的肤色信息。The infrared light image is analyzed in step S120, and the temperature information of the user's face can be perceived by the infrared image sensor according to the user's facial radiation information. The temperature information herein may include temperature values for various locations of the user's face. Further, by calculation, information such as a temperature difference at each position of the user's face can be known. The visible light image will also be analyzed in step S120 to obtain skin color information at various positions of the user's face.
显然用户面部的温度信息和肤色信息均能够反映用户的健康状态。在本实施例中所述步骤S130中,将基于所述温度信息和肤色信息获得用户的 健康状态信息,在本实施例中将分析所述温度值和/或所述温度差,基于中医或西医理论获得用户的健康状态信息。这里的肤色信息可包括肤色的深浅信息,肤色的均匀信息及肤色的色调信息等。显然用户面部的肤色信息能够反映用户的身体健康状态。It is obvious that the temperature information and the skin color information of the user's face can reflect the health status of the user. In step S130 described in this embodiment, the user is obtained based on the temperature information and the skin color information. The health status information, in the present embodiment, will analyze the temperature value and/or the temperature difference, and obtain the user's health status information based on Chinese medicine or Western medicine theory. The skin color information herein may include depth information of skin color, uniform information of skin color, and color tone information of skin color. It is obvious that the skin color information of the user's face can reflect the user's physical health.
在本实施例中不仅根据用户面部的温度信息和肤色信息来共同对用户的健康状态做出诊断,至少包括两个维度的分析量,能够获得较为精确的健康状态信息。In this embodiment, not only the temperature information of the user's face and the skin color information are used to jointly diagnose the health state of the user, and at least the analysis amount of the two dimensions is included, and more accurate health state information can be obtained.
在具体的实现过程中,若将本实施例所述的信息处理方法应用到手机、平板电脑或可穿戴式设备等电子设备中,则用户利用自己携带的手机、笔记本、平板电脑等电子设备,通过自拍自己的面部就可以简便的获得自己的健康状态信息,从而简便的实现了对自己健康状态的监控,大大的提升了手机、平板电脑等电子设备中软硬件资源的利用率及这些设备的智能性,及用户的使用满意度。In a specific implementation process, if the information processing method described in this embodiment is applied to an electronic device such as a mobile phone, a tablet computer, or a wearable device, the user uses an electronic device such as a mobile phone, a notebook, or a tablet computer that is carried by the user. By taking a self-portrait of your face, you can easily obtain your own health status information, thus easily monitoring your health status, greatly improving the utilization of software and hardware resources in electronic devices such as mobile phones and tablet computers, and the intelligence of these devices. Sex, and user satisfaction.
在一些实施例中,所述步骤S130可包括:利用机器学习算法分析所述温度信息及所述肤色信息,获得所述用户的健康状态信息。在本实施例中利用机器学习算法来分析所述温度信息和所述肤色信息,获得用户的健康状态信息。所述机器学习算法为在进行本次分析之前通过对大量数据的分析和学习,获得表征不同健康状态的特征参数,可以将本实施例中所述温度信息与所述肤色信息与所述特征参数的匹配,精确的确定出所述用户的健康状态信息。在本实施例中所述特征参数可以为来自网络服务器或医疗健康检测平台等发送的数据。In some embodiments, the step S130 may include analyzing the temperature information and the skin color information by using a machine learning algorithm to obtain health status information of the user. In the embodiment, the temperature information and the skin color information are analyzed by using a machine learning algorithm to obtain health status information of the user. The machine learning algorithm obtains characteristic parameters for different health states by analyzing and learning a large amount of data before performing the analysis, and the temperature information and the skin color information and the feature parameters in the embodiment may be used. Matching, accurately determining the health status information of the user. In this embodiment, the feature parameter may be data sent from a network server or a medical health detection platform or the like.
在利用学习机算法分析所述温度信息及肤色信息之前,在本实施例中所述方法中还包括:形成所述学习机算法。如图4所示,形成所述学习机算法可包括如下步骤:The method in the embodiment further includes: forming the learning machine algorithm before analyzing the temperature information and the skin color information by using a learning machine algorithm. As shown in FIG. 4, forming the learning machine algorithm may include the following steps:
步骤S210:利用样本数据作为学习机的输入数据进行算法训练,获得 训练算法;所述样本数据包括样本温度信息和样本肤色信息;Step S210: performing algorithm training using sample data as input data of the learning machine, and obtaining a training algorithm; the sample data includes sample temperature information and sample skin color information;
步骤S220:采用测试数据对所述训练算法进行验证,获得验证结果;所述测试数据包括测试温度信息和测试温度信息;Step S220: verifying the training algorithm by using test data, and obtaining a verification result; the test data includes test temperature information and test temperature information;
步骤S230:若所述验证结果表明训练算法满足预设条件,则确定所述训练算法为所述机器学习算法。Step S230: If the verification result indicates that the training algorithm meets the preset condition, determine that the training algorithm is the machine learning algorithm.
本实施例中所述样本数据可包括样本肤色信息和样本温度信息及其对应的健康状态信息;利用样本数据训练学习机,可使学习机获取肤色信息和温度信息与健康状态信息之间的对应函数关系。该对应函数关系可为所述训练算法或备选的机器学习算法。在本实施例步骤S220中还将利用测试数据来进行验证,通常所述测试数据中的样本肤色信息和样本温度信息作为利用训练算法进行信息处理的输入,得到一个输出结果;将该输出结果与测试数据中的测试健康状态信息进行比较,可得到训练算法对每一份测试数据处理之后的正确性,若正确性达到指定阈值,可认为满足该训练算法满足所述预设条件,可将该训练算法作为进行后续用户健康状态信息获取的机器学习算法。这里的函数关系可利用各种参数来表示,在这里就不一一举例了。The sample data in the embodiment may include sample skin color information and sample temperature information and corresponding health state information; using the sample data to train the learning machine, the learning machine may obtain the correspondence between the skin color information and the temperature information and the health state information. Functional relationship. The corresponding functional relationship can be the training algorithm or an alternative machine learning algorithm. In the step S220 of the embodiment, the test data is also used for verification. Generally, the sample skin color information and the sample temperature information in the test data are input as information processing by using a training algorithm, and an output result is obtained; the output result is Comparing the test health status information in the test data, the correctness of the training algorithm after processing each test data can be obtained. If the correctness reaches the specified threshold, it can be considered that the training algorithm satisfies the preset condition, and the The training algorithm is used as a machine learning algorithm for performing subsequent user health state information acquisition. The functional relationships here can be represented by various parameters, which are not exemplified here.
In some embodiments, the method further includes:
locating the distribution positions of the facial organs of the user by using the visible light image;
and step S120 includes:
determining, in combination with the distribution positions and the infrared light image, the temperature value of each organ of the user's face and the temperature differences between the organs.
When the infrared light image alone is used to locate the organs of the user's face, the localization may be cumbersome or insufficiently accurate. In this embodiment, the visible light image is therefore used to locate the distribution position of each organ of the user's face, such as the forehead, nose, cheeks, and tongue. According to medical theory, each part of the user's face corresponds to a part inside the user's body; based on the above information about these organs, the electronic device can directly give the health status information, making it convenient for the user to monitor his or her own health status with the electronic device.
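As an illustration of locating facial regions from the visible light image, a sketch using OpenCV's bundled Haar face detector; the fractional sub-regions standing in for the forehead, nose, and cheeks are illustrative assumptions rather than values from this disclosure:

```python
# Sketch: locate the face in the visible light image and derive rough organ regions.
import cv2

def locate_facial_regions(visible_bgr):
    gray = cv2.cvtColor(visible_bgr, cv2.COLOR_BGR2GRAY)
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = detector.detectMultiScale(gray, 1.1, 5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    # Illustrative sub-regions (x, y, w, h) within the detected face box.
    return {
        "forehead":    (x + w // 4,     y,              w // 2, h // 5),
        "nose":        (x + 2 * w // 5, y + 2 * h // 5, w // 5, h // 4),
        "left_cheek":  (x + w // 8,     y + h // 2,     w // 4, h // 5),
        "right_cheek": (x + 5 * w // 8, y + h // 2,     w // 4, h // 5),
    }
```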
In some embodiments, step S120 may include:
extracting pixel values of a specified organ from the infrared light image;
converting the pixel values into temperature values;
calculating the temperature differences between the organs according to the obtained temperature values.
Since a human face emits infrared radiation, and parameters such as the wavelength components and intensity of that radiation are related to the temperature of the face, in this embodiment the pixel values of the corresponding organs can be extracted from the infrared image and converted into temperature values; the temperature of the face is thus known and the temperature differences between different organs can be calculated. This is simple to implement.
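A sketch of such a conversion, assuming a radiometric sensor whose pixel values map linearly onto a known temperature range; the calibration constants are placeholders, not values from this disclosure:

```python
# Sketch: convert IR pixel values of organ regions to temperatures and compute differences.
import numpy as np
from itertools import combinations

def region_temperature(ir_image, region, t_min=20.0, t_max=40.0, pix_max=255.0):
    """Linear pixel-to-temperature mapping; t_min/t_max are assumed calibration values."""
    x, y, w, h = region
    mean_pixel = float(np.mean(ir_image[y:y + h, x:x + w]))
    return t_min + (t_max - t_min) * mean_pixel / pix_max

def organ_temperature_differences(ir_image, regions):
    """regions: {organ name: (x, y, w, h)}. Returns per-organ temperatures and pairwise differences."""
    temps = {name: region_temperature(ir_image, r) for name, r in regions.items()}
    diffs = {(a, b): temps[a] - temps[b] for a, b in combinations(temps, 2)}
    return temps, diffs
```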
In some embodiments, analyzing the health status information of the user by combining the temperature information and the skin color information includes: extracting, from the visible light image, the color values of the pixels at the position of the user's face to obtain the skin color information. In this embodiment, the color values are extracted from the visible light image.
In some embodiments, the ambient light and the parameters of the acquisition device itself may affect the capture of the visible light image and cause the color value of each pixel in the visible light image to change. In this embodiment, these parameters are collectively referred to as predetermined parameters. Therefore, to improve the accuracy of the skin color information, the method further includes: acquiring a predetermined parameter that affects the formation of the visible light image; and step S120 may include: correcting the color values according to the predetermined parameter, the corrected color values being the skin color information. Acquiring the predetermined parameter that affects the formation of the visible light image may include acquiring a color temperature parameter of the acquisition unit that forms the visible light image; step S120 may then include correcting the color values according to the color temperature value to obtain the corrected color values. As another example, acquiring the predetermined parameter may include acquiring the ambient illumination value under which the visible light image is formed; step S120 may then include correcting the color values according to the ambient illumination value to obtain the corrected color values. The ambient illumination value here may include illumination parameters such as the brightness value and the color value of the ambient light. Correcting the extracted color values according to the ambient illumination restores the original skin color of the captured face, which improves the accuracy of the extracted skin color information and in turn yields more accurate health status information.
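One possible form of the correction, sketched here by scaling each colour channel with the measured colour of the ambient light so the recorded skin colour is pulled back toward its appearance under neutral light; the neutral reference level is an assumption:

```python
# Sketch: correct extracted face-pixel colour values using a measured ambient light colour.
import numpy as np

def correct_skin_color(face_pixels_rgb, ambient_rgb, neutral=128.0):
    """face_pixels_rgb: N x 3 array of face pixel colours (0-255).
    ambient_rgb: measured (R, G, B) of the ambient illumination."""
    ambient = np.asarray(ambient_rgb, dtype=float)
    gain = neutral / np.clip(ambient, 1.0, None)       # per-channel correction gains
    corrected = np.clip(np.asarray(face_pixels_rgb, dtype=float) * gain, 0, 255)
    return corrected.mean(axis=0)                       # corrected skin colour value
```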
In this embodiment, step S110 may include: simultaneously acquiring the infrared light image and the visible light image by using a binocular acquisition unit. The binocular acquisition unit here may correspond to various binocular cameras, that is, camera assemblies capable of capturing both infrared light and visible light: a visible light image is formed from the visible light and an infrared light image from the infrared light. Using a binocular acquisition unit allows the infrared light image and the visible light image to be captured in the shortest time, which reduces the response delay and improves the response rate of the electronic device.
In this embodiment, the method further includes: outputting suggestion information according to the health status information. The suggestion information may be pre-stored information mapped to the health status information, or suggestion information mapped to the health status information and received from another electronic device. In this way, the user can easily determine his or her own health status with the electronic device and then adjust diet, rest, and other habits according to the suggestion information.
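Purely as an illustration, the pre-stored mapping could be as simple as a lookup table; the states and suggestion texts below are invented placeholders:

```python
# Sketch: look up pre-stored suggestion information mapped to a health status.
SUGGESTIONS = {
    "healthy": "Keep your current diet and rest schedule.",
    "sub-healthy": "Consider more rest, lighter meals and a follow-up check.",
}

def suggestion_for(health_status, default="No suggestion available."):
    return SUGGESTIONS.get(health_status, default)
```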
As shown in FIG. 5, this embodiment provides an electronic device, which includes:
an acquisition unit 310, configured to capture a user's face with infrared light to form an infrared light image and to capture the user's face with visible light to form a visible light image;
an analysis unit 320, configured to analyze the infrared light image to determine temperature information of the user's face and to analyze the visible light image to determine skin color information of the user's face;
an obtaining unit 330, configured to analyze health status information of the user by combining the temperature information and the skin color information.
The electronic device in this embodiment may be the aforementioned mobile terminal, for example a terminal device such as a mobile phone, a tablet computer, or a wearable device.
The acquisition unit 310 may correspond to a visible light sensor and an infrared light sensor. The infrared light sensor can collect infrared light to form the infrared light image, and the visible light sensor can collect visible light to form the visible light image.
The specific structures of the analysis unit 320 and the obtaining unit 330 both correspond to a processor or a processing circuit inside the electronic device. The processor may include an application processor, a microprocessor, a digital signal processor, a programmable array, or the like. The processing circuit may include a structure such as an application-specific integrated circuit.
The analysis unit 320 and the obtaining unit 330 may be integrated into the same processor or processing circuit, or may correspond to different processors or processing circuits.
In summary, the electronic device in this embodiment obtains the temperature information and the skin color information of the user's face by capturing the infrared light image and the visible light image, and obtains the user's health status information by analyzing that information, which improves the intelligence of the electronic device and user satisfaction: the user can easily obtain his or her own health status information simply by capturing his or her face with the electronic device. Moreover, when the user's health status is analyzed in this embodiment, both the temperature information and the skin color information are taken into account, so more reference quantities contribute to the health status information and its accuracy is improved.
Optionally, the obtaining unit 330 is configured to analyze the temperature information and the skin color information by using a machine learning algorithm to obtain the health status information of the user.
In this embodiment, the machine learning algorithm is used to analyze the temperature information and the skin color information to obtain the health status information. The machine learning algorithm analyzes a large amount of data to obtain characteristic parameters that represent different health states; by matching the temperature information and the skin color information against these characteristic parameters, the health status information is obtained, which makes its acquisition simple while ensuring high accuracy.
The electronic device further includes:
a training unit, configured to perform algorithm training by using sample data as input data of a learning machine to obtain a training algorithm, where the sample data includes sample temperature information and sample skin color information;
a verification unit, configured to verify the training algorithm by using test data to obtain a verification result, where the test data includes test temperature information and test skin color information;
a determination unit, configured to determine that the training algorithm is the machine learning algorithm if the verification result indicates that the training algorithm meets a preset condition.
The training unit in this embodiment may include various types of learning machines. The specific structures of the verification unit and the determination unit may both correspond to a processor or a processing circuit, which implements the functions of the above units by executing executable instructions.
Optionally, the electronic device further includes a positioning unit, configured to locate the distribution positions of the facial organs of the user by using the visible light image; the analysis unit 320 is further configured to determine, in combination with the distribution positions and the infrared light image, the temperature value of each organ of the user's face and the temperature differences between the organs. The positioning unit in this embodiment may include a structure such as a coordinate positioning device and can determine the distribution position of each organ on the user's face by analyzing the visible light image. The analysis unit 320 combines the distribution positions with the infrared light image to determine the temperature value and temperature difference of each organ; the temperature values and temperature differences then serve as the temperature information on which the health status information is based. Such an electronic device solves the problem that localization based solely on the infrared light image is cumbersome, and at the same time improves the accuracy of the temperature information, thereby further improving the accuracy of the health status information.
In some embodiments, the analysis unit 320 may be configured to extract the pixel values of a specified organ from the infrared light image, convert the pixel values into temperature values, and calculate the temperature differences between the organs according to the obtained temperature values.
In some embodiments, the analysis unit 320 is further configured to extract, from the visible light image, the color values of the pixels at the position of the user's face to obtain the skin color information.
In some embodiments, the electronic device further includes an acquiring unit configured to acquire a predetermined parameter that affects the formation of the visible light image; the analysis unit 320 is further configured to correct the color values according to the predetermined parameter, the corrected color values being the skin color information.
For example, the acquiring unit is configured to acquire a color temperature parameter of the acquisition unit that forms the visible light image, and the analysis unit 320 is configured to correct the color values according to the color temperature value to obtain the corrected color values.
As another example, the acquiring unit is configured to acquire the ambient illumination value under which the visible light image is formed, and the analysis unit 320 is configured to correct the color values according to the ambient illumination value to obtain the corrected color values.
In this embodiment, the acquisition unit 310 is a binocular acquisition unit configured to simultaneously acquire the infrared light image and the visible light image. Capturing the infrared light image and the visible light image at the same time reduces the time consumed in capturing images of the user's face, improves the response rate of the electronic device, and reduces the response delay.
Optionally, the electronic device further includes an output unit configured to output suggestion information according to the health status information. The output unit in this embodiment may correspond to a display output unit or an audio output unit. The display output unit may include various types of display screens, such as a liquid crystal display, an electronic ink display, a projection display, or an organic light-emitting diode (OLED) display. The audio output unit may include a speaker, an audio output circuit, or the like. In short, the output unit can output suggestion information giving the user advice on maintaining or restoring health, which further improves the intelligence of the electronic device and user satisfaction.
An embodiment of the present invention further provides a computer storage medium storing computer-executable instructions, the computer-executable instructions being used to execute the information processing method described in any of the foregoing items, for example one or more of the information processing methods shown in FIG. 3, FIG. 4, FIG. 6, FIG. 9, and FIG. 11. The computer storage medium may include various storage media such as an optical disc, a magnetic tape, a removable hard disk, or a flash memory, and may optionally be a non-transitory storage medium.
A specific example is provided below in conjunction with the above embodiments.
Example 1:
As shown in FIG. 6, this example provides a method for acquiring health status information, including:
Step S410: acquiring an infrared and visible light binocular image of the face; the facial binocular image here can be understood as the combination of the infrared light image and the visible light image;
Step S420: modeling the health degree of the facial data;
Step S430: obtaining, according to a machine learning algorithm, the health feature classifier parameters used to assess the health of a face;
Step S440: performing health degree detection based on the health feature classification parameters and outputting a health suggestion.
In step S410, an infrared camera captures an infrared spectrum image of the face. Because the infrared image sensor can sense the temperature of an object from its thermal radiation, the temperature information of the face is obtained. However, since the imaging principle of the infrared camera differs from that of a visible light camera, the facial brightness and color details available from a visible light camera are lost, and it is not easy to locate the positions of the facial features. As shown in FIG. 7, a visible light camera is therefore used at the same time to capture a visible light image of the face and obtain the skin color information of the face. The binocular system composed in this way can obtain the skin color information and the temperature information of the facial features simultaneously.
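A capture sketch using OpenCV, assuming the visible light and infrared cameras are exposed to the operating system as two ordinary video devices; the device indices are assumptions:

```python
# Sketch: grab one visible-light frame and one infrared frame at (nearly) the same time.
import cv2

def capture_binocular_pair(visible_index=0, infrared_index=1):
    cam_vis = cv2.VideoCapture(visible_index)
    cam_ir = cv2.VideoCapture(infrared_index)
    try:
        ok_vis, visible = cam_vis.read()
        ok_ir, infrared = cam_ir.read()
        if not (ok_vis and ok_ir):
            raise RuntimeError("failed to read from one of the cameras")
        return visible, infrared
    finally:
        cam_vis.release()
        cam_ir.release()
```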
In step S420, a machine learning algorithm is used to learn from and analyze a large amount of facial data in order to learn a health feature model for assessing a face. The main purpose is to imitate the observation mode of traditional Chinese medicine: the key facial features are used as the classifier's input features, and the key parameters of the health features are obtained through training on large amounts of data, yielding a classifier for testing face health images.
The key features for classification modeling include: the temperatures of the facial organs such as the forehead, nose, cheeks, and tongue, together with the temperature differences between them, and the color features of the facial organs. According to the basic colors observed in traditional Chinese medicine inspection, these include yellow, white, red, black, and cyan, each further divided into light, medium, and deep shades. According to the inspection experience of traditional Chinese medicine, the temperature and color information of these facial organs can be used to infer the health of other organs of the human body, as shown in FIG. 8; therefore, analyzing the feature information of the facial organs makes it possible to judge the condition of various parts of the human body. FIG. 8 marks the facial regions that reflect the shoulder joint, lungs, throat, and liver of the user's body; in a specific implementation, other facial regions likewise reflect the health status of other parts of the user's body and are omitted from FIG. 8.
In step S430, through large amounts of input data and machine learning classification training, the computer obtains the health feature parameters used to assess a face. In an actual application scenario, the user takes a self-portrait to obtain facial infrared and visible light images, and the user's facial image is input into the face health classifier as a test image. The health classifier analyzes the health degree of the current input test image according to the feature parameters learned offline and gives the user a health data analysis.
FIG. 9 shows a flowchart of the machine learning algorithm for the facial health degree. As shown in FIG. 9, the information processing method in this example may include:
Step S1: inputting face image health degree training data; this training data may be the sample data.
Step S2: extracting the color and temperature features of each organ of the face.
Step S3: inputting the color and temperature features into a classifier such as an AdaBoost classifier or an SVM. AdaBoost trains different weak classifiers on the same training set with an iterative algorithm and then combines these weak classifiers into a stronger final classifier (this final classifier is the strong classifier). SVM is the abbreviation of Support Vector Machine, a support-vector-machine classifier.
Step S4: obtaining the facial health degree feature classification parameters.
Step S5: forming face image health degree detection data based on the health degree feature classification parameters. It is then determined whether the training requirement is met; if not, the process returns to step S3, and if it is met, the health degree of actual face images can be detected.
Step S6: inputting actual face image health degree detection data. This detection data may be temperature information obtained from the infrared light image and/or skin color information obtained from the visible light image, and may correspond to detection samples.
Step S7: analyzing the measured results.
Step S8: if the analysis result obtained in step S7 does not meet the requirement, returning to the algorithm design flow, improving the algorithm, and returning to step S2.
Step S9: if the analysis result obtained in step S7 meets the requirement, the algorithm is complete.
In this embodiment, steps S6 to S7 may be repeated multiple times. If the accuracy of the analysis results for the actual face image health degree detection data reaches a specified threshold, the requirement is considered met; otherwise it is not. The analysis results here may include the results of the health status information.
The face image health degree training data input in step S1 is the sample data used to train the learning machine. The process of producing the sample data is described below.
A large number of facial color images and infrared light images are collected, and medical equipment is used to measure the spleen-and-stomach health value of each person. According to the test results, numerical labels are assigned in order of the health of these people. FIG. 10 shows an axis of health values, on which a person's health is scored from 0 to 100: a score below 60 indicates that the corresponding user is in a sub-healthy state, while a score of 60 or above indicates that the user is in a healthy state.
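The labelling convention above can be written down directly; the 60-point cut-off comes from the text, while the label strings are illustrative:

```python
# Sketch: map a 0-100 health score to the label used when building the sample set.
def health_label(score):
    if not 0 <= score <= 100:
        raise ValueError("score must be between 0 and 100")
    return "healthy" if score >= 60 else "sub-healthy"
```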
Sample feature extraction process: as described above, the main classification features are the skin color and temperature of the facial skin. Here the skin color feature and the temperature feature of the nose tip of each sample subject are extracted as the feature vector.
The temperature feature can be converted from the pixel values of the infrared image into the corresponding temperature values. The color information can be obtained by building a color mapping table and looking up the color value from the image color information of the color image: five basic color tables are built for yellow, white, red, black, and cyan, and the shade of the region is determined from the magnitude of the color value and divided into light, medium, and deep. In this way the color features of the samples are obtained, after which the health feature vector matrix can be built as in the following table. (Note: the values in the feature matrix are intended to illustrate the method and deviate from actual measured data.)
[Health feature vector matrix — reproduced in the original filing as images PCTCN2016099295-appb-000001 and PCTCN2016099295-appb-000002; not recoverable as text here.]
From the feature vector matrix it can be seen intuitively that different feature combinations correspond to different spleen-and-stomach health values.
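A sketch of how one row of such a feature matrix might be assembled from the nose-tip region; the RGB anchors for the five basic colours and the light/medium/deep brightness cut-offs are illustrative assumptions:

```python
# Sketch: build one sample's feature row [colour class, shade, nose-tip temperature, label].
import numpy as np

BASE_COLORS = {"yellow": (200, 180, 120), "white": (230, 225, 220), "red": (190, 90, 80),
               "black": (60, 50, 45), "cyan": (120, 170, 160)}   # illustrative RGB anchors

def nearest_color_and_shade(mean_rgb):
    name = min(BASE_COLORS,
               key=lambda c: np.linalg.norm(np.subtract(mean_rgb, BASE_COLORS[c])))
    brightness = float(np.mean(mean_rgb))
    shade = "light" if brightness > 170 else "medium" if brightness > 85 else "deep"
    return name, shade

def sample_feature_row(nose_rgb_pixels, nose_temperature, health_score):
    color, shade = nearest_color_and_shade(np.mean(nose_rgb_pixels, axis=0))
    return [color, shade, nose_temperature, health_score]
```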
Classifier design process: the popular AdaBoost classifier is adopted here; its theory is mature and it has been used effectively in pattern recognition and classification tasks such as face detection and recognition. AdaBoost allows the designer to keep adding new weak classifiers until some predetermined, sufficiently small error rate is reached. In AdaBoost, every training sample is assigned a weight indicating the probability that it will be selected into the training set of a given component classifier. If a sample has already been classified accurately, its probability of being selected when constructing the next training set is reduced; conversely, if a sample has not been classified correctly, its weight is increased. In this way, AdaBoost can focus on the samples that are harder to classify. Each weak detector is only slightly better than random guessing — for a two-class problem, only slightly better than a 50% guess — but by combining these weak classifiers with the boosting algorithm, a strong classifier with strong classification ability is obtained. The method is not limited to AdaBoost; other classifiers such as SVM can also be selected and are not elaborated here.
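The boosting idea described above, sketched with scikit-learn's AdaBoost implementation, which by default boosts one-split decision stumps as its weak learners; the hyper-parameters are illustrative:

```python
# Sketch: combine weak learners into a strong classifier, as AdaBoost does.
from sklearn.ensemble import AdaBoostClassifier

def build_health_classifier(n_weak_learners=200):
    # By default AdaBoostClassifier boosts depth-1 decision trees (stumps): each weak
    # learner is only slightly better than guessing; their weighted vote forms the strong one.
    return AdaBoostClassifier(n_estimators=n_weak_learners, learning_rate=0.5)
```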
Data training and parameter adjustment process: the samples are divided into training samples and test samples. The training samples are mainly used for classifier learning, and the test samples are mainly used to check whether the learned classification parameters meet the requirements. First, the training samples are fed into the classifier, and according to the classifier's procedure, iterative feature extraction, feature parameter comparison, iterative computation of the feature-parameter classification thresholds, sample reclassification, and other steps are performed. The resulting parameters are then applied to the test samples for feature vector extraction, reclassification of the feature parameter samples, and other steps, and finally the correct rate and error rate of the sample decisions are obtained. If the correct rate and error rate meet the design requirements — for example, if the probability of correct classification is required to be above 95% — then classifier learning is complete; otherwise, if the correct rate on the test results is below 95%, the parameter settings of the classifier are readjusted, the number of samples is increased, new feature attributes are added, and so on.
Actual testing process: the above classification only completes the learning and testing process on a limited sample set; a successful classifier also needs to be tested on actual data. The parameters obtained above are applied to the actual data for feature vector extraction, reclassification of the feature parameter samples, and other steps, and the correct rate and error rate of the sample decisions are obtained. If they meet the design requirements — for example, a probability of correct classification above 95% — the classifier learning is complete; otherwise, if the correct rate is below 95%, the parameter settings of the classifier are readjusted, the number of samples is increased, new feature attributes are added, and so on. (This adjustment process is similar to the testing process in data training and parameter adjustment.)
Finally, the user test data given by the classifier is compared with standard healthy face data to give the user a current health value, so that the user has an intuitive understanding of the health data; the result is also compared with the user's previous test results to analyze whether the user's health is declining or improving. Finally, based on the health data analysis, health advice is given for the user's health.
Example 2:
As shown in FIG. 11, this example provides an information processing method, including:
Step S11: acquiring facial image data; this step may correspond to capturing the infrared light image and the visible light image in the foregoing embodiments.
Step S12: analyzing the facial features; this step may be equivalent to extracting the temperature information and the skin color information in the foregoing embodiments.
Step S13: feature selection, where one or more features may be selected for analysis.
Step S14: feature classification learning.
Step S15: obtaining the feature classification learning parameters.
Step S16: inputting actual face data.
Step S17: obtaining test results on the actual face data.
Step S18: comparing and analyzing the test results. The test results here may correspond to the health status information in the foregoing embodiments; this health status information is compared with the health status information in a mapping relationship, which here may be a mapping between health status information and health suggestions.
Step S19: giving health suggestions.
In the several embodiments provided in this application, it should be understood that the disclosed devices and methods may be implemented in other ways. The device embodiments described above are merely illustrative. For example, the division of the units is only a division by logical function, and other divisions are possible in actual implementation: multiple units or components may be combined, or integrated into another system, or some features may be ignored or not executed. In addition, the coupling, direct coupling, or communication connection between the components shown or discussed may be an indirect coupling or communication connection through some interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
The units described above as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present invention may all be integrated into one processing module, or each unit may serve as a separate unit, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware, or in the form of hardware plus software functional units.
A person of ordinary skill in the art can understand that all or some of the steps of the above method embodiments can be completed by hardware related to program instructions. The aforementioned program may be stored in a computer-readable storage medium and, when executed, performs the steps of the above method embodiments. The aforementioned storage medium includes various media that can store program code, such as a removable storage device, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The above are only specific embodiments of the present invention, but the scope of protection of the present invention is not limited thereto; any modification made in accordance with the principles of the present invention should be understood as falling within the scope of protection of the present invention.

Claims (20)

  1. An information processing method, the method comprising:
    capturing a user's face with infrared light to form an infrared light image, and capturing the user's face with visible light to form a visible light image;
    analyzing the infrared light image to determine temperature information of the user's face, and analyzing the visible light image to determine skin color information of the user's face;
    analyzing health status information of the user by combining the temperature information and the skin color information.
  2. The method according to claim 1, wherein
    the analyzing the health status information of the user by combining the temperature information and the skin color information comprises:
    analyzing the temperature information and the skin color information by using a machine learning algorithm to obtain the health status information of the user.
  3. The method according to claim 2, wherein
    the method further comprises:
    performing algorithm training by using sample data as input data of a learning machine to obtain a training algorithm, the sample data comprising sample temperature information and sample skin color information;
    verifying the training algorithm by using test data to obtain a verification result, the test data comprising test temperature information and test skin color information;
    if the verification result indicates that the training algorithm meets a preset condition, determining that the training algorithm is the machine learning algorithm.
  4. The method according to claim 1, wherein
    the method further comprises:
    locating distribution positions of facial organs of the user by using the visible light image;
    the analyzing the infrared light image to determine the temperature information of the user's face comprises:
    determining, in combination with the distribution positions and the infrared light image, a temperature value of each organ of the user's face and temperature differences between the organs.
  5. The method according to claim 4, wherein
    the determining, in combination with the distribution positions and the infrared light image, the temperature value of each facial organ of the user and the temperature differences between the organs comprises:
    extracting pixel values of a specified organ from the infrared light image;
    converting the pixel values into temperature values;
    calculating the temperature differences between the organs according to the obtained temperature values.
  6. The method according to any one of claims 1 to 5, wherein
    the analyzing the health status information of the user by combining the temperature information and the skin color information comprises:
    extracting, from the visible light image, color values of pixels at a position where the user's face is located, to obtain the skin color information.
  7. The method according to claim 6, wherein
    the method further comprises:
    acquiring a predetermined parameter that affects formation of the visible light image;
    the extracting, from the visible light image, the color values of the pixels at the position where the user's face is located to obtain the skin color information comprises:
    correcting the color values according to the predetermined parameter, the corrected color values being the skin color information.
  8. The method according to claim 7, wherein
    the acquiring the predetermined parameter that affects the formation of the visible light image comprises:
    acquiring a color temperature parameter of an acquisition unit that forms the visible light image;
    the correcting the color values according to the predetermined parameter, the corrected color values being the skin color information, comprises:
    correcting the color values according to the color temperature value to obtain the corrected color values.
  9. The method according to claim 7, wherein
    the acquiring the predetermined parameter that affects the formation of the visible light image comprises:
    acquiring an ambient illumination value under which the visible light image is formed;
    the correcting the color values according to the predetermined parameter, the corrected color values being the skin color information, comprises:
    correcting the color values according to the ambient illumination value to obtain the corrected color values.
  10. The method according to any one of claims 1 to 5, wherein
    the capturing the user's face with infrared light to form the infrared light image and capturing the user's face with visible light to form the visible light image comprises:
    simultaneously acquiring the infrared light image and the visible light image by using a binocular acquisition unit.
  11. An electronic device, the electronic device comprising:
    an acquisition unit, configured to capture a user's face with infrared light to form an infrared light image and to capture the user's face with visible light to form a visible light image;
    an analysis unit, configured to analyze the infrared light image to determine temperature information of the user's face, and to analyze the visible light image to determine skin color information of the user's face;
    an obtaining unit, configured to analyze health status information of the user by combining the temperature information and the skin color information.
  12. The electronic device according to claim 11, wherein
    the obtaining unit is configured to analyze the temperature information and the skin color information by using a machine learning algorithm to obtain the health status information of the user.
  13. The electronic device according to claim 12, wherein
    the electronic device further comprises:
    a training unit, configured to perform algorithm training by using sample data as input data of a learning machine to obtain a training algorithm, the sample data comprising sample temperature information and sample skin color information;
    a verification unit, configured to verify the training algorithm by using test data to obtain a verification result, the test data comprising test temperature information and test skin color information;
    a determination unit, configured to determine that the training algorithm is the machine learning algorithm if the verification result indicates that the training algorithm meets a preset condition.
  14. The electronic device according to claim 11, wherein
    the electronic device further comprises:
    a positioning unit, configured to locate distribution positions of facial organs of the user by using the visible light image;
    the analysis unit is further configured to determine, in combination with the distribution positions and the infrared light image, a temperature value of each organ of the user's face and temperature differences between the organs.
  15. The electronic device according to claim 14, wherein
    the analysis unit is configured to extract pixel values of a specified organ from the infrared light image, convert the pixel values into temperature values, and calculate the temperature differences between the organs according to the obtained temperature values.
  16. The electronic device according to any one of claims 11 to 15, wherein
    the analysis unit is configured to extract, from the visible light image, color values of pixels at a position where the user's face is located, to obtain the skin color information.
  17. The electronic device according to claim 16, wherein
    the electronic device further comprises:
    an acquiring unit, configured to acquire a predetermined parameter that affects formation of the visible light image;
    the analysis unit is further configured to correct the color values according to the predetermined parameter, the corrected color values being the skin color information.
  18. The electronic device according to claim 17, wherein
    the acquiring unit is configured to acquire a color temperature parameter of the acquisition unit that forms the visible light image;
    the analysis unit is configured to correct the color values according to the color temperature value to obtain the corrected color values.
  19. The electronic device according to any one of claims 11 to 15, wherein
    the acquisition unit is a binocular acquisition unit configured to simultaneously acquire the infrared light image and the visible light image.
  20. A computer storage medium, the computer storage medium storing computer-executable instructions, the computer-executable instructions being used to execute the information processing method according to any one of claims 1 to 10.
PCT/CN2016/099295 2015-11-17 2016-09-19 Information processing method, electronic device and computer storage medium WO2017084428A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201510797277.6A CN105455781A (en) 2015-11-17 2015-11-17 Information processing method and electronic device
CN201510797277.6 2015-11-17

Publications (1)

Publication Number Publication Date
WO2017084428A1 (en) 2017-05-26

Family

ID=55594301

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/099295 WO2017084428A1 (en) 2015-11-17 2016-09-19 Information processing method, electronic device and computer storage medium

Country Status (2)

Country Link
CN (1) CN105455781A (en)
WO (1) WO2017084428A1 (en)


Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105455781A (en) * 2015-11-17 2016-04-06 努比亚技术有限公司 Information processing method and electronic device
KR102375177B1 (en) * 2016-04-22 2022-03-16 핏스킨 인코포레이티드 Systems and method for skin analysis using electronic devices
CN108074647A (en) * 2016-11-15 2018-05-25 深圳大森智能科技有限公司 A kind of health data collection method and apparatus
US10762635B2 (en) * 2017-06-14 2020-09-01 Tusimple, Inc. System and method for actively selecting and labeling images for semantic segmentation
CN108241433B (en) * 2017-11-27 2019-03-12 王国辉 Fatigue strength analyzing platform
CN110909566A (en) * 2018-09-14 2020-03-24 奇酷互联网络科技(深圳)有限公司 Health analysis method, mobile terminal and computer-readable storage medium
CN110312033B (en) * 2019-06-17 2021-02-02 Oppo广东移动通信有限公司 Electronic device, information pushing method and related product
CN111337142A (en) * 2020-04-07 2020-06-26 北京迈格威科技有限公司 Body temperature correction method and device and electronic equipment


Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030064356A1 (en) * 2001-10-01 2003-04-03 Gilles Rubinstenn Customized beauty tracking kit
US8489539B2 (en) * 2009-10-05 2013-07-16 Elc Management, Llc Computer-aided diagnostic systems and methods for determining skin compositions based on traditional chinese medicinal (TCM) principles
CN204362181U (en) * 2014-12-05 2015-05-27 北京蚁视科技有限公司 Gather the image collecting device of infrared light image and visible images simultaneously
CN104434038B (en) * 2014-12-15 2017-02-08 无限极(中国)有限公司 Acquired skin data processing method, device and system
CN104618709B (en) * 2015-01-27 2017-05-03 天津大学 Dual-binocular infrared and visible light fused stereo imaging system
CN104825136B (en) * 2015-05-26 2018-08-10 高也陶 The color portion information collection of Traditional Chinese Medicine face area and analysis system and method

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1825075A (en) * 2005-02-25 2006-08-30 安捷伦科技有限公司 System and method for detecting thermal anomalies
US20130116591A1 (en) * 2011-11-04 2013-05-09 Alan C. Heller Systems and devices for real time health status credentialing
WO2014141084A1 (en) * 2013-03-14 2014-09-18 Koninklijke Philips N.V. Device and method for determining vital signs of a subject
WO2015169634A1 (en) * 2014-05-07 2015-11-12 Koninklijke Philips N.V. Device, system and method for extracting physiological information
CN105455781A (en) * 2015-11-17 2016-04-06 努比亚技术有限公司 Information processing method and electronic device

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108553081A (en) * 2018-01-03 2018-09-21 京东方科技集团股份有限公司 A kind of diagnostic system based on tongue fur image
US10755414B2 (en) 2018-04-27 2020-08-25 International Business Machines Corporation Detecting and monitoring a user's photographs for health issues
US10755415B2 (en) 2018-04-27 2020-08-25 International Business Machines Corporation Detecting and monitoring a user's photographs for health issues
WO2020171554A1 (en) * 2019-02-19 2020-08-27 Samsung Electronics Co., Ltd. Method and apparatus for measuring body temperature using a camera
CN110196103A (en) * 2019-06-27 2019-09-03 Oppo广东移动通信有限公司 Thermometry and relevant device
CN111027489B (en) * 2019-12-12 2023-10-20 Oppo广东移动通信有限公司 Image processing method, terminal and storage medium
CN111027489A (en) * 2019-12-12 2020-04-17 Oppo广东移动通信有限公司 Image processing method, terminal and storage medium
CN113008404A (en) * 2021-02-22 2021-06-22 深圳市商汤科技有限公司 Temperature measuring method and device, electronic device and storage medium
CN112950732A (en) * 2021-02-23 2021-06-11 北京三快在线科技有限公司 Image generation method and device, storage medium and electronic equipment
CN112950732B (en) * 2021-02-23 2022-04-01 北京三快在线科技有限公司 Image generation method and device, storage medium and electronic equipment
CN115984126A (en) * 2022-12-05 2023-04-18 北京拙河科技有限公司 Optical image correction method and device based on input instruction
CN117152397A (en) * 2023-10-26 2023-12-01 慧医谷中医药科技(天津)股份有限公司 Three-dimensional face imaging method and system based on thermal imaging projection
CN117152397B (en) * 2023-10-26 2024-01-26 慧医谷中医药科技(天津)股份有限公司 Three-dimensional face imaging method and system based on thermal imaging projection

Also Published As

Publication number Publication date
CN105455781A (en) 2016-04-06

Similar Documents

Publication Publication Date Title
WO2017084428A1 (en) Information processing method, electronic device and computer storage medium
CN108629747B (en) Image enhancement method and device, electronic equipment and storage medium
CN105354838B (en) The depth information acquisition method and terminal of weak texture region in image
CN109191410A (en) A kind of facial image fusion method, device and storage medium
US20210343041A1 (en) Method and apparatus for obtaining position of target, computer device, and storage medium
WO2017140182A1 (en) Image synthesis method and apparatus, and storage medium
CN106878588A (en) A kind of video background blurs terminal and method
CN110140106A (en) According to the method and device of background image Dynamically Announce icon
CN110650379B (en) Video abstract generation method and device, electronic equipment and storage medium
CN108900790A (en) Method of video image processing, mobile terminal and computer readable storage medium
CN106447641A (en) Image generation device and method
CN107018331A (en) A kind of imaging method and mobile terminal based on dual camera
CN109167910A (en) focusing method, mobile terminal and computer readable storage medium
CN106791416A (en) A kind of background blurring image pickup method and terminal
CN106534696A (en) Focusing apparatus and method
CN106603931A (en) Binocular shooting method and device
US20230076109A1 (en) Method and electronic device for adding virtual item
CN110072061A (en) A kind of interactive mode image pickup method, mobile terminal and storage medium
WO2023151472A1 (en) Image display method and apparatus, and terminal and storage medium
CN106506778A (en) A kind of dialing mechanism and method
CN108419009A (en) Image definition enhancing method and device
CN107357500A (en) A kind of picture-adjusting method, terminal and storage medium
CN113542610A (en) Shooting method, mobile terminal and storage medium
CN106385573A (en) Picture processing method and terminal
CN106713640A (en) Brightness adjustment method and device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16865606

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16865606

Country of ref document: EP

Kind code of ref document: A1