WO2017124899A1 - Information processing method and apparatus, and electronic device - Google Patents

Information processing method and apparatus, and electronic device

Info

Publication number
WO2017124899A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
eye
screen
camera
line
Prior art date
Application number
PCT/CN2016/112490
Other languages
English (en)
French (fr)
Inventor
程山
Original Assignee
努比亚技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 努比亚技术有限公司
Publication of WO2017124899A1 publication Critical patent/WO2017124899A1/zh

Links

Images

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/61 Control of cameras or camera modules based on recognised objects
    • H04N 23/611 Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
    • H04N 23/67 Focus control based on electronic image sensor signals
    • H04N 23/675 Focus control based on electronic image sensor signals comprising setting of focusing regions

Definitions

  • the present invention relates to electronic technologies, and in particular, to an information processing method and apparatus, and an electronic device.
  • In recent years, the user's experience of taking photos with a mobile phone camera has improved greatly.
  • A user can take a reasonably standard, sharp picture with a single finger, but this series of operations usually requires both hands (one to hold the phone steady while the other taps the camera button) or a one-handed grip with the thumb tapping the button. In the one-handed case, grip instability may mean the photo taken is not as good as intended, for example blurred, mis-focused, or badly framed due to jitter.
  • To solve at least one problem existing in the prior art, the embodiments of the present invention provide an information processing method and apparatus, and an electronic device, which can capture a good image even when the user is unable to use both hands to complete the photographing action.
  • An embodiment of the present invention provides an information processing method, where the method includes: detecting whether the camera is turned on; if the camera is turned on, acquiring an eye image of the user; analyzing the eye image to obtain the focus position of the user's line of sight on the screen, the screen being a display screen of the electronic device; and focusing the camera on the focus position of the line of sight on the screen, so as to capture an image based on that focus position.
  • An embodiment of the present invention provides an information processing apparatus, where the apparatus includes a first detection unit, an acquisition unit, an analysis unit, and a photographing unit, wherein:
  • the first detecting unit is configured to detect whether the camera is turned on
  • the acquiring unit is configured to acquire an image of the eye of the user if the camera is turned on;
  • the analyzing unit is configured to analyze the eye image to obtain a focus position of the user's line of sight in the screen, and the screen is a display screen of the electronic device;
  • the photographing unit is configured to focus the camera on a focus position of the line of sight in the screen to capture an image based on the focus position.
  • an embodiment of the present invention provides an electronic device, where the electronic device includes a processor and a display screen, where:
  • the processor is configured to: detect whether the camera is turned on; if the camera is turned on, acquire an eye image of the user; analyze the eye image to obtain the focus position of the user's line of sight on the screen, the screen being a display screen of the electronic device; focus the camera on the focus position of the line of sight on the screen to capture an image; and output the image to the display screen;
  • the display screen is configured to display the image.
  • The embodiments of the invention provide an information processing method and apparatus, and an electronic device. Whether the camera is turned on is detected; if the camera is turned on, an eye image of the user is acquired; the eye image is analyzed to obtain the focus position of the user's line of sight on the screen, the screen being a display screen of the electronic device; and the camera is focused on that focus position to capture an image. In this way, a good image can be captured even when the user cannot use both hands to complete the photographing action.
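The four steps that this summary repeats throughout the document can be sketched as a minimal control loop. Everything below is an illustrative sketch: `DemoDevice` and all of its methods are hypothetical placeholders standing in for real camera and gaze-analysis calls, not APIs named anywhere in the application.

```python
class DemoDevice:
    """Stand-in for the electronic device; real hardware calls are stubbed."""

    def camera_is_on(self):
        return True                      # pretend the camera is already open

    def capture_eye_image(self):
        return "eye-image"               # stand-in for a front-camera frame

    def estimate_gaze_point(self, eye_image):
        return (0.5, 0.5)                # normalized screen coordinates

    def focus_at(self, x, y):
        self.focus = (x, y)              # remember where we focused

    def capture_image(self):
        return {"focused_at": self.focus}


def eye_controlled_focus(device):
    # Step S101: detect whether the camera is turned on.
    if not device.camera_is_on():
        return None
    # Step S102: acquire an image of the user's eyes (front camera).
    eye_image = device.capture_eye_image()
    # Step S103: analyze the eye image for the gaze point on the screen.
    x, y = device.estimate_gaze_point(eye_image)
    # Step S104: focus the camera at that position and capture.
    device.focus_at(x, y)
    return device.capture_image()
```

The point of the loop is that the shutter-triggering path never needs a second hand on the screen; every input comes from the front-camera eye image.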
  • FIG. 1-1 is a schematic structural diagram of the hardware of an optional mobile terminal for implementing various embodiments of the present invention;
  • FIG. 1-2 is a schematic structural diagram of a photographic lens in the mobile terminal shown in FIG. 1-1;
  • FIG. 1-3 is a schematic flowchart of an implementation of an information processing method according to an embodiment of the present invention;
  • FIG. 2 is a schematic flowchart of an implementation of an information processing method according to Embodiment 2 of the present invention;
  • FIG. 3-1 is a schematic flowchart of an implementation of an information processing method according to Embodiment 3 of the present invention;
  • FIG. 3-2 is a first schematic diagram of abstracting an eye into a geometric model according to an embodiment of the present invention;
  • FIG. 3-3 is a second schematic diagram of abstracting an eye into a geometric model according to an embodiment of the present invention;
  • FIG. 3-4 is a third schematic diagram of abstracting an eye into a geometric model according to an embodiment of the present invention;
  • FIG. 4 is a schematic structural diagram of an information processing apparatus according to Embodiment 4 of the present invention.
  • the mobile terminal can be implemented in various forms.
  • The terminals described in the present invention may include mobile terminals such as mobile phones, smart phones, notebook computers, digital broadcast receivers, personal digital assistants (PDAs), tablet computers (PADs), portable multimedia players (PMPs), and navigation devices, as well as fixed terminals such as digital TVs and desktop computers.
  • In the following description, it is assumed that the terminal is a mobile terminal.
  • Configurations according to embodiments of the present invention can also be applied to fixed-type terminals, apart from components intended specifically for mobile purposes.
  • FIG. 1-1 is a schematic structural diagram of hardware of an optional mobile terminal in implementing various embodiments of the present invention.
  • The mobile terminal 100 may include a wireless communication unit 110, an audio/video (A/V) input unit 120, and other components described below.
  • Figure 1-1 illustrates a mobile terminal having various components, but it should be understood that not all illustrated components are required to be implemented. More or fewer components can be implemented instead. The elements of the mobile terminal will be described in detail below.
  • The wireless communication unit 110 typically includes one or more components that permit radio communication between the mobile terminal 100 and a wireless communication system or network.
  • the wireless communication unit can include at least one of the mobile communication module 112, the wireless internet module 113, and the short-range communication module 114.
  • The mobile communication module 112 transmits radio signals to and/or receives radio signals from at least one of a base station (e.g., an access point, a Node B, etc.), an external terminal, and a server.
  • Such radio signals may include voice call signals, video call signals, or various types of data transmitted and/or received in accordance with text and/or multimedia messages.
  • the wireless internet module 113 supports wireless internet access of the mobile terminal.
  • the module can be internally or externally coupled to the terminal.
  • The wireless Internet access technologies supported by the module may include wireless LAN (WLAN, Wi-Fi), Wireless Broadband (WiBro), Worldwide Interoperability for Microwave Access (WiMAX), High Speed Downlink Packet Access (HSDPA), etc.
  • the short range communication module 114 is a module configured to support short range communication.
  • Some examples of short-range communication technology include Bluetooth™, radio frequency identification (RFID), Infrared Data Association (IrDA), ultra-wideband (UWB), ZigBee™, etc.
  • the A/V input unit 120 is configured to receive an audio or video signal.
  • The A/V input unit 120 may include a camera 121 and a microphone 122. The camera 121 processes image data of still images or video obtained by an image capture device in a video capture mode or an image capture mode.
  • the processed image frame can be displayed on the display unit 151 in the output unit 150.
  • the image frames processed by the camera 121 may be stored in the memory 160 (or other storage medium) or transmitted via the wireless communication unit 110, and two or more cameras 121 may be provided according to the configuration of the mobile terminal.
  • The microphone 122 can receive sound (audio data) in operation modes such as a telephone call mode, a recording mode, and a voice recognition mode, and can process such sound into audio data.
  • In the case of the telephone call mode, the processed audio (voice) data can be converted into a format that can be transmitted to a mobile communication base station via the mobile communication module 112.
  • the microphone 122 can implement various types of noise cancellation (or suppression) algorithms to cancel (or suppress) noise or interference generated during the process of receiving and transmitting audio signals.
  • the user input unit 130 may generate key input data according to a command input by the user to control various operations of the mobile terminal.
  • The user input unit 130 allows the user to input various types of information, and may include a keyboard, a dome switch, a touch pad (e.g., a touch-sensitive component that detects changes in resistance, pressure, capacitance, etc. due to contact), a scroll wheel, a rocker, etc.
  • In particular, when the touch pad is overlaid on the display unit 151 in a layered manner, a touch screen can be formed.
  • the interface unit 170 is configured as an interface through which at least one external device can connect with the mobile terminal 100.
  • the external device may include a wired or wireless headset port, an external power (or battery charger) port, a wired or wireless data port, a memory card port, a port configured to connect a device having an identification module, audio input/output (I/O) port, video I/O port, headphone port, and more.
  • The identification module may store various information used to verify the user of the mobile terminal 100, and may include a User Identity Module (UIM), a Subscriber Identity Module (SIM), a Universal Subscriber Identity Module (USIM), and the like.
  • the device having the identification module may take the form of a smart card, and thus the identification device may be connected to the mobile terminal 100 via a port or other connection device.
  • The interface unit 170 may be configured to receive input (e.g., data, power, etc.) from an external device and transmit the received input to one or more components within the mobile terminal 100, or to transfer data between the mobile terminal and the external device.
  • When the mobile terminal 100 is connected to an external base, the interface unit 170 may serve as a path through which power is supplied from the base to the mobile terminal 100, or as a path through which various command signals input from the base are transmitted to the mobile terminal.
  • Various command signals or power input from the base can serve as signals for recognizing whether the mobile terminal is accurately mounted on the base.
  • Output unit 150 is configured to provide an output signal (eg, an audio signal, a video signal, an alarm signal, a vibration signal, etc.) in a visual, audio, and/or tactile manner.
  • the output unit 150 may include a display unit 151, an audio output module 152, and the like.
  • The display unit 151 can display information processed in the mobile terminal 100. For example, when the mobile terminal 100 is in the phone call mode, the display unit 151 can display a user interface (UI) or a graphical user interface (GUI) related to a call or other communication (e.g., text messaging, multimedia file download, etc.). When the mobile terminal 100 is in a video call mode or an image capture mode, the display unit 151 may display a captured image and/or a received image, a UI or GUI showing a video or image and related functions, and the like.
  • Meanwhile, the display unit 151 may serve as both an input device and an output device.
  • the display unit 151 may include at least one of a liquid crystal display (LCD), a thin film transistor LCD (TFT-LCD), an organic light emitting diode (OLED) display, a flexible display, a three-dimensional (3D) display, and the like.
  • Some of these displays may be configured to be transparent to allow viewing from the outside; these may be referred to as transparent displays, a typical example being a transparent organic light-emitting diode (TOLED) display.
  • The mobile terminal 100 may include two or more display units 151 (or other display devices); for example, the mobile terminal 100 may include an external display unit (not shown) and an internal display unit (not shown).
  • the touch screen can be configured to detect touch input pressure as well as touch input position and touch input area.
  • The memory 160 may store software programs for the processing and control operations performed by the controller 180, or may temporarily store data that has been output or is to be output (for example, a phone book, messages, still images, video, and the like). Moreover, the memory 160 can store data on the various patterns of vibration and audio signals output when a touch is applied to the touch screen.
  • The memory 160 may include at least one type of storage medium, including flash memory, a hard disk, a multimedia card, card-type memory (e.g., SD or DX memory), random access memory (RAM), static random access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, a magnetic disk, an optical disk, and the like.
  • the mobile terminal 100 can cooperate with a network storage device that performs a storage function of the memory 160 through a network connection.
  • the controller 180 typically controls the overall operation of the mobile terminal 100.
  • the controller 180 performs the control and processing associated with voice calls, data communications, video calls, and the like.
  • the controller 180 may include a multimedia module 181 configured to reproduce or play back multimedia data, and the multimedia module 181 may be constructed within the controller 180 or may be configured to be separate from the controller 180.
  • the controller 180 may perform a pattern recognition process to recognize a handwriting input or an image drawing input performed on the touch screen as a character or an image.
  • the various embodiments described herein can be implemented in a computer readable medium using, for example, computer software, hardware, or any combination thereof.
  • The embodiments described herein may be implemented using at least one of application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), processors, controllers, microcontrollers, microprocessors, and electronic units designed to perform the functions described herein; in some cases, such embodiments may be implemented in the controller 180.
  • implementations such as procedures or functions may be implemented with separate software modules that permit the execution of at least one function or operation.
  • The software code can be implemented as a software application (or program) written in any suitable programming language, which can be stored in the memory 160 and executed by the controller 180.
  • A slide-type mobile terminal is taken as an example among the various types of mobile terminals (folding, bar, swing, slide, and the like). However, the present invention can be applied to any type of mobile terminal, and is not limited to slide-type mobile terminals.
  • An imaging element 1212 is disposed on the optical axis of the photographic lens 1211 near the position of the subject image formed by the photographic lens 1211.
  • the imaging element 1212 is configured to image the subject image and acquire captured image data.
  • Photodiodes constituting each pixel are arranged two-dimensionally and in a matrix on the imaging element 1212. Each photodiode generates a photoelectric conversion current corresponding to the amount of received light, and the photoelectric conversion current is charged by a capacitor connected to each photodiode.
  • the front surface of each pixel is provided with a Bayer array of RGB color filters.
  • the imaging element 1212 is connected to the imaging circuit 1213.
  • The imaging circuit 1213 performs charge accumulation control and image signal readout control in the imaging element 1212, reduces the reset noise of the read image signal (an analog image signal), performs waveform shaping, and further performs gain adjustment and the like to obtain an appropriate signal level.
  • the imaging circuit 1213 is connected to an A/D converter 1214 that performs analog-to-digital conversion on the analog image signal and outputs a digital image signal (hereinafter referred to as image data) to the bus 1227.
  • the bus 1227 is a transmission path configured to transfer various data read or generated inside the camera.
  • The A/D converter 1214 is connected to the bus 1227, which in turn is connected to an image processor 1215, a JPEG processor 1216, a microcomputer 1217, and an SDRAM (Synchronous Dynamic Random Access Memory) 1218.
  • When a file recorded on the recording medium 1225 is reproduced, it is read out and decompressed in the JPEG processor 1216; the decompressed image data is temporarily stored in the SDRAM 1218 and displayed on the LCD 1226.
  • the JPEG method is employed as the image compression/decompression method.
  • the compression/decompression method is not limited thereto, and other compression/decompression methods such as MPEG, TIFF, and H.264 may be employed.
  • The operation unit 1223 includes, but is not limited to, physical or virtual buttons, such as a power button, a camera button, an edit button, a moving-image button, a playback button, a menu button, a cross key, an OK button, a delete button, and an enlarge button, together with various other input controls and keys; the operation unit detects the operational state of these controls.
  • The detection result is output to the microcomputer 1217. Further, a touch panel is provided on the front surface of the LCD 1226 serving as a display; it detects the position touched by the user and outputs that touch position to the microcomputer 1217.
  • In accordance with the detection result from the operation unit 1223, the microcomputer 1217 executes various processing sequences corresponding to the user's operation.
  • the flash memory 1224 stores programs configured to execute various processing sequences of the microcomputer 1217.
  • the microcomputer 1217 performs overall control of the camera in accordance with the program.
  • The flash memory 1224 also stores various adjustment values of the camera; the microcomputer 1217 reads out these adjustment values and controls the camera in accordance with them.
  • the SDRAM 1218 is an electrically rewritable volatile memory configured to temporarily store image data or the like.
  • the SDRAM 1218 temporarily stores image data output from the A/D converter 1214 and image data processed in the image processor 1215, the JPEG processor 1216, and the like.
  • the memory interface 1219 is connected to the recording medium 1225, and performs control for writing image data and a file header attached to the image data to the recording medium 1225 and reading out from the recording medium 1225.
  • the recording medium 1225 is, for example, a recording medium such as a memory card that can be detachably attached to the camera body.
  • the recording medium 1225 is not limited thereto, and may be a hard disk or the like built in the camera body.
  • The LCD driver 1220 is connected to the LCD 1226. Image data processed by the image processor 1215 is stored in the SDRAM 1218; when it is to be displayed, it is read out and shown on the LCD 1226. Alternatively, compressed image data stored in the SDRAM 1218 is read by the JPEG processor 1216, which decompresses it, and the decompressed image data is displayed via the LCD 1226.
  • The LCD 1226 is arranged on the back of the camera body and displays images. However, the display is not limited to an LCD; various other display panels, such as an organic EL panel, may be used instead.
  • An embodiment of the present invention provides an information processing method applied to an electronic device. The functions implemented by the information processing method can be realized by a processor in the electronic device invoking program code.
  • the program code can be stored in a computer storage medium.
  • the electronic device includes at least a processor and a storage medium.
  • FIG. 1-3 is a schematic flowchart of an implementation of an information processing method according to an embodiment of the present invention. As shown in FIG. 1-3, the information processing method includes:
  • Step S101 detecting whether the camera is turned on
  • The electronic device may be a fixed electronic device such as a personal computer (PC); it may also be a portable electronic device such as a personal digital assistant (PDA), tablet computer, or laptop computer, or a mobile electronic device such as a smart phone.
  • The camera may be built into the electronic device itself or may be an external device. Generally, a tablet computer or mobile phone has a built-in camera, while a desktop personal computer without a built-in camera can use an external one.
  • In the embodiments of the present invention, a captured photograph or captured video is collectively referred to as a captured image; the method provided by the embodiments of the present invention can be applied both to taking photographs and to shooting video.
  • some electronic devices may have more than one camera.
  • the mobile phone may have a front camera and a rear camera.
  • The camera in step S101 includes the front camera and the rear camera; as long as either camera is turned on, the camera is considered to be detected as turned on.
  • Step S102 if the camera is turned on, acquiring an eye image of the user
  • When the camera is turned on, the eye image of the user can be acquired by turning on the front camera.
  • Step S103 analyzing the eye image to obtain a focus position of the user's line of sight in the screen, the screen being a display screen of the electronic device;
  • In step S104, focusing the camera on the focus position of the line of sight on the screen includes:
  • Step S1041: converting the focus position of the line of sight on the screen into a coordinate point;
  • Step S1042: displaying the coordinate point on the screen of the electronic device;
  • Step S1043: focusing the camera on the coordinate point.
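Steps S1041-S1043 reduce to a coordinate conversion before focusing. A minimal sketch, assuming the gaze analysis yields a position normalized to 0..1 in each axis (a convention the application does not fix) and that the screen is addressed in pixels:

```python
def gaze_to_screen_coordinate(gaze_norm, screen_w, screen_h):
    """Step S1041 sketch: convert a normalized gaze position into a pixel
    coordinate on the display screen.

    The normalized (0..1, 0..1) representation is an assumption made for
    illustration; the application does not specify a coordinate convention.
    """
    gx, gy = gaze_norm
    # Clamp so the focus indicator always lands on-screen (S1042 displays it).
    gx = min(max(gx, 0.0), 1.0)
    gy = min(max(gy, 0.0), 1.0)
    return (round(gx * (screen_w - 1)), round(gy * (screen_h - 1)))
```

Clamping keeps the displayed focus indicator on-screen even when the gaze estimate drifts slightly outside the display area; the resulting pixel coordinate is then handed to the focus routine of step S1043.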
  • In the embodiments of the present invention, the focus position can be displayed on the display screen of the terminal. If the user feels that the current focus position is not on the object the user really wants to shoot, the user can adjust the pose of the terminal; for example, the user can adjust the angle of the mobile phone in the hand to change the angle or distance between the phone and the ground plane, thereby changing the object the phone's camera is aimed at.
  • In step S103, analyzing the eye image to obtain the focus position of the user's line of sight on the screen includes:
  • Step S1031: analyzing the eye image to obtain the pupil positions of the user's two eyes, the center position of the line connecting the two corners of each eye, and the center position of the image;
  • Step S1032: calculating a first displacement vector from the pupil position of the left eye and the center position of the line connecting the corners of the left eye; or calculating the first displacement vector from the pupil position of the right eye and the center position of the line connecting the corners of the right eye;
  • Step S1033: calculating a second displacement vector from the center position of the line between the pupil position of the left eye and the pupil position of the right eye, and the center position of the image;
  • Step S1034: calculating the focus position of the user's line of sight on the screen from the first displacement vector and the second displacement vector.
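Steps S1031-S1034 can be sketched with simple 2-D vector arithmetic. The application does not spell out, at this point, how the two displacement vectors are combined into a screen position, so the combination rule and the `gain` scale factor below are illustrative assumptions; coordinates are in pixels of the eye image and of the screen, respectively.

```python
def displacement(p, q):
    """Vector from point q to point p, as (dx, dy)."""
    return (p[0] - q[0], p[1] - q[1])


def estimate_focus_position(pupil_l, pupil_r, corner_center_l,
                            image_center, screen_center, gain=40.0):
    """Sketch of steps S1031-S1034 under stated assumptions.

    - First vector (S1032): left pupil relative to the center of the line
      joining the left eye's corners (eye-local gaze offset).
    - Second vector (S1033): midpoint of the two pupils relative to the
      image center (head-position offset).
    The way the two vectors are combined, and the `gain` factor, are
    illustrative assumptions; the application does not specify them here.
    """
    v1 = displacement(pupil_l, corner_center_l)
    pupil_mid = ((pupil_l[0] + pupil_r[0]) / 2, (pupil_l[1] + pupil_r[1]) / 2)
    v2 = displacement(pupil_mid, image_center)
    # Map the combined offset into screen coordinates around the screen center.
    x = screen_center[0] + gain * v1[0] + v2[0]
    y = screen_center[1] + gain * v1[1] + v2[1]
    return (x, y)
```

With both vectors zero (pupil centered in the eye, head centered in the frame), the estimate falls on the screen center, which matches the stop-gesture intuition described in Embodiment 3.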
  • In summary: whether the camera is turned on is detected; if the camera is turned on, an eye image of the user is acquired; the eye image is analyzed to obtain the focus position of the user's line of sight on the screen, the screen being a display screen of the electronic device; and the camera is focused on that focus position. In this way, a good image can be taken even when the user cannot complete the photographing action with both hands.
  • An embodiment of the present invention provides an information processing method applied to an electronic device. The functions implemented by the method may be realized by a processor in the electronic device invoking program code; the program code may be stored in a computer storage medium. It can thus be seen that the electronic device includes at least a processor and a storage medium.
  • the information processing method includes:
  • Step S201 when the electronic device is configured to take a photo, detecting whether the camera is turned on;
  • The electronic device may be a fixed electronic device such as a personal computer (PC); it may also be a portable electronic device such as a personal digital assistant (PDA), tablet computer, or laptop computer, or a mobile electronic device such as a smart phone.
  • The camera may be built into the electronic device itself or may be an external device. Generally, a tablet computer or mobile phone has a built-in camera, while a desktop personal computer without a built-in camera can use an external one.
  • In the embodiments of the present invention, a captured photograph or captured video is collectively referred to as a captured image; the method provided by the embodiments of the present invention can be applied both to taking photographs and to shooting video.
  • some electronic devices may have more than one camera.
  • The mobile phone may have a front camera and a rear camera; the camera in step S201 includes both, and as long as either camera is turned on, the camera is considered to be detected as turned on.
  • Step S202 if the camera is turned on, the eye image of the user is acquired
  • When the camera is turned on, the eye image of the user can be acquired by turning on the front camera.
  • Step S203 analyzing the eye image to obtain a focus position of the user's line of sight in the screen, the screen being a display screen of the electronic device;
  • Step S204: focusing the camera on the focus position of the line of sight on the screen.
  • Step S205: detecting whether the hold time of the focus position of the line of sight on the screen is greater than a preset threshold; if the hold time is greater than the threshold, proceeding to step S206; otherwise, returning to step S202.
  • Step S206: focusing the camera on the focus position of the line of sight on the screen to capture the image.
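The dwell check of step S205 can be sketched as a small state holder fed with timestamped gaze samples. The `threshold_s` and `tolerance` parameters are illustrative assumptions; the text only speaks of a preset threshold on the hold time.

```python
class DwellDetector:
    """Step S205 sketch: fire once the gaze stays within `tolerance` pixels
    of the same spot for longer than `threshold_s` seconds.

    Both parameter names and the pixel tolerance are illustrative
    assumptions, not values given in the application.
    """

    def __init__(self, threshold_s=1.0, tolerance=30.0):
        self.threshold_s = threshold_s
        self.tolerance = tolerance
        self._anchor = None        # (x, y) where the current dwell started
        self._start = None         # timestamp of the dwell start

    def update(self, gaze, now):
        """Feed one gaze sample; return True once the dwell threshold is met."""
        if self._anchor is None or self._dist(gaze, self._anchor) > self.tolerance:
            self._anchor, self._start = gaze, now   # a new dwell starts here
            return False
        return (now - self._start) > self.threshold_s

    @staticmethod
    def _dist(p, q):
        return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5
```

When `update` returns `True`, control would pass to step S206 (focus and capture); when the gaze jumps away, the anchor is reset, which corresponds to looping back to step S202.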
  • FIG. 3-1 is a schematic flowchart of an implementation of an information processing method according to Embodiment 3 of the present invention. As shown in FIG. 3-1, the detailed process of this embodiment includes:
  • Step S301 when the rear camera is turned on, detecting whether the front camera is turned on, if the front camera is turned on, executing step S302; otherwise, ending the processing flow;
  • the specific operation scenario corresponding to this step may be: when the rear camera is turned on, the front camera is simultaneously turned on, and then step S302 is performed;
  • Step S302 acquiring an eye image of the user through the front camera
  • Step S303 when the eye image of the user is acquired, analyzing the eye image to obtain eye data
  • The eye image is analyzed to obtain a series of eye data, such as the pupil positions of both eyes, the center point of the line connecting the two pupil centers (referred to as the center position), and the center point of the screen;
  • Step S304 calculating a focus position of the line of sight in the screen according to the eye data
  • the user's eyeball image can be monitored in real time and the focus position of the line of sight in the screen updated in real time, the focus position of the line of sight in the screen being the focus point of the camera;
  • Step S305 converting the focus position of the line of sight in the screen into a coordinate point, and displaying the coordinate point on the screen;
  • the specific process produced by the steps of this embodiment includes: 1) when the position moves, the position to which the line of sight has moved is the focus position on which the camera is currently to focus, and the camera is triggered to focus on that point in real time; 2) when the line of sight moves to the position of the shutter button, the last position where the line of sight previously stayed is recorded, and focusing and photographing are performed there; 3) while the line of sight remains still, the current focus point is kept, and whether the current position is correctly focused is checked.
  • Step S306: detect whether there is an operation of stopping eye-controlled photographing; if so, control the rear camera to take a photo based on the focus position; otherwise, return to step S302.
  • regarding the operation of stopping eye-controlled photographing: it is detected whether the pupil position in the eye image coincides with the center position of the eye-corner diagonal; if they coincide, the stop operation is detected. That is, when the pupil position in the eye image coincides with the center of the eye-corner diagonal, this is taken as the operation of stopping eye-controlled photographing, and the camera (specifically, the rear camera) is activated to take the picture.
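The coincidence test just described can be sketched as a simple distance check. Real pupil measurements never coincide exactly with the diagonal center, so a tolerance is needed; the tolerance value below is an assumption for illustration, not something the patent specifies.

```python
def is_stop_gesture(pupil_center, corner_diagonal_center, tolerance_px=3.0):
    """Detect the 'stop eye-controlled photographing' operation: the pupil
    position coincides (within a tolerance) with the center of the
    eye-corner diagonal."""
    dx = pupil_center[0] - corner_diagonal_center[0]
    dy = pupil_center[1] - corner_diagonal_center[1]
    return (dx * dx + dy * dy) ** 0.5 <= tolerance_px
```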
  • the embodiment of the invention realizes an eye-control function that tracks the eyes in real time to focus and then completes the photograph, thereby providing a solution for taking a perfect photo when the user cannot perform the photographing operation with both hands: focusing and photographing are performed through eye control, keeping one-handed phone photography stable.
  • the above steps S301 to S306 can be implemented by a processor in an electronic device such as a mobile phone or a tablet computer.
  • the key of this embodiment is to analyze the eye image acquired by the front camera to obtain eye data; the direction of the human eye is then analyzed from the eye data, the imaging position on the image sensor, and the layout of the eyes, and this is processed to determine the line-of-sight direction. The following describes in detail how the line of sight is analyzed.
  • 1) Determining the line-of-sight direction. First, as shown in Figure 3-2, the eye is abstracted into a geometric model used to calculate the offset distance and offset direction of the pupil position (coordinate 2) from the center of the eye-corner diagonal (coordinate 1), from which an offset vector is generated.
  • the center position of the abstracted eye-corner diagonal (the dotted line 31 in the figure is the eye-corner diagonal) is marked as coordinate 1; the contour information of the eye's lens is then used to determine the center position of the pupil, which can be taken as the geometric center of the pupil (marked as coordinate 2).
  • it should be noted that, owing to a person's genes or other external factors, the lens is never fully exposed; when necessary, an algorithm should be used to complete the missing part into a full circle for calculating the geometric center.
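Completing a partially occluded pupil contour into a full circle and taking its geometric center can be done with an algebraic least-squares circle fit (the Kåsa method). This is one standard way to realize the step just described; the patent does not specify an algorithm, so everything below is an illustrative assumption.

```python
def fit_circle_center(points):
    """Kasa least-squares circle fit: solve x^2 + y^2 = A*x + B*y + C for
    A, B, C; the circle center is (A/2, B/2). The fit works on a partial
    arc, so an occluded pupil contour still yields a geometric center."""
    n = len(points)
    sx = sum(x for x, _ in points); sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points); syy = sum(y * y for _, y in points)
    sxy = sum(x * y for x, y in points)
    sxz = sum(x * (x * x + y * y) for x, y in points)
    syz = sum(y * (x * x + y * y) for x, y in points)
    sz = sum(x * x + y * y for x, y in points)
    # Normal equations of the linear least-squares problem, solved by Cramer's rule.
    m = [[sxx, sxy, sx], [sxy, syy, sy], [sx, sy, n]]
    v = [sxz, syz, sz]
    def det3(a):
        return (a[0][0] * (a[1][1] * a[2][2] - a[1][2] * a[2][1])
                - a[0][1] * (a[1][0] * a[2][2] - a[1][2] * a[2][0])
                + a[0][2] * (a[1][0] * a[2][1] - a[1][1] * a[2][0]))
    d = det3(m)
    sol = []
    for i in range(3):
        mi = [row[:] for row in m]
        for r in range(3):
            mi[r][i] = v[r]
        sol.append(det3(mi) / d)
    a_coef, b_coef, _c = sol
    return (a_coef / 2.0, b_coef / 2.0)
```

Feeding in only a quarter-arc of contour points still recovers the true center, which is exactly the property the text relies on.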
  • when coordinate 1 and coordinate 2 coincide, the position the eyes are gazing at is taken to be perpendicular observation of the current information (which can be understood as observing the center position). As shown in Figure 3-3, when coordinate 1 and coordinate 2 are offset, the offset vector 32 (offset direction and offset distance) must be calculated.
  • the value of this offset vector, combined with the position of the two eyes in the current image, determines the focus position of the user's line of sight on the screen, and the focus position is displayed on the screen. It should be noted that the above vector is an offset measured after lens imaging, so the offset distance is small and requires appropriate weighted amplification in the calculation.
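The combination just described, amplifying the tiny post-imaging pupil offset and accounting for where the eyes sit relative to the screen center, can be sketched as below. The sign with which the eye-position vector is cancelled, and the default values of the amplification coefficients m and n, are assumptions for illustration; the patent only states that m and n are amplification coefficients to be tuned.

```python
def gaze_focus_position(pupil, corner_center, eyes_mid, screen_center,
                        m=40.0, n=1.0):
    """Estimate the on-screen focus point: amplify the small pupil offset
    (coordinate 2 minus coordinate 1) by m, and subtract the eyes' position
    vector relative to the screen center scaled by n (cancelling the
    component produced by merely looking at the screen center)."""
    offset = (pupil[0] - corner_center[0], pupil[1] - corner_center[1])        # vector 32
    eye_vec = (eyes_mid[0] - screen_center[0], eyes_mid[1] - screen_center[1]) # vector 35
    fx = screen_center[0] + m * offset[0] - n * eye_vec[0]
    fy = screen_center[1] + m * offset[1] - n * eye_vec[1]
    return (fx, fy)
```

With everything centered, a zero pupil offset maps to the screen center, and a small horizontal pupil offset maps to an amplified horizontal focus shift.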
  • with this embodiment, the user has a more stable experience when photographing one-handed, and eye-controlled positioning truly achieves the goal of "focusing wherever you look". It offers convenience and a better photographing experience to users who cannot use both hands, and also improves the operating experience of users whose hands are temporarily occupied: even one-handed, they can take a photo with a custom focus point and no jitter. This embodiment is also a distinctive feature in current mobile-phone competition and can be promoted as a highlight that adds to the entertainment value of the camera; other operations can likewise be built on the front eye-control function, and interfaces can be provided to third-party developers to create related function or entertainment apps (applications) that increase the fun of eye control.
  • an embodiment of the present invention provides an information processing apparatus; each unit included in the apparatus, and each module included in each unit, may be implemented by a processor in the terminal, or by specific logic circuits. In specific embodiments, the processor may be a central processing unit (CPU), a microprocessor (MPU), a digital signal processor (DSP), or a field-programmable gate array (FPGA).
  • the apparatus 400 includes a first detecting unit 401, an obtaining unit 402, an analyzing unit 403, and a photographing unit 404, where:
  • the first detecting unit 401 is configured to detect whether the camera is turned on;
  • the acquiring unit 402 is configured to acquire an eye image of the user if the camera is turned on;
  • the analyzing unit 403 is configured to analyze the eye image to obtain a focus position of the user's line of sight in the screen, and the screen is a display screen of the electronic device;
  • the photographing unit 404 is configured to focus the camera on an image in which the line of sight is in a focus position in the screen.
  • the photographing unit further includes a conversion module, a display module, and a photographing module, wherein:
  • the conversion module is configured to convert the focus position of the line of sight in the screen into a coordinate point;
  • the display module is configured to display the coordinate point on the screen of the electronic device;
  • the photographing module is configured to focus the camera on the coordinate point to capture the image.
  • the analyzing unit 403 includes an analysis module, a first calculation module, a second calculation module, and a third calculation module, where:
  • the analyzing module is configured to analyze the eye image to obtain a pupil position of a user's eyes, a center position of a diagonal of the eye corners of the eyes, and a center position of the image;
  • the first calculating module is configured to calculate a first displacement vector from the pupil position of the left eye and the center position of the left eye's eye-corner diagonal, or from the pupil position of the right eye and the center position of the right eye's eye-corner diagonal;
  • the second calculating module is configured to calculate a second displacement vector from the center position of the line between the pupil positions of the two eyes and the center position of the image;
  • the third calculating module is configured to calculate the focus position of the user's line of sight on the screen from the first and second displacement vectors;
  • specifically, the third calculating module determines the resulting displacement vector from the first and second displacement vectors, where m and n are the amplification coefficients of the vectors.
  • in this embodiment, when the electronic device is configured to take a photo, the apparatus further includes a second detecting unit configured to detect whether the hold time of the focus position of the line of sight in the screen is greater than a preset threshold; if the hold time is greater than the threshold, the photographing unit 404 is triggered; otherwise, the acquiring unit 402 is triggered.
  • an embodiment of the present invention further provides an electronic device, where the electronic device includes a processor and a display screen, where:
  • the processor is configured to detect whether the camera is turned on; if the camera is turned on, acquire an eye image of the user; analyze the eye image to obtain the focus position of the user's line of sight on the screen, the screen being the display screen of the electronic device; focus the camera on the focus position of the line of sight in the screen to capture an image; and output the image to the display screen;
  • the display screen is configured to display the image.
  • focusing the camera on the focus position of the line of sight in the screen to capture an image includes: converting the focus position of the line of sight in the screen into a coordinate point;
  • displaying the coordinate point on the screen of the electronic device;
  • and focusing the camera on the coordinate point to capture the image;
  • the display screen is configured to display the coordinate points.
  • analyzing the eye image to obtain the focus position of the user's line of sight on the screen includes: analyzing the eye image to obtain the pupil positions of the user's two eyes, the center positions of the eye-corner diagonals of the two eyes, and the center position of the image; calculating a first displacement vector from the pupil position of the left eye and the center of the left eye's eye-corner diagonal, or from the pupil position of the right eye and the center of the right eye's eye-corner diagonal; calculating a second displacement vector from the center of the line between the two pupils and the center of the image; and calculating the focus position of the user's line of sight on the screen from the two displacement vectors.
  • calculating the focus position of the user's line of sight on the screen from the two displacement vectors includes determining the resulting displacement vector from them, where m and n are the amplification coefficients of the vectors.
  • when the electronic device is configured to take a photo, the processor is further configured to detect whether the hold time of the focus position of the line of sight in the screen is greater than a preset threshold; if the hold time is greater than the threshold, the camera is focused on the focus position of the line of sight in the screen to capture an image.
  • the processor is further configured such that, if the hold time is not greater than the threshold, the user's eye image is acquired again and analyzed to obtain the focus position of the user's line of sight on the screen; whether the hold time of the focus position in the screen exceeds the preset threshold is then detected again, and once the hold time exceeds the threshold, the camera is focused on the focus position of the line of sight in the screen to capture an image.
  • the disclosed apparatus and method may be implemented in other manners.
  • the device embodiments described above are merely illustrative.
  • the division of units is only a division by logical function; in actual implementation there may be other divisions, for example: multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed.
  • the mutual coupling, direct coupling, or communication connection between the components shown or discussed may be through some interfaces, and the indirect coupling or communication connection between devices or units may be electrical, mechanical, or of other forms.
  • the units described above as separate components may or may not be physically separate, and components displayed as units may or may not be physical units; they may be located in one place or distributed across multiple network units; some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • in addition, the functional units in the embodiments of the present invention may all be integrated into one processing unit, each unit may serve separately as one unit, or two or more units may be integrated into one unit; the integrated unit can be implemented in the form of hardware, or in the form of hardware plus software functional units.
  • the foregoing program may be stored in a computer-readable storage medium and, when executed, performs the steps of the above method embodiments; the foregoing storage medium includes various media that can store program code, such as a removable storage device, a read-only memory (ROM), a magnetic disk, or an optical disk.
  • the above-described integrated unit of the present invention may, if implemented in the form of a software function module and sold or used as a standalone product, be stored in a computer-readable storage medium.
  • based on this understanding, the technical solutions of the embodiments of the present invention, in essence or in the part contributing to the prior art, may be embodied in the form of a software product; the computer software product is stored in a storage medium and includes a number of instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the methods described in the various embodiments of the present invention.
  • the foregoing storage medium includes various media that can store program codes, such as a mobile storage device, a ROM, a magnetic disk, or an optical disk.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)
  • User Interface Of Digital Computer (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

The present invention discloses an information processing method and device, and an electronic apparatus, wherein the method includes: detecting whether a camera is turned on; if the camera is turned on, acquiring an eye image of the user; analyzing the eye image to obtain the focus position of the user's line of sight on a screen, the screen being the display screen of the electronic apparatus; and focusing the camera on the focus position of the line of sight in the screen to capture an image.

Description

Information Processing Method and Device, and Electronic Apparatus

Technical Field

The present invention relates to electronic technology, and in particular to an information processing method and device, and an electronic apparatus.

Background

Improvements in mobile-phone photographic technology have greatly enhanced the user's photographing experience: in automatic mode, a user can take a fairly standard, sharp photo by simply moving a finger. However, this series of operations is usually completed either with both hands cooperating (to guarantee stability) or by a one-handed thumb press on the shutter key; in the single-finger case, an unsteady grip may produce a photo that is not as perfect as imagined, for example blurred, incorrectly focused, or mis-framed because of shake.

Summary

In view of this, to solve at least one problem existing in the prior art, embodiments of the present invention provide an information processing method and device, and an electronic apparatus, capable of capturing a perfect image when the user cannot complete the photographing action with two-handed operation.

The technical solutions of the embodiments of the present invention are implemented as follows:
In a first aspect, an embodiment of the present invention provides an information processing method, the method including:

detecting whether a camera is turned on;

if the camera is turned on, acquiring an eye image of the user;

analyzing the eye image to obtain a focus position of the user's line of sight on a screen, the screen being a display screen of an electronic device;

focusing the camera on the focus position of the line of sight in the screen, so as to capture an image based on the focus position.

In a second aspect, an embodiment of the present invention provides an information processing device including a first detecting unit, an acquiring unit, an analyzing unit, and a photographing unit, where:

the first detecting unit is configured to detect whether a camera is turned on;

the acquiring unit is configured to acquire an eye image of the user if the camera is turned on;

the analyzing unit is configured to analyze the eye image to obtain a focus position of the user's line of sight on a screen, the screen being a display screen of an electronic device;

the photographing unit is configured to focus the camera on the focus position of the line of sight in the screen, so as to capture an image based on the focus position.

In a third aspect, an embodiment of the present invention provides an electronic apparatus including a processor and a display screen, where:

the processor is configured to detect whether a camera is turned on; if the camera is turned on, acquire an eye image of the user; analyze the eye image to obtain a focus position of the user's line of sight on the screen, the screen being the display screen of the electronic apparatus; focus the camera on the focus position of the line of sight in the screen to capture an image; and output the image to the display screen;

the display screen is configured to display the image.
Embodiments of the present invention provide an information processing method and device, and an electronic apparatus, in which whether a camera is turned on is detected; if the camera is turned on, an eye image of the user is acquired; the eye image is analyzed to obtain the focus position of the user's line of sight on the screen, the screen being the display screen of the electronic apparatus; and the camera is focused on the focus position of the line of sight in the screen to capture an image. In this way, a perfect image can be captured even when the user cannot complete the photographing action with both hands.
Brief Description of the Drawings

Figure 1-1 is a schematic diagram of the hardware structure of an optional mobile terminal implementing various embodiments of the present invention;

Figure 1-2 is a schematic diagram of the composition of the photographing lens in the mobile terminal shown in Figure 1-1;

Figure 1-3 is a schematic flowchart of the implementation of the information processing method of Embodiment 1 of the present invention;

Figure 2 is a schematic flowchart of the implementation of the information processing method of Embodiment 2 of the present invention;

Figure 3-1 is a schematic flowchart of the implementation of the information processing method of Embodiment 3 of the present invention;

Figure 3-2 is a first schematic diagram of abstracting the eye into a geometric model according to an embodiment of the present invention;

Figure 3-3 is a second schematic diagram of abstracting the eye into a geometric model according to an embodiment of the present invention;

Figure 3-4 is a third schematic diagram of abstracting the eye into a geometric model according to an embodiment of the present invention;

Figure 4 is a schematic diagram of the composition of the information processing device of Embodiment 4 of the present invention.
Detailed Description

It should be understood that the specific embodiments described here are only intended to explain the technical solutions of the present invention and are not intended to limit its scope of protection. Mobile terminals implementing the embodiments of the present invention will now be described with reference to the drawings. In the following description, suffixes such as "module", "component", or "unit" used to denote elements are used only to facilitate the description of the present invention and have no specific meaning in themselves; "module" and "component" may therefore be used interchangeably.

Mobile terminals may be implemented in various forms. For example, the terminals described in the present invention may include mobile terminals such as mobile phones, smartphones, notebook computers, digital broadcast receivers, personal digital assistants (PDA), tablet computers (PAD), portable multimedia players (PMP), and navigation devices, as well as fixed terminals such as digital TVs and desktop computers. In the following, it is assumed that the terminal is a mobile terminal; however, those skilled in the art will understand that, apart from elements used specifically for mobile purposes, the constructions according to the embodiments of the present invention can also be applied to fixed-type terminals.
Figure 1-1 is a schematic diagram of the hardware structure of an optional mobile terminal implementing various embodiments of the present invention. As shown in Figure 1-1, the mobile terminal 100 may include a wireless communication unit 110, an audio/video (A/V) input unit 120, a user input unit 130, an output unit 150, a memory 160, an interface unit 170, a controller 180, a power supply unit 190, and so on. Figure 1-1 shows a mobile terminal with various components, but it should be understood that implementing all of the illustrated components is not a requirement; more or fewer components may alternatively be implemented. The elements of the mobile terminal are described in detail below.

The wireless communication unit 110 typically includes one or more components allowing radio communication between the mobile terminal 100 and a wireless communication system or network. For example, the wireless communication unit may include at least one of a mobile communication module 112, a wireless internet module 113, and a short-range communication module 114.

The mobile communication module 112 transmits radio signals to and/or receives radio signals from at least one of a base station (e.g., an access point, a Node B, etc.), an external terminal, and a server. Such radio signals may include voice call signals, video call signals, or various types of data transmitted and/or received according to text and/or multimedia messages.

The wireless internet module 113 supports wireless internet access of the mobile terminal and may be internally or externally coupled to the terminal. The wireless internet access technologies involved may include wireless LAN (WLAN, Wi-Fi), wireless broadband (WiBro), worldwide interoperability for microwave access (WiMAX), high-speed downlink packet access (HSDPA), and so on.

The short-range communication module 114 is a module configured to support short-range communication. Some examples of short-range communication technology include Bluetooth(TM), radio-frequency identification (RFID), the Infrared Data Association (IrDA), ultra-wideband (UWB), ZigBee(TM), and so on.

The A/V input unit 120 is configured to receive audio or video signals and may include a camera 121 and a microphone 122. The camera 121 processes image data of still images or video obtained by an image capture device in video-capture or image-capture mode; the processed image frames may be displayed on the display unit 151 of the output unit 150, stored in the memory 160 (or another storage medium), or transmitted via the wireless communication unit 110, and two or more cameras 121 may be provided depending on the construction of the mobile terminal. The microphone 122 can receive sound (audio data) in operating modes such as a phone-call mode, a recording mode, or a voice-recognition mode, and process such sound into audio data; in phone-call mode, the processed audio (voice) data can be converted into a format transmittable to a mobile communication base station via the mobile communication module 112 for output. The microphone 122 may implement various types of noise-cancellation (or suppression) algorithms to eliminate (or suppress) noise or interference generated while receiving and transmitting audio signals.

The user input unit 130 may generate key input data according to commands input by the user to control various operations of the mobile terminal. The user input unit 130 allows the user to input various types of information and may include a keyboard, a dome switch, a touch pad (e.g., a touch-sensitive component detecting changes in resistance, pressure, capacitance, and so on caused by contact), a jog wheel, a jog switch, and so on. In particular, when the touch pad is superimposed on the display unit 151 in the form of a layer, a touch screen may be formed.

The interface unit 170 serves as an interface through which at least one external device can connect to the mobile terminal 100. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port configured to connect a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and so on. The identification module may store various information for authenticating the user of the mobile terminal 100 and may include a user identity module (UIM), a subscriber identity module (SIM), a universal subscriber identity module (USIM), and so on. In addition, the device having the identification module (hereinafter referred to as the "identification device") may take the form of a smart card, so the identification device can be connected to the mobile terminal 100 via a port or other connection means. The interface unit 170 may be configured to receive input (e.g., data information, electric power, etc.) from an external device and transmit the received input to one or more elements within the mobile terminal 100, or to transmit data between the mobile terminal and the external device.

In addition, when the mobile terminal 100 is connected to an external cradle, the interface unit 170 may serve as a path through which electric power is supplied from the cradle to the mobile terminal 100, or as a path through which various command signals input from the cradle are transmitted to the mobile terminal; various command signals or electric power input from the cradle may serve as signals for recognizing whether the mobile terminal is accurately mounted on the cradle. The output unit 150 is constructed to provide output signals (e.g., audio signals, video signals, alarm signals, vibration signals, etc.) in a visual, audible, and/or tactile manner, and may include the display unit 151, an audio output module 152, and so on.
The display unit 151 may display information processed in the mobile terminal 100. For example, when the mobile terminal 100 is in phone-call mode, the display unit 151 may display a user interface (UI) or graphical user interface (GUI) related to the call or other communication (e.g., text messaging, multimedia file download, etc.); when the mobile terminal 100 is in video-call mode or image-capture mode, it may display captured images and/or received images, a UI or GUI showing the video or image and related functions, and so on.

Meanwhile, when the display unit 151 and the touch pad are superimposed on each other in the form of layers to form a touch screen, the display unit 151 may serve as both an input device and an output device. The display unit 151 may include at least one of a liquid crystal display (LCD), a thin-film-transistor LCD (TFT-LCD), an organic light-emitting diode (OLED) display, a flexible display, and a three-dimensional (3D) display. Some of these displays may be constructed to be transparent to allow viewing from the outside; these may be called transparent displays, a typical example being a TOLED (transparent organic light-emitting diode) display. Depending on the particular desired embodiment, the mobile terminal 100 may include two or more display units 151 (or other display devices); for example, the mobile terminal 100 may include an external display unit (not shown) and an internal display unit (not shown). The touch screen may be configured to detect touch input pressure as well as touch input position and touch input area.

The audio output module 152 may, when the mobile terminal is in a mode such as call-signal reception, call, recording, voice recognition, or broadcast reception, convert audio data received by the wireless communication unit 110 or stored in the memory 160 into audio signals and output them as sound. Moreover, the audio output module 152 may provide audio output related to a particular function performed by the mobile terminal 100 (e.g., call-signal reception sound, message reception sound, etc.), and may include a speaker, a buzzer, and so on.

The memory 160 may store software programs of the processing and control operations executed by the controller 180, or temporarily store data that has been output or is to be output (e.g., phone book, messages, still images, video, etc.). Moreover, the memory 160 may store data on the vibrations and audio signals of various modes output when a touch is applied to the touch screen.

The memory 160 may include at least one type of storage medium, including flash memory, hard disk, multimedia card, card-type memory (e.g., SD or DX memory), random access memory (RAM), static random access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, magnetic disk, optical disk, and so on. Moreover, the mobile terminal 100 may cooperate with a network storage device that performs the storage function of the memory 160 through a network connection.

The controller 180 typically controls the overall operation of the mobile terminal 100, for example performing control and processing related to voice calls, data communication, video calls, and so on. In addition, the controller 180 may include a multimedia module 181 configured to reproduce or play back multimedia data; the multimedia module 181 may be constructed within the controller 180, or may be constructed separately from the controller 180. The controller 180 may perform pattern-recognition processing to recognize handwriting input or picture-drawing input performed on the touch screen as characters or images.

The power supply unit 190 receives external or internal electric power under the control of the controller 180 and provides the appropriate electric power required to operate the elements and components.

The various embodiments described here may be implemented in a computer-readable medium using, for example, computer software, hardware, or any combination thereof. For a hardware implementation, the embodiments described here may be implemented using at least one of an application-specific integrated circuit (ASIC), a digital signal processor (DSP), a digital signal processing device (DSPD), a programmable logic device (PLD), a field-programmable gate array (FPGA), a processor, a controller, a microcontroller, a microprocessor, or an electronic unit designed to perform the functions described here; in some cases, such an embodiment may be implemented in the controller 180. For a software implementation, an embodiment such as a procedure or function may be implemented with a separate software module that allows at least one function or operation to be performed; software code may be implemented by a software application (or program) written in any suitable programming language, stored in the memory 160, and executed by the controller 180.

So far, the mobile terminal has been described according to its functions. In the following, for brevity, a slide-type mobile terminal among various types of mobile terminal such as folder-type, bar-type, swing-type, and slide-type will be described as an example; the present invention can therefore be applied to any type of mobile terminal and is not limited to the slide-type mobile terminal.
The mobile terminal in the embodiments of the present invention further includes a photographing lens. Referring to Figure 1-2, the photographing lens 1211 is composed of a plurality of optical lenses configured to form a subject image, and is a single-focus lens or a zoom lens. The photographing lens 1211 can move along the optical axis under the control of a lens driver 1221; the lens driver 1221 controls the focus position of the photographing lens 1211 (and, in the case of a zoom lens, also the focal distance) according to control signals from a lens-drive control circuit 1222, which performs drive control of the lens driver 1221 in accordance with control commands from a microcomputer 1217.

An imaging element 1212 is arranged on the optical axis of the photographing lens 1211, near the position of the subject image formed by the photographing lens 1211. The imaging element 1212 is configured to capture the subject image and acquire captured image data; photodiodes constituting the pixels are arranged two-dimensionally in a matrix on the imaging element 1212. Each photodiode generates a photoelectric conversion current corresponding to the amount of received light, and this current is charge-accumulated by a capacitor connected to each photodiode; a Bayer-array RGB color filter is arranged on the front surface of each pixel.

The imaging element 1212 is connected to an imaging circuit 1213, which performs charge-accumulation control and image-signal readout control in the imaging element 1212, reduces reset noise in the read-out image signal (analog image signal), performs waveform shaping, and then raises the gain so as to reach an appropriate signal level. The imaging circuit 1213 is connected to an A/D converter 1214, which performs analog-to-digital conversion of the analog image signal and outputs a digital image signal (hereinafter referred to as image data) to a bus 1227.

The bus 1227 is a transfer path configured to transfer various data read out or generated inside the camera. Connected to the bus 1227 are the above A/D converter 1214, as well as an image processor 1215, a JPEG processor 1216, the microcomputer 1217, an SDRAM (Synchronous Dynamic Random Access Memory) 1218, a memory interface (hereinafter referred to as memory I/F) 1219, and an LCD (Liquid Crystal Display) driver 1220.

The image processor 1215 performs various kinds of image processing, such as OB subtraction, white-balance adjustment, color-matrix operation, gamma conversion, color-difference signal processing, noise-removal processing, synchronization processing, and edge processing, on the image data based on the output of the imaging element 1212. When image data is recorded on a recording medium 1225, the JPEG processor 1216 compresses the image data read out from the SDRAM 1218 according to the JPEG compression scheme; in addition, the JPEG processor 1216 decompresses JPEG image data for image reproduction and display. When decompressing, a file recorded on the recording medium 1225 is read out, decompression processing is applied in the JPEG processor 1216, and the decompressed image data is temporarily stored in the SDRAM 1218 and displayed on an LCD 1226. In this embodiment, the JPEG scheme is adopted as the image compression/decompression scheme; however, the scheme is not limited to it, and other compression/decompression schemes such as MPEG, TIFF, or H.264 may of course be adopted.

The microcomputer 1217 functions as the control unit of the camera as a whole and centrally controls the various processing sequences of the camera; the microcomputer 1217 is connected to an operation unit 1223 and a flash memory 1224.

The operation unit 1223 includes, but is not limited to, physical or virtual keys, which may be operation controls such as a power button, a shutter key, an edit key, a moving-image button, a playback button, a menu button, a cross key, an OK button, a delete button, an enlarge button, and various other input buttons and keys; the operation states of these operation controls are detected and the detection results are output to the microcomputer 1217. In addition, a touch panel is provided on the front surface of the LCD 1226 serving as a display; the user's touch position is detected and output to the microcomputer 1217. The microcomputer 1217 executes various processing sequences corresponding to the user's operations according to the detection results of the operation positions from the operation unit 1223.

The flash memory 1224 stores programs configured to execute the various processing sequences of the microcomputer 1217, and the microcomputer 1217 controls the camera as a whole according to these programs; in addition, the flash memory 1224 stores various adjustment values of the camera, which the microcomputer 1217 reads out and uses to control the camera.

The SDRAM 1218 is an electrically rewritable volatile memory configured to temporarily store image data; it temporarily stores the image data output from the A/D converter 1214 and the image data processed in the image processor 1215, the JPEG processor 1216, and so on.

The memory interface 1219 is connected to the recording medium 1225 and controls the writing of image data, and of data such as file headers attached to image data, to the recording medium 1225, as well as the reading of such data from the recording medium 1225. The recording medium 1225 is, for example, a recording medium such as a memory card freely attachable to and detachable from the camera body; however, it is not limited to this and may also be a hard disk or the like built into the camera body.

The LCD driver 1220 is connected to the LCD 1226. Image data processed by the image processor 1215 is stored in the SDRAM 1218; when display is needed, the stored image data is read and displayed on the LCD 1226. Alternatively, image data compressed by the JPEG processor 1216 is stored in the SDRAM 1218; when display is needed, the JPEG processor 1216 reads the compressed image data from the SDRAM 1218, decompresses it, and the decompressed image data is displayed via the LCD 1226.

The LCD 1226 is arranged on the back of the camera body to display images. The display panel is not limited to an LCD; various display panels such as organic EL may also be adopted.

The hardware structure of the electronic device and the photographing lens have been described above taking a mobile terminal as an example. It should be noted that the methods provided by the embodiments of the present invention are equally applicable to electronic devices such as personal computers.

The technical solutions of the present invention are further elaborated below with reference to the drawings and specific embodiments.
Embodiment 1

To solve the problems described in the background art, an embodiment of the present invention provides an information processing method applied to an electronic device. The functions implemented by this information processing method can be realized by a processor in the electronic device calling program code; the program code can of course be stored in a computer storage medium. The electronic device thus includes at least a processor and a storage medium.

Figure 1-3 is a schematic flowchart of the implementation of the information processing method of Embodiment 1 of the present invention. As shown in Figure 1-3, the information processing method includes:

Step S101: detect whether a camera is turned on;

Here, in specific implementation, the electronic device may be a fixed electronic device such as a personal computer (PC), a portable electronic device such as a personal digital assistant (PAD), a tablet computer, or a laptop, or of course a mobile electronic device such as a smartphone.

Here, the camera may belong to the electronic device itself or be external. For example, tablets and mobile phones are generally equipped with their own cameras, while a desktop personal computer without its own camera can use an external camera to take photos or shoot video. In this embodiment, taking photos and shooting video are both referred to as capturing images; in other words, the method provided by the embodiment of the present invention can be applied to both photo taking and video shooting.

Here, generally speaking, some electronic devices may have more than one camera; for example, a mobile phone may have a front camera and a rear camera. The camera in step S101 includes the front camera and the rear camera, and as long as one camera is turned on, the camera is considered to be detected as turned on.

Step S102: if the camera is turned on, acquire an eye image of the user;

Here, acquiring the user's eye image in step S102 can, in specific implementation, be realized by turning on the front camera.

Step S103: analyze the eye image to obtain the focus position of the user's line of sight on the screen, the screen being the display screen of the electronic device;

Step S104: focus the camera on the focus position of the line of sight in the screen, so as to capture an image based on the focus position.
In the embodiment of the present invention, step S104, focusing the camera on the focus position of the line of sight in the screen, includes:

Step S1041: convert the focus position of the line of sight in the screen into a coordinate point;

Step S1042: display the coordinate point on the screen of the electronic device;

Step S1043: focus the camera on the coordinate point.

Here, to make it convenient for the user to adjust the position of the terminal, such as a mobile phone, the focus position can be displayed on the display screen of the terminal. If the user feels that the current focus position is not the subject the user really wants to shoot, the user can adjust the position of the terminal. Taking a mobile phone as an example, the user can adjust the angle of the phone in the hand to change the phone's angle to, or distance from, the ground plane, thereby changing the phone's subject.

In the embodiment of the present invention, step S103, analyzing the eye image to obtain the focus position of the user's line of sight on the screen, includes:

Step S1031: analyze the eye image to obtain the pupil positions of the user's two eyes, the center positions of the eye-corner diagonals of the two eyes, and the center position of the image;

Step S1032: calculate a first displacement vector from the pupil position of the left eye and the center position of the left eye's eye-corner diagonal; or, calculate it from the pupil position of the right eye and the center position of the right eye's eye-corner diagonal;

Step S1033: calculate a second displacement vector from the center position of the line between the pupil position of the left eye and the pupil position of the right eye and the center position of the image;

Step S1034: calculate the focus position of the user's line of sight on the screen from the first and second displacement vectors.

Here, calculating the focus position of the user's line of sight on the screen from the first and second displacement vectors includes: determining the resulting displacement vector from them, where m and n are the amplification coefficients of the vectors.

In the embodiment of the present invention, whether a camera is turned on is detected; if the camera is turned on, an eye image of the user is acquired; the eye image is analyzed to obtain the focus position of the user's line of sight on the screen, the screen being the display screen of the electronic device; and the camera is focused on the focus position of the line of sight in the screen to capture an image. In this way, a perfect image can be captured even when the user cannot complete the photographing action with both hands.
Embodiment 2

Based on the foregoing embodiment, an embodiment of the present invention provides an information processing method applied to an electronic device. The functions implemented by this information processing method can be realized by a processor in the electronic device calling program code; the program code can of course be stored in a computer storage medium. The electronic device thus includes at least a processor and a storage medium.

Figure 2 is a schematic flowchart of the implementation of the information processing method of Embodiment 2 of the present invention. As shown in Figure 2, the information processing method includes:

Step S201: when the electronic device is configured to take a photo, detect whether a camera is turned on;

Here, in specific implementation, the electronic device may be a fixed electronic device such as a personal computer (PC), a portable electronic device such as a personal digital assistant (PAD), a tablet computer, or a laptop, or of course a mobile electronic device such as a smartphone.

Here, the camera may belong to the electronic device itself or be external. For example, tablets and mobile phones are generally equipped with their own cameras, while a desktop personal computer without its own camera can use an external camera to take photos or shoot video. In this embodiment, taking photos and shooting video are both referred to as capturing images; in other words, the method provided by the embodiment of the present invention can be applied to both photo taking and video shooting.

Here, generally speaking, some electronic devices may have more than one camera; for example, a mobile phone may have a front camera and a rear camera. The camera in step S201 includes the front camera and the rear camera, and as long as one camera is turned on, the camera is considered to be detected as turned on.

Step S202: if the camera is turned on, acquire an eye image of the user;

Here, acquiring the user's eye image in step S202 can, in specific implementation, be realized by turning on the front camera.

Step S203: analyze the eye image to obtain the focus position of the user's line of sight on the screen, the screen being the display screen of the electronic device;

Step S204: focus the camera on the focus position of the line of sight in the screen to capture an image.

Step S205: detect whether the hold time of the focus position of the line of sight in the screen is greater than a preset threshold; if the hold time is greater than the threshold, go to step S206; otherwise, return to step S202.

Step S206: focus the camera on the focus position of the line of sight in the screen to capture an image.
Embodiment 3

This embodiment provides an information processing method that realizes the function of taking a perfect photo through pure software analysis, without adding hardware cost. Figure 3-1 is a schematic flowchart of the implementation of the information processing method of Embodiment 3 of the present invention. As shown in Figure 3-1, the detailed process of this embodiment includes:

Step S301: when the rear camera is turned on for photographing, detect whether the front camera is turned on; if the front camera is turned on, execute step S302; otherwise, end the processing flow;

The specific operation scenario corresponding to this step may be: when the rear camera is turned on for photographing, the front camera is turned on at the same time, and step S302 is then executed;

Step S302: acquire an eye image of the user through the front camera;

Step S303: when the user's eye image is acquired, analyze the eye image to obtain eye data;

Specifically, the eye image is analyzed to obtain a series of eye data such as the pupil positions of both eyes, the center point of the line connecting the two pupils (the center point position is referred to simply as the center position), and the center point of the screen;

Step S304: calculate the focus position of the line of sight in the screen according to the eye data;

Here, in specific implementation, the user's eyeball image can be monitored in real time and the focus position of the line of sight in the screen updated in real time, the focus position of the line of sight in the screen being the focus point of the camera;

Step S305: convert the focus position of the line of sight in the screen into a coordinate point, and display the coordinate point on the screen;

Here, the specific process produced by the steps of this embodiment includes: 1) when the position moves, the position to which the line of sight has moved is the focus position on which the camera is currently to focus, and the camera is triggered to focus on that point in real time; 2) when the line of sight moves to the position of the shutter button, the last position where the line of sight previously stayed is recorded, and focusing and photographing are performed there; 3) while the line of sight remains still, the current focus point is kept, and whether the current position is correctly focused is checked.

Step S306: detect whether there is an operation of stopping eye-controlled photographing; if so, control the rear camera to take a photo based on the focus position; otherwise, return to step S302.

Here, for the operation of stopping eye-controlled photographing, it is detected whether the pupil position in the eye image coincides with the center position of the eye-corner diagonal; if they coincide, the stop operation is detected. That is, when the pupil position in the eye image coincides with the center of the eye-corner diagonal, this is the operation of stopping eye-controlled photographing, and the camera is then activated to take a picture; specifically, the rear camera is activated to take the picture.

The embodiment of the present invention realizes an eye-control function that tracks the eyes in real time to focus and then completes the photograph, thereby providing a solution for taking a perfect photo when the user cannot perform the photographing operation with both hands: focusing and photographing are performed through eye control, keeping one-handed phone photography stable.

Steps S301 to S306 above can all be implemented by a processor in an electronic device such as a mobile phone or tablet computer.
The key of this embodiment is to analyze the eye image acquired through the front camera to obtain eye data, then analyze, from the eye data, the direction of the human eye, its imaging position on the image sensor, and the layout of the eyes, and process these to determine the line-of-sight direction. How the line of sight is analyzed is explained in detail below.

1) Determining the line-of-sight direction

First, as shown in Figure 3-2, the eye is abstracted into a geometric model used to calculate the offset distance and offset direction of the eye's pupil position (coordinate 2) from the center position of the eye-corner diagonal (coordinate 1); an offset vector is then generated from the offset distance and offset direction.

Then, the center position of the abstracted eye-corner diagonal (the dotted line 31 in the figure is the eye-corner diagonal) is marked as coordinate 1, and the contour information of the eye's lens is used to determine the center position of the pupil, which can be the geometric center of the pupil (marked as coordinate 2). It should be noted that, because of a person's genes or other external factors, the lens is never fully exposed, so when necessary an algorithm must be used to complete the missing part into a full circle for calculating the geometric center. When coordinate 1 and coordinate 2 coincide, the position the eyes are gazing at is considered to be perpendicular observation of the current information (which can be understood as observing the center position). As shown in Figure 3-3, when coordinate 1 and coordinate 2 are offset, the offset vector 32 (offset direction and offset distance) must be calculated.

By calculating the value of this offset vector and combining it with the position of the two eyes in the current image, the focus position of the user's line of sight on the screen is determined, and the focus position is displayed on the screen. It should be noted that the above vector is the offset after lens imaging; the offset distance is small and requires appropriate weighted amplification in the calculation.

2) The human-eye observation position

Given the line-of-sight offset vector, the human-eye observation position must now be determined. As shown in Figure 3-4, if the image information acquired in preview is abstracted onto the screen of the electronic device's display, the position of the human eyes can be calculated through additional vector information. Suppose the center of the screen is set (33) as the origin of the vectors; then the directed line between the origin and the center position of the two eyes (the center point 34 of the line between the two pupils) is the displacement vector 35 of the eyes in the screen. Combining this displacement vector with the line-of-sight offset vector, the focus position of the line of sight in the screen is calculated.

For example: when the eyes are in the state of "position 1" and the offset vector of the line-of-sight position has been calculated, then, because the eyes' position is already low, even observing the center requires the line of sight to look upward. Therefore, when calculating the coordinates of the position where the line of sight rests, the vector produced when the eyes observe the screen center must be cancelled out to obtain a true vector value (m and n being the amplification coefficients of the vectors), through which the focus position of the line of sight in the screen is finally determined.

Of course, this still cannot determine the resting position of the line of sight precisely, because in the implementation the two amplification coefficients m and n need to be matched and tuned, so as to obtain a more precise position and give the user a more realistic eye-control experience.

This is only the focusing and photographing function realized through eye control. At the same time, interfaces for the data output of the eye-control function and for its function switch can be provided for third-party applications to call, promoting extended development of this function by third-party app developers, enriching its fields of use, producing more distinctive applications, and increasing the fun of using the phone.

Application scenarios and benefits of this embodiment: based on this embodiment, the user has a more stable experience when photographing one-handed, and eye-controlled positioning truly achieves the goal of "focusing wherever you look". It provides convenience and a better photographing experience for users who cannot use both hands, and also improves the operating experience of users whose hands are temporarily occupied: even one-handed, they can take a photo with a custom focus point and no jitter. This embodiment is also a distinctive feature in current mobile-phone competition and can be promoted as a highlight, increasing the entertainment value of photographing. Other operations can also be realized based on the front eye-control function, and interfaces can be provided to third-party developers to develop related function or entertainment apps (applications), increasing the fun of eye control.
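Embodiment 3 notes that the amplification coefficients m and n must be matched and tuned before the gaze position becomes accurate. One plausible way to do this, sketched below purely for illustration (the linear model, data shapes, and procedure are assumptions, not part of the patent), is to show the user a few known on-screen targets, record the two raw vectors for each, and solve for m and n by least squares.

```python
def calibrate_mn(samples):
    """samples: list of (offset_vec, eye_vec, target_point) tuples, where
    target_point is the known on-screen point the user was asked to look at,
    and the assumed model is target ~ m*offset_vec + n*eye_vec (per axis).
    Returns (m, n) minimising the squared error over both axes."""
    # Accumulate the normal equations of the 2-parameter least-squares problem.
    a11 = a12 = a22 = b1 = b2 = 0.0
    for off, eye, tgt in samples:
        for axis in (0, 1):
            o, e, t = off[axis], eye[axis], tgt[axis]
            a11 += o * o; a12 += o * e; a22 += e * e
            b1 += o * t; b2 += e * t
    det = a11 * a22 - a12 * a12
    m = (b1 * a22 - b2 * a12) / det
    n = (a11 * b2 - a12 * b1) / det
    return (m, n)
```

In practice such a calibration would run once per user; the recovered m and n would then feed the real-time gaze-to-screen mapping.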
Embodiment 4

Based on the foregoing embodiments, an embodiment of the present invention provides an information processing device. The units included in the device, and the modules included in the units, can be implemented by a processor in a terminal, or of course by specific logic circuits. In specific embodiments, the processor may be a central processing unit (CPU), a microprocessor (MPU), a digital signal processor (DSP), a field-programmable gate array (FPGA), or the like.

Figure 4 is a schematic diagram of the composition of the information processing device of Embodiment 4 of the present invention. As shown in Figure 4, the device 400 includes a first detecting unit 401, an acquiring unit 402, an analyzing unit 403, and a photographing unit 404, where:

the first detecting unit 401 is configured to detect whether a camera is turned on;

the acquiring unit 402 is configured to acquire an eye image of the user if the camera is turned on;

the analyzing unit 403 is configured to analyze the eye image to obtain the focus position of the user's line of sight on the screen, the screen being the display screen of the electronic device;

the photographing unit 404 is configured to focus the camera on the focus position of the line of sight in the screen to capture an image.

In the embodiment of the present invention, the photographing unit further includes a conversion module, a display module, and a photographing module, where:

the conversion module is configured to convert the focus position of the line of sight in the screen into a coordinate point;

the display module is configured to display the coordinate point on the screen of the electronic device;

the photographing module is configured to focus the camera on the coordinate point to capture the image.

In the embodiment of the present invention, the analyzing unit 403 includes an analyzing module, a first calculating module, a second calculating module, and a third calculating module, where:

the analyzing module is configured to analyze the eye image to obtain the pupil positions of the user's two eyes, the center positions of the eye-corner diagonals of the two eyes, and the center position of the image;

the first calculating module is configured to calculate a first displacement vector from the pupil position of the left eye and the center position of the left eye's eye-corner diagonal; or, to calculate it from the pupil position of the right eye and the center position of the right eye's eye-corner diagonal;

the second calculating module is configured to calculate a second displacement vector from the center position of the line between the pupil positions of the left and right eyes and the center position of the image;

the third calculating module is configured to calculate the focus position of the user's line of sight on the screen from the first and second displacement vectors.

Here, the third calculating module is configured to determine the resulting displacement vector from the first and second displacement vectors, where m and n are the amplification coefficients of the vectors.

In the embodiment of the present invention, when the electronic device is configured to take a photo, the device further includes a second detecting unit configured to detect whether the hold time of the focus position of the line of sight in the screen is greater than a preset threshold; if the hold time is greater than the threshold, the photographing unit 404 is triggered; otherwise, the acquiring unit 402 is triggered.

It should be pointed out here that the description of the above device embodiment is similar to the description of the above method embodiment and has beneficial effects similar to those of the method embodiment, so it is not repeated. For technical details not disclosed in the device embodiment of the present invention, please refer to the description of the method embodiment of the present invention.
Embodiment 5

Based on the foregoing embodiments, an embodiment of the present invention further provides an electronic apparatus including a processor and a display screen, where:

the processor is configured to detect whether a camera is turned on; if the camera is turned on, acquire an eye image of the user; analyze the eye image to obtain the focus position of the user's line of sight on the screen, the screen being the display screen of the electronic apparatus; focus the camera on the focus position of the line of sight in the screen to capture an image; and output the image to the display screen;

the display screen is configured to display the image.

In the embodiment of the present invention, focusing the camera on the focus position of the line of sight in the screen to capture an image includes: converting the focus position of the line of sight in the screen into a coordinate point; displaying the coordinate point on the screen of the electronic apparatus; and focusing the camera on the coordinate point to capture the image;

the display screen is configured to display the coordinate point.

In the embodiment of the present invention, analyzing the eye image to obtain the focus position of the user's line of sight on the screen includes: analyzing the eye image to obtain the pupil positions of the user's two eyes, the center positions of the eye-corner diagonals of the two eyes, and the center position of the image; calculating a first displacement vector from the pupil position of the left eye and the center position of the left eye's eye-corner diagonal, or from the pupil position of the right eye and the center position of the right eye's eye-corner diagonal; calculating a second displacement vector from the center position of the line between the pupil positions of the left and right eyes and the center position of the image; and calculating the focus position of the user's line of sight on the screen from the first and second displacement vectors.

In the embodiment of the present invention, calculating the focus position of the user's line of sight on the screen from the first and second displacement vectors includes: determining the resulting displacement vector from them, where m and n are the amplification coefficients of the vectors.

In the embodiment of the present invention, when the electronic apparatus is configured to take a photo, the processor is further configured to detect whether the hold time of the focus position of the line of sight in the screen is greater than a preset threshold; if the hold time is greater than the threshold, the camera is focused on the focus position of the line of sight in the screen to capture an image.

In the embodiment of the present invention, the processor is further configured such that, if the hold time is not greater than the threshold, the user's eye image is acquired again and analyzed to obtain the focus position of the user's line of sight on the screen; whether the hold time of the focus position in the screen exceeds the preset threshold is then detected again, and once the hold time exceeds the threshold, the camera is focused on the focus position of the line of sight in the screen to capture an image.

It should be pointed out here that the description of the above electronic apparatus embodiment is similar to the above method description and has the same beneficial effects as the method embodiment, so it is not repeated. For technical details not disclosed in the electronic apparatus embodiment of the present invention, those skilled in the art should refer to the description of the method embodiment of the present invention.
It should be understood that references throughout the specification to "one embodiment" or "an embodiment" mean that a particular feature, structure, or characteristic related to the embodiment is included in at least one embodiment of the present invention; the appearance of "in one embodiment" or "in an embodiment" in various places throughout the specification does not necessarily refer to the same embodiment. Furthermore, these particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. It should be understood that, in the various embodiments of the present invention, the sequence numbers of the above processes do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present invention. The above sequence numbers of the embodiments of the present invention are only for description and do not represent the superiority or inferiority of the embodiments.

It should be noted that, in this document, the terms "comprise", "include", or any of their other variants are intended to cover non-exclusive inclusion, so that a process, method, article, or device including a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article, or device that includes the element.

In the several embodiments provided in this application, it should be understood that the disclosed devices and methods may be implemented in other ways. The device embodiments described above are merely illustrative; for example, the division of the units is only a division by logical function, and in actual implementation there may be other divisions, such as: multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the mutual coupling, direct coupling, or communication connection between the components shown or discussed may be through some interfaces, and the indirect coupling or communication connection between devices or units may be electrical, mechanical, or of other forms.

The units described above as separate components may or may not be physically separate, and components displayed as units may or may not be physical units; they may be located in one place or distributed across multiple network units; some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.

In addition, the functional units in the embodiments of the present invention may all be integrated into one processing unit, each unit may serve separately as one unit, or two or more units may be integrated into one unit; the integrated unit can be implemented in the form of hardware, or in the form of hardware plus software functional units.

Those of ordinary skill in the art will understand that all or part of the steps implementing the above method embodiments can be completed by hardware related to program instructions; the foregoing program may be stored in a computer-readable storage medium and, when executed, performs the steps of the above method embodiments; the foregoing storage medium includes various media that can store program code, such as a removable storage device, a read-only memory (ROM), a magnetic disk, or an optical disk.

Alternatively, if the above integrated unit of the present invention is implemented in the form of a software function module and sold or used as a standalone product, it may also be stored in a computer-readable storage medium. Based on this understanding, the technical solutions of the embodiments of the present invention, in essence or in the part contributing to the prior art, may be embodied in the form of a software product; the computer software product is stored in a storage medium and includes a number of instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the methods described in the various embodiments of the present invention. The foregoing storage medium includes various media that can store program code, such as a removable storage device, a ROM, a magnetic disk, or an optical disk.

The above are only specific embodiments of the present invention, but the protection scope of the present invention is not limited to them; any person skilled in the art can easily conceive of changes or substitutions within the technical scope disclosed by the present invention, and these should all be covered within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (20)

  1. An information processing method, the method comprising:
    detecting whether a camera is turned on;
    if the camera is turned on, acquiring an eye image of a user;
    analyzing the eye image to obtain a focus position of the user's line of sight on a screen, the screen being a display screen of an electronic device;
    focusing the camera on the focus position of the line of sight on the screen, so as to capture an image based on the focus position.
  2. The method according to claim 1, wherein the focusing the camera on the focus position of the line of sight on the screen so as to capture an image based on the focus position comprises:
    converting the focus position of the line of sight on the screen into a coordinate point;
    displaying the coordinate point on the screen of the electronic device;
    focusing the camera on the coordinate point and capturing an image.
  3. The method according to claim 1, wherein the analyzing the eye image to obtain the focus position of the user's line of sight on the screen comprises:
    analyzing the eye image to obtain the pupil positions of the user's two eyes, the center position of the diagonal of the corners of each eye, and the center position of the image;
    calculating a displacement vector v1 according to the pupil position of the left eye and the center position of the diagonal of the corners of the left eye; or, calculating the displacement vector v1 according to the pupil position of the right eye and the center position of the diagonal of the corners of the right eye;
    calculating a displacement vector v2 according to the center position of the line connecting the pupil positions of the left eye and the right eye and the center position of the image;
    calculating the focus position of the user's line of sight on the screen according to the displacement vector v1 and the displacement vector v2.
  4. The method according to claim 3, wherein the calculating the focus position of the user's line of sight on the screen according to the displacement vector v1 and the displacement vector v2 comprises:
    determining a displacement vector v according to v = m·v1 + n·v2, where m and n are amplification coefficients of the vectors.
  5. The method according to any one of claims 1 to 4, wherein, when the electronic device is configured to take photographs, the method further comprises:
    detecting whether the hold time of the focus position of the line of sight on the screen is greater than a preset threshold;
    if the hold time is greater than the threshold, focusing the camera on the focus position of the line of sight on the screen and capturing an image.
  6. The method according to claim 5, wherein the method further comprises:
    if the hold time is greater than the threshold, acquiring an eye image of the user;
    analyzing the eye image to obtain the focus position of the user's line of sight on the screen;
    detecting whether the hold time of the focus position of the line of sight on the screen is greater than the preset threshold;
    if the hold time is greater than the threshold, focusing the camera on the focus position of the line of sight on the screen and capturing an image;
    if the hold time is greater than the threshold, acquiring an eye image of the user.
  7. The method according to claim 6, wherein the detecting whether a camera is turned on comprises:
    when the rear camera is turned on for taking photographs, detecting whether the front camera is turned on;
    correspondingly, the acquiring an eye image of the user if the camera is turned on comprises:
    if the front camera is turned on, acquiring the eye image of the user through the front camera.
  8. The method according to claim 1, wherein the capturing an image based on the focus position comprises:
    detecting whether there is an operation of stopping eye-controlled photographing, and if so, controlling the rear camera to take a photograph based on the focus position.
  9. The method according to claim 8, wherein the detecting whether there is an operation of stopping eye-controlled photographing comprises:
    detecting whether the position of the pupil in the eye image coincides with the center position of the diagonal; if they coincide, the operation of stopping eye-controlled photographing is detected.
  10. An information processing apparatus, the apparatus comprising a first detection unit, an acquisition unit, an analysis unit, and a shooting unit, wherein:
    the first detection unit is configured to detect whether a camera is turned on;
    the acquisition unit is configured to acquire an eye image of a user if the camera is turned on;
    the analysis unit is configured to analyze the eye image to obtain a focus position of the user's line of sight on a screen, the screen being a display screen of an electronic device;
    the shooting unit is configured to focus the camera on the focus position of the line of sight on the screen, so as to capture an image based on the focus position.
  11. The apparatus according to claim 10, wherein the shooting unit further comprises a conversion module, a display module, and a shooting module, wherein:
    the conversion module is configured to convert the focus position of the line of sight on the screen into a coordinate point;
    the display module is configured to display the coordinate point on the screen of the electronic device;
    the shooting module is configured to focus the camera on the coordinate point and capture an image.
  12. The apparatus according to claim 10, wherein the analysis unit comprises an analysis module, a first calculation module, a second calculation module, and a third calculation module, wherein:
    the analysis module is configured to analyze the eye image to obtain the pupil positions of the user's two eyes, the center position of the diagonal of the corners of each eye, and the center position of the image;
    the first calculation module is configured to calculate a displacement vector v1 according to the pupil position of the left eye and the center position of the diagonal of the corners of the left eye; or, calculate the displacement vector v1 according to the pupil position of the right eye and the center position of the diagonal of the corners of the right eye;
    the second calculation module is configured to calculate a displacement vector v2 according to the center position of the line connecting the pupil positions of the left eye and the right eye and the center position of the image;
    the third calculation module is configured to calculate the focus position of the user's line of sight on the screen according to the displacement vector v1 and the displacement vector v2.
  13. The apparatus according to claim 12, wherein the third calculation module is configured to determine a displacement vector v according to v = m·v1 + n·v2, where m and n are amplification coefficients of the vectors.
  14. The apparatus according to any one of claims 10 to 13, wherein the analysis unit is configured to detect whether the hold time of the focus position of the line of sight on the screen is greater than a preset threshold;
    and, if the hold time is greater than the threshold, focus the camera on the focus position of the line of sight on the screen and capture an image.
  15. The apparatus according to claim 14, wherein the analysis unit is configured to: if the hold time is greater than the threshold, acquire an eye image of the user and analyze the eye image to obtain the focus position of the user's line of sight on the screen; detect whether the hold time of the focus position of the line of sight on the screen is greater than the preset threshold; if the hold time is greater than the threshold, focus the camera on the focus position of the line of sight on the screen and capture an image; and if the hold time is greater than the threshold, acquire an eye image of the user.
  16. The apparatus according to claim 15, wherein the first detection unit is configured to detect, when the rear camera is turned on for taking photographs, whether the front camera is turned on;
    correspondingly, the acquisition unit is configured to acquire the eye image of the user through the front camera if the front camera is turned on.
  17. The apparatus according to claim 10, wherein the analysis unit is configured to detect whether there is an operation of stopping eye-controlled photographing, and if so, to control the rear camera to take a photograph based on the focus position.
  18. The apparatus according to claim 17, wherein the analysis unit is configured to detect whether the position of the pupil in the eye image coincides with the center position of the diagonal; if they coincide, the operation of stopping eye-controlled photographing is detected.
  19. An electronic device, the electronic device comprising a processor and a display screen, wherein:
    the processor is configured to detect whether a camera is turned on; if the camera is turned on, acquire an eye image of a user; analyze the eye image to obtain a focus position of the user's line of sight on a screen, the screen being the display screen of the electronic device; focus the camera on the focus position of the line of sight on the screen to capture an image; and output the image to the display screen;
    the display screen is configured to display the image.
  20. The electronic device according to claim 19, wherein the processor is configured to analyze the eye image to obtain the pupil positions of the user's two eyes, the center position of the diagonal of the corners of each eye, and the center position of the image;
    calculate a displacement vector v1 according to the pupil position of the left eye and the center position of the diagonal of the corners of the left eye; or, calculate the displacement vector v1 according to the pupil position of the right eye and the center position of the diagonal of the corners of the right eye;
    calculate a displacement vector v2 according to the center position of the line connecting the pupil positions of the left eye and the right eye and the center position of the image;
    calculate the focus position of the user's line of sight on the screen according to the displacement vector v1 and the displacement vector v2.
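Claims 9 and 18 stop eye-controlled photographing when the pupil position coincides with the center of the eye-corner diagonal. For illustration only, this check might be sketched as follows; the pixel tolerance is an assumption introduced here, since exact coincidence would rarely occur with noisy pupil detection:

```python
def stop_requested(pupil, corner_center, tolerance_px=2):
    """Detect the 'stop eye-controlled photographing' operation: the pupil
    position coincides (within tolerance) with the center of the
    eye-corner diagonal."""
    dx = pupil[0] - corner_center[0]
    dy = pupil[1] - corner_center[1]
    return dx * dx + dy * dy <= tolerance_px * tolerance_px
```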
PCT/CN2016/112490 2016-01-20 2016-12-27 Information processing method and apparatus, and electronic device WO2017124899A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610035453.7A CN105704369B (zh) 2016-01-20 2016-01-20 Information processing method and apparatus, and electronic device
CN201610035453.7 2016-01-20

Publications (1)

Publication Number Publication Date
WO2017124899A1 true WO2017124899A1 (zh) 2017-07-27

Family

ID=56226693

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/112490 WO2017124899A1 (zh) 2016-01-20 2016-12-27 Information processing method and apparatus, and electronic device

Country Status (2)

Country Link
CN (1) CN105704369B (zh)
WO (1) WO2017124899A1 (zh)


Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105704369B (zh) * 2016-01-20 2019-02-15 努比亚技术有限公司 Information processing method and apparatus, and electronic device
CN106713764A (zh) * 2017-01-24 2017-05-24 维沃移动通信有限公司 Photographing method and mobile terminal
CN112702506A (zh) * 2019-10-23 2021-04-23 北京小米移动软件有限公司 Shooting method, shooting apparatus, and electronic device
CN114339037A (zh) * 2021-12-23 2022-04-12 臻迪科技股份有限公司 Autofocus method, apparatus, device, and storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1084728A * 1992-06-02 1994-04-06 佳能株式会社 Optical apparatus having a line-of-sight detector
CN103246044A * 2012-02-09 2013-08-14 联想(北京)有限公司 Autofocus method and system, and camera and video camera having the system
CN103338331A * 2013-07-04 2013-10-02 上海斐讯数据通信技术有限公司 Image acquisition system using eyeball-controlled focusing
CN103516985A * 2013-09-18 2014-01-15 上海鼎为软件技术有限公司 Mobile terminal and image acquisition method thereof
CN103795926A * 2014-02-11 2014-05-14 惠州Tcl移动通信有限公司 Method, system, and photographing device for controlling photographing focus using eyeball tracking technology
CN105049717A * 2015-07-02 2015-11-11 上海闻泰电子科技有限公司 Pupil-controlled autofocus method and system for a digital camera
CN105704369A * 2016-01-20 2016-06-22 努比亚技术有限公司 Information processing method and apparatus, and electronic device

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4536248B2 (ja) * 2000-11-24 2010-09-01 オリンパス株式会社 Imaging apparatus
JP4966816B2 (ja) * 2007-10-25 2012-07-04 株式会社日立製作所 Gaze direction measuring method and gaze direction measuring device
CN102149325B (zh) * 2008-09-26 2013-01-02 松下电器产业株式会社 Gaze direction determination device and gaze direction determination method
CN104699124A (zh) * 2015-03-24 2015-06-10 天津通信广播集团有限公司 Television angle adjustment method based on line-of-sight viewing angle detection


Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111077989A (zh) * 2019-05-27 2020-04-28 广东小天才科技有限公司 Screen control method based on an electronic device, and electronic device
CN111077989B (zh) * 2019-05-27 2023-11-24 广东小天才科技有限公司 Screen control method based on an electronic device, and electronic device
CN111046744B (zh) * 2019-11-21 2023-04-18 深圳云天励飞技术股份有限公司 Region-of-interest detection method and apparatus, readable storage medium, and terminal device
CN111046744A (zh) * 2019-11-21 2020-04-21 深圳云天励飞技术有限公司 Region-of-interest detection method and apparatus, readable storage medium, and terminal device
CN111881763A (zh) * 2020-06-30 2020-11-03 北京小米移动软件有限公司 Method and apparatus for determining a user's gaze position, storage medium, and electronic device
CN114071002A (zh) * 2020-08-04 2022-02-18 珠海格力电器股份有限公司 Photographing method and apparatus, storage medium, and terminal device
CN114071002B (zh) * 2020-08-04 2023-01-31 珠海格力电器股份有限公司 Photographing method and apparatus, storage medium, and terminal device
CN112672058A (zh) * 2020-12-26 2021-04-16 维沃移动通信有限公司 Shooting method and apparatus
CN112672058B (zh) * 2020-12-26 2022-05-03 维沃移动通信有限公司 Shooting method and apparatus
CN115705567A (zh) * 2021-08-06 2023-02-17 荣耀终端有限公司 Payment method and related apparatus
CN115705567B (zh) * 2021-08-06 2024-04-19 荣耀终端有限公司 Payment method and related apparatus
CN117148959A (zh) * 2023-02-27 2023-12-01 荣耀终端有限公司 Frame rate adjustment method for eye tracking and related apparatus
CN116820246A (zh) * 2023-07-06 2023-09-29 上海仙视电子科技有限公司 Viewing-angle-adaptive screen adjustment control method and apparatus
CN116820246B (zh) * 2023-07-06 2024-05-28 上海仙视电子科技有限公司 Viewing-angle-adaptive screen adjustment control method and apparatus

Also Published As

Publication number Publication date
CN105704369A (zh) 2016-06-22
CN105704369B (zh) 2019-02-15

Similar Documents

Publication Publication Date Title
WO2017124899A1 (zh) Information processing method and apparatus, and electronic device
KR102444085B1 (ko) Portable communication device and method for displaying images in a portable communication device
US10511758B2 (en) Image capturing apparatus with autofocus and method of operating the same
CN109891874B (zh) Panoramic shooting method and apparatus
JP6267363B2 (ja) Method and apparatus for capturing images
KR20220080195A (ko) Shooting method and electronic device
EP3076659B1 (en) Photographing apparatus, control method thereof, and non-transitory computer-readable recording medium
WO2021031609A1 (zh) Liveness detection method and apparatus, electronic device, and storage medium
EP3001247B1 (en) Method and terminal for acquiring panoramic image
WO2017107629A1 (zh) Mobile terminal, data transmission system, and mobile terminal shooting method
TWI706379B (zh) Image processing method and apparatus, electronic device, and storage medium
KR102018887B1 (ko) 신체 부위 검출을 이용한 이미지 프리뷰
WO2016029641A1 (zh) Photo acquisition method and apparatus
US9959484B2 (en) Method and apparatus for generating image filter
WO2017012269A1 (zh) Method, apparatus, and terminal device for determining spatial parameters from an image
WO2017114048A1 (zh) Mobile terminal and contact identification method
WO2017088609A1 (zh) Image denoising apparatus and method
WO2017000491A1 (zh) Method and apparatus for acquiring an iris image, and iris recognition device
CN109803165A (zh) Video processing method, apparatus, terminal, and storage medium
WO2018184260A1 (zh) Document image correction method and apparatus
WO2017114088A1 (zh) Automatic white balance method and apparatus, terminal, and storage medium
WO2018098860A1 (zh) Photo synthesis method and apparatus
WO2022033272A1 (zh) Image processing method and electronic device
US20230224574A1 (en) Photographing method and apparatus
JP2015126326A (ja) Electronic device and image processing method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16886158

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16886158

Country of ref document: EP

Kind code of ref document: A1