WO2017071532A1 - Method and device for taking a self-portrait group photo - Google Patents

Method and device for taking a self-portrait group photo (一种自拍合影的方法和装置) - Download PDF

Info

Publication number
WO2017071532A1
WO2017071532A1 (PCT/CN2016/102848; CN2016102848W)
Authority
WO
WIPO (PCT)
Prior art keywords
face
face recognition
point
portraits
faces
Prior art date
Application number
PCT/CN2016/102848
Other languages
English (en)
French (fr)
Inventor
谢书勋
Original Assignee
努比亚技术有限公司 (Nubia Technology Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 努比亚技术有限公司 (Nubia Technology Co., Ltd.)
Publication of WO2017071532A1

Links

Images

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/61Control of cameras or camera modules based on recognised objects
    • H04N23/611Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body

Definitions

  • This document relates to, but is not limited to, the field of terminal technology, and in particular to a method and apparatus for taking self-portrait group photos.
  • This paper provides a method and device for taking self-portrait group photos that can improve both the convenience of self-portrait shooting and the quality of the resulting group photo.
  • An embodiment of the present invention provides a method for taking a self-portrait group photo, applied to a terminal, the method including the following steps:
  • in self-portrait mode, the faces of the group-photo participants are determined based on face recognition;
  • when it is determined that the positional relationship between the participants' faces is suitable for a group photo, the camera is triggered to shoot.
  • determining the positional relationship between the participants' faces includes: determining whether the participants' faces are on the same focal plane and close to each other;
  • determining that the positional relationship between the participants' faces is suitable for a group photo includes: determining that the participants' faces are on the same focal plane and close to each other;
  • determining whether the participants' faces are on the same focal plane includes:
  • if the area ratio r of the face recognition frames corresponding to the two participants' faces is greater than a first threshold, it is determined that the two faces are on the same focal plane; if r is less than or equal to the first threshold, it is determined that they are not on the same focal plane;
  • the area ratio r of the two face recognition frames is the area s1 of the first participant's face recognition frame divided by the area s2 of the second participant's face recognition frame, where s1 is less than or equal to s2.
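The area-ratio test above can be sketched as a small function. The threshold value used here is illustrative only; the patent does not specify a concrete first threshold in this excerpt.

```python
def on_same_focal_plane(s1, s2, first_threshold=0.6):
    """Return True if two face frames (areas s1, s2) are judged to lie
    on the same focal plane, per the area-ratio test described above.

    r is the smaller area divided by the larger, so 0 < r <= 1.
    The default threshold 0.6 is an illustrative choice, not a value
    taken from the patent.
    """
    small, large = sorted((s1, s2))
    r = small / large
    return r > first_threshold
```

Faces at a similar distance from the camera produce recognition frames of similar area, so a ratio near 1 indicates the same focal plane.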
  • determining whether the participants' faces are close to each other includes:
  • if the closeness t of the two participants' faces is greater than a second threshold, it is determined that the participants are close to each other; if t is less than or equal to the second threshold, it is determined that they are not close to each other;
  • the closeness t of the two participants' faces is calculated as follows:
  • the center point of the first participant's face recognition frame is A1;
  • the center point of the second participant's face recognition frame is B1;
  • the intersection of the line connecting the two center points with the first participant's face recognition frame is A2;
  • the intersection of the line connecting the two center points with the second participant's face recognition frame is B2;
  • the distance between point A1 and point B1 is s;
  • the distance between point A1 and point A2 is da;
  • the distance between point B1 and point B2 is db.
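The quantities above can be computed directly for axis-aligned face frames. Note that the patent's actual formula (1-1) combining s, da, and db is not reproduced in this excerpt; the form t = (da + db) / s used below is an assumption consistent with the definitions (t grows as the frames approach, and t = 1 when the frames just touch).

```python
import math

def boundary_distance(half_w, half_h, ux, uy):
    """Distance from an axis-aligned rectangle's center to its boundary
    along the unit direction (ux, uy)."""
    candidates = []
    if abs(ux) > 1e-12:
        candidates.append(half_w / abs(ux))
    if abs(uy) > 1e-12:
        candidates.append(half_h / abs(uy))
    return min(candidates)

def closeness(frame_a, frame_b):
    """Compute t for two face frames given as (cx, cy, w, h).

    A1, B1 are the frame centers; A2, B2 are where the segment A1-B1
    crosses each frame boundary; s = |A1B1|, da = |A1A2|, db = |B1B2|.
    The combining formula t = (da + db) / s is an assumption: the exact
    formula (1-1) is not reproduced in this excerpt.
    """
    ax, ay, aw, ah = frame_a
    bx, by, bw, bh = frame_b
    dx, dy = bx - ax, by - ay
    s = math.hypot(dx, dy)
    if s == 0:
        return float("inf")  # coincident centers: maximally close
    ux, uy = dx / s, dy / s
    da = boundary_distance(aw / 2, ah / 2, ux, uy)
    db = boundary_distance(bw / 2, bh / 2, ux, uy)
    return (da + db) / s
```

With this form, two 2x2 frames whose edges just touch give t = 1, and t falls below 1 as the frames separate, matching the "greater than a second threshold means close" test.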
  • determining the participants' faces based on face recognition includes:
  • performing face recognition on the subjects and, when more than one face is recognized, taking the faces framed by the two largest-area face recognition frames as the participants' faces.
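The selection rule above, keeping the two largest-area recognition frames, is straightforward; the (x, y, w, h) frame representation is an illustrative assumption.

```python
def pick_participants(face_frames):
    """From all recognized face frames (x, y, w, h), keep the two with
    the largest area as the group-photo participants, as described
    above. Returns fewer than two frames if fewer faces were found."""
    return sorted(face_frames, key=lambda f: f[2] * f[3], reverse=True)[:2]
```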
  • An embodiment of the invention further provides a device for taking a self-portrait group photo, applied to a terminal, including:
  • a face detection module configured to determine the faces of the group-photo participants based on face recognition in self-portrait mode;
  • a position calculation module configured to determine the positional relationship between the participants' faces;
  • a shooting module configured to trigger the camera to shoot when it is determined that the positional relationship between the participants' faces is suitable for a group photo.
  • the position calculation module is configured to determine the positional relationship by determining whether the participants' faces are on the same focal plane and close to each other;
  • the shooting module is configured to determine that the positional relationship is suitable for a group photo by determining that the participants' faces are on the same focal plane and close to each other.
  • the position calculation module is configured to determine whether the participants' faces are on the same focal plane as follows:
  • if the area ratio r of the face recognition frames corresponding to the two participants' faces is greater than a first threshold, it is determined that the two faces are on the same focal plane; if r is less than or equal to the first threshold, it is determined that they are not on the same focal plane;
  • the area ratio r of the two face recognition frames is the area s1 of the first participant's face recognition frame divided by the area s2 of the second participant's face recognition frame, where s1 is less than or equal to s2.
  • the position calculation module is configured to determine whether the participants' faces are close to each other as follows:
  • if the closeness t of the two participants' faces is greater than a second threshold, it is determined that the participants are close to each other; if t is less than or equal to the second threshold, it is determined that they are not close to each other;
  • the closeness t of the two participants' faces is calculated as follows:
  • the center point of the first participant's face recognition frame is A1;
  • the center point of the second participant's face recognition frame is B1;
  • the intersection of the line connecting the two center points with the first participant's face recognition frame is A2;
  • the intersection of the line connecting the two center points with the second participant's face recognition frame is B2;
  • the distance between point A1 and point B1 is s;
  • the distance between point A1 and point A2 is da;
  • the distance between point B1 and point B2 is db.
  • the face detection module is configured to determine the participants' faces based on face recognition as follows:
  • performing face recognition on the subjects and, when more than one face is recognized, taking the faces framed by the two largest-area face recognition frames as the participants' faces.
  • An embodiment of the invention provides a method for taking a self-portrait group photo, applied to a terminal, including:
  • in self-portrait mode, the number of portraits being photographed and the positional relationship between the portrait faces are determined based on face recognition;
  • when the number of portraits is two or more and the positional relationship of at least two portraits meets the facial grouping condition, the camera shutter is triggered to take the portrait photo.
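The steps above can be sketched as one self-contained trigger decision. Frame representation (cx, cy, w, h), both threshold values, and the closeness form t = (da + db) / s are illustrative assumptions; the patent's formula (1-1) is not reproduced in this excerpt.

```python
import math

def should_trigger(face_frames, r_thresh=0.6, t_thresh=1.0):
    """Decide whether to fire the shutter, per the steps above:
    (1) at least two faces must be recognized; (2) take the two with
    the largest frame area; (3) their frame-area ratio must exceed
    r_thresh (same row / focal plane); (4) their closeness t must
    exceed t_thresh."""
    if len(face_frames) < 2:
        return False
    a, b = sorted(face_frames, key=lambda f: f[2] * f[3], reverse=True)[:2]
    area_a, area_b = a[2] * a[3], b[2] * b[3]
    r = min(area_a, area_b) / max(area_a, area_b)
    if r <= r_thresh:
        return False  # not on the same row / focal plane
    dx, dy = b[0] - a[0], b[1] - a[1]
    s = math.hypot(dx, dy)
    if s == 0:
        return True  # coincident centers: trivially close

    ux, uy = dx / s, dy / s

    def edge(w, h):
        # center-to-boundary distance along (ux, uy) for a w-by-h frame
        opts = []
        if abs(ux) > 1e-12:
            opts.append(w / 2 / abs(ux))
        if abs(uy) > 1e-12:
            opts.append(h / 2 / abs(uy))
        return min(opts)

    t = (edge(a[2], a[3]) + edge(b[2], b[3])) / s
    return t > t_thresh
```

For example, two equal 2x2 frames 1.5 units apart trigger the shutter (r = 1, t ≈ 1.33), while the same frames 10 units apart, or frames of very different sizes, do not.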
  • determining the number of portraits being photographed and the positional relationship between the portrait faces based on face recognition includes:
  • the facial grouping condition includes: at least two portraits are in the same row and the faces of those two portraits are close to each other;
  • determining the positional relationship between the portraits according to the positional relationship between the face recognition frames corresponding to the portraits includes:
  • if the area ratio of the two face recognition frames is greater than a first threshold, determining that the two corresponding portraits are in the same row; where, the first threshold being less than 1, the area ratio is the area of the smaller face recognition frame divided by the area of the larger one;
  • determining the positional relationship between the portraits according to the positional relationship between the face recognition frames corresponding to the portraits further includes:
  • determining the facial closeness t of the two portraits according to the ratio of the size of the two face frames corresponding to the two portraits to the distance between the two face frames, as follows:
  • the center point of the first portrait's face recognition frame is A1;
  • the center point of the second portrait's face recognition frame is B1;
  • the intersection of the line connecting the two center points with the first portrait's face recognition frame is A2;
  • the intersection of the line connecting the two center points with the second portrait's face recognition frame is B2;
  • the distance between point A1 and point B1 is s;
  • the distance between point A1 and point A2 is da;
  • the distance between point B1 and point B2 is db;
  • An embodiment of the present invention provides a device for taking a self-portrait group photo, applied to a terminal, including:
  • a face detection and processing module configured to determine, based on face recognition, the number of portraits being photographed and the positional relationship between the portrait faces in self-portrait mode;
  • a group photographing module configured to trigger the camera shutter to take the portrait photo when the number of portraits is two or more and the positional relationship of at least two portraits meets the facial grouping condition.
  • the face detection and processing module is configured to determine the number of portraits being photographed and the positional relationship between the portrait faces based on face recognition as follows:
  • the facial grouping condition includes: at least two portraits are in the same row and the faces of those two portraits are close to each other;
  • the face detection and processing module is configured to determine the positional relationship between the portraits according to the positional relationship between the face recognition frames corresponding to the portraits as follows:
  • if the area ratio of the two face recognition frames is greater than a first threshold, determining that the two corresponding portraits are in the same row; where, the first threshold being less than 1, the area ratio is the area of the smaller face recognition frame divided by the area of the larger one;
  • the face detection and processing module is further configured to determine the positional relationship between the portraits according to the positional relationship between the face recognition frames corresponding to the portraits as follows:
  • the facial closeness t of the two portraits is determined according to the ratio of the size of the two face frames corresponding to the two portraits to the distance between the two face frames, as follows:
  • the center point of the first portrait's face recognition frame is A1;
  • the center point of the second portrait's face recognition frame is B1;
  • the intersection of the line connecting the two center points with the first portrait's face recognition frame is A2;
  • the intersection of the line connecting the two center points with the second portrait's face recognition frame is B2;
  • the distance between point A1 and point B1 is s;
  • the distance between point A1 and point A2 is da;
  • the distance between point B1 and point B2 is db;
  • An embodiment of the invention further provides a computer readable storage medium storing computer executable instructions which, when executed by a processor, implement the above method.
  • The method and device for taking self-portrait group photos provided by the embodiments of the present invention use face recognition to determine the positional relationship between the participants' faces; when the participants' faces are close to each other and on the same focal plane, the terminal triggers the camera to take the picture automatically.
  • This self-portrait group photo method is convenient to operate and can improve the user experience and the quality of the photograph.
  • FIG. 1 is a schematic diagram of the hardware structure of a mobile terminal implementing various embodiments of the present invention.
  • FIG. 2 is a schematic diagram of a communication system supporting communication between mobile terminals of the present invention.
  • FIG. 3 is a flowchart of a method for taking a self-portrait group photo according to an embodiment of the present invention.
  • FIG. 4 is a diagram illustrating the face-closeness quantities used in formula (1-1).
  • FIG. 5 is a schematic diagram of a device for taking a self-portrait group photo according to an embodiment of the present invention.
  • FIG. 6 is a flowchart of another method for taking a self-portrait group photo according to an embodiment of the present invention.
  • FIG. 7 is a schematic diagram of another device for taking a self-portrait group photo according to an embodiment of the present invention.
  • FIG. 8 is a flowchart of a method for taking a self-portrait group photo according to a specific example of the present invention.
  • FIG. 9 is corresponding photo 1 in a specific example of the present invention.
  • FIG. 10 is corresponding photo 2 in a specific example of the present invention.
  • The mobile terminal can be implemented in various forms.
  • The terminals described in the present invention may include mobile terminals such as mobile phones, smart phones, notebook computers, digital broadcast receivers, PDAs (Personal Digital Assistants), PADs (tablets), PMPs (Portable Multimedia Players), navigation devices, and the like, as well as fixed terminals such as digital TVs, desktop computers, and the like.
  • In the following, it is assumed that the terminal is a mobile terminal.
  • However, those skilled in the art will appreciate that, apart from components used specifically for mobile purposes, configurations according to embodiments of the present invention can also be applied to fixed-type terminals.
  • FIG. 1 is a schematic diagram showing the hardware structure of a mobile terminal embodying various embodiments of the present invention.
  • The mobile terminal 100 may include a wireless communication unit 110, an A/V (Audio/Video) input unit 120, a user input unit 130, a sensing unit 140, an output unit 150, a memory 160, an interface unit 170, a controller 180, a power supply unit 190, and the like.
  • Figure 1 illustrates a mobile terminal having various components, but it should be understood that not all illustrated components are required to be implemented. More or fewer components can be implemented instead. The elements of the mobile terminal will be described in detail below.
  • The wireless communication unit 110 typically includes one or more components that permit radio communication between the mobile terminal 100 and a wireless communication system or network.
  • For example, the wireless communication unit can include at least one of a broadcast receiving module 111, a mobile communication module 112, a wireless internet module 113, a short-range communication module 114, and a location information module 115.
  • the broadcast receiving module 111 receives a broadcast signal and/or broadcast associated information from an external broadcast management server via a broadcast channel.
  • the broadcast channel can include a satellite channel and/or a terrestrial channel.
  • the broadcast management server may be a server that generates and transmits a broadcast signal and/or broadcast associated information or a server that receives a previously generated broadcast signal and/or broadcast associated information and transmits it to the terminal.
  • the broadcast signal may include a TV broadcast signal, a radio broadcast signal, a data broadcast signal, and the like.
  • The broadcast signal may further include a data broadcast signal combined with a TV or radio broadcast signal.
  • the broadcast associated information may also be provided via a mobile communication network, and in this case, the broadcast associated information may be received by the mobile communication module 112.
  • the broadcast signal may exist in various forms, for example, it may exist in the form of Digital Multimedia Broadcasting (DMB) Electronic Program Guide (EPG), Digital Video Broadcasting Handheld (DVB-H) Electronic Service Guide (ESG), and the like.
  • The broadcast receiving module 111 can receive broadcast signals using various types of broadcast systems.
  • In particular, the broadcast receiving module 111 can receive digital broadcasts using digital broadcasting systems such as Digital Multimedia Broadcasting-Terrestrial (DMB-T), Digital Multimedia Broadcasting-Satellite (DMB-S), Digital Video Broadcasting-Handheld (DVB-H), the MediaFLO (Media Forward Link Only) data broadcasting system, Integrated Services Digital Broadcasting-Terrestrial (ISDB-T), and the like.
  • The broadcast receiving module 111 can be constructed to be suitable for various broadcast systems providing broadcast signals as well as the above-described digital broadcast systems.
  • The broadcast signal and/or broadcast associated information received via the broadcast receiving module 111 may be stored in the memory 160 (or other type of storage medium).
  • the mobile communication module 112 transmits the radio signals to and/or receives radio signals from at least one of a base station (e.g., an access point, a Node B, etc.), an external terminal, and a server.
  • Such radio signals may include voice call signals, video call signals, or various types of data transmitted and/or received in accordance with text and/or multimedia messages.
  • the wireless internet module 113 supports wireless internet access of the mobile terminal.
  • the module can be internally or externally coupled to the terminal.
  • The wireless internet access technologies involved in the module may include WLAN (Wireless LAN, Wi-Fi), Wibro (Wireless Broadband), Wimax (Worldwide Interoperability for Microwave Access), HSDPA (High Speed Downlink Packet Access), and the like.
  • the short range communication module 114 is a module for supporting short range communication.
  • Some examples of short-range communication technologies include Bluetooth™, Radio Frequency Identification (RFID), Infrared Data Association (IrDA), Ultra Wideband (UWB), ZigBee™, and the like.
  • the location information module 115 is a module for checking or acquiring location information of the mobile terminal.
  • a typical example of a location information module is GPS (Global Positioning System).
  • The GPS module 115 calculates distance information and accurate time information from three or more satellites and applies triangulation to the calculated information to accurately calculate three-dimensional current position information in terms of longitude, latitude, and altitude.
  • The method for calculating position and time information may use three satellites and correct errors in the calculated position and time information by using another satellite.
  • the GPS module 115 is capable of calculating speed information by continuously calculating current position information in real time.
  • The A/V input unit 120 is for receiving an audio or video signal.
  • The A/V input unit 120 may include a camera 121 and a microphone 122. The camera 121 processes image data of still pictures or video obtained by the image capturing device in a video capturing mode or an image capturing mode.
  • The processed image frames can be displayed on the display unit 151.
  • The image frames processed by the camera 121 may be stored in the memory 160 (or other storage medium) or transmitted via the wireless communication unit 110, and two or more cameras 121 may be provided depending on the configuration of the mobile terminal.
  • The microphone 122 can receive sound (audio data) in operation modes such as a telephone call mode, a recording mode, a voice recognition mode, and the like, and can process such sound into audio data.
  • In the case of the telephone call mode, the processed audio (voice) data can be converted into a format that can be transmitted to a mobile communication base station via the mobile communication module 112 and output.
  • The microphone 122 can implement various types of noise cancellation (or suppression) algorithms to cancel (or suppress) noise or interference generated in the course of receiving and transmitting audio signals.
  • the user input unit 130 may generate key input data according to a command input by the user to control various operations of the mobile terminal.
  • The user input unit 130 allows the user to input various types of information, and may include a keyboard, a dome switch, a touch pad (e.g., a touch-sensitive component that detects changes in resistance, pressure, capacitance, etc. due to contact), a scroll wheel, a rocker, and the like.
  • In particular, when the touch pad is overlaid on the display unit 151 in a layered manner, a touch screen can be formed.
  • The sensing unit 140 detects the current state of the mobile terminal 100 (e.g., the open or closed state of the mobile terminal 100), the location of the mobile terminal 100, and the presence or absence of user contact with the mobile terminal 100 (i.e., touch input), and generates commands or signals for controlling the operation of the mobile terminal 100.
  • For example, when the mobile terminal 100 is implemented as a slide-type mobile phone, the sensing unit 140 can sense whether the slide phone is open or closed.
  • the sensing unit 140 can detect whether the power supply unit 190 provides power or whether the interface unit 170 is coupled to an external device.
  • The sensing unit 140 may include a proximity sensor 141, which will be described below in connection with the touch screen.
  • the interface unit 170 serves as an interface through which at least one external device can connect with the mobile terminal 100.
  • the external device may include a wired or wireless headset port, an external power (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, and an audio input/output. (I/O) port, video I/O port, headphone port, and more.
  • The identification module may store various information for verifying the user of the mobile terminal 100, and may include a User Identification Module (UIM), a Subscriber Identity Module (SIM), a Universal Subscriber Identity Module (USIM), and the like.
  • the device having the identification module may take the form of a smart card, and thus the identification device may be connected to the mobile terminal 100 via a port or other connection device.
  • The interface unit 170 can be configured to receive input from an external device (e.g., data information, power, etc.) and transmit the received input to one or more components within the mobile terminal 100, or can be used to transfer data between the mobile terminal and an external device.
  • In addition, the interface unit 170 may serve as a path through which power is supplied from a cradle to the mobile terminal 100, or as a path through which various command signals input from the cradle are transmitted to the mobile terminal.
  • Various command signals or power input from the cradle can be used as signals for identifying whether the mobile terminal is accurately mounted on the cradle.
  • Output unit 150 is configured to provide an output signal (eg, an audio signal, a video signal, an alarm signal, a vibration signal, etc.) in a visual, audio, and/or tactile manner.
  • the output unit 150 may include a display unit 151, an audio output module 152, an alarm unit 153, and the like.
  • the display unit 151 can display information processed in the mobile terminal 100. For example, when the mobile terminal 100 is in a phone call mode, the display unit 151 can display a user interface (UI) or a graphical user interface (GUI) related to a call or other communication (eg, text messaging, multimedia file download, etc.). When the mobile terminal 100 is in a video call mode or an image capturing mode, the display unit 151 may display a captured image and/or a received image, a UI or GUI showing a video or image and related functions, and the like.
  • When overlaid with a touch pad to form a touch screen, the display unit 151 can function as both an input device and an output device.
  • The display unit 151 may include at least one of a Liquid Crystal Display (LCD), a Thin Film Transistor LCD (TFT-LCD), an Organic Light Emitting Diode (OLED) display, a flexible display, a three-dimensional (3D) display, and the like.
  • Some of these displays may be configured to be transparent to allow viewing from the outside; these may be referred to as transparent displays, and a typical transparent display may be, for example, a TOLED (Transparent Organic Light Emitting Diode) display.
  • the mobile terminal 100 may include two or more display units (or other display devices), for example, the mobile terminal may include an external display unit (not shown) and an internal display unit (not shown) .
  • the touch screen can be used to detect touch input pressure as well as touch input position and touch input area.
  • The audio output module 152 may convert audio data received by the wireless communication unit 110 or stored in the memory 160 into an audio signal and output it as sound when the mobile terminal is in a call signal receiving mode, a call mode, a recording mode, a voice recognition mode, a broadcast receiving mode, or the like.
  • the audio output module 152 can provide audio output (eg, call signal reception sound, message reception sound, etc.) associated with a particular function performed by the mobile terminal 100.
  • the audio output module 152 can include a speaker, a buzzer, and the like.
  • the alarm unit 153 can provide an output to notify the mobile terminal 100 of the occurrence of an event. Typical events may include call reception, message reception, key signal input, touch input, and the like. In addition to audio or video output, the alert unit 153 can provide an output in a different manner to notify of the occurrence of an event. For example, the alarm unit 153 can provide an output in the form of vibrations, and when a call, message, or some other incoming communication is received, the alarm unit 153 can provide a tactile output (ie, vibration) to notify the user of it. By providing such a tactile output, the user is able to recognize the occurrence of various events even when the user's mobile phone is in the user's pocket. The alarm unit 153 can also provide an output of the notification event occurrence via the display unit 151 or the audio output module 152.
  • the memory 160 may store a software program or the like for processing and control operations performed by the controller 180, or may temporarily store data (for example, a phone book, a message, a still image, a video, etc.) that has been output or is to be output. Moreover, the memory 160 can store data regarding vibrations and audio signals of various manners that are output when a touch is applied to the touch screen.
  • The memory 160 may include at least one type of storage medium, including a flash memory, a hard disk, a multimedia card, a card-type memory (e.g., SD or DX memory), a Random Access Memory (RAM), a Static Random Access Memory (SRAM), a Read Only Memory (ROM), an Electrically Erasable Programmable Read Only Memory (EEPROM), a Programmable Read Only Memory (PROM), a magnetic memory, a magnetic disk, an optical disk, and the like.
  • the mobile terminal 100 can cooperate with a network storage device that performs a storage function of the memory 160 through a network connection.
  • the controller 180 typically controls the overall operation of the mobile terminal. For example, the controller 180 performs the control and processing associated with voice calls, data communications, video calls, and the like. Additionally, the controller 180 can include a multimedia module 1810 for reproducing (or playing back) multimedia data, which can be constructed within the controller 180 or can be configured to be separate from the controller 180. The controller 180 may perform a pattern recognition process to recognize a handwriting input or a picture drawing input performed on the touch screen as a character or an image.
  • The power supply unit 190 receives external power or internal power under the control of the controller 180 and provides the appropriate power required to operate the various elements and components.
  • the various embodiments described herein can be implemented in a computer readable medium using, for example, computer software, hardware, or any combination thereof.
  • The embodiments described herein may be implemented using at least one of Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), processors, controllers, microcontrollers, microprocessors, and electronic units designed to perform the functions described herein; in some cases, such embodiments may be implemented in the controller 180.
  • implementations such as procedures or functions may be implemented with separate software modules that permit the execution of at least one function or operation.
  • The software code can be implemented by a software application (or program) written in any suitable programming language, and can be stored in the memory 160 and executed by the controller 180.
  • the mobile terminal has been described in terms of its function.
  • Among various types of mobile terminals, such as folding-type, bar-type, swing-type, and slide-type mobile terminals, a slide-type mobile terminal will be described as an example. However, the present invention can be applied to any type of mobile terminal and is not limited to the slide-type mobile terminal.
  • the mobile terminal 100 as shown in FIG. 1 may be configured to operate with communication systems that transmit data via frames or packets, such as wired and wireless communication systems and satellite-based communication systems.
  • Such communication systems may use different air interfaces and/or physical layers.
  • air interfaces used by communication systems include, for example, frequency division multiple access (FDMA), time division multiple access (TDMA), code division multiple access (CDMA), universal mobile telecommunications system (UMTS) (in particular, long term evolution (LTE)), global system for mobile communications (GSM), and so on.
  • the following description relates to a CDMA communication system, but such teachings are equally applicable to other types of systems.
  • a CDMA wireless communication system can include a plurality of mobile terminals 100, a plurality of base stations (BS) 270, a base station controller (BSC) 275, and a mobile switching center (MSC) 280.
  • the MSC 280 is configured to interface with a public switched telephone network (PSTN) 290.
  • the MSC 280 is also configured to interface with a BSC 275 that can be coupled to the base station 270 via a backhaul line.
  • the backhaul line can be constructed in accordance with any of a number of well known interfaces including, for example, E1/T1, ATM, IP, PPP, Frame Relay, HDSL, ADSL, or xDSL. It will be appreciated that the system as shown in FIG. 2 may include multiple BSCs 275.
  • Each BS 270 can serve one or more partitions (or regions), each of which is covered by a multi-directional antenna or an antenna directed to a particular direction radially away from the BS 270. Alternatively, each partition may be covered by two or more antennas for diversity reception. Each BS 270 can be configured to support multiple frequency allocations, and each frequency allocation has a particular frequency spectrum (eg, 1.25 MHz, 5 MHz, etc.).
  • BS 270 may also be referred to as a Base Transceiver Subsystem (BTS) or other equivalent terminology.
  • the term "base station” can be used to generally refer to a single BSC 275 and at least one BS 270.
  • a base station can also be referred to as a "cell site."
  • alternatively, the individual partitions of a particular BS 270 may be referred to as a plurality of cell sites.
  • a broadcast transmitter (BT) 295 transmits a broadcast signal to the mobile terminal 100 operating within the system.
  • a broadcast receiving module 111 as shown in FIG. 1 is provided at the mobile terminal 100 to receive a broadcast signal transmitted by the BT 295.
  • in FIG. 2, several global positioning system (GPS) satellites 300 are shown.
  • the satellite 300 helps locate at least one of the plurality of mobile terminals 100.
  • a plurality of satellites 300 are depicted, but it is understood that useful positioning information can be obtained using any number of satellites.
  • the GPS module 115 as shown in Figure 1 is typically configured to cooperate with the satellite 300 to obtain desired positioning information. Instead of GPS tracking technology or in addition to GPS tracking technology, other techniques that can track the location of the mobile terminal can be used. Additionally, at least one GPS satellite 300 can selectively or additionally process satellite DMB transmissions.
  • BS 270 receives reverse link signals from various mobile terminals 100.
  • Mobile terminal 100 typically participates in calls, messaging, and other types of communications.
  • Each reverse link signal received by a particular base station 270 is processed within a particular BS 270.
  • the obtained data is forwarded to the relevant BSC 275.
  • the BSC provides call resource allocation and mobility management functions, including coordination of soft handoff procedures between the BSs 270.
  • the BSC 275 also routes the received data to the MSC 280, which provides additional routing services for interfacing with the PSTN 290.
  • similarly, the PSTN 290 interfaces with the MSC 280, the MSC forms an interface with the BSC 275, and the BSC 275 accordingly controls the BS 270 to transmit forward link signals to the mobile terminal 100.
  • an embodiment of the present invention provides a method for self-timer taking a picture, which is applied to a terminal, including:
  • determining the faces of the group-photo participants based on face recognition comprises:
  • performing face recognition on the faces of the subjects and, when more than one face is recognized, taking the faces framed by the two face recognition frames with the largest area as the faces of the participants;
  • determining the positional relationship between the participants' faces includes:
  • determining whether the participants' faces are on the same focal plane comprises:
  • if the area ratio r of the face recognition frames corresponding to the two participants' faces is greater than the first threshold, it is determined that the two participants' faces are on the same focal plane; if the area ratio r is less than or equal to the first threshold, it is determined that the two participants' faces are not on the same focal plane;
  • the area ratio r of the face recognition frames corresponding to the two participants' faces is the area s1 of the first participant's face recognition frame divided by the area s2 of the second participant's face recognition frame, s1 being less than or equal to s2;
  • the first threshold is 0.8;
  • determining whether the participants' faces are close to each other comprises: calculating the closeness t of the two participants' faces;
  • if the closeness t of the two participants' faces is greater than the second threshold, it is determined that the participants are close to each other; if the closeness t is less than or equal to the second threshold, it is determined that the participants are not close to each other;
  • the closeness t of the two participants' faces can be calculated as t = (da + db) / s, where:
  • the center point of the first participant's face recognition frame is A1;
  • the center point of the second participant's face recognition frame is B1;
  • the intersection of the line connecting the two center points with the first participant's face recognition frame is A2, and the intersection of that line with the second participant's face recognition frame is B2;
  • the distance between A1 and B1 is s, the distance between A1 and A2 is da, and the distance between B1 and B2 is db;
  • the distance between two points can be measured in pixels;
  • the second threshold is 0.8;
  • determining that the positional relationship between the participants' faces is suitable for taking a group photo includes: determining that the participants' faces are on the same focal plane and close to each other;
  • triggering the camera to shoot includes: emitting a prompt tone and triggering the camera to shoot.
  • an embodiment of the present invention provides a device for taking a self-timer group photo, applied to a terminal, comprising:
  • the face detection module 501 is configured to determine the faces of the group-photo participants based on face recognition in self-timer group-photo mode;
  • the location calculation module 502 is configured to determine the positional relationship between the participants' faces;
  • the photographing module 503 is configured to trigger the camera to shoot when it is determined that the positional relationship between the participants' faces is suitable for taking a group photo.
  • the location calculation module 502 is configured to determine the positional relationship between the participants' faces in the following manner: determining whether the participants' faces are on the same focal plane and whether they are close to each other;
  • the shooting module 503 is configured to determine that the positional relationship between the participants' faces is suitable for taking a group photo in the following manner: determining that the participants' faces are on the same focal plane and close to each other;
  • the location calculation module 502 is configured to determine whether the participants' faces are on the same focal plane in the following manner:
  • if the area ratio r of the face recognition frames corresponding to the two participants' faces is greater than the first threshold, it is determined that the two participants' faces are on the same focal plane; if the area ratio r is less than or equal to the first threshold, it is determined that the two participants' faces are not on the same focal plane;
  • the area ratio r of the face recognition frames corresponding to the two participants' faces is the area s1 of the first participant's face recognition frame divided by the area s2 of the second participant's face recognition frame, s1 being less than or equal to s2.
  • the location calculation module 502 is configured to determine whether the participants' faces are close to each other in the following manner: calculating the closeness t of the two participants' faces;
  • if the closeness t of the two participants' faces is greater than the second threshold, it is determined that the participants are close to each other; if the closeness t is less than or equal to the second threshold, it is determined that the participants are not close to each other.
  • the location calculation module 502 is configured to calculate the closeness t of the two participants' faces as t = (da + db) / s, where:
  • the center point of the first participant's face recognition frame is A1;
  • the center point of the second participant's face recognition frame is B1;
  • the intersection of the line connecting the two center points with the first participant's face recognition frame is A2, and the intersection of that line with the second participant's face recognition frame is B2; the distance between A1 and B1 is s, the distance between A1 and A2 is da, and the distance between B1 and B2 is db.
  • the face detection module 501 is configured to determine the faces of the participants based on face recognition in the following manner:
  • face recognition is performed on the faces of the subjects, and when more than one face is recognized, the faces framed by the two face recognition frames with the largest area are taken as the faces of the participants.
  • the shooting module 503 is configured to trigger the camera to shoot in the following manner: a prompt sound is emitted, and the camera is triggered to shoot.
  • the embodiment of the present invention provides a method for self-timer taking a picture, which is applied to a terminal, including:
  • determining, based on face recognition, the number of portraits being photographed and the positional relationship between the portrait faces includes:
  • the facial grouping condition includes: at least two portraits are in the same row and the faces of the two portraits are close to each other;
  • determining the positional relationship between the portraits according to the positional relationship between the face recognition frames corresponding to the portraits includes: when the area ratio of two face recognition frames is greater than the first threshold, determining that the two corresponding portraits are in the same row, where, with the first threshold less than 1, the area ratio is the area of the smaller face recognition frame divided by the area of the larger face recognition frame;
  • determining the positional relationship between the portraits according to the positional relationship between the face recognition frames corresponding to the portraits also includes: judging the facial closeness of two portraits from the ratio of the size of the two face frames corresponding to the two portraits to the distance between the two face frames;
  • determining the facial closeness t of the two portraits according to the ratio of the size of the two face frames corresponding to the two portraits to the distance between the two face frames uses t = (da + db) / s, where:
  • the center point of the face recognition frame of the first portrait is A1;
  • the center point of the face recognition frame of the second portrait is B1;
  • the intersection of the line connecting the two center points with the face recognition frame of the first portrait is A2;
  • the intersection of the line connecting the two center points with the face recognition frame of the second portrait is B2;
  • the distance between A1 and B1 is s;
  • the distance between A1 and A2 is da;
  • the distance between B1 and B2 is db;
  • an embodiment of the present invention provides a device for self-timer taking a picture, which is applied to a terminal, and includes:
  • the face detection and processing module 701 is configured to determine, according to face recognition, the number of portraits being photographed and the positional relationship between the portrait faces in the self-photographing mode;
  • the group photographing module 702 is configured to trigger the camera shutter to take a portrait face photo when the number of portraits exceeds two and the positional relationship of at least two portraits conforms to the facial grouping condition.
  • the face detection and processing module 701 is configured to determine the number of portraits being photographed and the positional relationship between the portrait faces based on face recognition in the following manner:
  • the facial grouping condition includes: at least two portraits are in the same row and the faces of the two portraits are close to each other;
  • the face detection and processing module 701 is configured to determine a positional relationship between the portraits according to a positional relationship between the face recognition frames corresponding to the portraits in the following manner:
  • when the area ratio of the two face recognition frames is greater than the first threshold, it is determined that the two corresponding subjects are in the same row; wherein, when the first threshold is less than 1, the area ratio is the area of the smaller face recognition frame divided by the area of the larger face recognition frame;
  • the face detection and processing module 701 is configured to determine a positional relationship between the portraits according to a positional relationship between the face recognition frames corresponding to the portraits in the following manner:
  • the facial closeness t of the two portraits is determined according to the ratio of the size of the two face frames corresponding to the two portraits to the distance between the two face frames as t = (da + db) / s, where:
  • the center point of the face recognition frame of the first portrait is A1;
  • the center point of the face recognition frame of the second portrait is B1;
  • the intersection of the line connecting the two center points with the face recognition frame of the first portrait is A2;
  • the intersection of the line connecting the two center points with the face recognition frame of the second portrait is B2;
  • the distance between A1 and B1 is s;
  • the distance between A1 and A2 is da;
  • the distance between B1 and B2 is db.
  • a method for taking a self-timer group photo specifically includes the following steps:
  • Step S801 the user selects the self-timer group-photo mode
  • Step S802 the mobile phone identifies the number of faces
  • Step S803 determining whether the number of faces exceeds one, if yes, proceeding to step S804, otherwise returning to step S802;
  • Step S804 selecting the two face frames with the largest area as the face frames corresponding to the participants' faces;
  • Step S805 calculating the area ratio r of the two selected face frames, where r = s1/s2
  • s1 is the area of the first face frame
  • s2 is the area of the second face frame
  • s1 is less than or equal to s2;
  • Step S806 determining whether r is greater than the first threshold, if yes, proceeding to step S807, otherwise returning to step S802;
  • the first threshold is 0.8;
  • Step S807 calculating the proximity of the two faces
  • when photographing from far away, the absolute distance between the two faces in the image is small, whereas when photographing at close range, the absolute distance between the two faces in the image is large.
  • to judge the closeness of two faces accurately at different shooting distances, the size of the faces needs to be included in the calculation; therefore, the ratio of the size of the face frames to the distance between the faces is used to determine face closeness.
  • the center point of the first face frame is A1
  • the center point of the second face frame is B1
  • the intersection of the line connecting the center points of the two face frames with the first face frame is A2
  • the intersection of the line connecting the center points of the two face frames with the second face frame is B2
  • the distance between the A1 point and the B1 point is s
  • the distance between the A1 point and the A2 point is da
  • the distance between the B1 point and the B2 point is db
  • in one example, da is 188 pixels, db is 224 pixels, and s is 896 pixels
  • in another example, da is 336 pixels, db is 320 pixels, and s is 640 pixels
  • Step S808 it is determined whether the closeness t between the two faces is greater than the second threshold; if yes, step S809 is performed;
  • the second threshold is 0.8
  • in the first case, the closeness t between the two faces is 45.9%, which is far below 80%, indicating that the two faces are not close enough and do not meet the condition for taking a group photo;
  • in the second case, the closeness t between the two faces is 102.5%, which is greater than 80%, indicating that the two faces are close enough and meet the condition for taking a group photo.
  • Step S809 issuing a prompt tone, triggering the camera to take a group photo
  • an embodiment of the present invention further provides a computer-readable storage medium storing computer-executable instructions which, when executed by a processor, implement the above method.
  • the method and device for taking a self-timer group photo provided by the above embodiments use face recognition to determine the positional relationship between the participants' faces.
  • when the participants' faces are close to each other and on the same focal plane, the terminal triggers the camera to take the photo automatically.
  • this way of taking a self-timer group photo is easy to operate and improves the user experience and photographic quality.
  • each module/unit in the above embodiments may be implemented in the form of hardware, for example, by an integrated circuit implementing its corresponding function, or may be implemented in the form of a software function module, for example, by a processor executing program instructions stored in the memory to achieve the corresponding function. This application is not limited to any specific combination of hardware and software.
  • the positional relationship between the participants' faces is determined using face recognition.
  • when the positional relationship is suitable, the terminal triggers the camera to take the picture automatically.
  • this way of taking a self-timer group photo is easy to operate and improves the user experience and quality of photography.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Telephone Function (AREA)
  • Studio Devices (AREA)

Abstract

Disclosed herein is a method for taking a self-timer group photo, comprising: in self-timer group-photo mode, determining the faces of the group-photo participants based on face recognition; determining the positional relationship between the participants' faces; and, when it is determined that the positional relationship between the participants' faces is suitable for taking a group photo, triggering the camera to shoot.

Description

Method and device for taking a self-timer group photo — Technical field
This document relates to, but is not limited to, the field of terminal technology, and in particular to a method and device for taking a self-timer group photo.
Background
When two people (for example, a couple) want to take a group photo of themselves, they usually have to ask a third party to take the picture, rely on the camera's timer function, or use the front camera. All of these approaches have drawbacks: people often do not want to trouble others, operating the timer takes relatively long, and when using the front camera the user has to hold the phone with one hand while also tapping the shutter button, which is inconvenient and easily makes the picture shake.
How to provide a convenient solution for taking self-timer group photos is therefore a technical problem that needs to be solved.
Summary of the invention
The following is an overview of the subject matter described in detail herein. This overview is not intended to limit the scope of the claims.
This document provides a method and device for taking a self-timer group photo, which can improve the convenience of taking such photos for the user and the quality of the resulting picture.
An embodiment of the present invention provides a method for taking a self-timer group photo, applied to a terminal, the method comprising the steps of:
in self-timer group-photo mode, determining the faces of the group-photo participants based on face recognition;
determining the positional relationship between the participants' faces;
when it is determined that the positional relationship between the participants' faces is suitable for taking a group photo, triggering the camera to shoot.
Optionally, determining the positional relationship between the participants' faces comprises:
determining whether the participants' faces are on the same focal plane and whether they are close to each other;
and determining that the positional relationship between the participants' faces is suitable for taking a group photo comprises:
determining that the participants' faces are on the same focal plane and close to each other.
Optionally, determining whether the participants' faces are on the same focal plane comprises:
if the area ratio r of the face recognition frames corresponding to the faces of the two participants is greater than a first threshold, determining that the two participants' faces are on the same focal plane; if the area ratio r is less than or equal to the first threshold, determining that the two participants' faces are not on the same focal plane;
wherein the area ratio r of the face recognition frames corresponding to the two participants' faces is the area s1 of the first participant's face recognition frame divided by the area s2 of the second participant's face recognition frame, s1 being less than or equal to s2.
Optionally, determining whether the participants' faces are close to each other comprises:
calculating the closeness t of the two participants' faces;
if the closeness t of the two participants' faces is greater than a second threshold, determining that the participants are close to each other; if the closeness t is less than or equal to the second threshold, determining that the participants are not close to each other;
wherein the closeness t of the two participants' faces is calculated as follows:
t = (da + db) / s
wherein the center point of the first participant's face recognition frame is A1, the center point of the second participant's face recognition frame is B1, the intersection of the line connecting the two center points with the first participant's face recognition frame is A2, the intersection of that line with the second participant's face recognition frame is B2, the distance between A1 and B1 is s, the distance between A1 and A2 is da, and the distance between B1 and B2 is db.
Optionally, determining the faces of the participants based on face recognition comprises:
performing face recognition on the faces of the subjects and, when more than one face is recognized, taking the faces framed by the two face recognition frames with the largest area as the faces of the participants.
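The closeness measure just defined can be illustrated with a short sketch. This is an illustrative reconstruction, not code from the patent: face recognition frames are assumed to be axis-aligned rectangles given as (left, top, width, height) in pixels, and the helper names are hypothetical.

```python
import math

def closeness(box1, box2):
    """Closeness t = (da + db) / s of two axis-aligned face frames.

    A1, B1 are the frame centers; A2 (resp. B2) is where the segment
    A1-B1 crosses the boundary of box1 (resp. box2); s = |A1B1|,
    da = |A1A2|, db = |B1B2|.
    """
    def center(b):
        x, y, w, h = b
        return (x + w / 2.0, y + h / 2.0)

    def edge_distance(b, ux, uy):
        # distance from the frame center to its boundary along unit direction (ux, uy)
        _, _, w, h = b
        hits = []
        if ux:
            hits.append((w / 2.0) / abs(ux))
        if uy:
            hits.append((h / 2.0) / abs(uy))
        return min(hits)

    ax, ay = center(box1)
    bx, by = center(box2)
    s = math.hypot(bx - ax, by - ay)          # distance A1-B1
    ux, uy = (bx - ax) / s, (by - ay) / s     # unit vector from A1 toward B1
    da = edge_distance(box1, ux, uy)
    db = edge_distance(box2, ux, uy)
    return (da + db) / s
```

With this definition, two equal side-by-side frames that touch give t = 1.0, above the 0.8 threshold, while the same frames separated by a full frame width give t = 0.5, below it.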
An embodiment of the present invention further provides a device for taking a self-timer group photo, applied to a terminal, comprising:
a face detection module, configured to determine the faces of the group-photo participants based on face recognition in self-timer group-photo mode;
a position calculation module, configured to determine the positional relationship between the participants' faces;
a shooting module, configured to trigger the camera to shoot when it is determined that the positional relationship between the participants' faces is suitable for taking a group photo.
Optionally, the position calculation module is configured to determine the positional relationship between the participants' faces by determining whether the participants' faces are on the same focal plane and whether they are close to each other;
and the shooting module is configured to determine that the positional relationship between the participants' faces is suitable for taking a group photo by determining that the participants' faces are on the same focal plane and close to each other.
Optionally, the position calculation module is configured to determine whether the participants' faces are on the same focal plane in the following manner:
if the area ratio r of the face recognition frames corresponding to the faces of the two participants is greater than the first threshold, determining that the two participants' faces are on the same focal plane; if the area ratio r is less than or equal to the first threshold, determining that the two participants' faces are not on the same focal plane;
wherein the area ratio r of the face recognition frames corresponding to the two participants' faces is the area s1 of the first participant's face recognition frame divided by the area s2 of the second participant's face recognition frame, s1 being less than or equal to s2.
Optionally, the position calculation module is configured to determine whether the participants' faces are close to each other in the following manner:
calculating the closeness t of the two participants' faces;
if the closeness t of the two participants' faces is greater than the second threshold, determining that the participants are close to each other; if the closeness t is less than or equal to the second threshold, determining that the participants are not close to each other;
wherein the closeness t of the two participants' faces is calculated as follows:
t = (da + db) / s
wherein the center point of the first participant's face recognition frame is A1, the center point of the second participant's face recognition frame is B1, the intersection of the line connecting the two center points with the first participant's face recognition frame is A2, the intersection of that line with the second participant's face recognition frame is B2, the distance between A1 and B1 is s, the distance between A1 and A2 is da, and the distance between B1 and B2 is db.
Optionally, the face detection module is configured to determine the faces of the participants based on face recognition in the following manner:
performing face recognition on the faces of the subjects and, when more than one face is recognized, taking the faces framed by the two face recognition frames with the largest area as the faces of the participants.
An embodiment of the present invention provides a method for taking a self-timer group photo, applied to a terminal, comprising:
in self-timer group-photo mode, determining, based on face recognition, the number of portraits being photographed and the positional relationship between the portrait faces;
when it is determined that the number of portraits exceeds two and the positional relationship of at least two portraits meets the facial group-photo condition, triggering the camera shutter to take a group photo of the portrait faces.
Optionally, determining, based on face recognition, the number of portraits being photographed and the positional relationship between the portrait faces comprises:
performing face recognition on the faces of the subjects and determining the number of portraits from the number of recognized faces;
determining the positional relationship between the portraits from the positional relationship between the face recognition frames corresponding to the portraits.
Optionally, the facial group-photo condition includes: at least two portraits being in the same row with the faces of the two portraits close to each other.
Optionally, determining the positional relationship between the portraits from the positional relationship between the face recognition frames corresponding to the portraits comprises:
when the area ratio of two face recognition frames is greater than the first threshold, determining that the subjects corresponding to the two face recognition frames are in the same row; wherein, when the first threshold is less than 1, the area ratio is the area of the smaller face recognition frame divided by the area of the larger face recognition frame.
Optionally, determining the positional relationship between the portraits from the positional relationship between the face recognition frames corresponding to the portraits comprises:
judging the closeness of the two portraits' faces from the ratio of the size of the two face frames corresponding to the two portraits to the distance between the two face frames;
wherein judging the facial closeness t of the two portraits from that ratio comprises:
t = (da + db) / s
wherein the center point of the first portrait's face recognition frame is A1, the center point of the second portrait's face recognition frame is B1, the intersection of the line connecting the two center points with the first portrait's face recognition frame is A2, the intersection of that line with the second portrait's face recognition frame is B2, the distance between A1 and B1 is s, the distance between A1 and A2 is da, and the distance between B1 and B2 is db.
An embodiment of the present invention provides a device for taking a self-timer group photo, applied to a terminal, comprising:
a face detection and processing module, configured to determine, in self-timer group-photo mode and based on face recognition, the number of portraits being photographed and the positional relationship between the portrait faces;
a group-photo shooting module, configured to trigger the camera shutter to take a group photo of the portrait faces when it is determined that the number of portraits exceeds two and the positional relationship of at least two portraits meets the facial group-photo condition.
Optionally, the face detection and processing module is configured to determine, based on face recognition, the number of portraits being photographed and the positional relationship between the portrait faces in the following manner:
performing face recognition on the faces of the subjects and determining the number of portraits from the number of recognized faces;
determining the positional relationship between the portraits from the positional relationship between the face recognition frames corresponding to the portraits.
Optionally, the facial group-photo condition includes: at least two portraits being in the same row with the faces of the two portraits close to each other.
Optionally, the face detection and processing module is configured to determine the positional relationship between the portraits from the positional relationship between the face recognition frames corresponding to the portraits in the following manner:
when the area ratio of two face recognition frames is greater than the first threshold, determining that the subjects corresponding to the two face recognition frames are in the same row; wherein, when the first threshold is less than 1, the area ratio is the area of the smaller face recognition frame divided by the area of the larger face recognition frame.
Optionally, the face detection and processing module is configured to determine the positional relationship between the portraits from the positional relationship between the face recognition frames corresponding to the portraits in the following manner:
judging the facial closeness t of the two portraits from the ratio of the size of the two face frames corresponding to the two portraits to the distance between the two face frames:
t = (da + db) / s
wherein the center point of the first portrait's face recognition frame is A1, the center point of the second portrait's face recognition frame is B1, the intersection of the line connecting the two center points with the first portrait's face recognition frame is A2, the intersection of that line with the second portrait's face recognition frame is B2, the distance between A1 and B1 is s, the distance between A1 and A2 is da, and the distance between B1 and B2 is db.
An embodiment of the present invention further provides a computer-readable storage medium storing computer-executable instructions which, when executed by a processor, implement the above method.
Compared with the related art, the method and device for taking a self-timer group photo provided by the embodiments of the present invention use face recognition to determine the positional relationship between the participants' faces; when the participants' faces are close to each other and on the same focal plane, the terminal triggers the camera to take the photo automatically. This way of taking a self-timer group photo is easy to operate and can improve the user experience and photographic quality.
Other aspects will become apparent upon reading and understanding the drawings and detailed description.
Brief description of the drawings
FIG. 1 is a schematic diagram of the hardware structure of a mobile terminal implementing embodiments of the present invention;
FIG. 2 is a schematic diagram of a communication system supporting communication between mobile terminals of the present invention;
FIG. 3 is a flowchart of a method for taking a self-timer group photo according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of the face-closeness calculation in formula (1-1);
FIG. 5 is a schematic diagram of a device for taking a self-timer group photo according to an embodiment of the present invention;
FIG. 6 is a flowchart of another method for taking a self-timer group photo according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of another device for taking a self-timer group photo according to an embodiment of the present invention;
FIG. 8 is a flowchart of a method for taking a self-timer group photo according to a specific example of the present invention;
FIG. 9 is the first photo in the specific example of the present invention;
FIG. 10 is the second photo in the specific example of the present invention.
Embodiments of the present invention
Embodiments of the present invention will be described in detail below with reference to the accompanying drawings. It should be noted that, provided there is no conflict, the embodiments in this application and the features in the embodiments may be combined with one another in any way.
Mobile terminals implementing the embodiments of the present invention will now be described with reference to the drawings. In the following description, suffixes such as "module", "part", or "unit" used to denote elements are used only to facilitate the description of the present invention and have no specific meaning in themselves; "module" and "part" may therefore be used interchangeably.
Mobile terminals may be implemented in various forms. For example, the terminals described in the present invention may include mobile terminals such as mobile phones, smartphones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), and navigation devices, as well as fixed terminals such as digital TVs and desktop computers. In the following it is assumed that the terminal is a mobile terminal; however, those skilled in the art will understand that, apart from elements used specifically for mobile purposes, the construction according to the embodiments of the present invention can also be applied to fixed-type terminals.
FIG. 1 is a schematic diagram of the hardware structure of a mobile terminal implementing embodiments of the present invention.
The mobile terminal 100 may include a wireless communication unit 110, an A/V (audio/video) input unit 120, a user input unit 130, a sensing unit 140, an output unit 150, a memory 160, an interface unit 170, a controller 180, a power supply unit 190, and so on. FIG. 1 shows a mobile terminal with various components, but it should be understood that not all of the illustrated components are required; more or fewer components may be implemented instead. The elements of the mobile terminal are described in detail below.
The wireless communication unit 110 typically includes one or more components that allow radio communication between the mobile terminal 100 and a wireless communication system or network. For example, the wireless communication unit may include at least one of a broadcast receiving module 111, a mobile communication module 112, a wireless internet module 113, a short-range communication module 114, and a location information module 115.
The broadcast receiving module 111 receives broadcast signals and/or broadcast-related information from an external broadcast management server via a broadcast channel. The broadcast channel may include a satellite channel and/or a terrestrial channel. The broadcast management server may be a server that generates and transmits broadcast signals and/or broadcast-related information, or a server that receives previously generated broadcast signals and/or broadcast-related information and transmits them to the terminal. The broadcast signal may include a TV broadcast signal, a radio broadcast signal, a data broadcast signal, and so on, and may further include a broadcast signal combined with a TV or radio broadcast signal. Broadcast-related information may also be provided via a mobile communication network, in which case it may be received by the mobile communication module 112. The broadcast signal may exist in various forms, for example as an electronic program guide (EPG) of digital multimedia broadcasting (DMB) or an electronic service guide (ESG) of digital video broadcasting-handheld (DVB-H). The broadcast receiving module 111 may receive signal broadcasts using various types of broadcast systems; in particular, it may receive digital broadcasts using digital broadcast systems such as digital multimedia broadcasting-terrestrial (DMB-T), digital multimedia broadcasting-satellite (DMB-S), digital video broadcasting-handheld (DVB-H), the MediaFLO® data broadcast system, and integrated services digital broadcasting-terrestrial (ISDB-T). The broadcast receiving module 111 may be constructed to suit various broadcast systems providing broadcast signals as well as the above digital broadcast systems. Broadcast signals and/or broadcast-related information received via the broadcast receiving module 111 may be stored in the memory 160 (or another type of storage medium).
The mobile communication module 112 transmits radio signals to and/or receives radio signals from at least one of a base station (e.g., an access point, a node B, etc.), an external terminal, and a server. Such radio signals may include voice call signals, video call signals, or various types of data transmitted and/or received according to text and/or multimedia messages.
The wireless internet module 113 supports wireless internet access for the mobile terminal and may be coupled to the terminal internally or externally. The wireless internet access technologies involved may include WLAN (wireless LAN) (Wi-Fi), Wibro (wireless broadband), Wimax (worldwide interoperability for microwave access), HSDPA (high speed downlink packet access), and so on.
The short-range communication module 114 is a module for supporting short-range communication. Some examples of short-range communication technologies include Bluetooth™, radio frequency identification (RFID), infrared data association (IrDA), ultra wideband (UWB), ZigBee™, and so on.
The location information module 115 is a module for checking or acquiring the location information of the mobile terminal. A typical example of the location information module is a GPS (global positioning system). According to current technology, the GPS module 115 calculates distance information from three or more satellites and accurate time information, and applies triangulation to the calculated information to accurately calculate three-dimensional current position information according to longitude, latitude, and altitude. Currently, the method for calculating position and time information uses three satellites and corrects the error of the calculated position and time information by using a further satellite. In addition, the GPS module 115 can calculate speed information by continuously calculating current position information in real time.
The A/V input unit 120 is used to receive audio or video signals and may include a camera 121 and a microphone 122. The camera 121 processes image data of still pictures or video obtained by an image capture device in a video capture mode or an image capture mode; the processed image frames may be displayed on the display unit 151, stored in the memory 160 (or other storage medium), or transmitted via the wireless communication unit 110, and two or more cameras 121 may be provided depending on the construction of the mobile terminal. The microphone 122 can receive sounds (audio data) in operating modes such as a phone call mode, a recording mode, or a voice recognition mode, and can process such sounds into audio data. In the phone call mode, the processed audio (voice) data may be converted for output into a format transmittable to a mobile communication base station via the mobile communication module 112. The microphone 122 may implement various types of noise cancellation (or suppression) algorithms to cancel (or suppress) noise or interference generated while receiving and transmitting audio signals.
The user input unit 130 may generate key input data according to commands input by the user to control various operations of the mobile terminal. The user input unit 130 allows the user to input various types of information and may include a keyboard, a dome switch, a touch pad (e.g., a touch-sensitive component that detects changes in resistance, pressure, capacitance, and so on caused by contact), a jog wheel, a jog stick, and so on. In particular, when the touch pad is superposed on the display unit 151 in the form of a layer, a touch screen may be formed.
The sensing unit 140 detects the current state of the mobile terminal 100 (e.g., its open or closed state), the position of the mobile terminal 100, the presence or absence of user contact with the mobile terminal 100 (i.e., touch input), the orientation of the mobile terminal 100, the acceleration or deceleration and direction of movement of the mobile terminal 100, and so on, and generates commands or signals for controlling the operation of the mobile terminal 100. For example, when the mobile terminal 100 is implemented as a slide-type mobile phone, the sensing unit 140 may sense whether the slide-type phone is open or closed. In addition, the sensing unit 140 can detect whether the power supply unit 190 supplies power or whether the interface unit 170 is coupled with an external device. The sensing unit 140 may include a proximity sensor 1410, which will be described below in connection with the touch screen.
The interface unit 170 serves as an interface through which at least one external device can be connected to the mobile terminal 100. For example, the external devices may include wired or wireless headset ports, external power supply (or battery charger) ports, wired or wireless data ports, memory card ports, ports for connecting devices having an identification module, audio input/output (I/O) ports, video I/O ports, earphone ports, and so on. The identification module may store various information for verifying the user's use of the mobile terminal 100 and may include a user identification module (UIM), a subscriber identification module (SIM), a universal subscriber identification module (USIM), and so on. In addition, a device having an identification module (hereinafter referred to as an "identification device") may take the form of a smart card; thus, the identification device can be connected to the mobile terminal 100 via a port or other connection means. The interface unit 170 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the mobile terminal 100, or may be used to transmit data between the mobile terminal and the external device.
In addition, when the mobile terminal 100 is connected to an external cradle, the interface unit 170 may serve as a path through which power is supplied from the cradle to the mobile terminal 100, or as a path through which various command signals input from the cradle are transmitted to the mobile terminal. Various command signals or power input from the cradle may serve as signals for recognizing whether the mobile terminal is accurately mounted on the cradle. The output unit 150 is constructed to provide output signals (e.g., audio signals, video signals, alarm signals, vibration signals, etc.) in a visual, audio, and/or tactile manner, and may include a display unit 151, an audio output module 152, an alarm unit 153, and so on.
The display unit 151 may display information processed in the mobile terminal 100. For example, when the mobile terminal 100 is in the phone call mode, the display unit 151 may display a user interface (UI) or graphical user interface (GUI) related to a call or other communication (e.g., text messaging, multimedia file downloading, etc.). When the mobile terminal 100 is in the video call mode or the image capture mode, the display unit 151 may display captured and/or received images, a UI or GUI showing video or images and related functions, and so on.
Meanwhile, when the display unit 151 and the touch pad are superposed on each other in the form of layers to form a touch screen, the display unit 151 may serve as both an input device and an output device. The display unit 151 may include at least one of a liquid crystal display (LCD), a thin film transistor LCD (TFT-LCD), an organic light-emitting diode (OLED) display, a flexible display, a three-dimensional (3D) display, and so on. Some of these displays may be constructed to be transparent to allow viewing from the outside; these may be called transparent displays, a typical example being a TOLED (transparent organic light-emitting diode) display. Depending on the particular desired implementation, the mobile terminal 100 may include two or more display units (or other display devices); for example, the mobile terminal may include an external display unit (not shown) and an internal display unit (not shown). The touch screen may be used to detect touch input pressure as well as touch input position and touch input area.
The audio output module 152 may, when the mobile terminal is in a call signal receiving mode, a call mode, a recording mode, a voice recognition mode, a broadcast receiving mode, or the like, convert audio data received by the wireless communication unit 110 or stored in the memory 160 into audio signals and output them as sound. Moreover, the audio output module 152 may provide audio output related to a specific function performed by the mobile terminal 100 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output module 152 may include a speaker, a buzzer, and so on.
The alarm unit 153 may provide output to notify the occurrence of an event to the mobile terminal 100. Typical events may include call reception, message reception, key signal input, touch input, and so on. In addition to audio or video output, the alarm unit 153 may provide output in different ways to notify the occurrence of an event. For example, the alarm unit 153 may provide output in the form of vibration; when a call, a message, or some other incoming communication is received, the alarm unit 153 may provide a tactile output (i.e., vibration) to notify the user. By providing such tactile output, the user can recognize the occurrence of various events even when the user's mobile phone is in the user's pocket. The alarm unit 153 may also provide output notifying the occurrence of an event via the display unit 151 or the audio output module 152.
The memory 160 may store software programs of the processing and control operations performed by the controller 180 and the like, or may temporarily store data that has been or will be output (e.g., a phonebook, messages, still images, video, etc.). Moreover, the memory 160 may store data on the vibrations and audio signals of various forms output when a touch is applied to the touch screen.
The memory 160 may include at least one type of storage medium, including flash memory, hard disk, multimedia card, card-type memory (e.g., SD or DX memory, etc.), random access memory (RAM), static random access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, magnetic disk, optical disk, and so on. Moreover, the mobile terminal 100 may cooperate with a network storage device that performs the storage function of the memory 160 over a network connection.
The controller 180 typically controls the overall operation of the mobile terminal. For example, the controller 180 performs the control and processing related to voice calls, data communication, video calls, and so on. In addition, the controller 180 may include a multimedia module 1810 for reproducing (or playing back) multimedia data; the multimedia module 1810 may be constructed within the controller 180 or may be constructed separately from the controller 180. The controller 180 may perform pattern recognition processing to recognize handwriting input or picture-drawing input performed on the touch screen as characters or images.
The power supply unit 190 receives external or internal power under the control of the controller 180 and provides the appropriate power required to operate the various elements and components.
The various embodiments described herein may be implemented in a computer-readable medium using, for example, computer software, hardware, or any combination thereof. For hardware implementation, the embodiments described herein may be implemented using at least one of application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), processors, controllers, microcontrollers, microprocessors, and electronic units designed to perform the functions described herein; in some cases, such embodiments may be implemented in the controller 180. For software implementation, embodiments such as procedures or functions may be implemented with separate software modules that allow at least one function or operation to be performed. The software code may be implemented by a software application (or program) written in any suitable programming language; the software code may be stored in the memory 160 and executed by the controller 180.
So far, the mobile terminal has been described in terms of its functions. In the following, for brevity, a slide-type mobile terminal among various types of mobile terminals such as folding, bar, swing, and slide types will be described as an example; the present invention can, however, be applied to any type of mobile terminal and is not limited to the slide type.
The mobile terminal 100 as shown in FIG. 1 may be constructed to operate with communication systems that transmit data via frames or packets, such as wired and wireless communication systems and satellite-based communication systems.
A communication system in which a mobile terminal according to the present invention can operate will now be described with reference to FIG. 2.
Such communication systems may use different air interfaces and/or physical layers. For example, the air interfaces used by communication systems include frequency division multiple access (FDMA), time division multiple access (TDMA), code division multiple access (CDMA), universal mobile telecommunications system (UMTS) (in particular, long term evolution (LTE)), global system for mobile communications (GSM), and so on. As a non-limiting example, the following description relates to a CDMA communication system, but such teachings apply equally to other types of systems.
Referring to FIG. 2, a CDMA wireless communication system may include a plurality of mobile terminals 100, a plurality of base stations (BS) 270, base station controllers (BSC) 275, and a mobile switching center (MSC) 280. The MSC 280 is constructed to form an interface with a public switched telephone network (PSTN) 290, and also to form an interface with the BSCs 275, which may be coupled to the base stations 270 via backhaul lines. The backhaul lines may be constructed according to any of several known interfaces including, for example, E1/T1, ATM, IP, PPP, frame relay, HDSL, ADSL, or xDSL. It will be understood that the system as shown in FIG. 2 may include a plurality of BSCs 275.
Each BS 270 may serve one or more partitions (or areas), each partition covered by a multi-directional antenna or an antenna directed in a particular direction radially away from the BS 270. Alternatively, each partition may be covered by two or more antennas for diversity reception. Each BS 270 may be constructed to support a plurality of frequency assignments, each frequency assignment having a particular spectrum (e.g., 1.25 MHz, 5 MHz, etc.).
The intersection of a partition and a frequency assignment may be called a CDMA channel. A BS 270 may also be called a base transceiver subsystem (BTS) or another equivalent term. In such cases, the term "base station" may be used to refer collectively to a single BSC 275 and at least one BS 270. A base station may also be called a "cell site"; alternatively, the individual partitions of a particular BS 270 may be called a plurality of cell sites.
As shown in FIG. 2, a broadcast transmitter (BT) 295 transmits broadcast signals to the mobile terminals 100 operating within the system. The broadcast receiving module 111 as shown in FIG. 1 is provided at the mobile terminal 100 to receive the broadcast signals transmitted by the BT 295. In FIG. 2, several global positioning system (GPS) satellites 300 are shown; the satellites 300 help locate at least one of the plurality of mobile terminals 100.
In FIG. 2, a plurality of satellites 300 are depicted, but it will be understood that useful positioning information may be obtained with any number of satellites. The GPS module 115 as shown in FIG. 1 is typically constructed to cooperate with the satellites 300 to obtain the desired positioning information. Instead of, or in addition to, GPS tracking technology, other technologies capable of tracking the location of the mobile terminal may be used. In addition, at least one GPS satellite 300 may selectively or additionally handle satellite DMB transmission.
As a typical operation of the wireless communication system, the BSs 270 receive reverse link signals from various mobile terminals 100. The mobile terminals 100 typically engage in calls, messaging, and other types of communication. Each reverse link signal received by a particular base station 270 is processed within that BS 270, and the resulting data is forwarded to the associated BSC 275. The BSC provides call resource allocation and mobility management functions, including coordination of soft handoff procedures between the BSs 270. The BSC 275 also routes the received data to the MSC 280, which provides additional routing services for forming an interface with the PSTN 290. Similarly, the PSTN 290 forms an interface with the MSC 280, the MSC forms an interface with the BSCs 275, and the BSCs 275 accordingly control the BSs 270 to transmit forward link signals to the mobile terminals 100.
Based on the above mobile terminal hardware structure and communication system, the embodiments herein are proposed.
As shown in FIG. 3, an embodiment of the present invention proposes a method for taking a self-timer group photo, applied to a terminal, comprising:
S301: in self-timer group-photo mode, determining the faces of the group-photo participants based on face recognition;
wherein determining the faces of the participants based on face recognition comprises:
performing face recognition on the faces of the subjects and, when more than one face is recognized, taking the faces framed by the two face recognition frames with the largest area as the faces of the participants;
S302: determining the positional relationship between the participants' faces;
wherein determining the positional relationship between the participants' faces comprises:
determining whether the participants' faces are on the same focal plane and whether they are close to each other;
wherein determining whether the participants' faces are on the same focal plane comprises:
if the area ratio r of the face recognition frames corresponding to the faces of the two participants is greater than the first threshold, determining that the two participants' faces are on the same focal plane; if the area ratio r is less than or equal to the first threshold, determining that the two participants' faces are not on the same focal plane;
wherein the area ratio r of the face recognition frames corresponding to the two participants' faces is the area s1 of the first participant's face recognition frame divided by the area s2 of the second participant's face recognition frame, s1 being less than or equal to s2;
wherein the first threshold is 0.8;
wherein determining whether the participants' faces are close to each other comprises:
calculating the closeness t of the two participants' faces;
if the closeness t of the two participants' faces is greater than the second threshold, determining that the participants are close to each other; if the closeness t is less than or equal to the second threshold, determining that the participants are not close to each other;
wherein the closeness t of the two participants' faces may be calculated as follows:
t = (da + db) / s        (1-1)
wherein, as shown in FIG. 4, the center point of the first participant's face recognition frame is A1, the center point of the second participant's face recognition frame is B1, the intersection of the line connecting the two center points with the first participant's face recognition frame is A2, the intersection of that line with the second participant's face recognition frame is B2, the distance between A1 and B1 is s, the distance between A1 and A2 is da, and the distance between B1 and B2 is db;
wherein the distance between two points may be measured in pixels;
wherein the second threshold is 0.8;
S303: when it is determined that the positional relationship between the participants' faces is suitable for taking a group photo, triggering the camera to shoot.
wherein determining that the positional relationship between the participants' faces is suitable for taking a group photo comprises:
determining that the participants' faces are on the same focal plane and close to each other;
wherein triggering the camera to shoot comprises: emitting a prompt tone and triggering the camera to shoot.
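Steps S301 to S303 can be summarized as a small decision routine. This is a sketch under stated assumptions, not the patent's implementation: `faces` is assumed to be a list of face recognition frames given as (left, top, width, height) tuples, `closeness` is any function returning t for two frames (for example t = (da + db) / s), and both thresholds are the 0.8 values given above; the helper name `should_shoot` is hypothetical.

```python
FIRST_THRESHOLD = 0.8   # area-ratio threshold: same focal plane
SECOND_THRESHOLD = 0.8  # closeness threshold: faces close enough

def should_shoot(faces, closeness):
    """Decide whether to trigger the camera.

    faces: list of face recognition frames (left, top, width, height);
    closeness: function returning the closeness t for two frames.
    """
    if len(faces) < 2:
        return False
    # take the faces framed by the two largest recognition frames
    a, b = sorted(faces, key=lambda f: f[2] * f[3], reverse=True)[:2]
    s1, s2 = sorted((a[2] * a[3], b[2] * b[3]))  # s1 <= s2
    if s1 / s2 <= FIRST_THRESHOLD:               # area ratio r
        return False                             # not on the same focal plane
    return closeness(a, b) > SECOND_THRESHOLD    # close enough to shoot?
```

A True result corresponds to the point where the terminal would emit the prompt tone and fire the shutter.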
As shown in FIG. 5, an embodiment of the present invention provides a device for taking a self-timer group photo, applied to a terminal, comprising:
a face detection module 501, configured to determine the faces of the group-photo participants based on face recognition in self-timer group-photo mode;
a position calculation module 502, configured to determine the positional relationship between the participants' faces;
a shooting module 503, configured to trigger the camera to shoot when it is determined that the positional relationship between the participants' faces is suitable for taking a group photo.
The position calculation module 502 is configured to determine the positional relationship between the participants' faces in the following manner:
determining whether the participants' faces are on the same focal plane and whether they are close to each other.
The shooting module 503 is configured to determine that the positional relationship between the participants' faces is suitable for taking a group photo in the following manner:
determining that the participants' faces are on the same focal plane and close to each other.
The position calculation module 502 is configured to determine whether the participants' faces are on the same focal plane in the following manner:
if the area ratio r of the face recognition frames corresponding to the faces of the two participants is greater than the first threshold, determining that the two participants' faces are on the same focal plane; if the area ratio r is less than or equal to the first threshold, determining that the two participants' faces are not on the same focal plane;
wherein the area ratio r of the face recognition frames corresponding to the two participants' faces is the area s1 of the first participant's face recognition frame divided by the area s2 of the second participant's face recognition frame, s1 being less than or equal to s2.
The position calculation module 502 is configured to determine whether the participants' faces are close to each other in the following manner:
calculating the closeness t of the two participants' faces;
if the closeness t of the two participants' faces is greater than the second threshold, determining that the participants are close to each other; if the closeness t is less than or equal to the second threshold, determining that the participants are not close to each other.
The position calculation module 502 may calculate the closeness t of the two participants' faces as follows:
t = (da + db) / s
wherein, as shown in FIG. 4, the center point of the first participant's face recognition frame is A1, the center point of the second participant's face recognition frame is B1, the intersection of the line connecting the two center points with the first participant's face recognition frame is A2, the intersection of that line with the second participant's face recognition frame is B2, the distance between A1 and B1 is s, the distance between A1 and A2 is da, and the distance between B1 and B2 is db.
The face detection module 501 is configured to determine the faces of the participants based on face recognition in the following manner:
performing face recognition on the faces of the subjects and, when more than one face is recognized, taking the faces framed by the two face recognition frames with the largest area as the faces of the participants.
The shooting module 503 is configured to trigger the camera to shoot in the following manner: emitting a prompt tone and triggering the camera to shoot.
As shown in Fig. 6, an embodiment of the present invention provides a method for taking a group selfie, applied to a terminal, including:
S601: in group-selfie mode, determining, based on face recognition, the number of portraits being photographed and the positional relationship between the portraits' faces;
S602: when it is judged that the number of portraits exceeds two and the positional relationship of at least two portraits meets the facial group-photo condition, triggering the camera shutter to take a group photo of the portraits' faces;
Optionally, determining, based on face recognition, the number of portraits being photographed and the positional relationship between the portraits' faces includes:
performing face recognition on the faces of the subjects and determining the number of portraits from the number of recognized faces;
determining the positional relationship between the portraits from the positional relationship between the face recognition frames corresponding to the portraits;
Optionally, the facial group-photo condition includes: at least two portraits are in the same row and the faces of the two portraits are close to each other;
Optionally, determining the positional relationship between the portraits from the positional relationship between the face recognition frames corresponding to the portraits includes:
when the area ratio of two face recognition frames is greater than a first threshold, determining that the subjects corresponding to the two face recognition frames are in the same row; wherein, when the first threshold is less than 1, the area ratio is the area of the smaller face recognition frame divided by the area of the larger face recognition frame;
Optionally, determining the positional relationship between the portraits from the positional relationship between the face recognition frames corresponding to the portraits includes:
judging the closeness of the faces of two portraits from the ratio of the size of the two face frames corresponding to the two portraits to the distance between the two face frames;
wherein judging the closeness t of the faces of the two portraits from the ratio of the size of the two face frames corresponding to the two portraits to the distance between the two face frames includes:
t = (da + db) / s
wherein A1 is the center point of the first portrait's face recognition frame, B1 is the center point of the second portrait's face recognition frame, A2 is the intersection of the line connecting the two center points with the first portrait's face recognition frame, B2 is the intersection of that line with the second portrait's face recognition frame, s is the distance between point A1 and point B1, da is the distance between point A1 and point A2, and db is the distance between point B1 and point B2;
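The trigger logic of S601-S602 — more than two portraits detected and at least one pair meeting the condition — could be sketched as below. As a simplification, da and db are approximated here by half the frame width, which is reasonable when the faces sit roughly side by side; all names and the box layout (x, y, w, h) are illustrative assumptions, not from the source:

```python
from itertools import combinations
import math

def pair_meets_condition(box1, box2, r_thresh=0.8, t_thresh=0.8):
    # box = (x, y, w, h) of a face recognition frame
    (x1, y1, w1, h1), (x2, y2, w2, h2) = box1, box2
    # s: distance between the two frame centers
    s = math.dist((x1 + w1 / 2, y1 + h1 / 2), (x2 + w2 / 2, y2 + h2 / 2))
    # same-row test: smaller area over larger area
    r = min(w1 * h1, w2 * h2) / max(w1 * h1, w2 * h2)
    # closeness test, approximating da and db by half the frame widths
    t = (w1 / 2 + w2 / 2) / s
    return r > r_thresh and t > t_thresh

def should_trigger_shutter(boxes):
    # S602: more than two portraits, and at least one pair of them
    # satisfies the facial group-photo condition
    return len(boxes) > 2 and any(
        pair_meets_condition(a, b) for a, b in combinations(boxes, 2))
```

Checking every pair with `itertools.combinations` keeps the rule "at least two portraits meet the condition" independent of which portraits those happen to be.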
As shown in Fig. 7, an embodiment of the present invention provides a device for taking a group selfie, applied to a terminal, including:
a face detection and processing module 701, configured to determine, in group-selfie mode and based on face recognition, the number of portraits being photographed and the positional relationship between the portraits' faces;
a group-photo shooting module 702, configured to trigger the camera shutter to take a group photo of the portraits' faces when it is judged that the number of portraits exceeds two and the positional relationship of at least two portraits meets the facial group-photo condition.
Optionally, the face detection and processing module 701 is configured to determine, based on face recognition, the number of portraits being photographed and the positional relationship between the portraits' faces in the following manner:
performing face recognition on the faces of the subjects and determining the number of portraits from the number of recognized faces;
determining the positional relationship between the portraits from the positional relationship between the face recognition frames corresponding to the portraits;
Optionally, the facial group-photo condition includes: at least two portraits are in the same row and the faces of the two portraits are close to each other;
Optionally, the face detection and processing module 701 is configured to determine the positional relationship between the portraits from the positional relationship between the face recognition frames corresponding to the portraits in the following manner:
when the area ratio of two face recognition frames is greater than a first threshold, determining that the subjects corresponding to the two face recognition frames are in the same row; wherein, when the first threshold is less than 1, the area ratio is the area of the smaller face recognition frame divided by the area of the larger face recognition frame;
Optionally, the face detection and processing module 701 is configured to determine the positional relationship between the portraits from the positional relationship between the face recognition frames corresponding to the portraits in the following manner:
judging the closeness t of the faces of two portraits from the ratio of the size of the two face frames corresponding to the two portraits to the distance between the two face frames:
t = (da + db) / s
wherein A1 is the center point of the first portrait's face recognition frame, B1 is the center point of the second portrait's face recognition frame, A2 is the intersection of the line connecting the two center points with the first portrait's face recognition frame, B2 is the intersection of that line with the second portrait's face recognition frame, s is the distance between point A1 and point B1, da is the distance between point A1 and point A2, and db is the distance between point B1 and point B2.
Specific example
As shown in Fig. 8, taking a mobile phone as an example, a method for taking a group selfie specifically includes the following steps:
Step S801: the user selects group-selfie mode;
Step S802: the phone recognizes the number of faces;
Step S803: judging whether the number of faces exceeds one; if so, step S804 is executed; otherwise the process returns to step S802;
Step S804: selecting the two face frames with the largest areas as the face frames corresponding to the group members' faces;
Step S805: calculating the area ratio r of the two selected face frames;
wherein r = s1/s2, s1 is the area of the first face frame, s2 is the area of the second face frame, and s1 is less than or equal to s2;
Step S806: judging whether r is greater than the first threshold; if so, step S807 is executed; otherwise the process returns to step S802;
wherein the first threshold is 0.8;
Step S807: calculating the closeness of the two faces;
When shooting from far away, the absolute distance between the two faces in the image is small, whereas when shooting close up the absolute distance is large. To judge the closeness of the two faces accurately at different shooting distances, the size of the faces must also be taken into account. Therefore, the ratio of the face-frame size to the distance between the faces is used to judge how close the faces are.
The closeness t between the two faces is calculated with the following formula:
t = (da + db) / s
wherein, as shown in Fig. 4, A1 is the center point of the first face frame, B1 is the center point of the second face frame, A2 is the intersection of the line connecting the two center points with the first face frame, B2 is the intersection of that line with the second face frame, s is the distance between point A1 and point B1, da is the distance between point A1 and point A2, and db is the distance between point B1 and point B2; the distance between two points may be measured in pixels;
Taking Fig. 9 as an example, da is 188 pixels, db is 224 pixels, and s is 896 pixels, so the closeness t of the two faces works out to (188+224)/896 = 45.9%;
Taking Fig. 10 as an example, da is 336 pixels, db is 320 pixels, and s is 640 pixels, so the closeness t of the two faces works out to (336+320)/640 = 102.5%;
Step S808: judging whether the closeness t between the two faces is greater than the second threshold; if so, step S809 is executed;
wherein the second threshold is 0.8;
Taking Fig. 9 as an example, t is 45.9%, far below 80%, indicating that the two faces are not yet close enough and the condition for taking a group photo is not met;
Taking Fig. 10 as an example, t is 102.5%, greater than 80%, indicating that the two faces are close enough and the condition for taking a group photo is met.
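The two worked examples above can be reproduced numerically; this is a sketch, and the helper name is illustrative:

```python
def closeness_t(da, db, s):
    # t = (da + db) / s, all distances in pixels
    return (da + db) / s

# Fig. 9: faces too far apart -- t is well below the 0.8 threshold
t_fig9 = closeness_t(188, 224, 896)   # ~0.459, i.e. 45.9%
# Fig. 10: faces close together -- t exceeds the threshold
t_fig10 = closeness_t(336, 320, 640)  # 1.025, i.e. 102.5%
```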
Step S809: emitting a prompt tone and triggering the camera to take the group photo.
In addition, an embodiment of the present invention further provides a computer-readable storage medium storing computer-executable instructions which, when executed by a processor, implement the above method.
The method and device for taking a group selfie provided by the above embodiments use face recognition to determine the positional relationship between the group members' faces; when the group members' faces are close to each other and on the same focal plane, the terminal triggers the camera to shoot automatically. This way of taking a group selfie is convenient to operate and can improve the user experience and the quality of the photograph.
Those of ordinary skill in the art will understand that all or some of the steps of the above method may be completed by a program instructing the relevant hardware (e.g., a processor), and the program may be stored in a computer-readable storage medium such as a read-only memory, a magnetic disk, or an optical disk. Optionally, all or some of the steps of the above embodiments may also be implemented using one or more integrated circuits. Accordingly, each module/unit in the above embodiments may be implemented in the form of hardware, for example by an integrated circuit realizing its corresponding function, or in the form of a software functional module, for example by a processor executing program instructions stored in a memory. The present application is not limited to any particular combination of hardware and software.
It should be noted that the present application may have various other embodiments. Without departing from the spirit and essence of the present application, those familiar with the art may make various corresponding changes and modifications according to the present application, and such changes and modifications shall all fall within the protection scope of the appended claims.
Industrial applicability
The technical solution provided by the embodiments of the present invention uses face recognition to determine the positional relationship between the group members' faces; when the group members' faces are close to each other and on the same focal plane, the terminal triggers the camera to shoot automatically. This way of taking a group selfie is convenient to operate and can improve the user experience and the quality of the photograph.

Claims (20)

  1. A method for taking a group selfie, applied to a terminal, the method comprising the steps of:
    in group-selfie mode, determining the faces of the group members based on face recognition;
    determining the positional relationship between the group members' faces;
    when it is judged that the positional relationship between the group members' faces is suitable for taking a group photo, triggering the camera to shoot.
  2. The method of claim 1, wherein:
    determining the positional relationship between the group members' faces comprises:
    determining whether the group members' faces are on the same focal plane and whether they are close to each other;
    judging that the positional relationship between the group members' faces is suitable for taking a group photo comprises:
    judging that the group members' faces are on the same focal plane and close to each other.
  3. The method of claim 2, wherein:
    determining whether the group members' faces are on the same focal plane comprises:
    if the area ratio r of the face recognition frames corresponding to the two group members' faces is greater than a first threshold, judging that the two group members' faces are on the same focal plane; if the area ratio r is less than or equal to the first threshold, judging that the two group members' faces are not on the same focal plane;
    wherein the area ratio r of the face recognition frames corresponding to the two group members' faces is the area s1 of the first group member's face recognition frame divided by the area s2 of the second group member's face recognition frame, s1 being less than or equal to s2.
  4. The method of claim 2, wherein:
    determining whether the group members' faces are close to each other comprises:
    calculating the closeness t of the two group members' faces;
    if the closeness t of the two group members' faces is greater than a second threshold, judging that the group members are close to each other; if the closeness t is less than or equal to the second threshold, judging that the group members are not close to each other;
    wherein the closeness t of the two group members' faces is calculated as follows:
    t = (da + db) / s
    wherein A1 is the center point of the first group member's face recognition frame, B1 is the center point of the second group member's face recognition frame, A2 is the intersection of the line connecting the two center points with the first group member's face recognition frame, B2 is the intersection of that line with the second group member's face recognition frame, s is the distance between point A1 and point B1, da is the distance between point A1 and point A2, and db is the distance between point B1 and point B2.
  5. The method of any one of claims 1-4, wherein:
    determining the faces of the group members based on face recognition comprises:
    performing face recognition on the faces of the subjects; when more than one face is recognized, taking the faces focused by the two face recognition frames with the largest areas as the faces of the group members.
  6. A device for taking a group selfie, applied to a terminal, comprising:
    a face detection module, configured to determine, in group-selfie mode, the faces of the group members based on face recognition;
    a position calculation module, configured to determine the positional relationship between the group members' faces;
    a shooting module, configured to trigger the camera to shoot when it is judged that the positional relationship between the group members' faces is suitable for taking a group photo.
  7. The device of claim 6, wherein:
    the position calculation module is configured to determine the positional relationship between the group members' faces in the following manner: determining whether the group members' faces are on the same focal plane and whether they are close to each other;
    the shooting module is configured to judge that the positional relationship between the group members' faces is suitable for taking a group photo in the following manner: judging that the group members' faces are on the same focal plane and close to each other.
  8. The device of claim 7, wherein:
    the position calculation module is configured to determine whether the group members' faces are on the same focal plane in the following manner:
    if the area ratio r of the face recognition frames corresponding to the two group members' faces is greater than a first threshold, judging that the two group members' faces are on the same focal plane; if the area ratio r is less than or equal to the first threshold, judging that the two group members' faces are not on the same focal plane;
    wherein the area ratio r of the face recognition frames corresponding to the two group members' faces is the area s1 of the first group member's face recognition frame divided by the area s2 of the second group member's face recognition frame, s1 being less than or equal to s2.
  9. The device of claim 7, wherein:
    the position calculation module is configured to determine whether the group members' faces are close to each other in the following manner:
    calculating the closeness t of the two group members' faces;
    if the closeness t of the two group members' faces is greater than a second threshold, judging that the group members are close to each other; if the closeness t is less than or equal to the second threshold, judging that the group members are not close to each other;
    wherein the closeness t of the two group members' faces is calculated as follows:
    t = (da + db) / s
    wherein A1 is the center point of the first group member's face recognition frame, B1 is the center point of the second group member's face recognition frame, A2 is the intersection of the line connecting the two center points with the first group member's face recognition frame, B2 is the intersection of that line with the second group member's face recognition frame, s is the distance between point A1 and point B1, da is the distance between point A1 and point A2, and db is the distance between point B1 and point B2.
  10. The device of any one of claims 6-9, wherein:
    the face detection module is configured to determine the faces of the group members based on face recognition in the following manner:
    performing face recognition on the faces of the subjects; when more than one face is recognized, taking the faces focused by the two face recognition frames with the largest areas as the faces of the group members.
  11. A method for taking a group selfie, applied to a terminal, the method comprising the steps of:
    in group-selfie mode, determining, based on face recognition, the number of portraits being photographed and the positional relationship between the portraits' faces;
    when it is judged that the number of portraits exceeds two and the positional relationship of at least two portraits meets the facial group-photo condition, triggering the camera shutter to take a group photo of the portraits' faces.
  12. The method of claim 11, wherein:
    the facial group-photo condition comprises: at least two portraits are in the same row and the faces of the two portraits are close to each other.
  13. The method of claim 12, wherein:
    determining, based on face recognition, the number of portraits being photographed and the positional relationship between the portraits' faces comprises:
    performing face recognition on the faces of the subjects and determining the number of portraits from the number of recognized faces;
    determining the positional relationship between the portraits from the positional relationship between the face recognition frames corresponding to the portraits.
  14. The method of claim 13, wherein:
    determining the positional relationship between the portraits from the positional relationship between the face recognition frames corresponding to the portraits comprises:
    when the area ratio of two face recognition frames is greater than a first threshold, determining that the subjects corresponding to the two face recognition frames are in the same row; wherein, when the first threshold is less than 1, the area ratio is the area of the smaller face recognition frame divided by the area of the larger face recognition frame.
  15. The method of claim 13, wherein:
    determining the positional relationship between the portraits from the positional relationship between the face recognition frames corresponding to the portraits comprises:
    judging the closeness t of the faces of two portraits from the ratio of the size of the two face frames corresponding to the two portraits to the distance between the two face frames:
    t = (da + db) / s
    wherein A1 is the center point of the first portrait's face recognition frame, B1 is the center point of the second portrait's face recognition frame, A2 is the intersection of the line connecting the two center points with the first portrait's face recognition frame, B2 is the intersection of that line with the second portrait's face recognition frame, s is the distance between point A1 and point B1, da is the distance between point A1 and point A2, and db is the distance between point B1 and point B2.
  16. A device for taking a group selfie, applied to a terminal, comprising:
    a face detection and processing module, configured to determine, in group-selfie mode and based on face recognition, the number of portraits being photographed and the positional relationship between the portraits' faces;
    a group-photo shooting module, configured to trigger the camera shutter to take a group photo of the portraits' faces when it is judged that the number of portraits exceeds two and the positional relationship of at least two portraits meets the facial group-photo condition.
  17. The device of claim 16, wherein:
    the facial group-photo condition comprises: at least two portraits are in the same row and the faces of the two portraits are close to each other.
  18. The device of claim 17, wherein:
    the face detection and processing module is configured to determine, based on face recognition, the number of portraits being photographed and the positional relationship between the portraits' faces in the following manner:
    performing face recognition on the faces of the subjects and determining the number of portraits from the number of recognized faces;
    determining the positional relationship between the portraits from the positional relationship between the face recognition frames corresponding to the portraits.
  19. The device of claim 18, wherein:
    the face detection and processing module is configured to determine the positional relationship between the portraits from the positional relationship between the face recognition frames corresponding to the portraits in the following manner:
    when the area ratio of two face recognition frames is greater than a first threshold, determining that the subjects corresponding to the two face recognition frames are in the same row; wherein, when the first threshold is less than 1, the area ratio is the area of the smaller face recognition frame divided by the area of the larger face recognition frame.
  20. The device of claim 18, wherein:
    the face detection and processing module is configured to determine the positional relationship between the portraits from the positional relationship between the face recognition frames corresponding to the portraits in the following manner:
    judging the closeness t of the faces of two portraits from the ratio of the size of the two face frames corresponding to the two portraits to the distance between the two face frames:
    t = (da + db) / s
    wherein A1 is the center point of the first portrait's face recognition frame, B1 is the center point of the second portrait's face recognition frame, A2 is the intersection of the line connecting the two center points with the first portrait's face recognition frame, B2 is the intersection of that line with the second portrait's face recognition frame, s is the distance between point A1 and point B1, da is the distance between point A1 and point A2, and db is the distance between point B1 and point B2.
PCT/CN2016/102848 2015-10-30 2016-10-21 Method and device for taking a group selfie WO2017071532A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201510727210.5 2015-10-30
CN201510727210.5A CN105430258B (zh) 2015-10-30 2015-10-30 Method and device for taking a group selfie

Publications (1)

Publication Number Publication Date
WO2017071532A1 true WO2017071532A1 (zh) 2017-05-04

Family

ID=55508161

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/102848 WO2017071532A1 (zh) 2015-10-30 2016-10-21 Method and device for taking a group selfie

Country Status (2)

Country Link
CN (1) CN105430258B (zh)
WO (1) WO2017071532A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112200023A (zh) * 2020-09-24 2021-01-08 上海新氦类脑智能科技有限公司 Crowd-interaction-based intelligent image acquisition and interaction method, system, terminal, and medium

Families Citing this family (5)

Publication number Priority date Publication date Assignee Title
CN105430258B (zh) * 2015-10-30 2018-06-01 努比亚技术有限公司 Method and device for taking a group selfie
CN106231200B (zh) * 2016-08-29 2018-12-11 广东欧珀移动通信有限公司 Photographing method and device
CN106355549A (zh) * 2016-09-30 2017-01-25 北京小米移动软件有限公司 Photographing method and equipment
CN109040643B (zh) * 2018-07-18 2021-04-20 奇酷互联网络科技(深圳)有限公司 Mobile terminal and method and device for remote group photographing
CN111246078A (zh) * 2018-11-29 2020-06-05 北京小米移动软件有限公司 Image processing method and device

Citations (5)

Publication number Priority date Publication date Assignee Title
JP2005217768A (ja) * 2004-01-29 2005-08-11 Fuji Photo Film Co Ltd Digital camera
CN101212572A (zh) * 2006-12-27 2008-07-02 富士胶片株式会社 Image acquisition device and image acquisition method
US20090237521A1 (en) * 2008-03-19 2009-09-24 Fujifilm Corporation Image capturing apparatus and method for controlling image capturing
CN103186763A (zh) * 2011-12-28 2013-07-03 富泰华工业(深圳)有限公司 Face recognition system and method
CN105430258A (zh) * 2015-10-30 2016-03-23 努比亚技术有限公司 Method and device for taking a group selfie

Family Cites Families (4)

Publication number Priority date Publication date Assignee Title
JP5060233B2 (ja) * 2007-09-25 2012-10-31 富士フイルム株式会社 Imaging device and automatic shooting method thereof
JP5144422B2 (ja) * 2007-09-28 2013-02-13 富士フイルム株式会社 Photographing device and photographing method
KR101362765B1 (ko) * 2007-11-07 2014-02-13 삼성전자주식회사 Photographing apparatus and method of controlling the same
CN102932596A (zh) * 2012-10-26 2013-02-13 广东欧珀移动通信有限公司 Photographing method, device, and mobile terminal


Also Published As

Publication number Publication date
CN105430258B (zh) 2018-06-01
CN105430258A (zh) 2016-03-23

Similar Documents

Publication Publication Date Title
WO2017071532A1 (zh) Method and device for taking a group selfie
WO2017050115A1 (zh) Image synthesis method and device
WO2018019124A1 (zh) Image processing method, electronic device, and storage medium
WO2017020836A1 (zh) Device and method for blurring depth images
CN106454121B (zh) Dual-camera photographing method and device
WO2018076935A1 (zh) Image blurring method, device, mobile terminal, and computer storage medium
CN106909274B (zh) Image display method and device
WO2017045650A1 (zh) Picture processing method and terminal
WO2018019128A1 (zh) Night-scene image processing method and mobile terminal
WO2016173468A1 (zh) Combined operation method and device, touch-screen operation method, and electronic device
WO2017071481A1 (zh) Mobile terminal and split-screen method thereof
CN106453924A (zh) Image capturing method and device
WO2016058458A1 (zh) Battery level management method, mobile terminal, and computer storage medium
WO2018050014A1 (zh) Focusing method, photographing device, and storage medium
WO2017143855A1 (zh) Device with screenshot function and screenshot method
CN106713716B (zh) Shooting control method and device for dual cameras
WO2017067523A1 (zh) Image processing method, device, and mobile terminal
WO2017071475A1 (zh) Image processing method, terminal, and storage medium
WO2017071469A1 (zh) Mobile terminal, image capturing method, and computer storage medium
WO2017041714A1 (zh) Method and device for acquiring RGB data
CN106657782B (zh) Picture processing method and terminal
WO2018076938A1 (zh) Image processing device, method, and computer storage medium
CN106911881B (zh) Dual-camera-based dynamic photo capturing device, method, and terminal
WO2017045647A1 (zh) Mobile terminal and method for processing images
WO2018050080A1 (zh) Mobile terminal, picture processing method, and computer storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16858967

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16858967

Country of ref document: EP

Kind code of ref document: A1