CN106851104B - Method and device for shooting according to the user's viewpoint - Google Patents


Info

Publication number
CN106851104B
Authority
CN
China
Prior art keywords
smart device
person
photo
image
camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710111156.0A
Other languages
Chinese (zh)
Other versions
CN106851104A (en)
Inventor
陈小翔 (Chen Xiaoxiang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nubia Technology Co Ltd
Original Assignee
Nubia Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nubia Technology Co Ltd
Priority to CN201710111156.0A
Publication of CN106851104A
Application granted
Publication of CN106851104B
Legal status: Active


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/67 Focus control based on electronic image sensor signals
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 3/00 Measuring distances in line of sight; Optical rangefinders
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/61 Control of cameras or camera modules based on recognised objects
    • H04N 23/611 Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/617 Upgrading or updating of programs or applications for camera control

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Electromagnetism (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Studio Devices (AREA)

Abstract

The invention discloses a method for shooting photos according to each user's viewpoint. In this method, a smart device captures an image through either of its dual cameras and uses image recognition technology to determine the number of people in the image. The smart device then performs distance measurement on each person in the image through the binocular ranging technique of the dual cameras, obtaining the distance between each person and the device. It automatically sets the shooting parameters of the dual cameras according to each person's distance and shoots one photo for each person. Finally, the smart device selects a photo and displays it on its screen. With this method, everyone in a group can obtain a photo taken from his or her own viewpoint, so that smartphone photography better matches each user's interest and the user experience is improved.

Description

Method and device for shooting according to the user's viewpoint
[Technical field]
The present invention relates to photographing techniques, and more specifically to a method and system for shooting photos according to the user's viewpoint.
[Background]
With the development of camera functions in smart devices, a smart device can continuously shoot multiple photos when photographing people.
In the prior art, when a smart device continuously shoots multiple photos of a group of people, the photos are not taken from the viewpoints of different users; instead, a series of shots is taken with some single person in the group as the focus.
The present method obtains the number of people and the distance of each person through the dual cameras on the smart device, then automatically sets the shooting parameters according to each person's distance and shoots one photo for each person. In this way, everyone can obtain a group photo focused on himself or herself, so that smartphone photography better matches each user's interest and the user experience is improved.
[Summary of the invention]
In view of the foregoing drawbacks, the present invention provides a method and device for shooting according to the user's viewpoint. The method comprises: the smart device captures an image through either of the dual cameras on the device, and determines the number of people in the image using image recognition technology; the smart device performs distance measurement on each person in the image through the binocular ranging technique of the dual cameras, obtaining the distance between each person and the device; the smart device automatically sets the shooting parameters of the dual cameras according to the distance between each person and the device, and shoots one photo for each person; the smart device selects a photo and displays it on its screen.
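The binocular ranging step relies on the standard rectified-stereo relation Z = f · B / d, where f is the focal length in pixels, B the baseline between the two cameras, and d the disparity of the point between the two views. A minimal sketch; all calibration numbers below are hypothetical, since the patent gives none:

```python
def stereo_depth_m(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth of a point seen by a rectified stereo pair: Z = f * B / d.

    focal_px     -- focal length in pixels (hypothetical calibration value)
    baseline_m   -- distance between the two camera centers, in meters
    disparity_px -- horizontal pixel shift of the point between the two views
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# With an assumed 700 px focal length and 20 mm baseline, a face whose
# disparity between the two cameras is 14 px sits about 1 m from the device.
print(stereo_depth_m(700, 0.02, 14))  # -> 1.0
```

Running this per detected person yields the per-person distances the device then feeds into parameter setting.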
Optionally, the smart device sets the shooting focal length according to each person's distance from the device, and sets the aperture, shutter speed, ISO, exposure and white balance according to each person's background.
Optionally, after shooting a photo for a person, the smart device saves the photo with that person as the center point.
Optionally, before shooting, the user manually selects the people to be photographed in the viewfinder of the smart device; the device then shoots one photo only for each selected person.
Optionally, when sharing photos, the smart device automatically recognizes the other party's avatar using image recognition technology, matches the avatar against the people in the photos, and shares the successfully matched photo with that party.
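The sharing option can be read as a thresholded match between the contact's avatar features and the faces in the shot photos. A hedged sketch; the feature vectors, their meaning, and the threshold are all invented for illustration, since the patent does not specify a feature representation:

```python
import math

def cosine(u, v):
    """Cosine similarity of two equal-length feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(a * a for a in v))
    return dot / (nu * nv)

def photos_to_share(avatar_vec, photo_faces, threshold=0.9):
    """Return the ids of photos whose centered person matches the avatar."""
    return [pid for pid, vec in photo_faces.items()
            if cosine(avatar_vec, vec) >= threshold]

avatar = [1.0, 0.0, 1.0]              # hypothetical avatar features
faces = {"photo_A": [0.9, 0.1, 1.0],  # same person, slight noise
         "photo_B": [0.0, 1.0, 0.0]}  # different person
print(photos_to_share(avatar, faces))  # -> ['photo_A']
```

Only the matched photo is offered to the contact; the rest of the group shots stay on the device.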
In addition, the present invention also proposes a device for shooting according to the user's viewpoint, comprising:
a person recognition module, for capturing an image through either of the dual cameras on a smart device, and determining the number of people in the image using image recognition technology;
a ranging module, for performing distance measurement on each person in the image through the binocular ranging technique of the dual cameras on the smart device, obtaining the distance between each person and the smart device; and a shooting module, for automatically setting the shooting parameters of the dual cameras according to the distance between each person and the smart device, and shooting one photo for each person;
a display module, for selecting a photo and displaying the selected photo on the screen of the smart device.
Optionally, the device further comprises:
a parameter setting module, for setting the shooting focal length according to each person's distance from the smart device, and setting the aperture, shutter speed, ISO, exposure and white balance according to each person's background.
Optionally, the device further comprises a storage module, for saving each photo, after it is shot, with the corresponding person as the center point.
Optionally, the device further comprises:
a person selection module, allowing the user to manually select the people to be photographed in the viewfinder of the smart device before shooting; the device then shoots one photo only for each selected person.
Optionally, the device further comprises:
a sharing module, for automatically recognizing the other party's avatar using image recognition technology when sharing photos, matching the avatar against the people in the photos, and sharing the successfully matched photo with that party.
Beneficial effects of the present invention: the method captures an image through either of the dual cameras on a smart device, and the device determines the number of people in the image using image recognition technology; through the binocular ranging technique of the dual cameras, the device measures the distance between each person and itself; it then automatically sets the shooting parameters according to each person's distance and shoots one photo for each person. In this way, everyone can obtain a photo taken from his or her own viewpoint, so that smartphone photography better matches each user's interest and the user experience is improved.
[Brief description of the drawings]
Fig. 1 is a schematic diagram of the hardware structure of a mobile terminal implementing the embodiments of the present invention.
Fig. 2 is a schematic diagram of a wireless communication system for the mobile terminal shown in Fig. 1.
Fig. 3 is a flowchart of method embodiment one, provided by the present invention, of shooting according to the user's viewpoint.
Fig. 4 is a flowchart of method embodiment two, provided by the present invention, of shooting according to the user's viewpoint.
Fig. 5 is a flowchart of method embodiment three, provided by the present invention, of shooting according to the user's viewpoint.
Fig. 6 is a functional block diagram of device embodiment four, provided by the present invention, of shooting according to the user's viewpoint.
Fig. 7 is a functional block diagram of device embodiment five, provided by the present invention, of shooting according to the user's viewpoint.
Fig. 8 is a functional block diagram of device embodiment six, provided by the present invention, of shooting according to the user's viewpoint.
Fig. 9 is a Matlab binocular-vision calibration figure for the binocular ranging principle.
Fig. 10 is a distortion-correction figure for the binocular ranging principle.
Fig. 11 shows the conversion of the cameras to the canonical (rectified) form in the binocular ranging principle.
Fig. 12 is a flowchart of the binocular ranging procedure.
[Detailed description]
It should be understood that the specific embodiments described here serve only to explain the present invention and are not intended to limit it.
Mobile terminals implementing the embodiments of the present invention are now described with reference to the drawings. In the following description, suffixes such as "module", "component" or "unit" used to denote elements serve only to facilitate the explanation of the invention and have no specific meaning by themselves; "module" and "component" may therefore be used interchangeably.
A mobile terminal can be implemented in various forms. For example, the terminals described in the present invention may include mobile terminals such as mobile phones, smart phones, laptop computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable media players) and navigation devices, as well as fixed terminals such as digital TVs and desktop computers. In the following it is assumed that the terminal is a mobile terminal; however, those skilled in the art will understand that, apart from elements used specifically for mobile purposes, the constructions of the embodiments of the present invention can also be applied to fixed-type terminals.
Fig. 1 is a schematic diagram of the hardware structure of a mobile terminal implementing the embodiments of the present invention.
The mobile terminal 100 may include a wireless communication unit 110, an A/V (audio/video) input unit 120, a user input unit 130, an output unit 140, a memory 150, an interface unit 160, a controller 170, a power supply unit 180 and so on. Fig. 1 shows a mobile terminal with various components, but it should be understood that not all the illustrated components are required; more or fewer components may alternatively be implemented. The elements of the mobile terminal are described in detail below.
The wireless communication unit 110 typically includes one or more components that allow radio communication between the mobile terminal 100 and a wireless communication system or network. For example, the wireless communication unit may include at least one of a mobile communication module 111, a wireless Internet module 112 and a short-range communication module 113.
The mobile communication module 111 sends radio signals to and/or receives radio signals from at least one of a base station (e.g. an access point, a Node B, etc.), an external terminal and a server. Such radio signals may include voice call signals, video call signals, or various types of data sent and/or received according to text and/or multimedia messages.
The wireless Internet module 112 supports wireless Internet access of the mobile terminal and can be internally or externally coupled to the terminal. The wireless Internet access technologies involved may include WLAN (Wireless LAN, Wi-Fi), WiBro (Wireless Broadband), WiMAX (Worldwide Interoperability for Microwave Access), HSDPA (High-Speed Downlink Packet Access) and so on.
The short-range communication module 113 supports short-range communication. Examples of short-range communication technologies include Bluetooth™, radio-frequency identification (RFID), Infrared Data Association (IrDA), ultra-wideband (UWB) and ZigBee™.
The A/V input unit 120 receives audio or video signals and may include a camera 121 and a microphone 122. The camera 121 processes the image data of still pictures or video obtained by the image capture apparatus in a video capture mode or an image capture mode. The processed image frames may be displayed on the display unit 141, stored in the memory 150 (or another storage medium), or transmitted via the wireless communication unit 110; two or more cameras 121 may be provided depending on the construction of the mobile terminal. The microphone 122 receives sound (audio data) in operating modes such as a phone call mode, a recording mode and a voice recognition mode, and processes the sound into audio data. In the phone call mode, the processed audio (voice) data can be converted into a format transmittable to a mobile communication base station via the mobile communication module 111. The microphone 122 may implement various types of noise cancellation (or suppression) algorithms to cancel (or suppress) noise or interference generated while sending and receiving audio signals.
The user input unit 130 can generate key input data according to commands input by the user to control various operations of the mobile terminal. The user input unit 130 allows the user to input various types of information and may include a keyboard, a dome switch, a touch pad (e.g. a touch-sensitive component that detects changes of resistance, pressure, capacitance and the like caused by contact), a jog wheel, a jog switch and so on. In particular, when the touch pad is superimposed on the display unit 141 in the form of a layer, a touch screen is formed.
The interface unit 160 serves as an interface through which at least one external device can connect to the mobile terminal 100. For example, external devices may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port and so on. The identification module may store various information for verifying a user of the mobile terminal 100 and may include a user identity module (UIM), a subscriber identity module (SIM), a universal subscriber identity module (USIM) and the like. A device having an identification module (hereinafter an "identification device") may take the form of a smart card; the identification device can therefore be connected to the mobile terminal 100 via a port or other connecting means. The interface unit 160 can receive input (e.g. data, information, power, etc.) from an external device and transfer the received input to one or more elements within the mobile terminal 100, or can be used to transmit data between the mobile terminal and an external device.
In addition, when the mobile terminal 100 is connected to an external cradle, the interface unit 160 may serve as a path through which power is supplied from the cradle to the mobile terminal 100, or as a path through which various command signals input from the cradle are transferred to the mobile terminal. The various command signals or power input from the cradle may serve as signals for recognizing whether the mobile terminal is accurately mounted on the cradle. The output unit 140 is configured to provide output signals (e.g. audio signals, video signals, alarm signals, vibration signals, etc.) in a visual, audible and/or tactile manner, and may include the display unit 141, an audio output module 142 and so on.
The display unit 141 may display information processed in the mobile terminal 100. For example, when the mobile terminal 100 is in a phone call mode, the display unit 141 may display a user interface (UI) or graphical user interface (GUI) related to the call or other communication (e.g. text messaging, multimedia file downloading, etc.). When the mobile terminal 100 is in a video call mode or an image capture mode, the display unit 141 may display a captured image and/or a received image, a UI or GUI showing video or images and related functions, and so on.
Meanwhile, when the display unit 141 and the touch pad are superimposed on each other in the form of a layer to form a touch screen, the display unit 141 may serve as both an input device and an output device. The display unit 141 may include at least one of a liquid crystal display (LCD), a thin-film-transistor LCD (TFT-LCD), an organic light-emitting diode (OLED) display, a flexible display and a three-dimensional (3D) display. Some of these displays may be constructed to be transparent so that the user can see through them from the outside; these may be called transparent displays, and a typical transparent display is, for example, a TOLED (transparent organic light-emitting diode) display. Depending on the particular desired embodiment, the mobile terminal 100 may include two or more display units (or other display devices); for example, the mobile terminal may include an external display unit (not shown) and an internal display unit (not shown). The touch screen can be used to detect touch input pressure as well as touch input position and touch input area.
The audio output module 142 may convert audio data received by the wireless communication unit 110 or stored in the memory 150 into an audio signal and output it as sound when the mobile terminal is in a call signal reception mode, a call mode, a recording mode, a voice recognition mode, a broadcast reception mode and the like. The audio output module 142 may also provide audio output related to a particular function executed by the mobile terminal 100 (e.g. call signal reception sound, message reception sound, etc.) and may include a speaker, a buzzer and so on.
The memory 150 may store software programs for the processing and control operations executed by the controller 170, and may temporarily store data that has been or will be output (e.g. a phone book, messages, still images, video, etc.). Moreover, the memory 150 may store data about the vibrations and audio signals of various modes that are output when a touch is applied to the touch screen.
The memory 150 may include at least one type of storage medium, including flash memory, hard disk, multimedia card, card-type memory (e.g. SD or DX memory), random access memory (RAM), static random access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, magnetic disk, optical disk and so on. Moreover, the mobile terminal 100 may cooperate with a network storage device that performs the storage function of the memory 150 over a network connection.
The controller 170 typically controls the overall operation of the mobile terminal. For example, the controller 170 performs control and processing related to voice calls, data communication, video calls and the like. In addition, the controller 170 may include a multimedia module 171 for reproducing (or playing back) multimedia data; the multimedia module 171 may be constructed within the controller 170 or may be constructed separately from the controller 170. The controller 170 may perform pattern recognition processing to recognize handwriting input or drawing input executed on the touch screen as characters or images.
The power supply unit 180 receives external power or internal power under the control of the controller 170 and provides the appropriate power required to operate each element and component.
The various embodiments described here may be implemented in a computer-readable medium using, for example, computer software, hardware or any combination thereof. For a hardware implementation, the embodiments described here may be implemented using at least one of an application-specific integrated circuit (ASIC), a digital signal processor (DSP), a digital signal processing device (DSPD), a programmable logic device (PLD), a field-programmable gate array (FPGA), a processor, a controller, a microcontroller, a microprocessor, or an electronic unit designed to execute the functions described here; in some cases such embodiments may be implemented in the controller 170. For a software implementation, embodiments such as processes or functions may be implemented with separate software modules that allow at least one function or operation to be executed. The software code may be implemented by a software application (or program) written in any suitable programming language, stored in the memory 150 and executed by the controller 170.
So far, mobile terminals have been described in terms of their functions. In the following, for the sake of brevity, a slide-type mobile terminal among the various types of mobile terminals (e.g. folder-type, bar-type, swing-type and slide-type) is taken as an example. The present invention can, however, be applied to any type of mobile terminal and is not limited to the slide type.
The mobile terminal 100 shown in Fig. 1 may be constructed to operate with wired and wireless communication systems, as well as satellite-based communication systems, that transmit data via frames or packets.
A communication system in which the mobile terminal according to the present invention can operate is now described with reference to Fig. 2.
Such communication systems may use different air interfaces and/or physical layers. For example, air interfaces used by communication systems include frequency division multiple access (FDMA), time division multiple access (TDMA), code division multiple access (CDMA), the universal mobile telecommunications system (UMTS) (in particular, long term evolution (LTE)), the global system for mobile communications (GSM) and so on. As a non-limiting example, the following description relates to a CDMA communication system, but the teaching applies equally to other types of systems.
Referring to Fig. 2, the wireless communication system may include a plurality of mobile terminals 100, a plurality of base stations (BS) 270, a base station controller (BSC) 275 and a mobile switching center (MSC) 280. The MSC 280 is configured to interface with a public switched telephone network (PSTN) 290, and also to interface with the BSC 275, which can be coupled to the base stations 270 via backhaul lines. The backhaul lines can be constructed according to any of several known interfaces, including for example E1/T1, ATM, IP, PPP, frame relay, HDSL, ADSL or xDSL. It will be appreciated that the system shown in Fig. 2 may include a plurality of BSCs 275.
Each BS 270 can serve one or more sectors (or regions), each sector covered by an omnidirectional antenna or an antenna pointing in a specific direction radially away from the BS 270. Alternatively, each sector may be covered by two or more antennas for diversity reception. Each BS 270 may be constructed to support a plurality of frequency assignments, each frequency assignment having a specific spectrum (e.g. 1.25 MHz, 5 MHz, etc.).
The intersection of a sector and a frequency assignment may be referred to as a CDMA channel. A BS 270 may also be referred to as a base transceiver subsystem (BTS) or by other equivalent terms. In such a case, the term "base station" can be used to broadly denote a single BSC 275 and at least one BS 270. A base station may also be referred to as a "cell site"; alternatively, each sector of a particular BS 270 may be referred to as a cell site.
As shown in Fig. 2, a broadcast transmitter (BT) 295 sends broadcast signals to the mobile terminals 100 operating in the system. The mobile communication module 111 shown in Fig. 1 is provided at the mobile terminal 100 to receive the broadcast signals sent by the BT 295. Fig. 2 also shows several global positioning system (GPS) satellites 300, which help to locate at least one of the plurality of mobile terminals 100.
Although several satellites 300 are depicted in Fig. 2, it should be understood that useful location information can be obtained with any number of satellites. The mobile terminal shown in Fig. 1 may also include a GPS module, which is typically configured to cooperate with the satellites 300 to obtain the desired location information. Instead of or in addition to GPS tracking, other techniques that can track the position of the mobile terminal may be used. Furthermore, at least one GPS satellite 300 can selectively or additionally handle satellite DMB transmission. In a typical operation of the wireless communication system, each BS 270 receives reverse link signals from various mobile terminals 100, which typically engage in calls, messaging and other types of communication. Each reverse link signal received by a particular BS 270 is processed within that BS 270, and the resulting data are forwarded to the associated BSC 275. The BSC provides call resource allocation and mobility management functions, including the coordination of soft handoff procedures between the BSs 270. The BSC 275 also routes the received data to the MSC 280, which provides additional routing services for interfacing with the PSTN 290. Similarly, the PSTN 290 interfaces with the MSC 280, the MSC interfaces with the BSC 275, and the BSC 275 in turn controls the BSs 270 to send forward link signals to the mobile terminals 100.
Based on the above mobile terminal hardware structure and communication system, the embodiments of the method of the present invention are now proposed.
Embodiment one
Referring to Fig. 3, a method for shooting according to the user's viewpoint comprises:
S102: the smart device captures an image through either of the dual cameras on the device, and determines the number of people in the image using image recognition technology.
If the dual cameras on the smart device are not divided into primary and secondary cameras, an image is captured with either camera; if the dual cameras are divided into a primary and a secondary camera, an image is captured with the primary camera. The smart device then obtains the number of people in the image using image recognition technology.
After the smart device captures an image containing faces with the camera, it automatically detects and tracks the faces in the image and applies a series of related techniques to the detected faces; this is usually also called portrait recognition or face recognition.
Face recognition mainly includes four components: face image acquisition and detection, face image preprocessing, face image feature extraction, and matching and recognition.
1. Face image acquisition and detection:
Face image acquisition: different face images can be collected through the camera lens, such as still images, dynamic images, different positions and different expressions. When the user is within the shooting range of the acquisition device, the device can automatically search for and capture the user's face image.
Face detection: face detection is mainly used as preprocessing for face recognition, i.e. accurately locating the position and size of the face in the image. Face images contain very rich pattern features, such as histogram features, color features, template features, structural features and Haar features.
Face detection picks out the useful information among these and uses such features to detect faces. The mainstream face detection method applies the AdaBoost learning algorithm to the above features. AdaBoost is a classification method that combines several weaker classifiers into a new, very strong classifier.
During face detection, the AdaBoost algorithm is used to select the rectangular features (weak classifiers) that best represent a face, the weak classifiers are combined into strong classifiers by weighted voting, and several trained strong classifiers are then connected in series into a cascade classifier, which effectively improves the detection speed of the classifier.
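The cascade structure described above can be sketched in a few lines: each weak classifier thresholds one rectangular-feature value, a strong classifier is a weighted vote of weak classifiers, and a cascade accepts a window only if every stage accepts it. All features, thresholds and weights below are invented for illustration, not learned:

```python
def weak(feature_value, threshold, polarity):
    """Weak classifier: vote 1 if polarity*value < polarity*threshold, else 0."""
    return 1 if polarity * feature_value < polarity * threshold else 0

def strong(features, stage):
    """Strong classifier: weighted vote of weak classifiers vs half the total weight."""
    score = sum(alpha * weak(features[i], thr, pol)
                for (i, thr, pol, alpha) in stage)
    return score >= 0.5 * sum(alpha for (_, _, _, alpha) in stage)

def cascade(features, stages):
    """A window is accepted as a face only if every stage accepts it."""
    return all(strong(features, stage) for stage in stages)

# Two toy stages over three hypothetical rectangular-feature values.
stages = [
    [(0, 0.5, 1, 1.0), (1, 0.3, -1, 0.5)],  # stage 1: two weak classifiers
    [(2, 0.8, 1, 2.0)],                     # stage 2: one weak classifier
]
print(cascade([0.2, 0.9, 0.4], stages))  # -> True  (passes both stages)
print(cascade([0.9, 0.1, 0.9], stages))  # -> False (rejected by stage 1)
```

The speed gain of a real cascade comes from the early exit: most non-face windows are rejected by the cheap first stages and never reach the expensive later ones.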
2. Face image preprocessing:
Face image preprocessing: image preprocessing for faces is the process of operating on the image based on the face detection result, ultimately serving feature extraction. Because the original image acquired by the system is constrained by various conditions and subject to random interference, it usually cannot be used directly; in an early stage of image processing it must undergo preprocessing such as gray-level correction and noise filtering. For face images, the preprocessing process mainly includes light compensation, gray-level transformation, histogram equalization, normalization, geometric correction, filtering, sharpening and so on.
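Of the preprocessing steps listed, histogram equalization is the most formulaic; a minimal integer-arithmetic sketch over a flat list of gray levels follows (real pipelines operate on 2-D images, typically via a library routine, and the tiny 4-level example is only for readability):

```python
def equalize_hist(pixels, levels=256):
    """Map each gray level v to (cdf(v) - cdf_min) * (levels - 1) // (n - cdf_min)."""
    n = len(pixels)
    hist = [0] * levels
    for v in pixels:
        hist[v] += 1
    cdf, total = [0] * levels, 0
    for v in range(levels):
        total += hist[v]
        cdf[v] = total
    cdf_min = min(c for c in cdf if c > 0)
    if n == cdf_min:                  # constant image: nothing to stretch
        return list(pixels)
    return [(cdf[v] - cdf_min) * (levels - 1) // (n - cdf_min) for v in pixels]

# A dark 2x2 patch (gray levels 0..3) gets stretched over the full range.
print(equalize_hist([0, 0, 1, 2], levels=4))  # -> [0, 0, 1, 3]
```

Equalization spreads the occupied gray levels over the whole range, which is exactly the contrast boost wanted before feature extraction on unevenly lit faces.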
3. Face image feature extraction for face recognition:
Face image feature extraction: the features usable by a face recognition system are generally divided into visual features, pixel-statistics features, face image transform-coefficient features, face image algebraic features, etc. Face feature extraction is carried out on certain features of the face. Face feature extraction, also called face representation, is the process of modeling the features of a face. Methods of face feature extraction can be summarized into two categories: knowledge-based representation methods, and representation methods based on algebraic features or statistical learning.
Knowledge-based representation methods mainly obtain feature data that help classify faces according to the shape descriptions of the facial organs and the distance characteristics between them; the feature components usually include the Euclidean distance, curvature and angle between feature points. A face is locally composed of the eyes, nose, mouth, chin, etc.; geometric descriptions of these parts and of the structural relations between them can serve as important features for recognizing a face, and these features are called geometric features. Knowledge-based face representation methods include methods based on geometric features and template-matching methods.
4. Face image matching and recognition:
Face image matching and recognition: the extracted feature data of the face image are searched and matched against the feature templates stored in a database. A threshold is set; when the similarity exceeds this threshold, the matching result is output. Face recognition compares the face features to be recognized with the stored face feature templates and judges the identity of the face according to the degree of similarity.
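The threshold-gated template search described above can be sketched as follows. This is a minimal illustration under stated assumptions: face features are plain vectors, similarity is cosine similarity, and the database entries are made-up; a production system would use learned embeddings.

```python
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def identify(query, templates, threshold=0.9):
    """Search the template database; output the best match
    only if its similarity exceeds the threshold."""
    best_id, best_sim = None, -1.0
    for person_id, feat in templates.items():
        sim = cosine_similarity(query, feat)
        if sim > best_sim:
            best_id, best_sim = person_id, sim
    return best_id if best_sim >= threshold else None

db = {"alice": [0.9, 0.1, 0.3], "bob": [0.1, 0.8, 0.5]}
print(identify([0.88, 0.12, 0.28], db))               # -> alice
print(identify([0.5, 0.5, 0.5], db, threshold=0.99))  # -> None (below threshold)
```

Raising the threshold trades recall for precision: an ambiguous query returns no identity rather than a wrong one.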
S103: the smart device performs distance measurement on each person in the image through the binocular ranging technology of the dual cameras, obtaining the distance between each person and the smart device.
The smart device obtains the distance to each person through its dual cameras by using a binocular positioning algorithm. The binocular positioning algorithm flow includes: offline calibration, binocular rectification, and binocular matching.
1. Offline calibration:
The purpose of calibration is to obtain the intrinsic parameters of the cameras (focal length, image center, distortion coefficients, etc.) and the extrinsic parameters (the rotation matrix R and translation matrix T between the two cameras). The most common method at present is Zhang Zhengyou's chessboard calibration method, which is implemented in both OpenCV and Matlab. In general, to obtain higher calibration accuracy, an industrial-grade glass calibration panel (60*60 squares) works better. Some also recommend Matlab, because its accuracy and visualization are somewhat better, and Matlab results saved as XML can be read directly by OpenCV, although the procedure is more cumbersome than OpenCV's. Fig. 9 is a Matlab binocular vision calibration figure.
The steps are:
(1) Calibrate the left camera to obtain its intrinsic and extrinsic parameters.
(2) Calibrate the right camera to obtain its intrinsic and extrinsic parameters.
(3) Perform stereo calibration to obtain the translation and rotation relationship between the two cameras.
2. Binocular rectification:
The purpose of rectification is that, between the reference image and the target image, only a difference in the X direction remains, which improves the accuracy of disparity computation. Rectification is divided into two steps:
(1) Distortion correction
The effect of distortion correction is shown in Figure 10.
(2) Transforming the cameras into the canonical form
Because rectification recalculates the positions of all image points, the larger the resolution the algorithm processes, the more time-consuming it is, and two images generally need to be processed in real time. Since this algorithm is highly parallelizable and regular, hardware acceleration (e.g. with IVE) is recommended, similar to the acceleration approach in OpenCV: first compute the mapping (Map), then apply the mapping in parallel to recompute the pixel positions. The rectification function in OpenCV is cvStereoRectify. The transformation of the cameras into canonical form is shown in Figure 11.
3. Binocular matching:
Binocular matching is the core of binocular depth estimation. It has been developed for many years and very many algorithms exist; the main goal is to compute the relative matching relationship between the pixels of the reference image and the target image. The algorithms are mainly divided into local and non-local ones. There are generally the following steps:
(1) Matching cost computation
(2) Cost aggregation
(3) Disparity computation/optimization
(4) Disparity refinement
Using a fixed-size or variable-size window, the best matching position along the same row is computed. The simplest local method finds the best corresponding point on the same row; the difference in X coordinate between the left and right views is the disparity. To increase robustness against noise and illumination, a fixed window can be used for matching, or the images can first be transformed with LBP and then matched. Matching cost functions include SAD, SSD, NCC, etc. A maximum disparity can be used to limit the search range, and integral images and box filters can be used to accelerate the computation. Currently the better-performing local matching algorithms are the Guided Filter-based binocular matching algorithms using box filters and integral images. Local algorithms are easy to parallelize and fast, but they perform poorly in regions with little texture; the image is therefore usually segmented into texture-rich and texture-sparse regions and the matching window size is adjusted per region (e.g. a small window where texture is sparse) to improve the matching result.
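The per-pixel scanline search described above can be sketched as follows. This is a minimal SAD block-matching illustration on a synthetic 1-D row pair (the pattern and shift are made up); it assumes rectified images, so a left pixel at x appears in the right image at x - d.

```python
def sad(a, b):
    """Sum of absolute differences: the matching cost between two windows."""
    return sum(abs(x - y) for x, y in zip(a, b))

def disparity_for_pixel(left_row, right_row, x, window=3, max_disp=8):
    """Find, on the same row, the right-image window that best matches the
    left-image window centred at x; the X offset of the best match is the disparity."""
    half = window // 2
    ref = left_row[x - half: x + half + 1]
    best_d, best_cost = 0, float("inf")
    for d in range(0, max_disp + 1):      # search limited by the maximum disparity
        if x - d - half < 0:
            break
        cand = right_row[x - d - half: x - d + half + 1]
        cost = sad(ref, cand)
        if cost < best_cost:
            best_d, best_cost = d, cost
    return best_d

# Synthetic rectified pair: the right row is the left row shifted left by 2 px.
left = [10, 10, 50, 90, 50, 10, 10, 10, 10, 10]
right = left[2:] + [10, 10]
print(disparity_for_pixel(left, right, x=3))  # -> 2
```

Real implementations aggregate this cost over 2-D windows and accelerate it with integral images, as the text notes, but the search logic is the same.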
Non-local matching algorithms treat the search for disparity as minimizing a loss function defined over the whole binocular pair; finding the minimum of this loss function yields the optimal disparity relationship. They focus on solving the matching problem in ambiguous regions of the image and mainly include dynamic programming (Dynamic Programming), belief propagation (Belief Propagation) and graph cuts (Graph Cut). Currently the best-performing one is the graph-cut algorithm; the graph-cut matching provided in OpenCV is very time-consuming.
The graph-cut algorithm was mainly proposed to solve the problem that dynamic programming cannot merge the continuity constraints in the horizontal and vertical directions; using these constraints, the matching problem is treated as finding a minimal cut in a graph.
Since they consider global energy minimization, non-local algorithms are generally time-consuming and hard to accelerate in hardware, but they handle occlusion and sparse texture better. After the matching points are obtained, the matches with high confidence are usually detected and confirmed by left-right consistency checking. Much like forward-backward checking in optical-flow matching, only points that pass the left-right consistency check are regarded as stable matching points. Points caused by occlusion, noise or mismatching can also be found in this way.
For the post-processing of the disparity map, median filtering is used: the gray value of the current point is replaced by the median of its neighborhood pixels. This method removes salt-and-pepper noise very well, and can remove isolated points caused by noise or by weak-texture matching failures.
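The median post-processing step can be sketched as follows: a 3x3 median filter over a toy disparity map (the values are illustrative) removes the two salt-and-pepper outliers while leaving the flat region unchanged.

```python
from statistics import median

def median_filter_3x3(img):
    """Replace each interior pixel by the median of its 3x3 neighborhood."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]          # borders are kept as-is
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            neigh = [img[y + dy][x + dx]
                     for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            out[y][x] = median(neigh)
    return out

# Disparity map with two salt-and-pepper outliers (255 and 0) in a flat region.
disp = [
    [10, 10, 10, 10],
    [10, 255, 10, 10],
    [10, 10, 0, 10],
    [10, 10, 10, 10],
]
print(median_filter_3x3(disp))  # both outliers are replaced by 10
```

Because the outliers are isolated, the neighborhood median ignores them entirely, which is exactly why median filtering suits this kind of noise better than averaging.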
The binocular distance-measurement process is commonly divided into six steps: camera calibration, image acquisition, image preprocessing, target detection and feature extraction, stereo matching, and 3D reconstruction, as shown in Figure 12.
S1031 Camera calibration
Camera calibration determines the position, intrinsic parameters and extrinsic parameters of the cameras, in order to establish the imaging model and determine the correspondence between an object point in the world coordinate system and its image point on the image plane. One of the basic tasks of stereo vision is to compute the geometric information of objects in 3D space from the images obtained by the cameras, and thereby to reconstruct and recognize objects. The geometric model of camera imaging determines the correlation between the 3D geometric position of a point on the surface of a space object and its corresponding point in the image; the parameters of this geometric model are the camera parameters. Under normal circumstances these parameters can only be obtained by experiment, and this process is called camera calibration. Camera calibration must determine the internal geometric and optical characteristics of the camera (intrinsic parameters) and the 3D position and orientation of the camera coordinate system relative to a world coordinate system (extrinsic parameters). In computer vision, if multiple cameras are used, each camera must be calibrated.
S1032 Image acquisition
In binocular vision, image acquisition uses two cameras at different positions, or one camera that is moved or rotated, to shoot the same scene and obtain two images from different perspectives. In a binocular vision system, the acquisition of depth information is carried out in two steps.
S1033 Image preprocessing
The two-dimensional image produced by the optical imaging system contains various random noise and distortion caused by the environment, so the original image needs to be preprocessed to suppress useless information, highlight useful information and improve image quality. Image preprocessing has two main purposes: to improve the visual effect and clarity of the image, and to make the image more suitable for computer processing and convenient for various feature analyses.
S1034 Target detection and feature extraction
Target detection means extracting the target object to be detected from the preprocessed image. Feature extraction means extracting specified feature points from the detected target. Since there is still no universally applicable theory of image feature extraction, the matching features used in stereo vision research are diverse. Currently, the common matching features are mainly region features, line features and point features. In general, large-scale features contain richer image information and are easy to match quickly, but they are few in an image, their localization accuracy is poor, and they are difficult to extract and describe. Small-scale features are numerous but carry less information, so during matching stronger constraint criteria and matching strategies are needed to overcome ambiguous matches and improve efficiency. Good matching features should have stability, invariance, distinguishability and uniqueness, and the ability to resolve ambiguous matches effectively.
S1035 Stereo matching
Stereo matching establishes the correspondence between features according to the computation of the selected features, mapping together the image points of the same physical point in space in different images. When a 3D scene is projected into 2D images, the images of the same scene under different perspectives can differ greatly, and the factors in the scene — such as the scene's geometry and physical characteristics, noise interference, illumination conditions and camera distortion — are all merged into the gray values of a single image. Therefore, accurately and unambiguously matching images that contain so many unfavorable factors is very difficult, and this problem has not yet been well solved. The effectiveness of stereo matching depends on solving three problems: finding the essential attributes between features, selecting the correct matching features, and establishing a stable algorithm that can correctly match the selected features.
S1036 3D reconstruction
After the disparity image is obtained by stereo matching, the depth image can be determined and the 3D information of the scene restored. The factors affecting distance-measurement accuracy mainly include camera calibration error, digital quantization effects, and the localization accuracy of feature detection and matching. Implementing 3D reconstruction in computer vision consists of several main technical links, each of which has its own main influencing factors and key techniques.
S104: the smart device automatically sets the shooting parameters of the dual cameras according to the distance between each person and the smart device, and shoots one photo for each person.
After the smart device obtains the distance between each person and the smart device from the dual cameras, it takes the distance between each person and the smart device and the ambient light as the basis, automatically sets the following shooting parameters, and shoots: aperture, shutter, ISO, focus, metering, white balance. If the dual cameras consist of a main camera and an auxiliary camera, only the shooting parameters of the main camera are adjusted and the main camera takes the photo; if the dual cameras are not distinguished as main and auxiliary, the parameters of both cameras are set at the same time, both cameras take the photo, and the two photos are then synthesized into one photo by an algorithm.
The parameter setting methods are as follows:
1. Setting the aperture
The aperture is expressed as an f-number; the smaller the f-number, the larger the aperture (e.g. f/1 > f/4 > f/8). The larger the aperture, the shallower the depth of field, and the easier it is to obtain a photo with a sharp subject and a blurred background. The smart device can configure the aperture according to the theme the user selects; if the user selects background-blurred shooting, the aperture is enlarged (the f-number is decreased).
2. Setting the shutter
Shutter speed is expressed as a length of time, e.g. 1/125 second, 1/8 second, 1 second; the larger the number, the longer the time and the slower the shutter speed. If the shutter speed is too slow, the motion of the people or objects being shot cannot be frozen, and hand shake of the photographer causes the photo to blur.
When the smart device judges that the people move little or the background light is relatively bright, the shutter is set to a faster value, such as 1/8 second; if the background light is dark, the shutter is slowed down, for example to a value of 2 seconds or more.
3. Setting the ISO
The lower the ISO value, the less sensitive to light and the finer the picture; in this case a larger aperture or a slower shutter speed is needed. The higher the ISO value, the more sensitive to light, but grain and noise appear in the picture; in this case a faster shutter speed or a smaller aperture can be used. When the smart device judges that the background light of the person being shot is dark, it automatically sets the ISO to a larger value, such as 800; when the background light is bright, the ISO is set to a smaller value, such as 200.
4. Setting the focus
The smart device automatically uses the selected person as the single focus point.
5. Setting the metering
There are mainly three metering modes: evaluative metering, center-weighted metering, and spot metering.
When there is no obvious large area of strong light or large area of shadow in the picture, evaluative metering is set. When the light in the picture is complicated and highly non-uniform, spot metering is selected and the metering is aimed at the main subject; for example, spot metering is used when shooting portraits.
6. Setting the white balance
When the user has not set the white balance manually, the smart device uses automatic white balance.
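Taken together, steps 1-6 amount to a rule table from the scene conditions to the exposure settings. A minimal sketch of such a rule table — the 500-lux threshold, the concrete f-numbers and all other values are illustrative assumptions, not the patent's actual logic:

```python
def auto_parameters(lux, subject_moving=False, blur_background=False,
                    uneven_light=False):
    """Choose shooting parameters roughly as described in steps 1-6."""
    params = {"focus": "single-point on selected person",  # step 4
              "white_balance": "auto"}                     # step 6
    # Step 1: a larger aperture (smaller f-number) gives the blurred background.
    params["aperture"] = "f/1.8" if blur_background else "f/8"
    # Step 2: little motion or bright light -> faster shutter; dark -> slow shutter.
    params["shutter"] = "1/8s" if (lux >= 500 or not subject_moving) else "2s"
    # Step 3: dark background -> high ISO (e.g. 800); bright -> low ISO (e.g. 200).
    params["iso"] = 800 if lux < 500 else 200
    # Step 5: complicated, uneven light -> spot metering on the subject.
    params["metering"] = "spot" if uneven_light else "evaluative"
    return params

# A moving subject in dim, uneven light, with background blur requested:
print(auto_parameters(lux=100, subject_moving=True,
                      blur_background=True, uneven_light=True))
```

One table like this would be evaluated per person, since each person's distance and lighting can differ within the same group shot.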
S105: after shooting a photo for each person, the smart device saves the photo centered on that person.
After the smart device has set the shooting parameters, taking each person's distance from the smart device and the ambient light as the basis, and has taken the shot, it saves each photo centered on the corresponding person. If, when the photo is centered on that person, some of the people would fall outside the photo, the focal length is adjusted (zoomed out) so that all the people can be kept in the photo.
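The center-then-widen behavior described above can be sketched in one dimension: choose a crop centered on the selected person, clamp it to the frame, and widen it (the digital analogue of zooming out) until every detected person fits. The widths and positions below are illustrative assumptions.

```python
def crop_centered(person_x, all_xs, frame_w, crop_w):
    """Return (left, right) of a crop of width crop_w centered on person_x,
    clamped to the frame; widen the crop ("zoom out") until every person fits."""
    while True:
        left = min(max(person_x - crop_w // 2, 0), frame_w - crop_w)
        right = left + crop_w
        if all(left <= x < right for x in all_xs) or crop_w >= frame_w:
            return left, right
        crop_w = min(crop_w * 2, frame_w)  # widen: the equivalent of zooming out

# Frame 4000 px wide; people at x = 500, 2000, 3500; save centered on x = 2000.
print(crop_centered(2000, [500, 2000, 3500], 4000, 2000))  # -> (0, 4000)
```

A 2000-px crop centered on the middle person would cut off the two at the edges, so the window doubles to the full frame width, matching the "zoom out so everyone fits" rule.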
S106: the smart device selects one photo and displays it on the screen of the smart device.
After the smart device has saved all the photos taken, it randomly selects one of them and displays it on the display screen of the smart device. The user can select a person in the viewfinder, and the photo centered on that person is then displayed.
In this embodiment, the shooting parameters are set with each person's distance from the smart device and the ambient light as the basis, and one photo is shot for each person, so that when a group photo is taken everyone can obtain a photo with himself as the focus, improving the user's shooting experience.
Embodiment two
With reference to Fig. 4, this embodiment provides another method of shooting according to the user's perspective. On the basis of embodiment one, the user is allowed to select certain people as the focus when shooting a group photo. As shown in step S101 in Fig. 4, before shooting the photo, the user selects certain important people in the viewfinder of the smart device (by tapping the image of the person); then, taking each selected person's distance from the smart device and the ambient light as the basis, the shooting parameters are set and one photo is shot for each selected person.
By selecting certain people as the focus before shooting, this embodiment can save shooting time and can also save the storage space of the smart device.
Embodiment three
With reference to Fig. 5, this embodiment provides another method of shooting according to the user's perspective. On the basis of embodiment one, after the smart device has shot a photo for each person, these photos can be shared with the corresponding people. When sharing these photos, the smart device automatically selects the photos of the people related to the sharing target chosen by the user. As shown in step S107 in Fig. 5, when sharing photos, the smart device automatically recognizes the other party's avatar using image recognition technology, matches the avatar against the people in the photos, and shares the successfully matched photo with the other party.
When the user selects a sharing target, the smart device obtains the avatar of the sharing target's instant messaging application (such as WeChat), then uses face recognition technology to find the photo whose focused person is the same person as the avatar, and shares that photo with him. If the other party's instant messaging application has no avatar set, the user's name is obtained, and the names and kinship relations corresponding to the people in the photos are obtained from a local database or a remote server through face recognition technology. The system then judges whether the user name in the other party's instant messaging application is the name, or the name of a relative, obtained from the photo match; if so, the photo is shared with him.
After the photos have been taken, the smart device can, by means of one-key sharing, automatically share each photo shot with a person as its focus to the corresponding person. The one-key sharing process is as follows:
1. After the smart device shoots a photo with a person as the focus, it searches the avatars of everyone in the contact lists of the instant messaging applications on the smart device (such as WeChat, QQ, Alipay, etc.).
2. The image of the person in the photo shot with that person as the focus is compared with the avatars in the contact list (using face recognition technology).
3. If the comparison succeeds, the photo is shared with the other party through the instant messaging application.
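The three-step one-key-sharing flow above can be sketched as follows. This is a minimal illustration under stated assumptions: the face comparison is stubbed out as a pluggable similarity function (a real system would use the face recognition matching described earlier), and the photo and contact records are made up.

```python
def one_key_share(photos, contacts, threshold=0.9, similarity=None):
    """For each photo shot with a person as its focus, find the contact whose
    avatar matches the focused face and return (photo, contact) pairs to share."""
    # Stand-in for face recognition; replace with a real face-matching function.
    sim = similarity or (lambda a, b: 1.0 if a == b else 0.0)
    shares = []
    for photo in photos:
        for contact in contacts:
            if sim(photo["face"], contact["avatar"]) >= threshold:
                shares.append((photo["id"], contact["name"]))
                break  # share each photo with at most one matched contact
    return shares

photos = [{"id": "IMG_1", "face": "face_A"}, {"id": "IMG_2", "face": "face_B"}]
contacts = [{"name": "Alice", "avatar": "face_A"},
            {"name": "Bob", "avatar": "face_B"}]
print(one_key_share(photos, contacts))  # -> [('IMG_1', 'Alice'), ('IMG_2', 'Bob')]
```

Photos whose focused person matches no avatar simply produce no pair, which corresponds to the fallback in the text of matching by user name instead.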
Through the automatic photo-sharing function, this embodiment can automatically share each photo shot with a person as its focus to the instant messaging application corresponding to that person after shooting. This makes it convenient for users to share photos after shooting, so that everyone receives the photo shot with himself as the focus, improving user satisfaction.
Embodiment four
With reference to Fig. 6, this embodiment provides a device for shooting according to the user's perspective, comprising:
P202 person recognition module: used to obtain an image through either one of the dual cameras on the smart device, and to obtain the number of people in the image using image recognition technology;
The person recognition module obtains the number of people in the viewfinder of the smart device using image recognition technology. Image recognition mainly includes four components: face image acquisition and detection, face image preprocessing, face image feature extraction, and matching and recognition.
1. Face image acquisition and detection for face recognition:
Face image acquisition: different face images, such as still images, dynamic images, different positions and different expressions, can all be collected through the camera lens. When the user is within the shooting range of the acquisition device, the device automatically searches for and captures the user's face image.
Face detection: face detection is mainly used as the preprocessing step of face recognition, i.e. accurately locating the position and size of the face in the image. A face image contains very rich pattern features, such as histogram features, color features, template features, structural features and Haar features.
Face detection picks out the useful information among these and uses these features to detect faces. Mainstream face detection methods apply the AdaBoost learning algorithm to the above features. AdaBoost is a classification method that combines several weak classification methods into a new, very strong classification method.
During face detection, the AdaBoost algorithm is used to select the rectangular features (weak classifiers) that best represent a face, the weak classifiers are combined into a strong classifier by weighted voting, and several trained strong classifiers are then connected in series into a cascade classifier, which effectively improves the detection speed of the classifier.
2. Face image preprocessing for face recognition:
Face image preprocessing: image preprocessing for faces is the process of operating on the image, based on the face detection result, ultimately to serve feature extraction. The original image acquired by the system often cannot be used directly because of various constraints and random interference; it must be preprocessed in the early stage of image processing, e.g. with gray-level correction and noise filtering. For face images, the preprocessing process mainly includes light compensation, gray-level transformation, histogram equalization, normalization, geometric correction, filtering and sharpening.
3. Face image feature extraction for face recognition:
Face image feature extraction: the features usable by a face recognition system are generally divided into visual features, pixel-statistics features, face image transform-coefficient features, face image algebraic features, etc. Face feature extraction is carried out on certain features of the face. Face feature extraction, also called face representation, is the process of modeling the features of a face. Methods of face feature extraction can be summarized into two categories: knowledge-based representation methods, and representation methods based on algebraic features or statistical learning.
Knowledge-based representation methods mainly obtain feature data that help classify faces according to the shape descriptions of the facial organs and the distance characteristics between them; the feature components usually include the Euclidean distance, curvature and angle between feature points. A face is locally composed of the eyes, nose, mouth, chin, etc.; geometric descriptions of these parts and of the structural relations between them can serve as important features for recognizing a face, and these features are called geometric features. Knowledge-based face representation methods include methods based on geometric features and template-matching methods.
4. Face image matching and recognition:
Face image matching and recognition: the extracted feature data of the face image are searched and matched against the feature templates stored in a database. A threshold is set; when the similarity exceeds this threshold, the matching result is output. Face recognition compares the face features to be recognized with the stored face feature templates and judges the identity of the face according to the degree of similarity.
P203 ranging module: used to perform distance measurement on each person in the image through the binocular ranging technology of the dual cameras on the smart device, obtaining the distance between each person and the smart device;
The ranging module uses the dual cameras on the smart device to measure the distance between each person and the smart device with binocular ranging technology. Dual cameras mainly come in two structural forms and four functional forms:
Two structural forms:
1. Integrated structure:
Two camera modules are packaged on one circuit board at the same time, then fixed and calibrated with a bracket. This structure places high demands on the packaging precision of the two cameras and requires high-precision packaging equipment such as AA (active alignment) equipment; the offset and optical-axis tilt of the two cameras must be tightly controlled, requiring special hardware such as a high-flatness circuit board, a firm base and a demagnetized motor, as well as a special packaging process.
2. Separate structure:
Two individual cameras are calibrated and fixed by a bracket. This scheme has relatively low assembly-precision requirements and does not require investment in high-precision equipment; the hardware merely adds a fixing bracket, and the production process only adds camera calibration and bracket fixing.
Four functional forms:
1. Dual cameras with the same viewing angle and the same chip:
These realize image synthesis and special effects and are feature-rich, e.g. pixel superposition, HDR, shoot-first-focus-later, super night shooting, virtual aperture, distance measurement and other functions.
2. Main camera + auxiliary camera:
These realize a few functions such as shoot-first-focus-later and background blurring.
3. Different-viewing-angle scheme:
A wide-angle lens and a narrow-angle lens are used to capture a close-range image and a long-range image respectively, and 3X/5X simulated optical zoom is realized through image synthesis, solving the problem of declining image sharpness when a single camera zooms while framing.
4. 3D-scanning dual cameras:
These realize 3D scanning and modeling of objects. Functionally this is similar to the scanning and modeling of Google's Project Tango, but the dual-camera hardware scheme is simpler and cheaper, while the scanning distance and accuracy differ.
The binocular distance-measurement principle used by the ranging module is as follows:
The smart device obtains the distance to each person through its dual cameras by using a binocular vision algorithm. The binocular vision algorithm flow includes: offline calibration, binocular rectification, and binocular matching.
1. Offline calibration:
The purpose of calibration is to obtain the intrinsic parameters of the cameras (focal length, image center, distortion coefficients, etc.) and the extrinsic parameters (the rotation matrix R and translation matrix T between the two cameras). The most common method at present is the chessboard calibration method, which is implemented in both OpenCV and Matlab. In general, to obtain higher calibration accuracy, an industrial-grade glass calibration panel (60*60 squares) works better. Some also recommend Matlab, because its accuracy and visualization are much better, and Matlab results saved as XML can be read directly by OpenCV, although the procedure is more cumbersome than OpenCV's. Fig. 9 is a Matlab binocular vision calibration figure.
The steps are:
(1) Calibrate the left camera to obtain its intrinsic and extrinsic parameters.
(2) Calibrate the right camera to obtain its intrinsic and extrinsic parameters.
(3) Perform stereo calibration to obtain the translation and rotation relationship between the two cameras.
2. Binocular rectification:
The purpose of rectification is that, between the reference image and the target image, only a difference in the X direction remains, which improves the accuracy of disparity computation. Rectification is divided into two steps:
(1) Distortion correction
The effect of distortion correction is shown in Figure 10.
(2) Transforming the cameras into the canonical form
Because rectification recalculates the positions of all image points, the larger the resolution the algorithm processes, the more time-consuming it is, and two images generally need to be processed in real time. Since this algorithm is highly parallelizable and regular, hardware acceleration (e.g. with IVE) is recommended, similar to the acceleration approach in OpenCV: first compute the mapping (Map), then apply the mapping in parallel to recompute the pixel positions. The rectification function in OpenCV is cvStereoRectify. The transformation of the cameras into canonical form is shown in Figure 11.
3. Binocular matching:
Binocular matching is the core of binocular depth estimation. It has been developed for many years and very many algorithms exist; the main goal is to compute the relative matching relationship between the pixels of the reference image and the target image. The algorithms are mainly divided into local and non-local ones. There are generally the following steps:
(1) Matching cost computation
(2) Cost aggregation
(3) Disparity computation/optimization
(4) Disparity refinement
Using a fixed-size or variable-size window, the best matching position along the same row is computed. The simplest local method finds the best corresponding point on the same row; the difference in X coordinate between the left and right views is the disparity. To increase robustness against noise and illumination, a fixed window can be used for matching, or the images can first be transformed with LBP and then matched. Matching cost functions include SAD, SSD, NCC, etc. A maximum disparity can be used to limit the search range, and integral images and box filters can be used to accelerate the computation. Currently the better-performing local matching algorithms are the Guided Filter-based binocular matching algorithms using box filters and integral images. Local algorithms are easy to parallelize and fast, but they perform poorly in regions with little texture; the image is therefore usually segmented into texture-rich and texture-sparse regions and the matching window size is adjusted per region (e.g. a small window where texture is sparse) to improve the matching result.
Non-local matching algorithms treat the search for disparities as minimizing a well-defined loss function over the whole binocular matching problem; the disparity assignment that minimizes this loss is the optimal match. They focus on resolving the ambiguous regions of the image, and mainly include dynamic programming (Dynamic Programming), belief propagation (Belief Propagation), and graph cuts (Graph Cut). Graph cuts currently give the best results, although the graph-cut matcher provided in OpenCV is very time-consuming.
The graph-cut approach was introduced mainly to overcome the inability of dynamic programming to fuse continuity constraints in both the horizontal and vertical directions; using these constraints, the matching problem is cast as finding a minimum cut in a graph built over the image.
Because they minimize a global energy, non-local algorithms are generally slow and hard to accelerate in hardware, but they handle occlusions and texture-sparse regions better.
After matches have been obtained, high-confidence matches are usually identified through a left-right consistency check: much like forward-backward checking in optical flow, only the points that pass the check in both directions are kept as stable matches. Points that fail it are ones corrupted by occlusion, noise, or mismatching, so the check also locates such errors.
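The left-right consistency check can be sketched in a few lines of NumPy. The round-trip tolerance is an assumption; real pipelines typically use 1 pixel.

```python
import numpy as np

def lr_consistency_mask(disp_left, disp_right, tol=1):
    """Keep pixel x only if the right-image pixel it maps to (x - d)
    maps back to (approximately) x. Occlusions and mismatches fail the
    round trip and are masked out, mirroring forward/backward checking
    in optical flow."""
    h, w = disp_left.shape
    xs = np.arange(w)[None, :].repeat(h, axis=0)
    ys = np.arange(h)[:, None].repeat(w, axis=1)
    xr = np.clip(xs - disp_left, 0, w - 1)   # where each left pixel lands on the right
    back = disp_right[ys, xr]                # disparity stored at that landing point
    return np.abs(disp_left - back) <= tol
```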
For post-processing of the disparity map, median filtering replaces each pixel's gray value with the median of its neighborhood. This removes salt-and-pepper noise very well, and it eliminates isolated points caused by noise or by matching failures in weakly textured regions.
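A minimal 3x3 median filter illustrating this post-processing step (the edge padding is an implementation choice, not something the text specifies):

```python
import numpy as np

def median_filter3(disp):
    """3x3 median filter over a disparity map (edge-padded). A single
    isolated outlier -- salt-and-pepper noise or a lone mismatch --
    cannot be the median of its 9-pixel neighborhood, so it is replaced
    by a value agreeing with its neighbors."""
    padded = np.pad(disp, 1, mode='edge')
    h, w = disp.shape
    # stack the 9 shifted views, then take the per-pixel median
    stack = np.stack([padded[dy:dy + h, dx:dx + w]
                      for dy in range(3) for dx in range(3)])
    return np.median(stack, axis=0)
```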
The binocular ranging process of the ranging module consists of six steps: camera calibration, image acquisition, image preprocessing, target detection and feature extraction, stereo matching, and 3D reconstruction.
(1) camera calibration
Camera calibration determines the camera's position and its internal and external parameters, in order to establish the imaging model, i.e. the correspondence between an object point in the world coordinate system and its image point on the image plane. One of the basic tasks of stereo vision is to compute the geometric information of objects in 3D space from the images obtained by the cameras, and thereby reconstruct and recognize objects. The geometric model of camera imaging determines the relationship between the 3D position of points on an object's surface and their corresponding image points; the parameters of this geometric model are the camera parameters. Under normal circumstances these parameters can only be obtained experimentally, and this process is known as camera calibration. Calibration must determine the camera's internal geometric and optical characteristics (internal parameters) and the 3D position and orientation of the camera coordinate system relative to a world coordinate system (external parameters). In computer vision, if multiple cameras are used, each camera must be calibrated.
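The imaging model that calibration recovers can be illustrated with a minimal pinhole projection. The intrinsic matrix and the point used in the example are made-up values for illustration only.

```python
import numpy as np

def project_point(K, R, t, Xw):
    """Pinhole model: a world point Xw maps to pixel (u, v) via
    x = K (R Xw + t). Calibration is the process of estimating K
    (internal parameters) and R, t (external parameters) so that this
    mapping is known."""
    Xc = R @ Xw + t            # world coordinates -> camera coordinates
    x = K @ Xc                 # camera coordinates -> homogeneous pixel
    return x[:2] / x[2]        # perspective division
```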
(2) image obtains
Binocular image acquisition uses two cameras at different positions, or a single camera that is moved or rotated, to photograph the same scene, obtaining two images from different viewpoints. In a binocular vision system, depth information is then acquired in two steps.
(3) image preprocessing
The 2D image produced by the optical imaging system contains various environment-induced random noise and distortion, so the original image must be preprocessed to suppress useless information, emphasize useful information, and improve image quality. Image preprocessing has two main purposes: improving the visual effect and clarity of the image, and making the image more amenable to computer processing and convenient for subsequent feature analysis.
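As one minimal example of such preprocessing, a percentile-based contrast stretch spreads the useful gray range over the full output range while suppressing outliers. The 2%/98% cut-offs are illustrative assumptions, not values from the text.

```python
import numpy as np

def stretch_contrast(img, lo=2, hi=98):
    """Percentile contrast stretch: clip gray values outside the
    [lo, hi] percentiles and rescale the rest to [0, 255], improving
    clarity before feature extraction and matching."""
    a, b = np.percentile(img, [lo, hi])
    if b <= a:                              # flat image: nothing to stretch
        return img.astype(np.uint8)
    out = (img.astype(np.float64) - a) / (b - a)
    return (np.clip(out, 0.0, 1.0) * 255).astype(np.uint8)
```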
(4) target detection and feature extraction
Target detection extracts the object to be detected from the preprocessed image; feature extraction extracts specified feature points from the detected target. Because no universally applicable theory of image feature extraction exists yet, a diversity of matching features is used in stereo vision research. Common matching features mainly include region features, line features, and point features. In general, large-scale features contain relatively rich image information and permit quick matching, but they are few in number, their localization accuracy is poor, and they are difficult to extract and describe. Small-scale features are numerous but carry less information, so matching them requires stronger constraint criteria and matching strategies to overcome ambiguous matches and improve efficiency. A good matching feature should be stable, invariant, discriminative, and unique, and should effectively resolve matching ambiguity.
(5) Stereo matching
Stereo matching establishes correspondences between the selected features computed in the two images, mapping the imaged positions of the same physical point across different images. When a 3D scene is projected to 2D, images of the same scene from different viewpoints can differ greatly, and many factors in the scene — its geometry and physical characteristics, noise interference, illumination conditions, camera distortion — are all folded into single gray values. Accurately and unambiguously matching images containing so many confounding factors is therefore very difficult, and this problem has still not been well solved. The effectiveness of stereo matching depends on solving three problems: finding the essential attributes of features, selecting the correct matching features, and establishing stable algorithms that match the selected features correctly.
(6) three-dimensional reconstruction
After the disparity image has been obtained by stereo matching, the depth image can be determined and the 3D information of the scene recovered. The factors affecting ranging accuracy are mainly camera calibration error, digital quantization, and the localization accuracy of feature detection and matching.
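For a rectified pair, going from disparity to depth reduces to the classic triangulation formula Z = f·B/d. A minimal sketch (the focal length and baseline in the example are made up):

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Triangulation for a rectified stereo pair: Z = f * B / d.
    Because Z varies as 1/d, a fixed disparity error causes a depth
    error that grows quadratically with distance -- which is why
    calibration and matching accuracy dominate ranging accuracy."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px
```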
P204 parameter-setting module: sets the shooting focal length according to each person's distance from the smart device, and sets the shooting aperture, shutter, ISO, exposure, and white balance according to each person's background.
The parameter-setting module sets the parameters as follows:
(1) Setting the aperture
Aperture is expressed as an f-number; the smaller the f-number, the larger the aperture (e.g. f/1 > f/4 > f/8). The larger the aperture, the shallower the depth of field, and the easier it is to take a photo with a sharp subject and a blurred background. The smart device sets the aperture according to the theme the user selects: if the user chooses a blurred-background shot, the device enlarges the aperture (selects a smaller f-number).
(2) Setting the shutter
Shutter speed is expressed as a length of time, e.g. 1/125 s, 1/8 s, 1 s: the larger the number, the longer the exposure and the slower the shutter. Too slow a shutter cannot freeze the motion of the people or objects being shot, and the photographer's hand shake causes motion blur in the photo.
When the smart device judges that the subjects move little or the background light is relatively bright, it sets the shutter to a short value such as 1/8 s; if the background light is dark, it lengthens the exposure, for example to 2 s or more.
(3) Setting the ISO
The lower the ISO, the less sensitive the sensor is to light and the finer the image; in that case a larger aperture or a slower shutter is needed. The higher the ISO, the more sensitive the sensor, but grain and noise appear in the image; in that case a faster shutter or a smaller aperture can be used. When the smart device judges that the background light on the subject is dark, it automatically sets the ISO to a larger value such as 800; when the background light is bright, it sets the ISO to a smaller value such as 200.
(4) Setting the focus
The smart device automatically performs single-point focusing on the selected person.
(5) light is surveyed in setting
There are mainly three types of metering modes: light, central heavy spot light-metering, spot light-metering are surveyed in evaluation.
When not having the strong light of apparent bulk or simultaneous bulk shade in picture, it is set as evaluation and surveys light; In light in complicated and highly non-uniform picture, selected element metering mode;Alignment subject main body carries out survey light, for example is clapping When portrait, spot light-metering is used.
(6) Setting the white balance
When the user has not set the white balance manually, the smart device uses automatic white balance.
P205 shooting module: automatically sets the camera's shooting parameters according to the distance between each person and the smart device, and shoots one photo for each person.
The shooting module sets the shooting parameters based on each person's distance from the smart device and the ambient light, and then shoots.
P206 storage module: after a photo has been shot for each person, saves the photo centered on that person.
After the smart device shoots a photo, the storage module saves it centered on the person. If centering the photo on the person would leave some of the people outside the frame, the focal length is adjusted (zooming out) so that everyone fits in the photo.
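The "center on the person, zoom out until everyone fits" rule can be sketched as a 1-D toy. Real framing is of course 2-D, and the 10% zoom step is an assumption purely for illustration.

```python
def frame_on_person(person_cx, frame_w, others_bounds, zoom=1.0):
    """Center a 1-D horizontal frame of width frame_w on the chosen
    person's center, then widen it (simulating zooming out) until every
    other person's [left, right] extent fits inside the frame."""
    half = frame_w * zoom / 2
    lo, hi = person_cx - half, person_cx + half
    for left, right in others_bounds:
        while left < lo or right > hi:
            zoom *= 1.1                       # zoom out in 10% steps
            half = frame_w * zoom / 2
            lo, hi = person_cx - half, person_cx + half
    return lo, hi, zoom
```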
P207 display module: selects a photo and displays the selected photo on the screen of the smart device.
After the smart device has saved all the photos it shot, the display module randomly selects one of them and shows it on the display screen of the smart device. The user can also select a person in the shooting viewfinder, and the display module then shows that person's photo on the screen according to the user's selection.
In this embodiment, the shooting parameters are set based on each person's distance from the smart device and the ambient light, and one photo is shot for each person, so that when a group photo is taken everyone obtains a photo focused on themselves, improving the user's shooting experience.
Embodiment six
With reference to Fig. 7, this embodiment provides another device for shooting according to the user's perspective; on the basis of embodiment five it further includes a P201 person-selection module.
The person-selection module lets the user select certain people as focus targets when shooting a group photo. For example, before shooting, the user selects certain important people in the viewfinder of the smart device (by tapping their images); the device then sets the shooting parameters based on each selected person's distance from the smart device and the ambient light, and shoots one photo for each selected person.
By selecting certain people as focus targets before shooting, this embodiment saves shooting time and also saves storage space on the smart device.
Embodiment seven
With reference to Fig. 7, this embodiment provides another device for shooting according to the user's perspective; on the basis of embodiment five it further includes a P208 sharing module. After a photo has been shot for each person, the sharing module can share these photos with the corresponding people. When sharing the photos, the smart device automatically selects, for the sharing target the user chose, the photo relevant to that person.
When the user selects a sharing target, the smart device obtains the avatar used by that target in an instant-messaging application (such as WeChat), then uses face recognition to find the photo whose focused person matches the sharing target's avatar, and shares that photo with them.
After shooting, the smart device can also share each person's focused photo with the corresponding person automatically through one-key sharing. The one-key sharing process is as follows:
1. After the smart device shoots a photo focused on a person, it searches the avatars of everyone in the contact lists of its instant-messaging applications (such as WeChat, QQ, and Alipay).
2. The person's image in the focused photo is compared with the avatars in the contact list (using face recognition technology).
3. If the comparison succeeds, the photo is shared with that contact through the instant-messaging application.
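The matching step can be sketched with face embeddings. The embedding vectors stand in for whatever face-recognition backend the device uses; matching photos to contact avatars is the patent's idea, while the cosine-similarity comparison, the threshold, and all names below are assumptions for illustration.

```python
import numpy as np

def match_photos_to_contacts(photo_faces, contact_avatars, threshold=0.8):
    """photo_faces and contact_avatars map identifiers to unit-length
    face-embedding vectors. Each photo is assigned to the contact whose
    avatar embedding is most similar, if that similarity clears the
    threshold; the result says which photo to share with whom."""
    shares = {}
    for photo_id, face in photo_faces.items():
        best, best_sim = None, threshold
        for contact, avatar in contact_avatars.items():
            sim = float(np.dot(face, avatar))   # cosine similarity of unit vectors
            if sim > best_sim:
                best, best_sim = contact, sim
        if best is not None:
            shares[photo_id] = best             # share this photo with this contact
    return shares
```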
Through the photo-sharing function, this embodiment automatically shares each person's focused photo to that person's instant-messaging account after shooting. This makes it convenient for users to share photos after a group shot; everyone receives a photo focused on themselves, improving user satisfaction.
The technical principles of the embodiments of the present invention have been described above in conjunction with specific embodiments. These descriptions are only intended to explain the principles of the embodiments and shall not be construed in any way as limiting the protection scope of the embodiments. Without creative effort, those skilled in the art can conceive of other specific implementations of the embodiments of the present invention, and these implementations all fall within the protection scope of the embodiments of the present invention.
It should be noted that, in this document, the terms "include", "comprise", or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or device comprising a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or device. In the absence of further limitation, an element defined by the phrase "including a ..." does not exclude the presence of other identical elements in the process, method, article, or device that includes the element.
The serial numbers of the above embodiments of the present invention are for description only and do not represent the relative merits of the embodiments.
Through the above description of the embodiments, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, and of course also by hardware, though in many cases the former is the better implementation. Based on this understanding, the part of the technical solution of the present invention that contributes beyond the prior art can be embodied in the form of a software product stored on a storage medium (such as ROM/RAM, a magnetic disk, or an optical disc), including instructions that cause a terminal device (which may be a mobile phone, computer, server, air conditioner, network device, etc.) to execute the methods described in the embodiments of the present invention.
The above is only a preferred embodiment of the present invention and does not limit its scope; all equivalent structures or equivalent process transformations made using the contents of the specification and drawings of the present invention, whether applied directly or indirectly in other related technical fields, are likewise included within the protection scope of the present invention.

Claims (8)

1. A method for shooting according to the user's perspective, characterized by comprising:
a smart device obtaining an image through either camera of the dual cameras on the smart device, and the smart device obtaining the number of people in said image using image recognition technology;
the smart device performing distance measurement on each person in said image through the binocular ranging technology of the dual cameras, obtaining the distance between each person and the smart device;
the smart device automatically setting the shooting parameters of the dual cameras according to the distance between each person and the smart device, and shooting, for each person, one group photo focused on that person;
the smart device selecting a photo and displaying it on the screen of the smart device;
the smart device, when sharing photos, obtaining the avatar of the sharing target's instant-messaging application, using face recognition technology to find the photo whose focused person matches the sharing target's avatar, and sharing that photo with the sharing target.
2. The method according to claim 1, characterized in that the smart device sets the shooting focal length according to each person's distance from the smart device, and the smart device sets the shooting aperture, shutter, ISO, exposure, and white balance according to each person's background.
3. The method according to claim 1, characterized in that after the smart device shoots, for each person, one group photo focused on that person, it saves the photo centered on that person by adjusting the focal length.
4. The method according to claim 1, characterized in that before the smart device shoots, the user manually selects the people to be shot in the viewfinder of the smart device; the smart device shoots one focused group photo only for each person selected by the user.
5. A device for shooting according to the user's perspective, characterized by comprising:
a person-recognition module for obtaining an image through either camera of the dual cameras on a smart device and obtaining the number of people in said image using image recognition technology;
a ranging module for performing distance measurement on each person in said image through the binocular ranging technology of the dual cameras on the smart device, obtaining the distance between each person and the smart device; a shooting module for automatically setting the shooting parameters of the dual cameras according to the distance between each person and the smart device, and shooting, for each person, one group photo focused on that person;
a display module for selecting a photo and displaying the selected photo on the screen of the smart device; the smart device, when sharing photos, obtains the avatar of the sharing target's instant-messaging application, uses face recognition technology to find the photo whose focused person matches the sharing target's avatar, and shares that photo with the sharing target.
6. device according to claim 5, which is characterized in that further include:
a parameter-setting module for setting the shooting focal length according to each person's distance from the smart device, and for setting the shooting aperture, shutter, ISO, exposure, and white balance according to each person's background.
7. device according to claim 5, which is characterized in that further include:
a storage module for saving, after one group photo focused on each person has been shot, the photo centered on that person by adjusting the focal length.
8. device according to claim 5, which is characterized in that further include:
a person-selection module through which, before the smart device shoots, the user manually selects the people to be shot in the viewfinder of the smart device; the smart device shoots one focused group photo only for each person selected by the user.
CN201710111156.0A 2017-02-28 2017-02-28 A kind of method and device shot according to user perspective Active CN106851104B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710111156.0A CN106851104B (en) 2017-02-28 2017-02-28 A kind of method and device shot according to user perspective

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710111156.0A CN106851104B (en) 2017-02-28 2017-02-28 A kind of method and device shot according to user perspective

Publications (2)

Publication Number Publication Date
CN106851104A CN106851104A (en) 2017-06-13
CN106851104B true CN106851104B (en) 2019-11-22

Family

ID=59134613

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710111156.0A Active CN106851104B (en) 2017-02-28 2017-02-28 A kind of method and device shot according to user perspective

Country Status (1)

Country Link
CN (1) CN106851104B (en)

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107395979A (en) * 2017-08-14 2017-11-24 天津帕比特科技有限公司 The image-pickup method and system of hollow out shelter are removed based on multi-angled shooting
CN109388233B (en) * 2017-08-14 2022-07-29 财团法人工业技术研究院 Transparent display device and control method thereof
CN107680060A (en) * 2017-09-30 2018-02-09 努比亚技术有限公司 A kind of image distortion correction method, terminal and computer-readable recording medium
CN108024056B (en) 2017-11-30 2019-10-29 Oppo广东移动通信有限公司 Imaging method and device based on dual camera
CN107959778B (en) 2017-11-30 2019-08-20 Oppo广东移动通信有限公司 Imaging method and device based on dual camera
CN107835372A (en) 2017-11-30 2018-03-23 广东欧珀移动通信有限公司 Imaging method, device, mobile terminal and storage medium based on dual camera
CN108108704A (en) * 2017-12-28 2018-06-01 努比亚技术有限公司 Face identification method and mobile terminal
CN108446025B (en) * 2018-03-21 2021-04-23 Oppo广东移动通信有限公司 Shooting control method and related product
CN108921863B (en) * 2018-06-12 2022-06-14 江南大学 Acquisition method of foot data acquisition device
CN109215085B (en) * 2018-08-23 2021-09-17 上海小萌科技有限公司 Article statistical method using computer vision and image recognition
CN109712104A (en) * 2018-11-26 2019-05-03 深圳艺达文化传媒有限公司 The exposed method of self-timer video cartoon head portrait and Related product
CN109919988A (en) * 2019-03-27 2019-06-21 武汉万屏电子科技有限公司 A kind of stereoscopic image processing method suitable for three-dimensional endoscope
CN110942434B (en) * 2019-11-22 2023-05-05 华兴源创(成都)科技有限公司 Display compensation system and method of display panel
CN111770279B (en) * 2020-08-03 2022-04-08 维沃移动通信有限公司 Shooting method and electronic equipment
CN114363516A (en) * 2021-12-28 2022-04-15 苏州金螳螂文化发展股份有限公司 Interactive photographing system based on human face recognition

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6301440B1 (en) * 2000-04-13 2001-10-09 International Business Machines Corp. System and method for automatically setting image acquisition controls
CN101933016A (en) * 2008-01-29 2010-12-29 Sony Ericsson Mobile Communications AB Camera system and method for picture sharing based on camera perspective
CN103813098A (en) * 2012-11-12 2014-05-21 三星电子株式会社 Method and apparatus for shooting and storing multi-focused image in electronic device
CN104469123A (en) * 2013-09-17 2015-03-25 联想(北京)有限公司 A method for supplementing light and an image collecting device
CN104660909A (en) * 2015-03-11 2015-05-27 酷派软件技术(深圳)有限公司 Image acquisition method, image acquisition device and terminal
CN104853096A (en) * 2015-04-30 2015-08-19 广东欧珀移动通信有限公司 Rotation camera-based shooting parameter determination method and terminal
CN105894031A (en) * 2016-03-31 2016-08-24 青岛海信移动通信技术股份有限公司 Photo selection method and photo selection device
CN105939445A (en) * 2016-05-23 2016-09-14 武汉市公安局公共交通分局 Fog penetration shooting method based on binocular camera
CN105981362A (en) * 2014-02-18 2016-09-28 华为技术有限公司 Method for obtaining a picture and multi-camera system

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101269900B1 (en) * 2008-12-22 2013-05-31 한국전자통신연구원 Method and apparatus for representing motion control camera effect based on synchronized multiple image
KR20140007529A (en) * 2012-07-09 2014-01-20 삼성전자주식회사 Apparatus and method for taking a picture in camera device and wireless terminal having a camera device
KR101952684B1 (en) * 2012-08-16 2019-02-27 엘지전자 주식회사 Mobile terminal and controlling method therof, and recording medium thereof
CN104243828B (en) * 2014-09-24 2019-01-11 宇龙计算机通信科技(深圳)有限公司 A kind of method, apparatus and terminal shooting photo
JP6445844B2 (en) * 2014-11-05 2018-12-26 キヤノン株式会社 Imaging device and method performed in imaging device
CN106034179A (en) * 2015-03-18 2016-10-19 中兴通讯股份有限公司 Photo sharing method and device
CN105025162A (en) * 2015-06-16 2015-11-04 惠州Tcl移动通信有限公司 Automatic photo sharing method, mobile terminals and system
CN105005597A (en) * 2015-06-30 2015-10-28 广东欧珀移动通信有限公司 Photograph sharing method and mobile terminal
JP6546474B2 (en) * 2015-07-31 2019-07-17 キヤノン株式会社 Image pickup apparatus and control method thereof
CN105611174A (en) * 2016-02-29 2016-05-25 广东欧珀移动通信有限公司 Control method, control apparatus and electronic apparatus

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6301440B1 (en) * 2000-04-13 2001-10-09 International Business Machines Corp. System and method for automatically setting image acquisition controls
CN101933016A (en) * 2008-01-29 2010-12-29 Sony Ericsson Mobile Communications AB Camera system and method for picture sharing based on camera perspective
CN103813098A (en) * 2012-11-12 2014-05-21 三星电子株式会社 Method and apparatus for shooting and storing multi-focused image in electronic device
CN104469123A (en) * 2013-09-17 2015-03-25 联想(北京)有限公司 A method for supplementing light and an image collecting device
CN105981362A (en) * 2014-02-18 2016-09-28 华为技术有限公司 Method for obtaining a picture and multi-camera system
CN104660909A (en) * 2015-03-11 2015-05-27 酷派软件技术(深圳)有限公司 Image acquisition method, image acquisition device and terminal
CN104853096A (en) * 2015-04-30 2015-08-19 广东欧珀移动通信有限公司 Rotation camera-based shooting parameter determination method and terminal
CN105894031A (en) * 2016-03-31 2016-08-24 青岛海信移动通信技术股份有限公司 Photo selection method and photo selection device
CN105939445A (en) * 2016-05-23 2016-09-14 武汉市公安局公共交通分局 Fog penetration shooting method based on binocular camera

Also Published As

Publication number Publication date
CN106851104A (en) 2017-06-13

Similar Documents

Publication Publication Date Title
CN106851104B (en) A kind of method and device shot according to user perspective
CN105245774B (en) A kind of image processing method and terminal
CN106878588A (en) A kind of video background blurs terminal and method
CN105354838B (en) The depth information acquisition method and terminal of weak texture region in image
CN106454121B (en) Double-camera shooting method and device
CN111462311B (en) Panorama generation method and device and storage medium
CN104954689B (en) A kind of method and filming apparatus that photo is obtained using dual camera
CN105100775B (en) A kind of image processing method and device, terminal
CN112150399B (en) Image enhancement method based on wide dynamic range and electronic equipment
CN106612397A (en) Image processing method and terminal
CN106605403A (en) Photographing method and electronic device
CN105744159A (en) Image synthesizing method and device
CN105227837A (en) A kind of image combining method and device
CN114092364A (en) Image processing method and related device
CN105898159A (en) Image processing method and terminal
CN106778524A (en) A kind of face value based on dual camera range finding estimates devices and methods therefor
CN106791204A (en) Mobile terminal and its image pickup method
CN109889724A (en) Image weakening method, device, electronic equipment and readable storage medium storing program for executing
CN113973173B (en) Image synthesis method and electronic equipment
CN105187724B (en) A kind of mobile terminal and method handling image
CN106603931A (en) Binocular shooting method and device
CN110430357B (en) Image shooting method and electronic equipment
CN107018331A (en) A kind of imaging method and mobile terminal based on dual camera
CN106686301A (en) Picture shooting method and device
WO2021147921A1 (en) Image processing method, electronic device and computer-readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant