CN106851104A - Method and device for shooting according to the user's perspective - Google Patents
- Publication number
- CN106851104A (application CN201710111156.0A)
- Authority
- CN
- China
- Prior art keywords
- smart device
- person
- photo
- image
- camera
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/67—Focus control based on electronic image sensor signals
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C3/00—Measuring distances in line of sight; Optical rangefinders
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/61—Control of cameras or camera modules based on recognised objects
- H04N23/611—Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/617—Upgrading or updating of programs or applications for camera control
Abstract
The invention discloses a method for shooting according to the user's perspective. In the method, a smart device captures an image with either one of its dual cameras, and the smart device uses image recognition to determine the number of persons in the image; the smart device measures the distance of each person in the image using the binocular ranging technique of the dual cameras, obtaining the distance between each person and the smart device; the smart device automatically sets the shooting parameters of the dual cameras according to the distance between each person and the smart device, and takes one photo for each person; the smart device then selects one photo and displays it on its screen. With this method, every person can obtain a photo taken from his or her own perspective, which makes smartphone photography better match users' interests and improves the user experience.
Description
【Technical field】
The present invention relates to shooting technology, and more specifically to a method and system for shooting according to the user's perspective.
【Background technology】
With the development of camera functions in smart devices, a smart device can shoot multiple photos of people in succession.
In the prior art, when a smart device continuously shoots multiple photos of people, the photos are not taken from the perspectives of different users; instead, the successive shots are all focused on one particular person among them.
The present method uses the dual cameras on a smart device to obtain the number of persons and the distance to each person, then automatically sets the shooting parameters according to each person's distance and takes one photo per person, so that everyone can obtain a photo taken from his or her own perspective. This makes smartphone photography better match users' interests and improves the user experience.
【The content of the invention】
In view of the above drawbacks, the invention provides a method and device for shooting according to the user's perspective. A method for shooting according to the user's perspective includes: a smart device captures an image with either one of the dual cameras on the smart device, and the smart device uses image recognition to determine the number of persons in the image; the smart device measures the distance of each person in the image using the binocular ranging technique of the dual cameras, obtaining the distance between each person and the smart device; the smart device automatically sets the shooting parameters of the dual cameras according to the distance between each person and the smart device, and takes one photo for each person; the smart device selects one photo and displays it on the screen of the smart device.
Optionally, the smart device sets the shooting focal length according to each person's distance from the smart device, and sets the shooting aperture, shutter speed, ISO, exposure and white balance according to each person's background.
Optionally, after taking a photo for each person, the smart device saves the photo with that person as its center point.
Optionally, before the photos are taken, the user manually selects the persons to be shot in the viewfinder of the smart device; the smart device then takes one photo only for each selected person.
Optionally, when sharing photos, the smart device automatically recognizes the other party's avatar using image recognition, matches the avatar against the persons in the photos, and shares the successfully matched photo with that party.
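The claimed steps can be sketched as a small pipeline. This is an illustrative sketch only: the helper names, parameter values, and the simple distance-to-parameter rule are assumptions for demonstration, not taken from the patent.

```python
# Illustrative sketch of the claimed pipeline: one photo per detected person,
# shooting parameters derived from that person's measured distance.
# The distance-to-parameter mapping below is a hypothetical example.
from dataclasses import dataclass

@dataclass
class ShotParams:
    focus_distance_m: float  # focus set at the subject's distance
    iso: int                 # hypothetical rule: raise ISO for distant subjects

def params_for(distance_m: float) -> ShotParams:
    return ShotParams(focus_distance_m=distance_m,
                      iso=100 if distance_m < 3 else 400)

def shoot_per_person(distances_m: list[float]) -> list[ShotParams]:
    # Distances come from binocular ranging; each person gets their own shot.
    return [params_for(d) for d in distances_m]

photos = shoot_per_person([1.5, 4.0, 2.2])
print(len(photos))  # 3 — one parameter set (photo) per person
```

The point of the sketch is the per-person loop: parameters are not fixed for the whole burst but recomputed from each subject's distance.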
The invention further proposes a device for shooting according to the user's perspective, including:
A person recognition module: captures an image with either one of the dual cameras on the smart device, and uses image recognition to determine the number of persons in the image;
A ranging module: measures the distance of each person in the image using the binocular ranging technique of the dual cameras on the smart device, obtaining the distance between each person and the smart device;
A shooting module: automatically sets the shooting parameters of the dual cameras according to the distance between each person and the smart device, and takes one photo for each person;
A display module: selects one photo and displays the selected photo on the screen of the smart device.
Optionally, the device also includes:
A parameter setting module: sets the shooting focal length according to each person's distance from the smart device, and sets the shooting aperture, shutter speed, ISO, exposure and white balance according to each person's background.
Optionally, the device also includes:
A storage module: after a photo is taken for each person, saves the photo with that person as its center point.
Optionally, the device also includes:
A person selection module: before the photos are taken, lets the user manually select the persons to be shot in the viewfinder of the smart device; the smart device then takes one photo only for each selected person.
Optionally, the device also includes:
A sharing module: when sharing photos, automatically recognizes the other party's avatar using image recognition, matches the avatar against the persons in the photos, and shares the successfully matched photo with that party.
Beneficial effects of the invention: the method captures an image with either one of the dual cameras on a smart device; the smart device then uses image recognition to determine the number of persons in the image; it measures the distance of each person in the image using the binocular ranging technique of the dual cameras, obtaining the distance between each person and the smart device; it then automatically sets the shooting parameters according to each person's distance and takes one photo per person, so that everyone can obtain a photo taken from his or her own perspective. This makes smartphone photography better match users' interests and improves the user experience.
【Brief description of the drawings】
Fig. 1 is a schematic diagram of the hardware structure of a mobile terminal implementing the embodiments of the invention.
Fig. 2 is a schematic diagram of a wireless communication system for the mobile terminal shown in Fig. 1.
Fig. 3 is a flowchart of method embodiment one of the user-perspective shooting method provided by the invention.
Fig. 4 is a flowchart of method embodiment two of the user-perspective shooting method provided by the invention.
Fig. 5 is a flowchart of method embodiment three of the user-perspective shooting method provided by the invention.
Fig. 6 is a functional block diagram of device embodiment four of the user-perspective shooting device provided by the invention.
Fig. 7 is a functional block diagram of device embodiment five of the user-perspective shooting device provided by the invention.
Fig. 8 is a functional block diagram of device embodiment six of the user-perspective shooting device provided by the invention.
Fig. 9 is a Matlab binocular vision calibration diagram illustrating the binocular ranging principle.
Fig. 10 is a distortion correction diagram illustrating the binocular ranging principle.
Fig. 11 is a diagram of converting the cameras to the canonical form in the binocular ranging principle.
Fig. 12 is a flowchart of the binocular ranging process.
【Specific embodiment】
It should be understood that the specific embodiments described here are only intended to explain the invention, not to limit it.
A mobile terminal implementing the embodiments of the invention is now described with reference to the drawings. In the following description, suffixes such as "module", "part" or "unit" used to denote elements are only intended to aid the explanation of the invention and have no specific meaning in themselves; therefore, "module" and "part" may be used interchangeably.
Mobile terminals may be implemented in various forms. For example, the terminals described in the invention may include mobile terminals such as mobile phones, smartphones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable media players) and navigation devices, as well as fixed terminals such as digital TVs and desktop computers. In the following, the terminal is assumed to be a mobile terminal. However, those skilled in the art will understand that, apart from elements used specifically for mobile purposes, the constructions according to the embodiments of the invention can also be applied to fixed-type terminals.
Fig. 1 is a schematic diagram of the hardware structure of a mobile terminal implementing the embodiments of the invention.
The mobile terminal 100 may include a wireless communication unit 110, an A/V (audio/video) input unit 120, a user input unit 130, an output unit 140, a memory 150, an interface unit 160, a controller 170, a power supply unit 180, and so on. Fig. 1 shows a mobile terminal with various components, but it should be understood that not all of the illustrated components are required; more or fewer components may be implemented instead. The elements of the mobile terminal are described in detail below.
The wireless communication unit 110 typically includes one or more components that allow radio communication between the mobile terminal 100 and a wireless communication system or network. For example, the wireless communication unit may include at least one of a mobile communication module 111, a wireless Internet module 112 and a short-range communication module 113.
The mobile communication module 111 transmits radio signals to and/or receives radio signals from at least one of a base station (e.g., an access point, a Node B, etc.), an external terminal and a server. Such radio signals may include voice call signals, video call signals, or various types of data transmitted and/or received according to text and/or multimedia messages.
The wireless Internet module 112 supports wireless Internet access for the mobile terminal. The module may be internally or externally coupled to the terminal. The wireless Internet access technologies involved may include WLAN (wireless LAN, Wi-Fi), WiBro (wireless broadband), WiMAX (worldwide interoperability for microwave access), HSDPA (high-speed downlink packet access), and so on.
The short-range communication module 113 is a module for supporting short-range communication. Some examples of short-range communication technologies include Bluetooth™, radio frequency identification (RFID), Infrared Data Association (IrDA), ultra-wideband (UWB), ZigBee™, and so on.
The A/V input unit 120 is used to receive audio or video signals. The A/V input unit 120 may include a camera 121 and a microphone 122. The camera 121 processes image data of still pictures or video captured by an image capture device in a video capture mode or an image capture mode. The processed image frames may be displayed on a display unit 141. The image frames processed by the camera 121 may be stored in the memory 150 (or another storage medium) or transmitted via the wireless communication unit 110; two or more cameras 121 may be provided depending on the construction of the mobile terminal. The microphone 122 can receive sound (audio data) in operating modes such as a phone call mode, a recording mode and a voice recognition mode, and can process such sound into audio data. In the phone call mode, the processed audio (voice) data may be converted into a format transmittable to a mobile communication base station via the mobile communication module 111 for output. The microphone 122 may implement various types of noise cancellation (or suppression) algorithms to cancel (or suppress) the noise or interference generated while receiving and transmitting audio signals.
The user input unit 130 may generate key input data according to commands input by the user, to control various operations of the mobile terminal. The user input unit 130 allows the user to input various types of information and may include a keyboard, a dome switch, a touchpad (e.g., a touch-sensitive component that detects changes in resistance, pressure, capacitance, etc. caused by a touch), a jog wheel, a jog stick, and so on. In particular, when a touchpad is superimposed on the display unit 141 in the form of a layer, a touch screen may be formed.
The interface unit 160 serves as an interface through which at least one external device can be connected with the mobile terminal 100. For example, the external devices may include wired or wireless headset ports, external power supply (or battery charger) ports, wired or wireless data ports, memory card ports, ports for connecting a device having an identification module, audio input/output (I/O) ports, video I/O ports, earphone ports, and so on. The identification module may store various information for verifying the user's use of the mobile terminal 100 and may include a user identity module (UIM), a subscriber identity module (SIM), a universal subscriber identity module (USIM), and so on. In addition, the device having the identification module (hereinafter referred to as the "identifying device") may take the form of a smart card; therefore, the identifying device may be connected with the mobile terminal 100 via a port or other connection means. The interface unit 160 may be used to receive inputs (e.g., data, information, power, etc.) from an external device, transfer the received inputs to one or more elements within the mobile terminal 100, or transmit data between the mobile terminal and the external device.
In addition, when the mobile terminal 100 is connected with an external cradle, the interface unit 160 may serve as a path through which power is supplied from the cradle to the mobile terminal 100, or as a path through which various command signals input from the cradle are transferred to the mobile terminal. The various command signals or the power input from the cradle may serve as signals for recognizing whether the mobile terminal is correctly mounted on the cradle. The output unit 140 is configured to provide output signals in a visual, audible and/or tactile manner (e.g., audio signals, video signals, alarm signals, vibration signals, etc.). The output unit 140 may include a display unit 141, an audio output module 142, and so on.
The display unit 141 may display information processed in the mobile terminal 100. For example, when the mobile terminal 100 is in a phone call mode, the display unit 141 may display a user interface (UI) or a graphical user interface (GUI) related to a call or other communication (e.g., text messaging, multimedia file downloading, etc.). When the mobile terminal 100 is in a video call mode or an image capture mode, the display unit 141 may display captured images and/or received images, a UI or GUI showing video or images and related functions, and so on.
Meanwhile, when the display unit 141 and the touchpad are superimposed on each other in the form of a layer to form a touch screen, the display unit 141 may serve as both an input device and an output device. The display unit 141 may include at least one of a liquid crystal display (LCD), a thin film transistor LCD (TFT-LCD), an organic light-emitting diode (OLED) display, a flexible display, a three-dimensional (3D) display, and so on. Some of these displays may be constructed to be transparent to allow viewing from the outside; these may be called transparent displays. A typical transparent display may be, for example, a TOLED (transparent organic light-emitting diode) display. Depending on the particular desired implementation, the mobile terminal 100 may include two or more display units (or other display devices); for example, the mobile terminal may include an external display unit (not shown) and an internal display unit (not shown). The touch screen can be used to detect touch input pressure as well as touch input position and touch input area.
The audio output module 142 may, when the mobile terminal is in modes such as a call signal reception mode, a call mode, a recording mode, a voice recognition mode or a broadcast reception mode, convert audio data received by the wireless communication unit 110 or stored in the memory 150 into audio signals and output them as sound. Moreover, the audio output module 142 may provide audio outputs related to specific functions performed by the mobile terminal 100 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output module 142 may include a speaker, a buzzer, and so on.
The memory 150 may store software programs for the processing and control operations performed by the controller 170, or may temporarily store data that have been output or are to be output (e.g., a phone book, messages, still images, video, etc.). Moreover, the memory 150 may store data on the various modes of vibration and audio signals that are output when a touch is applied to the touch screen.
The memory 150 may include at least one type of storage medium, including flash memory, hard disk, multimedia card, card-type memory (e.g., SD or DX memory, etc.), random access memory (RAM), static random access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, magnetic disk, optical disk, and so on. Moreover, the mobile terminal 100 may cooperate over a network connection with a network storage device that performs the storage function of the memory 150.
The controller 170 generally controls the overall operation of the mobile terminal. For example, the controller 170 performs the control and processing related to voice calls, data communication, video calls, and so on. In addition, the controller 170 may include a multimedia module 171 for reproducing (or playing back) multimedia data; the multimedia module 171 may be constructed within the controller 170 or may be constructed separately from the controller 170. The controller 170 may perform pattern recognition processing to recognize handwriting input or drawing input performed on the touch screen as characters or images.
The power supply unit 180 receives external power or internal power under the control of the controller 170 and provides the appropriate power required to operate the various elements and components.
The various embodiments described here may be implemented in a computer-readable medium using, for example, computer software, hardware, or any combination thereof. For a hardware implementation, the embodiments described here may be implemented with at least one of application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), processors, controllers, microcontrollers, microprocessors, and electronic units designed to perform the functions described here; in some cases, such embodiments may be implemented in the controller 170. For a software implementation, embodiments such as procedures or functions may be implemented with separate software modules that allow at least one function or operation to be performed. The software code may be implemented by a software application (or program) written in any suitable programming language; the software code may be stored in the memory 150 and executed by the controller 170.
So far, the mobile terminal has been described in terms of its functions. In the following, for the sake of brevity, a slide-type mobile terminal among various types of mobile terminals such as folder-type, bar-type, swing-type and slide-type mobile terminals is taken as an example. However, the invention can be applied to any type of mobile terminal and is not limited to slide-type mobile terminals.
The mobile terminal 100 shown in Fig. 1 may be constructed to operate with wired and wireless communication systems that transmit data via frames or packets, as well as with satellite-based communication systems.
A communication system in which the mobile terminal according to the invention can operate is now described with reference to Fig. 2.
Such communication systems may use different air interfaces and/or physical layers. For example, the air interfaces used by communication systems include frequency division multiple access (FDMA), time division multiple access (TDMA), code division multiple access (CDMA), the universal mobile telecommunications system (UMTS) (in particular, long-term evolution (LTE)), the global system for mobile communications (GSM), and so on. As a non-limiting example, the following description relates to a CDMA communication system, but such teaching applies equally to other types of systems.
Referring to Fig. 2, the wireless communication system may include a plurality of mobile terminals 100, a plurality of base stations (BSs) 270, base station controllers (BSCs) 275, and a mobile switching center (MSC) 280. The MSC 280 is configured to form an interface with a public switched telephone network (PSTN) 290. The MSC 280 is also configured to form an interface with the BSCs 275, which can be coupled to the base stations 270 via backhaul lines. The backhaul lines may be constructed according to any of several known interfaces, including, for example, E1/T1, ATM, IP, PPP, frame relay, HDSL, ADSL or xDSL. It will be understood that the system as shown in Fig. 2 may include a plurality of BSCs 275.
Each BS 270 may serve one or more sectors (or areas); each sector covered by an omnidirectional antenna or an antenna pointing in a particular direction extends radially away from the BS 270. Alternatively, each sector may be covered by two or more antennas for diversity reception. Each BS 270 may be constructed to support a plurality of frequency assignments, each frequency assignment having a particular spectrum (e.g., 1.25 MHz, 5 MHz, etc.).
The intersection of a sector and a frequency assignment may be referred to as a CDMA channel. The BS 270 may also be referred to as a base transceiver subsystem (BTS) or by other equivalent terms. In this case, the term "base station" may be used to broadly denote a single BSC 275 and at least one BS 270. A base station may also be referred to as a "cell site". Alternatively, the individual sectors of a particular BS 270 may be referred to as a plurality of cell sites.
As shown in Fig. 2, a broadcast transmitter (BT) 295 transmits broadcast signals to the mobile terminals 100 operating in the system. A broadcast receiving module 111 as shown in Fig. 1 is provided at the mobile terminal 100 to receive the broadcast signals transmitted by the BT 295. In Fig. 2, several global positioning system (GPS) satellites 300 are shown. The satellites 300 help locate at least one of the plurality of mobile terminals 100.
In Fig. 2, a plurality of satellites 300 are depicted, but it will be understood that useful positioning information may be obtained with any number of satellites. A GPS module 115 as shown in Fig. 1 is typically configured to cooperate with the satellites 300 to obtain the desired positioning information. Instead of or in addition to GPS tracking techniques, other techniques capable of tracking the position of the mobile terminal may be used. In addition, at least one of the GPS satellites 300 may selectively or additionally handle satellite DMB transmission.
As a typical operation of the wireless communication system, the BSs 270 receive reverse-link signals from the various mobile terminals 100. The mobile terminals 100 typically engage in calls, messaging and other types of communication. Each reverse-link signal received by a particular base station 270 is processed within that BS 270. The resulting data are forwarded to the associated BSC 275. The BSC provides call resource allocation and mobility management functions, including the coordination of soft handoff procedures between the BSs 270. The BSC 275 also routes the received data to the MSC 280, which provides additional routing services for forming an interface with the PSTN 290. Similarly, the PSTN 290 forms an interface with the MSC 280, the MSC forms an interface with the BSCs 275, and the BSCs 275 correspondingly control the BSs 270 to transmit forward-link signals to the mobile terminals 100.
Based on the above mobile terminal hardware structure and communication system, the various embodiments of the method of the invention are proposed.
Embodiment one
Referring to Fig. 3, a method for shooting according to the user's perspective includes:
S102: a smart device captures an image with either one of the dual cameras on the smart device, and the smart device uses image recognition to determine the number of persons in the image.
If the dual cameras on the smart device are not divided into primary and secondary, an image is captured with either one of the cameras; if the dual cameras consist of a primary and a secondary camera, an image is captured with the primary camera. The smart device then uses image recognition to determine the number of persons in the image.
After the smart device captures an image containing faces with the camera, it automatically detects and tracks the faces in the image and applies a series of related face-processing techniques to the detected faces; this is usually called face recognition.
Face recognition mainly includes four parts: face image acquisition and detection, face image preprocessing, face image feature extraction, and matching and recognition.
1. Face image acquisition and detection:
Face image acquisition: different face images can be collected through the camera lens, such as still images and dynamic images; different positions and different expressions can all be captured well. When the user is within the shooting range of the acquisition device, the device automatically searches for and captures the user's face image.
Face detection: face detection is mainly used as preprocessing for face recognition, i.e., accurately locating the position and size of the faces in the image. Face images contain very rich pattern features, such as histogram features, color features, template features, structural features, Haar features, and so on.
Face detection picks out the useful information among these and uses these features to detect faces. The mainstream face detection method applies the AdaBoost learning algorithm to the above features. AdaBoost is a classification method that combines several weaker classifiers into a new, very strong classifier.
During face detection, the AdaBoost algorithm selects the rectangular features (weak classifiers) that best represent a face, combines the weak classifiers into a strong classifier by weighted voting, and then connects several strong classifiers obtained by training in series into a cascade classifier, which effectively improves the detection speed of the classifier.
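The weighted-voting step described above can be sketched numerically. This is a minimal toy sketch, not a real trained cascade: each weak classifier is a simple threshold test on a single feature value, and the thresholds and weights are illustrative assumptions.

```python
# AdaBoost-style weighted voting: weak classifiers vote, the strong classifier
# sums the votes weighted by their learned alphas and compares the score to
# half the total weight. Thresholds and alphas here are toy values.

def weak(threshold):
    # Weak classifier: predicts 1 ("face") if the feature exceeds the threshold.
    return lambda x: 1 if x > threshold else 0

weak_classifiers = [weak(0.2), weak(0.5), weak(0.8)]
alphas = [0.4, 1.0, 0.6]  # weights that boosting would learn (illustrative)

def strong(x):
    score = sum(a * c(x) for a, c in zip(alphas, weak_classifiers))
    return 1 if score >= sum(alphas) / 2 else 0

print(strong(0.9))  # all weak classifiers vote "face" -> 1
print(strong(0.1))  # none vote "face" -> 0
```

In a real cascade, several such strong classifiers are chained so that most non-face windows are rejected by the cheap early stages, which is what gives the speedup the text describes.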
2. Face image preprocessing:
Face image preprocessing: based on the face detection result, the image is processed so as to ultimately serve feature extraction. Because the original image acquired by the system is constrained by various conditions and subject to random interference, it usually cannot be used directly; it must undergo image preprocessing such as gray-level correction and noise filtering in the early stage of image processing. For face images, the preprocessing process mainly includes light compensation, gray-level transformation, histogram equalization, normalization, geometric correction, filtering, sharpening, and so on.
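Of the preprocessing steps named above, histogram equalization is easy to show end to end. A minimal sketch on a tiny 8-bit grayscale "image" (the pixel values are arbitrary test data), implemented directly from the cumulative histogram:

```python
# Histogram equalization: remap gray levels via the cumulative histogram so
# the output uses the full 0..255 range, improving contrast before feature
# extraction.
def equalize(pixels, levels=256):
    n = len(pixels)
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    # Cumulative distribution of gray levels.
    cdf, total = [0] * levels, 0
    for i in range(levels):
        total += hist[i]
        cdf[i] = total
    cdf_min = next(c for c in cdf if c > 0)
    if n == cdf_min:  # constant image: nothing to equalize
        return pixels[:]
    return [round((cdf[p] - cdf_min) / (n - cdf_min) * (levels - 1))
            for p in pixels]

img = [52, 55, 61, 59, 79, 61, 76, 61]
print(equalize(img))  # [0, 36, 182, 73, 255, 182, 219, 182]
```

The darkest level maps to 0 and the brightest to 255, stretching the mid-tones apart, which is the effect this preprocessing step is after.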
3. Face image feature extraction:
Face image feature extraction: the features usable by a face recognition system are generally divided into visual features, pixel statistical features, face image transform coefficient features, face image algebraic features, and so on. Face feature extraction is carried out on certain features of the face. Face feature extraction, also called face representation, is the process of modeling the features of a face. The methods of face feature extraction can be summarized into two major classes: one is knowledge-based representation methods; the other is representation methods based on algebraic features or statistical learning. Knowledge-based representation methods mainly obtain feature data that help classify faces from the shape descriptions of the facial organs and the distances between them; the feature components usually include the Euclidean distances, curvatures and angles between feature points. A face is locally composed of the eyes, nose, mouth, chin, and so on; geometric descriptions of these local parts and of the structural relations between them can serve as important features for recognizing faces, and these features are called geometric features. Knowledge-based face representation mainly includes methods based on geometric features and template matching methods.
4. Face recognition: face image matching and identification:
Face image matching and identification: the extracted feature data of the face image is searched against the feature templates stored in the database; a threshold is set, and when the similarity exceeds this threshold, the matching result is output. Face recognition compares the face features to be identified against the stored face feature templates and judges the identity of the face according to the degree of similarity.
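The threshold-based matching described above can be sketched as follows, using cosine similarity over feature vectors; the gallery names and the 0.8 threshold are illustrative assumptions:

```python
import numpy as np

def identify(probe, gallery, threshold=0.8):
    """Search the stored feature templates for the best match to the
    probe features; report an identity only when the cosine
    similarity exceeds the threshold."""
    def cos_sim(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    best_name, best_sim = None, -1.0
    for name, template in gallery.items():
        s = cos_sim(probe, template)
        if s > best_sim:
            best_name, best_sim = name, s
    if best_sim >= threshold:
        return best_name, best_sim
    return None, best_sim        # similarity too low: no identity
```

Returning no identity when the best similarity stays below the threshold is exactly the thresholding step the text describes.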
S103: the smart device measures the distance to each person in the image through the binocular ranging technique of its dual camera, obtaining the distance between each person and the smart device.
The distance between the smart device and each person is obtained through the device's dual camera: the smart device runs a binocular localization algorithm to compute the distance to each person. The binocular localization pipeline comprises off-line calibration, binocular rectification, and binocular matching.
1. Off-line calibration:
The purpose of calibration is to obtain the intrinsic parameters of each camera (focal length, image center, distortion coefficients, etc.) and the extrinsic parameters (the R (rotation) and T (translation) matrices relating the two cameras). The most common method at present is Zhang Zhengyou's checkerboard calibration, implemented in both OpenCV and Matlab. To obtain higher calibration accuracy, however, an industrial-grade glass calibration panel (for example a 60*60 grid) generally works better. Some also recommend Matlab, because its accuracy and visualization are somewhat better and its results can be saved as XML that OpenCV reads directly, although its procedure is more cumbersome than OpenCV's. Fig. 9 shows the Matlab binocular vision calibration.
The steps are:
(1) Calibrate the left camera to obtain its intrinsic and extrinsic parameters.
(2) Calibrate the right camera to obtain its extrinsic parameters.
(3) Perform stereo calibration to obtain the translation and rotation between the two cameras.
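The intrinsic parameters (focal length, image center) and extrinsic parameters (R, T) recovered by these steps parameterize the pinhole imaging model; a minimal sketch of projecting a world point with them (distortion omitted, values illustrative; in practice OpenCV's calibrateCamera/stereoCalibrate estimate these parameters):

```python
import numpy as np

def project(point_w, K, R, t):
    """Project a 3D world point to pixel coordinates with the
    intrinsic matrix K and extrinsics (R, t); lens distortion,
    which calibration also estimates, is omitted here."""
    p_cam = R @ point_w + t      # world frame -> camera frame
    uvw = K @ p_cam              # camera frame -> image plane
    return uvw[:2] / uvw[2]      # perspective division

# Illustrative intrinsics: 500 px focal length, image center (320, 240)
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
```

With R = I and t = 0, a point on the optical axis projects to the image center, which is a quick sanity check on a calibration result.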
2. Binocular rectification:
The purpose of rectification is to make the reference image and the target image differ only in the X direction, which improves the accuracy of disparity computation. Rectification consists of two steps:
(1) Distortion correction (see Figure 10 for its effect).
(2) Conversion of the cameras to the canonical form.
Because the rectification step recomputes the position of every pixel, the higher the resolution processed, the more time the algorithm takes, and two images usually must be processed in real time. The algorithm is highly parallel and regular, however, so hardware acceleration with an IVE is advisable: as in the accelerated path in OpenCV, first compute the mapping (Map) once, then apply the Map in parallel to obtain the new pixel positions. The rectification function in OpenCV is cvStereoRectify. See Figure 11 for the conversion of the cameras to the canonical form.
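The "compute the Map once, then reuse it per frame" pattern recommended above can be sketched with a nearest-neighbour remap (OpenCV's cv2.remap is the interpolating equivalent; the maps below are illustrative):

```python
import numpy as np

def remap(img, map_x, map_y):
    """Nearest-neighbour remap: out[y, x] = img[map_y[y, x], map_x[y, x]].
    The maps are computed once from the calibration and then
    reapplied, in parallel, to every incoming frame."""
    ys = np.clip(np.rint(map_y).astype(int), 0, img.shape[0] - 1)
    xs = np.clip(np.rint(map_x).astype(int), 0, img.shape[1] - 1)
    return img[ys, xs]
```

Because the maps are fixed, only the final gather varies per frame, which is what makes the step easy to parallelize in hardware.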
3. Binocular matching:
Binocular matching is the core of binocular depth estimation. It has been developed for many years and a great many algorithms exist; the main goal is to compute the relative matching relationship between the pixels of the reference image and the target image. The algorithms divide broadly into local and non-local ones, and generally proceed in the following steps:
(1) matching cost computation;
(2) cost aggregation;
(3) disparity computation/optimization;
(4) disparity refinement.
Using a window of fixed or variable size, the best matching position is computed along the corresponding row. The figure below shows the simplest local method: find the best corresponding point in a row; the difference of the X coordinates between the left and right views is the disparity. To increase robustness to noise and illumination, matching can use a fixed window, or the images can first be transformed with LBP and then matched. Matching cost functions include SAD, SSD, NCC, and so on. A maximum disparity can be used to bound the search range, and integral images with box filters can be used to accelerate the computation. The local matching algorithms with the best results at present are binocular matching algorithms based on the guided filter, using box filters and integral images. Local algorithms parallelize easily and compute quickly, but perform poorly in regions with little texture; typically the image is segmented into texture-rich and texture-sparse regions and the matching window size is adjusted accordingly to improve the match.
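The simplest local method described above (fixed window, SAD cost, search along one row of a rectified pair) can be sketched for a single pixel; the window size and disparity limit are illustrative:

```python
import numpy as np

def sad_disparity(left, right, y, x, win=2, max_disp=16):
    """SAD block matching for one pixel of a rectified pair: slide a
    fixed (2*win+1)^2 window along the same row of the right image
    and return the disparity with the lowest cost."""
    patch = left[y - win:y + win + 1, x - win:x + win + 1].astype(float)
    best_d, best_cost = 0, np.inf
    for d in range(min(max_disp, x - win) + 1):
        cand = right[y - win:y + win + 1,
                     x - d - win:x - d + win + 1].astype(float)
        cost = np.abs(patch - cand).sum()       # SAD matching cost
        if cost < best_cost:
            best_d, best_cost = d, cost
    return best_d
```

Repeating this for every pixel yields the disparity map; production implementations (e.g. OpenCV's StereoBM/StereoSGBM) vectorize and aggregate the cost instead of looping per pixel.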
A non-local matching algorithm treats the disparity search as minimizing a definite loss function defined over the whole binocular pair; finding the minimum of that loss function yields the optimal disparity relationship. Such algorithms concentrate on solving the matching problem in ambiguous regions of the image; the main ones are dynamic programming (Dynamic Programming), belief propagation (Belief Propagation), and graph cuts (Graph Cut). Graph cuts currently give the best results, but the graph-cut matching provided in OpenCV is very time-consuming.
The graph-cut algorithm primarily solves the inability of dynamic programming to fuse the continuity constraints in the horizontal and vertical directions; using these constraints, the matching problem is treated as finding a minimal cut in a graph built over the image.
Because they minimize a global energy, non-local algorithms generally take longer and are hard to accelerate in hardware, but they handle occlusion and sparse texture better. After the match points are obtained, a left-right consistency check is typically used to detect and confirm matches with high confidence, much like the forward-backward check in optical flow: only points that pass the left-right consistency check are considered stable matches. This also exposes points corrupted by occlusion, noise, or mismatching.
For post-processing of the disparity map, median filtering is used: the gray value of each point is replaced with the median of its neighborhood pixels. This method removes salt-and-pepper noise very well and eliminates isolated points where matching failed because of noise or weak texture.
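The median-filter post-processing step can be sketched as a plain 3x3 filter (a minimal version; production code would use an optimized routine such as cv2.medianBlur):

```python
import numpy as np

def median_filter3(img):
    """3x3 median filter: replace each interior pixel with the median
    of its neighbourhood, removing salt-and-pepper outliers."""
    out = img.copy()
    h, w = img.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y, x] = np.median(img[y - 1:y + 2, x - 1:x + 2])
    return out
```

Because a single outlier can never be the median of its nine-pixel neighbourhood, isolated mismatched disparities vanish while smooth regions are untouched.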
The binocular ranging process is commonly divided into six steps: camera calibration, image acquisition, image preprocessing, target detection and feature extraction, stereo matching, and three-dimensional reconstruction, as shown in Figure 12.
S1031 Camera calibration
Camera calibration determines the position, intrinsic parameters, and extrinsic parameters of the cameras in order to establish the imaging model and to determine the correspondence between an object point in the world coordinate system and its image point on the image plane. One of the basic tasks of stereo vision is to compute the geometric information of objects in three-dimensional space from the image information obtained by the cameras, and thereby to reconstruct and recognize objects. The geometric model of camera imaging determines the relationship between the three-dimensional positions of points on an object's surface and the corresponding points in the image; the parameters of this geometric model are the camera parameters. These parameters must usually be obtained by experiment, a process known as camera calibration. Calibration must determine the internal geometry and optical characteristics of the camera (intrinsic parameters) and the three-dimensional position and orientation of the camera coordinate system relative to a world coordinate system (extrinsic parameters). In computer vision, if multiple cameras are used, each must be calibrated.
S1032 Image acquisition
Binocular images are acquired by shooting the same scene from two different positions, either with two cameras or with one camera that is moved or rotated, obtaining two images from different viewpoints. In a binocular vision system, depth information is acquired in two steps.
S1033 Image preprocessing
The two-dimensional image generated by the optical imaging system contains various environment-induced random noise and distortion, so the original image must be preprocessed to suppress useless information, emphasize useful information, and improve image quality. Preprocessing has two main purposes: to improve the visual effect and clarity of the image, and to make the image more amenable to computer processing and to various kinds of feature analysis.
S1034 Target detection and feature extraction
Target detection extracts the target object to be detected from the preprocessed image. Feature extraction extracts the specified feature points from the detected target. Because no universally applicable theory of image feature extraction yet exists, the matching features used in stereo-vision research are diverse. At present the commonly used matching features are mainly region features, line features, and point features. In general, large-scale features contain richer image information and can be matched quickly, but they are few in number in the image, their positioning accuracy is poor, and their extraction and description are difficult. Small-scale features are numerous, but they contain less information, so matching them requires stronger constraint criteria and matching strategies to overcome ambiguous matches and raise operating efficiency. Good matching features should possess stability, invariance, distinguishability, uniqueness, and the ability to resolve matching ambiguity effectively.
S1035 Stereo matching
Stereo matching establishes, from the computation of the selected features, the correspondence between features, mapping the image points of the same spatial physical point in different images to each other. When a three-dimensional scene is projected into two-dimensional images, the images of the same scenery from different viewpoints can differ greatly, and many factors in the scene (scene geometry and physical characteristics, noise interference, illumination conditions, camera distortion, and so on) are all folded into a single gray value in each image. Unambiguously matching images containing so many adverse factors is therefore very difficult, and this problem has still not been solved well. The effectiveness of stereo matching depends on the solution of three problems: finding the essential attributes of the features, choosing the correct matching features, and establishing a stable algorithm that can correctly match the chosen features.
S1036 Three-dimensional reconstruction
After the disparity image is obtained by stereo matching, the depth image can be determined and the 3D information of the scene recovered. The factors affecting ranging accuracy are mainly camera calibration error, digital quantization, and the localization accuracy of feature detection and matching. The reconstruction of three-dimensional space in computer vision is implemented through several main technical stages, each with its own dominant influencing factors and key techniques.
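For a rectified pair, the depth recovered in this reconstruction step follows the standard triangulation relation Z = f * B / d (focal length in pixels, baseline, disparity); the numbers below are illustrative, not taken from the source:

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Depth of a point seen by a rectified stereo pair:
    Z = f * B / d, with f in pixels, the baseline B in metres,
    and the disparity d in pixels."""
    return focal_px * baseline_m / disparity_px

# e.g. a 700 px focal length, a 12 mm dual-camera baseline and a
# measured disparity of 8.4 px put the subject about 1 m away
subject_distance = depth_from_disparity(8.4, 700.0, 0.012)
```

The inverse relation also shows why calibration error matters: depth error grows quickly as the disparity shrinks, i.e. for distant subjects.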
S104: the smart device automatically sets the shooting parameters of the dual camera according to the distance between each person and the smart device, and shoots one photo for each person.
After the smart device obtains the distance between each person and the device from the dual camera, it automatically sets the following shooting parameters, based on each person's distance and the ambient light, and shoots: aperture, shutter, ISO, focus, metering, and white balance. If the dual camera comprises a primary and a secondary camera, only the primary camera's parameters are adjusted and the primary camera takes the photo; if the two cameras are not differentiated, the parameters of both cameras are set simultaneously, both cameras shoot, and the two photos are then combined into one by an algorithm.
The parameters are set as follows:
1. Setting the aperture
Aperture is expressed as an f-number: the smaller the f-number, the larger the aperture (in aperture size, f1 > f4 > f8). The larger the aperture, the shallower the depth of field, and the easier it is to obtain a photo with a sharp subject and a blurred background. The smart device configures the aperture according to the theme the user selects: if the user chooses a background-blurred shot, the aperture is enlarged (the f-number is reduced).
2. Setting the shutter
Shutter is expressed as a length of time, for example 1/125 second, 1/8 second, or 1 second: the larger the number, the longer the time and the slower the shutter speed. Too slow a shutter speed cannot freeze the motion of the people or objects being shot, and the photographer's hand shake blurs the photo.
When the smart device judges that the people move little or the background light is relatively bright, the shutter is set to a smaller value, such as 1/8 second; if the background light is dark, the shutter is set to a larger value, for example more than 2 seconds.
3. Setting the ISO
The lower the ISO value, the poorer the sensitivity to light and the finer the picture; in this case a larger aperture or a slower shutter speed is needed. The higher the ISO value, the more sensitive to light, but grain and noise appear in the picture; in this case a faster shutter speed or a smaller aperture can be used. When the smart device judges that the light on the people being shot is dark, it automatically sets the ISO to a higher value, such as 800; when the background light is bright, it sets the ISO to a smaller value, such as 200.
4. Setting the focus
The smart device automatically uses single-point focus on the selected person.
5. Setting the metering
There are three main metering modes: evaluative metering, center-weighted metering, and spot metering.
When the frame contains no obvious large areas of highlight, or large areas of highlight and shadow are both present, evaluative metering is used. In frames with complicated, highly uneven light, spot metering is selected and aimed at the subject, for example when shooting a portrait.
6. Setting the white balance
When the user does not set the white balance manually, the smart device sets it to auto white balance (AWB).
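The parameter rules of S104 can be condensed into a sketch; the lux thresholds and the returned values are illustrative assumptions, not values from the source:

```python
def auto_params(subject_distance_m, ambient_lux, background_blur=False):
    """Pick shooting parameters for one subject from distance and
    ambient light, following the S104 rules; thresholds and values
    are illustrative."""
    params = {
        "focus_m": subject_distance_m,     # single-point focus on subject
        "white_balance": "auto",
        "metering": "spot",
    }
    # larger aperture (smaller f-number) for a blurred background
    params["aperture_f"] = 1.8 if background_blur else 8.0
    if ambient_lux < 50:                   # dark scene
        params["iso"] = 800                # higher ISO ...
        params["shutter_s"] = 2.0          # ... and a long exposure
    else:                                  # bright scene
        params["iso"] = 200
        params["shutter_s"] = 1.0 / 125
    return params
```

A device would run this once per detected person, with that person's measured distance, before each of the per-person shots.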
S105: after shooting a photo for each person, the smart device saves the photo centered on that person.
After the smart device has set the shooting parameters based on each person's distance and the ambient light and taken the shot, it saves the photo with that person at its center. If centering the photo on that person would leave some of the people outside the photo, the focal length is adjusted (zoomed out) so that everyone is kept in the saved photo.
S106: the smart device selects one photo and displays it on its screen.
After the smart device saves all the photos just shot, it randomly selects one and displays it on its screen. The user can select a person in the shooting viewfinder, and the photo centered on that person is then displayed.
By setting the shooting parameters based on each person's distance from the smart device and the ambient light, this embodiment shoots one photo for each person, so that when a group photo is shot everyone obtains a photo focused on himself or herself, improving the user's shooting experience.
Embodiment two
With reference to Fig. 4, this embodiment provides another method of shooting according to the user's perspective. On the basis of embodiment one, when shooting a group photo the user is allowed to select certain people as the focus. For example, before the photo is shot, the user selects some important people in the smart device's viewfinder (by tapping each person's image); the shooting parameters are then set based on each selected person's distance from the smart device and the ambient light, and one photo is shot for each selected person.
By selecting a few people as the focus before shooting, this embodiment saves shooting time and also saves storage space on the smart device.
Embodiment three
With reference to Fig. 5, this embodiment provides another method of shooting according to the user's perspective. On the basis of embodiment one, after the smart device shoots a photo for each person, the photos can be shared with the corresponding people. When sharing these photos, the smart device automatically selects, according to the sharing target the user chooses, the photos related to that target.
When the user selects a sharing target, the smart device obtains the avatar of the target's instant messaging application (such as WeChat), then uses face recognition to find the photo that was shot with the person matching that avatar as the focus, and shares that photo with the target. If the other party's instant messaging application has no avatar, the device obtains that user's name, uses face recognition against a local database or a remote server to obtain the names of the people in the photos and their kinship relations, then judges whether the user name in the other party's instant messaging application is the name (or a relative's name) of a person recognized in a photo, and if so shares that photo with the other party.
After the photographs have been taken, the smart device can automatically share each person's focused photo with the corresponding person through one-key sharing. The one-key sharing process is as follows:
1. After the smart device shoots the photo focused on a person, it searches the avatar of everyone in the contact list of an instant messaging application (such as WeChat, QQ, or Alipay).
2. The image of the person in the photo shot with that person as the focus is compared with the avatars in the contact list (using face recognition).
3. If the comparison succeeds, the photo is shared with the other party through the instant messaging application.
Through the automatic photo-sharing function, this embodiment automatically shares each photo shot with a person as its focus to that person's instant messaging account, making it convenient for the user to share photos after shooting: everyone receives the photo focused on himself or herself, improving user satisfaction.
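The sharing steps of this embodiment can be sketched as follows; the contact structure and the `same_person` callback stand in for the messaging app's contact list and the face-recognition comparison, and all names are illustrative:

```python
def one_key_share(photos, contacts, same_person):
    """For each per-subject photo, search the contact list for an
    avatar showing the same face and share the photo with that
    contact; contacts without an avatar are skipped."""
    shared = {}
    for photo in photos:
        for contact in contacts:
            if contact["avatar"] is None:
                continue                       # no avatar to compare
            if same_person(photo["subject_face"], contact["avatar"]):
                shared[contact["name"]] = photo["id"]
                break
    return shared
```

A real device would replace `same_person` with the face-recognition comparison of embodiment one and follow the returned mapping with the messaging app's send call.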
Embodiment four
With reference to Fig. 6, this embodiment provides a device for shooting according to the user's perspective, comprising:
P202 person recognition module: for obtaining an image through either camera of the dual camera on the smart device, and obtaining the number of people in the image using image recognition technology.
The person recognition module uses image recognition to obtain the number of people in the smart device's viewfinder. Image recognition mainly comprises four parts: face image acquisition and detection, face image preprocessing, face image feature extraction, and matching and identification.
1. Face recognition: face image acquisition and detection:
Face image acquisition: different face images can be collected through the camera lens, such as still images and dynamic images, and can be captured well at different positions and with different expressions. When the user is within the capture range of the acquisition device, the device automatically searches for and shoots the user's face image.
Face detection: in practice, face detection is mainly used as preprocessing for face recognition, i.e., accurately locating the position and size of the face in the image. Face images contain very rich pattern features, such as histogram features, color features, template features, structural features, and Haar features.
Face detection picks out the useful information among these and uses these features to detect faces. Mainstream face detection methods apply the AdaBoost learning algorithm to the features above. AdaBoost is a classification method that combines several weaker classification methods into a new, very strong classification method.
During face detection, the AdaBoost algorithm is used to pick out the rectangular features (weak classifiers) that best represent a face; the weak classifiers are combined into a strong classifier by weighted voting, and several trained strong classifiers are then connected in series into a cascade classifier, which effectively improves the detection speed of the classifier.
2. Face recognition: face image preprocessing:
Face image preprocessing: image preprocessing for faces is the process of operating on the image based on the face detection result, ultimately in service of feature extraction. Because acquisition conditions are limited and random interference is present, the original image obtained by the system usually cannot be used directly; in the early stage of image processing it must undergo preprocessing such as grayscale correction and noise filtering. For face images, preprocessing mainly includes light compensation, grayscale transformation, histogram equalization, normalization, geometric correction, filtering, and sharpening.
3. Face recognition: face image feature extraction:
Face image feature extraction: the features usable by a face recognition system are generally divided into visual features, pixel statistical features, face image transform-coefficient features, face image algebraic features, and so on. Face feature extraction is performed on certain features of the face. Face feature extraction, also called face representation, is the process of modeling the features of a face. The extraction methods fall into two broad classes: knowledge-based representation methods, and representation methods based on algebraic features or statistical learning. Knowledge-based representation methods obtain feature data that helps classify faces mainly from the shape descriptions of the facial organs and the distances between them; the feature components typically include the Euclidean distances, curvatures, and angles between feature points. The face is locally composed of the eyes, nose, mouth, chin, and so on; geometric descriptions of these local parts and of the structural relations between them serve as important features for recognizing faces and are called geometric features. Knowledge-based face representation mainly includes methods based on geometric features and template matching methods.
4. Face recognition: face image matching and identification:
Face image matching and identification: the extracted feature data of the face image is searched against the feature templates stored in the database; a threshold is set, and when the similarity exceeds this threshold, the matching result is output. Face recognition compares the face features to be identified against the stored face feature templates and judges the identity of the face according to the degree of similarity.
P203 ranging module: for measuring the distance to each person in the image through the binocular ranging technique of the dual camera on the smart device, obtaining the distance between each person and the smart device.
The ranging module uses the dual camera on the smart device with binocular ranging technology to measure the distance between each person and the device. Dual cameras mainly come in two structural forms and four functional forms:
Two structural forms:
1. Integrated structure:
The two camera modules are packaged on one circuit board at the same time, and a bracket is then added for fixing and calibration. This structure places high demands on the packaging precision of the two cameras and requires high-precision packaging equipment, such as AA (active alignment) equipment; the offset and optical-axis tilt of the two cameras must be tightly controlled, which requires special hardware, such as a high-flatness circuit board, a firm base, and a demagnetized motor, as well as a special packaging process.
2. Split structure:
Two separate cameras are fixed and calibrated by a bracket. This scheme has lower assembly-precision requirements and needs no investment in high-precision equipment; in hardware it merely adds a fixing bracket, and the production process adds only camera calibration and bracket fixing.
Four functional forms:
1. Dual cameras with the same viewing angle and the same chip:
Realize image synthesis and special effects, with rich functions such as pixel superposition, HDR, shoot-first-focus-later, super night shooting, virtual aperture, and ranging.
2. Primary camera + secondary camera:
Realize a few functions such as shoot-first-focus-later and background blurring.
3. Different-viewing-angle scheme:
A close-range image and a long-range image are captured with a wide-angle lens and a narrow-angle lens respectively, and 3X/5X simulated optical zoom is realized by image synthesis, solving the loss of image sharpness that a single camera produces when zooming while framing.
4. 3D-scanning dual camera:
Realizes 3D scanning and modeling of objects. Functionally this is similar to the scanning and modeling of Google's Project Tango, but the dual-camera hardware scheme is simpler and its cost lower, while the scanning distance and accuracy differ.
The binocular ranging principle used by the ranging module is as follows:
The distance between the smart device and each person is obtained through the device's dual camera: the smart device runs a binocular vision algorithm to compute the distance to each person. The binocular vision pipeline comprises off-line calibration, binocular rectification, and binocular matching.
1. Off-line calibration:
The purpose of calibration is to obtain the intrinsic parameters of each camera (focal length, image center, distortion coefficients, etc.) and the extrinsic parameters (the R (rotation) and T (translation) matrices relating the two cameras). The most common method at present is the checkerboard calibration method, implemented in both OpenCV and Matlab. To obtain higher calibration accuracy, however, an industrial-grade glass calibration panel (for example a 60*60 grid) generally works better. Some also recommend Matlab, because its accuracy and visualization are somewhat better and its results can be saved as XML that OpenCV reads directly, although its procedure is more cumbersome than OpenCV's. Fig. 9 shows the Matlab binocular vision calibration.
The steps are:
(1) Calibrate the left camera to obtain its intrinsic and extrinsic parameters.
(2) Calibrate the right camera to obtain its extrinsic parameters.
(3) Perform stereo calibration to obtain the translation and rotation between the two cameras.
2. Binocular rectification:
The purpose of rectification is to make the reference image and the target image differ only in the X direction, which improves the accuracy of disparity computation. Rectification consists of two steps:
(1) Distortion correction (see Figure 10 for its effect).
(2) Conversion of the cameras to the canonical form.
Because the rectification step recomputes the position of every pixel, the higher the resolution processed, the more time the algorithm takes, and two images usually must be processed in real time. The algorithm is highly parallel and regular, however, so hardware acceleration with an IVE is advisable: as in the accelerated path in OpenCV, first compute the mapping (Map) once, then apply the Map in parallel to obtain the new pixel positions. The rectification function in OpenCV is cvStereoRectify. See Figure 11 for the conversion of the cameras to the canonical form.
3. Binocular matching:
Binocular matching is the core of binocular depth estimation. It has been developed for many years and a great many algorithms exist; the main goal is to compute the relative matching relationship between the pixels of the reference image and the target image. The algorithms divide broadly into local and non-local ones, and generally proceed in the following steps:
(1) matching cost computation;
(2) cost aggregation;
(3) disparity computation/optimization;
(4) disparity refinement.
Using a window of fixed or variable size, the best matching position is computed along the corresponding row. The figure below shows the simplest local method: find the best corresponding point in a row; the difference of the X coordinates between the left and right views is the disparity. To increase robustness to noise and illumination, matching can use a fixed window, or the images can first be transformed with LBP and then matched. Matching cost functions include SAD, SSD, NCC, and so on. A maximum disparity can be used to bound the search range, and integral images with box filters can be used to accelerate the computation. The local matching algorithms with the best results at present are binocular matching algorithms based on the guided filter, using box filters and integral images. Local algorithms parallelize easily and compute quickly, but perform poorly in regions with little texture; typically the image is segmented into texture-rich and texture-sparse regions and the matching window size is adjusted accordingly to improve the match.
Non-local matching algorithms treat the disparity search over the whole stereo pair as minimizing a single loss function; the minimum of this loss yields the optimal disparity assignment. They focus on resolving matches in ambiguous regions of the image. The main approaches are dynamic programming, belief propagation and graph cuts (Graph Cut). Graph cuts currently give the best results, but the graph-cut matcher provided in OpenCV is very time-consuming.
The graph-cut algorithm was introduced mainly to overcome dynamic programming's inability to combine continuity constraints in the horizontal and vertical directions at the same time; using these constraints, matching is cast as a minimum-cut problem on a graph built from the image.
Because they minimize a global energy, non-local algorithms are generally slow and hard to accelerate in hardware, but they handle occlusion and texture-sparse cases well.
After match points are obtained, high-confidence correspondences are usually validated through a left-right consistency check: a point is accepted as a stable match only if it passes the consistency test in both directions, an idea much like forward-backward checking in optical flow. This also exposes points corrupted by occlusion, noise or mismatching.
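The left-right consistency check can be sketched as follows (an illustrative sketch assuming the usual convention that left pixel x corresponds to right pixel x − d; not a routine specified by the document):

```python
import numpy as np

def lr_consistency(disp_l, disp_r, tol=1):
    """Mask out pixels whose left and right disparity maps disagree.

    A stable match satisfies disp_r[y, x - disp_l[y, x]] ~= disp_l[y, x].
    Returns a boolean mask that is True where the two maps are consistent.
    """
    h, w = disp_l.shape
    xs = np.arange(w)
    mask = np.zeros((h, w), dtype=bool)
    for y in range(h):
        xr = xs - disp_l[y]                     # corresponding right-image column
        valid = (xr >= 0) & (xr < w)
        mask[y, valid] = np.abs(disp_r[y, xr[valid]] - disp_l[y, valid]) <= tol
    return mask
```

Pixels rejected by the mask are exactly the occluded, noisy or mismatched points the text describes.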
Post-processing of the disparity map typically applies a median filter: each pixel's gray value is replaced by the median of its neighborhood. This removes salt-and-pepper noise very well, and eliminates isolated points produced by noise or by failed matches in weakly textured regions.
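A minimal version of that median post-filter (a sketch; edge pixels are simply left unchanged here):

```python
import numpy as np

def median3x3(img):
    """3x3 median filter: replaces each interior pixel by its neighborhood median,
    which suppresses isolated salt-and-pepper outliers in a disparity map."""
    out = img.copy()
    h, w = img.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y, x] = np.median(img[y-1:y+2, x-1:x+2])
    return out
```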
The binocular ranging process of the ranging module comprises six steps: camera calibration, image acquisition, image preprocessing, target detection and feature extraction, stereo matching, and three-dimensional reconstruction.
(1) Camera calibration
Camera calibration determines the camera's position and its internal and external parameters, so as to establish an imaging model that fixes the correspondence between an object point in the world coordinate system and its image point on the image plane. One of the basic tasks of stereo vision is to compute the geometric information of objects in three-dimensional space from the image information obtained from the cameras, and thereby to reconstruct and recognize objects. The geometric model of camera imaging determines the relationship between the three-dimensional position of a point on the surface of a space object and its corresponding point in the image; the parameters of this geometric model are the camera parameters. These parameters must generally be obtained by experiment, and this process is known as camera calibration. Calibration determines the camera's internal geometry and optical characteristics (intrinsic parameters) and the three-dimensional position and orientation of the camera coordinate system relative to a world coordinate system (extrinsic parameters). In computer vision, if multiple cameras are used, each camera must be calibrated.
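As a minimal illustration of the imaging model just described, a pinhole camera maps a world point X to a pixel via the extrinsics (R, t) and the intrinsic matrix K. The numeric values below are assumed examples, not calibration results from the patent:

```python
import numpy as np

# Assumed example intrinsics: focal lengths fx = fy = 800 px, principal point (320, 240).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                  # extrinsic rotation (identity: camera frame = world frame)
t = np.zeros(3)                # extrinsic translation

def project(X):
    """Project a 3-D world point onto the image plane: x = K (R X + t)."""
    xc = R @ X + t             # world -> camera coordinates
    u = K @ xc                 # camera -> homogeneous pixel coordinates
    return u[:2] / u[2]        # perspective division

# A point 2 m ahead on the optical axis lands at the principal point.
print(project(np.array([0.0, 0.0, 2.0])))   # -> [320. 240.]
```

Calibration is the inverse task: estimating K, R and t from observations of known world points.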
(2) Image acquisition
In binocular vision, the images are obtained by photographing the same scene from two positions, either with two cameras or with one camera that is moved or rotated, yielding two images of the scene from different viewing angles. In a binocular vision system, depth information is then acquired in two stages.
(3) Image preprocessing
A two-dimensional image produced by an optical imaging system contains various environment-induced random noise and distortion, so the raw image must be preprocessed to suppress irrelevant information, emphasize useful information and improve image quality. Preprocessing serves two main purposes: improving the visual effect and clarity of the image, and putting the image into a form better suited to computer processing and subsequent feature analysis.
(4) Target detection and feature extraction
Target detection extracts the target object to be detected from the preprocessed image. Feature extraction extracts the specified feature points from the detected target. Because no universally applicable theory of image feature extraction yet exists, a variety of matching features are used in stereo vision research. At present the common matching features are region features, line features and point features. In general, large-scale features carry richer image information and are easy to match quickly, but they are few in number, localize poorly, and are difficult to extract and describe. Small-scale features are numerous but carry little information, so matching them requires strong constraint criteria and matching strategies to overcome ambiguous matches while keeping the computation efficient. A good matching feature should be stable, invariant, discriminative and unique, and should be able to resolve matching ambiguity effectively.
(5) Stereo matching
Stereo matching establishes correspondences between the selected features computed in the two images, mapping the projections of the same physical point in space to each other across images. When a three-dimensional scene is projected to a two-dimensional image, the images of the same scene from different viewpoints can differ greatly: factors in the scene such as its geometry and physical properties, noise interference, illumination conditions and camera distortion are all folded into single gray values. Matching images that contain so many adverse factors unambiguously is therefore very difficult, and this problem has still not been well solved. The effectiveness of stereo matching depends on solving three problems: finding the essential attributes shared by features, selecting the correct matching features, and building a stable algorithm that can correctly match the selected features.
(6) Three-dimensional reconstruction
Once stereo matching has produced a disparity image, the depth image can be determined and the 3-D information of the scene recovered. The factors affecting range-measurement accuracy mainly include camera-calibration error, digital quantization, and the localization accuracy of feature detection and matching.
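For a rectified stereo pair, the reconstruction step reduces to triangulating depth from disparity, Z = f·B/d, with focal length f in pixels, baseline B, and disparity d. The numbers below are assumed examples:

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Depth of a point in a rectified stereo pair: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Assumed example: f = 700 px, baseline = 0.1 m, disparity = 35 px -> point 2 m away.
print(depth_from_disparity(35, 700, 0.1))   # -> 2.0
```

The formula also shows why calibration error matters: errors in f or B scale directly into the recovered depth.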
P204 parameter setting module: sets the shooting focal length according to each person's distance from the smart device, and sets the shooting aperture, shutter, ISO, exposure and white balance according to each person's background.
The parameter setting module sets the parameters as follows:
(1) Aperture setting
Aperture is expressed as an f-number: the smaller the f-number, the larger the aperture (in aperture size, f1 > f4 > f8). A larger aperture gives a shallower depth of field, making it easier to capture a photo with a sharp subject and a blurred background. The smart device configures the aperture according to the subject the user selects; if the user chooses a blurred-background shot, it opens up the aperture (selects a smaller f-number).
(2) Shutter setting
Shutter speed is expressed as a length of time, e.g. 1/125 second, 1/8 second, 1 second: the larger the number, the longer the exposure and the slower the shutter. Too slow a shutter speed cannot freeze the motion of the person or object being shot, and the photographer's hand shake blurs the photo.
When the smart device judges that the subjects move little or the background light is relatively bright, it sets a short shutter time, for example 1/8 second; if the background light is dark, it sets a longer time, for example 2 seconds or more.
(3) ISO setting
The lower the ISO, the less sensitive the sensor is to light and the finer the image, in which case a larger aperture or slower shutter speed is needed; the higher the ISO, the more sensitive the sensor is to light, but grain and noise appear, allowing a faster shutter speed or smaller aperture. When the smart device judges that the light behind the subjects is dark, it automatically sets a higher ISO, for example 800; when the background light is bright, it sets a lower ISO, for example 200.
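The heuristics in (1)-(3) can be sketched as a small rule function. The specific thresholds and return values below mix the document's examples (1/8 s, 2 s, ISO 200/800) with assumed placeholders (the f-numbers); a real device would meter the scene:

```python
def pick_exposure(background_bright, subject_moving, want_bokeh):
    """Toy version of the aperture/shutter/ISO rules described above.

    Returns (f_number, shutter_seconds, iso); all values are illustrative.
    """
    f_number = 1.8 if want_bokeh else 8.0        # open aperture for a blurred background
    if background_bright or not subject_moving:
        shutter, iso = 1 / 8, 200                # bright/still scene: short exposure, low ISO
    else:
        shutter, iso = 2.0, 800                  # dark, moving scene: long exposure, high ISO
    return f_number, shutter, iso

print(pick_exposure(background_bright=True, subject_moving=False, want_bokeh=True))
```

The point of the sketch is only the coupling the text describes: aperture follows the depth-of-field choice, while shutter and ISO trade off against scene brightness.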
(4) Focus setting
The smart device automatically uses single-point focus on the selected person.
(5) Metering setting
There are three main metering modes: evaluative metering, center-weighted metering and spot metering. When the frame contains no obvious large highlight areas, or large highlights and shadows coexist, evaluative metering is used. In frames with complex, highly uneven lighting, spot metering is selected and aimed at the subject; for example, spot metering is used when shooting portraits.
(6) White balance setting
When the user does not set white balance manually, the smart device uses automatic white balance (AWB).
P205 shooting module: automatically sets the shooting parameters of the cameras according to each person's distance from the smart device, and shoots one photo for each person respectively.
The shooting module sets the shooting parameters based on each person's distance from the smart device and the ambient light, then shoots.
P206 storage module: after a photo is shot for each person, saves the photo centered on that person.
After the smart device shoots a photo, the storage module saves it centered on the corresponding person. If centering the photo on that person would leave some people outside the frame, the focal length is adjusted (zooming out) so that everyone is kept in the photo.
P207 display module: selects one photo and displays the selected photo on the screen of the smart device.
After the smart device saves all the photos just shot, the display module randomly selects one of them and displays it on the device's screen. The user can also select a person in the shooting viewfinder, and the display module then displays that person's photo on the screen according to the user's selection.
In this embodiment, the shooting parameters are set for each person based on that person's distance from the smart device and the ambient light, and one photo is shot per person, so that when a group photo is taken everyone can obtain a photo with themselves as the focus, improving the user's shooting experience.
Embodiment six
With reference to Fig. 7, this embodiment provides another device for shooting according to the user's perspective. On the basis of embodiment five, it further includes a P201 person selection module.
The person selection module lets the user select certain people as the focus of the shot when taking a group photo. For example, before the photo is taken, the user selects some important people in the viewfinder of the smart device (by tapping each person's image); the device then sets the shooting parameters according to those people's distances from the smart device and the ambient light, and shoots one photo for each selected person respectively.
By selecting certain people as the focus before shooting, this embodiment saves shooting time and also saves storage space on the smart device.
Embodiment seven
With reference to Fig. 7, this embodiment provides another device for shooting according to the user's perspective. On the basis of embodiment five, it further includes a P208 sharing module. After a photo has been shot for each person, the sharing module can share these photos with the corresponding people. When sharing, the smart device automatically selects the photos relevant to the sharing target chosen by the user.
When the user selects a sharing target, the smart device obtains that contact's avatar in the instant messaging application (for example WeChat), then uses face recognition to find the photo whose subject matches the avatar and that was shot with that person as the focus, and shares that photo with them.
After shooting, the smart device can also share each person's focus photo with the corresponding person automatically through one-key sharing, which proceeds as follows:
1. After the smart device shoots the photo focused on a person, it searches the avatars in the contact lists of its instant messaging applications (such as WeChat, QQ or Alipay).
2. The image of the person in the focus photo is compared with the avatars in the contact list (using face recognition technology).
3. If the comparison succeeds, the photo is shared with that contact through the instant messaging application.
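The three-step one-key-sharing flow can be sketched as follows. The `face_match` routine and the `Contact` structure are hypothetical placeholders, since the document does not specify a face-recognition API:

```python
from dataclasses import dataclass

@dataclass
class Contact:
    name: str
    avatar_id: str                 # stands in for the avatar image from the IM app

def face_match(photo_face_id, avatar_id):
    """Hypothetical face comparison; a real device would run a face recognizer here."""
    return photo_face_id == avatar_id

def one_key_share(focus_photos, contacts):
    """Pair each focus photo with the contact whose avatar matches its subject.

    focus_photos: dict mapping a photo name to its subject's face id.
    Returns a list of (photo, contact name) pairs to share.
    """
    shares = []
    for photo, face_id in focus_photos.items():
        for c in contacts:                          # step 1: scan contact avatars
            if face_match(face_id, c.avatar_id):    # step 2: compare faces
                shares.append((photo, c.name))      # step 3: share on success
                break
    return shares

contacts = [Contact("Alice", "face-a"), Contact("Bob", "face-b")]
print(one_key_share({"photo1.jpg": "face-b"}, contacts))  # -> [('photo1.jpg', 'Bob')]
```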
Through this photo-sharing function, the present embodiment automatically shares each person's focus photo to that person's instant messaging account after shooting. This makes sharing convenient: everyone receives the photo shot with themselves as the focus, which improves user satisfaction.
The technical principles of the embodiments of the present invention have been described above with reference to specific implementations. These descriptions are intended only to explain the principles of the embodiments and must not be construed as limiting the scope of protection of the embodiments in any way; other specific implementations that those skilled in the art derive from the embodiments without inventive effort also fall within the scope of protection of the embodiments of the present invention.
It should be noted that, herein, the terms "comprising", "including" or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article or device comprising a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article or device. In the absence of further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article or device that comprises it.
The serial numbers of the embodiments of the present invention are for description only and do not indicate the relative merit of the embodiments.
From the above description of the embodiments, those skilled in the art will clearly understand that the methods of the above embodiments can be implemented by software plus the necessary general-purpose hardware platform, or by hardware, although in many cases the former is the better implementation. Based on this understanding, the technical solution of the present invention, or the part of it that contributes over the prior art, can be embodied in the form of a software product stored in a storage medium (such as ROM/RAM, magnetic disk or optical disc) that includes instructions causing a terminal device (which may be a mobile phone, computer, server, air conditioner, network device, etc.) to perform the methods described in the embodiments of the present invention.
The above are only preferred embodiments of the present invention and do not thereby limit the scope of its claims; every equivalent structure or equivalent process transformation made using the contents of the specification and drawings of the present invention, whether applied directly or indirectly in other related technical fields, is likewise included within the scope of patent protection of the present invention.
Claims (10)
1. A method for shooting according to a user's perspective, characterized by comprising:
a smart device acquiring an image through either camera of the dual cameras on the smart device, and obtaining the number of people in the image using image recognition technology;
the smart device measuring the distance to each person in the image through the binocular ranging technology of the dual cameras, to obtain the distance between each person and the smart device;
the smart device automatically setting the shooting parameters of the dual cameras according to the distance between each person and the smart device, and shooting one photo for each person respectively;
the smart device selecting one photo and displaying it on the screen of the smart device.
2. The method according to claim 1, characterized in that the smart device sets the shooting focal length according to each person's distance from the smart device, and sets the shooting aperture, shutter, ISO, exposure and white balance according to each person's background.
3. The method according to claim 1, characterized in that after shooting a photo for each person, the smart device saves the photo centered on that person.
4. The method according to claim 1, characterized in that before shooting, the user manually selects the people to be shot in the viewfinder of the smart device, and the smart device shoots one photo only for each person selected by the user.
5. The method according to claim 1, characterized in that when sharing photos, the smart device automatically recognizes the recipient's avatar using image recognition technology, matches the avatar against the people in the photos, and shares each successfully matched photo with the corresponding recipient.
6. A device for shooting according to a user's perspective, characterized by comprising:
a person recognition module for acquiring an image through either camera of the dual cameras on a smart device and obtaining the number of people in the image using image recognition technology;
a ranging module for measuring the distance to each person in the image through the binocular ranging technology of the dual cameras on the smart device, to obtain the distance between each person and the smart device;
a shooting module for automatically setting the shooting parameters of the dual cameras according to the distance between each person and the smart device, and shooting one photo for each person respectively;
a display module for selecting one photo and displaying it on the screen of the smart device.
7. The device according to claim 6, characterized by further comprising:
a parameter setting module for setting the shooting focal length according to each person's distance from the smart device, and for setting the shooting aperture, shutter, ISO, exposure and white balance according to each person's background.
8. The device according to claim 6, characterized by further comprising:
a storage module for saving each photo centered on the corresponding person after it is shot.
9. The device according to claim 6, characterized by further comprising:
a person selection module through which, before shooting, the user manually selects the people to be shot in the viewfinder of the smart device, the smart device shooting one photo only for each person selected by the user.
10. The device according to claim 6, characterized by further comprising:
a sharing module for automatically recognizing the recipient's avatar using image recognition technology when sharing photos, matching the avatar against the people in the photos, and sharing each successfully matched photo with the corresponding recipient.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710111156.0A CN106851104B (en) | 2017-02-28 | 2017-02-28 | A kind of method and device shot according to user perspective |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106851104A true CN106851104A (en) | 2017-06-13 |
CN106851104B CN106851104B (en) | 2019-11-22 |
Family
ID=59134613
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710111156.0A Active CN106851104B (en) | 2017-02-28 | 2017-02-28 | A kind of method and device shot according to user perspective |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106851104B (en) |
Citations (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6301440B1 (en) * | 2000-04-13 | 2001-10-09 | International Business Machines Corp. | System and method for automatically setting image acquisition controls |
US20100157022A1 (en) * | 2008-12-22 | 2010-06-24 | Electronics And Telecommunications Research Institute | Method and apparatus for implementing motion control camera effect based on synchronized multi-images |
CN101933016A (en) * | 2008-01-29 | 2010-12-29 | 索尼爱立信移动通讯有限公司 | Camera system and based on the method for picture sharing of camera perspective |
CN103546682A (en) * | 2012-07-09 | 2014-01-29 | 三星电子株式会社 | Camera device and method for processing image |
CN103595909A (en) * | 2012-08-16 | 2014-02-19 | Lg电子株式会社 | Mobile terminal and controlling method thereof |
CN103813098A (en) * | 2012-11-12 | 2014-05-21 | 三星电子株式会社 | Method and apparatus for shooting and storing multi-focused image in electronic device |
CN104243828A (en) * | 2014-09-24 | 2014-12-24 | 宇龙计算机通信科技(深圳)有限公司 | Method, device and terminal for shooting pictures |
CN104469123A (en) * | 2013-09-17 | 2015-03-25 | 联想(北京)有限公司 | A method for supplementing light and an image collecting device |
CN104660909A (en) * | 2015-03-11 | 2015-05-27 | 酷派软件技术(深圳)有限公司 | Image acquisition method, image acquisition device and terminal |
CN104853096A (en) * | 2015-04-30 | 2015-08-19 | 广东欧珀移动通信有限公司 | Rotation camera-based shooting parameter determination method and terminal |
CN105005597A (en) * | 2015-06-30 | 2015-10-28 | 广东欧珀移动通信有限公司 | Photograph sharing method and mobile terminal |
CN105025162A (en) * | 2015-06-16 | 2015-11-04 | 惠州Tcl移动通信有限公司 | Automatic photo sharing method, mobile terminals and system |
US20160127630A1 (en) * | 2014-11-05 | 2016-05-05 | Canon Kabushiki Kaisha | Image capture apparatus and method executed by image capture apparatus |
CN105611174A (en) * | 2016-02-29 | 2016-05-25 | 广东欧珀移动通信有限公司 | Control method, control apparatus and electronic apparatus |
CN105894031A (en) * | 2016-03-31 | 2016-08-24 | 青岛海信移动通信技术股份有限公司 | Photo selection method and photo selection device |
CN105939445A (en) * | 2016-05-23 | 2016-09-14 | 武汉市公安局公共交通分局 | Fog penetration shooting method based on binocular camera |
CN105981362A (en) * | 2014-02-18 | 2016-09-28 | 华为技术有限公司 | Method for obtaining a picture and multi-camera system |
CN106034179A (en) * | 2015-03-18 | 2016-10-19 | 中兴通讯股份有限公司 | Photo sharing method and device |
US20170034421A1 (en) * | 2015-07-31 | 2017-02-02 | Canon Kabushiki Kaisha | Image pickup apparatus and method of controlling the same |
Cited By (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107395979A (en) * | 2017-08-14 | 2017-11-24 | 天津帕比特科技有限公司 | The image-pickup method and system of hollow out shelter are removed based on multi-angled shooting |
CN109388233A (en) * | 2017-08-14 | 2019-02-26 | 财团法人工业技术研究院 | Transparent display device and control method thereof |
CN107680060A (en) * | 2017-09-30 | 2018-02-09 | 努比亚技术有限公司 | A kind of image distortion correction method, terminal and computer-readable recording medium |
US10554898B2 (en) | 2017-11-30 | 2020-02-04 | Guangdong Oppo Mobile Telecommunications Corp. Ltd. | Method for dual-camera-based imaging, and mobile terminal |
US10742860B2 (en) | 2017-11-30 | 2020-08-11 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Method and device for double-camera-based imaging |
US10616459B2 (en) | 2017-11-30 | 2020-04-07 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Method and device for dual-camera-based imaging and storage medium |
CN108108704A (en) * | 2017-12-28 | 2018-06-01 | 努比亚技术有限公司 | Face identification method and mobile terminal |
CN108446025A (en) * | 2018-03-21 | 2018-08-24 | 广东欧珀移动通信有限公司 | Filming control method and Related product |
CN108446025B (en) * | 2018-03-21 | 2021-04-23 | Oppo广东移动通信有限公司 | Shooting control method and related product |
CN108921863A (en) * | 2018-06-12 | 2018-11-30 | 江南大学 | A kind of foot data acquisition device and method |
CN109215085A (en) * | 2018-08-23 | 2019-01-15 | 上海小萌科技有限公司 | A kind of article statistic algorithm using computer vision and image recognition |
CN109215085B (en) * | 2018-08-23 | 2021-09-17 | 上海小萌科技有限公司 | Article statistical method using computer vision and image recognition |
CN109712104A (en) * | 2018-11-26 | 2019-05-03 | 深圳艺达文化传媒有限公司 | The exposed method of self-timer video cartoon head portrait and Related product |
CN109919988A (en) * | 2019-03-27 | 2019-06-21 | 武汉万屏电子科技有限公司 | A kind of stereoscopic image processing method suitable for three-dimensional endoscope |
CN110942434A (en) * | 2019-11-22 | 2020-03-31 | 华兴源创(成都)科技有限公司 | Display compensation system and method of display panel |
CN110942434B (en) * | 2019-11-22 | 2023-05-05 | 华兴源创(成都)科技有限公司 | Display compensation system and method of display panel |
CN111770279A (en) * | 2020-08-03 | 2020-10-13 | 维沃移动通信有限公司 | Shooting method and electronic equipment |
CN111770279B (en) * | 2020-08-03 | 2022-04-08 | 维沃移动通信有限公司 | Shooting method and electronic equipment |
CN114363516A (en) * | 2021-12-28 | 2022-04-15 | 苏州金螳螂文化发展股份有限公司 | Interactive photographing system based on human face recognition |
Also Published As
Publication number | Publication date |
---|---|
CN106851104B (en) | 2019-11-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106851104B (en) | A kind of method and device shot according to user perspective | |
CN105245774B (en) | A kind of image processing method and terminal | |
CN106878588A (en) | A kind of video background blurs terminal and method | |
CN108629747B (en) | Image enhancement method and device, electronic equipment and storage medium | |
CN105354838B (en) | The depth information acquisition method and terminal of weak texture region in image | |
CN104954689B (en) | A kind of method and filming apparatus that photo is obtained using dual camera | |
CN111462311B (en) | Panorama generation method and device and storage medium | |
CN105100775B (en) | A kind of image processing method and device, terminal | |
CN106612397A (en) | Image processing method and terminal | |
CN106791204A (en) | Mobile terminal and its image pickup method | |
CN108322644A (en) | A kind of image processing method, mobile terminal and computer readable storage medium | |
CN106605403A (en) | Photographing method and electronic device | |
CN107018331A (en) | A kind of imaging method and mobile terminal based on dual camera | |
CN105898159A (en) | Image processing method and terminal | |
CN106778524A (en) | A kind of face value based on dual camera range finding estimates devices and methods therefor | |
CN108108704A (en) | Face identification method and mobile terminal | |
CN113727012B (en) | Shooting method and terminal | |
CN109889724A (en) | Image weakening method, device, electronic equipment and readable storage medium storing program for executing | |
CN116582741B (en) | Shooting method and equipment | |
CN106603931A (en) | Binocular shooting method and device | |
WO2021147921A1 (en) | Image processing method, electronic device and computer-readable storage medium | |
CN103533228B (en) | Method and system for generating a perfect shot image from multiple images | |
CN107705251A (en) | Picture joining method, mobile terminal and computer-readable recording medium | |
CN106534590B (en) | A kind of photo processing method, device and terminal | |
CN106954020B (en) | A kind of image processing method and terminal |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |