CN104954670A - Photographing method and device

Publication number: CN104954670A
Authority: CN (China)
Prior art keywords: information, image, field, view image, rendering
Legal status: Granted
Application number: CN201510220902.0A
Other languages: Chinese (zh)
Other versions: CN104954670B
Inventor: 吴俊
Current Assignee: Nubia Technology Co Ltd
Original Assignee: Nubia Technology Co Ltd
Application filed by Nubia Technology Co Ltd
Priority to CN201510220902.0A (granted as CN104954670B)
Publication of CN104954670A
Application granted; publication of CN104954670B
Current legal status: Active

Landscapes

  • Controls And Circuits For Display Device (AREA)
  • Studio Devices (AREA)

Abstract

The invention discloses a photographing method. The method comprises: displaying a 3D image to be synthesized in a preset area; acquiring the current orientation (pointing direction) information in real time; generating a field-of-view image corresponding to the 3D image according to the acquired orientation information and the current shooting position information; and synthesizing the field-of-view image with a preset image. The invention further discloses a photographing device. By synthesizing the acquired field-of-view image with the preset image, photos of different scenes can be generated, so that the available photographing scenes are much richer.

Description

Photographing method and device
Technical field
The present invention relates to the field of photographing technologies, and in particular to a photographing method and device.
Background art
Currently, most terminal devices such as mobile phones are equipped with photographing software. After taking a photo, the user can interact with a web server to add real-time watermark graphics such as "weather", "mood" or "location" information. However, the scene in which the photo is taken cannot be changed: it is essentially determined by the shooting environment. For example, if the user poses at home in front of a plain white wall, the resulting photo is simply a person against a white wall. Obviously, such a photographing scene is very monotonous.
Summary of the invention
The main purpose of the present invention is to provide a photographing method and device, aiming to solve the technical problem that photographing scenes are monotonous.
To achieve the above purpose, the present invention provides a photographing method comprising the following steps:
displaying a 3D image to be synthesized in a preset area;
acquiring the current orientation information of the terminal in real time;
generating a field-of-view image corresponding to the 3D image according to the acquired orientation information and the current shooting position information;
synthesizing the field-of-view image with a preset image.
Preferably, the photographing method further comprises:
in the process of displaying the field-of-view image, if a shooting position update instruction input by the user is received, updating the current shooting position information in the 3D image.
Preferably, the step of synthesizing the field-of-view image with the preset image comprises:
acquiring the preset image;
determining contour information in the image;
extracting an object of a preset kind from the image according to the determined contour information;
synthesizing the field-of-view image with the image corresponding to the extracted object.
Preferably, before the step of synthesizing the field-of-view image with the preset image, the photographing method further comprises:
when a parameter adjustment instruction is detected, determining a parameter corresponding to the parameter adjustment instruction;
adjusting the field-of-view image according to the parameter and generating an adjusted field-of-view image.
Preferably, after the step of synthesizing the field-of-view image with the preset image, the photographing method further comprises:
when an information adding instruction input by the user is detected, determining the information corresponding to the information adding instruction;
adding the determined information to the synthesized image, so as to display the synthesized image with the added information.
In addition, to achieve the above purpose, the present invention further provides a photographing device comprising:
a display module, configured to display a 3D image to be synthesized in a preset area;
an acquisition module, configured to acquire the current orientation information in real time;
a generation module, configured to generate a field-of-view image corresponding to the 3D image according to the acquired orientation information and the current shooting position information;
a synthesis module, configured to synthesize the field-of-view image with a preset image.
Preferably, the photographing device further comprises:
an update module, configured to update the current shooting position information in the 3D image if, in the process of displaying the field-of-view image, a shooting position update instruction input by the user is received.
Preferably, the synthesis module comprises:
an acquiring unit, configured to acquire the preset image;
a determining unit, configured to determine contour information in the image;
an extraction unit, configured to extract an object of a preset kind from the image according to the determined contour information;
a synthesis unit, configured to synthesize the field-of-view image with the image corresponding to the extracted object.
Preferably, the photographing device further comprises:
a first determination module, configured to determine, when a parameter adjustment instruction is detected, the parameter corresponding to the parameter adjustment instruction;
a processing module, configured to adjust the field-of-view image according to the parameter and generate an adjusted field-of-view image.
Preferably, the photographing device further comprises:
a second determination module, configured to determine, when an information adding instruction input by the user is detected, the information corresponding to the information adding instruction;
an adding module, configured to add the determined information to the synthesized image, so as to display the synthesized image with the added information.
According to the photographing method and device proposed by the present invention, a 3D image to be synthesized is displayed in a preset area, the current orientation information is acquired in real time, a field-of-view image corresponding to the 3D image is generated according to the acquired orientation information and the current shooting position information, and the field-of-view image is synthesized with a preset image. As the orientation within the 3D image changes, the field-of-view image obtained changes accordingly, so that photos of various different scenes can be taken even from the same location, making the photographing scenes richer.
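For illustration only, the following is a minimal sketch, in Python with NumPy, of the overall flow described above. The helper names (get_orientation, render_field_of_view, composite) and all concrete values are assumptions for the example, not part of the disclosed device; a real implementation would use the terminal's gyroscope, a 3D scene renderer and the camera image.

```python
import numpy as np

def get_orientation() -> np.ndarray:
    """Stand-in for step S20: return the terminal's pointing direction as a unit 3D vector."""
    v = np.array([0.0, 0.2, 1.0])          # would come from the gyroscope in practice
    return v / np.linalg.norm(v)

def render_field_of_view(scene: dict, position: np.ndarray, direction: np.ndarray) -> np.ndarray:
    """Stand-in for step S30: return an image of the 3D scene as seen from `position`
    along `direction`. Here a solid colour keyed on the direction is returned;
    a real renderer would rasterize the 3D image from that viewpoint."""
    h, w = scene["height"], scene["width"]
    colour = (np.abs(direction) * 255).astype(np.uint8)
    return np.broadcast_to(colour, (h, w, 3)).copy()

def composite(view: np.ndarray, foreground: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Stand-in for step S40: paste the extracted foreground (e.g. the person) onto the view."""
    out = view.copy()
    out[mask] = foreground[mask]
    return out

scene = {"height": 480, "width": 640}
shooting_position = np.array([0.0, 1.6, 0.0])                      # assumed default shooting position
direction = get_orientation()                                      # step S20
view = render_field_of_view(scene, shooting_position, direction)   # step S30
person = np.zeros((480, 640, 3), np.uint8)                         # placeholder for the camera image
mask = np.zeros((480, 640), dtype=bool)
mask[200:400, 250:390] = True                                      # placeholder portrait mask
photo = composite(view, person, mask)                              # step S40
```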
Brief description of the drawings
Fig. 1 is a schematic diagram of the hardware structure of a mobile terminal implementing embodiments of the present invention;
Fig. 2 is a schematic diagram of a wireless communication system for the mobile terminal shown in Fig. 1;
Fig. 3 is a schematic flowchart of a first embodiment of the photographing method of the present invention;
Fig. 4 is a detailed flowchart of step S40 in Fig. 3;
Fig. 5 is a schematic flowchart of a second embodiment of the photographing method of the present invention;
Fig. 6 is a schematic flowchart of a third embodiment of the photographing method of the present invention;
Fig. 7 is a functional block diagram of a first embodiment of the photographing device of the present invention;
Fig. 8 is a detailed functional block diagram of the synthesis module 40 in Fig. 7;
Fig. 9 is a functional block diagram of a second embodiment of the photographing device of the present invention;
Fig. 10 is a functional block diagram of a third embodiment of the photographing device of the present invention.
The realization of the purpose, functional features and advantages of the present invention will be further described with reference to the accompanying drawings in conjunction with the embodiments.
Detailed description of the embodiments
It should be understood that the specific embodiments described herein are only intended to explain the present invention and are not intended to limit it.
A mobile terminal implementing the embodiments of the present invention will now be described with reference to the accompanying drawings. In the following description, suffixes such as "module", "part" or "unit" used to denote elements are used only to facilitate the description of the present invention and have no specific meaning in themselves; therefore, "module" and "part" may be used interchangeably.
Mobile terminals may be implemented in various forms. For example, the terminals described in the present invention may include mobile terminals such as mobile phones, smart phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable media players) and navigation devices, as well as fixed terminals such as digital TVs and desktop computers. In the following, the terminal is assumed to be a mobile terminal; however, those skilled in the art will understand that, apart from elements intended specifically for mobile use, the configuration according to the embodiments of the present invention can also be applied to fixed-type terminals.
Fig. 1 is a schematic diagram of the hardware structure of a mobile terminal implementing embodiments of the present invention.
The mobile terminal 100 may include a wireless communication unit 110, an A/V (audio/video) input unit 120, a user input unit 130, a sensing unit 140, an output unit 150, a memory 160, an interface unit 170, a controller 180, a power supply unit 190 and the like. Fig. 1 shows a mobile terminal with various components, but it should be understood that not all of the illustrated components are required; more or fewer components may be implemented instead. The elements of the mobile terminal are described in detail below.
The wireless communication unit 110 typically includes one or more components allowing radio communication between the mobile terminal 100 and a wireless communication system or network. For example, the wireless communication unit may include at least one of a broadcast receiving module 111, a mobile communication module 112, a wireless internet module 113, a short-range communication module 114 and a location information module 115.
The broadcast receiving module 111 receives broadcast signals and/or broadcast-related information from an external broadcast management server via a broadcast channel. The broadcast channel may include a satellite channel and/or a terrestrial channel. The broadcast management server may be a server that generates and transmits broadcast signals and/or broadcast-related information, or a server that receives previously generated broadcast signals and/or broadcast-related information and transmits them to the terminal. The broadcast signal may include a TV broadcast signal, a radio broadcast signal or a data broadcast signal, and may further include a broadcast signal combined with a TV or radio broadcast signal. The broadcast-related information may also be provided via a mobile communication network, in which case it can be received by the mobile communication module 112. The broadcast signal may exist in various forms, for example as an electronic program guide (EPG) of digital multimedia broadcasting (DMB) or an electronic service guide (ESG) of digital video broadcasting-handheld (DVB-H). The broadcast receiving module 111 may receive broadcasts using various types of broadcast systems; in particular, it may receive digital broadcasts using digital broadcast systems such as digital multimedia broadcasting-terrestrial (DMB-T), digital multimedia broadcasting-satellite (DMB-S), digital video broadcasting-handheld (DVB-H), the data broadcast system of media forward link only (MediaFLO), integrated services digital broadcasting-terrestrial (ISDB-T), and so on. The broadcast receiving module 111 may be configured to be suitable for various broadcast systems providing broadcast signals as well as the above-mentioned digital broadcast systems. The broadcast signals and/or broadcast-related information received via the broadcast receiving module 111 may be stored in the memory 160 (or another type of storage medium).
The mobile communication module 112 transmits radio signals to and/or receives radio signals from at least one of a base station (e.g., an access point or a Node B), an external terminal and a server. Such radio signals may include voice call signals, video call signals, or various types of data transmitted and/or received as text and/or multimedia messages.
The wireless internet module 113 supports wireless internet access of the mobile terminal. This module may be internally or externally coupled to the terminal. The wireless internet access technologies involved may include WLAN (wireless LAN, Wi-Fi), WiBro (wireless broadband), WiMAX (worldwide interoperability for microwave access), HSDPA (high-speed downlink packet access), and so on.
The short-range communication module 114 is a module for supporting short-range communication. Examples of short-range communication technologies include Bluetooth, radio frequency identification (RFID), Infrared Data Association (IrDA), ultra wideband (UWB), ZigBee, and so on.
The location information module 115 is a module for checking or acquiring the location information of the mobile terminal. A typical example of the location information module is a GPS (global positioning system) module. According to current technology, the GPS module 115 calculates distance information from three or more satellites together with accurate time information, and applies triangulation to the calculated information, thereby accurately calculating three-dimensional current location information in terms of longitude, latitude and altitude. At present, the method for calculating location and time information uses three satellites and corrects the error of the calculated location and time by using a further satellite. In addition, the GPS module 115 can calculate speed information by continuously computing the current location in real time.
The A/V input unit 120 is used to receive audio or video signals. The A/V input unit 120 may include a camera 121 and a microphone 122. The camera 121 processes image data of still pictures or video obtained by an image capture device in a video capture mode or an image capture mode. The processed image frames may be displayed on a display unit 151, stored in the memory 160 (or another storage medium), or transmitted via the wireless communication unit 110; two or more cameras 121 may be provided depending on the structure of the mobile terminal. The microphone 122 can receive sound (audio data) in an operating mode such as a phone call mode, a recording mode or a voice recognition mode and process it into audio data. In the phone call mode, the processed audio (voice) data can be converted into a format that can be transmitted to a mobile communication base station via the mobile communication module 112. The microphone 122 may implement various types of noise elimination (or suppression) algorithms to eliminate (or suppress) noise or interference generated while receiving and transmitting audio signals.
The user input unit 130 may generate key input data according to commands input by the user to control various operations of the mobile terminal. The user input unit 130 allows the user to input various types of information and may include a keyboard, a dome switch, a touch pad (e.g., a touch-sensitive component that detects changes in resistance, pressure, capacitance and the like caused by being touched), a jog wheel, a jog switch, and so on. In particular, when the touch pad is superimposed on the display unit 151 as a layer, a touch screen can be formed.
The sensing unit 140 detects the current state of the mobile terminal 100 (e.g., whether the mobile terminal 100 is open or closed), the position of the mobile terminal 100, the presence or absence of user contact with the mobile terminal 100 (i.e., touch input), the orientation of the mobile terminal 100, the acceleration or deceleration and direction of movement of the mobile terminal 100, and so on, and generates commands or signals for controlling the operation of the mobile terminal 100. For example, when the mobile terminal 100 is implemented as a slide-type mobile phone, the sensing unit 140 can sense whether the slide-type phone is open or closed. In addition, the sensing unit 140 can detect whether the power supply unit 190 supplies power and whether the interface unit 170 is coupled to an external device. The sensing unit 140 may include a proximity sensor 1410, which will be described below in connection with the touch screen.
The interface unit 170 serves as an interface through which at least one external device can be connected to the mobile terminal 100. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and so on. The identification module may store various information for authenticating the user of the mobile terminal 100 and may include a user identity module (UIM), a subscriber identity module (SIM), a universal subscriber identity module (USIM), and the like. In addition, the device having the identification module (hereinafter referred to as an "identification device") may take the form of a smart card, so the identification device can be connected to the mobile terminal 100 via a port or other connection means. The interface unit 170 may be used to receive input (e.g., data or power) from an external device and transfer the received input to one or more elements within the mobile terminal 100, or to transfer data between the mobile terminal and the external device.
In addition, when the mobile terminal 100 is connected to an external cradle, the interface unit 170 can serve as a path through which power is supplied from the cradle to the mobile terminal 100, or as a path through which various command signals input from the cradle are transferred to the mobile terminal. The various command signals or the power input from the cradle can serve as signals for recognizing whether the mobile terminal is correctly mounted on the cradle. The output unit 150 is configured to provide output signals (e.g., audio signals, video signals, alarm signals, vibration signals, etc.) in a visual, audible and/or tactile manner. The output unit 150 may include a display unit 151, an audio output module 152, an alarm unit 153, and so on.
The display unit 151 may display information processed in the mobile terminal 100. For example, when the mobile terminal 100 is in the phone call mode, the display unit 151 may display a user interface (UI) or graphical user interface (GUI) related to a call or other communication (e.g., text messaging or multimedia file downloading). When the mobile terminal 100 is in a video call mode or an image capture mode, the display unit 151 may display captured and/or received images, or a UI or GUI showing video or images and related functions, and so on.
Meanwhile, when the display unit 151 and the touch pad are superimposed on each other as layers to form a touch screen, the display unit 151 can serve as both an input device and an output device. The display unit 151 may include at least one of a liquid crystal display (LCD), a thin film transistor LCD (TFT-LCD), an organic light-emitting diode (OLED) display, a flexible display, a three-dimensional (3D) display, and so on. Some of these displays may be configured to be transparent to allow viewing from the outside; these may be called transparent displays, a typical example being a TOLED (transparent organic light-emitting diode) display. Depending on the desired implementation, the mobile terminal 100 may include two or more display units (or other display devices); for example, the mobile terminal may include an external display unit (not shown) and an internal display unit (not shown). The touch screen can be used to detect touch input pressure as well as touch input position and touch input area.
When the mobile terminal is in a call signal receiving mode, a call mode, a recording mode, a voice recognition mode, a broadcast receiving mode or a similar mode, the audio output module 152 may convert audio data received by the wireless communication unit 110 or stored in the memory 160 into an audio signal and output it as sound. Moreover, the audio output module 152 may provide audio output related to a specific function performed by the mobile terminal 100 (e.g., call signal reception sound or message reception sound). The audio output module 152 may include a speaker, a buzzer, and so on.
The alarm unit 153 may provide output to notify the mobile terminal 100 of the occurrence of an event. Typical events include call reception, message reception, key signal input, touch input, and so on. In addition to audio or video output, the alarm unit 153 may provide output in other manners to notify of the occurrence of an event. For example, the alarm unit 153 may provide output in the form of vibration: when a call, a message or some other incoming communication is received, the alarm unit 153 may provide a tactile output (i.e., vibration) to notify the user. By providing such tactile output, the user can recognize the occurrence of various events even when the user's mobile phone is in their pocket. The alarm unit 153 may also provide output notifying the occurrence of an event via the display unit 151 or the audio output module 152.
The memory 160 may store software programs for processing and control operations performed by the controller 180, or may temporarily store data that has been or is to be output (e.g., phone books, messages, still images, videos, etc.). Moreover, the memory 160 may store data on the vibration and audio signals of various modes output when a touch is applied to the touch screen.
The memory 160 may include at least one type of storage medium, including a flash memory, a hard disk, a multimedia card, a card-type memory (e.g., SD or DX memory), a random access memory (RAM), a static random access memory (SRAM), a read-only memory (ROM), an electrically erasable programmable read-only memory (EEPROM), a programmable read-only memory (PROM), a magnetic memory, a magnetic disk, an optical disc, and so on. Moreover, the mobile terminal 100 may cooperate, via a network connection, with a network storage device that performs the storage function of the memory 160.
The controller 180 generally controls the overall operation of the mobile terminal. For example, the controller 180 performs the control and processing related to voice calls, data communication, video calls, and so on. In addition, the controller 180 may include a multimedia module 181 for reproducing (or playing back) multimedia data; the multimedia module 181 may be configured within the controller 180 or separately from it. The controller 180 may perform pattern recognition processing to recognize handwriting or drawing input performed on the touch screen as characters or images.
The power supply unit 190 receives external power or internal power under the control of the controller 180 and provides the appropriate power required for operating the various elements and components.
The various embodiments described herein may be implemented in a computer-readable medium using, for example, computer software, hardware, or any combination thereof. For a hardware implementation, the embodiments described herein may be implemented using at least one of an application-specific integrated circuit (ASIC), a digital signal processor (DSP), a digital signal processing device (DSPD), a programmable logic device (PLD), a field programmable gate array (FPGA), a processor, a controller, a microcontroller, a microprocessor, or an electronic unit designed to perform the functions described herein; in some cases, such embodiments may be implemented in the controller 180. For a software implementation, embodiments such as processes or functions may be implemented as separate software modules each performing at least one function or operation. The software code may be implemented by a software application (or program) written in any suitable programming language, stored in the memory 160 and executed by the controller 180.
So far, the mobile terminal has been described in terms of its functions. In the following, for the sake of brevity, a slide-type mobile terminal will be taken as an example among the various types of mobile terminals such as folder-type, bar-type, swing-type and slide-type terminals. The present invention can therefore be applied to any type of mobile terminal and is not limited to slide-type mobile terminals.
Referring to Fig. 2, Fig. 2 is a block diagram of the electrical structure of the camera in Fig. 1.
The photographing lens 1211 is composed of a plurality of optical lenses for forming an image of the subject, and is a single-focus lens or a zoom lens. The photographing lens 1211 can be moved along the optical axis under the control of a lens driver 1221; the lens driver 1221 controls the focal position of the photographing lens 1211 according to control signals from a lens driving control circuit 1222 and, in the case of a zoom lens, can also control the focal length. The lens driving control circuit 1222 performs drive control of the lens driver 1221 according to control commands from a microcomputer 1217.
An imaging element 1212 is arranged on the optical axis of the photographing lens 1211, near the position where the subject image is formed by the photographing lens 1211. The imaging element 1212 captures the subject image and obtains image data. Photodiodes constituting the pixels are arranged two-dimensionally in a matrix on the imaging element 1212. Each photodiode generates a photoelectric conversion current corresponding to the amount of received light, and this current is accumulated as charge by a capacitor connected to each photodiode. An RGB colour filter in a Bayer arrangement is provided on the front surface of each pixel.
The imaging element 1212 is connected to an imaging circuit 1213. The imaging circuit 1213 performs charge accumulation control and image signal readout control in the imaging element 1212, reduces the reset noise of the read image signal (analog image signal), performs waveform shaping on it, and then raises the gain and so on to obtain an appropriate signal level.
The imaging circuit 1213 is connected to an A/D converter 1214, which performs analog-to-digital conversion on the analog image signal and outputs a digital image signal (hereinafter referred to as image data) to a bus 1227.
The bus 1227 is a transfer path for transferring the various data read out or generated inside the camera. In addition to the above-mentioned A/D converter 1214, an image processor 1215, a JPEG processor 1216, the microcomputer 1217, an SDRAM (synchronous dynamic random access memory) 1218, a memory interface (hereinafter referred to as memory I/F) 1219 and an LCD (liquid crystal display) driver 1220 are connected to the bus 1227.
The image processor 1215 performs various kinds of image processing on the image data output from the imaging element 1212, such as OB subtraction, white balance adjustment, colour matrix calculation, gamma conversion, colour difference signal processing, noise removal, demosaicing and edge processing. When recording image data on a recording medium 1225, the JPEG processor 1216 compresses the image data read from the SDRAM 1218 according to the JPEG compression method. The JPEG processor 1216 also decompresses JPEG image data for image reproduction and display: the file recorded on the recording medium 1225 is read out, decompressed in the JPEG processor 1216, and the decompressed image data is temporarily stored in the SDRAM 1218 and displayed on an LCD 1226. In the present embodiment, JPEG is adopted as the image compression/decompression method, but the compression/decompression method is not limited to this; other methods such as MPEG, TIFF and H.264 may of course be adopted.
The microcomputer 1217 functions as the control unit of the camera as a whole and centrally controls the camera's various processing sequences. The microcomputer 1217 is connected to an operation unit 1223 and a flash memory 1224.
The operation unit 1223 includes, but is not limited to, physical or virtual keys. These physical or virtual keys may be operation controls such as a power button, a photographing key, an edit key, a moving-image button, a playback button, a menu button, a cross key, an OK button, a delete button and an enlarge button, as well as various other input buttons and keys; the operation unit detects the operation states of these controls.
The detection results are output to the microcomputer 1217. In addition, a touch panel is provided on the front surface of the LCD 1226 serving as the display; it detects the position touched by the user and outputs that position to the microcomputer 1217. The microcomputer 1217 executes the processing sequences corresponding to the user's operation according to the detection results from the operation unit 1223.
The flash memory 1224 stores the programs for executing the various processing sequences of the microcomputer 1217, and the microcomputer 1217 controls the camera as a whole according to these programs. In addition, the flash memory 1224 stores various adjustment values of the camera; the microcomputer 1217 reads the adjustment values and controls the camera according to them.
The SDRAM 1218 is an electrically rewritable volatile memory for temporarily storing image data and the like. The SDRAM 1218 temporarily stores the image data output from the A/D converter 1214 and the image data processed by the image processor 1215, the JPEG processor 1216 and so on.
The memory interface 1219 is connected to the recording medium 1225 and controls the writing of image data and the file headers attached to the image data to the recording medium 1225, as well as reading them from the recording medium 1225. The recording medium 1225 is, for example, a memory card that can be freely attached to and detached from the camera body, but is not limited to this and may also be a hard disk or the like built into the camera body.
The LCD driver 1220 is connected to the LCD 1226. Image data processed by the image processor 1215 is stored in the SDRAM 1218; when display is required, the stored image data is read out and displayed on the LCD 1226. Alternatively, image data compressed by the JPEG processor 1216 is stored in the SDRAM 1218; when display is required, the JPEG processor 1216 reads out the compressed image data from the SDRAM 1218, decompresses it, and the decompressed image data is displayed on the LCD 1226.
The LCD 1226 is arranged on the back of the camera body and displays images. The LCD 1226 is an LCD panel, but is not limited to this; various other display panels, such as organic EL panels, may also be adopted.
Based on the above hardware structure of the mobile terminal and electrical structure of the camera, the embodiments of the photographing method of the present invention are proposed.
Referring to Fig. 3, Fig. 3 is a schematic flowchart of the first embodiment of the photographing method of the present invention.
This embodiment proposes a photographing method comprising:
Step S10: displaying a 3D image to be synthesized in a preset area.
In this embodiment, the terminal may use a prestored 3D image as the 3D image to be synthesized and display it in the preset area. Further, in order to enrich the photographing background, a step of selecting the 3D image is preferably included before step S10. The step of selecting the 3D image comprises:
1) The terminal displays a preset 3D image application or a preset photographing application. When it is detected that the user touches the icon of the application, 3D images of preset kinds are displayed; when it is detected that the user touches one of the 3D images, the touched 3D image is taken as the 3D image to be synthesized and displayed in the preset area.
2) When voice information input by the user is received (e.g., the user says "select 3D image"), 3D images of preset kinds are displayed in the preset area of the terminal, each with a corresponding number. When a voice selection instruction from the user is received (e.g., the user says "number 5"), the 3D image selected by voice is taken as the 3D image to be synthesized and displayed in the preset area.
The two ways of selecting a 3D image listed above are merely exemplary; other ways of selecting a 3D image that those skilled in the art may devise according to actual needs using the technical idea of the present invention all fall within the protection scope of the present invention and are not exhaustively enumerated here.
It can be understood that the terminal first establishes a communication connection with the server corresponding to the 3D images, and when a 3D image selection instruction is detected, acquires the 3D image corresponding to the selection instruction and displays it in the preset area.
Step S20: acquiring the current orientation information of the terminal in real time.
In this embodiment, the terminal acquires its current orientation information in real time, preferably by means of a preset sensor such as a gyroscope, which monitors the terminal in real time to obtain its current orientation information. The orientation information is a three-dimensional pointing vector.
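As an illustration only, the following Python sketch converts assumed sensor angles into the three-dimensional pointing vector mentioned above. The angle convention and axis order are assumptions for the example; the patent does not specify them.

```python
import numpy as np

def pointing_vector(azimuth_deg: float, pitch_deg: float) -> np.ndarray:
    """Convert assumed gyroscope/compass readings (azimuth around the vertical
    axis, pitch above the horizon) into the unit 3D vector the camera points along."""
    az, pt = np.radians([azimuth_deg, pitch_deg])
    v = np.array([np.cos(pt) * np.sin(az),   # east component
                  np.sin(pt),                # up component
                  np.cos(pt) * np.cos(az)])  # north component
    return v / np.linalg.norm(v)

print(pointing_vector(90.0, 10.0))  # e.g. camera facing east, tilted 10 degrees up
```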
Step S30: generating a field-of-view image corresponding to the 3D image according to the acquired orientation information and the current shooting position information.
In this embodiment, step S30 may be implemented as follows:
1) First implementation: the terminal generates the field-of-view image corresponding to the 3D image according to the acquired orientation information and the current shooting position in the 3D image displayed in the preset area. In this implementation, the 3D image is preferably downloaded first and displayed in the preset area, the acquired orientation information is applied to the 3D image, and the field-of-view image corresponding to the 3D image is generated according to that orientation information and the current shooting position information. It can be understood that only when the orientation information changes does the terminal regenerate the field-of-view image according to the new orientation information and the current shooting position information; that is, when the orientation information changes, the field-of-view image generated by the terminal changes with it.
2) Second implementation: when the acquired orientation information changes, the terminal sends it to a server, so that the server feeds back the field-of-view image corresponding to the 3D image based on that orientation information and the terminal's current shooting position in the 3D image. In this implementation, preferably, when a change in the terminal's orientation is detected (i.e., the direction the terminal's camera faces changes), the acquired orientation information is sent to the server, which is preferably a 3D scene server. It can be understood that when the user moves the terminal horizontally, vertically, forwards or backwards while keeping the camera's original direction, the orientation information does not change; it changes only when the angle of the terminal's camera changes relative to its original angle. It can also be understood that the server generates, from the orientation information acquired in real time and the default shooting position, the real-time field-of-view image corresponding to that orientation; that is, every time an orientation is received the server generates a field-of-view image, and different orientations produce different field-of-view images, so that as the orientation within the 3D image changes in real time, the view of the 3D image changes with it.
In this embodiment, the current shooting position information may be preset as a default position in the 3D image; that is, whenever a field-of-view framing instruction is detected, the default position can be used directly as the framing position and combined with the orientation information acquired by the terminal to generate the field-of-view image in the 3D image.
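For illustration, a standard way to place such a virtual viewpoint in a 3D scene is a look-at view matrix built from the shooting position and the pointing vector. The sketch below, in Python with NumPy, is a generic example rather than the renderer actually used, and the default position is an assumed value.

```python
import numpy as np

def look_at(position: np.ndarray, direction: np.ndarray,
            up: np.ndarray = np.array([0.0, 1.0, 0.0])) -> np.ndarray:
    """Build a 4x4 view matrix for a virtual camera at `position` looking along
    `direction` (assumed not parallel to `up`); a 3D renderer would use it to
    produce the field-of-view image of the scene."""
    f = direction / np.linalg.norm(direction)
    r = np.cross(f, up)
    r /= np.linalg.norm(r)
    u = np.cross(r, f)
    view = np.eye(4)
    view[0, :3], view[1, :3], view[2, :3] = r, u, -f
    view[:3, 3] = -view[:3, :3] @ position   # express the world relative to the camera
    return view

default_position = np.array([0.0, 1.6, 2.0])   # assumed default shooting position
direction = np.array([0.0, 0.0, -1.0])         # pointing vector from the sensor
print(look_at(default_position, direction))
```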
Further, to make photographing more intelligent, a preferred solution is that, in the process of displaying the field-of-view image, if a shooting position update instruction input by the user is received, the current shooting position information in the 3D image is updated. Updating the current shooting position in the 3D image may be implemented as follows:
1) First implementation: when a touch-slide operation input by the user is received at the shooting position corresponding to the shooting position information, the terminal determines the target position corresponding to the touch-slide operation and takes it as the current shooting position. That is, when the user wants a different shooting position, the user changes the terminal's shooting position with a touch-slide operation, and the field-of-view image corresponding to the 3D image is generated according to the acquired orientation information and the new current shooting position, so that the generated field-of-view image better meets the user's needs.
2) Second implementation: when a touch-click operation input by the user is received at the shooting position corresponding to the shooting position information, the shooting position preferably shakes; when a further click operation input by the user is detected, the terminal determines the target position corresponding to that click operation and takes it as the current shooting position.
In this embodiment, the shooting position corresponding to the current shooting position information may be shown as a default point or as an icon such as a small five-pointed star, so that the user can touch and slide the displayed point or star to change the shooting position. Further, the terminal may also create a virtual camera in the 3D image and place it at the current shooting position, so that the user can intuitively see the position of the virtual camera, quickly find the current shooting position, and then perform the touch-slide or touch-click operation on it.
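Purely as an illustration of the touch-slide update described above, the following sketch maps a slide gesture to a translation of the virtual camera; the pixels-to-scene scale factor and the ground-plane mapping are assumptions for the example.

```python
import numpy as np

def update_shooting_position(position: np.ndarray, slide_dx: float,
                             slide_dy: float, scale: float = 0.01) -> np.ndarray:
    """Translate the virtual camera on the scene's ground plane according to an
    assumed touch-slide gesture given in pixels; `scale` is an illustrative
    pixels-to-scene-units factor, not a value from the patent."""
    return position + np.array([slide_dx * scale, 0.0, slide_dy * scale])

pos = np.array([0.0, 1.6, 2.0])                        # current shooting position
pos = update_shooting_position(pos, slide_dx=120.0, slide_dy=-40.0)
print(pos)  # new shooting position used for the next field-of-view image
```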
Step S40: synthesizing the field-of-view image with the preset image.
In this embodiment, referring to Fig. 4, step S40 comprises:
Step S41: acquiring the preset image.
In this embodiment, the preset image may be an image prestored in the terminal, or an image captured by the terminal's camera at the current photographing moment.
Step S42: determining the contour information in the image.
Step S43: extracting an object of a preset kind from the image according to the determined contour information.
Step S44: synthesizing the field-of-view image with the image corresponding to the extracted object.
In this embodiment, for a better understanding of the scheme, an example is given as follows. The subject strikes a pose in front of a plain white wall. The terminal first acquires the real image of the white wall and the person, and then filters the acquired real image to obtain the real-time portrait in it; the filtering is preferably edge detection performed on the portrait in the real image. When the features of the portrait differ considerably from the environment, the overall contour of the person is obtained by edge detection. When the features of the portrait do not differ much from the environment, for example when the colour of the person's clothes is similar to that of the environment, the facial contour can be obtained first, because the pixel values of the face differ from those of the environment; the region connected to the facial contour is then taken as the region to be detected, and its edges are detected to obtain the body contour of the person. The outer contour of the real-time portrait is determined from the obtained body contour and facial contour, and the image corresponding to the real-time portrait is extracted based on that outer contour. Finally, the terminal synthesizes the received field-of-view image with the image corresponding to the real-time portrait; the synthesis itself is prior art and is not described further here.
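A minimal sketch of steps S41-S44, using OpenCV in Python under simplifying assumptions (the largest closed contour is taken to be the person, and the face-first refinement for low-contrast clothing described above is omitted):

```python
import cv2
import numpy as np

def extract_and_composite(real_image: np.ndarray, view_image: np.ndarray) -> np.ndarray:
    """Find edges in the camera image, take the largest contour as the portrait,
    and paste that region onto the rendered field-of-view image."""
    gray = cv2.cvtColor(real_image, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    edges = cv2.dilate(edges, np.ones((5, 5), np.uint8))          # close small gaps in the contour
    # OpenCV 4.x return signature; 3.x returns an extra image first
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    mask = np.zeros(gray.shape, np.uint8)
    if contours:
        person = max(contours, key=cv2.contourArea)               # assume the person is the largest object
        cv2.drawContours(mask, [person], -1, 255, thickness=cv2.FILLED)
    view = cv2.resize(view_image, (real_image.shape[1], real_image.shape[0]))
    out = view.copy()
    out[mask > 0] = real_image[mask > 0]                          # keep the portrait pixels
    return out
```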
According to the photographing method proposed in this embodiment, a 3D image to be synthesized is displayed in a preset area, the current orientation information is acquired in real time, a field-of-view image corresponding to the 3D image is generated according to the acquired orientation information and the current shooting position information, and the field-of-view image is synthesized with a preset image. As the orientation within the 3D image differs, the field-of-view image obtained also differs, so that photos of various different scenes can be taken even from the same location, making the photographing scenes richer.
Further, to improve the flexibility of photographing, referring to Fig. 5, a second embodiment of the photographing method of the present invention is proposed based on the first embodiment. In this embodiment, between step S30 and step S40, the photographing method comprises:
Step S50: when a parameter adjustment instruction is detected, determining the parameter corresponding to the parameter adjustment instruction.
In this embodiment, the parameters include focal length, depth of field, zoom, white balance, scene time and the like. When a parameter adjustment instruction is detected, the terminal determines the focal length, depth of field, zoom, white balance, scene time or other parameter to be adjusted by means of a preset sensor.
Step S60: adjusting the field-of-view image according to the parameter and generating the adjusted field-of-view image.
In this embodiment, a step of enabling an automatic parameter adjustment mode is included before step S10. The automatic parameter adjustment mode may be enabled as follows: the preset photographing application of the terminal displays a parameter setting interface showing the parameters and a box corresponding to each parameter; when the box corresponding to a parameter is selected, that parameter enters the automatic adjustment mode. Alternatively, the parameter setting interface displays an automatic parameter adjustment indicator, preferably a rectangular box; when that box is selected, the automatic parameter adjustment mode is started, i.e., all parameters of the terminal enter the automatic adjustment mode. That is, after the user enables the automatic parameter adjustment mode, the terminal senses the current photographing situation through preset sensors while photographing and determines the parameters to be adjusted according to the actual situation. The parameters to be adjusted may be determined, for example, as follows:
1) Adjusting the focal length of the field-of-view image. For example, when the user moves the terminal closer to the subject, the terminal can determine the focal length of the image according to the distance between the current position and the subject, and adjust the field-of-view image according to the determined focal length to generate a new field-of-view image.
2) Adjusting the scene time of the field-of-view image (a sketch is given below). For example, if the scene's preset time is daytime, the scene of the field-of-view image is adjusted to night according to the preset parameter when the set time requires it. The time is preferably adjusted as follows: a scene-time setting icon is displayed in a preset area of the terminal's photographing interface (e.g., the upper-left corner); when the user touches the icon, a preset menu window pops up, which preferably includes the scene time. The scene time may be divided into preset periods, for example dividing the 24 hours into 4 periods; when it is detected that the user selects one of the periods, the time corresponding to the selected period is obtained and the field-of-view image is adjusted according to that time. Alternatively, the scene time may be divided into time scenes such as daytime, night and dusk; when it is detected that the user selects one of these, the field-of-view image is adjusted according to the time of the selected scene. It can be understood that the scene time may also be preset to a default time, which is then used as the photographing scene time each time a photo is taken.
The two ways of determining the parameters to be adjusted listed above are merely exemplary; other ways of determining the parameters to be adjusted that those skilled in the art may devise according to actual needs using the technical idea of the present invention all fall within the protection scope of the present invention and are not exhaustively enumerated here.
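The sketch below illustrates only the scene-time adjustment from item 2) above: the rendered field-of-view image is darkened and tinted outside an assumed daytime window. The 6:00-18:00 window and the dimming factors are example values, not values given in the patent.

```python
import numpy as np

def adjust_for_scene_time(view_image: np.ndarray, hour: int) -> np.ndarray:
    """Darken and tint the field-of-view image when the selected scene time
    falls outside an assumed daytime window (6:00-18:00)."""
    out = view_image.astype(np.float32)
    if not 6 <= hour < 18:                      # treat the selected time as night
        out *= np.array([1.0, 0.8, 0.6])        # illustrative per-channel tint
        out *= 0.4                              # illustrative overall dimming
    return np.clip(out, 0, 255).astype(np.uint8)

day_view = np.full((480, 640, 3), 200, np.uint8)
night_view = adjust_for_scene_time(day_view, hour=23)
```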
Further, to improve the flexibility of photographing, referring to Fig. 6, a third embodiment of the photographing method of the present invention is proposed based on the first embodiment. In this embodiment, after step S40, the photographing method comprises:
Step S70: when an information adding instruction input by the user is detected, determining the information corresponding to the information adding instruction.
Step S80: adding the determined information to the synthesized image, so as to display the synthesized image with the added information.
In this embodiment, the information preferably includes text or a pattern. The text may include the weather condition, the shooting place, the user's mood and so on; the pattern may be a preset image such as a heart. That is, the user can add such text or pattern information to the synthesized image, making the photo richer and more interesting.
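As an illustration of steps S70-S80, the following sketch overlays a text caption on the synthesized photo with OpenCV; the font, position, colour and caption text are assumptions for the example.

```python
import cv2
import numpy as np

def add_info(photo: np.ndarray, text: str) -> np.ndarray:
    """Overlay the information chosen by the user (weather, place, mood, ...)
    onto the synthesized photo and return the annotated copy."""
    out = photo.copy()
    cv2.putText(out, text, (20, out.shape[0] - 20), cv2.FONT_HERSHEY_SIMPLEX,
                1.0, (255, 255, 255), 2, cv2.LINE_AA)
    return out

photo = np.zeros((480, 640, 3), np.uint8)      # placeholder synthesized image
photo = add_info(photo, "Sunny, Shenzhen")     # hypothetical caption
```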
The present invention further provides a photographing device.
Referring to Fig. 7, Fig. 7 is a functional block diagram of the first embodiment of the photographing device of the present invention.
It should be emphasized that, for those skilled in the art, the functional block diagram shown in Fig. 7 is merely an exemplary diagram of a preferred embodiment, and new functional modules can easily be added around the functional modules of the photographing device shown in Fig. 7. The names of the functional modules are self-defined names used only to help in understanding the program functional blocks of the photographing device; they are not intended to limit the technical solution of the present invention, the core of which lies in the functions to be achieved by the modules bearing those names.
This embodiment proposes a photographing device comprising:
a display module 10, configured to display a 3D image to be synthesized in a preset area.
In this embodiment, a prestored 3D image may be used as the 3D image to be synthesized, and the display module 10 displays it in the preset area. Further, in order to enrich the photographing background, the display module 10 preferably comprises a selection unit for selecting the 3D image. The selection unit may select the 3D image in the following ways:
1) The display module 10 displays a preset 3D image application or a preset photographing application. When it is detected that the user touches the icon of the application, 3D images of preset kinds are displayed; when it is detected that the user touches one of the 3D images, the touched 3D image is taken as the 3D image to be synthesized and displayed in the preset area.
2) When voice information input by the user is received (e.g., the user says "select 3D image"), the display module 10 displays 3D images of preset kinds in the preset area, each with a corresponding number. When a voice selection instruction from the user is received (e.g., the user says "number 5"), the 3D image selected by voice is taken as the 3D image to be synthesized and displayed in the preset area.
The two ways of selecting a 3D image listed above are merely exemplary; other ways of selecting a 3D image that those skilled in the art may devise according to actual needs using the technical idea of the present invention all fall within the protection scope of the present invention and are not exhaustively enumerated here.
It can be understood that the photographing device first establishes a communication connection with the server corresponding to the 3D images, and when a 3D image selection instruction is detected, acquires the 3D image corresponding to the selection instruction, and the display module 10 displays the acquired 3D image in the preset area.
Acquisition module 20, for its current position directional information of Real-time Obtaining;
In the present embodiment, its current position directional information of described acquisition module 20 Real-time Obtaining preferably by preset application as gyroscope, carry out detecting to obtain the current position directional information of described camera arrangement in real time to described camera arrangement, described position directional information is three-dimensional pointing vector.
Generation module 30, for generating field-of-view image corresponding to described 3D rendering according to the described position directional information obtained and current picture-taking position information;
In the present embodiment, the embodiment that described generation module 30 generates field-of-view image corresponding to described 3D rendering according to the described position directional information obtained and current picture-taking position information comprises:
1) the first embodiment, picture-taking position information current in the 3D rendering of the described position directional information that described generation module 30 obtains according to described acquisition module 20 and predeterminable area display, generates the field-of-view image that described 3D rendering is corresponding.In the present embodiment, preferably first download described 3D rendering, the described 3D rendering that described display module 10 is downloaded in predeterminable area display, and the position directional information of acquisition is presented in described 3D rendering, generation module 30 generates field-of-view image corresponding to described 3D rendering according to described position directional information and current picture-taking position information.Be understandable that, only when position directional information changes, described generation module 30 just regenerates field-of-view image corresponding to described 3D rendering according to new position directional information and current picture-taking position information, namely, when described position directional information changes, the field-of-view image that described generation module 30 generates is also along with change.
2) the second embodiment, when the described position directional information that described acquisition module 20 obtains changes, the described position directional information obtained is sent to server by described acquisition module 20, for described server based on field-of-view image corresponding to 3D rendering described in described position directional information and picture-taking position information feed back current in the 3 d image.In the present embodiment, when preferred detection changes (direction that namely described camera arrangement camera is right changes) to the position directional information of described camera arrangement, the described position directional information obtained is sent to server, and described server is preferably 3D scene server.Be understandable that, when user according to the camera that terminal is original, direction is moved horizontally, vertically move or movable described terminal time, the position directional information of described camera arrangement does not change, that is, only when the angle information of the camera of described camera arrangement changes relative to original angle information, described position directional information just changes.Be understandable that, the described position directional information of described server based on Real-time Obtaining and the picture-taking position of acquiescence, generate the real-time field-of-view image that real time position directional information is corresponding, that is, often receive a position directional information, server can generate a field-of-view image, when the position directional information received is different, the field-of-view image generated is also different, namely achieve the real-time change along with position directional information in described 3D rendering, the cyclogram of 3D rendering is also changed thereupon.
In this embodiment, the current photographing position information may be preset as a default position within the 3D image; that is, each time a field-of-view image framing instruction is detected, the default position can be used directly as the framing position and, combined with the position pointing information acquired by the acquisition module 20, the generation module 30 generates the field-of-view image within the 3D image.
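The local generation of the first embodiment can be made concrete as follows: the acquired position pointing information (assumed here to be yaw/pitch angles from the device's orientation sensor) and the current photographing position inside the 3D scene determine a virtual camera, from which the field-of-view image is rendered. This is a minimal sketch; the scene object, its `render` method, and all parameter names are assumptions for illustration, not taken from the patent.

```python
import numpy as np

def direction_from_pointing(yaw_deg: float, pitch_deg: float) -> np.ndarray:
    """Convert the acquired position pointing information (yaw/pitch in
    degrees) into a unit view direction inside the 3D scene."""
    yaw, pitch = np.radians(yaw_deg), np.radians(pitch_deg)
    return np.array([
        np.cos(pitch) * np.sin(yaw),   # x
        np.sin(pitch),                 # y (up)
        np.cos(pitch) * np.cos(yaw),   # z (forward)
    ])

def look_at(eye: np.ndarray, forward: np.ndarray,
            up: np.ndarray = np.array([0.0, 1.0, 0.0])) -> np.ndarray:
    """Build a view matrix for the virtual camera placed at the current
    photographing position `eye`, looking along `forward` (pitch assumed
    away from +/-90 degrees so `forward` is not parallel to `up`)."""
    f = forward / np.linalg.norm(forward)
    s = np.cross(f, up); s /= np.linalg.norm(s)
    u = np.cross(s, f)
    view = np.eye(4)
    view[0, :3], view[1, :3], view[2, :3] = s, u, -f
    view[:3, 3] = -view[:3, :3] @ eye
    return view

def generate_field_of_view_image(scene, eye, yaw_deg, pitch_deg):
    """Regenerate the field-of-view image from the current pointing
    information and photographing position; called whenever the pointing
    information changes."""
    view = look_at(np.asarray(eye, dtype=float),
                   direction_from_pointing(yaw_deg, pitch_deg))
    return scene.render(view)   # hypothetical renderer supplied by the 3D scene
```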
Further, to make photographing more intelligent, in a preferred solution the photographing device comprises an update module. The update module is configured to update the current photographing position information within the 3D image if a photographing position information update instruction input by the user is received while the display module 10 is displaying the field-of-view image. The update module may update the current photographing position within the 3D image in either of the following ways:
1) First embodiment: when a touch-slide operation input by the user is received at the photographing position corresponding to the photographing position information, the target position corresponding to the touch-slide operation is determined, and the update module takes that target position as the current photographing position. That is, when the user wants a different photographing position, a touch-slide operation changes the photographing position of the terminal (a minimal sketch of this position update is given below); the generation module 30 then generates the field-of-view image corresponding to the 3D image from the position pointing information acquired by the acquisition module 20 and the updated current photographing position, so that the generated field-of-view image better matches the user's needs.
2) Second embodiment: when a touch-click operation input by the user is received at the photographing position corresponding to the photographing position information, the photographing position preferably shakes to indicate selection; when a further click operation input by the user is detected, the update module determines the target position corresponding to that click operation and takes it as the current photographing position.
In this embodiment, the photographing position corresponding to the current photographing position information may be rendered as a default dot or an icon such as a small five-pointed star, which the user can touch and slide to change the photographing position. Further, the generation module 30 may also create a virtual camera within the 3D image and place it at the current photographing position, so that the user can see the virtual camera's location intuitively, quickly find the current photographing position, and then perform a touch-slide or touch-click operation on it.
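A minimal sketch of the first update behaviour: a touch-slide on the marker (the default dot, small star, or virtual-camera icon) is mapped to a new photographing position on the scene's ground plane. The screen-to-scene scale factor and the clamping limits below are illustrative assumptions, not values from the patent.

```python
def update_photographing_position(current_pos, drag_dx_px, drag_dy_px,
                                  px_to_scene=0.01,
                                  limits=((-10.0, 10.0), (-10.0, 10.0))):
    """Map a touch-slide (pixel deltas) to a new photographing position
    (x, z) inside the 3D scene and clamp it to the scene bounds."""
    x, z = current_pos
    x += drag_dx_px * px_to_scene
    z += drag_dy_px * px_to_scene
    (xmin, xmax), (zmin, zmax) = limits
    # The clamped result becomes the current photographing position for the next view.
    return (min(max(x, xmin), xmax), min(max(z, zmin), zmax))

# Example: a 120 px rightward drag moves the virtual camera 1.2 scene units along x.
print(update_photographing_position((0.0, 2.0), 120, 0))   # -> (1.2, 2.0)
```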
A synthesis module 40, configured to synthesize the field-of-view image with a preset image.
In this embodiment, referring to Fig. 8, the synthesis module 40 comprises:
An acquiring unit 41, configured to acquire the preset image;
In this embodiment, the preset image may be an image prestored by the photographing device, or an image acquired by the camera lens of the photographing device at the current photographing moment.
A determining unit 42, configured to determine contour information in the image;
An extraction unit 43, configured to extract an object of a preset type from the image according to the determined contour information;
A synthesis unit 44, configured to synthesize the field-of-view image with the image corresponding to the extracted object.
In this embodiment, the scheme is illustrated by the following example. The subject strikes a pose in front of a plain wall. The terminal first acquires the real image of the wall and the person, and then filters the acquired real image to obtain the real-time portrait in it; the filtering is preferably edge detection performed on the real image for the portrait. When the portrait differs markedly from the background, the overall contour of the person is obtained directly by edge detection. When the portrait differs little from the background, for example when the color of the person's clothes is close to that of the surroundings, the face is located first, because the pixel values of the face differ from those of the background: the acquiring unit 41 first obtains the facial contour, takes the region connected to the facial contour as the region to be detected, and performs edge detection on that region to obtain the body contour of the person. From the body contour and the facial contour, the determining unit 42 determines the outer contour of the real-time portrait, and based on that outer contour the extraction unit 43 extracts the image corresponding to the real-time portrait. Finally, the synthesis unit 44 synthesizes the received field-of-view image with the image corresponding to the real-time portrait; the synthesis itself is prior art and is not described further here.
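The contour-based extraction and synthesis described above could be sketched with standard OpenCV primitives: edge detection yields a mask for the largest outer contour (treated as the person), and the masked pixels are pasted onto the field-of-view image. This is a rough illustration under the assumption that the subject clearly dominates the frame; it is not the patent's exact filtering procedure.

```python
import cv2
import numpy as np

def extract_portrait_mask(real_image: np.ndarray) -> np.ndarray:
    """Edge-detect the real image and fill the largest outer contour,
    treated here as the person's outer contour."""
    gray = cv2.cvtColor(real_image, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(cv2.GaussianBlur(gray, (5, 5), 0), 50, 150)
    edges = cv2.dilate(edges, np.ones((5, 5), np.uint8))       # close small gaps in the edges
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    mask = np.zeros(gray.shape, np.uint8)
    if contours:
        biggest = max(contours, key=cv2.contourArea)
        cv2.drawContours(mask, [biggest], -1, 255, thickness=cv2.FILLED)
    return mask

def synthesize(field_of_view_image: np.ndarray, real_image: np.ndarray) -> np.ndarray:
    """Paste the extracted portrait onto the field-of-view image."""
    fov = cv2.resize(field_of_view_image, (real_image.shape[1], real_image.shape[0]))
    mask = extract_portrait_mask(real_image)
    composite = fov.copy()
    composite[mask == 255] = real_image[mask == 255]
    return composite
```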
With the photographing device proposed in this embodiment, the 3D image to be synthesized is displayed in the preset area, the current position pointing information is acquired in real time, the field-of-view image corresponding to the 3D image is generated from the acquired position pointing information and the current photographing position information, and the field-of-view image is synthesized with the preset image. As the position pointing information within the 3D image changes, the field-of-view image obtained changes with it, so photos of many different scenes can be taken even from the same location, making the photographing scenes much richer.
Further, to improve photographing flexibility, referring to Fig. 9, a second embodiment of the photographing device of the present invention is proposed based on the first embodiment. In this embodiment, the photographing device further comprises:
A first determination module 50, configured to determine, when a parameter adjustment instruction is detected, the parameter corresponding to the parameter adjustment instruction;
In this embodiment, the parameter includes focal length, depth of field, zoom, white balance, scene time, or the like. When a parameter adjustment instruction is detected, the first determination module 50 determines the adjustment parameter, such as focal length, depth of field, zoom, white balance, or scene time, according to a preset sensor.
A processing module 60, configured to adjust the field-of-view image according to the parameter and generate the adjusted field-of-view image.
In this embodiment, the first determination module 50 comprises an enabling unit configured to enable the automatic parameter adjustment mode. The mode may be enabled as follows: the display module 10 displays the parameter-setting interface of the preset photographing application, which shows the parameters and a check box corresponding to each parameter; when the check box of a parameter is selected, that parameter enters the automatic adjustment mode. Alternatively, the parameter-setting interface displays a single automatic-adjustment control, preferably a rectangular box; when that box is selected, the automatic parameter adjustment mode is started, i.e. every parameter in the terminal enters the automatic adjustment mode. That is, after the user enables the automatic parameter adjustment mode, during photographing the terminal senses the current photographing conditions through preset sensors and determines the parameters to be adjusted according to the actual situation. The determination of the parameters to be adjusted is illustrated by the following examples:
1) Adjusting the focal length of the field-of-view image. For example, when the user moves the photographing device close to the subject, the first determination module 50 can determine the focal length of the image from the distance between the current position and the subject, and the processing module 60 adjusts the field-of-view image according to the determined focal length to generate a new field-of-view image.
2) Adjusting the scene time of the field-of-view image. For example, if the scene is originally preset as a daytime scene, a timed adjustment of the preset parameter can switch the scene of the field-of-view image to night. The time is preferably adjusted as follows: the display module 10 displays a scene-time setting icon in a preset area (such as the upper-left corner of the photographing interface); when the user touches the icon, a preset menu window pops up, which preferably includes the scene time. The scene time may be divided into preset time periods, for example dividing 24 hours into 4 periods; when the user is detected selecting any one period, the time corresponding to the selected period is obtained and the processing module 60 adjusts the field-of-view image according to that time. Alternatively, the scene time may be divided into time scenes such as daytime, night, and dusk; when the user is detected selecting any one of these time scenes, the processing module 60 adjusts the field-of-view image according to the selected time scene. It should be understood that the scene time may also be preset to a single default time, in which case that default time is used as the photographing scene time each time a photo is taken.
The two determination modes for the parameters to be adjusted enumerated above are merely exemplary; any other determination mode that a person skilled in the art derives from the technical idea of the present invention according to actual needs falls within the protection scope of the present invention and is not exhaustively listed here.
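As a rough illustration of the two examples above, the sketch below picks a zoom factor from the sensed subject distance and tints the field-of-view image according to the chosen time scene. The distance-to-zoom mapping and the per-scene colour gains are invented placeholder values, not values from the patent.

```python
import numpy as np

# Assumed per-time-scene colour gains (B, G, R); purely illustrative.
TIME_SCENE_GAINS = {
    "daytime": (1.00, 1.00, 1.00),
    "dusk":    (0.85, 0.90, 1.10),
    "night":   (0.60, 0.55, 0.50),
}

def zoom_from_distance(distance_m: float) -> float:
    """Map the sensed distance to the subject to a digital zoom factor
    (closer subject -> less zoom), clamped to a sane range."""
    return float(np.clip(distance_m / 2.0, 1.0, 4.0))

def apply_time_scene(fov_image: np.ndarray, scene: str = "night") -> np.ndarray:
    """Adjust the field-of-view image to the selected time scene by scaling
    its colour channels."""
    gains = np.array(TIME_SCENE_GAINS[scene], dtype=np.float32)
    adjusted = fov_image.astype(np.float32) * gains
    return np.clip(adjusted, 0, 255).astype(np.uint8)
```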
Further, to improve photographing flexibility, referring to Fig. 10, a third embodiment of the photographing device of the present invention is proposed based on the first embodiment. In this embodiment, the photographing device further comprises:
A second determination module 70, configured to determine, when an information addition instruction input by the user is detected, the information corresponding to the information addition instruction;
An adding module 80, configured to add the determined information to the composite image, so as to display the composite image with the added information.
In this embodiment, the information preferably includes text or a pattern. The text may include the weather, the shooting location, the user's mood, and so on; the pattern may be a preset image such as a heart. The user can thus add such text or pattern information to the synthesized image, making the resulting photo richer and more entertaining.
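A minimal sketch of the adding module using Pillow: the determined information (for example a weather string or a mood word) is drawn onto the composite image before it is displayed or saved. The font, position, colour, and the example file names are illustrative assumptions, not requirements of the patent.

```python
from PIL import Image, ImageDraw

def add_information(composite_path: str, text: str, out_path: str) -> None:
    """Draw the determined information (weather, location, mood, ...)
    onto the composite image and save the result."""
    image = Image.open(composite_path).convert("RGB")
    draw = ImageDraw.Draw(image)
    # Place the text near the lower-left corner with a soft shadow for legibility.
    x, y = 20, image.height - 40
    draw.text((x + 1, y + 1), text, fill=(0, 0, 0))
    draw.text((x, y), text, fill=(255, 255, 255))
    image.save(out_path)

# Example (hypothetical file names):
# add_information("composite.jpg", "Sunny - Shenzhen - Happy", "stamped.jpg")
```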
It should be noted that, as used herein, the terms "comprise" and "include", and any variants thereof, are intended to cover non-exclusive inclusion, so that a process, method, article, or system that comprises a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to the process, method, article, or system. In the absence of further limitation, an element qualified by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or system that comprises that element.
The serial numbers of the above embodiments of the present invention are for description only and do not represent the relative merits of the embodiments.
Through the above description of the embodiments, a person skilled in the art can clearly understand that the methods of the above embodiments may be implemented by software plus a necessary general-purpose hardware platform, and certainly also by hardware, but in many cases the former is the better implementation. Based on this understanding, the part of the technical solution of the present invention that contributes to the prior art may in essence be embodied in the form of a software product. The computer software product is stored in a storage medium (such as ROM/RAM, a magnetic disk, or an optical disc) and includes instructions for causing a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to perform the methods described in the embodiments of the present invention.
The above are only preferred embodiments of the present invention and do not thereby limit the scope of the claims of the present invention. Any equivalent structural or flow transformation made using the contents of the specification and drawings of the present invention, or any direct or indirect application in other related technical fields, is likewise included within the scope of patent protection of the present invention.

Claims (10)

1. A photographing method, characterized in that the photographing method comprises the following steps:
displaying, in a preset area, a 3D image to be synthesized;
acquiring current position pointing information in real time;
generating a field-of-view image corresponding to the 3D image according to the acquired position pointing information and current photographing position information; and
synthesizing the field-of-view image with a preset image.
2. The photographing method according to claim 1, characterized in that the photographing method further comprises:
during display of the field-of-view image, if a photographing position information update instruction input by a user is received, updating the current photographing position information in the 3D image.
3. The photographing method according to claim 1, characterized in that the step of synthesizing the field-of-view image with the preset image comprises:
acquiring the preset image;
determining contour information in the image;
extracting an object of a preset type from the image according to the determined contour information; and
synthesizing the field-of-view image with the image corresponding to the extracted object.
4. The photographing method according to claim 1, characterized in that, after the step of generating the field-of-view image corresponding to the 3D image according to the acquired position pointing information and the current photographing position information and the step of synthesizing the field-of-view image with the preset image, the photographing method comprises:
when a parameter adjustment instruction is detected, determining a parameter corresponding to the parameter adjustment instruction; and
adjusting the field-of-view image according to the parameter and generating an adjusted field-of-view image.
5. The photographing method according to claim 1, characterized in that, after the step of synthesizing the field-of-view image with the preset image, the photographing method comprises:
when an information addition instruction input by a user is detected, determining information corresponding to the information addition instruction; and
adding the determined information to the composite image, so as to display the composite image with the added information.
6. A photographing device, characterized in that the photographing device comprises:
a display module, configured to display, in a preset area, a 3D image to be synthesized;
an acquisition module, configured to acquire current position pointing information in real time;
a generation module, configured to generate a field-of-view image corresponding to the 3D image according to the acquired position pointing information and current photographing position information; and
a synthesis module, configured to synthesize the field-of-view image with a preset image.
7. The photographing device according to claim 6, characterized in that the photographing device further comprises:
an update module, configured to, during display of the field-of-view image, update the current photographing position information in the 3D image if a photographing position information update instruction input by a user is received.
8. The photographing device according to claim 6, characterized in that the synthesis module comprises:
an acquiring unit, configured to acquire the preset image;
a determining unit, configured to determine contour information in the image;
an extraction unit, configured to extract an object of a preset type from the image according to the determined contour information; and
a synthesis unit, configured to synthesize the field-of-view image with the image corresponding to the extracted object.
9. The photographing device according to claim 6, characterized in that the photographing device further comprises:
a first determination module, configured to determine, when a parameter adjustment instruction is detected, a parameter corresponding to the parameter adjustment instruction; and
a processing module, configured to adjust the field-of-view image according to the parameter and generate an adjusted field-of-view image.
10. The photographing device according to claim 6, characterized in that the photographing device further comprises:
a second determination module, configured to determine, when an information addition instruction input by a user is detected, information corresponding to the information addition instruction; and
an adding module, configured to add the determined information to the composite image, so as to display the composite image with the added information.
CN201510220902.0A 2015-04-30 2015-04-30 Photographic method and device Active CN104954670B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510220902.0A CN104954670B (en) 2015-04-30 2015-04-30 Photographic method and device

Publications (2)

Publication Number Publication Date
CN104954670A true CN104954670A (en) 2015-09-30
CN104954670B CN104954670B (en) 2018-09-04

Family

ID=54168977

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510220902.0A Active CN104954670B (en) 2015-04-30 2015-04-30 Photographic method and device

Country Status (1)

Country Link
CN (1) CN104954670B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101998045A (en) * 2009-08-11 2011-03-30 佛山市顺德区顺达电脑厂有限公司 Image processing device capable of synthesizing scene information
CN103581528A (en) * 2012-07-19 2014-02-12 百度在线网络技术(北京)有限公司 Method for preprocessing in photographing process of mobile terminal and mobile terminal
CN103856617A (en) * 2012-12-03 2014-06-11 联想(北京)有限公司 Photographing method and user terminal
CN103475826A (en) * 2013-09-27 2013-12-25 深圳市中视典数字科技有限公司 Video matting and synthesis method

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110365907A (en) * 2019-07-26 2019-10-22 维沃移动通信有限公司 A kind of photographic method, device and electronic equipment
CN110365907B (en) * 2019-07-26 2021-09-21 维沃移动通信有限公司 Photographing method and device and electronic equipment

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant