CN104954670B - Photographing method and device
- Publication number
- CN104954670B (application CN201510220902.0A)
- Authority
- CN
- China
- Prior art keywords
- information
- image
- field
- rendering
- view image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Abstract
The invention discloses a photographing method: a 3D image to be composited is displayed in a preset area; the device's current position and orientation information is acquired in real time; a field-of-view image corresponding to the 3D image is generated from the acquired position and orientation information and the current shooting-position information; and the field-of-view image is composited with a preset image. The invention also discloses a photographing device. Because the compositing is based on a field-of-view image acquired in real time and a preset image, photos of many different scenes can be produced, making the shooting scene much richer.
Description
Technical field
The present invention relates to the field of photographing technology, and more particularly to a photographing method and device.
Background technology
Most current terminal devices, such as mobile phones, ship with photographing software. After taking a photo, the user can interact with a network server and add real-time watermark graphics, such as "weather", "mood" or "place" information. The shooting scene itself, however, cannot be changed: it is essentially determined by the shooting environment. For example, when a user shoots against some plain spot at home, such as a blank wall, the resulting photo is simply a person in front of a white wall. Clearly, such a shooting scene is very monotonous.
Summary of the invention
The main object of the present invention is to provide a photographing method and device, intended to solve the technical problem that the shooting scene is very monotonous.
To achieve the above object, the present invention provides a photographing method comprising the following steps:
displaying a 3D image to be composited in a preset area;
acquiring the device's current position and orientation information in real time;
generating a field-of-view image corresponding to the 3D image from the acquired position and orientation information and the current shooting-position information;
compositing the field-of-view image with a preset image.
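As an illustration only (no code appears in the patent itself), the field-of-view generation step can be sketched in Python, assuming the 3D image is stored as an equirectangular panorama array and the orientation is reduced to a single yaw angle — both of which are simplifying assumptions:

```python
import numpy as np

def generate_fov_image(pano, yaw_deg, fov_deg=90):
    # Treat the 3D image as an equirectangular panorama and return the
    # horizontal slice centred on the current yaw angle, wrapping at 360°.
    h, w, _ = pano.shape
    win = int(w * fov_deg / 360)                 # viewport width in pixels
    start = int(w * (yaw_deg % 360) / 360) - win // 2
    cols = [(start + i) % w for i in range(win)]
    return pano[:, cols, :]
```

A real implementation would also use pitch and roll from the orientation sensors and project through a camera model rather than slicing columns; this only shows how a changing orientation selects a different portion of the same 3D image.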
Preferably, the photographing method further comprises:
while the field-of-view image is displayed, if a shooting-position update instruction input by the user is received, updating the current shooting-position information in the 3D image.
Preferably, the step of compositing the field-of-view image with a preset image comprises:
acquiring the preset image;
determining contour information in the image;
extracting an object of a preset kind from the image according to the determined contour information;
compositing the field-of-view image with the image region corresponding to the extracted object.
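A minimal sketch of the extraction-and-compositing substeps, with a simple background-difference mask standing in for the contour detection the patent describes (the function names and the tolerance-based masking are illustrative assumptions, not from the source):

```python
import numpy as np

def subject_mask(img, bg_colour, tol=10):
    # Crude stand-in for contour detection: pixels that differ from a flat
    # background colour (e.g. a plain wall) beyond a tolerance are taken
    # to belong to the subject of a preset kind.
    diff = np.abs(img.astype(int) - np.asarray(bg_colour, dtype=int))
    return np.any(diff > tol, axis=-1)

def composite_subject(fov_img, photo, mask):
    # Paste the extracted subject pixels over the field-of-view image.
    out = fov_img.copy()
    out[mask] = photo[mask]
    return out
```

Production code would use genuine contour extraction or segmentation; the point is only that a per-pixel mask derived from the photo drives the compositing.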
Preferably, the photographing method further comprises:
when a parameter adjustment instruction is detected, determining the parameter corresponding to the parameter adjustment instruction;
adjusting the field-of-view image according to the parameter and generating the adjusted field-of-view image.
Preferably, after the step of compositing the field-of-view image with the preset image, the photographing method further comprises:
when an information addition instruction input by the user is detected, determining the information corresponding to the information addition instruction;
adding the determined information to the composite image, and displaying the composite image with the added information.
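The information-addition step can be sketched as stamping a pre-rendered information patch (for instance a "weather" label) onto the composite image; the bottom-right placement and the function name are assumptions for illustration:

```python
import numpy as np

def add_info(composite_img, info_patch):
    # Overlay a small rendered information patch (weather, mood, place)
    # in the bottom-right corner of the composite image.
    out = composite_img.copy()
    ph, pw = info_patch.shape[:2]
    out[-ph:, -pw:] = info_patch
    return out
```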
In addition, to achieve the above object, the present invention also provides a photographing device comprising:
a display module for displaying a 3D image to be composited in a preset area;
an acquisition module for acquiring the device's current position and orientation information in real time;
a generation module for generating the field-of-view image corresponding to the 3D image from the acquired position and orientation information and the current shooting-position information;
a compositing module for compositing the field-of-view image with a preset image.
Preferably, the photographing device further comprises:
an update module for updating the current shooting-position information in the 3D image when a shooting-position update instruction input by the user is received while the field-of-view image is displayed.
Preferably, the compositing module comprises:
an acquiring unit for acquiring the preset image;
a determination unit for determining contour information in the image;
an extraction unit for extracting an object of a preset kind from the image according to the determined contour information;
a compositing unit for compositing the field-of-view image with the image region corresponding to the extracted object.
Preferably, the photographing device further comprises:
a first determining module for determining, when a parameter adjustment instruction is detected, the parameter corresponding to the parameter adjustment instruction;
a processing module for adjusting the field-of-view image according to the parameter and generating the adjusted field-of-view image.
Preferably, the photographing device further comprises:
a second determining module for determining, when an information addition instruction input by the user is detected, the information corresponding to the information addition instruction;
an adding module for adding the determined information to the composite image and displaying the composite image with the added information.
In the photographing method and device proposed by the present invention, a 3D image to be composited is displayed in a preset area, the device's current position and orientation information is acquired in real time, a field-of-view image corresponding to the 3D image is generated from the acquired position and orientation information and the current shooting-position information, and the field-of-view image is composited with a preset image. As the position and orientation information within the 3D image changes, the acquired field-of-view image changes with it, so photos of many different scenes can be taken even from the same place, making the shooting scene much richer.
Description of the drawings
Fig. 1 is a hardware architecture diagram of a mobile terminal implementing each embodiment of the present invention;
Fig. 2 is a schematic diagram of the wireless communication device of the mobile terminal shown in Fig. 1;
Fig. 3 is a flow diagram of a first embodiment of the photographing method of the present invention;
Fig. 4 is a detailed flow diagram of step S40 in Fig. 3;
Fig. 5 is a flow diagram of a second embodiment of the photographing method of the present invention;
Fig. 6 is a flow diagram of a third embodiment of the photographing method of the present invention;
Fig. 7 is a functional block diagram of a first embodiment of the photographing device of the present invention;
Fig. 8 is a detailed functional block diagram of the compositing module 40 in Fig. 7;
Fig. 9 is a functional block diagram of a second embodiment of the photographing device of the present invention;
Fig. 10 is a functional block diagram of a third embodiment of the photographing device of the present invention.
The realization of the objects, functional features and advantages of the present invention will be further described with reference to the accompanying drawings in conjunction with the embodiments.
Detailed description of the embodiments
It should be appreciated that the specific embodiments described herein are merely illustrative of the present invention and are not intended to limit it.
A mobile terminal implementing each embodiment of the present invention will now be described with reference to the drawings. In the following description, suffixes such as "module", "component" or "unit" used to denote elements are adopted merely to facilitate the explanation of the present invention and have no specific meaning in themselves; "module" and "component" may therefore be used interchangeably.
Mobile terminals may be implemented in various forms. For example, the terminals described in the present invention may include mobile terminals such as mobile phones, smart phones, laptops, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable media players) and navigation devices, as well as fixed terminals such as digital TVs and desktop computers. In the following it is assumed that the terminal is a mobile terminal. However, those skilled in the art will understand that, apart from elements used specifically for mobile purposes, the construction according to the embodiments of the present invention can also be applied to fixed-type terminals.
Fig. 1 is a schematic diagram of the hardware configuration of a mobile terminal implementing each embodiment of the present invention.
The mobile terminal 100 may include a wireless communication unit 110, an A/V (audio/video) input unit 120, a user input unit 130, a sensing unit 140, an output unit 150, a memory 160, an interface unit 170, a controller 180, a power supply unit 190, and so on. Fig. 1 shows a mobile terminal with various components, but it should be understood that implementing all of the illustrated components is not a requirement; more or fewer components may alternatively be implemented. The elements of the mobile terminal are described in detail below.
The wireless communication unit 110 typically includes one or more components allowing radio communication between the mobile terminal 100 and a wireless communication device or network. For example, the wireless communication unit may include at least one of a broadcast receiving module 111, a mobile communication module 112, a wireless Internet module 113, a short-range communication module 114 and a location information module 115.
The broadcast receiving module 111 receives broadcast signals and/or broadcast-related information from an external broadcast management server via a broadcast channel. The broadcast channel may include a satellite channel and/or a terrestrial channel. The broadcast management server may be a server that generates and sends broadcast signals and/or broadcast-related information, or a server that receives previously generated broadcast signals and/or broadcast-related information and sends them to a terminal. The broadcast signals may include TV broadcast signals, radio broadcast signals, data broadcast signals and the like, and may further include broadcast signals combined with TV or radio broadcast signals. The broadcast-related information may also be provided via a mobile communication network, in which case it may be received by the mobile communication module 112. Broadcast signals may exist in various forms, for example as an electronic program guide (EPG) of digital multimedia broadcasting (DMB) or an electronic service guide (ESG) of digital video broadcasting-handheld (DVB-H). The broadcast receiving module 111 can receive signals broadcast by various types of broadcast systems. In particular, it can receive digital broadcasts by using digital broadcast systems such as digital multimedia broadcasting-terrestrial (DMB-T), digital multimedia broadcasting-satellite (DMB-S), digital video broadcasting-handheld (DVB-H), the MediaFLO® forward-link-media data broadcast system, and integrated services digital broadcasting-terrestrial (ISDB-T). The broadcast receiving module 111 may be constructed to be suitable for the various broadcast systems providing broadcast signals as well as the above-mentioned digital broadcast systems. Broadcast signals and/or broadcast-related information received via the broadcast receiving module 111 may be stored in the memory 160 (or another type of storage medium).
The mobile communication module 112 sends radio signals to and/or receives radio signals from at least one of a base station (for example, an access point, a Node B, etc.), an external terminal and a server. Such radio signals may include voice call signals, video call signals, or various types of data sent and/or received according to text and/or multimedia messages.
The wireless Internet module 113 supports wireless Internet access of the mobile terminal and can be internally or externally coupled to the terminal. The wireless Internet access technologies involved may include WLAN (wireless LAN, Wi-Fi), Wibro (wireless broadband), Wimax (worldwide interoperability for microwave access), HSDPA (high-speed downlink packet access) and so on.
The short-range communication module 114 is a module for supporting short-range communication. Some examples of short-range communication technology include Bluetooth™, radio frequency identification (RFID), Infrared Data Association (IrDA), ultra-wideband (UWB), ZigBee™ and so on.
The location information module 115 is a module for checking or obtaining the location information of the mobile terminal. A typical example of the location information module is a GPS (global positioning system) module. According to current technology, the GPS module 115 calculates distance information from three or more satellites together with accurate time information and applies triangulation to the calculated information, so as to accurately calculate three-dimensional current location information in terms of longitude, latitude and altitude. Currently, the method for calculating position and time information uses three satellites and corrects the error of the calculated position and time information by using one further satellite. In addition, the GPS module 115 can calculate speed information by continuously calculating the current location in real time.
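The speed-from-successive-fixes idea can be sketched with the haversine great-circle distance between two location fixes; this is a generic illustration of the principle, not the module's actual algorithm:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    # Great-circle distance in metres between two lat/lon points,
    # using a mean Earth radius of 6371 km.
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def speed_mps(fix_a, fix_b):
    # Each fix is a (lat_deg, lon_deg, time_s) tuple; speed follows from
    # the distance between two successive fixes divided by the interval.
    d = haversine_m(fix_a[0], fix_a[1], fix_b[0], fix_b[1])
    return d / (fix_b[2] - fix_a[2])
```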
The A/V input unit 120 is for receiving audio or video signals and may include a camera 121 and a microphone 122. The camera 121 processes image data of still pictures or video obtained by an image capture apparatus in a video capture mode or an image capture mode, and the processed image frames may be displayed on the display unit 151. The image frames processed by the camera 121 may be stored in the memory 160 (or another storage medium) or sent via the wireless communication unit 110; two or more cameras 121 may be provided depending on the construction of the mobile terminal. The microphone 122 can receive sounds (audio data) in operating modes such as a telephone call mode, a recording mode and a speech recognition mode, and can process such sounds into audio data. In the telephone call mode the processed audio (voice) data can be converted into a format that can be sent to a mobile communication base station via the mobile communication module 112. The microphone 122 can implement various types of noise elimination (or suppression) algorithms to eliminate (or suppress) noise or interference generated while sending and receiving audio signals.
The user input unit 130 can generate key input data according to commands input by the user to control various operations of the mobile terminal. The user input unit 130 allows the user to input various types of information and may include a keyboard, a dome switch, a touch pad (for example, a touch-sensitive component detecting changes of resistance, pressure, capacitance and the like caused by contact), a jog wheel, a jog stick and so on. In particular, when the touch pad is superimposed on the display unit 151 in the form of a layer, a touch screen can be formed.
The sensing unit 140 detects the current state of the mobile terminal 100 (for example, the open or closed state of the mobile terminal 100), the position of the mobile terminal 100, the presence or absence of user contact with the mobile terminal 100 (that is, touch input), the orientation of the mobile terminal 100, the acceleration or deceleration movement and direction of the mobile terminal 100, and so on, and generates commands or signals for controlling the operation of the mobile terminal 100. For example, when the mobile terminal 100 is implemented as a slide-type mobile phone, the sensing unit 140 can sense whether the slide-type phone is open or closed. In addition, the sensing unit 140 can detect whether the power supply unit 190 supplies power or whether the interface unit 170 is coupled with an external device. The sensing unit 140 may include a proximity sensor 1410, which will be described below in connection with the touch screen.
The interface unit 170 serves as an interface through which at least one external device can connect with the mobile terminal 100. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port and so on. The identification module may store various information for authenticating the user of the mobile terminal 100 and may include a user identity module (UIM), a subscriber identity module (SIM), a universal subscriber identity module (USIM) and the like. In addition, a device having an identification module (hereinafter referred to as an "identification device") may take the form of a smart card; accordingly, the identification device can be connected with the mobile terminal 100 via a port or other connecting means. The interface unit 170 can be used to receive input (for example, data, information, power, etc.) from an external device and transfer the received input to one or more elements within the mobile terminal 100, or can be used to transfer data between the mobile terminal and an external device.
In addition, when the mobile terminal 100 is connected with an external cradle, the interface unit 170 may serve as a path through which power is supplied from the cradle to the mobile terminal 100, or as a path through which various command signals input from the cradle are transferred to the mobile terminal. Various command signals or power input from the cradle may serve as signals for recognizing whether the mobile terminal is accurately mounted on the cradle. The output unit 150 is configured to provide output signals in a visual, audio and/or tactile manner (for example, audio signals, video signals, alarm signals, vibration signals, etc.). The output unit 150 may include a display unit 151, an audio output module 152, an alarm unit 153 and so on.
The display unit 151 may display information processed in the mobile terminal 100. For example, when the mobile terminal 100 is in a telephone call mode, the display unit 151 can display a user interface (UI) or graphical user interface (GUI) related to the call or other communication (for example, text messaging, multimedia file downloading, etc.). When the mobile terminal 100 is in a video call mode or an image capture mode, the display unit 151 can display captured and/or received images, a UI or GUI showing video or images and related functions, and so on.
Meanwhile when display unit 151 and touch tablet in the form of layer it is superposed on one another to form touch screen when, display unit
151 may be used as input unit and output device.Display unit 151 may include liquid crystal display (LCD), thin film transistor (TFT)
In LCD (TFT-LCD), Organic Light Emitting Diode (OLED) display, flexible display, three-dimensional (3D) display etc. at least
It is a kind of.Some in these displays may be constructed such that transparence to allow user to be watched from outside, this is properly termed as transparent
Display, typical transparent display can be, for example, TOLED (transparent organic light emitting diode) display etc..According to specific
Desired embodiment, mobile terminal 100 may include two or more display units (or other display devices), for example, moving
Dynamic terminal may include outernal display unit (not shown) and inner display unit (not shown).Touch screen, which can be used for detecting, to be touched
Input pressure and touch input position and touch input area.
The audio output module 152 can, when the mobile terminal is in modes such as a call signal reception mode, a call mode, a recording mode, a speech recognition mode or a broadcast reception mode, convert audio data received by the wireless communication unit 110 or stored in the memory 160 into audio signals and output them as sound. Moreover, the audio output module 152 can provide audio output related to a specific function executed by the mobile terminal 100 (for example, a call signal reception sound, a message reception sound, etc.). The audio output module 152 may include a speaker, a buzzer and so on.
The alarm unit 153 can provide output to notify the mobile terminal 100 of the occurrence of an event. Typical events may include call reception, message reception, key signal input, touch input and so on. In addition to audio or video output, the alarm unit 153 can provide output in a different manner to notify the occurrence of an event. For example, the alarm unit 153 can provide output in the form of vibration; when a call, a message or some other incoming communication is received, the alarm unit 153 can provide a tactile output (that is, vibration) to notify the user. By providing such tactile output, the user can recognize the occurrence of various events even when the user's mobile phone is in the user's pocket. The alarm unit 153 can also provide output notifying the occurrence of an event via the display unit 151 or the audio output module 152.
The memory 160 can store software programs for the processing and control operations executed by the controller 180, or temporarily store data that has been or will be output (for example, a phone book, messages, still images, video, etc.). Moreover, the memory 160 can store data about the various kinds of vibration and audio signals output when a touch is applied to the touch screen.
The memory 160 may include at least one type of storage medium, including flash memory, hard disk, multimedia card, card-type memory (for example, SD or DX memory), random access memory (RAM), static random access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, magnetic disk, optical disc and so on. Moreover, the mobile terminal 100 can cooperate with a network storage device that performs the storage function of the memory 160 over a network connection.
The controller 180 typically controls the overall operation of the mobile terminal. For example, the controller 180 executes control and processing related to voice calls, data communication, video calls and the like. In addition, the controller 180 may include a multimedia module 181 for reproducing (or playing back) multimedia data; the multimedia module 181 may be constructed within the controller 180 or may be constructed separately from the controller 180. The controller 180 can execute pattern recognition processing to recognize handwriting input or picture drawing input executed on the touch screen as characters or images.
The power supply unit 190 receives external power or internal power under the control of the controller 180 and provides the appropriate power required to operate each element and component.
The various embodiments described herein can be implemented in a computer-readable medium using, for example, computer software, hardware, or any combination thereof. For a hardware implementation, the embodiments described herein can be implemented by using at least one of an application-specific integrated circuit (ASIC), a digital signal processor (DSP), a digital signal processing device (DSPD), a programmable logic device (PLD), a field-programmable gate array (FPGA), a processor, a controller, a microcontroller, a microprocessor, or an electronic unit designed to execute the functions described herein; in some cases, such an embodiment can be implemented in the controller 180. For a software implementation, an embodiment such as a process or function can be implemented with a separate software module allowing at least one function or operation to be executed. Software code can be implemented by a software application (or program) written in any appropriate programming language, and can be stored in the memory 160 and executed by the controller 180.
So far, mobile terminals have been described in terms of their functions. In the following, for the sake of brevity, a slide-type mobile terminal among the various types of mobile terminals, such as folder-type, bar-type, swing-type and slide-type mobile terminals, will be taken as an example. The present invention can, however, be applied to any type of mobile terminal and is not limited to slide-type mobile terminals.
Referring to Fig. 2, Fig. 2 is an electrical structure block diagram of the camera in Fig. 1.
The photographic lens 1211 is composed of a plurality of optical lenses for forming a subject image, and is a single-focus lens or a zoom lens. The photographic lens 1211 can move in the direction of the optical axis under the control of a lens driver 1221. The lens driver 1221 controls the focal position of the photographic lens 1211 according to a control signal from a lens driving control circuit 1222 and, in the case of a zoom lens, can also control the focal length. The lens driving control circuit 1222 performs drive control of the lens driver 1221 according to control commands from a microcomputer 1217.
An imaging element 1212 is arranged on the optical axis of the photographic lens 1211, near the position where the subject image is formed by the photographic lens 1211. The imaging element 1212 serves to capture the subject image and acquire image data. Photodiodes constituting the individual pixels are arranged two-dimensionally in a matrix on the imaging element 1212. Each photodiode generates a photoelectric conversion current corresponding to the amount of received light, and this current is charge-accumulated by a capacitor connected to each photodiode. RGB colour filters in a Bayer arrangement are arranged on the front surface of each pixel.
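Because of the Bayer colour-filter arrangement, each photodiode records only one colour channel. A sketch of sampling an RGB image through an RGGB Bayer pattern follows (the RGGB ordering is an assumption for illustration, as the patent does not specify the exact layout):

```python
import numpy as np

def bayer_mosaic(rgb):
    # Simulate reading an RGB image through an RGGB Bayer colour-filter
    # array: each pixel position keeps only the channel its filter passes.
    h, w, _ = rgb.shape
    raw = np.zeros((h, w), dtype=rgb.dtype)
    raw[0::2, 0::2] = rgb[0::2, 0::2, 0]  # red sites
    raw[0::2, 1::2] = rgb[0::2, 1::2, 1]  # green sites (even rows)
    raw[1::2, 0::2] = rgb[1::2, 0::2, 1]  # green sites (odd rows)
    raw[1::2, 1::2] = rgb[1::2, 1::2, 2]  # blue sites
    return raw
```

Recovering full RGB from such raw data is the demosaicing step performed later in the image processor.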
The imaging element 1212 is connected with an imaging circuit 1213. The imaging circuit 1213 performs charge accumulation control and image signal readout control in the imaging element 1212, reduces the reset noise of the read image signal (an analog image signal), performs waveform shaping, and then raises the gain and so on to obtain an appropriate signal level.
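The reset-noise reduction and gain-raising steps resemble correlated double sampling, in which a per-pixel reset level is subtracted from the read-out signal before amplification. A simplified numeric sketch, with an arbitrary gain value:

```python
import numpy as np

def correlated_double_sample(reset_frame, signal_frame, gain=4.0):
    # Subtract the per-pixel reset level to suppress reset noise,
    # then raise the gain to an appropriate signal level.
    return gain * (signal_frame.astype(float) - reset_frame.astype(float))
```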
The imaging circuit 1213 is connected with an A/D converter 1214, which performs analog-to-digital conversion of the analog image signal and outputs a digital image signal (hereinafter referred to as image data) to a bus 1227.
The bus 1227 is a transmission path for transmitting the various data read out or generated inside the camera. Connected to the bus 1227 are, besides the above-mentioned A/D converter 1214, an image processor 1215, a JPEG processor 1216, the microcomputer 1217, an SDRAM (synchronous dynamic random access memory) 1218, a memory interface (hereinafter referred to as memory I/F) 1219, and an LCD (liquid crystal display) driver 1220.
The image processor 1215 performs various kinds of image processing on the image data output from the imaging element 1212, such as OB subtraction, white balance adjustment, colour matrix operation, gamma conversion, colour difference signal processing, noise removal processing, demosaicing (simultaneization) processing and edge processing. When image data is recorded on a recording medium 1225, the JPEG processor 1216 compresses the image data read from the SDRAM 1218 according to the JPEG compression scheme. In addition, the JPEG processor 1216 decompresses JPEG image data for image reproduction and display. For decompression, a file recorded on the recording medium 1225 is read out, decompressed in the JPEG processor 1216, and the decompressed image data is temporarily stored in the SDRAM 1218 and displayed on the LCD 1226. In the present embodiment the JPEG scheme is adopted as the image compression/decompression scheme, but the compression/decompression scheme is not limited to this; other compression/decompression schemes such as MPEG, TIFF and H.264 may of course be adopted.
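The record/reproduce flow described above (compress on recording, decompress before display) can be sketched with Python's zlib standing in for the JPEG codec, which is reasonable here since the paragraph states the compression scheme is interchangeable:

```python
import zlib

def record_image(data: bytes) -> bytes:
    # Compress image data before writing it to the recording medium.
    # zlib is a stand-in for the JPEG codec the camera actually uses.
    return zlib.compress(data, level=6)

def reproduce_image(blob: bytes) -> bytes:
    # Decompress recorded data before handing it to the display driver.
    return zlib.decompress(blob)
```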
The microcomputer 1217 functions as the control unit of the camera as a whole and centrally controls the various processing sequences of the camera. The microcomputer 1217 is connected to an operating unit 1223 and a flash memory 1224.
The operating unit 1223 includes, but is not limited to, physical or virtual keys; these may be operational controls such as a power button, a camera button, an edit key, a dynamic image button, a reproduction button, a menu button, a cross key, an OK button, a delete button, an enlarge button and various other input buttons and keys. The operating unit 1223 detects the operating state of these controls and outputs the detection result to the microcomputer 1217. In addition, a touch panel is provided on the front surface of the LCD 1226 serving as the display; it detects the user's touch position and outputs this touch position to the microcomputer 1217. The microcomputer 1217 executes various processing sequences corresponding to the user's operation according to the detection result of the operating position from the operating unit 1223.
The flash memory 1224 stores programs for executing the various processing sequences of the microcomputer 1217. The microcomputer 1217 controls the camera as a whole according to these programs. In addition, the flash memory 1224 stores various adjustment values of the camera; the microcomputer 1217 reads these adjustment values and controls the camera according to them.
The SDRAM 1218 is an electrically rewritable volatile memory for temporarily storing image data and the like. The SDRAM 1218 temporarily stores the image data output from the A/D converter 1214 and the image data processed in the image processor 1215, the JPEG processor 1216 and so on.
The memory interface 1219 is connected with the recording medium 1225 and controls the writing of image data, and of the file headers attached to the image data, to the recording medium 1225 and their reading from the recording medium 1225. The recording medium 1225 is, for example, a recording medium such as a memory card that can be freely attached to and detached from the camera body, but is not limited to this and may also be a hard disk or the like built into the camera body.
The LCD driver 1220 is connected with the LCD 1226. Image data processed by the image processor 1215 is stored in the SDRAM 1218; when display is needed, the image data stored in the SDRAM 1218 is read and displayed on the LCD 1226. Alternatively, image data compressed by the JPEG processor 1216 is stored in the SDRAM 1218; when display is needed, the JPEG processor 1216 reads the compressed image data from the SDRAM 1218 and decompresses it, and the decompressed image data is displayed via the LCD 1226.
The LCD1226 is arranged on the back side of the camera main-body and performs image display. The LCD1226 is, for example, a liquid crystal display panel; however it is not limited thereto, and various other display panels such as organic EL panels may also be used.
Based on the above mobile terminal hardware configuration and the electrical structure of the camera, the embodiments of the photographic method of the present invention are proposed.
With reference to Fig. 3, Fig. 3 is a schematic flow diagram of the first embodiment of the photographic method of the present invention.
The present embodiment proposes a photographic method, and the photographic method includes:
Step S10: displaying the 3D rendering to be synthesized in a predeterminable area;
In the present embodiment, a 3D rendering prestored by the terminal may be used as the 3D rendering to be synthesized and displayed in the predeterminable area. Further, to enrich the photographing background, the step S10 is preferably preceded by a step of selecting the 3D rendering, and the step of selecting the 3D rendering includes:
1) The terminal displays a preset 3D rendering application or a default photographing application; when detecting that the user touches the icon of the application, 3D renderings of preset kinds are displayed, and when detecting that the user touches one of the 3D renderings, the touched 3D rendering is used as the 3D rendering to be synthesized and is displayed in the predeterminable area.
2) When voice information input by the user is received (for example, the user says by voice: select a 3D rendering), the 3D renderings of preset kinds are displayed in the predeterminable area of the terminal, each 3D rendering corresponding to a number; when a voice selection instruction of the user is received (for example, the voice announcement: No. 5), the 3D rendering selected by voice is used as the 3D rendering to be synthesized and is displayed in the predeterminable area.
The two selection modes of the 3D rendering enumerated above are merely exemplary; the various other selection modes of the 3D rendering that those skilled in the art may propose according to their specific requirements by using the technical idea of the present invention all fall within the protection scope of the present invention and are not exhaustively listed here one by one.
It can be understood that the terminal first establishes a communication connection with the server corresponding to the 3D rendering, and when a 3D rendering selection instruction is detected, the 3D rendering corresponding to the selection instruction is obtained and displayed in the predeterminable area.
Step S20: obtaining its current position directional information in real time;
In the present embodiment, the terminal preferably obtains its current position directional information in real time through a sensor preset in the terminal, such as a gyroscope, which measures the terminal in real time to obtain the current position directional information of the terminal. The position directional information is a three-dimensional direction vector.
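The gyroscope reading can be reduced to the three-dimensional direction vector described above. A minimal sketch in Python, assuming the sensor reports azimuth and pitch angles in degrees (the angle names and conventions here are illustrative; the patent only requires that the result be a three-dimensional direction vector):

```python
import math

def pointing_vector(azimuth_deg, pitch_deg):
    # Convert hypothetical gyroscope-derived azimuth/pitch angles into
    # the three-dimensional unit direction vector that serves as the
    # position directional information.
    az = math.radians(azimuth_deg)
    el = math.radians(pitch_deg)
    return (math.cos(el) * math.sin(az),
            math.cos(el) * math.cos(az),
            math.sin(el))
```

Because only angle changes alter this vector, a pure translation of the terminal leaves the position directional information unchanged, matching the behavior described in the second implementation below.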
Step S30: generating the field-of-view image corresponding to the 3D rendering according to the obtained position directional information and the current picture-taking position information;
In the present embodiment, the ways of implementing step S30 include:
1) First implementation: the terminal generates the field-of-view image corresponding to the 3D rendering according to the obtained position directional information and the current picture-taking position information in the 3D rendering displayed in the predeterminable area. In this implementation, the 3D rendering is preferably downloaded first, the downloaded 3D rendering is displayed in the predeterminable area, the obtained position directional information is applied to the 3D rendering, and the field-of-view image corresponding to the 3D rendering is generated according to the position directional information and the current picture-taking position information. It can be understood that only when the position directional information changes does the terminal regenerate the field-of-view image corresponding to the 3D rendering according to the new position directional information and the current picture-taking position information; that is, when the position directional information changes, the field-of-view image generated by the terminal changes accordingly.
2) Second implementation: when the obtained position directional information changes, the terminal sends the obtained position directional information to a server, so that the server feeds back the field-of-view image corresponding to the 3D rendering based on the position directional information and the current picture-taking position information of the terminal in the 3D rendering. In this implementation, preferably, when it is detected that the position directional information of the terminal has changed (i.e., the direction the terminal camera points to has changed), the obtained position directional information is sent to the server; the server is preferably a 3D scene server. It can be understood that when the user moves the terminal horizontally, vertically, or back and forth along the original camera direction, the position directional information of the terminal does not change; that is, only when the angle information of the camera of the terminal changes relative to the original angle information does the position directional information change. It can be understood that the server generates the real-time field-of-view image corresponding to the real-time position directional information based on the position directional information obtained in real time and the default picture-taking position; that is, each time a piece of position directional information is received, the server generates one field-of-view image, and when the received position directional information differs, the generated field-of-view image also differs. In this way, as the position directional information in the 3D rendering changes in real time, the field-of-view image of the 3D rendering changes accordingly.
In the present embodiment, the current picture-taking position information may be preset as default position information in the 3D rendering; that is, each time a field-of-view image framing instruction is detected, the default position information can be directly used as the framing position, and the field-of-view image in the 3D rendering is generated in combination with the position directional information obtained by the terminal.
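Given a picture-taking position and the position directional information, a renderer can frame the field-of-view image with a standard look-at view matrix. A hedged numpy sketch (the matrix convention and the `up` default are assumptions for illustration, not part of the patent):

```python
import numpy as np

def look_at(eye, direction, up=(0.0, 0.0, 1.0)):
    # Build a 4x4 view matrix for the current picture-taking position
    # (eye) and the acquired position directional information (direction).
    f = np.asarray(direction, float)
    f = f / np.linalg.norm(f)                 # forward axis
    s = np.cross(f, np.asarray(up, float))
    s = s / np.linalg.norm(s)                 # right axis
    u = np.cross(s, f)                        # corrected up axis
    m = np.eye(4)
    m[0, :3], m[1, :3], m[2, :3] = s, u, -f
    m[:3, 3] = -m[:3, :3] @ np.asarray(eye, float)
    return m
```

Whenever either the picture-taking position or the direction vector changes, recomputing this matrix and re-rendering yields the new field-of-view image, which is the regeneration behavior described above.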
Further, to improve the intelligence of photographing, a preferred embodiment is as follows: during the display of the field-of-view image, if a picture-taking position information update instruction input by the user is received, the current picture-taking position information in the 3D rendering is updated. The ways of updating the current picture-taking position in the 3D rendering include:
1) First implementation: when a touch slide operation input by the user at the picture-taking position corresponding to the picture-taking position information is received, the terminal determines the target location corresponding to the touch slide operation and uses the target location as the current picture-taking position. That is, when the user wants to select a different picture-taking position, the picture-taking position of the terminal can be changed by inputting a touch slide operation, and the field-of-view image corresponding to the 3D rendering is generated according to the obtained position directional information and the current picture-taking position, so that the generated field-of-view image better meets the demand of the user.
2) Second implementation: when a touch click operation input by the user at the picture-taking position corresponding to the picture-taking position information is received, the picture-taking position preferably shakes; when a click operation input again by the user is detected, the terminal determines the target location corresponding to the repeated click operation and uses the target location as the current picture-taking position.
In the present embodiment, the picture-taking position corresponding to the current picture-taking position information may be a default point or an icon, for example a small five-pointed star, so that the user may choose to touch and slide the displayed default point or small five-pointed star to change the picture-taking position. Further, the terminal may also generate a virtual camera in the 3D rendering and place the virtual camera at the current picture-taking position, so that the user can intuitively check the position of the virtual camera, quickly find the current picture-taking position, and then perform the touch slide operation or touch click operation on the picture-taking position.
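The first update implementation can be reduced to replacing the current picture-taking position with the target location of the touch slide. A minimal sketch, assuming the slide has already been mapped to an offset in scene coordinates (a mapping the patent leaves unspecified):

```python
def update_picture_taking_position(current_pos, slide_offset):
    # The end point of the user's touch slide becomes the new
    # picture-taking position; names here are illustrative.
    return tuple(c + d for c, d in zip(current_pos, slide_offset))
```

The subsequent field-of-view image is then regenerated from this new position together with the unchanged position directional information.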
Step S40: synthesizing the field-of-view image and a preset image.
In the present embodiment, with reference to Fig. 4, the step S40 includes:
Step S41: obtaining the preset image;
In the present embodiment, the preset image may be an image prestored by the terminal, or an image obtained before the camera lens of the terminal at the current photographing moment.
Step S42: determining the profile information in the image;
Step S43: extracting an object of a preset kind from the image according to the determined profile information;
Step S44: synthesizing the field-of-view image and the image corresponding to the extracted object.
In the present embodiment, to better understand the scheme, an example is given below. The object to be photographed strikes a photographing pose before a plain white wall; the terminal first obtains the real image of the white wall and the person, then filters the obtained real image to obtain the real-time portrait therein. The way of filtering the real image is preferably to perform edge detection on the person in the real image. When the features of the portrait differ greatly from the environment, the overall profile of the person can be obtained by one pass of edge detection. Alternatively, when the features of the portrait differ little from the environment, for example when the clothes of the portrait are similar in color to the environment, the facial contour can be obtained first, since the face of the person differs from the pixel values of the environment; the region connected with the facial contour is then used as the region to be detected, the edge of the region to be detected is detected by edge detection to obtain the body contour of the person, the exterior contour of the real-time portrait is determined according to the obtained body contour and facial contour, and the image corresponding to the real-time portrait is extracted based on the determined exterior contour of the real-time portrait. Finally, the terminal synthesizes the received field-of-view image and the image corresponding to the real-time portrait; the manner of synthesis is the prior art and is not described here again.
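Once the exterior contour of the real-time portrait has been turned into a pixel mask, the synthesis of steps S42-S44 amounts to a masked copy. A minimal numpy sketch (deriving `mask` from the edge detection described above is assumed to have been done already):

```python
import numpy as np

def composite(view_img, real_img, mask):
    # Wherever `mask` marks the extracted person (True), keep the real
    # pixels; elsewhere keep the 3D scene's field-of-view image.
    out = view_img.copy()
    out[mask] = real_img[mask]
    return out
```

This is the simplest form of the prior-art synthesis the text refers to; real implementations typically also feather the mask edge to blend the portrait into the scene.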
According to the photographic method proposed in the present embodiment, the 3D rendering to be synthesized is displayed in the predeterminable area, the current position directional information is obtained in real time, the field-of-view image corresponding to the 3D rendering is generated according to the obtained position directional information and the current picture-taking position information, and the field-of-view image and the preset image are synthesized. As the position directional information in the 3D rendering differs, the obtained field-of-view image also differs accordingly, so that photos of various different scenes can be taken even in the same place, making the photographing scene richer.
Further, to improve the flexibility of photographing, with reference to Fig. 5, a second embodiment of the photographic method of the present invention is proposed based on the first embodiment. In the present embodiment, between step S30 and step S40, the photographic method includes:
Step S50: when a parameter regulation instruction is detected, determining the parameter corresponding to the parameter regulation instruction;
In the present embodiment, the parameters include parameters such as the focal length, the depth of field, scaling, white balance, or the scene time. When a parameter regulation instruction is detected, the terminal determines the adjustment parameters such as the focal length, the depth of field, scaling, white balance, or the scene time according to a preset sensor.
Step S60: adjusting the field-of-view image according to the parameter and generating the adjusted field-of-view image.
In the present embodiment, a step of turning on a parameter automatic adjustment mode is included before step S10. The ways of turning on the parameter automatic adjustment mode include: the terminal displays a parameter setting interface in a preset photographing application, and the parameter setting interface displays each parameter and a box corresponding to each parameter; preferably, when the box corresponding to a parameter is checked, the checked parameter enters the automatic adjustment mode. Alternatively, the parameter setting interface displays a mark of parameter automatic adjustment, the mark preferably corresponding to a rectangular frame; likewise, preferably, when the rectangular frame is checked, the parameter automatic adjustment mode is started, i.e., all parameters in the terminal enter the automatic adjustment mode. That is, after the user turns on the parameter automatic adjustment mode, during photographing the terminal can sense the current photographing situation through a predetermined sensor and determine the parameter to be regulated according to the actual situation. Examples of determining the parameter to be regulated are given below:
1) Adjusting the focal length of the field-of-view image: for example, when the user moves the terminal close to the object to be photographed, the terminal can determine the focal length of the image according to the distance between the current location and the object to be photographed, and adjust the field-of-view image according to the determined focal length to generate a new field-of-view image.
2) Adjusting the scene time of the field-of-view image: for example, a scene whose preset time is originally daytime can be adjusted to night through a preset parameter regulation such as time adjustment. The way of adjusting the time is preferably to display a scene time setting icon in a predeterminable area of the terminal photographing interface (such as the upper left corner area of the shooting interface); when the user touches the icon, a preset menu window pops up, the menu window preferably including the scene time. The scene time can be divided into preset periods, for example dividing 24 hours into 4 periods; when it is detected that the user selects any one period, the time corresponding to the selected period is obtained, and the time of the field-of-view image is adjusted according to the obtained time. Alternatively, the scene time is divided into time scenes such as daytime, night, and dusk; when it is detected that the user selects any one time scene, the time of the field-of-view image is adjusted according to the time scene selected by the user. It can be understood that the scene time can also be preset as a default time, and each time a photo is taken, the default time is used as the time of the photographing scene.
The two ways of determining the parameter to be regulated enumerated above are merely exemplary; the various other ways of determining the parameter to be regulated that those skilled in the art may propose according to their specific requirements by using the technical idea of the present invention all fall within the protection scope of the present invention and are not exhaustively listed here one by one.
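The two example determinations above can be sketched as follows. The thin-lens constant and the period boundaries are illustrative assumptions, not values fixed by the patent:

```python
def focal_length_m(object_distance_m, image_distance_m=0.05):
    # Thin-lens relation 1/f = 1/d_o + 1/d_i: one plausible way to
    # derive a focal length from the measured distance to the object.
    # image_distance_m (sensor-side distance) is an assumed constant.
    d_o, d_i = object_distance_m, image_distance_m
    return (d_o * d_i) / (d_o + d_i)

def scene_for_hour(hour):
    # Map a 24-hour clock reading to one of the example time scenes
    # (daytime, dusk, night); the boundaries are illustrative.
    if 6 <= hour < 17:
        return "daytime"
    if 17 <= hour < 20:
        return "dusk"
    return "night"
```

The resulting focal length, or the selected time scene, is then applied to the field-of-view image in step S60 to produce the adjusted field-of-view image.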
Further, to improve the flexibility of photographing, with reference to Fig. 6, a third embodiment of the photographic method of the present invention is proposed based on the first embodiment. In the present embodiment, after step S40, the photographic method includes:
Step S70: when an information addition instruction input by the user is detected, determining the information corresponding to the information addition instruction;
Step S80: adding the determined information to the composite image, so as to display the composite image with the added information.
In the present embodiment, the information preferably includes words or patterns. The words include the weather condition, the shooting place, the mood of the user, and the like, and the pattern may be a preset image such as a heart; that is, the user can add word or pattern information to the synthesized image, so that the photo taken is richer and more interesting.
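Step S80 can be reduced to copying a rendered word or pattern bitmap into the composite image. A minimal numpy sketch (rendering the bitmap itself, e.g. the weather text or the heart icon, is outside this sketch, and all names here are illustrative):

```python
import numpy as np

def add_info(photo, info_bitmap, top, left):
    # Copy the word/pattern bitmap into the composite image at the
    # chosen position, leaving the original photo array untouched.
    h, w = info_bitmap.shape[:2]
    out = photo.copy()
    out[top:top + h, left:left + w] = info_bitmap
    return out
```

A production implementation would typically alpha-blend the bitmap rather than overwrite pixels, but the placement logic is the same.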
The present invention further provides a camera arrangement.
With reference to Fig. 7, Fig. 7 is a high-level schematic functional block diagram of the first embodiment of the camera arrangement of the present invention.
It should be emphasized that, as will be apparent to those skilled in the art, the functional block diagram shown in Fig. 7 is only an exemplary diagram of a preferred embodiment, and those skilled in the art can easily supplement new function modules around the function modules of the camera arrangement shown in Fig. 7. The names of the function modules are self-defined names, used only to assist in understanding the program function blocks of the camera arrangement, and are not used to limit the technical solution of the present invention; the core of the technical solution of the present invention lies in the functions to be achieved by the function modules with the self-defined names.
The present embodiment proposes a camera arrangement, and the camera arrangement includes:
Display module 10, for displaying the 3D rendering to be synthesized in a predeterminable area;
In the present embodiment, a prestored 3D rendering may be used as the 3D rendering to be synthesized, and the display module 10 displays the 3D rendering to be synthesized in the predeterminable area. Further, to enrich the photographing background, the display module 10 preferably includes a selecting unit for selecting the 3D rendering, and the ways in which the selecting unit selects the 3D rendering include:
1) The display module 10 displays a preset 3D rendering application or a default photographing application; when detecting that the user touches the icon of the application, 3D renderings of preset kinds are displayed, and when detecting that the user touches one of the 3D renderings, the touched 3D rendering is used as the 3D rendering to be synthesized and is displayed in the predeterminable area.
2) When voice information input by the user is received (for example, the user says by voice: select a 3D rendering), the display module 10 displays the 3D renderings of preset kinds in the predeterminable area, each 3D rendering corresponding to a number; when a voice selection instruction of the user is received (for example, the voice announcement: No. 5), the 3D rendering selected by voice is used as the 3D rendering to be synthesized and is displayed in the predeterminable area.
The two selection modes of the 3D rendering enumerated above are merely exemplary; the various other selection modes of the 3D rendering that those skilled in the art may propose according to their specific requirements by using the technical idea of the present invention all fall within the protection scope of the present invention and are not exhaustively listed here one by one.
It can be understood that the camera arrangement first establishes a communication connection with the server corresponding to the 3D rendering, and when a 3D rendering selection instruction is detected, the 3D rendering corresponding to the selection instruction is obtained, and the display module 10 displays the obtained 3D rendering in the predeterminable area.
Acquisition module 20, for obtaining its current position directional information in real time;
In the present embodiment, the acquisition module 20 preferably obtains its current position directional information in real time through a preset sensor such as a gyroscope, which measures the camera arrangement in real time to obtain the current position directional information of the camera arrangement. The position directional information is a three-dimensional direction vector.
Generation module 30, for generating the field-of-view image corresponding to the 3D rendering according to the obtained position directional information and the current picture-taking position information;
In the present embodiment, the ways in which the generation module 30 generates the field-of-view image corresponding to the 3D rendering according to the obtained position directional information and the current picture-taking position information include:
1) First implementation: the generation module 30 generates the field-of-view image corresponding to the 3D rendering according to the position directional information obtained by the acquisition module 20 and the current picture-taking position information in the 3D rendering displayed in the predeterminable area. In this implementation, the 3D rendering is preferably downloaded first, the display module 10 displays the downloaded 3D rendering in the predeterminable area, the obtained position directional information is applied to the 3D rendering, and the generation module 30 generates the field-of-view image corresponding to the 3D rendering according to the position directional information and the current picture-taking position information. It can be understood that only when the position directional information changes does the generation module 30 regenerate the field-of-view image corresponding to the 3D rendering according to the new position directional information and the current picture-taking position information; that is, when the position directional information changes, the field-of-view image generated by the generation module 30 changes accordingly.
2) Second implementation: when the position directional information obtained by the acquisition module 20 changes, the acquisition module 20 sends the obtained position directional information to a server, so that the server feeds back the field-of-view image corresponding to the 3D rendering based on the position directional information and the current picture-taking position information in the 3D rendering. In this implementation, preferably, when it is detected that the position directional information of the camera arrangement has changed (i.e., the direction the camera of the camera arrangement points to has changed), the obtained position directional information is sent to the server; the server is preferably a 3D scene server. It can be understood that when the user moves the terminal horizontally, vertically, or back and forth along the original camera direction, the position directional information of the camera arrangement does not change; that is, only when the angle information of the camera of the camera arrangement changes relative to the original angle information does the position directional information change. It can be understood that the server generates the real-time field-of-view image corresponding to the real-time position directional information based on the position directional information obtained in real time and the default picture-taking position; that is, each time a piece of position directional information is received, the server generates one field-of-view image, and when the received position directional information differs, the generated field-of-view image also differs. In this way, as the position directional information in the 3D rendering changes in real time, the field-of-view image of the 3D rendering changes accordingly.
In the present embodiment, the current picture-taking position information may be preset as default position information in the 3D rendering; that is, each time a field-of-view image framing instruction is detected, the default position information can be directly used as the framing position, and the generation module 30 generates the field-of-view image in the 3D rendering in combination with the position directional information obtained by the acquisition module 20.
Further, to improve the intelligence of photographing, a preferred embodiment is that the camera arrangement includes an update module. The update module is used to update the current picture-taking position information in the 3D rendering if a picture-taking position information update instruction input by the user is received while the display module 10 displays the field-of-view image. The ways in which the update module updates the current picture-taking position in the 3D rendering include:
1) First implementation: when a touch slide operation input by the user at the picture-taking position corresponding to the picture-taking position information is received, the target location corresponding to the touch slide operation is determined, and the update module uses the target location as the current picture-taking position. That is, when the user wants to select a different picture-taking position, the picture-taking position of the terminal can be changed by inputting a touch slide operation, and the generation module 30 generates the field-of-view image corresponding to the 3D rendering according to the position directional information obtained by the acquisition module 20 and the current picture-taking position, so that the generated field-of-view image better meets the demand of the user.
2) Second implementation: when a touch click operation input by the user at the picture-taking position corresponding to the picture-taking position information is received, the picture-taking position preferably shakes; when a click operation input again by the user is detected, the update module determines the target location corresponding to the repeated click operation and uses the target location as the current picture-taking position.
In the present embodiment, the picture-taking position corresponding to the current picture-taking position information may be a default point or an icon, for example a small five-pointed star, so that the user may choose to touch and slide the displayed default point or small five-pointed star to change the picture-taking position. Further, the generation module 30 may also generate a virtual camera in the 3D rendering and place the virtual camera at the current picture-taking position, so that the user can intuitively check the position of the virtual camera, quickly find the current picture-taking position, and then perform the touch slide operation or touch click operation on the picture-taking position.
Synthesis module 40, for synthesizing the field-of-view image and a preset image.
In the present embodiment, with reference to Fig. 8, the synthesis module 40 includes:
Acquiring unit 41, for obtaining the preset image;
In the present embodiment, the preset image may be an image prestored by the camera arrangement, or an image obtained before the camera lens of the camera arrangement at the current photographing moment.
Determination unit 42, for determining the profile information in the image;
Extraction unit 43, for extracting an object of a preset kind from the image according to the determined profile information;
Synthesis unit 44, for synthesizing the field-of-view image and the image corresponding to the extracted object.
In the present embodiment, to better understand the scheme, an example is given below. The object to be photographed strikes a photographing pose before a plain white wall; the terminal first obtains the real image of the white wall and the person, then filters the obtained real image to obtain the real-time portrait therein. The way of filtering the real image is preferably to perform edge detection on the person in the real image. When the features of the portrait differ greatly from the environment, the overall profile of the person can be obtained by one pass of edge detection. Alternatively, when the features of the portrait differ little from the environment, for example when the clothes of the portrait are similar in color to the environment, since the face of the person differs from the pixel values of the environment, the acquiring unit 41 can first obtain the facial contour; the region connected with the facial contour is then used as the region to be detected, and the edge of the region to be detected is detected by edge detection to obtain the body contour of the person. The determination unit 42 determines the exterior contour of the real-time portrait according to the obtained body contour and facial contour, and the extraction unit 43 extracts the image corresponding to the real-time portrait based on the determined exterior contour of the real-time portrait. Finally, the synthesis unit 44 synthesizes the received field-of-view image and the image corresponding to the real-time portrait; the manner of synthesis is the prior art and is not described here again.
According to the camera arrangement proposed in the present embodiment, the 3D rendering to be synthesized is displayed in the predeterminable area, the current position directional information is obtained in real time, the field-of-view image corresponding to the 3D rendering is generated according to the obtained position directional information and the current picture-taking position information, and the field-of-view image and the preset image are synthesized. As the position directional information in the 3D rendering differs, the obtained field-of-view image also differs accordingly, so that photos of various different scenes can be taken even in the same place, making the photographing scene richer.
Further, to improve the flexibility of photographing, with reference to Fig. 9, a second embodiment of the camera arrangement of the present invention is proposed based on the first embodiment. In the present embodiment, the camera arrangement further includes:
First determining module 50, for determining the parameter corresponding to the parameter regulation instruction when a parameter regulation instruction is detected;
In the present embodiment, the parameters include parameters such as the focal length, the depth of field, scaling, white balance, or the scene time. When a parameter regulation instruction is detected, the first determining module 50 determines the adjustment parameters such as the focal length, the depth of field, scaling, white balance, or the scene time according to a preset sensor.
Processing module 60, for adjusting the field-of-view image according to the parameter and generating the adjusted field-of-view image.
In the present embodiment, the first determining module 50 includes an opening unit for turning on a parameter automatic adjustment mode. The ways of turning on the parameter automatic adjustment mode include: the display module 10 displays a parameter setting interface in a preset photographing application, and the parameter setting interface displays each parameter and a box corresponding to each parameter; preferably, when the box corresponding to a parameter is checked, the checked parameter enters the automatic adjustment mode. Alternatively, the parameter setting interface displays a mark of parameter automatic adjustment, the mark preferably corresponding to a rectangular frame; likewise, preferably, when the rectangular frame is checked, the parameter automatic adjustment mode is started, i.e., all parameters in the terminal enter the automatic adjustment mode. That is, after the user turns on the parameter automatic adjustment mode, during photographing the current photographing situation can be sensed through a predetermined sensor and the parameter to be regulated is determined according to the actual situation. Examples of determining the parameter to be regulated are given below:
1) Adjusting the focal length of the field-of-view image. For example, when the user moves the camera arrangement close to the object to be photographed, the first determining module 50 can determine the focal length of the image according to the distance between the current location and the object to be photographed, and the processing module 60 adjusts the field-of-view image according to the determined focal length to generate a new field-of-view image.
2) Adjusting the scene time of the field-of-view image. For example, if the preset time of the scene is daytime, the scene of the field-of-view image can be switched to night through a preset parameter adjustment such as a time adjustment. The time is preferably adjusted as follows: a scene-time setting icon is displayed in a preset area of the display module 10 (such as the upper-left corner of the shooting interface); when the user touches the icon, a preset menu window pops up, which preferably includes the scene time. The scene time may be divided into preset periods, for example the 24 hours of a day divided into 4 periods; when the user is detected selecting any one of the periods, the time corresponding to the selected period is obtained, and the processing module 60 adjusts the time of the field-of-view image according to the obtained time. Alternatively, the scene time is divided into scene types such as daytime, night and dusk; when the user is detected selecting any one of the scene types, the processing module 60 adjusts the time of the field-of-view image according to the scene type selected by the user. It is understood that the scene time can also be preset to a default time, which is then used as the scene time each time a photo is taken.
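The scene-time selection just described (24 hours split into 4 periods, each mapped to a time, with a preset default as fallback) can be sketched as follows. The representative hours and scene labels are assumptions for illustration; the patent does not specify them.

```python
# Illustrative sketch (assumed values): mapping a selected scene-time period
# to a representative hour and scene type, as in example 2) above.

PERIODS = {
    0: (3, "night"),     # 00:00-06:00 -> render as 03:00, night scene
    1: (9, "daytime"),   # 06:00-12:00
    2: (15, "daytime"),  # 12:00-18:00
    3: (20, "dusk"),     # 18:00-24:00
}

def scene_time_for(selection, default_hour=12):
    """Return (hour, scene_label) for a selected period, or fall back to
    the preset default time when the user makes no selection."""
    if selection is None:
        return default_hour, "daytime"
    return PERIODS[selection]

print(scene_time_for(3))     # user picked the 18:00-24:00 period
print(scene_time_for(None))  # no selection: use the preset default time
```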
What the method for determination of two enumerated kind parameter to be regulated listed above was merely exemplary, those skilled in the art's profit
With the technological thought of the present invention, the method for determination of the various other parameters to be regulated proposed according to its specific requirements is in this hair
In bright protection domain, herein without exhaustive one by one.
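Returning to the first example above (automatic focal-length determination from the sensed camera-to-subject distance), one plausible mapping is the thin-lens equation 1/f = 1/u + 1/v. This is a sketch under stated assumptions, not the patent's algorithm; the fixed image-plane distance `v_mm` is invented for illustration.

```python
# Illustrative sketch (not the patent's method): deriving a focal length
# from the subject distance via the thin-lens equation 1/f = 1/u + 1/v,
# so f = u*v / (u + v). The image distance v_mm is an assumed constant.

def focal_length_mm(subject_distance_mm, v_mm=50.0):
    """Thin-lens focal length for a subject at `subject_distance_mm`,
    imaged onto a plane `v_mm` behind the lens."""
    u = subject_distance_mm
    return (u * v_mm) / (u + v_mm)

# Moving the device closer to the subject lowers the required focal length:
far = focal_length_mm(5000.0)   # subject 5 m away
near = focal_length_mm(500.0)   # subject 0.5 m away
print(round(far, 2), round(near, 2))
```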
Further, to improve the flexibility of taking pictures, referring to Fig. 10, a third embodiment of the camera arrangement of the present invention is proposed based on the first embodiment. In this embodiment, the camera arrangement further includes:
The second determining module 70 is configured to, when an information addition instruction input by the user is detected, determine the information corresponding to the information addition instruction;
The adding module 80 is configured to add the determined information into the composite image, so as to display the composite image with the added information.
In this embodiment, the information preferably includes text or a pattern. The text includes the weather condition, the shooting place, the user's mood and the like, and the pattern can be a preset image such as a heart. That is, the user can add text or pattern information to the synthesized image, making the photo taken richer and more interesting.
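The information-addition step can be sketched with an assumed, deliberately simple data model: the composite image as a 2-D grid of characters and the pattern (the heart mentioned above) stamped onto it. A real device would blend pixel buffers; every name here is invented for illustration.

```python
# Illustrative sketch (assumed representation): adding a pattern such as a
# heart to a composite image, as the adding module 80 is described doing.

HEART = ["_##_##_",
         "#######",
         "_#####_",
         "__###__",
         "___#___"]

def add_pattern(image, pattern, top, left):
    """Copy every non-'_' cell of `pattern` into `image` at (top, left),
    returning a new grid so the original composite image is untouched."""
    out = [row[:] for row in image]
    for r, prow in enumerate(pattern):
        for c, ch in enumerate(prow):
            if ch != "_":
                out[top + r][left + c] = ch
    return out

canvas = [["." for _ in range(10)] for _ in range(7)]
stamped = add_pattern(canvas, HEART, top=1, left=1)
print("".join(stamped[2]))  # second pattern row, offset one column
```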
It should be noted that herein, the terms "include", "comprise" or its any other variant are intended to non-row
His property includes, so that process, method, article or system including a series of elements include not only those elements, and
And further include the other elements being not explicitly listed, or further include for this process, method, article or system institute it is intrinsic
Element.In the absence of more restrictions, the element limited by sentence "including a ...", it is not excluded that including this
There is also other identical elements in the process of element, method, article or system.
The serial numbers of the above embodiments of the present invention are for description only and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by software plus a necessary general hardware platform, and of course also by hardware, but in many cases the former is the better implementation. Based on this understanding, the technical solution of the present invention, or the part of it contributing to the prior art, can be embodied in the form of a software product stored in a storage medium (such as a ROM/RAM, magnetic disk or optical disc) and including several instructions for causing a terminal device (which may be a mobile phone, computer, server, air conditioner, network device or the like) to execute the methods described in the embodiments of the present invention.
The above are only preferred embodiments of the present invention and are not intended to limit the scope of the invention. Any equivalent structure or equivalent process transformation made using the contents of the specification and drawings of the present invention, applied directly or indirectly in other related technical fields, is likewise included within the protection scope of the present invention.
Claims (10)
1. A photographic method, characterized in that the photographic method includes the following steps:
displaying a 3D rendering to be synthesized in a preset area;
obtaining the current position and orientation information of a terminal in real time, wherein the position and orientation information refers to the direction the terminal camera points toward;
generating the field-of-view image corresponding to the 3D rendering according to the acquired position and orientation information and the current picture-taking position information in the 3D rendering, wherein the current picture-taking position information is the viewfinding position within the 3D rendering;
synthesizing the field-of-view image with a preset image.
2. The photographic method according to claim 1, characterized in that the photographic method further includes:
during display of the field-of-view image, if a picture-taking position information update instruction input by the user is received, updating the current picture-taking position information in the 3D rendering.
3. The photographic method according to claim 1, characterized in that the step of synthesizing the field-of-view image with the preset image includes:
obtaining the preset image;
determining profile information in the image;
extracting an object of a preset kind from the image according to the determined profile information;
synthesizing the field-of-view image with the image corresponding to the extracted object.
4. The photographic method according to claim 1, characterized in that between the step of generating the field-of-view image corresponding to the 3D rendering according to the acquired position and orientation information and the current picture-taking position information and the step of synthesizing the field-of-view image with the preset image, the photographic method includes:
when a parameter adjustment instruction is detected, determining the parameter corresponding to the parameter adjustment instruction;
adjusting the field-of-view image according to the parameter and generating the adjusted field-of-view image.
5. The photographic method according to claim 1, characterized in that after the step of synthesizing the field-of-view image with the preset image, the photographic method includes:
when an information addition instruction input by the user is detected, determining the information corresponding to the information addition instruction;
adding the determined information into the composite image, so as to display the composite image with the added information.
6. A camera arrangement, characterized in that the camera arrangement includes:
a display module for displaying a 3D rendering to be synthesized in a preset area;
an acquisition module for obtaining the current position and orientation information of a terminal in real time, wherein the position and orientation information refers to the direction the terminal camera points toward;
a generation module for generating the field-of-view image corresponding to the 3D rendering according to the acquired position and orientation information and the current picture-taking position information in the 3D rendering, wherein the current picture-taking position information is the viewfinding position within the 3D rendering;
a synthesis module for synthesizing the field-of-view image with a preset image.
7. The camera arrangement according to claim 6, characterized in that the camera arrangement further includes:
an update module for, during display of the field-of-view image, updating the current picture-taking position information in the 3D rendering if a picture-taking position information update instruction input by the user is received.
8. The camera arrangement according to claim 6, characterized in that the synthesis module includes:
an acquiring unit for obtaining the preset image;
a determination unit for determining profile information in the image;
an extraction unit for extracting an object of a preset kind from the image according to the determined profile information;
a synthesis unit for synthesizing the field-of-view image with the image corresponding to the extracted object.
9. The camera arrangement according to claim 6, characterized in that the camera arrangement further includes:
a first determining module for, when a parameter adjustment instruction is detected, determining the parameter corresponding to the parameter adjustment instruction;
a processing module for adjusting the field-of-view image according to the parameter and generating the adjusted field-of-view image.
10. The camera arrangement according to claim 6, characterized in that the camera arrangement further includes:
a second determining module for, when an information addition instruction input by the user is detected, determining the information corresponding to the information addition instruction;
an adding module for adding the determined information into the composite image, so as to display the composite image with the added information.
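The extraction-and-synthesis steps of claim 3 (and the synthesis module of claim 8) can be sketched with an assumed data model: pixels of the preset image that differ from its background form the object to be extracted, and those pixels are composited over the field-of-view image. All names and the background-difference heuristic are invented for illustration.

```python
# Illustrative sketch of claim 3's pipeline (assumed data model):
# determine the object's extent in the preset image, extract it, and
# composite it over the field-of-view image.

def extract_object(preset, background=0):
    """Return {(row, col): pixel} for every non-background pixel,
    i.e. the object of the preset kind delimited by its profile."""
    return {(r, c): px
            for r, row in enumerate(preset)
            for c, px in enumerate(row)
            if px != background}

def composite(view, obj_pixels):
    """Overlay the extracted object pixels onto a copy of the
    field-of-view image."""
    out = [row[:] for row in view]
    for (r, c), px in obj_pixels.items():
        out[r][c] = px
    return out

preset = [[0, 7, 0],
          [7, 7, 7]]      # a tiny "object" (7) on a background (0)
view = [[1, 1, 1],
        [1, 1, 1]]        # the field-of-view image
result = composite(view, extract_object(preset))
print(result)
```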
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510220902.0A CN104954670B (en) | 2015-04-30 | 2015-04-30 | Photographic method and device |
Publications (2)
Publication Number | Publication Date
---|---
CN104954670A (en) | 2015-09-30
CN104954670B (en) | 2018-09-04
Family
ID=54168977
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510220902.0A Active CN104954670B (en) | 2015-04-30 | 2015-04-30 | Photographic method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN104954670B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110365907B (en) * | 2019-07-26 | 2021-09-21 | 维沃移动通信有限公司 | Photographing method and device and electronic equipment |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101998045A (en) * | 2009-08-11 | 2011-03-30 | 佛山市顺德区顺达电脑厂有限公司 | Image processing device capable of synthesizing scene information |
CN103475826A (en) * | 2013-09-27 | 2013-12-25 | 深圳市中视典数字科技有限公司 | Video matting and synthesis method |
CN103581528A (en) * | 2012-07-19 | 2014-02-12 | 百度在线网络技术(北京)有限公司 | Method for preprocessing in photographing process of mobile terminal and mobile terminal |
CN103856617A (en) * | 2012-12-03 | 2014-06-11 | 联想(北京)有限公司 | Photographing method and user terminal |
Similar Documents
Publication | Title
---|---
CN105430295B (en) | Image processing apparatus and method
CN104660903B (en) | Image pickup method and photographing apparatus
CN104811554B (en) | Camera mode switching method and terminal
CN104902185B (en) | Image pickup method and device
CN104767941A (en) | Photography method and device
CN105959543B (en) | Reflection-removing photographing apparatus and method
CN105262951A (en) | Mobile terminal having binocular cameras and photographing method
CN105335458B (en) | Picture preview method and device
CN105516423A (en) | Mobile terminal, data transmission system and mobile terminal photographing method
CN105791701B (en) | Image capturing device and method
CN106603917A (en) | Shooting device and method
CN106027905B (en) | Sky-focusing method and mobile terminal
CN104683697A (en) | Shooting parameter adjusting method and device
CN105578056A (en) | Photographing terminal and method
CN105407295B (en) | Mobile terminal photographing apparatus and method
CN105681894A (en) | Device and method for displaying video file
CN104935810A (en) | Photographing guiding method and device
CN105357444B (en) | Focusing method and device
CN106028098A (en) | Video recording method, device and terminal
CN105407275B (en) | Photo synthesizing apparatus and method
CN104796625A (en) | Picture synthesizing method and device
CN104822099A (en) | Video packaging method and mobile terminal
CN105744170A (en) | Picture photographing device and method
CN105120145A (en) | Electronic equipment and image processing method
CN105163035A (en) | Mobile terminal shooting system and mobile terminal shooting method
Legal Events
Date | Code | Title
---|---|---
| C06 | Publication
| PB01 | Publication
| C10 | Entry into substantive examination
| SE01 | Entry into force of request for substantive examination
| GR01 | Patent grant