CN107404617A - Image capture method, terminal, and computer storage medium - Google Patents
Image capture method, terminal, and computer storage medium
- Publication number
- CN107404617A (application number CN201710602087.3A)
- Authority
- CN
- China
- Prior art keywords
- image
- terminal
- fused
- background content
- depth
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/80—Camera processing pipelines; Components thereof
- G06T3/14—
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformation in the plane of the image
- G06T3/40—Scaling the whole image or part thereof
- G06T3/4038—Scaling the whole image or part thereof for image mosaicing, i.e. plane images composed of plane sub-images
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20221—Image fusion; Image merging
Abstract
The invention discloses an image capture method, a terminal, and a computer storage medium. The method includes: continuously acquiring a first image with a first terminal, the first image including first subject content and/or first background content; receiving a second image continuously acquired by a second terminal and sent by the second terminal, the second image including second subject content and/or second background content; determining a first target object to be fused from the first image, the first target object being the first subject content and/or the first background content, and determining a second target object to be fused from the second image, the second target object being the second subject content and/or the second background content; superimposing and fusing the first target object and the second target object into a third image; and capturing the third image and displaying the captured still picture on the first terminal. Because the image fusion of the present invention is based on depth-of-field information, the fusion effect is natural, allowing two users in different places to feel as if they were standing in the same place when the photo was taken.
Description
Technical field
The present invention relates to imaging technology, and in particular to an image capture method, a terminal, and a computer storage medium.
Background art
With the popularization of intelligent terminals, the cameras of intelligent terminals have become increasingly capable, and users can conveniently use them to shoot high-definition pictures.
In some scenarios, there is a demand to merge a picture shot by a local user with a picture shot by a remote user. For example, a user may want to take a group photo with a friend in another place; or a user far away from France may want a photo of himself standing in front of the Eiffel Tower, which at present can only be achieved by synthesizing pictures in post-processing. Such a group-photo experience is not intuitive, the photos are disjoint in time, and there is no interaction between the photographed subjects, so the synthesized picture easily looks unnatural, with the subjects or the scenery appearing out of place. In short, obtaining a group photo in this way involves a complicated workflow and produces a poor result.
Content of the invention
To solve the above technical problems, the embodiments of the present invention provide an image capture method, a terminal, and a computer storage medium.
The image capture method provided in an embodiment of the present invention is applied to a first terminal, and the method includes:
continuously acquiring a first image with the first terminal, the first image including first subject content and/or first background content;
receiving a second image continuously acquired by a second terminal and sent by the second terminal, the second image including second subject content and/or second background content;
determining a first target object to be fused from the first image, the first target object being the first subject content and/or the first background content, and determining a second target object to be fused from the second image, the second target object being the second subject content and/or the second background content;
superimposing and fusing the first target object and the second target object into a third image;
capturing the third image, and displaying the captured still picture on the first terminal.
In an embodiment of the present invention, before capturing the third image, the method further includes:
performing, according to an acquired first adjustment operation, the following adjustments on the first target object and/or the second target object in the third image: size adjustment and position adjustment.
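As a hedged illustration (not part of the patent text), the size and position adjustments can be sketched as simple raster operations on a fused layer; `scale_nearest` and `place` are hypothetical helper names, and the sketch assumes non-negative placement offsets:

```python
import numpy as np

def scale_nearest(layer, factor):
    """Resize an (H, W, C) layer by a factor using nearest-neighbour sampling."""
    h, w = layer.shape[:2]
    nh, nw = max(1, int(h * factor)), max(1, int(w * factor))
    rows = np.arange(nh) * h // nh   # source row for each destination row
    cols = np.arange(nw) * w // nw   # source column for each destination column
    return layer[rows[:, None], cols]

def place(canvas, layer, top, left):
    """Paste a layer onto a copy of the canvas at (top, left), clipped at the
    bottom/right canvas edges; top and left are assumed non-negative."""
    h = min(layer.shape[0], canvas.shape[0] - top)
    w = min(layer.shape[1], canvas.shape[1] - left)
    out = canvas.copy()
    out[top:top + h, left:left + w] = layer[:h, :w]
    return out
```

Re-running `scale_nearest` and `place` with new parameters corresponds to the user's interactive size and position adjustments before the third image is captured.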
In an embodiment of the present invention, before capturing the third image, the method further includes:
performing, according to an acquired second adjustment operation, the following adjustment on the first target object and/or the second target object in the third image: layer-depth adjustment.
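Purely as an illustrative sketch (the patent does not specify an algorithm), layer-depth adjustment can be read as changing a target object's z-order before re-compositing; `composite` is a hypothetical name, and a back-to-front painter's-algorithm render is assumed:

```python
import numpy as np

def composite(canvas, layers):
    """Render layers back-to-front: a larger depth value means farther away,
    so it is drawn first and may be covered by nearer layers.

    Each layer is (depth, image, mask); image and canvas are (H, W, 3) arrays,
    mask is a boolean (H, W) array selecting the layer's opaque pixels.
    """
    out = canvas.copy()
    for depth, image, mask in sorted(layers, key=lambda t: -t[0]):
        out[mask] = image[mask]
    return out
```

Under this reading, the second adjustment operation simply changes one layer's `depth` value and the third image is re-rendered.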
In an embodiment of the present invention, determining the first target object to be fused from the first image and the second target object to be fused from the second image includes:
determining, based on depth-of-field information of the first image, the first target object to be fused from the first image; and
determining, based on depth-of-field information of the second image, the second target object to be fused from the second image.
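A minimal sketch of depth-based target selection, assuming a per-pixel depth map and a known subject depth range (the function and parameter names are illustrative, not from the patent):

```python
import numpy as np

def extract_target(image, depth_map, near, far):
    """Select the target object as the pixels whose depth lies in [near, far].

    image: (H, W, 3) array; depth_map: (H, W) array of per-pixel depths.
    Returns the target with non-target pixels zeroed, plus the boolean mask.
    """
    mask = (depth_map >= near) & (depth_map <= far)
    target = np.where(mask[..., None], image, 0)
    return target, mask
```

Choosing the depth range around the in-focus subject selects the subject content; choosing the far range would instead select the background content.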
In an embodiment of the present invention, before capturing the third image, the method further includes:
previewing the third image on the first terminal.
The terminal provided in an embodiment of the present invention includes:
a camera, configured to continuously acquire a first image;
a memory, configured to store a picture processing program; and
a processor, configured to execute the picture processing program in the memory to perform the following operations:
acquiring the first image continuously acquired by the terminal, the first image including first subject content and/or first background content;
receiving a second image continuously acquired by a second terminal and sent by the second terminal, the second image including second subject content and/or second background content;
determining a first target object to be fused from the first image, the first target object being the first subject content and/or the first background content, and determining a second target object to be fused from the second image, the second target object being the second subject content and/or the second background content;
superimposing and fusing the first target object and the second target object into a third image;
capturing the third image, and displaying the captured still picture on the terminal.
In an embodiment of the present invention, before capturing the third image, the processor is further configured to execute the picture processing program in the memory to perform the following operation:
performing, according to an acquired first adjustment operation, the following adjustments on the first target object and/or the second target object in the third image: size adjustment and position adjustment.
In an embodiment of the present invention, before capturing the third image, the processor is further configured to execute the picture processing program in the memory to perform the following operation:
performing, according to an acquired second adjustment operation, the following adjustment on the first target object and/or the second target object in the third image: layer-depth adjustment.
In an embodiment of the present invention, the processor is further configured to execute the picture processing program in the memory to perform the following operations:
determining, based on depth-of-field information of the first image, the first target object to be fused from the first image; and
determining, based on depth-of-field information of the second image, the second target object to be fused from the second image.
In an embodiment of the present invention, the terminal further includes:
a display, configured to preview the third image on the terminal.
The computer storage medium provided in an embodiment of the present invention stores one or more programs, and the one or more programs can be executed by one or more processors to implement any of the image capture methods described above.
According to the technical solutions of the embodiments of the present invention, a first image is continuously acquired by a first terminal, the first image including first subject content and/or first background content; a second image continuously acquired by a second terminal is received from the second terminal, the second image including second subject content and/or second background content; a first target object to be fused is determined from the first image, the first target object being the first subject content and/or the first background content, and a second target object to be fused is determined from the second image, the second target object being the second subject content and/or the second background content; the first target object and the second target object are superimposed and fused into a third image; and the third image is captured, with the captured still picture displayed on the first terminal. With these technical solutions, the scenery/people at the remote end and the scenery/people at the local end are combined into a real-time composite image for preview and capture, thereby achieving a cross-location group photo. Because the group photo is shot with a live preview of the images collected at both ends, the users can adjust the shot in real time, such as the shooting angle and the subjects' postures, and the two parties can interact in real time about what is being shot, which improves the shooting experience. In addition, because the image fusion of the embodiments of the present invention is based on depth-of-field information, the fusion effect is natural, making the users in two different places feel as if they were standing in the same place when the photo was taken.
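The overall flow described above can be sketched end to end under stated assumptions: equal-sized frames from both terminals, a per-pixel depth map for the remote frame, and a known depth range for the remote subject. `fuse_group_photo` is an illustrative name, not the patent's implementation:

```python
import numpy as np

def fuse_group_photo(img_a, img_b, depth_b, subject_range_b):
    """Sketch of the claimed pipeline: pick terminal B's subject by its depth
    range, then superimpose it onto terminal A's frame to form the third image.

    img_a, img_b: (H, W, 3) arrays of equal size; depth_b: (H, W) depth map
    for img_b; subject_range_b: (near, far) tuple bounding B's subject depth.
    """
    near_b, far_b = subject_range_b
    mask_b = (depth_b >= near_b) & (depth_b <= far_b)  # B's subject pixels
    third = img_a.copy()                               # A supplies the scene
    third[mask_b] = img_b[mask_b]                      # fuse B's subject in
    return third
```

Running this on every preview frame pair would give the live composite preview; capturing the third image then amounts to saving the current output.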
Brief description of the drawings
Fig. 1 is a schematic diagram of the hardware structure of a mobile terminal implementing embodiments of the present invention;
Fig. 2 is an architecture diagram of a communication network system provided by an embodiment of the present invention;
Fig. 3 is a first schematic flowchart of the image capture method of an embodiment of the present invention;
Fig. 4 is a second schematic flowchart of the image capture method of an embodiment of the present invention;
Fig. 5 is a schematic diagram of a shooting picture of an embodiment of the present invention;
Fig. 6 is a third schematic flowchart of the image capture method of an embodiment of the present invention;
Fig. 7 is a schematic diagram of the structure of the terminal of an embodiment of the present invention.
Embodiment
It should be understood that the specific embodiments described herein are merely illustrative of the present invention and are not intended to limit it.
In the following description, suffixes such as "module", "part", or "unit" used to denote elements are adopted only to facilitate the description of the present invention and have no specific meaning in themselves. Therefore, "module", "part", and "unit" may be used interchangeably.
The terminal may be implemented in various forms. For example, the terminal described in the present invention may include mobile terminals such as mobile phones, tablet computers, notebook computers, palmtop computers, personal digital assistants (PDA), portable media players (PMP), navigation devices, wearable devices, smart bracelets, and pedometers, as well as fixed terminals such as digital TVs and desktop computers.
The following description takes a mobile terminal as an example. Those skilled in the art will appreciate that, apart from elements particularly intended for mobile use, the construction according to the embodiments of the present invention can also be applied to terminals of the fixed type.
Referring to Fig. 1, which is a schematic diagram of the hardware structure of a mobile terminal implementing embodiments of the present invention, the mobile terminal 100 may include: an RF (Radio Frequency) unit 101, a WiFi module 102, an audio output unit 103, an A/V (audio/video) input unit 104, a sensor 105, a display unit 106, a user input unit 107, an interface unit 108, a memory 109, a processor 110, a power supply 111, and other parts. Those skilled in the art will understand that the mobile terminal structure shown in Fig. 1 does not limit the mobile terminal; the mobile terminal may include more or fewer parts than illustrated, combine certain parts, or arrange the parts differently.
The parts of the mobile terminal are introduced below with reference to Fig. 1:
The radio frequency unit 101 may be used for receiving and sending signals during messaging or a call; specifically, it delivers downlink information received from a base station to the processor 110 for processing, and sends uplink data to the base station. Generally, the radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low-noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 101 can also communicate with a network and other devices by wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to GSM (Global System for Mobile communications), GPRS (General Packet Radio Service), CDMA2000 (Code Division Multiple Access 2000), WCDMA (Wideband Code Division Multiple Access), TD-SCDMA (Time Division-Synchronous Code Division Multiple Access), FDD-LTE (Frequency Division Duplexing-Long Term Evolution), and TDD-LTE (Time Division Duplexing-Long Term Evolution).
WiFi is a short-range wireless transmission technology. Through the WiFi module 102, the mobile terminal can help the user send and receive e-mail, browse web pages, access streaming media, and so on, providing the user with wireless broadband Internet access. Although Fig. 1 shows the WiFi module 102, it is understood that the module is not an essential part of the mobile terminal and can be omitted as needed without changing the essence of the invention.
The audio output unit 103 may, when the mobile terminal 100 is in a call signal reception mode, a call mode, a recording mode, a speech recognition mode, a broadcast reception mode, or a similar mode, convert the audio data received by the radio frequency unit 101 or the WiFi module 102 or stored in the memory 109 into an audio signal and output it as sound. Moreover, the audio output unit 103 can also provide audio output related to a specific function performed by the mobile terminal 100 (for example, a call signal reception sound or a message reception sound). The audio output unit 103 may include a loudspeaker, a buzzer, and the like.
The A/V input unit 104 is used to receive audio or video signals. The A/V input unit 104 may include a graphics processing unit (GPU) 1041 and a microphone 1042. The graphics processing unit 1041 processes the image data of still pictures or video obtained by an image capture device (such as a camera) in a video capture mode or an image capture mode. The processed image frames may be displayed on the display unit 106, stored in the memory 109 (or another storage medium), or transmitted via the radio frequency unit 101 or the WiFi module 102. The microphone 1042 can receive sound (audio data) in a phone call mode, a recording mode, a speech recognition mode, or another operating mode, and can process such sound into audio data. In the phone call mode, the processed audio (voice) data can be converted into a format that can be sent to a mobile communication base station via the radio frequency unit 101. The microphone 1042 can implement various types of noise cancellation (or suppression) algorithms to eliminate (or suppress) noise or interference produced while receiving and sending audio signals.
The mobile terminal 100 also includes at least one sensor 105, such as an optical sensor, a motion sensor, and other sensors. Specifically, the optical sensor includes an ambient light sensor and a proximity sensor: the ambient light sensor can adjust the brightness of the display panel 1061 according to the ambient light, and the proximity sensor can turn off the display panel 1061 and/or the backlight when the mobile terminal 100 is moved to the ear. As one kind of motion sensor, an accelerometer can detect the magnitude of acceleration in all directions (generally three axes) and, when static, can detect the magnitude and direction of gravity; it can be used for applications that recognize the phone's posture (such as portrait/landscape switching, related games, and magnetometer pose calibration) and for vibration-recognition functions (such as a pedometer or tap detection). The phone may also be configured with a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, and other sensors, which will not be described here.
The display unit 106 is used to display information input by the user or information provided to the user. The display unit 106 may include a display panel 1061, which may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, or the like.
The user input unit 107 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the mobile terminal. Specifically, the user input unit 107 may include a touch panel 1071 and other input devices 1072. The touch panel 1071, also called a touch screen, collects the user's touch operations on or near it (for example, operations performed by the user on or near the touch panel 1071 with a finger, a stylus, or any other suitable object or accessory) and drives the corresponding connection devices according to a preset program. The touch panel 1071 may include two parts: a touch detection device and a touch controller. The touch detection device detects the user's touch position and the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, sends them to the processor 110, and receives and executes the commands sent by the processor 110. In addition, the touch panel 1071 may be implemented in various types such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch panel 1071, the user input unit 107 may also include other input devices 1072. Specifically, the other input devices 1072 may include, but are not limited to, one or more of a physical keyboard, function keys (such as a volume control key and a power key), a trackball, a mouse, and a joystick, which is not specifically limited here.
Further, the touch panel 1071 may cover the display panel 1061. After detecting a touch operation on or near it, the touch panel 1071 transmits the operation to the processor 110 to determine the type of the touch event, and the processor 110 then provides a corresponding visual output on the display panel 1061 according to the type of the touch event. Although in Fig. 1 the touch panel 1071 and the display panel 1061 are two independent parts realizing the input and output functions of the mobile terminal, in some embodiments the touch panel 1071 and the display panel 1061 may be integrated to realize the input and output functions of the mobile terminal, which is not specifically limited here.
The interface unit 108 serves as an interface through which at least one external device can be connected with the mobile terminal 100. For example, the external device may include a wired or wireless headset port, an external power (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 108 can be used to receive input (for example, data information or electric power) from an external device and transfer the received input to one or more elements in the mobile terminal 100, or can be used to transmit data between the mobile terminal 100 and an external device.
The memory 109 can be used to store software programs and various data. The memory 109 may mainly include a program storage area and a data storage area: the program storage area can store the operating system and the applications required by at least one function (such as a sound playing function and an image playing function), and the data storage area can store data created according to the use of the phone (such as audio data and a phone book). In addition, the memory 109 may include a high-speed random access memory and may also include a non-volatile memory, such as at least one magnetic disk storage device or flash memory device, or another solid-state storage part.
The processor 110 is the control center of the mobile terminal. It uses various interfaces and lines to connect all the parts of the whole mobile terminal, and performs the various functions of the mobile terminal and processes data by running or executing the software programs and/or modules stored in the memory 109 and calling the data stored in the memory 109, thereby monitoring the mobile terminal as a whole. The processor 110 may include one or more processing units; preferably, the processor 110 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, the user interface, applications, and so on, and the modem processor mainly handles wireless communication. It can be understood that the modem processor may also not be integrated into the processor 110.
The mobile terminal 100 may also include a power supply 111 (such as a battery) that supplies power to all the parts. Preferably, the power supply 111 may be logically connected with the processor 110 through a power management system, so as to realize functions such as charging management, discharging management, and power consumption management through the power management system.
Although not shown in Fig. 1, the mobile terminal 100 may also include a Bluetooth module and the like, which will not be described here.
To facilitate understanding of the embodiments of the present invention, the communication network system on which the mobile terminal of the present invention is based is described below.
Referring to Fig. 2, Fig. 2 is an architecture diagram of a communication network system provided by an embodiment of the present invention. The communication network system is an LTE system of the universal mobile communication technology, and the LTE system includes a UE (User Equipment) 201, an E-UTRAN (Evolved UMTS Terrestrial Radio Access Network) 202, an EPC (Evolved Packet Core) 203, and an operator's IP services 204, which are communicatively connected in sequence.
Specifically, the UE 201 may be the terminal 100 described above, which is not repeated here.
The E-UTRAN 202 includes an eNodeB 2021 and other eNodeBs 2022. The eNodeB 2021 can be connected with the other eNodeBs 2022 through a backhaul (for example, an X2 interface), the eNodeB 2021 is connected to the EPC 203, and the eNodeB 2021 can provide the UE 201 with access to the EPC 203.
The EPC 203 may include an MME (Mobility Management Entity) 2031, an HSS (Home Subscriber Server) 2032, other MMEs 2033, an SGW (Serving Gateway) 2034, a PGW (PDN Gateway) 2035, a PCRF (Policy and Charging Rules Function) 2036, and so on. The MME 2031 is a control node that handles the signaling between the UE 201 and the EPC 203, providing bearer and connection management. The HSS 2032 is used to provide registers for management, such as a home location register (not shown), and to store user-specific information about service features, data rates, and the like. All user data can be transmitted through the SGW 2034; the PGW 2035 can provide IP address allocation and other functions for the UE 201; and the PCRF 2036 is the policy and charging control decision point for service data flows and IP bearer resources, which selects and provides available policy and charging control decisions for the policy and charging enforcement function unit (not shown).
The IP services 204 may include the Internet, an intranet, an IMS (IP Multimedia Subsystem), or other IP services.
Although the above description takes the LTE system as an example, those skilled in the art should understand that the present invention is not only applicable to the LTE system but is also applicable to other wireless communication systems, such as GSM, CDMA2000, WCDMA, TD-SCDMA, and future new network systems, which is not limited here.
Based on the above mobile terminal hardware structure and communication network system, the embodiments of the method of the present invention are proposed below.
Fig. 3 is a first schematic flowchart of the image capture method of an embodiment of the present invention. The image capture method in this example is applied to a first terminal. As shown in Fig. 3, the image capture method includes the following steps:
Step 301: continuously acquiring a first image with the first terminal, the first image including first subject content and/or first background content.
In the embodiments of the present invention, the first terminal refers to the local terminal, and the second terminal refers to the remote terminal. Below, the user of the first terminal is called the first user, and the user of the second terminal is called the second user. "Local terminal" and "remote terminal" are relative terms: if the first terminal is a mobile phone A and the second terminal is a tablet B, then phone A is tablet B's remote terminal, and tablet B is phone A's remote terminal. Here the first terminal and the second terminal are illustrated as different types of devices, but the first terminal and the second terminal may also be terminals of the same type.
In the embodiment of the present invention, first terminal and second terminal can be mobile phone, even tablet personal computer, notebook electricity
The equipment such as brain, desktop computer.First terminal and second terminal all have camera, and specifically, the camera takes the photograph depth of field shooting to be double
Head, double depth of field cameras of taking the photograph refer to:By two cameras simultaneously shooting image, due to two cameras in the position of terminal not
Together, thus by two cameras shoot come image can carry depth of view information.Here, in depth of view information namely picture not
With depth information corresponding to content.
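The depth information carried by a dual camera comes from the horizontal offset (disparity) between the two views. As a minimal sketch of that triangulation, not part of the patent text, the pinhole model and the focal-length and baseline values below are illustrative assumptions:

```python
# Illustrative sketch: recovering depth from the disparity between two
# horizontally offset cameras (simple pinhole model). The focal length
# (in pixels) and baseline (in metres) are hypothetical values.

def depth_from_disparity(disparity_px, focal_px=1000.0, baseline_m=0.02):
    """depth = focal_length * baseline / disparity."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# A nearby subject shows a larger disparity than a distant background:
subject_depth = depth_from_disparity(50.0)    # 1000 * 0.02 / 50 = 0.4 m
background_depth = depth_from_disparity(5.0)  # 1000 * 0.02 / 5  = 4.0 m
```

Computing this per pixel yields the depth map that the later steps use to separate subject content from background content.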
In an embodiment of the present invention, the first user opens the camera on the first terminal and points it at the region to be captured to continuously acquire the first image; here, the first image is the real-time image collected by the camera on the first terminal. The first image is previewed on the display screen of the first terminal; because it is collected in real time, the preview picture changes dynamically with the framed scene. Typically, the first image includes two classes of content: first subject content and first background content. For example, in an image of a person P standing in front of a background painting T, person P is the first subject content and painting T is the first background content. Relative to the first background content, the first subject content is usually the focus of the user's attention; thus the first subject content tends to lie within the camera's depth of field and is displayed more sharply, while the first background content may lie outside the depth of field and is displayed more blurred. Of course, if the camera's depth of field is large, both the first subject content and the first background content may fall within it, in which case both are displayed relatively sharply.
Step 302: Receive a second image continuously captured by a second terminal and sent by the second terminal, the second image including second subject content and/or second background content.
In an embodiment of the present invention, the second user opens the camera on the second terminal and points it at the region to be captured to continuously acquire the second image; here, the second image is the real-time image collected by the camera on the second terminal. The second image is previewed on the display screen of the second terminal; because it is collected in real time, the preview picture changes dynamically with the framed scene. Typically, the second image includes two classes of content: second subject content and second background content.
In an embodiment of the present invention, the first terminal and the second terminal each have a communication module, such as a mobile communication card, and establish a communication connection through their respective communication modules. In a specific implementation, a data interface is created between the camera application (APP) on the first terminal and the communication module in the first terminal; similarly, a data interface is created between the camera APP on the second terminal and the communication module in the second terminal. Data transfer between the camera APP on the first terminal and the camera APP on the second terminal can then be realized via the respective communication modules. On this basis, the first terminal receives the second image sent by the second terminal.
In an embodiment of the present invention, after the first terminal receives the second image sent by the second terminal, the second image may or may not be previewed.
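The data interface between the camera APP and the communication module can be pictured as a frame queue that the APP fills and the module drains. The following is only a sketch under that assumption; the class and method names are hypothetical, and the actual mobile link is omitted:

```python
# Illustrative sketch of the camera-APP / communication-module data
# interface: the APP pushes preview frames into a FIFO queue, and the
# communication module takes them out in capture order to send to the
# peer terminal. Names here are hypothetical, not from the patent.
from queue import Queue

class CameraDataInterface:
    def __init__(self):
        self._frames = Queue()

    def push_frame(self, frame):
        """Called by the camera APP for each captured preview frame."""
        self._frames.put(frame)

    def next_frame(self):
        """Called by the communication module; blocks until a frame exists."""
        return self._frames.get()

iface = CameraDataInterface()
iface.push_frame(b"frame-1")
iface.push_frame(b"frame-2")
assert iface.next_frame() == b"frame-1"  # frames leave in capture order
```

A FIFO preserves the temporal order of preview frames, which matters because the fused third image is itself a live preview.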
Step 303: Determine a first target object to be fused from the first image, the first target object being the first subject content and/or the first background content, and determine a second target object to be fused from the second image, the second target object being the second subject content and/or the second background content.
In an embodiment of the present invention, the first target object to be fused is determined from the first image based on the depth-of-field information of the first image, and the second target object to be fused is determined from the second image based on the depth-of-field information of the second image.
Several fusion scenarios are illustrated below:
Scenario one:
The first image includes the first background content. The second image includes the second subject content and the second background content. Fusion goal: fuse the second subject content of the second image with the first background content of the first image.
Based on the depth-of-field information of the first image, the full picture, namely the first background content, is extracted from the first image. Based on the depth-of-field information of the second image, the picture of the second subject content is extracted from the second image.
Scenario two:
The first image includes the first subject content and the first background content. The second image includes the second subject content and the second background content.
1) Fusion goal: fuse the second subject content of the second image with the first background content of the first image.
Based on the depth-of-field information of the first image, the picture of the first background content is extracted from the first image. Based on the depth-of-field information of the second image, the picture of the second subject content is extracted from the second image.
2) Fusion goal: fuse the first subject content of the first image with the second background content of the second image.
Based on the depth-of-field information of the first image, the picture of the first subject content is extracted from the first image. Based on the depth-of-field information of the second image, the picture of the second background content is extracted from the second image.
3) Fusion goal: fuse the first subject content of the first image and the second subject content of the second image with the second background content of the second image.
Based on the depth-of-field information of the first image, the picture of the first subject content is extracted from the first image. Based on the depth-of-field information of the second image, the full picture, namely the second background content and the second subject content, is extracted from the second image.
4) Fusion goal: fuse the first subject content of the first image and the second subject content of the second image with the first background content of the first image.
Based on the depth-of-field information of the first image, the full picture, namely the first background content and the first subject content, is extracted from the first image. Based on the depth-of-field information of the second image, the picture of the second subject content is extracted from the second image.
Step 304: Fuse the first target object and the second target object by superimposition into a third image.
In an embodiment of the present invention, the first target object represents one layer and the second target object represents another layer, and the image data in both layers carries depth-of-field information.
In an embodiment of the present invention, the superimposing fusion into the third image may be:
superimposing and displaying the first target object in front of the second target object; or
superimposing and displaying the first target object behind the second target object.
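The two display orders can be pictured as masked layer compositing: wherever the front layer's mask has content, its pixels win. A minimal sketch, not the patent's implementation, with 1-D pixel lists standing in for images:

```python
# Illustrative sketch of superimposing fusion: two target objects are
# layers, and a binary mask marks where the front layer has content.

def composite(back, front, mask):
    """Per-pixel: take the front layer where mask is 1, else the back."""
    return [f if m else b for b, f, m in zip(back, front, mask)]

first = [10, 10, 10, 10]    # layer for the first target object
second = [99, 99, 99, 99]   # layer for the second target object
mask = [0, 1, 1, 0]         # where the second target object has content

# Second target object displayed in front of the first:
fused = composite(first, second, mask)   # -> [10, 99, 99, 10]
# Swapping the arguments (and mask) would display the first in front.
```

The choice of which layer is "front" is exactly the layer-depth ordering that step 406 later lets the user adjust.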
Step 305: Capture the third image, and display the captured frozen picture on the first terminal.
In an embodiment of the present invention, because the first image is a continuously captured real-time image and the second image is likewise a continuously captured real-time image, the third image obtained by fusion is also a real-time image. After the third image is fused, it is previewed on the first terminal, so the user can watch the composite effect in real time. When satisfied with the composite third image, the user may tap the capture button to grab the third image; here, grabbing the third image means freezing a frame out of the real-time third image, and the resulting frozen picture is then displayed on the first terminal.
In the technical solution of the embodiments of the present invention, the depth-of-field information of the images is obtained through dual depth-of-field cameras, and the local live preview image and the remote live preview image are placed at the different image depths specified by the user, so that a real-time composite preview image is obtained; tapping the capture button after confirmation yields a long-distance group photo.
In addition, the required subjects can be cropped from the local and remote images according to their respective depth-of-field information. Taking a long-distance couple's group photo as an example, both parties can preview and adjust the group-photo effect on their respective terminals, greatly improving intuitiveness and interactivity; shooting after confirmation yields the group photo, thereby greatly improving the user experience.
Fig. 4 is a second schematic flowchart of the image capture method of an embodiment of the present invention. The image capture method in this example is applied to a first terminal. As shown in Fig. 4, the image capture method comprises the following steps:
Step 401: Continuously capture a first image using the first terminal, the first image including first subject content and/or first background content.
In an embodiment of the present invention, the first terminal is the local terminal and the second terminal is the remote terminal. Hereinafter, the user of the first terminal is referred to as the first user, and the user of the second terminal is referred to as the second user. "Local" and "remote" are relative terms; that is, if the first terminal is mobile phone A and the second terminal is tablet B, then mobile phone A is the remote terminal of tablet B, and tablet B is the remote terminal of mobile phone A. The first terminal and the second terminal are described here as different types of devices for illustration only; they may also be terminals of the same type.
In an embodiment of the present invention, the first terminal and the second terminal may each be a device such as a mobile phone, a tablet computer, a notebook computer, or a desktop computer. Both the first terminal and the second terminal have a camera; specifically, the camera is a dual depth-of-field camera. A dual depth-of-field camera means that two cameras capture images simultaneously; because the two cameras occupy different positions on the terminal, the images they capture carry depth-of-field information, that is, the depth corresponding to each piece of content in the picture.
In an embodiment of the present invention, the first user opens the camera on the first terminal and points it at the region to be captured to continuously acquire the first image; here, the first image is the real-time image collected by the camera on the first terminal. The first image is previewed on the display screen of the first terminal; because it is collected in real time, the preview picture changes dynamically with the framed scene. Typically, the first image includes two classes of content: first subject content and first background content. For example, in an image of a person P standing in front of a background painting T, person P is the first subject content and painting T is the first background content. Relative to the first background content, the first subject content is usually the focus of the user's attention; thus the first subject content tends to lie within the camera's depth of field and is displayed more sharply, while the first background content may lie outside the depth of field and is displayed more blurred. Of course, if the camera's depth of field is large, both the first subject content and the first background content may fall within it, in which case both are displayed relatively sharply.
Step 402: Receive a second image continuously captured by a second terminal and sent by the second terminal, the second image including second subject content and/or second background content.
In an embodiment of the present invention, the second user opens the camera on the second terminal and points it at the region to be captured to continuously acquire the second image; here, the second image is the real-time image collected by the camera on the second terminal. The second image is previewed on the display screen of the second terminal; because it is collected in real time, the preview picture changes dynamically with the framed scene. Typically, the second image includes two classes of content: second subject content and second background content.
In an embodiment of the present invention, the first terminal and the second terminal each have a communication module, such as a mobile communication card, and establish a communication connection through their respective communication modules. In a specific implementation, a data interface is created between the camera application (APP) on the first terminal and the communication module in the first terminal; similarly, a data interface is created between the camera APP on the second terminal and the communication module in the second terminal. Data transfer between the camera APP on the first terminal and the camera APP on the second terminal can then be realized via the respective communication modules. On this basis, the first terminal receives the second image sent by the second terminal.
In an embodiment of the present invention, after the first terminal receives the second image sent by the second terminal, the second image may or may not be previewed.
Step 403: Determine a first target object to be fused from the first image, the first target object being the first subject content and/or the first background content, and determine a second target object to be fused from the second image, the second target object being the second subject content and/or the second background content.
In an embodiment of the present invention, the first target object to be fused is determined from the first image based on the depth-of-field information of the first image, and the second target object to be fused is determined from the second image based on the depth-of-field information of the second image.
Step 404: Fuse the first target object and the second target object by superimposition into a third image.
In an embodiment of the present invention, the first target object represents one layer and the second target object represents another layer, and the image data in both layers carries depth-of-field information.
In an embodiment of the present invention, the superimposing fusion into the third image may be:
superimposing and displaying the first target object in front of the second target object; or
superimposing and displaying the first target object behind the second target object.
Step 405: According to an acquired first adjustment operation, perform the following adjustments on the first target object and/or the second target object in the third image: size adjustment and position adjustment.
In an embodiment of the present invention, the first user may adjust the size of the first target object and may also adjust its position. For example, the user touches the first target object with two fingers and spreads the fingers apart to enlarge it, or pinches them together to shrink it; the user touches the first target object with one finger and moves the finger to move the object. Similarly, the first user may also adjust the size and/or position of the second target object.
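These gestures can be modelled as transforms on a layer's placement rectangle. A minimal sketch under that assumption (gesture recognition itself is platform-specific and omitted; the rectangle format `(x, y, w, h)` is illustrative):

```python
# Illustrative sketch of step 405: size and position adjustment as
# transforms on a layer's placement rectangle (x, y, width, height).

def scale_layer(rect, factor):
    """Pinch gesture: scale the layer's width and height by `factor`."""
    x, y, w, h = rect
    return (x, y, w * factor, h * factor)

def move_layer(rect, dx, dy):
    """Drag gesture: translate the layer by (dx, dy) pixels."""
    x, y, w, h = rect
    return (x + dx, y + dy, w, h)

layer = (100, 100, 200, 300)
layer = scale_layer(layer, 1.5)     # spread fingers: enlarge
layer = move_layer(layer, -20, 40)  # one-finger drag: reposition
```

Applying the transform to the placement rectangle rather than the pixels keeps the adjustment cheap enough to run on every preview frame.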
Step 406: According to an acquired second adjustment operation, perform the following adjustment on the first target object and/or the second target object in the third image: layer depth adjustment.
In an embodiment of the present invention, the first target object is one layer and the second target object is another layer. If at the initial fusion the first target object is displayed in front of the second target object, but after adjusting the size and/or position of the first target object and/or the second target object the user wants to change the relative order of the layers, the first target object can be adjusted to be displayed behind the second target object. Similarly, if at the initial fusion the first target object is displayed behind the second target object, the user may, after such size and/or position adjustments, adjust the first target object to be displayed in front of the second target object.
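Layer depth adjustment amounts to reordering a back-to-front layer stack. A minimal sketch, not the patent's implementation, using dictionaries of pixel positions as stand-ins for layer content:

```python
# Illustrative sketch of step 406: layers rendered back-to-front, so
# reversing the stack swaps which target object appears in front.

def render(stack):
    """Draw layers in order; later layers overwrite (appear in front)."""
    out = {}
    for name, pixels in stack:
        out.update(pixels)  # later layer wins at overlapping positions
    return out

# Position 2 is where the two target objects overlap.
stack = [("second", {1: "B", 2: "B"}), ("first", {2: "A", 3: "A"})]
assert render(stack)[2] == "A"   # first target object displayed in front

stack.reverse()                  # the user's layer-depth adjustment
assert render(stack)[2] == "B"   # second target object now in front
```

Only the ordering changes; the layers' own pixels, masks, and depth information are untouched, which is why the swap can be undone freely.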
Step 407: Capture the third image, and display the captured frozen picture on the first terminal.
In an embodiment of the present invention, because the first image is a continuously captured real-time image and the second image is likewise a continuously captured real-time image, the third image obtained by fusion is also a real-time image. After the third image is fused, it is previewed on the first terminal, so the user can watch the composite effect in real time. When satisfied with the composite third image, the user may tap the capture button to grab the third image; here, grabbing the third image means freezing a frame out of the real-time third image, and the resulting frozen picture is then displayed on the first terminal.
In the technical solution of the embodiments of the present invention, the depth-of-field information of the images is obtained through dual depth-of-field cameras, and the local live preview image and the remote live preview image are placed at the different image depths specified by the user, so that a real-time composite preview image is obtained; tapping the capture button after confirmation yields a long-distance group photo.
In addition, the required subjects can be cropped from the local and remote images according to their respective depth-of-field information. Taking a long-distance couple's group photo as an example, both parties can preview and adjust the group-photo effect on their respective terminals, greatly improving intuitiveness and interactivity; shooting after confirmation yields the group photo, thereby greatly improving the user experience.
Fig. 5 is a schematic diagram of a capture sequence of an embodiment of the present invention. As shown in Fig. 5, it includes:
(1): The first terminal continuously captures the first image; here, the first image includes a person and a background. The first target object to be fused is the full picture of the first image.
(2): The second terminal continuously captures the second image; here, the second image includes a person and a background. The second target object to be fused is the person in the second image.
Here, the first user and the second user can confirm their respective poses by communicating and coordinating with each other, so as to form the intended group-photo effect.
(3): The first terminal or the second terminal crops the person out of the second image according to the depth-of-field information of the second image, and the person in the second image is displayed on the first terminal.
(4): The person in the second image is fused into the first image.
Here, the person in the second image is superimposed and displayed on the first image.
(5): Size and position adjustments are applied to the person in the second image.
Here, the first user and the second user can fine-tune their respective poses by communicating and coordinating with each other, so as to form the intended group-photo effect.
(6): Based on the depth-of-field information of the first image, the person in the second image is placed behind the person in the first image, forming a more naturally realistic group-photo effect.
The user then taps the confirm button to freeze the picture; next, fine adjustments such as toning are applied to the remote image so that it better harmonizes with the local image; finally, the user taps the confirm button again to obtain the final group-photo image.
Fig. 6 is a third schematic flowchart of the image capture method of an embodiment of the present invention. As shown in Fig. 6, the image capture method includes the following steps:
Step 601: The first user opens the local camera.
Step 602: The camera focuses and continuously captures the first image.
Step 603: Obtain the depth-of-field information of the first subject content in the first image.
Step 604: Taking the first subject content as the origin, the first user crops the image A to be fused along the depth direction.
Step 605: The first user switches the camera to the remote real-time picture connection mode.
Specifically, the local terminal receives the second image continuously captured by and sent from the remote terminal.
Step 606: The first user or the second user selects the second subject content in the second image.
Step 607: Obtain the depth-of-field information of the second subject content in the second image.
Step 608: Taking the second subject content as the origin, the user crops the image B to be fused along the depth direction.
Step 609: Start the picture composition mode, superimposing the real-time images A and B into a preliminary composite image.
Step 610: Scale and shift the real-time image B to obtain a better composite image.
Here, image A and image B are still live preview images.
Step 611: The first user touches the capture button to freeze the video.
Here, the frozen picture still retains its depth-of-field information, so the following step 612 can be performed.
Step 612: Apply processing such as toning to image B so that it better harmonizes with image A.
Step 613: The first user touches the capture button to obtain the final image.
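One simple way to realize the toning of step 612 is to shift the remote layer's mean brightness toward the local image's. This is a minimal sketch under that assumption, working on a single grey channel; the patent does not specify the toning algorithm, and real toning would match fuller colour statistics:

```python
# Illustrative sketch of step 612's toning: shift the mean brightness
# of remote layer B toward local image A so the frozen composite looks
# uniform. Single-channel grey values 0..255; pixel lists are toy data.

def match_mean(b_pixels, a_pixels):
    """Shift every B pixel by (mean of A - mean of B), clamped to 0..255."""
    mean_a = sum(a_pixels) / len(a_pixels)
    mean_b = sum(b_pixels) / len(b_pixels)
    shift = mean_a - mean_b
    return [min(255, max(0, p + shift)) for p in b_pixels]

a = [100, 120, 140]   # local image A, mean brightness 120
b = [40, 60, 80]      # remote layer B, mean brightness 60 (darker)
toned_b = match_mean(b, a)   # B lifted to match A's mean brightness
```

Because the frozen picture still carries its depth information, the toning can be applied to layer B alone without disturbing the composite's layer structure.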
Fig. 7 is a schematic diagram of the structural composition of the terminal of an embodiment of the present invention. As shown in Fig. 7, the terminal includes:
a camera 701, configured to continuously capture a first image;
a memory 702, configured to store a picture processing program; and
a processor 703, configured to execute the picture processing program in the memory 702 to implement the following operations:
obtaining the first image continuously captured by the first terminal, the first image including first subject content and/or first background content;
receiving a second image continuously captured by a second terminal and sent by the second terminal, the second image including second subject content and/or second background content;
determining a first target object to be fused from the first image, the first target object being the first subject content and/or the first background content, and determining a second target object to be fused from the second image, the second target object being the second subject content and/or the second background content;
fusing the first target object and the second target object by superimposition into a third image; and
capturing the third image, and displaying the captured frozen picture on the first terminal.
In an embodiment of the present invention, before the third image is captured, the processor 703 is further configured to execute the picture processing program in the memory 702 to implement the following operation:
according to an acquired first adjustment operation, performing the following adjustments on the first target object and/or the second target object in the third image: size adjustment and position adjustment.
In an embodiment of the present invention, before the third image is captured, the processor 703 is further configured to execute the picture processing program in the memory 702 to implement the following operation:
according to an acquired second adjustment operation, performing the following adjustment on the first target object and/or the second target object in the third image: layer depth adjustment.
In an embodiment of the present invention, the processor 703 is further configured to execute the picture processing program in the memory 702 to implement the following operations:
determining the first target object to be fused from the first image based on the depth-of-field information of the first image; and
determining the second target object to be fused from the second image based on the depth-of-field information of the second image.
In an embodiment of the present invention, the terminal further includes a display 704, configured to preview the third image on the first terminal.
If the above-described modules of the embodiments of the present invention are implemented in the form of software function modules and are sold or used as independent products, they may also be stored in a computer-readable storage medium. Based on such an understanding, the technical solution of the embodiments of the present invention, in essence, or the part contributing to the prior art, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the methods described in the embodiments of the present invention. The aforementioned storage medium includes various media that can store program code, such as a USB flash disk, a removable hard disk, a read-only memory (ROM, Read Only Memory), a magnetic disk, or an optical disc. Thus, the embodiments of the present invention are not limited to any specific combination of hardware and software.
Correspondingly, an embodiment of the present invention further provides a computer storage medium in which a computer program is stored, the computer program being configured to perform the image capture method of the embodiments of the present invention.
It should be noted that, as used herein, the terms "comprise" and "include" and any variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or device that includes a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or device. In the absence of further limitation, an element defined by the phrase "including a ..." does not exclude the presence of additional identical elements in the process, method, article, or device that includes the element.
The above serial numbers of the embodiments of the present invention are for description only and do not represent the relative merits of the embodiments.
The embodiments of the present invention have been described above with reference to the accompanying drawings, but the present invention is not limited to the above specific embodiments, which are merely illustrative rather than restrictive. Under the inspiration of the present invention, those of ordinary skill in the art can derive many further forms without departing from the concept of the present invention and the scope of the claimed protection, all of which fall within the protection of the present invention.
Claims (11)
1. An image capture method, applied to a first terminal, the method comprising:
continuously capturing a first image using the first terminal, the first image including first subject content and/or first background content;
receiving a second image continuously captured by a second terminal and sent by the second terminal, the second image including second subject content and/or second background content;
determining a first target object to be fused from the first image, the first target object being the first subject content and/or the first background content, and determining a second target object to be fused from the second image, the second target object being the second subject content and/or the second background content;
fusing the first target object and the second target object by superimposition into a third image; and
capturing the third image, and displaying the captured frozen picture on the first terminal.
2. The image capture method according to claim 1, wherein before the capturing of the third image, the method further comprises:
according to an acquired first adjustment operation, performing the following adjustments on the first target object and/or the second target object in the third image: size adjustment and position adjustment.
3. The image capture method according to claim 1 or 2, wherein before the capturing of the third image, the method further comprises:
according to an acquired second adjustment operation, performing the following adjustment on the first target object and/or the second target object in the third image: layer depth adjustment.
4. The image capture method according to claim 1, wherein the determining of the first target object to be fused from the first image and of the second target object to be fused from the second image comprises:
determining the first target object to be fused from the first image based on the depth-of-field information of the first image; and
determining the second target object to be fused from the second image based on the depth-of-field information of the second image.
5. The image capture method according to any one of claims 1 to 4, wherein before the capturing of the third image, the method further comprises:
previewing the third image on the first terminal.
6. A terminal, comprising:
a camera, configured to continuously acquire a first image;
a memory, configured to store an image processing program; and
a processor, configured to execute the image processing program in the memory to perform the following operations:
acquiring the first image continuously acquired by the first terminal, the first image comprising first body content and/or first background content;
receiving a second image sent by a second terminal and continuously acquired by the second terminal, the second image comprising second body content and/or second background content;
determining a first target object to be fused from the first image, the first target object being the first body content and/or the first background content, and determining a second target object to be fused from the second image, the second target object being the second body content and/or the second background content;
superimposing and fusing the first target object and the second target object into a third image; and
capturing the third image, and displaying the captured freeze-frame picture on the first terminal.
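The "superimposing and fusing ... into a third image" step could be sketched as a simple layer composition. This is an illustrative sketch only, not the patent's method: the `fuse` name, the `None`-marks-empty convention, and the upper-layer-wins rule are assumptions; a production system would blend real pixel data.

```python
# Illustrative sketch (not from the patent): fusing two extracted target
# objects into a third image. Where both objects cover a pixel, the upper
# layer wins; where neither does, a fill value is used.

def fuse(lower, upper, fill=0):
    """Compose two same-sized object layers (None marks empty pixels)."""
    fused = []
    for low_row, up_row in zip(lower, upper):
        row = []
        for low, up in zip(low_row, up_row):
            if up is not None:        # upper layer is visible first
                row.append(up)
            elif low is not None:     # fall through to lower layer
                row.append(low)
            else:
                row.append(fill)      # neither object covers this pixel
        fused.append(row)
    return fused

first_object  = [[10, None], [30, None]]   # e.g. extracted on the first terminal
second_object = [[None, 99], [None, None]] # e.g. received from the second terminal
print(fuse(first_object, second_object))
# -> [[10, 99], [30, 0]]
```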
7. The terminal according to claim 6, wherein before the capturing of the third image, the processor is further configured to execute the image processing program in the memory to perform the following operation:
performing, according to an acquired first adjustment operation, the following adjustments on the first target object and/or the second target object in the third image: size adjustment and position adjustment.
8. The terminal according to claim 6 or 7, wherein before the capturing of the third image, the processor is further configured to execute the image processing program in the memory to perform the following operation:
performing, according to an acquired second adjustment operation, the following adjustment on the first target object and/or the second target object in the third image: layer depth adjustment.
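The "layer depth adjustment" in the claim above can be modeled as reordering the z-order of the fused layers before compositing. The sketch below is illustrative only, not the patent's implementation; the layer list, function names, and `None`-marks-empty convention are hypothetical.

```python
# Illustrative sketch (not from the patent): layer depth adjustment as a
# z-order change. Layers are composited bottom-first; moving a target object
# to a later position in the list brings it in front of the others.

def composite(layers, fill=0):
    """Flatten layers (bottom first) into one image; None marks empty pixels."""
    height, width = len(layers[0]), len(layers[0][0])
    out = [[fill] * width for _ in range(height)]
    for layer in layers:               # later layers paint over earlier ones
        for y in range(height):
            for x in range(width):
                if layer[y][x] is not None:
                    out[y][x] = layer[y][x]
    return out

def adjust_layer_depth(layers, index, new_index):
    """Move one layer to a new position in the z-order."""
    reordered = list(layers)
    reordered.insert(new_index, reordered.pop(index))
    return reordered

a = [[1, 1], [None, None]]             # e.g. the first target object
b = [[2, None], [2, None]]             # e.g. the second target object
print(composite([a, b]))               # b on top -> [[2, 1], [2, 0]]
print(composite(adjust_layer_depth([a, b], 0, 1)))  # a on top -> [[1, 1], [2, 0]]
```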
9. The terminal according to claim 6, wherein the processor is further configured to execute the image processing program in the memory to perform the following operations:
determining the first target object to be fused from the first image based on depth-of-field information of the first image; and
determining the second target object to be fused from the second image based on depth-of-field information of the second image.
10. The terminal according to any one of claims 6 to 9, wherein the terminal further comprises:
a display, configured to preview the third image on the first terminal.
11. A computer storage medium, wherein the computer storage medium stores one or more programs, and the one or more programs are executable by one or more processors to implement the method steps of any one of claims 1 to 5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710602087.3A CN107404617A (en) | 2017-07-21 | 2017-07-21 | A kind of image pickup method and terminal, computer-readable storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107404617A true CN107404617A (en) | 2017-11-28 |
Family
ID=60401193
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710602087.3A Pending CN107404617A (en) | 2017-07-21 | 2017-07-21 | A kind of image pickup method and terminal, computer-readable storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107404617A (en) |
Cited By (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108154514A (en) * | 2017-12-06 | 2018-06-12 | 广东欧珀移动通信有限公司 | Image processing method, device and equipment |
CN108198129A (en) * | 2017-12-26 | 2018-06-22 | 努比亚技术有限公司 | A kind of image combining method, terminal and computer readable storage medium |
CN108632543A (en) * | 2018-03-26 | 2018-10-09 | 广东欧珀移动通信有限公司 | Method for displaying image, device, storage medium and electronic equipment |
CN108965769A (en) * | 2018-08-28 | 2018-12-07 | 腾讯科技(深圳)有限公司 | Image display method and device |
CN109035159A (en) * | 2018-06-27 | 2018-12-18 | 努比亚技术有限公司 | A kind of image optimization processing method, mobile terminal and computer readable storage medium |
WO2020057661A1 (en) * | 2018-09-21 | 2020-03-26 | 华为技术有限公司 | Image capturing method, device, and apparatus |
CN110992256A (en) * | 2019-12-17 | 2020-04-10 | 腾讯科技(深圳)有限公司 | Image processing method, device, equipment and storage medium |
CN111263093A (en) * | 2020-01-22 | 2020-06-09 | 维沃移动通信有限公司 | Video recording method and electronic equipment |
CN112004034A (en) * | 2020-09-04 | 2020-11-27 | 北京字节跳动网络技术有限公司 | Method and device for close photographing, electronic equipment and computer readable storage medium |
CN112236980A (en) * | 2018-06-08 | 2021-01-15 | 斯纳普公司 | Generating messages for interacting with physical assets |
CN113489903A (en) * | 2021-07-02 | 2021-10-08 | 惠州Tcl移动通信有限公司 | Shooting method, shooting device, terminal equipment and storage medium |
CN114390206A (en) * | 2022-02-10 | 2022-04-22 | 维沃移动通信有限公司 | Shooting method and device and electronic equipment |
US11394676B2 (en) | 2019-03-28 | 2022-07-19 | Snap Inc. | Media content response in a messaging system |
CN116197887A (en) * | 2021-11-28 | 2023-06-02 | 梅卡曼德(北京)机器人科技有限公司 | Image data processing method, device, electronic equipment and storage medium |
WO2023109389A1 (en) * | 2021-12-15 | 2023-06-22 | Tcl通讯科技(成都)有限公司 | Image fusion method and apparatus, and computer device and computer-readable storage medium |
CN116600147A (en) * | 2022-12-29 | 2023-08-15 | 广州紫为云科技有限公司 | Method and system for remote multi-person real-time cloud group photo |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20070171478A1 (en) * | 2005-09-27 | 2007-07-26 | Oki Data Corporation | Image forming apparatus |
CN102158681A (en) * | 2011-02-16 | 2011-08-17 | 中兴通讯股份有限公司 | Method for coordinately shooting in videophone and mobile terminal |
CN105100615A (en) * | 2015-07-24 | 2015-11-25 | 青岛海信移动通信技术股份有限公司 | Image preview method, apparatus and terminal |
CN105187709A (en) * | 2015-07-28 | 2015-12-23 | 努比亚技术有限公司 | Remote photography implementing method and terminal |
CN106331529A (en) * | 2016-10-27 | 2017-01-11 | 广东小天才科技有限公司 | Image capturing method and apparatus |
Cited By (27)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108154514A (en) * | 2017-12-06 | 2018-06-12 | 广东欧珀移动通信有限公司 | Image processing method, device and equipment |
CN108154514B (en) * | 2017-12-06 | 2021-08-13 | Oppo广东移动通信有限公司 | Image processing method, device and equipment |
CN108198129A (en) * | 2017-12-26 | 2018-06-22 | 努比亚技术有限公司 | A kind of image combining method, terminal and computer readable storage medium |
CN108632543B (en) * | 2018-03-26 | 2020-07-07 | Oppo广东移动通信有限公司 | Image display method, image display device, storage medium and electronic equipment |
CN108632543A (en) * | 2018-03-26 | 2018-10-09 | 广东欧珀移动通信有限公司 | Method for displaying image, device, storage medium and electronic equipment |
US11356397B2 (en) | 2018-06-08 | 2022-06-07 | Snap Inc. | Generating interactive messages with entity assets |
US11722444B2 (en) | 2018-06-08 | 2023-08-08 | Snap Inc. | Generating interactive messages with entity assets |
CN112236980B (en) * | 2018-06-08 | 2022-09-16 | 斯纳普公司 | Generating messages for interacting with physical assets |
CN112236980A (en) * | 2018-06-08 | 2021-01-15 | 斯纳普公司 | Generating messages for interacting with physical assets |
CN109035159A (en) * | 2018-06-27 | 2018-12-18 | 努比亚技术有限公司 | A kind of image optimization processing method, mobile terminal and computer readable storage medium |
CN108965769B (en) * | 2018-08-28 | 2020-02-14 | 腾讯科技(深圳)有限公司 | Video display method and device |
CN108965769A (en) * | 2018-08-28 | 2018-12-07 | 腾讯科技(深圳)有限公司 | Image display method and device |
WO2020057661A1 (en) * | 2018-09-21 | 2020-03-26 | 华为技术有限公司 | Image capturing method, device, and apparatus |
US11218649B2 (en) | 2018-09-21 | 2022-01-04 | Huawei Technologies Co., Ltd. | Photographing method, apparatus, and device |
US11394676B2 (en) | 2019-03-28 | 2022-07-19 | Snap Inc. | Media content response in a messaging system |
CN110992256A (en) * | 2019-12-17 | 2020-04-10 | 腾讯科技(深圳)有限公司 | Image processing method, device, equipment and storage medium |
CN111263093B (en) * | 2020-01-22 | 2022-04-01 | 维沃移动通信有限公司 | Video recording method and electronic equipment |
CN111263093A (en) * | 2020-01-22 | 2020-06-09 | 维沃移动通信有限公司 | Video recording method and electronic equipment |
WO2022048651A1 (en) * | 2020-09-04 | 2022-03-10 | 北京字节跳动网络技术有限公司 | Cooperative photographing method and apparatus, electronic device, and computer-readable storage medium |
CN112004034A (en) * | 2020-09-04 | 2020-11-27 | 北京字节跳动网络技术有限公司 | Method and device for close photographing, electronic equipment and computer readable storage medium |
CN113489903A (en) * | 2021-07-02 | 2021-10-08 | 惠州Tcl移动通信有限公司 | Shooting method, shooting device, terminal equipment and storage medium |
CN116197887A (en) * | 2021-11-28 | 2023-06-02 | 梅卡曼德(北京)机器人科技有限公司 | Image data processing method, device, electronic equipment and storage medium |
CN116197887B (en) * | 2021-11-28 | 2024-01-30 | 梅卡曼德(北京)机器人科技有限公司 | Image data processing method, device, electronic equipment and storage medium for generating grabbing auxiliary image |
WO2023109389A1 (en) * | 2021-12-15 | 2023-06-22 | Tcl通讯科技(成都)有限公司 | Image fusion method and apparatus, and computer device and computer-readable storage medium |
CN114390206A (en) * | 2022-02-10 | 2022-04-22 | 维沃移动通信有限公司 | Shooting method and device and electronic equipment |
CN116600147A (en) * | 2022-12-29 | 2023-08-15 | 广州紫为云科技有限公司 | Method and system for remote multi-person real-time cloud group photo |
CN116600147B (en) * | 2022-12-29 | 2024-03-29 | 广州紫为云科技有限公司 | Method and system for remote multi-person real-time cloud group photo |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107404617A (en) | A kind of image pickup method and terminal, computer-readable storage medium | |
CN108093171A (en) | A kind of photographic method, terminal and computer readable storage medium | |
CN109729266A (en) | A kind of image capturing method, terminal and computer readable storage medium | |
CN107133939A (en) | A kind of picture synthesis method, equipment and computer-readable recording medium | |
CN107820014A (en) | A kind of image pickup method, mobile terminal and computer-readable storage medium | |
CN107317963A (en) | A kind of double-camera mobile terminal control method, mobile terminal and storage medium | |
CN108259781A (en) | image synthesizing method, terminal and computer readable storage medium | |
CN107682627A (en) | A kind of acquisition parameters method to set up, mobile terminal and computer-readable recording medium | |
CN107343064A (en) | A kind of mobile terminal of two-freedom rotating camera | |
CN108234295A (en) | Display control method, terminal and the computer readable storage medium of group's functionality controls | |
CN108055411A (en) | Flexible screen display methods, mobile terminal and computer readable storage medium | |
CN107704176A (en) | A kind of picture-adjusting method and terminal | |
CN107948360A (en) | Image pickup method, terminal and the computer-readable recording medium of flexible screen terminal | |
CN107333056A (en) | Image processing method, device and the computer-readable recording medium of moving object | |
CN107680060A (en) | A kind of image distortion correction method, terminal and computer-readable recording medium | |
CN108200269A (en) | Display screen control management method, terminal and computer readable storage medium | |
CN107124552A (en) | A kind of image pickup method, terminal and computer-readable recording medium | |
CN108055463A (en) | Image processing method, terminal and storage medium | |
CN107239205A (en) | A kind of photographic method, mobile terminal and storage medium | |
CN108055483A (en) | A kind of picture synthesis method, mobile terminal and computer readable storage medium | |
CN108184051A (en) | A kind of main body image pickup method, equipment and computer readable storage medium | |
CN107707821A (en) | Modeling method and device, bearing calibration, terminal, the storage medium of distortion parameter | |
CN107040723A (en) | A kind of imaging method based on dual camera, mobile terminal and storage medium | |
CN109672822A (en) | A kind of method for processing video frequency of mobile terminal, mobile terminal and storage medium | |
CN107483804A (en) | A kind of image pickup method, mobile terminal and computer-readable recording medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 2017-11-28 |