CN107295262A - Image processing method, mobile terminal and computer-readable storage medium - Google Patents


Info

Publication number
CN107295262A
Authority
CN
China
Prior art keywords
image
alternative objects
imaging
distance
preview
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710632757.6A
Other languages
Chinese (zh)
Other versions
CN107295262B (en)
Inventor
何世强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nubia Technology Co Ltd
Original Assignee
Nubia Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nubia Technology Co Ltd
Priority to CN201710632757.6A
Publication of CN107295262A
Application granted
Publication of CN107295262B
Legal status: Active

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 - Control of cameras or camera modules
    • H04N 23/62 - Control of parameters via user interfaces
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 - Control of cameras or camera modules
    • H04N 23/66 - Remote control of cameras or camera parts, e.g. by remote control devices

Abstract

The invention discloses an image processing method, a mobile terminal and a computer-readable storage medium. The image processing method includes: entering an image preview mode and capturing a preview image; extracting the imaging area of each alternative object in the preview image; calculating the distance between each alternative object and the image collector based on the acquisition parameters used to form the preview image; selecting a first object from the alternative objects by combining the imaging area and the distance; and focusing on the first object and capturing a first image.

Description

Image processing method, mobile terminal and computer-readable storage medium
Technical field
The present invention relates to the field of image processing, and in particular to an image processing method for a mobile terminal, a mobile terminal and a computer-readable storage medium.
Background art
With the development of image processing technology, processing such as auto-focus has appeared in intelligent image capture and generation, so that the focus position is aligned with the object the user wants to capture. However, practice has shown that existing auto-focus still suffers from inaccurate focusing.
Summary of the invention
In view of this, to solve at least one problem existing in the prior art, the embodiments of the present invention provide an image processing method for a mobile terminal, a mobile terminal and a computer-readable storage medium.
The technical solutions of the embodiments of the present invention are implemented as follows:
A first aspect of the embodiments of the present invention provides an image processing method, including:
entering an image preview mode and capturing a preview image;
extracting the imaging area of each alternative object in the preview image;
calculating the distance between each alternative object and the image collector based on the acquisition parameters used to form the preview image;
selecting a first object from the alternative objects by combining the imaging area and the distance;
focusing on the first object and capturing a first image.
Optionally, the method further includes:
dividing second objects into different grades according to the distance between their imaging positions and that of the first object, where a second object is an alternative object other than the first object;
according to the grade of each second object, blurring or removing its imaging in the first image with a blur-degree parameter adapted to that grade, to generate a second image.
Optionally, blurring or removing the imaging of the corresponding second object in the first image includes one of the following:
pixel down-sampling the imaging region of the second object in the first image to generate the second image, where the image resolution of the imaging region of the second object in the second image is lower than the image resolution of the imaging region of the first object;
using an obfuscation algorithm to shuffle the pixel values of the pixels in the imaging region of the second object in the first image, to generate the second image;
using a pixel replacement algorithm to replace the pixel values of the imaging region of the second object with the pixel values of a background object, to generate the second image, where the background object is a captured object other than the alternative objects;
generating insertion pixel values and using them to replace the pixel values of part of the pixels in the imaging region of the second object in the first image.
Optionally, selecting the first object from the alternative objects by combining the imaging area and the distance includes:
calculating the product of each alternative object's imaging area and an area weight to obtain a first reference factor;
calculating the product of each alternative object's distance to the image collector and a distance weight to obtain a second reference factor;
summing the first reference factor and the second reference factor to obtain a comprehensive reference factor for each alternative object;
ranking the comprehensive reference factors;
determining the first object based on the ranking result.
Optionally, selecting the first object from the alternative objects by combining the imaging area and the distance includes:
calculating the product of each alternative object's imaging area and its distance to the image collector to obtain a third reference factor;
ranking the third reference factors of the alternative objects;
selecting the first object from the alternative objects based on the ranking result.
A second aspect of the embodiments of the present invention provides a mobile terminal, including an image collector, a memory, a processor, and a computer program stored in the memory and executed by the processor;
the image collector is configured to capture images;
the memory is configured to store information;
the processor is connected to the image collector and the memory respectively, and is configured, through execution of the computer program, to control the image capture of the image collector and to perform the following steps:
entering an image preview mode and capturing a preview image;
extracting the imaging area of each alternative object in the preview image;
calculating the distance between each alternative object and the image collector based on the acquisition parameters used to form the preview image;
selecting a first object from the alternative objects by combining the imaging area and the distance;
focusing on the first object and capturing a first image.
Optionally, the processor is further configured to perform the following steps:
dividing second objects into different grades according to the distance between their imaging positions and that of the first object, where a second object is an alternative object other than the first object;
according to the grade of each second object, blurring or removing its imaging in the first image with a blur-degree parameter adapted to that grade, to generate a second image.
Optionally, the processor is further configured to perform the following steps:
determining attributes of the first object and/or a background object, where the background object is a graphical object in the preview image other than the alternative objects;
selecting, according to the attributes, a processing mode for blurring or removing the second objects.
A fourth aspect of the embodiments of the present invention provides a computer-readable storage medium storing a computer program; after the computer program is executed, the image processing method provided by one or more of the foregoing technical solutions can be implemented.
With the image processing method for a mobile terminal, the mobile terminal and the computer-readable storage medium provided in the embodiments of the present invention, focusing can combine the imaging area of each alternative object in the preview image with its distance to the image collector and comprehensively decide which first object needs to be focused on. Compared with the prior-art approach of automatically deciding the first object to focus on from the single parameter of imaging area, more accurate focusing can be achieved, which improves the accuracy of auto-focus and user satisfaction.
Brief description of the drawings
Fig. 1 is a hardware structure diagram of an optional mobile terminal for implementing the present invention;
Fig. 2 is an architecture diagram of a communications network system for the mobile terminal shown in Fig. 1;
Fig. 3 is a flow diagram of a first image processing method provided by an embodiment of the present invention;
Fig. 4 is a flow diagram of a second image processing method provided by an embodiment of the present invention;
Fig. 5 is a schematic diagram of a first image provided by an embodiment of the present invention;
Fig. 6 is a schematic diagram of the second image obtained after removing the second object from the first image of Fig. 5, as provided by an embodiment of the present invention;
Fig. 7 is a structural diagram of a mobile terminal provided by an embodiment of the present invention;
Fig. 8 is a structural diagram of another mobile terminal provided by an embodiment of the present invention.
Embodiment
It should be understood that the specific embodiments described here are only used to explain the present invention and are not intended to limit it.
In the following description, suffixes such as "module", "part" or "unit" used to denote elements are only intended to facilitate the description of the present invention and have no specific meaning in themselves; therefore "module", "part" and "unit" may be used interchangeably.
Terminals may be implemented in various forms. For example, the terminals described in the present invention may include mobile terminals such as mobile phones, tablet computers, notebook computers, palmtop computers, personal digital assistants (Personal Digital Assistant, PDA), portable media players (Portable Media Player, PMP), navigation devices, wearable devices, smart bracelets and pedometers, as well as fixed terminals such as digital TVs and desktop computers.
The following description takes a mobile terminal as an example. Those skilled in the art will understand that, apart from elements specifically used for mobile purposes, the construction according to the embodiments of the present invention can also be applied to fixed-type terminals.
Referring to Fig. 1, which is a hardware structure diagram of a mobile terminal implementing the embodiments of the present invention, the mobile terminal 100 may include an RF (Radio Frequency) unit 101, a WiFi module 102, an audio output unit 103, an A/V (audio/video) input unit 104, a sensor 105, a display unit 106, a user input unit 107, an interface unit 108, a memory 109, a processor 110, a power supply 111 and other parts. Those skilled in the art will understand that the mobile terminal structure shown in Fig. 1 does not limit the mobile terminal, which may include more or fewer parts than illustrated, combine certain parts, or arrange the parts differently.
The parts of the mobile terminal are introduced in detail below with reference to Fig. 1:
The radio frequency unit 101 may be used for receiving and sending signals during messaging or a call; specifically, it receives downlink information from a base station and passes it to the processor 110 for processing, and sends uplink data to the base station. Generally, the radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low-noise amplifier, a duplexer and the like. In addition, the radio frequency unit 101 may also communicate with networks and other devices via wireless communication, which may use any communication standard or protocol, including but not limited to GSM (Global System for Mobile communications), GPRS (General Packet Radio Service), CDMA2000 (Code Division Multiple Access 2000), WCDMA (Wideband Code Division Multiple Access), TD-SCDMA (Time Division-Synchronous Code Division Multiple Access), FDD-LTE (Frequency Division Duplexing-Long Term Evolution) and TDD-LTE (Time Division Duplexing-Long Term Evolution).
WiFi is a short-range wireless transmission technology. Through the WiFi module 102, the mobile terminal can help the user send and receive e-mail, browse web pages and access streaming media, providing the user with wireless broadband Internet access. Although Fig. 1 shows the WiFi module 102, it is not an essential part of the mobile terminal and may be omitted as needed within the scope that does not change the essence of the invention.
When the mobile terminal 100 is in a call-signal reception mode, a call mode, a recording mode, a speech recognition mode, a broadcast reception mode or a similar mode, the audio output unit 103 can convert audio data received by the radio frequency unit 101 or the WiFi module 102, or stored in the memory 109, into an audio signal and output it as sound. Moreover, the audio output unit 103 can also provide audio output related to a specific function performed by the mobile terminal 100 (for example, a call-signal reception sound or a message reception sound). The audio output unit 103 may include a loudspeaker, a buzzer and the like.
The A/V input unit 104 is used to receive audio or video signals. The A/V input unit 104 may include a graphics processor (Graphics Processing Unit, GPU) 1041 and a microphone 1042. The graphics processor 1041 processes the image data of still pictures or video obtained by an image capture apparatus (such as a camera) in a video capture mode or an image capture mode. The processed image frames may be displayed on the display unit 106, stored in the memory 109 (or another storage medium), or transmitted via the radio frequency unit 101 or the WiFi module 102. The microphone 1042 can receive sound (audio data) in operating modes such as a phone call mode, a recording mode or a speech recognition mode, and can process such sound into audio data. In the phone call mode, the processed audio (voice) data can be converted into a format that can be sent to a mobile communication base station via the radio frequency unit 101. The microphone 1042 can implement various types of noise cancellation (or suppression) algorithms to eliminate (or suppress) noise or interference generated while receiving and sending audio signals.
The mobile terminal 100 also includes at least one sensor 105, such as a light sensor, a motion sensor and other sensors. Specifically, the light sensor includes an ambient-light sensor and a proximity sensor: the ambient-light sensor can adjust the brightness of the display panel 1061 according to the ambient light, and the proximity sensor can turn off the display panel 1061 and/or the backlight when the mobile terminal 100 is moved to the ear. As one kind of motion sensor, an accelerometer can detect the magnitude of acceleration in all directions (usually three axes) and can detect the magnitude and direction of gravity when stationary; it can be used for applications that recognize the phone's posture (such as landscape/portrait switching, related games and magnetometer pose calibration) and for vibration-recognition functions (such as a pedometer or tap detection). The phone may also be configured with a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor and other sensors, which will not be described further here.
The display unit 106 is used to display information input by the user or provided to the user. The display unit 106 may include a display panel 1061, which may be configured in the form of a liquid crystal display (Liquid Crystal Display, LCD), an organic light-emitting diode (Organic Light-Emitting Diode, OLED) display, or the like.
The user input unit 107 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the mobile terminal. Specifically, the user input unit 107 may include a touch panel 1071 and other input devices 1072. The touch panel 1071, also called a touch screen, collects touch operations by the user on or near it (such as operations on or near the touch panel 1071 using a finger, a stylus or any other suitable object or accessory) and drives the corresponding connection apparatus according to a preset program. The touch panel 1071 may include two parts: a touch detection apparatus and a touch controller. The touch detection apparatus detects the user's touch position and the signal produced by the touch operation and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection apparatus, converts it into contact coordinates, sends them to the processor 110, and receives and executes commands sent by the processor 110. In addition, the touch panel 1071 may be implemented using various types such as resistive, capacitive, infrared and surface acoustic wave. Besides the touch panel 1071, the user input unit 107 may also include other input devices 1072, which may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys and a power key), a trackball, a mouse and a joystick, without specific limitation here.
Further, the touch panel 1071 may cover the display panel 1061. When the touch panel 1071 detects a touch operation on or near it, it transmits the operation to the processor 110 to determine the type of the touch event, and the processor 110 then provides corresponding visual output on the display panel 1061 according to the type of the touch event. Although in Fig. 1 the touch panel 1071 and the display panel 1061 are two independent parts implementing the input and output functions of the mobile terminal, in some embodiments the touch panel 1071 and the display panel 1061 may be integrated to implement the input and output functions of the mobile terminal, without specific limitation here.
The interface unit 108 serves as an interface through which at least one external device can connect with the mobile terminal 100. For example, external devices may include a wired or wireless headset port, an external power (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device with an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port and so on. The interface unit 108 may be used to receive input (for example, data information or electric power) from external devices and transfer the received input to one or more elements within the mobile terminal 100, or to transmit data between the mobile terminal 100 and external devices.
The memory 109 may be used to store software programs and various data. The memory 109 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, an application required by at least one function (such as a sound playback function or an image playback function) and the like, and the data storage area may store data created according to the use of the phone (such as audio data and a phonebook) and the like. In addition, the memory 109 may include a high-speed random access memory, and may also include a non-volatile memory such as at least one magnetic disk storage device, a flash memory device or another solid-state storage device.
The processor 110 is the control center of the mobile terminal. It uses various interfaces and lines to connect all parts of the whole mobile terminal, and performs the various functions of the mobile terminal and processes data by running or executing the software programs and/or modules stored in the memory 109 and calling the data stored in the memory 109, thereby monitoring the mobile terminal as a whole. The processor 110 may include one or more processing units; preferably, the processor 110 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, the user interface and applications, and the modem processor mainly handles wireless communication. It will be understood that the modem processor may also not be integrated into the processor 110.
The mobile terminal 100 may also include a power supply 111 (such as a battery) that supplies power to all parts. Preferably, the power supply 111 may be logically connected to the processor 110 through a power management system, so that functions such as charging, discharging and power-consumption management are handled through the power management system.
Although not shown in Fig. 1, the mobile terminal 100 may also include a Bluetooth module and the like, which will not be described further here.
To facilitate understanding of the embodiments of the present invention, the communications network system on which the mobile terminal of the present invention is based is described below.
Referring to Fig. 2, Fig. 2 is an architecture diagram of a communications network system provided by an embodiment of the present invention. The communications network system is an LTE system of universal mobile telecommunications technology, and the LTE system includes, connected in communication in sequence, a UE (User Equipment) 201, an E-UTRAN (Evolved UMTS Terrestrial Radio Access Network) 202, an EPC (Evolved Packet Core) 203 and an operator IP service 204.
Specifically, the UE 201 may be the above-described terminal 100, which will not be repeated here.
The E-UTRAN 202 includes an eNodeB 2021, other eNodeBs 2022 and so on. The eNodeB 2021 can be connected with the other eNodeBs 2022 through a backhaul (such as an X2 interface), the eNodeB 2021 is connected to the EPC 203, and the eNodeB 2021 can provide the UE 201 with access to the EPC 203.
The EPC 203 may include an MME (Mobility Management Entity) 2031, an HSS (Home Subscriber Server) 2032, other MMEs 2033, an SGW (Serving Gateway) 2034, a PGW (PDN Gateway) 2035, a PCRF (Policy and Charging Rules Function) 2036 and so on. The MME 2031 is a control node that handles signaling between the UE 201 and the EPC 203 and provides bearer and connection management. The HSS 2032 provides registers for managing functions such as the home location register (not shown) and stores user-specific information about service characteristics, data rates and the like. All user data can be transmitted through the SGW 2034; the PGW 2035 can provide IP address allocation for the UE 201 and other functions; and the PCRF 2036 is the policy and charging control decision point for service data flows and IP bearer resources, which selects and provides available policy and charging control decisions for the policy and charging enforcement function unit (not shown).
The IP service 204 may include the Internet, an intranet, an IMS (IP Multimedia Subsystem) or other IP services.
Although the above description takes an LTE system as an example, those skilled in the art should understand that the present invention is not only applicable to LTE systems but also to other wireless communication systems, such as GSM, CDMA2000, WCDMA, TD-SCDMA and future new network systems, without limitation here.
Based on the above mobile terminal hardware structure and communication system, the embodiments of the method of the present invention are proposed.
As shown in Fig. 3, this embodiment provides an image processing method, including:
Step S110: entering an image preview mode and capturing a preview image;
Step S120: extracting the imaging area of each alternative object in the preview image;
Step S130: calculating the distance between each alternative object and the image collector based on the acquisition parameters used to form the preview image;
Step S140: selecting a first object from the alternative objects by combining the imaging area and the distance;
Step S150: focusing on the first object and capturing a first image.
The image processing method provided by this embodiment can be applied in the aforementioned mobile terminal. The mobile terminal may be a mobile phone, a tablet computer or a wearable device that carries an image collector such as a camera, or a dedicated camera, etc.
In this embodiment the mobile terminal first enters the preview mode. The preview mode is the preparation mode of image capture, in which the various preparations for capture are completed: for example, the camera is opened, capture of incoming light begins, and the preview image is generated from the captured light. When an instruction confirming capture is received, the corresponding preview image can be stored as the final captured image in a particular storage region, which is usually a non-cache region.
Image modes generally include a front capture mode and a rear capture mode. The front capture mode may be a capture mode in which the capture face of the camera and the display of the mobile terminal are on the same side, so that the currently captured preview image can simultaneously be displayed on the screen for the user facing it to check. A typical application scenario of the front capture mode is the selfie: user A holds the phone to take a selfie, and the camera and the display screen both face the captured user A; that is, the capture module and the display screen face the object to be captured at the same time.
In the rear capture mode, the camera faces away from the display screen (they are on different sides), and the captured object cannot see the captured image currently shown on the screen.
In this embodiment the imaging area of each alternative object in the preview image can be extracted. The alternative objects may be faces: in step S120, faces can be identified by face recognition or the like, and the imaging area of each face in the preview image can then be calculated from parameters such as the facial contour. The imaging area can be expressed as an area parameter or as a number of pixels. Generally, the closer a captured object is to an image collector such as a camera, the larger its imaging area in the preview image. Besides faces, the alternative objects in this embodiment may also be other captured objects, such as the imaging of a whole person. For example, when a racing car is being captured, the alternative object may be the imaging of the racing car in the preview image.
In short, in step S120 of this embodiment the imaging area of an alternative object can be extracted by means such as contour extraction and recognition.
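As a concrete illustration of step S120, the sketch below estimates each candidate face's imaging area as the pixel count of its detected bounding box. This is only a minimal example and assumes OpenCV's bundled Haar face detector; the patent does not prescribe a particular detector, and a contour or segmentation mask could replace the rectangle for a tighter estimate.

```python
import cv2

def face_imaging_areas(preview_bgr,
                       cascade_file="haarcascade_frontalface_default.xml"):
    """Detect candidate faces in a preview frame and return, for each one,
    its bounding box together with an imaging-area estimate in pixels."""
    gray = cv2.cvtColor(preview_bgr, cv2.COLOR_BGR2GRAY)
    detector = cv2.CascadeClassifier(cv2.data.haarcascades + cascade_file)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    # The rectangle area stands in for the imaging area; a contour or
    # segmentation mask would give a tighter pixel count.
    return [((x, y, w, h), int(w) * int(h)) for (x, y, w, h) in faces]
```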
At the same time, in this embodiment the distance between each alternative object and the image collector can be calculated based on the acquisition parameters of the preview image and the imaging situation of the alternative object in the preview image. The acquisition parameters may include at least one of the focal length, the image distance and the object distance, and based on the imaging principle the distance between each alternative object and the camera can easily be calculated.
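The text does not give the exact formula, but under the common pinhole-camera approximation the distance can be recovered from the focal length and the imaged size of an object of roughly known physical size (for faces, a typical head height). A hedged sketch, with purely illustrative numbers:

```python
def estimate_distance_mm(focal_length_px, real_size_mm, imaged_size_px):
    """Pinhole-camera approximation: an object of physical size H that spans
    h pixels on the sensor lies roughly at distance = f_px * H / h."""
    if imaged_size_px <= 0:
        raise ValueError("imaged size must be positive")
    return focal_length_px * real_size_mm / imaged_size_px

# Example (assumed numbers): a face about 180 mm tall imaged over 300 px
# with an effective focal length of 2800 px sits roughly 1.7 m away.
# estimate_distance_mm(2800, 180, 300) -> 1680.0
```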
It is worth noting that the alternative objects may be a certain class of objects in the preview image rather than all captured objects, which reduces the processing load of imaging-area extraction and distance calculation. For example, a preview image may include the imaging of natural scenery, the imaging of people, and the imaging of man-made scenery such as buildings; here the alternative objects may be only the people, or, in some situations, the man-made scenery. In short, the alternative objects here need not be all captured objects in the preview image.
In some cases, attributes of the captured object itself, such as height or face size, can make its imaging in the preview image smaller. For example, when an adult and a child are captured at the same time, the child's face is smaller than the adult's, which may make the imaging area of the child's face smaller than that of an adult who is not the intended subject of the capture. To solve this kind of focusing problem caused by the captured object's own attributes, this embodiment also takes the distance into account when determining which object is the first object that needs to be focused on. During image capture the intended subject is usually relatively close to the camera, so in this embodiment the two parameters of distance and imaging area are combined to pick out the first object for focusing from the alternative objects.
In step S150 the first object can be focused on and the preview image re-captured to form the first captured image. Focusing on the first object may include setting the focal plane of the image collector based on the distance between the first object and the camera. This ensures that the first object lies in the in-focus region, thereby improving the accuracy of auto-focus and, in turn, user satisfaction.
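Setting the focal plane from the estimated distance can be illustrated with the thin-lens relation 1/f = 1/u + 1/v. This is a textbook simplification offered as a sketch only; it is not necessarily how a real voice-coil focus module is driven.

```python
def lens_to_sensor_distance_mm(focal_length_mm, object_distance_mm):
    """Thin-lens relation 1/f = 1/u + 1/v solved for the image distance v:
    how far the lens must sit from the sensor so the focal plane passes
    through an object at distance u."""
    f, u = focal_length_mm, object_distance_mm
    if u <= f:
        raise ValueError("object distance must exceed the focal length")
    return f * u / (u - f)

# Example (assumed numbers): a 4 mm lens focused on a subject 1680 mm away
# needs an image distance of about 4.0095 mm.
```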
As a further improvement of this embodiment, as shown in Fig. 4, the method also includes:
Step S160: blurring or removing the second objects, i.e. the alternative objects other than the first object, to generate a second image.
In order to further highlight the first object, the captured objects other than the first object can be treated as second objects and blurred. The blurring in this embodiment blurs the background of the subject, so that the second objects are de-emphasized and the first object is further highlighted. This reduces the impact on image quality when, for example, a pedestrian strays into the frame during capture.
Specifically, the method also includes:
dividing the second objects into different grades according to the distance between their imaging positions and that of the first object;
and blurring or removing the second objects other than the first object among the alternative objects to generate the second image includes:
blurring the corresponding second object in the first image with a blur-degree parameter adapted to its grade, to generate the second image.
In this embodiment, in order to avoid the unnatural image transitions that would result from the first object being displayed sharply in focus while the second objects in the first image are excessively blurred, the second objects can be divided into different grades according to their imaging areas and distances.
In some embodiments, the imagings of different captured objects occupy different positions in the preview image, so in this embodiment the second objects can be divided into different grades based on the distance between each second object's imaging position and the first object's imaging position in the preview image.
When a second object is blurred, a degree of blur corresponding to its grade can be applied. Generally, the degree of blur grows with the distance between the second object's imaging position and the first object's imaging position: an imaging closer to the first object's imaging is blurred less and therefore stays sharper, while an imaging farther from it is blurred more and is therefore less sharp.
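A minimal sketch of this grading rule: the farther a second object's imaging position lies from the first object's, the higher its grade and therefore the stronger the blur applied to it. The step size and the number of grades are illustrative assumptions, not values from the patent.

```python
def blur_grade(second_xy, first_xy, grade_step_px=120, max_grade=3):
    """Grade a second object by the pixel distance between its imaging
    position and the first object's; a higher grade means a stronger blur."""
    dx = second_xy[0] - first_xy[0]
    dy = second_xy[1] - first_xy[1]
    distance = (dx * dx + dy * dy) ** 0.5
    return min(int(distance // grade_step_px), max_grade)

# blur_grade((900, 400), (520, 300)) -> 3 (far from the subject, strong blur)
# blur_grade((560, 330), (520, 300)) -> 0 (kept almost sharp)
```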
In this embodiment a second object may be any captured object other than the first object, or an alternative object other than the first object; that is, in some embodiments the second objects and the first object belong to the same class of captured objects.
For example, if a preview image contains both imagings of people and imagings of scenery and the alternative objects are people, the second objects may be the people other than the person captured as the subject.
There are many ways to blur the second objects; several optional ways are presented below.
Optional way one:
Step S160 may include: pixel down-sampling the imaging region of the second object in the first image to generate the second image, where the image resolution of the imaging region of the second object in the second image is lower than the image resolution of the imaging region of the first object.
Generally, the image resolution is the same at every position of a captured image, the image resolution being the number of pixels per unit area. In this embodiment the imaging region of the second object can be down-sampled to reduce the image resolution of that region, thereby reducing the sharpness of the second object and blurring the second objects that interfere with the first object.
Optional way two:
Step S160 may include:
using an obfuscation algorithm to shuffle the pixel values of the pixels in the imaging region of the second object in the first image, to generate the second image.
In this embodiment an obfuscation algorithm is used to shuffle the pixel values of the pixels in the imaging region of the second object. For example, if the imaging region of a second object includes pixel A and pixel B, exchanging the pixel values of pixel A and pixel B distorts or blurs the imaging of that second object, reducing its sharpness and therefore its interference with the imaging of the first object. Besides exchanging pixel values between two pixels, the pixel value of pixel A may, for example, be assigned to pixel B while pixel B's original value is assigned to pixel C. In short, by scrambling the pixel values, the sharp imaging of the second object is destroyed, so the second object is blurred and the imaging of the first object is highlighted.
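A sketch of optional way two with NumPy: the pixel values inside the second object's rectangular imaging region are randomly permuted, destroying its sharp structure while leaving the rest of the first image untouched. A colour (H x W x 3) image is assumed.

```python
import numpy as np

def shuffle_region(image, box, rng=None):
    """Obfuscate a second object by randomly permuting the pixel values
    inside its imaging region (an H x W x 3 array is assumed)."""
    if rng is None:
        rng = np.random.default_rng()
    x, y, w, h = box
    patch = image[y:y + h, x:x + w]
    flat = patch.reshape(-1, patch.shape[-1])
    image[y:y + h, x:x + w] = rng.permutation(flat).reshape(patch.shape)
    return image
```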
Optional way three:
Step S160 may include:
using a pixel replacement algorithm to replace the pixel values of the imaging region of the second object with the pixel values of a background object, to generate the second image, where the background object is a captured object other than the alternative objects.
The second object is the object that needs to be blurred, and the background object can be a captured object whose imaging does not interfere with the first object. In this embodiment, replacing the pixel values of the imaging region of the second object with the pixel values of the background object is equivalent to expanding the background object within the first image and removing the second object, which clearly highlights the first object even more.
In a concrete implementation, the background object can be an object of a predefined type, for example natural scenery such as the sky, the sea or a lawn. For instance, if the first image has the sky as its background and a passer-by enters the frame during capture, the pixel values of the passer-by's imaging region can be replaced with the pixel values of the sky, thereby removing the passer-by.
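A sketch of optional way three: the second object's imaging region is overwritten with a representative colour taken from pixels already known to be background (sky, sea, lawn, and so on). How the background pixels are identified and summarised (here a per-channel median) is an assumption; the patent only requires that background pixel values replace the second object's.

```python
import numpy as np

def remove_with_background(image, box, background_mask):
    """Remove a second object by filling its imaging region with the
    per-channel median colour of the pixels flagged as background."""
    fill = np.median(image[background_mask], axis=0)   # e.g. a typical sky colour
    x, y, w, h = box
    image[y:y + h, x:x + w] = fill.astype(image.dtype)
    return image
```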
Optional way four:
Step S160 may include:
generating insertion pixel values, and using them to replace the pixel values of part of the pixels in the imaging region of the second object in the first image.
In this way, some pixel values can be generated at random, or generated from the dominant hue of the preview image, for example with an interpolation algorithm; the pixels of the second object's imaging region whose values need to be replaced are then determined according to a rule and replaced with the generated insertion pixel values, thereby blurring the second object. In some embodiments the pixels whose values are to be replaced can also be picked out at random by a random algorithm.
The dominant hue can be the dominant hue of the background object, etc. For example, if the current background is mainly green, that is, green pixels are the most numerous in the background object's imaging region, green pixel values of different shades are generated and used to replace pixels of the second object's imaging region.
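A sketch of optional way four: insertion pixel values are generated as shades of the dominant hue and written over a randomly chosen subset of the region's pixels. The fraction replaced and the amount of jitter are illustrative assumptions.

```python
import numpy as np

def insert_pixel_values(image, box, dominant_bgr, fraction=0.5, rng=None):
    """Blur a second object by replacing part of its pixels with values
    generated around the dominant colour (e.g. shades of green grass)."""
    if rng is None:
        rng = np.random.default_rng()
    x, y, w, h = box
    patch = image[y:y + h, x:x + w].reshape(-1, 3).astype(np.int32)
    picks = rng.random(patch.shape[0]) < fraction            # pixels to replace
    jitter = rng.integers(-30, 31, size=(int(picks.sum()), 3))
    patch[picks] = np.clip(np.asarray(dominant_bgr) + jitter, 0, 255)
    image[y:y + h, x:x + w] = patch.reshape(h, w, 3).astype(image.dtype)
    return image
```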
In short, there are many ways to blur or remove the second objects in this embodiment, and they are not limited to any one of the above.
Whichever way of blurring or removing the second objects is adopted, the method may also include:
determining attributes of the first object and/or the background object;
selecting, according to the attributes, the processing mode for blurring or removing the second objects.
For example, when the background object is an object of a predefined type, such as the sky, the sea or a lawn, the pixel replacement algorithm can be used to remove the second objects.
For another example, when the background objects are of many types, with both natural scenery and man-made scenery and each type in turn including many kinds of scenery, the down-sampling way can be chosen to blur the second objects by processing the image resolution of the second objects.
For yet another example, if the imaging region of the first object and the imaging region of a second object are far apart, processing that second object's imaging region with the pixel replacement algorithm, the obfuscation algorithm or the like to generate the second image may be preferable.
In short, which processing of the second objects is specifically used in this embodiment to generate the second image can be determined by combining the attributes of the first object and/or the background object.
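A sketch of this attribute-driven selection: which of the four treatments is applied follows from the background type and from how far the second object's imaging lies from the first object's. The categories and threshold are illustrative assumptions, not rules stated in the patent.

```python
def choose_processing_mode(background_type, imaging_gap_px):
    """Pick one of the four treatments sketched above from the attributes
    of the background object and the layout of the first and second objects."""
    if background_type in ("sky", "sea", "lawn"):
        return "replace_with_background"   # uniform scene: removal looks natural
    if imaging_gap_px > 200:
        return "shuffle_or_replace"        # far from the subject: heavier treatment
    return "downsample"                    # mixed scenery: gentle resolution drop
```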
In some embodiments, step S140 may include:
calculating the product of each alternative object's imaging area and an area weight to obtain a first reference factor;
calculating the product of each alternative object's distance to the image collector and a distance weight to obtain a second reference factor;
summing the first reference factor and the second reference factor to obtain a comprehensive reference factor for each alternative object;
ranking the comprehensive reference factors;
determining the first object based on the ranking result.
In this embodiment, the imaging area is multiplied by the area weight and the distance is multiplied by the distance weight; the sum of these two products is then taken as the comprehensive reference factor, and the first object is selected based on the comprehensive reference factor.
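A sketch of this variant of step S140. The weights, and the decision to invert the normalized distance so that a large, nearby candidate scores highest, are assumptions; the patent only states that the weighted area term and the weighted distance term are summed into a comprehensive reference factor that is then ranked.

```python
def pick_first_object(candidates, area_weight=0.6, distance_weight=0.4):
    """candidates: list of (object_id, imaging_area_px, distance_mm).
    Returns the id ranked first plus the full ranking."""
    areas = [a for _, a, _ in candidates]
    dists = [d for _, _, d in candidates]

    def normalise(v, lo, hi):                 # normalisation to [0, 1], see below
        return 0.5 if hi == lo else (v - lo) / (hi - lo)

    scored = [(oid,
               area_weight * normalise(a, min(areas), max(areas)) +
               distance_weight * (1.0 - normalise(d, min(dists), max(dists))))
              for oid, a, d in candidates]
    scored.sort(key=lambda item: item[1], reverse=True)
    return scored[0][0], scored
```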
In other embodiments, step S140 may include:
calculating the product of each alternative object's imaging area and its distance to the image collector to obtain a third reference factor;
ranking the third reference factors of the alternative objects;
selecting the first object from the alternative objects based on the ranking result.
In this embodiment the imaging area and the distance are multiplied directly to obtain the third reference factor, the third reference factors are ranked directly, and the first object is obtained from the ranking result.
In some embodiments, for convenience of calculation, the imaging area and distance of each alternative object can first be normalized, and the comprehensive reference factor or the third reference factor can then be calculated from the normalized results, which reduces the computational complexity caused by a large amount of floating-point arithmetic.
In some embodiments, if the imaging area is positively correlated with the comprehensive reference factor or the third reference factor and the distance is negatively correlated with it, then, when the ranking is from large to small, the one or more alternative objects ranked near the front are selected as the first object; for example 1, 2 or 3 alternative objects are selected as the first object and the rest are second objects. When the ranking is from small to large, the one or more alternative objects ranked near the back are selected as the first object.
In other embodiments, if the imaging area is negatively correlated with the comprehensive reference factor or the third reference factor and the distance is positively correlated with it, then, when the ranking is from large to small, the one or more alternative objects ranked near the back are selected as the first object, and when the ranking is from small to large, the one or more alternative objects ranked near the front are selected as the first object.
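A sketch of the product variant (the third reference factor). As the two paragraphs above note, which end of the ranking yields the first object depends on how the factor correlates with the imaging area and the distance, so the selection end is left as a parameter here rather than fixed.

```python
def rank_by_third_factor(candidates, pick_largest=True):
    """candidates: list of (object_id, imaging_area_px, distance_mm).
    The third reference factor is the product of imaging area and distance;
    candidates are ranked by it and the first object is taken from the
    chosen end of the ranking."""
    ranked = sorted(((oid, area * dist) for oid, area, dist in candidates),
                    key=lambda item: item[1], reverse=pick_largest)
    return ranked[0][0], ranked
```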
As shown in Fig. 7, this embodiment provides a mobile terminal, including:
a preview unit 310, configured to enter the image preview mode and capture a preview image;
an extraction unit 320, configured to extract the imaging area of each alternative object in the preview image;
a computing unit 330, configured to calculate the distance between each alternative object and the image collector based on the acquisition parameters used to form the preview image;
a selecting unit 340, configured to select a first object from the alternative objects by combining the imaging area and the distance;
a collecting unit 350, configured to focus on the first object and capture a first image.
The mobile terminal provided by this embodiment may be any of the aforementioned mobile terminals.
The preview unit 310 and the collecting unit 350 may correspond to the image collector and can be used for image preview and for generating the final image.
In this embodiment, the extraction unit 320, the computing unit 330 and the selecting unit 340 may correspond to a processor. The processor may be a central processing unit, a microprocessor, a digital signal processor, a programmable array, an application processor or an application-specific integrated circuit, and the operations of the extraction, computing and selecting units can be implemented by executable code such as a computer program, thereby achieving accurate focusing for image capture.
Optionally, the mobile terminal also includes:
a division unit, configured to divide the second objects into different grades according to the distance between their imaging positions and that of the first object, where a second object is an alternative object other than the first object;
a blurring unit, configured to blur or remove the imaging of the corresponding second object in the first image according to the grade of the second object, using a blur-degree parameter adapted to that grade, to generate the second image.
In this embodiment the division unit and the blurring unit may likewise correspond to a processor or a processing circuit. Through the grading of the second objects, a low degree of blur can be applied close to the first object and a high degree of blur far from the first object, ensuring a natural transition in the image and thus improving image quality.
Further, the blurring unit is specifically configured to perform one of the following: pixel down-sampling the imaging region of the second object in the first image to generate the second image, where the image resolution of the imaging region of the second object in the second image is lower than the image resolution of the imaging region of the first object; using an obfuscation algorithm to shuffle the pixel values of the pixels in the imaging region of the second object in the first image, to generate the second image; using a pixel replacement algorithm to replace the pixel values of the imaging region of the second object with the pixel values of a background object, to generate the second image, where the background object is a captured object other than the alternative objects; or generating insertion pixel values and using them to replace the pixel values of part of the pixels in the imaging region of the second object in the first image.
In short, there are many ways in which the mobile terminal of this embodiment can blur or remove the second objects, and they are not limited to any one of the above.
In some embodiments, the selecting unit 340 is specifically configured to: calculate the product of each alternative object's imaging area and an area weight to obtain a first reference factor; calculate the product of each alternative object's distance to the image collector and a distance weight to obtain a second reference factor; sum the first reference factor and the second reference factor to obtain a comprehensive reference factor for each alternative object; rank the comprehensive reference factors; and determine the first object based on the ranking result.
In other embodiments, the selecting unit 340 may also be specifically configured to: calculate the product of each alternative object's imaging area and its distance to the image collector to obtain a third reference factor; rank the third reference factors of the alternative objects; and select the first object from the alternative objects based on the ranking result.
As shown in Fig. 8, this embodiment also provides a mobile terminal, including an image collector 410, a memory 420, a processor 430, and a computer program stored in the memory 420 and executed by the processor 430;
the image collector 410 is configured to capture images;
the memory 420 is configured to store information;
the processor 430 is connected to the image collector 410 and the memory 420 respectively, and is configured, through execution of the computer program, to control the image capture of the image collector 410 and to perform the image processing method provided by one or more of the foregoing technical solutions, for example at least one of the image processing methods shown in Fig. 3 and Fig. 4.
The image collector 410 may include a camera or another sensor capable of image capture, and can capture the preview image and ultimately generate the first image, the second image and so on.
The memory 420 may include various types of storage media capable of storing information. The memory 420 includes at least in part a non-transitory storage medium that can be used to store the computer program.
The processor may be a central processing unit, a microprocessor, a digital signal processor, an application processor, a programmable array, an application-specific integrated circuit or the like, and through execution of the computer program can implement accurate focusing and the blurring or removal of the second objects so as to highlight the first object.
For example, through execution of the computer program the processor 430 can perform at least the following steps:
entering an image preview mode and capturing a preview image;
extracting the imaging area of each alternative object in the preview image;
calculating the distance between each alternative object and the image collector based on the acquisition parameters used to form the preview image;
selecting a first object from the alternative objects by combining the imaging area and the distance;
focusing on the first object and capturing a first image.
In some embodiments the processor 430 is further configured to perform the following steps: dividing second objects into different grades according to the distance between their imaging positions and that of the first object, where a second object is an alternative object other than the first object; and, according to the grade of each second object, blurring or removing its imaging in the first image with a blur-degree parameter adapted to that grade, to generate the second image.
In other embodiments the processor 430 is further configured to perform the following steps: determining attributes of the first object and/or a background object, where the background object is a graphical object in the preview image other than the alternative objects; and selecting, according to the attributes, the processing mode for blurring or removing the second objects.
The embodiments of the present invention also provide a computer-readable storage medium storing a computer program; after the computer program is executed, the image processing method provided by one or more of the foregoing technical solutions can be implemented, for example at least one of the image processing methods shown in Fig. 3 and Fig. 4.
The aforementioned storage medium includes various media that can store program code, such as a removable storage device, a read-only memory (ROM), a magnetic disk or an optical disc, and may optionally be a non-transitory storage medium.
It should be understood that references throughout the specification to "one embodiment" or "an embodiment" mean that a particular feature, structure or characteristic related to the embodiment is included in at least one embodiment of the present invention. Therefore, "in one embodiment" or "in an embodiment" appearing throughout the specification does not necessarily refer to the same embodiment. Moreover, these particular features, structures or characteristics can be combined in any suitable manner in one or more embodiments. It should be understood that, in the various embodiments of the present invention, the sequence numbers of the above processes do not imply an order of execution; the order of execution of each process should be determined by its function and internal logic, and should not limit the implementation of the embodiments of the present invention in any way. The sequence numbers of the embodiments of the present invention are for description only and do not represent the merits of the embodiments.
It should be noted that, as used herein, the terms "comprising", "including" or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article or apparatus that includes a series of elements includes not only those elements but also other elements not expressly listed, or also includes elements inherent to such a process, method, article or apparatus. Without further restrictions, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article or apparatus that includes that element.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The apparatus embodiments described above are only illustrative; for example, the division of the units is only a division of logical functions, and other divisions are possible in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the coupling, direct coupling or communication connection between the components shown or discussed may be indirect coupling or communication connection of devices or units through some interfaces, and may be electrical, mechanical or in other forms.
The units described above as separate components may or may not be physically separate, and components displayed as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the various embodiments of the present invention may all be integrated in one processing unit, or each unit may serve as a separate unit, or two or more units may be integrated in one unit; the integrated unit may be implemented in the form of hardware, or in the form of hardware plus software functional units.
Alternatively, when the above integrated unit of the present invention is implemented in the form of a software functional module and sold or used as an independent product, it may also be stored in a computer-readable storage medium. Based on this understanding, the technical solutions of the embodiments of the present invention, in essence or in the part contributing to the prior art, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to execute all or part of the methods described in the embodiments of the present invention. The aforementioned storage medium includes various media that can store program code, such as a removable storage device, a ROM, a magnetic disk or an optical disc.
The above are only specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any changes or substitutions that can readily be conceived by those familiar with the technical field within the technical scope disclosed by the present invention shall be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be based on the protection scope of the claims.

Claims (10)

1. An image processing method, characterized in that the method comprises:
entering an image preview mode and acquiring a preview image;
extracting the imaging area of each alternative object in the preview image;
calculating, based on the acquisition parameters used to form the preview image, the distance between each alternative object and the image collector;
selecting a first object from the alternative objects by combining the imaging area and the distance;
focusing on the first object and acquiring a first image.
2. The method according to claim 1, characterized in that the method further comprises:
dividing second objects into different grades according to the distance between the imaging position of each second object and the imaging position of the first object, wherein a second object is an alternative object other than the first object;
blurring or removing, with a blurring-degree coefficient adapted to the grade of each second object, the imaging of the corresponding second object in the first image to generate a second image.
3. The method according to claim 2, characterized in that blurring or removing the imaging of the corresponding second object in the first image comprises one of the following:
down-sampling the pixels of the imaging region of the second object in the first image to generate the second image, wherein the image resolution of the imaging region of the second object in the second image is lower than the image resolution of the imaging region of the first object;
scrambling, using an obfuscation algorithm, the pixel values of the pixels in the imaging region of the second object in the first image to generate the second image;
replacing, using a pixel replacement algorithm, the pixel values of the imaging region of the second object with the pixel values of a background object to generate the second image, wherein the background object is an acquired object other than the alternative objects;
generating interpolated pixel values and replacing, with the interpolated pixel values, the pixel values of part of the pixels in the imaging region of the second object in the first image.
4. The method according to claim 2, characterized in that the method further comprises:
determining an attribute of the first object and/or of a background object, wherein the background object is a graphic object in the preview image other than the alternative objects;
selecting, according to the attribute, the processing mode of blurring or of removing the second object.
5. The method according to any one of claims 1 to 4, characterized in that selecting the first object from the alternative objects by combining the imaging area and the distance comprises:
calculating the product of the imaging area of each alternative object and an area weight to obtain a first reference factor;
calculating the product of the distance between each alternative object and the image collector and a distance weight to obtain a second reference factor;
summing the first reference factor and the second reference factor to obtain a comprehensive reference factor for each alternative object;
sorting the comprehensive reference factors;
determining the first object based on the sorting result.
6. The method according to any one of claims 1 to 4, characterized in that selecting the first object from the alternative objects by combining the imaging area and the distance comprises:
calculating the product of the imaging area of each alternative object and the distance between that alternative object and the device to obtain a third reference factor;
sorting the third reference factors of the alternative objects;
selecting the first object from the alternative objects based on the sorting result.
7. A mobile terminal, characterized by comprising: an image collector, a memory, a processor, and a computer program stored on the memory and executed by the processor;
the image collector is configured to acquire images;
the memory is configured to store information;
the processor is connected to the image collector and to the memory, and is configured to control, by executing the computer program, the image acquisition of the image collector and to perform the following steps:
entering an image preview mode and acquiring a preview image;
extracting the imaging area of each alternative object in the preview image;
calculating, based on the acquisition parameters used to form the preview image, the distance between each alternative object and the image collector;
selecting a first object from the alternative objects by combining the imaging area and the distance;
focusing on the first object and acquiring a first image.
8. The mobile terminal according to claim 7, characterized in that the processor is further configured to perform the following steps:
dividing second objects into different grades according to the distance between the imaging position of each second object and the imaging position of the first object, wherein a second object is an alternative object other than the first object;
blurring or removing, with a blurring-degree coefficient adapted to the grade of each second object, the imaging of the corresponding second object in the first image to generate a second image.
9. The mobile terminal according to claim 8, characterized in that the processor is further configured to perform the following steps:
determining an attribute of the first object and/or of a background object, wherein the background object is a graphic object in the preview image other than the alternative objects;
selecting, according to the attribute, the processing mode of blurring or of removing the second object.
10. A computer storage medium, characterized in that the computer storage medium stores a computer program; after the computer program is executed, the image processing method provided by any one of claims 1 to 6 can be implemented.
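
Claims 1 and 7 leave open how the distance between an alternative object and the image collector is derived from the acquisition parameters of the preview image. One conventional possibility, offered here only as a hedged illustration and not as the patented method, is the thin-lens relation applied to the focus setting at which the object appears sharp; the function name and the example focal-length and image-distance values below are assumptions introduced for the sketch.

```python
def object_distance_from_focus(focal_length_mm: float, image_distance_mm: float) -> float:
    """Estimate the object distance u from the thin-lens equation 1/f = 1/u + 1/v.

    f = focal_length_mm   (lens focal length recorded in the acquisition parameters)
    v = image_distance_mm (lens-to-sensor distance at which the object is in focus)
    This is only one textbook way to map acquisition parameters to a distance;
    the patent does not specify which calculation is used.
    """
    if image_distance_mm <= focal_length_mm:
        raise ValueError("image distance must exceed the focal length for a real object")
    u_mm = (focal_length_mm * image_distance_mm) / (image_distance_mm - focal_length_mm)
    return u_mm / 1000.0  # metres


# Illustrative numbers: a 4.2 mm phone lens focused with the sensor 4.25 mm
# behind the lens places the subject at roughly 0.36 m.
print(object_distance_from_focus(4.2, 4.25))
```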
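
The ranking described in claims 5 and 6 can be pictured with a short sketch. The snippet below is a minimal illustration under assumptions of its own, not the patented implementation: the `area_weight` and `distance_weight` values, the sign convention (closer objects score higher), and the choice of the top-ranked candidate as the first object are all assumptions added for clarity.

```python
from dataclasses import dataclass


@dataclass
class AlternativeObject:
    name: str
    imaging_area: float  # area of the object's imaging region in the preview, in pixels
    distance: float      # estimated distance to the image collector, in metres


def select_first_object(objects, area_weight=1.0, distance_weight=-2000.0):
    """Claim 5 style: weighted sum of imaging area and distance, then sort.

    first reference factor  = imaging_area * area_weight
    second reference factor = distance     * distance_weight
    comprehensive factor    = first + second
    A negative distance weight (assumed here) makes nearer objects rank higher.
    """
    def comprehensive(obj):
        return obj.imaging_area * area_weight + obj.distance * distance_weight

    return max(objects, key=comprehensive)


def select_first_object_by_product(objects):
    """Claim 6 style: rank by the product of imaging area and distance.

    The claim does not say whether the largest or smallest product wins;
    the sketch assumes the largest.
    """
    return max(objects, key=lambda obj: obj.imaging_area * obj.distance)


candidates = [
    AlternativeObject("person", imaging_area=12000, distance=1.5),
    AlternativeObject("dog", imaging_area=4000, distance=0.8),
    AlternativeObject("tree", imaging_area=30000, distance=12.0),
]
print(select_first_object(candidates).name)  # "person" under the assumed weights
```

In practice the two weights would have to be tuned together (and probably normalized to the preview resolution) so that a large but distant background object cannot outrank a small nearby subject; the claim itself fixes only the weight-multiply, sum, and sort structure.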
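
Claims 2 and 3 grade the remaining (second) objects by how far their imaging positions lie from the first object's, then blur each one with a strength matched to its grade, down-sampling being one of the listed techniques. The sketch below is an illustrative reading of that idea, assuming NumPy images and axis-aligned bounding boxes for the imaging regions; the grade thresholds and the mapping from grade to down-sampling factor are assumptions made for the example.

```python
import numpy as np


def grade_second_object(offset_px: float, thresholds=(50, 150, 300)) -> int:
    """Grade a second object by the pixel distance between its imaging position
    and the first object's imaging position (claim 2). Thresholds are illustrative."""
    for grade, limit in enumerate(thresholds):
        if offset_px <= limit:
            return grade
    return len(thresholds)


def blur_by_downsampling(image: np.ndarray, box, grade: int) -> np.ndarray:
    """Blur one second object's imaging region by pixel down-sampling (claim 3):
    shrink the region by a grade-dependent factor, then stretch it back, so its
    resolution ends up lower than that of the first object's region."""
    x0, y0, x1, y1 = box
    region = image[y0:y1, x0:x1]
    factor = 2 ** (grade + 1)  # assumed mapping: higher grade -> coarser result
    h, w = region.shape[:2]
    # Down-sample by striding, then repeat the surviving pixels back up to size.
    small = region[::factor, ::factor]
    restored = np.repeat(np.repeat(small, factor, axis=0), factor, axis=1)[:h, :w]
    image[y0:y0 + restored.shape[0], x0:x0 + restored.shape[1]] = restored
    return image
```

A production implementation would more likely use a proper resize or Gaussian blur, and would also cover the other options listed in claim 3 (pixel scrambling, background replacement, interpolated pixel values); only the down-sampling branch is sketched here.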
CN201710632757.6A 2017-07-28 2017-07-28 Image processing method, mobile terminal and computer storage medium Active CN107295262B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710632757.6A CN107295262B (en) 2017-07-28 2017-07-28 Image processing method, mobile terminal and computer storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710632757.6A CN107295262B (en) 2017-07-28 2017-07-28 Image processing method, mobile terminal and computer storage medium

Publications (2)

Publication Number Publication Date
CN107295262A true CN107295262A (en) 2017-10-24
CN107295262B CN107295262B (en) 2021-03-26

Family

ID=60102496

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710632757.6A Active CN107295262B (en) 2017-07-28 2017-07-28 Image processing method, mobile terminal and computer storage medium

Country Status (1)

Country Link
CN (1) CN107295262B (en)

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008026804A (en) * 2006-07-25 2008-02-07 Canon Inc Automatic focus detection device, imaging apparatus and control method
CN101426093A (en) * 2007-10-29 2009-05-06 株式会社理光 Image processing device, image processing method, and computer program product
CN103200361A (en) * 2012-01-06 2013-07-10 株式会社日立制作所 Video signal processing apparatus
CN104023175A (en) * 2014-04-25 2014-09-03 深圳英飞拓科技股份有限公司 Automatic focusing method and device
CN103984186A (en) * 2014-05-04 2014-08-13 深圳市阿格斯科技有限公司 Optical zooming vidicon and automatic focusing control method and device thereof
CN104363378A (en) * 2014-11-28 2015-02-18 广东欧珀移动通信有限公司 Camera focusing method, camera focusing device and terminal
CN104994298A (en) * 2015-07-14 2015-10-21 厦门美图之家科技有限公司 Focusing triggering method and system capable of intelligently selecting focusing mode
CN105812652A (en) * 2015-07-29 2016-07-27 维沃移动通信有限公司 Terminal focusing method and terminal
CN105933589A (en) * 2016-06-28 2016-09-07 广东欧珀移动通信有限公司 Image processing method and terminal
CN106973164A (en) * 2017-03-30 2017-07-21 维沃移动通信有限公司 Take pictures weakening method and the mobile terminal of a kind of mobile terminal

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108259770A (en) * 2018-03-30 2018-07-06 广东欧珀移动通信有限公司 Image processing method, device, storage medium and electronic equipment
CN108259770B (en) * 2018-03-30 2020-06-02 Oppo广东移动通信有限公司 Image processing method, image processing device, storage medium and electronic equipment
CN110062157A (en) * 2019-04-04 2019-07-26 北京字节跳动网络技术有限公司 Render method, apparatus, electronic equipment and the computer readable storage medium of image
CN110062157B (en) * 2019-04-04 2021-09-17 北京字节跳动网络技术有限公司 Method and device for rendering image, electronic equipment and computer readable storage medium
CN113126111A (en) * 2019-12-30 2021-07-16 Oppo广东移动通信有限公司 Time-of-flight module and electronic equipment
CN113126111B (en) * 2019-12-30 2024-02-09 Oppo广东移动通信有限公司 Time-of-flight module and electronic device

Also Published As

Publication number Publication date
CN107295262B (en) 2021-03-26

Similar Documents

Publication Publication Date Title
CN107959795B (en) Information acquisition method, information acquisition equipment and computer readable storage medium
CN107317963A (en) A kind of double-camera mobile terminal control method, mobile terminal and storage medium
CN109167910A (en) focusing method, mobile terminal and computer readable storage medium
CN107133939A (en) A kind of picture synthesis method, equipment and computer-readable recording medium
CN107194963A (en) A kind of dual camera image processing method and terminal
CN107730462A (en) A kind of image processing method, terminal and computer-readable recording medium
CN107566753A (en) Method, photo taking and mobile terminal
CN108965710A (en) Method, photo taking, device and computer readable storage medium
CN107680060A (en) A kind of image distortion correction method, terminal and computer-readable recording medium
CN107566731A (en) A kind of focusing method and terminal, computer-readable storage medium
CN109639996A (en) High dynamic scene imaging method, mobile terminal and computer readable storage medium
CN107613208A (en) Adjusting method and terminal, the computer-readable storage medium of a kind of focusing area
CN107357500A (en) A kind of picture-adjusting method, terminal and storage medium
CN107707821A (en) Modeling method and device, bearing calibration, terminal, the storage medium of distortion parameter
CN107749947A (en) Photographic method, mobile terminal and computer-readable recording medium
CN107172349A (en) Mobile terminal image pickup method, mobile terminal and computer-readable recording medium
CN107295262A (en) Image processing method, mobile terminal and computer-readable storage medium
CN107888829A (en) Focusing method, mobile terminal and the storage medium of mobile terminal
CN107613200A (en) A kind of focus adjustment method, equipment and computer-readable recording medium
CN109005354A (en) Image pickup method, mobile terminal and computer readable storage medium
CN108566515A (en) It takes pictures processing method, mobile terminal and storage medium
CN107948531A (en) A kind of image processing method, terminal and computer-readable recording medium
CN108200350A (en) Autozoom image pickup method, mobile terminal and computer readable storage medium
CN107395971A (en) A kind of image-pickup method, equipment and computer-readable recording medium
CN107493431A (en) A kind of image taking synthetic method, terminal and computer-readable recording medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant