CN107948514A - Image blurring processing method, apparatus and mobile device - Google Patents

Image blurring processing method, apparatus and mobile device

Info

Publication number
CN107948514A
CN107948514A (application number CN201711240101.6A)
Authority
CN
China
Prior art keywords
depth
image
blurring
field
target image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201711240101.6A
Other languages
Chinese (zh)
Other versions
CN107948514B (en)
Inventor
谭国辉
杜成鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201711240101.6A
Publication of CN107948514A
Priority to PCT/CN2018/117195 (WO2019105297A1)
Application granted
Publication of CN107948514B
Legal status: Active

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60: Control of cameras or camera modules
    • H04N 23/62: Control of parameters via user interfaces
    • H04N 23/80: Camera processing pipelines; Components thereof

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Studio Devices (AREA)
  • Image Processing (AREA)

Abstract

The present application proposes an image blurring processing method, an apparatus and a mobile device. The image blurring processing method is applied to a mobile device that includes a camera assembly and includes: determining the current motion speed of the mobile device; determining a current depth-of-field calculation frame rate according to the current motion speed; judging, according to the depth-of-field calculation frame rate, whether the current preview image is a target image; if so, obtaining the depth information of the background region of the target image; and blurring the current preview image according to the depth information. The responsiveness of the blurring effect is thereby improved, improving the user experience.

Description

Image blurring processing method, apparatus and mobile device
Technical field
This application relates to the technical field of image processing, and in particular to an image blurring processing method, an apparatus and a mobile device.
Background technology
With the development of science and technology, imaging devices such as cameras and video cameras are widely used in daily life, work and study, and play an increasingly important role in people's lives. When shooting an image with an imaging device, blurring the background region of the photograph is a commonly used technique for making the photographed subject stand out.
In general, when taking a picture, the mobile device that carries the imaging device, or the photographed subject, may move. The blurring process requires a depth-of-field calculation, and this calculation takes a long time. As a result, when the depth of field has to be recalculated because the mobile device or the subject has moved, the processing speed of the processor may not keep up with the moving speed of the mobile device or of the subject, so the depth of field cannot be determined in time, the blurring effect follows the scene poorly, and the user experience is poor.
Summary of the application
The application provides an image blurring processing method, an apparatus and a mobile device. A current depth-of-field calculation frame rate is determined according to the current motion speed of the mobile device, and when the current preview image is determined, according to that frame rate, to be a target image, the current preview image is blurred according to the depth information of the background region of the target image. This improves the responsiveness of the blurring effect and improves the user experience.
An embodiment of the application provides an image blurring processing method applied to a mobile device that includes a camera assembly, including: determining the current motion speed of the mobile device; determining a current depth-of-field calculation frame rate according to the current motion speed; judging, according to the depth-of-field calculation frame rate, whether the current preview image is a target image; if so, obtaining the depth information of the background region of the target image; and blurring the current preview image according to the depth information.
Another embodiment of the application provides an image blurring processing apparatus applied to a mobile device that includes a camera assembly, including: a first determining module for determining the current motion speed of the mobile device; a second determining module for determining a current depth-of-field calculation frame rate according to the current motion speed; a judging module for judging, according to the depth-of-field calculation frame rate, whether the current preview image is a target image; a first acquiring module for obtaining the depth information of the background region of the target image when the current preview image is a target image; and a first processing module for blurring the current preview image according to the depth information.
A further embodiment of the application provides a mobile device including a memory, a processor, and a computer program stored in the memory and executable on the processor. When the processor executes the program, the image blurring processing method described in the first aspect is implemented.
Yet another embodiment of the application provides a computer-readable storage medium on which a computer program is stored. When the computer program is executed by a processor, the image blurring processing method described in the above embodiments of the application is implemented.
The technical solutions provided by the embodiments of the application may include the following beneficial effects:
A current depth-of-field calculation frame rate is determined according to the current motion speed of the mobile device, and when the current preview image is determined, according to that frame rate, to be a target image, the current preview image is blurred according to the depth information of the background region of the target image. This improves the responsiveness of the blurring effect and improves the user experience.
Brief description of the drawings
The above and/or additional aspects and advantages of the application will become apparent and easy to understand from the following description of the embodiments with reference to the accompanying drawings, in which:
Fig. 1 is a flow chart of an image blurring processing method according to one embodiment of the application;
Fig. 2 is a flow chart of an image blurring processing method according to another embodiment of the application;
Figs. 2A-2B are illustrative diagrams of an image blurring processing method according to one embodiment of the application;
Fig. 3 is a flow chart of an image blurring processing method according to yet another embodiment of the application;
Fig. 4 is a structural diagram of an image blurring processing apparatus according to one embodiment of the application; and
Fig. 5 is a schematic diagram of an image processing circuit according to one embodiment of the application.
Detailed description of the embodiments
The embodiments of the application are described in detail below, and examples of the embodiments are shown in the accompanying drawings, in which the same or similar reference numerals throughout denote the same or similar elements, or elements having the same or similar functions. The embodiments described below with reference to the drawings are exemplary and are intended to explain the application, and should not be construed as limiting the application.
The embodiments of the application are directed to the problem in the prior art that, when taking a picture, the mobile device that carries the imaging device, or the photographed subject, may move; since the blurring process requires a depth-of-field calculation and that calculation takes a long time, when the movement of the mobile device or of the subject requires the depth of field to be recalculated, the processing speed of the processor may not keep up with the moving speed of the mobile device or the subject, so the depth of field cannot be determined in time, the blurring effect follows the scene poorly, and the user experience is poor. To address this, an image blurring processing method is proposed.
In the image blurring processing method provided by the embodiments of the application, a current depth-of-field calculation frame rate is determined according to the current motion speed of the mobile device, and when the current preview image is determined, according to that frame rate, to be a target image, the current preview image is blurred according to the depth information of the background region of the target image, which improves the responsiveness of the blurring effect and improves the user experience.
The image blurring processing method, the apparatus and the mobile device of the embodiments of the application are described below with reference to the accompanying drawings.
Fig. 1 is a flow chart of an image blurring processing method according to one embodiment of the application.
As shown in Fig. 1, the image blurring processing method is applied to a mobile device that includes a camera assembly, and the method includes:
Step 101: determine the current motion speed of the mobile device.
The executing entity of the image blurring processing method provided by the embodiments of the application is the image blurring processing apparatus provided by the embodiments of the application. This apparatus can be configured in a mobile device that includes a camera assembly, so as to blur the captured images. There are many types of mobile device, such as a mobile phone, a tablet computer or a notebook computer.
Specifically, when a blurring instruction is received, the current motion speed of the mobile device can be determined by sensors arranged in the mobile device, such as a gyroscope, an accelerometer or a velocity sensor.
Step 102: determine a current depth-of-field calculation frame rate according to the current motion speed.
Step 103: judge, according to the depth-of-field calculation frame rate, whether the current preview image is a target image.
It should be understood that while the mobile device is moving, the camera module keeps capturing images, i.e. a sequence of frames is captured. In the prior art, when the captured images are blurred, a depth-of-field calculation has to be performed on every frame; since the depth-of-field calculation takes a long time, while the mobile device is moving the processing speed of the processor may not keep up with the moving speed of the mobile device or of the photographed subject, so the depth of field cannot be determined in time and the blurring effect follows the scene poorly.
To solve this problem, in the embodiments of the application the depth-of-field calculation is not performed on every frame captured by the camera assembly. Instead, a current depth-of-field calculation frame rate is determined according to the current motion speed of the mobile device, target images are extracted from the captured frames according to that frame rate for the depth-of-field calculation, and frames other than the target images directly reuse the depth-of-field calculation result of the most recently extracted target image. This reduces the time spent on the depth-of-field calculation, improves the responsiveness of the blurring effect and improves the user experience.
The depth-of-field calculation frame rate can refer to the frame interval at which target images are extracted from the captured frames. For example, if the depth-of-field calculation frame rate is 2 and the first extracted target image is the 1st frame, the second extracted target image is the 4th frame.
Specifically, a correspondence between the motion speed of the mobile device and the depth-of-field calculation frame rate can be preset, so that after the current motion speed of the mobile device has been determined, the current depth-of-field calculation frame rate can be determined according to the preset correspondence.
It should be noted that, when setting the correspondence between the motion speed of the mobile device and the depth-of-field calculation frame rate, it can be set according to the principle that the faster the motion speed of the mobile device, the larger the corresponding depth-of-field calculation frame rate, i.e. the depth-of-field calculation frame rate is directly proportional to the motion speed of the mobile device. A minimal sketch of this frame-selection logic is given below.
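The following Python sketch illustrates one possible form of the preset correspondence and the target-frame selection described above; it is a minimal sketch under assumed threshold and frame-rate values, not values taken from this application.

```python
# Hypothetical sketch: choose a depth-of-field calculation frame rate from the
# device motion speed, then decide whether a preview frame is a target image.

# Assumed lookup table: faster motion -> larger frame interval between
# depth-of-field calculations (all values are illustrative).
SPEED_TO_FRAME_RATE = [
    (0.2, 1),   # speed < 0.2 m/s -> skip 1 frame between target images
    (0.5, 2),   # speed < 0.5 m/s -> skip 2 frames between target images
    (1.0, 4),   # speed < 1.0 m/s -> skip 4 frames between target images
]
MAX_FRAME_RATE = 8  # fallback for very fast motion


def depth_calc_frame_rate(speed_mps: float) -> int:
    """Map the current motion speed to a depth-of-field calculation frame rate."""
    for threshold, frame_rate in SPEED_TO_FRAME_RATE:
        if speed_mps < threshold:
            return frame_rate
    return MAX_FRAME_RATE


def is_target_image(frame_index: int, frame_rate: int) -> bool:
    """A frame is a target image when it falls on the sampling interval.

    With frame_rate == 2 and 0-based indices, frames 0, 3, 6, ... are target
    images, matching the example in the text (the 1st frame, then the 4th).
    """
    return frame_index % (frame_rate + 1) == 0
```

Only target frames trigger the expensive depth-of-field calculation; the other frames reuse the depth result of the most recent target frame, as described above.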
Step 104: if so, obtain the depth information of the background region of the target image.
Step 105: blur the current preview image according to the depth information.
The background region refers to the regions other than the region of the photographed subject.
Specifically, after target images have been extracted from the captured frames according to the current depth-of-field calculation frame rate, if the current preview image is a target image, the depth information of the background region of the target image can be obtained and a blur level can be determined according to the depth information, so that the current preview image is blurred according to the blur level.
The process of determining the depth information of the background region of the target image will be described in the following embodiments and is not described here.
It should be noted that the background region may contain different people or objects, and the depth data corresponding to different people or objects may differ, so the depth information of the background region may be a single value or a range of values. When the depth information of the background region is a single value, the value can be obtained by averaging the depth data of the background region, or by taking the median of the depth data of the background region.
Specifically, different depth ranges can be preset to correspond to different blur levels, so that after the depth information of the background region of the target image has been determined, the corresponding blur level can be determined according to the determined depth information and the preset correspondence, and the current preview image is blurred accordingly.
In a specific implementation, a Gaussian kernel function can be used to blur the current preview image. The Gaussian kernel can be regarded as a weight matrix; by using the weight matrix to compute Gaussian blur values for the pixels in the current preview image, the current preview image can be blurred. When computing the Gaussian blur value of a pixel, the pixel to be computed is taken as the center pixel, the pixel values of the pixels surrounding the center pixel are weighted with the weight matrix, and the Gaussian blur value of the pixel to be computed is obtained.
In a specific implementation, computing Gaussian blur values for the same pixel with different weight matrices gives different degrees of blurring. The weight matrix is related to the variance of the Gaussian kernel function: the larger the variance, the wider the radial extent of the Gaussian kernel, the better the smoothing effect, i.e. the higher the degree of blurring. Therefore, a correspondence between blur levels and variances of the Gaussian kernel function can be preset, so that after the blur level of the target image has been determined, the variance of the Gaussian kernel function, and hence the weight matrix, can be determined according to the preset correspondence, and the current preview image is blurred to the corresponding degree.
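A minimal sketch of this level-to-variance mapping is shown below, using OpenCV's Gaussian blur as the weight-matrix convolution; the level table and sigma values are illustrative assumptions, not values from this application.

```python
import cv2
import numpy as np

# Assumed correspondence between blur level and the Gaussian kernel's sigma:
# the larger the sigma (variance), the stronger the blur. Values are illustrative.
LEVEL_TO_SIGMA = {1: 2.0, 2: 5.0, 3: 9.0}


def gaussian_blur_image(image: np.ndarray, blur_level: int) -> np.ndarray:
    """Blur an image with a Gaussian weight matrix chosen by blur level."""
    sigma = LEVEL_TO_SIGMA[blur_level]
    # An odd kernel size of roughly 6*sigma covers most of the Gaussian support.
    ksize = int(6 * sigma) | 1
    return cv2.GaussianBlur(image, (ksize, ksize), sigma)
```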
It should be noted that, when the background region of the current preview image is blurred, since the background region may contain different people or objects, the gradient of the depth information of the background region may be large; for example, the depth data of one region of the background may be very large while the depth data of another region is very small. If the whole background region is blurred with the same blur level, the blurring effect may look unnatural. Therefore, in the embodiments of the application, the background region can also be divided into different regions, and different blur levels can be applied to different regions.
Specifically, the background region can be divided into multiple regions according to the depth information of the background region, the span of the depth range of each region increasing as the depth at which the region is located increases, so that different regions are blurred to different degrees, the blurring effect of the image is more natural and closer to an optical focus effect, and the user's visual experience is improved. A sketch of such depth-banded blurring follows.
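The sketch below illustrates one way to split the background into depth bands and blur farther bands more strongly; the band edges, their widening spans and the per-band sigmas are illustrative assumptions.

```python
import cv2
import numpy as np


def blur_background_by_depth(image: np.ndarray,
                             depth: np.ndarray,
                             background_mask: np.ndarray) -> np.ndarray:
    """Blur the background in depth bands, with stronger blur for farther bands.

    depth is a per-pixel depth map aligned with image, and background_mask is a
    boolean mask of the background region. Band edges widen with depth so that
    nearer bands span a narrower depth range, as described in the text.
    """
    band_edges = [0.0, 1.0, 2.5, 5.0, np.inf]   # meters, widening spans
    band_sigmas = [2.0, 4.0, 7.0, 11.0]         # blur strength per band

    result = image.copy()
    for lo, hi, sigma in zip(band_edges[:-1], band_edges[1:], band_sigmas):
        band_mask = background_mask & (depth >= lo) & (depth < hi)
        if not band_mask.any():
            continue
        ksize = int(6 * sigma) | 1
        blurred = cv2.GaussianBlur(image, (ksize, ksize), sigma)
        result[band_mask] = blurred[band_mask]
    return result
```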
In one possible implementation, after target images have been extracted from the captured frames according to the current depth-of-field calculation frame rate, if the current preview image is not a target image, the current preview image can be blurred in one of the following modes.
Mode one
Blur the current preview image according to the depth information of the target image before the current preview image.
Specifically, when the current preview image is not a target image, the blur level can be determined according to the depth information of the target image before the current preview image, and the current preview image is blurred accordingly.
As an example, assume the depth-of-field calculation frame rate is 2, so that according to this frame rate the 1st frame, the 4th frame and so on are extracted as target images. When the current preview image is the 1st frame, since the 1st frame is a target image, the blur level can be determined according to the depth information of the background region of the 1st frame, and the 1st frame is blurred accordingly. When the current preview image is the 2nd frame, since the 2nd frame is not a target image, the 2nd frame can be blurred with the blur level determined from the depth information of the background region of the preceding target image, i.e. the 1st frame.
Mode two
Determine a first blur level according to the current motion speed, and blur the current preview image according to the first blur level.
Specifically, a correspondence between motion speed and blur level can be preset, so that when the current preview image is not a target image, the first blur level can be determined according to the current motion speed and the preset correspondence, and the current preview image is blurred according to the first blur level.
It should be noted that, when setting the correspondence between the motion speed of the mobile device and the blur level, it can be set according to the principle that the faster the motion speed of the mobile device, the lower the degree of blurring of the corresponding blur level, i.e. the degree of blurring of the blur level is inversely proportional to the motion speed of the mobile device.
As an example, assume that a motion speed of the mobile device below 0.5 meters per second (m/s) is preset to correspond to blur level A, and a motion speed greater than or equal to 0.5 m/s to blur level B. The depth-of-field calculation frame rate is 2, so that according to this frame rate the 1st frame, the 4th frame and so on are extracted as target images. When the current preview image is the 2nd frame and the current motion speed is 0.4 m/s, since the 2nd frame is not a target image, blur level A can be determined according to the current motion speed and the preset correspondence, and the 2nd frame is blurred according to blur level A.
It should be noted that, in the embodiments of the application, when the current preview image is not a target image, it is also possible to determine a blur level according to the depth information of the target image before the current preview image, determine the first blur level according to the current motion speed, and then blur the current preview image according to the lower of the two blur levels. A minimal sketch of this fallback logic for non-target frames is given below.
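The sketch below illustrates these fallback modes for non-target frames; the 0.5 m/s threshold comes from the example above, while the numeric level values are illustrative assumptions.

```python
# Hypothetical fallback for frames that are not target images: reuse the blur
# level derived from the last target frame's depth (mode one), derive a blur
# level from the current motion speed (mode two), or take the weaker of the two.

SPEED_THRESHOLD_MPS = 0.5  # from the example: < 0.5 m/s -> level A, else level B
BLUR_LEVEL_A = 3           # stronger blur (illustrative value)
BLUR_LEVEL_B = 1           # weaker blur (illustrative value)


def blur_level_from_speed(speed_mps: float) -> int:
    """Mode two: faster motion -> weaker blur."""
    return BLUR_LEVEL_A if speed_mps < SPEED_THRESHOLD_MPS else BLUR_LEVEL_B


def blur_level_for_frame(is_target: bool,
                         last_target_level: int,
                         speed_mps: float) -> int:
    """For non-target frames, take the weaker of the mode one and mode two levels."""
    if is_target:
        return last_target_level
    return min(last_target_level, blur_level_from_speed(speed_mps))
```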
It should be understood that in the prior art, when the captured images are blurred, a depth-of-field calculation is performed on every frame, which consumes considerable power. In the embodiments of the application, a current depth-of-field calculation frame rate is determined according to the current motion speed of the mobile device, and target images are extracted from the captured frames according to that frame rate for the depth-of-field calculation, which reduces the power consumption of the blurring process.
In the image blurring processing method provided by the embodiments of the application, a current depth-of-field calculation frame rate is determined according to the current motion speed of the mobile device, and when the current preview image is determined, according to that frame rate, to be a target image, the current preview image is blurred according to the depth information of the background region of the target image, which improves the responsiveness of the blurring effect and improves the user experience.
It can be seen from the above analysis that the current depth-of-field calculation frame rate can be determined according to the current motion speed of the mobile device, so that when the current preview image is determined, according to that frame rate, to be a target image, the current preview image is blurred according to the depth information of the background region of the target image. In one possible implementation, the depth-of-field calculation processing speed of the mobile device can also be taken into account when determining the current depth-of-field calculation frame rate. The image blurring processing method provided by the embodiments of the application is further described below with reference to Fig. 2.
Fig. 2 is a flow chart of an image blurring processing method according to another embodiment of the application.
As shown in Fig. 2, the image blurring processing method includes:
Step 201: determine the current motion speed of the mobile device.
Specifically, the current motion speed of the mobile device can be determined by sensors arranged in the mobile device, such as a gyroscope, an accelerometer or a velocity sensor.
Step 202: determine an initial depth-of-field calculation frame rate according to the depth-of-field calculation processing speed of the mobile device.
Specifically, different depth-of-field calculation processing speeds can be preset to correspond to different initial depth-of-field calculation frame rates, so that after the depth-of-field calculation processing speed of the mobile device has been determined, the initial depth-of-field calculation frame rate can be determined according to the determined processing speed and the preset correspondence.
It should be noted that the depth-of-field calculation processing speed of the mobile device can be determined from the processor performance when the mobile device leaves the factory; alternatively, since the processing speed of the processor may differ depending on the software running on the mobile device, the depth-of-field calculation processing speed of the mobile device can also be determined according to the usage state of the mobile device, which is not restricted here.
Step 203: adjust the initial depth-of-field calculation frame rate according to the current motion speed to obtain the current depth-of-field calculation frame rate.
Step 204: judge, according to the current depth-of-field calculation frame rate, whether the current preview image is a target image.
Specifically, the initial depth-of-field calculation frame rate can be adjusted in the following manner, i.e. step 203 can be replaced by the following steps:
Step 203a: judge whether the current motion speed of the mobile device is greater than a threshold; if so, perform step 203b; otherwise, perform step 203c.
Step 203b: increase the initial depth-of-field calculation frame rate.
Step 203c: use the initial depth-of-field calculation frame rate as the current depth-of-field calculation frame rate.
The threshold can be set as required.
Specifically, if the current motion speed of the mobile device is greater than the threshold, the initial depth-of-field calculation frame rate can be increased; if the current motion speed of the mobile device is less than or equal to the threshold, the initial depth-of-field calculation frame rate is used as the current depth-of-field calculation frame rate.
In a specific implementation, if the current motion speed of the mobile device is greater than the threshold, the degree by which the initial depth-of-field calculation frame rate is increased can be determined according to the difference between the current motion speed of the mobile device and the threshold: the larger the difference, the larger the increase of the initial depth-of-field calculation frame rate; the smaller the difference, the smaller the increase.
By adjusting the initial depth-of-field calculation frame rate according to the current motion speed of the mobile device to determine the current depth-of-field calculation frame rate, the current depth-of-field calculation frame rate becomes larger the faster the mobile device is currently moving. A sketch of this adjustment follows.
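The following sketch illustrates steps 202 and 203: derive an initial frame rate from the device's depth-calculation processing speed, then raise it in proportion to how far the motion speed exceeds the threshold. All constants are illustrative assumptions, not values from this application.

```python
# Hypothetical adjustment of the depth-of-field calculation frame rate
# (steps 202-203); the constants are illustrative only.

SPEED_THRESHOLD_MPS = 0.5   # threshold of step 203a (assumed value)
INCREASE_PER_MPS = 4.0      # frame-rate increase per m/s above the threshold


def initial_frame_rate(depth_calc_fps: float) -> int:
    """Step 202: the slower the depth processing, the larger the initial rate."""
    if depth_calc_fps >= 30:
        return 1
    if depth_calc_fps >= 15:
        return 2
    return 4


def current_frame_rate(initial_rate: int, speed_mps: float) -> int:
    """Step 203: increase the rate in proportion to (speed - threshold)."""
    if speed_mps <= SPEED_THRESHOLD_MPS:
        return initial_rate                                       # step 203c
    excess = speed_mps - SPEED_THRESHOLD_MPS
    return initial_rate + int(round(excess * INCREASE_PER_MPS))   # step 203b
```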
Step 205: if so, obtain the depth information of the background region of the target image.
Step 206: blur the current preview image according to the depth information.
Specifically, the depth information of the background region of the target image can be determined in the following way, i.e. step 205 may include:
Step 205a: determine the image depth information of the target image according to the target image and the corresponding depth image. The target image is an RGB color image, and the depth image contains the depth information of each person or object in the target image. Specifically, the depth image can be obtained with a depth camera, including a depth camera based on structured-light ranging and a depth camera based on time-of-flight (TOF) ranging.
Since the color information of the target image and the depth information of the depth image are in one-to-one correspondence, the image depth information of the target image can be obtained from the depth image.
Step 205b: determine the background region of the target image according to the image depth information.
Specifically, the foremost point of the target image can be obtained according to the image depth information; the foremost point corresponds to the start of the subject. Starting from the foremost point, the regions that are adjacent to the foremost point and whose depth varies continuously are obtained by diffusion; these regions and the foremost point are merged into the subject region, and the regions of the target image other than the subject are the background region.
Step 205c: the depth information of the background region can then be determined according to the correspondence between the color information of the background region and the depth information of the depth image.
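A minimal sketch of steps 205a-205c under the stated one-to-one RGB/depth alignment is given below: seed the subject at the foremost (closest) point, grow it over neighboring pixels whose depth varies continuously, and summarize the depth of the remaining background pixels. The continuity tolerance and the breadth-first flood fill are illustrative assumptions.

```python
from collections import deque

import numpy as np


def split_subject_background(depth: np.ndarray,
                             continuity_tol: float = 0.05) -> np.ndarray:
    """Return a boolean mask of the background region (steps 205a-205b).

    The subject is grown from the foremost (smallest-depth) point over
    4-connected neighbors whose depth changes by less than continuity_tol.
    """
    h, w = depth.shape
    subject = np.zeros((h, w), dtype=bool)
    seed = np.unravel_index(np.argmin(depth), depth.shape)  # foremost point
    subject[seed] = True
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and not subject[ny, nx]:
                if abs(depth[ny, nx] - depth[y, x]) < continuity_tol:
                    subject[ny, nx] = True
                    queue.append((ny, nx))
    return ~subject  # everything outside the subject is the background


def background_depth(depth: np.ndarray, background_mask: np.ndarray) -> float:
    """Step 205c: summarize the background depth (a median also works)."""
    return float(depth[background_mask].mean())
```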
In one possible implementation, the target image may contain a portrait. In that case, the following approach can be used to determine the background region of the target image and thus the depth information of the background region. That is, before the depth information of the background region of the target image is obtained in step 205, the method may further include:
performing face recognition on the target image and determining the face region contained in the target image;
obtaining the depth information of the face region;
determining the portrait region according to the current posture of the mobile device and the depth information of the face region;
performing region segmentation on the target image according to the portrait region and determining the background region.
Specifically, the face region contained in the target image can first be identified with a trained deep-learning model, and the depth information of the face region can then be determined according to the correspondence between the target image and the depth image. Since the face region contains features such as the nose, eyes, ears and lips, the depth data corresponding to each feature of the face region in the depth image differ; for example, when the depth camera captures a depth image of a face facing it, the depth data corresponding to the nose may be small while the depth data corresponding to the ears may be large. The depth information of the face region may therefore be a single value or a range of values. When the depth information of the face region is a single value, the value can be obtained by averaging the depth data of the face region, or by taking the median of the depth data of the face region.
Since the portrait region contains the face region, in other words the portrait region and the face region lie within a certain depth range together, after the depth information of the face region has been determined, the depth range of the portrait region can be set according to the depth information of the face region, and the region that falls within this depth range and is connected to the face region is then extracted according to this depth range to obtain the portrait region.
It should be noted that, since the image sensor in the camera assembly of the mobile device contains multiple photosensitive units, each corresponding to one pixel, and the camera assembly is fixed relative to the mobile device, when the mobile device shoots images in different postures, the same point on the photographed object corresponds to different pixels on the image sensor.
As an example, assume the elliptical regions in Fig. 2A and Fig. 2B are the regions of the photographed object when the mobile terminal shoots images in portrait orientation and in landscape orientation respectively. As can be seen from Fig. 2A and Fig. 2B, when the mobile device shoots in portrait orientation, points a and b on the photographed object correspond to pixel 10 and pixel 11 respectively, while when the mobile device shoots in landscape orientation, points a and b correspond to pixel 11 and pixel 8 respectively.
Thus, assume the depth range N of the region containing point a is known, the region containing point b falls within the depth range N, and the region containing point b needs to be extracted. If the mobile device is in the portrait posture, then, according to the positional relationship between points a and b, the extraction must proceed from pixel 10 towards pixel 11; if the mobile device is in the landscape posture, the extraction must proceed from pixel 11 towards pixel 8. That is, after a certain region has been determined, when other regions falling within a certain depth range need to be extracted, the extraction must proceed in different directions depending on the posture of the mobile device. Therefore, in the embodiments of the application, after the depth range of the portrait region has been set according to the depth information of the face region, when the region that falls within this depth range and is connected to the face region is extracted, the direction in which to extract the region connected to the face and falling within the set depth range can be determined according to the current posture of the mobile device, so that the portrait region is determined more quickly.
Specifically, after the portrait region has been determined, the target image can be segmented into regions according to the portrait region, the regions other than the portrait region are determined as the background region, and the depth information of the background region is then determined according to the correspondence between the color information of the background region and the depth information of the depth image. A sketch of this portrait-based segmentation follows.
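The sketch below illustrates the portrait path: the face region is assumed to come from some face detector (not specified here), a depth range is derived around the face depth, and the portrait region is grown from the face within that range. The range margin is an assumption, and the posture-dependent extraction direction discussed above is not modeled.

```python
from collections import deque

import numpy as np


def portrait_background_mask(depth: np.ndarray,
                             face_mask: np.ndarray,
                             margin: float = 0.3) -> np.ndarray:
    """Return a background mask given a detected face region (boolean mask).

    The portrait depth range is the face depth plus/minus an assumed margin,
    and the portrait region is the part of that range connected to the face.
    """
    face_depth = float(np.median(depth[face_mask]))
    lo, hi = face_depth - margin, face_depth + margin
    in_range = (depth >= lo) & (depth <= hi)

    # Grow the portrait region from the face pixels over 4-connected
    # neighbors that stay inside the depth range.
    h, w = depth.shape
    portrait = face_mask.copy()
    queue = deque(zip(*np.nonzero(face_mask)))
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < h and 0 <= nx < w
                    and in_range[ny, nx] and not portrait[ny, nx]):
                portrait[ny, nx] = True
                queue.append((ny, nx))
    return ~portrait  # everything outside the portrait is the background
```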
After the depth information of the background region of the target image has been determined, the current preview image can be blurred according to the depth information. The specific implementation process and principle can be found in the detailed description of the above embodiments and are not repeated here.
By determining an initial depth-of-field calculation frame rate according to the depth-of-field calculation processing speed of the mobile device and then adjusting the initial frame rate according to the current motion speed to determine the current depth-of-field calculation frame rate, the determined frame rate is more reasonable, so that the blurring effect follows the scene better.
In the image blurring processing method provided by the embodiments of the application, after the current motion speed of the mobile device has been determined, an initial depth-of-field calculation frame rate is first determined according to the depth-of-field calculation processing speed of the mobile device, the initial frame rate is then adjusted according to the current motion speed to obtain the current depth-of-field calculation frame rate, and whether the current preview image is a target image is judged according to the current frame rate; if so, the depth information of the background region of the target image is obtained and the current preview image is blurred according to the depth information. By determining the current depth-of-field calculation frame rate according to both the current motion speed of the mobile device and the depth-of-field calculation processing speed of the mobile device, and blurring the current preview image according to the depth information of the background region of the target image when the current preview image is determined, according to that frame rate, to be a target image, the responsiveness of the blurring effect is improved and the user experience is improved.
It can be seen from the above analysis that the current depth-of-field calculation frame rate can be determined according to the current motion speed of the mobile device, so that when the current preview image is determined, according to that frame rate, to be a target image, the corresponding blur level is determined according to the depth information of the background region of the target image and the preview image is blurred. In one possible implementation, when the current preview image is a target image, the current motion speed of the mobile device can also be taken into account when determining the blur level, so as to blur the current preview image. The image blurring processing method provided by the embodiments of the application is further described below with reference to Fig. 3.
Fig. 3 is a flow chart of an image blurring processing method according to yet another embodiment of the application.
As shown in Fig. 3, the image blurring processing method includes:
Step 301: determine the current motion speed of the mobile device.
Step 302: determine a current depth-of-field calculation frame rate according to the current motion speed.
Step 303: judge, according to the depth-of-field calculation frame rate, whether the current preview image is a target image.
Step 304: if so, obtain the depth information of the background region of the target image.
The specific implementation process and principle of steps 301-304 can be found in the detailed description of the above embodiments and are not repeated here.
Step 305: determine a first blur level according to the current motion speed.
Different blur levels correspond to different degrees of blurring.
Specifically, a correspondence between the motion speed of the mobile device and the blur level can be preset, so that after the current motion speed of the mobile device has been determined, the first blur level can be determined according to the preset correspondence.
It should be noted that, when setting the correspondence between the motion speed of the mobile device and the blur level, it can be set according to the principle that the faster the motion speed of the mobile device, the lower the degree of blurring of the corresponding blur level, i.e. the degree of blurring of the blur level is inversely proportional to the motion speed of the mobile device.
Step 306: determine a second blur level according to the depth information of the background region of the target image.
Specifically, different depth ranges can be preset to correspond to different blur levels, so that after the depth information of the background region of the target image has been determined, the second blur level can be determined according to the determined depth information and the preset correspondence.
Step 307: blur the preview image according to whichever of the second blur level and the first blur level has the lower degree of blurring.
Specifically, after the second blur level and the first blur level have been determined, the preview image can be blurred according to whichever of the two has the lower degree of blurring.
It should be noted that, in the embodiments of the application, the second blur level can also be determined first according to the depth information of the background region of the target image and then adjusted according to the current motion speed of the mobile device: if the current motion speed of the mobile device is large, the degree of blurring of the second blur level is reduced to obtain the final blur level, and the preview image is blurred according to the final blur level. A minimal sketch of this level selection is given below.
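The sketch below illustrates steps 305-307 for a target frame: one level is derived from the motion speed, one from the background depth, and the weaker of the two is used. The lookup tables are illustrative assumptions.

```python
# Hypothetical blur-level selection for a target frame (steps 305-307).
# Larger level number = stronger blur; all table values are illustrative.

def level_from_speed(speed_mps: float) -> int:
    """Step 305: the faster the motion, the weaker the blur."""
    if speed_mps < 0.2:
        return 3
    if speed_mps < 0.5:
        return 2
    return 1


def level_from_background_depth(mean_depth_m: float) -> int:
    """Step 306: the farther the background, the stronger the blur."""
    if mean_depth_m < 1.0:
        return 1
    if mean_depth_m < 3.0:
        return 2
    return 3


def final_blur_level(speed_mps: float, mean_depth_m: float) -> int:
    """Step 307: use whichever of the two levels blurs less."""
    return min(level_from_speed(speed_mps),
               level_from_background_depth(mean_depth_m))
```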
It should be understood that the image blurring processing method provided by the embodiments of the application, by extracting target images for the depth-of-field calculation, reduces both the time spent on the depth-of-field calculation and the power consumption of the blurring process, improves the responsiveness of the blurring effect and improves the user experience. Moreover, when the current preview image is a target image, the blur level is determined according to both the current motion speed of the mobile device and the depth information of the background region of the target image, and the degree of blurring is reduced as the motion speed of the mobile device increases, which reduces the difference in blur between the non-blurred subject region and the blurred background region and thus masks the poor responsiveness of the blurring effect while the mobile device is moving.
To implement the above embodiments, the application also proposes an image blurring processing apparatus.
Fig. 4 is a structural diagram of an image blurring processing apparatus according to one embodiment of the application.
As shown in Fig. 4, the image blurring processing apparatus is applied to a mobile device that includes a camera assembly, and includes:
a first determining module 41 for determining the current motion speed of the mobile device;
a second determining module 42 for determining a current depth-of-field calculation frame rate according to the current motion speed;
a judging module 43 for judging, according to the depth-of-field calculation frame rate, whether the current preview image is a target image;
a first acquiring module 44 for obtaining the depth information of the background region of the target image when the current preview image is a target image;
a first processing module 45 for blurring the current preview image according to the depth information.
Specifically, the image blurring processing apparatus provided by the embodiments of the application can perform the image blurring processing method provided by the embodiments of the application, and the apparatus can be configured in a mobile device that includes a camera assembly so as to blur the captured images. There are many types of mobile device, such as a mobile phone, a tablet computer or a notebook computer; Fig. 4 takes a mobile phone as an example. A minimal sketch of how these modules fit together is given below.
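The sketch below wires the five modules together as a single Python class; it reuses the helper functions from the earlier sketches and is a structural illustration under those same assumptions, not this application's implementation.

```python
class ImageBlurringProcessor:
    """Illustrative wiring of the five core modules of the apparatus.

    Relies on depth_calc_frame_rate, is_target_image, split_subject_background
    and blur_background_by_depth from the earlier sketches.
    """

    def __init__(self):
        self.frame_index = 0
        self.background_mask = None  # reused for frames that are not targets

    def process(self, preview_image, depth_image, speed_mps):
        # First and second determining modules: motion speed -> frame rate.
        frame_rate = depth_calc_frame_rate(speed_mps)

        # Judging module: is this preview frame a target image?
        target = is_target_image(self.frame_index, frame_rate)
        self.frame_index += 1

        # First acquiring module: the background region and its depth are
        # recomputed only on target frames.
        if target or self.background_mask is None:
            self.background_mask = split_subject_background(depth_image)

        # First processing module: blur the background according to its depth.
        return blur_background_by_depth(preview_image, depth_image,
                                        self.background_mask)
```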
In one embodiment of the application, the apparatus further includes:
a second processing module for blurring the current preview image, when the current preview image is not a target image, according to the depth information of the target image before the current preview image;
or a third processing module for determining, when the current preview image is not a target image, a first blur level according to the current motion speed, and blurring the current preview image according to the first blur level.
In another embodiment of the application, the apparatus further includes:
a third determining module for determining an initial depth-of-field calculation frame rate according to the depth-of-field calculation processing speed of the mobile device;
the above second determining module 42 is specifically configured to:
adjust the initial depth-of-field calculation frame rate according to the current motion speed to obtain the current depth-of-field calculation frame rate.
In yet another embodiment of the application, the above second determining module 42 is further configured to:
judge whether the current motion speed of the mobile device is greater than a threshold;
if so, increase the initial depth-of-field calculation frame rate;
otherwise, use the initial depth-of-field calculation frame rate as the current depth-of-field calculation frame rate.
In yet another embodiment of the application, the target image may contain a portrait, and accordingly the apparatus may further include:
a fourth determining module for performing face recognition on the target image and determining the face region contained in the target image;
a second acquiring module for obtaining the depth information of the face region;
a fifth determining module for determining the portrait region according to the current posture of the mobile device and the depth information of the face region;
a sixth determining module for performing region segmentation on the target image according to the portrait region and determining the background region.
In yet another embodiment of the application, the apparatus may further include:
a seventh determining module for determining a first blur level according to the current motion speed;
an eighth determining module for determining a second blur level according to the depth information of the background region of the target image;
a fourth processing module for blurring the preview image according to whichever of the second blur level and the first blur level has the lower degree of blurring.
It should be noted that the foregoing description of the method embodiments also applies to the apparatus of the embodiments of the application; the implementation principles are similar and are not repeated here.
The division of the above image blurring processing apparatus into modules is for illustration only; in other embodiments, the image blurring processing apparatus can be divided into different modules as required to complete all or part of the functions of the above image blurring processing apparatus.
In summary, the image blurring processing apparatus of the embodiments of the application determines a current depth-of-field calculation frame rate according to the current motion speed of the mobile device, and when the current preview image is determined, according to that frame rate, to be a target image, blurs the current preview image according to the depth information of the background region of the target image, which improves the responsiveness of the blurring effect and improves the user experience.
To implement the above embodiments, the application also proposes a mobile device, including a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the program, the image blurring processing method described in the first aspect is implemented.
The above mobile device may further include an image processing circuit. The image processing circuit may be implemented with hardware and/or software components, and may include various processing units defining an ISP (Image Signal Processing) pipeline.
Fig. 5 is a schematic diagram of the image processing circuit in one embodiment. As shown in Fig. 5, for ease of illustration, only the aspects of the image processing technique related to the embodiments of the application are shown.
As shown in Fig. 5, the image processing circuit includes an ISP processor 540 and control logic 550. The image data captured by the camera assembly 510 is first processed by the ISP processor 540, which analyzes the image data to collect image statistics that can be used to determine one or more control parameters of the camera assembly 510. The camera assembly 510 may include a camera with one or more lenses 512 and an image sensor 514. The image sensor 514 may include a color filter array (such as a Bayer filter); the image sensor 514 can obtain the light intensity and wavelength information captured by each imaging pixel of the image sensor 514 and provide a set of raw image data that can be processed by the ISP processor 540. The sensor 520 can provide the raw image data to the ISP processor 540 based on the interface type of the sensor 520. The interface of the sensor 520 may use an SMIA (Standard Mobile Imaging Architecture) interface, other serial or parallel camera interfaces, or a combination of the above interfaces.
The ISP processor 540 processes the raw image data pixel by pixel in various formats. For example, each image pixel may have a bit depth of 8, 10, 12 or 14 bits; the ISP processor 540 can perform one or more image processing operations on the raw image data and collect statistics about the image data. The image processing operations can be performed with the same or different bit-depth precision.
The ISP processor 540 can also receive pixel data from the image memory 530. For example, raw pixel data is sent from the interface of the sensor 520 to the image memory 530, and the raw pixel data in the image memory 530 is then provided to the ISP processor 540 for processing. The image memory 530 may be part of a memory apparatus, a storage device, or a separate dedicated memory within an electronic device, and may include DMA (Direct Memory Access) features.
When receiving raw image data from the interface of the sensor 520 or from the image memory 530, the ISP processor 540 can perform one or more image processing operations, such as temporal filtering. The processed image data can be sent to the image memory 530 for further processing before being displayed. The ISP processor 540 receives the processed data from the image memory 530 and performs image data processing on it in the raw domain and in the RGB and YCbCr color spaces. The processed image data may be output to the display 570 for viewing by the user and/or for further processing by a graphics engine or GPU (Graphics Processing Unit). In addition, the output of the ISP processor 540 can also be sent to the image memory 530, and the display 570 can read the image data from the image memory 530. In one embodiment, the image memory 530 can be configured to implement one or more frame buffers. In addition, the output of the ISP processor 540 can be sent to an encoder/decoder 560 to encode/decode the image data. The encoded image data can be saved and decompressed before being displayed on the display 570. The encoder/decoder 560 can be implemented by a CPU or GPU or a coprocessor.
The statistics determined by the ISP processor 540 can be sent to the control logic 550. For example, the statistics may include image sensor 514 statistics such as automatic exposure, automatic white balance, automatic focus, flicker detection, black level compensation and lens 512 shading correction. The control logic 550 may include a processor and/or microcontroller that executes one or more routines (such as firmware), and the one or more routines can determine the control parameters of the camera assembly 510 and the ISP control parameters according to the received statistics. For example, the control parameters may include sensor 520 control parameters (such as gain and integration time for exposure control), camera flash control parameters, lens 512 control parameters (such as focus or zoom focal length), or a combination of these parameters. The ISP control parameters may include gain levels and color correction matrices used for automatic white balance and color adjustment (for example, during RGB processing), as well as lens 512 shading correction parameters.
The steps of implementing the image blurring processing method with the image processing technique in Fig. 5 are as follows:
determine the current motion speed of the mobile device;
determine a current depth-of-field calculation frame rate according to the current motion speed;
judge, according to the depth-of-field calculation frame rate, whether the current preview image is a target image;
if so, obtain the depth information of the background region of the target image;
blur the current preview image according to the depth information.
To implement the above embodiments, the application also proposes a computer-readable storage medium; when the instructions in the storage medium are executed by a processor, the image blurring processing method described in the above embodiments can be performed.
In the description of this specification, reference term " one embodiment ", " some embodiments ", " example ", " specifically show The description of example " or " some examples " etc. means specific features, structure, material or the spy for combining the embodiment or example description Point is contained at least one embodiment or example of the application.In the present specification, schematic expression of the above terms is not It must be directed to identical embodiment or example.Moreover, particular features, structures, materials, or characteristics described can be in office Combined in an appropriate manner in one or more embodiments or example.In addition, without conflicting with each other, the skill of this area Art personnel can be tied the different embodiments or example described in this specification and different embodiments or exemplary feature Close and combine.
In addition, term " first ", " second " are only used for description purpose, and it is not intended that instruction or hint relative importance Or the implicit quantity for indicating indicated technical characteristic.Thus, define " first ", the feature of " second " can be expressed or Implicitly include at least one this feature.In the description of the present application, " multiple " are meant that at least two, such as two, three It is a etc., unless otherwise specifically defined.
Any process or method described otherwise above description in flow chart or herein is construed as, and represents to include Module, fragment or the portion of the code of the executable instruction of one or more the step of being used for realization custom logic function or process Point, and the scope of the preferred embodiment of the application includes other realization, wherein can not press shown or discuss suitable Sequence, including according to involved function by it is basic at the same time in the way of or in the opposite order, carry out perform function, this should be by the application Embodiment person of ordinary skill in the field understood.
The logic and/or steps represented in a flowchart, or otherwise described herein, may for example be considered an ordered list of executable instructions for implementing logic functions, and may be embodied in any computer-readable medium for use by, or in combination with, an instruction execution system, apparatus, or device (such as a computer-based system, a system including a processor, or another system that can fetch and execute instructions from an instruction execution system, apparatus, or device). For the purposes of this specification, a "computer-readable medium" may be any means that can contain, store, communicate, propagate, or transport a program for use by, or in combination with, an instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium include: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). In addition, the computer-readable medium may even be paper or another suitable medium on which the program can be printed, because the program can be obtained electronically, for example by optically scanning the paper or other medium and then editing, interpreting, or otherwise processing it in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that parts of the application may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, multiple steps or methods may be implemented with software or firmware that is stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or a combination of the following techniques well known in the art may be used: a discrete logic circuit having logic gates for implementing logic functions on data signals, an application-specific integrated circuit having suitable combinational logic gates, a programmable gate array (PGA), a field-programmable gate array (FPGA), and so on.
Those of ordinary skill in the art can understand that all or part of the steps carried by the methods of the above embodiments may be completed by a program instructing relevant hardware; the program may be stored in a computer-readable storage medium, and the program, when executed, includes one of the steps of the method embodiments or a combination thereof.
In addition, the functional units in the embodiments of the application may be integrated into one processing module, or each unit may exist physically on its own, or two or more units may be integrated into one module. The above integrated module may be implemented in the form of hardware or in the form of a software function module. If the integrated module is implemented in the form of a software function module and sold or used as an independent product, it may also be stored in a computer-readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disc, or the like. Although the embodiments of the application have been shown and described above, it can be understood that the above embodiments are exemplary and cannot be understood as limiting the application; those of ordinary skill in the art may change, modify, replace, and vary the above embodiments within the scope of the application.

Claims (10)

1. An image blurring processing method, applied to a mobile device including a camera assembly, characterised in that it comprises:
determining the current movement speed of the mobile device;
determining the current depth-of-field calculation frame rate according to the current movement speed;
judging, according to the depth-of-field calculation frame rate, whether the current preview image is a target image;
if so, obtaining the depth information of the background region of the target image;
performing blurring processing on the current preview image according to the depth information.
2. The method according to claim 1, characterised in that, after judging whether the current preview image is a target image, the method further comprises:
if not, performing blurring processing on the current preview image according to the depth information of a target image preceding the current preview image;
or, if not, determining a first blurring level according to the current movement speed, and performing blurring processing on the current preview image according to the first blurring level.
3. The method according to claim 1, characterised in that, before determining the current depth-of-field calculation frame rate according to the current movement speed, the method further comprises:
determining an initial depth-of-field calculation frame rate according to the depth-of-field calculation processing speed of the mobile device;
the determining of the current depth-of-field calculation frame rate comprises:
adjusting the initial depth-of-field calculation frame rate according to the current movement speed to obtain the current depth-of-field calculation frame rate.
4. The method according to claim 3, characterised in that the adjusting of the initial depth-of-field calculation frame rate comprises:
judging whether the current movement speed of the mobile device is greater than a threshold;
if so, increasing the initial depth-of-field calculation frame rate;
otherwise, taking the initial depth-of-field calculation frame rate as the current depth-of-field calculation frame rate.
5. The method according to any one of claims 1-4, characterised in that the target image includes a portrait;
before obtaining the depth information of the background region of the target image, the method further comprises:
performing face recognition on the target image to determine the face region included in the target image;
obtaining the depth information of the face region;
determining a portrait region according to the current posture of the mobile device and the depth information of the face region;
performing region segmentation on the target image according to the portrait region to determine the background region.
6. The method according to any one of claims 1-4, characterised in that, after obtaining the depth information of the background region of the target image, the method further comprises:
determining a first blurring level according to the current movement speed;
determining a second blurring level according to the depth information of the background region of the target image;
performing blurring processing on the preview image according to whichever of the second blurring level and the first blurring level has the lower blurring degree.
7. An image blurring processing apparatus, applied to a mobile device including a camera assembly, characterised in that it comprises:
a first determining module, configured to determine the current movement speed of the mobile device;
a second determining module, configured to determine the current depth-of-field calculation frame rate according to the current movement speed;
a judgment module, configured to judge, according to the depth-of-field calculation frame rate, whether the current preview image is a target image;
a first acquisition module, configured to obtain the depth information of the background region of the target image when the current preview image is a target image;
a first processing module, configured to perform blurring processing on the current preview image according to the depth information.
8. The apparatus according to claim 7, characterised in that it further comprises:
a second processing module, configured to, when the current preview image is not a target image, perform blurring processing on the current preview image according to the depth information of a target image preceding the current preview image;
or, a third processing module, configured to, when the current preview image is not a target image, determine a first blurring level according to the current movement speed and perform blurring processing on the current preview image according to the first blurring level.
9. A mobile device, characterised in that it comprises a memory, a processor, and a computer program stored in the memory and runnable on the processor, wherein, when the processor executes the program, the image blurring processing method according to any one of claims 1-6 is implemented.
10. A computer-readable storage medium on which a computer program is stored, characterised in that, when the program is executed by a processor, the image blurring processing method according to any one of claims 1-6 is implemented.
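For illustration only, the following sketch (in Python) loosely mirrors the portrait segmentation recited in claim 5 and the blur-level selection recited in claim 6. The face detector, the depth-map representation (a dict from pixel coordinates to depth), the level mappings, and the numeric thresholds are hypothetical and are not defined by the claims.

def blur_level_from_speed(speed):
    # First blurring level: use a weaker blur when the device moves quickly (assumed mapping).
    return 1 if speed > 0.5 else 3

def blur_level_from_depth(background_depth):
    # Second blurring level: use a stronger blur for more distant backgrounds (assumed mapping).
    return 3 if background_depth > 2.0 else 2

def choose_blur_level(speed, background_depth):
    # Claim 6 style: blur with whichever candidate level has the lower blurring degree.
    return min(blur_level_from_speed(speed), blur_level_from_depth(background_depth))

def background_pixels(image, depth_map, detect_faces, depth_tolerance=0.3):
    # Claim 5 style: grow a portrait region around detected faces by depth similarity,
    # then treat everything outside that region as background.
    faces = detect_faces(image)                     # hypothetical face detector
    portrait = set()
    for face in faces:
        face_depth = depth_map[face.center]         # depth of the detected face region
        for pixel, depth in depth_map.items():
            if abs(depth - face_depth) < depth_tolerance:
                portrait.add(pixel)
    return {pixel for pixel in depth_map if pixel not in portrait}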
CN201711240101.6A 2017-11-30 2017-11-30 Image blurs processing method, device, mobile device and computer storage medium Active CN107948514B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201711240101.6A CN107948514B (en) 2017-11-30 2017-11-30 Image blurs processing method, device, mobile device and computer storage medium
PCT/CN2018/117195 WO2019105297A1 (en) 2017-11-30 2018-11-23 Image blurring method and apparatus, mobile device, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711240101.6A CN107948514B (en) 2017-11-30 2017-11-30 Image blurs processing method, device, mobile device and computer storage medium

Publications (2)

Publication Number Publication Date
CN107948514A true CN107948514A (en) 2018-04-20
CN107948514B CN107948514B (en) 2019-07-19

Family

ID=61948032

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711240101.6A Active CN107948514B (en) 2017-11-30 2017-11-30 Image blurs processing method, device, mobile device and computer storage medium

Country Status (2)

Country Link
CN (1) CN107948514B (en)
WO (1) WO2019105297A1 (en)


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107948514B (en) * 2017-11-30 2019-07-19 Oppo广东移动通信有限公司 Image blurs processing method, device, mobile device and computer storage medium

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103081455A (en) * 2010-11-29 2013-05-01 数字光学欧洲有限公司 Portrait image synthesis from multiple images captured on a handheld device
CN102821243A (en) * 2011-06-07 2012-12-12 索尼公司 Image processing device, method of controlling image processing device, and program for causing computer to execute the same method
CN106454061A (en) * 2015-08-04 2017-02-22 纬创资通股份有限公司 Electronic device and image processing method
KR20170079935A (en) * 2015-12-31 2017-07-10 (주)이더블유비엠 Method and apparatus speedy region growing of depth area based on motion model
CN106791456A (en) * 2017-03-31 2017-05-31 联想(北京)有限公司 A kind of photographic method and electronic equipment

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019105297A1 (en) * 2017-11-30 2019-06-06 Oppo广东移动通信有限公司 Image blurring method and apparatus, mobile device, and storage medium
CN110503658A (en) * 2018-05-16 2019-11-26 纬创资通股份有限公司 Dress ornament tries method and its display system and computer-readable recording medium on
CN109740337A (en) * 2019-01-25 2019-05-10 宜人恒业科技发展(北京)有限公司 A kind of method and device for realizing the identification of sliding block identifying code
CN110062157A (en) * 2019-04-04 2019-07-26 北京字节跳动网络技术有限公司 Render method, apparatus, electronic equipment and the computer readable storage medium of image
CN110062157B (en) * 2019-04-04 2021-09-17 北京字节跳动网络技术有限公司 Method and device for rendering image, electronic equipment and computer readable storage medium
CN110047126A (en) * 2019-04-25 2019-07-23 北京字节跳动网络技术有限公司 Render method, apparatus, electronic equipment and the computer readable storage medium of image
CN110047126B (en) * 2019-04-25 2023-11-24 北京字节跳动网络技术有限公司 Method, apparatus, electronic device, and computer-readable storage medium for rendering image
CN110248096A (en) * 2019-06-28 2019-09-17 Oppo广东移动通信有限公司 Focusing method and device, electronic equipment, computer readable storage medium
CN110248096B (en) * 2019-06-28 2021-03-12 Oppo广东移动通信有限公司 Focusing method and device, electronic equipment and computer readable storage medium
US11178324B2 (en) 2019-06-28 2021-11-16 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Focusing method and device, electronic device and computer-readable storage medium
CN110266960A (en) * 2019-07-19 2019-09-20 Oppo广东移动通信有限公司 Preview screen processing method, processing unit, photographic device and readable storage medium storing program for executing
CN113784015A (en) * 2020-06-10 2021-12-10 Oppo广东移动通信有限公司 Image processing circuit, electronic device, and image processing method
CN112016469A (en) * 2020-08-28 2020-12-01 Oppo广东移动通信有限公司 Image processing method and device, terminal and readable storage medium
CN112258527A (en) * 2020-11-02 2021-01-22 Oppo广东移动通信有限公司 Image processing method, image processing device, electronic equipment and computer readable storage medium

Also Published As

Publication number Publication date
WO2019105297A1 (en) 2019-06-06
CN107948514B (en) 2019-07-19

Similar Documents

Publication Publication Date Title
CN107948514B (en) Image blurs processing method, device, mobile device and computer storage medium
US11457138B2 (en) Method and device for image processing, method for training object detection model
CN108093158A (en) Image virtualization processing method, device and mobile equipment
JP7015374B2 (en) Methods for image processing using dual cameras and mobile terminals
CN109068058B (en) Shooting control method and device in super night scene mode and electronic equipment
EP3499863B1 (en) Method and device for image processing
CN107977940A (en) background blurring processing method, device and equipment
US20230214976A1 (en) Image fusion method and apparatus and training method and apparatus for image fusion model
CN110191291B (en) Image processing method and device based on multi-frame images
CN115442515A (en) Image processing method and apparatus
CN108024058B (en) Image blurs processing method, device, mobile terminal and storage medium
CN107959778A (en) Imaging method and device based on dual camera
CN109068067A (en) Exposal control method, device and electronic equipment
CN107509031A (en) Image processing method, device, mobile terminal and computer-readable recording medium
CN107945105A (en) Background blurring processing method, device and equipment
CN107493432A (en) Image processing method, device, mobile terminal and computer-readable recording medium
CN107948517A (en) Preview screen virtualization processing method, device and equipment
CN109348088A (en) Image denoising method, device, electronic equipment and computer readable storage medium
CN108024054A (en) Image processing method, device and equipment
CN103369238B (en) Image creating device and image creating method
CN110324532A (en) A kind of image weakening method, device, storage medium and electronic equipment
CN107358593A (en) Imaging method and device
CN108989699A (en) Image composition method, device, imaging device, electronic equipment and computer readable storage medium
CN109194855A (en) Imaging method, device and electronic equipment
CN107820018A (en) User's photographic method, device and equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 523860 No. 18, Wu Sha Beach Road, Changan Town, Dongguan, Guangdong

Applicant after: OPPO Guangdong Mobile Communications Co., Ltd.

Address before: 523860 No. 18, Wu Sha Beach Road, Changan Town, Dongguan, Guangdong

Applicant before: Guangdong OPPO Mobile Communications Co., Ltd.

GR01 Patent grant