CN108024058A - Image virtualization processing method, device, mobile terminal and storage medium - Google Patents
- Publication number
- CN108024058A CN108024058A CN201711243733.8A CN201711243733A CN108024058A CN 108024058 A CN108024058 A CN 108024058A CN 201711243733 A CN201711243733 A CN 201711243733A CN 108024058 A CN108024058 A CN 108024058A
- Authority
- CN
- China
- Prior art keywords
- image
- blurring
- background area
- portrait
- area
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/80—Camera processing pipelines; Components thereof
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/61—Control of cameras or camera modules based on recognised objects
- H04N23/611—Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/67—Focus control based on electronic image sensor signals
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
- H04N5/2621—Cameras specially adapted for the electronic generation of special effects during image pickup, e.g. digital cameras, camcorders, video cameras having integrated special effects capability
Abstract
The application proposes an image blurring processing method and apparatus, a mobile terminal and a storage medium. The method includes: acquiring a main image captured by a main camera and an auxiliary image captured by an auxiliary camera; determining depth information of the main image according to the main image and the auxiliary image; determining a foreground area and a background area in the main image according to the depth information of the main image; extracting image features from the background area and matching them with image features of a preset scenery; and judging, according to the result of matching the image features of the background area with those of the preset scenery, whether to perform blurring processing on the background area. The blurring range can thus be adjusted and the blurring effect of the imaging image improved. Meanwhile, the auxiliary image is captured synchronously with the main image, so that when the main image is subsequently blurred according to the corresponding auxiliary image, the imaging effect of the photograph can be improved on the one hand and the accuracy of the depth information on the other, making the image processing effect better.
Description
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image blurring processing method and apparatus, a mobile terminal, and a storage medium.
Background
With the continuous development of mobile terminal technology, more and more users take pictures with mobile terminals that have a dual-camera function. When a user opens the two cameras and enters the blurring mode for preview, content that should stay sharp may be mistakenly blurred.
For example, a user uses a photographing apparatus to photograph a visitor standing in front of the "Tianya Haijiao" (literally "edge of the sky, corner of the sea") boulder; the user may wish to keep the writing on the boulder clear and does not want the four characters of "Tianya Haijiao" to be blurred.
In the prior art, after the depth of field is calculated by a blurring algorithm, the visitor, the boulder and the sea are identified as three parts with different depths of field; only the visitor is kept sharp and the other parts are blurred, so the writing on the boulder is blurred as well and the image blurring effect is poor.
Content of application
The present application is directed to solving, at least to some extent, one of the technical problems in the related art.
Therefore, the application provides an image blurring processing method to realize intelligent adjustment of blurring range and improve blurring effect of an imaging image. Meanwhile, the auxiliary image shot by the auxiliary camera and the main image shot by the main camera are shot synchronously, so that when the main image is subjected to subsequent blurring processing according to the corresponding auxiliary image, the imaging effect of an imaging photo can be improved on the one hand, and the accuracy of depth information is improved on the other hand, so that the image processing effect is better, and the technical problem of poor blurring effect of the existing image is solved.
The application provides an image blurring processing device.
The application provides a mobile terminal.
The present application provides a computer-readable storage medium.
To achieve the above object, an embodiment of a first aspect of the present application provides an image blurring processing method, including:
acquiring a main image acquired by a main camera and acquiring an auxiliary image acquired by an auxiliary camera;
determining depth information of the main image according to the main image and the auxiliary image;
determining a foreground area and a background area in the main image according to the depth information of the main image;
extracting image features from the background area, and matching the image features with image features of a preset scene;
and judging whether to perform blurring processing on the background area according to the result of matching the image features of the background area with the image features of the preset scene.
According to the image blurring processing method, the main image collected by the main camera and the auxiliary image collected by the auxiliary camera are obtained; determining depth information of the main image according to the main image and the auxiliary image; determining a foreground area and a background area in the main image according to the depth information of the main image; extracting image characteristics of the background area, and matching the image characteristics with the image characteristics of a preset scene; and judging whether blurring processing is carried out on the background area or not according to the image characteristic matching result of the background area and the image characteristic of the preset scenery. Therefore, the blurring range can be intelligently adjusted, and the blurring effect of the imaging image is improved. Meanwhile, the auxiliary image shot by the auxiliary camera and the main image shot by the main camera are shot synchronously, so that when the main image is subjected to subsequent blurring processing according to the corresponding auxiliary image, the imaging effect of the imaging picture can be improved on the one hand, and the accuracy of the depth information is improved on the other hand, so that the image processing effect is better, and the technical problem of poor image blurring effect in the prior art is solved.
To achieve the above object, a second aspect of the present application provides an image blurring processing apparatus, including:
the acquisition module is used for acquiring a main image acquired by the main camera and acquiring an auxiliary image acquired by the auxiliary camera;
the depth processing module is used for determining the depth information of the main image according to the main image and the auxiliary image;
the region identification module is used for determining a foreground region and a background region in the main image according to the depth information of the main image;
the characteristic extraction module is used for extracting image characteristics from the background area and matching the image characteristics with the image characteristics of a preset scenery;
and the blurring module is used for judging whether blurring processing is carried out on the background area or not according to the image characteristic matching result of the background area and the image characteristic of a preset scene.
The image blurring processing device of the embodiment of the application acquires the main image acquired by the main camera and acquires the auxiliary image acquired by the auxiliary camera; determining depth information of the main image according to the main image and the auxiliary image; determining a foreground area and a background area in the main image according to the depth information of the main image; extracting image characteristics of the background area, and matching the image characteristics with the image characteristics of a preset scene; and judging whether blurring processing is carried out on the background area or not according to the image characteristic matching result of the background area and the image characteristic of the preset scenery. Therefore, the blurring range can be intelligently adjusted, and the blurring effect of the imaging image is improved. Meanwhile, the auxiliary image shot by the auxiliary camera and the main image shot by the main camera are shot synchronously, so that when the main image is subjected to subsequent blurring processing according to the corresponding auxiliary image, the imaging effect of the imaging picture can be improved on the one hand, and the accuracy of the depth information is improved on the other hand, so that the image processing effect is better, and the technical problem of poor image blurring effect in the prior art is solved.
To achieve the above object, a third aspect of the present application provides a mobile terminal, including: a main camera, an auxiliary camera, a memory and a processor, wherein the memory stores executable program code, and the processor, by reading the executable program code stored in the memory, runs a program corresponding to the executable program code, for executing the image blurring processing method according to the embodiment of the first aspect.
In order to achieve the above object, a fourth aspect of the present application provides a computer-readable storage medium, on which a computer program is stored, wherein the computer program is configured to, when executed by a processor, implement an image blurring processing method according to an embodiment of the first aspect of the present application.
Additional aspects and advantages of the present application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the present application.
Drawings
The foregoing and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
fig. 1 is a schematic flowchart of a first image blurring processing method according to an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of triangulation;
fig. 3 is a schematic flowchart of a second image blurring processing method according to an embodiment of the present disclosure;
fig. 4 is a schematic flowchart of a third image blurring processing method according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of an image blurring processing apparatus according to an embodiment of the present disclosure;
fig. 6 is a schematic structural diagram of another image blurring processing apparatus according to an embodiment of the present disclosure;
fig. 7 is a schematic structural diagram of a terminal device according to another embodiment of the present application;
FIG. 8 is a schematic diagram of an image processing circuit in one embodiment.
Detailed Description
Reference will now be made in detail to embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are exemplary and intended to be used for explaining the present application and should not be construed as limiting the present application.
An image blurring processing method, an apparatus, a mobile terminal, and a storage medium according to embodiments of the present application are described below with reference to the drawings.
The image blurring processing method can be specifically executed by hardware devices with two cameras, such as a mobile phone, a tablet computer, a personal digital assistant or a wearable device. Such a device includes a camera module comprising a main camera and an auxiliary camera, each of which has its own independent lens, image sensor and voice coil motor. Both the main camera and the auxiliary camera are connected to a camera connector, so that their voice coil motors are driven according to current values provided by the connector; driven by the voice coil motors, the main camera and the auxiliary camera adjust the distance between lens and image sensor to achieve focusing.
As a possible application scenario, the resolution of the auxiliary camera is lower than that of the main camera. When focusing, only the auxiliary camera may be used to focus; once the auxiliary camera is in focus, a second driving current value of the auxiliary camera's motor is obtained. On the premise that the main camera and the auxiliary camera have the same focus distance, a first driving current value of the main camera's motor is then determined from the second driving current value, and the main camera is driven with the first driving current value to focus. Because the resolution of the auxiliary camera is low, its images are processed quickly, which speeds up focusing and solves the technical problem of slow focusing of dual cameras in the prior art.
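A minimal sketch of this current mapping, assuming a per-module calibration table and linear interpolation between its entries (the table values, names and interpolation scheme are illustrative assumptions; the embodiment only requires that the first driving current be derived from the second at the same focus distance):

```python
import bisect

# Hypothetical calibration table: for a given focus distance, the motor
# current of the auxiliary camera (mA) and the current that drives the
# main camera to the same focus distance. Real values would come from
# per-module calibration; these are placeholders.
SECONDARY_MA = [20.0, 40.0, 60.0, 80.0, 100.0]  # second driving current values
MAIN_MA = [26.0, 51.0, 77.0, 104.0, 128.0]      # first driving current values

def first_driving_current(second_ma: float) -> float:
    """Interpolate the main camera's driving current from the current
    measured on the auxiliary camera's motor after it focuses."""
    if second_ma <= SECONDARY_MA[0]:
        return MAIN_MA[0]
    if second_ma >= SECONDARY_MA[-1]:
        return MAIN_MA[-1]
    i = bisect.bisect_left(SECONDARY_MA, second_ma)
    x0, x1 = SECONDARY_MA[i - 1], SECONDARY_MA[i]
    y0, y1 = MAIN_MA[i - 1], MAIN_MA[i]
    return y0 + (y1 - y0) * (second_ma - x0) / (x1 - x0)
```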
In a concrete implementation of the dual cameras, different camera combinations can be selected as the main camera and the auxiliary camera, so as to suit different user needs.
In one application scenario, a higher focusing speed is required, so the main camera of the dual cameras is a common camera and the auxiliary camera is a dual-photodiode (dual PD) camera. The resolution of the dual PD camera is lower than that of the common camera, which gives it a faster focusing speed.
It should be noted that each pixel of a dual PD camera is composed of two units; the two units can serve as phase-detection focusing points or be combined into the image of a single pixel, which greatly improves focusing performance during electronic framing. The dual PD Complementary Metal Oxide Semiconductor (CMOS) sensor camera, which uses CMOS as its sensor, is a commonly used dual PD camera and was first used on single-lens reflex cameras.
In another application scenario, a better imaging effect is required, so the combination of a wide-angle camera and a telephoto camera is used as the dual cameras, and the main and auxiliary roles are switched according to the shooting requirement. Specifically, when a close-up is shot, the wide-angle lens is used as the main camera and the telephoto lens as the auxiliary camera; when a long shot is taken, the telephoto lens is used as the main camera and the wide-angle lens as the auxiliary camera. This realizes an optical zoom function while ensuring imaging quality and the subsequent blurring effect.
Fig. 1 is a schematic flowchart of a first image blurring processing method according to an embodiment of the present disclosure.
As shown in fig. 1, the image blurring processing method includes the following steps:
step 101, acquiring a main image acquired by a main camera and acquiring a secondary image acquired by a secondary camera.
In the embodiment of the application, the main camera and/or the auxiliary camera can be controlled to meter light in advance; the ambient brightness is then determined from the average brightness of a plurality of metering points, and the main camera and the auxiliary camera are determined from the two cameras accordingly.
Specifically, when the ambient brightness is not higher than a threshold brightness, light is insufficient; if a high-resolution camera is used as the main camera to take the picture, more noise may appear, resulting in a poor imaging effect. Therefore, in this embodiment, when light is insufficient, the high-sensitivity camera can be used as the main camera and the high-resolution camera as the auxiliary camera, so as to reduce noise in the image and improve the imaging effect. Conversely, when the ambient brightness is higher than the threshold brightness, light is sufficient; since a high-resolution camera images more clearly and with less noise, the high-resolution camera can be used as the main camera and the high-sensitivity camera as the auxiliary camera, so as to improve the imaging effect.
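A sketch of this role selection, assuming brightness is metered on a 0-255 scale with an illustrative threshold (neither value is fixed by the embodiment):

```python
BRIGHTNESS_THRESHOLD = 100  # assumed threshold on a 0-255 metering scale

def assign_camera_roles(metering_points):
    """Choose (main, auxiliary) roles from the average brightness of the
    metering points, as described above."""
    ambient = sum(metering_points) / len(metering_points)
    if ambient > BRIGHTNESS_THRESHOLD:
        return "high_resolution", "high_sensitivity"  # sufficient light
    return "high_sensitivity", "high_resolution"      # insufficient light
```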
After the main camera and the auxiliary camera are determined, the main camera and the auxiliary camera can frame and shoot simultaneously, obtaining a main image and an auxiliary image respectively.
The imaged image may be previewed prior to being captured. As a possible implementation manner, only the picture acquired by the main camera can be previewed, and when the user sees a satisfactory preview picture, the photographing key is clicked, so that the main camera and the auxiliary camera are controlled to perform framing photographing simultaneously. Or, the image acquired by the sub-camera can be previewed, and when the user sees a satisfactory preview image, the user clicks the photographing key, so that the main camera and the sub-camera are controlled to perform framing photographing at the same time, which is not limited.
Step 102, determining the depth information of the main image according to the main image and the auxiliary image.
Specifically, since there is a certain distance between the main camera and the auxiliary camera, the two cameras have parallax, so images of the same scene taken by the two cameras differ. The main image is captured by the main camera and the auxiliary image by the auxiliary camera, so there are differences between the main image and the auxiliary image. According to the principle of triangulation, the depth information of the same object in the main and auxiliary images, namely the distance between the object and the plane where the main camera and the auxiliary camera are located, can be calculated.
For clarity of explanation of this process, the principle of triangulation will be briefly described below.
In actual scenes, human eyes resolve depth mainly by binocular vision, which is the same principle by which two cameras resolve depth. In this embodiment, the depth information of the imaging image is calculated from the two captured images, mainly based on the principle of triangulation, and fig. 2 is a schematic diagram of the principle of triangulation.
Based on fig. 2, in real space the imaging object, the positions O_R and O_T of the two cameras, and the focal planes of the two cameras are drawn. The distance between the focal planes and the plane where the two cameras are located is f, and the two cameras image at their focal planes, thereby obtaining two captured images.
P and P' are the positions of the same subject in the two captured images, where the distance from point P to the left boundary of its captured image is X_R, and the distance from point P' to the left boundary of its captured image is X_T. O_R and O_T denote the two cameras; the two cameras lie in the same plane at a distance B from each other.
Based on the principle of triangulation, the distance Z between the object and the plane where the two cameras are located in fig. 2 satisfies the following relationship:

B / Z = (B - (X_R - X_T)) / (Z - f)

Based on this, it can be derived that

Z = (B * f) / (X_R - X_T) = (B * f) / d

where d is the difference between the positions of the same object in the two captured images. Since B and f are constants, the distance Z of the object can be determined from d.
Step 103, determining a foreground area and a background area in the main image according to the depth information of the main image.
Specifically, after the depth information of the main image is calculated, whether each object in the main image is foreground or background can be determined from the object's depth information. In general, a smaller depth value indicates that the object is closer to the plane where the main camera and the auxiliary camera are located, and such an object can be determined to be foreground; otherwise, it is background. The foreground region and the background region of the main image can then be determined from the objects in the main image.
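A minimal sketch of this split, with an assumed depth cut-off (the embodiment does not prescribe a concrete threshold):

```python
def split_regions(depth_m, foreground_max_m=1.5):
    """Threshold the depth map into foreground/background masks; the
    cut-off value is illustrative only."""
    foreground_mask = depth_m < foreground_max_m  # NaN depths compare False
    background_mask = ~foreground_mask
    return foreground_mask, background_mask
```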
And 104, extracting image characteristics of the background area and matching the image characteristics with the image characteristics of the preset scenery.
In this application, the preset scene may be a scenic spot, a building, or the like, or the preset scene may also be a scene containing a specific element, for example, the preset scene may be a scene containing characters, which is not limited in this application.
In the embodiment of the application, image features of each object in the background region may be extracted, and the image features may include multiple features such as contour features, color features, and/or texture features. Alternatively, the image features of each object in the background region may be extracted by using a related algorithm in the prior art, which is not limited to this.
For preset sceneries such as scenic spots and buildings, multiple features including contour features, color features and texture features can be extracted, so as to suit complex scenes; when the preset scenery is one containing the specific element of characters, only contour features need to be extracted and compared with the prestored contour features of characters, which simplifies the feature matching process and improves matching efficiency.
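A sketch of extracting contour and color features from the background region (the concrete descriptors, Canny edges and an HSV histogram, are assumptions; the embodiment only names the feature types):

```python
import cv2

def extract_region_features(bgr_region):
    """Illustrative contour + color features for a background region."""
    gray = cv2.cvtColor(bgr_region, cv2.COLOR_BGR2GRAY)
    contour_map = cv2.Canny(gray, 100, 200)  # contour feature
    hsv = cv2.cvtColor(bgr_region, cv2.COLOR_BGR2HSV)
    color_hist = cv2.calcHist([hsv], [0, 1], None, [32, 32],
                              [0, 180, 0, 256])  # color feature
    cv2.normalize(color_hist, color_hist)
    return contour_map, color_hist
```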
After the image features of the background region are extracted, they can be matched against the image features of the preset scenery; the matching result is either that the image features of the background region match the image features of the preset scenery, or that they do not.
As a possible implementation manner, the specific process of determining whether to match includes: the weighted value of each feature in the image features can be preset, after the image features of the background region are extracted, weighted summation processing can be carried out on each feature in the image features, namely, each feature is multiplied by the corresponding weight to obtain a product value, then the product values are accumulated to obtain a sum value, and then the sum value is compared with a preset threshold value to determine whether the image features of the background region are matched with the image features of the preset scenery or not. The preset threshold may be preset by a built-in program of the mobile terminal.
Specifically, when the sum value is equal to or greater than a preset threshold value, it may be determined that the image feature of the background region matches the image feature of the preset subject, and when the sum value is less than the preset threshold value, it may be determined that the image feature of the background region does not match the image feature of the preset subject.
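A sketch of the weighted decision described above, assuming per-feature similarity scores in [0, 1] have already been computed (the weights and the threshold are illustrative; the embodiment leaves them to a built-in program of the mobile terminal):

```python
FEATURE_WEIGHTS = {"contour": 0.5, "color": 0.3, "texture": 0.2}  # assumed
MATCH_THRESHOLD = 0.7                                             # assumed

def features_match(similarities):
    """similarities: per-feature similarity between the background region
    and the preset scenery. Multiply each feature by its weight, accumulate
    the products, and compare the sum against the preset threshold."""
    score = sum(FEATURE_WEIGHTS[name] * sim
                for name, sim in similarities.items())
    return score >= MATCH_THRESHOLD
```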
And 105, judging whether blurring processing is carried out on the background area or not according to the matching result of the image characteristics of the background area and the image characteristics of the preset scenery.
In the embodiment of the application, whether blurring processing is performed on the background area can be judged according to the matching result of the image characteristics of the background area and the image characteristics of the preset scenery. Specifically, when the image feature of the background region is not matched with the image feature of the preset scene, the blurring process may be performed on the background region, and when the image feature of the background region is matched with the image feature of the preset scene, the blurring process may not be performed on the background region, that is, the imaging effect of the background region is maintained. Therefore, the blurring range can be intelligently adjusted, and the blurring effect of the imaging image is improved.
In the image blurring processing method of the embodiment, a main image acquired by a main camera and an auxiliary image acquired by an auxiliary camera are acquired; determining depth information of the main image according to the main image and the auxiliary image; determining a foreground area and a background area in the main image according to the depth information of the main image; extracting image characteristics of the background area, and matching the image characteristics with the image characteristics of a preset scene; and judging whether blurring processing is carried out on the background area or not according to the image characteristic matching result of the background area and the image characteristic of the preset scenery. Therefore, the blurring range can be intelligently adjusted, and the blurring effect of the imaging image is improved. Meanwhile, the auxiliary image shot by the auxiliary camera and the main image shot by the main camera are synchronously shot, so that when the main image is subjected to subsequent blurring processing according to the corresponding auxiliary image, the imaging effect of the imaging picture can be improved on the one hand, and the accuracy of the depth information is improved on the other hand, so that the image processing effect is better.
To clearly illustrate the above embodiment, the present embodiment provides another image blurring processing method, and fig. 3 is a flowchart illustrating a second image blurring processing method provided in the present embodiment.
As shown in fig. 3, the image blurring processing method may include the steps of:
step 201, acquiring a main image acquired by a main camera and acquiring a secondary image acquired by a secondary camera.
Step 202, determining the depth information of the main image according to the main image and the auxiliary image.
Step 203, determining a foreground area and a background area in the main image according to the depth information of the main image.
The execution processes of steps 201 to 203 can refer to the execution processes of steps 101 to 103 in the above embodiments, which are not described herein again.
And step 204, acquiring the shot geographic position.
It is understood that, when the preset scenery is a scenic spot or a building, the geographical location information thereof is fixed since the scenic spot and the building are immovable. Therefore, in the embodiment of the application, the preset scenery matched with the shot geographical position information can be determined by obtaining the shot geographical position information, so that the processing speed of the mobile terminal is increased, and the calculated amount is reduced.
Alternatively, the Global Positioning System (GPS) in the mobile terminal may be turned on to obtain the geographic location of the shot, or the Assisted Global Positioning System (AGPS) in the mobile terminal may be turned on to obtain the geographic location of the shot, which is not limited in this respect.
Step 205, determining a preset scene matched with the geographical position according to the geographical position.
In the embodiment of the application, after the shot geographic position information is obtained, the preset scenery matched with the geographic position information can be determined according to the geographic position, so that the processing speed of the mobile terminal is increased, and the calculated amount is reduced.
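One way to realize this lookup, sketched under the assumption of a local registry of preset sceneries keyed by coordinates (the registry, the sample entry and the search radius are all hypothetical):

```python
import math

# Hypothetical registry; the embodiment only states that candidate preset
# sceneries are narrowed down by the geographic position of the shot.
PRESET_SCENERIES = {
    (18.296, 109.345): "Tianya Haijiao stone inscription",
}

def nearby_preset_sceneries(lat, lon, radius_km=5.0):
    """Return preset sceneries within radius_km of the shot location."""
    def haversine_km(lat1, lon1, lat2, lon2):
        p = math.pi / 180.0
        a = (math.sin((lat2 - lat1) * p / 2) ** 2
             + math.cos(lat1 * p) * math.cos(lat2 * p)
             * math.sin((lon2 - lon1) * p / 2) ** 2)
        return 12742.0 * math.asin(math.sqrt(a))  # 2 * Earth radius (km)
    return [name for (plat, plon), name in PRESET_SCENERIES.items()
            if haversine_km(lat, lon, plat, plon) <= radius_km]
```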
And step 206, extracting image characteristics of the background area and matching the image characteristics with the image characteristics of the preset scenery.
Step 207, determining whether the image features of the background region are matched with the image features of the preset scenery, if so, executing step 209, otherwise, executing step 208.
In step 208, it is determined to blur the background area.
Step 209, performing portrait identification on the foreground region, and determining whether the foreground region includes a portrait region, if so, performing step 210, otherwise, performing step 213.
In the embodiment of the present application, when the image feature of the background region matches with the image feature of the preset scene, it is further determined whether to perform blurring processing on the background region according to whether the foreground region includes the portrait region.
Optionally, a face recognition technology may be used to identify whether the foreground region includes the portrait region, or whether the foreground region includes the portrait region may be determined according to the contour features of each object in the foreground region. When it is determined that the foreground region does not contain the portrait, it indicates that the image taken by the user may be a landscape image, and at this time, the background region may not be blurred, i.e., the imaging effect of the background region is maintained. When it is determined that the foreground region contains the portrait, it may be further determined whether to perform blurring on the background region according to the action of the portrait, i.e., triggering step 210.
And step 210, performing portrait motion recognition according to the portrait area.
In the embodiment of the application, portrait motion recognition can be performed according to the portrait area, so that gestures, limb motions and the like of the portrait can be recognized.
As a possible implementation manner, the main camera and/or the auxiliary camera used to acquire the image may be a camera capable of acquiring depth information of the human body, and the portrait motion in the portrait area can be recognized from the acquired depth information. For example, the main camera and/or the auxiliary camera may be an RGB-D (Red-Green-Blue Depth) camera, which acquires depth information of the human body while imaging, so that the portrait motion in the portrait area can be identified according to the depth information. Alternatively, the human body depth information can be acquired through structured light or a TOF lens, so that the portrait motion in the portrait area can be recognized according to the depth information; this is not limited here.
As another possible implementation manner, joints of the portrait in the portrait area may be recognized, for example, the face and the position information of the face in the portrait area may be recognized according to a face recognition technology, and then the position information of each joint of the portrait may be calculated according to a proportional relationship between limbs and height in human anatomy. Of course, the position information of each joint of the portrait in the portrait area may be determined by other algorithms, which is not limited to this. After the position information of each joint of the portrait is obtained, the relative position relation of the two joints can be determined according to the position information of the two adjacent joints, and then the movement of the portrait can be determined.
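A sketch of inferring a pointing direction from two adjacent joints, assuming their image coordinates have been estimated as described above (the joint pair and the angle bins are illustrative):

```python
import numpy as np

def pointing_direction(elbow_xy, wrist_xy):
    """Infer the direction a raised arm points from two adjacent joints."""
    v = np.asarray(wrist_xy, float) - np.asarray(elbow_xy, float)
    angle = np.degrees(np.arctan2(v[1], v[0]))  # image y axis points down
    if -45 <= angle < 45:
        return "right"
    if 45 <= angle < 135:
        return "down"
    if -135 <= angle < -45:
        return "up"
    return "left"
```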
In step 211, the target orientation indicated by the portrait motion is determined.
In the embodiment of the application, after the portrait motion is recognized, the target orientation indicated by the portrait motion can be determined. For example, when the portrait motion is a gesture, the target orientation may be the direction indicated by the portrait's fingers, e.g., left, right, forward or backward.
Further, before step 211 is executed, it may be determined whether the portrait motion is a preset motion or a preset gesture. In one possible scenario, a limited number of motions and gestures may be defined as portrait motions for indicating whether to blur. For example, the portrait may make a "V" gesture; in that case, although the portrait makes a gesture, it is not a preset gesture, and thus step 211 is not performed.
Step 212, determining whether the orientation of the background area relative to the portrait area matches the target orientation, if yes, performing step 213, otherwise, performing step 208.
In the embodiment of the application, after the portrait motion is recognized, whether blurring processing is performed on the background area or not can be continuously judged according to the recognized portrait motion.
Specifically, when the orientation of the background region relative to the portrait region matches the target orientation, the background region may not be blurred, that is, the imaging effect of the background region is maintained. For example, when the figure stands in front of the "Tianya Haijiao" boulder and points a finger at the four characters of "Tianya Haijiao", the orientation of the background region relative to the portrait region matches the target orientation, so the background region may not be blurred, that is, the four characters of "Tianya Haijiao" on the boulder are kept.
When the orientation of the background area relative to the portrait area does not match the target orientation, blurring may be performed on the background area. Taking again the example of the figure standing in front of the "Tianya Haijiao" boulder and pointing at the four characters of "Tianya Haijiao": if the background area contains not only the "Tianya Haijiao" stone carving but also plants and a beach, then, because the orientation of the stone carving relative to the portrait area matches the target orientation, only the remaining parts of the background area are blurred; that is, the four characters of "Tianya Haijiao" on the boulder are kept, while the plants and the beach are blurred.
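A sketch of this orientation test for a single background object, assuming region centroids and a unit direction vector for the target orientation are available (the angular tolerance is an assumption):

```python
import numpy as np

def orientation_matches(portrait_center, object_center, target_dir,
                        tol_deg=30.0):
    """True if the background object lies in the direction the portrait
    points to, in which case it should be kept sharp."""
    v = np.asarray(object_center, float) - np.asarray(portrait_center, float)
    v /= np.linalg.norm(v)
    cos_angle = float(np.clip(np.dot(v, np.asarray(target_dir, float)),
                              -1.0, 1.0))
    return np.degrees(np.arccos(cos_angle)) <= tol_deg
```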
Step 213, the imaging effect of the background area is maintained.
In the image blurring processing method of the embodiment, a main image acquired by a main camera and an auxiliary image acquired by an auxiliary camera are acquired; determining depth information of the main image according to the main image and the auxiliary image; determining a foreground area and a background area in the main image according to the depth information of the main image; extracting image characteristics of the background area, and matching the image characteristics with the image characteristics of a preset scene; and judging whether blurring processing is carried out on the background area or not according to the image characteristic matching result of the background area and the image characteristic of the preset scenery. Therefore, the blurring range can be intelligently adjusted, and the blurring effect of the imaging image is improved. Meanwhile, the auxiliary image shot by the auxiliary camera and the main image shot by the main camera are synchronously shot, so that when the main image is subjected to subsequent blurring processing according to the corresponding auxiliary image, the imaging effect of the imaging picture can be improved on the one hand, and the accuracy of the depth information is improved on the other hand, so that the image processing effect is better.
To clearly illustrate the previous embodiment, the present embodiment provides another image blurring processing method, and fig. 4 is a flowchart illustrating a third image blurring processing method provided in the present embodiment.
As shown in fig. 4, the image blurring processing method may include the steps of:
step 301, acquiring a main image acquired by a main camera and acquiring an auxiliary image acquired by an auxiliary camera;
step 302, determining the depth information of the main image according to the main image and the auxiliary image;
step 303, determining a foreground region and a background region in the main image according to the depth information of the main image.
And step 304, acquiring the shot geographic position.
And 305, determining a preset scene matched with the geographic position according to the geographic position.
And step 306, extracting image characteristics of the background area, and matching the image characteristics with the image characteristics of the preset scenery.
Step 307, determining whether the image features of the background region are matched with the image features of the preset scenery, if so, executing step 309, otherwise, executing step 308.
Step 308, determining to perform blurring processing on the background area.
Step 309, performing portrait identification on the foreground area, and determining whether the foreground area includes a portrait area, if so, performing step 310, otherwise, performing step 314.
And step 310, performing portrait motion recognition according to the portrait area.
The execution processes of steps 301 to 310 can refer to the execution processes of steps 201 to 210 in the above embodiments, which are not described herein again.
In step 311, it is determined whether the identified portrait motion matches a preset instruction action; if yes, step 312 is executed, otherwise, step 308 is executed.
In the embodiment of the application, the preset instruction action may be preset by a built-in program of the terminal, or may be set by the user according to his own needs, which is not limited here.
In the embodiment of the application, the identified portrait motion can be used to issue an instruction, so the instruction issued by the portrait motion can be further identified. Specifically, it may be determined whether the identified portrait motion matches the preset instruction action; if so, step 312 is executed; otherwise, the portrait motion is not used for issuing an instruction, and blurring processing may therefore be performed on the background area.
Step 312, inquiring the blurring information corresponding to the preset instruction action; the blurring information includes whether to perform blurring processing and/or the blurring degree.
In the embodiment of the application, when the preset instruction action is preset by a built-in program of the terminal, the mobile terminal may store the blurring information corresponding to the preset instruction action. When the preset instruction action is set by the user, the operation interface of the mobile terminal may, during setup, display prompt information on whether to perform blurring processing and/or the blurring degree for that action, so that the user can choose according to his own needs.
Optionally, when the identified portrait motion matches the preset instruction action, the blurring information corresponding to the preset instruction action, stored locally on the mobile terminal, may be queried, where the blurring information includes whether to perform blurring processing and/or the blurring degree.
Step 313, determine whether the blurring information indicates blurring, if yes, go to step 308, otherwise go to step 314.
In this embodiment, when the blurring information indicates to perform blurring, blurring may be performed on the background area in the main image according to the blurring degree indicated by the blurring information. And when the blurring information indicates that blurring processing is not performed, the imaging effect of the background area in the main image can be maintained.
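A sketch of applying the indicated blurring degree to the background only (the mapping from blurring degree to Gaussian kernel size is an assumption):

```python
import cv2

def blur_background(main_bgr, background_mask, blur_degree=1.0):
    """Gaussian-blur only the background region of the main image."""
    k = max(3, int(blur_degree * 10) | 1)  # odd kernel size scaled by degree
    blurred = cv2.GaussianBlur(main_bgr, (k, k), 0)
    out = main_bgr.copy()
    out[background_mask] = blurred[background_mask]  # foreground stays sharp
    return out
```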
Step 314, the imaging effect of the background area in the main image is maintained.
In the image blurring processing method of the embodiment, a main image acquired by a main camera and an auxiliary image acquired by an auxiliary camera are acquired; determining depth information of the main image according to the main image and the auxiliary image; determining a foreground area and a background area in the main image according to the depth information of the main image; extracting image characteristics of the background area, and matching the image characteristics with the image characteristics of a preset scene; and judging whether blurring processing is carried out on the background area or not according to the image characteristic matching result of the background area and the image characteristic of the preset scenery. Therefore, the blurring range can be intelligently adjusted, and the blurring effect of the imaging image is improved. Meanwhile, the auxiliary image shot by the auxiliary camera and the main image shot by the main camera are synchronously shot, so that when the main image is subjected to subsequent blurring processing according to the corresponding auxiliary image, the imaging effect of the imaging picture can be improved on the one hand, and the accuracy of the depth information is improved on the other hand, so that the image processing effect is better.
In order to implement the above embodiments, the present application further provides an image blurring processing apparatus.
Fig. 5 is a schematic structural diagram of an image blurring processing apparatus according to an embodiment of the present disclosure.
As shown in fig. 5, the image blurring processing apparatus includes: an acquisition module 510, a depth processing module 520, a region identification module 530, a feature extraction module 540, and a blurring module 550. Wherein,
the acquiring module 510 is configured to acquire a main image acquired by the main camera and acquire an auxiliary image acquired by the auxiliary camera.
A depth processing module 520 for determining depth information of the main image according to the main image and the sub-image.
The region identifying module 530 is configured to determine a foreground region and a background region in the main image according to the depth information of the main image.
And the feature extraction module 540 is configured to extract image features from the background region, and match the image features with image features of a preset scene.
And a blurring module 550, configured to determine whether to perform blurring processing on the background region according to a matching result between the image feature of the background region and an image feature of a preset scene.
As a possible implementation manner, the blurring module 550 is specifically configured to perform blurring processing on the background region when the image feature of the background region is not matched with the image feature of the preset scene, and maintain the imaging effect of the background region when the image feature of the background region is matched with the image feature of the preset scene.
Further, in a possible implementation manner of the embodiment of the present application, referring to fig. 6, on the basis of the embodiment shown in fig. 5, the image blurring processing apparatus may further include:
the portrait identifying module 560 is configured to determine not to perform blurring processing on the background area and perform portrait identification on the foreground area when the image feature of the background area matches the image feature of the preset scenery;
the action recognition module 570 is used for recognizing the action of the portrait according to the portrait area when the foreground area contains the portrait area;
the blurring module 550 is further configured to determine whether to perform blurring on the background area according to the identified portrait motion.
As a possible implementation manner, the blurring module 550 is specifically configured to determine a target position indicated by the portrait motion; if the orientation of the background area relative to the portrait area is not matched with the orientation of the target, blurring the background area; and if the orientation of the background area relative to the portrait area is matched with the orientation of the target, maintaining the imaging effect of the background area.
As another possible implementation manner, the blurring module 550 is specifically configured to query blurring information corresponding to a preset instruction action when the identified portrait action matches the preset instruction action; the virtualization information includes whether to perform virtualization processing and/or a virtualization degree; if the blurring information indicates to perform blurring processing, blurring processing is performed on the background area in the main image according to the blurring degree indicated by the blurring information; if the blurring information indicates that blurring is not performed, the imaging effect of the background area in the main image is maintained.
An acquisition determining module 580 for acquiring a photographed geographical location before matching with an image feature of a preset scene; and determining the preset scenery matched with the geographical position according to the geographical position.
It should be noted that the foregoing explanation of the embodiment of the image blurring processing method is also applicable to the image blurring processing apparatus of this embodiment, and is not repeated here.
The image blurring processing device of the embodiment acquires a main image acquired by a main camera and acquires an auxiliary image acquired by an auxiliary camera; determining depth information of the main image according to the main image and the auxiliary image; determining a foreground area and a background area in the main image according to the depth information of the main image; extracting image characteristics of the background area, and matching the image characteristics with the image characteristics of a preset scene; and judging whether blurring processing is carried out on the background area or not according to the image characteristic matching result of the background area and the image characteristic of the preset scenery. Therefore, the blurring range can be intelligently adjusted, and the blurring effect of the imaging image is improved. Meanwhile, the auxiliary image shot by the auxiliary camera and the main image shot by the main camera are synchronously shot, so that when the main image is subjected to subsequent blurring processing according to the corresponding auxiliary image, the imaging effect of the imaging picture can be improved on the one hand, and the accuracy of the depth information is improved on the other hand, so that the image processing effect is better.
In order to implement the foregoing embodiments, the present application further proposes a mobile terminal, and fig. 7 is a schematic structural diagram of a terminal device according to another embodiment of the present application, and as shown in fig. 7, the terminal device 1000 includes: a housing 1100 and a primary camera 1112, a secondary camera 1113, a memory 1114, and a processor 1115 located within the housing 1100.
Wherein the memory 1114 stores executable program code; the processor 1115 executes a program corresponding to the executable program code by reading the executable program code stored in the memory 1114, for performing the image blurring processing method as described in the foregoing method embodiments.
In order to implement the above embodiments, the present application also proposes a computer-readable storage medium on which a computer program is stored, which, when executed by a processor of a mobile terminal, implements the image blurring processing method as proposed in the foregoing embodiments.
The mobile terminal may further include an Image Processing circuit, which may be implemented by hardware and/or software components, and may include various Processing units defining an ISP (Image Signal Processing) pipeline. FIG. 8 is a schematic diagram of an image processing circuit in one embodiment. As shown in fig. 8, for convenience of explanation, only aspects of the image processing technology related to the embodiments of the present application are shown.
As shown in fig. 8, the image processing circuit includes an ISP processor 940 and a control logic 950. The image data captured by the imaging device 910 is first processed by the ISP processor 940, and the ISP processor 940 analyzes the image data to capture image statistics that may be used to determine and/or control one or more parameters of the imaging device 910. Imaging device 910 may specifically include two cameras, each of which may include one or more lenses 912 and an image sensor 914. Image sensor 914 may include an array of color filters (e.g., Bayer filters), and image sensor 914 may acquire light intensity and wavelength information captured with each imaging pixel of image sensor 914 and provide a set of raw image data that may be processed by ISP processor 940. The sensor 920 may provide the raw image data to the ISP processor 940 based on the type of interface of the sensor 920. The sensor 920 interface may utilize an SMIA (Standard Mobile Imaging Architecture) interface, other serial or parallel camera interfaces, or a combination of the above.
The ISP processor 940 processes the raw image data pixel by pixel in a variety of formats. For example, each image pixel may have a bit depth of 8, 10, 12, or 14 bits, and the ISP processor 940 may perform one or more image processing operations on the raw image data, collecting statistical information about the image data. Wherein the image processing operations may be performed with the same or different bit depth precision.
ISP processor 940 may also receive pixel data from image memory 930. For example, raw pixel data is sent from the sensor 920 interface to the image memory 930, and the raw pixel data in the image memory 930 is then provided to the ISP processor 940 for processing. The image Memory 930 may be a part of a Memory device, a storage device, or a separate dedicated Memory within an electronic device, and may include a DMA (Direct Memory Access) feature.
Upon receiving raw image data from the sensor 920 interface or from the image memory 930, the ISP processor 940 may perform one or more image processing operations, such as temporal filtering. The processed image data may be sent to image memory 930 for additional processing before being displayed. ISP processor 940 receives processed data from image memory 930 and performs image data processing on the processed data in the raw domain and in the RGB and YCbCr color spaces. The processed image data may be output to a display 970 for viewing by a user and/or further processing by a Graphics Processing Unit (GPU). Further, the output of ISP processor 940 may also be sent to image memory 930 and display 970 may read image data from image memory 930. In one embodiment, image memory 930 may be configured to implement one or more frame buffers. In addition, the output of the ISP processor 940 may be transmitted to an encoder/decoder 960 for encoding/decoding the image data. The encoded image data may be saved and decompressed before being displayed on a display 970 device. The encoder/decoder 960 may be implemented by a CPU or GPU or coprocessor.
The statistical data determined by the ISP processor 940 may be transmitted to the control logic 950 unit. For example, the statistical data may include image sensor 914 statistics such as auto-exposure, auto-white balance, auto-focus, flicker detection, black level compensation, lens 912 shading correction, and the like. The control logic 950 may include a processor and/or microcontroller that executes one or more routines (e.g., firmware) that determine control parameters of the imaging device 910 and ISP control parameters based on the received statistical data. For example, the imaging device control parameters may include sensor 920 control parameters (e.g., gain, integration time for exposure control), camera flash control parameters, lens 912 control parameters (e.g., focal length for focusing or zooming), or a combination of these parameters. The ISP control parameters may include gain levels and color correction matrices for automatic white balance and color adjustment (e.g., during RGB processing), as well as lens 912 shading correction parameters.
In the description herein, reference to "one embodiment," "some embodiments," "an example," "a specific example," or "some examples" means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. In this specification, such terms do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples, and those skilled in the art may combine features of the different embodiments or examples described in this specification provided they do not contradict one another.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present application, "plurality" means at least two, e.g., two or three, unless specifically limited otherwise.
Any process or method description in a flowchart, or otherwise described herein, may be understood as representing a module, segment, or portion of code that includes one or more executable instructions for implementing steps of a custom logic function or process. Alternate implementations are included within the scope of the preferred embodiments of the present application; in them, functions may be executed out of the order shown or discussed, including substantially concurrently or in the reverse order, depending on the functionality involved, as would be understood by those skilled in the art.
The logic and/or steps represented in the flowcharts or otherwise described herein, for example an ordered listing of executable instructions for implementing logical functions, can be embodied in any computer-readable medium for use by, or in connection with, an instruction execution system, apparatus, or device, such as a computer-based system, a processor-containing system, or another system that can fetch instructions from the instruction execution system, apparatus, or device and execute them. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium include: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CD-ROM). The computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be captured electronically, for instance via optical scanning of the paper or other medium, then compiled, interpreted, or otherwise processed in a suitable manner if necessary, and stored in a computer memory.
It should be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. If implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps of the above method embodiments may be implemented by hardware instructed by a program; the program may be stored in a computer-readable storage medium and, when executed, performs one or a combination of the steps of the method embodiments.
In addition, the functional units in the embodiments of the present application may be integrated into one processing module, or each unit may exist alone physically, or two or more units may be integrated into one module. The integrated module may be implemented in the form of hardware or in the form of a software functional module. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer-readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic or optical disk, etc. Although embodiments of the present application have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present application, and that variations, modifications, substitutions and alterations may be made to the above embodiments by those of ordinary skill in the art within the scope of the present application.
Claims (10)
1. An image blurring processing method is characterized by comprising the following steps:
acquiring a main image captured by a main camera and an auxiliary image captured by an auxiliary camera;
determining depth information of the main image according to the main image and the auxiliary image;
determining a foreground area and a background area in the main image according to the depth information of the main image;
extracting image features from the background area, and matching the image features with image features of a preset scene;
and determining whether to perform blurring processing on the background area according to the result of matching the image features of the background area with the image features of the preset scene.
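(Editorial illustration, not claim language.) The sketch below is a hypothetical end-to-end reading of claim 1 that uses OpenCV stereo block matching for the depth step and ORB descriptors for the feature-matching step; the claim itself does not prescribe these algorithms, and the depth threshold, calibration values, and match threshold are all invented.

```python
import cv2
import numpy as np

def blur_decision(main_bgr, aux_bgr, scene_desc, depth_split=2.0, match_thresh=30):
    """Hypothetical sketch of claim 1. `scene_desc` stands in for the preset
    scene's image features (here, precomputed ORB descriptors)."""
    gray_m = cv2.cvtColor(main_bgr, cv2.COLOR_BGR2GRAY)
    gray_a = cv2.cvtColor(aux_bgr, cv2.COLOR_BGR2GRAY)

    # Depth from the two views (assumed rectified): Z = f * B / disparity.
    stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disparity = stereo.compute(gray_m, gray_a).astype(np.float32) / 16.0
    focal_px, baseline_m = 700.0, 0.012              # invented calibration
    depth = focal_px * baseline_m / np.maximum(disparity, 1e-3)

    background = depth > depth_split                 # far pixels form the background area

    # Extract ORB features from the background area and match against the scene.
    orb = cv2.ORB_create()
    _, desc = orb.detectAndCompute(gray_m, background.astype(np.uint8) * 255)
    if desc is not None and scene_desc is not None:
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        if len(matcher.match(desc, scene_desc)) >= match_thresh:
            return main_bgr                          # scene recognized: keep imaging effect

    out = cv2.GaussianBlur(main_bgr, (0, 0), 15)     # blur everything...
    out[~background] = main_bgr[~background]         # ...then restore the foreground
    return out
```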
2. The image blurring processing method as claimed in claim 1, wherein, after determining whether to perform blurring processing on the background area according to the result of matching the image features of the background area with the image features of the preset scene, the method further comprises:
performing portrait recognition on the foreground area if it is determined, according to the matching result, not to perform blurring processing on the background area;
performing portrait action recognition on the portrait area when the foreground area comprises a portrait area;
and continuing to determine whether to perform blurring processing on the background area according to the recognized portrait action.
3. The image blurring processing method as claimed in claim 2, wherein continuing to determine whether to perform blurring processing on the background area according to the recognized portrait action comprises:
determining a target orientation indicated by the portrait action;
performing blurring processing on the background area if the orientation of the background area relative to the portrait area does not match the target orientation;
and maintaining the imaging effect of the background area if the orientation of the background area relative to the portrait area matches the target orientation.
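(Editorial illustration, not claim language.) One geometric reading of this orientation test: treat the recognized portrait action as a 2D pointing vector and ask whether the background region lies within an angular tolerance of it. The source of the pointing vector (e.g., elbow-to-wrist keypoints) and the 30-degree tolerance are assumptions.

```python
import numpy as np

def orientation_matches(pointing_vec, portrait_center, region_center, max_deg=30.0):
    """True if the background region lies in the direction the portrait indicates."""
    to_region = np.asarray(region_center, float) - np.asarray(portrait_center, float)
    v = np.asarray(pointing_vec, float)
    # Angle between the pointing vector and the portrait-to-region vector.
    cos = v @ to_region / (np.linalg.norm(v) * np.linalg.norm(to_region) + 1e-9)
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))) <= max_deg

# Pointing right at a region to the portrait's right: matched, so no blurring.
print(orientation_matches((1, 0), (100, 200), (400, 210)))   # True
```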
4. The image blurring processing method as claimed in claim 2, wherein continuing to determine whether to perform blurring processing on the background area according to the recognized portrait action comprises:
querying blurring information corresponding to a preset instruction action if the recognized portrait action matches the preset instruction action, the blurring information comprising whether to perform blurring processing and/or a blurring degree;
performing blurring processing on the background area in the main image according to the blurring degree indicated by the blurring information if the blurring information indicates that blurring processing is to be performed;
and maintaining the imaging effect of the background area in the main image if the blurring information indicates that blurring processing is not to be performed.
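(Editorial illustration, not claim language.) A minimal sketch of the query step: a lookup table from preset instruction actions to blurring information. The action names and blurring degrees are invented, and the mask-based blur mirrors the sketch after claim 1.

```python
import cv2

# Invented mapping from preset instruction actions to blurring information;
# the patent discloses neither the actions nor the degrees.
BLUR_TABLE = {
    "palm_open": {"blur": False, "sigma": 0.0},
    "v_sign":    {"blur": True,  "sigma": 8.0},
    "fist":      {"blur": True,  "sigma": 15.0},
}

def handle_action(main_bgr, background_mask, action):
    info = BLUR_TABLE.get(action)
    if info is None or not info["blur"]:
        return main_bgr                                  # keep the imaging effect
    out = cv2.GaussianBlur(main_bgr, (0, 0), info["sigma"])
    out[~background_mask] = main_bgr[~background_mask]   # restore the foreground
    return out
```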
5. The image blurring processing method as claimed in any one of claims 1 to 4, further comprising, before the matching with the image features of the preset scene:
acquiring the geographic location at which the image is shot;
and determining, according to the geographic location, a preset scene matched with the geographic location.
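(Editorial illustration, not claim language.) A toy geolocation lookup under the assumption that preset scenes are keyed to landmark coordinates; the landmark table, the 5 km radius, and the haversine matching are all illustrative, and a real system might instead query a location service.

```python
import math

# Invented landmark table mapping preset scenes to (lat, lon).
LANDMARKS = {"west_lake": (30.2460, 120.1500), "the_bund": (31.2336, 121.4906)}

def preset_scene_for(lat, lon, radius_km=5.0):
    """Return the preset scene whose landmark is nearest the shot location."""
    def haversine(a, b):
        # Great-circle distance in kilometers between two (lat, lon) points.
        p1, p2 = math.radians(a[0]), math.radians(b[0])
        dp, dl = p2 - p1, math.radians(b[1] - a[1])
        h = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
        return 2 * 6371.0 * math.asin(math.sqrt(h))
    name, dist = min(((n, haversine((lat, lon), c)) for n, c in LANDMARKS.items()),
                     key=lambda t: t[1])
    return name if dist <= radius_km else None

print(preset_scene_for(30.25, 120.16))   # 'west_lake'
```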
6. The image blurring processing method as claimed in any one of claims 1 to 4, wherein determining whether to perform blurring processing on the background area according to the result of matching the image features of the background area with the image features of the preset scene comprises:
performing blurring processing on the background area if the image features of the background area do not match the image features of the preset scene;
and maintaining the imaging effect of the background area if the image features of the background area match the image features of the preset scene.
7. An image blurring processing apparatus, comprising:
an acquisition module, configured to acquire a main image captured by a main camera and an auxiliary image captured by an auxiliary camera;
a depth processing module, configured to determine depth information of the main image according to the main image and the auxiliary image;
a region identification module, configured to determine a foreground area and a background area in the main image according to the depth information of the main image;
a feature extraction module, configured to extract image features from the background area and match them with image features of a preset scene;
and a blurring module, configured to determine whether to perform blurring processing on the background area according to the result of matching the image features of the background area with the image features of the preset scene.
8. The image blurring processing device according to claim 7, further comprising:
a portrait recognition module, configured to perform portrait recognition on the foreground area when it is determined not to perform blurring processing on the background area because the image features of the background area match the image features of the preset scene;
an action recognition module, configured to perform portrait action recognition on the portrait area when the foreground area comprises a portrait area;
and the blurring module is further configured to determine whether to perform blurring processing on the background area according to the recognized portrait action.
9. A mobile terminal, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the program, implements the image blurring processing method as claimed in any one of claims 1 to 6.
10. A computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the image blurring processing method as claimed in any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711243733.8A CN108024058B (en) | 2017-11-30 | 2017-11-30 | Image blurring processing method, device, mobile terminal and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108024058A true CN108024058A (en) | 2018-05-11 |
CN108024058B CN108024058B (en) | 2019-08-02 |
Family
ID=62078056
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711243733.8A Active CN108024058B (en) | 2017-11-30 | 2017-11-30 | Image blurs processing method, device, mobile terminal and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108024058B (en) |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108924435A (en) * | 2018-07-12 | 2018-11-30 | Oppo广东移动通信有限公司 | Image processing method, device and electronic equipment |
CN109151314A (en) * | 2018-09-10 | 2019-01-04 | 珠海格力电器股份有限公司 | Camera blurring processing method and device for terminal, storage medium and terminal |
CN109862262A (en) * | 2019-01-02 | 2019-06-07 | 上海闻泰电子科技有限公司 | Image weakening method, device, terminal and storage medium |
CN111343257A (en) * | 2020-02-17 | 2020-06-26 | 深圳市广和通无线股份有限公司 | Method and device for realizing universality of wireless communication module, wireless communication equipment and storage medium |
CN111524087A (en) * | 2020-04-24 | 2020-08-11 | 展讯通信(上海)有限公司 | Image processing method and device, storage medium and terminal |
CN112532882A (en) * | 2020-11-26 | 2021-03-19 | 维沃移动通信有限公司 | Image display method and device |
CN113129241A (en) * | 2019-12-31 | 2021-07-16 | RealMe重庆移动通信有限公司 | Image processing method and device, computer readable medium and electronic equipment |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103945118A (en) * | 2014-03-14 | 2014-07-23 | 华为技术有限公司 | Picture blurring method and device and electronic equipment |
CN106530241A (en) * | 2016-10-31 | 2017-03-22 | 努比亚技术有限公司 | Image blurring processing method and apparatus |
CN106651755A (en) * | 2016-11-17 | 2017-05-10 | 宇龙计算机通信科技(深圳)有限公司 | Panoramic image processing method and device for terminal and terminal |
CN106993112A (en) * | 2017-03-09 | 2017-07-28 | 广东欧珀移动通信有限公司 | Background-blurring method and device and electronic installation based on the depth of field |
WO2017143128A1 (en) * | 2016-02-18 | 2017-08-24 | Osterhout Group, Inc. | Haptic systems for head-worn computers |
JP2017200088A (en) * | 2016-04-28 | 2017-11-02 | キヤノン株式会社 | Subject tracking device, control method therefor, imaging apparatus, and program |
CN107370958A (en) * | 2017-08-29 | 2017-11-21 | 广东欧珀移动通信有限公司 | Image virtualization processing method, device and camera terminal |
Also Published As
Publication number | Publication date |
---|---|
CN108024058B (en) | 2019-08-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107948519B (en) | Image processing method, device and equipment | |
CN108024058B (en) | Image blurring processing method, device, mobile terminal and storage medium | |
KR102279436B1 (en) | Image processing methods, devices and devices | |
KR102306304B1 (en) | Dual camera-based imaging method and device and storage medium | |
CN107977940B (en) | Background blurring processing method, device and equipment | |
EP3499863B1 (en) | Method and device for image processing | |
KR102293443B1 (en) | Image processing method and mobile terminal using dual camera | |
CN107945105B (en) | Background blurring processing method, device and equipment | |
CN108024054B (en) | Image processing method, device, equipment and storage medium | |
CN108111749B (en) | Image processing method and device | |
CN108154514B (en) | Image processing method, device and equipment | |
CN107872631B (en) | Image shooting method and device based on double cameras and mobile terminal | |
CN108093158B (en) | Image blurring processing method and device, mobile device and computer readable medium | |
WO2017045558A1 (en) | Depth-of-field adjustment method and apparatus, and terminal | |
CN111726521B (en) | Photographing method and photographing device of terminal and terminal | |
CN107172352B (en) | Focusing control method and device, computer-readable storage medium and mobile terminal | |
US20200364832A1 (en) | Photographing method and apparatus | |
CN111246093B (en) | Image processing method, image processing device, storage medium and electronic equipment | |
CN113298735A (en) | Image processing method, image processing device, electronic equipment and storage medium | |
CN108052883B (en) | User photographing method, device and equipment | |
CN109582811B (en) | Image processing method, image processing device, electronic equipment and computer readable storage medium | |
JP2020046475A (en) | Image processing device and control method therefor | |
CN114762313B (en) | Image processing method, device, storage medium and electronic equipment | |
JP2014175703A (en) | Imaging apparatus, control method for the same, and program |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| CB02 | Change of applicant information | Address after: 523860 No. 18, Wu Sha Beach Road, Changan Town, Dongguan, Guangdong. Applicant after: OPPO Guangdong Mobile Communications Co., Ltd. Address before: 523860 No. 18, Wu Sha Beach Road, Changan Town, Dongguan, Guangdong. Applicant before: Guangdong OPPO Mobile Communications Co., Ltd. |
| GR01 | Patent grant | |