CN112165575A - Image blurring processing method and device, storage medium and electronic equipment - Google Patents


Info

Publication number: CN112165575A
Authority: CN (China)
Prior art keywords: image, blurring, depth, camera, preview
Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Application number: CN202011024322.1A
Other languages: Chinese (zh)
Other versions: CN112165575B (en)
Inventor: 钱捷 (Qian Jie)
Current Assignee: Oppo Chongqing Intelligent Technology Co Ltd
Original Assignee: Oppo Chongqing Intelligent Technology Co Ltd
Priority date · Filing date · Publication date
Application filed by Oppo Chongqing Intelligent Technology Co Ltd filed Critical Oppo Chongqing Intelligent Technology Co Ltd
Priority to CN202011024322.1A priority Critical patent/CN112165575B/en
Publication of CN112165575A publication Critical patent/CN112165575A/en
Application granted granted Critical
Publication of CN112165575B publication Critical patent/CN112165575B/en
Current legal status: Active
Anticipated expiration

Classifications

    • H04N23/632 — Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters, for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters (under H04N23/00, Cameras or camera modules comprising electronic image sensors; Control thereof)
    • H04N23/80 — Camera processing pipelines; Components thereof
    • H04N5/262 — Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects (under H04N5/222, Studio circuitry; Studio devices; Studio equipment)

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Studio Devices (AREA)

Abstract

The disclosure provides an image blurring processing method and device, a storage medium, and an electronic device, and relates to the technical field of image processing. The image blurring processing method includes the following steps: if the terminal is in a blurring mode, responding to a main-camera acquisition request, acquiring a first preview image of the main camera, and configuring a channel for an auxiliary camera; at a hardware abstraction layer, calculating a depth image according to the first preview image of the main camera and a second preview image of the auxiliary camera, and filling the depth image into the channel of the auxiliary camera; and acquiring, through an image post-processing service layer, real-time blurring state information corresponding to the depth image, and performing image blurring on the first preview image by combining the first preview image and the depth image according to the real-time blurring state information, so as to obtain a target image. Embodiments of the disclosure can improve image processing efficiency.

Description

Image blurring processing method and device, storage medium and electronic equipment
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to an image blurring processing method, an image blurring processing apparatus, a computer-readable storage medium, and an electronic device.
Background
With the development of intelligent terminals, photographing functions have become increasingly sophisticated. In some scenes, an image needs to be blurred to meet the requirements of the scene.
In the related art, the camera application must request the preview streams of the main camera and the auxiliary camera at the same time. The camera's hardware abstraction layer collects frames from both cameras simultaneously; after the frames are received, depth-map calculation is performed on the two preview images, and the blurring effect is then synthesized to obtain a blurred image.
This approach processes depth-map calculation and blurring-effect creation serially, so their combined time can easily exceed the interval of one real-time preview frame, causing the preview to stutter and degrading the blurring effect of the image. The calculation process is therefore relatively time-consuming, power-hungry, and inefficient.
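To make the timing problem concrete, the following sketch checks the preview frame budget with purely illustrative stage durations (the patent gives no concrete timings; the 20 ms and 18 ms figures are assumptions for illustration only):

```python
# Illustrative frame-budget check for a 30 fps preview stream.
# The stage durations are assumed values, not taken from the patent.

FRAME_INTERVAL_MS = 1000 / 30   # ~33.3 ms budget per preview frame

depth_ms = 20                   # assumed depth-map computation time
blur_ms = 18                    # assumed blurring-synthesis time

# Related-art flow: one component performs both stages back to back.
serial_ms = depth_ms + blur_ms  # 38 ms

# Pipelined flow: the HAL computes depth for the next frame while the
# post-processing service blurs the current one, so the steady-state
# per-frame cost is the slower stage, not the sum.
pipelined_ms = max(depth_ms, blur_ms)  # 20 ms

print(f"budget={FRAME_INTERVAL_MS:.1f} ms, serial={serial_ms} ms, pipelined={pipelined_ms} ms")
print("serial overruns frame budget:", serial_ms > FRAME_INTERVAL_MS)
print("pipelined overruns frame budget:", pipelined_ms > FRAME_INTERVAL_MS)
```

Under these assumed numbers the serial flow overruns the ~33 ms budget while the pipelined flow stays within it, which is the stutter mechanism the paragraph above describes.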
Disclosure of Invention
The present disclosure provides an image blurring processing method, an image blurring processing apparatus, a computer-readable storage medium, and an electronic device, which overcome, at least to some extent, the problems of preview stutter and low computational efficiency.
According to an aspect of the present disclosure, there is provided an image blurring processing method, including: if the terminal is in a blurring mode, responding to a main-camera acquisition request, acquiring a first preview image of the main camera, and configuring a channel for an auxiliary camera; at a hardware abstraction layer, calculating a depth image according to the first preview image of the main camera and a second preview image of the auxiliary camera, filling the depth image into the channel of the auxiliary camera, and sending the first preview image of the main camera and the depth image to an image post-processing service layer; and acquiring, through the image post-processing service layer, real-time blurring state information corresponding to the depth image, and performing image blurring on the first preview image by combining the first preview image and the depth image according to the real-time blurring state information, so as to obtain a target image.
According to an aspect of the present disclosure, there is provided an image blurring processing apparatus, including: a channel configuration module, configured to, if the terminal is in a blurring mode, respond to a main-camera acquisition request, acquire a first preview image of the main camera, and configure a channel for an auxiliary camera; a depth image calculation module, configured to calculate, at a hardware abstraction layer, a depth image according to the first preview image of the main camera and a second preview image of the auxiliary camera, fill the depth image into the channel of the auxiliary camera, and send the first preview image of the main camera and the depth image to an image post-processing service layer; and an image blurring module, configured to acquire, through the image post-processing service layer, real-time blurring state information corresponding to the depth image, and perform image blurring on the first preview image by combining the first preview image and the depth image according to the real-time blurring state information, so as to obtain a target image.
According to an aspect of the present disclosure, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements an image blurring processing method as described in any one of the above.
According to an aspect of the present disclosure, there is provided an electronic device including: a processor; and a memory for storing executable instructions of the processor; wherein the processor is configured to perform any one of the image blurring processing methods described above via execution of the executable instructions.
In the image blurring processing method, the image blurring processing apparatus, the computer-readable storage medium, and the electronic device provided in some embodiments of the present disclosure, on the one hand, the depth image is calculated from the first preview image and the second preview image at the hardware abstraction layer. This reduces the number of images processed by any single node, avoids stutter while previewing the image stream, lowers computational power consumption, and improves efficiency and smoothness. On the other hand, because a channel is configured for the auxiliary camera, the depth image calculated by the hardware abstraction layer is filled into that configured channel; the second preview image of the auxiliary camera need not be transmitted out of the hardware abstraction layer for depth calculation, the depth image and the blurred image can be calculated by different components, and a distributed real-time background-blurring preview is realized. This improves image transmission efficiency, reduces memory consumption, and improves the blurring effect.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure. It is to be understood that the drawings in the following description are merely exemplary of the disclosure, and that other drawings may be derived from those drawings by one of ordinary skill in the art without the exercise of inventive faculty. In the drawings:
fig. 1 shows a flowchart of an image blurring process in the related art.
Fig. 2 is a schematic diagram illustrating an application scenario to which the image blurring processing method according to the embodiment of the present disclosure may be applied.
FIG. 3 shows a schematic structural diagram of an electronic device suitable for use in implementing embodiments of the present disclosure.
Fig. 4 shows a flowchart of an image blurring processing method in the embodiment of the present disclosure.
Fig. 5 shows a schematic diagram of a camera architecture in an embodiment of the present disclosure.
Fig. 6 shows a flowchart of image blurring according to status information of real-time blurring in the embodiment of the present disclosure.
Fig. 7 schematically shows a flowchart of the image blurring process in the embodiment of the present disclosure.
Fig. 8 schematically shows an overall flowchart of image blurring based on a camera architecture in an embodiment of the present disclosure.
Fig. 9 schematically shows a flowchart for determining a target image in an embodiment of the present disclosure.
Fig. 10 schematically shows a block diagram of an image blurring processing apparatus in an exemplary embodiment of the present disclosure.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the disclosure. One skilled in the relevant art will recognize, however, that the subject matter of the present disclosure can be practiced without one or more of the specific details, or with other methods, components, devices, steps, and the like. In other instances, well-known technical solutions have not been shown or described in detail to avoid obscuring aspects of the present disclosure.
Furthermore, the drawings are merely schematic illustrations of the present disclosure and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and thus their repetitive description will be omitted. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
The flow charts shown in the drawings are merely illustrative and do not necessarily include all of the steps. For example, some steps may be decomposed, and some steps may be combined or partially combined, so that the actual execution sequence may be changed according to the actual situation. In addition, all of the following terms "first" and "second" are used for distinguishing purposes only and should not be construed as limiting the present disclosure.
Fig. 1 schematically shows an architecture diagram of image blurring in the related art. Referring to fig. 1, the camera application 101 must request the preview streams of the main camera and the auxiliary camera at the same time and connects, through the camera service layer 102, to the camera hardware abstraction layer 103, which collects frames from both cameras simultaneously. After the frames are received, the preview images of the main and auxiliary cameras are sent to the real-time blurring processing module 104 of the image post-processing service. Finally, the real-time blurred image is output and sent back to the camera for display or video recording.
In this approach, depth-map calculation and blurring-effect creation run serially, so their combined time can easily exceed the interval of one real-time preview frame, causing the preview to stutter. Moreover, because the depth map is computed in software, the calculation is relatively slow and power-hungry.
In order to solve the above technical problem, fig. 2 is a schematic diagram illustrating an application scenario to which the image blurring processing method or the image blurring processing apparatus according to the embodiment of the present disclosure may be applied.
The image blurring processing method can be applied to photographing scenes. Referring to fig. 2, the method can be applied when a terminal 201 photographs a target object 202. The terminal 201 may be any type of device capable of capturing and displaying images or video, such as a smartphone, tablet computer, desktop computer, vehicle-mounted device, or wearable device. The target object 202 may be any object to be photographed in various scenes, such as a person, an animal, or a landscape, and may be stationary or moving. Specifically, a camera on the terminal 201, or a camera application, may be used to capture an image of the target. The camera on the terminal may comprise a plurality of camera modules, and any one or more of them may be invoked to acquire images of the target object.
It should be noted that the image blurring processing method provided by the embodiments of the present disclosure may be executed entirely by a server or entirely by the terminal. Accordingly, the image blurring processing apparatus may be provided in the terminal or in the server.
FIG. 3 shows a schematic diagram of an electronic device suitable for use in implementing exemplary embodiments of the present disclosure. It should be noted that the electronic device shown in fig. 3 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
The electronic device of the present disclosure includes at least a processor and a memory for storing one or more programs, which when executed by the processor, cause the processor to implement the image blurring processing method of the exemplary embodiments of the present disclosure.
Specifically, as shown in fig. 3, the electronic device 300 may include: a processor 310, an internal memory 321, an external memory interface 322, a Universal Serial Bus (USB) interface 330, a charging management module 340, a power management module 341, a battery 342, an antenna 1, an antenna 2, a mobile communication module 350, a wireless communication module 360, an audio module 370, a speaker 371, a receiver 372, a microphone 373, an earphone interface 374, a sensor module 380, a display screen 390, a camera module 391, an indicator 392, a motor 393, a button 394, a Subscriber Identity Module (SIM) card interface 395, and the like. The sensor module 380 may include a depth sensor, a pressure sensor, a gyroscope sensor, an air pressure sensor, a magnetic sensor, an acceleration sensor, a distance sensor, a proximity light sensor, a fingerprint sensor, a temperature sensor, a touch sensor, an ambient light sensor, a bone conduction sensor, and the like.
It is to be understood that the illustrated structure of the embodiment of the present application does not specifically limit the electronic device 300. In other embodiments of the present application, electronic device 300 may include more or fewer components than shown, or some components may be combined, some components may be split, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Processor 310 may include one or more processing units, such as: the Processor 310 may include an Application Processor (AP), a modem Processor, a Graphics Processing Unit (GPU), an Image Signal Processor (ISP), a controller, a video codec, a Digital Signal Processor (DSP), a baseband Processor, and/or a Neural Network Processor (NPU), and the like. The different processing units may be separate devices or may be integrated into one or more processors. Additionally, a memory may be provided in processor 310 for storing instructions and data.
The USB interface 330 is an interface conforming to the USB standard specification, and may specifically be a Mini-USB interface, a Micro-USB interface, a USB Type-C interface, or the like. The USB interface 330 may be used to connect a charger to charge the electronic device 300, to transmit data between the electronic device 300 and peripheral devices, to connect earphones and play audio through them, or to connect other electronic devices.
The charging management module 340 is configured to receive charging input from a charger. The charger may be a wireless charger or a wired charger. The power management module 341 is configured to connect the battery 342, the charging management module 340 and the processor 310. The power management module 341 receives the input from the battery 342 and/or the charging management module 340, and supplies power to the processor 310, the internal memory 321, the display screen 390, the camera module 391, the wireless communication module 360, and the like.
The wireless communication function of the electronic device 300 may be implemented by the antenna 1, the antenna 2, the mobile communication module 350, the wireless communication module 360, a modem processor, a baseband processor, and the like.
The mobile communication module 350 may provide a solution including 2G/3G/4G/5G wireless communication applied to the electronic device 300.
The Wireless Communication module 360 may provide solutions for Wireless Communication applied to the electronic device 300, including Wireless Local Area Networks (WLANs) (e.g., Wireless Fidelity (Wi-Fi) network), Bluetooth (BT), Global Navigation Satellite System (GNSS), Frequency Modulation (FM), Near Field Communication (NFC), Infrared (IR), and the like.
The electronic device 300 implements a display function through the GPU, the display screen 390, and the application processor. The GPU is a microprocessor for image processing and is connected to the display screen 390 and the application processor. The GPU performs the mathematical and geometric calculations needed for graphics rendering. The processor 310 may include one or more GPUs that execute program instructions to generate or alter display information.
The electronic device 300 may implement a shooting function through the ISP, the camera module 391, the video codec, the GPU, the display screen 390, the application processor, and the like. In some embodiments, the electronic device 300 may include 1 or N camera modules 391, where N is a positive integer greater than 1. If the electronic device 300 includes N cameras, one of them is the main camera and the others may be auxiliary cameras, such as a telephoto camera.
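As a hypothetical sketch of designating a main camera and one auxiliary camera among the N camera modules described above (the mapping of camera IDs to roles is illustrative; a real Android device exposes cameras through its own camera APIs):

```python
# Hypothetical camera-role selection. `modules` maps a camera ID to an
# assumed role string; these names are illustrative, not a real API.

def pick_cameras(modules):
    """Return (main_id, auxiliary_id) from a {camera_id: role} mapping."""
    main = next(cid for cid, role in modules.items() if role == "main")
    # Any remaining module can serve as the auxiliary camera used for
    # the second preview stream in depth calculation.
    auxiliary = next(cid for cid in modules if cid != main)
    return main, auxiliary

modules = {"0": "main", "1": "telephoto", "2": "wide"}
main_id, aux_id = pick_cameras(modules)
print(main_id, aux_id)  # → 0 1
```

In this sketch any non-main module qualifies as the auxiliary camera; a real implementation would pick the one whose field of view best overlaps the main camera's for stereo depth.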
The internal memory 321 may be used to store computer-executable program code, which includes instructions. The internal memory 321 may include a program storage area and a data storage area. The external memory interface 322 may be used to connect an external memory card, such as a Micro SD card, to extend the memory capability of the electronic device 300.
The electronic device 300 may implement an audio function through the audio module 370, the speaker 371, the receiver 372, the microphone 373, the earphone interface 374, the application processor, and the like. Such as music playing, recording, etc.
The audio module 370 is used to convert digital audio information into an analog audio signal output and also to convert an analog audio input into a digital audio signal. The audio module 370 may also be used to encode and decode audio signals. In some embodiments, the audio module 370 may be disposed in the processor 310, or some functional modules of the audio module 370 may be disposed in the processor 310.
The speaker 371 is used to convert audio electrical signals into sound signals; the electronic device 300 can play music or conduct hands-free calls through the speaker 371. The receiver 372, also referred to as an "earpiece", converts audio electrical signals into sound signals; when the electronic device 300 receives a call or voice information, the user can hear the voice by holding the receiver 372 close to the ear. The microphone 373 converts sound signals into electrical signals; when making a call or sending voice information, the user can speak close to the microphone 373 to input a voice signal. The electronic device 300 may be provided with at least one microphone 373. The headset interface 374 is used to connect wired headsets.
For sensors included with the electronic device 300, a depth sensor is used to obtain depth information of the scene. The pressure sensor is used for sensing a pressure signal and converting the pressure signal into an electric signal. The gyro sensor may be used to determine the motion pose of the electronic device 300. The air pressure sensor is used for measuring air pressure. The magnetic sensor includes a hall sensor. The electronic device 300 may detect the opening and closing of the flip holster using a magnetic sensor. The acceleration sensor may detect the magnitude of acceleration of the electronic device 300 in various directions (typically three axes). The distance sensor is used for measuring distance. The proximity light sensor may include, for example, a Light Emitting Diode (LED) and a light detector, such as a photodiode. The fingerprint sensor is used for collecting fingerprints. The temperature sensor is used for detecting temperature. The touch sensor can communicate the detected touch operation to the application processor to determine the touch event type. Visual output associated with the touch operation may be provided via the display screen 390. The ambient light sensor is used for sensing the ambient light brightness. The bone conduction sensor may acquire a vibration signal.
Keys 394 include a power on key, a volume key, and the like. The keys 394 may be mechanical keys. Or may be touch keys. Motor 393 may generate a vibration cue. Motor 393 can be used for both an incoming call vibration prompt and for touch vibration feedback. Indicator 392 may be an indicator light that may be used to indicate a state of charge, a change in charge, or a message, missed call, notification, etc. The SIM card interface 395 is for connecting a SIM card. The electronic device 300 interacts with the network through the SIM card to implement functions such as communication and data communication.
The present application also provides a computer-readable storage medium, which may be contained in the electronic device described in the above embodiments; or may exist separately without being assembled into the electronic device.
A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable storage medium may transmit, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable storage medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
The computer-readable storage medium carries one or more programs which, when executed by an electronic device, cause the electronic device to implement the method as described in the embodiments below.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software, or may be implemented by hardware, and the described units may also be disposed in a processor. Wherein the names of the elements do not in some way constitute a limitation on the elements themselves.
In the embodiments of the present disclosure, an image blurring processing method is first provided. Fig. 4 schematically shows a flowchart of the image blurring processing method. As shown in fig. 4, the method mainly includes the following steps:
in step S410, if the terminal is in the blurring mode, in response to a main-camera acquisition request, acquiring a first preview image of the main camera, and configuring a channel for the auxiliary camera;
in step S420, at the hardware abstraction layer, calculating a depth image according to the first preview image of the main camera and the second preview image of the auxiliary camera, filling the depth image into the channel of the auxiliary camera, and sending the first preview image of the main camera and the depth image to the image post-processing service layer;
in step S430, acquiring, through the image post-processing service layer, the real-time blurring state information corresponding to the depth image, and performing image blurring on the first preview image by combining the first preview image and the depth image according to the real-time blurring state information, so as to obtain a target image.
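As a rough illustration of the idea behind steps S420 and S430, the sketch below treats the depth map as a per-pixel mask over the main camera's preview image: pixels nearer than a threshold stay sharp while farther (background) pixels are blurred. The box blur and the hard near/far threshold are deliberate simplifications, not the patent's actual blurring algorithm, which varies blur with the real-time blurring state information:

```python
def box_blur(img, k=1):
    """Average each pixel with its neighbors within radius k (edges clamped).

    `img` is a 2-D list of grayscale values; a stand-in for a real blur
    kernel, used here only to illustrate depth-masked blurring.
    """
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[min(max(y + dy, 0), h - 1)][min(max(x + dx, 0), w - 1)]
                    for dy in range(-k, k + 1) for dx in range(-k, k + 1)]
            out[y][x] = sum(vals) / len(vals)
    return out

def blur_with_depth(preview, depth, threshold):
    """Keep pixels nearer than `threshold` sharp; blur the background."""
    blurred = box_blur(preview)
    return [[blurred[y][x] if depth[y][x] >= threshold else preview[y][x]
             for x in range(len(preview[0]))] for y in range(len(preview))]

# Toy frame: left half is near (depth 1), right half is far (depth 10).
preview = [[0.0, 100.0, 0.0, 100.0] for _ in range(4)]
depth = [[1.0, 1.0, 10.0, 10.0] for _ in range(4)]
out = blur_with_depth(preview, depth, threshold=5)
print(out[0])  # near columns unchanged, far columns smoothed
```

The near columns pass through untouched while the far columns are averaged with their neighbors, which is the background-blurring behavior the target image exhibits.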
In the technical solution provided by the embodiments of the present disclosure, on the one hand, the depth image is calculated from the first preview image and the second preview image at the hardware abstraction layer, while image blurring is performed at the image post-processing service layer. Depth calculation and blurring-effect creation therefore need not be stacked serially within one component, the risk of their combined time exceeding one frame interval is avoided, the image processing load of any single node is reduced, preview stutter is avoided, computational power consumption is lowered, and efficiency is improved. On the other hand, because a channel is configured for the auxiliary camera, the depth image calculated by the hardware abstraction layer is filled into that configured channel; the second preview image of the auxiliary camera need not be transmitted out of the hardware abstraction layer for depth calculation, which improves image transmission efficiency and reduces memory overhead.
Next, an image blurring processing method in the embodiment of the present disclosure will be described in detail with reference to the drawings.
In step S410, if the terminal is in the blurring mode, in response to a main camera acquisition request, acquiring a first preview image of the main camera, and configuring a path of the auxiliary camera.
In the embodiment of the present disclosure, the terminal being in the blurring mode means that the terminal is in a photographing state and that the blurring mode is enabled in that state. The terminal may be an intelligent terminal capable of image acquisition, and may include at least one main camera and one auxiliary camera, or one main camera and a plurality of auxiliary cameras; the number of cameras is not specially limited here. The photographing state means that the camera of the terminal is started, or that a camera application is opened and the camera of the terminal is called to photograph the target object. The terminal may determine a target object in response to a trigger operation by a user. The trigger operation refers to an operation for triggering selection of a target area. It may be performed by the user, and the processor of the terminal may detect whether the user's trigger operation has been received. The trigger operation may be any type of operation for selecting the photographed object, for example one or a combination of a click, a button press, voice, an expression, and a body motion; it is not specially limited here, as long as it can trigger the terminal to select the target object. The blurring mode refers to blurring part of the captured picture so as to highlight the subject being photographed.
A camera architecture 500 is schematically illustrated in fig. 5. Referring to fig. 5, it mainly comprises: the camera application 501, which includes a real-time blurring service; the camera service layer 502, which is a bridge for communicating with the upper layers and serves only as an intermediate layer; the camera hardware abstraction layer 503, where the vendor packages its own code for operating the drivers; and the real-time blurring layer 504, which includes the real-time blurring node for performing blurring. In the embodiment of the present disclosure, a terminal including one main camera and one auxiliary camera is taken as an example for description.
On the basis of the camera architecture shown in fig. 5, when the terminal is in the blurring mode, the camera application may send a main camera acquisition request to the camera service layer, so that the camera hardware abstraction layer acquires a first preview image of the main camera in response to that request. The first preview image may be, for example, the preview stream of the main camera, and may be acquired automatically by the main camera.
Meanwhile, a path may be configured for the auxiliary camera to transmit the image data associated with it, so that the preview image of the auxiliary camera need not be acquired directly from the hardware abstraction layer. When configuring the path, the path type of the auxiliary camera may be set according to the depth-image support status of the terminal. The support status indicates whether the terminal supports directly uploading a depth image, and may be determined from the terminal's configuration parameters. That is, the type of the configured path differs depending on the support status, and since the depth-image support status differs between terminals, the configured path of each terminal differs. Specifically, if the terminal supports depth images, a path for transmitting depth images is configured for the auxiliary camera; if it does not, a path for transmitting ordinary captured images is configured. Once configured, the path of the auxiliary camera is used for transmitting the captured image data to the camera application.
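The path-type selection described above can be sketched as follows. This is an illustrative sketch only; the names `StreamType` and `configure_aux_path` are hypothetical and do not correspond to a real camera HAL API.

```python
from enum import Enum

class StreamType(Enum):
    DEPTH = "depth"      # path carries the depth image computed at the HAL
    PREVIEW = "preview"  # path carries an ordinary captured image

def configure_aux_path(supports_depth: bool) -> StreamType:
    """Pick the auxiliary-camera path type from the terminal's depth-image support status."""
    return StreamType.DEPTH if supports_depth else StreamType.PREVIEW
```

The key design point is that the path type is fixed at configuration time, before any frames flow, so the camera application knows whether the auxiliary path will deliver depth images or ordinary frames.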
In step S420, at the hardware abstraction layer, a depth image is calculated from the first preview image of the main camera and the second preview image of the auxiliary camera; the depth image is filled into the path of the auxiliary camera, and the first preview image of the main camera and the depth image are sent to the image post-processing service layer.
In the embodiment of the disclosure, at the camera hardware abstraction layer, the preview images of the main camera and the auxiliary camera are still acquired simultaneously; the preview image of the main camera may be denoted the first preview image, and that of the auxiliary camera the second preview image. In this process, however, the second preview image of the auxiliary camera is not transmitted to the camera application. At the hardware abstraction layer, a depth image may be calculated from the first preview image and the second preview image.
In the algorithm, the first preview image and the second preview image are compared, and the depth image is generated by calculating the viewing-angle difference of the target object between the two images. That is, the preview images from the main and auxiliary cameras are compared, and the parallax of the target object between them is computed to generate the depth image.
The target object refers to the subject to be photographed in the first and second preview images. The viewing-angle difference is the difference in viewing angle between the two cameras. The gray value of each pixel in the depth image can represent the distance from a point in the scene to the camera.
Because the first and second preview images are captured by different cameras separated by a certain distance, the parallax between them can be calculated according to the principle of triangulation to obtain the depth information of the same target object in the two images, that is, the distance between the target object and the plane in which the main and auxiliary cameras lie.
Of course, other methods may also be used to calculate the depth information of the first preview image. For example, when the main and auxiliary cameras photograph the same scene, the distance between an object in the scene and the cameras is proportional to quantities such as the displacement difference and attitude difference between the images they form, so a depth image may also be generated from this proportional relationship.
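The triangulation principle above can be sketched numerically. This is a minimal sketch assuming a rectified stereo pair; the focal-length, baseline, and disparity values in the comments are illustrative, not from the patent.

```python
def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Triangulation: depth Z = f * B / d, where f is the focal length in
    pixels, B is the baseline between the main and auxiliary cameras, and d
    is the disparity of the same scene point between the two preview images."""
    if disparity_px <= 0:
        # No measurable parallax: the point is effectively at infinity.
        return float("inf")
    return focal_px * baseline_m / disparity_px

# A nearer object produces a larger disparity and therefore a smaller depth.
```

For example, with a 1000 px focal length and a 2 cm baseline, a 10 px disparity corresponds to a depth of 2 m, while a 40 px disparity corresponds to 0.5 m.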
After the depth image is generated, if a depth-image path has been configured, the computed depth image is filled into that configured path of the auxiliary camera so that the camera application receives it. Further, after acquiring the first preview image and the depth image, the camera application may transmit both to the image post-processing service layer to enable image blurring.
In the embodiment of the disclosure, the obtained depth image is transmitted back to the camera application through the configured path of the auxiliary camera. By configuring one dedicated image-stream path, the depth image calculated by the hardware abstraction layer is uploaded to the camera application, the second preview image of the auxiliary camera is prevented from being transmitted directly to the camera application, memory and resource consumption of data transmission are reduced, and image-transmission efficiency is improved. The depth image is calculated at the hardware abstraction layer; by adjusting the calculation structure so that depth calculation moves from the software layer represented by the real-time blurring layer to the hardware layer represented by the hardware abstraction layer, depth calculation and image blurring can be separated into distributed processing, which reduces the processing pressure on any single node of the preview stream, lowers the likelihood of the preview image stream stuttering, and improves fluency. In the embodiment of the disclosure, computing the depth image in hardware through the hardware abstraction layer also reduces power consumption and improves computational performance compared with computing it in software.
Next, with continuing reference to fig. 4, in step S430, obtaining, by the image post-processing service layer, real-time blurring state information corresponding to the depth image, and performing image blurring on the first preview image according to the real-time blurring state information and by combining the first preview image and the depth image to obtain a target image.
In the embodiment of the disclosure, a real-time blurring node may be included in the image post-processing service layer, so that the real-time blurring processing is performed by that node. In the process of acquiring the depth image, state values at the time the depth image was calculated may be generated simultaneously; these may be referred to as real-time blurring state information, and indicate whether the depth image is normal or abnormal.
The real-time blurring state information may include, but is not limited to, one or a combination of the following parameters: occlusion state, distance of the target object, and light intensity. Moreover, the manner of acquiring the real-time blurring state information differs with the terminal's depth-image support status. Specifically, if the terminal supports depth images, the real-time blurring state information is acquired directly from the real-time update information. If the terminal does not support depth images, the real-time blurring state information is calculated from the depth image and the image-capture parameters. The image-capture parameters here may include the exposure parameters of the main and auxiliary cameras at capture time, and whether the captured images contain a large proportion of differing picture content. The light intensity can be determined from the exposure parameters of the two cameras, and the proportion of differing picture content can be used to determine the occlusion state (if most of the picture content differs, the capture can be considered occluded). In addition, the distance of the target object can be obtained from the depth image, thereby yielding the real-time blurring state information.
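The derivation of the state information from the depth image and the capture parameters can be sketched as follows. All thresholds and parameter names here are illustrative assumptions, not values from the patent.

```python
def blurring_state(mean_depth_m: float, exposure_luma: float,
                   frame_diff_ratio: float) -> dict:
    """Derive real-time blurring state flags from the depth image (mean
    target depth) and the image-capture parameters (exposure statistics and
    the fraction of differing picture content between the two cameras)."""
    state = {
        # Most picture content differs between cameras: likely occlusion.
        "occluded": frame_diff_ratio > 0.5,
        # Light intensity estimated from exposure statistics.
        "too_dark": exposure_luma < 30.0,
        # Target too close or too far for reliable depth.
        "bad_distance": not (0.3 <= mean_depth_m <= 5.0),
    }
    state["normal"] = not any(state.values())
    return state
```

This mirrors the rule stated below: the overall state is normal only when every individual parameter is normal.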
Fig. 6 schematically shows a flowchart of image blurring according to the status information of real-time blurring, and referring to fig. 6, the method mainly includes the following steps:
in step S610, it is determined whether the real-time blurring state information indicates a normal state; if yes, go to step S620; if not, go to step S630.
In this step, when the real-time blurring state information includes the occlusion state, the distance of the target object, and the light intensity, the state information may be considered abnormal if any one of these parameters is abnormal, and normal only if all of them are normal.
In step S620, if the real-time blurring state information is a normal state, determining a blurring parameter according to the depth image, and performing image blurring processing on the first preview image according to the blurring parameter to obtain a target image.
In this step, when it is determined that the real-time blurring state information is in a normal state, the blurring effect may be synthesized using the first preview image and the depth image to obtain a blurred target image.
A flowchart of the image blurring process is schematically shown in fig. 7; fig. 7 is a specific implementation of step S620. As shown in fig. 7, it mainly includes the following steps:
in step S710, a blurring region is determined from the depth image outside the depth range covered by the focusing point of the first preview image;
in step S720, a blurring degree is determined according to the depth values of the depth image in the blurring region, and image blurring is performed on the first preview image according to the blurring degree.
In the embodiment of the disclosure, the image is kept sharp within the depth range covered by the focusing point of the main camera; that is, that part of the first preview image is kept unchanged. The focusing point is the position that ensures the focused portion is sharp, so as to achieve accurate focusing. The terminal may include one focusing point or several; this is not specially limited here. For the parts of the depth image outside the depth range covered by the focusing point of the main camera, the region to be blurred can be determined. The blurring region represents the image range to be blurred, and may be the entire region of the main camera's image other than that covered by the focusing point.
Further, the distance between each pixel and the main camera can be represented by that pixel's gray value in the depth image. Since different pixels lie at different distances from the main camera, their depth values also differ. To improve the accuracy and naturalness of the blurring effect, the blurring degree for each pixel in the blurring region can be determined from that pixel's depth value, and the corresponding part of the first preview image blurred accordingly, making the blurred image look more real and natural. In particular, the blurring degree may be positively correlated with the depth value: the greater the depth value, the greater the blurring degree. Of course, the blurring degree may also be determined from the depth values according to other correspondences, and blurring of different degrees applied, as long as the target image can be obtained; this is not limited here. After the blurred target image is obtained, it may be transmitted to the camera application for display or video recording.
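The positive correlation between depth value and blurring degree can be sketched with a simple per-pixel mapping. The linear scale factor and clamp value are illustrative assumptions; the patent only requires that the degree grow with depth.

```python
def blur_degree(depth_value: float, focus_depth: float, max_degree: int = 15) -> int:
    """Blurring degree positively correlated with the distance from the
    in-focus depth: pixels at the focus depth stay sharp (degree 0), and the
    degree grows with depth difference, clamped to max_degree."""
    diff = abs(depth_value - focus_depth)
    return min(max_degree, int(diff * 3.0))  # linear mapping; scale is illustrative
```

A post-processing node would apply, for example, a blur kernel whose radius equals this degree to each pixel of the blurring region, leaving the focused region untouched.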
In step S630, if the real-time blurring state information indicates an abnormal state, the abnormal items of the real-time blurring state information are prompted in order of priority.
In this step, if the real-time blurring state information indicates an abnormal state, image blurring cannot be performed directly; instead, the abnormal state information is presented in order of priority. The priority order indicates the order of precedence of the prompts. If only one parameter is abnormal, it is prompted directly; if several parameters are abnormal, they are prompted according to the priority order, which may be, for example: occlusion state, light intensity, distance of the target object. For example, if one camera is blocked by the user and blurring cannot be performed, feedback to the user is given first. If the light is too dark to obtain a blurring result, a prompt can be given. A prompt may also be given if the target object is too far away or too close, affecting the blurring. Since an abnormal state indicates that the depth image is inaccurate and unreliable, the prompt feeds this back to the user, who can then adjust the terminal's image-capture parameters according to the prompt shown on the terminal to obtain an accurate depth image; after the adjustment is complete, image blurring continues with the accurate depth image according to the method of step S620.
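The priority-ordered prompting can be sketched as a first-match lookup over the abnormal flags. The prompt texts are illustrative; the priority order (occlusion, then light intensity, then target distance) follows the example in the text.

```python
# Priority order from the text: occlusion > light intensity > target distance.
PROMPTS = [
    ("occluded", "A camera appears to be blocked; please uncover it."),
    ("too_dark", "The light is too dark for blurring; please add light."),
    ("bad_distance", "The subject is too close or too far for blurring."),
]

def first_prompt(state):
    """Return the highest-priority prompt among the abnormal items,
    or None if no item is abnormal."""
    for key, message in PROMPTS:
        if state.get(key):
            return message
    return None
```

With several abnormal items, only the highest-priority prompt is shown, which matches the single-prompt behavior described above.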
In the technical scheme of fig. 6, different processing is selected according to whether the real-time blurring state information indicates an abnormal state. When the state is normal, the first preview image and the depth image can be used together for image blurring, and the second preview image need not be transmitted to the camera application or the image post-processing service layer, which reduces image transmission and improves transmission and processing efficiency; processing the two in parallel also avoids stuttering. When the state is abnormal, prompt information can be issued to remind the user to adjust parameters in time, improving convenience.
Fig. 8 schematically shows an overall flowchart of image blurring based on a camera architecture, and referring to fig. 8, the method mainly includes the following steps:
in step S801, the camera application transmits a main camera acquisition request to the camera service layer.
In step S802, the camera service layer acquires a first preview image from the camera hardware abstraction layer in response to the main camera acquisition request, and configures a path of the auxiliary camera.
In step S803, the first preview image and a second preview image of the auxiliary camera are obtained in the camera hardware abstraction layer, a depth image is calculated from the first and second preview images, and the depth image is sent to the camera service layer.
In step S804, the camera service layer sends the first preview image to the camera application, and fills the depth image into the path of the auxiliary camera.
In step S805, the first preview image and the depth image are input into the real-time blurring layer, so that the real-time blurring layer calculates a real-time blurred target image.
In step S806, the real-time blurring layer outputs the target image to the camera application for presentation.
In the technical scheme of fig. 8, on the basis of the camera architecture, the depth image is calculated from the first and second preview images at the hardware abstraction layer, which reduces the number of images processed by any single node, avoids stuttering while previewing the image stream, lowers computational power consumption, and improves computational efficiency and smoothness. Because a path is configured for the auxiliary camera, the depth image calculated by the hardware abstraction layer is filled into that configured path, and the second preview image of the auxiliary camera need not be transmitted out of the hardware abstraction layer for depth calculation and image blurring, improving image-transmission efficiency and reducing memory consumption. Depth images and the target image can also be calculated separately, improving image-processing efficiency.
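The division of labor in steps S801 to S806 can be sketched structurally. Everything here is a toy stand-in: `hal_compute_depth` represents the HAL's hardware depth calculation and `postprocess_blur` the real-time blurring node; neither resembles a real camera API, and the "images" are plain lists of pixel values.

```python
def hal_compute_depth(first_preview, second_preview):
    """HAL stage (step S803): derive a toy per-pixel depth from the
    difference between the two preview images."""
    return [abs(a - b) for a, b in zip(first_preview, second_preview)]

def postprocess_blur(first_preview, depth, focus_depth=0):
    """Post-processing stage (step S805): mark pixels whose depth departs
    from the focus depth as blurred, others as sharp."""
    return ["blur" if d != focus_depth else "sharp"
            for _, d in zip(first_preview, depth)]

def pipeline(first_preview, second_preview):
    # The HAL fills the depth image into the auxiliary path (S803-S804);
    # the camera app forwards first preview + depth to the blurring layer (S805).
    depth = hal_compute_depth(first_preview, second_preview)
    return postprocess_blur(first_preview, depth)
```

The structural point is that the second preview image never leaves the HAL stage: only the first preview image and the depth image cross into post-processing.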
Fig. 9 schematically shows a flowchart for determining a target image, and with reference to fig. 9, mainly includes the following steps:
in step S901, a first preview image is input to the real-time virtualization layer.
In step S902, the depth image is input to the real-time virtualization layer.
In step S903, the depth-image support status of the terminal is acquired.
In step S904, it is determined whether the terminal supports depth images. If yes, go to step S905. If not, go to step S906.
In step S905, a depth image is acquired from the underlying framework.
In step S9051, the real-time blurring state information is acquired from the real-time state information.
In step S906, a path of the auxiliary camera is configured.
In step S907, a depth image is calculated.
In step S908, the state information of the real-time blurring is determined from the depth image and the image capturing parameters.
In step S909, image blurring processing is performed based on the real-time blurring state information, and a target image is obtained.
In step S910, the target image is output to the camera application.
In the technical solution of fig. 9, the path type of the auxiliary camera is configured according to the terminal's depth-image support status, and the real-time blurring state information is acquired in the manner corresponding to that status. If the input type is a depth image, the step of calculating the depth image is skipped. This reduces the data-processing load on the nodes and reduces memory consumption.
It should be noted that although the various steps of the methods of the present disclosure are depicted in the drawings in a particular order, this does not require or imply that these steps must be performed in this particular order, or that all of the depicted steps must be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step execution, and/or one step broken down into multiple step executions, etc.
Fig. 10 schematically shows a block diagram of an image blurring processing apparatus according to an exemplary embodiment of the present disclosure. Referring to fig. 10, an image blurring processing device 1000 according to an exemplary embodiment of the present disclosure may include the following modules:
a path configuration module 1001, configured to, if the terminal is in the blurring mode, respond to a main camera acquisition request, acquire a first preview image of the main camera, and configure a path of an auxiliary camera;
the depth image calculation module 1002 is configured to calculate, at a hardware abstraction layer, a depth image according to the first preview image of the main camera and the second preview image of the auxiliary camera, fill the depth image into a passage of the auxiliary camera, and send the first preview image and the depth image of the main camera to an image post-processing service layer;
an image blurring module 1003, configured to obtain, by the image post-processing service layer, real-time blurring state information corresponding to the depth image, and perform image blurring on the first preview image according to the real-time blurring state information and by combining the first preview image and the depth image, so as to obtain a target image.
In an exemplary embodiment of the disclosure, the path configuration module is configured to: configure a path of the auxiliary camera of a type corresponding to the support status, according to the depth-image support status of the terminal.
In an exemplary embodiment of the present disclosure, the image blurring module includes: the first state determining module is used for acquiring real-time virtualized state information from the real-time updating information if the terminal supports the depth image; and the second state determining module is used for calculating the real-time virtualized state information according to the depth image and the image shooting parameters if the terminal does not support the depth image.
In an exemplary embodiment of the present disclosure, the image blurring module includes: a first blurring module, configured to determine a blurring parameter according to the depth image if the state information of the real-time blurring is a normal state, and perform image blurring processing on the first preview image according to the blurring parameter to obtain the target image; and the second virtualization module is used for prompting the real-time virtualized state information according to a priority order if the real-time virtualized state information is in an abnormal state.
In an exemplary embodiment of the present disclosure, the real-time blurring state information includes one or a combination of the following parameters: occlusion state, distance of the target object, and light intensity.
In an exemplary embodiment of the disclosure, the first blurring module is configured to: determine a blurring region from the depth image outside the depth range covered by the focusing point of the first preview image; and determine a blurring degree according to the depth values of the depth image in the blurring region, and perform image blurring on the first preview image according to the blurring degree.
In an exemplary embodiment of the present disclosure, the depth image calculation module is configured to: compare the first preview image with the second preview image, and generate the depth image by calculating the viewing-angle difference of the target object between the first and second preview images.
It should be noted that, since the functional blocks of the image blurring processing device according to the embodiment of the present disclosure are the same as those in the embodiment of the image blurring processing method, they are not described again here.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a USB flash drive, a removable hard disk, etc.) or on a network, and includes several instructions to enable a computing device (which may be a personal computer, a server, a terminal device, or a network device, etc.) to execute the method according to the embodiments of the present disclosure.
Furthermore, the above-described figures are merely schematic illustrations of processes included in methods according to exemplary embodiments of the present disclosure, and are not intended to be limiting. It will be readily understood that the processes shown in the above figures are not intended to indicate or limit the chronological order of the processes. In addition, it is also readily understood that these processes may be performed synchronously or asynchronously, e.g., in multiple modules.
It should be noted that although in the above detailed description several modules or units of the device for action execution are mentioned, such a division is not mandatory. Indeed, the features and functionality of two or more modules or units described above may be embodied in one module or unit, according to embodiments of the present disclosure. Conversely, the features and functions of one module or unit described above may be further divided into embodiments by a plurality of modules or units.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is to be limited only by the terms of the appended claims.

Claims (10)

1. An image blurring processing method, comprising:
if the terminal is in a blurring mode, responding to a main camera acquisition request, acquiring a first preview image of a main camera, and configuring a path of an auxiliary camera;
in a hardware abstraction layer, calculating a depth image according to the first preview image of the main camera and a second preview image of the auxiliary camera, filling the depth image into the path of the auxiliary camera, and sending the first preview image of the main camera and the depth image to an image post-processing service layer;
and acquiring real-time blurring state information corresponding to the depth image through the image post-processing service layer, and performing image blurring processing on the first preview image by combining the first preview image and the depth image according to the real-time blurring state information to obtain a target image.
2. The image blurring processing method according to claim 1, wherein the configuring of the path of the secondary camera includes:
and configuring a path of the auxiliary camera of a type corresponding to the supporting condition according to the supporting condition of the depth image of the terminal.
3. The image blurring processing method according to claim 1 or 2, wherein the obtaining of the real-time blurring state information corresponding to the depth image includes:
if the terminal supports depth images, acquiring the real-time blurring state information from the real-time update information;
and if the terminal does not support depth images, calculating the real-time blurring state information according to the depth image and the image-capture parameters.
4. The image blurring processing method according to claim 1, wherein the image blurring processing the first preview image by combining the first preview image and the depth image according to the real-time blurring state information to obtain a target image includes:
if the real-time blurring state information is in a normal state, determining a blurring parameter according to the depth image, and performing image blurring on the first preview image according to the blurring parameter to obtain the target image;
and if the real-time blurring state information is in an abnormal state, prompting the real-time blurring state information according to a priority order.
5. The image blurring processing method according to claim 4, wherein the real-time blurring status information includes a combination of one or more parameters of an occlusion status, a distance of a target object, and a light intensity.
6. The image blurring processing method according to claim 4, wherein the determining a blurring parameter according to the depth image and performing image blurring on the first preview image according to the blurring parameter includes:
determining a blurring region according to a depth image outside a depth range covered by the focus of the first preview image;
and determining the blurring degree according to the depth value corresponding to the depth image of the blurring area, and performing image blurring processing on the first preview image according to the blurring degree.
7. The image blurring processing method according to claim 1, wherein the calculating a depth image from the first preview image of the primary camera and the second preview image of the secondary camera includes:
and comparing the first preview image with the second preview image, and generating the depth image by calculating the viewing-angle difference (parallax) of the target object between the first preview image and the second preview image.
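In the standard pinhole-stereo model, the "viewing-angle difference" of claim 7 is the horizontal disparity of the same target point in the two previews, and depth follows by triangulation. The parameter names below are illustrative; the patent does not specify this formula.

```python
def disparity_px(x_in_main, x_in_aux):
    """Horizontal parallax (in pixels) of the same target point as seen
    by the main and auxiliary cameras."""
    return abs(x_in_main - x_in_aux)

def depth_from_disparity(focal_px, baseline_m, disparity):
    """Triangulated depth Z = f * B / d for rectified stereo cameras:
    focal length f in pixels, baseline B in meters, disparity d in pixels."""
    if disparity <= 0:
        raise ValueError("target must have positive disparity")
    return focal_px * baseline_m / disparity
```

With a 1000 px focal length, a 5 cm baseline, and a 50 px disparity, the target's depth is 1000 * 0.05 / 50 = 1.0 m; repeating this per matched pixel yields the depth image.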
8. An image blurring processing apparatus, comprising:
a channel configuration module, configured to, when the terminal is in a blurring mode, respond to a main-camera acquisition request, acquire a first preview image of the main camera, and configure a channel of an auxiliary camera;
a depth image calculation module, configured to calculate a depth image from the first preview image of the main camera and a second preview image of the auxiliary camera at a hardware abstraction layer, fill the depth image into the channel of the auxiliary camera, and send the first preview image of the main camera and the depth image to an image post-processing service layer;
and an image blurring module, configured to acquire, through the image post-processing service layer, real-time blurring state information corresponding to the depth image, and perform image blurring processing on the first preview image by combining the first preview image and the depth image according to the real-time blurring state information to obtain a target image.
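The three modules of claim 8 form a pipeline: configure channels, compute depth at the hardware abstraction layer, then blur in the post-processing service. The toy flow below mirrors that structure; every function name, the channel representation, and both stage implementations are illustrative stand-ins, not the patent's actual HAL or service API.

```python
import numpy as np

def configure_channels(mode):
    """Channel-configuration module: open the auxiliary-camera channel
    only when the terminal is in blurring mode."""
    return ["main", "aux"] if mode == "blur" else ["main"]

def hal_depth_stage(main_preview, aux_preview):
    """Stand-in for the HAL depth computation; a real implementation
    would run stereo matching on the two previews instead of this
    per-pixel difference proxy."""
    return np.abs(main_preview.astype(float) - aux_preview.astype(float))

def postprocess_stage(main_preview, depth, focus_max=0.5):
    """Stand-in for the image post-processing service: keep in-focus
    pixels and suppress the rest (a crude placeholder for blurring)."""
    return np.where(depth <= focus_max, main_preview, 0)

def run_pipeline(mode, main_preview, aux_preview):
    channels = configure_channels(mode)
    if "aux" not in channels:
        return main_preview  # no depth available, so no blurring
    depth = hal_depth_stage(main_preview, aux_preview)
    return postprocess_stage(main_preview, depth)
```

The design point the claim makes is the split of responsibilities: depth is produced once at the HAL and carried in the auxiliary channel, so the post-processing layer only consumes it.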
9. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the image blurring processing method according to any one of claims 1 to 7.
10. An electronic device, comprising:
a processor; and
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform the image blurring processing method of any one of claims 1-7 via execution of the executable instructions.
CN202011024322.1A 2020-09-25 2020-09-25 Image blurring processing method and device, storage medium and electronic equipment Active CN112165575B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011024322.1A CN112165575B (en) 2020-09-25 2020-09-25 Image blurring processing method and device, storage medium and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011024322.1A CN112165575B (en) 2020-09-25 2020-09-25 Image blurring processing method and device, storage medium and electronic equipment

Publications (2)

Publication Number Publication Date
CN112165575A true CN112165575A (en) 2021-01-01
CN112165575B CN112165575B (en) 2022-03-18

Family

ID=73864006

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011024322.1A Active CN112165575B (en) 2020-09-25 2020-09-25 Image blurring processing method and device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN112165575B (en)


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103945118A (*) 2014-03-14 2014-07-23 华为技术有限公司 Image blurring method and device and electronic equipment
CN109474780A (*) 2017-09-07 2019-03-15 虹软科技股份有限公司 Method and apparatus for image processing
CN109862262A (*) 2019-01-02 2019-06-07 上海闻泰电子科技有限公司 Image blurring method, device, terminal and storage medium
CN110288543A (*) 2019-06-21 2019-09-27 北京迈格威科技有限公司 Depth image edge-preserving processing method and apparatus
CN110300240A (*) 2019-06-28 2019-10-01 Oppo广东移动通信有限公司 Image processor, image processing method, camera arrangement and electronic equipment
CN110958390A (*) 2019-12-09 2020-04-03 Oppo广东移动通信有限公司 Image processing method and related device


Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112967201A (en) * 2021-03-05 2021-06-15 厦门美图之家科技有限公司 Image illumination adjusting method and device, electronic equipment and storage medium
CN114339071A (en) * 2021-12-28 2022-04-12 维沃移动通信有限公司 Image processing circuit, image processing method and electronic device
CN116051361A (en) * 2022-06-30 2023-05-02 荣耀终端有限公司 Image dimension data processing method and device
CN116051361B (en) * 2022-06-30 2023-10-24 荣耀终端有限公司 Image dimension data processing method and device
CN118075607A (en) * 2024-01-10 2024-05-24 荣耀终端有限公司 Image processing method, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN112165575B (en) 2022-03-18

Similar Documents

Publication Publication Date Title
CN112165575B (en) Image blurring processing method and device, storage medium and electronic equipment
CN111885305B (en) Preview picture processing method and device, storage medium and electronic equipment
CN111917980B (en) Photographing control method and device, storage medium and electronic equipment
CN111641828B (en) Video processing method and device, storage medium and electronic equipment
US12096134B2 (en) Big aperture blurring method based on dual cameras and TOF
CN111093108B (en) Sound and picture synchronization judgment method and device, terminal and computer readable storage medium
CN108616776B (en) Live broadcast analysis data acquisition method and device
CN108427630B (en) Performance information acquisition method, device, terminal and computer readable storage medium
CN111161176B (en) Image processing method and device, storage medium and electronic equipment
CN111815666B (en) Image processing method and device, computer readable storage medium and electronic equipment
CN111586431B (en) Method, device and equipment for live broadcast processing and storage medium
CN110958465A (en) Video stream pushing method and device and storage medium
CN111641829B (en) Video processing method, device and system, storage medium and electronic equipment
CN111338474B (en) Virtual object pose calibration method and device, storage medium and electronic equipment
CN112584049A (en) Remote interaction method and device, electronic equipment and storage medium
CN111766606A (en) Image processing method, device and equipment of TOF depth image and storage medium
CN111770282A (en) Image processing method and device, computer readable medium and terminal equipment
CN110662105A (en) Animation file generation method and device and storage medium
CN112581358A (en) Training method of image processing model, image processing method and device
CN114020387A (en) Terminal screen capturing method and device, storage medium and electronic equipment
CN110572710B (en) Video generation method, device, equipment and storage medium
CN112165576A (en) Image display method, image display device, storage medium and electronic equipment
CN109819314B (en) Audio and video processing method and device, terminal and storage medium
CN111698262B (en) Bandwidth determination method, device, terminal and storage medium
CN114697516B (en) Three-dimensional model reconstruction method, apparatus and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant