CN109151303B - Image processing method and device, electronic equipment and computer readable storage medium


Info

Publication number
CN109151303B
Authority
CN
China
Prior art keywords
image
target
target image
camera
frame rate
Prior art date
Legal status
Active
Application number
CN201810960807.8A
Other languages
Chinese (zh)
Other versions
CN109151303A (en)
Inventor
陈岩
方攀
Current Assignee
Zeku Technology Shanghai Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201810960807.8A
Publication of CN109151303A
Application granted
Publication of CN109151303B

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/80 Camera processing pipelines; Components thereof
    • H04N 23/70 Circuitry for compensating brightness variation in the scene
    • H04N 23/73 Circuitry for compensating brightness variation in the scene by influencing the exposure time

Abstract

The application relates to an image processing method and device, an electronic device and a computer readable storage medium, wherein the method comprises the following steps: acquiring a first image acquired by a first camera according to a first frame rate in an exposure period, and acquiring at least two second images acquired by a second camera according to a second frame rate in the exposure period, wherein the first frame rate is less than the second frame rate; generating a first target image according to the first image, and generating a second target image according to the second image, wherein the second target image is used for representing depth information corresponding to the first target image; and processing the first target image and the second target image. The image processing method and device, the electronic equipment and the computer readable storage medium can improve the accuracy of image processing.

Description

Image processing method and device, electronic equipment and computer readable storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to an image processing method and apparatus, an electronic device, and a computer-readable storage medium.
Background
When photographing an object, a smart device can collect not only the color information of the object but also its depth information, and with the depth information the color information of the image can be processed more accurately. For example, the near view and the far view in an image may be distinguished according to the depth information so that their colors are processed differently, as in the sketch below; and whether a recognized face belongs to a living body can be judged according to the depth information, so that beautification is applied only to live faces. There are various image processing methods for acquiring depth information, such as the binocular ranging method, the structured light method and the time-of-flight method.
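As a minimal sketch of the near/far example above, the following separates a color image into near-view and far-view parts using a per-pixel depth map; the millimetre unit and the threshold value are illustrative assumptions, not values fixed by this application:

```python
import numpy as np

def split_by_depth(color, depth, threshold_mm=1500):
    """Split a color image into near-view and far-view parts by depth.

    color: HxWx3 uint8 image; depth: HxW depth map, assumed to be in
    millimetres. Pixels closer than threshold_mm count as near view.
    """
    near_mask = depth < threshold_mm
    near = np.where(near_mask[..., None], color, 0)  # keep near-view pixels
    far = np.where(near_mask[..., None], 0, color)   # keep far-view pixels
    return near, far

# Each part can then be given its own color processing, for example
# stronger smoothing on the near view only.
```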
Disclosure of Invention
The embodiment of the application provides an image processing method and device, electronic equipment and a computer readable storage medium, which can improve the accuracy of image processing.
An image processing method comprising:
acquiring a first image acquired by a first camera according to a first frame rate in an exposure period, and acquiring at least two second images acquired by a second camera according to a second frame rate in the exposure period, wherein the first frame rate is less than the second frame rate;
generating a first target image according to the first image, and generating a second target image according to the second image, wherein the second target image is used for representing depth information corresponding to the first target image;
and processing the first target image and the second target image.
An image processing apparatus comprising:
the image acquisition module is used for acquiring a first image acquired by a first camera according to a first frame rate in an exposure period and acquiring at least two second images acquired by a second camera according to a second frame rate in the exposure period, wherein the first frame rate is less than the second frame rate;
the image conversion module is used for generating a first target image according to the first image and generating a second target image according to the second image, and the second target image is used for representing depth information corresponding to the first target image;
and the image processing module is used for processing the first target image and the second target image.
An electronic device comprising a memory and a processor, the memory having stored therein a computer program that, when executed by the processor, causes the processor to perform the steps of:
acquiring a first image acquired by a first camera according to a first frame rate in an exposure period, and acquiring at least two second images acquired by a second camera according to a second frame rate in the exposure period, wherein the first frame rate is less than the second frame rate;
generating a first target image according to the first image, and generating a second target image according to the second image, wherein the second target image is used for representing depth information corresponding to the first target image;
and processing the first target image and the second target image.
A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, carries out the steps of:
acquiring a first image acquired by a first camera according to a first frame rate in an exposure period, and acquiring at least two second images acquired by a second camera according to a second frame rate in the exposure period, wherein the first frame rate is less than the second frame rate;
generating a first target image according to the first image, and generating a second target image according to the second image, wherein the second target image is used for representing depth information corresponding to the first target image;
and processing the first target image and the second target image.
The image processing method and device, the electronic device and the computer readable storage medium can acquire a first image through the first camera according to a first frame rate in a certain exposure period, acquire at least two second images through the second camera according to a second frame rate in the same exposure period, and process a first target image generated from the first image together with a second target image generated from the second images. In this way, the first camera and the second camera capture images within the same exposure period, the second camera can capture at least two second images in that period, and the second target image finally used for processing is generated from those at least two second images, which reduces errors introduced during image acquisition and improves the accuracy of image processing.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present application, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
FIG. 1 is a diagram of an exemplary embodiment of an image processing method;
FIG. 2 is a flow diagram of a method of image processing in one embodiment;
FIG. 3 is a flow chart of an image processing method in another embodiment;
FIG. 4 is a schematic diagram of TOF computed depth information in one embodiment;
FIG. 5 is a flowchart of an image processing method in yet another embodiment;
FIG. 6 is a flowchart of an image processing method in yet another embodiment;
FIG. 7 is a diagram illustrating the flow of image processing in one embodiment;
FIG. 8 is a software framework diagram for implementing an image processing method in one embodiment;
FIG. 9 is a diagram illustrating an implementation of an image processing method in one embodiment;
FIG. 10 is a schematic diagram showing a configuration of an image processing apparatus according to an embodiment;
FIG. 11 is a schematic diagram of an image processing circuit in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
It will be understood that, as used herein, the terms "first," "second," and the like may be used herein to describe various elements, but these elements are not limited by these terms. These terms are only used to distinguish one element from another. For example, a first image may be referred to as a second image, and similarly, a second image may be referred to as a first image, without departing from the scope of the present application. The first image and the second image are both images, but they are not the same image.
FIG. 1 is a diagram of an application environment of an image processing method in one embodiment. As shown in fig. 1, two cameras, including a first camera 102 and a second camera 104, may be mounted on the electronic device. Specifically, the electronic device may perform shooting through the first camera 102 and the second camera 104, acquire a first image acquired by the first camera 102 in an exposure period according to a first frame rate, and acquire at least two second images acquired by the second camera 104 in the exposure period according to a second frame rate, where the first frame rate is less than the second frame rate. Then, a first target image is generated according to the first image, and a second target image is generated according to the second image, the second target image being used for representing depth information corresponding to the first target image; the first target image and the second target image are then processed. It is understood that the electronic device may be a mobile phone, a computer, a wearable device, etc., and is not limited thereto.
FIG. 2 is a flow diagram of a method of image processing in one embodiment. As shown in fig. 2, the image processing method includes steps 202 to 206. Wherein:
step 202, acquiring a first image acquired by a first camera according to a first frame rate in an exposure period, and acquiring at least two second images acquired by a second camera according to a second frame rate in the exposure period, wherein the first frame rate is less than the second frame rate.
The electronic equipment can be provided with a camera, and images are obtained through the installed camera. Cameras can be classified by the images they acquire, for example into laser cameras and visible light cameras: a laser camera acquires the image formed by laser irradiating an object, and a visible light camera acquires the image formed by visible light irradiating an object. The electronic equipment can be provided with a plurality of cameras, and the installation position is not limited.
For example, one camera may be installed on a front panel of the electronic device, two cameras may be installed on a back panel of the electronic device, and the cameras may be installed in an embedded manner inside the electronic device and then opened by rotating or sliding. Specifically, a front camera and a rear camera can be mounted on the electronic device, the front camera and the rear camera can acquire images from different viewing angles, the front camera can acquire images from a front viewing angle of the electronic device, and the rear camera can acquire images from a back viewing angle of the electronic device.
In the embodiment of the application, the electronic device is provided with at least two cameras, namely a first camera and a second camera, and the first camera and the second camera are controlled to be exposed simultaneously, a first image being acquired through the first camera and a second image through the second camera. It can be understood that the first camera and the second camera both acquire images of the same scene, the first camera acquiring the first image at a first frame rate and the second camera acquiring the second image at a second frame rate. The first frame rate is less than the second frame rate, so the second camera can acquire a plurality of second images within the same exposure period.
Specifically, the at least two second images acquired by the second camera can be used to synthesize one image, which avoids the hole artifacts that may appear when the second camera acquires a single second image and improves the accuracy of the image. For example, the first camera may acquire the first image at a rate of 30 frames/second and the second camera may acquire the second image at a rate of 120 frames/second. Thus, in the same exposure period, the first camera acquires one first image while the second camera acquires four second images.
And 204, generating a first target image according to the first image, and generating a second target image according to the second image, wherein the second target image is used for representing the depth information corresponding to the first target image.
Specifically, the first image refers to an original image acquired by the first camera, and the second image refers to an original image acquired by the second camera. An image sensor in the camera converts optical signals into electrical signals, and the raw image formed after this conversion cannot be processed by the processor directly; it can be processed only after certain format conversion.
In one embodiment, the first camera may be a visible light camera, the second camera may be a laser camera, and a laser emitter corresponding to the second camera may be mounted on the electronic device. The laser emitter irradiates the object with laser light, and the second camera acquires the second image formed while the object is irradiated. The second image is used for generating depth information corresponding to the first image.
A corresponding first target image can be generated from the first image collected by the first camera, and the first target image can be processed by the processor. For example, the acquired first image may be an image in RAW format; the first image is converted from RAW format into YUV (Luma Chroma) format, the YUV image formed after this format conversion is the generated first target image, and the first target image is then processed. The second images acquired by the second camera may also be images in RAW format, and since at least two second images are acquired, they can be synthesized into one Depth image, that is, the second target image.
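A minimal sketch of such a RAW-to-YUV conversion follows, assuming an RGGB Bayer mosaic and BT.601 luma coefficients; a real ISP pipeline also performs black level correction, white balance, true demosaicing and gamma, which are omitted here:

```python
import numpy as np

def raw_bayer_to_yuv(raw):
    """Collapse an RGGB RAW mosaic to half-resolution RGB, then to YUV.

    raw: HxW uint16 Bayer mosaic with even H and W. Each 2x2 RGGB cell
    becomes one RGB pixel (no true demosaicing), then BT.601 weights
    produce Y, with U and V as scaled color differences.
    """
    r = raw[0::2, 0::2].astype(np.float32)
    g = (raw[0::2, 1::2].astype(np.float32) + raw[1::2, 0::2]) / 2.0
    b = raw[1::2, 1::2].astype(np.float32)
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = 0.492 * (b - y)  # scaled B - Y
    v = 0.877 * (r - y)  # scaled R - Y
    return np.stack([y, u, v], axis=-1)
```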
Step 206, the first target image and the second target image are processed.
It can be understood that the first camera and the second camera are shot for the same scene, so that the first image and the second image obtained by shooting correspond to each other, and the first target image and the second target image obtained also correspond to each other. For example, the first target image is a YUV image, and the second target image is a Depth image, so that the first target image may represent color information of the shooting scene, and the second target image may represent Depth information corresponding to the shooting scene.
After the first target image and the second target image are obtained, they may be processed, and the specific processing manner is not limited. For example, face recognition may be performed on the first target image, and three-dimensional modeling of the face recognized in the first target image may be performed according to the second target image to obtain a three-dimensional model of the face. A beautification process may also be applied to the face in the first target image according to the depth information in the second target image.
In the image processing method provided by the above embodiment, the first camera may acquire a first image according to a first frame rate within a certain exposure period, the second camera acquires at least two second images according to a second frame rate within the same exposure period, and a first target image generated from the first image and a second target image generated from the second images are then processed. In this way, the first camera and the second camera capture images within the same exposure period, the second camera can capture at least two second images in that period, and the second target image finally used for processing is generated from those at least two second images, which reduces errors introduced during image acquisition and improves the accuracy of image processing.
Fig. 3 is a flowchart of an image processing method in another embodiment. As shown in fig. 3, the image processing method includes steps 302 to 314. Wherein:
step 302, when an image acquisition instruction is detected, acquiring a first image acquired by a first camera according to a first frame rate in an exposure period.
The image acquisition instruction refers to an instruction to trigger an image acquisition operation. For example, the user may open an application program and generate an image capture instruction by operating the application program. When the electronic equipment detects the image acquisition instruction, the camera can be opened. After the camera is opened, the time length of light projected to the photosensitive surface of the camera can be controlled by controlling the time length of the opening of the shutter. The longer the shutter is opened, the more the amount of light entering, and the higher the brightness of the formed image. For example, when the ambient light is bright, the shutter is generally controlled to be open for a short time, so that the amount of light entering is small, and the generated image is prevented from being too bright.
The exposure period refers to the time period during which the shutter of the camera is controlled to open; the moment the shutter opens and the duration for which it stays open can be obtained from the exposure period. For example, if the exposure period is "12:00:00 → 12:00:30", the shutter opens at "12:00:00" and stays open for 30 seconds. The frame rate refers to the frequency at which the camera acquires images, specifically the number of images the camera acquires per second. For example, if the frame rate is 30 frames/second, the camera is controlled to acquire 30 images per second.
In the embodiment provided by the application, in order to ensure that the images collected by the first camera and the second camera correspond, the first camera and the second camera need to be controlled to be exposed simultaneously. Since the two cameras collect images at different frame rates within the same exposure period, the numbers of images they collect in that period are different.
And step 304, acquiring the number of the second images, and calculating to obtain a second frame rate according to the number of the second images and the first frame rate.
Specifically, frame rates of images acquired by the first camera and the second camera may be preset, or may be changed in real time, which is not limited herein. For example, the first frame rate is preset, the second frame rate is changed in real time, or both the first frame rate and the second frame rate are preset.
In one embodiment, the second frame rate is calculated from the first frame rate: the number of second images is obtained first, and the second frame rate is then calculated from that number and the first frame rate. Assume the number of acquired first images is S1, the number of second images is S2, and the first frame rate is Z1. The relation between the first frame rate and the second frame rate is S1 * Z2 = S2 * Z1, so the second frame rate Z2 is given by Z2 = S2 * Z1 / S1. It is understood that when the default number of first images is 1 frame, the second frame rate can be calculated directly from the number of second images and the first frame rate, i.e. Z2 = S2 * Z1. For example: if the first frame rate is 30 frames/second, the number of first images is 1 frame and the number of second images is 4 frames, then the second frame rate is 30 × 4 = 120 frames/second.
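A one-line sketch of this relation, with the worked example from the text:

```python
def second_frame_rate(z1, s1, s2):
    """Z2 = S2 * Z1 / S1, from the relation S1 * Z2 = S2 * Z1."""
    return s2 * z1 / s1

# One first image at 30 frames/second against four second images:
assert second_frame_rate(30, 1, 4) == 120.0
```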
The number of the second images is the number of the second images required to be acquired, and the number of the second images may be preset or acquired according to an image acquisition instruction. The acquiring of the second number of images may specifically comprise: acquiring a preset second image quantity; or acquiring the application grade corresponding to the application identifier contained in the image acquisition instruction, and acquiring the corresponding second image quantity according to the application grade.
The application identifier is used for marking the application program initiating the image acquisition instruction, and the application level is used for representing the importance level of the application program initiating the image acquisition instruction. For example, the third party application may correspond to a lower application level and the system application may have a higher application level. The electronic device may pre-establish a corresponding relationship between the application identifier and the application level, and then may find the corresponding application level according to the application identifier. The higher the application level, the greater the number of second images corresponding to the acquired second images.
In an embodiment, the number of second images may also be obtained according to the shake condition of the electronic device: when the electronic device shakes strongly, the image it captures is considered more likely to contain errors, so compositing multiple images can reduce the error. Specifically, the electronic device establishes in advance a correspondence between shake data and the number of second images; during image acquisition, the shake data of the electronic device can be acquired and the number of second images corresponding to that shake data obtained. The shake data reflects the shake condition of the electronic device and may specifically be data detected by a sensor such as a gyroscope or an acceleration sensor, which is not limited herein.
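The sketch below illustrates both ways of choosing the count; the lookup tables are hypothetical, since the application leaves the concrete correspondences open:

```python
# Hypothetical correspondences; higher level / stronger shake -> more frames.
COUNT_BY_APP_LEVEL = {1: 2, 2: 4, 3: 6}
COUNT_BY_SHAKE = [(0.5, 2), (1.5, 4), (float("inf"), 6)]

def second_image_count(app_level=None, shake=None, default=4):
    """Pick the number of second images from the application level carried
    by the image acquisition instruction, or from gyroscope shake data."""
    if app_level is not None:
        return COUNT_BY_APP_LEVEL.get(app_level, default)
    if shake is not None:
        for limit, count in COUNT_BY_SHAKE:
            if shake <= limit:
                return count
    return default
```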
And step 306, acquiring second images, which are acquired by the second camera according to the second frame rate and correspond to the second image quantity, in the exposure period.
When an image acquisition instruction is detected, the first camera and the second camera need to be controlled to be exposed simultaneously, and images with different quantities are acquired in the same exposure time period. Therefore, after detecting the image capturing instruction, the electronic device may control the first camera to capture the first image according to the first frame rate, and control the second camera to capture the second image according to the second frame rate.
In order to ensure that the acquired first image and the acquired second image are corresponding, after the first image and the second image are acquired, a first time when the first image is acquired and a second time when the second image is acquired can be respectively acquired, when a time interval between the first time and the second time is less than an interval threshold value, a step of generating a first target image according to the first image and a step of generating a second target image according to the second image are executed; otherwise, the acquired first image and the acquired second image are considered to be non-corresponding, and the acquired first image and the acquired second image can be discarded.
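A minimal sketch of this correspondence check, with an illustrative interval threshold in seconds:

```python
def frames_correspond(first_time, second_time, interval_threshold=0.010):
    """True if the first and second images belong to the same capture.

    first_time / second_time: acquisition timestamps in seconds. If the
    interval reaches the threshold, both images are discarded instead of
    being converted into target images."""
    return abs(first_time - second_time) < interval_threshold
```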
In one embodiment, the first camera and the second camera may be installed at different positions on the electronic device, so that the acquired first image and the acquired second image may generate a certain parallax. Therefore, after the first image and the second image are acquired, the acquired first image and the acquired second image may be aligned so that the acquired first image and the acquired second image are corresponding, i.e. correspond to the same scene.
Step 308, performing a first format conversion on the first image to generate a first target image.
The camera is composed of an optical element and an image sensor, and the optical element is used for collecting light rays. The image sensor includes an array of color filters (e.g., Bayer filters) that can be used to convert the light intensity and wavelength information of the light collected by the optical elements into electrical signals, which then produce a raw image. The first image is an original image acquired by the first camera, and the first original image is subjected to first format conversion to generate a first target image.
In one embodiment, the first camera is a visible light camera, the first image may be an image in RAW format, and the first target image may be an image in YUV format, and the first target image in YUV format may be obtained by performing the first format conversion on the first image in RAW format.
And 310, packaging the second image, and performing second format conversion on the packaged second image to generate a second target image.
After the image acquisition instruction is detected, the second camera acquires images at the second frame rate in the same exposure period, and second images corresponding to the second image number can be obtained. The second image acquired by the second camera is also an original image, and a final target image can be obtained only after certain format conversion. Specifically, after the second image is acquired, in order to prevent the second image from being lost in the transmission process, the second image may be packed, so that the second image may form a whole on the memory for transmission, so as to prevent frame loss. The second image after packaging may be subjected to a second format conversion and then a second target image is generated.
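A sketch of the packing step, assuming a simple length-prefixed layout (the application does not fix a concrete packing format):

```python
import struct
import numpy as np

def pack_second_images(frames, markers):
    """Pack RAW second images and their sequence markers into one buffer.

    frames: list of HxW numpy arrays; markers: matching sequence numbers.
    Packing the frames into a single contiguous payload lets them travel
    as a whole, so no individual frame is lost in transmission.
    Layout (hypothetical): count | (marker, byte length, frame bytes)...
    """
    payload = struct.pack("<I", len(frames))
    for marker, frame in zip(markers, frames):
        data = np.ascontiguousarray(frame).tobytes()
        payload += struct.pack("<II", marker, len(data)) + data
    return payload
```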
In one embodiment, the second camera may be a laser camera, the electronic device may further include a laser transmitter, the laser transmitter transmits laser waves at a certain frequency, the second camera collects images formed by the laser waves after being reflected by the object, and then the distance from the object to the second camera may be calculated by calculating the Time of Flight (TOF) of the laser waves.
Specifically, the laser emitter may be controlled to emit a laser wave during the exposure period; at least two shutters of the second camera are controlled to open and close according to the second frame rate, and at least two second images generated by laser wave reflection are acquired while the shutters are open. The second camera acquires a different second image through each shutter; the acquired second images may also be images in RAW format, the second target image may be an image in Depth format, and the second images in RAW format are subjected to the second format conversion to obtain the second target image in Depth format.
FIG. 4 is a schematic diagram of TOF computed depth information in one embodiment. As shown in fig. 4, the laser emitter may emit a laser wave, the emitted laser wave forms a reflected laser wave after being reflected by the object, and the depth information of the object may be calculated from the phase difference between the emitted laser wave and the received laser wave. When the laser camera actually collects images, different shutters can be controlled to open and close at different times, forming different received signals, so that different images are collected through the shutter switching to calculate the depth image. In one embodiment, the laser camera is controlled to receive the laser wave signal through four shutters, and the laser wave signals received by shutter 1, shutter 2, shutter 3 and shutter 4 are Q1, Q2, Q3 and Q4 respectively; the depth information is then calculated as follows:
d = (C / (4πf)) × arctan((Q3 − Q4) / (Q1 − Q2))
wherein C is the speed of light and f is the emission frequency of the laser wave. With the above formula, the second format conversion can be performed on the four second images to generate the corresponding second target image in Depth format. It is understood that when the number of acquired second images differs, the formula for performing the second format conversion on the second images may also differ. Specifically, a corresponding second format conversion formula may be obtained according to the number of second images, and the second format conversion may be performed on the packed second images according to that formula.
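A sketch of this conversion for the four-shutter case, using the standard four-phase time-of-flight relation above (the pairing of Q1..Q4 is an assumption, and the application leaves the per-count formulas open):

```python
import numpy as np

C = 299_792_458.0  # speed of light in m/s

def second_format_conversion(q1, q2, q3, q4, f):
    """Convert four RAW shutter images into a Depth-format target image.

    q1..q4: HxW float arrays of the laser signal received by shutters
    1-4; f: laser emission frequency in Hz. Recovers the phase shift of
    the reflected wave and scales it to distance by C / (4 * pi * f).
    """
    phase = np.arctan2(q3 - q4, q1 - q2)  # phase difference, radians
    phase = np.mod(phase, 2.0 * np.pi)    # fold into [0, 2*pi)
    return C * phase / (4.0 * np.pi * f)  # depth in metres
```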
In one embodiment, the laser emitter may generate a high amount of heat when emitting laser light, which may increase the temperature of the camera. When the temperature of the camera changes, the accuracy of the image collected by the camera is also affected. Therefore, in the process of acquiring the images, the electronic equipment can detect the temperatures of the first camera and the second camera in real time, and when the temperatures of the first camera and the second camera are lower than the temperature threshold value, the step of acquiring the first image and the second image is executed; otherwise, the laser emitter can be directly turned off, and the acquisition of the first image and the second image is stopped.
Step 312, identify the target object in the first target image, and obtain target depth information corresponding to the target object according to the second target image.
After the first target image and the second target image are acquired, the first target image and the second target image can be packaged, and then the first target image and the second target image are sent to an application program, so that the loss of image data is prevented. For example, if the first target image is an RGB (Red Green Blue) image and the second target image is a Depth image, the first target image and the second target image may be packed into an RGBD image and then sent to the application program.
After the application receives the first target image and the second target image, the target object in the first target image may be identified. The second target image may represent depth information corresponding to the first target image, and thus target depth information corresponding to a target object in the first target image may be obtained according to the second target image. Specifically, the target object in the identified first target image is a target area composed of a plurality of pixel points, a corresponding target depth area in the second target image can be located according to the target area, and depth information corresponding to each pixel point in the target area can be obtained according to the target depth area.
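A minimal sketch of locating the target depth area, assuming the first and second target images are aligned and the target area is given as a boolean mask (for example from a face detector, which is not shown):

```python
import numpy as np

def target_depth_info(depth_image, target_mask):
    """Per-pixel depth for an identified target object.

    depth_image: HxW second target image; target_mask: HxW bool mask of
    the target area found in the first target image. Returns the depth
    values inside the target area and their mean as a summary."""
    values = depth_image[target_mask]
    mean_depth = float(values.mean()) if values.size else None
    return values, mean_depth
```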
In the embodiments provided in the present application, the method for identifying the target object is not limited herein. For example, the target object may be a human face, and the human face in the first target image may be recognized by a human face detection algorithm. The target object may also be a building, plant, animal, etc. that can be identified by artificial intelligence.
And step 314, processing the target object according to the target depth information.
After the target depth information corresponding to the target object is acquired, the target object may be processed according to the target depth information. For example, the target object may be three-dimensionally modeled according to the target depth information, or may be beautified according to the target depth information, and the specific processing manner is not limited herein.
In an embodiment, the step of acquiring a first target image and a second target image, and processing the first target image and the second target image may further include:
step 502, when the application level is greater than the level threshold, the first target image and the second target image are encrypted.
After the first target image and the second target image are acquired, they may be sent to an application program for processing. Before the first target image and the second target image are sent, whether the application level of the application program is greater than a level threshold can be judged. If so, the application program has a high security requirement, and the first target image and the second target image can be encrypted; if not, the application program has a lower security requirement, and the first target image and the second target image can be sent to it directly.
And step 504, sending the encrypted first target image and the encrypted second target image to an application program for processing.
And sending the encrypted first target image and the encrypted second target image to an application program for processing. And the application program receives the encrypted first target image and the encrypted second target image, decrypts the encrypted first target image and the encrypted second target image, and then performs the next processing on the decrypted first target image and the decrypted second target image.
Before the first target image and the second target image are encrypted, the first target image and the second target image can be packaged, and then the packaged first target image and the packaged second target image are encrypted, so that the first target image and the second target image can be prevented from being lost in the transmission process.
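A sketch of this branch is shown below; the application does not fix a cipher, so an AES-based Fernet token (from the Python cryptography package) stands in, and the level threshold is illustrative:

```python
from cryptography.fernet import Fernet  # AES-128-CBC + HMAC; one possible choice

LEVEL_THRESHOLD = 2  # illustrative value

def deliver_targets(packed_targets, app_level, send, key):
    """Send the packed first and second target images to the application,
    encrypting them first when the application level is high enough.

    packed_targets: bytes of the packed target images; send: a callable
    that delivers bytes to the application; key: Fernet key shared with
    the application so it can decrypt before further processing."""
    if app_level > LEVEL_THRESHOLD:
        packed_targets = Fernet(key).encrypt(packed_targets)
    send(packed_targets)
```

A suitable key can be produced with Fernet.generate_key(); how the key is shared with the application is outside this sketch.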
In one embodiment, when the second camera acquires the second image, corresponding mark information is formed for each frame of the second image, and the mark information indicates the sequence of acquiring the second image. Specifically, the step of converting the format of the second image includes:
step 602, obtaining the mark information corresponding to each second image, where the mark information is used to indicate the sequence of acquiring the second images.
At least two second images are acquired by the second camera, so after the second camera acquires the second images, corresponding mark information is generated for each second image, and the acquisition sequence of the images is marked through the mark information. Specifically, the mark information may be, but is not limited to, the acquisition time of the second image, the phase of the second image, and the like. The acquisition time of the second image represents the moment at which the second image was acquired, and the time order in which the second images were acquired can be judged from the acquisition times. The phase of the second image may represent the order in which each frame of the second image was acquired. For example, the second images may be labeled with serial numbers "01", "02", "03" and "04" according to the order of their acquisition times.
And step 604, judging whether the acquired second image is lost or not according to the mark information, and if not, packaging the second image and the corresponding mark information.
Whether the acquired second image is lost or not can be judged according to the mark information, and if the acquired second image is lost, the currently acquired second image can be discarded; and if the second image is not lost, packaging the second image and the corresponding mark information. For example, if the serial numbers of the acquired second images are "01", "03", and "04", it indicates that a second image with the serial number "02" is lost in the middle.
And 606, performing second format conversion on the packaged second image according to the mark information.
The packed second images are transmitted as a whole, so no individual frame can be lost during transmission: either all of the second images are present, or all of them are discarded. After the second images and the mark information are packaged, they may be passed to the processor for format conversion. Specifically, the order in which the second images were acquired can be judged from the mark information, and the second images are then synthesized and calculated in that order to obtain the second target image.
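A sketch of steps 602 to 606, checking the markers for a lost frame before ordering the frames for conversion; consecutive integer serial numbers are assumed:

```python
def order_or_discard(frames_with_markers, expected_count):
    """Return the second images sorted by their sequence markers, or None
    if any frame is missing (the whole capture is then discarded).

    frames_with_markers: list of (marker, frame) pairs, markers being
    integer serial numbers such as 1..4 for four shutters."""
    markers = sorted(marker for marker, _ in frames_with_markers)
    expected = list(range(markers[0], markers[0] + expected_count)) if markers else []
    if len(markers) != expected_count or markers != expected:
        return None  # e.g. markers 01, 03, 04 mean frame 02 was lost
    ordered = sorted(frames_with_markers, key=lambda pair: pair[0])
    return [frame for _, frame in ordered]
```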
As shown in fig. 7, a first camera may acquire a first image 702 according to a first frame rate during an exposure period, and a second camera may acquire a second image 722 according to a second frame rate during the exposure period. A first target image 704 is then calculated from the first image 702 and a second target image 724 is calculated from the second image 722. And finally, processing is performed according to the obtained first target image 704 and the second target image 724.
In the image processing method provided by the above embodiment, a first camera may acquire a first image according to a first frame rate within a certain exposure period, a second camera acquires at least two second images according to a second frame rate within the exposure period, and then a first target image generated according to the first image and a second target image generated according to the second image are processed. Therefore, the first camera and the second camera can acquire images in the same exposure time period, the second camera can acquire at least two second images simultaneously and generate a second target image finally used for processing according to the at least two second images, errors generated in image acquisition are reduced, and the accuracy of image processing is improved.
It should be understood that although the steps in the flowcharts of fig. 2, 3, 5 and 6 are shown in the order indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated otherwise, the steps are not performed in a strictly limited order and may be performed in other orders. Moreover, at least some of the steps in fig. 2, 3, 5 and 6 may include multiple sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different moments, and their order of execution is not necessarily sequential; they may be performed in turns or alternately with other steps or with at least some of the sub-steps or stages of other steps.
FIG. 8 is a software framework diagram for implementing the image processing method in one embodiment. As shown in fig. 8, the software framework includes an application layer 80, a Hardware Abstraction Layer (HAL) 82, a kernel layer 84 and a hardware layer 86. The application layer 80 includes an application 802. The hardware abstraction layer 82 includes an interface 822, an image synchronization module 824, an image algorithm module 826 and an application algorithm module 828. The kernel layer 84 includes a camera driver 842, a camera calibration module 844 and a camera synchronization module 846. The hardware layer 86 includes a first camera 862, a second camera 864 and an image signal processor (ISP) 866.
In one embodiment, the application 802 may be used to initiate an image acquisition instruction and send it to the interface 822. After the interface 822 parses the image acquisition instruction, the configuration parameters of the cameras can be set through the camera driver 842 and sent to the image signal processor 866, which controls the first camera 862 and the second camera 864 to open. Once opened, the first camera 862 and the second camera 864 can be controlled by the camera synchronization module 846 to acquire images synchronously. The first image collected by the first camera 862 and the second image collected by the second camera 864 are sent to the image signal processor 866 and then forwarded to the camera calibration module 844. The camera calibration module 844 aligns the first image with the second image and sends both to the hardware abstraction layer 82. The image synchronization module 824 in the hardware abstraction layer 82 determines, according to the first moment at which the first image was acquired and the second moment at which the second image was acquired, whether the two were acquired simultaneously. If so, the image algorithm module 826 calculates a first target image from the first image and a second target image from the second image. The application algorithm module 828 packs the first target image and the second target image, which are then sent to the application 802 through the interface 822; after obtaining them, the application 802 can perform three-dimensional modeling, beautification, Augmented Reality (AR) processing and the like based on the first target image and the second target image.
FIG. 9 is a diagram illustrating an implementation of an image processing method in one embodiment. As shown in fig. 9, the first camera and the second camera need camera synchronization processing while acquiring images; the first camera may acquire a first image according to a first frame rate in an exposure period, and the second camera may acquire at least two second images according to a second frame rate in the same exposure period. The first image collected by the first camera is sent to a first buffer together with its corresponding first timestamp; the second images collected by the second camera are packed with their corresponding mark information, and the packed second images with mark information, together with the corresponding second timestamp, are sent to a second buffer. The first timestamp indicates the first moment at which the first image was acquired, and the second timestamp indicates the second moment at which the second images were acquired. When the interval between the first timestamp and the second timestamp is smaller than a first interval threshold, the first image in the first buffer is read, a first format conversion is performed on it to obtain a first target image, and the first target image is sent to a third buffer; the second images and the corresponding mark information in the second buffer are read, a second format conversion is performed on the second images according to the mark information to obtain a second target image, and the second target image is sent to a fourth buffer. Before being sent to the application program, the first target image and the second target image may be packed, and the packed first target image and second target image are sent to a fifth buffer. The application program may read the packed first target image and second target image from the fifth buffer and perform subsequent processing according to them.
Fig. 10 is a schematic structural diagram of an image processing apparatus according to an embodiment. As shown in fig. 10, the image processing apparatus 1000 includes an image acquisition module 1002, an image conversion module 1004, and an image processing module 1006. Wherein:
the image acquisition module 1002 is configured to acquire a first image acquired by a first camera within an exposure period according to a first frame rate, and acquire at least two second images acquired by a second camera within the exposure period according to a second frame rate, where the first frame rate is less than the second frame rate.
The image conversion module 1004 is configured to generate a first target image according to the first image, and generate a second target image according to the second image, where the second target image is used to represent depth information corresponding to the first target image.
An image processing module 1006, configured to process the first target image and the second target image.
The image processing apparatus provided in the foregoing embodiment may acquire, by the first camera, a first image according to a first frame rate within a certain exposure period, acquire, by the second camera, at least two second images according to a second frame rate within the exposure period, and then process a first target image generated according to the first image and a second target image generated according to the second image. Therefore, the first camera and the second camera can acquire images in the same exposure time period, the second camera can acquire at least two second images simultaneously and generate a second target image finally used for processing according to the at least two second images, errors generated in image acquisition are reduced, and the accuracy of image processing is improved.
In one embodiment, the image acquisition module 1002 is further configured to, when an image acquisition instruction is detected, acquire a first image acquired by the first camera according to a first frame rate in an exposure period; acquiring the quantity of second images, and calculating to obtain a second frame rate according to the quantity of the second images and the first frame rate; and acquiring second images, which are acquired by a second camera according to a second frame rate in the exposure period and correspond to the second image quantity.
In an embodiment, the image capturing module 1002 is further configured to obtain an application level corresponding to an application identifier included in the image capturing instruction, and obtain a corresponding second image quantity according to the application level, where the application identifier is used to indicate an application program that initiates the image capturing instruction.
In one embodiment, the image conversion module 1004 is further configured to perform a first format conversion on the first image to generate a first target image; and packaging the second image, and performing second format conversion on the packaged second image to generate a second target image.
In one embodiment, the image conversion module 1004 is further configured to obtain flag information corresponding to each second image, where the flag information is used to indicate a sequence of acquiring the second images; judging whether the acquired second image is lost or not according to the mark information, and if not, packaging the second image and the corresponding mark information; and performing second format conversion on the packaged second image according to the mark information.
In one embodiment, the image processing module 1006 is further configured to encrypt the first target image and the second target image when the application level is greater than a level threshold; and sending the encrypted first target image and the encrypted second target image to the application program for processing.
In one embodiment, the image processing module 1006 is further configured to identify a target object in the first target image, and obtain target depth information corresponding to the target object according to the second target image; and processing the target object according to the target depth information.
The division of the modules in the image processing apparatus is only for illustration, and in other embodiments, the image processing apparatus may be divided into different modules as needed to complete all or part of the functions of the image processing apparatus.
For specific limitations of the image processing apparatus, reference may be made to the above limitations of the image processing method, which are not described herein again. The respective modules in the image processing apparatus described above may be wholly or partially implemented by software, hardware, and a combination thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
The implementation of each module in the image processing apparatus provided in the embodiment of the present application may be in the form of a computer program. The computer program may be run on a terminal or a server. The program modules constituted by the computer program may be stored on the memory of the terminal or the server. Which when executed by a processor, performs the steps of the method described in the embodiments of the present application.
The embodiment of the application also provides the electronic equipment. The electronic device includes therein an Image Processing circuit, which may be implemented using hardware and/or software components, and may include various Processing units defining an ISP (Image Signal Processing) pipeline. FIG. 11 is a schematic diagram of an image processing circuit in one embodiment. As shown in fig. 11, for convenience of explanation, only aspects of the image processing technology related to the embodiments of the present application are shown.
As shown in fig. 11, the image processing circuit includes a first ISP processor 1130, a second ISP processor 1140 and control logic 1150. The first camera 1110 includes one or more first lenses 1112 and a first image sensor 1114. The first image sensor 1114 may include a color filter array (e.g., a Bayer filter), and the first image sensor 1114 may acquire light intensity and wavelength information captured with each imaging pixel of the first image sensor 1114 and provide a set of image data that may be processed by the first ISP processor 1130. Second camera 1120 includes one or more second lenses 1122 and a second image sensor 1124. The second image sensor 1124 may include an array of color filters (e.g., Bayer filters), and the second image sensor 1124 may acquire light intensity and wavelength information captured with each imaging pixel of the second image sensor 1124 and provide a set of image data that may be processed by the second ISP processor 1140.
The first image collected by the first camera 1110 is transmitted to the first ISP processor 1130 for processing, after the first ISP processor 1130 processes the first image, the statistical data (such as brightness of the image, contrast value of the image, color of the image, etc.) of the first image may be sent to the control logic 1150, and the control logic 1150 may determine the control parameter of the first camera 1110 according to the statistical data, so that the first camera 1110 may perform operations such as auto-focusing and auto-exposure according to the control parameter. The first image may be stored in the image memory 1160 after being processed by the first ISP processor 1130, and the first ISP processor 1130 may also read the image stored in the image memory 1160 to process the image. In addition, the first image may be directly transmitted to the display 1170 after being processed by the ISP processor 1130, or the display 1170 may read and display the image in the image memory 1160.
Wherein the first ISP processor 1130 processes the image data pixel by pixel in a plurality of formats. For example, each image pixel may have a bit depth of 8, 10, 12, or 14 bits, and the first ISP processor 1130 may perform one or more image processing operations on the image data, collecting statistical information about the image data. Wherein the image processing operations may be performed with the same or different bit depth precision.
The image Memory 1160 may be a portion of a Memory device, a storage device, or a separate dedicated Memory within an electronic device, and may include a DMA (Direct Memory Access) feature.
Upon receiving image data from the interface of the first image sensor 1114, the first ISP processor 1130 may perform one or more image processing operations, such as temporal filtering. The processed image data may be sent to the image memory 1160 for additional processing before being displayed. The first ISP processor 1130 receives the processed data from the image memory 1160 and performs image data processing on it in the RGB and YCbCr color spaces. The image data processed by the first ISP processor 1130 may be output to the display 1170 for viewing by a user and/or further processing by a Graphics Processing Unit (GPU). Further, the output of the first ISP processor 1130 may also be sent to the image memory 1160, and the display 1170 may read image data from the image memory 1160. In one embodiment, the image memory 1160 may be configured to implement one or more frame buffers.
The statistics determined by first ISP processor 1130 may be sent to control logic 1150. For example, the statistical data may include first image sensor 1114 statistics such as auto-exposure, auto-white balance, auto-focus, flicker detection, black level compensation, first lens 1112 shading correction, and the like. Control logic 1150 may include a processor and/or microcontroller that executes one or more routines (e.g., firmware) that may determine control parameters for first camera 1110 and control parameters for first ISP processor 1130 based on received statistical data. For example, the control parameters of the first camera 1110 may include gain, integration time of exposure control, anti-shake parameters, flash control parameters, first lens 1112 control parameters (e.g., focal length for focusing or zooming), or a combination of these parameters, and the like. The ISP control parameters may include gain levels and color correction matrices for automatic white balance and color adjustment (e.g., during RGB processing), as well as first lens 1112 shading correction parameters.
Similarly, a second image captured by the second camera 1120 is transmitted to the second ISP processor 1140 for processing. After processing the second image, the second ISP processor 1140 may send statistical data of the second image (such as the brightness, contrast, and color of the image) to the control logic 1150, and the control logic 1150 may determine control parameters of the second camera 1120 according to the statistical data, so that the second camera 1120 can perform operations such as auto-focus and auto-exposure according to those control parameters. After being processed by the second ISP processor 1140, the second image may be stored in the image memory 1160, and the second ISP processor 1140 may also read images stored in the image memory 1160 for further processing. In addition, after being processed by the second ISP processor 1140, the second image may be transmitted directly to the display 1170 for display, or the display 1170 may read and display the image from the image memory 1160. The second camera 1120 and the second ISP processor 1140 may also implement the processes described for the first camera 1110 and the first ISP processor 1130.
The image processing technique of fig. 11 can be used to implement the steps of the image processing method described above.
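For orientation only, the following Python sketch shows one way the dual-camera capture plan of the method could be derived; the level-to-count table and the formula relating the two frame rates are assumptions, since the patent states only that the second frame rate is calculated from the number of second images and the first frame rate:

```python
from dataclasses import dataclass

@dataclass
class CaptureConfig:
    first_frame_rate: float    # frames per second for the first camera
    second_image_count: int    # second images wanted per exposure period

    @property
    def second_frame_rate(self) -> float:
        # One natural reading of the claim: capture `second_image_count`
        # frames of the second camera within a single exposure period of
        # the first. The exact formula is not fixed by the patent text.
        return self.second_image_count * self.first_frame_rate

def plan_capture(app_level: int, first_frame_rate: float = 30.0) -> CaptureConfig:
    """Map an application level to a second-image count, then derive the
    second frame rate from it. The level-to-count table is hypothetical."""
    level_to_count = {1: 2, 2: 4, 3: 8}   # assumed mapping
    return CaptureConfig(first_frame_rate, level_to_count.get(app_level, 2))

cfg = plan_capture(app_level=2)
assert cfg.second_frame_rate > cfg.first_frame_rate   # required by claim 1
```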
Embodiments of the present application also provide a computer-readable storage medium: one or more non-transitory computer-readable storage media containing computer-executable instructions that, when executed by one or more processors, cause the processors to perform the steps of the image processing method.
A computer program product comprising instructions which, when run on a computer, cause the computer to perform the image processing method is also provided.
Any reference to memory, storage, a database, or other medium used herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM), which acts as an external cache. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link (Synchlink) DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The above embodiments express only several implementations of the present application, and while their description is specific and detailed, it should not be construed as limiting the scope of the application. It should be noted that a person skilled in the art can make several variations and improvements without departing from the concept of the present application, all of which fall within the protection scope of the application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (12)

1. An image processing method, comprising:
when an image acquisition instruction is detected, acquiring a first image captured by a first camera according to a first frame rate within an exposure period;
acquiring an application level corresponding to an application identifier contained in the image acquisition instruction, and acquiring a corresponding number of second images according to the application level, wherein the application identifier is used to identify the application program that initiates the image acquisition instruction;
calculating a second frame rate according to the number of second images and the first frame rate, and acquiring at least two second images captured by a second camera within the exposure period according to the second frame rate, wherein the first frame rate is less than the second frame rate;
generating a first target image according to the first image, and generating a second target image according to the second image, wherein the second target image is used to represent depth information corresponding to the first target image;
and processing the first target image and the second target image.
2. The method of claim 1, wherein processing the first target image and the second target image comprises:
encrypting the first target image and the second target image when the application level is greater than a level threshold;
and sending the encrypted first target image and the encrypted second target image to the application program for processing.
3. The method of claim 1, wherein generating a first target image from the first image and a second target image from the second image comprises:
performing first format conversion on the first image to generate a first target image;
and packaging the second image, and performing second format conversion on the packaged second image to generate a second target image.
4. The method of claim 3, wherein packaging the second image and performing a second format conversion on the packaged second image comprises:
acquiring mark information corresponding to each second image, wherein the mark information is used to indicate the order in which the second images are acquired;
determining, according to the mark information, whether any acquired second image has been lost, and if not, packaging the second image and the corresponding mark information;
and performing second format conversion on the packaged second image according to the mark information.
5. The method of any of claims 1 to 4, wherein processing the first target image and the second target image comprises:
identifying a target object in the first target image, and acquiring target depth information corresponding to the target object according to the second target image;
and processing the target object according to the target depth information.
6. An image processing apparatus, comprising:
an image acquisition module, used for acquiring a first image captured by a first camera according to a first frame rate within an exposure period when an image acquisition instruction is detected, acquiring an application level corresponding to an application identifier contained in the image acquisition instruction, and acquiring a corresponding number of second images according to the application level, wherein the application identifier is used to identify the application program that initiates the image acquisition instruction; and calculating a second frame rate according to the number of second images and the first frame rate, and acquiring at least two second images captured by a second camera within the exposure period according to the second frame rate, wherein the first frame rate is less than the second frame rate;
an image conversion module, used for generating a first target image according to the first image and generating a second target image according to the second image, wherein the second target image is used to represent depth information corresponding to the first target image;
and an image processing module, used for processing the first target image and the second target image.
7. The apparatus of claim 6, wherein
the image processing module is further configured to encrypt the first target image and the second target image when the application level is greater than a level threshold; and sending the encrypted first target image and the encrypted second target image to the application program for processing.
8. The apparatus of claim 6, wherein
the image conversion module is further configured to perform a first format conversion on the first image to generate a first target image; and packaging the second image, and performing second format conversion on the packaged second image to generate a second target image.
9. The apparatus of claim 8, wherein
the image conversion module is further configured to obtain mark information corresponding to each second image, wherein the mark information is used to indicate the order in which the second images are acquired;
determine, according to the mark information, whether any acquired second image has been lost, and if not, package the second image and the corresponding mark information;
and perform second format conversion on the packaged second image according to the mark information.
10. The apparatus according to any one of claims 6 to 9, wherein
the image processing module is further configured to identify a target object in the first target image, and obtain target depth information corresponding to the target object according to the second target image;
and process the target object according to the target depth information.
11. An electronic device comprising a memory and a processor, the memory having stored therein a computer program that, when executed by the processor, causes the processor to perform the steps of the method according to any one of claims 1 to 5.
12. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 5.
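By way of illustration only, the sequencing check and packaging described in claims 4 and 9 might be sketched as follows; the frame data structure and the notion of "packaging" as an ordered list are assumptions, not the patent's specification:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class SecondFrame:
    seq: int      # mark information: the order in which the frame was acquired
    data: bytes   # frame payload

def package_second_images(frames: List[SecondFrame]) -> Optional[List[SecondFrame]]:
    """Use the mark information to detect lost frames before packaging.

    Returns the frames in mark-information order when the sequence is
    complete, or None when a second image has been lost.
    """
    if not frames:
        return None
    ordered = sorted(frames, key=lambda f: f.seq)
    expected = list(range(ordered[0].seq, ordered[0].seq + len(ordered)))
    if [f.seq for f in ordered] != expected:
        return None          # a second image was lost; do not package
    return ordered           # packaged together with its mark information
```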
CN201810960807.8A 2018-08-22 2018-08-22 Image processing method and device, electronic equipment and computer readable storage medium Active CN109151303B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810960807.8A CN109151303B (en) 2018-08-22 2018-08-22 Image processing method and device, electronic equipment and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810960807.8A CN109151303B (en) 2018-08-22 2018-08-22 Image processing method and device, electronic equipment and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN109151303A CN109151303A (en) 2019-01-04
CN109151303B true CN109151303B (en) 2020-12-18

Family

ID=64790759

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810960807.8A Active CN109151303B (en) 2018-08-22 2018-08-22 Image processing method and device, electronic equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN109151303B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110121031B (en) * 2019-06-11 2021-03-12 Oppo广东移动通信有限公司 Image acquisition method and device, electronic equipment and computer readable storage medium
CN113114941B (en) * 2019-08-12 2023-05-12 创新先进技术有限公司 Processing method, device and equipment for shooting image by camera
CN110955541B (en) * 2019-12-09 2022-04-15 Oppo广东移动通信有限公司 Data processing method, device, chip, electronic equipment and readable storage medium
CN110991369A (en) * 2019-12-09 2020-04-10 Oppo广东移动通信有限公司 Image data processing method and related device
CN112633143B (en) * 2020-12-21 2023-09-05 杭州海康威视数字技术股份有限公司 Image processing system, method, head-mounted device, processing device, and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102045577A (en) * 2010-09-27 2011-05-04 昆山龙腾光电有限公司 Observer tracking system and three-dimensional stereo display system for three-dimensional stereo display
CN102959941A (en) * 2010-07-02 2013-03-06 索尼电脑娱乐公司 Information processing system, information processing device, and information processing method
CN105072259A (en) * 2015-07-20 2015-11-18 清华大学深圳研究生院 Method for preventing the geographic position of a mobile terminal from leaking
CN105630484A (en) * 2015-12-17 2016-06-01 宁波优而雅电器有限公司 Application level-based message reception method
CN107113416A (en) * 2014-11-13 2017-08-29 华为技术有限公司 The method and system of multiple views high-speed motion collection
CN107124604A (en) * 2017-06-29 2017-09-01 诚迈科技(南京)股份有限公司 A kind of utilization dual camera realizes the method and device of 3-D view

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8406469B2 (en) * 2009-07-20 2013-03-26 The United States Of America As Represented By The Administrator Of The National Aeronautics And Space Administration System and method for progressive band selection for hyperspectral images

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102959941A (en) * 2010-07-02 2013-03-06 索尼电脑娱乐公司 Information processing system, information processing device, and information processing method
CN102045577A (en) * 2010-09-27 2011-05-04 昆山龙腾光电有限公司 Observer tracking system and three-dimensional stereo display system for three-dimensional stereo display
CN107113416A (en) * 2014-11-13 2017-08-29 华为技术有限公司 The method and system of multiple views high-speed motion collection
CN105072259A (en) * 2015-07-20 2015-11-18 清华大学深圳研究生院 Method for preventing the geographic position of a mobile terminal from leaking
CN105630484A (en) * 2015-12-17 2016-06-01 宁波优而雅电器有限公司 Application level-based message reception method
CN107124604A (en) * 2017-06-29 2017-09-01 诚迈科技(南京)股份有限公司 A kind of utilization dual camera realizes the method and device of 3-D view

Also Published As

Publication number Publication date
CN109151303A (en) 2019-01-04

Similar Documents

Publication Publication Date Title
CN108965732B (en) Image processing method, image processing device, computer-readable storage medium and electronic equipment
CN109040591B (en) Image processing method, image processing device, computer-readable storage medium and electronic equipment
CN108989606B (en) Image processing method and device, electronic equipment and computer readable storage medium
CN109151303B (en) Image processing method and device, electronic equipment and computer readable storage medium
CN109118581B (en) Image processing method and device, electronic equipment and computer readable storage medium
CN109767467B (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN109146906B (en) Image processing method and device, electronic equipment and computer readable storage medium
CN108716983B (en) Optical element detection method and device, electronic equipment, storage medium
CN108419017B (en) Control method, apparatus, electronic equipment and the computer readable storage medium of shooting
CN109190533B (en) Image processing method and device, electronic equipment and computer readable storage medium
CN107872631B (en) Image shooting method and device based on double cameras and mobile terminal
CN109360254B (en) Image processing method and device, electronic equipment and computer readable storage medium
CN110121031B (en) Image acquisition method and device, electronic equipment and computer readable storage medium
CN110225258B (en) Data processing method and device, computer readable storage medium and electronic equipment
CN108600740A (en) Optical element detection method, device, electronic equipment and storage medium
CN108322648B (en) Image processing method and device, electronic equipment and computer readable storage medium
CN108600631B (en) Image processing method, image processing device, computer-readable storage medium and electronic equipment
CN108629329B (en) Image processing method and device, electronic equipment and computer readable storage medium
CN109120846B (en) Image processing method and device, electronic equipment and computer readable storage medium
CN109447927B (en) Image processing method and device, electronic equipment and computer readable storage medium
CN109446945B (en) Three-dimensional model processing method and device, electronic equipment and computer readable storage medium
CN109166082A (en) Image processing method, device, electronic equipment and computer readable storage medium
CN109191396B (en) Portrait processing method and device, electronic equipment and computer readable storage medium
CN109582811B (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN109447925B (en) Image processing method and device, storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20210617

Address after: Room 01, 8th Floor, No. 1, Lane 61, Shengxia Road, China (Shanghai) Pilot Free Trade Zone, Pudong New Area, Shanghai, 200120

Patentee after: Zheku Technology (Shanghai) Co., Ltd

Address before: No. 18 Wusha Haibin Road, Chang'an Town, Dongguan 523860, Guangdong Province

Patentee before: OPPO Guangdong Mobile Communications Co.,Ltd.