CN108259769B - Image processing method, image processing device, storage medium and electronic equipment - Google Patents


Info

Publication number
CN108259769B
CN108259769B (application CN201810277051.7A)
Authority
CN
China
Prior art keywords
image
face
face image
frame
acquired
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810277051.7A
Other languages
Chinese (zh)
Other versions
CN108259769A
Inventor
何新兰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201810277051.7A priority Critical patent/CN108259769B/en
Publication of CN108259769A publication Critical patent/CN108259769A/en
Application granted granted Critical
Publication of CN108259769B publication Critical patent/CN108259769B/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/61Control of cameras or camera modules based on recognised objects
    • H04N23/611Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • G06V40/171Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80Camera processing pipelines; Components thereof

Abstract

The application discloses an image processing method, an image processing device, a storage medium and an electronic device. The image processing method includes: receiving an image acquisition instruction; in response to the image acquisition instruction, acquiring cached image frames from a cache sequence in cache order, each image frame containing at least one recognizable face; judging whether the eye areas of all persons in the currently acquired image frame meet a preset condition; if so, stopping acquiring image frames; if not, continuing to acquire image frames until each person corresponds, among the acquired image frames, to at least one image frame whose eye area meets the preset condition. By determining whether the eye areas in image frames acquired in real time meet the condition, the scheme dynamically adjusts the number of image frames to be acquired, which improves the flexibility of image frame acquisition; at the same time, the frame-grabbing time can be reduced to a certain extent and the imaging speed improved.

Description

Image processing method, image processing device, storage medium and electronic equipment
Technical Field
The present application belongs to the field of image technologies, and in particular, to an image processing method, an image processing apparatus, a storage medium, and an electronic device.
Background
Photographing, also called taking pictures or photography, generally refers to the process of exposing a photosensitive medium to light reflected by an object, usually with a mechanical or digital camera. With the popularization and diversifying functions of intelligent electronic devices, using them to photograph and record the small moments of daily life has become increasingly common.
After the camera of an electronic device enters its shooting preview interface, the device can acquire images and display them on the interface for the user to preview. The acquired images may be stored in a buffer sequence, that is, multiple frames of images are stored in the buffer sequence. In the related art, when certain processing needs to be performed on an acquired image, the electronic device may fetch the most recently acquired multiple frames from the buffer sequence. However, this image capturing mode has a long frame-grabbing time, which results in a slow imaging speed.
Disclosure of Invention
The embodiments of the application provide an image processing method and apparatus, a storage medium and an electronic device, which can improve image shooting quality.
An embodiment of the application provides an image processing method, applied to an electronic device, comprising:
receiving an image acquisition instruction;
in response to the image acquisition instruction, acquiring cached image frames from a cache sequence in cache order, each image frame containing at least one recognizable face;
judging whether the eye areas of all persons in the currently acquired image frame meet a preset condition;
if so, stopping acquiring image frames;
and if not, continuing to acquire image frames until each person corresponds, among the acquired image frames, to at least one image frame whose eye area meets the preset condition.
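The steps above form a simple early-stopping loop. A minimal Python sketch follows; the helper callables `faces_in` and `eye_area_ok`, and the frame representation, are illustrative assumptions and do not appear in the patent:

```python
def acquire_frames(buffer, faces_in, eye_area_ok):
    """Pull frames from the cache sequence in cache order, stopping early
    either when one frame passes for everyone in it, or when every person
    seen so far has at least one passing frame among those acquired."""
    acquired = []
    satisfied = set()   # person ids with at least one passing frame
    people = set()      # every person seen so far
    for frame in buffer:             # cache order: first buffered, first taken
        acquired.append(frame)
        ids = set(faces_in(frame))   # recognizable faces in this frame
        people |= ids
        passing = {pid for pid in ids if eye_area_ok(frame, pid)}
        if passing == ids:           # all eye areas in this frame pass
            break                    # stop acquiring image frames
        satisfied |= passing
        if people and people <= satisfied:
            break                    # everyone has a passing frame somewhere
    return acquired
```

With frames modeled as `{person_id: eyes_pass}` dicts, the loop stops after the second frame once both persons have a passing frame between them.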
An embodiment of the present application provides an image processing apparatus, which is applied to an electronic device, and includes:
the receiving module is used for receiving an image acquisition instruction;
the response module is used for responding to the image acquisition instruction and acquiring the cached image frames from the caching sequence according to the caching sequence, wherein the image frames at least comprise one recognizable face;
the judging module is used for judging whether the eye areas of all the people in the acquired current image frame meet preset conditions or not;
the control module is used for controlling the response module to stop acquiring image frames when the judging module determines that the preset condition is met;
and the response module is further used for continuing to acquire image frames, when the judging module determines that the preset condition is not met, until each person corresponds, among the acquired image frames, to at least one image frame whose eye area meets the preset condition.
An embodiment of the application provides a storage medium on which a computer program is stored; when the computer program is executed on a computer, the computer is caused to execute the steps of the image processing method provided by the embodiments of the application.
An embodiment of the present application further provides an electronic device, including a memory and a processor, where the processor executes the steps of the image processing method provided in the embodiments of the present application by calling a computer program stored in the memory.
In the embodiment of the application, an image acquisition instruction is received; in response to the instruction, cached image frames are acquired from the cache sequence in cache order, each frame containing at least one recognizable face; whether the eye areas of all persons in the currently acquired image frame meet the preset condition is judged; if so, acquisition of image frames stops; if not, image frames continue to be acquired until each person corresponds, among the acquired frames, to at least one image frame whose eye area meets the preset condition. By determining whether the eye areas in image frames acquired in real time meet the condition, the scheme dynamically adjusts the number of image frames to be acquired, improving the flexibility of image frame acquisition; at the same time, the frame-grabbing time can be reduced to a certain extent and the imaging speed improved.
Drawings
The technical solution and the advantages of the present invention will be apparent from the following detailed description of the embodiments of the present invention with reference to the accompanying drawings.
Fig. 1 is a schematic flowchart of an image processing method according to an embodiment of the present application.
Fig. 2 is another schematic flow chart of an image processing method according to an embodiment of the present application.
Fig. 3 is a schematic diagram of an application example of the image processing method according to the embodiment of the present application.
Fig. 4 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application.
Fig. 5 is another schematic structural diagram of an image processing apparatus according to an embodiment of the present application.
Fig. 6 is a schematic view of another structure of the image processing apparatus according to the embodiment of the present application.
Fig. 7 is a schematic structural diagram of an electronic device provided in an embodiment of the present application.
Fig. 8 is a schematic structural diagram of an image processing circuit of an electronic device according to an embodiment of the present application.
Detailed Description
Referring to the drawings, wherein like reference numbers refer to like elements, the principles of the present invention are illustrated as being implemented in a suitable computing environment. The following description is based on illustrated embodiments of the invention and should not be taken as limiting the invention with regard to other embodiments that are not detailed herein.
It is understood that the execution subject of the embodiment of the present application may be an electronic device such as a smart phone or a tablet computer.
Referring to fig. 1, fig. 1 is a schematic flow chart of an image processing method according to an embodiment of the present application, where the flow chart may include:
101. An image acquisition instruction is received.
Users often take pictures with an electronic device's camera. After the camera enters its shooting preview interface, the electronic device can acquire images through a built-in image sensor and display them on the interface for the user to preview. However, during image capture the eyes are easily affected by user habit and by external environmental factors (such as light); under overly strong light the user tends to blink or close the eyes. In particular, when several people pose for a group photo, standing positions need to be adjusted and moved, so faces tend to shake and the face images become unstable; the more severe the shaking, and the more people shaking, the worse the stability of the image. For these reasons, the related art captures multiple frames in order to synthesize an image in which everyone's eyes are open. However, if everyone's eyes are already open in the first captured frame, there is no need to keep capturing; continuing to capture wastes resources. Therefore, with the fixed multi-frame capture mode, on the one hand the frame-grabbing time tends to be long, reducing the imaging speed; on the other hand, the resources of the electronic device are easily wasted.
The image acquisition instruction may be triggered by the user pressing a physical shutter key on the electronic device after its camera is turned on; it may also be triggered by the user touching a virtual shutter control on the display interface of the electronic device after the camera is turned on, which is not specifically limited herein.
102. In response to the image acquisition instruction, the cached image frames are acquired from the cache sequence in cache order, each image frame containing at least one recognizable face.
In some embodiments, the image frames in the buffered sequence may be acquired in real time by an image sensor disposed inside the electronic device and buffered in a temporary memory. After receiving the image acquisition instruction, the electronic device may capture a corresponding image frame from the buffered sequence in response to the instruction.
In a specific implementation, the buffered image frames can be captured from the buffer sequence in order from first buffered to last, so as to ensure the continuity of the captured images. The capture speed may be determined from the shutter parameters of the electronic device's camera. The acquired image frames may include one or more persons, and at least one recognizable face, for the subsequent image processing operations.
103. Judge whether the eye areas of all persons in the currently acquired image frame meet the preset condition; if yes, go to step 105; otherwise, go to step 104.
Specifically, every time a frame in the buffer sequence is acquired, detection and judgment are performed on the eye regions of the persons in the current image frame to determine whether all eye regions in it meet the preset condition.
In some embodiments, the face image of each person in the current image frame may be compared with a face image of that person in a standard state to determine whether the person's eye area meets the preset condition. That is, the step of "judging whether the eye areas of all persons in the currently acquired image frame meet the preset condition" may include:
extracting a face image from the currently acquired image frame;
matching a target sample face image from a preset database according to the face image;
comparing the face image with the target sample face image to obtain a comparison result;
and judging, according to the comparison result, whether the eye areas of all persons in the current image frame meet the preset condition.
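The four sub-steps above can be sketched as a per-frame pipeline; `extract_faces`, `match_sample` and `compare` are hypothetical placeholders for the extraction, matching and comparison operations described in the text, not names from the patent:

```python
def frame_meets_condition(frame, extract_faces, match_sample, compare):
    """Judge whether the eye areas of all persons in a frame meet the
    preset condition: each extracted face is matched against a target
    sample face from the database and compared with it."""
    for face in extract_faces(frame):    # step 1: extract face images
        sample = match_sample(face)      # step 2: match a target sample face
        result = compare(face, sample)   # step 3: compare the two
        if not result:                   # step 4: judge from the result
            return False                 # one failing eye area fails the frame
    return True
```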
Specifically, since the directly acquired original image is often not directly usable due to the limitation of various conditions and random interference, the current image frame may be subjected to preprocessing operations such as gray scale correction, noise filtering, and the like. Then, based on an image recognition algorithm, the image features of the preprocessed current image frame may be extracted to extract color features, texture features, shape features, spatial position relationship features, and the like. Therefore, the current image frame is reduced from high latitude to low dimension so as to obtain the low-dimension sample characteristics which can reflect the essence of the image most. Based on the above, facial features, such as eyes, nose, mouth and the like, are detected and recognized from the current image frame, so as to determine the facial images included therein, and one or more facial images recognized are extracted from the facial features.
In practical application, a database containing the faces of all persons who may appear in the image frames needs to be constructed first. That is, face image information of these persons is collected in advance to obtain a sample face image of each person, and the sample face images are added to the database to construct the preset database. A sample face image matching the extracted face image can then be selected from the preset database, and the face image compared with the matched sample face image to judge whether the eye areas of all persons in the current image frame meet the preset condition.
In some embodiments, the step of "matching a target sample face image from the preset database according to the face image" may include:
recognizing the face image to obtain a face recognition result;
and acquiring, from the preset database, a target sample face image matched with the face image according to the face recognition result.
Specifically, the extracted face images may be recognized, and the identity of the person corresponding to each face image determined from the recognition result. The sample face image corresponding to that identity is then acquired from the preset database as the target sample face image.
In practical application, because the human face has expression changes, the sizes of eyes under different expressions are different. Therefore, in order to improve the accuracy of the comparison result, a plurality of sample facial images with different expressions of the same person can be stored in the preset database so as to match the most suitable target sample facial image.
That is, in some embodiments, the preset database may include a plurality of sample face image sets, where a sample face image set includes sample face images of a plurality of different expressions of the same face. The step of "acquiring, from the preset database, a target sample face image matched with the face image according to the face recognition result" may then include:
according to the face recognition result, determining a target sample face image set matched with the face image from a preset database;
recognizing the expression of the face image;
and selecting a target sample face image from the target sample face image set according to the expression.
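One plausible layout for such a database is a nested mapping from person identity to expression label to sample face image. The sketch below, including the fallback to a neutral sample when an expression is not on file, is an illustrative assumption, not something the patent specifies:

```python
# Hypothetical database layout: person id -> expression label -> sample face.
SAMPLE_DB = {
    "person_1": {"neutral": "sample_neutral.png", "smile": "sample_smile.png"},
}

def select_target_sample(db, person_id, expression):
    """Pick the sample face image whose expression matches the recognized
    expression of the face image; fall back to the neutral sample."""
    image_set = db[person_id]                      # target sample face image set
    return image_set.get(expression, image_set["neutral"])
```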
Specifically, when a preset database is constructed, facial images of each person under different expressions are collected to obtain a plurality of sample facial images of the same person, and the plurality of sample facial images are stored in the database in a set form.
In addition, when a preset database is constructed, the face images of the same person at different angles can be acquired, such as the face image at the front view angle, the face image at the left oblique view angle, the face image at the right oblique view angle and the like, so that the accuracy of the comparison result is further improved.
In specific implementations, when the face image is compared with the target sample face image, the two may differ in size and angle, so they cannot be compared directly. The face image can therefore be adjusted to the same size as the target sample face image using an affine transformation algorithm, to facilitate the subsequent comparison. That is, in some embodiments, the step of "comparing the face image with the target sample face image to obtain a comparison result" may include:
aligning the face image with the target sample face image;
calculating a first opening degree of the eye region in the aligned face image and a second opening degree of the eye region in the target sample face image;
and comparing the first opening degree with the second opening degree to determine whether the first opening degree is smaller than the second opening degree, obtaining the comparison result.
Specifically, the face image is aligned with the target sample face image using an affine transformation algorithm, adjusting the two to the same size and angle so that they can be compared directly.
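As an illustration of the alignment step, a similarity transform (scale, rotation and translation, a special case of the affine transform mentioned above) can be solved exactly from two corresponding landmarks, e.g. the two eye centers, by treating 2-D points as complex numbers. The choice of landmarks is an assumption for the sketch:

```python
def solve_similarity(src, dst):
    """Solve z -> a*z + b mapping two source landmarks onto two target
    landmarks; the complex factor a encodes scale and rotation, b the
    translation, and two point pairs determine both exactly."""
    p1, p2 = (complex(*p) for p in src)
    q1, q2 = (complex(*q) for q in dst)
    a = (q2 - q1) / (p2 - p1)   # scale + rotation
    b = q1 - a * p1             # translation
    return a, b

def apply_similarity(a, b, point):
    """Map one 2-D point through the solved transform."""
    z = a * complex(*point) + b
    return (z.real, z.imag)
```

For example, mapping the eye centers of a face image onto those of the sample image carries every other landmark along, after which sizes and angles match.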
When calculating the first opening degree of the eye region in the aligned face image, the palpebral fissure value between the upper eyelid and the lower eyelid in the eye region may be obtained first; specifically, the maximum distance between the upper eyelid and the lower eyelid in the vertical direction may be used. The first opening degree is then calculated based on the acquired palpebral fissure value. In some embodiments, the palpebral fissure value may be used directly as the opening degree.
Similarly, the second opening degree may be calculated by the method described above for the first opening degree, which is not repeated here. Finally, the first opening degree is compared with the second opening degree.
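Taking the palpebral fissure value directly as the opening degree, the calculation can be sketched as follows; the assumption that eyelid landmarks come paired by horizontal position is illustrative:

```python
def opening_degree(upper_eyelid, lower_eyelid):
    """Opening degree of one eye: the maximum vertical distance
    (palpebral fissure) between paired upper- and lower-eyelid
    landmark points, given as (x, y) tuples paired by x position."""
    return max(abs(u_y - l_y)
               for (_, u_y), (_, l_y) in zip(upper_eyelid, lower_eyelid))
```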
The step of judging, according to the comparison result, whether the eye areas of all persons in the current image frame meet the preset condition may include:
if the comparison result includes a first opening degree smaller than the corresponding second opening degree, judging that the eye areas of all persons in the current image frame do not meet the preset condition;
and if the comparison result includes no first opening degree smaller than the corresponding second opening degree, judging that the eye areas of all persons in the current image frame meet the preset condition.
Specifically, if the comparison result includes a first opening degree smaller than the second opening degree, the eye size of some person in the current image frame has not reached the size of the corresponding eyes in the preset database, so the condition is judged not to be met. If no first opening degree is smaller than the second opening degree, the eye sizes of all faces in the frame have reached the corresponding sizes in the preset database, so the condition is judged to be met.
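The decision rule above amounts to a single pass over the per-face opening-degree pairs; a minimal sketch:

```python
def frame_passes(openings):
    """openings: list of (first_degree, second_degree) pairs, one per face.
    The frame fails as soon as any first opening degree is smaller than
    the matched sample's second opening degree."""
    return all(first >= second for first, second in openings)
```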
104. Judge whether each person corresponds, among the acquired image frames, to at least one image frame whose eye area meets the preset condition; if yes, go to step 105; otherwise, go to step 102.
Specifically, when some person's eye size in the current image frame has not reached the corresponding eye size in the preset database, it is detected whether every person already corresponds to an acquired image frame in which their eye size reaches that in the preset database.
105. Stop acquiring image frames.
Specifically, when the eye areas of all persons in the currently acquired image frame meet the preset condition, the eyes of everyone in that frame may be considered open. At this point, acquisition of image frames from the buffer sequence can stop, shortening the frame-grabbing time and improving the imaging speed.
In addition, when each person corresponds, among the acquired image frames, to at least one image frame whose eye area meets the preset condition, each person may be considered to have an image with open eyes among the captured frames. Acquisition from the buffer sequence can likewise stop, shortening the frame-grabbing time and improving the imaging speed.
As can be seen from the above, the image processing method provided in the embodiment of the present application receives an image acquisition instruction; in response to the instruction, acquires cached image frames from the cache sequence in cache order, each frame containing at least one recognizable face; judges whether the eye areas of all persons in the currently acquired image frame meet the preset condition; if so, stops acquiring image frames; if not, continues to acquire image frames until each person corresponds, among the acquired frames, to at least one image frame whose eye area meets the preset condition. By determining whether the eye areas in image frames acquired in real time meet the condition, the scheme dynamically adjusts the number of image frames to be acquired, improving the flexibility of image frame acquisition; at the same time, the frame-grabbing time can be reduced to a certain extent and the imaging speed improved.
Referring to fig. 2, fig. 2 is another schematic flow chart of an image processing method according to an embodiment of the present application, where the flow chart may include:
201. The electronic device receives an image acquisition instruction.
In some embodiments, the image acquisition instruction may be triggered by a user pressing a physical shutter key on the electronic device after turning on the camera of the electronic device.
In some embodiments, the image acquisition instruction may also be triggered by a user touching a virtual shutter control on a display interface of the electronic device after the camera of the electronic device is turned on.
202. In response to the image acquisition instruction, the electronic device acquires the cached image frames from the cache sequence in cache order, each image frame containing at least one recognizable face.
Specifically, after receiving the image acquisition instruction, the electronic device may capture the corresponding image frame from the buffer sequence in response to the instruction.
In a specific implementation, the buffered image frames can be captured from the buffer sequence in order from first buffered to last, so as to ensure the continuity of the captured images. The capture speed may be determined from the shutter parameters of the electronic device's camera. The acquired image frames may include one or more persons, and at least one recognizable face, for the subsequent image processing operations.
203. The electronic device extracts a face image from the acquired current image frame.
Specifically, since a directly acquired original image often cannot be used directly because of various constraints and random interference, the current image frame may first be subjected to preprocessing such as gray-scale correction and noise filtering. Then, based on an image recognition algorithm, image features of the preprocessed frame may be extracted, such as color features, texture features, shape features and spatial-relationship features. The current image frame is thus reduced from a high dimension to a low dimension to obtain low-dimensional sample features that best reflect the essence of the image. On this basis, facial features such as eyes, nose and mouth are detected and recognized in the current image frame to locate the face images it contains, and the one or more recognized face images are extracted.
204. The electronic device recognizes the face image to obtain a face recognition result.
Specifically, the extracted face images may be recognized, and the identity of the person corresponding to each face image determined from the recognition result. The sample face image corresponding to that identity is then acquired from the preset database as the target sample face image.
205. And the electronic equipment acquires a target sample face image matched with the face image from a preset database according to the face recognition result.
In the embodiment of the present application, a database containing the faces of all persons who may appear in the image frames needs to be constructed first. That is, face image information of these persons is collected in advance to obtain a sample face image of each person, and the sample face images are added to the database to construct the preset database. A sample face image matching the extracted face image can then be selected from the preset database, and the face image compared with the matched sample face image to judge whether the eye areas of all persons in the current image frame meet the preset condition.
In practical application, because the human face has expression changes, the sizes of eyes under different expressions are different. Therefore, in order to improve the accuracy of the comparison result, a plurality of sample facial images with different expressions of the same person can be stored in the preset database so as to match the most suitable target sample facial image. Specifically, when a preset database is constructed, facial images of each person under different expressions are collected to obtain a plurality of sample facial images of the same person, and the plurality of sample facial images are stored in the database in a set form.
That is, in some embodiments, the preset database may include a plurality of sample face image sets, where a sample face image set includes sample face images of a plurality of different expressions of the same face. A target sample face image set matched with the face image can then be determined from the preset database according to the face recognition result, the expression of the face image recognized, and the target sample face image selected from the set according to that expression.
For example, if the expression of the face image is recognized as neutral, a neutral-expression sample face image is extracted from the target sample face image set corresponding to that person in the preset database as the target sample face image; as another example, if the expression is recognized as a smile, a smiling-expression sample face image is extracted from the set as the target sample face image.
206. And the electronic equipment compares the face image with the target sample face image to obtain a comparison result.
In specific implementations, when the face image is compared with the target sample face image, the two may differ in size and angle, so they cannot be compared directly. The face image can therefore be aligned with the target sample face image using an affine transformation algorithm, adjusting it to the same size and angle so the two can be compared directly.
207. The electronic device judges, according to the comparison result, whether the eye areas of all persons in the current image frame meet the preset condition; if yes, go to step 211; otherwise, go to step 208.
In some embodiments, a first opening degree of the eye region in the aligned face image and a second opening degree of the eye region in the target sample face image may be calculated, and the two compared to determine whether the first opening degree is smaller than the second opening degree, obtaining the comparison result.
Specifically, when calculating the first opening degree of the eye region in the aligned face image, the palpebral fissure value between the upper eyelid and the lower eyelid in the eye region may be obtained first; this may be taken as the maximum distance between the upper eyelid and the lower eyelid in the vertical direction. The first opening degree is then calculated based on the acquired palpebral fissure value. In some embodiments, the palpebral fissure value may be used directly as the opening degree.
Similarly, the second opening degree may be calculated by the same method as the first opening degree, which is not described again here. Finally, the first opening degree is compared with the second opening degree.
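A minimal sketch of this opening-degree comparison, assuming the eyelid landmarks come in matched upper/lower pairs and that the fissure value itself serves as the opening degree (both choices the text allows); all names are illustrative assumptions.

```python
def palpebral_fissure(upper_lid, lower_lid):
    """Palpebral fissure value: the maximum vertical gap between paired
    upper- and lower-eyelid landmarks, each a list of (x, y) points with
    the y axis growing downward (image coordinates)."""
    return max(ly - uy for (_, uy), (_, ly) in zip(upper_lid, lower_lid))

def eyes_open_enough(face_lids, sample_lids):
    """Compare the first opening degree (captured face) with the second
    (sample face). Returns True when the captured opening is at least as
    large as the sample's, i.e. the preset condition is met."""
    first = palpebral_fissure(*face_lids)
    second = palpebral_fissure(*sample_lids)
    return first >= second
```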
Additionally, in one embodiment, the electronic device may detect the eye size in the image as follows. For example, the electronic device may first recognize the eye region in the image by using face and eye recognition techniques, and then obtain the area ratio of the eye region to the entire image. If the area ratio is large, the user's eyes can be considered wide open; if the area ratio is small, the user's eyes can be considered only slightly open. As another example, the electronic device may count the number of pixels occupied by the eyes in the vertical direction of the image, and this count may be used to represent the eye size.
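The two measures just described can be sketched as follows; the bounding-box representation of the detected eye region and the binary eye mask are illustrative assumptions, since the text does not fix a representation.

```python
def eye_area_ratio(eye_box, image_shape):
    """Area of the detected eye bounding box as a fraction of the whole
    image. eye_box: (x, y, w, h); image_shape: (height, width)."""
    x, y, w, h = eye_box
    ih, iw = image_shape
    return (w * h) / (ih * iw)

def eye_pixel_height(eye_mask_column):
    """Number of eye pixels in one vertical strip of a binary mask
    (1 = eye pixel, 0 = background); a larger count means a larger eye."""
    return sum(eye_mask_column)
```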
208. The electronic equipment judges whether each person corresponds to at least one image frame with an eye area meeting the preset condition in the acquired image frames; if yes, go to step 211, otherwise go to step 209.
Specifically, when the eye size of some person in the current image frame does not reach the corresponding eye size recorded in the preset database, the electronic device detects whether every person nevertheless has, among the acquired image frames, at least one frame in which that person's eye size reaches the size recorded in the preset database.
209. The electronic device counts the number of frames acquired for the image frame.
Specifically, an accumulator may be provided: each time an image frame is acquired from the buffer sequence, the accumulator is incremented once, so as to count the number of acquired image frames. When the next image acquisition instruction is received, the recorded count is cleared and counting restarts.
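The accumulator and the threshold check of the following step can be sketched together; the class and method names are illustrative assumptions.

```python
class FrameCounter:
    """Accumulator for frames pulled from the buffer sequence; the count
    is cleared each time a new capture instruction arrives."""

    def __init__(self, max_frames):
        self.max_frames = max_frames  # preset threshold (e.g. 4, 6, 8)
        self.count = 0

    def on_capture_instruction(self):
        # Clear the recorded data and restart counting.
        self.count = 0

    def on_frame_acquired(self):
        # Accumulate once per acquired frame; True means the preset
        # threshold is reached and acquisition must stop (step 211).
        self.count += 1
        return self.count >= self.max_frames
```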
210. The electronic equipment judges whether the frame number reaches a preset threshold value; if yes, go to step 211, otherwise go to step 202.
Specifically, the preset threshold may be the number of captured image frames corresponding to one shutter press as set by the product manufacturer; for example, it may be 4 frames, 6 frames, 8 frames, and so on.
It is understood that the counted number of acquired image frames will therefore not exceed the preset threshold.
211. The electronic device stops acquiring the image frames.
Specifically, when the eye areas of all the persons in the acquired current image frame satisfy the preset condition, the eyes of all the persons in that frame can be considered relatively wide open. At this point, the electronic device can stop acquiring image frames from the buffer sequence, which shortens the frame-capturing time and improves the imaging speed.
In addition, when each person corresponds, among the acquired image frames, to at least one image frame whose eye area satisfies the preset condition, each person can be considered to have a wide-eyed image among the captured frames. Here too, acquisition from the buffer sequence can be stopped, shortening the frame-capturing time and improving the imaging speed.
Finally, when the counted number of acquired image frames reaches the set upper limit, acquisition from the buffer sequence is forcibly stopped.
For example, taking a smart phone as an example, after entering a preview interface of a camera, a frame of image is acquired every 30 to 60 milliseconds, and the acquired image is stored in a buffer queue. The buffer queue may be a fixed-length queue, for example, the buffer queue may store 15 frames of images newly acquired by the mobile phone.
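The fixed-length buffer queue described above maps naturally onto a bounded deque. In this sketch the 15-frame capacity is taken from the example; everything else (variable names, the integer stand-ins for frames) is illustrative.

```python
from collections import deque

# Fixed-length buffer: once full, appending a new frame evicts the
# oldest, so the queue always holds the 15 most recent preview frames.
buffer_queue = deque(maxlen=15)

for frame_id in range(20):       # simulate 20 preview frames arriving
    buffer_queue.append(frame_id)
```

After 20 frames, the queue holds frames 5 through 19: the five oldest frames have been evicted automatically.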
Referring to fig. 3, the user opens the camera of the mobile phone to shoot a group photo of three persons: A, B and C. The mobile phone detects a shooting instruction triggered through the shooting button and, according to that instruction, captures a frame of image from the cache sequence at regular intervals. The mobile phone detects the eye sizes of A, B and C in the captured first frame image and compares them with the sample face images of A, B and C stored in advance in the mobile phone album, so as to determine whether the eye sizes of A, B and C in the first frame image reach the eye sizes in the corresponding sample face images.
Suppose the eye-size values in the sample face images corresponding to A, B and C are 75, 80 and 82, respectively. If, in the first frame image, the eye-size value of A is 80, the eye-size value of B is 80, and the eye-size value of C is 84, then the eyes of all the persons in the first frame image are clearly wide open; at this point, capturing images from the buffer sequence can be stopped, and the first frame image can be output directly to the album as the photo.
If, instead, the first frame image has an eye-size value of 70 for A, 80 for B, and 84 for C, then A's eye size does not reach that in the sample face image, while B's and C's both do. Images therefore need to be acquired continuously from the buffer sequence, and capturing from the buffer sequence stops only once A's eye size in a captured image reaches the eye size in A's sample face image.
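The per-frame check in this example reduces to comparing each person's eye-size value against the sample value; a minimal sketch, with names assumed for illustration:

```python
def frame_meets_condition(frame_sizes, sample_sizes):
    """True when every person's eye-size value in the frame reaches the
    value recorded for that person in the sample database."""
    return all(frame_sizes[p] >= sample_sizes[p] for p in sample_sizes)

# Sample eye-size values for A, B and C from the example above.
samples = {"A": 75, "B": 80, "C": 82}
```

Applied to the two cases in the text, the frame (80, 80, 84) passes while (70, 80, 84) fails because of A.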
In practical applications, after capturing frames, the captured image frames need to be processed to obtain images with relatively large eyes of all people. In some embodiments, taking the acquired image frame as an example of a multi-person image, after stopping acquiring the image frame, the following process may be further included:
if the number of the acquired image frames is multiple, determining a basic image from the acquired multiple image frames, wherein the basic image at least comprises a face image meeting the preset condition;
determining a face image to be replaced which does not accord with a preset condition from the basic image;
determining a target face image which meets preset conditions from other to-be-processed images except the basic image, wherein the target face image and the to-be-replaced face image are face images of the same person;
in the basic image, replacing the face image to be replaced with a target face image to obtain a basic image subjected to image replacement processing;
and performing image noise reduction processing on the base image subjected to the image replacement processing and outputting the base image.
Specifically, the basic image includes at least one face image that meets the preset condition, that is, the eyes of at least one person in the determined basic image are relatively wide open. After the basic image is determined, the face image to be replaced that does not meet the preset condition (that is, a face image with smaller eyes) is determined from the basic image, and a target face image that meets the preset condition (that is, a face image of the same person with larger eyes) is extracted from the remaining captured image frames. The face image to be replaced is then replaced with the target face image; in other words, the smaller-eyed face in the basic image is replaced with the larger-eyed face from the remaining image frames, yielding an image in which every person's eyes are wide open. Finally, noise reduction is performed on the image, which is then output to the album of the electronic device as the photo.
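The selection logic of this replacement step can be sketched as follows: pick as base the frame with the most qualifying faces, then find a donor frame for each remaining person. Representing each frame by its per-person eye-size values is an illustrative assumption; the actual pixel-level face replacement and noise reduction are not shown.

```python
def plan_replacements(frames, samples):
    """frames: list of dicts mapping person -> eye-size value, one dict
    per captured frame. samples: person -> required eye-size value.
    Returns (base_index, {person: donor_frame_index})."""

    def qualifying(frame):
        # Persons whose eyes in this frame reach the sample size.
        return {p for p in samples if frame.get(p, 0) >= samples[p]}

    # Base image: the frame with the most qualifying faces.
    base = max(range(len(frames)), key=lambda i: len(qualifying(frames[i])))

    plan = {}
    for person in samples:
        if person not in qualifying(frames[base]):
            # Find a donor frame with a qualifying face for this person.
            for i, frame in enumerate(frames):
                if i != base and person in qualifying(frame):
                    plan[person] = i
                    break
    return base, plan
```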
As can be seen from the above, in the scheme of the application, the number of the image frames to be acquired is dynamically adjusted by determining whether the eye area in the image frame acquired in real time meets the condition, so that the flexibility of acquiring the image frames is improved; meanwhile, the frame grabbing time can be reduced to a certain extent, and the imaging speed is improved.
Referring to fig. 4, fig. 4 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present disclosure. The image processing apparatus 300 may include: a receiving module 31, a response module 32, a judging module 33 and a control module 34. Wherein:
a receiving module 31, configured to receive an image acquisition instruction;
a response module 32, configured to respond to the image obtaining instruction, and obtain the cached image frame from the caching sequence according to the caching sequence, where the image frame includes at least one identifiable face;
the judging module 33 is configured to judge whether the eye areas of all the persons in the acquired current image frame meet a preset condition;
a control module 34, configured to control the response module 32 to stop acquiring image frames when the determination module 33 determines yes;
the response module 32 is further configured to, when the judging module 33 determines no, continue to acquire image frames until each person corresponds, among the acquired image frames, to at least one image frame whose eye area satisfies the preset condition.
In one embodiment, the determining module 33 may be further configured to:
extracting a face image from the acquired current image frame;
matching a target sample face image from a preset database according to the face image;
comparing the face image with the target sample face image to obtain a comparison result;
and judging whether the eye areas of all the people in the current image frame meet preset conditions or not according to the comparison result.
In one embodiment, the determining module 33 may be further configured to:
identifying the face image to obtain a face identification result;
and acquiring a target sample face image matched with the face image from a preset database according to the face recognition result.
In one embodiment, the preset database includes: a plurality of sample face image sets; the sample face image set includes: and sample face images of a plurality of different expressions of the same face. The determining module 33 may be further configured to:
according to the face recognition result, determining a target sample face image set matched with the face image from a preset database;
recognizing the expression of the face image;
and selecting a target sample facial image from the target sample facial image set according to the expression.
In one embodiment, the determining module 33 may be further configured to:
aligning the face image with the target sample face image;
calculating a first opening degree of an eye region in the aligned face image and a second opening degree of the eye region in the target sample face image;
and comparing the first opening degree with the second opening degree to determine that the first opening degree is smaller than the second opening degree, and obtaining a comparison result.
In one embodiment, the determining module 33 may be further configured to:
if the comparison result includes that the first opening degree is smaller than the second opening degree, judging that the eye areas of all the people in the current image frame do not meet the preset condition;
and if the comparison result does not include that the first opening degree is smaller than the second opening degree, judging that the eye areas of all the people in the current image frame meet the preset condition.
In one embodiment, referring to fig. 5, the image processing apparatus 300 may further include:
a counting module 35, configured to count the number of frames of the obtained image frames;
a determining module 36, configured to determine whether the frame number reaches a preset threshold;
the control module 34 is further configured to control the response module 32 to stop continuing to acquire image frames when the determining module 36 determines yes.
In one embodiment, the acquired image frame is a multi-person image; referring to fig. 6, the image processing apparatus 300 may further include: a processing module 37. Among other things, the processing module 37 may be configured to:
after stopping acquiring the image frames, if the number of the acquired image frames is multiple, determining a basic image from the acquired multiple image frames, wherein the basic image at least comprises a face image which meets the preset condition;
determining a face image to be replaced which does not accord with the preset condition from the basic image;
determining a target face image meeting the preset condition from other to-be-processed images except the basic image, wherein the target face image and the to-be-replaced face image are face images of the same person;
in the basic image, replacing the face image to be replaced with the target image to obtain a basic image subjected to image replacement processing;
and performing image noise reduction processing on the base image subjected to the image replacement processing and outputting the base image.
The image processing apparatus provided by the embodiment of the present application receives an image acquisition instruction; responds to the image acquisition instruction by acquiring cached image frames from the caching sequence according to the caching sequence, where each image frame includes at least one recognizable face; judges whether the eye areas of all the persons in the acquired current image frame meet the preset condition; if so, stops acquiring image frames; and if not, continues acquiring image frames until each person corresponds, among the acquired image frames, to at least one image frame whose eye area meets the preset condition. In this scheme, the number of image frames to be acquired is dynamically adjusted by determining whether the eye areas in the image frames acquired in real time meet the condition, which improves the flexibility of image frame acquisition; at the same time, the frame-grabbing time can be reduced to a certain extent, improving the imaging speed.
The embodiment of the present application provides a computer-readable storage medium, on which a computer program is stored, which, when executed on a computer, causes the computer to execute the steps in the image processing method provided in the embodiment.
The embodiment of the present application further provides an electronic device, which includes a memory and a processor, where the processor is used to execute the steps in the image processing method provided in this embodiment by calling the computer program stored in the memory.
For example, the electronic device may be an electronic device such as a tablet computer or a smartphone. Referring to fig. 7, fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
The electronic device 400 may include components such as a sensor 401, a memory 402, a processor 403, and the like. Those skilled in the art will appreciate that the electronic device configuration shown in fig. 7 does not constitute a limitation of the electronic device and may include more or fewer components than shown, or some components may be combined, or a different arrangement of components.
The sensors 401 may include a gyro sensor (e.g., a three-axis gyro sensor), an acceleration sensor, and the like.
The memory 402 may be used to store applications and data. The memory 402 stores applications containing executable code. The application programs may constitute various functional modules. The processor 403 executes various functional applications and data processing by running an application program stored in the memory 402.
The processor 403 is a control center of the electronic device, connects various parts of the whole electronic device by using various interfaces and lines, and performs various functions of the electronic device and processes data by running or executing an application program stored in the memory 402 and calling data stored in the memory 402, thereby performing overall monitoring of the electronic device.
In this embodiment, the processor 403 in the electronic device loads the executable code corresponding to the processes of one or more application programs into the memory 402 according to the following instructions, and the processor 403 runs the application programs stored in the memory 402, thereby implementing the steps:
receiving an image acquisition instruction;
responding to the image acquisition instruction, and acquiring cached image frames from the caching sequence according to the caching sequence, wherein the image frames at least comprise one recognizable face;
judging whether the eye areas of all the people in the acquired current image frame meet preset conditions or not;
if so, stopping acquiring the image frame;
if not, continuously acquiring the image frames until each person corresponds to at least one image frame with the eye area meeting the preset condition in the acquired image frames.
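The five steps above can be sketched as a single acquisition loop; the buffer of (frame, per-person result) pairs and all names are illustrative assumptions, not the actual implementation.

```python
def acquire_frames(buffer, max_frames):
    """Pull frames from the buffer until either the current frame
    satisfies every person, every person has at least one qualifying
    frame somewhere among those acquired, or the frame cap is hit.

    buffer: iterator of (frame, {person: eye_area_ok_bool}) pairs.
    """
    acquired = []
    satisfied = set()  # persons with at least one qualifying frame
    for count, (frame, per_person) in enumerate(buffer, start=1):
        acquired.append(frame)
        satisfied |= {p for p, ok in per_person.items() if ok}
        if all(per_person.values()):      # current frame good for everyone
            break
        if satisfied >= set(per_person):  # everyone covered across frames
            break
        if count >= max_frames:           # forced stop at the threshold
            break
    return acquired
```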
As shown in fig. 8, the image processing circuit includes an image signal processor 540 and control logic 550. Image data captured by the imaging device 510 is first processed by the image signal processor 540, which analyzes the image data to capture image statistics that may be used to determine and/or control one or more control parameters of the imaging device 510. The imaging device 510 may include a camera with one or more lenses 511 and an image sensor 512. The image sensor 512 may include a color filter array (e.g., a Bayer filter); it may acquire the light intensity and wavelength information captured by each of its imaging pixels and provide a set of raw image data to be processed by the image signal processor 540. The sensor 520 may provide the raw image data to the image signal processor 540 based on the sensor 520 interface type. The sensor 520 interface may be an SMIA (Standard Mobile Imaging Architecture) interface, another serial or parallel camera interface, or a combination of the above.
The image signal processor 540 processes the raw image data pixel by pixel in a variety of formats. For example, each image pixel may have a bit depth of 8, 10, 12, or 14 bits, and image signal processor 540 may perform one or more image processing operations on the raw image data, gathering statistical information about the image data. Wherein the image processing operations may be performed with the same or different bit depth precision.
The image signal processor 540 may also receive pixel data from the image memory 530. For example, raw pixel data is sent from the sensor 520 interface to the image memory 530, and the raw pixel data in the image memory 530 is then provided to the image signal processor 540 for processing. The image Memory 530 may be a part of a Memory device, a storage device, or a separate dedicated Memory within an electronic device, and may include a DMA (Direct Memory Access) feature.
Upon receiving raw image data from the sensor 520 interface or from the image memory 530, the image signal processor 540 may perform one or more image processing operations, such as temporal filtering. The processed image data may be sent to the image memory 530 for additional processing before being displayed. The image signal processor 540 receives the processed data from the image memory 530 and performs image data processing on it in the raw domain and in the RGB and YCbCr color spaces. The processed image data may be output to a display 570 for viewing by a user and/or for further processing by a graphics engine or GPU (Graphics Processing Unit). Further, the output of the image signal processor 540 may also be sent to the image memory 530, and the display 570 may read image data from the image memory 530. In one embodiment, the image memory 530 may be configured to implement one or more frame buffers. Further, the output of the image signal processor 540 may be transmitted to an encoder/decoder 560 to encode/decode the image data. The encoded image data may be saved and decompressed before being displayed on the display 570. The encoder/decoder 560 may be implemented by a CPU, GPU, or coprocessor.
The statistical data determined by the image signal processor 540 may be sent to the control logic 550. For example, the statistical data may include image sensor 512 statistics such as auto-exposure, auto-white balance, auto-focus, flicker detection, black level compensation, lens 511 shading correction, and the like. The control logic 550 may include a processor and/or microcontroller that executes one or more routines (e.g., firmware) that may determine control parameters of the imaging device 510 and, in turn, control parameters based on the received statistical data. For example, the control parameters may include sensor 520 control parameters (e.g., gain, integration time for exposure control), camera flash control parameters, lens 511 control parameters (e.g., focal length for focusing or zooming), or a combination of these parameters. The ISP control parameters may include gain levels and color correction matrices for automatic white balance and color adjustment (e.g., during RGB processing), as well as lens 511 shading correction parameters.
The following steps are steps for implementing the image processing method provided by the embodiment by using the image processing technology in fig. 8:
receiving an image acquisition instruction;
responding to the image acquisition instruction, and acquiring cached image frames from the caching sequence according to the caching sequence, wherein the image frames at least comprise one recognizable face;
judging whether the eye areas of all the people in the acquired current image frame meet preset conditions or not;
if so, stopping acquiring the image frame;
if not, continuously acquiring the image frames until each person corresponds to at least one image frame with the eye area meeting the preset condition in the acquired image frames.
In one embodiment, when the electronic device performs the step of determining whether the eye regions of all the persons in the acquired current image frame satisfy the preset condition, the method specifically includes:
extracting a face image from the acquired current image frame;
matching a target sample face image from a preset database according to the face image;
comparing the face image with the target sample face image to obtain a comparison result;
and judging whether the eye areas of all the people in the current image frame meet preset conditions or not according to the comparison result.
In an embodiment, when the electronic device performs the step of matching the corresponding sample face image from the preset database according to the face image, the method specifically includes:
identifying the face image to obtain a face identification result;
and acquiring a target sample face image matched with the face image from a preset database according to the face recognition result.
In one embodiment, the preset database includes: a plurality of sample face image sets; the sample face image set comprises: and sample face images of a plurality of different expressions of the same face. When the step of acquiring, by the electronic device, a target sample face image matched with the face image from a preset database according to the face recognition result is executed, the method specifically includes:
according to the face recognition result, determining a target sample face image set matched with the face image from a preset database;
recognizing the expression of the face image;
and selecting a target sample facial image from the target sample facial image set according to the expression.
In an embodiment, when the electronic device performs the step of comparing the face image with the target sample face image to obtain a comparison result, the method specifically includes:
aligning the face image with the target sample face image;
calculating a first opening degree of an eye region in the aligned face image and a second opening degree of the eye region in the target sample face image;
and comparing the first opening degree with the second opening degree to determine that the first opening degree is smaller than the second opening degree, and obtaining a comparison result.
In an embodiment, when the electronic device performs the step of determining whether the eye areas of all the people in the current image frame satisfy the preset condition according to the comparison result, specifically:
if the comparison result includes that the first opening degree is smaller than the second opening degree, judging that the eye areas of all the people in the current image frame do not meet the preset condition;
and if the comparison result does not include that the first opening degree is smaller than the second opening degree, judging that the eye areas of all the people in the current image frame meet the preset condition.
In one embodiment, after determining that the eye regions of all the persons in the acquired current image frame do not satisfy the preset condition, the electronic device may further perform the following steps:
counting the number of the acquired image frames;
judging whether the frame number reaches a preset threshold value;
and if so, stopping continuously acquiring the image frame.
In one embodiment, the acquired image frame is a multi-person image; after stopping acquiring the image frame, the electronic device may further perform the steps of:
if the number of the acquired image frames is multiple, determining a basic image from the acquired multiple image frames, wherein the basic image at least comprises a face image which meets the preset condition;
determining a face image to be replaced which does not accord with the preset condition from the basic image;
determining a target face image meeting the preset condition from other to-be-processed images except the basic image, wherein the target face image and the to-be-replaced face image are face images of the same person;
in the basic image, replacing the face image to be replaced with the target image to obtain a basic image subjected to image replacement processing;
and performing image noise reduction processing on the base image subjected to the image replacement processing and outputting the base image.
In the above embodiments, the descriptions of the embodiments have respective emphasis, and parts that are not described in detail in a certain embodiment may refer to the above detailed description of the image processing method, and are not described herein again.
The image processing apparatus provided in the embodiment of the present application and the image processing method in the above embodiment belong to the same concept, and any method provided in the embodiment of the image processing method may be executed on the image processing apparatus, and a specific implementation process thereof is described in detail in the embodiment of the image processing method, and is not described herein again.
It should be noted that, for the image processing method in the embodiment of the present application, it can be understood by those skilled in the art that all or part of the process of implementing the image processing method in the embodiment of the present application can be completed by controlling the relevant hardware through a computer program, where the computer program can be stored in a computer-readable storage medium, such as a memory, and executed by at least one processor, and during the execution, the process of the embodiment of the image processing method can be included. The storage medium may be a magnetic disk, an optical disk, a Read Only Memory (ROM), a Random Access Memory (RAM), or the like.
In the image processing apparatus according to the embodiment of the present application, each functional module may be integrated into one processing chip, each module may exist alone physically, or two or more modules may be integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer readable storage medium, such as a read-only memory, a magnetic or optical disk, or the like.
The foregoing detailed description has provided an image processing method, an image processing apparatus, a storage medium, and an electronic device according to embodiments of the present application, and specific examples are applied herein to illustrate the principles and implementations of the present invention, and the above descriptions of the embodiments are only used to help understanding the method and the core concept of the present invention; meanwhile, for those skilled in the art, according to the idea of the present invention, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present invention.

Claims (8)

1. An image processing method applied to an electronic device, comprising:
receiving an image acquisition instruction;
responding to the image acquisition instruction, and acquiring the cached image frames from the caching sequence according to the caching sequence, wherein the image frames at least comprise one recognizable face;
judging whether the eye areas of all the persons in the acquired current image frame meet preset conditions, specifically comprising: extracting a face image from the acquired current image frame, recognizing the face image to obtain a face recognition result, determining a target sample face image set matched with the face image from a preset database according to the face recognition result, recognizing the expression of the face image, selecting a target sample face image from the target sample face image set according to the expression, comparing the face image with the target sample face image to obtain a comparison result, and judging whether the eye areas of all the persons in the current image frame meet the preset conditions according to the comparison result, wherein the preset database comprises a plurality of sample face image sets, and each sample face image set comprises sample face images of a plurality of different expressions of the same face;
if so, stopping acquiring the image frame;
and if not, continuously acquiring the image frames until each person corresponds to at least one image frame with the eye area meeting the preset condition in the acquired image frames, and storing the image frames meeting the preset condition into an album.
2. The image processing method according to claim 1, wherein comparing the face image with the target sample face image to obtain a comparison result comprises:
aligning the face image with the target sample face image;
calculating a first opening degree of the eye region in the aligned face image and a second opening degree of the eye region in the target sample face image;
and comparing the first opening degree with the second opening degree to determine whether the first opening degree is smaller than the second opening degree, thereby obtaining the comparison result.
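The claims leave the "opening degree" metric unspecified. One plausible choice, shown here purely as an assumption, is the eye aspect ratio computed from a six-point eye landmark contour (ordered as in common 68-point face models):

```python
import math

def eye_aspect_ratio(eye):
    """An assumed 'opening degree': eye aspect ratio over six landmark
    points p1..p6 (the claims do not specify the metric)."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    p1, p2, p3, p4, p5, p6 = eye
    # Vertical extents over the horizontal extent; larger means more open.
    return (dist(p2, p6) + dist(p3, p5)) / (2.0 * dist(p1, p4))

def compare_opening(face_eye, sample_eye):
    """Claim 2's comparison: True when the first opening degree (captured
    face) is smaller than the second (target sample face)."""
    return eye_aspect_ratio(face_eye) < eye_aspect_ratio(sample_eye)
```

A nearly closed eye yields a much smaller ratio than an open one, so the comparison flags frames in which the subject blinked relative to their open-eyed sample image.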
3. The image processing method according to claim 2, wherein determining, according to the comparison result, whether the eye regions of all persons in the current image frame satisfy the preset condition comprises:
if the comparison result includes a case in which the first opening degree is smaller than the second opening degree, determining that the eye regions of all persons in the current image frame do not satisfy the preset condition;
and if the comparison result includes no case in which the first opening degree is smaller than the second opening degree, determining that the eye regions of all persons in the current image frame satisfy the preset condition.
4. The image processing method according to claim 1, further comprising, after determining that the eye regions of all persons in the acquired current image frame do not satisfy the preset condition:
counting the number of image frames acquired;
determining whether the number of frames reaches a preset threshold;
and if so, stopping the acquisition of further image frames.
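Claim 4's safeguard amounts to a bounded acquisition loop: keep pulling cached frames, but give up once a frame-count threshold is reached even if the condition was never met. A minimal sketch, with an assumed threshold value since the claim leaves it unspecified:

```python
def acquire_with_limit(frame_source, condition_met, max_frames=30):
    """Acquire frames until the preset condition is met (claim 1's stop)
    or until max_frames frames have been acquired (claim 4's threshold).
    max_frames=30 is an assumed value, not taken from the patent."""
    acquired = []
    for frame in frame_source:
        acquired.append(frame)
        if condition_met(frame):
            break               # normal stop: eyes open everywhere
        if len(acquired) >= max_frames:
            break               # threshold reached: stop acquiring anyway
    return acquired
```

This prevents the method from draining the entire cache when, for example, a subject keeps their eyes closed in every buffered frame.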
5. The image processing method according to any one of claims 1 to 4, wherein the acquired image frames are multi-person images; after stopping the acquisition of image frames, the method further comprises:
if a plurality of image frames have been acquired, determining a base image from the acquired image frames, wherein the base image comprises at least one face image satisfying the preset condition;
determining, from the base image, a to-be-replaced face image that does not satisfy the preset condition;
determining, from the to-be-processed images other than the base image, a target face image that satisfies the preset condition, wherein the target face image and the to-be-replaced face image are face images of a same person;
replacing the to-be-replaced face image in the base image with the target face image to obtain a base image subjected to image replacement processing;
and performing image noise reduction on the base image subjected to the image replacement processing and outputting it.
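The replacement step of claim 5 can be reduced to copying a face region from a donor frame into the base image. The sketch below is a deliberately minimal rectangular paste; a real implementation would align the faces and blend the seam, and the `bbox` convention is an assumption:

```python
import numpy as np

def replace_face(base_image, target_face, bbox):
    """Paste target_face (same person, eyes open, from another acquired
    frame) over the to-be-replaced face region of base_image.
    bbox = (x, y, w, h) in base-image coordinates (assumed convention).
    Returns a new image; the base image is left untouched."""
    x, y, w, h = bbox
    out = base_image.copy()
    out[y:y + h, x:x + w] = target_face[:h, :w]  # crude copy, no blending
    return out
```

After replacement, the method applies noise reduction to the composited base image before output, which also helps mask the pasted boundary.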
6. An image processing apparatus applied to an electronic device, comprising:
the receiving module is used for receiving an image acquisition instruction;
the response module is used for responding to the image acquisition instruction by acquiring cached image frames from the cache in the order in which they were cached, wherein each image frame comprises at least one recognizable face;
a judging module, configured to determine whether the eye regions of all persons in the acquired current image frame satisfy a preset condition, and specifically to: extract a face image from the acquired current image frame; recognize the face image to obtain a face recognition result; determine, from a preset database according to the face recognition result, a target sample face image set matched with the face image; recognize an expression of the face image; select a target sample face image from the target sample face image set according to the expression; compare the face image with the target sample face image to obtain a comparison result; and determine, according to the comparison result, whether the eye regions of all persons in the current image frame satisfy the preset condition, wherein the preset database comprises a plurality of sample face image sets, and each sample face image set comprises sample face images of a plurality of different expressions of a same face;
a control module, configured to control the response module to stop acquiring image frames when the judging module determines that the preset condition is satisfied;
and the response module is further configured to, when the judging module determines that the preset condition is not satisfied, continue acquiring image frames until, among the acquired image frames, each person corresponds to at least one image frame whose eye region satisfies the preset condition, and to store the image frames satisfying the preset condition in an album.
7. A storage medium having a computer program stored thereon, wherein the computer program, when executed on a computer, causes the computer to perform the method of any one of claims 1-5.
8. An electronic device comprising a memory and a processor, wherein the processor is configured to perform the method of any one of claims 1-5 by invoking a computer program stored in the memory.
CN201810277051.7A 2018-03-30 2018-03-30 Image processing method, image processing device, storage medium and electronic equipment Active CN108259769B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810277051.7A CN108259769B (en) 2018-03-30 2018-03-30 Image processing method, image processing device, storage medium and electronic equipment

Publications (2)

Publication Number Publication Date
CN108259769A CN108259769A (en) 2018-07-06
CN108259769B true CN108259769B (en) 2020-08-14

Family

ID=62747688

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810277051.7A Active CN108259769B (en) 2018-03-30 2018-03-30 Image processing method, image processing device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN108259769B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109543581A (en) * 2018-11-15 2019-03-29 北京旷视科技有限公司 Image processing method, image processing apparatus and non-volatile memory medium
CN109949213B (en) * 2019-03-15 2023-06-16 百度在线网络技术(北京)有限公司 Method and apparatus for generating image
CN112419447A (en) * 2020-11-17 2021-02-26 北京达佳互联信息技术有限公司 Method and device for generating dynamic graph, electronic equipment and storage medium

Citations (3)

Publication number Priority date Publication date Assignee Title
CN103049085A (en) * 2012-12-19 2013-04-17 苏州贝腾特电子科技有限公司 Method for double clicking virtual mouse
CN105072327A (en) * 2015-07-15 2015-11-18 广东欧珀移动通信有限公司 Eye-closing-preventing person photographing method and device thereof
CN106204435A (en) * 2016-06-27 2016-12-07 北京小米移动软件有限公司 Image processing method and device

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
JP5683025B2 (en) * 2010-04-19 2015-03-11 パナソニックIpマネジメント株式会社 Stereoscopic image capturing apparatus and stereoscopic image capturing method

Similar Documents

Publication Publication Date Title
CN110149482B (en) Focusing method, focusing device, electronic equipment and computer readable storage medium
JP4274233B2 (en) Imaging apparatus, image processing apparatus, image processing method therefor, and program causing computer to execute the method
CN108810413B (en) Image processing method and device, electronic equipment and computer readable storage medium
CN107886484B (en) Beautifying method, beautifying device, computer-readable storage medium and electronic equipment
US8861806B2 (en) Real-time face tracking with reference images
KR101280920B1 (en) Image recognition apparatus and method
CN109068058B (en) Shooting control method and device in super night scene mode and electronic equipment
CN108259770B (en) Image processing method, image processing device, storage medium and electronic equipment
US8879802B2 (en) Image processing apparatus and image processing method
CN111402135A (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN113766125B (en) Focusing method and device, electronic equipment and computer readable storage medium
CN108198152B (en) Image processing method and device, electronic equipment and computer readable storage medium
CN108093158B (en) Image blurring processing method and device, mobile device and computer readable medium
CN108401110B (en) Image acquisition method and device, storage medium and electronic equipment
WO2019114508A1 (en) Image processing method, apparatus, computer readable storage medium, and electronic device
CN108259769B (en) Image processing method, image processing device, storage medium and electronic equipment
CN108574803B (en) Image selection method and device, storage medium and electronic equipment
CN113313626A (en) Image processing method, image processing device, electronic equipment and storage medium
CN108093170B (en) User photographing method, device and equipment
CN112036311A (en) Image processing method and device based on eye state detection and storage medium
CN110365897B (en) Image correction method and device, electronic equipment and computer readable storage medium
CN108462831B (en) Image processing method, image processing device, storage medium and electronic equipment
CN108401109B (en) Image acquisition method and device, storage medium and electronic equipment
CN108513068B (en) Image selection method and device, storage medium and electronic equipment
CN107578372B (en) Image processing method, image processing device, computer-readable storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: No. 18 Wusha Beach Road, Chang'an Town, Dongguan 523860, Guangdong Province

Applicant after: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS Corp.,Ltd.

Address before: No. 18 Wusha Beach Road, Chang'an Town, Dongguan 523860, Guangdong Province

Applicant before: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS Corp.,Ltd.

GR01 Patent grant