CN108513069B - Image processing method, image processing device, storage medium and electronic equipment - Google Patents

Image processing method, image processing device, storage medium and electronic equipment

Info

Publication number
CN108513069B
CN108513069B
Authority
CN
China
Prior art keywords
image
wide
face
processed
scene
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810276764.1A
Other languages
Chinese (zh)
Other versions
CN108513069A (en)
Inventor
何新兰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201810276764.1A priority Critical patent/CN108513069B/en
Publication of CN108513069A publication Critical patent/CN108513069A/en
Application granted granted Critical
Publication of CN108513069B publication Critical patent/CN108513069B/en
Legal status: Active

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80: Camera processing pipelines; Components thereof
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168: Feature extraction; Face representation
    • G06V40/171: Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60: Control of cameras or camera modules
    • H04N23/61: Control of cameras or camera modules based on recognised objects
    • H04N23/611: Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60: Control of cameras or camera modules
    • H04N23/698: Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10004: Still image; Photographic image
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/30: Subject of image; Context of image processing
    • G06T2207/30196: Human being; Person
    • G06T2207/30201: Face

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Studio Devices (AREA)
  • Image Analysis (AREA)

Abstract

The embodiment of the application discloses an image processing method, an image processing device, a storage medium and electronic equipment. A tele scene image and a wide-angle scene image of the same scene, acquired synchronously, are obtained; face recognition is then performed on the tele scene image to obtain the face features of the tele scene image; the face features of the tele scene image are then bound to the wide-angle scene image; finally, the wide-angle scene image is subjected to preset processing according to the face features bound to it. When the scheme is applied to long-distance portrait shooting or multi-person group shooting, the face features of people in the scene to be shot can be extracted from the tele scene image, which has a small viewing range but in which faces occupy a large proportion of the frame, and the wide-angle scene image, which has a large viewing range but in which faces occupy a small proportion, can then be processed using the extracted face features. This alleviates the difficulty of extracting face features and improves image processing accuracy.

Description

Image processing method, image processing device, storage medium and electronic equipment
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image processing method and apparatus, a storage medium, and an electronic device.
Background
Electronic equipment such as mobile phones generally provides a photographing function for the user. With the continuous progress of hardware such as camera modules and of image processing algorithms, the photographing function of electronic equipment has become more and more powerful, and users photograph with electronic equipment more and more frequently.
At present, electronic devices provide image processing functions in addition to basic photographing, such as beautifying and decorating human faces in images. The premise of such image processing is recognizing the face features in the image. However, when the electronic device shoots people at a long distance or takes a multi-person group photo, the faces in the captured image are small and accurate extraction of face features is difficult, which affects the accuracy of subsequent image processing.
Disclosure of Invention
The embodiment of the application provides an image processing method and device, a storage medium and an electronic device, which can improve the accuracy of image processing.
In a first aspect, an embodiment of the present application provides an image processing method, including:
acquiring a synchronously acquired tele scene image and a wide scene image of the same scene;
carrying out face recognition on the tele scene image to obtain the face features of the tele scene image;
binding the face features with the wide-angle scene image;
and performing preset processing on the wide-angle scene image according to the face features bound to the wide-angle scene image.
In a second aspect, an embodiment of the present application provides an image processing apparatus, including:
the image acquisition module is used for acquiring a synchronously acquired tele scene image and a wide scene image of the same scene;
the image recognition module is used for carrying out face recognition on the tele scene image to obtain the face features of the tele scene image;
the feature binding module is used for binding the face features with the wide-angle scene image;
and the image processing module is used for performing preset processing on the wide-angle scene image according to the face features bound to the wide-angle scene image.
In a third aspect, an embodiment of the present application provides a storage medium having a computer program stored thereon, where the computer program, when run on a computer, causes the computer to execute the image processing method according to any embodiment of the present application.
In a fourth aspect, an embodiment of the present application provides an electronic device, including a central processing unit and a memory, where the memory has a computer program, and the central processing unit is configured to execute the image processing method according to any embodiment of the present application by calling the computer program.
The method includes the following steps: firstly, a tele scene image and a wide-angle scene image of the same scene, acquired synchronously, are obtained; face recognition is then performed on the tele scene image to obtain the face features of the tele scene image; the face features of the tele scene image are then bound to the wide-angle scene image; finally, the wide-angle scene image is subjected to preset processing according to the face features bound to it. When the technical scheme provided by the embodiment of the application is applied to long-distance portrait shooting or multi-person group shooting, the face features of people in the scene to be shot can be extracted from the tele scene image, which has a small viewing range but in which faces occupy a large proportion of the frame, and the wide-angle scene image, which has a large viewing range but in which faces occupy a small proportion, can then be processed using the extracted face features. This alleviates the difficulty of extracting face features and improves image processing accuracy.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application; other drawings can be obtained by those skilled in the art based on these drawings without creative effort.
Fig. 1 is a schematic flowchart of an image processing method according to an embodiment of the present application.
Fig. 2 is a schematic diagram of the arrangement positions of the telephoto camera and the wide-angle camera in the embodiment of the present application.
Fig. 3 is an exemplary diagram of acquiring a tele scene image and a wide scene image of a scene to be photographed in the embodiment of the present application.
Fig. 4 is an operation diagram of binding facial features in the embodiment of the present application.
Fig. 5 is a schematic diagram of another operation of binding facial features in the embodiment of the present application.
Fig. 6 is a schematic diagram of determining an image to be processed from a plurality of wide-angle scene images in the embodiment of the present application.
Fig. 7 is an operation diagram for determining a face image to be processed from an image to be processed in the embodiment of the present application.
Fig. 8 is a schematic diagram illustrating an operation of acquiring an image of a target face from another image of a wide-angle scene in an embodiment of the present application.
Fig. 9 is a schematic diagram of an operation of replacing the face image to be processed with the target face image in the embodiment of the present application.
Fig. 10 is a schematic diagram of an operation of acquiring a target face image from a face image library in the embodiment of the present application.
Fig. 11 is another operation diagram for replacing the face image to be processed with the target face image in the embodiment of the present application.
Fig. 12 is an operation diagram for acquiring a target face image from a target electronic device in the embodiment of the present application.
Fig. 13 is a further flowchart illustrating an image processing method provided in an embodiment of the present application.
Fig. 14 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application.
Fig. 15 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Fig. 16 is another schematic structural diagram of an electronic device according to an embodiment of the present application.
Fig. 17 is a schematic diagram of a detailed structure of an image processing circuit in the embodiment of the present application.
Fig. 18 is a schematic diagram of another detailed structure of the image processing circuit in the embodiment of the present application.
Detailed Description
Referring to the drawings, wherein like reference numbers refer to like elements, the principles of the present application are illustrated as being implemented in a suitable computing environment. The following description is based on illustrated embodiments of the application and should not be taken as limiting the application with respect to other embodiments that are not detailed herein.
In the description that follows, specific embodiments of the present application will be described with reference to steps and symbols executed by one or more computers, unless otherwise indicated. Accordingly, these steps and operations will at times be referred to as being performed by a computer, which means that the computer's processing unit manipulates electronic signals representing data in a structured form. This manipulation transforms the data or maintains it at locations in the computer's memory system, which reconfigures or otherwise alters the operation of the computer in a manner well known to those skilled in the art. The data structures where data is maintained are physical locations of the memory that have particular properties defined by the format of the data. However, while the principles of the application are described in the foregoing language, this is not meant to be limiting, and those of ordinary skill in the art will recognize that various of the steps and operations described below may also be implemented in hardware.
The term module, as used herein, may be considered a software object executing on the computing system. The various components, modules, engines, and services described herein may be viewed as objects implemented on the computing system. The apparatus and method described herein may be implemented in software, but may also be implemented in hardware, and are within the scope of the present application.
The terms "first", "second", and "third", etc. in this application are used to distinguish between different objects and not to describe a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or modules is not limited to only those steps or modules listed, but rather, some embodiments may include additional steps or modules not listed or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
The embodiment of the present application provides an image processing method, and an execution subject of the image processing method may be the image processing apparatus provided in the embodiment of the present application, or an electronic device integrated with the image processing apparatus, where the image processing apparatus may be implemented in a hardware or software manner. The electronic device may be a smart phone, a tablet computer, a palm computer, a notebook computer, or a desktop computer.
Referring to fig. 1, fig. 1 is a schematic flow chart of an image processing method according to an embodiment of the present disclosure. The specific flow of the image processing method provided by the embodiment of the application can be as follows:
in step 101, synchronously acquired tele scene images and wide scene images of the same scene are acquired.
Referring to fig. 2, in the embodiment of the present application, the electronic device includes a telephoto camera and a wide-angle camera, and in an arrangement manner shown in fig. 2, the telephoto camera and the wide-angle camera are horizontally arranged in parallel on a same plane of the electronic device, and a certain distance is provided between the telephoto camera and the wide-angle camera. In addition, the electronic equipment is provided with a tele-image buffer queue and a wide-image buffer queue, wherein the tele-image buffer queue is used for buffering images collected by the tele camera, and the wide-image buffer queue is used for buffering images collected by the wide camera. In practical implementation, the arrangement of the telephoto camera and the wide-angle camera may be set by those skilled in the art according to actual needs.
In the embodiment of the application, after the electronic equipment starts the photographing application, the tele camera and the wide camera are synchronously started, the tele camera and the wide camera are controlled to synchronously acquire images of a scene to be photographed according to the same frame rate, and the tele scene image and the wide scene image of the scene to be photographed are respectively obtained. The telephoto camera stores the acquired telephoto scene image into the telephoto image cache queue, and the wide-angle camera stores the acquired wide-angle scene image into the wide-angle image cache queue.
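To make this capture pipeline concrete, below is a minimal sketch (not from the patent) of synchronized dual-camera acquisition into two buffer queues, assuming OpenCV and that the tele and wide-angle cameras are exposed as separate video devices; the device indices and queue depth are illustrative assumptions.

```python
import time
from collections import deque

import cv2  # assumption: both camera modules are reachable via OpenCV

TELE_CAM_INDEX, WIDE_CAM_INDEX = 0, 1  # hypothetical device indices
QUEUE_LEN = 30                         # hypothetical buffer depth

tele_queue = deque(maxlen=QUEUE_LEN)   # tele image buffer queue
wide_queue = deque(maxlen=QUEUE_LEN)   # wide-angle image buffer queue

tele_cam = cv2.VideoCapture(TELE_CAM_INDEX)
wide_cam = cv2.VideoCapture(WIDE_CAM_INDEX)

def capture_synchronized_frame():
    """Grab one frame from each camera at (approximately) the same instant."""
    # grab() latches both sensors first so the two exposures stay close in time,
    # then retrieve() decodes the latched frames.
    tele_cam.grab()
    wide_cam.grab()
    t = time.monotonic()  # shared acquisition timestamp for this frame pair
    ok_t, tele_frame = tele_cam.retrieve()
    ok_w, wide_frame = wide_cam.retrieve()
    if ok_t and ok_w:
        tele_queue.append((t, tele_frame))
        wide_queue.append((t, wide_frame))
```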
For example, referring to fig. 3, the scene to be shot shown in fig. 3 includes three different persons: person C, person D, and person E. At a certain acquisition time, the electronic device acquires a tele scene image of the scene to be shot through the tele camera and a wide-angle scene image of the scene to be shot through the wide-angle camera. Obviously, the people in the tele scene image are larger than the people in the wide-angle scene image and are easier to recognize.
In the embodiment of the application, when receiving a triggered image shooting request, the electronic device extracts a tele scene image and a wide scene image of the same scene, which are synchronously acquired by the tele camera and the wide camera, from the tele scene image queue and the wide scene image queue respectively.
Optionally, in an embodiment, acquiring the synchronously acquired tele scene image and the wide scene image of the same scene may include:
the method comprises the steps of obtaining a long-focus scene image and a wide-angle scene image which are synchronously collected at a plurality of collecting moments and are in the same scene, and obtaining a long-focus scene image set and a wide-angle scene image set.
The number of tele scene images in the tele scene image set is the same as the number of wide-angle scene images in the wide-angle scene image set. The target number of tele scene images/wide-angle scene images to acquire can be determined according to the frame rate information of the tele camera/wide-angle camera.
Specifically, when receiving a triggered image shooting request, the electronic device first acquires frame rate information of the tele camera/the wide camera, where the frame rate information is used to describe the number of scene images acquired by the tele camera/the wide camera in a unit time, for example, when the acquired frame rate information is 30fps, it indicates that the tele camera acquires 30 tele scene images of a scene to be shot per second, and the wide camera acquires 30 wide scene images of the scene to be shot per second; for another example, when the obtained frame rate information is 15fps, it is described that the telephoto camera acquires 15 telephoto scene images of the scene to be photographed per second, and the wide-angle camera acquires 15 wide-angle scene images of the scene to be photographed per second.
When the ambient light brightness is high (or the electronic device is in a bright environment), the tele camera/the wide camera can complete exposure within a short time (for example, 30ms), so that the tele camera/the wide camera can acquire a scene image at a high frame rate; when the ambient light level is low (or the electronic device is in a dark environment), it takes a long time (e.g., 40ms-60ms or more) for the tele/wide camera to complete the exposure, so that it can only capture the scene image at a low frame rate.
Then, the target number corresponding to the frame rate information is determined according to the frame rate information of the tele camera/wide-angle camera, where the target number may be positively correlated with the frame rate. For example, when the frame rate information of the tele camera/wide-angle camera is 30fps, the target number may be determined to be 8, and when the frame rate information is 15fps, the target number may be determined to be 6.
After the target number is determined, the tele scene images and the wide scene images with the target number are extracted from the tele image buffer queue and the wide image buffer queue, wherein the tele scene images and the wide scene images with the same acquisition time (or synchronously acquired) form an image group, and the image groups with the target number are obtained together.
For example, when the number of targets is determined to be "4", 4 tele scene images at acquisition times t1, t2, t3, and t4 are extracted from the tele image buffer queue, and 4 wide scene images at acquisition times t1, t2, t3, and t4 are extracted from the wide image buffer queue, so that 4 image groups are obtained, corresponding to the acquisition times t1, t2, t3, and t4, respectively.
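The grouping logic just described can be sketched as follows; the 30fps→8 and 15fps→6 mapping follows the example values above, while the function names and queue layout are illustrative assumptions.

```python
def target_count_for_frame_rate(fps: int) -> int:
    # Example mapping from the text: a higher frame rate yields a larger target number.
    return {30: 8, 15: 6}.get(fps, 4)

def extract_image_groups(tele_queue, wide_queue, fps: int):
    """Return the most recent `n` image groups, each a (timestamp, tele, wide) triple."""
    n = target_count_for_frame_rate(fps)
    wide_by_time = {t: frame for t, frame in wide_queue}
    groups = []
    for t, tele_frame in list(tele_queue)[-n:]:
        if t in wide_by_time:  # tele and wide frames share an acquisition time
            groups.append((t, tele_frame, wide_by_time[t]))
    return groups
```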
In step 102, performing face recognition on the acquired tele scene image to obtain face features of the tele scene image;
in the embodiment of the application, when a plurality of tele scene images are acquired, face recognition is performed on each acquired tele scene image; when only one tele scene image is acquired, face recognition is performed on that image.
The facial features are used to describe the facial image included in the tele scene image, including but not limited to the face size, the eye opening degree, and the expression type (such as anger, disgust, fear, happy, sad, and frightened), etc.
It should be noted that, the embodiment of the present application is not particularly limited to what face recognition technology is used to perform face recognition on a tele-scene image, and a person skilled in the art may select a suitable face recognition technology according to actual needs.
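As an illustration of what the recognized face features might look like in code, the following sketch detects faces with OpenCV's stock Haar cascade and fills a feature record matching the fields described above; the identity, eye-openness, and expression fields are stubs, since the patent deliberately leaves the concrete recognition technology open.

```python
from dataclasses import dataclass

import cv2

@dataclass
class FaceFeature:
    person_id: int        # which person this face belongs to
    bbox: tuple           # (x, y, w, h) face region in the image
    face_size: int        # e.g. area of the bounding box
    eye_openness: float   # stub: 0.0 (closed) .. 1.0 (wide open)
    expression: str       # stub: e.g. "laughing", "frustrated", "frightened"

_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def recognize_faces(tele_image) -> list:
    gray = cv2.cvtColor(tele_image, cv2.COLOR_BGR2GRAY)
    features = []
    for i, (x, y, w, h) in enumerate(_detector.detectMultiScale(gray, 1.1, 5)):
        features.append(FaceFeature(
            person_id=i,            # stub: real code would match identities
            bbox=(x, y, w, h),
            face_size=w * h,
            eye_openness=1.0,       # stub value; needs an eye-state model
            expression="laughing",  # stub value; needs an expression model
        ))
    return features
```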
In step 103, binding the face features of the tele scene image with the wide scene image;
after the face recognition of the acquired tele scene image is completed, the face features recognized from the tele scene image are bound with the same wide scene image at the acquisition time.
For example, referring to fig. 4, a tele scene image and a wide-angle scene image at an acquisition time t1 are acquired; face features are then recognized from the tele scene image at the acquisition time t1, and the recognized face features are bound with the wide-angle scene image at the acquisition time t1 to serve as the face features of that wide-angle scene image.
For another example, referring to fig. 5, 4 image groups with acquisition times of t1, t2, t3, and t4 are acquired. Face feature 1 is recognized from the tele scene image at acquisition time t1 and bound to the wide-angle scene image at acquisition time t1; face feature 2 is recognized from the tele scene image at acquisition time t2 and bound to the wide-angle scene image at acquisition time t2; face feature 3 is recognized from the tele scene image at acquisition time t3 and bound to the wide-angle scene image at acquisition time t3; and face feature 4 is recognized from the tele scene image at acquisition time t4 and bound to the wide-angle scene image at acquisition time t4.
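The binding step itself is simple bookkeeping: the features recognized from each tele frame are attached to the wide-angle frame sharing its acquisition time. A minimal sketch, reusing the names assumed in the sketches above:

```python
def bind_features(image_groups):
    """For each (timestamp, tele, wide) group, attach the tele frame's
    face features to the wide-angle frame of the same acquisition time."""
    bound = []
    for t, tele_frame, wide_frame in image_groups:
        features = recognize_faces(tele_frame)  # from the sketch above
        bound.append({"time": t, "wide": wide_frame, "features": features})
    return bound
```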
In step 104, the wide-angle scene image is subjected to preset processing according to the face features bound to the wide-angle scene image.
In the embodiment of the application, after the binding of the face features is completed, the wide-angle scene image is further subjected to preset processing according to the face features bound to it. Specifically, in an optional implementation manner, when a plurality of wide-angle scene images are acquired, performing the preset processing on the wide-angle scene image according to the face features bound to the wide-angle scene image includes:
determining the number of matched face images which are contained in each wide-angle scene image and meet preset conditions according to the face characteristics bound by each wide-angle scene image;
taking the wide-angle scene image with the largest number of matched face images as an image to be processed;
and performing preset processing on the image to be processed according to the face features bound by the image to be processed.
The preset condition can be set according to actual shooting requirements. For example, for a multi-person group photo, if frightened expressions are to be captured, the preset condition can be configured as "the expression type is frightened"; for another example, if an expression with eyes open and smiling is to be captured, the preset condition may be configured as "the expression type is smiling and the eyes are open".
For example, referring to fig. 6, a total of 4 wide-angle scene images are acquired: a wide-angle scene image A, a wide-angle scene image B, a wide-angle scene image C, and a wide-angle scene image D. The scene to be shot corresponding to the 4 wide-angle scene images includes 3 different persons, and the preset condition is configured as "the expression type is laughing". According to the face features bound to the 4 wide-angle scene images, it is determined that the wide-angle scene image A includes one matching face image (person C's), the wide-angle scene image B includes no matching face image, the wide-angle scene image C includes two matching face images (person C's and person D's), and the wide-angle scene image D includes one matching face image (person E's). The wide-angle scene image C therefore contains the largest number of matching face images and is taken as the image to be processed.
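Selecting the image to be processed then reduces to counting, per wide-angle image, the bound face features that satisfy the preset condition and keeping the image with the highest count. A sketch under the same assumed data layout:

```python
def meets_condition(feature, expression="laughing"):
    # Preset condition from the running example: "the expression type is laughing".
    return feature.expression == expression

def pick_image_to_process(bound_images):
    """Return the bound wide-angle image containing the most matching face images."""
    return max(
        bound_images,
        key=lambda rec: sum(meets_condition(f) for f in rec["features"]),
    )
```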
After the image to be processed is determined, the determined image to be processed may be subjected to a preset process according to the facial features bound to the image to be processed, and specifically, the preset process may be performed on the image to be processed according to the facial features bound to the image to be processed, including:
determining a face image to be processed which does not meet preset conditions in the image to be processed according to the face features bound by the image to be processed;
acquiring a target face image corresponding to the face image to be processed from other wide-angle scene images except the image to be processed, wherein the target face image meets preset conditions and belongs to the same person as the face image to be processed;
and replacing the face image to be processed with the target face image.
After the image to be processed is determined from the acquired wide-angle scene images, the face image to be processed, which does not meet the preset condition, is further determined from the image to be processed.
For example, referring to fig. 7, the determined image to be processed includes face images of person C, person D, and person E. As shown in fig. 7, according to the face features bound to the image to be processed, it is determined that the expression type of person C is "laughing", the expression type of person D is also "laughing", and the expression type of person E is "frustrated". If the preset condition is "the expression type is laughing", person C and person D meet the preset condition while person E does not, so person E's face image is determined as the face image to be processed.
After the face image to be processed is determined in the image to be processed, a target face image corresponding to it is further acquired from the other wide-angle scene images except the image to be processed, where the target face image and the face image to be processed belong to the same person and the target face image meets the preset condition. For example, referring to fig. 8 and fig. 6, among the four wide-angle scene images shown in fig. 6, the wide-angle scene image C is determined as the image to be processed, in which the expression type of person E's face image is "frustrated", which does not meet the preset condition "the expression type is laughing"; person E's face image is therefore determined as the face image to be processed. Then, the other wide-angle scene images except the image to be processed (here, the wide-angle scene images A, B, and D) are searched for a face image of person E whose expression type is "laughing". Obviously, as shown in fig. 6, such a face image of person E exists in the wide-angle scene image D; at this time, person E's face image in the wide-angle scene image D is determined as the target face image and is extracted from the wide-angle scene image D for subsequent processing.
After the target face image corresponding to the face image to be processed is obtained, the face image to be processed can be replaced with the target face image. For example, referring to fig. 9 and fig. 6 in combination, after the four wide-angle scene images shown in fig. 6 (i.e., the wide-angle scene images A, B, C, and D) are acquired, the wide-angle scene image C is determined as the image to be processed, and person E's face image in the wide-angle scene image C is further determined as the face image to be processed; then, person E's face image in the wide-angle scene image D is determined as the target face image; then, person E's face image in the wide-angle scene image C is replaced with person E's face image from the wide-angle scene image D to obtain the wide-angle scene image C after the replacement processing. As shown in fig. 9, in the replaced wide-angle scene image C, the expression types of all the face images are "laughing", and all meet the preset condition "the expression type is laughing".
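A sketch of the replacement operation itself. The patent does not prescribe how the pixels are merged; this illustration uses OpenCV's Poisson blending (seamlessClone) to avoid visible seams, and assumes the target face has already been aligned to the region it replaces.

```python
import numpy as np
import cv2

def replace_face(image_to_process, target_face_image, bbox):
    """Blend `target_face_image` over the face at `bbox` in `image_to_process`.

    bbox: (x, y, w, h) of the face image to be processed; the target face
    is assumed to be pre-aligned, then resized here to (w, h).
    """
    x, y, w, h = bbox
    patch = cv2.resize(target_face_image, (w, h))
    mask = np.full(patch.shape[:2], 255, dtype=np.uint8)  # blend the whole patch
    center = (x + w // 2, y + h // 2)
    return cv2.seamlessClone(patch, image_to_process, mask, center, cv2.NORMAL_CLONE)
```

In practice the two face regions would also need geometric alignment (e.g. by facial landmarks) before blending.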
In the embodiment of the application, after the image replacement is completed, the replaced image to be processed can be used as a result image of the image shooting request.
Optionally, in an embodiment, after replacing the face image to be processed with the target face image, the method further includes:
and performing noise reduction processing on the replaced image to be processed according to other wide-angle scene images except the image to be processed.
The noise reduction processing can be performed on the image to be processed in a multi-frame noise reduction mode. For example, 4 wide-angle scene images, which are respectively a wide-angle scene image a, a wide-angle scene image B, a wide-angle scene image C, and a wide-angle scene image D, are obtained in total, where the wide-angle scene image D is determined as an image to be processed, and then multi-frame noise reduction may be performed on the wide-angle scene image D according to the wide-angle scene image a, the wide-angle scene image B, and the wide-angle scene image C.
Specifically, when performing multi-frame noise reduction, the wide-angle scene image A, the wide-angle scene image D, the wide-angle scene image B, and the wide-angle scene image C may be aligned first, and the pixel values of each group of aligned pixels obtained. If the pixel values of the same group of aligned pixels do not differ significantly, the average of their pixel values can be calculated and used to replace the pixel value of the corresponding pixel of the wide-angle scene image D. If the pixel values of the aligned pixels in the same group differ greatly, the pixel value in the wide-angle scene image D may be left unadjusted.
For example, the pixel P1 in the wide-angle scene image A, the pixel P2 in the wide-angle scene image D, the pixel P3 in the wide-angle scene image B, and the pixel P4 in the wide-angle scene image C are a group of mutually aligned pixels. If the pixel value of P1 is 101, the pixel value of P2 is 102, the pixel value of P3 is 103, and the pixel value of P4 is 104, then the average of the pixel values of this group of aligned pixels is 102.5, and the terminal may adjust the pixel value of the P2 pixel in the wide-angle scene image D from 102 to 102.5, thereby performing noise reduction on the P2 pixel. If the pixel value of P1 is 80, the pixel value of P2 is 102, the pixel value of P3 is 83, and the pixel value of P4 is 90, then the pixel value of P2 may be left unadjusted, i.e., it remains 102, because the pixel values differ significantly.
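The averaging rule described above vectorizes directly. A NumPy sketch, assuming the frames are already aligned and taking the agreement threshold as a free parameter:

```python
import numpy as np

def multi_frame_denoise(reference, others, threshold=10):
    """Replace each pixel of `reference` with the mean over all aligned frames,
    but only where the frames agree to within `threshold` gray levels."""
    stack = np.stack([reference] + list(others)).astype(np.float32)
    spread = stack.max(axis=0) - stack.min(axis=0)  # per-pixel disagreement
    mean = stack.mean(axis=0)
    # Where frames agree (small spread), use the mean; otherwise keep the original.
    return np.where(spread <= threshold, mean, reference.astype(np.float32))
```

With the example values above, the group 101/102/103/104 has a spread of 3 and is replaced by its mean 102.5, while the group 80/102/83/90 has a spread of 22 and P2 keeps its value of 102.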
Optionally, in an embodiment, after acquiring the target face image corresponding to the to-be-processed face image from the other wide-angle scene images except the to-be-processed image, the method further includes:
when the target face image is failed to be acquired from other wide-angle scene images, judging whether a local preset face image library stores the target face image or not;
when the target face image is stored in the face image library, the target face image stored in the face image library is extracted.
An alternative approach to acquiring a target face image is provided herein, taking into account the failure to acquire a target face image from other wide-angle scene images.
Specifically, a face image library may be created locally in the electronic device in advance, where the face image library is used to store the face images acquired by the electronic device. For example, each time the electronic device shoots an external scene and obtains the corresponding wide-angle scene image, it performs face recognition on that wide-angle scene image and stores the face images recognized from it in the face image library.
Correspondingly, when acquisition of the target face image from the other wide-angle scene images fails, the local preset face image library can be searched for whether a target face image corresponding to the face image to be processed is stored. The target face image and the face image to be processed belong to the same person, and the target face image meets the preset condition.
If the target face image corresponding to the face image to be processed is found in the face image library, it is extracted from the face image library. For example, referring to fig. 10 and fig. 6 in combination, for the face image to be processed (i.e., person E's face image) determined from the image to be processed shown in fig. 6, a corresponding target face image is extracted from the face image library (this target face image was recognized from an image taken previously and stored in the face image library). As shown in fig. 10, the extracted target face image and the face image to be processed belong to the same person, person E; the expression type of the target face image is "laughing", which meets the configured preset condition "the expression type is laughing".
Specifically, the extracting the target face image stored in the face image library includes:
when a plurality of target face images are stored in a face image library, acquiring the storage time of each target face image;
and extracting a target face image which is stored in the face image library and has the closest time to the current time.
For example, 3 target face images corresponding to the face image to be processed are found in the face image library: target face image 1, target face image 2, and target face image 3, with storage times t1, t2, and t3 respectively. If the storage time t1 is 1 day from the current time, the storage time t2 is 2 days from the current time, and the storage time t3 is 7 days from the current time, then the storage time of target face image 1 is obviously the closest to the current time, and target face image 1 is extracted from the face image library for subsequent processing.
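Picking the most recently stored candidate is a one-line selection once storage times are kept alongside the images; a sketch assuming each library entry is a (storage_time, face_image) pair:

```python
def most_recent_target_face(candidates):
    """candidates: iterable of (storage_time, face_image) pairs for one person.
    Returns the face image whose storage time is closest to the current time."""
    storage_time, face_image = max(candidates, key=lambda entry: entry[0])
    return face_image
```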
Specifically, because the target face image is extracted from the face image library in this case, it may have been acquired in a different shooting scene, and different shooting scenes have different illumination. Therefore, to improve the image effect after processing, replacing the face image to be processed with the target face image includes:
acquiring illumination information of a face image to be processed;
migrating the acquired illumination information to a target face image;
and replacing the face image to be processed with the migrated target face image.
When migrating the illumination information, a suitable illumination migration algorithm can be selected according to actual needs to migrate the illumination information of the face image to be processed to the target face image. The optional illumination migration algorithms include but are not limited to: quotient-image-based illumination migration, 3D-model-based illumination migration, filter-decomposition-based illumination migration, intrinsic-decomposition-based illumination migration, and the like.
After the migration of the illumination information is completed, replacing the determined face image to be processed with the migrated target face image, so that all face images in the replaced image to be processed meet preset conditions and the illumination information is consistent.
For example, referring to fig. 11, after the illumination information of a certain determined face image to be processed is transferred to the corresponding target face image, the target face image and the face image to be processed obtain the same illumination effect, and then the determined face image to be processed is replaced with the transferred target face image, so that all face images in the replaced image to be processed conform to the preset condition and the illumination information is consistent.
It should be noted that, as to what kind of illumination migration algorithm is adopted, the embodiment of the present application is not specifically limited, and a person skilled in the art may select the illumination migration algorithm according to actual needs, and the illumination migration algorithm may be the illumination migration algorithm listed in the embodiment of the present application, or may be the illumination migration algorithm not listed in the embodiment of the present application.
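Since no particular algorithm is mandated, the following sketch stands in for the listed methods with a much simpler technique: Reinhard-style statistics matching on the lightness channel in Lab space, shifting and scaling the target face's lightness to match the face image to be processed. This substitution is an illustrative assumption, not the quotient-image or intrinsic-decomposition methods named above.

```python
import cv2
import numpy as np

def migrate_illumination(face_to_process, target_face):
    """Make `target_face` match the lightness statistics of `face_to_process`."""
    src = cv2.cvtColor(face_to_process, cv2.COLOR_BGR2LAB).astype(np.float32)
    dst = cv2.cvtColor(target_face, cv2.COLOR_BGR2LAB).astype(np.float32)
    l_src, l_dst = src[..., 0], dst[..., 0]
    # Match the mean and spread of the lightness channel only.
    l_new = (l_dst - l_dst.mean()) / (l_dst.std() + 1e-6) * l_src.std() + l_src.mean()
    dst[..., 0] = np.clip(l_new, 0, 255)
    return cv2.cvtColor(dst.astype(np.uint8), cv2.COLOR_LAB2BGR)
```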
Optionally, in an embodiment, after determining whether the target face image is stored in the local preset face image library, the method further includes:
when the target face image corresponding to the face image to be processed is not stored in the face image library, acquiring the person information corresponding to the face image to be processed;
sending an image acquisition request to the target electronic device corresponding to the acquired person information, wherein the image acquisition request is used for instructing the target electronic device to search for and return the target face image corresponding to the face image to be processed;
and receiving a target face image returned by the target electronic equipment.
Considering the situation where the target face image is not obtained from the local face image library, that is, when the target face image corresponding to the face image to be processed is not stored in the local face image library, another scheme for obtaining the target face image is provided here.
Specifically, when the search of the face image library is completed and no target face image corresponding to the face image to be processed is found, that is, when the face image library does not store a target face image corresponding to the face image to be processed, the person information corresponding to the face image to be processed is obtained, that is, it is determined "whose" face the face image to be processed is.
After the person information corresponding to the face image to be processed is obtained, the target electronic device corresponding to the obtained person information is determined according to a locally pre-stored association between person information and electronic devices (the association describes which user each electronic device belongs to). For example, referring to fig. 6, for the image to be processed shown in fig. 6, person E's face image is determined as the face image to be processed; at this time, the person information of person E's face image is obtained as "person E", and person E's mobile phone is further determined as the target electronic device.
After the target electronic device is determined, an image acquisition request is generated according to a pre-agreed message format and sent to the determined target electronic device. The image acquisition request instructs the target electronic device to search locally for a target face image corresponding to the face image to be processed and, if one is found, return it. Specifically, the target electronic device also creates a face image library locally in advance; when it receives an image acquisition request, it searches its local face image library as instructed and, if the target face image corresponding to the face image to be processed is found, returns it.
Correspondingly, after the image acquisition request is sent to the target electronic equipment, the target face image returned by the target electronic equipment is received.
For example, referring to fig. 12, on one hand, after acquiring the image to be processed (a group photo of three persons C, D, and E), the electronic device determines that person E's face image in the image to be processed is the face image to be processed, searches the local face image library without finding a corresponding target face image, determines person E's mobile phone as the target electronic device, and sends an image acquisition request to it; on the other hand, after receiving the image acquisition request from the electronic device, the target electronic device searches its local face image library, finds the target face image corresponding to the face image to be processed, and returns the found target face image to the electronic device.
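The patent only requires a "pre-agreed message format" for the image acquisition request. Purely as an illustration, the sketch below encodes the request as JSON over HTTP; the endpoint path, field names, transport, and base64 response encoding are all assumptions.

```python
import base64
import json
import urllib.request

def request_target_face(target_device_url, person_id, condition):
    """Ask the target electronic device for a face image of `person_id`
    that satisfies `condition` (e.g. {"expression": "laughing"})."""
    payload = json.dumps({"person_id": person_id, "condition": condition}).encode()
    req = urllib.request.Request(
        target_device_url + "/face-image",  # hypothetical endpoint
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # The responding device is assumed to return the image base64-encoded,
    # or an empty body when no matching face image is stored.
    return base64.b64decode(body["face_image"]) if "face_image" in body else None
```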
Specifically, in an optional implementation manner, when one wide-angle scene image is acquired, performing the preset processing on the wide-angle scene image according to the face features bound to the wide-angle scene image includes:
determining a face image to be processed which does not meet preset conditions in the wide-angle scene image according to the face characteristics bound by the wide-angle scene image;
acquiring a target face image corresponding to a face image to be processed from a local preset face image library;
and replacing the determined face image to be processed with the acquired target face image.
For example, the obtained wide-angle scene image includes face images of person C, person D, and person E. According to the face features bound to the wide-angle scene image, it is determined that the expression type of person C is "laughing", the expression type of person D is also "laughing", and the expression type of person E is "frustrated". If the preset condition is "the expression type is laughing", person C and person D meet the preset condition while person E does not, so person E's face image is determined as the face image to be processed.
After the face image to be processed is determined from the wide-angle scene image, whether a target face image corresponding to the face image to be processed is stored or not can be searched in a local preset face image library, wherein the target face image and the face image to be processed belong to the same person, and the target face image meets a preset condition.
After the target face image corresponding to the face image to be processed is obtained, the face image to be processed can be replaced with the target face image. Since the target face image is extracted from the face image library in this case, it may have been acquired in a different shooting scene with different illumination. Therefore, when replacing the determined face image to be processed with the acquired target face image, the illumination information of the face image to be processed is first migrated to the target face image, and the face image to be processed is then replaced with the migrated target face image, so that all face images in the replaced wide-angle scene image meet the preset condition and the illumination information is consistent.
Therefore, when the technical scheme provided by the embodiment of the application is applied to long-distance portrait shooting or multi-person group shooting, the face features of people in the scene to be shot can be extracted from the tele scene image, which has a small viewing range but in which faces occupy a large proportion of the frame, and the wide-angle scene image, which has a large viewing range but in which faces occupy a small proportion, can then be processed using the extracted face features; this alleviates the difficulty of extracting face features and improves image processing accuracy.
The image processing method of the present application will be further described below on the basis of the methods described in the above embodiments. Referring to fig. 13, the image processing method may include:
in step 201, a plurality of image groups of the same scene are acquired, wherein each image group comprises a tele scene image and a wide scene image with the same acquisition time.
Referring to fig. 2, in the embodiment of the present application, the electronic device includes a telephoto camera and a wide-angle camera, and in an arrangement manner shown in fig. 2, the telephoto camera and the wide-angle camera are horizontally arranged in parallel on a same plane of the electronic device, and a certain distance is provided between the telephoto camera and the wide-angle camera. In addition, the electronic equipment is provided with a tele-image buffer queue and a wide-image buffer queue, wherein the tele-image buffer queue is used for buffering images collected by the tele camera, and the wide-image buffer queue is used for buffering images collected by the wide camera. In practical implementation, the arrangement of the telephoto camera and the wide-angle camera may be set by those skilled in the art according to actual needs.
In the embodiment of the application, after the electronic equipment starts the photographing application, the tele camera and the wide camera are synchronously started, the tele camera and the wide camera are controlled to synchronously acquire images of a scene to be photographed according to the same frame rate, and the tele scene image and the wide scene image of the scene to be photographed are respectively obtained. The telephoto camera stores the acquired telephoto scene image into the telephoto image cache queue, and the wide-angle camera stores the acquired wide-angle scene image into the wide-angle image cache queue.
For example, referring to fig. 3, the scene to be shot shown in fig. 3 includes three different persons: person C, person D, and person E. At a certain acquisition time, the electronic device acquires a tele scene image of the scene to be shot through the tele camera and a wide-angle scene image of the scene to be shot through the wide-angle camera. Obviously, the people in the tele scene image are larger than the people in the wide-angle scene image and are easier to recognize.
Specifically, when receiving a triggered image shooting request, the electronic device first acquires frame rate information of the tele camera/the wide camera, where the frame rate information is used to describe the number of scene images acquired by the tele camera/the wide camera in a unit time, for example, when the acquired frame rate information is 30fps, it indicates that the tele camera acquires 30 tele scene images of a scene to be shot per second, and the wide camera acquires 30 wide scene images of the scene to be shot per second; for another example, when the obtained frame rate information is 15fps, it is described that the telephoto camera acquires 15 telephoto scene images of the scene to be photographed per second, and the wide-angle camera acquires 15 wide-angle scene images of the scene to be photographed per second.
When the ambient light brightness is high (or the electronic device is in a bright environment), the tele camera/the wide camera can complete exposure within a short time (for example, 30ms), so that the tele camera/the wide camera can acquire a scene image at a high frame rate; when the ambient light level is low (or the electronic device is in a dark environment), it takes a long time (e.g., 40ms-60ms or more) for the tele/wide camera to complete the exposure, so that it can only capture the scene image at a low frame rate.
Then, the target number corresponding to the frame rate information is determined according to the frame rate information of the tele camera/wide-angle camera, where the target number may be positively correlated with the frame rate. For example, when the frame rate information of the tele camera/wide-angle camera is 30fps, the target number may be determined to be 8, and when the frame rate information is 15fps, the target number may be determined to be 6.
After the target number is determined, the tele scene images and the wide scene images with the target number are extracted from the tele image buffer queue and the wide image buffer queue, wherein the tele scene images and the wide scene images with the same acquisition time (or synchronously acquired) form an image group, and the image groups with the target number are obtained together.
For example, when the number of targets is determined to be "4", 4 tele scene images at acquisition times t1, t2, t3, and t4 are extracted from the tele image buffer queue, and 4 wide scene images at acquisition times t1, t2, t3, and t4 are extracted from the wide image buffer queue, so that 4 image groups are obtained, corresponding to the acquisition times t1, t2, t3, and t4, respectively.
In step 202, the face features of the tele scene images in each image group are acquired, and the acquired face features of each tele scene image are bound with the wide scene images in the group.
The facial features are used to describe the facial image included in the tele scene image, including but not limited to the face size, the eye opening degree, and the expression type (such as anger, disgust, fear, happy, sad, and frightened), etc.
It should be noted that, the embodiment of the present application is not particularly limited to what face recognition technology is used to perform face recognition on a tele-scene image, and a person skilled in the art may select a suitable face recognition technology according to actual needs.
For example, referring to fig. 5, 4 image groups with acquisition times of t1, t2, t3, and t4 are acquired. Face feature 1 is recognized from the tele scene image at acquisition time t1 and bound to the wide-angle scene image at acquisition time t1; face feature 2 is recognized from the tele scene image at acquisition time t2 and bound to the wide-angle scene image at acquisition time t2; face feature 3 is recognized from the tele scene image at acquisition time t3 and bound to the wide-angle scene image at acquisition time t3; and face feature 4 is recognized from the tele scene image at acquisition time t4 and bound to the wide-angle scene image at acquisition time t4.
In step 203, the number of the matched face images which are included in each wide-angle scene image and meet the preset conditions is determined according to the face features bound to each wide-angle scene image.
In step 204, the wide-angle scene image containing the largest number of matching face images is taken as the image to be processed.
The preset condition can be set according to actual shooting requirements. For example, for a multi-person group photo, if frightened expressions are to be captured, the preset condition can be configured as "the expression type is frightened"; for another example, if an expression with eyes open and smiling is to be captured, the preset condition may be configured as "the expression type is smiling and the eyes are open".
For example, referring to fig. 6, a total of 4 wide-angle scene images are acquired: a wide-angle scene image A, a wide-angle scene image B, a wide-angle scene image C, and a wide-angle scene image D. The scene to be shot corresponding to the 4 wide-angle scene images includes 3 different persons, and the preset condition is configured as "the expression type is laughing". According to the face features bound to the 4 wide-angle scene images, it is determined that the wide-angle scene image A includes one matching face image (person C's), the wide-angle scene image B includes no matching face image, the wide-angle scene image C includes two matching face images (person C's and person D's), and the wide-angle scene image D includes one matching face image (person E's). The wide-angle scene image C therefore contains the largest number of matching face images and is taken as the image to be processed.
In step 205, a face image to be processed, which does not meet the preset condition, is determined in the image to be processed.
After the image to be processed is determined from the acquired wide-angle scene images, a face image to be processed, namely a face image that does not meet the preset condition, is further determined from the image to be processed.
For example, referring to fig. 7, the determined image to be processed includes the face images of person C, person D, and person E. According to the face features bound to the image to be processed, it is determined that the expression type of person C is "laugh", the expression type of person D is also "laugh", and the expression type of person E is "frustrated". If the preset condition is "the expression type is laugh", person C and person D meet the preset condition while person E does not, so the face image of person E is determined as the face image to be processed.
In step 206, a target face image corresponding to the face image to be processed is acquired from the wide-angle scene images other than the image to be processed, wherein the target face image meets the preset condition and belongs to the same person as the face image to be processed.
After the face image to be processed is determined in the image to be processed, a target face image corresponding to it is acquired from the wide-angle scene images other than the image to be processed, wherein the target face image belongs to the same person as the face image to be processed and meets the preset condition. For example, referring to fig. 8 in combination with fig. 6, among the four wide-angle scene images shown in fig. 6, wide-angle scene image C is determined as the image to be processed, and the face image of person E therein, whose expression type is "frustrated" and thus does not meet the preset condition "the expression type is laugh", is determined as the face image to be processed. Then, the other wide-angle scene images (scene image A, scene image B, and scene image D) are searched for a face image of person E whose expression type is "laugh". As shown in fig. 6, such a face image of person E exists in wide-angle scene image D; the face image of person E in wide-angle scene image D is therefore determined as the target face image, and is extracted from wide-angle scene image D for subsequent processing.
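The search in step 206 might be sketched as follows, under the assumption (not stated in the embodiment) that each recognized face feature carries a "person_id" identifying the photographed person:

```python
def find_target_face(person_id, image_to_process, bound_images,
                     meets_condition):
    """Step 206: search the wide-angle images other than the image to be
    processed for a face of the same person that meets the preset
    condition; returns the source image and the matching face record."""
    for b in bound_images:
        if b is image_to_process:
            continue                    # skip the image to be processed
        for face in b["features"]:
            if face["person_id"] == person_id and meets_condition(face):
                return b["wide"], face
    return None                         # fall back to the face image library
```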
In step 207, the face image to be processed is replaced with the target face image.
After the target face image corresponding to the face image to be processed is acquired, the face image to be processed can be replaced with the target face image. For example, referring to fig. 9 in combination with fig. 6, after the four wide-angle scene images shown in fig. 6 (wide-angle scene image A, wide-angle scene image B, wide-angle scene image C, and wide-angle scene image D) are acquired, wide-angle scene image C is determined as the image to be processed, and the face image of person E in wide-angle scene image C is determined as the face image to be processed; then, the face image of person E in wide-angle scene image D is determined as the target face image; finally, the face image of person E in wide-angle scene image C is replaced with the face image of person E in wide-angle scene image D, so as to obtain the replaced wide-angle scene image C. As shown in fig. 9, in the replaced wide-angle scene image C, the expression types of all the face images are "laugh", which meets the preset condition "the expression type is laugh".
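One possible realisation of the replacement in step 207 uses OpenCV's Poisson cloning to blend the pasted region; the bounding boxes and the choice of cv2.seamlessClone are assumptions of this sketch, not prescribed by the embodiment:

```python
import cv2
import numpy as np

def replace_face(image_to_process, source_image, bbox_target, bbox_to_replace):
    """Swap the face region and blend the seam with Poisson cloning.
    Bounding boxes are (x, y, w, h); in the fig. 9 example, the target
    face comes from image D and is pasted into image C."""
    x, y, w, h = bbox_target
    patch = source_image[y:y + h, x:x + w]
    dx, dy, dw, dh = bbox_to_replace
    patch = cv2.resize(patch, (dw, dh))              # fit the destination box
    mask = np.full(patch.shape[:2], 255, np.uint8)   # clone the whole patch
    center = (dx + dw // 2, dy + dh // 2)
    return cv2.seamlessClone(patch, image_to_process, mask,
                             center, cv2.NORMAL_CLONE)
```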
In the embodiment of the application, after the image replacement is completed, the replaced image to be processed can be used as a result image of the image shooting request.
In an embodiment, an image processing apparatus 400 is further provided. Referring to fig. 14, fig. 14 is a schematic structural diagram of the image processing apparatus 400 according to an embodiment of the present disclosure. The image processing apparatus 400 is applied to an electronic device and includes an image obtaining module 401, an image recognition module 402, a feature binding module 403, and an image processing module 404, as follows:
an image obtaining module 401, configured to obtain a tele scene image and a wide scene image of the same scene, which are acquired synchronously;
an image recognition module 402, configured to perform face recognition on the acquired tele scene image to obtain a face feature of the tele scene image;
a feature binding module 403, configured to bind a face feature of the tele scene image with the wide scene image;
and the image processing module 404 is configured to perform preset processing on the wide-angle scene image according to the face feature bound to the wide-angle scene image.
In an embodiment, when there are a plurality of wide-angle scene images acquired by the image obtaining module 401, the image processing module 404 is specifically configured to:
determining, according to the face features bound to each wide-angle scene image, the number of matching face images contained in that wide-angle scene image that meet the preset condition;
taking the wide-angle scene image with the largest number of matching face images as an image to be processed;
and performing preset processing on the image to be processed according to the face features bound to the image to be processed.
In an embodiment, the image processing module 404 is further specifically configured to:
determining a face image to be processed which does not meet the preset condition in the image to be processed according to the face features bound to the image to be processed;
acquiring a target face image corresponding to the face image to be processed from other wide-angle scene images except the image to be processed, wherein the target face image meets preset conditions and belongs to the same person as the face image to be processed;
and replacing the face image to be processed with the target face image.
In an embodiment, the image processing apparatus 400 further comprises a noise reduction processing module configured to:
and performing noise reduction processing on the replaced image to be processed according to other wide-angle scene images except the image to be processed.
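A simple per-pixel form of this noise reduction, assuming aligned frames and an illustrative threshold (both assumptions of this sketch), might look like:

```python
import numpy as np

def temporal_denoise(replaced_image, other_images, threshold=10):
    """Per-pixel decision: where the pixel values at the same position
    in the other wide-angle images stay close to the replaced image
    (static content), average them to suppress noise; where they differ
    strongly (motion, or the replaced face region), keep the original."""
    stack = np.stack([replaced_image] + list(other_images)).astype(np.float32)
    deviation = np.abs(stack - stack[0]).max(axis=0)
    averaged = stack.mean(axis=0)
    out = np.where(deviation < threshold, averaged, stack[0])
    return out.astype(np.uint8)
```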
In an embodiment, the image processing module 404 is further configured to:
when acquisition of the target face image from the other wide-angle scene images except the image to be processed fails, judging whether the target face image is stored in a locally preset face image library;
and when the target face image is stored in the face image library, extracting the target face image stored in the face image library.
In an embodiment, the image processing module 404 is specifically configured to:
acquiring illumination information of a face image to be processed;
migrating the acquired illumination information to a target face image;
and replacing the face image to be processed with the migrated target face image.
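A sketch of one way to migrate illumination, matching the mean and standard deviation of the lightness channel in Lab space (the colour space and the statistics used are assumptions of this sketch, not prescribed by the embodiment):

```python
import cv2
import numpy as np

def migrate_illumination(face_to_process, target_face):
    """Transfer the illumination of the face image to be processed onto
    the target face image by matching the mean/std of the L channel in
    Lab space, so the pasted face matches the scene's lighting."""
    src_lab = cv2.cvtColor(face_to_process, cv2.COLOR_BGR2LAB).astype(np.float32)
    dst_lab = cv2.cvtColor(target_face, cv2.COLOR_BGR2LAB).astype(np.float32)
    l_src, l_dst = src_lab[..., 0], dst_lab[..., 0]
    l_new = ((l_dst - l_dst.mean()) / (l_dst.std() + 1e-6)
             * l_src.std() + l_src.mean())
    dst_lab[..., 0] = np.clip(l_new, 0, 255)
    return cv2.cvtColor(dst_lab.astype(np.uint8), cv2.COLOR_LAB2BGR)
```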
In an embodiment, the image processing module 404 is further configured to:
when the target face image is not stored in the face image library, acquiring person information corresponding to the face image to be processed;
sending an image acquisition request to the target electronic device corresponding to the person information, wherein the image acquisition request is used for instructing the target electronic device to search for and return the target face image;
and receiving the target face image returned by the target electronic device.
In specific implementation, the above modules may be implemented as independent entities, or combined arbitrarily and implemented as one or several entities; for the specific implementation of each module, reference may be made to the foregoing method embodiments, which are not repeated herein.
As can be seen from the above, the image processing apparatus of this embodiment can obtain the synchronously acquired tele scene image and wide-angle scene image of the same scene through the image obtaining module 401; perform face recognition on the acquired tele scene image through the image recognition module 402 to obtain the face features of the tele scene image; bind the face features of the tele scene image with the wide-angle scene image through the feature binding module 403; and perform preset processing on the wide-angle scene image through the image processing module 404 according to the face features bound to the wide-angle scene image. When the technical scheme provided by the embodiment of the application is applied to long-distance portrait shooting or multi-person group shooting, the face features of the persons in the scene to be shot can be extracted from the tele scene image, which has a small viewing range but a large portrait proportion, and the extracted face features are then bound to the wide-angle scene image, which has a large viewing range but a small portrait proportion, for processing the wide-angle scene image. This alleviates the difficulty of extracting face features and improves the accuracy of image processing.
An embodiment of the present application also provides an electronic device. Referring to fig. 15, the electronic device 500 includes a central processing unit 501 and a memory 502. The central processing unit 501 is electrically connected to the memory 502.
The central processing unit 501 is the control center of the electronic device 500: it connects the various parts of the whole electronic device through various interfaces and lines, and executes the various functions of the electronic device 500 and processes data by running or loading the computer program stored in the memory 502 and calling the data stored in the memory 502, thereby implementing the image processing of the above embodiments.
The memory 502 may be used to store software programs and modules, and the central processing unit 501 executes various functional applications and performs data processing by running the computer programs and modules stored in the memory 502. The memory 502 may mainly include a program storage area and a data storage area: the program storage area may store an operating system, a computer program required by at least one function (such as a sound playing function or an image playing function), and the like; the data storage area may store data created according to the use of the electronic device, and the like. Further, the memory 502 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device. Accordingly, the memory 502 may also include a memory controller to provide the central processing unit 501 with access to the memory 502.
In the embodiment of the present application, the central processing unit 501 in the electronic device 500 executes the image processing method in any one of the above embodiments by running the computer program stored in the memory 502, such as: acquiring a synchronously acquired tele scene image and wide-angle scene image of the same scene; performing face recognition on the acquired tele scene image to obtain the face features of the tele scene image; binding the acquired face features of the tele scene image with the wide-angle scene image; and performing preset processing on the wide-angle scene image according to the face features bound to the wide-angle scene image.
Referring to fig. 16, in some embodiments, the electronic device 500 may further include: a display 503, radio frequency circuitry 504, audio circuitry 505, power supply 506, image processing circuitry 507, and a graphics processor 508. The display 503, the rf circuit 504, the audio circuit 505, and the power source 506 are electrically connected to the central processing unit 501.
The display 503 may be used to display information entered by or provided to the user, as well as various graphical user interfaces, which may be made up of graphics, text, icons, video, and any combination thereof. The display 503 may include a display panel; in some embodiments, the display panel may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, or the like.
The radio frequency circuit 504 may be used for transceiving radio frequency signals, so as to establish wireless communication with a network device or other electronic devices and transceive signals with the network device or the other electronic devices.
The audio circuit 505 may be used to provide an audio interface between the user and the electronic device through a speaker and a microphone.
The power supply 506 may be used to power the various components of the electronic device 500. In some embodiments, the power supply 506 may be logically connected to the central processing unit 501 through a power management system, so as to manage charging, discharging, and power consumption through the power management system.
The image processing circuit 507 may be implemented by hardware and/or software components, and may include various processing units defining an ISP (Image Signal Processing) pipeline. As shown in fig. 17, in one embodiment, the image processing circuit 507 includes an ISP processor 5071 and control logic 5072. The image data captured by the camera 5073 is first processed by the ISP processor 5071, which analyzes the image data to collect image statistics that may be used to determine one or more control parameters of the camera 5073. The camera 5073 may include one or more lenses 50731 and an image sensor 50732. The image sensor 50732 may include an array of color filters (e.g., Bayer filters); it may acquire the light intensity and wavelength information captured by each of its imaging pixels and provide a set of raw image data that can be processed by the ISP processor 5071. A sensor 5074 (e.g., a gyroscope) may provide parameters for the processing of the acquired image (e.g., anti-shake parameters) to the ISP processor 5071 based on the sensor 5074 interface type. The sensor 5074 interface may be an SMIA (Standard Mobile Imaging Architecture) interface, another serial or parallel camera interface, or a combination of the above.
Further, the image sensor 50732 may also send the raw image data to the sensor 5074; the sensor 5074 may provide the raw image data to the ISP processor 5071 based on the sensor 5074 interface type, or store the raw image data in the image memory 5075.
The ISP processor 5071 processes the raw image data pixel by pixel in a variety of formats. For example, each image pixel may have a bit depth of 8, 10, 12, or 14 bits, and the ISP processor 5071 may perform one or more image processing operations on the raw image data and collect statistics about the image data. The image processing operations may be performed with the same or different bit-depth precision.
The ISP processor 5071 may also receive image data from the image memory 5075. For example, the sensor 5074 interface sends raw image data to the image memory 5075, and the raw image data in the image memory 5075 is then provided to the ISP processor 5071 for processing. The image memory 5075 may be part of a memory device, a storage device, or a separate dedicated memory within the electronic device, and may include a DMA (Direct Memory Access) feature.
Upon receiving raw image data from the image sensor 50732 interface, the sensor 5074 interface, or the image memory 5075, the ISP processor 5071 may perform one or more image processing operations, such as temporal filtering. The processed image data may be sent to the image memory 5075 for additional processing before being displayed. The ISP processor 5071 receives the processed data from the image memory 5075 and performs image data processing on it in the raw domain and in the RGB and YCbCr color spaces. The image data processed by the ISP processor 5071 may be output to the display 503 for viewing by a user and/or further processed by a graphics engine or the graphics processor 508. Further, the output of the ISP processor 5071 may also be sent to the image memory 5075, and the display 503 may read image data from the image memory 5075. In one embodiment, the image memory 5075 may be configured to implement one or more frame buffers. Further, the output of the ISP processor 5071 may be sent to an encoder/decoder 5076 for encoding/decoding the image data. The encoded image data may be saved, and decompressed before being displayed on the display 503. The encoder/decoder 5076 may be implemented by a CPU, a GPU, or a coprocessor.
The statistics determined by the ISP processor 5071 may be sent to the control logic 5072. For example, the statistics may include image sensor 50732 statistics such as auto-exposure, auto-white balance, auto-focus, flicker detection, black level compensation, and lens 50731 shading correction. The control logic 5072 may include a processor and/or microcontroller that executes one or more routines (e.g., firmware) that determine the control parameters of the camera 5073 and of the ISP processor 5071 based on the received statistics. For example, the control parameters of the camera 5073 may include sensor 5074 control parameters (such as gain, integration time for exposure control, and anti-shake parameters), camera flash control parameters, lens 50731 control parameters (such as focal length for focusing or zooming), or a combination of these parameters. The ISP control parameters may include gain levels and color correction matrices for automatic white balance and color adjustment (e.g., during RGB processing), as well as lens 50731 shading correction parameters.
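The statistics feedback between the ISP processor 5071 and the control logic 5072 can be pictured as a simple per-frame loop (Python; isp_process and control_logic are hypothetical callables standing in for the hardware units, not actual APIs):

```python
def isp_feedback_loop(raw_frames, isp_process, control_logic):
    """Per-frame feedback: the ISP processes a raw frame and collects
    statistics (auto-exposure, auto-white balance, auto-focus, ...);
    the control logic turns the statistics into camera/ISP control
    parameters that are applied when processing the next frame."""
    params = None                      # no statistics yet for the first frame
    for raw in raw_frames:
        processed, stats = isp_process(raw, params)
        params = control_logic(stats)  # e.g. exposure gain, white balance
        yield processed
```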
The graphics processor 508 performs conversion and driving of the display data that the electronic device needs to display, and supplies a line scanning signal to the display 503 to control the display 503 to display correctly.
Further, the image processing circuit 507 is described in more detail on the basis of the above embodiment. Referring to fig. 18, the difference from the above embodiment is that the camera 5073 includes a tele camera 507301 and a wide-angle camera 507302; the tele camera 507301 includes a first lens 507311 and a first image sensor 507321, and the wide-angle camera 507302 includes a second lens 507312 and a second image sensor 507322.
The performance parameters (e.g., focal length, aperture size, resolution) of the tele camera 507301 and the wide-angle camera 507302 are not limited in the embodiment of the present application. The tele camera 507301 and the wide-angle camera 507302 may be disposed in the same plane of the electronic device, for example, both on the back or both on the front of the electronic device. The installation distance of the two cameras may be determined according to the size of the electronic device and/or the shooting effect; for example, to make the overlap between the image contents captured by the two cameras high, the tele camera 507301 and the wide-angle camera 507302 should be as close as possible, for example within 10 mm.
An embodiment of the present application further provides a storage medium storing a computer program which, when run on a computer, causes the computer to execute the image processing method in any one of the above embodiments, such as: acquiring a synchronously acquired tele scene image and wide-angle scene image of the same scene; performing face recognition on the acquired tele scene image to obtain the face features of the tele scene image; binding the acquired face features of the tele scene image with the wide-angle scene image; and performing preset processing on the wide-angle scene image according to the face features bound to the wide-angle scene image.
In the embodiment of the present application, the storage medium may be a magnetic disk, an optical disk, a Read Only Memory (ROM), a Random Access Memory (RAM), or the like.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
It should be noted that, for the image processing method of the embodiment of the present application, a person skilled in the art can understand that all or part of the process of implementing the image processing method can be completed by controlling the relevant hardware through a computer program. The computer program can be stored in a computer-readable storage medium, such as the memory of the electronic device, and executed by at least one central processing unit in the electronic device; the execution process can include, for example, the process of the embodiment of the image processing method. The storage medium may be a magnetic disk, an optical disk, a read-only memory, a random access memory, or the like.
In the image processing apparatus according to the embodiment of the present application, the functional modules may be integrated into one processing chip, each module may exist alone physically, or two or more modules may be integrated into one module. The integrated module may be implemented in the form of hardware or in the form of a software functional module. If the integrated module is implemented in the form of a software functional module and sold or used as a stand-alone product, it may also be stored in a computer-readable storage medium, such as a read-only memory, a magnetic disk, or an optical disk.
The foregoing detailed description has provided an image processing method, an image processing apparatus, a storage medium, and an electronic device according to embodiments of the present application. Specific examples are applied herein to explain the principles and implementations of the present application, and the descriptions of the foregoing embodiments are only used to help understand the method and the core ideas of the present application. Meanwhile, for those skilled in the art, there may be variations in the specific embodiments and the application scope according to the idea of the present application. In summary, the content of this specification should not be construed as limiting the present application.

Claims (7)

1. An image processing method, comprising:
acquiring frame rate information of a tele camera and a wide-angle camera, and determining a target number corresponding to the frame rate information;
acquiring the target number of image groups, wherein each image group comprises a synchronously acquired tele scene image and wide-angle scene image of the same scene;
carrying out face recognition on the tele scene image in each image group to obtain face features;
taking the face features of the tele scene image in each image group as the face features of the wide-angle scene image in the group;
determining the number of matched face images which are contained in each wide-angle scene image and meet preset conditions according to the face characteristics of each wide-angle scene image;
taking the wide-angle scene image with the largest number of matched face images as an image to be processed;
determining a to-be-processed face image which does not meet preset conditions in the to-be-processed image according to the face characteristics of the to-be-processed image;
acquiring a target face image corresponding to the face image to be processed from other wide-angle scene images except the image to be processed, wherein the target face image meets the preset condition and belongs to the same person as the face image to be processed;
replacing the facial image to be processed with the target facial image;
and determining, for each pixel in the replaced image to be processed, whether to perform noise reduction processing according to the pixel values of the pixels at the same positions in the other wide-angle scene images.
2. The image processing method according to claim 1, further comprising, after acquiring a target face image corresponding to the to-be-processed face image from another wide-angle scene image other than the to-be-processed image:
when acquisition of the target face image from the other wide-angle scene images fails, judging whether the target face image is stored in a locally preset face image library;
when the target face image is stored in the face image library, extracting the target face image stored in the face image library.
3. The image processing method according to claim 2, wherein replacing the face image to be processed with the target face image comprises:
acquiring illumination information of the face image to be processed;
migrating the acquired illumination information to the target face image;
and replacing the face image to be processed with the migrated target face image.
4. The image processing method according to claim 2, wherein after determining whether the target facial image is stored in a locally preset facial image library, the method comprises:
when the target face image is not stored in the face image library, acquiring person information corresponding to the face image to be processed;
sending an image acquisition request to a target electronic device corresponding to the person information, wherein the image acquisition request is used for instructing the target electronic device to search for and return the target face image;
and receiving the target face image returned by the target electronic equipment.
5. An image processing apparatus characterized by comprising:
the image acquisition module is used for acquiring frame rate information of the tele camera and the wide-angle camera, determining a target number corresponding to the frame rate information, and acquiring the target number of image groups, wherein each image group comprises a synchronously acquired tele scene image and wide-angle scene image of the same scene;
the image recognition module is used for carrying out face recognition on the tele scene image in each image group to obtain face features;
the characteristic binding module is used for taking the face characteristic of the tele scene image in each image group as the face characteristic of the wide scene image;
the image processing module is used for determining the number of the matched face images which are contained in each wide-angle scene image and meet the preset conditions according to the face characteristics of each wide-angle scene image; taking the wide-angle scene image with the largest number of matched face images as an image to be processed; determining a to-be-processed face image which does not meet preset conditions in the to-be-processed image according to the face characteristics of the to-be-processed image; acquiring a target face image corresponding to the face image to be processed from other wide-angle scene images except the image to be processed, wherein the target face image meets the preset condition and belongs to the same person as the face image to be processed; replacing the facial image to be processed with the target facial image;
and the noise reduction processing module is used for determining, for each pixel in the replaced image to be processed, whether to perform noise reduction processing according to the pixel values of the pixels at the same positions in the other wide-angle scene images.
6. A storage medium having stored thereon a computer program, characterized in that, when the computer program runs on a computer, it causes the computer to execute the image processing method according to any one of claims 1 to 4.
7. An electronic device comprising a central processing unit and a memory, said memory storing a computer program, wherein said central processing unit is adapted to execute the image processing method according to any one of claims 1 to 4 by calling said computer program.
CN201810276764.1A 2018-03-30 2018-03-30 Image processing method, image processing device, storage medium and electronic equipment Active CN108513069B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810276764.1A CN108513069B (en) 2018-03-30 2018-03-30 Image processing method, image processing device, storage medium and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810276764.1A CN108513069B (en) 2018-03-30 2018-03-30 Image processing method, image processing device, storage medium and electronic equipment

Publications (2)

Publication Number Publication Date
CN108513069A CN108513069A (en) 2018-09-07
CN108513069B true CN108513069B (en) 2021-01-08

Family

ID=63379314

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810276764.1A Active CN108513069B (en) 2018-03-30 2018-03-30 Image processing method, image processing device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN108513069B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110430359B (en) * 2019-07-31 2021-07-09 北京迈格威科技有限公司 Shooting assistance method and device, computer equipment and storage medium
CN112887613B (en) * 2021-01-27 2022-08-19 维沃移动通信有限公司 Shooting method and device, electronic equipment and storage medium
CN113014820A (en) * 2021-03-15 2021-06-22 联想(北京)有限公司 Processing method and device and electronic equipment
CN113329172B (en) * 2021-05-11 2023-04-07 维沃移动通信(杭州)有限公司 Shooting method and device and electronic equipment
CN113347355A (en) * 2021-05-28 2021-09-03 维沃移动通信(杭州)有限公司 Image processing method and device and electronic equipment

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104243818A (en) * 2014-08-29 2014-12-24 小米科技有限责任公司 Image processing method and device and image processing equipment
CN105183734A (en) * 2014-06-16 2015-12-23 西安中兴新软件有限责任公司 Method and device for image file sharing
CN105303161A (en) * 2015-09-21 2016-02-03 广东欧珀移动通信有限公司 Method and device for shooting multiple people
CN106454121A (en) * 2016-11-11 2017-02-22 努比亚技术有限公司 Double-camera shooting method and device
CN106454287A (en) * 2016-10-27 2017-02-22 深圳奥比中光科技有限公司 Combined camera shooting system, mobile terminal and image processing method
CN106454130A (en) * 2016-11-29 2017-02-22 广东欧珀移动通信有限公司 Control method, control device and electric device
CN106993135A (en) * 2017-03-31 2017-07-28 维沃移动通信有限公司 A kind of photographic method and mobile terminal
CN107690649A (en) * 2015-06-23 2018-02-13 三星电子株式会社 Digital filming device and its operating method
CN107833197A (en) * 2017-10-31 2018-03-23 广东欧珀移动通信有限公司 Method, apparatus, computer-readable recording medium and the electronic equipment of image procossing

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109194849B (en) * 2013-06-13 2021-01-15 核心光电有限公司 Double-aperture zooming digital camera

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105183734A (en) * 2014-06-16 2015-12-23 西安中兴新软件有限责任公司 Method and device for image file sharing
CN104243818A (en) * 2014-08-29 2014-12-24 小米科技有限责任公司 Image processing method and device and image processing equipment
CN107690649A (en) * 2015-06-23 2018-02-13 三星电子株式会社 Digital filming device and its operating method
CN105303161A (en) * 2015-09-21 2016-02-03 广东欧珀移动通信有限公司 Method and device for shooting multiple people
CN106454287A (en) * 2016-10-27 2017-02-22 深圳奥比中光科技有限公司 Combined camera shooting system, mobile terminal and image processing method
CN106454121A (en) * 2016-11-11 2017-02-22 努比亚技术有限公司 Double-camera shooting method and device
CN106454130A (en) * 2016-11-29 2017-02-22 广东欧珀移动通信有限公司 Control method, control device and electric device
CN106993135A (en) * 2017-03-31 2017-07-28 维沃移动通信有限公司 A kind of photographic method and mobile terminal
CN107833197A (en) * 2017-10-31 2018-03-23 广东欧珀移动通信有限公司 Method, apparatus, computer-readable recording medium and the electronic equipment of image procossing

Also Published As

Publication number Publication date
CN108513069A (en) 2018-09-07

Similar Documents

Publication Publication Date Title
CN108513069B (en) Image processing method, image processing device, storage medium and electronic equipment
CN108322646B (en) Image processing method, image processing device, storage medium and electronic equipment
CN110505411B (en) Image shooting method and device, storage medium and electronic equipment
CN116582741B (en) Shooting method and equipment
US20210168279A1 (en) Document image correction method and apparatus
CN108259767B (en) Image processing method, image processing device, storage medium and electronic equipment
CN114096994A (en) Image alignment method and device, electronic equipment and storage medium
EP4376433A1 (en) Camera switching method and electronic device
CN108574803B (en) Image selection method and device, storage medium and electronic equipment
CN112668636A (en) Camera shielding detection method and system, electronic equipment and storage medium
US10769416B2 (en) Image processing method, electronic device and storage medium
CN108495038B (en) Image processing method, image processing device, storage medium and electronic equipment
CN114390212B (en) Photographing preview method, electronic device and storage medium
CN107180417B (en) Photo processing method and device, computer readable storage medium and electronic equipment
CN117132515A (en) Image processing method and electronic equipment
CN114143471B (en) Image processing method, system, mobile terminal and computer readable storage medium
CN108769527B (en) Scene identification method and device and terminal equipment
CN108259768B (en) Image selection method and device, storage medium and electronic equipment
CN113808066A (en) Image selection method and device, storage medium and electronic equipment
CN115623319B (en) Shooting method and electronic equipment
CN113676670B (en) Photographing method, electronic device, chip system and storage medium
CN112367470B (en) Image processing method and device and electronic equipment
EP4304188A1 (en) Photographing method and apparatus, medium and chip
WO2024093854A1 (en) Image processing method and electronic device
WO2022183876A1 (en) Photography method and apparatus, and computer-readable storage medium and electronic device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: Changan town in Guangdong province Dongguan 523860 usha Beach Road No. 18

Applicant after: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS Corp.,Ltd.

Address before: Changan town in Guangdong province Dongguan 523860 usha Beach Road No. 18

Applicant before: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS Corp.,Ltd.

GR01 Patent grant