CN105681627B - Image shooting method and electronic equipment

Image shooting method and electronic equipment

Info

Publication number
CN105681627B
CN105681627B (application CN201610121500.XA)
Authority
CN
China
Prior art keywords
target object
image data
image
electronic device
identified
Prior art date
Legal status
Active
Application number
CN201610121500.XA
Other languages
Chinese (zh)
Other versions
CN105681627A (en)
Inventor
廖安华
Current Assignee
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date
Filing date
Publication date
Application filed by Lenovo Beijing Ltd filed Critical Lenovo Beijing Ltd
Priority to CN201610121500.XA priority Critical patent/CN105681627B/en
Publication of CN105681627A publication Critical patent/CN105681627A/en
Application granted granted Critical
Publication of CN105681627B publication Critical patent/CN105681627B/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00 Details of television systems
    • H04N 5/14 Picture signal circuitry for video frequency region
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V 10/443 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/80 Camera processing pipelines; Components thereof
    • H04N 23/81 Camera processing pipelines; Components thereof for suppressing or minimising disturbance in the image signal generation

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Studio Devices (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an image capturing method and an electronic device. The method includes: acquiring image data; identifying a target object in the image data and obtaining a contour of the identified target object; processing the identified target object in the image data based on the contour; and generating a captured image based on the processed image data. By implementing the invention, target objects in a series of images can be processed efficiently.

Description

Image shooting method and electronic equipment
Technical Field
The present invention relates to image processing technologies, and in particular, to a method for capturing an image and an electronic device.
Background
When shooting an image or recording a video, local persons or objects in the image sometimes need to be processed (mosaicked, blurred, and the like). The currently common approach is to process these local parts of the image after shooting or recording is finished; when the number of shot images is large, such local processing is time-consuming.
Disclosure of Invention
The embodiment of the invention provides an image shooting method and electronic equipment, which can be used for efficiently processing a target object in a series of images.
The technical scheme of the embodiment of the invention is realized as follows:
the embodiment of the invention provides a method for shooting images, which comprises the following steps:
acquiring image data;
identifying a target object in the image data and obtaining a contour of the identified target object;
processing a target object identified in the image data based on the contour;
generating a captured image based on the processed image data.
Preferably, the identifying a target object in the image data includes:
identifying a specific acquisition area designated by a user, and extracting features of a target object located in the acquisition area, or extracting features of a preset target object;
performing feature matching between the extracted features and the image data, and identifying the target object in the image data based on the matching result.
Preferably, the feature matching with the image data based on the extracted features includes:
identifying the depth of the target object in the environment, and determining the depth interval of the target object in the environment;
and performing feature matching between the extracted features of the target object and the portion of the image data located in the depth interval.
Preferably, the processing of the target object identified in the image data based on the contour includes:
performing at least one of the following processes on the target object identified in each of the image data:
mosaic processing;
blurring processing;
and overlaying the layer of the target object with a specific image different from the target object.
Preferably, the identifying a target object in the image data includes:
analyzing sensing data to obtain a displacement representing the motion of the electronic device, and determining a displacement compensation amount of the target object in the image data based on the displacement;
adjusting a history region including the target object in the image data based on the displacement compensation amount to obtain a target region;
identifying the target object in the target region in the image data.
In a second aspect, an embodiment of the present invention provides an electronic device, where the electronic device includes:
the camera is used for acquiring image data;
a processor for identifying a target object in the image data and obtaining a contour of the identified target object;
the processor is further configured to process the target object identified in the image data based on the contour;
the processor is further configured to generate a captured image based on the processed image data.
Preferably, the processor further identifies a specific acquisition region designated by a user, and extracts features of a target object located in the acquisition region, or extracts features of a preset target object;
the processor is further configured to perform feature matching with the image data based on the extracted features, and identify the target object in the image data based on a matching result.
Preferably, the processor is further configured to identify a depth of the target object in the environment, and determine a depth interval in which the target object is located in the environment;
the processor is further configured to perform feature matching with a portion of the image data located in the depth interval based on the extracted features of the target object.
Preferably, the processor is further configured to perform at least one of the following processes on the target object identified in each of the image data: mosaic processing; blurring processing; and overlaying the layer of the target object with a specific image different from the target object.
Preferably, the processor is further configured to analyze the sensing data to obtain a displacement representing the motion of the electronic device, and determine a displacement compensation amount of the target object in the image data based on the displacement;
the processor is further configured to adjust a history region including the target object in the image data based on the displacement compensation amount to obtain a target region;
the processor is further configured to identify the target object in the target region in the image data.
In a third aspect, an embodiment of the present invention provides a computer storage medium, where executable instructions are stored, and the executable instructions are used to execute the above-mentioned image capturing method.
According to the embodiments of the invention, after the image data is acquired and before the captured image is generated, the target object carried in the image data is identified and processed, and the captured image (such as a photo or a frame image in a video) is then generated. The target object is thus masked at the moment the image is generated, saving the user the time of masking it afterwards.
Drawings
FIG. 1 is a first schematic flowchart of an implementation of the image capturing method according to an embodiment of the present invention;
FIG. 2 is a second schematic flowchart of an implementation of the image capturing method according to an embodiment of the present invention;
FIG. 3 is a third schematic flowchart of an implementation of the image capturing method according to an embodiment of the present invention;
FIG. 4 is a fourth schematic flowchart of an implementation of the image capturing method according to an embodiment of the present invention;
FIG. 5 is a fifth schematic flowchart of an implementation of the image capturing method according to an embodiment of the present invention;
FIG. 6 is a sixth schematic flowchart of an implementation of the image capturing method according to an embodiment of the present invention;
FIG. 7 is a first functional block diagram of an electronic device according to an embodiment of the present invention;
FIG. 8 is a second functional block diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The invention is described in further detail below with reference to the figures and specific examples.
The image capturing method disclosed in the embodiments of the invention is applied to electronic devices such as smart phones, tablet computers, and notebook computers. Optionally, a camera is built into the electronic device, and the electronic device captures the environment through the camera to obtain image data. Optionally, the electronic device connects to a separate image capturing device (e.g., over a short-range connection such as Bluetooth) and controls the camera in that device to capture the environment and obtain image data. The image data may be the data of one picture or the data of one or more frame images in a video.
Referring to fig. 1, in the embodiment of the present invention, an electronic device acquires image data (step 101), identifies a target object in the image data, obtains a contour of the identified target object (step 102), and processes the target object identified in the image data based on the contour (step 103); a captured image is generated based on the processed image data (step 104).
Unlike the prior art, which processes the target object after the image has been generated, the embodiments of the invention identify the target object carried in the image data and process it before the captured image (such as a photo or a frame image in a video) is generated. The target object is thus masked at the moment the image is generated, saving the user the time of masking it afterwards.
Example one
In a typical application scenario of this embodiment, a user frames an environment with the camera of an electronic device in preparation for taking a picture (i.e., the electronic device captures image data of the environment through the camera and presents it on a display interface based on that image data, so that the user can adjust the shooting angle, shooting range, and so on). During framing, the user finds an object in the environment that needs to be hidden (the target object, which the user does not want displayed) and marks a specific capture area of the environment that includes the target object in a specific manner (e.g., by touch). After the user shoots (e.g., performs a shooting trigger operation such as pressing the shutter), the electronic device identifies the target object in the image data of the shot based on feature matching and processes it to generate a captured image (i.e., an image that can be displayed for the user to view) in which the target object is hidden.
To achieve the above-mentioned effect, referring to an alternative flow chart of the method for capturing images shown in fig. 2, the method comprises the following steps:
step 201, image data is acquired.
After responding to the user's framing operation, if an instruction to capture the environment is received, the electronic device controls the camera to capture the environment and obtain image data including the objects (such as people and things) in the environment.
Step 202, identifying a specific acquisition area designated by the user, and extracting features of the target object located in the acquisition area.
The electronic device may determine the specific acquisition region designated by the user during the framing operation (of course, the specific acquisition region may also be pre-designated in terms of coordinates, orientation, etc. before the environment is captured); for example, while presenting a real-time image of the environment on the display interface in response to the framing operation, the electronic device receives the specific acquisition region through a user operation such as drawing a closed curve.
Extracting features in the acquisition area can be implemented by any existing image feature extraction algorithm. In particular, to save computing resources of the electronic device, the features of the target object can be simplified into imaging points at any position, such as points at the edge of the target object, or points on the target object that are inconsistent with its overall appearance, for example black spots on a white target object, convex or concave points on the target object, rust spots on a metal target object, peeling points in the paint on the surface of the target object, and the like.
Step 203, performing feature matching between the extracted features and the image data, and identifying the target object in the image data based on the matching result.
Steps 202 to 203 are processing steps of identifying a target object in image data.
Step 204, obtaining the contour of the identified target object, and processing the target object identified in the image data based on the contour.
The contour of the target object may be obtained using an existing edge detection algorithm. In practical applications, the image data corresponding to the contour region of the target object (including the image data within the contour) may be subjected to mosaic or blurring processing to make the target object invisible, or may be overlaid with a specific image (for example, a randomly generated image such as a monochrome image; when the user has set a specific image in advance, the user-set image is used preferentially).
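To make this step concrete, here is a minimal sketch of contour-based masking, assuming OpenCV and NumPy and a BGR input frame; the function name process_target and its parameters are illustrative choices, not taken from the patent:

```python
import cv2
import numpy as np

def process_target(image, contour, mode="mosaic", block=16, cover_color=(0, 0, 0)):
    """Hide the region enclosed by `contour` by mosaic, blur, or a solid cover.

    A sketch of step 204; a user-supplied "specific image" could replace the
    solid cover. `image` is assumed to be a BGR frame.
    """
    # Build a filled mask of the target's contour region.
    mask = np.zeros(image.shape[:2], dtype=np.uint8)
    cv2.drawContours(mask, [contour], -1, 255, thickness=cv2.FILLED)

    if mode == "mosaic":
        # Downscale then upscale with nearest-neighbour to get coarse blocks.
        h, w = image.shape[:2]
        small = cv2.resize(image, (max(1, w // block), max(1, h // block)))
        processed = cv2.resize(small, (w, h), interpolation=cv2.INTER_NEAREST)
    elif mode == "blur":
        processed = cv2.GaussianBlur(image, (31, 31), 0)
    else:
        # Solid colour standing in for an overlay image chosen by the user.
        processed = np.zeros_like(image)
        processed[:] = cover_color

    # Replace only the pixels inside the contour; the rest stays untouched.
    out = image.copy()
    out[mask == 255] = processed[mask == 255]
    return out
```

Because only pixels inside the contour mask are replaced, the modification is confined to the target object's contour region, matching the intent of this step.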
Step 205, a captured image is generated based on the processed image data.
When the photograph is taken, the target object included in its image data is processed as in step 204, which modifies the image data corresponding to the contour region of the target object; when the captured image is generated from the modified image data, the target object is not visible in it.
In the case where the user captures image data of the environment to form a video, since the video is composed of a series of frame images captured by the camera, the target object included in the image data of each frame image of the video (i.e., the data of the series of frame images) is processed in the same way as described in the foregoing steps. After the image data of every frame image has been processed, the target object remains invisible whenever the electronic device plays the generated video, and the effect of hiding the target object is achieved without the user performing any post-editing operation on the generated image data.
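Applied to video, the same treatment amounts to a per-frame loop. The sketch below assumes the process_target helper above, a hypothetical locate_target function standing in for whichever identification method the embodiment uses (feature matching, depth gating), and placeholder file names:

```python
import cv2

cap = cv2.VideoCapture("input.mp4")   # placeholder source
fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
size = (int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)),
        int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)))
out = cv2.VideoWriter("output.mp4", cv2.VideoWriter_fourcc(*"mp4v"), fps, size)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    contour = locate_target(frame)    # hypothetical per-frame identification
    if contour is not None:
        frame = process_target(frame, contour, mode="mosaic")
    out.write(frame)                  # the written video never shows the target

cap.release()
out.release()
```

Since every frame is masked before being written, no post-editing pass over the generated video is needed, which is the efficiency claim of the embodiment.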
Example two
In a typical application scenario of this embodiment, a user frames an environment with the camera of an electronic device in preparation for taking a picture. During framing, the electronic device identifies, in the image data of the captured environment, an object predetermined by the user as needing to be hidden (the target object, which the user does not want displayed), and the target object is marked in a specific manner (e.g., by touch). After the user shoots (e.g., performs a shooting trigger operation such as pressing the shutter), the electronic device identifies the target object in the image data of the shot based on feature matching and processes it to generate a captured image (i.e., an image that can be displayed for the user to view) in which the target object is hidden.
To achieve the above-mentioned effect, referring to an alternative flow chart of the method for capturing images shown in fig. 3, the method includes the following steps:
step 301, image data is acquired.
Step 302, extracting the characteristics of a preset target object.
The user may preset, in the electronic device, an object that should not be displayed in the image, i.e., the target object. When setting the target object, the user may specify its features, such as color features and contour features, or features extracted by an existing image feature extraction algorithm; when the electronic device acquires image data, it extracts the features of the preset target object.
Of course, the electronic device may also obtain the features of the target object by performing feature extraction on an existing image of the target object uploaded to the electronic device by the user. When the user has set a plurality of target objects, the electronic device extracts features according to whichever of them the user has selected to be hidden in the currently generated image.
Step 303, performing feature matching on the extracted features and the image data, and identifying the target object in the image data based on the matching result.
Generally, the image data includes a plurality of objects (including the target object). When the electronic device matches the extracted features of the target object against the image data, it obtains matching results against the features of the plurality of objects, each expressed as a quantized matching degree between the preset features of the target object and the features of an object in the image data. Because the position and size of the target object in the image may differ from those used to extract the preset features, the extracted features may not match the target object in the image data perfectly; nevertheless, their matching degree with the actual target object will be higher than with any non-target object. The object in the image data with the highest matching degree can therefore be identified as the target object.
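One way to realize such a quantized matching degree is sketched below using OpenCV's ORB features and a brute-force matcher; the candidate-region decomposition, the distance threshold of 50, and the scoring heuristic are assumptions for illustration, not the patent's prescribed algorithm:

```python
import cv2

orb = cv2.ORB_create()
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def match_degree(template_gray, candidate_gray):
    """Quantized matching degree between preset target features and a candidate."""
    kp_t, des_t = orb.detectAndCompute(template_gray, None)
    kp_c, des_c = orb.detectAndCompute(candidate_gray, None)
    if des_t is None or des_c is None:
        return 0.0
    matches = bf.match(des_t, des_c)
    good = [m for m in matches if m.distance < 50]   # smaller distance = better
    # Fraction of the template's keypoints that found a good match.
    return len(good) / max(1, len(kp_t))

def identify_target(template_gray, candidate_regions):
    """Pick the candidate object region with the highest matching degree."""
    return max(candidate_regions, key=lambda r: match_degree(template_gray, r))
```

The best-scoring object is taken as the target even when the score is below 1.0, mirroring the observation that a perfect match is not required, only the highest one.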
Steps 302 to 303 are processing steps of identifying a target object in image data.
And step 304, obtaining the contour of the identified target object, and processing the target object identified in the image data based on the contour.
Step 305 generates a captured image based on the processed image data.
When the photograph is taken, the target object included in its image data is processed as in step 304, which modifies the image data corresponding to the contour region of the target object; when the captured image is generated from the modified image data, the target object is not visible in it.
In the case where the user captures image data of the environment to form a video, since the video is composed of a series of frame images captured by the camera, the target object included in the image data of each frame image of the video (i.e., the data of the series of frame images) is processed in the same way as described in the foregoing steps. After the image data of every frame image has been processed, the target object remains invisible whenever the electronic device plays the generated video, and the effect of hiding the target object is achieved without the user performing any post-editing operation on the generated image data.
EXAMPLE III
In a typical application scenario of this embodiment, a user frames an environment with the camera of an electronic device in preparation for taking a picture. During framing, the user finds an object in the environment that needs to be processed (the target object the user does not want displayed in the picture) and marks it in a specific manner (e.g., by touch), whereupon the electronic device recognizes the depth of the target object in the environment. After the user shoots (e.g., performs a shooting trigger operation such as pressing the shutter), the electronic device identifies the target object in the acquired image data based on its depth and processes it to generate a captured image (i.e., an output image for the user to view).
To achieve the above-mentioned effect, referring to an alternative flow chart of the method for capturing images shown in fig. 4, the method comprises the following steps:
step 401, image data is acquired.
Step 402, identifying a specific acquisition area designated by the user, and extracting features of the target object located in the acquisition area.
Step 403, identifying the depth of the target object in the environment, and determining the depth interval in which the target object is located in the environment.
The electronic device identifies the depth information of the target object in the environment by means of a binocular camera or a depth camera.
Step 404, performing feature matching between the extracted features of the target object and the portion of the image data located in the depth interval, and identifying the target object in the image data based on the matching result.
Objects in the image data often lie in different depth intervals. By identifying the depth interval of the target object in the environment and performing feature matching only against the image data within that interval, while skipping the other depth intervals, the processing time and computing resources of the electronic device can be significantly reduced.
When the determined depth interval in the image data contains only one object, the target object can be identified directly by matching the extracted features against the image data within that depth interval.
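A depth-gated variant of the matching step can be sketched as follows, under the assumption that a per-pixel depth map aligned with the image is available from the binocular or depth camera; the interval margin is an illustrative parameter:

```python
import cv2
import numpy as np

def features_in_depth_interval(image, depth_map, target_depth, margin=0.3):
    """Detect features only where the depth lies in the target's interval.

    depth_map: float array of per-pixel depths aligned with `image`.
    Pixels outside [target_depth - margin, target_depth + margin] are masked
    out, so no time is spent matching against other depth intervals.
    """
    near, far = target_depth - margin, target_depth + margin
    mask = ((depth_map >= near) & (depth_map <= far)).astype(np.uint8) * 255

    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    orb = cv2.ORB_create()
    keypoints, descriptors = orb.detectAndCompute(gray, mask)
    return keypoints, descriptors, mask
```

Passing the depth mask to the detector is what confines the work to the target's depth interval; the returned descriptors can then be scored exactly as in the matching-degree sketch above.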
Steps 402 to 404 are processing steps of identifying a target object in image data.
Step 405, obtaining the contour of the identified target object, and processing the identified target object in the image data based on the contour.
As before, this includes mosaic processing, blurring processing, and/or overlaying the layer of the target object with a specific image different from the target object.
In step 406, a captured image is generated based on the processed image data.
When the photograph is taken, the target object included in its image data is processed as in step 405, which modifies the image data corresponding to the contour region of the target object; when the captured image is generated from the modified image data, the target object is not visible in it.
In the case where the user captures image data of the environment to form a video, since the video is composed of a series of frame images captured by the camera, the target object included in the image data of each frame image of the video (i.e., the data of the series of frame images) is processed in accordance with the foregoing steps.
Example four
In a typical application scenario of this embodiment, a user frames an environment with the camera of an electronic device in preparation for taking a picture. During framing, the electronic device identifies, in the image data of the captured environment, an object preset by the user as needing processing (the target object the user does not want displayed in the picture), marks it in a specific manner (e.g., by touch), and recognizes its depth in the environment. After the user shoots (e.g., performs a shooting trigger operation such as pressing the shutter), the electronic device identifies the target object in the acquired image data based on its depth and processes it to generate a captured image (i.e., an output image for the user to view).
To achieve the above-mentioned effect, referring to an alternative flow chart of the method for capturing images shown in fig. 5, the method comprises the following steps:
step 501, image data is acquired.
Step 502, extracting the characteristics of a preset target object.
Step 503, identifying the depth of the target object in the environment, and determining the depth interval of the target object in the environment.
The electronic device recognizes the object in the environment whose features were extracted in step 502 as the target object, and identifies its depth information in the environment by means of a binocular camera or a depth camera.
Step 504, performing feature matching between the extracted features of the target object and the portion of the image data located in the depth interval, and identifying the target object in the image data based on the matching result.
Objects in the image data often lie in different depth intervals. By identifying the depth interval of the target object in the environment and performing feature matching only against the image data within that interval, while skipping the other depth intervals, the processing time and computing resources of the electronic device can be significantly reduced.
When the determined depth interval in the image data contains only one object, the target object can be identified directly by matching the extracted features against the image data within that depth interval.
Steps 502 to 504 are processing steps of identifying a target object in image data.
Step 505, obtaining the contour of the identified target object, and processing the target object identified in the image data based on the contour.
As before, this includes mosaic processing, blurring processing, and/or overlaying the layer of the target object with a specific image different from the target object.
Step 506, generating a shot image based on the processed image data.
When the photograph is taken, the target object included in its image data is processed as in step 505, which modifies the image data corresponding to the contour region of the target object; when the captured image is generated from the modified image data, the target object is not visible in it.
In the case where the user captures image data of the environment to form a video, since the video is composed of a series of frame images captured by the camera, the target object included in the image data of each frame image of the video (i.e., the data of the series of frame images) is processed in the same way as described in the foregoing steps. After the image data of every frame image has been processed, the target object remains invisible whenever the electronic device plays the generated video, and the effect of hiding the target object is achieved without the user performing any post-editing operation on the generated image data.
EXAMPLE five
In a typical application scenario of this embodiment, a user frames an environment with the camera of an electronic device in preparation for taking a picture. During framing, the electronic device identifies, in the image data of the acquired environment, an object to be processed (the target object the user does not want displayed in the picture, such as a target object located in a specific capture area predefined by the user or one matching features predefined by the user). Once the electronic device has identified the target object in a region of one image, then, given the continuity of the user's operation, it may first look for the target object in that historical region of the next image's data, which speeds up identification. And since shaking is inevitable when the user holds the electronic device, detecting the displacement of the electronic device between the two acquisitions of image data and correcting the historical region by the corresponding displacement compensation amount further improves the identification speed. If the target object is not identified in the history region, feature matching continues in the other regions of the image data to identify it.
To achieve the above-mentioned effect, referring to an alternative flow chart of the method for capturing images shown in fig. 6, the method comprises the following steps:
step 601, acquiring first image data.
Step 602, obtaining characteristics of the target object.
As before, as one implementation of step 602, the features of a preset target object are extracted, the depth of the target object in the environment is identified and the depth interval in which it is located is determined, feature matching is performed with the portion of the first image data located in the depth interval based on the extracted features, and the target object is identified in the first image data based on the matching result.
As mentioned above, as an implementation manner of step 602, the feature of the target object may be a feature of the target object preset by the user.
Step 603, obtaining the contour of the identified target object, processing the identified target object in the first image data based on the contour, and generating a shot image 1 based on the processed first image data.
As before, this includes mosaic processing, blurring processing, and/or overlaying the layer of the target object with a specific image different from the target object.
Step 604, analyzing the sensing data to obtain a displacement representing the motion of the electronic device, and determining a displacement compensation amount of the target object in the image data based on the displacement.
Step 605, adjusting the history region including the target object in the second image data based on the displacement compensation amount to obtain the target region.
It is assumed that the second image data is image data acquired after the first image data. For example, when the first image data is the image data of a photograph 1, the second image data is the image data of a photograph 2 taken after photograph 1; when the first image data is the image data of a frame image 1 in a captured video, the second image data is the image data of a frame image 2 captured after frame image 1.
Step 606 identifies a target object in the target area in the second image data.
When the target object is not identified in the target region of the second image data, as a continuation of step 606, the target object is identified in the other regions of the second image data based on its features.
Step 607, the contour of the identified target object is obtained, the target object identified in the second image data is processed based on the contour, and the captured image 2 is generated based on the processed image data.
For third image data acquired after the second image data, the process of identifying the target object is similar to steps 604 to 606 and is not repeated here. By compensating and correcting the target region in successive image data based on the displacement of the electronic device, the speed of identifying the target object can be increased and the computing resources of the electronic device can be saved.
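The displacement-compensated search can be sketched as follows; the conversion of sensor displacement to pixels, the template-matching fallback, and the 0.7 score threshold are illustrative assumptions rather than the patent's prescribed method:

```python
import cv2
import numpy as np

def compensate_region(history_box, displacement_px, frame_shape):
    """Shift the previous frame's target region by the device displacement.

    history_box: (x, y, w, h) of the target in the earlier image data.
    displacement_px: (dx, dy) device motion converted to pixels; the scene is
    assumed to shift opposite to the device, hence the subtraction.
    """
    x, y, w, h = history_box
    dx, dy = displacement_px
    x = int(np.clip(x - dx, 0, frame_shape[1] - w))
    y = int(np.clip(y - dy, 0, frame_shape[0] - h))
    return (x, y, w, h)

def find_target(frame, template_gray, history_box, displacement_px):
    """Search the compensated history region first (steps 604 to 606)."""
    x, y, w, h = compensate_region(history_box, displacement_px, frame.shape)
    roi = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
    res = cv2.matchTemplate(roi, template_gray, cv2.TM_CCOEFF_NORMED)
    _, score, _, loc = cv2.minMaxLoc(res)
    if score > 0.7:
        th, tw = template_gray.shape[:2]
        return (x + loc[0], y + loc[1], tw, th)
    return None   # caller falls back to searching the other regions
```

Searching only the compensated history region keeps the per-frame cost low; the whole-frame search runs only on the occasional miss, which is where the claimed resource saving comes from.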
EXAMPLE six
Referring to fig. 7, the present embodiment describes an electronic device, including:
A camera 100 for acquiring image data. After responding to a framing operation of the user, if an instruction to capture the environment is received, the processor 200 controls the camera 100 to capture the environment and obtain image data including the objects (e.g., people and things) in the environment.
A processor 200 for identifying a target object in the image data, obtaining a contour of the identified target object, and processing the target object identified in the image data based on the contour, for example performing at least one of the following on the identified target object: mosaic processing; blurring processing; overlaying the layer of the target object with a specific image different from the target object. The processor 200 then generates a captured image based on the processed image data.
As one implementation of the processor 200 processing the target object, the processor 200 identifies a specific acquisition region designated by the user, extracts the features of the object located in the acquisition region, performs feature matching with the image data based on the extracted features, and identifies the target object in the image data based on the matching result. The processor 200 may determine the specific acquisition region during the user's framing operation (of course, the region may also be pre-designated in terms of coordinates, orientation, etc. before the environment is captured); for example, while presenting a real-time image of the environment in response to the framing operation, it receives the specific acquisition region through a user operation such as drawing a closed curve. Feature extraction in the acquisition area can be implemented by any existing image feature extraction algorithm. In particular, to save computing resources, the features of the target object can be simplified into imaging points at any position, such as points at the edge of the target object, or points inconsistent with its overall appearance, for example black spots on a white target object, convex or concave points on the target object, rust spots on a metal target object, peeling points in the paint on its surface, and the like.
As another implementation of the processor 200 processing the target object, the processor 200 extracts the features of a preset target object, performs feature matching with the image data based on the extracted features, and identifies the target object in the image data based on the matching result. The user may preset an object that should not be displayed in the image, i.e., the target object, specifying its features such as color features and contour features, or features extracted by an existing image feature extraction algorithm; when the image data is acquired, the processor 200 extracts the features of the preset target object. Of course, the processor 200 may also obtain the features by performing feature extraction on an existing image of the target object uploaded by the user to the memory of the electronic device. When the user sets a plurality of target objects, the processor 200 extracts features according to whichever of them the user has selected to be hidden in the currently generated image.
To increase the speed of identifying the target object in the image data, the processor 200 identifies the depth of the object in the environment and determines the depth interval in which the object is located; feature matching is then performed only with the portion of the image data located in that depth interval, based on the extracted features of the object.
To speed up identification across continuously acquired image data (the first image data and the second image data are taken as an example here), referring to fig. 8, a sensor 300 is further provided in the electronic device for outputting sensing data representing the displacement of the electronic device. After the processor 200 identifies the target object in the first image data in the manner described above, it determines the region where the target object is located in the first image data, analyzes the sensing data to obtain a displacement representing the motion of the electronic device, and determines a displacement compensation amount of the target object in the image data based on that displacement; it then adjusts the region including the target object based on the displacement compensation amount to obtain a target region, and identifies the target object in the target region of the second image data. If the target object is not identified there, it is identified based on its features in the other regions (regions other than the target region) of the second image data.
Those of ordinary skill in the art will understand that: all or part of the steps for implementing the method embodiments may be implemented by hardware related to program instructions, and the program may be stored in a computer readable storage medium, and when executed, the program performs the steps including the method embodiments; and the aforementioned storage medium includes: a mobile storage device, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
Alternatively, the integrated unit of the present invention may be stored in a computer-readable storage medium if it is implemented in the form of a software functional module and sold or used as a separate product. Based on such understanding, the technical solutions of the embodiments of the present invention may be essentially implemented or a part contributing to the prior art may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the methods described in the embodiments of the present invention. And the aforementioned storage medium includes: a removable storage device, a ROM, a RAM, a magnetic or optical disk, or various other media that can store program code.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.

Claims (8)

1. A method of capturing an image, the method comprising:
acquiring first image data in response to a framing operation;
extracting target object features;
identifying the depth of the target object in the environment, and determining the depth interval of the target object in the environment;
in response to a shooting trigger operation, performing feature matching between the extracted features of the target object and the portion of the first image data located in the depth interval, identifying the target object in the first image data based on the matching result, and obtaining a contour of the identified target object;
processing a target object identified in the first image data based on the contour;
generating a photographed image based on the processed first image data.
2. The method of capturing an image according to claim 1, wherein the extracting the target object feature includes:
identifying a specific acquisition area designated by a user, and extracting features of a target object located in the acquisition area, or extracting features of a preset target object.
3. The method of capturing images according to claim 1, wherein the processing the target object identified in the first image data based on the contour includes:
performing at least one of the following processes on the target object identified in each of the first image data:
mosaic processing;
blurring processing;
and overlaying the layer of the target object with a specific image different from the target object.
4. The method of capturing an image according to claim 1, the method further comprising:
analyzing the sensing data to obtain displacement representing the motion of the electronic equipment, and determining the displacement compensation quantity of the target object in the second image data based on the displacement;
adjusting a history area including the target object in the second image data based on the displacement compensation amount to obtain a target area;
identifying the target object in the target region in the second image data.
5. An electronic device, characterized in that the electronic device comprises:
the camera is used for acquiring first image data;
the processor is used for, in response to a framing operation, extracting target object features, identifying the depth of the target object in the environment, and determining the depth interval in which the target object is located in the environment; and, in response to a shooting trigger operation, performing feature matching between the extracted features of the target object and the portion of the first image data located in the depth interval, identifying the target object in the first image data based on the matching result, and obtaining a contour of the identified target object;
the processor is further configured to process a target object identified in the first image data based on the contour;
the processor is further configured to generate a captured image based on the processed first image data.
6. The electronic device of claim 5, wherein the extracting target object features comprises:
the processor is further configured to identify a specific acquisition region calibrated by a user, and extract features of a target object located in the image acquisition region, or extract features of a preset target object.
7. The electronic device of claim 6,
the processor is further configured to perform at least one of the following processes on the target object identified in each of the first image data: mosaic processing; fuzzification processing; and covering a specific image different from the target object on the layer of the target object.
8. The electronic device of claim 5, wherein the electronic device further comprises:
a sensor for outputting sensed data indicative of a displacement of movement of the electronic device;
the processor is further configured to analyze the sensing data to obtain a displacement representing the motion of the electronic device, and determine a displacement compensation amount of the target object in the second image data based on the displacement;
the processor is further configured to adjust a history region including the target object in the second image data based on the displacement compensation amount to obtain a target region;
the processor is further configured to identify the target object in the target region in the second image data.
CN201610121500.XA 2016-03-03 2016-03-03 Image shooting method and electronic equipment Active CN105681627B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610121500.XA CN105681627B (en) 2016-03-03 2016-03-03 Image shooting method and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610121500.XA CN105681627B (en) 2016-03-03 2016-03-03 Image shooting method and electronic equipment

Publications (2)

Publication Number Publication Date
CN105681627A CN105681627A (en) 2016-06-15
CN105681627B true CN105681627B (en) 2019-12-24

Family

ID=56307810

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610121500.XA Active CN105681627B (en) 2016-03-03 2016-03-03 Image shooting method and electronic equipment

Country Status (1)

Country Link
CN (1) CN105681627B (en)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105933612A (en) * 2016-06-29 2016-09-07 联想(北京)有限公司 Image processing method and electronic equipment
CN106331486A (en) * 2016-08-25 2017-01-11 珠海市魅族科技有限公司 Image processing method and system
CN107426497A (en) * 2017-06-15 2017-12-01 深圳天珑无线科技有限公司 The method, apparatus and computer-readable recording medium of a kind of recording image
CN107493433A (en) * 2017-09-08 2017-12-19 盯盯拍(深圳)技术股份有限公司 Image pickup method and filming apparatus
CN107707824B (en) * 2017-10-27 2020-07-31 Oppo广东移动通信有限公司 Shooting method, shooting device, storage medium and electronic equipment
CN108900895B (en) * 2018-08-23 2021-05-18 深圳码隆科技有限公司 Method and device for shielding target area of video stream
CN108897899A (en) * 2018-08-23 2018-11-27 深圳码隆科技有限公司 The localization method and its device of the target area of a kind of pair of video flowing
US10614340B1 (en) 2019-09-23 2020-04-07 Mujin, Inc. Method and computing system for object identification
CN111191083B (en) * 2019-09-23 2021-01-01 牧今科技 Method and computing system for object identification
CN111177009A (en) * 2019-12-31 2020-05-19 五八有限公司 Script generation method and device, electronic equipment and storage medium
CN111263063A (en) * 2020-02-17 2020-06-09 深圳传音控股股份有限公司 Method, device and equipment for shooting image
CN112188058A (en) * 2020-09-29 2021-01-05 努比亚技术有限公司 Video shooting method, mobile terminal and computer storage medium
CN113766130B (en) * 2021-09-13 2023-07-28 维沃移动通信有限公司 Video shooting method, electronic equipment and device

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001086407A (en) * 1999-09-17 2001-03-30 Matsushita Electric Ind Co Ltd Image pickup device with mosaic function and mosaic processor
CN1767638A (en) * 2005-11-30 2006-05-03 北京中星微电子有限公司 Visible image monitoring method for protecting privacy right and its system
CN101276409A (en) * 2007-03-27 2008-10-01 三洋电机株式会社 Image processing apparatus
CN102111491A (en) * 2009-12-29 2011-06-29 比亚迪股份有限公司 Mobile equipment with picture-taking function and face recognition processing method thereof
CN102932541A (en) * 2012-10-25 2013-02-13 广东欧珀移动通信有限公司 Mobile phone photographing method and system
CN104168422A (en) * 2014-08-08 2014-11-26 小米科技有限责任公司 Image processing method and device

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5088161B2 (en) * 2008-02-15 2012-12-05 ソニー株式会社 Image processing apparatus, camera apparatus, communication system, image processing method, and program
CN103297699A (en) * 2013-05-31 2013-09-11 北京小米科技有限责任公司 Method and terminal for shooting images
CN104796594B (en) * 2014-01-16 2020-01-14 中兴通讯股份有限公司 Method for instantly presenting special effect of preview interface and terminal equipment
CN105100615B (en) * 2015-07-24 2019-02-26 青岛海信移动通信技术股份有限公司 A kind of method for previewing of image, device and terminal
CN105225230B (en) * 2015-09-11 2018-07-13 浙江宇视科技有限公司 A kind of method and device of identification foreground target object


Also Published As

Publication number Publication date
CN105681627A (en) 2016-06-15

Similar Documents

Publication Publication Date Title
CN105681627B (en) Image shooting method and electronic equipment
CN108197586B (en) Face recognition method and device
JP6961797B2 (en) Methods and devices for blurring preview photos and storage media
US9652663B2 (en) Using facial data for device authentication or subject identification
EP3719694A1 (en) Neural network model-based human face living body detection
JP5757063B2 (en) Information processing apparatus and method, and program
CN106372629B (en) Living body detection method and device
KR101662846B1 (en) Apparatus and method for generating bokeh in out-of-focus shooting
CN103365488B (en) Information processor, program and information processing method
JP2010045770A (en) Image processor and image processing method
JP2012530994A (en) Method and apparatus for half-face detection
CN107566749B (en) Shooting method and mobile terminal
KR20090088325A (en) Image processing apparatus, image processing method and imaging apparatus
WO2016107638A1 (en) An image face processing method and apparatus
CN108037830B (en) Method for realizing augmented reality
WO2017173578A1 (en) Image enhancement method and device
US8284292B2 (en) Probability distribution constructing method, probability distribution constructing apparatus, storage medium of probability distribution constructing program, subject detecting method, subject detecting apparatus, and storage medium of subject detecting program
JP2012212373A (en) Image processing device, image processing method and program
CN111860346A (en) Dynamic gesture recognition method and device, electronic equipment and storage medium
WO2020227945A1 (en) Photographing method and apparatus
CN111640165A (en) Method and device for acquiring AR group photo image, computer equipment and storage medium
JP2019135810A (en) Image processing apparatus, image processing method, and program
CN107564085B (en) Image warping processing method and device, computing equipment and computer storage medium
CN109313797A (en) A kind of image display method and terminal
CN109919190B (en) Straight line segment matching method, device, storage medium and terminal

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant