CN105681627A - Image shooting method and electronic equipment - Google Patents


Info

Publication number
CN105681627A
CN105681627A (application CN201610121500.XA)
Authority
CN
China
Prior art keywords
target object
image data
electronic device
feature
processor
Prior art date
Legal status
Granted
Application number
CN201610121500.XA
Other languages
Chinese (zh)
Other versions
CN105681627B (en)
Inventor
廖安华
Current Assignee
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date
Filing date
Publication date
Application filed by Lenovo Beijing Ltd
Priority to CN201610121500.XA
Publication of CN105681627A
Application granted
Publication of CN105681627B
Legal status: Active
Anticipated expiration


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00: Details of television systems
    • H04N5/14: Picture signal circuitry for video frequency region
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/40: Extraction of image or video features
    • G06V10/44: Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; connectivity analysis, e.g. of connected components
    • G06V10/443: Local feature extraction by matching or filtering
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; control thereof
    • H04N23/80: Camera processing pipelines; components thereof
    • H04N23/81: Camera processing pipelines for suppressing or minimising disturbance in the image signal generation

Abstract

The invention discloses an image shooting method and electronic equipment. The method comprises: obtaining image data; identifying a target object in the image data and obtaining a contour of the identified target object; processing the identified target object in the image data based on the contour; and generating a shot image based on the processed image data. According to the invention, target objects in a series of images can be processed efficiently.

Description

Image shooting method and electronic device
Technical field
The present invention relates to image processing technologies, and in particular to an image shooting method and an electronic device.
Background art
When shooting images or recording video, it is often necessary to process a local part of the image, such as a person or object (applying a mosaic, blurring, and so on). The approach generally adopted at present is to process the local part of the image after shooting or recording has finished; when a large number of images have been taken, this local processing is quite time-consuming.
Summary of the invention
The embodiments of the present invention provide an image shooting method and an electronic device capable of efficiently processing a target object across a series of images.
The technical solution of the embodiments of the present invention is implemented as follows:
In a first aspect, an embodiment of the present invention provides an image shooting method, the method comprising:
obtaining image data;
identifying a target object in the image data, and obtaining a contour of the identified target object;
processing the identified target object in the image data based on the contour; and
generating a shot image based on the processed image data.
Preferably, identifying the target object in the image data comprises:
identifying a specific acquisition region designated by the user and extracting features of the target object located in the image acquisition region, or extracting features of a preset target object; and
performing feature matching between the extracted features and the image data, and identifying the target object in the image data based on the matching result.
Preferably, performing feature matching between the extracted features and the image data comprises:
identifying the depth of the target object in the environment, and determining the depth interval in which the target object resides in the environment; and
performing feature matching between the extracted features of the target object and the portion of the image data located within the depth interval.
Preferably, processing the target object identified in the image data based on the contour comprises applying at least one of the following to the target object identified in each set of image data:
mosaic processing;
blurring; or
covering the layer of the target object with a specific image different from the target object.
Preferably, identifying the target object in the image data comprises:
parsing sensor data to obtain a displacement characterizing the motion of the electronic device, and determining a displacement compensation amount for the target object in the image data based on the displacement;
adjusting, based on the displacement compensation amount, the history area containing the target object in the image data to obtain a target area; and
identifying the target object within the target area in the image data.
In a second aspect, an embodiment of the present invention provides an electronic device, the device comprising:
a camera, configured to obtain image data; and
a processor, configured to identify a target object in the image data and obtain a contour of the identified target object;
the processor being further configured to process the target object identified in the image data based on the contour; and
the processor being further configured to generate a shot image based on the processed image data.
Preferably, the processor is further configured to identify a specific acquisition region designated by the user and extract features of the target object located in the image acquisition region, or to extract features of a preset target object;
the processor is further configured to perform feature matching between the extracted features and the image data, and to identify the target object in the image data based on the matching result.
Preferably, the processor is further configured to identify the depth of the target object in the environment and determine the depth interval in which the target object resides in the environment;
the processor is further configured to perform feature matching between the extracted features of the target object and the portion of the image data located within the depth interval.
Preferably, the processor is further configured to apply at least one of the following to the target object identified in each set of image data: mosaic processing; blurring; or covering the layer of the target object with a specific image different from the target object.
Preferably, the processor is further configured to parse sensor data to obtain a displacement characterizing the motion of the electronic device, and to determine a displacement compensation amount for the target object in the image data based on the displacement;
the processor is further configured to adjust, based on the displacement compensation amount, the history area containing the target object in the image data to obtain a target area;
the processor is further configured to identify the target object within the target area in the image data.
In a third aspect, an embodiment of the present invention provides a computer storage medium storing executable instructions, the executable instructions being used to perform the above image shooting method.
In the embodiments of the present invention, the target object carried in the image data is identified after the image data is obtained but before the shot image is generated; the identified target object is processed, and the shot image, such as a photo or a frame image of a video, is then generated. In this way the covering of the target object is completed at the moment the image is generated, saving the user the time otherwise spent covering the target object in post-processing.
Accompanying drawing explanation
Fig. 1 is a first schematic flowchart of the implementation of the image shooting method in an embodiment of the present invention;
Fig. 2 is a second schematic flowchart of the implementation of the image shooting method in an embodiment of the present invention;
Fig. 3 is a third schematic flowchart of the implementation of the image shooting method in an embodiment of the present invention;
Fig. 4 is a fourth schematic flowchart of the implementation of the image shooting method in an embodiment of the present invention;
Fig. 5 is a fifth schematic flowchart of the implementation of the image shooting method in an embodiment of the present invention;
Fig. 6 is a sixth schematic flowchart of the implementation of the image shooting method in an embodiment of the present invention;
Fig. 7 is a first schematic diagram of the functional structure of the electronic device in an embodiment of the present invention;
Fig. 8 is a second schematic diagram of the functional structure of the electronic device in an embodiment of the present invention.
Embodiment
The present invention is described in further detail below with reference to the drawings and specific embodiments.
The image shooting method described in the embodiments of the present invention is applied to electronic devices such as smartphones, tablet computers, and notebook computers. Optionally, the electronic device is provided with a camera and obtains image data by shooting the environment with it. Optionally, the electronic device connects to a separate shooting device (for example over a short-range connection such as Bluetooth) to control the camera in that shooting device, and obtains image data by directing that camera to shoot the environment. The image data may be the data of a photo, or the data of one or more frame images of a video.
Referring to Fig. 1, in an embodiment of the present invention, the electronic device obtains image data (step 101), identifies a target object in the image data and obtains the contour of the identified target object (step 102), processes the target object identified in the image data based on the contour (step 103), and generates a shot image based on the processed image data (step 104).
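The four steps of Fig. 1 can be sketched end to end as follows (a minimal NumPy illustration, not the patented implementation: the detector, which simply matches a pixel value, and the mosaic helper are hypothetical stand-ins for real recognition and processing):

```python
import numpy as np

def find_target(frame, target_value):
    """Hypothetical detector: return a boolean mask of pixels
    belonging to the target object (here, pixels equal to a value)."""
    return frame == target_value

def pixelate(frame, mask, block=4):
    """Mosaic: replace masked pixels with the mean of their block."""
    out = frame.astype(float).copy()
    h, w = frame.shape
    for y in range(0, h, block):
        for x in range(0, w, block):
            sub = mask[y:y + block, x:x + block]
            if sub.any():
                out[y:y + block, x:x + block][sub] = \
                    frame[y:y + block, x:x + block].mean()
    return out.astype(frame.dtype)

def shoot(frame, target_value):
    mask = find_target(frame, target_value)   # step 102: identify target
    processed = pixelate(frame, mask)         # step 103: process by mask
    return processed                          # step 104: generate shot image

frame = np.zeros((8, 8), dtype=np.uint8)
frame[2:5, 2:5] = 200                         # the "target object"
shot = shoot(frame, 200)                      # target pixels no longer visible
```

In the resulting image no pixel retains the target's original value, while the background is untouched, mirroring the patent's "process before generating the shot image" order.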
Unlike the prior art, which processes the target object after the image has been generated, the embodiments of the present invention identify the target object carried in the image data after the image data is obtained but before the shot image is generated; the identified target object is processed, and the shot image, such as a photo or a frame image of a video, is then generated. The covering of the target object is thus completed at the moment the image is generated, saving the user the time otherwise spent covering the target object in post-processing.
Embodiment one
In a typical application scene of this embodiment, the user frames the environment with the camera of the electronic device in preparation for taking a photo (that is, the electronic device collects image data of the environment through the camera and presents it on the display interface so that the user can adjust the shooting angle, range, and so on). During framing the user finds an object in the environment that needs to be hidden (the target object, which the user does not wish to appear) and designates, in a specific manner (such as by touch), the specific acquisition region of the environment that contains the target object. After the user shoots (for example by pressing the shutter to trigger shooting), the electronic device identifies the target object in the image data of the photo by means of feature matching, processes the target object, and then generates the shot image (that is, the displayed image the user views), in which the target object is hidden.
To achieve the above effect, referring to the optional flowchart of the image shooting method shown in Fig. 2, the method comprises the following steps:
Step 201: obtain image data.
After responding to the user's framing operation, if the electronic device further receives an instruction to capture the environment, it controls the camera to collect image data of the environment, including the objects in it (such as people and things).
Step 202: identify the specific acquisition region designated by the user, and extract the features of the target object located in the image acquisition region.
While responding to the user's framing operation, the electronic device can determine the specific acquisition region designated by the user (of course, the specific acquisition region can also be designated in advance in the form of coordinates or an orientation relative to the captured environment); for example, while the electronic device responds to the framing operation and presents a live image of the environment in the graphical interface, it receives the specific acquisition region delimited by a user operation (such as drawing a closed curve).
Extracting the features located in the image acquisition region can be implemented with any existing image feature extraction algorithm. In particular, to save computing resources on the electronic device, the features of the target object can be reduced to imaging points at particular positions, such as points on the edge of the target object, or points that are inconsistent with the overall characteristics of the target object: a black spot on a white object, a protruding or recessed point, a rust spot on a metal object, a point where paint has peeled off the surface, and so on.
Step 203: perform feature matching between the extracted features and the image data, and identify the target object in the image data based on the matching result.
Steps 202 to 203 are the processing steps for identifying the target object in the image data.
Step 204: obtain the contour of the identified target object, and process the target object identified in the image data based on the contour.
The contour of the target object can be obtained with an existing edge detection algorithm. In practice, the image data corresponding to the contour area of the target object (including the image data inside the contour) can be mosaicked or blurred so that the target object becomes invisible; alternatively, that image data can be covered with a specific image, such as a randomly generated image (for example a monochrome image), or, when the user has set a specific image in advance, the user-set image is preferentially used for the covering.
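Two of the processing options named here, covering with a monochrome image and blurring, can be sketched as follows (an illustrative NumPy sketch assuming the contour area is already available as a boolean mask; the cover value 128 and the 3 x 3 kernel are arbitrary choices of this sketch):

```python
import numpy as np

def cover_with_image(frame, mask, cover_value=128):
    """Cover the contour area with a specific (monochrome) image."""
    out = frame.copy()
    out[mask] = cover_value
    return out

def box_blur(frame, mask, k=3):
    """Blur only the masked (contour) area with a k x k box filter."""
    pad = k // 2
    padded = np.pad(frame.astype(float), pad, mode="edge")
    out = frame.astype(float).copy()
    ys, xs = np.nonzero(mask)
    for y, x in zip(ys, xs):
        out[y, x] = padded[y:y + k, x:x + k].mean()
    return out.astype(frame.dtype)

frame = np.full((6, 6), 10, dtype=np.uint8)
frame[2:4, 2:4] = 250                       # target object pixels
mask = frame == 250
covered = cover_with_image(frame, mask)     # option: cover with a specific image
blurred = box_blur(frame, mask)             # option: blurring
```

Both helpers leave pixels outside the contour area untouched, which matches the requirement that only the local part containing the target object is modified.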
Step 205: generate a shot image based on the processed image data.
When taking a photo, the target object contained in the photo's image data is processed as in step 204, so that the image data corresponding to the contour area of the target object is modified; when the shot image is generated from the modified image data, the target object is invisible in the shot image.
When the user uses the camera to collect image data of the environment to form a video, the video consists of a series of frame images collected by the camera, so the target object contained in the image data of each frame of the video (that is, the data of the series of frame images) is processed consistently with the preceding steps. After the image data of each frame has been processed, the target object remains invisible whenever the electronic device plays the generated video, and the effect of hiding the target object is achieved without the user performing any later editing on the generated image data.
Embodiment two
In a typical application scene of this embodiment, the user frames the environment with the camera of the electronic device in preparation for taking a photo. During framing, the electronic device identifies, in the image data of the environment it obtains, the object the user predetermined should be hidden (the target object, which the user does not wish to appear), the target object in the environment having been designated in a specific manner (such as by touch). After the user shoots (for example by pressing the shutter to trigger shooting), the electronic device identifies the target object in the image data of the photo by means of feature matching, processes the target object, and then generates the shot image (that is, the displayed image the user views), in which the target object is hidden.
To achieve the above effect, referring to the optional flowchart of the image shooting method shown in Fig. 3, the method comprises the following steps:
Step 301: obtain image data.
Step 302: extract the features of the preset target object.
The user can set in advance, in the electronic device, the object that should not appear in images, that is, the target object. When setting the target object the user can set its features, such as color features, contour features, or features extracted from an image with an existing image feature extraction algorithm; the electronic device then extracts the features of the preset target object when it obtains the image data.
Of course, the electronic device can also obtain the features of the target object by performing feature extraction on an existing image of the target object uploaded to the device by the user. When the user has set multiple target objects, the electronic device extracts, according to the user's settings, the features of the target object that needs to be hidden in the image currently being generated.
Step 303: perform feature matching between the extracted features and the image data, and identify the target object in the image data based on the matching result.
Usually the image data contains multiple objects (including the target object). When the electronic device matches the extracted features of the target object against the image data, it obtains matching results against the features of the multiple objects; each matching result uses a quantized matching degree to characterize how well the features of the preset target object match the features of a given object in the image data. Because the position and size of the target object in the image vary, the extracted features of the target object will not match the features of the target object in the image data exactly, but their matching degree will still be greater than the matching degree between the extracted features and the features of any non-target object. Therefore, based on the matching results against the features of the multiple objects in the image data, the object with the highest matching degree to the extracted features is identified as the target object.
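The "highest matching degree wins" rule of step 303 can be sketched as follows (an illustrative sketch only: cosine similarity stands in for whatever quantized matching degree the device actually uses, and the feature vectors are made-up values):

```python
import numpy as np

def matching_degree(feature, candidate):
    """Quantized matching degree: cosine similarity between the preset
    target's feature vector and a candidate object's feature vector."""
    a = feature.ravel().astype(float)
    b = candidate.ravel().astype(float)
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

# Preset target feature and features of several objects in the image data.
target_feature = np.array([0.9, 0.1, 0.8])
candidates = {
    "object_a": np.array([0.1, 0.9, 0.1]),    # non-target
    "object_b": np.array([0.85, 0.15, 0.7]),  # the target, slightly shifted
    "object_c": np.array([0.5, 0.5, 0.5]),    # non-target
}

# The object with the highest matching degree is identified as the target:
# an exact match is not required, only the best score among all objects.
best = max(candidates, key=lambda k: matching_degree(target_feature, candidates[k]))
```

Note how `object_b` wins even though its vector differs from the preset feature, reflecting the paragraph's point that position and size changes prevent an exact match.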
Steps 302 to 303 are the processing steps for identifying the target object in the image data.
Step 304: obtain the contour of the identified target object, and process the target object identified in the image data based on the contour.
Step 305: generate a shot image based on the processed image data.
When taking a photo, the target object contained in the photo's image data is processed as in step 304, so that the image data corresponding to the contour area of the target object is modified; when the shot image is generated from the modified image data, the target object is invisible in the shot image.
When the user uses the camera to collect image data of the environment to form a video, the video consists of a series of frame images collected by the camera, so the target object contained in the image data of each frame of the video (that is, the data of the series of frame images) is processed consistently with the preceding steps. After the image data of each frame has been processed, the target object remains invisible whenever the electronic device plays the generated video, and the effect of hiding the target object is achieved without the user performing any later editing on the generated image data.
Embodiment three
In a typical application scene of this embodiment, the user frames the environment with the camera of the electronic device in preparation for taking a photo. During framing the user finds an object in the environment that needs processing (the target object, which the user does not wish to appear in the photo) and designates it in a specific manner (such as by touch); the electronic device identifies the depth of the target object in the environment. After the user shoots (for example by pressing the shutter to trigger shooting), the electronic device identifies the target object contained in the collected image data based on its depth, processes the target object, and then generates the shot image (that is, the output image the user views).
To achieve the above effect, referring to the optional flowchart of the image shooting method shown in Fig. 4, the method comprises the following steps:
Step 401: obtain image data.
Step 402: identify the specific acquisition region designated by the user, and extract the features of the target object located in the image acquisition region.
Step 403: identify the depth of the target object in the environment, and determine the depth interval in which the target object resides in the environment.
The electronic device identifies the depth information of the target object in the environment by means of a binocular camera or a depth camera.
Step 404: based on the extracted features of the target object, perform feature matching with the portion of the image data located within the depth interval, and identify the target object in the image data based on the matching result.
The objects in the image data often lie in different depth intervals. By identifying the depth interval in which the target object resides in the environment, feature matching is performed only on the image data within that depth interval, and not on the other depth intervals of the image data, which clearly saves processing time and computing resources on the electronic device.
When only one object lies within the determined depth interval in the image data, the features in that depth interval of the image data are matched against the extracted features of the target object, and the target object located in that depth interval can be determined based on the matching result.
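The saving described above can be illustrated with a small sketch (hypothetical NumPy stand-ins of our own: a per-pixel "feature" value replaces a real descriptor, and the depth map is given directly rather than computed from a binocular or depth camera):

```python
import numpy as np

def match_in_depth_interval(features_map, depth_map, interval, target_feature):
    """Restrict feature matching to pixels whose depth lies in the
    target's depth interval; pixels at other depths are skipped."""
    lo, hi = interval
    candidate = (depth_map >= lo) & (depth_map <= hi)
    # Hypothetical per-pixel "feature": a match is where the value agrees.
    hits = candidate & (features_map == target_feature)
    return hits, int(candidate.sum())

depth = np.array([[1.0, 1.1, 5.0],
                  [1.2, 5.1, 5.2],
                  [5.0, 5.1, 5.3]])
feats = np.array([[7, 7, 3],
                  [7, 3, 3],
                  [3, 3, 3]])

# Target known (e.g. via a depth camera) to lie around depth 1:
# only 3 of the 9 pixels are examined instead of the whole image.
hits, n_checked = match_in_depth_interval(feats, depth, (0.5, 2.0), 7)
```

The count of examined pixels drops from the full image to just the depth interval, which is the source of the time and resource saving claimed in the text.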
Steps 402 to 404 are the processing steps for identifying the target object in the image data.
Step 405: obtain the contour of the identified target object, and process the target object identified in the image data based on the contour.
As before, the processing comprises mosaic processing; blurring; or covering the layer of the target object with a specific image different from the target object.
Step 406: generate a shot image based on the processed image data.
When taking a photo, the target object contained in the photo's image data is processed as in step 405, so that the image data corresponding to the contour area of the target object is modified; when the shot image is generated from the modified image data, the target object is invisible in the shot image.
When the user uses the camera to collect image data of the environment to form a video, the video consists of a series of frame images collected by the camera, so the target object contained in the image data of each frame of the video (that is, the data of the series of frame images) is processed consistently with the preceding steps.
Embodiment four
In a typical application scene of this embodiment, the user frames the environment with the camera of the electronic device in preparation for taking a photo. During framing, the electronic device identifies, in the image data of the environment it obtains, the object the user set in advance as needing processing (the target object, which the user does not wish to appear in the photo); the target object in the environment having been designated in a specific manner (such as by touch), the electronic device identifies the depth of the target object in the environment. After the user shoots (for example by pressing the shutter to trigger shooting), the electronic device identifies the target object contained in the collected image data based on its depth, processes the target object, and then generates the shot image (that is, the output image the user views).
To achieve the above effect, referring to the optional flowchart of the image shooting method shown in Fig. 5, the method comprises the following steps:
Step 501: obtain image data.
Step 502: extract the features of the preset target object.
Step 503: identify the depth of the target object in the environment, and determine the depth interval in which the target object resides in the environment.
The electronic device identifies the object in the environment matching the features extracted in step 502 as the target object, and identifies the depth information of the target object in the environment by means of a binocular camera or a depth camera.
Step 504: based on the extracted features of the target object, perform feature matching with the portion of the image data located within the depth interval, and identify the target object in the image data based on the matching result.
The objects in the image data often lie in different depth intervals. By identifying the depth interval in which the target object resides in the environment, feature matching is performed only on the image data within that depth interval, and not on the other depth intervals of the image data, which clearly saves processing time and computing resources on the electronic device.
When only one object lies within the determined depth interval in the image data, the features in that depth interval of the image data are matched against the extracted features of the target object, and the target object located in that depth interval can be determined based on the matching result.
Steps 502 to 504 are the processing steps for identifying the target object in the image data.
Step 505: obtain the contour of the identified target object, and process the target object identified in the image data based on the contour.
As before, the processing comprises mosaic processing; blurring; or covering the layer of the target object with a specific image different from the target object.
Step 506: generate a shot image based on the processed image data.
When taking a photo, the target object contained in the photo's image data is processed as in step 505, so that the image data corresponding to the contour area of the target object is modified; when the shot image is generated from the modified image data, the target object is invisible in the shot image.
When the user uses the camera to collect image data of the environment to form a video, the video consists of a series of frame images collected by the camera, so the target object contained in the image data of each frame of the video (that is, the data of the series of frame images) is processed consistently with the preceding steps. After the image data of each frame has been processed, the target object remains invisible whenever the electronic device plays the generated video, and the effect of hiding the target object is achieved without the user performing any later editing on the generated image data.
Embodiment five
In a typical application scene of this embodiment, the user frames the environment with the camera of the electronic device in preparation for taking a photo. During framing, the electronic device identifies, in the image data of the environment it obtains, the object needing processing (the target object, which the user does not wish to appear in the photo, such as a target object located in the specific shooting area designated in advance by the user, or a target object matching features the user set in advance). After the electronic device identifies the target object in a region of one image, considering the continuity of the user's operation, it can first look for the target object in that history area in the image data of the next image, to speed up identification. Moreover, since the hand-held electronic device inevitably shakes, the history area is corrected by a compensation amount based on the displacement of the electronic device detected between obtaining the image data of the two images, which speeds up identification further. If the target object is not identified in the history area, feature matching continues in the other regions of the image data to identify the target object.
To achieve the above effect, referring to the optional flow diagram of the image shooting method shown in Fig. 6, the method comprises the following steps:
Step 601: acquire the first image data.
Step 602: acquire the features of the target object.
As before, in one implementation of step 602, the features of the preset target object are extracted; the depth of the target object in the environment is identified and the depth interval occupied by the target object in the environment is determined; feature matching is performed between the extracted features of the target object and the portion of the first image data lying within the depth interval; and the target object is identified in the first image data based on the matching result.
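A minimal illustrative sketch of the depth-restricted matching just described (not part of the disclosure; the "feature" is simplified here to a bare pixel value, and all names are hypothetical): only pixels whose depth falls inside the target's depth interval are compared against the target feature, which shrinks the search space.

```python
def match_in_depth_interval(image, depth_map, interval, target_feature):
    # Compare the target feature only against the part of the image
    # whose depth lies within the target's depth interval.
    lo, hi = interval
    matches = []
    for i, row in enumerate(image):
        for j, px in enumerate(row):
            if lo <= depth_map[i][j] <= hi and px == target_feature:
                matches.append((i, j))
    return matches
```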
As before, in another implementation of step 602, the acquired features of the target object can be features of the target object set in advance by the user.
Step 603: acquire the contour of the identified target object, process the identified target object in the first image data based on the contour, and generate shot image 1 based on the processed first image data.
As before, the processing comprises at least one of: mosaic processing; blur processing; covering a specific image different from the target object on the layer of the target object.
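As an illustrative sketch of one of the listed options (not part of the disclosure; a real device would use an optimized image library), mosaic processing can be modeled as replacing each k x k block inside the target's bounding box with its average value, destroying detail inside the contour:

```python
def mosaic(image, box, k=2):
    # box = (row, col, height, width) of the target object's bounding box.
    r0, c0, h, w = box
    out = [row[:] for row in image]
    for br in range(r0, r0 + h, k):
        for bc in range(c0, c0 + w, k):
            # Average every k x k block (clipped to the box) ...
            block = [out[i][j]
                     for i in range(br, min(br + k, r0 + h))
                     for j in range(bc, min(bc + k, c0 + w))]
            avg = sum(block) // len(block)
            # ... and write the average back over the whole block.
            for i in range(br, min(br + k, r0 + h)):
                for j in range(bc, min(bc + k, c0 + w)):
                    out[i][j] = avg
    return out
```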
Step 604: parse the sensing data to obtain the displacement characterizing the motion of the electronic device, and determine the displacement compensation amount of the target object in the image data based on the displacement.
Step 605: adjust the history region containing the target object in the second image data based on the displacement compensation amount to obtain the target region.
It is assumed here that the second image data is image data acquired after the first image data. For example, when the first image data is the image data of shot photo 1, the second image data is the image data of photo 2 shot after photo 1; when the first image data is the image data of frame image 1 of a shot video, the second image data is the image data of frame image 2 shot after frame image 1.
Step 606: identify the target object in the target region in the second image data.
When the target object is not identified in the target region in the second image data, as an alternative to step 606, the target object is identified in the other regions of the second image data based on the features of the target object.
Step 607: acquire the contour of the identified target object, process the identified target object in the second image data based on the contour, and generate shot image 2 based on the processed image data.
For third image data captured after the second image data, the identification and processing of the target object in the third image data are similar to the aforementioned steps 604 to 606 and are not repeated here. By compensating and correcting the target region across different image data based on the displacement of the electronic device, the speed of identifying the target object can be improved and the computing resources of the electronic device can be saved.
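Steps 604 to 606 above can be sketched as follows (illustrative only, not part of the disclosure; the scale factor mapping device motion to a pixel offset and all function names are hypothetical assumptions): the sensed displacement is converted into a pixel offset, the history region is shifted by that offset, and the cheaper search for the target starts in the shifted target region.

```python
def compensate_region(history_box, displacement, pixels_per_unit=10):
    # history_box = (row, col, height, width) of the region that held the
    # target in the previous image; displacement = (dx, dy) device motion
    # parsed from the sensing data (hypothetical units).
    dx, dy = displacement
    r, c, h, w = history_box
    # If the camera moves right/down, scene content shifts left/up in the
    # image, so the region is moved by the negated, scaled displacement.
    return (r - int(dy * pixels_per_unit),
            c - int(dx * pixels_per_unit), h, w)

def find_in_region(image, box, feature):
    # Search for the (simplified) target feature only inside the box.
    r, c, h, w = box
    return [(i, j) for i in range(r, r + h) for j in range(c, c + w)
            if 0 <= i < len(image) and 0 <= j < len(image[0])
            and image[i][j] == feature]
```

If `find_in_region` comes back empty, the search falls back to the rest of the image, matching the alternative to step 606.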
Embodiment six
Referring to Fig. 7, this embodiment records an electronic device, the electronic device comprising:
Camera 100, configured to acquire image data. After responding to a framing operation of the user, if the processor further receives an instruction to capture the environment, it controls camera 100 to capture image data of the environment containing the objects in the environment (such as people, things, etc.).
Processor 200, configured to identify the target object in the image data, acquire the contour of the identified target object, and process the identified target object in the image data based on the contour, for example by performing at least one of the following on the identified target object: mosaic processing; blur processing. Processor 200 is further configured to generate a shot image based on the processed image data.
In one implementation of the processing of the target object by processor 200, processor 200 identifies a specific capture region demarcated by the user, extracts the features of the object located in the capture region, performs feature matching between the extracted features of the object and the image data, and identifies the target object in the image data based on the matching result. While responding to the user's framing operation, processor 200 can determine the specific capture region demarcated by the user (of course, the specific capture region can also be demarcated in advance in the form of coordinates or an orientation relative to the captured environment); for example, while processor 200 responds to the framing operation and presents a real-time image of the environment in a graphical interface, it receives the specific capture region delimited by a user operation (such as drawing a closed curve). Extracting the features of the object located in the capture region can be implemented with any existing image feature extraction algorithm. In particular, to save the computing resources of the electronic device, the feature of the target object can be simplified to an imaging point of any part of it, such as a point on the edge of the target object, or a point on the target object whose characteristics are inconsistent with the rest of the target object, such as a black spot on a white target object, a protruding point of the target object, a recessed point on the target object, a rust spot on a metal target object, a point where paint peels off the surface of the target object, and so on.
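A toy sketch of the simplified feature just described (illustrative only, not part of the disclosure; the threshold and names are hypothetical): instead of a full descriptor, the target is reduced to a few distinctive imaging points, here pixels that differ sharply from their right-hand neighbour, i.e. edge-like points.

```python
def edge_points(image, threshold=100):
    # Collect pixels whose value jumps sharply relative to the next pixel
    # in the row; such points approximate the "imaging point" features
    # (edge points, dark spots on a light object, etc.) described above.
    points = []
    for i, row in enumerate(image):
        for j in range(len(row) - 1):
            if abs(row[j] - row[j + 1]) >= threshold:
                points.append((i, j))
    return points
```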
In another implementation of the processing of the target object by processor 200, processor 200 extracts the features of the preset target object, performs feature matching between the extracted features and the image data, and identifies the target object in the image data based on the matching result. The user can set in advance which objects need not be shown in the image, i.e. the target objects; when setting a target object, the user can set its features, such as color features, contour features, or image features extracted by an existing image feature extraction algorithm, and processor 200 then extracts the features of the preset target object when the image data is acquired. Of course, processor 200 can also obtain the features of the target object by performing feature extraction on an existing image of the target object that the user has uploaded to the memory of the electronic device. When the user sets multiple target objects, processor 200 extracts, for the image currently being shot and generated, the features of those target objects among them that the user has designated to be hidden.
To improve the speed of identifying the target object in the image data, processor 200 identifies the depth of the object in the environment, determines the depth interval occupied by the object in the environment, and performs feature matching between the extracted features of the object and the portion of the image data lying within the depth interval.
To improve identification across continuously acquired image data (described here using first image data and second image data), referring to Fig. 8, the electronic device is further provided with sensor 300, configured to output sensing data characterizing the displacement of the motion of the electronic device. After identifying the target object in the first image data in the above manner, processor 200 determines the region occupied by the target object in the first image data; parses the sensing data to obtain the displacement characterizing the motion of the electronic device and determines the displacement compensation amount of the target object in the image data based on the displacement; adjusts the region containing the target object in the first image data based on the displacement compensation amount to obtain the target region; and identifies the target object in the target region of the second image data. If the target object is not identified there, it identifies the target object in the other regions of the second image data (the regions outside the target region) based on the features of the target object.
Those of ordinary skill in the art will appreciate that all or part of the steps of the foregoing method embodiments can be completed by hardware under the control of program instructions; the foregoing program can be stored in a computer-readable storage medium and, when executed, performs the steps of the foregoing method embodiments. The foregoing storage medium includes: a removable storage device, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), a magnetic disk, an optical disc, or any other medium capable of storing program code.
Alternatively, if the above integrated units of the present invention are implemented in the form of software function modules and sold or used as independent products, they can also be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the embodiments of the present invention, in essence or in the part contributing to the prior art, can be embodied in the form of a software product; this computer software product is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the method described in each embodiment of the present invention. The foregoing storage medium includes: a removable storage device, ROM, RAM, a magnetic disk, an optical disc, or any other medium capable of storing program code.
The above is only a specific embodiment of the present invention, but the protection scope of the present invention is not limited thereto. Any change or replacement readily conceivable by a person skilled in the art within the technical scope disclosed by the present invention shall be encompassed within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be determined by the protection scope of the claims.

Claims (10)

1. An image shooting method, characterized in that the method comprises:
acquiring image data;
identifying a target object in the image data, and acquiring a contour of the identified target object;
processing the identified target object in the image data based on the contour;
generating a shot image based on the processed image data.
2. The image shooting method according to claim 1, characterized in that the identifying a target object in the image data comprises:
identifying a specific capture region demarcated by a user and extracting features of the target object located in the capture region, or extracting features of a preset target object;
performing feature matching based on the extracted features and the image data, and identifying the target object in the image data based on a matching result.
3. The image shooting method according to claim 2, characterized in that the performing feature matching based on the extracted features and the image data comprises:
identifying a depth of the target object in the environment, and determining a depth interval occupied by the target object in the environment;
performing feature matching between the extracted features of the target object and a portion of the image data lying within the depth interval.
4. The image shooting method according to claim 1, characterized in that the processing the identified target object in the image data based on the contour comprises:
performing at least one of the following on the target object identified in each set of image data:
mosaic processing;
blur processing;
covering a specific image different from the target object on a layer of the target object.
5. The image shooting method according to claim 1, characterized in that the identifying a target object in the image data comprises:
parsing sensing data to obtain a displacement characterizing motion of an electronic device, and determining a displacement compensation amount of the target object in the image data based on the displacement;
adjusting a history region containing the target object in the image data based on the displacement compensation amount to obtain a target region;
identifying the target object in the target region in the image data.
6. An electronic device, characterized in that the electronic device comprises:
a camera, configured to acquire image data;
a processor, configured to identify a target object in the image data, and acquire a contour of the identified target object;
the processor, further configured to process the identified target object in the image data based on the contour;
the processor, further configured to generate a shot image based on the processed image data.
7. The electronic device according to claim 6, characterized in that:
the processor is further configured to identify a specific capture region demarcated by a user and extract features of the target object located in the capture region, or extract features of a preset target object;
the processor is further configured to perform feature matching based on the extracted features and the image data, and identify the target object in the image data based on a matching result.
8. The electronic device according to claim 7, characterized in that:
the processor is further configured to identify a depth of the target object in the environment, and determine a depth interval occupied by the target object in the environment;
the processor is further configured to perform feature matching between the extracted features of the target object and a portion of the image data lying within the depth interval.
9. The electronic device according to claim 8, characterized in that:
the processor is further configured to perform at least one of the following on the target object identified in each set of image data: mosaic processing; blur processing; covering a specific image different from the target object on a layer of the target object.
10. The electronic device according to claim 6, characterized in that the electronic device further comprises:
a sensor, configured to output sensing data characterizing a displacement of motion of the electronic device;
the processor, further configured to parse the sensing data to obtain the displacement characterizing the motion of the electronic device, and determine a displacement compensation amount of the target object in the image data based on the displacement;
the processor, further configured to adjust a history region containing the target object in the image data based on the displacement compensation amount to obtain a target region;
the processor, further configured to identify the target object in the target region in the image data.
CN201610121500.XA 2016-03-03 2016-03-03 Image shooting method and electronic equipment Active CN105681627B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610121500.XA CN105681627B (en) 2016-03-03 2016-03-03 Image shooting method and electronic equipment

Publications (2)

Publication Number Publication Date
CN105681627A true CN105681627A (en) 2016-06-15
CN105681627B CN105681627B (en) 2019-12-24

Family

ID=56307810

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610121500.XA Active CN105681627B (en) 2016-03-03 2016-03-03 Image shooting method and electronic equipment

Country Status (1)

Country Link
CN (1) CN105681627B (en)

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105933612A (en) * 2016-06-29 2016-09-07 联想(北京)有限公司 Image processing method and electronic equipment
CN106331486A (en) * 2016-08-25 2017-01-11 珠海市魅族科技有限公司 Image processing method and system
CN107426497A (en) * 2017-06-15 2017-12-01 深圳天珑无线科技有限公司 The method, apparatus and computer-readable recording medium of a kind of recording image
CN107493433A (en) * 2017-09-08 2017-12-19 盯盯拍(深圳)技术股份有限公司 Image pickup method and filming apparatus
CN107707824A (en) * 2017-10-27 2018-02-16 广东欧珀移动通信有限公司 Image pickup method, device, storage medium and electronic equipment
CN108897899A (en) * 2018-08-23 2018-11-27 深圳码隆科技有限公司 The localization method and its device of the target area of a kind of pair of video flowing
CN108900895A (en) * 2018-08-23 2018-11-27 深圳码隆科技有限公司 The screen method and its device of the target area of a kind of pair of video flowing
CN111177009A (en) * 2019-12-31 2020-05-19 五八有限公司 Script generation method and device, electronic equipment and storage medium
CN111191083A (en) * 2019-09-23 2020-05-22 牧今科技 Method and computing system for object identification
CN112188058A (en) * 2020-09-29 2021-01-05 努比亚技术有限公司 Video shooting method, mobile terminal and computer storage medium
WO2021164162A1 (en) * 2020-02-17 2021-08-26 深圳传音控股股份有限公司 Image photographing method and apparatus, and device
CN113766130A (en) * 2021-09-13 2021-12-07 维沃移动通信有限公司 Video shooting method, electronic equipment and device
US11763459B2 (en) 2019-09-23 2023-09-19 Mujin, Inc. Method and computing system for object identification

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001086407A (en) * 1999-09-17 2001-03-30 Matsushita Electric Ind Co Ltd Image pickup device with mosaic function and mosaic processor
CN1767638A (en) * 2005-11-30 2006-05-03 北京中星微电子有限公司 Visible image monitoring method for protecting privacy right and its system
CN101276409A (en) * 2007-03-27 2008-10-01 三洋电机株式会社 Image processing apparatus
US20090207269A1 (en) * 2008-02-15 2009-08-20 Sony Corporation Image processing device, camera device, communication system, image processing method, and program
CN102111491A (en) * 2009-12-29 2011-06-29 比亚迪股份有限公司 Mobile equipment with picture-taking function and face recognition processing method thereof
CN102932541A (en) * 2012-10-25 2013-02-13 广东欧珀移动通信有限公司 Mobile phone photographing method and system
CN103297699A (en) * 2013-05-31 2013-09-11 北京小米科技有限责任公司 Method and terminal for shooting images
CN104168422A (en) * 2014-08-08 2014-11-26 小米科技有限责任公司 Image processing method and device
CN104796594A (en) * 2014-01-16 2015-07-22 中兴通讯股份有限公司 Preview interface special effect real-time presenting method and terminal equipment
CN105100615A (en) * 2015-07-24 2015-11-25 青岛海信移动通信技术股份有限公司 Image preview method, apparatus and terminal
CN105225230A (en) * 2015-09-11 2016-01-06 浙江宇视科技有限公司 A kind of method and device identifying foreground target object


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant