CN107483821A - Image processing method and mobile terminal - Google Patents

Image processing method and mobile terminal

Info

Publication number
CN107483821A
Authority
CN
China
Prior art keywords
preview image
depth
profile
target subject
point set
Legal status
Granted
Application number
CN201710744927.XA
Other languages
Chinese (zh)
Other versions
CN107483821B
Inventor
王仕琛
Current Assignee
Aiku software technology (Shanghai) Co.,Ltd.
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date / Filing date / Publication date
Application filed by Vivo Mobile Communication Co Ltd
Priority to CN201710744927.XA
Publication of CN107483821A
Application granted; publication of CN107483821B
Legal status: Active


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60: Control of cameras or camera modules
    • H04N 23/62: Control of parameters via user interfaces

Abstract

Embodiments of the present invention provide an image processing method and a mobile terminal. The image processing method includes: obtaining, through dual cameras, a first preview image carrying depth-of-field information, and displaying the first preview image on the display screen of the mobile terminal; obtaining, according to a user instruction triggered by the user touching the first preview image, first depth-of-field information corresponding to the touch point; determining, based on the first depth-of-field information, the contour of the target subject in the first preview image; tracking, according to the contour of the target subject, the target subject in a second preview image obtained through the dual cameras, so as to determine the contour of the target subject in the second preview image; and adjusting parameters of the background region of the second preview image, where the background region is the entire area outside the contour of the target subject. The target subject can thus be located accurately in every frame of a dynamic image, the accuracy of subject contour extraction is improved, and the user's satisfaction with the processed image is effectively improved.

Description

Image processing method and mobile terminal
Technical field
Embodiments of the present invention relate to the field of communications, and in particular to an image processing method and a mobile terminal.
Background
At present, with the development of mobile terminals, dual cameras are also becoming common. A mobile terminal equipped with dual cameras can obtain depth-of-field information for an image, extract the contour of a specified object according to that depth-of-field information, and apply processing such as blurring to the background region outside the contour.
However, image processing methods in the prior art typically handle only still images and cannot be applied to dynamic images, so background blurring of a dynamic image can only be achieved through post-processing. During post-processing the subject must be tracked, i.e. the subject must be located in every frame. Prior-art tracking methods usually track a region of fixed size: after the user selects the subject, the mobile terminal or image processing device delineates a tracking region for the user and then tracks the subject in each frame according to the size of that region, after which the background outside the tracking region is processed. Clearly, in this approach the unprocessed tracking region still contains part of the background that should have been processed.
Therefore, the image processing schemes of the prior art suffer from a single, inflexible processing method that cannot be applied to dynamic images, and from inaccurate subject localization when tracking a subject in a dynamic image, which leads to unsatisfactory processing results.
Summary of the invention
Embodiments of the present invention provide an image processing method to solve the problems in prior-art image processing schemes that the processing method is single, cannot be applied to dynamic images, and that the subject is located inaccurately when tracking a subject in a dynamic image, resulting in unsatisfactory processing results.
In a first aspect, an image processing method is provided, applied to a mobile terminal with dual cameras. The method includes:
obtaining, through the dual cameras, a first preview image carrying depth-of-field information, and displaying the first preview image on the display screen of the mobile terminal;
obtaining, according to a user instruction triggered by the user touching the first preview image, first depth-of-field information corresponding to the touch point;
determining, based on the first depth-of-field information, the contour of a target subject in the first preview image;
tracking, according to the contour of the target subject, the target subject in a second preview image obtained through the dual cameras, so as to determine the contour of the target subject in the second preview image; and
adjusting parameters of the background region of the second preview image, where the background region is the entire area outside the contour of the target subject.
In another aspect, an embodiment of the present invention further provides a mobile terminal having dual cameras, including:
a first obtaining module, configured to obtain, through the dual cameras, a first preview image carrying depth-of-field information, and display the first preview image on the display screen of the mobile terminal;
a second obtaining module, configured to obtain, according to a user instruction triggered by the user touching the first preview image, first depth-of-field information corresponding to the touch point;
a determining module, configured to determine, based on the first depth-of-field information, the contour of a target subject in the first preview image;
a tracking module, configured to track, according to the contour of the target subject, the target subject in a second preview image obtained through the dual cameras, so as to determine the contour of the target subject in the second preview image; and
an adjusting module, configured to adjust parameters of the background region of the second preview image, where the background region is the entire area outside the contour of the target subject.
Thus, in the embodiments of the present invention, a first preview image carrying depth-of-field information is obtained through the dual cameras and displayed on the display screen of the mobile terminal; first depth-of-field information corresponding to the touch point is obtained according to a user instruction triggered by the user touching the first preview image; the contour of the target subject in the first preview image is determined based on the first depth-of-field information; the target subject in a second preview image obtained through the dual cameras is tracked according to the contour of the target subject, to determine the contour of the target subject in the second preview image; and the parameters of the background region of the second preview image, i.e. the entire area outside the contour of the target subject, are adjusted. The target subject can therefore be located accurately in every frame of a dynamic image, the accuracy of subject contour extraction is improved, and the user's satisfaction with the processed image is effectively improved.
Brief description of the drawings
In order to describe the technical solutions of the embodiments of the present invention more clearly, the accompanying drawings required for describing the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention, and a person of ordinary skill in the art may derive other drawings from them without creative effort.
Fig. 1 is a first flowchart of an image processing method according to an embodiment of the present invention;
Fig. 2 is a second flowchart of an image processing method according to an embodiment of the present invention;
Fig. 3 is a first block diagram of a mobile terminal according to an embodiment of the present invention;
Fig. 4 is a second block diagram of the mobile terminal according to an embodiment of the present invention;
Fig. 5 is a third block diagram of the mobile terminal according to an embodiment of the present invention;
Fig. 6 is a fourth block diagram of the mobile terminal according to an embodiment of the present invention;
Fig. 7 is a fifth block diagram of the mobile terminal according to an embodiment of the present invention;
Fig. 8 is a sixth block diagram of the mobile terminal according to an embodiment of the present invention;
Fig. 9 is a seventh block diagram of the mobile terminal according to an embodiment of the present invention;
Fig. 10 is a schematic structural diagram of a mobile terminal according to an embodiment of the present invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some rather than all of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
Embodiment one
Referring to Fig. 1, a flowchart of an image processing method according to an embodiment of the present invention is shown. The method specifically includes the following steps.
Step 101: obtain, through the dual cameras, a first preview image carrying depth-of-field information, and display the first preview image on the display screen of the mobile terminal.
Specifically, the technical solution in this embodiment of the present invention is applied to a mobile terminal with dual cameras. The dual cameras are arranged on the back of the mobile terminal, and their position can be configured according to actual requirements; the present invention does not limit this.
In this embodiment of the present invention, the mobile terminal can photograph the target subject through the dual cameras. During shooting, the first preview image captured by the main camera is displayed on the display screen of the mobile terminal. By calibrating the dual cameras, the mobile terminal can obtain the depth-of-field information corresponding to each pixel in the first preview image. The specific calibration method can be implemented with technical solutions from existing embodiments and is not repeated here.
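The patent leaves the depth-estimation algorithm open; as one common approach, a calibrated and rectified stereo pair from the two rear cameras can be converted into a per-pixel depth map with block matching. The following is a minimal sketch assuming OpenCV and already-rectified grayscale images; the focal-length and baseline values are placeholders standing in for the calibration data the patent refers to.

```python
import cv2
import numpy as np

def depth_map_from_stereo(left_gray, right_gray, focal_px=1000.0, baseline_m=0.012):
    """Estimate per-pixel depth (metres) from a rectified stereo pair.

    focal_px and baseline_m are hypothetical calibration values; real values
    come from the dual-camera calibration step described in the patent.
    """
    matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=9)
    disparity = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0
    disparity[disparity <= 0] = 0.1            # avoid division by zero in textureless areas
    depth = focal_px * baseline_m / disparity  # classic stereo relation Z = f * B / d
    return depth
```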
In this embodiment of the present invention, the mobile terminal displays the obtained first preview image on its display screen, where it is available for the user to preview and operate on.
Step 102: obtain, according to a user instruction triggered by the user touching the first preview image, first depth-of-field information corresponding to the touch point.
Specifically, in this embodiment of the present invention, the user can operate on the first preview image through the display interface provided by the mobile terminal. The operation on the first preview image includes a touch operation, i.e. the user can tap the target subject. The mobile terminal detects the touch operation that the user applies to the touch screen, thereby triggering the user instruction. Meanwhile, from the touch point it detects, the mobile terminal obtains the first depth-of-field information corresponding to that touch point.
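A minimal sketch of this lookup, assuming the depth map is aligned pixel-for-pixel with the preview image and that touch coordinates arrive in screen space (the coordinate transform and the names used here are illustrative, not from the patent):

```python
def depth_at_touch(depth_map, touch_xy, preview_size, screen_size):
    """Map a screen-space touch point to the preview image and read its depth."""
    sx, sy = touch_xy
    scale_x = preview_size[0] / screen_size[0]
    scale_y = preview_size[1] / screen_size[1]
    px = min(int(sx * scale_x), preview_size[0] - 1)
    py = min(int(sy * scale_y), preview_size[1] - 1)
    return depth_map[py, px]   # first depth-of-field information for the touch point
```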
Step 103: determine, based on the first depth-of-field information, the contour of the target subject in the first preview image.
Specifically, in this embodiment of the present invention, the mobile terminal can determine the contour of the target subject in the first preview image according to the obtained first depth-of-field information. The specific determination method is described in detail in the following embodiments.
Step 104: track, according to the contour of the target subject, the target subject in a second preview image obtained through the dual cameras, so as to determine the contour of the target subject in the second preview image.
Specifically, in this embodiment of the present invention, after determining the contour of the target subject in the first preview image, the mobile terminal can use that contour as a reference to track the target subject in the second preview image obtained through the dual cameras during shooting, thereby determining the contour of the target subject in the second preview image.
Step 105: adjust parameters of the background region of the second preview image, where the background region is the entire area outside the contour of the target subject.
Specifically, in this embodiment of the present invention, after determining the contour of the target subject in the second preview image, the mobile terminal can adjust the parameters of the background region outside that contour. In one embodiment the parameter adjustment is an adjustment of the saturation of the background region; in another embodiment it is an adjustment of the brightness of the background region. The present invention does not limit this.
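As an illustration of such an adjustment (the patent names saturation and brightness; blurring is also mentioned in the background section), a binary subject mask derived from the contour can gate the processed background. A rough sketch assuming an 8-bit BGR frame and a mask that is 255 inside the subject and 0 outside:

```python
import cv2
import numpy as np

def adjust_background(frame_bgr, subject_mask, saturation_scale=0.4):
    """Reduce saturation everywhere outside the subject contour."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    hsv[..., 1] *= saturation_scale                       # desaturate the whole frame
    desat = cv2.cvtColor(np.clip(hsv, 0, 255).astype(np.uint8), cv2.COLOR_HSV2BGR)
    mask3 = cv2.merge([subject_mask] * 3) // 255          # 1 inside the subject, 0 outside
    return frame_bgr * mask3 + desat * (1 - mask3)        # keep subject, alter background only
```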
In summary, in the technical solution of this embodiment of the present invention, a first preview image carrying depth-of-field information is obtained through the dual cameras; first depth-of-field information corresponding to the touch point is obtained according to a user instruction triggered by the user touching the first preview image displayed on the display screen of the mobile terminal; the contour of the target subject in the first preview image is determined based on the first depth-of-field information; the target subject in a second preview image obtained through the dual cameras is tracked according to the contour of the target subject, to determine the contour of the target subject in the second preview image; and the parameters of the background region of the second preview image, i.e. the entire area outside the contour of the target subject, are adjusted. The target subject in every frame of a dynamic image can therefore be located accurately, the accuracy of subject contour extraction is improved, and the user's satisfaction with the processed image is effectively improved.
Embodiment two
Referring to Fig. 2, a flowchart of an image processing method according to an embodiment of the present invention is shown. The method specifically includes the following steps.
Step 201: obtain, through the dual cameras, a first preview image carrying depth-of-field information, and display the first preview image on the display screen of the mobile terminal.
Specifically, the technical solution in this embodiment of the present invention is applied to a mobile terminal with dual cameras. The dual cameras are arranged on the back of the mobile terminal, and their position can be configured according to actual requirements; the present invention does not limit this.
In this embodiment of the present invention, the mobile terminal can photograph the target subject through the dual cameras. During shooting, the first preview image captured by the main camera is displayed on the display screen of the mobile terminal. By calibrating the dual cameras, the mobile terminal can obtain the depth-of-field information corresponding to each pixel in the first preview image. The specific calibration method can be implemented with technical solutions from existing embodiments and is not repeated here.
In this embodiment of the present invention, the mobile terminal displays the obtained first preview image on its display screen, where it is available for the user to preview and operate on.
Step 202: obtain, according to a user instruction triggered by the user touching the first preview image displayed on the display screen of the mobile terminal, first depth-of-field information corresponding to the touch point.
Specifically, in this embodiment of the present invention, step 202 includes the following sub-steps.
Sub-step 2021: obtain the depth-of-field information corresponding to every pixel in the first preview image.
Specifically, in this embodiment of the present invention, the mobile terminal can calculate the depth-of-field information corresponding to each pixel. The specific depth-of-field calculation method and pixel selection method can be implemented with technical solutions from existing embodiments and are not repeated here.
Sub-step 2022: detect, according to the user instruction, the first pixel corresponding to the touch point on the first preview image.
Specifically, in this embodiment of the present invention, the user can operate on the first preview image through the display interface provided by the mobile terminal. The operation on the first preview image includes a touch operation, i.e. the user can tap the target subject. The mobile terminal detects the touch operation that the user applies to the touch screen, determines the touch point, and detects the first pixel corresponding to the touch point.
Sub-step 2023: obtain the first depth-of-field information of the first pixel.
Specifically, in this embodiment of the present invention, the mobile terminal obtains the first depth-of-field information corresponding to the first pixel from the already-calculated depth-of-field information of all pixels.
Step 203: determine a first depth-of-field range based on the first depth-of-field information.
Specifically, in this embodiment of the present invention, the mobile terminal can determine the first depth-of-field range according to the obtained first depth-of-field information. The user can configure the range according to actual demand. For example, the user may set the depth-of-field range to fluctuate by 0.5 above and below the first depth-of-field information; that is, if the first depth-of-field information is 1, the first depth-of-field range is 0.5 to 1.5.
Step 204: obtain a first pixel set corresponding to the multiple pieces of depth-of-field information in the first preview image that fall within the first depth-of-field range.
Specifically, in this embodiment of the present invention, the mobile terminal searches the depth-of-field information corresponding to all pixels to obtain the pixels whose depth-of-field information falls within the first depth-of-field range, and forms the first pixel set from them.
Step 205: obtain the contour of the target subject based on the multiple pixels in the first pixel set.
Specifically, in this embodiment of the present invention, the mobile terminal can determine the contour of the target subject according to the multiple pixels in the first pixel set. This specifically includes the following sub-steps.
Sub-step 2051: perform connectivity processing on the multiple pixels in the first pixel set.
In this embodiment of the present invention, the mobile terminal can perform connectivity processing on the multiple pixels in the first pixel set. The specific implementation of the connectivity processing can refer to technical solutions in existing embodiments and is not repeated here.
Sub-step 2052: determine a connected region according to the connectivity result.
In this embodiment of the present invention, the mobile terminal can determine the connected region according to the result of the connectivity processing.
Sub-step 2053: detect whether there are, among the multiple pixels in the first pixel set, out-of-region pixels that do not belong to the connected region.
In this embodiment of the present invention, the first preview image may contain pixels that fall within the first depth-of-field range but do not belong to the target subject, i.e. lie outside the contour of the target subject; therefore, after the connected region is determined, such pixels are excluded from it. The mobile terminal can detect these out-of-region pixels that do not belong to the connected region.
Sub-step 2054: if so, build the contour of the target subject based on the pixels in the first pixel set other than the out-of-region pixels.
In this embodiment of the present invention, if the mobile terminal detects that such out-of-region pixels exist, it excludes them when determining the contour of the target subject; that is, it connects the pixels of the first pixel set that lie within the connected region to build the contour of the target subject, as sketched below.
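The following is a minimal sketch of steps 203 to 205, assuming a per-pixel depth map aligned with the preview image and using OpenCV's connected-component labelling as one possible implementation of the connectivity processing; the half-range of 0.5 matches the example above but is otherwise a placeholder.

```python
import cv2
import numpy as np

def subject_mask_from_depth(depth_map, touch_depth, touch_xy, half_range=0.5):
    """Threshold the depth map around the touch-point depth, then keep only
    the connected region that contains the touch point."""
    low, high = touch_depth - half_range, touch_depth + half_range          # step 203
    in_range = ((depth_map >= low) & (depth_map <= high)).astype(np.uint8)  # step 204
    num_labels, labels = cv2.connectedComponents(in_range)                  # sub-steps 2051-2052
    subject_label = labels[touch_xy[1], touch_xy[0]]
    if subject_label == 0:                        # touch point labelled as background
        return np.zeros_like(in_range), []
    mask = (labels == subject_label).astype(np.uint8) * 255                 # drop out-of-region pixels
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return mask, contours                         # contour of the target subject (sub-step 2054)
```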
Step 206: track, according to the contour of the target subject, the target subject in a second preview image obtained through the dual cameras, so as to determine the contour of the target subject in the second preview image.
Specifically, in this embodiment of the present invention, step 206 may include the following sub-steps.
Sub-step 2061: extract one or more feature points within the contour of the target subject from the first preview image, to form a first feature point set.
Specifically, in this embodiment of the present invention, the mobile terminal can extract one or more feature points from the contour of the target subject determined in the first preview image, to form the first feature point set. In one embodiment, the feature extraction method is corner detection, i.e. the feature information at each corner of the contour of the target subject is extracted. In other embodiments, other feasible feature extraction methods can also be adopted; the present invention does not limit this.
In this embodiment of the present invention, the first feature point set can be used to uniquely identify the target subject.
Sub-step 2062: extract all feature points in the second preview image, to form a second feature point set.
Specifically, in this embodiment of the present invention, the mobile terminal can extract all feature points in the second preview image to form the second feature point set. In this embodiment of the present invention, the second preview image can be any image captured during preview or shooting.
Sub-step 2063: match the second feature point set against the first feature point set, and extract the successfully matched feature points to form a third feature point set.
Specifically, in this embodiment of the present invention, the mobile terminal can match the second feature point set against the first feature point set and extract the successfully matched feature points to form the third feature point set, as sketched below.
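The patent names corner detection but leaves the descriptor and matcher open; as one common choice, ORB features with a brute-force Hamming matcher can implement sub-steps 2061 to 2063. A rough sketch, with the subject mask restricting detection to the contour region of the first preview image:

```python
import cv2

orb = cv2.ORB_create(nfeatures=500)
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def track_feature_points(first_gray, subject_mask, second_gray):
    """Detect features inside the subject contour of the first preview image,
    detect features in the second preview image, and match the two sets."""
    kp1, des1 = orb.detectAndCompute(first_gray, subject_mask)   # first feature point set
    kp2, des2 = orb.detectAndCompute(second_gray, None)          # second feature point set
    if des1 is None or des2 is None:
        return []
    matches = bf.match(des1, des2)
    # the matched keypoints in the second image form the third feature point set
    return [kp2[m.trainIdx] for m in matches]
```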
Sub-step 2064: select a reference feature point from the third feature point set.
Specifically, in this embodiment of the present invention, the mobile terminal selects a reference feature point from the third feature point set. One specific selection method is: perform a variance calculation on the depth-of-field information corresponding to each feature point in the third feature point set, and take the feature point corresponding to the depth-of-field information with the smallest variance as the reference feature point. In other embodiments, other methods can also be used to choose the reference point, for example computing the geometric center of the feature points and taking the feature point closest to that geometric center as the reference point. The present invention does not limit this. A sketch of this selection is given below.
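One reading of the "smallest variance" rule is to pick the matched feature point whose depth deviates least from the mean depth of all matched points; the sketch below implements that interpretation, which is an assumption since the patent's phrasing is terse, together with the geometric-center alternative.

```python
import numpy as np

def select_reference_point(points_xy, depths, use_geometric_center=False):
    """Choose a reference feature point from the third feature point set."""
    points_xy = np.asarray(points_xy, dtype=np.float32)
    depths = np.asarray(depths, dtype=np.float32)
    if use_geometric_center:
        center = points_xy.mean(axis=0)
        idx = int(np.argmin(np.linalg.norm(points_xy - center, axis=1)))
    else:
        idx = int(np.argmin(np.abs(depths - depths.mean())))  # depth closest to the mean
    return idx, points_xy[idx]
```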
Sub-step 2065: obtain the depth-of-field information of all pixels in the second preview image.
Specifically, in this embodiment of the present invention, the mobile terminal obtains the depth-of-field information corresponding to all pixels in the second preview image.
Sub-step 2066: obtain the second depth-of-field information of the pixel corresponding to the reference feature point in the second preview image.
Specifically, the mobile terminal determines the pixel corresponding to the reference feature point and obtains the second depth-of-field information corresponding to that pixel.
The mobile terminal can then determine the contour of the target subject in the second preview image based on the second depth-of-field information. Specifically:
Sub-step 2067: determine a second depth-of-field range based on the second depth-of-field information.
This step is similar to step 203 and is not repeated here.
Sub-step 2068: obtain a second pixel set corresponding to the multiple pieces of depth-of-field information in the second preview image that fall within the second depth-of-field range.
This step is similar to step 204 and is not repeated here.
Sub-step 2069: perform connectivity processing on the multiple pixels in the second pixel set.
Sub-step 2070: determine a connected region according to the connectivity result.
Sub-step 2071: detect whether there are, among the multiple pixels in the second pixel set, out-of-region pixels that do not belong to the connected region.
Sub-step 2072: if so, build the contour of the target subject based on the pixels in the second pixel set other than the out-of-region pixels.
The details of sub-steps 2067 to 2072 are similar to those of sub-steps 2051 to 2054 and are not repeated here.
Step 207: adjust parameters of the background region of the second preview image, where the background region is the entire area outside the contour of the target subject.
Specifically, in this embodiment of the present invention, after determining the contour of the target subject in the second preview image, the mobile terminal can adjust the parameters of the background region outside that contour. In one embodiment the parameter adjustment is an adjustment of the saturation of the background region; in another embodiment it is an adjustment of the brightness of the background region. The present invention does not limit this.
In addition, the preview image in the embodiments of the present invention can refer to the preview image captured when the mobile terminal is in the preview state, or to the preview image captured and shown on the display screen when the mobile terminal is in the video-recording state.
In one embodiment, if the mobile terminal is in the video-recording state, the mobile terminal also needs to process the corresponding frames of the video being recorded on the mobile terminal accordingly. This specifically includes:
The mobile terminal detects whether it is currently in recording mode; if so, it applies to the corresponding frame of the video the same processing that is applied to the preview image in the above embodiments. That is, the processing result of the background region in the preview image is mapped onto the corresponding frame of the video, as the per-frame sketch below illustrates.
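Putting the pieces together, a per-frame recording loop might look as follows. It reuses the hypothetical helpers sketched earlier (subject_mask_from_depth, track_feature_points, select_reference_point, adjust_background), so it illustrates the overall flow under those assumptions rather than the patent's actual implementation.

```python
def process_recorded_frame(frame_bgr, frame_gray, depth_map, first_gray, first_mask):
    """Apply the preview-image background processing to one recorded video frame."""
    # track the subject from the first preview image into this frame
    matched_points = track_feature_points(first_gray, first_mask, frame_gray)
    if not matched_points:
        return frame_bgr                          # subject lost: leave the frame unchanged
    pts = [kp.pt for kp in matched_points]
    depths = [depth_map[int(y), int(x)] for (x, y) in pts]
    idx, (rx, ry) = select_reference_point(pts, depths)
    # rebuild the subject mask in this frame from the reference point's depth
    mask, _ = subject_mask_from_depth(depth_map, depths[idx], (int(rx), int(ry)))
    return adjust_background(frame_bgr, mask)
```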
In summary, in the technical solution of this embodiment of the present invention, a first preview image carrying depth-of-field information is obtained through the dual cameras and displayed on the display screen of the mobile terminal; first depth-of-field information corresponding to the touch point is obtained according to a user instruction triggered by the user touching the first preview image; the contour of the target subject in the first preview image is determined based on the first depth-of-field information; the target subject in a second preview image obtained through the dual cameras is tracked according to the contour of the target subject, to determine the contour of the target subject in the second preview image; and the parameters of the background region of the second preview image, i.e. the entire area outside the contour of the target subject, are adjusted. The target subject in every frame of a dynamic image can therefore be located accurately, the accuracy of subject contour extraction is improved, and the user's satisfaction with the processed image is effectively improved.
Embodiment three
Referring to Fig. 3, a block diagram of a mobile terminal 300 according to an embodiment of the present invention is shown. The mobile terminal specifically includes:
a first obtaining module 301, configured to obtain, through the dual cameras, a first preview image carrying depth-of-field information;
a second obtaining module 302, configured to obtain, according to a user instruction triggered by the user touching the first preview image displayed on the display screen of the mobile terminal, first depth-of-field information corresponding to the touch point;
a determining module 303, configured to determine, based on the first depth-of-field information, the contour of the target subject in the first preview image;
a tracking module 304, configured to track, according to the contour of the target subject, the target subject in a second preview image obtained through the dual cameras, so as to determine the contour of the target subject in the second preview image; and
an adjusting module 305, configured to adjust parameters of the background region of the second preview image, where the background region is the entire area outside the contour of the target subject.
Referring to Fig. 4, in a preferred embodiment of the present invention, on the basis of Fig. 3, the second obtaining module 302 further includes:
a first obtaining sub-module 3021, configured to obtain the depth-of-field information corresponding to all pixels in the first preview image;
a first detection sub-module 3022, configured to detect, according to the user instruction, the first pixel corresponding to the touch point on the first preview image; and
a second obtaining sub-module 3023, configured to obtain the first depth-of-field information of the first pixel.
Referring to Fig. 5, in a preferred embodiment of the present invention, on the basis of Fig. 3, the determining module 303 further includes:
a first determining sub-module 3031, configured to determine a first depth-of-field range based on the first depth-of-field information;
a third obtaining sub-module 3032, configured to obtain a first pixel set corresponding to the multiple pieces of depth-of-field information in the first preview image that fall within the first depth-of-field range; and
a fourth obtaining sub-module 3033, configured to obtain the contour of the target subject based on the multiple pixels in the first pixel set.
Referring to Fig. 6, in a preferred embodiment of the present invention, on the basis of Fig. 3, the fourth obtaining sub-module 3033 further includes:
a first connectivity processing unit 3033a, configured to perform connectivity processing on the multiple pixels in the first pixel set;
a first determining unit 3033b, configured to determine a connected region according to the connectivity result;
a first detection unit 3033c, configured to detect whether there are, among the multiple pixels in the first pixel set, out-of-region pixels that do not belong to the connected region; and
a first building unit 3033d, configured to, if so, build the contour of the target subject based on the pixels in the first pixel set other than the out-of-region pixels.
Referring to Fig. 7, in a preferred embodiment of the present invention, on the basis of Fig. 3, the tracking module 304 further includes:
a first extraction sub-module 3041, configured to extract one or more feature points within the contour of the target subject from the first preview image, to form a first feature point set;
a second extraction sub-module 3042, configured to extract all feature points in the second preview image, to form a second feature point set;
a matching sub-module 3043, configured to match the second feature point set against the first feature point set and extract the successfully matched feature points, to form a third feature point set;
a selection sub-module 3044, configured to select a reference feature point from the third feature point set;
a fifth obtaining sub-module 3045, configured to obtain the depth-of-field information of all pixels in the second preview image;
a sixth obtaining sub-module 3046, configured to obtain the second depth-of-field information of the pixel corresponding to the reference feature point in the second preview image; and
a second determining sub-module 3047, configured to determine, based on the second depth-of-field information, the contour of the target subject in the second preview image.
Referring to Fig. 8, in a preferred embodiment of the present invention, on the basis of Fig. 3, the second determining sub-module 3047 further includes:
a second determining unit 3047a, configured to determine a second depth-of-field range based on the second depth-of-field information;
a first obtaining unit 3047b, configured to obtain a second pixel set corresponding to the multiple pieces of depth-of-field information in the second preview image that fall within the second depth-of-field range; and
a second obtaining unit 3047c, configured to obtain the contour of the target subject based on the multiple pixels in the second pixel set.
In addition, in a preferred embodiment of the present invention, the second obtaining unit 3047c is further configured to:
perform connectivity processing on the multiple pixels in the second pixel set;
determine a connected region according to the connectivity result;
detect whether there are, among the multiple pixels in the second pixel set, out-of-region pixels that do not belong to the connected region; and
if so, build the contour of the target subject based on the pixels in the second pixel set other than the out-of-region pixels.
In summary, in the technical solution of this embodiment of the present invention, a first preview image carrying depth-of-field information is obtained through the dual cameras and displayed on the display screen of the mobile terminal; first depth-of-field information corresponding to the touch point is obtained according to a user instruction triggered by the user touching the first preview image; the contour of the target subject in the first preview image is determined based on the first depth-of-field information; the target subject in a second preview image obtained through the dual cameras is tracked according to the contour of the target subject, to determine the contour of the target subject in the second preview image; and the parameters of the background region of the second preview image, i.e. the entire area outside the contour of the target subject, are adjusted. The target subject in every frame of a dynamic image can therefore be located accurately, the accuracy of subject contour extraction is improved, and the user's satisfaction with the processed image is effectively improved.
Embodiment four
Fig. 9 is a block diagram of a mobile terminal according to another embodiment of the present invention. The mobile terminal 900 shown in Fig. 9 includes: at least one processor 901, a memory 902, at least one network interface 904 and a user interface 903. The components of the mobile terminal 900 are coupled together by a bus system 905. It can be understood that the bus system 905 is used to implement connection and communication between these components. In addition to a data bus, the bus system 905 includes a power bus, a control bus and a status signal bus. However, for clarity of description, the various buses are all labeled as the bus system 905 in Fig. 9.
The user interface 903 may include a display, a keyboard or a pointing device (for example a mouse, a trackball, a touch pad or a touch screen).
It can be understood that the memory 902 in this embodiment of the present invention may be a volatile memory or a non-volatile memory, or may include both volatile and non-volatile memory. The non-volatile memory may be a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM) or a flash memory. The volatile memory may be a random access memory (RAM), which is used as an external cache. By way of example but not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM) and direct Rambus RAM (DRRAM). The memory 902 of the systems and methods described in the embodiments of the present invention is intended to include, but is not limited to, these and any other suitable types of memory.
In some embodiments, the memory 902 stores the following elements, executable modules or data structures, or a subset or superset thereof: an operating system 9021 and application programs 9022.
The operating system 9021 contains various system programs, such as a framework layer, a core library layer and a driver layer, for implementing various basic services and processing hardware-based tasks. The application programs 9022 contain various applications, such as a media player (MediaPlayer) and a browser (Browser), for implementing various application services. A program implementing the method of the embodiments of the present invention may be included in the application programs 9022.
In the embodiments of the present invention, by calling a program or instructions stored in the memory 902, specifically a program or instructions stored in the application programs 9022, the processor 901 is configured to: obtain, through the dual cameras, a first preview image carrying depth-of-field information, and display the first preview image on the display screen of the mobile terminal; obtain, according to a user instruction triggered by the user touching the first preview image, first depth-of-field information corresponding to the touch point; determine, based on the first depth-of-field information, the contour of the target subject in the first preview image; track, according to the contour of the target subject, the target subject in a second preview image obtained through the dual cameras, so as to determine the contour of the target subject in the second preview image; and adjust parameters of the background region of the second preview image, where the background region is the entire area outside the contour of the target subject.
The methods disclosed in the above embodiments of the present invention can be applied to the processor 901 or implemented by the processor 901. The processor 901 may be an integrated circuit chip with signal processing capability. During implementation, the steps of the above methods can be completed by integrated logic circuits of hardware in the processor 901 or by instructions in the form of software. The processor 901 may be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and can implement or execute the methods, steps and block diagrams disclosed in the embodiments of the present invention. The general-purpose processor may be a microprocessor or any conventional processor. The steps of the methods disclosed in the embodiments of the present invention may be directly embodied as being executed and completed by a hardware decoding processor, or executed and completed by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium mature in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory or a register. The storage medium is located in the memory 902, and the processor 901 reads the information in the memory 902 and completes the steps of the above methods in combination with its hardware.
It can be understood that the embodiments described in the embodiments of the present invention can be implemented with hardware, software, firmware, middleware, microcode or a combination thereof. For hardware implementation, the processing unit can be implemented in one or more application-specific integrated circuits (ASIC), digital signal processors (DSP), digital signal processing devices (DSPD), programmable logic devices (PLD), field-programmable gate arrays (FPGA), general-purpose processors, controllers, microcontrollers, microprocessors, other electronic units for performing the functions described herein, or a combination thereof.
For software implementation, the techniques described in the embodiments of the present invention can be implemented by modules (for example procedures and functions) that perform the functions described in the embodiments of the present invention. The software code can be stored in the memory and executed by the processor. The memory can be implemented inside or outside the processor.
Optionally, the processor 901 is configured to: obtain the depth-of-field information corresponding to all pixels in the first preview image; detect, according to the user instruction, the first pixel corresponding to the touch point on the first preview image; and obtain the first depth-of-field information of the first pixel.
Optionally, the processor 901 is further configured to: determine a first depth-of-field range based on the first depth-of-field information; obtain a first pixel set corresponding to the multiple pieces of depth-of-field information in the first preview image that fall within the first depth-of-field range; and obtain the contour of the target subject based on the multiple pixels in the first pixel set.
Optionally, the processor 901 is further configured to: perform connectivity processing on the multiple pixels in the first pixel set; determine a connected region according to the connectivity result; detect whether there are, among the multiple pixels in the first pixel set, out-of-region pixels that do not belong to the connected region; and if so, build the contour of the target subject based on the pixels in the first pixel set other than the out-of-region pixels.
Optionally, the processor 901 is further configured to: extract one or more feature points within the contour of the target subject from the first preview image, to form a first feature point set; extract all feature points in the second preview image, to form a second feature point set; match the second feature point set against the first feature point set and extract the successfully matched feature points, to form a third feature point set; select a reference feature point from the third feature point set; obtain the depth-of-field information of all pixels in the second preview image; obtain the second depth-of-field information of the pixel corresponding to the reference feature point in the second preview image; and determine, based on the second depth-of-field information, the contour of the target subject in the second preview image.
Optionally, the processor 901 is further configured to: determine a second depth-of-field range based on the second depth-of-field information; obtain a second pixel set corresponding to the multiple pieces of depth-of-field information in the second preview image that fall within the second depth-of-field range; and obtain the contour of the target subject based on the multiple pixels in the second pixel set.
Optionally, the processor 901 is further configured to: perform connectivity processing on the multiple pixels in the second pixel set; determine a connected region according to the connectivity result; detect whether there are, among the multiple pixels in the second pixel set, out-of-region pixels that do not belong to the connected region; and if so, build the contour of the target subject based on the pixels in the second pixel set other than the out-of-region pixels.
The mobile terminal 900 can implement the processes implemented by the mobile terminal in the foregoing embodiments, which are not repeated here to avoid repetition.
In summary, in the technical solution of this embodiment of the present invention, a first preview image carrying depth-of-field information is obtained through the dual cameras and displayed on the display screen of the mobile terminal; first depth-of-field information corresponding to the touch point is obtained according to a user instruction triggered by the user touching the first preview image; the contour of the target subject in the first preview image is determined based on the first depth-of-field information; the target subject in a second preview image obtained through the dual cameras is tracked according to the contour of the target subject, to determine the contour of the target subject in the second preview image; and the parameters of the background region of the second preview image, i.e. the entire area outside the contour of the target subject, are adjusted. The target subject in every frame of a dynamic image can therefore be located accurately, the accuracy of subject contour extraction is improved, and the user's satisfaction with the processed image is effectively improved.
Embodiment five
Fig. 10 is a schematic structural diagram of a mobile terminal according to another embodiment of the present invention. Specifically, the mobile terminal 1000 in Fig. 10 may be a mobile phone, a tablet computer, a personal digital assistant (PDA) or an in-vehicle computer, among others.
The mobile terminal 1000 in Fig. 10 includes a radio frequency (RF) circuit 1010, a memory 1020, an input unit 1030, a display unit 1040, a processor 1060, an audio circuit 1070, a WiFi (Wireless Fidelity) module 1080 and a power supply 1090.
The input unit 1030 can be used to receive numeric or character information entered by the user and to generate signal inputs related to user settings and function control of the mobile terminal 1000. Specifically, in this embodiment of the present invention, the input unit 1030 may include a touch panel 1031. The touch panel 1031, also called a touch screen, can collect the user's touch operations on or near it (for example operations performed by the user on the touch panel 1031 with a finger, a stylus or any other suitable object or accessory) and drive the corresponding connection device according to a preset program. Optionally, the touch panel 1031 may include two parts: a touch detection device and a touch controller. The touch detection device detects the user's touch position and the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, sends them to the processor 1060, and can receive and execute commands sent by the processor 1060. In addition, the touch panel 1031 can be implemented in various types such as resistive, capacitive, infrared and surface acoustic wave. Besides the touch panel 1031, the input unit 1030 may also include other input devices 1032, which may include but are not limited to one or more of a physical keyboard, function keys (such as volume control keys and a power key), a trackball, a mouse and a joystick.
The display unit 1040 can be used to display information entered by the user or information provided to the user, as well as the various menu interfaces of the mobile terminal 1000. The display unit 1040 may include a display panel 1041, which may optionally be configured in the form of an LCD or an organic light-emitting diode (OLED), among others.
It should be noted that the touch panel 1031 can cover the display panel 1041 to form a touch display screen. After the touch display screen detects a touch operation on or near it, the operation is transmitted to the processor 1060 to determine the type of the touch event, and the processor 1060 then provides a corresponding visual output on the touch display screen according to the type of the touch event.
The touch display screen includes an application interface display area and a common-control display area. The arrangement of these two display areas is not limited; they can be arranged one above the other, side by side, or in any other arrangement that distinguishes the two display areas. The application interface display area can be used to display the interface of an application. Each interface may contain interface elements such as icons of at least one application and/or widget desktop controls. The application interface display area may also be an empty interface containing no content. The common-control display area is used to display frequently used controls, for example application icons such as a settings button, interface numbers, a scroll bar and a phonebook icon.
The processor 1060 is the control center of the mobile terminal 1000. It uses various interfaces and lines to connect all parts of the whole mobile phone, and performs the various functions of the mobile terminal 1000 and processes data by running or executing the software programs and/or modules stored in a first memory 1021 and calling the data stored in a second memory 1022, thereby monitoring the mobile terminal 1000 as a whole. Optionally, the processor 1060 may include one or more processing units.
In the embodiments of the present invention, by calling the software programs and/or modules stored in the first memory 1021 and/or the data stored in the second memory 1022, the processor 1060 is configured to: obtain, through the dual cameras, a first preview image carrying depth-of-field information, and display the first preview image on the display screen of the mobile terminal; obtain, according to a user instruction triggered by the user touching the first preview image, first depth-of-field information corresponding to the touch point; determine, based on the first depth-of-field information, the contour of the target subject in the first preview image; track, according to the contour of the target subject, the target subject in a second preview image obtained through the dual cameras, so as to determine the contour of the target subject in the second preview image; and adjust parameters of the background region of the second preview image, where the background region is the entire area outside the contour of the target subject.
Optionally, the processor 1060 is configured to: obtain the depth-of-field information corresponding to all pixels in the first preview image; detect, according to the user instruction, the first pixel corresponding to the touch point on the first preview image; and obtain the first depth-of-field information of the first pixel.
Optionally, the processor 1060 is further configured to: determine a first depth-of-field range based on the first depth-of-field information; obtain a first pixel set corresponding to the multiple pieces of depth-of-field information in the first preview image that fall within the first depth-of-field range; and obtain the contour of the target subject based on the multiple pixels in the first pixel set.
Optionally, the processor 1060 is further configured to: perform connectivity processing on the multiple pixels in the first pixel set; determine a connected region according to the connectivity result; detect whether there are, among the multiple pixels in the first pixel set, out-of-region pixels that do not belong to the connected region; and if so, build the contour of the target subject based on the pixels in the first pixel set other than the out-of-region pixels.
Optionally, the processor 1060 is further configured to: extract one or more feature points within the contour of the target subject from the first preview image, to form a first feature point set; extract all feature points in the second preview image, to form a second feature point set; match the second feature point set against the first feature point set and extract the successfully matched feature points, to form a third feature point set; select a reference feature point from the third feature point set; obtain the depth-of-field information of all pixels in the second preview image; obtain the second depth-of-field information of the pixel corresponding to the reference feature point in the second preview image; and determine, based on the second depth-of-field information, the contour of the target subject in the second preview image.
Optionally, the processor 1060 is further configured to: determine a second depth-of-field range based on the second depth-of-field information; obtain a second pixel set corresponding to the multiple pieces of depth-of-field information in the second preview image that fall within the second depth-of-field range; and obtain the contour of the target subject based on the multiple pixels in the second pixel set.
Optionally, the processor 1060 is further configured to: perform connectivity processing on the multiple pixels in the second pixel set; determine a connected region according to the connectivity result; detect whether there are, among the multiple pixels in the second pixel set, out-of-region pixels that do not belong to the connected region; and if so, build the contour of the target subject based on the pixels in the second pixel set other than the out-of-region pixels.
The mobile terminal 1000 can implement the processes implemented by the mobile terminal in the foregoing embodiments, which are not repeated here to avoid repetition.
It can be seen that, in the technical solution of this embodiment of the present invention, a first preview image carrying depth-of-field information is obtained through the dual cameras and displayed on the display screen of the mobile terminal; first depth-of-field information corresponding to the touch point is obtained according to a user instruction triggered by the user touching the first preview image; the contour of the target subject in the first preview image is determined based on the first depth-of-field information; the target subject in a second preview image obtained through the dual cameras is tracked according to the contour of the target subject, to determine the contour of the target subject in the second preview image; and the parameters of the background region of the second preview image, i.e. the entire area outside the contour of the target subject, are adjusted. The target subject in every frame of a dynamic image can therefore be located accurately, the accuracy of subject contour extraction is improved, and the user's satisfaction with the processed image is effectively improved.
An embodiment of the present invention further provides a mobile terminal, including a memory, a processor and an image processing program stored in the memory and executable on the processor, where the image processing program, when executed by the processor, implements the steps of any one of the image processing methods described in the present invention.
An embodiment of the present invention further provides a computer-readable storage medium storing an image processing program, where the image processing program, when executed by a processor, implements the steps of any one of the image processing methods described in the present invention.
As for the device embodiments, since they are substantially similar to the method embodiments, their description is relatively brief; for relevant details, reference may be made to the description of the method embodiments.
The image processing solution provided herein is not inherently related to any particular computer, virtual system, or other device. Various general-purpose systems may also be used with the teaching herein. The structure required to construct a system embodying the solution of the present invention is apparent from the description above. In addition, the present invention is not directed to any particular programming language. It should be understood that various programming languages may be used to implement the content of the invention described herein, and the above description of specific languages is intended to disclose the best mode of carrying out the invention.
In the specification provided here, numerous specific details are set forth. It should be understood, however, that embodiments of the present invention may be practised without these specific details. In some instances, well-known methods, structures, and techniques have not been shown in detail so as not to obscure the understanding of this description.
Similarly, it should be understood that, in order to streamline the disclosure and aid the understanding of one or more of the various inventive aspects, in the above description of exemplary embodiments of the present invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof. However, the disclosed method should not be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of the invention.
Those skilled in the art will appreciate that the modules in the devices of the embodiments may be adaptively changed and arranged in one or more devices different from those of the embodiments. The modules, units, or components in the embodiments may be combined into one module, unit, or component, and may furthermore be divided into multiple sub-modules, sub-units, or sub-components. Except where at least some of such features and/or processes or units are mutually exclusive, all features disclosed in this specification (including the accompanying claims, abstract, and drawings) and all processes or units of any method or device so disclosed may be combined in any combination. Unless expressly stated otherwise, each feature disclosed in this specification (including the accompanying claims, abstract, and drawings) may be replaced by an alternative feature serving the same, an equivalent, or a similar purpose.
In addition, those skilled in the art will understand that although some embodiments described herein include certain features included in other embodiments rather than other features, combinations of features of different embodiments fall within the scope of the present invention and form different embodiments. For example, in the claims, any one of the claimed embodiments may be used in any combination.
The various component embodiments of the present invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will understand that, in practice, a microprocessor or a digital signal processor (DSP) may be used to implement some or all of the functions of some or all of the components in the image processing solution according to the embodiments of the present invention. The present invention may also be implemented as a device or apparatus program (for example, a computer program and a computer program product) for performing part or all of the method described herein. Such a program implementing the present invention may be stored on a computer-readable medium, or may have the form of one or more signals. Such signals may be downloaded from an Internet website, provided on a carrier signal, or provided in any other form.
It should be noted that the above embodiments illustrate rather than limit the present invention, and those skilled in the art may devise alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claims. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The present invention may be implemented by means of hardware comprising several distinct elements and by means of a suitably programmed computer. In a device claim enumerating several units, several of these units may be embodied by one and the same item of hardware. The use of the words first, second, and third does not indicate any order; these words may be interpreted as names.

Claims (16)

1. An image processing method, applied to a mobile terminal with a dual camera, characterised in that the method comprises:
    obtaining, by the dual camera, a first preview image with depth of field information, and displaying the first preview image on the display screen of the mobile terminal;
    obtaining, according to a user instruction triggered by a user touching the first preview image, first depth of field information corresponding to the touch point;
    determining, based on the first depth of field information, the profile of a target subject in the first preview image;
    tracking, according to the profile of the target subject, the target subject in a second preview image obtained by the dual camera, to determine the profile of the target subject in the second preview image;
    adjusting parameters of a background area of the second preview image, wherein the background area is all the area outside the profile of the target subject.
2. The method according to claim 1, characterised in that the step of obtaining the first depth of field information corresponding to the touch point according to the user instruction triggered by the user touching the first preview image displayed on the display screen of the mobile terminal specifically comprises:
    obtaining depth of field information corresponding to all pixels in the first preview image;
    detecting, according to the user instruction, a first pixel corresponding to the touch point on the first preview image;
    obtaining the first depth of field information of the first pixel.
3. The method according to claim 1, characterised in that the step of determining the profile of the target subject in the first preview image based on the first depth of field information specifically comprises:
    determining a first depth of field range based on the first depth of field information;
    obtaining a first pixel point set corresponding to the pieces of depth of field information in the first preview image that fall within the first depth of field range;
    obtaining the profile of the target subject based on multiple pixels in the first pixel point set.
4. The method according to claim 3, characterised in that the step of obtaining the profile of the target subject based on the multiple pixels in the first pixel point set specifically comprises:
    performing connection processing on the multiple pixels in the first pixel point set;
    determining a connected region according to the connection result;
    detecting whether the multiple pixels in the first pixel point set contain region-exterior pixels that do not belong to the connected region;
    if so, constructing the profile of the target subject based on the pixels in the first pixel point set other than the region-exterior pixels.
5. The method according to claim 1, characterised in that the step of tracking, according to the profile of the target subject, the target subject in the second preview image obtained by the dual camera specifically comprises:
    extracting one or more feature points within the profile of the target subject from the first preview image to form a first feature point set;
    extracting all feature points in the second preview image to form a second feature point set;
    matching the second feature point set with the first feature point set, and extracting the successfully matched feature points to form a third feature point set;
    selecting a reference feature point from the third feature point set;
    obtaining depth of field information of all pixels in the second preview image;
    obtaining second depth of field information of the pixel corresponding to the reference feature point in the second preview image;
    determining, based on the second depth of field information, the profile of the target subject in the second preview image.
6. The method according to claim 5, characterised in that the step of tracking, according to the profile of the target subject, the target subject in the second preview image obtained by the dual camera specifically comprises:
    determining a second depth of field range based on the second depth of field information;
    obtaining a second pixel point set corresponding to the pieces of depth of field information in the second preview image that fall within the second depth of field range;
    obtaining the profile of the target subject based on multiple pixels in the second pixel point set.
7. The method according to claim 6, characterised in that the step of obtaining the profile of the target subject based on the multiple pixels in the second pixel point set specifically comprises:
    performing connection processing on the multiple pixels in the second pixel point set;
    determining a connected region according to the connection result;
    detecting whether the multiple pixels in the second pixel point set contain region-exterior pixels that do not belong to the connected region;
    if so, constructing the profile of the target subject based on the pixels in the second pixel point set other than the region-exterior pixels.
8. A mobile terminal having a dual camera, characterised in that the mobile terminal comprises:
    a first acquisition module, configured to obtain, by the dual camera, a first preview image with depth of field information, and display the first preview image on the display screen of the mobile terminal;
    a second acquisition module, configured to obtain, according to a user instruction triggered by a user touching the first preview image, first depth of field information corresponding to the touch point;
    a determining module, configured to determine, based on the first depth of field information, the profile of a target subject in the first preview image;
    a tracking module, configured to track, according to the profile of the target subject, the target subject in a second preview image obtained by the dual camera, to determine the profile of the target subject in the second preview image;
    an adjusting module, configured to adjust parameters of a background area of the second preview image, wherein the background area is all the area outside the profile of the target subject.
9. The mobile terminal according to claim 8, characterised in that the second acquisition module further comprises:
    a first acquisition sub-module, configured to obtain depth of field information corresponding to all pixels in the first preview image;
    a first detection sub-module, configured to detect, according to the user instruction, a first pixel corresponding to the touch point on the first preview image;
    a second acquisition sub-module, configured to obtain the first depth of field information of the first pixel.
10. The mobile terminal according to claim 8, characterised in that the determining module further comprises:
    a first determining sub-module, configured to determine a first depth of field range based on the first depth of field information;
    a third acquisition sub-module, configured to obtain a first pixel point set corresponding to the pieces of depth of field information in the first preview image that fall within the first depth of field range;
    a fourth acquisition sub-module, configured to obtain the profile of the target subject based on multiple pixels in the first pixel point set.
11. The mobile terminal according to claim 10, characterised in that the fourth acquisition sub-module further comprises:
    a first connection processing unit, configured to perform connection processing on the multiple pixels in the first pixel point set;
    a first determining unit, configured to determine a connected region according to the connection result;
    a first detection unit, configured to detect whether the multiple pixels in the first pixel point set contain region-exterior pixels that do not belong to the connected region;
    a first construction unit, configured to, if so, construct the profile of the target subject based on the pixels in the first pixel point set other than the region-exterior pixels.
12. The mobile terminal according to claim 8, characterised in that the tracking module further comprises:
    a first extraction sub-module, configured to extract one or more feature points within the profile of the target subject from the first preview image to form a first feature point set;
    a second extraction sub-module, configured to extract all feature points in the second preview image to form a second feature point set;
    a matching sub-module, configured to match the second feature point set with the first feature point set and extract the successfully matched feature points to form a third feature point set;
    a selection sub-module, configured to select a reference feature point from the third feature point set;
    a fifth acquisition sub-module, configured to obtain depth of field information of all pixels in the second preview image;
    a sixth acquisition sub-module, configured to obtain second depth of field information of the pixel corresponding to the reference feature point in the second preview image;
    a second determining sub-module, configured to determine, based on the second depth of field information, the profile of the target subject in the second preview image.
13. The mobile terminal according to claim 12, characterised in that the second determining sub-module further comprises:
    a second determining unit, configured to determine a second depth of field range based on the second depth of field information;
    a first acquisition unit, configured to obtain a second pixel point set corresponding to the pieces of depth of field information in the second preview image that fall within the second depth of field range;
    a second acquisition unit, configured to obtain the profile of the target subject based on multiple pixels in the second pixel point set.
14. The mobile terminal according to claim 13, characterised in that the second acquisition unit is further configured to:
    perform connection processing on the multiple pixels in the second pixel point set;
    determine a connected region according to the connection result;
    detect whether the multiple pixels in the second pixel point set contain region-exterior pixels that do not belong to the connected region;
    if so, construct the profile of the target subject based on the pixels in the second pixel point set other than the region-exterior pixels.
15. A mobile terminal, characterised in that it comprises: a memory, a processor, and an image processing program stored in the memory and executable on the processor, wherein the image processing program, when executed by the processor, implements the steps of the image processing method according to any one of claims 1 to 7.
16. A computer-readable storage medium, characterised in that an image processing program is stored on the computer-readable storage medium, and the image processing program, when executed by a processor, implements the steps of the image processing method according to any one of claims 1 to 7.
CN201710744927.XA 2017-08-25 2017-08-25 Image processing method and mobile terminal Active CN107483821B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710744927.XA CN107483821B (en) 2017-08-25 2017-08-25 Image processing method and mobile terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710744927.XA CN107483821B (en) 2017-08-25 2017-08-25 Image processing method and mobile terminal

Publications (2)

Publication Number Publication Date
CN107483821A true CN107483821A (en) 2017-12-15
CN107483821B CN107483821B (en) 2020-08-14

Family

ID=60602760

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710744927.XA Active CN107483821B (en) 2017-08-25 2017-08-25 Image processing method and mobile terminal

Country Status (1)

Country Link
CN (1) CN107483821B (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108322644A (en) * 2018-01-18 2018-07-24 努比亚技术有限公司 A kind of image processing method, mobile terminal and computer readable storage medium
CN109688321A (en) * 2018-11-21 2019-04-26 惠州Tcl移动通信有限公司 Electronic equipment and its image display method, the device with store function
CN110661971A (en) * 2019-09-03 2020-01-07 RealMe重庆移动通信有限公司 Image shooting method and device, storage medium and electronic equipment
CN112532881A (en) * 2020-11-26 2021-03-19 维沃移动通信有限公司 Image processing method and device and electronic equipment
CN112887606A (en) * 2021-01-26 2021-06-01 维沃移动通信有限公司 Shooting method and device and electronic equipment

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101400001A (en) * 2008-11-03 2009-04-01 清华大学 Generation method and system for video frame depth chart
JP2010057105A (en) * 2008-08-29 2010-03-11 Tokyo Institute Of Technology Three-dimensional object tracking method and system
JP2011199526A (en) * 2010-03-18 2011-10-06 Fujifilm Corp Object tracking device, and method of controlling operation of the same
CN202172446U (en) * 2011-08-29 2012-03-21 华为终端有限公司 Wide angle photographing apparatus
CN104915965A (en) * 2014-03-14 2015-09-16 华为技术有限公司 Camera tracking method and device
CN105979165A (en) * 2016-06-02 2016-09-28 广东欧珀移动通信有限公司 Blurred photos generation method, blurred photos generation device and mobile terminal
CN106161945A (en) * 2016-08-01 2016-11-23 乐视控股(北京)有限公司 Take pictures treating method and apparatus

Also Published As

Publication number Publication date
CN107483821B (en) 2020-08-14

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20210423

Address after: 1-3 / F, No.57, Boxia Road, China (Shanghai) pilot Free Trade Zone, Pudong New Area, Shanghai, 200120

Patentee after: Aiku software technology (Shanghai) Co.,Ltd.

Address before: No. 283, Wusha Bubugao Avenue, Chang'an Town, Dongguan City, Guangdong Province, 523860

Patentee before: VIVO MOBILE COMMUNICATION Co.,Ltd.