CN109948525A - Photographing processing method and apparatus, mobile terminal, and storage medium - Google Patents

Photographing processing method and apparatus, mobile terminal, and storage medium

Info

Publication number
CN109948525A
CN109948525A (application CN201910204839.XA)
Authority
CN
China
Prior art keywords
preview image
image
area
occluding object
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910204839.XA
Other languages
Chinese (zh)
Inventor
李亚乾
刘耀勇
陈岩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201910204839.XA
Publication of CN109948525A

Landscapes

  • Image Analysis (AREA)

Abstract

Disclosed are a photographing processing method and apparatus, a mobile terminal, and a storage medium, relating to the technical field of electronic devices. The method includes: acquiring a preview image; inputting the preview image into a trained target detection model; obtaining the information output by the trained target detection model; and, when the information is read to include the location information of an occluding object in the preview image, cropping the preview image based on the location information to obtain a target image that does not contain the occluding object. The photographing processing method and apparatus, mobile terminal, and storage medium provided by the embodiments of the present application perform occlusion detection on the preview image through a trained target detection model and output the occluding object's location information according to the detection result as a basis for cropping, so that a target image free of the occluding object is obtained and the shooting effect is improved.

Description

Photographing processing method and apparatus, mobile terminal, and storage medium
Technical field
The present application relates to the technical field of electronic devices, and more particularly to a photographing processing method and apparatus, a mobile terminal, and a storage medium.
Background technique
With the development of science and technology, mobile terminals have become one of the most common electronic products in people's daily life. Users often take photos with a mobile terminal, but occasionally an occluding object interferes with the shot, for example the user's finger, which degrades the overall quality of the photo.
Summary of the invention
In view of the above problems, the present application proposes a photographing processing method and apparatus, a mobile terminal, and a storage medium to solve them.
In a first aspect, an embodiment of the present application provides a photographing processing method. The method includes: acquiring a preview image and inputting the preview image into a trained target detection model; obtaining the information output by the trained target detection model; and, when the information is read to include the location information of an occluding object in the preview image, cropping the preview image based on the location information to obtain a target image that does not contain the occluding object.
In a second aspect, an embodiment of the present application provides a photographing processing apparatus. The apparatus includes: an image acquisition module, configured to acquire a preview image and input the preview image into a trained target detection model; an information obtaining module, configured to obtain the information output by the trained target detection model; and an image cropping module, configured to, when the information is read to include the location information of an occluding object in the preview image, crop the preview image based on the location information to obtain a target image that does not contain the occluding object.
In a third aspect, an embodiment of the present application provides a mobile terminal including a memory and a processor, the memory being coupled to the processor and storing instructions which, when executed by the processor, cause the processor to perform the above method.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium storing program code that can be called by a processor to execute the above method.
The photographing processing method and apparatus, mobile terminal, and storage medium provided by the embodiments of the present application acquire a preview image, input it into a trained target detection model, and obtain the information output by the trained model; when the information is read to include the location information of an occluding object in the preview image, the preview image is cropped based on that location information to obtain a target image that does not contain the occluding object. Occlusion detection is thus performed on the preview image by the trained target detection model, and the occluding object's location information is output according to the detection result as a basis for cropping, so that a target image free of the occluding object is obtained and the shooting effect is improved.
Brief description of the drawings
To explain the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application; for those skilled in the art, other drawings can be obtained from them without creative effort.
Fig. 1 shows a schematic flowchart of a photographing processing method provided by one embodiment of the present application;
Fig. 2 shows a schematic flowchart of a photographing processing method provided by another embodiment of the present application;
Fig. 3 shows a first interface diagram of a mobile terminal provided by an embodiment of the present application;
Fig. 4 shows a second interface diagram of a mobile terminal provided by an embodiment of the present application;
Fig. 5 shows a schematic flowchart of step S270 of the photographing processing method shown in Fig. 2;
Fig. 6 shows a third interface diagram of a mobile terminal provided by an embodiment of the present application;
Fig. 7 shows a schematic flowchart of step S272 of the photographing processing method shown in Fig. 5;
Fig. 8 shows a fourth interface diagram of a mobile terminal provided by an embodiment of the present application;
Fig. 9 shows a fifth interface diagram of a mobile terminal provided by an embodiment of the present application;
Fig. 10 shows a module block diagram of a photographing processing apparatus provided by an embodiment of the present application;
Fig. 11 shows a block diagram of a mobile terminal for executing the photographing processing method according to an embodiment of the present application;
Fig. 12 shows a storage unit, provided by an embodiment of the present application, for saving or carrying program code that implements the photographing processing method according to an embodiment of the present application.
Specific embodiment
To enable those skilled in the art to better understand the solutions of the present application, the technical solutions in the embodiments of the present application are described clearly and completely below with reference to the accompanying drawings.
At present, the camera function has become standard on most mobile terminals; users carry their mobile terminals with them and use them to record memorable moments. Moreover, with the rapid development of mobile-terminal intelligence, users' demands on photo quality keep rising; for example, a user may expect to photograph a target object without any occluding object in the frame. However, occlusion interference occasionally occurs when photographing with a mobile terminal, for example a finger covering the camera lens, so that when the photo is formed the user's finger appears in one corner of it and degrades its overall quality. To solve this problem, the current practice is post-editing by the user with software to remove the occluding object, but this approach depends heavily on the photo's background. If the background is plain and regular, the user can remove the occluding object by retouching; if the background is complex, the user must cover the occluded area by selecting, copying, and moving background regions, which requires great patience from the user and places high demands on the software, so the result is often unsatisfactory.
In view of the above problems, through long-term research the inventors propose the photographing processing method and apparatus, mobile terminal, and storage medium provided by the embodiments of the present application, which perform occlusion detection on the preview image through a trained target detection model and output the occluding object's location information according to the detection result as a basis for cropping, so that a target image free of the occluding object is obtained and the shooting effect is improved. The specific photographing processing method is described in detail in the following embodiments.
Embodiment
Referring to Fig. 1, Fig. 1 shows a schematic flowchart of a photographing processing method provided by one embodiment of the present application. The method performs occlusion detection on a preview image through a trained target detection model and outputs the occluding object's location information according to the detection result as a basis for cropping, so that a target image free of the occluding object is obtained and the shooting effect is improved. In a specific embodiment, the photographing processing method is applied to the photographing processing apparatus 200 shown in Fig. 10 and to the mobile terminal 100 (Fig. 11) configured with the photographing processing apparatus 200. The following takes a mobile terminal as an example to describe the flow of this embodiment. Of course, it should be understood that the mobile terminal of this embodiment may be any electronic device with a camera, such as a smartphone, tablet computer, wearable electronic device, in-vehicle device, or gateway, which is not specifically limited here. The flow shown in Fig. 1 is described in detail below; the photographing processing method may specifically include the following steps:
Step S110: acquiring a preview image, and inputting the preview image into a trained target detection model.
In this embodiment, the mobile terminal acquires the preview image through a camera. As one approach, the preview image may be acquired through the front camera of the mobile terminal, for example a preview image of the user taking a selfie; the preview image may be acquired through the rear camera, for example a preview image of the user photographing others; or the preview image may be acquired through a rotatable camera, which, by being rotated, can capture either a selfie preview image or a preview image of others, which is not limited here.
Further, after collecting the preview image, the mobile terminal may input it into the trained target detection model, where the trained target detection model is obtained through machine learning. Specifically, a training dataset is first acquired, in which the attributes or features of one class of data differ from those of another class; then a neural network is trained on the acquired training dataset according to a preset algorithm, so that rules are summarized from the dataset and the trained target detection model is obtained. In this embodiment, the training dataset may be, for example, multiple original images containing occluding objects together with multiple label items indicating the occluding objects' locations in those original images.
It should be understood that the trained target detection model may be stored locally on the mobile terminal after training is completed. On this basis, after collecting the preview image the mobile terminal can directly call the locally stored trained model, for example by sending an instruction to the model to read the preview image from a target storage region, or by storing the preview image directly into the local trained model. This effectively avoids the slowdown that network factors would impose on inputting the preview image into the trained model, increases the speed at which the model obtains the preview image, and improves user experience.
In addition, the trained target detection model may also be stored, after training is completed, on a server communicatively connected to the mobile terminal. On this basis, after collecting the preview image the mobile terminal can send an instruction over the network to the trained model stored on the server, instructing it to read the collected preview image over the network; or the mobile terminal can send the preview image over the network to the trained model stored on the server. Storing the trained model on the server reduces the occupation of the mobile terminal's storage space and reduces the impact on the mobile terminal's normal operation.
As one approach, the trained target detection model is used to detect whether there is an occluding object in the preview image and, when an occluding object is detected, to output the occluding object's location in the preview image. That is, the trained model can detect whether the preview image contains an occluding object, which may include an image of the user's finger, an image of the user's palm, and so on, without limitation here. As an implementable approach, when the trained target detection model detects an occluding object in the preview image, it can output the occluding object's location information in the preview image, for example its coordinates in the image coordinate system of the preview image, or a location map of the occluding object in the preview image; when no occluding object is detected, it can output information indicating that the preview image contains no occluding object, or output nothing, for example blank information.
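The output convention just described, either the occluding object's bounding box or an empty result, can be sketched as follows. This is a minimal illustration: `run_detector` is a hypothetical stand-in for the trained network, and the box coordinates are made up for the example.

```python
def run_detector(preview_image):
    """Hypothetical detector: returns a list of (x0, y0, x1, y1) boxes.
    A real implementation would run the trained network here; this stub
    pretends a finger occludes the top-left corner of the frame."""
    return [(0, 0, 2, 2)]

def detect_occluder(preview_image):
    """Wrap the detector into the patent's output convention:
    location info when an occluder is found, empty info otherwise."""
    boxes = run_detector(preview_image)
    if not boxes:
        return {}                      # no occluding object detected
    return {"occluder_box": boxes[0]}  # location info used for cropping

info = detect_occluder(preview_image=None)
```

The empty-dictionary case corresponds to the "blank information" branch of the description; a caller can simply test whether the returned information contains a location.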
Step S120: obtaining the information output by the trained target detection model.
In this embodiment, the trained target detection model outputs corresponding information based on the preview image it reads, and the mobile terminal obtains the information output by the trained model. It should be understood that if the trained model is stored locally on the mobile terminal, the mobile terminal obtains its output directly; if the trained model is stored on a server, the mobile terminal can obtain the output from the server over the network. As an implementable approach, the output of the trained model may be obtained as voice information, text information, picture information, and so on, without limitation here.
Step S130: when the information is read to include the location information of an occluding object in the preview image, cropping the preview image based on the location information to obtain a target image that does not contain the occluding object.
As one approach, the information output by the trained target detection model may be an XML document, and the mobile terminal can read and parse the content recorded in the XML document. When the mobile terminal reads that the information includes the location information of an occluding object in the preview image, it can determine that an occluding object exists in the preview image and determine its position in the image coordinate system of the preview image. As an implementable approach, the camera system of the mobile terminal can read and respond to the information output by the trained model. In this embodiment, after the occluding object's location information is determined, the preview image can be cropped automatically based on that location information; the occluding object can be cropped out of the preview image to obtain a target image that does not contain the occluding object, thereby improving the presentation of the target image. Further, after the target image is obtained, it may be output to the album system for saving, or output to the display interface of the mobile terminal for presentation, without limitation here.
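As a sketch of reading such an XML document, the bounding box of the occluding object can be parsed with the standard library. The patent does not specify a schema, so the Pascal-VOC-like element names below (`object`, `bndbox`, `xmin`, …) are an assumption for illustration only.

```python
import xml.etree.ElementTree as ET

xml_doc = """<detection>
  <object name="occlusion">
    <bndbox><xmin>0</xmin><ymin>4</ymin><xmax>2</xmax><ymax>5</ymax></bndbox>
  </object>
</detection>"""

def read_occluder_box(xml_text):
    """Return the occluder's (xmin, ymin, xmax, ymax), or None when the
    document records no occluding object."""
    root = ET.fromstring(xml_text)
    obj = root.find("object")
    if obj is None:
        return None
    box = obj.find("bndbox")
    return tuple(int(box.find(tag).text)
                 for tag in ("xmin", "ymin", "xmax", "ymax"))

occluder_box = read_occluder_box(xml_doc)  # (0, 4, 2, 5)
```

Returning `None` for an empty document mirrors the "no occluding object" branch of step S130.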
As one approach, the occluding object's location information may include all of its location points in the preview image, that is, the location points of all pixels belonging to the occluding object; the mobile terminal can then crop based on those pixel locations, removing them to obtain a target image free of the occluding object. As another approach, the location information may include only the location points of the occluding object's edge in the preview image, that is, the pixels along its boundary; the mobile terminal can then crop along the occluding object's edge. The specific cropping method is not limited in this embodiment.
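The edge-based cropping variant can be sketched with the preview image modeled as a small character grid. The geometry is an assumed example: the occluder sits in the top-left corner, so cropping along its right edge keeps the occluder-free right half.

```python
def crop(image, left, top, right, bottom):
    """Keep the sub-rectangle [left, right) x [top, bottom) of a row-major grid."""
    return [row[left:right] for row in image[top:bottom]]

# 4x4 preview; 'X' marks the occluding object in the top-left 2x2 corner
preview = [
    list("XX.."),
    list("XX.."),
    list("...."),
    list("...."),
]

# Crop along the occluder's right edge (x = 2): the result contains no 'X'
target = crop(preview, left=2, top=0, right=4, bottom=4)
```

A production implementation would apply the same rectangle to the pixel buffer (for example via an image library's crop call); the point here is only that the occluder's edge positions fully determine the crop rectangle.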
The photographing processing method provided by this embodiment of the present application acquires a preview image, inputs the preview image into a trained target detection model, and obtains the information output by the trained model; when the information is read to include the location information of an occluding object in the preview image, the preview image is cropped based on that location information to obtain a target image that does not contain the occluding object. Occlusion detection is thus performed on the preview image by the trained target detection model, and the occluding object's location information is output according to the detection result as a basis for cropping, so that a target image free of the occluding object is obtained and the shooting effect is improved.
Referring to Fig. 2, Fig. 2 shows a schematic flowchart of a photographing processing method provided by another embodiment of the present application. The method is applied to the above mobile terminal. In this embodiment, the location information is the occluded region of the occluding object in the preview image. The flow shown in Fig. 2 is described in detail below; the photographing processing method may specifically include the following steps:
Step S210: obtaining multiple original images containing occluding objects and multiple label items indicating the occluding objects' locations in the original images, wherein the multiple original images correspond one-to-one with the multiple label items.
In this embodiment, training datasets are first collected, including multiple original images containing occluding objects and multiple label items indicating the occluding objects' locations in the original images, where the images and labels correspond one-to-one: each original image among the multiple original images corresponds to one label item among the multiple label items. Of course, the multiple label items may or may not be identical to one another; for example, they may all be "occlusion", or they may be "occlusion 1", "occlusion 2", "occlusion 3", and so on, without limitation here.
The multiple original images containing occluding objects may be obtained by shooting with the mobile terminal's camera, obtained from the mobile terminal's local storage, or obtained by the mobile terminal from a server, without limitation here. In addition, the multiple label items may be annotated manually by users on the basis of the original images, or annotated automatically by the mobile terminal on the basis of the original images, without limitation here; a label item may take the form of an annotation box added to the original image to form an annotated image, or of an annotation of the original image in the form of an XML document.
Step S220: training a preset neural network based on the multiple original images and the multiple label items to obtain the trained target detection model.
As one approach, after the multiple original images and multiple label items are obtained, they are used as the training dataset to train a preset neural network, thereby obtaining the trained target detection model. It can be understood that the multiple original images and multiple label items can be input into the preset neural network in one-to-one corresponding pairs for training. In addition, after the trained target detection model is obtained, its accuracy can be verified by judging whether the trained model's output for given input data meets a preset requirement; when it does not, a training dataset can be recollected to retrain the preset neural network, or additional training datasets can be obtained to correct the trained model, without limitation here. The preset neural network may be trained based on the SSD algorithm, the Faster R-CNN algorithm, the YOLO algorithm, and so on, which is not detailed here.
Step S230: acquiring a preview image, and inputting the preview image into the trained target detection model.
Step S240: obtaining the information output by the trained target detection model.
For the specific description of steps S230 and S240, refer to steps S110 and S120, which is not repeated here.
Step S250: when the information is read to include the location information of an occluding object in the preview image, obtaining the area of the occluded region and the area of the preview image, and calculating the ratio of the area of the occluded region to the area of the preview image.
In this embodiment, the location information is the occluded region of the occluding object in the preview image, as shown in Fig. 3 and Fig. 4, where in Fig. 3 A denotes the preview image and B denotes the occluding object, and in Fig. 4 C denotes the occluded region of occluding object B in preview image A. Therefore, as one approach, when the information is read to include the location information of occluding object B in preview image A, the corresponding occluded region C can be obtained based on B's location information. It can be understood that the size of occluded region C is at least that of occluding object B; that is, region C may be the same size as B or larger than B. In addition, the shape of region C may be the same as or different from the shape of B, and region C may be an irregular polygon, a circle, an ellipse, a regular polygon, and so on. Optionally, in Fig. 4 the occluding object B is a finger and the occluded region C is a rectangle.
As one approach, after the occluded region of the occluding object in the preview image is determined, the area of the occluded region and the area of the preview image can be obtained, and the area ratio between the occluded region and the preview image calculated from them. As shown in Fig. 4, the area of occluded region C can be calculated as the product of its length and width and denoted S1, and the area of preview image A can be calculated as the product of its length and width and denoted S2; the area ratio S1/S2 of occluded region C to preview image A can then be calculated. In addition, the area S2 of preview image A may be a fixed value, which is not limited here.
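The area-ratio computation can be written out directly. The preview dimensions 8 x 6 and occluded-region dimensions 2 x 1 follow the coordinate example used later in this description, and the 0.25 preset threshold is an assumed value for illustration.

```python
def area_ratio(occ_w, occ_h, img_w, img_h):
    """S1 / S2: the occluded region's share of the preview image's area."""
    s1 = occ_w * occ_h  # area of occluded region C
    s2 = img_w * img_h  # area of preview image A
    return s1 / s2

ratio = area_ratio(occ_w=2, occ_h=1, img_w=8, img_h=6)  # 2 / 48
crop_allowed = ratio < 0.25  # preset area ratio; the value is assumed
```

When `crop_allowed` is true, the flow proceeds to step S270 (crop); otherwise the user is prompted to recapture the image.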
Step S260: judging whether the area ratio is less than a preset area ratio.
In this embodiment, the mobile terminal is provided with a preset area ratio, which may be set in advance or configured at the time of judging; in addition, the preset area ratio may be stored in advance locally on the mobile terminal or stored in advance on a server, without limitation here. As one approach, after the ratio of the occluded region's area to the preview image's area is obtained, it is compared with the preset area ratio to judge whether the area ratio is less than the preset area ratio. It can be understood that when the value of the area ratio is less than the value of the preset area ratio, it is determined that the area ratio is less than the preset area ratio; when the value of the area ratio is not less than the value of the preset area ratio, it is determined that the area ratio is not less than the preset area ratio.
Step S270: when the area ratio is less than the preset area ratio, cropping the occluded region out of the preview image to obtain a target image that does not contain the occluded region.
When the ratio of the occluded region's area to the preview image's area is less than the preset area ratio, it indicates that the occluded region corresponding to the occluding object occupies a small proportion of the preview image, so cropping it out has little effect on the overall quality of the photo. Therefore, as one approach, when the ratio is determined to be less than the preset area ratio, the occluded region in the preview image can be cropped out to obtain a target image that does not contain the occluded region.
Conversely, when the ratio of the occluded region's area to the preview image's area is not less than the preset area ratio, it indicates that the occluded region occupies a large proportion of the preview image, so cropping it out would significantly affect the overall quality of the photo. Therefore, as one approach, in this case the occluded region in the preview image is not cropped, and prompt information is issued, where the prompt information is used to prompt the user to recapture the image so as to obtain a higher-quality target image.
Referring to Fig. 5, Fig. 5 shows a schematic flowchart of step S270 of the photographing processing method shown in Fig. 2. The flow shown in Fig. 5 is described in detail below; the method may specifically include the following steps:
Step S271: obtaining an invalid region in the preview image based on the occluded region, wherein the invalid region is the region in the preview image that shares coordinates with the occluded region under the image coordinate system.
Referring to Fig. 6, as one approach, after the occluded region C is obtained, the invalid region D in the preview image can be obtained based on region C, where invalid region D is the region in preview image A that shares coordinates with region C under its coordinate system. For example, if region C spans abscissa [0, 2] and ordinate [4, 5], then invalid region D comprises all areas whose abscissa lies in [0, 2] or whose ordinate lies in [4, 5]. It should be understood that if the shape of region C is a rectangle, the shape of region D is an irregular polygon composed of two rectangles; if the shape of region C is a circle, the shape of region D may also be an irregular polygon composed of two rectangles, without limitation here.
In addition, if the invalid region does not reach the edge of the preview image, it can be extended to the edge. For example, if the preview image spans abscissa [0, 8] and ordinate [0, 6], and the occluded region C spans abscissa [0, 2] and ordinate [4, 5], then it can be determined that the ordinate of the occluded region does not reach the edge of the preview image; at this point, the ordinate of invalid region D can be extended to the edge of preview image A, so that invalid region D comprises all areas whose abscissa lies in [0, 2] or whose ordinate lies in [4, 6].
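The coordinate example above can be reproduced in code. The patent only shows that ordinate [4, 5] in a [0, 6] image becomes [4, 6]; the "extend toward the nearer edge" rule below is one assumed reading of that example.

```python
def extend_to_edge(lo, hi, img_lo, img_hi):
    """Extend [lo, hi] to the nearer image edge when it touches neither
    edge (an assumed interpretation of the patent's example)."""
    if lo == img_lo or hi == img_hi:
        return lo, hi  # already reaches an edge
    if img_hi - hi <= lo - img_lo:
        return lo, img_hi
    return img_lo, hi

def invalid_region(occ, img):
    """occ/img are (x0, y0, x1, y1). The invalid region is every point
    whose abscissa lies in x_strip OR whose ordinate lies in y_strip."""
    x0, y0, x1, y1 = occ
    ix0, iy0, ix1, iy1 = img
    return {"x_strip": extend_to_edge(x0, x1, ix0, ix1),
            "y_strip": extend_to_edge(y0, y1, iy0, iy1)}

region = invalid_region(occ=(0, 4, 2, 5), img=(0, 0, 8, 6))
# region == {"x_strip": (0, 2), "y_strip": (4, 6)}
```

The abscissa strip [0, 2] already touches the left edge and stays as-is, while the ordinate strip [4, 5] is extended to [4, 6], matching the worked example in the text.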
Step S272: cropping the invalid region out of the preview image to obtain a target image that does not contain the invalid region.
In this embodiment, after the invalid region formed by the occluding object in the preview image is determined, the preview image can be cropped automatically based on the invalid region. It can be understood that cropping along the edge of the invalid region yields a regular target image that does not contain the occluding object, as shown in Fig. 6, where E denotes the target image, thereby improving the presentation of the target image.
Referring to Fig. 7, Fig. 7 shows a schematic flowchart of step S272 of the photographing processing method shown in Fig. 5. The flow shown in Fig. 7 is described in detail below; the method may specifically include the following steps:
Step S2721: judge in the inactive area whether to include the target object focused in the preview image.
In one implementation, the target object on which the camera focus is aligned in the preview image acquired by the mobile terminal is obtained. It can be understood that the object the camera is focused on is the object focused in the preview image, that is, the object the user intends to shoot; it can therefore be assumed that the user expects a complete photograph of that target object. As a practicable implementation, after the inactive area is determined, it can further be judged whether the inactive area includes the target object focused in the preview image. It can be understood that if the inactive area includes the target object and the inactive area is cropped away, the finally generated target image will not contain a complete photograph of the target object and will fail to meet the user's demand. Therefore, whether the inactive area contains the target object can be detected and judged, so as to improve the user experience.
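The check in Step S2721 can be reduced to an overlap test between the focused subject's bounding box and the inactive area. The sketch below models boxes as (x0, y0, x1, y1) tuples and the cross-shaped inactive area as a list of rectangles; that representation is an assumption, since the embodiment does not fix one.

```python
def boxes_overlap(a, b):
    """True when two axis-aligned boxes (x0, y0, x1, y1) share any point."""
    ax0, ay0, ax1, ay1 = a
    bx0, by0, bx1, by1 = b
    return ax0 <= bx1 and bx0 <= ax1 and ay0 <= by1 and by0 <= ay1

def subject_in_inactive_area(subject_box, inactive_boxes):
    """The cross-shaped inactive area is modelled as a list of rectangles."""
    return any(boxes_overlap(subject_box, r) for r in inactive_boxes)
```

If the test returns True, the flow issues the re-capture prompt instead of cropping; otherwise cropping can proceed.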
Step S2722: when the inactive area includes the target object focused in the preview image, issuing prompt information, where the prompt information is used to prompt the user to re-capture the image.
As one implementation, if the inactive area includes the target object, this indicates that a target image containing the complete target object cannot be generated after cropping. Therefore, the inactive area of the preview image may be left uncropped and prompt information may be issued, where the prompt information is used to prompt the user to re-capture the image so as to reacquire a complete target image. In this embodiment, the prompt information may include a sound prompt, a vibration prompt, a text prompt, a picture prompt, and the like, which is not limited herein.
As another implementation, if the inactive area includes the target object, indicating that automatic cropping cannot generate a target image containing the complete target object, the inactive area of the preview image may be left without direct automatic cropping and an operable interface may be displayed, where the operable interface allows the user to choose whether to crop the inactive area; when command information indicating that the user chooses to crop is received, a crop box is displayed, where the crop box allows the user to frame-select the cropping region manually, so as to further meet the user's cropping needs.
Step S2723: when the inactive area does not include the target object focused in the preview image, cropping the inactive area out of the preview image to obtain the effective area of the preview image other than the inactive area.
When the inactive area does not include the target object focused in the preview image, this indicates that a target image containing the complete target object can be generated after cropping. Therefore, the inactive area of the preview image can be cropped directly to obtain the effective area of the preview image other than the inactive area, as shown in Fig. 8, where F denotes the effective area; that is, the inactive area D in the preview image A is cropped away to obtain the effective area F.
Step S2724: scaling the effective area to obtain a target image that does not include the inactive area, where the aspect ratio of the target image is consistent with the aspect ratio of the preview image.
Further, since the effective area is obtained by cropping the inactive area out of the preview image, its aspect ratio may be inharmonious, making the cropped target image look unnatural. Therefore, as one implementation, scaling processing can be performed based on the aspect ratio of the effective area so as to obtain a better-proportioned target image. In this embodiment, the effective area can be scaled to obtain a target image that does not include the inactive area and whose aspect ratio is consistent with that of the preview image. Specifically, the aspect ratio of the effective area and the aspect ratio of the preview image are obtained, and it is then judged whether the two are consistent. If the aspect ratio of the effective area is consistent with that of the preview image, the effective area can be retained and used as the target image; if they are inconsistent, the effective area is adjusted based on the aspect ratio of the preview image so as to obtain a target image whose aspect ratio is consistent with that of the preview image. The aspect ratio of the preview image may be a fixed value, which is not limited herein.
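One way to realize the aspect-ratio adjustment of Step S2724 is to trim the effective area to the preview's aspect ratio before display. Center-cropping is chosen here as one plausible reading of the "scaling processing", not the embodiment's mandated method; names are illustrative.

```python
import numpy as np

def match_aspect(region, target_w, target_h):
    """Center-crop `region` so width:height approaches target_w:target_h
    (integer arithmetic, so the match is exact only when sizes divide)."""
    h, w = region.shape[:2]
    if w * target_h > h * target_w:          # region too wide: trim columns
        new_w = h * target_w // target_h
        off = (w - new_w) // 2
        return region[:, off:off + new_w]
    new_h = w * target_h // target_w         # too tall (or exact): trim rows
    off = (h - new_h) // 2
    return region[off:off + new_h, :]
```

A stretch-resize (e.g. bilinear interpolation to the target size) would be the other natural reading; it preserves all pixels of the effective area at the cost of distorting proportions.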
As shown in Fig. 8, if the aspect ratio of the effective area F is inconsistent with that of the preview image A, using the effective area F directly as the target image gives a poor display effect. Therefore, the effective area F can be adjusted based on the aspect ratio of the preview image A to obtain a target image G whose aspect ratio is consistent with that of the preview image A, as shown in Fig. 9, so as to improve the display effect of the target image.
In the photographing processing method provided by another embodiment of the application, multiple original images containing an obstruction and multiple label information items indicating the position of the obstruction in each original image are acquired, where the multiple original images correspond one-to-one with the multiple label information items, and a preset neural network is trained based on the multiple original images and the multiple label information items to obtain the target detection model. A preview image is then acquired and input into the trained target detection model, and the information output by the trained target detection model is obtained. When the read information includes the position information of an obstruction in the preview image, the area of the occlusion area and the area of the preview image are obtained, the ratio of the area of the occlusion area to the area of the preview image is calculated, and it is judged whether the ratio is smaller than a preset area ratio; when it is, the occlusion area in the preview image is cropped to obtain a target image that does not include the occlusion area. Compared with the photographing processing method shown in Fig. 1, this embodiment additionally trains and creates the target detection model in advance; meanwhile, this embodiment crops the occlusion area in the preview image only when the ratio of the area of the occlusion area to the area of the preview image is smaller than the preset area ratio, which guarantees the display effect of the target image.
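The area-ratio gate of this embodiment reduces to a one-line predicate; the 0.3 default threshold below is an illustrative assumption, since the embodiment leaves the preset area ratio open.

```python
def should_crop(occ_area, img_area, preset_ratio=0.3):
    """Crop only when the occlusion covers a small fraction of the preview;
    a large occlusion would leave too little of the image worth keeping."""
    return occ_area / img_area < preset_ratio
```

For the 9x7 preview with a 3x2 occlusion used earlier, the ratio is 6/63 (about 0.095), so cropping proceeds; a 6x6 occlusion (36/63) would be left uncropped.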
Referring to Fig. 10, Fig. 10 is a block diagram of a photographing processing device 200 provided by an embodiment of the application. The photographing processing device 200 is applied to the mobile terminal 100 described above. The block diagram shown in Fig. 10 is explained below; the photographing processing device 200 includes an image capture module 210, an information obtaining module 220, and an image cropping module 230, in which:
The image capture module 210 is configured to acquire a preview image and input the preview image into the trained target detection model.
The information obtaining module 220 is configured to obtain the information output by the trained target detection model.
The image cropping module 230 is configured to, when the read information includes the position information of an obstruction in the preview image, crop the preview image based on the position information to obtain a target image that does not include the obstruction. Further, when the position information is the occlusion area of the obstruction in the preview image, the image cropping module 230 includes an area acquisition submodule, an area judging submodule, and an image cropping submodule, in which:
The area acquisition submodule is configured to obtain the area of the occlusion area and the area of the preview image, and to calculate the ratio of the area of the occlusion area to the area of the preview image;
The area judging submodule is configured to judge whether the area ratio is smaller than a preset area ratio;
The image cropping submodule is configured to crop the occlusion area in the preview image when the area ratio is smaller than the preset area ratio, so as to obtain a target image that does not include the occlusion area. Further, the image cropping submodule includes an inactive area acquiring unit and an inactive area cropping unit, in which:
The inactive area acquiring unit is configured to obtain the inactive area in the preview image based on the occlusion area, where the inactive area is the region that shares coordinates with the occlusion area under the image coordinate system of the preview image.
The inactive area cropping unit is configured to crop the inactive area out of the preview image to obtain a target image that does not include the inactive area. Further, the inactive area cropping unit includes an inactive area judgment subunit, a prompt information sending subunit, an inactive area cropping subunit, and an effective area scaling subunit, in which:
The inactive area judgment subunit is configured to judge whether the inactive area includes the target object focused in the preview image.
The prompt information sending subunit is configured to issue prompt information when the inactive area includes the target object focused in the preview image, where the prompt information is used to prompt the user to re-capture the image.
The inactive area cropping subunit is configured to, when the inactive area does not include the target object focused in the preview image, crop the inactive area out of the preview image to obtain the effective area of the preview image other than the inactive area.
The effective area scaling subunit is configured to scale the effective area to obtain a target image that does not include the inactive area, where the aspect ratio of the target image is consistent with the aspect ratio of the preview image.
Further, the photographing processing device 200 also includes a label acquisition module and a model training module, in which:
The label acquisition module is configured to obtain multiple original images containing an obstruction and multiple label information items indicating the position of the obstruction in each original image, where the multiple original images correspond one-to-one with the multiple label information items.
The model training module is configured to train a preset neural network based on the multiple original images and the multiple label information items to obtain the trained target detection model.
Those skilled in the art can clearly understand that, for convenience and brevity of description, the specific working processes of the device and modules described above may refer to the corresponding processes in the foregoing method embodiments and are not repeated here.
In the several embodiments provided by the application, the coupling between modules may be electrical, mechanical, or of other forms.
In addition, the functional modules in the embodiments of the application may be integrated into one processing module, or each module may exist physically alone, or two or more modules may be integrated into one module. The integrated module may be implemented in the form of hardware or in the form of a software function module.
Referring to Fig. 11, Fig. 11 is a structural block diagram of a mobile terminal 100 provided by an embodiment of the application. The mobile terminal 100 may be a smart phone, a tablet computer, an e-book reader, or another electronic device capable of running application programs. The mobile terminal 100 of the application may include one or more of the following components: a processor 110, a memory 120, a screen 130, a camera 140, and one or more application programs, where the one or more application programs may be stored in the memory 120 and configured to be executed by the one or more processors 110, and the one or more programs are configured to execute the methods described in the foregoing method embodiments.
The processor 110 may include one or more processing cores. The processor 110 connects the various parts of the entire electronic device 100 through various interfaces and lines, and executes the various functions of the electronic device 100 and processes data by running or executing the instructions, programs, code sets, or instruction sets stored in the memory 120 and calling the data stored in the memory 120. Optionally, the processor 110 may be implemented in hardware using at least one of Digital Signal Processing (DSP), Field-Programmable Gate Array (FPGA), and Programmable Logic Array (PLA). The processor 110 may integrate one or a combination of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a modem, and the like, where the CPU mainly handles the operating system, user interface, application programs, and so on; the GPU is responsible for rendering and drawing display content; and the modem handles wireless communication. It can be understood that the modem may also not be integrated into the processor 110 and may instead be implemented by a separate communication chip.
The memory 120 may include Random Access Memory (RAM) and may also include Read-Only Memory (ROM). The memory 120 may be used to store instructions, programs, code, code sets, or instruction sets. The memory 120 may include a program storage area and a data storage area, where the program storage area may store instructions for implementing the operating system, instructions for implementing at least one function (such as a touch function, a sound playing function, an image playing function, etc.), and instructions for implementing the following method embodiments; the data storage area may store data created by the terminal 100 during use (such as a phone book, audio and video data, and chat record data).
The screen 130 is used to display information input by the user, information provided to the user, and the various graphical user interfaces of the mobile terminal 100; these graphical user interfaces may be composed of graphics, text, icons, numbers, video, and any combination thereof. In one example, the screen 130 may be a Liquid Crystal Display (LCD) or an Organic Light-Emitting Diode (OLED) display, which is not limited herein.
The camera 140 may be fixedly arranged on the mobile terminal 100, slidably arranged on the mobile terminal 100, or rotatably arranged on the mobile terminal 100, which is not limited herein.
Referring to Fig. 12, Fig. 12 is a structural block diagram of a computer-readable storage medium provided by an embodiment of the application. Program code is stored in the computer-readable medium 300, and the program code can be called by a processor to execute the methods described in the foregoing method embodiments.
The computer-readable storage medium 300 may be an electronic memory such as a flash memory, an EEPROM (Electrically Erasable Programmable Read-Only Memory), an EPROM, a hard disk, or a ROM. Optionally, the computer-readable storage medium 300 includes a non-transitory computer-readable storage medium. The computer-readable storage medium 300 has a storage space for the program code 310 that executes any of the method steps in the above methods. The program code can be read from, or written into, one or more computer program products. The program code 310 may, for example, be compressed in an appropriate form.
In conclusion processing method of taking pictures, device, mobile terminal and storage medium provided by the embodiments of the present application, are adopted Collect preview image, which is inputted to the target detection model trained, obtains the target detection model output trained Information, when reading the information includes the location information of shelter in preview image, based on the location information to the preview Image is cut, and acquisition does not include the target image of the shelter, to pass through the target detection model trained to preview Image carries out occlusion detection, and the location information of output shelter is according to testing result to provide the foundation of cutting, to obtain Do not include the target image of shelter, promotes shooting effect.
Finally, it should be noted that above embodiments are only to illustrate the technical solution of the application, rather than its limitations;Although The application is described in detail with reference to the foregoing embodiments, those skilled in the art are when understanding: it still can be with It modifies the technical solutions described in the foregoing embodiments or equivalent replacement of some of the technical features;And These are modified or replaceed, do not drive corresponding technical solution essence be detached from each embodiment technical solution of the application spirit and Range.

Claims (11)

  1. A photographing processing method, characterized in that the method comprises:
    acquiring a preview image, and inputting the preview image into a trained target detection model;
    obtaining information output by the trained target detection model;
    when the read information includes position information of an obstruction in the preview image, cropping the preview image based on the position information to obtain a target image that does not include the obstruction.
  2. The method according to claim 1, characterized in that the trained target detection model is used to detect whether there is an obstruction in the preview image and, when an obstruction is detected in the preview image, to output the position information of the obstruction in the preview image.
  3. The method according to claim 1, characterized in that when the position information is an occlusion area of the obstruction in the preview image, the cropping the preview image based on the position information to obtain a target image that does not include the obstruction comprises:
    cropping the occlusion area in the preview image to obtain a target image that does not include the occlusion area.
  4. The method according to claim 3, characterized in that the cropping the occlusion area in the preview image to obtain a target image that does not include the occlusion area comprises:
    obtaining an inactive area in the preview image based on the occlusion area, wherein the inactive area is the region that shares coordinates with the occlusion area under the image coordinate system of the preview image;
    cropping the inactive area in the preview image to obtain a target image that does not include the inactive area.
  5. The method according to claim 4, characterized in that the cropping the inactive area in the preview image to obtain a target image that does not include the inactive area comprises:
    cropping the inactive area in the preview image to obtain an effective area of the preview image other than the inactive area;
    scaling the effective area to obtain a target image that does not include the inactive area, wherein the aspect ratio of the target image is consistent with the aspect ratio of the preview image.
  6. The method according to claim 5, characterized in that before the cropping the inactive area in the preview image to obtain an effective area of the preview image other than the inactive area, the method further comprises:
    judging whether the inactive area includes a target object focused in the preview image;
    when the inactive area includes the target object focused in the preview image, issuing prompt information, wherein the prompt information is used to prompt a user to re-capture an image;
    when the inactive area does not include the target object focused in the preview image, executing the cropping of the inactive area in the preview image to obtain the effective area of the preview image other than the inactive area.
  7. The method according to any one of claims 3-6, characterized in that before the cropping the occlusion area in the preview image to obtain a target image that does not include the occlusion area, the method further comprises:
    obtaining the area of the occlusion area and the area of the preview image, and calculating an area ratio of the area of the occlusion area to the area of the preview image;
    judging whether the area ratio is smaller than a preset area ratio;
    when the area ratio is smaller than the preset area ratio, executing the cropping of the occlusion area in the preview image to obtain a target image that does not include the occlusion area.
  8. The method according to any one of claims 1-6, characterized in that before the acquiring a preview image and inputting the preview image into a trained target detection model, the method further comprises:
    obtaining multiple original images containing an obstruction and multiple label information items indicating position information of the obstruction in the original images, wherein the multiple original images correspond one-to-one with the multiple label information items;
    training a preset neural network based on the multiple original images and the multiple label information items to obtain the trained target detection model.
  9. A photographing processing device, characterized in that the device comprises:
    an image capture module, configured to acquire a preview image and input the preview image into a trained target detection model;
    an information obtaining module, configured to obtain information output by the trained target detection model;
    an image cropping module, configured to, when the read information includes position information of an obstruction in the preview image, crop the preview image based on the position information to obtain a target image that does not include the obstruction.
  10. A mobile terminal, characterized by comprising a memory and a processor, the memory being coupled to the processor and storing instructions which, when executed by the processor, cause the processor to execute the method according to any one of claims 1-8.
  11. A computer-readable storage medium, characterized in that program code is stored in the computer-readable storage medium, and the program code can be called by a processor to execute the method according to claim 1.
CN201910204839.XA 2019-03-18 2019-03-18 Photographing processing method, device, mobile terminal and storage medium Pending CN109948525A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910204839.XA CN109948525A (en) Photographing processing method, device, mobile terminal and storage medium


Publications (1)

Publication Number Publication Date
CN109948525A true CN109948525A (en) 2019-06-28

Family

ID=67010189

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910204839.XA Pending Photographing processing method, device, mobile terminal and storage medium

Country Status (1)

Country Link
CN (1) CN109948525A (en)


Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101917546A (en) * 2009-03-19 2010-12-15 卡西欧计算机株式会社 Image processing apparatus and image processing method
CN104639748A (en) * 2015-01-30 2015-05-20 深圳市中兴移动通信有限公司 Method and device for displaying desktop background based on borderless mobile phone
CN104899255A (en) * 2015-05-15 2015-09-09 浙江大学 Image database establishing method suitable for training deep convolution neural network
CN106599832A (en) * 2016-12-09 2017-04-26 重庆邮电大学 Method for detecting and recognizing various types of obstacles based on convolution neural network
JP2017142133A (en) * 2016-02-09 2017-08-17 株式会社イシダ Optical inspection device
CN107680069A (en) * 2017-08-30 2018-02-09 歌尔股份有限公司 A kind of image processing method, device and terminal device
CN107730457A (en) * 2017-08-28 2018-02-23 广东数相智能科技有限公司 A kind of image completion method, apparatus, electronic equipment and storage medium
CN108566516A (en) * 2018-05-14 2018-09-21 Oppo广东移动通信有限公司 Image processing method, device, storage medium and mobile terminal
CN108573499A (en) * 2018-03-16 2018-09-25 东华大学 A kind of visual target tracking method based on dimension self-adaption and occlusion detection
CN108648161A (en) * 2018-05-16 2018-10-12 江苏科技大学 The binocular vision obstacle detection system and method for asymmetric nuclear convolutional neural networks
CN108683845A (en) * 2018-05-14 2018-10-19 Oppo广东移动通信有限公司 Image processing method, device, storage medium and mobile terminal
CN108765380A (en) * 2018-05-14 2018-11-06 Oppo广东移动通信有限公司 Image processing method, device, storage medium and mobile terminal
CN108876726A (en) * 2017-12-12 2018-11-23 北京旷视科技有限公司 Method, apparatus, system and the computer storage medium of image procossing
CN109361874A (en) * 2018-12-19 2019-02-19 维沃移动通信有限公司 A kind of photographic method and terminal


Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110645986A (en) * 2019-09-27 2020-01-03 Oppo广东移动通信有限公司 Positioning method and device, terminal and storage medium
CN110677586A (en) * 2019-10-09 2020-01-10 Oppo广东移动通信有限公司 Image display method, image display device and mobile terminal
CN110677586B (en) * 2019-10-09 2021-06-25 Oppo广东移动通信有限公司 Image display method, image display device and mobile terminal
US11770603B2 (en) 2019-10-09 2023-09-26 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Image display method having visual effect of increasing size of target image, mobile terminal, and computer-readable storage medium
CN113014846A (en) * 2019-12-19 2021-06-22 华为技术有限公司 Video acquisition control method, electronic equipment and computer readable storage medium
CN111753783A (en) * 2020-06-30 2020-10-09 北京小米松果电子有限公司 Finger occlusion image detection method, device and medium
CN111753783B (en) * 2020-06-30 2024-05-28 北京小米松果电子有限公司 Finger shielding image detection method, device and medium
CN112162672A (en) * 2020-10-19 2021-01-01 腾讯科技(深圳)有限公司 Information flow display processing method and device, electronic equipment and storage medium
CN115311589A (en) * 2022-10-12 2022-11-08 山东乾元泽孚科技股份有限公司 Hidden danger processing method and equipment for lighting building

Similar Documents

Publication Publication Date Title
CN109948525A (en) Photographing processing method, device, mobile terminal and storage medium
EP3457683B1 (en) Dynamic generation of image of a scene based on removal of undesired object present in the scene
JP6357589B2 (en) Image display method, apparatus, program, and recording medium
CN108566516B (en) Image processing method, device, storage medium and mobile terminal
CN109978805A (en) Photographing processing method, device, mobile terminal and storage medium
CN107230187A (en) The method and apparatus of multimedia signal processing
US20150077591A1 (en) Information processing device and information processing method
CN106570110A (en) De-overlapping processing method and apparatus of image
CN109951635A (en) Photographing processing method, device, mobile terminal and storage medium
CN106648424B (en) Screenshot method and device
CN107168619B (en) User generated content processing method and device
WO2020192692A1 (en) Image processing method and related apparatus
CN106682652B (en) Structure surface disease inspection and analysis method based on augmented reality
US20130076941A1 (en) Systems And Methods For Editing Digital Photos Using Surrounding Context
WO2023025010A1 (en) Stroboscopic banding information recognition method and apparatus, and electronic device
CN102857685A (en) Image capturing method and image capturing system
CN112669197A (en) Image processing method, image processing device, mobile terminal and storage medium
CN108961375A (en) A kind of method and device generating 3-D image according to two dimensional image
CN108494996A (en) Image processing method, device, storage medium and mobile terminal
CN111461070B (en) Text recognition method, device, electronic equipment and storage medium
CN104580892A (en) Method for terminal to take images
WO2015196681A1 (en) Picture processing method and electronic device
US11836847B2 (en) Systems and methods for creating and displaying interactive 3D representations of real objects
CN111784604B (en) Image processing method, device, equipment and computer readable storage medium
CN104580889A (en) Terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20190628