CN108712606B - Reminding method, device, storage medium and mobile terminal - Google Patents

Reminding method, device, storage medium and mobile terminal

Info

Publication number
CN108712606B
Authority
CN
China
Prior art keywords
occlusion
preview image
shooting preview
occlusion area
image
Prior art date
Legal status
Active
Application number
CN201810457182.3A
Other languages
Chinese (zh)
Other versions
CN108712606A
Inventor
王宇鹭
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201810457182.3A
Publication of CN108712606A
Application granted
Publication of CN108712606B


Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80 - Camera processing pipelines; Components thereof

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)
  • Studio Devices (AREA)

Abstract

Embodiments of the present application disclose a reminding method, a device, a storage medium and a mobile terminal. The method comprises: obtaining a shooting preview image when an occlusion detection event is triggered; inputting the shooting preview image into a pre-trained occlusion detection model; determining, according to the output result of the occlusion detection model, whether a first occlusion area exists in the shooting preview image; and, if it is determined that a first occlusion area exists in the shooting preview image, prompting the user to remove an obstruction, wherein the obstruction comprises the object that causes the first occlusion area in the shooting preview image. With the above technical solution, the embodiments of the present application can perform occlusion detection on the shooting preview image by means of a pre-built occlusion detection model, accurately and quickly judge whether an occlusion area exists in the shooting preview image, and, when an occlusion area is determined to exist in the shooting preview image, promptly remind the user to remove the obstruction, thereby effectively improving the quality of the captured image.

Description

Reminding method, device, storage medium and mobile terminal
Technical field
The embodiments of the present application relate to the technical field of image processing, and in particular to a reminding method, a device, a storage medium and a mobile terminal.
Background art
With the rapid development of electronic technology and the continuous improvement of living standards, terminals have become an indispensable part of daily life. Most terminals now provide photo and video capture functions, which are well liked by users and increasingly widely used. Through the camera function of a terminal, users record moments of their lives and store them on the terminal for later recollection, appreciation and review.
However, in some cases, while a user is shooting a photo or a video, an obstruction may partially block the camera, resulting in poor picture quality and affecting the appearance of the captured image. Improving the quality of captured images is therefore of great importance.
Summary of the invention
Embodiments of the present application provide a reminding method, a device, a storage medium and a mobile terminal, which can effectively improve the quality of captured images.
In a first aspect, an embodiment of the present application provides a reminding method, comprising:
when an occlusion detection event is triggered, obtaining a shooting preview image;
inputting the shooting preview image into a pre-trained occlusion detection model; and determining, according to an output result of the occlusion detection model, whether a first occlusion area exists in the shooting preview image;
if it is determined that a first occlusion area exists in the shooting preview image, prompting a user to remove an obstruction, wherein the obstruction comprises an object that causes the first occlusion area in the shooting preview image.
In a second aspect, an embodiment of the present application provides a reminding device, comprising:
a shooting preview image acquisition module, configured to obtain a shooting preview image when an occlusion detection event is triggered;
a shooting preview image input module, configured to input the shooting preview image into a pre-trained occlusion detection model;
an occlusion area judgment module, configured to determine, according to an output result of the occlusion detection model, whether a first occlusion area exists in the shooting preview image;
a user prompt module, configured to prompt a user to remove an obstruction if it is determined that a first occlusion area exists in the shooting preview image, wherein the obstruction comprises an object that causes the first occlusion area in the shooting preview image.
In a third aspect, an embodiment of the present application provides a computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the reminding method described in the embodiments of the present application.
In a fourth aspect, an embodiment of the present application provides a mobile terminal, comprising a memory, a processor, and a computer program stored on the memory and executable by the processor, wherein the processor, when executing the computer program, implements the reminding method described in the embodiments of the present application.
According to the prompt scheme provided in the embodiments of the present application, a shooting preview image is obtained when an occlusion detection event is triggered, the shooting preview image is input into a pre-trained occlusion detection model, and whether a first occlusion area exists in the shooting preview image is determined according to the output result of the occlusion detection model; if a first occlusion area is determined to exist, the user is prompted to remove the obstruction, wherein the obstruction comprises the object that causes the first occlusion area in the shooting preview image. With this technical solution, occlusion detection can be performed on the shooting preview image by means of a pre-built occlusion detection model, whether an occlusion area exists can be judged accurately and quickly, and, when an occlusion area is determined to exist in the shooting preview image, the user is promptly reminded to remove the obstruction, which effectively improves the quality of the captured image.
Detailed description of the invention
Fig. 1 is a flowchart of a reminding method provided by an embodiment of the present application;
Fig. 2 is a flowchart of another reminding method provided by an embodiment of the present application;
Fig. 3 is a flowchart of yet another reminding method provided by an embodiment of the present application;
Fig. 4 is a structural block diagram of a reminding device provided by an embodiment of the present application;
Fig. 5 is a schematic structural diagram of a mobile terminal provided by an embodiment of the present application;
Fig. 6 is a schematic structural diagram of another mobile terminal provided by an embodiment of the present application.
Specific embodiment
The technical solution of the present application is further described below with reference to the accompanying drawings and specific embodiments. It should be understood that the specific embodiments described herein are intended only to explain the present application, not to limit it. It should also be noted that, for ease of description, the drawings show only the parts related to the present application rather than the entire structure.
Before the exemplary embodiments are discussed in more detail, it should be mentioned that some of them are described as processes or methods depicted as flowcharts. Although a flowchart describes the steps as a sequential process, many of the steps may be performed in parallel, concurrently or simultaneously, and the order of the steps may be rearranged. A process may be terminated when its operations are completed, but may also have additional steps not shown in the drawings; a process may correspond to a method, a function, a procedure, a subroutine, a subprogram or the like.
Fig. 1 is a flowchart of a reminding method provided by an embodiment of the present application. This embodiment is applicable to image occlusion detection. The method may be performed by a reminding device, which may be implemented in software and/or hardware and is typically integrated in a mobile terminal. As shown in Fig. 1, the method comprises:
Step 101: when an occlusion detection event is triggered, obtain a shooting preview image.
For example, the mobile terminal in the embodiments of the present application may include mobile devices such as mobile phones and tablet computers.
When the occlusion detection event is triggered, a shooting preview image is obtained through the camera of the mobile terminal, so as to start the occlusion detection process.
For example, in order to perform occlusion detection at an appropriate time, a trigger condition of the occlusion detection event may be preset. Optionally, to confirm the user's actual need for occlusion detection, the occlusion detection event may be triggered when it is detected that the current user actively enables the occlusion detection permission. Optionally, in order to apply occlusion detection only in time windows where it is worthwhile and to avoid the extra power consumption it brings, the time windows and application scenarios of occlusion detection may be analyzed or surveyed and a preset scene configured accordingly, and the occlusion detection event is triggered when the mobile terminal is detected to be in the preset scene. For example, the occlusion detection event is triggered when the ambient light brightness at the position of the mobile terminal is greater than a preset brightness threshold. It can be understood that bright ambient light easily causes overexposure of the captured image; to reduce the likelihood of overexposure, users usually shade the camera with clothing or a hand to reduce the influence of the bright light, and in doing so may inadvertently partially block the camera. It should be noted that the embodiments of the present application do not limit the specific form in which the occlusion detection event is triggered.
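The following minimal sketch in Python illustrates such a trigger check. It rests on assumptions: the user toggle and the ambient-light reading are hypothetical inputs supplied by the platform, and the threshold value is illustrative rather than taken from the patent.

```python
# Sketch of a trigger check under the assumptions above.
BRIGHTNESS_THRESHOLD_LUX = 10_000.0  # illustrative preset brightness threshold


def should_trigger_occlusion_detection(user_enabled_detection: bool,
                                       ambient_lux: float) -> bool:
    """Trigger when the user explicitly enabled occlusion detection, or when the
    ambient light reading exceeds the preset brightness threshold (a scene in
    which users often shade the lens with a hand or clothing)."""
    return user_enabled_detection or ambient_lux > BRIGHTNESS_THRESHOLD_LUX
```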
In the embodiments of the present application, the shooting preview image is obtained when the occlusion detection event is triggered. It can be understood that when the user wants to take a picture, the shooting function of the terminal is started, for example the camera application on the terminal is opened; that is, the camera of the terminal is started and the shooting preview interface is entered, and the image presented in the shooting preview interface, i.e. the shooting preview image, is obtained. It can be understood that the shooting preview image may contain the content the user wants to shoot (such as a person or a landscape) as presented in the shooting preview interface. The camera may be a 2D camera or a 3D camera. A 3D camera, which may also be called a 3D sensor, differs from an ordinary camera (i.e. a 2D camera) in that it can obtain not only a planar image but also depth information of the photographed object, that is, three-dimensional position and size information. When the camera is a 2D camera, the obtained shooting preview image is a 2D shooting preview image; when the camera is a 3D camera, the obtained shooting preview image is a 3D shooting preview image.
Step 102: input the shooting preview image into the pre-trained occlusion detection model.
The occlusion detection model can be understood as a learning model that, given a shooting preview image as input, quickly judges whether the shooting preview image contains an occlusion area. The occlusion detection model may be any one of machine learning models such as a neural network model, a decision tree model and a random forest model. The occlusion detection model may be generated by training on images in a sample database together with the corresponding judgments of whether an occlusion area exists. For example, the occlusion detection model is generated based on the different characteristic patterns presented by images that contain an occlusion area and images that do not. It can be understood that images with an occlusion area and images without one exhibit different characteristics; these different characteristic patterns can therefore be learned to generate the occlusion detection model. The differing characteristics may include at least one of the brightness of the image, the blurriness of the image, the texture of the image and the exposure of the image. When the occlusion detection event is triggered, the shooting preview image is obtained and input into the occlusion detection model, so that whether the shooting preview image contains an occlusion area can subsequently be determined from the model's analysis of the shooting preview image.
Step 103: determine, according to the output result of the occlusion detection model, whether a first occlusion area exists in the shooting preview image.
In the embodiments of the present application, after the shooting preview image obtained in step 101 is input into the pre-trained occlusion detection model, the model analyzes the characteristic information of the shooting preview image, and whether an occlusion area exists in the shooting preview image can be determined according to the analysis result. For example, when the output result of the occlusion detection model is "0", it is determined that no first occlusion area exists in the shooting preview image, and when the output result is "1", it is determined that a first occlusion area exists; alternatively, the meanings of "1" and "0" may be reversed, or the output may be "no" to indicate that no first occlusion area exists and "yes" to indicate that one does. The embodiments of the present application do not limit this.
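As a concrete illustration of steps 102 and 103, the sketch below extracts global characteristics of the kind named above (brightness, blurriness, texture, exposure) and feeds them to a previously trained classifier whose output 1 is read as "occlusion present". The feature set, the OpenCV-based measures and the joblib file name are assumptions made for illustration, not details taken from the patent.

```python
import cv2
import joblib  # assumed serialization of the trained model; not specified in the patent
import numpy as np


def preview_features(image_bgr: np.ndarray) -> np.ndarray:
    """Coarse global features of the kind the text names: brightness, blurriness,
    texture and exposure. The exact feature set is an assumption."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    brightness = gray.mean()
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()   # low variance -> blurrier
    texture = cv2.Canny(gray, 50, 150).mean()           # edge density as a texture proxy
    overexposed = float((gray > 240).mean())             # fraction of near-saturated pixels
    return np.array([brightness, sharpness, texture, overexposed])


def has_occlusion(image_bgr: np.ndarray, model) -> bool:
    """True when the detector outputs 1, taken here to mean 'occlusion present'."""
    return int(model.predict(preview_features(image_bgr).reshape(1, -1))[0]) == 1


# model = joblib.load("occlusion_detector.joblib")
# occluded = has_occlusion(preview_frame, model)
```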
Step 104: if it is determined that a first occlusion area exists in the shooting preview image, prompt the user to remove the obstruction.
The obstruction comprises the object that causes the first occlusion area in the shooting preview image.
In the embodiments of the present application, when it is determined that a first occlusion area exists in the shooting preview image, i.e. when an occlusion area is present, there is an obstruction in front of the camera that affects the appearance of the captured image, and the user may be prompted to remove it. The obstruction may include a finger, clothing, foreign matter on the camera, or any other object that is unrelated to the photographed subject and degrades the quality of the captured image. For example, when a first occlusion area is determined to exist in the shooting preview image, a prompt may be issued: "There is an obstruction in front of the camera causing an occlusion area in the shooting preview image; please remove it." The user may be prompted in text form or by voice broadcast; the embodiments of the present application do not specifically limit the form of the prompt.
According to the reminding method provided by the embodiments of the present application, a shooting preview image is obtained when an occlusion detection event is triggered, the shooting preview image is input into a pre-trained occlusion detection model, and whether a first occlusion area exists in the shooting preview image is determined according to the output result of the occlusion detection model; if a first occlusion area is determined to exist, the user is prompted to remove the obstruction, wherein the obstruction comprises the object that causes the first occlusion area in the shooting preview image. With this technical solution, occlusion detection can be performed on the shooting preview image by a pre-built occlusion detection model, whether an occlusion area exists can be judged accurately and quickly, and the user is promptly reminded to remove the obstruction when an occlusion area exists, which effectively improves the quality of the captured image.
In some embodiments, before the occlusion detection event is triggered, the method comprises: obtaining first sample images, wherein the first sample images include images in which an occlusion area exists; recording the occlusion-area existence result of each first sample image as the sample label of that first sample image, wherein the occlusion-area existence result indicates either that an occlusion area exists or that no occlusion area exists; and training a first preset machine learning model according to the first sample images and the corresponding sample labels to obtain the occlusion detection model. The advantage of this arrangement is that labeling each sample image with its occlusion-area existence result, i.e. using the existence result as the sample label of the corresponding sample image, can greatly improve the accuracy of training the occlusion detection model.
In the embodiments of the present application, first sample images are obtained, including both images with an occlusion area and images without one, i.e. an occlusion area exists in some of the first sample images and not in others. For example, 5000 first sample images are obtained, of which 3000 may contain an occlusion area and 2000 may not; the numbers of images with and without an occlusion area among the first sample images are not limited. In addition, the first sample images may include images from an online image library, images captured in a local gallery, or a combination of the two. After the first sample images are obtained, the occlusion-area existence result of each image is recorded as its sample label. For example, when an occlusion area exists in a first sample image, this is denoted by 1 and the sample image is labeled 1, i.e. 1 is used as the sample label of that image; when no occlusion area exists, this is denoted by 0 and the image is labeled 0. A first preset machine learning model is then trained according to the first sample images and the corresponding sample labels to obtain the occlusion detection model. It can be understood that the first sample images and their labels serve as the training sample set with which the first preset machine learning model is trained to generate the occlusion detection model. The first preset machine learning model may be any one of a neural network model, a decision tree model, a random forest model and a naive Bayes model; the embodiments of the present application do not limit it.
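A minimal training sketch follows, assuming the random forest option mentioned above and assuming that each first sample image has already been reduced to a feature vector (for example the preview_features() vector from the earlier sketch); the split ratio and hyperparameters are illustrative.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split


def train_occlusion_detector(features: np.ndarray,
                             labels: np.ndarray) -> RandomForestClassifier:
    """features: one row of global image features per first sample image;
    labels: 1 = occlusion area present, 0 = none (the sample labeling above)."""
    x_train, x_val, y_train, y_val = train_test_split(
        features, labels, test_size=0.2, random_state=0, stratify=labels)
    model = RandomForestClassifier(n_estimators=200, random_state=0)
    model.fit(x_train, y_train)
    print("validation accuracy:", model.score(x_val, y_val))
    return model
```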
Optionally, the occlusion-area existence result of a first sample image may be determined from user input; for example, when a user can quickly and intuitively judge by eye whether an occlusion area exists in a first sample image, the existence result of the corresponding image may be determined from the user's input. Alternatively, in order to improve the accuracy of the existence results and thereby further improve the accuracy of training the occlusion detection model, image analysis may be performed on the first sample images, for example analyzing their color distribution features, texture distribution features, blurriness and sharpness, and the occlusion-area existence results may be determined from the analysis results. It should be noted that the embodiments of the present application do not limit how the occlusion-area existence results of the first sample images are determined.
The occlusion detection model is obtained before the occlusion detection event is triggered. It should be noted that the mobile terminal itself may obtain the above first sample images and corresponding sample labels, train the preset machine learning model with them, and directly generate the occlusion detection model. The mobile terminal may also directly call an occlusion detection model trained by another device; for example, before leaving the factory, the training sample set is obtained and the occlusion detection model generated on one mobile terminal, and the model is then stored on other mobile terminals for direct use. Alternatively, a server obtains a large number of sample images, labels them according to their occlusion-area existence results to obtain a training sample set, and trains a preset machine learning model on the training sample set to obtain the occlusion detection model; when the mobile terminal needs to perform occlusion detection, i.e. when the occlusion detection event is triggered, it calls the trained occlusion detection model from the server.
In some embodiments, before the shooting preview image is input into the pre-trained occlusion detection model, the method further comprises: obtaining the blurriness of the shooting preview image; and inputting the shooting preview image into the pre-trained occlusion detection model comprises: when the blurriness is greater than a preset threshold, inputting the shooting preview image into the pre-trained occlusion detection model. The advantage of this arrangement is that only when the detected blurriness of the shooting preview image is relatively high is it further judged whether the shooting preview image contains an occlusion area, which effectively avoids performing occlusion detection in unnecessary situations and further reduces the power consumption of the mobile terminal.
For example, before the shooting preview image is input into the pre-trained occlusion detection model, image analysis is performed on it to determine its blurriness. The blurriness of the shooting preview image may be evaluated based on the concentration of the image histogram, or measured based on step-edge width; the embodiments of the present application do not limit how the blurriness is determined. The blurriness reflects the image quality of the shooting preview image: the higher the blurriness, the worse the image quality, and the lower the blurriness, the better the image quality. It can be understood that when an occlusion area exists in the shooting preview image, the obstruction causing it is usually outside the focal range of the camera, i.e. the camera usually cannot focus on the obstruction when photographing it, so the image region corresponding to the obstruction has high blurriness; the occlusion area is strongly blurred and lacks obvious texture features or sharp edge features, which further increases the blurriness of the whole shooting preview image. Therefore, when the detected blurriness of the shooting preview image is greater than the preset threshold, the image quality is poor and an occlusion area may exist; at this point the shooting preview image can be input into the pre-trained occlusion detection model to further and accurately judge whether an occlusion area exists in it.
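The blur gate could look roughly as follows; the inverse-of-Laplacian-variance measure and the threshold are assumptions chosen for the sketch, since the embodiment leaves the exact blurriness measure open.

```python
import cv2
import numpy as np

BLUR_GATE = 0.01  # assumed preset threshold; it would be tuned per camera module


def blurriness(image_bgr: np.ndarray) -> float:
    """Higher value = blurrier. The inverse of Laplacian variance is used here as
    one common sharpness proxy; histogram concentration and step-edge width,
    also mentioned above, would work equally well."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()
    return 1.0 / (1.0 + sharpness)


def needs_occlusion_check(image_bgr: np.ndarray) -> bool:
    """Gate: only feed the preview image to the detector when it is blurry enough."""
    return blurriness(image_bgr) > BLUR_GATE
```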
In some embodiments, before the shooting preview image is input into the pre-trained occlusion detection model, the method further comprises: detecting whether an object is present within a preset distance of the camera; and inputting the shooting preview image into the pre-trained occlusion detection model comprises: when an object is detected within the preset distance of the camera, inputting the shooting preview image into the pre-trained occlusion detection model. The advantage of this arrangement is that it first roughly detects whether an occlusion area might exist in the shooting preview image: when an object is detected within the preset range of the camera, an occlusion area may exist in the shooting preview image, and only then is the occlusion detection model used to further judge whether the image contains one. This effectively avoids occlusion area detection in unnecessary situations and further reduces the power consumption of the mobile terminal.
For example, before the shooting preview image is input into the pre-trained occlusion detection model, whether an object is present within the preset distance of the camera is detected. A detection device, such as an infrared detector, may be arranged around the camera to detect whether an object is present within the preset distance of the camera, for example within a range of 10 cm. It can be understood that a camera can photograph objects at a considerable distance, and in ordinary shooting the obstruction that causes an occlusion area is usually much closer to the camera than the actual photographed subject. Therefore, when an object is detected within the preset range of the camera, there may be an obstruction around the camera, i.e. the object within the preset range may cause an occlusion area in the shooting preview image; at this point the shooting preview image can be input into the pre-trained occlusion detection model to further and accurately judge whether an occlusion area exists in it.
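A minimal sketch of this gate, assuming a hypothetical proximity reading delivered by the platform (the patent does not name a specific sensor API); the 10 cm limit mirrors the example above.

```python
from typing import Optional

PROXIMITY_LIMIT_CM = 10.0  # the 10 cm range used as an example above


def object_near_camera(nearest_distance_cm: Optional[float]) -> bool:
    """nearest_distance_cm is the reading of an infrared or time-of-flight sensor
    mounted next to the camera (a hypothetical driver call such as
    read_proximity_cm()); None means nothing was detected in range."""
    return nearest_distance_cm is not None and nearest_distance_cm <= PROXIMITY_LIMIT_CM
```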
In some embodiments, prompting the user to remove the obstruction if it is determined that a first occlusion area exists in the shooting preview image comprises: if it is determined that a first occlusion area exists in the shooting preview image, determining the position of the first occlusion area in the shooting preview image, and prompting the user to remove the obstruction according to the position. The advantage of this arrangement is that the approximate location of the obstruction can be determined from the position of the occlusion area, so that the user can accurately remove the obstruction according to that position, which effectively prevents the user from removing the wrong object.
In the embodiments of the present application, in order to have the user remove the obstruction that actually causes the first occlusion area in the shooting preview image, when the first occlusion area is determined to exist, the position of the first occlusion area in the shooting preview image is further determined, and the user is prompted to remove the obstruction according to that position. It can be understood that there may be multiple objects around the camera: some may be the real photographed subject, some are neither the subject nor a cause of an occlusion area in the captured image, and some are indeed obstructions that cause occlusion areas in the captured image. If, on being informed that an occlusion area exists in the shooting preview image, the user blindly removes the objects around the camera one by one, the real photographed subject might be removed, and removing multiple objects takes a long time, which lowers shooting efficiency and degrades the user experience. Therefore, once the position of the first occlusion area in the shooting preview image is determined, the approximate direction in front of the camera in which the obstruction lies can be estimated, and the user can quickly and accurately locate and remove it according to that direction. For example, when a first occlusion area is determined to exist in the upper-left corner of the shooting preview image, i.e. the first occlusion area is in the upper-left of the shooting preview image, it can be roughly inferred that the obstruction causing it lies within the preset distance to the front-left of the camera, and the user can remove the obstruction within the preset distance to the front-left of the camera according to the prompt.
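As an illustration of such a position-based prompt, the sketch below maps the center of the detected occlusion area to a coarse direction, following the upper-left example above; the quadrant mapping and the message wording are assumptions.

```python
def removal_hint(region_cx: float, region_cy: float,
                 width: int, height: int) -> str:
    """Map the center of the first occlusion area (pixel coordinates in the
    shooting preview image of size width x height) to a coarse direction for
    the removal prompt."""
    horizontal = "left" if region_cx < width / 2 else "right"
    vertical = "upper" if region_cy < height / 2 else "lower"
    return (f"An object is blocking the {vertical}-{horizontal} part of the view; "
            f"please check in front of the camera on the {horizontal} side and remove it.")
```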
In some embodiments, determining the position of the first occlusion area in the shooting preview image comprises: inputting the shooting preview image into a pre-trained occlusion area determination model, wherein the occlusion area determination model is generated based on the characteristic patterns that occlusion areas present in images; and determining the position of the first occlusion area in the shooting preview image according to the output result of the occlusion area determination model. The advantage of this arrangement is that the pre-built occlusion area determination model can quickly and accurately determine the specific position of the occlusion area in the shooting preview image.
In the embodiments of the present application, the occlusion area determination model can be understood as a learning model that, given a shooting preview image as input, quickly determines the specific position of the occlusion area within it. The occlusion area determination model may be any one of machine learning models such as a neural network model, a decision tree model and a random forest model. It may be generated by training on a sample training set of images in which an occlusion area exists and in which the position of the occlusion area has been marked. For example, the occlusion area determination model is generated based on the characteristic patterns that occlusion areas present in images. It can be understood that the characteristics presented by the occluded and non-occluded regions of an image are different, so the characteristic patterns that occlusion areas present in images can be learned to generate the occlusion area determination model. The characteristics an occlusion area presents in an image may include at least one of the size of the occlusion area in the image, its position in the image, its shape in the image, its brightness, its color, its blurriness and its texture.
When the occlusion detection model determines that a first occlusion area exists in the shooting preview image, the shooting preview image is input into the pre-trained occlusion area determination model, which analyzes the characteristic information of the shooting preview image and determines, from the analysis result, the position of the first occlusion area in the shooting preview image, i.e. which specific partial image region of the shooting preview image the first occlusion area is.
It can be understood that the occlusion area determination model and the occlusion detection model are two different learning models. The occlusion detection model is mainly used to judge whether the shooting preview image contains an occlusion area; it can only output a judgment of whether an occlusion area is present and cannot determine the specific position of the occlusion area in the shooting preview image. The occlusion area determination model, on the other hand, is mainly used to accurately determine the specific position of the occlusion area in the shooting preview image, i.e. which specific block of the shooting preview image is the occlusion area. The shape of the occlusion area may be regular or irregular. In addition, since the occlusion detection model only judges whether an occlusion area exists in the shooting preview image, its processing speed is generally higher than that of the occlusion area determination model.
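The sketch below shows one simple way such an occlusion area determination model could be realized: a per-block classifier applied over a coarse grid, reusing the preview_features() helper from the earlier sketch. The grid size and the block-classifier approach are assumptions; the patent only requires that the model output the position of the occlusion area, and a segmentation network would serve equally well.

```python
from typing import Optional, Tuple

import numpy as np


def locate_occlusion(image_bgr: np.ndarray, block_model,
                     grid: int = 4) -> Optional[Tuple[int, int, int, int]]:
    """Split the preview into grid x grid blocks, score each block with a
    per-block classifier (block_model), and return the bounding box
    (x0, y0, x1, y1) of the blocks flagged as occluded, or None."""
    h, w = image_bgr.shape[:2]
    bh, bw = h // grid, w // grid
    flagged = []
    for r in range(grid):
        for c in range(grid):
            block = image_bgr[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw]
            if int(block_model.predict(preview_features(block).reshape(1, -1))[0]) == 1:
                flagged.append((r, c))
    if not flagged:
        return None
    rows = [r for r, _ in flagged]
    cols = [c for _, c in flagged]
    return (min(cols) * bw, min(rows) * bh,
            (max(cols) + 1) * bw, (max(rows) + 1) * bh)
```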
In some embodiments, before the shooting preview image is input into the pre-trained occlusion area determination model, the method further comprises: obtaining second sample images, wherein each second sample image is an image in which a second occlusion area exists; marking the position of the second occlusion area in each second sample image and using the second sample images with the marked second-occlusion-area positions as a training sample set; and training a second preset machine learning model with the training sample set so that it learns the characteristic patterns of the second occlusion areas, thereby obtaining the occlusion area determination model. The advantage of this arrangement is that using second sample images containing second occlusion areas as the sample source for the occlusion area determination model can greatly improve the accuracy of training the model.
In the embodiments of the present application, second sample images are obtained, each being an image in which an occlusion area exists. The second occlusion area in a second sample image may be determined by image processing techniques, or from a region-selection operation performed by a user. The second occlusion area is marked in the second sample image, i.e. the specific position of the second occlusion area is annotated in the corresponding second sample image. The second sample images with the marked second-occlusion-area positions are used as the training sample set, with which the second preset machine learning model is trained to obtain the occlusion area determination model. The second preset machine learning model may be any one of a neural network model, a decision tree model, a random forest model and a naive Bayes model; the embodiments of the present application do not limit it. In addition, the second preset machine learning model may be the same as or different from the first preset machine learning model mentioned above; this is not limited either.
The occlusion area determination model is obtained before the shooting preview image is input into it. It should be noted that the mobile terminal itself may obtain the above second sample images, use the second sample images with the marked second-occlusion-area positions as the training sample set, train the second preset machine learning model with the training sample set, and directly generate the occlusion area determination model; the mobile terminal may also directly call an occlusion area determination model trained by another mobile terminal. Alternatively, a server may train a preset machine learning model on the training sample set to obtain the occlusion area determination model, and the mobile terminal calls the trained model from the server when it needs to further determine the specific position of the occlusion area in the shooting preview image.
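Continuing the per-block assumption from the localization sketch, the helper below turns one second sample image with its marked occlusion box into per-block training pairs; the box format and grid size are illustrative, and the resulting pairs could be fed to a classifier such as the random forest trained earlier to obtain the block_model used above.

```python
from typing import List, Tuple

import numpy as np


def block_training_pairs(image_bgr: np.ndarray,
                         marked_box: Tuple[int, int, int, int],
                         grid: int = 4) -> List[Tuple[np.ndarray, int]]:
    """Turn one second sample image plus its marked occlusion box (x0, y0, x1, y1)
    into per-block (features, label) pairs: label 1 if the block overlaps the
    marked second occlusion area, 0 otherwise."""
    h, w = image_bgr.shape[:2]
    bh, bw = h // grid, w // grid
    x0, y0, x1, y1 = marked_box
    pairs = []
    for r in range(grid):
        for c in range(grid):
            bx0, by0, bx1, by1 = c * bw, r * bh, (c + 1) * bw, (r + 1) * bh
            overlaps = not (bx1 <= x0 or bx0 >= x1 or by1 <= y0 or by0 >= y1)
            block = image_bgr[by0:by1, bx0:bx1]
            pairs.append((preview_features(block), int(overlaps)))
    return pairs
```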
In some embodiments, after prompting the user to remove the obstruction, the method further comprises: judging whether feedback that the obstruction has been removed is received; and, when the feedback is received, capturing the shooting preview image. It can be understood that when the feedback that the obstruction has been removed is received, no occlusion area exists in the shooting preview image any longer, i.e. the obstruction that caused the occlusion area is no longer in front of the camera, so the shooting preview image can be captured directly. This effectively ensures the quality of the captured image and keeps it free of occlusion areas.
The feedback information can be understood as confirmation that the obstruction has been removed. For example, a confirmation option asking whether the obstruction has been removed may be provided in the human-computer interaction interface of the terminal device. The option may include "yes" and "no": "yes" indicates that the user has removed the obstruction, and "no" indicates that the user has not removed it.
Optionally, within a preset period after the user is prompted to remove the obstruction, the shooting preview image is input into the pre-trained occlusion detection model again to judge whether the currently obtained shooting preview image still contains an occlusion area; if not, the user has removed the obstruction and the shooting preview image can be captured directly. Optionally, within the preset period after the user is prompted to remove the obstruction, whether an object is present within the preset distance of the camera is detected; when no object is detected within that range, the user has removed the obstruction and the shooting preview image can be captured directly. The advantage of this arrangement is that the quality of the captured image is effectively ensured and the image is kept free of occlusion areas.
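A sketch of this re-check, assuming the detector and has_occlusion() helper from the earlier sketches and a hypothetical get_preview() callback that returns the current preview frame; the timeout and polling interval are illustrative, and the embodiment notes that the feedback may equally come from an explicit yes/no dialog.

```python
import time


def wait_for_removal(get_preview, detector, timeout_s: float = 5.0,
                     poll_s: float = 0.5) -> bool:
    """After prompting, poll fresh preview frames for up to timeout_s seconds and
    report whether the occlusion has disappeared."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if not has_occlusion(get_preview(), detector):  # from the earlier sketch
            return True
        time.sleep(poll_s)
    return False
```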
Fig. 2 is a flowchart of another reminding method provided by an embodiment of the present application. As shown in Fig. 2, the method comprises:
Step 201: obtain first sample images.
The first sample images include images in which an occlusion area exists.
Step 202: record the occlusion-area existence result of each first sample image as the sample label of that first sample image.
The occlusion-area existence result indicates either that an occlusion area exists or that no occlusion area exists.
Step 203: train a first preset machine learning model according to the first sample images and the corresponding sample labels to obtain an occlusion detection model.
Step 204: when an occlusion detection event is triggered, obtain a shooting preview image.
Step 205: obtain the blurriness of the shooting preview image.
Step 206: judge whether the blurriness is greater than a preset threshold; if so, execute step 207; otherwise, execute step 212.
Step 207: input the shooting preview image into the occlusion detection model.
Step 208: determine, according to the output result of the occlusion detection model, whether a first occlusion area exists in the shooting preview image; if so, execute step 209; otherwise, execute step 212.
Step 209: prompt the user to remove the obstruction.
The obstruction comprises the object that causes the first occlusion area in the shooting preview image.
Step 210: judge whether feedback that the obstruction has been removed is received; if so, execute step 211; otherwise, return to step 209.
Step 211: capture the shooting preview image.
Step 212: determine that no occlusion area exists in the shooting preview image and capture the shooting preview image directly.
According to the reminding method provided in this embodiment of the present application, when the detected blurriness of the shooting preview image is relatively high, the shooting preview image is input into the pre-trained occlusion detection model; when the output result of the occlusion detection model indicates that a first occlusion area exists in the shooting preview image, the user is prompted to remove the obstruction, and when feedback that the obstruction has been removed is received, the shooting preview image is captured, wherein the occlusion detection model is generated by training on the first sample images and the corresponding sample labels. With this technical solution, occlusion detection in unnecessary situations is effectively avoided and, on the premise of reducing the power consumption of the mobile terminal, whether an occlusion area exists in the shooting preview image can be judged accurately and quickly; the user is promptly reminded to remove the obstruction when an occlusion area exists, and the shooting preview image is captured once the removal feedback is received, which effectively ensures the quality of the captured image.
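Under the assumptions of the earlier sketches (needs_occlusion_check, has_occlusion, wait_for_removal), the Fig. 2 flow could be wired together roughly as follows; capture, get_preview and prompt_user are hypothetical platform callbacks, and the timeout fallback at the end differs from Fig. 2, which re-prompts instead.

```python
def figure2_flow(get_preview, capture, prompt_user, detector):
    """Sketch of the Fig. 2 flow using the helpers from the earlier sketches."""
    preview = get_preview()
    if not needs_occlusion_check(preview):       # step 206: not blurry -> shoot directly
        return capture(preview)                  # step 212
    if not has_occlusion(preview, detector):     # steps 207-208
        return capture(preview)                  # step 212
    prompt_user("The lens appears to be blocked; please remove the obstruction.")  # step 209
    if wait_for_removal(get_preview, detector):  # step 210
        return capture(get_preview())            # step 211
    return None                                  # Fig. 2 would return to step 209 instead
```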
Fig. 3 is a flowchart of yet another reminding method provided by an embodiment of the present application. As shown in Fig. 3, the method comprises:
Step 301: obtain first sample images.
The first sample images include images in which an occlusion area exists.
Step 302: record the occlusion-area existence result of each first sample image as the sample label of that first sample image.
The occlusion-area existence result indicates either that an occlusion area exists or that no occlusion area exists.
Step 303: train a first preset machine learning model according to the first sample images and the corresponding sample labels to obtain an occlusion detection model.
Step 304: when an occlusion detection event is triggered, obtain a shooting preview image.
Step 305: detect whether an object is present within a preset distance of the camera; if so, execute step 306; otherwise, execute step 316.
Step 306: input the shooting preview image into the occlusion detection model.
Step 307: determine, according to the output result of the occlusion detection model, whether a first occlusion area exists in the shooting preview image; if so, execute step 308; otherwise, execute step 316.
Step 308: obtain second sample images.
Each second sample image is an image in which a second occlusion area exists.
Step 309: mark the position of the second occlusion area in each second sample image, and use the second sample images with the marked positions as a training sample set.
Step 310: train a second preset machine learning model with the training sample set so that it learns the characteristic patterns of the second occlusion areas, and obtain an occlusion area determination model.
Step 311: input the shooting preview image into the pre-trained occlusion area determination model.
Step 312: determine the position of the first occlusion area in the shooting preview image according to the output result of the occlusion area determination model.
Step 313: prompt the user to remove the obstruction according to the position.
The obstruction comprises the object that causes the first occlusion area in the shooting preview image.
Step 314: judge whether feedback that the obstruction has been removed is received; if so, execute step 315; otherwise, return to step 313.
Step 315: capture the shooting preview image.
Step 316: determine that no occlusion area exists in the shooting preview image and capture the shooting preview image directly.
It should be noted that steps 308 to 310 may also be performed before step 304. In that case, steps 301 to 303 may be performed first and steps 308 to 310 afterwards, or steps 308 to 310 may be performed first and steps 301 to 303 afterwards; the embodiments of the present application do not limit this.
According to the reminding method provided in this embodiment of the present application, when an object is detected within the preset distance of the camera, the shooting preview image is input into the pre-trained occlusion detection model. In this way it is first roughly detected whether an occlusion area might exist; only when an object is detected within the preset range of the camera is it further judged whether the shooting preview image contains an occlusion area, which effectively avoids occlusion area detection in unnecessary situations and further reduces the power consumption of the mobile terminal. Furthermore, when the output result of the occlusion detection model indicates that a first occlusion area exists in the shooting preview image, the position of the first occlusion area in the shooting preview image is further determined by the occlusion area determination model, and the user is prompted to remove the obstruction according to that position, so that the user can accurately remove the obstruction and is effectively prevented from removing the wrong object.
Fig. 4 is a structural block diagram of a reminding device provided by an embodiment of the present application. The device may be implemented in software and/or hardware, is typically integrated in a mobile terminal, and can improve the quality of captured images by performing the reminding method. As shown in Fig. 4, the device comprises:
a shooting preview image acquisition module 401, configured to obtain a shooting preview image when an occlusion detection event is triggered;
a shooting preview image input module 402, configured to input the shooting preview image into a pre-trained occlusion detection model;
an occlusion area judgment module 403, configured to determine, according to an output result of the occlusion detection model, whether a first occlusion area exists in the shooting preview image;
a user prompt module 404, configured to prompt a user to remove an obstruction if it is determined that a first occlusion area exists in the shooting preview image, wherein the obstruction comprises an object that causes the first occlusion area in the shooting preview image.
According to the reminding device provided by the embodiments of the present application, a shooting preview image is obtained when an occlusion detection event is triggered, the shooting preview image is input into a pre-trained occlusion detection model, and whether a first occlusion area exists in the shooting preview image is determined according to the output result of the occlusion detection model; if a first occlusion area is determined to exist, the user is prompted to remove the obstruction, wherein the obstruction comprises the object that causes the first occlusion area in the shooting preview image. With this technical solution, occlusion detection can be performed on the shooting preview image by a pre-built occlusion detection model, whether an occlusion area exists can be judged accurately and quickly, and the user is promptly reminded to remove the obstruction when an occlusion area exists, which effectively improves the quality of the captured image. Optionally, the device further comprises:
a first sample image acquisition module, configured to obtain first sample images before the occlusion detection event is triggered, wherein the first sample images include images in which an occlusion area exists;
an occlusion result labeling module, configured to record the occlusion-area existence result of each first sample image as the sample label of that first sample image, wherein the occlusion-area existence result indicates either that an occlusion area exists or that no occlusion area exists;
an occlusion detection model training module, configured to train a first preset machine learning model according to the first sample images and the corresponding sample labels to obtain the occlusion detection model.
Optionally, the device further comprises:
a blurriness acquisition module, configured to obtain the blurriness of the shooting preview image before the shooting preview image is input into the pre-trained occlusion detection model;
and the shooting preview image input module is configured to:
input the shooting preview image into the pre-trained occlusion detection model when the blurriness is greater than a preset threshold.
Optionally, the device further comprises:
an object detection module, configured to detect whether an object is present within a preset distance of the camera before the shooting preview image is input into the pre-trained occlusion detection model;
and the shooting preview image input module is configured to:
input the shooting preview image into the pre-trained occlusion detection model when an object is detected within the preset distance of the camera.
Optionally, the user prompt module comprises:
an occlusion position determination unit, configured to determine the position of the first occlusion area in the shooting preview image if it is determined that a first occlusion area exists in the shooting preview image;
a user prompt unit, configured to prompt the user to remove the obstruction according to the position.
Optionally, the occlusion position determination unit is configured to:
input the shooting preview image into a pre-trained occlusion area determination model, wherein the occlusion area determination model is generated based on the characteristic patterns that occlusion areas present in images; and
determine the position of the first occlusion area in the shooting preview image according to the output result of the occlusion area determination model.
Optionally, before the shooting preview image is input into the pre-trained occlusion area determination model, the method further comprises:
obtaining second sample images, wherein each second sample image is an image in which a second occlusion area exists;
marking the position of the second occlusion area in each second sample image, and using the second sample images with the marked positions as a training sample set;
training a second preset machine learning model with the training sample set so that it learns the characteristic patterns of the second occlusion areas, and obtaining the occlusion area determination model.
Optionally, the device further comprises:
a feedback information judgment module, configured to judge, after the user is prompted to remove the obstruction, whether feedback that the obstruction has been removed is received;
an image capture module, configured to capture the shooting preview image when the feedback information is received.
An embodiment of the present application also provides a storage medium containing computer-executable instructions which, when executed by a computer processor, perform a reminding method comprising:
when an occlusion detection event is triggered, obtaining a shooting preview image;
inputting the shooting preview image into a pre-trained occlusion detection model;
determining, according to the output result of the occlusion detection model, whether a first occlusion area exists in the shooting preview image;
if it is determined that a first occlusion area exists in the shooting preview image, prompting a user to remove an obstruction, wherein the obstruction comprises an object that causes the first occlusion area in the shooting preview image.
Storage medium: any of various types of memory devices or storage devices. The term "storage medium" is intended to include installation media such as CD-ROMs, floppy disks or tape devices; computer system memory or random access memory such as DRAM, DDR RAM, SRAM, EDO RAM, Rambus RAM and the like; non-volatile memory such as flash memory and magnetic media (for example a hard disk or optical storage); registers or other similar types of memory elements; and so on. The storage medium may also include other types of memory or combinations thereof. In addition, the storage medium may be located in the first computer system in which the program is executed, or in a different, second computer system connected to the first computer system through a network such as the Internet, in which case the second computer system may provide the program instructions to the first computer for execution. The term "storage medium" may include two or more storage media that may reside in different locations, for example in different computer systems connected through a network. The storage medium may store program instructions (for example implemented as a computer program) executable by one or more processors.
Of course, in the storage medium containing computer-executable instructions provided by the embodiments of the present application, the computer-executable instructions are not limited to the prompt operations described above, and may also perform related operations in the reminding method provided by any embodiment of the present application.
An embodiment of the present application provides a mobile terminal, in which the reminding device provided by the embodiments of the present application can be integrated. Fig. 5 is a structural schematic diagram of a mobile terminal provided by an embodiment of the present application. The mobile terminal 500 may include a memory 501, a processor 502, and a computer program stored in the memory and executable on the processor, where the processor 502 implements the reminding method described in the embodiments of the present application when executing the computer program.
The mobile terminal provided by the embodiments of the present application can perform occlusion detection on the shooting preview image through the pre-built occlusion detection model, accurately and quickly determine whether an occlusion area exists in the shooting preview image, and, when it is determined that an occlusion area exists in the shooting preview image, promptly remind the user to remove the occluding object, which can effectively improve the quality of the captured image.
Fig. 6 is a structural schematic diagram of another mobile terminal provided by an embodiment of the present application. The mobile terminal may include: a housing (not shown), a memory 601, a central processing unit (CPU) 602 (also referred to as a processor, hereinafter CPU), a circuit board (not shown), and a power supply circuit (not shown). The circuit board is disposed inside the space enclosed by the housing; the CPU 602 and the memory 601 are arranged on the circuit board; the power supply circuit is configured to supply power to each circuit or device of the mobile terminal; the memory 601 is configured to store executable program code; and the CPU 602 runs a computer program corresponding to the executable program code by reading the executable program code stored in the memory 601, to perform the following steps:
acquiring a shooting preview image when an occlusion detection event is triggered;
inputting the shooting preview image into a pre-trained occlusion detection model;
determining, according to an output result of the occlusion detection model, whether a first occlusion area exists in the shooting preview image; and
if it is determined that the first occlusion area exists in the shooting preview image, prompting the user to remove an occluding object, where the occluding object includes an object that causes the first occlusion area in the shooting preview image.
The mobile terminal further includes: a peripheral interface 603, an RF (radio frequency) circuit 605, an audio circuit 606, a loudspeaker 611, a power management chip 608, an input/output (I/O) subsystem 609, other input/control devices 610, a touch screen 612, and an external port 604, and these components communicate through one or more communication buses or signal lines 607.
It should be understood that the illustrated mobile terminal 600 is only one example of a mobile terminal, and that the mobile terminal 600 may have more or fewer components than shown in the drawings, may combine two or more components, or may have a different configuration of components. The various components shown in the drawings may be implemented in hardware, software, or a combination of hardware and software, including one or more signal processing and/or application-specific integrated circuits.
The mobile terminal for reminding provided by this embodiment is described in detail below, taking a mobile phone as an example.
The memory 601 can be accessed by the CPU 602, the peripheral interface 603, and the like. The memory 601 may include a high-speed random access memory, and may also include a non-volatile memory, such as one or more magnetic disk storage devices, flash memory devices, or other volatile solid-state storage devices.
The peripheral interface 603 can connect the input and output peripherals of the device to the CPU 602 and the memory 601.
The I/O subsystem 609 can connect the input/output peripherals of the device, such as the touch screen 612 and the other input/control devices 610, to the peripheral interface 603. The I/O subsystem 609 may include a display controller 6091 and one or more input controllers 6092 for controlling the other input/control devices 610. The one or more input controllers 6092 receive electrical signals from, or send electrical signals to, the other input/control devices 610, where the other input/control devices 610 may include physical buttons (push buttons, rocker buttons, etc.), dials, slide switches, joysticks, and click wheels. It is worth noting that the input controller 6092 can be connected to any of the following: a keyboard, an infrared port, a USB interface, or a pointing device such as a mouse.
The touch screen 612 is the input interface and output interface between the mobile terminal and the user, and displays visual output to the user, where the visual output may include graphics, text, icons, video, and the like.
The display controller 6091 in the I/O subsystem 609 receives electrical signals from, or sends electrical signals to, the touch screen 612. The touch screen 612 detects contact on the touch screen, and the display controller 6091 converts the detected contact into interaction with a user interface object displayed on the touch screen 612, thereby realizing human-computer interaction. The user interface objects displayed on the touch screen 612 may be icons of running games, icons for connecting to corresponding networks, and the like. It is worth noting that the device may also include an optical mouse, which is a touch-sensitive surface that does not display visual output, or an extension of the touch-sensitive surface formed by the touch screen.
The RF circuit 605 is mainly used to establish communication between the mobile phone and the wireless network (i.e., the network side), realizing data reception and transmission between the mobile phone and the wireless network, such as sending and receiving short messages and e-mails. Specifically, the RF circuit 605 receives and sends RF signals, also called electromagnetic signals; the RF circuit 605 converts electrical signals into electromagnetic signals or converts electromagnetic signals into electrical signals, and communicates with the mobile communication network and other devices through the electromagnetic signals. The RF circuit 605 may include known circuits for performing these functions, including but not limited to an antenna system, an RF transceiver, one or more amplifiers, a tuner, one or more oscillators, a digital signal processor, a CODEC (coder-decoder) chipset, a Subscriber Identity Module (SIM), and the like.
The audio circuit 606 is mainly used to receive audio data from the peripheral interface 603, convert the audio data into an electrical signal, and send the electrical signal to the loudspeaker 611.
The loudspeaker 611 is used to restore the voice signal received by the mobile phone from the wireless network through the RF circuit 605 into sound and play the sound to the user.
The power management chip 608 is used to supply power to, and manage the power of, the hardware connected to the CPU 602, the I/O subsystem, and the peripheral interface.
The reminding device, storage medium, and mobile terminal provided in the above embodiments can perform the reminding method provided by any embodiment of the present application, and have the corresponding functional modules and beneficial effects for performing the method. For technical details not described in detail in the above embodiments, reference may be made to the reminding method provided by any embodiment of the present application.
Note that the above are only preferred embodiments of the present invention and the technical principles applied. Those skilled in the art will understand that the present invention is not limited to the specific embodiments described herein, and that various obvious changes, readjustments, and substitutions can be made by those skilled in the art without departing from the protection scope of the present invention. Therefore, although the present invention has been described in further detail through the above embodiments, the present invention is not limited to the above embodiments, and may also include more other equivalent embodiments without departing from the inventive concept, and the scope of the present invention is determined by the scope of the appended claims.

Claims (9)

1. A reminding method, characterized by comprising:
acquiring a shooting preview image when an occlusion detection event is triggered;
inputting the shooting preview image into a pre-trained occlusion detection model;
determining, according to an output result of the occlusion detection model, whether a first occlusion area exists in the shooting preview image; and
if it is determined that the first occlusion area exists in the shooting preview image, prompting a user to remove an occluding object, wherein the occluding object comprises an object that causes the first occlusion area in the shooting preview image;
wherein, if it is determined that the first occlusion area exists in the shooting preview image, prompting the user to remove the occluding object comprises:
if it is determined that the first occlusion area exists in the shooting preview image, inputting the shooting preview image into a pre-trained occlusion area determination model, wherein the occlusion area determination model is generated based on feature patterns presented by occlusion areas in images;
determining, according to an output result of the occlusion area determination model, a position of the first occlusion area in the shooting preview image; and
prompting the user to remove the occluding object according to the position.
2. The method according to claim 1, characterized in that, before the occlusion detection event is triggered, the method comprises:
acquiring a first sample image, wherein the first sample image comprises an image in which an occlusion area exists;
using an occlusion area presence result of the first sample image as a sample label of the first sample image, wherein the occlusion area presence result comprises presence of an occlusion area or absence of an occlusion area; and
training a first preset machine learning model according to the first sample image and the corresponding sample label, to obtain the occlusion detection model.
3. The method according to claim 1, characterized in that, before the shooting preview image is input into the pre-trained occlusion detection model, the method further comprises:
acquiring a blur degree of the shooting preview image;
wherein inputting the shooting preview image into the pre-trained occlusion detection model comprises:
inputting the shooting preview image into the pre-trained occlusion detection model when the blur degree is greater than a preset threshold.
4. The method according to claim 1, characterized in that, before the shooting preview image is input into the pre-trained occlusion detection model, the method further comprises:
detecting whether an object is present within a preset distance range of a camera;
wherein inputting the shooting preview image into the pre-trained occlusion detection model comprises:
inputting the shooting preview image into the pre-trained occlusion detection model when an object is detected within the preset distance range of the camera.
5. The method according to claim 1, characterized in that, before the shooting preview image is input into the pre-trained occlusion area determination model, the method further comprises:
acquiring a second sample image, wherein the second sample image is an image in which a second occlusion area exists;
marking a position of the second occlusion area in the second sample image, and using the second sample image with the marked second occlusion area position as a training sample set; and
training a second preset machine learning model with the training sample set so as to learn feature patterns of the second occlusion area, to obtain the occlusion area determination model.
6. The method according to any one of claims 1 to 5, characterized in that, after prompting the user to remove the occluding object, the method further comprises:
judging whether feedback information indicating that the occluding object has been removed is received; and
capturing the shooting preview image when the feedback information is received.
7. A reminding device, characterized by comprising:
a shooting preview image acquisition module, configured to acquire a shooting preview image when an occlusion detection event is triggered;
a shooting preview image input module, configured to input the shooting preview image into a pre-trained occlusion detection model;
an occlusion area judgment module, configured to determine, according to an output result of the occlusion detection model, whether a first occlusion area exists in the shooting preview image; and
a user prompt module, configured to prompt a user to remove an occluding object if it is determined that the first occlusion area exists in the shooting preview image, wherein the occluding object comprises an object that causes the first occlusion area in the shooting preview image;
wherein the user prompt module comprises:
an occlusion position determination unit, configured to input the shooting preview image into a pre-trained occlusion area determination model, wherein the occlusion area determination model is generated based on feature patterns presented by occlusion areas in images, and to determine, according to an output result of the occlusion area determination model, a position of the first occlusion area in the shooting preview image; and
a user prompt unit, configured to prompt the user to remove the occluding object according to the position.
8. A computer-readable storage medium on which a computer program is stored, characterized in that, when the program is executed by a processor, the reminding method according to any one of claims 1 to 6 is implemented.
9. A mobile terminal, characterized by comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the reminding method according to any one of claims 1 to 6 when executing the computer program.
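Purely as a hedged sketch of the optional pre-checks in claims 3 and 4 above, the helper below combines the two alternative triggers in one place: run the occlusion detection model only when the preview's blur degree exceeds a preset threshold, or when the distance sensor reports an object within a preset range of the camera. The blur measure (inverse variance of the Laplacian via OpenCV) and the proximity reading are illustrative assumptions, not requirements of the claims.

    import cv2

    def blur_degree(preview_bgr):
        # Illustrative blur measure: the lower the Laplacian variance,
        # the blurrier the image, so take its inverse as the blur degree.
        gray = cv2.cvtColor(preview_bgr, cv2.COLOR_BGR2GRAY)
        return 1.0 / (cv2.Laplacian(gray, cv2.CV_64F).var() + 1e-6)

    def should_run_occlusion_detection(preview_bgr, proximity_cm,
                                       blur_threshold=0.01, max_distance_cm=5.0):
        # Claim 3: run the model when the blur degree exceeds the preset threshold.
        if blur_degree(preview_bgr) > blur_threshold:
            return True
        # Claim 4: run the model when an object is detected within the preset
        # distance range of the camera (proximity_cm from a distance sensor).
        return proximity_cm is not None and proximity_cm <= max_distance_cm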
CN201810457182.3A 2018-05-14 2018-05-14 Reminding method, device, storage medium and mobile terminal Active CN108712606B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810457182.3A CN108712606B (en) 2018-05-14 2018-05-14 Reminding method, device, storage medium and mobile terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810457182.3A CN108712606B (en) 2018-05-14 2018-05-14 Reminding method, device, storage medium and mobile terminal

Publications (2)

Publication Number Publication Date
CN108712606A CN108712606A (en) 2018-10-26
CN108712606B true CN108712606B (en) 2019-10-29

Family

ID=63869013

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810457182.3A Active CN108712606B (en) 2018-05-14 2018-05-14 Reminding method, device, storage medium and mobile terminal

Country Status (1)

Country Link
CN (1) CN108712606B (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109361874B (en) * 2018-12-19 2021-05-14 维沃移动通信有限公司 Photographing method and terminal
CN109951636A (en) * 2019-03-18 2019-06-28 Oppo广东移动通信有限公司 Photographing processing method and device, mobile terminal and storage medium
CN109951635B (en) * 2019-03-18 2021-01-12 Oppo广东移动通信有限公司 Photographing processing method and device, mobile terminal and storage medium
CN110321819B (en) * 2019-06-21 2021-09-14 浙江大华技术股份有限公司 Shielding detection method and device of camera equipment and storage device
CN111476123A (en) * 2020-03-26 2020-07-31 杭州鸿泉物联网技术股份有限公司 Vehicle state identification method and device, electronic equipment and storage medium
CN114079766B (en) * 2020-08-10 2023-08-11 珠海格力电器股份有限公司 Under-screen camera shielding prompting method, storage medium and terminal equipment
CN111932481B (en) * 2020-09-11 2021-02-05 广州汽车集团股份有限公司 Fuzzy optimization method and device for automobile reversing image
CN112381054A (en) * 2020-12-02 2021-02-19 东方网力科技股份有限公司 Method for detecting working state of camera and related equipment and system
CN113301250A (en) * 2021-05-13 2021-08-24 Oppo广东移动通信有限公司 Image recognition method and device, computer readable medium and electronic equipment
CN114333345B (en) * 2021-12-31 2023-05-30 北京精英路通科技有限公司 Early warning method, device, storage medium and program product for shielding parking space
CN115311589B (en) * 2022-10-12 2023-03-31 山东乾元泽孚科技股份有限公司 Hidden danger processing method and equipment for lighting building

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4481951B2 (en) * 2006-04-03 2010-06-16 富士通株式会社 Imaging device
CN105933607B (en) * 2016-05-26 2019-01-22 维沃移动通信有限公司 Photographing effect adjustment method of mobile terminal and mobile terminal
CN107909065B (en) * 2017-12-29 2020-06-16 百度在线网络技术(北京)有限公司 Method and device for detecting face occlusion

Also Published As

Publication number Publication date
CN108712606A (en) 2018-10-26

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant