CN109919116A - Scene recognition method, device, electronic equipment and storage medium - Google Patents

Scene recognition method, device, electronic equipment and storage medium

Info

Publication number
CN109919116A
CN109919116A (application CN201910194130.6A)
Authority
CN
China
Prior art keywords
scene
region
preview screen
scene tag
tag
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910194130.6A
Other languages
Chinese (zh)
Other versions
CN109919116B (en)
Inventor
王宇鹭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority application: CN201910194130.6A (granted as CN109919116B)
Publication of CN109919116A
Application granted
Publication of CN109919116B
Legal status: Active
Anticipated expiration


Abstract

The present application proposes a scene recognition method, an apparatus, an electronic device, and a storage medium, belonging to the field of imaging technology. The method comprises: dividing a current preview image into N regions according to a preset rule, where N is a positive integer greater than 1; performing scene recognition on each of the N regions using a preset scene recognition model to determine a scene tag corresponding to each region; determining a scene tag corresponding to the current preview image according to the scene tags of the individual regions; and determining a target shooting mode according to the scene tag of the current preview image. By this scene recognition method, mutual interference among the various types of image content in the preview image is reduced, the accuracy of scene recognition is improved, and the user experience is improved.

Description

Scene recognition method, device, electronic equipment and storage medium
Technical field
The present application relates to the field of imaging technology, and in particular to a scene recognition method, an apparatus, an electronic device, and a storage medium.
Background art
With the development of science and technology, mobile terminals have become increasingly common. Most mobile terminals have a built-in camera, and with the enhancement of mobile terminal processing capability and the development of camera technology, the performance of built-in cameras has grown stronger and the quality of captured images has grown higher. Mobile terminals today are easy to operate and easy to carry, and taking photos with a mobile terminal has become a normal part of daily life.
In the related art, a mobile terminal may determine the current shooting scene according to the entire content of the preview image, and then select a corresponding shooting mode. With this scene recognition method, however, when the shooting scene is relatively complex, for example when the preview image simultaneously contains multiple types of content such as a portrait, a building, and a night scene, the different types of content in the image interfere with one another, resulting in a high error rate of scene recognition and degrading the user experience.
Summary of the invention
The present application proposes a scene recognition method, an apparatus, an electronic device, and a storage medium, to solve the problem in the related art that, with the scene recognition method of a mobile terminal, multiple types of content in the image interfere with one another when the shooting scene is relatively complex, resulting in a high error rate of scene recognition and degrading the user experience.
The scene recognition method proposed by an embodiment of one aspect of the present application comprises: dividing a current preview image into N regions according to a preset rule, where N is a positive integer greater than 1; performing scene recognition on each of the N regions using a preset scene recognition model to determine a scene tag corresponding to each region; determining a scene tag corresponding to the current preview image according to the scene tags of the individual regions; and determining a target shooting mode according to the scene tag of the current preview image.
The scene recognition apparatus proposed by an embodiment of another aspect of the present application comprises: a division module, configured to divide a current preview image into N regions according to a preset rule, where N is a positive integer greater than 1; a recognition module, configured to perform scene recognition on each of the N regions using a preset scene recognition model to determine a scene tag corresponding to each region; a first determining module, configured to determine a scene tag corresponding to the current preview image according to the scene tags of the individual regions; and a second determining module, configured to determine a target shooting mode according to the scene tag of the current preview image.
The electronic device proposed by an embodiment of a further aspect of the present application comprises a camera module, a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the program, implements the scene recognition method described above.
The computer-readable storage medium proposed by an embodiment of a further aspect of the present application has a computer program stored thereon, wherein the program, when executed by a processor, implements the scene recognition method described above.
The computer program proposed by an embodiment of yet another aspect of the present application, when executed by a processor, implements the scene recognition method described in the embodiments of the present application.
With the scene recognition method, apparatus, electronic device, computer-readable storage medium, and computer program provided by the embodiments of the present application, a current preview image can be divided into multiple regions according to a preset rule, scene recognition is performed on each of the regions using a preset scene recognition model to determine a scene tag corresponding to each region, a scene tag corresponding to the current preview image is then determined according to the scene tags of the individual regions, and a target shooting mode is determined according to the scene tag of the current preview image. By dividing the current preview image into multiple regions, performing scene recognition on each region separately, and then determining the scene tag of the current preview image from the scene tags of the individual regions, mutual interference among the various types of image content in the preview image is reduced, the accuracy of scene recognition is improved, and the user experience is improved.
Additional aspects and advantages of the present application will be set forth in part in the following description, and in part will become apparent from the following description or be learned by practice of the present application.
Brief description of the drawings
The above and/or additional aspects and advantages of the present application will become apparent and readily understood from the following description of the embodiments in conjunction with the accompanying drawings, in which:
Fig. 1 is a schematic flowchart of a scene recognition method provided by an embodiment of the present application;
Fig. 2 is a schematic flowchart of another scene recognition method provided by an embodiment of the present application;
Fig. 3 is a schematic flowchart of yet another scene recognition method provided by an embodiment of the present application;
Fig. 4 is a schematic structural diagram of a scene recognition apparatus provided by an embodiment of the present application;
Fig. 5 is a schematic structural diagram of an electronic device provided by an embodiment of the present application;
Fig. 6 is a schematic structural diagram of another electronic device provided by an embodiment of the present application.
Detailed description of the embodiments
Embodiments of the present application are described in detail below, and examples of the embodiments are shown in the accompanying drawings, in which the same or similar reference numerals throughout denote the same or similar elements. The embodiments described below with reference to the accompanying drawings are exemplary, are intended to explain the present application, and should not be construed as limiting the present application.
The embodiments of the present application propose a scene recognition method, directed at the problem in the related art that, with the scene recognition method of a mobile terminal, multiple types of content in the image interfere with one another when the shooting scene is relatively complex, resulting in a high error rate of scene recognition and degrading the user experience.
With the scene recognition method provided by the embodiments of the present application, a current preview image can be divided into multiple regions according to a preset rule, scene recognition performed on each of the regions using a preset scene recognition model to determine the scene tag corresponding to each region, the scene tag of the current preview image then determined according to the scene tags of the individual regions, and the target shooting mode determined according to the scene tag of the current preview image. By dividing the current preview image into multiple regions, performing scene recognition on each region separately, and determining the scene tag of the current preview image from the scene tags of the individual regions, mutual interference among the various types of image content in the preview image is reduced, the accuracy of scene recognition is improved, and the user experience is improved.
The scene recognition method, apparatus, electronic device, storage medium, and computer program provided by the present application are described in detail below with reference to the accompanying drawings.
Fig. 1 is a schematic flowchart of a scene recognition method provided by an embodiment of the present application.
As shown in Fig. 1, the scene recognition method comprises the following steps:
Step 101: divide the current preview image into N regions according to a preset rule, where N is a positive integer greater than 1.
It should be noted that when the current shooting scene is relatively complex, the current preview image may contain multiple types of content such as a portrait, a building, and a night scene, and the different types of content interfere with one another when scene recognition is performed on the preview image as a whole, resulting in low scene recognition accuracy. For example, when a portrait appears in the preview image, the current scene is usually recognized as portrait mode; however, if the portrait occupies only a very small part of the preview image and the user actually wants to shoot the scenery in the image, determining the current scene as portrait mode leads to inaccurate scene recognition and thus to an unsatisfactory shooting result for the scenery. Therefore, in a possible implementation of the embodiments of the present application, the current preview image can be divided into multiple regions, and scene recognition performed on each divided region separately, so as to reduce the mutual interference among the various types of image content in the preview image and thereby improve the accuracy of scene recognition.
The preset rule may include the specific value of N and the rule followed when dividing the preview image. For example, the preview image may be evenly divided into 9 regions, or evenly divided into 16 regions, and so on.
It should be noted that the larger the value of N, the higher the accuracy of scene recognition, and correspondingly, the higher the complexity of performing scene recognition. In practice, the rule for dividing the preview image can be preset according to actual needs, so as to balance the accuracy and the complexity of scene recognition. Moreover, when the preview image is divided into multiple regions, it may be evenly divided into regions of the same area, or divided into regions of different areas; the embodiments of the present application impose no limitation on this.
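As an illustration of the even division described above, the following sketch splits a preview frame into a rows × cols grid of pixel boxes; the function name and the box representation are assumptions for illustration, not the patent's implementation.

```python
def divide_preview(width, height, rows, cols):
    """Divide a preview frame of the given size into rows*cols regions.

    Each region is a pixel box (left, top, right, bottom).  Integer
    division places the boundaries so the regions tile the full frame.
    """
    regions = []
    for r in range(rows):
        for c in range(cols):
            left = c * width // cols
            top = r * height // rows
            right = (c + 1) * width // cols
            bottom = (r + 1) * height // rows
            regions.append((left, top, right, bottom))
    return regions

# Example: divide a 1080x1920 preview into 9 (3x3) equal regions.
regions = divide_preview(1080, 1920, 3, 3)
```

Each box can then be cropped out of the preview frame and fed to the scene recognition model independently.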
Step 102: perform scene recognition on each of the N regions using a preset scene recognition model, to determine the scene tag corresponding to each region.
In the embodiments of the present application, after the current preview image is divided into N regions, scene recognition can be performed on each divided region using the preset scene recognition model, to determine the scene tag corresponding to each region.
It should be noted that the preset scene recognition model is obtained by training on a large amount of image data and can be integrated in the electronic device. After image data is input into the preset scene recognition model, the model can directly output the scene tag corresponding to that image data. Therefore, in a possible implementation of the embodiments of the present application, the image data corresponding to each divided region can be input into the preset scene recognition model to determine the scene tag corresponding to each region.
For example, if the image data corresponding to region A contains a portrait, then when the image data of region A is input into the preset scene recognition model, the scene tag of region A can be determined to be "portrait"; if the luminance information in the image corresponding to region B is below a threshold, then when the image data of region B is input into the preset scene recognition model, the scene tag of region B can be determined to be "night scene".
Step 103: determine the scene tag corresponding to the current preview image according to the scene tags of the individual regions.
In the embodiments of the present application, after the scene tag of each region is determined, the scene tag of the current preview image can be determined according to the scene tags of the individual regions. Specifically, the scene tag of the current preview image can be determined according to the count of each scene tag in the preview image.
Further, the scene tags of the individual regions can be counted, and the scene tag of the current preview image determined according to the count of each scene tag. That is, in a possible implementation of the embodiments of the present application, the above step 103 may include:
counting the scene tags of the individual regions to determine the count of each scene tag corresponding to the current preview image;
determining the scene tag with the largest count as the scene tag corresponding to the current preview image.
It should be noted that the count of a scene tag is the number of regions corresponding to that scene tag; the larger the count, the more regions the scene tag corresponds to. Correspondingly, the scene tag with the largest count corresponds to the most regions, that is, the regions carrying that scene tag cover the largest (or a large) portion of the preview image's area. The scene tag with the largest count therefore reflects the scene of the majority of the current preview image, and can be determined as the scene tag corresponding to the current preview image.
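The majority-vote selection described above can be sketched as follows; this is an illustrative sketch only, and the tag names are hypothetical.

```python
from collections import Counter

def preview_scene_tag(region_tags):
    """Pick the scene tag that covers the most regions (majority vote)."""
    counts = Counter(region_tags)
    tag, _ = counts.most_common(1)[0]
    return tag

# 9 regions: 'scenery' covers the most regions, so it wins even though a
# portrait appears in two regions.
tags = ['scenery'] * 5 + ['portrait'] * 2 + ['building'] * 2
winner = preview_scene_tag(tags)
```

This is exactly the case from the background section: a small portrait no longer dominates the whole-frame decision.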
In a possible implementation of the embodiments of the present application, a first threshold on the proportion of a scene tag can also be preset, and the scene tag of the current preview image determined according to the relationship between the preset first threshold and the ratio of each scene tag's count to the total count of all scene tags. Specifically, a scene tag whose count, as a ratio of the total count of all scene tags, exceeds the first threshold can be determined as the scene tag of the current preview image.
For example, if the preset first threshold is 40% and the current preview image is determined to contain two scene tags, "portrait" and "building", where the "portrait" scene tag accounts for 70% of all scene tags and the "building" scene tag accounts for 30%, then the scene tag of the current preview image can be determined to be "portrait".
In another possible implementation of the embodiments of the present application, multiple thresholds on scene tag proportion can be preset, and a first scene tag, a second scene tag, a third scene tag, and so on determined according to the relationship between each scene tag in the current preview image and each threshold; the first scene tag is then determined as the scene tag of the preview image, or alternatively the second or third scene tag can be determined as the scene tag of the current preview image according to actual needs.
Specifically, if the preset thresholds on scene tag proportion are a first threshold and a second threshold, where the first threshold is greater than the second threshold, then: when the ratio of a scene tag's count to the total count of all scene tags is greater than the first threshold, that scene tag can be determined as a first scene tag; when the ratio is less than the first threshold but greater than the second threshold, it can be determined as a second scene tag; and when the ratio is less than the second threshold, it can be determined as a third scene tag.
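A minimal sketch of this tiered classification, assuming hypothetical threshold values of 40% and 20% (the patent does not fix concrete values):

```python
def classify_tags(region_tags, first_threshold=0.4, second_threshold=0.2):
    """Sort scene tags into first/second/third tiers by their share of regions."""
    total = len(region_tags)
    tiers = {'first': [], 'second': [], 'third': []}
    for tag in set(region_tags):
        ratio = region_tags.count(tag) / total
        if ratio > first_threshold:
            tiers['first'].append(tag)      # dominant scene tag
        elif ratio > second_threshold:
            tiers['second'].append(tag)     # secondary scene tag
        else:
            tiers['third'].append(tag)      # minor scene tag
    return tiers

tags = ['portrait'] * 7 + ['building'] * 3   # 70% portrait, 30% building
tiers = classify_tags(tags)
```

With these assumed thresholds, "portrait" lands in the first tier and would normally be chosen as the preview image's scene tag.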
Step 104: determine the target shooting mode according to the scene tag corresponding to the current preview image.
In the embodiments of the present application, after the scene tag of the current preview image is determined, the target shooting mode can be determined according to that scene tag, and image acquisition performed according to the determined target shooting mode.
For example, if the scene tag of the current preview image is "portrait", the target shooting mode can be determined to be "portrait mode", and image acquisition performed according to the shooting parameters of "portrait mode"; if the scene tag of the current preview image is "night scene", the target shooting mode can be determined to be "night scene mode", and image acquisition performed according to the shooting parameters of "night scene mode".
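The lookup from scene tag to shooting mode might be sketched as a simple table; all mode names and parameter values below are illustrative assumptions, not parameters disclosed by the patent.

```python
# Hypothetical mapping from scene tags to shooting modes and a couple of
# representative capture parameters.
SHOOTING_MODES = {
    'portrait':    {'mode': 'portrait mode',    'iso': 100, 'exposure_ms': 10},
    'night scene': {'mode': 'night scene mode', 'iso': 800, 'exposure_ms': 100},
    'scenery':     {'mode': 'scenery mode',     'iso': 100, 'exposure_ms': 5},
}

def target_shooting_mode(scene_tag):
    """Look up the shooting mode for a scene tag, with a default fallback."""
    default = {'mode': 'auto mode', 'iso': 100, 'exposure_ms': 10}
    return SHOOTING_MODES.get(scene_tag, default)

night = target_shooting_mode('night scene')
```

A fallback "auto mode" is assumed here so an unrecognized tag still yields usable capture parameters.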
With the scene recognition method provided by the embodiments of the present application, a current preview image can be divided into multiple regions according to a preset rule, scene recognition performed on each of the regions using a preset scene recognition model to determine the scene tag corresponding to each region, the scene tag of the current preview image then determined according to the scene tags of the individual regions, and the target shooting mode determined according to the scene tag of the current preview image. By dividing the current preview image into multiple regions, performing scene recognition on each region separately, and determining the scene tag of the current preview image from the scene tags of the individual regions, mutual interference among the various types of image content in the preview image is reduced, the accuracy of scene recognition is improved, and the user experience is improved.
In a possible implementation of the embodiments of the present application, the current preview image can also be divided into multiple regions of different areas. For example, the region of interest in the preview image can be divided into multiple regions of smaller area, so as to further improve the accuracy of scene recognition for the region of interest in the preview image.
Another scene recognition method provided by the embodiments of the present application is further described below with reference to Fig. 2.
Fig. 2 is a schematic flowchart of another scene recognition method provided by an embodiment of the present application.
As shown in Fig. 2, the scene recognition method comprises the following steps:
Step 201: divide the current preview image into N regions of different areas according to a preset rule, where N is a positive integer greater than 1.
It should be noted that the preview image may include a region of interest and a non-interest region of the user. In general, when shooting, the user aims the lens at the object to be shot, so that the object is located in the middle of the preview image. Therefore, in a possible implementation of the embodiments of the present application, the middle portion of the preview image can be determined as the region of interest according to a preset ratio. For example, if the preset ratio is 60%, the region centered on the center point of the preview image and covering 60% of the total area of the preview image can be determined as the region of interest, and the remaining area of the preview image outside the region of interest determined as the non-interest region.
In a possible implementation of the embodiments of the present application, after the region of interest and the non-interest region of the preview image are determined, they can be divided according to different division rules. For example, the region of interest can be divided into a larger number of smaller regions, so as to improve the accuracy of scene recognition for the region of interest in the preview image; the non-interest region can be divided into a smaller number of larger regions, so as to reduce the number of divided regions in the preview image and reduce the complexity of scene recognition.
For example, the region of interest in the preview image can be evenly divided into multiple regions of the same area, such as 9 or 16; the non-interest region of the preview image can be treated as a single region, or divided into two regions of the same area, and so on.
It should be noted that the above examples are merely illustrative and cannot be regarded as limiting the present application. In practice, the division rule for the region of interest and that for the non-interest region of the preview image can be preset according to actual needs; the embodiments of the present application impose no limitation on this.
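One possible way to compute a centered region of interest covering a preset share of the frame area is to scale both side lengths by the square root of the ratio; this is only one interpretation of the 60%-area example above, and all names below are assumptions.

```python
import math

def split_interest_region(width, height, ratio=0.6):
    """Compute a centered region of interest covering `ratio` of the frame area.

    Returns (roi_box, non_interest_boxes) in pixel coordinates.  Both side
    lengths are scaled by sqrt(ratio), so the ROI's area is `ratio` of the
    total; the remainder of the frame is returned as four border strips.
    """
    scale = math.sqrt(ratio)
    rw, rh = int(width * scale), int(height * scale)
    left, top = (width - rw) // 2, (height - rh) // 2
    roi = (left, top, left + rw, top + rh)
    non_interest = [
        (0, 0, width, top),                 # top strip
        (0, top + rh, width, height),       # bottom strip
        (0, top, left, top + rh),           # left strip
        (left + rw, top, width, top + rh),  # right strip
    ]
    return roi, non_interest

# ratio=0.64 keeps the arithmetic exact: sqrt(0.64) = 0.8.
roi, rest = split_interest_region(1000, 1000, ratio=0.64)
```

The ROI box would then be subdivided finely (e.g. 3×3 or 4×4) while the border strips are kept coarse, as the text describes.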
Step 202: perform scene recognition on each of the N regions using the preset scene recognition model, to determine the scene tag corresponding to each region.
Step 203: count the scene tags of the individual regions to determine the count of each scene tag corresponding to the current preview image.
For the specific implementation process and principle of the above steps 202-203, reference can be made to the detailed description of the above embodiments, which is not repeated here.
Step 204: determine the scene tag corresponding to the current preview image according to the count of each scene tag corresponding to the current preview image and the total area of the regions corresponding to each scene tag.
It should be noted that, in the embodiments of the present application, the scene tag of the current preview image can be determined jointly from the count of each scene tag and the total area of the regions corresponding to each scene tag. In a possible implementation of the embodiments of the present application, the scene tag of the current preview image can first be determined according to the count of each scene tag, that is, the scene tag with the largest count can be determined as the scene tag of the current preview image. Since the regions contained in the preview image have different areas, if multiple scene tags are tied for the largest count, the scene tag of the current preview image can be further determined according to the total area of the regions corresponding to each of the tied scene tags; that is, the scene tag that both has the largest count and corresponds to the largest total region area is determined as the scene tag of the current preview image.
For example, suppose the scene tags tied for the largest count in the current preview image are "portrait" and "building", each with a count of 4, and the region area corresponding to the "portrait" scene tag accounts for 30% of the total area of the preview image while that of the "building" scene tag accounts for 40%; then the scene tag of the current preview image can be determined to be "building".
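The count-then-area tie-break can be sketched as follows, reusing the numbers from the example above; this is an illustrative sketch, not the patent's code.

```python
from collections import Counter

def tie_break_by_area(region_tags, region_areas):
    """Among tags tied for the largest region count, pick the one whose
    regions cover the largest total area (areas as fractions of the frame)."""
    counts = Counter(region_tags)
    max_count = max(counts.values())
    tied = [t for t, c in counts.items() if c == max_count]
    if len(tied) == 1:
        return tied[0]
    area_of = {t: sum(a for tag, a in zip(region_tags, region_areas) if tag == t)
               for t in tied}
    return max(tied, key=area_of.get)

# Mirrors the example above: 'portrait' and 'building' are tied 4-4, but the
# 'building' regions cover 40% of the frame versus 30% for 'portrait'.
tags = ['portrait'] * 4 + ['building'] * 4
areas = [0.075] * 4 + [0.10] * 4
chosen = tie_break_by_area(tags, areas)
```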
Further, the preset scene recognition model can also output, along with the scene tag of each region, a confidence for the determined scene tag. Therefore, if the current preview image contains multiple scene tags tied for the largest count, the scene tag of the current preview image can also be further determined according to the confidences of those tied scene tags. That is, in a possible implementation of the embodiments of the present application, the above step 204 may include:
determining the confidence of the scene tag corresponding to each region;
determining a first total confidence corresponding to a first scene tag of the current preview image and a second total confidence corresponding to a second scene tag;
determining the scene tag corresponding to the larger of the first total confidence and the second total confidence as the scene tag corresponding to the current preview image.
Here, the confidence of the scene tag corresponding to each region refers to the degree of confidence that the scene tag determined for that region is correct, and can be output directly by the preset scene recognition model. The first scene tag and the second scene tag refer to the scene tags tied for the largest count in the current preview image.
It should be noted that after the image data of each region of the current preview image is input into the preset scene recognition model, the model can simultaneously output the scene tag of each region and the confidence of that scene tag. Therefore, in a possible implementation of the embodiments of the present application, the confidence of each region's scene tag can be determined from the output of the preset scene recognition model.
For example, if the image data of region A is input into the preset scene recognition model and the model outputs "80% night scene", the confidence of region A's scene tag "night scene" can be determined to be 80%.
In a possible implementation of the embodiments of the present application, after the confidence of each region's scene tag in the current preview image is determined, the first total confidence of the first scene tag and the second total confidence of the second scene tag can be determined from those confidences, and the scene tag corresponding to the larger of the two total confidences determined as the scene tag of the current preview image.
Specifically, the first total confidence of the first scene tag may be the sum of the confidences of the scene tags of the regions corresponding to the first scene tag, or the mean of those confidences; the second total confidence of the second scene tag is determined in the same way.
As an example, suppose the total confidences of the first and second scene tags are determined as the mean of the confidences of their corresponding regions' scene tags. The scene tags tied for the largest count in the current preview image are "portrait" and "building"; the regions corresponding to the "portrait" scene tag are region A and region B, and the regions corresponding to the "building" scene tag are region C and region D. The confidence of region A's "portrait" tag is 80%, that of region B's "portrait" tag is 90%, that of region C's "building" tag is 75%, and that of region D's "building" tag is 70%. The total confidence of the "portrait" scene tag can then be determined to be 85% and that of the "building" scene tag 72.5%, so the "portrait" scene tag can be determined as the scene tag of the current preview image.
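The confidence-based tie-break, using the mean of per-region confidences as the total confidence, can be sketched with the numbers from the worked example above; the sketch is illustrative only (the text also allows a sum instead of a mean).

```python
from collections import Counter, defaultdict

def tie_break_by_confidence(region_results):
    """region_results: list of (tag, confidence) pairs, one per region.

    Among tags tied for the largest count, pick the one with the highest
    mean confidence.
    """
    counts = Counter(tag for tag, _ in region_results)
    max_count = max(counts.values())
    tied = [t for t, c in counts.items() if c == max_count]
    confs = defaultdict(list)
    for tag, conf in region_results:
        if tag in tied:
            confs[tag].append(conf)
    totals = {t: sum(cs) / len(cs) for t, cs in confs.items()}
    return max(totals, key=totals.get)

# The worked example above: portrait regions at 80% and 90% confidence,
# building regions at 75% and 70% -> means 85% vs 72.5%.
results = [('portrait', 0.80), ('portrait', 0.90),
           ('building', 0.75), ('building', 0.70)]
chosen = tie_break_by_confidence(results)
```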
Step 205: determine the target shooting mode according to the scene tag corresponding to the current preview image.
For the specific implementation process and principle of the above step 205, reference can be made to the detailed description of the above embodiments, which is not repeated here.
With the scene recognition method provided by the embodiments of the present application, the current preview image can be divided into multiple regions of different areas according to a preset rule, scene recognition performed on each region to determine its scene tag, the scene tag of the current preview image then determined according to the count of each scene tag and the total area of the regions corresponding to each scene tag, and the target shooting mode determined according to the scene tag of the current preview image. By dividing the current preview image into multiple regions of different areas and determining its scene tag from both the count of each scene tag over all regions and the total area of the corresponding regions, mutual interference among the various types of image content in the preview image is reduced, the accuracy of scene recognition is further improved, and the user experience is further improved.
In a possible implementation of the present application, the target shooting mode determined according to the scene tag of the current preview image may involve shooting by acquiring multiple frames of images and synthesizing them. Therefore, when the target shooting mode is determined, parameters such as the number of images to be acquired, the exposure duration and sensitivity corresponding to each frame to be acquired, and the manner of synthesizing the acquired frames can also be determined.
Another scene recognition method provided by the embodiments of the present application is further described below with reference to Fig. 3.
Fig. 3 is a flow diagram of another scene recognition method provided by an embodiment of the present application.
As shown in Fig. 3, the scene recognition method includes the following steps:
Step 301: divide the current preview screen into N regions according to a preset rule, and perform scene recognition on the N regions respectively by using a preset scene recognition model, so as to determine the scene tag corresponding to each region.
Step 302: determine the scene tag corresponding to the current preview screen according to the scene tag corresponding to each region.
For the specific implementation process and principle of the above steps 301-302, reference may be made to the detailed description of the foregoing embodiments, which is not repeated here.
Step 303: determine, according to the scene tag corresponding to the current preview screen, the current number of images to be acquired and the target exposure amount corresponding to each frame of image to be acquired.
The exposure amount refers to the quantity of light passing through the lens within the exposure duration.
In the embodiment of the present application, after the scene tag corresponding to the current preview screen is determined, the current number of images to be acquired and a preset exposure compensation mode may be determined according to a mapping relationship between preview-screen scene tags and numbers of images to be acquired and a mapping relationship between preview-screen scene tags and preset exposure compensation modes. A base exposure amount is then determined according to the illuminance of the current shooting scene, and the target exposure amount corresponding to each frame of image to be acquired is determined according to the determined base exposure amount and the preset exposure compensation mode.
It should be noted that, in actual use, the mapping relationship between preview-screen scene tags and numbers of images to be acquired and the mapping relationship between preview-screen scene tags and preset exposure compensation modes may be preset according to actual needs, which is not limited in the embodiments of the present application.
In a possible implementation of the embodiment of the present application, a light metering module in the camera module may be used to obtain the illuminance of the current shooting scene, and an Auto Exposure Control (AEC) algorithm may be used to determine the base exposure amount corresponding to the current illuminance. In a shooting mode that acquires multiple frames of images, the exposure amount of each frame may differ, so that images with different dynamic ranges are obtained and the synthesized image has a higher dynamic range, improving the overall brightness and quality of the image. When acquiring each frame of image, a different exposure compensation mode may be used, and the target exposure amount corresponding to each frame is determined according to the exposure compensation mode and the base exposure amount determined from the current illuminance.
In the embodiment of the present application, the preset exposure compensation mode refers to a combination of exposure values (Exposure Value, EV for short) preset respectively for each frame of image. In its original definition, an exposure value does not denote an exact numerical value but rather "all combinations of camera aperture and exposure duration that provide the same exposure amount". The sensitivity, aperture, and exposure duration together determine the exposure amount of the camera, and different parameter combinations can produce equal exposure amounts, i.e., the EV values of these combinations are the same. For example, with the same sensitivity, the combination of a 1/125-second exposure duration with an F/11 aperture and the combination of a 1/250-second exposure duration with an F/8.0 aperture yield the same exposure amount, i.e., the same EV value. An EV of 0 refers to the exposure amount obtained when the sensitivity is 100, the aperture is F/1, and the exposure duration is 1 second. Each one-stop increase in exposure, i.e., doubling the exposure duration, doubling the sensitivity, or opening the aperture by one stop, increases the EV value by 1; that is to say, the exposure amount corresponding to 1 EV is twice the exposure amount corresponding to 0 EV. Table 1 shows the correspondence with EV values when the exposure duration, aperture, and sensitivity are varied individually.
Table 1
After photography entered the digital age, the light metering function inside cameras became very powerful, and EV is now often used to denote one stop on the exposure scale; many cameras allow exposure compensation to be set, usually expressed in EV. In this case, EV refers to the difference between the exposure amount corresponding to the camera's metering data and the actual exposure amount. For example, an exposure compensation of +1 EV means increasing the exposure by one stop relative to the exposure amount corresponding to the camera's metering data, i.e., the actual exposure amount is twice the exposure amount corresponding to the metering data.
In the embodiment of the present application, when the exposure compensation mode is preset, the EV value corresponding to the determined base exposure amount may be preset to 0; +1 EV refers to increasing the exposure by one stop, i.e., an exposure amount of 2 times the base exposure amount; +2 EV refers to increasing the exposure by two stops, i.e., an exposure amount of 4 times the base exposure amount; -1 EV refers to decreasing the exposure by one stop, i.e., an exposure amount of 0.5 times the base exposure amount; and so on.
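The EV convention described above can be sketched as a simple relation: relative to a base exposure amount at 0 EV, each +1 EV doubles the exposure amount and each -1 EV halves it. The helper name below is illustrative, not from the source.

```python
# Exposure amount as a function of EV: base * 2**ev, per the convention above
# (0 EV = base exposure amount, +1 EV doubles it, -1 EV halves it).
def exposure_from_ev(base_exposure, ev):
    return base_exposure * (2.0 ** ev)

base = 100.0  # arbitrary base exposure amount for illustration
print(exposure_from_ev(base, +1))  # 200.0  (one stop up: double)
print(exposure_from_ev(base, -1))  # 50.0   (one stop down: half)
print(exposure_from_ev(base, +2))  # 400.0  (two stops up: 4x)
```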
For example, if the number of images to be acquired is 7 frames, the EV values corresponding to the preset exposure compensation mode may be [+1, +1, +1, +1, 0, -3, -6]. The frames with an exposure compensation of +1 EV can address the noise problem: temporal noise reduction is performed on these relatively bright frames, suppressing noise while enhancing dark-area details. The frame with an exposure compensation of -6 EV can address the problem of highlight overexposure, preserving the details of highlight areas. The frames with exposure compensations of 0 EV and -3 EV can be used to maintain the transition from highlights to dark areas, achieving a smooth transition between light and dark.
It should be noted that the EV values corresponding to the preset exposure compensation mode may be set specifically according to actual needs, or may be obtained from a set EV value range on the principle that the differences between adjacent EV values are equal, which is not limited in the embodiments of the present application.
In a possible implementation of the embodiment of the present application, after the base exposure amount is determined by the AEC algorithm according to the illuminance of the current shooting scene, the target exposure amount corresponding to each frame of image may be determined according to the base exposure amount and the preset exposure compensation mode determined from the scene tag corresponding to the current preview screen.
For example, suppose that, according to the scene tag corresponding to the current preview screen, the number of images to be acquired is determined to be 7 frames and the EV values corresponding to the preset exposure compensation mode are [+1, +1, +1, +1, 0, -3, -6], and that the base exposure amount determined according to the illuminance of the current shooting environment is X. The target exposure amount corresponding to each frame of image to be acquired can then be determined from the base exposure amount X and the preset exposure compensation mode: assuming the EV value corresponding to the i-th frame of image is EVi, its target exposure amount is X·2^EVi. That is, the target exposure amount of an image to be acquired with an EV value of 0 is X, that of an image to be acquired with an EV value of +1 is 2X, and that of an image to be acquired with an EV value of -3 is 2^(-3)·X.
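The 7-frame bracket in this example can be computed directly from the relation target_i = X·2^EVi. The base exposure amount X is normalized to 1.0 here purely for illustration.

```python
# Per-frame target exposure for the 7-frame bracket described above:
# target_i = X * 2**EV_i, with X normalized to 1.0 for illustration.
X = 1.0
ev_list = [+1, +1, +1, +1, 0, -3, -6]
targets = [X * 2.0 ** ev for ev in ev_list]
print(targets)  # [2.0, 2.0, 2.0, 2.0, 1.0, 0.125, 0.015625]
```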
Step 304: determine the target sensitivity according to the current degree of jitter of the camera module.
The sensitivity, also known as the ISO value, is an index measuring how sensitive a film is to light. A film with a lower sensitivity requires a longer exposure time to achieve the same imaging as a film with a higher sensitivity. The sensitivity of a digital camera is an index similar to film sensitivity: the ISO of a digital camera can be adjusted by adjusting the sensitivity of the photosensitive device or by combining photosensitive points, that is to say, the ISO can be raised by increasing the light sensitivity of the photosensitive device or by combining several adjacent photosensitive points. It should be noted that, whether in digital or film photography, using a relatively high sensitivity to reduce the exposure time generally introduces more noise, resulting in reduced image quality.
In the embodiment of the present application, the target sensitivity refers to the lowest sensitivity adapted to the current degree of jitter, determined according to the current degree of jitter of the camera module.
It should be noted that, in the embodiment of the present application, by acquiring multiple frames of images with a low sensitivity and synthesizing the acquired frames to generate the target image, not only can the dynamic range and overall brightness of the captured image be improved, but the noise in the image can also be effectively suppressed by controlling the sensitivity value, improving the quality of the captured image.
In the embodiment of the present application, the current degree of jitter of the mobile phone, i.e., the current degree of jitter of the camera module, may be determined by obtaining the current gyroscope (Gyro-sensor) information of the electronic equipment.
A gyroscope, also called an angular velocity sensor, measures the rotational angular velocity during deflection or inclination. In electronic equipment, a gyroscope can accurately measure rotating and deflecting movements, so that the actual action of the user can be accurately analyzed. The gyroscope information (gyro information) of the electronic equipment may include movement information of the mobile phone in the three dimensional directions of three-dimensional space, and the three dimensions of three-dimensional space can be expressed as the X-axis, Y-axis, and Z-axis directions, wherein the X-axis, Y-axis, and Z-axis are mutually perpendicular.
It should be noted that, in a possible implementation of the embodiment of the present application, the current degree of jitter of the camera module may be determined according to the current gyro information of the electronic equipment: the larger the absolute values of the gyro movement of the electronic equipment in the three directions, the larger the degree of jitter of the camera module. Specifically, thresholds may be preset for the sum of the absolute values of the gyro movement in the three directions, and the current degree of jitter of the camera module may be determined according to the relationship between the sum of the absolute values of the currently obtained gyro movement in the three directions and the preset thresholds.
As an example, assume that the preset thresholds are a third threshold A, a fourth threshold B, and a fifth threshold C, with A < B < C, and that the sum of the absolute values of the currently obtained gyro movement in the three directions is S. If S < A, it is determined that the current degree of jitter of the camera module is "no jitter"; if A < S < B, it can be determined that the current degree of jitter of the camera module is "slight jitter"; if B < S < C, it can be determined that the current degree of jitter of the camera module is "small jitter"; and if S > C, it can be determined that the current degree of jitter of the camera module is "large jitter".
It should be noted that the above example is merely illustrative and cannot be regarded as limiting the present application. In actual use, the number of thresholds and the specific value of each threshold may be set according to actual needs, and the mapping relationship between gyro information and the degree of jitter of the camera module may be preset according to the relationship between the gyro information and each threshold.
In a possible implementation of the embodiment of the present application, the target sensitivity of each frame of image may be determined according to the current degree of jitter of the camera module, so that the shooting duration is controlled within a suitable range. Specifically, if the current degree of jitter of the camera module is small, the target sensitivity may be appropriately reduced to a smaller value to effectively suppress the noise of each frame of image and improve the quality of the captured image; if the current degree of jitter of the camera module is large, the target sensitivity may be appropriately raised to a larger value to shorten the shooting duration and avoid ghosting introduced by aggravated jitter.
For example, if it is determined that the current degree of jitter of the camera module is "no jitter", the sensitivity may be set to a smaller value to obtain images of higher quality, for example, a sensitivity of 100; if it is determined that the current degree of jitter of the camera module is "slight jitter", the sensitivity may be set to a larger value to reduce the shooting duration, for example, a sensitivity of 200; if it is determined that the current degree of jitter of the camera module is "small jitter", the sensitivity may be further increased to reduce the shooting duration, for example, a sensitivity of 220; and if it is determined that the current degree of jitter of the camera module is "large jitter", it can be determined that the current degree of jitter is excessive, and the sensitivity may be further increased to reduce the shooting duration, for example, a sensitivity of 250.
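Steps 303-304 can be sketched as follows: the summed absolute gyro motion S is graded against three preset thresholds (A < B < C), and each grade maps to the example ISO values given above. The function name, threshold values, and sample S values are all illustrative assumptions, not from the source.

```python
# Sketch: grade jitter by comparing S against thresholds A < B < C,
# then look up the example target ISO for that grade.
def jitter_level(s, a, b, c):
    if s < a:
        return "no jitter"
    if s < b:
        return "slight jitter"
    if s < c:
        return "small jitter"
    return "large jitter"

# Example ISO values from the paragraph above.
ISO_BY_JITTER = {"no jitter": 100, "slight jitter": 200,
                 "small jitter": 220, "large jitter": 250}

a, b, c = 0.1, 0.5, 1.0  # illustrative thresholds
for s in (0.05, 0.3, 0.8, 2.0):
    level = jitter_level(s, a, b, c)
    print(f"S={s}: {level}, ISO {ISO_BY_JITTER[level]}")
```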
Step 305: determine the exposure duration corresponding to each frame of image to be acquired according to the target exposure amount and the target sensitivity.
The exposure duration refers to the time during which light passes through the lens.
It should be noted that the exposure amount is related to the aperture, the exposure duration, and the sensitivity. The aperture, i.e., the diameter of the light-admitting opening, determines the quantity of light passing through per unit time. When the sensitivity corresponding to each frame of image to be acquired is the same and the aperture size is the same, the larger the exposure amount corresponding to an image to be acquired, the longer its corresponding exposure duration.
In the embodiment of the present application, the size of the aperture may be constant; therefore, after the target exposure amount and target sensitivity of each frame of image to be acquired are determined, the exposure duration corresponding to each frame of image to be acquired can be determined according to the target sensitivity and the target exposure amount, and the exposure duration corresponding to an image to be acquired is proportional to its target exposure amount.
In a possible implementation of the embodiment of the present application, a base exposure duration may first be determined according to the preset target sensitivity and the base exposure amount, and the exposure duration corresponding to each frame of image to be acquired may then be determined according to the base exposure duration and the preset exposure compensation mode. Specifically, assuming that the base exposure duration is T and the EV value of the i-th frame of image to be acquired is EVi, the exposure duration corresponding to the i-th frame of image to be acquired is T·2^EVi.
Further, in another possible implementation of the embodiment of the present application, in order to improve the quality of night-scene images, a variety of night scene modes suitable for night shooting scenes may also be preset directly, and when the scene tag corresponding to the current preview screen is determined to be "night scene", the current night scene mode is further determined according to the other scene tags corresponding to the regions in the current preview screen. A night scene mode includes shooting parameters used when acquiring images in that mode, such as the number of images to be acquired, the target sensitivity, and the preset exposure compensation mode; the preset night scene modes may include a tripod night scene mode, a handheld night scene mode, a portrait night scene mode, and the like.
Specifically, when the scene tag corresponding to the current preview screen is determined to be "night scene", it may be further judged whether the scene tags corresponding to the regions in the current preview screen include a "portrait" scene tag, and brightness recognition may be performed on the content of the preview screen to determine the current base exposure amount. The current night scene mode is then determined according to whether the scene tags corresponding to the regions include the "portrait" scene tag and the current degree of jitter of the camera module. For example, if the current degree of jitter of the camera module is "no jitter" and no region of the current preview screen includes the "portrait" scene tag, it can be determined that the current night scene mode is the "tripod night scene mode"; if the current degree of jitter of the camera module indicates jitter and no region of the current preview screen includes the "portrait" scene tag, it can be determined that the current night scene mode is the "handheld night scene mode"; and if any region of the current preview screen includes the "portrait" scene tag, it can be determined that the current night scene mode is the "portrait night scene mode".
It can be understood that, after the current night scene mode and the current base exposure amount are determined according to the current degree of jitter of the camera module, the scene tag corresponding to the current preview screen, and the other scene tags corresponding to the regions, the exposure duration corresponding to each frame of image to be acquired can be determined according to the number of images to be acquired, the target sensitivity, and the preset exposure compensation mode included in the night scene mode, as well as the base exposure amount.
Further, in a possible implementation of the embodiment of the present application, a duration range within which the exposure duration of each frame of image to be acquired must lie may also be set, so as to further improve the quality of the captured image, and the exposure duration of any image to be acquired that falls outside the set duration range is adjusted so that the exposure duration corresponding to each frame of image to be acquired lies within the set duration range.
Specifically, if the exposure duration of at least one frame of image to be acquired is greater than the set upper duration limit, the exposure duration of that frame is updated according to the set upper limit, wherein the value range of the set upper duration limit is 4.5 s to 5.5 s; if the exposure duration of at least one frame of original image is less than the set lower duration limit, the exposure duration of each frame of original image whose exposure duration is less than the set lower limit is updated according to the set lower limit, wherein the lower duration limit is greater than or equal to 10 ms.
As an example, assume that the set lower duration limit is 10 ms and the upper duration limit is 4.5 s, that the number of images to be acquired determined according to the current degree of jitter of the camera module is 7 frames, and that the exposure durations determined for the frames to be acquired are 220 ms, 220 ms, 220 ms, 220 ms, 100 ms, 12.5 ms, and 6.25 ms, respectively. The exposure duration of the 7th frame to be acquired is then less than the set lower limit, and the exposure duration of the frame whose exposure duration is 6.25 ms can be updated to 10 ms.
Further, after the exposure duration of an image to be acquired that is less than the set lower limit or greater than the set upper limit is updated, its exposure amount changes. As a result, the exposure duration of the updated frame may become equal or close to that of a frame whose exposure duration was not updated, i.e., their exposure amounts become equal or close, which changes the exposure compensation mode and ultimately makes the obtained target image unsatisfactory. Therefore, after the exposure duration of an image to be acquired is updated, the exposure durations or sensitivities of the other frames of images to be acquired may be modified according to the ratio of the exposure duration before and after the update.
In a possible implementation of the embodiment of the present application, the ratio between the updated and pre-update exposure durations of the image to be acquired whose exposure duration was updated may first be determined, and for each remaining frame of image to be acquired whose exposure duration is less than the set upper limit and greater than the set lower limit, the target sensitivity or exposure duration of that frame is updated according to the determined ratio. Specifically, the product of the ratio and the pre-update target sensitivity of each remaining frame may be taken as the updated target sensitivity of that frame; alternatively, the product of the ratio and the pre-update exposure duration of each remaining frame may be taken as the updated exposure duration of that frame.
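The clamping-and-rescaling procedure described above can be sketched as follows, under the simplifying assumption that only one frame falls outside the range (as in the 7-frame example): the out-of-range duration is clamped, and the in-range frames' durations are scaled by the same clamp ratio so the relative exposure-compensation pattern between frames is preserved. The function name and the choice to rescale durations (rather than sensitivities) are illustrative.

```python
# Sketch: clamp out-of-range exposure durations to [lower, upper], then scale
# the other, in-range frames by the clamp ratio (assumes one out-of-range frame).
def clamp_and_rescale(durations_ms, lower_ms=10.0, upper_ms=4500.0):
    out = list(durations_ms)
    for i, d in enumerate(durations_ms):
        if lower_ms <= d <= upper_ms:
            continue
        clamped = min(max(d, lower_ms), upper_ms)
        ratio = clamped / d  # e.g., 10 / 6.25 = 1.6
        out[i] = clamped
        for j, dj in enumerate(durations_ms):
            if j != i and lower_ms <= dj <= upper_ms:
                out[j] *= ratio
    return out

# The 7-frame example from the text: the 6.25 ms frame is clamped to 10 ms.
frames = [220, 220, 220, 220, 100, 12.5, 6.25]
print([round(d, 6) for d in clamp_and_rescale(frames)])
# [352.0, 352.0, 352.0, 352.0, 160.0, 20.0, 10.0]
```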
Step 306: successively acquire multiple frames of images according to the target sensitivity and the exposure duration corresponding to each frame of image to be acquired.
Step 307: perform synthesis processing on the multiple frames of images to generate a target image.
In the embodiment of the present application, after the exposure duration corresponding to each frame of image to be acquired is determined, multiple frames of images can be successively acquired according to the target sensitivity and the exposure durations, and the acquired multiple frames of images are synthesized to generate the target image, improving the quality of the captured image.
Further, when synthesizing the acquired multiple frames of images, different weight values may be used for different images, so that the quality of the generated target image is better. That is, in a possible implementation of the embodiment of the present application, the above step 307 may include:
performing synthesis processing on the multiple frames of images according to the preset weight value corresponding to each frame of image, to generate the target image.
It should be noted that, in a possible implementation of the embodiment of the present application, each acquired complete frame of image may be successively superimposed according to the preset weight value corresponding to each frame, to generate the composite image. The weight values corresponding to the frames may differ, so as to prevent overexposure of highlight areas while improving the overall brightness and dark-area details of the image, improving the overall quality of the captured image.
In a possible implementation of the embodiment of the present application, the weight value corresponding to each frame of image may be preset according to the exposure compensation mode (i.e., EV value) corresponding to that frame; that is, a mapping relationship between EV values and weight values may be preset, and the weight value corresponding to each frame is then determined according to the EV value corresponding to the frame and the preset relationship between EV values and weight values, so as to synthesize the acquired multiple frames of images and generate the composite image.
In actual use, the weight value corresponding to each frame of image may be preset according to actual needs, which is not limited in the embodiments of the present application.
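The EV-keyed weighted superposition described above can be sketched as a normalized weighted average of per-pixel values. The EV-to-weight mapping, the function name, and the sample pixel data are all hypothetical; the source only states that such a mapping is preset.

```python
# Sketch: fuse frames by a weighted average, with each frame's weight looked
# up from its EV value. The EV->weight table is a made-up example.
EV_WEIGHTS = {1: 0.2, 0: 0.3, -3: 0.25, -6: 0.25}  # hypothetical mapping

def fuse(frames, evs):
    """frames: list of equal-length pixel lists; evs: one EV per frame."""
    weights = [EV_WEIGHTS[ev] for ev in evs]
    total = sum(weights)
    n = len(frames[0])
    return [sum(w * f[i] for w, f in zip(weights, frames)) / total
            for i in range(n)]

# Four frames of two pixels each (bright to dark), per the example EV bracket.
frames = [[200.0, 40.0], [120.0, 30.0], [60.0, 20.0], [10.0, 5.0]]
evs = [1, 0, -3, -6]
print(fuse(frames, evs))  # [93.5, 23.25]
```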
Further, when synthesizing the acquired multiple frames of images, different synthesis models may also be used for different regions in the image, so as to further improve the quality of the composite image. That is, in a possible implementation of the embodiment of the present application, the above step 307 may include:
determining the synthesis model corresponding to each region according to the scene tag corresponding to each region;
successively performing synthesis processing on each region in the multiple frames of images according to the synthesis model corresponding to each region, to generate the target image.
In a possible implementation of the embodiment of the present application, when synthesizing the acquired multiple frames of images, regions of the image with different scene tags may use different synthesis models. For example, the synthesis model of the regions corresponding to the scene tag of the current preview screen may differ from that of the other regions, and the synthesis model of the regions corresponding to the scene tag of the preview screen may be preset to a better synthesis model, so as to further improve the image quality of those regions.
For example, if the current preview screen is divided into 3 regions, the scene tag corresponding to the first region is "building", and the scene tags corresponding to the second and third regions are "portrait", it can be determined that the scene tag corresponding to the current preview screen is "portrait". When synthesizing the acquired multiple frames of images, the image synthesis models corresponding to the second and third regions in each frame are then the same, while the synthesis model corresponding to the second and third regions differs from that of the first region, for example, in the corresponding weight values.
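The region-to-model assignment in this example can be sketched as a simple lookup: regions whose tag matches the frame-level scene tag share one synthesis model, while the other regions use another. The model names ("primary"/"default") and the function name are made up for illustration.

```python
# Sketch: assign a synthesis model per region; regions matching the
# frame-level scene tag get the (presumably better) "primary" model.
def synthesis_model_per_region(region_tags, frame_tag):
    return {r: ("primary" if t == frame_tag else "default")
            for r, t in region_tags.items()}

region_tags = {"region1": "building", "region2": "portrait", "region3": "portrait"}
print(synthesis_model_per_region(region_tags, "portrait"))
# {'region1': 'default', 'region2': 'primary', 'region3': 'primary'}
```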
With the scene recognition method provided by the embodiments of the present application, the current preview screen can be divided into a plurality of regions according to a preset rule, and scene recognition is performed on the regions respectively to determine the scene tag corresponding to the current preview screen. The current number of images to be acquired and the target exposure amount corresponding to each frame of image to be acquired are then determined according to the scene tag corresponding to the current preview screen, the target sensitivity is determined according to the current degree of jitter of the camera module, the exposure duration corresponding to each frame of image to be acquired is determined according to the target exposure amount and the target sensitivity, and multiple frames of images are successively acquired and synthesized according to the target sensitivity and the exposure duration corresponding to each frame. By acquiring and synthesizing multiple frames of images according to the scene tag corresponding to the current preview screen to generate the captured image, not only is the accuracy of scene recognition improved, but multiple frames of images can also be acquired and synthesized in the target shooting mode corresponding to the determined scene tag, further improving the quality of the captured image and the user experience.
In order to implement the above embodiments, the present application further proposes a scene recognition device.
Fig. 4 is a structural schematic diagram of a scene recognition device provided by the embodiments of the present application.
As shown in Fig. 4, the scene recognition device 40 includes:
a division module 41, configured to divide the current preview screen into N regions according to a preset rule, wherein N is a positive integer greater than 1;
an identification module 42, configured to perform scene recognition on the N regions respectively by using a preset scene recognition model, to determine the scene tag corresponding to each region;
a first determining module 43, configured to determine the scene tag corresponding to the current preview screen according to the scene tag corresponding to each region;
a second determining module 44, configured to determine a target shooting mode according to the scene tag corresponding to the current preview screen.
In actual use, the scene recognition device provided by the embodiments of the present application can be configured in any electronic equipment to execute the aforementioned scene recognition method.
The scene recognition device provided by the embodiments of the present application can divide the current preview screen into a plurality of regions according to a preset rule, perform scene recognition on the regions respectively by using a preset scene recognition model to determine the scene tag corresponding to each region, then determine the scene tag corresponding to the current preview screen according to the scene tag corresponding to each region, and determine the target shooting mode according to the scene tag corresponding to the current preview screen. By dividing the current preview screen into a plurality of regions, performing scene recognition on each region respectively, and then determining the scene tag corresponding to the current preview screen according to the scene tag corresponding to each region, mutual interference among multiple kinds of image content in the preview screen is reduced, the accuracy of scene recognition is improved, and the user experience is improved.
In a kind of possible way of realization of the application, above-mentioned first determining module 43 is specifically used for:
The corresponding scene tag in each region is counted, determines the corresponding each field of the current preview screen The quantity of scape label;
By the maximum scene tag of quantity, it is determined as the corresponding scene tag of the current preview screen.
Further, in another possible implementation of the present application, if the scene tags with the largest quantity include both a first scene tag and a second scene tag, the scene recognition apparatus 40 further includes:
a third determining module, configured to determine the confidence of the scene tag corresponding to each region.
Correspondingly, the first determining module 43 is further configured to:
determine a first total confidence of the first scene tag and a second total confidence of the second scene tag corresponding to the current preview screen; and
determine the scene tag corresponding to the larger of the first total confidence and the second total confidence as the scene tag corresponding to the current preview screen.
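The confidence-based tie-break can be sketched as follows. The tags and per-region confidence values are hypothetical; the patent only states that the per-region confidences come from the scene recognition model.

```python
# Hypothetical per-region (scene_tag, confidence) pairs with a tie:
# "sky" and "grass" each label three of the six regions.
regions = [
    ("sky", 0.9), ("sky", 0.8), ("sky", 0.7),
    ("grass", 0.95), ("grass", 0.6), ("grass", 0.5),
]

# Sum the confidences of the regions carrying each tied tag.
totals = {}
for tag, conf in regions:
    totals[tag] = totals.get(tag, 0.0) + conf

# The tag with the larger total confidence wins the tie and becomes
# the scene tag of the current preview screen.
frame_tag = max(totals, key=totals.get)
```

Even though "grass" holds the single most confident region (0.95), "sky" wins because its total confidence across its regions is higher.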
In a possible implementation of the present application, the dividing module 41 is specifically configured to:
divide the current preview screen into N regions of different areas.
Correspondingly, the first determining module 43 is further configured to:
count the scene tags corresponding to the respective regions, to determine the quantity of each scene tag corresponding to the current preview screen; and
determine the scene tag corresponding to the current preview screen according to the quantity of each scene tag and the total area of the regions corresponding to each scene tag.
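When the regions have different areas, both the tag quantity and the total covered area feed into the decision. The sketch below uses hypothetical regions, and the ranking rule (quantity first, total area as tie-breaker) is an assumption; the patent only states that the scene tag is determined from both quantities, without fixing the exact formula.

```python
# Hypothetical regions of unequal area: each entry is a
# (scene_tag, region_area_in_pixels) pair (illustrative values only).
regions = [
    ("sky", 40_000), ("portrait", 120_000),
    ("sky", 30_000), ("grass", 20_000),
]

# Accumulate, per scene tag, both the region count and the total area
# of the regions carrying that tag.
counts, areas = {}, {}
for tag, area in regions:
    counts[tag] = counts.get(tag, 0) + 1
    areas[tag] = areas.get(tag, 0) + area

# One plausible combination rule (an assumption, as noted above):
# rank tags by quantity first, breaking ties by total region area.
frame_tag = max(counts, key=lambda t: (counts[t], areas[t]))
```

With these values "sky" appears in two regions totalling 70,000 pixels, so it outranks "portrait" despite the portrait region being the single largest one.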
In a possible implementation of the present application, the second determining module 44 is specifically configured to:
determine the number of images currently to be collected and the target exposure amount of each frame of image to be collected;
determine a target sensitivity according to the current degree of jitter of the camera module; and
determine the exposure time of each frame of image to be collected according to the target exposure amount and the target sensitivity.
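One plausible reading of the relation between the target exposure amount, the target sensitivity, and the per-frame exposure time is the usual photographic reciprocity rule, exposure ≈ sensitivity × time. Under that assumption (the patent only states that the exposure time is determined from the two quantities, without giving a formula), the derivation could look like this:

```python
def exposure_time_per_frame(target_exposure: float, target_iso: float) -> float:
    """Derive the exposure time of each frame to be collected from the
    target exposure amount and the target sensitivity, assuming the
    reciprocity relation exposure = sensitivity * time.
    (The formula is an assumption; see the lead-in above.)"""
    return target_exposure / target_iso

# Stronger jitter of the camera module calls for a higher sensitivity,
# which under this model shortens each frame's exposure time; a steady
# module allows a lower sensitivity and a longer exposure time.
steady_time = exposure_time_per_frame(target_exposure=3200.0, target_iso=100.0)
shaky_time = exposure_time_per_frame(target_exposure=3200.0, target_iso=400.0)
```

This matches the stated design intent: raising the sensitivity when the module shakes trades noise for a shorter, blur-resistant exposure, while the total exposure amount per frame stays on target.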
Further, in another possible implementation of the present application, the scene recognition apparatus 40 further includes:
a collecting module, configured to successively collect multiple frames of images according to the target sensitivity and the exposure time of each frame of image to be collected; and
a synthesizing module, configured to synthesize the multiple frames of images to generate a target image.
Further, in another possible implementation of the present application, the synthesizing module is specifically configured to:
determine a synthesis model corresponding to each region according to the scene tag corresponding to the region; and
successively synthesize each region of the multiple frames of images according to the synthesis model corresponding to the region, to generate the target image.
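The per-region synthesis step can be illustrated with a toy example. The one-dimensional "frames", the region layout, the scene tags, and the stand-in synthesis models are all hypothetical; the patent does not specify what a synthesis model computes, only that each region is synthesized with its own model.

```python
# Toy setup: three one-dimensional "frames" of four pixels each, and a
# preview screen split into two regions (pixels 0-1 and 2-3) whose
# scene tags are hypothetical.
frames = [
    [10, 10, 200, 200],
    [12, 12, 220, 220],
    [14, 14, 240, 240],
]
region_slices = {"sky": slice(0, 2), "portrait": slice(2, 4)}

# Stand-in synthesis models (assumptions, as noted above): average the
# frames for one tag, keep the brightest value for the other.
models = {
    "sky": lambda cols: [sum(c) / len(c) for c in cols],
    "portrait": lambda cols: [max(c) for c in cols],
}

# Apply each region's own synthesis model to that region across all
# collected frames, writing the result into the target image.
target = [0] * 4
for tag, sl in region_slices.items():
    columns = list(zip(*(frame[sl] for frame in frames)))
    target[sl] = models[tag](columns)
```

The point of the design is visible even in this toy form: each region of the target image is produced by the model suited to that region's scene, rather than one global rule for the whole picture.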
Further, in yet another possible implementation of the present application, the synthesizing module is further configured to:
synthesize the multiple frames of images according to a preset weight of each frame of image, to generate the target image.
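The weight-based alternative can be sketched as a per-pixel weighted sum across the frames. The weights and pixel values are hypothetical; the preset weights might, for instance, favour the best-exposed middle frame.

```python
# Hypothetical preset weight of each frame of image, and three
# one-dimensional "frames" of pixel values (illustrative only).
weights = [0.25, 0.5, 0.25]
frames = [
    [100, 50, 0],
    [110, 60, 10],
    [120, 70, 20],
]

# Synthesize the multiple frames into one target image by taking the
# weighted sum of the frames at every pixel position.
merged = [
    sum(w * frame[i] for w, frame in zip(weights, frames))
    for i in range(len(frames[0]))
]
```

Unlike the per-region variant above, this rule treats every pixel position identically and differentiates only between frames.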
It should be noted that the foregoing descriptions of the scene recognition method embodiments shown in Fig. 1, Fig. 2, and Fig. 3 are also applicable to the scene recognition apparatus 40 of this embodiment, and details are not repeated here.
The scene recognition apparatus provided in the embodiments of the present application divides the current preview screen into multiple regions of different areas according to a preset rule and performs scene recognition on each region to determine its corresponding scene tag; it then determines the scene tag corresponding to the current preview screen according to the quantity of each scene tag and the total area of the regions corresponding to each scene tag, and in turn determines the target shooting mode according to the scene tag of the current preview screen. By dividing the current preview screen into regions of different areas and weighting each scene tag by both its region count and its total region area, the mutual interference of the various image contents in the preview screen is reduced, the accuracy of scene recognition is further improved, and the user experience is further improved.

To realize the above embodiments, the present application further proposes an electronic device.
Fig. 5 is a schematic structural diagram of an electronic device provided in the embodiments of the present application.

As shown in Fig. 5, the electronic device 200 includes a camera module 201, a memory 210, a processor 220, and a computer program stored in the memory and executable on the processor; when the processor executes the program, the scene recognition method described in the embodiments of the present application is implemented.
As shown in Fig. 6, the electronic device 200 provided in the embodiments of the present application may further include:
a bus 230 connecting the different components, including the memory 210 and the processor 220. The memory 210 stores a computer program, and when the processor 220 executes the program, the scene recognition method described in the embodiments of the present application is implemented.
The bus 230 represents one or more of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. By way of example, these architectures include, but are not limited to, the Industry Standard Architecture (ISA) bus, the Micro Channel Architecture (MCA) bus, the Enhanced ISA bus, the Video Electronics Standards Association (VESA) local bus, and the Peripheral Component Interconnect (PCI) bus.
The electronic device 200 typically includes a variety of computer-readable media. These media may be any available media accessible by the electronic device 200, including volatile and non-volatile media, and removable and non-removable media.
The memory 210 may include computer-system-readable media in the form of volatile memory, such as a random access memory (RAM) 240 and/or a cache memory 250. The electronic device 200 may further include other removable/non-removable, volatile/non-volatile computer-system storage media. By way of example only, a storage system 260 may be provided for reading from and writing to non-removable, non-volatile magnetic media (not shown in Fig. 6, commonly called a "hard drive"). Although not shown in Fig. 6, a disk drive for reading from and writing to a removable non-volatile magnetic disk (such as a "floppy disk"), and an optical disk drive for reading from and writing to a removable non-volatile optical disk (such as a CD-ROM, DVD-ROM, or other optical media), may also be provided. In these cases, each drive may be connected to the bus 230 through one or more data media interfaces. The memory 210 may include at least one program product having a set of (for example, at least one) program modules configured to carry out the functions of the embodiments of the present application.
A program/utility 280 having a set of (at least one) program modules 270 may be stored, for example, in the memory 210. Such program modules 270 include, but are not limited to, an operating system, one or more application programs, other program modules, and program data; each of these examples, or some combination thereof, may include an implementation of a network environment. The program modules 270 generally perform the functions and/or methods of the embodiments described in the present application.
The electronic device 200 may also communicate with one or more external devices 290 (such as a keyboard, a pointing device, a display 291, etc.), with one or more devices that enable a user to interact with the electronic device 200, and/or with any device (such as a network card, a modem, etc.) that enables the electronic device 200 to communicate with one or more other computing devices. Such communication may occur through an input/output (I/O) interface 292. Moreover, the electronic device 200 may also communicate with one or more networks (such as a local area network (LAN), a wide area network (WAN), and/or a public network such as the Internet) through a network adapter 293. As shown, the network adapter 293 communicates with the other modules of the electronic device 200 through the bus 230. It should be understood that, although not shown in the drawings, other hardware and/or software modules may be used in conjunction with the electronic device 200, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, data backup storage systems, and the like.
The processor 220 performs various functional applications and data processing by running the programs stored in the memory 210.

It should be noted that, for the implementation process and technical principles of the electronic device of this embodiment, reference may be made to the foregoing descriptions of the scene recognition method of the embodiments of the present application, and details are not repeated here.
The electronic device provided in the embodiments of the present application can execute the foregoing scene recognition method: it divides the current preview screen into multiple regions according to a preset rule, performs scene recognition on each region using a preset scene recognition model to determine the scene tag corresponding to each region, then determines the scene tag corresponding to the current preview screen according to the per-region scene tags, and in turn determines the target shooting mode. By dividing the current preview screen into multiple regions and performing scene recognition on each region separately, the scene tag of the current preview screen can be determined from the scene tags of the individual regions, which reduces the mutual interference of the various image contents in the preview screen, improves the accuracy of scene recognition, and improves the user experience.
To realize the above embodiments, the present application further proposes a computer-readable storage medium.

The computer-readable storage medium stores a computer program which, when executed by a processor, implements the scene recognition method described in the embodiments of the present application.

To realize the above embodiments, a further aspect of the present application provides a computer program which, when executed by a processor, implements the scene recognition method described in the embodiments of the present application.
In an optional implementation, this embodiment may adopt any combination of one or more computer-readable media. A computer-readable medium may be a computer-readable signal medium or a computer-readable storage medium. A computer-readable storage medium may be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium include: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In this document, a computer-readable storage medium may be any tangible medium that contains or stores a program for use by, or in connection with, an instruction execution system, apparatus, or device.
A computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying computer-readable program code. Such a propagated data signal may take various forms, including, but not limited to, an electromagnetic signal, an optical signal, or any suitable combination thereof. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium that can send, propagate, or transport a program for use by, or in connection with, an instruction execution system, apparatus, or device.

The program code contained on a computer-readable medium may be transmitted using any appropriate medium, including, but not limited to, wireless, wireline, optical cable, RF, or any suitable combination of the foregoing.
Computer program code for carrying out the operations of the present application may be written in one or more programming languages or a combination thereof, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar languages. The program code may execute entirely on a consumer electronic device, partly on a consumer electronic device, as a stand-alone software package, partly on a consumer electronic device and partly on a remote electronic device, or entirely on a remote electronic device or server. In scenarios involving a remote electronic device, the remote electronic device may be connected to the consumer electronic device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external electronic device (for example, through the Internet using an Internet service provider).
Other embodiments of the present application will readily occur to those skilled in the art upon consideration of the specification and practice of the invention disclosed herein. The present application is intended to cover any variations, uses, or adaptations thereof that follow its general principles and include such departures from the present disclosure as come within common knowledge or customary technical means in the art. The specification and examples are to be considered exemplary only, with the true scope and spirit of the application indicated by the claims.

It should be understood that the present application is not limited to the precise structures described above and shown in the accompanying drawings, and that various modifications and changes may be made without departing from its scope. The scope of the present application is limited only by the appended claims.

Claims (11)

1. A scene recognition method, comprising:
dividing a current preview screen into N regions according to a preset rule, wherein N is a positive integer greater than 1;
performing scene recognition on the N regions respectively using a preset scene recognition model, to determine a scene tag corresponding to each region;
determining a scene tag corresponding to the current preview screen according to the scene tag corresponding to each region; and
determining a target shooting mode according to the scene tag corresponding to the current preview screen.
2. The method according to claim 1, wherein determining the scene tag corresponding to the current preview screen according to the scene tag corresponding to each region comprises:
counting the scene tags corresponding to the respective regions, to determine a quantity of each scene tag corresponding to the current preview screen; and
determining the scene tag with the largest quantity as the scene tag corresponding to the current preview screen.
3. The method according to claim 2, wherein, if the scene tags with the largest quantity include a first scene tag and a second scene tag, the method further comprises, after determining the scene tag corresponding to each region:
determining a confidence of the scene tag corresponding to each region;
wherein determining the scene tag corresponding to the current preview screen comprises:
determining a first total confidence of the first scene tag and a second total confidence of the second scene tag corresponding to the current preview screen; and
determining the scene tag corresponding to the larger of the first total confidence and the second total confidence as the scene tag corresponding to the current preview screen.
4. The method according to claim 1, wherein dividing the current preview screen into N regions comprises:
dividing the current preview screen into N regions of different areas;
and wherein determining the scene tag corresponding to the current preview screen comprises:
counting the scene tags corresponding to the respective regions, to determine a quantity of each scene tag corresponding to the current preview screen; and
determining the scene tag corresponding to the current preview screen according to the quantity of each scene tag and a total area of the regions corresponding to each scene tag.
5. The method according to any one of claims 1-4, wherein determining the target shooting mode comprises:
determining a number of images currently to be collected and a target exposure amount of each frame of image to be collected;
determining a target sensitivity according to a current degree of jitter of a camera module; and
determining an exposure time of each frame of image to be collected according to the target exposure amount and the target sensitivity.
6. The method according to claim 5, further comprising, after determining the target shooting mode:
successively collecting multiple frames of images according to the target sensitivity and the exposure time of each frame of image to be collected; and
synthesizing the multiple frames of images to generate a target image.
7. The method according to claim 6, wherein synthesizing the multiple frames of images to generate the target image comprises:
determining a synthesis model corresponding to each region according to the scene tag corresponding to the region; and
successively synthesizing each region of the multiple frames of images according to the synthesis model corresponding to the region, to generate the target image.
8. The method according to claim 6, wherein synthesizing the multiple frames of images to generate the target image comprises:
synthesizing the multiple frames of images according to a preset weight of each frame of image, to generate the target image.
9. A scene recognition apparatus, comprising:
a dividing module, configured to divide a current preview screen into N regions according to a preset rule, wherein N is a positive integer greater than 1;
a recognition module, configured to perform scene recognition on the N regions respectively using a preset scene recognition model, to determine a scene tag corresponding to each region;
a first determining module, configured to determine a scene tag corresponding to the current preview screen according to the scene tag corresponding to each region; and
a second determining module, configured to determine a target shooting mode according to the scene tag corresponding to the current preview screen.
10. An electronic device, comprising: a camera module, a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein, when the processor executes the computer program, the scene recognition method according to any one of claims 1-8 is implemented.
11. A computer-readable storage medium storing a computer program, wherein, when the program is executed by a processor, the scene recognition method according to any one of claims 1-8 is implemented.
CN201910194130.6A 2019-03-14 2019-03-14 Scene recognition method and device, electronic equipment and storage medium Active CN109919116B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910194130.6A CN109919116B (en) 2019-03-14 2019-03-14 Scene recognition method and device, electronic equipment and storage medium


Publications (2)

Publication Number Publication Date
CN109919116A true CN109919116A (en) 2019-06-21
CN109919116B CN109919116B (en) 2022-05-17

Family

ID=66964946

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910194130.6A Active CN109919116B (en) 2019-03-14 2019-03-14 Scene recognition method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN109919116B (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111050076A (en) * 2019-12-26 2020-04-21 维沃移动通信有限公司 Shooting processing method and electronic equipment
CN111083364A (en) * 2019-12-18 2020-04-28 华为技术有限公司 Control method, electronic equipment, computer readable storage medium and chip
CN111093023A (en) * 2019-12-19 2020-05-01 维沃移动通信有限公司 Video shooting method and electronic equipment
CN111242230A (en) * 2020-01-17 2020-06-05 腾讯科技(深圳)有限公司 Image processing method and image classification model training method based on artificial intelligence
CN112926512A (en) * 2021-03-25 2021-06-08 深圳市无限动力发展有限公司 Environment type identification method and device and computer equipment

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103929597A (en) * 2014-04-30 2014-07-16 杭州摩图科技有限公司 Shooting assisting method and device
US9129148B1 (en) * 2012-11-09 2015-09-08 Orbeus Inc. System, method and apparatus for scene recognition
JP2017117025A (en) * 2015-12-22 2017-06-29 キヤノン株式会社 Pattern identification method, device thereof, and program thereof
CN108875820A (en) * 2018-06-08 2018-11-23 Oppo广东移动通信有限公司 Information processing method and device, electronic equipment, computer readable storage medium
CN108900782A (en) * 2018-08-22 2018-11-27 Oppo广东移动通信有限公司 Exposal control method, device and electronic equipment
CN109086742A (en) * 2018-08-27 2018-12-25 Oppo广东移动通信有限公司 scene recognition method, scene recognition device and mobile terminal
CN109194882A (en) * 2018-08-22 2019-01-11 Oppo广东移动通信有限公司 Image processing method, device, electronic equipment and storage medium
CN109191403A (en) * 2018-09-07 2019-01-11 Oppo广东移动通信有限公司 Image processing method and device, electronic equipment, computer readable storage medium
CN109218609A (en) * 2018-07-23 2019-01-15 麒麟合盛网络技术股份有限公司 Image composition method and device

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111083364A (en) * 2019-12-18 2020-04-28 华为技术有限公司 Control method, electronic equipment, computer readable storage medium and chip
CN111093023A (en) * 2019-12-19 2020-05-01 维沃移动通信有限公司 Video shooting method and electronic equipment
CN111093023B (en) * 2019-12-19 2022-03-08 维沃移动通信有限公司 Video shooting method and electronic equipment
CN111050076A (en) * 2019-12-26 2020-04-21 维沃移动通信有限公司 Shooting processing method and electronic equipment
WO2021129640A1 (en) * 2019-12-26 2021-07-01 维沃移动通信有限公司 Method for photographing processing, and electronic apparatus
CN111242230A (en) * 2020-01-17 2020-06-05 腾讯科技(深圳)有限公司 Image processing method and image classification model training method based on artificial intelligence
CN112926512A (en) * 2021-03-25 2021-06-08 深圳市无限动力发展有限公司 Environment type identification method and device and computer equipment
CN112926512B (en) * 2021-03-25 2024-03-15 深圳市无限动力发展有限公司 Environment type identification method and device and computer equipment

Also Published As

Publication number Publication date
CN109919116B (en) 2022-05-17

Similar Documents

Publication Publication Date Title
CN109218628A (en) Image processing method, device, electronic equipment and storage medium
CN109005366B (en) Night scene shooting processing method and device for camera module, electronic equipment and storage medium
CN109348089A (en) Night scene image processing method, device, electronic equipment and storage medium
CN109919116A (en) Scene recognition method, device, electronic equipment and storage medium
CN109218627A (en) Image processing method, device, electronic equipment and storage medium
US10944901B2 (en) Real time assessment of picture quality
US11532076B2 (en) Image processing method, electronic device and storage medium
CN109729274B (en) Image processing method, image processing device, electronic equipment and storage medium
CN108777767A (en) Photographic method, device, terminal and computer readable storage medium
CN105227857B (en) A kind of method and apparatus of automatic exposure
CN108495050A (en) Photographic method, device, terminal and computer readable storage medium
CN109361853A (en) Image processing method, device, electronic equipment and storage medium
CN108900782A (en) Exposal control method, device and electronic equipment
CN109714539B (en) Image acquisition method and device based on gesture recognition and electronic equipment
CN109688340A (en) Time for exposure control method, device, electronic equipment and storage medium
CN109618102A (en) Focusing process method, apparatus, electronic equipment and storage medium
CN110971833B (en) Image processing method and device, electronic equipment and storage medium
CN110971814B (en) Shooting adjustment method and device, electronic equipment and storage medium
CN109995999A (en) Scene recognition method, device, electronic equipment and storage medium
CN110971813B (en) Focusing method and device, electronic equipment and storage medium
CA2769367C (en) Simulated incident light meter on a mobile device for photography/cinematography
CN110971812B (en) Shooting method and device, electronic equipment and storage medium
JP2023062380A (en) Imaging apparatus, method for controlling imaging apparatus, and program
CN109167918A (en) Information processing method and its device

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant