CN109101931A - Scene recognition method, scene recognition apparatus, and terminal device - Google Patents
Scene recognition method, scene recognition apparatus, and terminal device
- Publication number
- CN109101931A (application no. CN201810947235.XA)
- Authority
- CN
- China
- Prior art keywords
- picture
- scene
- identified
- scene type
- mentioned
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/35—Categorising the entire scene, e.g. birthday party or wedding scene
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/243—Classification techniques relating to the number of classes
- G06F18/2431—Multiple classes
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Artificial Intelligence (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Bioinformatics & Computational Biology (AREA)
- General Engineering & Computer Science (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Multimedia (AREA)
- Image Analysis (AREA)
Abstract
This application provides a scene recognition method, a scene recognition apparatus, and a terminal device. The method comprises: obtaining a picture to be identified; performing scene classification on the picture to be identified using a trained convolutional neural network model; if the convolutional neural network model identifies the scene type of the picture to be identified as a preset scene type, obtaining the picture-capture parameters of the picture to be identified; judging whether the picture-capture parameters fall within the preset parameter range corresponding to the preset scene type; and, if they do, confirming that the scene type of the picture to be identified is the preset scene type. Because the scene recognition method provided herein does not rely entirely on picture content, it can, to a certain extent, improve the accuracy of recognizing the scene type of a pseudo scene.
Description
Technical field
The present application belongs to the technical field of image processing, and more particularly relates to a scene recognition method, a scene recognition apparatus, a terminal device, and a computer-readable storage medium.
Background art
At present, common scene recognition methods include feature-based methods and neural-network-based methods. A feature-based method identifies specific features in a picture and determines the scene type of the picture from the features recognized; a neural-network-based method identifies the scene type of a picture using a neural network model trained in advance.
The feature-based method is fast and the neural-network-based method is highly accurate; however, both rely solely on picture content. In practical applications, many scenes contain the features of another scene (for ease of description, a scene containing the features of another scene is referred to below as a pseudo scene). For example, an indoor scene may exhibit outdoor features, such as an indoor set decorated to simulate a seashore. In such cases, conventional scene recognition methods cannot correctly identify the scene type of the pseudo scene.
Summary of the invention
In view of this, the present application provides a scene recognition method, a scene recognition apparatus, a terminal device, and a computer-readable storage medium that can improve the accuracy of recognizing the scene type of a pseudo scene.
A first aspect of the present application provides a scene recognition method, comprising:
obtaining a picture to be identified;
performing scene classification on the picture to be identified using a trained convolutional neural network model;
if the convolutional neural network model identifies the scene type of the picture to be identified as a preset scene type, obtaining the picture-capture parameters of the picture to be identified;
judging whether the picture-capture parameters fall within the preset parameter range corresponding to the preset scene type; and
if the picture-capture parameters fall within the preset parameter range, confirming that the scene type of the picture to be identified is the preset scene type.
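The steps of the first aspect can be read as a single verification pipeline: classify first, then confirm the label against the capture parameters. The sketch below is purely illustrative; the scene labels, the stand-in `classify_scene` function, and every numeric range are assumptions, not values from the patent:

```python
# Schematic of the claimed method: CNN classification followed by a
# capture-parameter verification. classify_scene and PRESET_RANGES are
# illustrative stand-ins; none of the names or ranges come from the patent.
PRESET_RANGES = {
    "indoor": {"exposure_time": (1 / 60, 1.0), "iso": (400, 6400)},
    "snowfield": {"iso": (50, 200)},
    "grass": {"exposure_time": (1 / 4000, 1 / 250)},
}

def classify_scene(picture):
    """Stand-in for the trained CNN model; returns a scene label."""
    return picture.get("cnn_label")

def recognize_scene(picture):
    scene_type = classify_scene(picture)          # steps 1-2: CNN classification
    ranges = PRESET_RANGES.get(scene_type)
    if ranges is None:                            # not a preset scene type
        return None
    params = picture["capture_params"]            # step 3: capture parameters
    for name, (low, high) in ranges.items():      # step 4: range check
        if not low <= params.get(name, low - 1) <= high:
            return None                           # verification failed
    return scene_type                             # step 5: confirmed

picture = {"cnn_label": "indoor",
           "capture_params": {"exposure_time": 1 / 30, "iso": 800}}
print(recognize_scene(picture))  # indoor
```

Returning `None` when verification fails mirrors the claim structure, which only confirms the label on success and leaves the failure path to the dependent claims.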
A second aspect of the present application provides a scene recognition apparatus, comprising:
a picture obtaining module, configured to obtain a picture to be identified;
a scene classification module, configured to perform scene classification on the picture to be identified using a trained convolutional neural network model;
a parameter obtaining module, configured to obtain the picture-capture parameters of the picture to be identified if the convolutional neural network model identifies its scene type as a preset scene type;
a parameter judging module, configured to judge whether the picture-capture parameters fall within the preset parameter range corresponding to the preset scene type; and
a scene confirmation module, configured to confirm that the scene type of the picture to be identified is the preset scene type if the picture-capture parameters fall within the preset parameter range.
A third aspect of the present application provides a terminal device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the method of the first aspect.
A fourth aspect of the present application provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the method of the first aspect.
A fifth aspect of the present application provides a computer program product comprising a computer program which, when executed by one or more processors, implements the steps of the method of the first aspect.
As can be seen, the present application provides a scene recognition method. First, a picture to be identified is obtained, and a convolutional neural network model trained in advance performs scene classification on it; for example, the trained model may determine the scene type of the picture to be an indoor scene type, a beach scene type, or a grass scene type. Second, if the model identifies the scene type of the picture as a preset scene type, the picture-capture parameters of the picture are further obtained, that is, the shooting parameters of the camera at the time the picture was captured, such as the exposure time and/or the sensitivity. Then, the method judges whether the picture-capture parameters of the picture fall within the preset parameter range corresponding to the preset scene type; if so, the scene type of the picture is confirmed to be the preset scene type. Thus, the scene recognition method provided herein first classifies the picture with the trained convolutional neural network model to obtain a candidate scene type, and then uses the picture-capture parameters to verify whether that classification is correct. (In general, a camera uses different shooting parameters in different scenes to obtain a better shot; for example, because the ambient brightness of an indoor scene is relatively low, the exposure time is usually long, so the camera's shooting parameters — that is, the picture-capture parameters — can assist in judging the scene type.) Only when the verification succeeds is the scene type of the picture confirmed to be the one identified by the convolutional neural network model. Because the scene recognition method provided herein does not rely entirely on picture content, it can, to a certain extent, improve the accuracy of recognizing the scene type of a pseudo scene.
Brief description of the drawings
To explain the technical solutions in the embodiments of the present application more clearly, the drawings needed for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application; those of ordinary skill in the art may derive other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of a scene recognition method provided in Embodiment One of the present application;
Fig. 2 is a schematic flowchart of the training process of the convolutional neural network model provided in Embodiment One;
Fig. 3 is a schematic diagram of the training process of the convolutional neural network model provided in Embodiment One;
Fig. 4 is a schematic diagram of the correspondence table provided in Embodiment One;
Fig. 5 is a schematic flowchart of another scene recognition method provided in Embodiment Two;
Fig. 6 is a schematic structural diagram of a scene recognition apparatus provided in Embodiment Three;
Fig. 7 is a schematic structural diagram of a terminal device provided in an embodiment of the present application.
Detailed description of the embodiments
In the following description, specific details such as particular system structures and technologies are set forth for the purpose of illustration rather than limitation, so as to provide a thorough understanding of the embodiments of the present application. However, it will be clear to those skilled in the art that the present application may also be implemented in other embodiments without these specific details. In other cases, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
The scene recognition method provided in the embodiments of the present application is applicable to terminal devices. Illustratively, the terminal device includes, but is not limited to, a smartphone, a tablet computer, a learning machine, a smart wearable device, and the like.
It should be understood that, when used in this specification and the appended claims, the term "comprising" indicates the presence of the described features, integers, steps, operations, elements, and/or components, but does not exclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or sets thereof.
It should also be understood that the terminology used in this specification is for the purpose of describing particular embodiments only and is not intended to limit the application. As used in this specification and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should further be understood that the term "and/or" used in this specification and the appended claims refers to, and includes, any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted, depending on the context, as "when", "once", "in response to determining", or "in response to detecting". Similarly, the phrases "if it is determined" or "if [the described condition or event] is detected" may be interpreted, depending on the context, as "once it is determined", "in response to determining", "once [the described condition or event] is detected", or "in response to detecting [the described condition or event]".
In specific implementations, the terminal device described in the embodiments of the present application includes, but is not limited to, portable devices such as mobile phones, laptop computers, or tablet computers having a touch-sensitive surface (for example, a touch-screen display and/or a touchpad). It should also be understood that, in certain embodiments, the device may not be a portable communication device but a desktop computer having a touch-sensitive surface (for example, a touch-screen display and/or a touchpad).
In the following discussion, a terminal device including a display and a touch-sensitive surface is described. However, it should be understood that the terminal device may include one or more other physical user-interface devices such as a physical keyboard, a mouse, and/or a joystick.
The terminal device supports various applications, such as one or more of the following: a drawing application, a presentation application, a word-processing application, a website-creation application, a disc-burning application, a spreadsheet application, a game application, a telephone application, a video-conferencing application, an e-mail application, an instant-messaging application, a fitness application, a photo-management application, a digital camera application, a digital video camera application, a web-browsing application, a digital music player application, and/or a video player application.
The various applications executable on the terminal device may use at least one common physical user-interface device, such as the touch-sensitive surface. One or more functions of the touch-sensitive surface and the corresponding information displayed on the terminal may be adjusted and/or changed from one application to the next and/or within a given application. In this way, a common physical architecture of the terminal (such as the touch-sensitive surface) can support the various applications with user interfaces that are intuitive and transparent to the user.
In addition, in the description of the present application, the terms "first", "second", and the like are used only to distinguish between descriptions and are not to be understood as indicating or implying relative importance.
To illustrate the technical solutions of the present application, specific embodiments are described below.
Embodiment one
A scene recognition method provided in Embodiment One of the present application is described below. Referring to Fig. 1, the scene recognition method in Embodiment One includes:
In step S101, a picture to be identified is obtained.
In this embodiment, in order to determine the scene type of the picture to be identified, the subsequent steps need to obtain its picture-capture parameters (the shooting parameters of the camera at the time the picture was captured); the picture to be identified is therefore a picture captured by a camera.
This embodiment does not restrict the source of the picture to be identified. It may be a frame from the preview stream of a local camera — for example, a frame captured after the user starts the local camera application; or a picture taken by the user with the local camera — for example, a picture shot through the camera application; or a picture newly received through another application — for example, a picture sent by a contact and received by the user in WeChat; or a picture downloaded by the user from the Internet — for example, through a browser over a public carrier network; or a frame of a video — for example, a frame of a TV series the user is watching.
In step S102, scene classification is performed on the picture to be identified using the trained convolutional neural network model.
In this embodiment, a convolutional neural network (CNN) model for scene classification needs to be trained in advance; the trained CNN model is obtained by training on the sample pictures in a database and the scene type corresponding to each sample picture. Illustratively, the training process of the CNN model may be as shown in Fig. 2 and includes steps S201-S204:
In step S201, each sample picture and its corresponding scene type are obtained in advance.
Suppose the scene types the trained CNN model can identify are the indoor, snowfield, and grass scene types. Each sample picture is then classified according to these recognizable scene types to obtain its corresponding scene type. As shown in Fig. 3, sample picture 1 corresponds to the grass scene type, sample picture 2 to the indoor scene type, sample picture 3 to the snowfield scene type, and sample picture 4 to the indoor scene type.
In step S202, each sample picture is input into the initial convolutional neural network model so that the initial model performs scene classification on it.
As shown in Fig. 3, sample pictures 1-4 obtained in step S201 are input into the initial CNN model, which classifies each of them to produce classification results; for instance, the initial model might classify sample pictures 1, 2, 3, and 4 all as the indoor scene type.
In step S203, the classification accuracy of the initial convolutional neural network model is determined from the scene types of the sample pictures obtained in advance.
Take sample picture 1 as an example: step S201 established that it belongs to the grass scene type; if the classification result output by the initial CNN model in step S202 indicates that it is of the indoor scene type, the initial model is considered to have misclassified sample picture 1. All sample pictures are traversed, and the proportion of sample pictures the initial CNN model classified correctly among all sample pictures is counted; this proportion can be taken as the classification accuracy.
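The accuracy measure of step S203 is simply the fraction of sample pictures classified correctly. A minimal sketch using the Fig. 3 example, with an untrained model assumed (for illustration) to output "indoor" for everything:

```python
def classification_accuracy(true_labels, predicted_labels):
    """Step S203: fraction of sample pictures classified correctly."""
    correct = sum(t == p for t, p in zip(true_labels, predicted_labels))
    return correct / len(true_labels)

# The Fig. 3 example: the untrained model classifies every sample picture as
# "indoor", so only the two genuinely indoor samples count as correct.
truth = ["grass", "indoor", "snowfield", "indoor"]
preds = ["indoor", "indoor", "indoor", "indoor"]
print(classification_accuracy(truth, preds))  # 0.5
```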
In step S204, the parameters of the current convolutional neural network model are adjusted repeatedly, and the adjusted model re-classifies the sample pictures, until the classification accuracy of the adjusted model exceeds a preset accuracy; the current model is then taken as the trained convolutional neural network model.
In general, the classification accuracy of the initial CNN model is low, so its parameters must be adjusted; the sample pictures obtained in step S201 are input into the adjusted CNN model again, the classification accuracy of the adjusted model is recomputed, and the parameters of the current CNN model keep being adjusted until its classification accuracy exceeds the preset accuracy, at which point the current model is taken as the trained CNN model. In the embodiments of the present application, common parameter-adjustment methods such as stochastic gradient descent or weight-update algorithms can be used to adjust the parameters of the current CNN model.
In step S103, if the convolutional neural network model identifies the scene type of the picture to be identified as a preset scene type, the picture-capture parameters of the picture to be identified are obtained.
In this embodiment, the preset scene type is any one of the scene types the trained CNN model can recognize. For example, if the trained CNN model can identify the indoor, grass, and snowfield scene types, the preset scene type is the indoor, grass, or snowfield scene type.
If the trained CNN model recognizes the scene type of the picture to be identified, the picture-capture parameters of the picture (the parameters of the camera that captured it, such as the camera sensitivity and/or exposure time) are obtained so that the subsequent steps can judge from them whether the scene type identified by the trained CNN model is correct.
If the picture to be identified is one currently captured by the camera (for example, a frame captured after the user starts the camera or a camera application), the camera's current shooting parameters can serve as the picture-capture parameters of the picture. If the picture is not one currently captured by the camera, the shooting parameters used when it was taken can be looked up in the picture's attribute information and used as its picture-capture parameters.
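Looking up shooting parameters in a picture's attribute information usually means reading its EXIF tags. A sketch assuming the tags have already been decoded into a tag-id-to-value mapping (in practice obtainable via an EXIF reader such as Pillow's `Image.getexif()`); `ExposureTime` (0x829A) and `ISOSpeedRatings` (0x8827) are the standard EXIF tag IDs:

```python
# Illustrative lookup of shooting parameters in a picture's attribute
# (EXIF) information; the tag dict would in practice come from an EXIF
# reader such as Pillow's Image.getexif().
EXPOSURE_TIME = 0x829A   # EXIF ExposureTime tag, value in seconds
ISO_SPEED = 0x8827       # EXIF ISOSpeedRatings tag (camera sensitivity)

def capture_params_from_exif(exif_tags):
    """Return (exposure_time, iso) from a tag-id -> value mapping;
    None is returned for any parameter the picture does not record."""
    return exif_tags.get(EXPOSURE_TIME), exif_tags.get(ISO_SPEED)

exif = {0x829A: 1 / 30, 0x8827: 800}     # parameters of a picture shot indoors
print(capture_params_from_exif(exif))    # (0.0333..., 800)
```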
In step S104, it is judged whether the picture-capture parameters fall within the preset parameter range corresponding to the preset scene type.
In this embodiment, a correspondence table can be stored in the terminal device before it leaves the factory. The table records the correspondence between each preset scene type and each preset parameter range, with one preset parameter range per preset scene type; Fig. 4 is a schematic diagram of such a correspondence table.
First, the preset parameter range corresponding to the scene type of the picture identified by the trained CNN model is looked up in the correspondence table; then it is judged whether the picture-capture parameters obtained in step S103 fall within the range found.
In addition, as shown in Fig. 4, if the trained CNN model identifies the scene type of the picture to be identified as the grass scene type, only the camera's exposure time needs to be checked; if it identifies the snowfield scene type, the camera's sensitivity needs to be checked; and if it identifies the indoor scene type, both the exposure time and the sensitivity of the camera need to be checked. The picture-capture parameters that need checking therefore differ between preset scene types, so in step S103 the picture-capture parameters to obtain can be determined from the preset correspondence table: if the table indicates that only the camera's exposure time needs checking, only the exposure time of the picture to be identified need be obtained, and the remaining parameters that need no checking need not be obtained.
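The Fig. 4 correspondence table can be represented as a mapping from each preset scene type to the capture parameters that must be checked and a preset range for each; the concrete ranges below are invented placeholders, since the patent does not give numeric values:

```python
# Sketch of the Fig. 4 correspondence table: each preset scene type maps to
# the capture parameters that must be checked and a preset range for each.
# All numeric ranges are invented placeholders.
CORRESPONDENCE_TABLE = {
    "grass": {"exposure_time": (1 / 4000, 1 / 250)},                 # exposure only
    "snowfield": {"iso": (50, 200)},                                 # sensitivity only
    "indoor": {"exposure_time": (1 / 60, 1.0), "iso": (400, 6400)},  # both
}

def required_params(scene_type):
    """The capture parameters step S103 needs to fetch for this scene type."""
    return sorted(CORRESPONDENCE_TABLE.get(scene_type, {}))

def params_in_range(scene_type, params):
    """Step S104: do the fetched parameters fall within the preset ranges?"""
    for name, (low, high) in CORRESPONDENCE_TABLE[scene_type].items():
        if not low <= params[name] <= high:
            return False
    return True

print(required_params("grass"))                    # ['exposure_time']
print(params_in_range("snowfield", {"iso": 100}))  # True
```

Driving step S103 from `required_params` keeps parameter acquisition minimal, as the passage above suggests: only the parameters the table names for the identified scene type are fetched at all.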
In step S105, if the picture-capture parameters fall within the preset parameter range, the scene type of the picture to be identified is confirmed to be the preset scene type.
If step S104 judges that the picture-capture parameters of the picture to be identified fall within the preset parameter range corresponding to the preset scene type, the scene classification produced by the trained CNN model is considered correct, and the scene type of the picture is therefore confirmed to be the preset scene type.
In the scene recognition method provided in Embodiment One, the picture to be identified is first classified with the convolutional neural network model trained in advance to obtain its scene type; the picture-capture parameters of the picture are then used to verify whether the model's classification is correct (in general, a camera uses different shooting parameters in different scenes to obtain a better shot, so the camera's shooting parameters — that is, the picture-capture parameters of the picture — can assist in judging the scene type). Only when the verification succeeds is the scene type of the picture confirmed to be the one identified by the convolutional neural network model. Because the scene recognition method of Embodiment One does not rely entirely on picture content, it can improve the accuracy of recognizing the scene type of a pseudo scene.
Embodiment two
Another scene recognition method provided in Embodiment Two of the present application is described below. Referring to Fig. 5, the method includes:
In step S501, a picture to be identified is obtained.
In step S502, scene classification is performed on the picture to be identified using the trained convolutional neural network model.
In this embodiment, steps S501-S502 are identical to steps S101-S102 of Embodiment One; for details, refer to the description of Embodiment One, which is not repeated here.
In step S503, if the convolutional neural network model identifies the scene type of the picture to be identified as the indoor scene type, the camera sensitivity and exposure time of the picture are obtained.
In this embodiment, a CNN model is trained in advance so that it can identify the indoor scene type. If the trained model identifies the scene type of the picture to be identified as the indoor scene type, the camera sensitivity and exposure time of the picture are obtained.
If the picture to be identified is one currently captured by the camera, the camera's current sensitivity and exposure time can be obtained; if not, the camera sensitivity and exposure time used when the picture was taken can be looked up in its attribute information.
In step S504, it is judged whether the camera sensitivity is greater than a default sensitivity and whether the exposure time is greater than a default exposure time.
In general, the light in an indoor scene is relatively dim, so when capturing a picture in an indoor scene the terminal device increases the camera sensitivity and the exposure time to raise the amount of incoming light and compensate for the dim ambient lighting. The camera sensitivity and exposure time can therefore be used to judge whether the scene type corresponding to the picture to be identified is the indoor scene type.
In step S505, if the camera sensitivity is greater than the preset sensitivity and the exposure time is greater than the preset exposure time, the scene type of the picture to be identified is confirmed to be the indoor scene type.
That is, if the camera sensitivity obtained in step S503 is greater than the preset sensitivity and the exposure time is greater than the preset exposure time, the trained CNN model's identification of the picture to be identified is considered correct, and the scene type of the picture to be identified is confirmed to be the indoor scene type.
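The confirmation rule of steps S504 and S505 can be sketched as a single predicate. The threshold values below are illustrative placeholders, since the embodiment leaves the preset sensitivity and preset exposure time unspecified.

```python
# Sketch of steps S504/S505: confirm the CNN's "indoor" decision only when
# both capture parameters exceed their presets. Threshold values are
# illustrative, not taken from the patent.

PRESET_SENSITIVITY = 400       # illustrative preset ISO
PRESET_EXPOSURE_TIME = 1 / 60  # illustrative preset exposure time (seconds)

def confirm_indoor(camera_sensitivity, exposure_time):
    return (camera_sensitivity > PRESET_SENSITIVITY
            and exposure_time > PRESET_EXPOSURE_TIME)
```

A bright outdoor shot typically has low ISO and a short exposure, so `confirm_indoor(100, 1 / 500)` would reject a pseudo-indoor classification.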
In addition, in this embodiment of the present application, only the camera sensitivity of the picture to be identified may be obtained, and whether the scene type of the picture to be identified is the indoor scene type is confirmed by judging whether that camera sensitivity is greater than the preset sensitivity; alternatively, only the exposure time of the picture to be identified may be obtained, and whether the scene type of the picture to be identified is the indoor scene type is confirmed by judging whether that exposure time is greater than the preset exposure time.
Embodiment two of the present application thus provides a recognition method aimed specifically at the indoor scene type: on the one hand, the trained CNN model identifies whether the scene type of the picture to be identified is the indoor scene type; on the other hand, the camera sensitivity and exposure time are used to verify whether the trained CNN model's scene classification of the picture to be identified is correct. The indoor scene recognition method provided by embodiment two therefore does not rely solely on picture content for scene recognition, and can improve the recognition accuracy for pseudo-indoor scenes.
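Assuming the trained CNN is available as a callable, this two-stage flow can be sketched as follows. The stub classifier and the `"uncertain"` fallback are assumptions for illustration; the embodiment leaves the handling of a disagreement open.

```python
# Sketch of embodiment two's two-stage recognition: a CNN proposes a scene
# type, and the capture parameters verify an "indoor" proposal. The stub
# classifier and the "uncertain" fallback are assumptions for illustration.

def recognize_scene(picture, cnn_classify, iso, exposure,
                    preset_iso=400, preset_exposure=1 / 60):
    scene = cnn_classify(picture)
    if scene != "indoor":
        return scene                      # only indoor results are re-checked
    if iso > preset_iso and exposure > preset_exposure:
        return "indoor"                   # parameters agree: confirm
    return "uncertain"                    # pseudo-indoor: parameters disagree

# A pseudo-indoor photo: the stub CNN says "indoor", but the picture was shot
# with low ISO and a short exposure, as in bright outdoor light.
result = recognize_scene("photo.jpg", lambda p: "indoor", iso=100, exposure=1 / 500)
```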
It should be understood that the sequence numbers of the steps in the above embodiments do not imply an execution order; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present application.
Embodiment three
Embodiment three of the present application provides a scene recognition apparatus. For ease of description, only the parts relevant to the present application are shown. As shown in Fig. 6, the apparatus 600 includes:
a picture obtaining module 601, configured to obtain a picture to be identified;
a scene classification module 602, configured to perform scene classification on the picture to be identified using a trained convolutional neural network model;
a parameter acquisition module 603, configured to obtain a picture collection parameter of the picture to be identified if the convolutional neural network model identifies the scene type of the picture to be identified as a preset scene type;
a parameter discrimination module 604, configured to judge whether the picture collection parameter meets a preset parameter range corresponding to the preset scene type;
a scene confirmation module 605, configured to confirm that the scene type of the picture to be identified is the preset scene type if the picture collection parameter meets the preset parameter range.
Optionally, the preset scene type is the indoor scene type. Correspondingly, the parameter acquisition module 603 is specifically configured to obtain the camera sensitivity of the picture to be identified, and the parameter discrimination module 604 is specifically configured to judge whether the camera sensitivity is greater than a preset sensitivity.
Optionally, the preset scene type is the indoor scene type. Correspondingly, the parameter acquisition module 603 is specifically configured to obtain the exposure time of the picture to be identified, and the parameter discrimination module 604 is specifically configured to judge whether the exposure time is greater than a preset exposure time.
Optionally, the preset scene type is the indoor scene type. Correspondingly, the parameter acquisition module 603 is specifically configured to obtain the camera sensitivity and exposure time of the picture to be identified, and the parameter discrimination module 604 is specifically configured to judge whether the camera sensitivity is greater than the preset sensitivity and whether the exposure time is greater than the preset exposure time.
Optionally, the parameter discrimination module 604 includes:
a searching unit, configured to search, according to a preset mapping table, for the preset parameter range corresponding to the preset scene type, where the mapping table includes the correspondence between each preset scene type and each preset parameter range, and each preset scene type corresponds to one preset parameter range;
a judging unit, configured to judge whether the picture collection parameter falls within the preset parameter range corresponding to the found preset scene type.
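The searching and judging units might be sketched with a plain dictionary as the mapping table; the scene type names and range values below are illustrative assumptions, not values from the patent.

```python
# Sketch of the searching/judging units: one preset parameter range per preset
# scene type, looked up from a preset mapping table. Entries are illustrative.

PRESET_PARAM_RANGES = {
    "indoor": {"min_iso": 400, "min_exposure": 1 / 60},
    # one entry per preset scene type
}

def param_in_range(scene_type, iso, exposure):
    """Judge whether the picture collection parameters fall within the preset
    parameter range found for the given preset scene type."""
    found = PRESET_PARAM_RANGES.get(scene_type)
    if found is None:
        return False
    return iso > found["min_iso"] and exposure > found["min_exposure"]
```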
Optionally, the convolutional neural network model is obtained by training with a training module, and the training module includes:
a sample acquisition unit, configured to obtain, in advance, sample pictures and the scene type corresponding to each sample picture;
a sample input unit, configured to input the sample pictures into an initial convolutional neural network model so that the initial convolutional neural network model performs scene classification on each sample picture;
an accuracy determining unit, configured to determine the classification accuracy of the initial convolutional neural network model according to the scene types of the sample pictures obtained in advance;
a parameter adjustment unit, configured to continually adjust the parameters of the current convolutional neural network model and use the parameter-adjusted model to continue performing scene classification on the sample pictures, until the classification accuracy of the parameter-adjusted convolutional neural network model is greater than a preset accuracy, at which point the current convolutional neural network model is determined to be the trained convolutional neural network model.
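The loop described by the parameter adjustment unit — adjust parameters, re-classify the samples, stop once accuracy beats a preset — can be illustrated with a deliberately tiny one-parameter "model" standing in for the CNN. Everything here is a toy assumption for illustration.

```python
# Toy illustration of the training loop: adjust the model's parameter,
# re-classify all sample pictures, and stop once classification accuracy
# exceeds the preset accuracy. A one-parameter threshold "model" stands in
# for a real convolutional neural network.

samples = [  # (brightness, labeled scene type) — illustrative sample pictures
    (0.2, "indoor"), (0.3, "indoor"), (0.8, "outdoor"), (0.9, "outdoor"),
]
PRESET_ACCURACY = 0.9

def classify(brightness, threshold):
    return "indoor" if brightness < threshold else "outdoor"

threshold, accuracy = 0.0, 0.0
while accuracy <= PRESET_ACCURACY:
    threshold += 0.1  # "adjust the parameters of the current model"
    correct = sum(classify(b, threshold) == label for b, label in samples)
    accuracy = correct / len(samples)
# threshold now plays the role of the trained model's parameters
```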
It should be noted that, since the information exchange and execution processes between the above apparatus/units are based on the same conception as the method embodiments of the present application, reference may be made to the method embodiment section for their specific functions and technical effects, which are not repeated here.
Embodiment four
Fig. 7 is a schematic diagram of the terminal device provided by embodiment four of the present application. As shown in Fig. 7, the terminal device 7 of this embodiment includes a processor 70, a memory 71, and a computer program 72 stored in the memory 71 and executable on the processor 70. When executing the computer program 72, the processor 70 implements the steps in each of the above method embodiments, such as steps S101 to S105 shown in Fig. 1; alternatively, when executing the computer program 72, the processor 70 implements the functions of the modules/units in each of the above apparatus embodiments, such as the functions of modules 601 to 605 shown in Fig. 6.
Illustratively, the computer program 72 may be divided into one or more modules/units, which are stored in the memory 71 and executed by the processor 70 to complete the present application. The one or more modules/units may be a series of computer program instruction segments capable of completing specific functions, the instruction segments being used to describe the execution process of the computer program 72 in the terminal device 7. For example, the computer program 72 may be divided into a picture obtaining module, a scene classification module, a parameter acquisition module, a parameter discrimination module and a scene confirmation module, whose specific functions are as follows:
obtaining a picture to be identified;
performing scene classification on the picture to be identified using a trained convolutional neural network model;
if the convolutional neural network model identifies the scene type of the picture to be identified as a preset scene type:
obtaining a picture collection parameter of the picture to be identified;
judging whether the picture collection parameter meets a preset parameter range corresponding to the preset scene type;
if the picture collection parameter meets the preset parameter range, confirming that the scene type of the picture to be identified is the preset scene type.
The terminal device 7 may be a computing device such as a smart phone, a tablet computer, a learning machine or an intelligent wearable device. The terminal device may include, but is not limited to, the processor 70 and the memory 71. Those skilled in the art will understand that Fig. 7 is merely an example of the terminal device 7 and does not constitute a limitation on it; the terminal device may include more or fewer components than illustrated, combine certain components, or use different components. For example, the terminal device may also include input/output devices, a network access device, a bus, and the like.
The processor 70 may be a central processing unit (Central Processing Unit, CPU), or another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 71 may be an internal storage unit of the terminal device 7, such as a hard disk or internal memory of the terminal device 7. The memory 71 may also be an external storage device of the terminal device 7, such as a plug-in hard disk, a smart media card (Smart Media Card, SMC), a secure digital (Secure Digital, SD) card or a flash card (Flash Card) equipped on the terminal device 7. Further, the memory 71 may include both the internal storage unit and an external storage device of the terminal device 7. The memory 71 is used to store the computer program and other programs and data required by the terminal device, and may also be used to temporarily store data that has been output or is about to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, the division into the above functional units and modules is merely illustrative. In practical applications, the above functions may be allocated to different functional units and modules as needed; that is, the internal structure of the apparatus may be divided into different functional units or modules to complete all or part of the functions described above. The functional units in the embodiments may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware or in the form of a software functional unit. In addition, the specific names of the functional units and modules are only for the convenience of distinguishing them from each other and are not intended to limit the protection scope of the present application. For the specific working processes of the units and modules in the above system, reference may be made to the corresponding processes in the foregoing method embodiments, which are not repeated here.
In the above embodiments, the description of each embodiment has its own emphasis. For parts not detailed or described in a certain embodiment, reference may be made to the relevant descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware, or in a combination of computer software and electronic hardware. Whether these functions are performed in hardware or software depends on the specific application and the design constraints of the technical solution. Skilled artisans may use different methods to implement the described functions for each specific application, but such implementations should not be considered beyond the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other ways. For example, the apparatus/terminal device embodiments described above are merely illustrative. For example, the division into modules or units is only a logical function division; in actual implementation there may be other division manners, for example multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the mutual couplings or direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, apparatuses or units, and may be electrical, mechanical or in other forms.
The units described as separate components may or may not be physically separated, and components displayed as units may or may not be physical units; they may be located in one place, or may be distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated module/unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, all or part of the processes in the above method embodiments of the present application may also be completed by instructing relevant hardware through a computer program. The computer program may be stored in a computer-readable storage medium, and when executed by a processor, the computer program can implement the steps of each of the above method embodiments. The computer program includes computer program code, which may be in the form of source code, object code, an executable file, certain intermediate forms, or the like. The computer-readable medium may include any entity or apparatus capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disc, a computer memory, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), an electric carrier signal, a telecommunication signal, a software distribution medium, and the like. It should be noted that the content contained in the computer-readable medium may be appropriately increased or decreased according to the requirements of legislation and patent practice in a jurisdiction; for example, in certain jurisdictions, according to legislation and patent practice, the computer-readable medium does not include electric carrier signals and telecommunication signals.
The above embodiments are only intended to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions recorded in the foregoing embodiments may still be modified, or some of the technical features therein may be equivalently replaced; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application, and should all be included within the protection scope of the present application.
Claims (10)
1. A scene recognition method, characterized by comprising:
obtaining a picture to be identified;
performing scene classification on the picture to be identified using a trained convolutional neural network model;
if the convolutional neural network model identifies the scene type of the picture to be identified as a preset scene type:
obtaining a picture collection parameter of the picture to be identified;
judging whether the picture collection parameter meets a preset parameter range corresponding to the preset scene type;
if the picture collection parameter meets the preset parameter range, confirming that the scene type of the picture to be identified is the preset scene type.
2. The scene recognition method according to claim 1, characterized in that the preset scene type is an indoor scene type.
3. The scene recognition method according to claim 2, characterized in that obtaining the picture collection parameter of the picture to be identified comprises:
obtaining the camera sensitivity of the picture to be identified;
and correspondingly, judging whether the picture collection parameter meets the preset parameter range corresponding to the preset scene type comprises:
judging whether the camera sensitivity is greater than a preset sensitivity.
4. The scene recognition method according to claim 2 or 3, characterized in that obtaining the picture collection parameter of the picture to be identified comprises:
obtaining the exposure time of the picture to be identified;
and correspondingly, judging whether the picture collection parameter meets the preset parameter range corresponding to the preset scene type comprises:
judging whether the exposure time is greater than a preset exposure time.
5. The scene recognition method according to claim 1, characterized in that judging whether the picture collection parameter meets the preset parameter range corresponding to the preset scene type comprises:
searching, according to a preset mapping table, for the preset parameter range corresponding to the preset scene type, where the mapping table includes the correspondence between each preset scene type and each preset parameter range, and each preset scene type corresponds to one preset parameter range;
judging whether the picture collection parameter falls within the preset parameter range corresponding to the found preset scene type.
6. The scene recognition method according to any one of claims 1 to 5, characterized in that the training process of the convolutional neural network model comprises:
obtaining, in advance, sample pictures and the scene type corresponding to each sample picture;
inputting the sample pictures into an initial convolutional neural network model so that the initial convolutional neural network model performs scene classification on each sample picture;
determining the classification accuracy of the initial convolutional neural network model according to the scene types of the sample pictures obtained in advance;
continually adjusting the parameters of the current convolutional neural network model and using the parameter-adjusted convolutional neural network model to continue performing scene classification on the sample pictures, until the classification accuracy of the parameter-adjusted convolutional neural network model is greater than a preset accuracy, and then determining the current convolutional neural network model to be the trained convolutional neural network model.
7. A scene recognition apparatus, characterized by comprising:
a picture obtaining module, configured to obtain a picture to be identified;
a scene classification module, configured to perform scene classification on the picture to be identified using a trained convolutional neural network model;
a parameter acquisition module, configured to obtain a picture collection parameter of the picture to be identified if the convolutional neural network model identifies the scene type of the picture to be identified as a preset scene type;
a parameter discrimination module, configured to judge whether the picture collection parameter meets a preset parameter range corresponding to the preset scene type;
a scene confirmation module, configured to confirm that the scene type of the picture to be identified is the preset scene type if the picture collection parameter meets the preset parameter range.
8. The scene recognition apparatus according to claim 7, characterized in that the preset scene type is an indoor scene type.
9. A terminal device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that the processor, when executing the computer program, implements the steps of the method according to any one of claims 1 to 6.
10. A computer-readable storage medium storing a computer program, characterized in that the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810947235.XA CN109101931A (en) | 2018-08-20 | 2018-08-20 | A kind of scene recognition method, scene Recognition device and terminal device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810947235.XA CN109101931A (en) | 2018-08-20 | 2018-08-20 | A kind of scene recognition method, scene Recognition device and terminal device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109101931A true CN109101931A (en) | 2018-12-28 |
Family
ID=64850432
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810947235.XA Pending CN109101931A (en) | 2018-08-20 | 2018-08-20 | A kind of scene recognition method, scene Recognition device and terminal device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109101931A (en) |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110503099A (en) * | 2019-07-23 | 2019-11-26 | 平安科技(深圳)有限公司 | Information identifying method and relevant device based on deep learning |
CN110971834A (en) * | 2019-12-09 | 2020-04-07 | 维沃移动通信有限公司 | Flash lamp control method and electronic equipment |
CN111131698A (en) * | 2019-12-23 | 2020-05-08 | RealMe重庆移动通信有限公司 | Image processing method and device, computer readable medium and electronic equipment |
CN111797854A (en) * | 2019-04-09 | 2020-10-20 | Oppo广东移动通信有限公司 | Scene model establishing method and device, storage medium and electronic equipment |
CN111797856A (en) * | 2019-04-09 | 2020-10-20 | Oppo广东移动通信有限公司 | Modeling method, modeling device, storage medium and electronic equipment |
CN111814812A (en) * | 2019-04-09 | 2020-10-23 | Oppo广东移动通信有限公司 | Modeling method, modeling device, storage medium, electronic device and scene recognition method |
WO2020238775A1 (en) * | 2019-05-28 | 2020-12-03 | 华为技术有限公司 | Scene recognition method, scene recognition device, and electronic apparatus |
CN112115325A (en) * | 2019-06-20 | 2020-12-22 | 北京地平线机器人技术研发有限公司 | Scene type determination method and training method and device of scene analysis model |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104092948A (en) * | 2014-07-29 | 2014-10-08 | 小米科技有限责任公司 | Method and device for processing image |
US9679226B1 (en) * | 2012-03-29 | 2017-06-13 | Google Inc. | Hierarchical conditional random field model for labeling and segmenting images |
CN106954051A (en) * | 2017-03-16 | 2017-07-14 | 广东欧珀移动通信有限公司 | A kind of image processing method and mobile terminal |
CN108174097A (en) * | 2017-12-29 | 2018-06-15 | 广东欧珀移动通信有限公司 | Picture shooting, acquisition parameters providing method and device |
CN108304821A (en) * | 2018-02-14 | 2018-07-20 | 广东欧珀移动通信有限公司 | Image-recognizing method and device, image acquiring method and equipment, computer equipment and non-volatile computer readable storage medium storing program for executing |
CN108319968A (en) * | 2017-12-27 | 2018-07-24 | 中国农业大学 | A kind of recognition methods of fruits and vegetables image classification and system based on Model Fusion |
- 2018-08-20 CN CN201810947235.XA patent/CN109101931A/en — active, Pending
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9679226B1 (en) * | 2012-03-29 | 2017-06-13 | Google Inc. | Hierarchical conditional random field model for labeling and segmenting images |
CN104092948A (en) * | 2014-07-29 | 2014-10-08 | 小米科技有限责任公司 | Method and device for processing image |
CN106954051A (en) * | 2017-03-16 | 2017-07-14 | 广东欧珀移动通信有限公司 | A kind of image processing method and mobile terminal |
CN108319968A (en) * | 2017-12-27 | 2018-07-24 | 中国农业大学 | A kind of recognition methods of fruits and vegetables image classification and system based on Model Fusion |
CN108174097A (en) * | 2017-12-29 | 2018-06-15 | 广东欧珀移动通信有限公司 | Picture shooting, acquisition parameters providing method and device |
CN108304821A (en) * | 2018-02-14 | 2018-07-20 | 广东欧珀移动通信有限公司 | Image-recognizing method and device, image acquiring method and equipment, computer equipment and non-volatile computer readable storage medium storing program for executing |
Non-Patent Citations (1)
Title |
---|
Reinhard Merz et al. (Germany): "International Photography Lighting and Exposure Tutorial" (《国际摄影用光与曝光教程》), 31 March 2013 *
Cited By (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111814812A (en) * | 2019-04-09 | 2020-10-23 | Oppo广东移动通信有限公司 | Modeling method, modeling device, storage medium, electronic device and scene recognition method |
CN111797854B (en) * | 2019-04-09 | 2023-12-15 | Oppo广东移动通信有限公司 | Scene model building method and device, storage medium and electronic equipment |
CN111797856B (en) * | 2019-04-09 | 2023-12-12 | Oppo广东移动通信有限公司 | Modeling method and device, storage medium and electronic equipment |
CN111797854A (en) * | 2019-04-09 | 2020-10-20 | Oppo广东移动通信有限公司 | Scene model establishing method and device, storage medium and electronic equipment |
CN111797856A (en) * | 2019-04-09 | 2020-10-20 | Oppo广东移动通信有限公司 | Modeling method, modeling device, storage medium and electronic equipment |
WO2020238775A1 (en) * | 2019-05-28 | 2020-12-03 | 华为技术有限公司 | Scene recognition method, scene recognition device, and electronic apparatus |
CN112115325A (en) * | 2019-06-20 | 2020-12-22 | 北京地平线机器人技术研发有限公司 | Scene type determination method and training method and device of scene analysis model |
CN112115325B (en) * | 2019-06-20 | 2024-05-10 | 北京地平线机器人技术研发有限公司 | Scene category determining method and scene analysis model training method and device |
CN110503099A (en) * | 2019-07-23 | 2019-11-26 | 平安科技(深圳)有限公司 | Information identifying method and relevant device based on deep learning |
CN110503099B (en) * | 2019-07-23 | 2023-06-20 | 平安科技(深圳)有限公司 | Information identification method based on deep learning and related equipment |
CN110971834B (en) * | 2019-12-09 | 2021-09-10 | 维沃移动通信有限公司 | Flash lamp control method and electronic equipment |
CN110971834A (en) * | 2019-12-09 | 2020-04-07 | 维沃移动通信有限公司 | Flash lamp control method and electronic equipment |
CN111131698B (en) * | 2019-12-23 | 2021-08-27 | RealMe重庆移动通信有限公司 | Image processing method and device, computer readable medium and electronic equipment |
CN111131698A (en) * | 2019-12-23 | 2020-05-08 | RealMe重庆移动通信有限公司 | Image processing method and device, computer readable medium and electronic equipment |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109101931A (en) | A kind of scene recognition method, scene Recognition device and terminal device | |
CN109086742A (en) | scene recognition method, scene recognition device and mobile terminal | |
CN108961157B (en) | Picture processing method, picture processing device and terminal equipment | |
CN105574910A (en) | Electronic Device and Method for Providing Filter in Electronic Device | |
CN110458360B (en) | Method, device, equipment and storage medium for predicting hot resources | |
CN109951627A (en) | Image processing method, device, storage medium and electronic equipment | |
CN108174096A (en) | Method, apparatus, terminal and the storage medium of acquisition parameters setting | |
CN109040603A (en) | High-dynamic-range image acquisition method, device and mobile terminal | |
CN111209970A (en) | Video classification method and device, storage medium and server | |
CN109118447A (en) | A kind of image processing method, picture processing unit and terminal device | |
CN109120862A (en) | High-dynamic-range image acquisition method, device and mobile terminal | |
CN108737739A (en) | A kind of preview screen acquisition method, preview screen harvester and electronic equipment | |
CN108961183A (en) | Image processing method, terminal device and computer readable storage medium | |
CN107871000B (en) | Audio playing method and device, storage medium and electronic equipment | |
CN109377502A (en) | A kind of image processing method, image processing apparatus and terminal device | |
CN108833781A (en) | Image preview method, apparatus, terminal and computer readable storage medium | |
CN104052911A (en) | Information processing method and electronic device | |
CN109474785A (en) | The focus of electronic device and electronic device tracks photographic method | |
CN112949172A (en) | Data processing method and device, machine readable medium and equipment | |
CN106777071B (en) | Method and device for acquiring reference information by image recognition | |
CN111984803A (en) | Multimedia resource processing method and device, computer equipment and storage medium | |
CN108197203A (en) | A kind of shop front head figure selection method, device, server and storage medium | |
CN115082291A (en) | Method for adjusting image brightness, computer program product, electronic device and medium | |
CN107682691B (en) | A kind of method, terminal and the computer readable storage medium of camera focus calibration | |
US20210150243A1 (en) | Efficient image sharing |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | Application publication date: 20181228 |
RJ01 | Rejection of invention patent application after publication | |