CN105791674A - Electronic device and focusing method - Google Patents

Electronic device and focusing method

Info

Publication number
CN105791674A
CN105791674A (application CN201610082815.8A; granted as CN105791674B)
Authority
CN
China
Prior art keywords
training
parameter
scene
focusing area
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610082815.8A
Other languages
Chinese (zh)
Other versions
CN105791674B (en)
Inventor
白天翔
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lenovo Beijing Ltd filed Critical Lenovo Beijing Ltd
Priority to CN201610082815.8A priority Critical patent/CN105791674B/en
Publication of CN105791674A publication Critical patent/CN105791674A/en
Application granted granted Critical
Publication of CN105791674B publication Critical patent/CN105791674B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/67 Focus control based on electronic image sensor signals

Abstract

The invention provides an electronic device and a focusing method. The focusing method includes: obtaining a preview image; identifying a shooting scene and an object in the preview image; obtaining, based on the shooting scene and the object, at least one of a set of first parameters and a set of second parameters, where each first parameter indicates the probability that a specific region of the preview image serves as the focusing area under the shooting scene, and each second parameter indicates the probability that the region where the object is located serves as the focusing area under the shooting scene; determining the focusing area in the preview image based on the at least one of the first parameters and the second parameters; and focusing based on the focusing area.

Description

Electronic device and focusing method
Technical field
The present invention relates to the field of image capture processing, and more particularly to an electronic device and a focusing method.
Background technology
Recently, to increase the degree of automation of electronic devices with a shooting function and to simplify user operation, automatic focusing methods have been gaining popularity. In one such automatic focusing method, a face is detected in the image to be captured, and the focusing area is set to the region where the face is located.
However, such a focusing method considers neither the shooting scene nor the user's habits, and offers only a single kind of focusing area; it cannot satisfy the user's diverse composition and shooting needs, and the user experience is poor.
Summary of the invention
In view of the above, the present invention provides an electronic device and a focusing method that can focus, based on the shooting scene, in a way that matches the user's habits, making focusing more intelligent and efficient, meeting personalized user needs, and improving the user experience.
According to an embodiment of the invention, a focusing method is provided, including: obtaining a preview image; identifying a shooting scene and an object from the preview image; obtaining, based on the shooting scene and the object, at least one of a set of first parameters and a set of second parameters, where each first parameter represents the probability that a specific region of the preview image serves as the focusing area under the shooting scene, and each second parameter represents the probability that the region where the object is located serves as the focusing area under the shooting scene; determining the focusing area in the preview image based on the at least one of the first parameters and the second parameters; and focusing based on the focusing area.
According to another embodiment of the invention, an electronic device is provided, including: an acquisition unit that obtains a preview image; a processing unit that identifies a shooting scene and an object from the preview image, obtains, based on the shooting scene and the object, at least one of a set of first parameters and a set of second parameters, where each first parameter represents the probability that a specific region of the preview image serves as the focusing area under the shooting scene and each second parameter represents the probability that the region where the object is located serves as the focusing area under the shooting scene, and determines the focusing area in the preview image based on the at least one of the first parameters and the second parameters; and a focusing unit that focuses based on the focusing area.
According to another embodiment of the invention, a focusing apparatus is provided, including: a first acquisition unit that obtains a preview image; a recognition unit that identifies a shooting scene and an object from the preview image; a second acquisition unit that obtains, based on the shooting scene and the object, at least one of a set of first parameters and a set of second parameters, where each first parameter represents the probability that a specific region of the preview image serves as the focusing area under the shooting scene and each second parameter represents the probability that the region where the object is located serves as the focusing area under the shooting scene; a determination unit that determines the focusing area in the preview image based on the at least one of the first parameters and the second parameters; and a focusing unit that focuses based on the focusing area.
According to another embodiment of the invention, a computer program product is provided, including a computer-readable storage medium storing computer program instructions that, when executed by a computer, perform the following steps: obtaining a preview image; identifying a shooting scene and an object from the preview image; obtaining, based on the shooting scene and the object, at least one of a set of first parameters and a set of second parameters, where each first parameter represents the probability that a specific region of the preview image serves as the focusing area under the shooting scene and each second parameter represents the probability that the region where the object is located serves as the focusing area under the shooting scene; determining the focusing area in the preview image based on the at least one of the first parameters and the second parameters; and focusing based on the focusing area.
In the electronic device and focusing method of the embodiments of the present invention, a shooting scene and an object are identified from the preview image, and the focusing area is determined based on them. Focusing can therefore adapt to different shooting scenes in a way that matches the user's habits, making it more intelligent and efficient, meeting personalized user needs, and improving the user experience.
Brief description of the drawings
Fig. 1 is a flowchart schematically illustrating the main steps of a focusing method according to an embodiment of the present invention;
Fig. 2 schematically shows an example representation of the set of first parameters and the set of second parameters in the focusing method of an embodiment of the present invention;
Fig. 3 is a block diagram schematically showing the main configuration of an electronic device according to an embodiment of the present invention; and
Fig. 4 is a block diagram schematically showing the main configuration of a focusing apparatus according to an embodiment of the present invention.
Detailed description of the embodiments
Embodiments of the present invention are described in detail below with reference to the accompanying drawings.
First, a focusing method according to an embodiment of the present invention is described. The focusing method of the embodiment of the present invention applies to electronic devices with an image capture unit, such as cameras, mobile phones, and tablet computers.
Hereinafter, the focusing method of the embodiment of the present invention will be described in detail with reference to Fig. 1.
As shown in Fig. 1, first, in step S110, a preview image is obtained. Specifically, when the electronic device is used for shooting, a frame of image data to be captured can be collected by the image capture unit and used as the preview image.
Next, in step S120, a shooting scene and an object are identified from the preview image.
Specifically, various image recognition methods known in the art or developed in the future can be used to identify whether the preview image corresponds to a preset shooting scene, and whether a preset object is present in the scene. As examples, the shooting scene may include at least one of landscape, portrait, night scene, etc., and the object may include at least one of a face, a vehicle, etc.
After the shooting scene and the object are identified from the preview image, in step S130, at least one of the set of first parameters and the set of second parameters is obtained based on the shooting scene and the object.
Specifically, the set of first parameters includes one or more first parameters, and the set of second parameters includes one or more second parameters. Each first parameter represents the probability that a specific region of the preview image serves as the focusing area under the shooting scene. The specific regions can be chosen as needed by those skilled in the art; for example, a specific region can be at least one of the top-left region, bottom-left region, top-right region, bottom-right region, middle region, etc. of the image. Each second parameter represents the probability that the region where the object is located serves as the focusing area under the shooting scene.
In one embodiment, the set of first parameters and the set of second parameters can be obtained in advance. In a first embodiment, at least one of the two sets is set manually by the user. In a second embodiment, at least one of the two sets is obtained by learning based on the user's preferences.
Specifically, to obtain the set of first parameters, first, multiple training images can be obtained, along with the training focusing areas corresponding to each training image.
In one embodiment, the training images are images previously captured by the user. In another embodiment, the training images may be obtained in other ways, for example from image statistics produced by the camera ISP (image signal processor), or from other storage media or from the network.
A training focusing area is the region that was focused in a training image. In one embodiment, the training focusing area can be analyzed automatically from the training image; for example, when the training image was previously captured by the user, the training focusing area can be extracted automatically from the captured image. In another embodiment, the training focusing area can be labeled manually by the user; for example, when the training image comes from another storage medium or from the network, the user can be instructed to mark the desired focusing area on the acquired training image.
In addition, the multiple training focusing areas include a training focusing area corresponding to the specific region. For example, if the specific region is the top-left region of the preview image, the training focusing areas include the top-left regions of the training images, and may also include at least one of the bottom-left region, middle region, top-right region, bottom-right region, etc.
Next, for each training image, the training scene corresponding to that image is identified from a training scene set. The training scene set can include multiple preset scenes, for example landscape, portrait, night scene, and food. Similarly to the above, various image recognition methods known in the art or developed in the future can be used to identify the training scene of each training image.
In addition, the training scene set includes the training scene corresponding to the shooting scene. For example, if the shooting scene of the preview image is landscape, the training scene set includes landscape, and may also include at least one of portrait, night scene, food, etc.
After the training scene and the corresponding training focusing area of each training image are identified as above, the set of first parameters, which represents the association between each training scene and each training focusing area, can be calculated from them.
More specifically, for example, the number of occurrences of each training focusing area under each training scene can first be counted. For example, under the training scene "landscape", the training focusing area was the top-left region s1 times, the bottom-left region s2 times, the middle region s3 times, the top-right region s4 times, and the bottom-right region s5 times; and so on.
Next, under each training scene and for each training focusing area, the probability that the training scene is associated with the training focusing area is determined from the ratio between the number of occurrences of that training focusing area and the total number of occurrences of all training focusing areas, thereby forming the set of first parameters.
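The counting-and-normalization step just described can be captured in a few lines. The following Python sketch is an illustration only, not code from the patent; the function name estimate_params and the toy data are assumptions:

```python
from collections import Counter, defaultdict

def estimate_params(samples):
    """Estimate P(choice | scene) from (scene, choice) training pairs.

    For the first-parameter set, `choice` is a training focusing area
    (e.g. "top-left"); the same logic works for any categorical choice.
    """
    counts = defaultdict(Counter)
    for scene, choice in samples:
        counts[scene][choice] += 1           # occurrences of this choice under this scene
    params = {}
    for scene, choice_counts in counts.items():
        total = sum(choice_counts.values())  # total occurrences under this scene
        params[scene] = {c: n / total for c, n in choice_counts.items()}
    return params

# Toy data: under "landscape" the top-left area was focused once and the
# middle area twice (the s1, s2, ... counts of the description).
first_params = estimate_params([
    ("landscape", "top-left"), ("landscape", "middle"),
    ("landscape", "middle"), ("night", "middle"),
])
# first_params["landscape"]["middle"] == 2/3
```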
Of course, the above is merely an example; those skilled in the art can calculate the set of first parameters representing the association between training scenes and training focusing areas in various other ways.
On the other hand, to obtain the set of second parameters, first, through a process similar to the above, multiple training images and the training focusing areas corresponding to each of them can be obtained.
Next, through a process similar to the above, the training scene corresponding to each training image can be identified from the training scene set. Again, the training scene set includes the training scene corresponding to the shooting scene.
In addition, for each training image, the training object located in the training focusing area corresponding to that image can be identified from a training object set. Specifically, various image recognition algorithms known in the art or developed in the future can identify the training object at the training focusing area, for example a face or a vehicle. Likewise, the training object set includes the training object corresponding to the identified object. For example, if the preview image contains a face object, the training object set includes faces, and may also include at least one of vehicles, pets, etc.
After the training scene of each training image and the training object located at the corresponding training focusing area are identified as above, the set of second parameters, which represents the association between each training scene and each training object, can be calculated from them.
More specifically, for example, the number of occurrences of each training object under each training scene can first be counted. For example, under the training scene "landscape", the training object was a face v1 times, a vehicle v2 times, and a pet v3 times; and so on.
Next, under each training scene and for each training object, the probability that the training scene is associated with the training object is determined from the ratio between the number of occurrences of that training object and the total number of occurrences of all training objects, thereby forming the set of second parameters.
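The same sketch applies unchanged to the second-parameter set, with (scene, object) pairs in place of (scene, region) pairs (again a purely illustrative toy example):

```python
# The v1, v2, ... counts of the description, as (scene, object) pairs.
second_params = estimate_params([
    ("landscape", "face"), ("landscape", "vehicle"),
    ("portrait", "face"), ("portrait", "face"),
])
# second_params["portrait"]["face"] == 1.0, i.e. in this toy data the
# focusing area always contained a face under the "portrait" scene.
```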
Of course, the above is merely an example; those skilled in the art can calculate the set of second parameters representing the association between training scenes and training objects in various other ways.
Fig. 2 schematically shows an example of the set of first parameters and the set of second parameters. As shown in Fig. 2, the probabilities of the different focusing objects and different focusing areas under each scene are reflected in the form of a probability map. Specifically, under scene n, the probability that the user selects focusing subject t is p_t^(n), and the probability that the user selects focusing area m is p_m^(n).
The above describes exemplary ways of obtaining the first parameters and the second parameters.
Thereafter, the focusing method proceeds to step S140. In step S140, the focusing area in the preview image is determined based on the at least one of the first parameters and the second parameters.
Specifically, the focusing area can be determined from the at least one of the first parameters and the second parameters by a preset algorithm.
In a first embodiment, when only the first parameters or only the second parameters are used, the region corresponding to the maximum value in the set of first parameters or the set of second parameters under the shooting scene is determined as the focusing area.
In a second embodiment, when both the first parameters and the second parameters are used, the probability that each region of the preview image is chosen as the focusing area can be calculated by the expression p_o × p_u or the expression p_o + p_u, where p_o is the probability reflecting the user's preference for different objects and p_u is the probability reflecting the user's preference for different regions. The region corresponding to the maximum of the calculated probabilities is then determined as the focusing area.
Of course, the above expressions are merely illustrative; those skilled in the art can calculate the probability that each region of the preview image is chosen as the focusing area using various other expressions, as in the sketch below.
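For concreteness, step S140 under the two expressions above might be sketched as follows. Only the p_o × p_u and p_o + p_u combinations come from the description; the function name, the objects_in_region mapping, and the combine switch are hypothetical:

```python
def choose_focus_region(scene, regions, objects_in_region,
                        first_params, second_params, combine="product"):
    """Pick the region maximizing p_u (region preference) combined with
    p_o (object preference), per the p_o*p_u / p_o+p_u expressions above."""
    best_region, best_score = None, float("-inf")
    for region in regions:
        p_u = first_params.get(scene, {}).get(region, 0.0)
        obj = objects_in_region.get(region)   # object detected in this region, if any
        p_o = second_params.get(scene, {}).get(obj, 0.0) if obj else 0.0
        score = p_o * p_u if combine == "product" else p_o + p_u
        if score > best_score:
            best_region, best_score = region, score
    return best_region

# e.g. with a face detected in the middle of a landscape shot:
region = choose_focus_region("landscape", ["top-left", "middle"],
                             {"middle": "face"}, first_params, second_params)
```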
After the focusing area is determined, the focusing method proceeds to step S150, and focusing is performed based on the focusing area.
The focusing method of the embodiment of the present invention has been described above with reference to Figs. 1 and 2.
In the focusing method of the embodiment of the present invention, a shooting scene and an object are identified from the preview image, and the focusing area is determined based on them. Focusing can therefore adapt to different shooting scenes in a way that matches the user's habits, making it more intelligent and efficient, meeting personalized user needs, and improving the user experience.
Optionally, in one embodiment, before the shooting scene and the object are identified, features that matter for recognition can be extracted from the preview image, for example edge features, histograms of gradients, gray-scale statistics, color-channel statistics, corner features, or any combination thereof. The shooting scene and the object are then identified from the preview image based on the extracted features; this process is known to those skilled in the art and is not detailed here. Likewise, before the training scenes and training objects are recognized, the same features can be extracted from the training images, and recognition can be performed on the extracted features.
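As a rough illustration of such a feature extractor (the patent names the feature types but prescribes no implementation), the following sketch uses OpenCV and NumPy, which are an assumption here, to compute one crude feature per named category:

```python
import cv2
import numpy as np

def extract_features(bgr):
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200)                       # edge feature (edge density below)
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    grad_hist, _ = np.histogram(cv2.magnitude(gx, gy), bins=16)  # histogram of gradients
    gray_stats = [gray.mean(), gray.std()]                  # gray-scale statistics
    color_stats = [bgr[..., c].mean() for c in range(3)]    # per-channel color statistics
    corners = cv2.goodFeaturesToTrack(gray, 100, 0.01, 10)  # corner features
    n_corners = 0 if corners is None else len(corners)
    return np.hstack([edges.mean(), grad_hist, gray_stats,
                      color_stats, n_corners]).astype(np.float32)
```

A compact vector of this kind would then feed whatever scene/object classifier the implementation uses.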
Thus, the focusing method of this embodiment of the present invention not only reduces redundancy but also speeds up the computation, improving the user experience.
Optionally, in another embodiment, the set of first parameters and the set of second parameters are not only trained offline but can also be updated in real time as the user operates the device, as sketched below. Specifically, in this embodiment, an input operation specifying a focusing area in the preview image can be received, and the specified focusing area is determined as the focusing area of the preview image; that is, a focusing area specified by the user takes precedence over the automatically determined one. Then, based on the specified focusing area, at least one of the set of first parameters and the set of second parameters is updated: the current preview image and the specified focusing area serve as a new training image and a new training focusing area, and at least one of the two parameter sets is recalculated in the manner described above.
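A minimal sketch of this online update, assuming the raw occurrence counts from training are retained alongside the probabilities (an implementation choice the patent does not specify):

```python
def update_params(counts, params, scene, chosen_region):
    """Fold the user's manual choice in as one more training sample
    and re-normalize the affected scene's distribution."""
    counts[scene][chosen_region] += 1
    total = sum(counts[scene].values())
    params[scene] = {r: n / total for r, n in counts[scene].items()}

# e.g. the user tapped the top-left region while shooting a landscape:
# update_params(first_counts, first_params, "landscape", "top-left")
```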
Thus, the focusing method of this embodiment keeps learning the user's preferences, and even when those preferences change it updates accordingly, so that the focusing area matches the user's latest habits.
Optionally, in another embodiment, so that autofocus works for different users of the same electronic device, the method of the embodiment of the present invention includes parameter information for different users. That is, the set of first parameters and the set of second parameters belong to first user parameter information among multiple pieces of user parameter information.
In this case, the focusing method can receive user information corresponding to a first user, for example identification information such as a user ID. Based on the user information, the first user parameter information is determined from the multiple pieces of user parameter information, and the at least one of the first parameters and the second parameters is then determined from it, for example as sketched below.
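One way to organize the per-user parameter information, reusing the names from the earlier sketches; the dictionary layout and the user IDs are illustrative assumptions:

```python
# Parameter sets keyed by a user identifier, one entry per registered user.
user_params = {
    "user-001": {"first": first_params, "second": second_params},
    # ... further users' learned preferences
}

def params_for(user_id):
    profile = user_params[user_id]   # select this user's learned preferences
    return profile["first"], profile["second"]
```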
Thus, even for the same preview image under the same scene, different focusing can be realized based on different user parameter information, so that the electronic device can be shared by several people while the focusing area matches each user's own habits.
The focusing method of the embodiments of the present invention, including its optional variants, has been described above with reference to Figs. 1 and 2.
Next, an electronic device according to an embodiment of the present invention is described with reference to Fig. 3. The electronic device of the embodiment of the present invention is a device with an image capture unit, such as a camera, a mobile phone, or a tablet computer.
As shown in Fig. 3, the electronic device 300 of the embodiment of the present invention includes an acquisition unit 310, a processing unit 320, and a focusing unit 330.
The acquisition unit 310 obtains a preview image. The processing unit 320 identifies a shooting scene and an object from the preview image; obtains, based on the shooting scene and the object, at least one of a set of first parameters and a set of second parameters, where each first parameter represents the probability that a specific region of the preview image serves as the focusing area under the shooting scene and each second parameter represents the probability that the region where the object is located serves as the focusing area under the shooting scene; and determines the focusing area in the preview image based on the at least one of the first parameters and the second parameters. The focusing unit 330 focuses based on the focusing area.
In one embodiment, the processing unit is preconfigured to: obtain multiple training images and the training focusing areas corresponding to each training image, where the training focusing areas include a training focusing area corresponding to the specific region; for each training image, identify the training scene corresponding to that image from a training scene set, where the training scene set includes the training scene corresponding to the shooting scene; and, based on the training scene identified for each training image and the corresponding training focusing area, calculate the set of first parameters representing the association between each training scene and each training focusing area.
The processing unit is also preconfigured to: obtain multiple training images and the training focusing areas corresponding to them; for each training image, identify the training scene corresponding to that image from the training scene set, which includes the training scene corresponding to the shooting scene; for each training image, identify from a training object set the training object located in the corresponding training focusing area, where the training object set includes the training object corresponding to the identified object; and, based on the training scene identified for each training image and the training object located at the corresponding training focusing area, calculate the set of second parameters representing the association between each training scene and each training object.
In another embodiment, the processing unit is further preconfigured to: count the number of occurrences of each training focusing area under each training scene; and, under each training scene and for each training focusing area, determine the probability that the training scene is associated with the training focusing area from the ratio between the number of occurrences of that training focusing area and the total number of occurrences of all training focusing areas, to form the set of first parameters. The processing unit is likewise preconfigured to: count the number of occurrences of each training object under each training scene; and, under each training scene and for each training object, determine the probability that the training scene is associated with the training object from the ratio between the number of occurrences of that training object and the total number of occurrences of all training objects, to form the set of second parameters.
In another embodiment, the processing unit is further configured to extract features from the preview image, and to identify the shooting scene and the object from the preview image based on the extracted features.
In another embodiment, the electronic device further includes an input unit that receives an input operation specifying a focusing area in the preview image. The processing unit is further configured to determine the specified focusing area as the focusing area of the preview image, and to update at least one of the set of first parameters and the set of second parameters based on the specified focusing area.
In another embodiment, the set of first parameters and the set of second parameters belong to first user parameter information among multiple pieces of user parameter information, and the electronic device further includes an input unit that receives user information corresponding to a first user. The processing unit is further configured to determine the first user parameter information from the multiple pieces of user parameter information based on the user information, and to determine the at least one of the first parameters and the second parameters based on the first user parameter information and the user information.
The configuration and operation of each unit of the electronic device of the embodiment of the present invention have been described in detail in the focusing method above with reference to Figs. 1 and 2, and are not repeated here.
In the electronic device of the embodiment of the present invention, a shooting scene and an object are identified from the preview image, and the focusing area is determined based on them. Focusing can therefore adapt to different shooting scenes in a way that matches the user's habits, making it more intelligent and efficient, meeting personalized user needs, and improving the user experience.
Next, a focusing apparatus according to an embodiment of the present invention is described with reference to Fig. 4. The focusing apparatus of the embodiment of the present invention applies to electronic devices with an image capture unit, such as cameras, mobile phones, and tablet computers.
As shown in Fig. 4, the focusing apparatus 400 of the embodiment of the present invention includes a first acquisition unit 410, a recognition unit 420, a second acquisition unit 430, a determination unit 440, and a focusing unit 450.
The first acquisition unit 410 obtains a preview image. The recognition unit 420 identifies a shooting scene and an object from the preview image. The second acquisition unit 430 obtains, based on the shooting scene and the object, at least one of a set of first parameters and a set of second parameters, where each first parameter represents the probability that a specific region of the preview image serves as the focusing area under the shooting scene and each second parameter represents the probability that the region where the object is located serves as the focusing area under the shooting scene. The determination unit 440 determines the focusing area in the preview image based on the at least one of the first parameters and the second parameters. The focusing unit 450 focuses based on the focusing area.
In one embodiment, the focusing apparatus 400 further includes: a third acquisition unit that obtains multiple training images and the training focusing areas corresponding to each training image, where the training focusing areas include a training focusing area corresponding to the specific region; a second recognition unit that, for each training image, identifies the training scene corresponding to that image from a training scene set, where the training scene set includes the training scene corresponding to the shooting scene; and a first calculation unit that, based on the training scene identified for each training image and the corresponding training focusing area, calculates the set of first parameters representing the association between each training scene and each training focusing area.
The focusing apparatus 400 further includes: a third acquisition unit that obtains multiple training images and the training focusing areas corresponding to them; a second recognition unit that, for each training image, identifies the training scene corresponding to that image from the training scene set, which includes the training scene corresponding to the shooting scene; a third recognition unit that, for each training image, identifies from a training object set the training object located in the corresponding training focusing area, where the training object set includes the training object corresponding to the identified object; and a second calculation unit that, based on the training scene identified for each training image and the training object located at the corresponding training focusing area, calculates the set of second parameters representing the association between each training scene and each training object.
In another embodiment, the first calculation unit includes: a first statistics unit that counts the number of occurrences of each training focusing area under each training scene; and a first calculation subunit that, under each training scene and for each training focusing area, determines the probability that the training scene is associated with the training focusing area from the ratio between the number of occurrences of that training focusing area and the total number of occurrences of all training focusing areas, to form the set of first parameters.
The second calculation unit includes: a second statistics unit that counts the number of occurrences of each training object under each training scene; and a second calculation subunit that, under each training scene and for each training object, determines the probability that the training scene is associated with the training object from the ratio between the number of occurrences of that training object and the total number of occurrences of all training objects, to form the set of second parameters.
In another embodiment, the focusing apparatus 400 further includes: a feature extraction unit that extracts features from the preview image; and a scene and object recognition unit that identifies the shooting scene and the object from the preview image based on the extracted features.
In another embodiment, the focusing apparatus 400 further includes: a first receiving unit that receives an input operation specifying a focusing area in the preview image; a designation unit that determines the specified focusing area as the focusing area of the preview image; and an update unit that updates at least one of the set of first parameters and the set of second parameters based on the specified focusing area.
In another embodiment, the set of first parameters and the set of second parameters belong to first user parameter information among multiple pieces of user parameter information, and the focusing apparatus 400 further includes a second receiving unit that receives user information corresponding to a first user. The second acquisition unit includes: a parameter information determination unit that determines the first user parameter information from the multiple pieces of user parameter information based on the user information; and a parameter determination unit that determines the at least one of the first parameters and the second parameters based on the first user parameter information and the user information.
The configuration and operation of each unit of the focusing apparatus of the embodiment of the present invention have been described in detail in the focusing method above with reference to Figs. 1 and 2, and are not repeated here.
In the focusing apparatus of the embodiment of the present invention, a shooting scene and an object are identified from the preview image, and the focusing area is determined based on them. Focusing can therefore adapt to different shooting scenes in a way that matches the user's habits, making it more intelligent and efficient, meeting personalized user needs, and improving the user experience.
The focusing method, focusing apparatus, and electronic device according to the embodiments of the present invention have been described above with reference to Figs. 1 through 4.
It should be noted that, in this specification, the terms "include", "comprise", and any variants thereof are intended to be non-exclusive, so that a process, method, article, or device that includes a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to the process, method, article, or device. In the absence of further limitation, an element qualified by the statement "including a ..." does not exclude the presence of other identical elements in the process, method, article, or device that includes it.
Furthermore, it should be noted that, in this specification, expressions like "first ... unit" and "second ... unit" are used only for convenience of description and do not mean that the units must be implemented as two or more physically separate units. In fact, as needed, the units may all be implemented as a single unit or may be implemented as multiple units.
Finally, it should also be noted that the above series of processes include not only processes performed in the temporal order described here but also processes performed in parallel or separately rather than in chronological order.
Through the above description of the embodiments, those skilled in the art can clearly understand that the present invention can be implemented as software plus the necessary hardware platform, or of course entirely in hardware. Based on this understanding, the contribution of the technical solution of the invention over the background art can be embodied, wholly or in part, in the form of a software product. The computer software product can be stored in a storage medium, such as ROM/RAM, a magnetic disk, or an optical disc, and includes instructions that cause a computing device (a personal computer, a server, a network device, or the like) to perform the methods described in the embodiments of the present invention or in parts of them.
In the embodiments of the present invention, units/modules may be implemented in software so as to be executed by various types of processors. For example, an identified executable code module may include one or more physical or logical blocks of computer instructions, which may, for instance, be built as objects, procedures, or functions. Nevertheless, the executable code of an identified module need not be physically located together; it may include different instructions stored at different locations which, when logically combined, constitute the unit/module and achieve its stated purpose.
When a unit/module can be implemented in software, considering the level of existing hardware technology and leaving cost aside, those skilled in the art can instead build corresponding hardware circuitry to realize the corresponding functions. Such hardware circuitry includes conventional very-large-scale integration (VLSI) circuits or gate arrays and existing semiconductor devices such as logic chips and transistors, or other discrete components. A module may also be implemented with programmable hardware devices, such as field-programmable gate arrays, programmable logic arrays, and programmable logic devices.
The present invention has been described in detail above. Specific examples are used herein to explain its principles and embodiments, and the description of the above embodiments is only meant to help understand the method of the present invention and its core idea. Meanwhile, for those of ordinary skill in the art, both the specific embodiments and the scope of application may change in light of the idea of the present invention. In summary, the contents of this specification should not be construed as limiting the present invention.

Claims (12)

1. A focusing method, comprising:
obtaining a preview image;
identifying a shooting scene and an object from the preview image;
obtaining, based on the shooting scene and the object, at least one of a set of first parameters and a set of second parameters, wherein each first parameter represents the probability that a specific region of the preview image serves as the focusing area under the shooting scene, and each second parameter represents the probability that the region where the object is located serves as the focusing area under the shooting scene;
determining the focusing area in the preview image based on the at least one of the first parameters and the second parameters; and
focusing based on the focusing area.
2. The focusing method of claim 1, wherein
the set of first parameters is obtained by:
obtaining multiple training images and multiple training focusing areas corresponding to each training image, the multiple training focusing areas including a training focusing area corresponding to the specific region;
for each training image, identifying the training scene corresponding to the training image from a training scene set, the training scene set including the training scene corresponding to the shooting scene; and
based on the training scene identified for each training image and the training focusing area corresponding to the training image, calculating the set of first parameters representing the association between each training scene and each training focusing area;
and wherein the set of second parameters is obtained by:
obtaining multiple training images and the multiple training focusing areas corresponding to them;
for each training image, identifying the training scene corresponding to the training image from the training scene set, the training scene set including the training scene corresponding to the shooting scene;
for each training image, identifying from a training object set the training object located in the training focusing area corresponding to the training image, the training object set including the training object corresponding to the object; and
based on the training scene identified for each training image and the training object located at the corresponding training focusing area, calculating the set of second parameters representing the association between each training scene and each training object.
3. The focusing method of claim 2, wherein
calculating the set of first parameters comprises:
counting the number of occurrences of each training focusing area under each training scene; and
under each training scene, for each training focusing area, determining the probability that the training scene is associated with the training focusing area based on the ratio between the number of occurrences of the training focusing area and the total number of occurrences of all training focusing areas, to form the set of first parameters;
and calculating the set of second parameters comprises:
counting the number of occurrences of each training object under each training scene; and
under each training scene, for each training object, determining the probability that the training scene is associated with the training object based on the ratio between the number of occurrences of the training object and the total number of occurrences of all training objects, to form the set of second parameters.
4. The focusing method of claim 1, further comprising:
extracting features from the preview image; and
identifying the shooting scene and the object from the preview image based on the extracted features.
5. The focusing method of claim 1, further comprising:
receiving an input operation specifying a focusing area in the preview image;
determining the specified focusing area as the focusing area of the preview image; and
updating at least one of the set of first parameters and the set of second parameters based on the specified focusing area.
6. The focusing method of claim 1, wherein the set of first parameters and the set of second parameters belong to first user parameter information among multiple pieces of user parameter information, and the method further comprises:
receiving user information corresponding to a first user;
wherein determining the focusing area in the preview image based on the at least one of the first parameters and the second parameters comprises:
determining the first user parameter information from the multiple pieces of user parameter information based on the user information; and
determining the focusing area in the preview image based on the at least one of the first parameters and the second parameters and the user information.
7. An electronic device, comprising:
an acquisition unit that obtains a preview image;
a processing unit that identifies a shooting scene and an object from the preview image; obtains, based on the shooting scene and the object, at least one of a set of first parameters and a set of second parameters, wherein each first parameter represents the probability that a specific region of the preview image serves as the focusing area under the shooting scene, and each second parameter represents the probability that the region where the object is located serves as the focusing area under the shooting scene; and determines the focusing area in the preview image based on the at least one of the first parameters and the second parameters; and
a focusing unit that focuses based on the focusing area.
8. The electronic device of claim 7,
wherein the processing unit is preconfigured to:
obtain multiple training images and multiple training focusing areas corresponding to each training image, the multiple training focusing areas including a training focusing area corresponding to the specific region;
for each training image, identify the training scene corresponding to the training image from a training scene set, the training scene set including the training scene corresponding to the shooting scene; and
based on the training scene identified for each training image and the training focusing area corresponding to the training image, calculate the set of first parameters representing the association between each training scene and each training focusing area;
and the processing unit is further preconfigured to:
obtain multiple training images and the multiple training focusing areas corresponding to them;
for each training image, identify the training scene corresponding to the training image from the training scene set, the training scene set including the training scene corresponding to the shooting scene;
for each training image, identify from a training object set the training object located in the training focusing area corresponding to the training image, the training object set including the training object corresponding to the object; and
based on the training scene identified for each training image and the training object located at the corresponding training focusing area, calculate the set of second parameters representing the association between each training scene and each training object.
9. The electronic device of claim 8, wherein
the processing unit is further preconfigured to:
count the number of occurrences of each training focusing area under each training scene; and
under each training scene, for each training focusing area, determine the probability that the training scene is associated with the training focusing area based on the ratio between the number of occurrences of the training focusing area and the total number of occurrences of all training focusing areas, to form the set of first parameters;
and the processing unit is further preconfigured to:
count the number of occurrences of each training object under each training scene; and
under each training scene, for each training object, determine the probability that the training scene is associated with the training object based on the ratio between the number of occurrences of the training object and the total number of occurrences of all training objects, to form the set of second parameters.
10. The electronic device of claim 7, wherein the processing unit is further configured to: extract features from the preview image; and identify the shooting scene and the object from the preview image based on the extracted features.
11. The electronic device of claim 7, further comprising:
an input unit that receives an input operation specifying a focusing area in the preview image;
wherein the processing unit is further configured to:
determine the specified focusing area as the focusing area of the preview image; and
update at least one of the set of first parameters and the set of second parameters based on the specified focusing area.
12. The electronic device of claim 7, wherein the set of first parameters and the set of second parameters belong to first user parameter information among multiple pieces of user parameter information, and the electronic device further comprises:
an input unit that receives user information corresponding to a first user;
wherein the processing unit is further configured to:
determine the first user parameter information from the multiple pieces of user parameter information based on the user information; and
determine the at least one of the first parameters and the second parameters based on the first user parameter information and the user information.
CN201610082815.8A 2016-02-05 2016-02-05 Electronic equipment and focusing method Active CN105791674B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610082815.8A CN105791674B (en) 2016-02-05 2016-02-05 Electronic equipment and focusing method

Publications (2)

Publication Number Publication Date
CN105791674A 2016-07-20
CN105791674B CN105791674B (en) 2019-06-25

Family

ID=56402700

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610082815.8A Active CN105791674B (en) 2016-02-05 2016-02-05 Electronic equipment and focusing method

Country Status (1)

Country Link
CN (1) CN105791674B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103905729A (en) * 2007-05-18 2014-07-02 卡西欧计算机株式会社 Imaging device and program thereof
US20130188070A1 (en) * 2012-01-19 2013-07-25 Electronics And Telecommunications Research Institute Apparatus and method for acquiring face image using multiple cameras so as to identify human located at remote site
CN103491299A (en) * 2013-09-17 2014-01-01 宇龙计算机通信科技(深圳)有限公司 Photographic processing method and device
CN104092936A (en) * 2014-06-12 2014-10-08 小米科技有限责任公司 Automatic focusing method and apparatus
CN105120153A (en) * 2015-08-20 2015-12-02 广东欧珀移动通信有限公司 Image photographing method and device

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108282608A (en) * 2017-12-26 2018-07-13 努比亚技术有限公司 Multizone focusing method, mobile terminal and computer readable storage medium
CN109963072A (en) * 2017-12-26 2019-07-02 广东欧珀移动通信有限公司 Focusing method, device, storage medium and electronic equipment
CN108282608B (en) * 2017-12-26 2020-10-09 努比亚技术有限公司 Multi-region focusing method, mobile terminal and computer readable storage medium
CN108712609A (en) * 2018-05-17 2018-10-26 Oppo广东移动通信有限公司 Focusing process method, apparatus, equipment and storage medium
CN109495689A (en) * 2018-12-29 2019-03-19 北京旷视科技有限公司 A kind of image pickup method, device, electronic equipment and storage medium
CN109495689B (en) * 2018-12-29 2021-04-13 北京旷视科技有限公司 Shooting method and device, electronic equipment and storage medium
CN109951647A (en) * 2019-01-23 2019-06-28 努比亚技术有限公司 A kind of acquisition parameters setting method, terminal and computer readable storage medium
CN110290324A (en) * 2019-06-28 2019-09-27 Oppo广东移动通信有限公司 Equipment imaging method, device, storage medium and electronic equipment
CN110290324B (en) * 2019-06-28 2021-02-02 Oppo广东移动通信有限公司 Device imaging method and device, storage medium and electronic device
CN110572573A (en) * 2019-09-17 2019-12-13 Oppo广东移动通信有限公司 Focusing method and device, electronic equipment and computer readable storage medium
CN110572573B (en) * 2019-09-17 2021-11-09 Oppo广东移动通信有限公司 Focusing method and device, electronic equipment and computer readable storage medium

Also Published As

Publication number Publication date
CN105791674B (en) 2019-06-25

Similar Documents

Publication Publication Date Title
CN105791674A (en) Electronic device and focusing method
US11386284B2 (en) System and method for improving speed of similarity based searches
US8724910B1 (en) Selection of representative images
CN105100894B (en) Face automatic labeling method and system
CN110929622B (en) Video classification method, model training method, device, equipment and storage medium
WO2012073421A1 (en) Image classification device, image classification method, program, recording media, integrated circuit, and model creation device
CN109643448A (en) Fine granularity object identification in robot system
US9224211B2 (en) Method and system for motion detection in an image
US20130336590A1 (en) Method and apparatus for generating a visual story board in real time
US11042991B2 (en) Determining multiple camera positions from multiple videos
Bianco et al. Predicting image aesthetics with deep learning
US20200082851A1 (en) Bounding box doubling as redaction boundary
WO2014068472A1 (en) Depth map generation from a monoscopic image based on combined depth cues
CN104994426A (en) Method and system of program video recognition
CN110717058B (en) Information recommendation method and device and storage medium
Voulodimos et al. Improving multi-camera activity recognition by employing neural network based readjustment
CN110176024B (en) Method, device, equipment and storage medium for detecting target in video
Wang et al. Aspect-ratio-preserving multi-patch image aesthetics score prediction
CN111242019A (en) Video content detection method and device, electronic equipment and storage medium
CN110348366B (en) Automatic optimal face searching method and device
CN113051984A (en) Video copy detection method and apparatus, storage medium, and electronic apparatus
Mseddi et al. Real-time scene background initialization based on spatio-temporal neighborhood exploration
CN114078223A (en) Video semantic recognition method and device
CN112383824A (en) Video advertisement filtering method, device and storage medium
CN111860629A (en) Jewelry classification system, method, device and storage medium

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant