CN105120165A - Image acquisition control method and device - Google Patents


Info

Publication number
CN105120165A
Authority
CN
China
Prior art keywords
focusing
camera lens
target body
mapping relations
current preview
Prior art date
Legal status
Pending
Application number
CN201510548776.1A
Other languages
Chinese (zh)
Inventor
李茂兴
Current Assignee
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date
Filing date
Publication date
Application filed by Lenovo Beijing Ltd filed Critical Lenovo Beijing Ltd
Priority to CN201510548776.1A
Publication of CN105120165A

Abstract

The invention discloses an image acquisition control method and device applied to an electronic device. The electronic device comprises an image acquisition unit. The method comprises: presetting, in the electronic device, a mapping relation table of human body physical feature data to distances between the human body and the electronic device, wherein the mapping relation table comprises at least one mapping relation of human body physical feature data to a distance; obtaining a current preview image when the image acquisition unit acquires an image; performing image analysis on the current preview image to obtain an analysis result; when the analysis result indicates that a figure exists in the current preview image, obtaining the physical feature data of a target human body in the current preview image; determining a target distance between the target human body and the electronic device according to the mapping relations in the mapping relation table and the physical feature data of the target human body; controlling a lens in the image acquisition unit to move into a sub-region of the focusing movement region of the lens based on the target distance; and obtaining the image data using the lens.

Description

Image acquisition control method and device
Technical field
The present application relates to the technical field of image processing, and in particular to an image acquisition control method and device.
Background technology
With the development of electronic technology, smart devices with camera functions are increasingly used in daily life; for example, the camera on a device such as a mobile phone is used to take selfies or to photograph friends, capturing moments of life.
However, when the camera on an existing smart device takes a photograph, it usually needs to go through operating-body gesture recognition, camera focusing, and the photographing process. The focusing stage involves the camera motor driving the lens from far to near to search for the focus position; this search consumes considerable time and reduces photographing efficiency.
Summary of the invention
In view of this, the present application provides an image acquisition control method and device, to solve the technical problem in the prior art that the focusing process during photographing is time-consuming and reduces photographing efficiency.
To achieve the above object, the present application provides the following technical solutions:
An image acquisition control method, applied to an electronic device, the electronic device comprising an image acquisition device, the method comprising:
presetting, in the electronic device, a mapping relation table between human body physical feature data and the distance between the human body and the electronic device, the mapping relation table comprising at least one mapping relation between human body physical feature data and a distance;
obtaining a current preview image when the image acquisition device performs image acquisition;
performing image analysis on the current preview image to obtain an analysis result;
when the analysis result indicates that a person exists in the current preview image, obtaining the physical feature data of a target human body in the current preview image;
determining a target distance between the target human body and the electronic device according to the mapping relations in the mapping relation table and the physical feature data of the target human body;
based on the target distance, controlling the lens in the image acquisition device to move into a sub-region of the focusing movement region of the lens;
obtaining image data using the lens.
In the above method, preferably, before obtaining image data using the lens, the method further comprises:
controlling the lens to move across the position points in the sub-region until the lens is at a source position point in the sub-region;
wherein, when the lens is at the source position point, the sharpness of the image data it acquires meets a preset first condition.
In the above method, preferably, the physical feature data comprises: body feature data of the target human body or facial feature data of the target human body.
In the above method, preferably, the facial feature data comprises: relative spatial position feature data between at least two parts of the face of the target human body;
correspondingly, obtaining the physical feature data of the target human body in the current preview image comprises:
identifying at least two parts of the face of the target human body in the current preview image;
based on the identified at least two parts, measuring the spatial distance data between every two of the at least two parts of the face of the target human body in the current preview image;
obtaining, according to the spatial distance data between every two parts, the relative spatial position feature data between the at least two parts of the face of the target human body;
correspondingly, determining the target distance between the target human body and the electronic device according to the mapping relations in the mapping relation table and the physical feature data of the target human body comprises:
obtaining, according to the mapping relations in the mapping relation table, a target mapping relation corresponding to the relative spatial position feature data between the at least two parts of the face of the target human body;
determining, based on the target mapping relation, the target distance between the target human body and the electronic device.
In the above method, preferably, when the analysis result indicates that no person exists in the current preview image, the method further comprises:
controlling the lens, within its focusing movement region, to move from its start position point successively through each position point in the focusing movement region to the end position point, so as to determine the focus position point in the focusing movement region;
controlling the lens to move to the focus position point;
obtaining image data using the lens;
wherein, when the lens is at the focus position point, the sharpness of the image data it acquires meets a preset second condition.
The present invention also provides an image acquisition control device, applied to an electronic device, the electronic device comprising an image acquisition device, the device comprising:
a mapping presetting unit, configured to preset, in the electronic device, a mapping relation table between human body physical feature data and the distance between the human body and the electronic device, the mapping relation table comprising at least one mapping relation between human body physical feature data and a distance;
a preview obtaining unit, configured to obtain a current preview image when the image acquisition device performs image acquisition;
an image analysis unit, configured to perform image analysis on the current preview image to obtain an analysis result, and to trigger a feature obtaining unit when the analysis result indicates that a person exists in the current preview image;
the feature obtaining unit, configured to obtain the physical feature data of a target human body in the current preview image;
a distance determining unit, configured to determine a target distance between the target human body and the electronic device according to the mapping relations in the mapping relation table and the physical feature data of the target human body;
a lens control unit, configured to control, based on the target distance, the lens in the image acquisition device to move into a sub-region of the focusing movement region of the lens;
an image obtaining unit, configured to obtain image data using the lens.
In the above device, preferably, the device further comprises:
a sub-region focusing unit, configured, after the lens control unit controls the lens in the image acquisition device to move into the sub-region of the focusing movement region of the lens and before the image obtaining unit obtains image data using the lens, to control the lens to move across the position points in the sub-region until the lens is at a source position point in the sub-region;
wherein the sharpness of the image data obtained when the image obtaining unit uses the lens at the source position point meets a preset first condition.
In the above device, preferably, the physical feature data obtained by the feature obtaining unit comprises: body feature data of the target human body or facial feature data of the target human body.
In the above device, preferably, the facial feature data obtained by the feature obtaining unit comprises: relative spatial position feature data between at least two parts of the face of the target human body;
correspondingly, the feature obtaining unit comprises:
a part recognition subunit, configured to identify at least two parts of the face of the target human body in the current preview image;
a distance measurement subunit, configured to measure, based on the identified at least two parts, the spatial distance data between every two of the at least two parts of the face of the target human body in the current preview image;
a feature obtaining subunit, configured to obtain, according to the spatial distance data between every two parts, the relative spatial position feature data between the at least two parts of the face of the target human body;
correspondingly, the distance determining unit comprises:
a target mapping obtaining subunit, configured to obtain, according to the mapping relations in the mapping relation table, a target mapping relation corresponding to the relative spatial position feature data between the at least two parts of the face of the target human body;
a target distance determining subunit, configured to determine, based on the target mapping relation, the target distance between the target human body and the electronic device.
In the above device, preferably, the device further comprises:
a movement region focusing unit, configured, when the analysis result obtained by the image analysis unit indicates that no person exists in the current preview image, to control the lens, within its focusing movement region, to move from its start position point successively through each position point in the focusing movement region to the end position point, so as to determine the focus position point in the focusing movement region;
a lens focus moving unit, configured to control the lens to move to the focus position point and trigger the image obtaining unit to obtain image data using the lens;
wherein the sharpness of the image data obtained when the image obtaining unit uses the lens at the focus position point meets a preset second condition.
As can be seen from the above technical solutions, compared with the prior art, the image acquisition control method and device disclosed by the present application preset a mapping relation table in the electronic device, which contains mapping relations between preset human body physical feature data and the distance between the human body and the electronic device during image acquisition. Thus, when the image acquisition device in the electronic device performs image acquisition, image analysis can be performed on the current preview image after it is obtained, and, when a person exists in the current preview image, the human body physical feature data in the current preview image are obtained. The mapping relations in the preset mapping relation table are then used to determine the target distance between the target human body corresponding to the feature data and the electronic device, i.e. the distance between the image acquisition device and the target human body being captured. Based on this distance, the lens is controlled to move directly into the sub-region of its focusing movement region corresponding to the target distance, and image data are then obtained using the lens. In this process there is no need to move the lens across all position points in its focusing movement region; instead, the lens moves directly to the sub-region corresponding to the shooting distance to search for the proper focus, thereby saving lens movement time and improving the efficiency of image acquisition.
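The overall control flow summarized above can be sketched in Python as follows. This is a minimal illustration, not an implementation from the patent: the class, the helper functions, and all numeric values (distance thresholds, lens position units) are assumptions, and the fine focus search and full sweep are represented by stubs.

```python
# Hypothetical sketch of the disclosed control flow; all names are made up.

class Lens:
    """Stand-in for the lens in the image acquisition device."""
    def __init__(self):
        self.position = 0

    def move_to(self, position):
        self.position = position

    def capture(self):
        return {"focused_at": self.position}

def person_present(preview):
    # Stand-in for the image-analysis step on the current preview image.
    return preview.get("has_person", False)

def subregion_for(distance_cm):
    # Stand-in for the preset distance-range -> lens sub-region division.
    return (60, 80) if distance_cm < 100 else (10, 40)

def acquire_image(preview, lens, target_distance_cm=None):
    if person_present(preview) and target_distance_cm is not None:
        lo, hi = subregion_for(target_distance_cm)
        # Jump straight into the sub-region instead of sweeping the whole
        # focusing movement region; a fine search would then run in [lo, hi].
        lens.move_to((lo + hi) // 2)
    else:
        # Fallback: a full sweep of the focusing movement region would run
        # here; its result is represented by a fixed position.
        lens.move_to(50)
    return lens.capture()
```

The time saving claimed by the application comes from the `move_to((lo + hi) // 2)` branch: the coarse jump replaces most of the point-by-point search.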
Accompanying drawing explanation
To illustrate more clearly the technical solutions in the embodiments of the present application or in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only embodiments of the present application; for those of ordinary skill in the art, other drawings can be obtained from the provided drawings without creative effort.
Fig. 1 is a flowchart of Embodiment 1 of an image acquisition control method provided by the present invention;
Fig. 2a to Fig. 2g are application example diagrams of the embodiments of the present invention;
Fig. 3 is a flowchart of Embodiment 2 of an image acquisition control method provided by the present invention;
Fig. 4 is a partial flowchart of Embodiment 3 of an image acquisition control method provided by the present invention;
Fig. 5 is another partial flowchart of Embodiment 3 of the present invention;
Fig. 6 is a flowchart of Embodiment 4 of an image acquisition control method provided by the present invention;
Fig. 7 is a structural schematic diagram of Embodiment 5 of an image acquisition control device provided by the present invention;
Fig. 8 is a structural schematic diagram of Embodiment 6 of an image acquisition control device provided by the present invention;
Fig. 9 is a partial structural schematic diagram of Embodiment 7 of an image acquisition control device provided by the present invention;
Fig. 10 is another partial structural schematic diagram of Embodiment 7 of the present invention;
Fig. 11 is a structural schematic diagram of Embodiment 8 of an image acquisition control device provided by the present invention.
Detailed description of embodiments
The technical solutions in the embodiments of the present application are described clearly and completely below in conjunction with the drawings in the embodiments of the present application. Obviously, the described embodiments are only some of the embodiments of the present application, not all of them. Based on the embodiments in the present application, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the scope of protection of the present application.
Referring to Fig. 1, which is a flowchart of Embodiment 1 of an image acquisition control method provided by the present invention, the method is applied to an electronic device. The electronic device may be a terminal device having an image acquisition device, such as a mobile phone, a pad, or a single-lens reflex (SLR) camera. The image acquisition device may be a device such as a camera.
In this embodiment, the method may comprise the following steps to achieve the object of the invention:
Step 101: presetting, in the electronic device, a mapping relation table between human body physical feature data and the distance between the human body and the electronic device.
The mapping relation table comprises at least one mapping relation, namely a mapping relation between human body physical feature data and a distance. That is, this embodiment presets a mapping relation table in the electronic device. Specifically, the image acquisition device is used in advance to capture images at different distances; feature analysis is performed on the images corresponding to these different distances to obtain the human body physical feature data corresponding to each distance; these feature data are then paired one-to-one with their corresponding distances to establish the mapping relations that make up the mapping relation table.
It should be noted that the mapping relation table may be preset or stored in a storage unit of the electronic device, in the EEPROM (Electrically Erasable Programmable Read-Only Memory, a storage chip whose data are retained after power-off) of the lens of the image acquisition device, or in the OTP (one-time programmable, a register that can be written once and not modified afterwards) memory of the lens.
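The calibration procedure just described (shoot at known distances, extract a feature value from each image, pair it with the distance) can be sketched as below. The choice of a single scalar feature per shot, and the specific numbers, are illustrative assumptions, not part of the patent.

```python
# Build the mapping relation table from calibration shots taken at known
# distances. The feature values and distances below are made-up examples.

def build_mapping_table(calibration_samples):
    """calibration_samples: iterable of (feature_px, distance_cm) pairs,
    one per calibration shot. Returns the table sorted by feature value."""
    table = list(calibration_samples)
    table.sort(key=lambda pair: pair[0])
    return table

# Example: a facial feature value measured at 50 cm, 100 cm and 200 cm.
table = build_mapping_table([(120.0, 50.0), (60.0, 100.0), (30.0, 200.0)])
```

The resulting list of (feature, distance) pairs is what would be written into the storage unit, EEPROM, or OTP memory mentioned above.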
Step 102: obtaining a current preview image when the image acquisition device performs image acquisition.
The current preview image may be obtained using the lens in the image acquisition device. Unlike the acquisition of image data, the current preview image need not be imaged on the photosensitive chip in the image acquisition device.
Step 103: performing image analysis on the current preview image to obtain an analysis result; when the analysis result indicates that a person exists in the current preview image, performing step 104.
Specifically, step 103 may perform pixel-scan recognition on the current preview image to identify any figure picture in it and thereby obtain the analysis result, which indicates whether a person exists in the current preview image.
Step 104: obtaining the physical feature data of the target human body in the current preview image.
Specifically, when a person is identified in the current preview image, step 104 may perform further feature recognition on the character features in the current preview image. For example, to identify the physical features of the target human body, the human body picture corresponding to the target human body shown in the current preview image is first determined, and then the physical features in that picture, such as the limbs, trunk, and head, are identified, thereby obtaining the physical feature data of the target human body.
Step 105: determining the target distance between the target human body and the electronic device according to the mapping relations in the mapping relation table and the physical feature data of the target human body.
Based on the foregoing, the mapping relation table contains the mapping relations of the human body physical feature data corresponding to different distances. Therefore, step 105 queries the mapping relation table for the distance corresponding to the physical feature data of the target human body, and takes that distance as the target distance between the target human body and the electronic device.
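The table query in step 105 can be sketched as a simple lookup. The patent does not specify the matching rule when the measured feature falls between table entries, so nearest-neighbour matching is an assumption here; an interpolating lookup would be an equally plausible reading.

```python
# Nearest-neighbour lookup of the target distance from the mapping relation
# table. Table entries are (feature_px, distance_cm); values are made up.

MAPPING_TABLE = [(30.0, 200.0), (60.0, 100.0), (120.0, 50.0)]

def lookup_target_distance(table, feature_px):
    """Return the distance of the mapping relation whose feature value is
    closest to the measured one."""
    best_feature, best_distance = min(
        table, key=lambda pair: abs(pair[0] - feature_px)
    )
    return best_distance

# A measured feature of 58.0 px is closest to the 60.0 px entry -> 100.0 cm.
distance_cm = lookup_target_distance(MAPPING_TABLE, 58.0)
```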
Step 106: based on the target distance, controlling the lens in the image acquisition device to move into a sub-region of the focusing movement region of the lens.
The lens in the image acquisition device has a focusing movement region in which it performs focusing, as shown in Fig. 2a. In the prior art, the lens moves back and forth across the focusing movement region to find the position that yields the sharpest image.
In this embodiment, the focusing movement region is divided into sub-regions in advance based on the distance between the image acquisition device and the captured scene, as shown in Fig. 2b. When the distance between the captured scene and the image acquisition device is x1, a lens position point in sub-region y1 yields sharper image acquisition; when the distance is x2, a lens position point in sub-region y2 yields sharper image acquisition; and so on, until, when the distance is xn, a lens position point in sub-region yn yields sharper image acquisition. Using this scheme, based on the target distance, this embodiment moves the lens in the image acquisition device into the sub-region corresponding to the target distance.
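The distance-range to sub-region division of Fig. 2b can be sketched as a lookup over ordered ranges. The range boundaries and lens position units below are made-up calibration values, not figures from the patent.

```python
# Map a target distance x_i to its preset lens sub-region y_i (cf. Fig. 2b).
# Each entry is (upper_bound_cm, (subregion_start, subregion_end)).

SUBREGIONS = [
    (60.0, (700, 1000)),       # distance < 60 cm   -> sub-region y1
    (150.0, (400, 700)),       # 60 cm to 150 cm    -> sub-region y2
    (float("inf"), (0, 400)),  # beyond 150 cm      -> sub-region yn
]

def subregion_for_distance(distance_cm):
    """Return the lens sub-region corresponding to the target distance."""
    for upper_bound, region in SUBREGIONS:
        if distance_cm < upper_bound:
            return region
    raise ValueError("unreachable: last bound is infinite")
```

Near subjects map to lens positions far from the sensor and distant subjects to positions close to it, which matches the w1 (farthest) to wm (closest) ordering of Fig. 2g.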
Step 107: obtaining image data using the lens.
Specifically, step 107 uses the lens to collect light from the captured scene and images that light on the photosensitive chip provided in the image acquisition device, so as to obtain image data of the captured scene.
As can be seen from the above technical solutions, compared with the prior art, Embodiment 1 of the image acquisition control method disclosed by the present application presets a mapping relation table in the electronic device, containing mapping relations between preset human body physical feature data and the distance between the human body and the electronic device during image acquisition. When the image acquisition device performs image acquisition, image analysis is performed on the current preview image after it is obtained; when a person exists in the current preview image, the human body physical feature data in it are obtained, and the mapping relations in the preset table are used to determine the target distance between the corresponding target human body and the electronic device, i.e. the distance between the image acquisition device and the target human body being captured. Based on this distance, the lens is controlled to move directly into the sub-region of its focusing movement region corresponding to the target distance, and image data are then obtained using the lens. There is no need to move the lens across all position points in its focusing movement region; instead, the lens moves directly to the sub-region corresponding to the shooting distance to search for the proper focus, thereby saving lens movement time and improving the efficiency of image acquisition.
Based on the foregoing scheme, in order to improve focusing sharpness while ensuring image acquisition efficiency, this embodiment may control the lens to move within the sub-region so as to complete focusing. Accordingly, referring to Fig. 3, which is a flowchart of Embodiment 2 of the image acquisition control method provided by the present invention, after step 106 and before step 107 the method may further comprise the following steps:
Step 108: controlling the lens to move across the position points in the sub-region until the lens is at a source position point in the sub-region.
Wherein, when the lens is at the source position point, the sharpness of the image data it acquires meets a preset first condition.
The first condition may be that the sharpness exceeds a preset threshold, or that the image data obtained with the lens at the source position point has the highest sharpness among the image data obtained with the lens at any position point in the sub-region.
Specifically, step 108 controls the lens to move in the sub-region from its first position point to its last position point, so as to find, among the position points of the sub-region, the one at which the lens can complete focusing and at which the collected light images most sharply on the photosensitive chip; this is the source position point. Step 107 is then performed, using the lens at this source position point to obtain image data whose sharpness meets the first condition, e.g. is the highest.
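The fine search of step 108 amounts to scoring each position point in the sub-region and keeping the best one. The sketch below assumes the second form of the first condition (highest sharpness in the sub-region); the sharpness curve is synthetic, since the patent does not specify a sharpness metric.

```python
# Fine focus search inside the sub-region: visit each position point, score
# the sharpness the sensor would see there, keep the best one (the "source
# position point"). The sharpness function is a synthetic stand-in.

def find_source_position(positions, sharpness_at):
    """Return the position point in `positions` with the highest sharpness."""
    return max(positions, key=sharpness_at)

# Synthetic sharpness curve peaking at position 520 inside sub-region [400, 700].
peak = 520
sharpness = lambda p: -abs(p - peak)
best = find_source_position(range(400, 701, 10), sharpness)
```

Because only the sub-region's points are visited, the search cost scales with the sub-region size rather than the full focusing movement region.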
In a specific implementation, the physical feature data obtained in step 104 may be the body feature data of the target human body or the facial feature data of the target human body.
Specifically, the facial feature data of the target human body may include the relative spatial position feature data between at least two parts of the face of the target human body, for example, between the two eyes and the mouth of the target human body.
Accordingly, referring to Fig. 4, which is an implementation flowchart of step 104 in Embodiment 3 of the image acquisition control method provided by the present invention, step 104 may be implemented by the following steps:
Step 141: identifying at least two parts of the face of the target human body in the current preview image.
For example, step 141 identifies three parts of the face of the target human body shown in the current preview image: the two eyes z1 and z2 and the mouth z3, as shown in Fig. 2c.
Alternatively, as shown in Fig. 2d, step 141 identifies three parts: the two eyes z1 and z2 and the cheek z4 of the face of the target human body shown in the current preview image.
Step 142: based on the identified at least two parts, measuring the spatial distance data between every two of the at least two parts of the face of the target human body shown in the current preview image.
As shown in Fig. 2e, the image spacing distance l1 between eye z1 and eye z2, the image spacing distance l2 between eye z1 and mouth z3, and the image spacing distance l3 between eye z2 and mouth z3 are measured in the current preview image.
As shown in Fig. 2f, the image spacing distance l1 between eye z1 and eye z2 and the spatial width l4 of cheek z4 are measured in the current preview image.
Step 143: obtaining, according to the spatial distance data between every two parts, the relative spatial position feature data between the at least two parts of the face of the target human body.
Taking the scheme of Fig. 2e as an example, based on the image spacing distance l1 between eyes z1 and z2, the image spacing distance l2 between eye z1 and mouth z3, and the image spacing distance l3 between eye z2 and mouth z3, this embodiment obtains the relative spatial position feature data among eye z1, eye z2, and mouth z3.
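The measurement of l1, l2, and l3 in steps 142 and 143 can be sketched as pairwise image-plane distances between identified facial parts. The landmark names and pixel coordinates below are illustrative assumptions.

```python
# Compute the pairwise image distances (the l1, l2, l3 of Fig. 2e) between
# identified facial parts in the current preview image.

import math

def pairwise_pixel_distances(landmarks):
    """landmarks: dict mapping part name -> (x, y) pixel coordinates.
    Returns the image-plane distance for every pair of parts."""
    names = sorted(landmarks)
    return {
        (a, b): math.dist(landmarks[a], landmarks[b])
        for i, a in enumerate(names)
        for b in names[i + 1:]
    }

# Hypothetical landmark positions in a preview frame.
features = pairwise_pixel_distances(
    {"eye_l": (100, 80), "eye_r": (160, 80), "mouth": (130, 140)}
)
```

These absolute pixel distances shrink as the subject moves away from the camera, which is why they can index the distance mapping relations described above.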
Based on the above implementation, referring to Fig. 5, which is an implementation flowchart of step 105 in this embodiment of the present invention, step 105 may be implemented by the following steps:
Step 151: obtaining, according to the mapping relations in the mapping relation table, the target mapping relation corresponding to the relative spatial position feature data between the at least two parts of the face of the target human body.
Step 152: determining, based on the target mapping relation, the target distance between the target human body and the electronic device.
As stated above, the mapping relation table contains the mapping relations of human body physical feature data corresponding to different distances, that is, the mapping relations of the relative spatial position feature data between at least two parts of the human face corresponding to different distances. Therefore, step 151 may specifically search the mapping relation table for the target mapping relation corresponding to the measured relative spatial position feature data. Afterwards, step 152 determines the distance in this target mapping relation as the target distance between the image acquisition device and the captured scene, i.e. the target human body, at the time of the current image acquisition.
In the subsequent implementation, this embodiment first controls, based on the target distance, the lens in the image acquisition device to move into the sub-region of the focusing movement region of the lens, and then uses the lens to obtain image data.
Referring to Fig. 6, which is a flowchart of Embodiment 4 of the image acquisition control method provided by the present invention, when the analysis result obtained in step 103 indicates that no person exists in the current preview image, the method may further comprise the following steps:
Step 109: controlling the lens, within its focusing movement region, to move from its start position point successively through each position point in the focusing movement region to the end position point, so as to determine the focus position point in the focusing movement region.
Step 110: controlling the lens to move to the focus position point, and performing step 107 to obtain image data using the lens.
Wherein, when the lens is at the focus position point, the sharpness of the image data it acquires meets a preset second condition. The second condition may be that the sharpness exceeds a preset threshold, or that the image data obtained with the lens at the focus position point has the highest sharpness among the image data obtained with the lens at any position point in the focusing movement region. As shown in Fig. 2g, the focusing movement region includes multiple position points: the start position point w1, farthest from the photosensitive chip in the image acquisition device; the end position point wm, closest to the photosensitive chip; and intermediate position points w2, w3, etc. between them. This embodiment controls the lens to move from w1 successively through w2, w3, etc. until wm, completing the movement of the lens across every position point in the focusing movement region, so as to find the image sharpness attainable at each position point. The position point with the highest image sharpness is determined to be the focus position point, and the lens is then moved to it, so that when the lens is at the focus position point, the image data it acquires has the highest sharpness.
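The w1-to-wm sweep of steps 109 and 110 can be sketched as below, again assuming the highest-sharpness form of the second condition; the step size and sharpness curve are synthetic stand-ins.

```python
# Full-sweep fallback when no person is detected: step the lens from the
# start position point w1 through every point to the end point wm and keep
# the sharpest one (the "focus position point").

def full_sweep_focus(start, end, step, sharpness_at):
    """Visit start, start+step, ... up to end; return the sharpest point."""
    focus_point, best = start, sharpness_at(start)
    position = start + step
    while position <= end:
        score = sharpness_at(position)
        if score > best:
            focus_point, best = position, score
        position += step
    return focus_point

# Synthetic sharpness curve peaking at w = 640 over the whole region [0, 1000].
focus = full_sweep_focus(0, 1000, 10, lambda w: -(w - 640) ** 2)
```

Contrasting this with the sub-region search earlier shows the cost the person-detection path avoids: here every position point in the whole focusing movement region is visited.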
In addition, the somatic feature data of the target human body may include the body image area features of the target human body, or the relative spatial position features between the torso and the limbs of the body, etc.
Referring to Fig. 7, which is a structural diagram of embodiment five of an image focusing control device provided by the invention, the device is applied to an electronic device, and the electronic device may be a terminal having an image acquisition device, such as a mobile phone, a tablet, or a single-lens reflex camera. The image acquisition device may be a camera or a similar module.
In the present embodiment, the device may comprise the following structure to achieve the object of the invention:
Mapping preset unit 701, configured to preset, in the electronic device, a mapping relation table between human physical feature data and the distance from the human body to the electronic device.
The mapping relation table contains at least one mapping relation, namely a mapping between human physical feature data and a distance. That is, the mapping preset unit 701 in the present embodiment presets a mapping relation table in the electronic device. Specifically, the mapping preset unit 701 captures images with the image acquisition device at different distances in advance, performs feature analysis on the image corresponding to each distance to obtain the human physical feature data at that distance, and then establishes a one-to-one mapping between each set of feature data and its corresponding distance, composing the mapping relation table.
It should be noted that the mapping relation table may be preset or stored in a memory unit of the electronic device, or in the EEPROM (Electrically Erasable Programmable Read-Only Memory, a storage chip whose data survives power-down) of the lens of the image acquisition device; alternatively, the mapping relation table may be stored in the OTP (one-time programmable, a register that can be written once and not modified) memory of the lens.
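The calibration performed by mapping preset unit 701 can be sketched as follows: photograph a reference subject at several known distances, extract a feature measurement from each frame, and store the feature-to-distance pairs as the mapping relation table. The use of a single scalar feature value and the nearest-neighbour query are illustrative assumptions, not the patent's exact implementation:

```python
def build_mapping_table(calibration_shots):
    """calibration_shots: list of (distance_cm, feature_value) pairs
    obtained by photographing a subject at known distances.
    Returns the mapping relation table as (feature_value, distance_cm)
    entries sorted by feature value."""
    by_feature = sorted(calibration_shots, key=lambda s: s[1])
    return [(feature, dist) for dist, feature in by_feature]

def lookup_distance(table, feature_value):
    """Return the distance whose stored feature value is closest to the
    measured one (a simple nearest-neighbour query on the table)."""
    return min(table, key=lambda e: abs(e[0] - feature_value))[1]
```

Because the table is small and fixed at calibration time, it fits naturally in the EEPROM or OTP storage the passage above describes.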
Preview acquiring unit 702, configured to obtain the current preview picture when the image acquisition device performs image acquisition.
When obtaining the current preview picture, the preview acquiring unit 702 may use the lens in the image acquisition device; unlike the acquisition of image data, the current preview picture need not be imaged on the photosensitive chip of the image acquisition device.
Image analyzing unit 703, configured to perform image analysis on the current preview picture to obtain an analysis result, and to trigger the feature acquiring unit 704 when the analysis result indicates that a person is present in the current preview picture.
Specifically, the image analyzing unit 703 may perform pixel-scan recognition on the current preview picture to identify any human figure in it, thereby obtaining an analysis result that indicates whether a person is present in the current preview picture.
Feature acquiring unit 704, configured to obtain the physical feature data of the target human body in the current preview picture.
Specifically, when a person is identified in the current preview picture, the feature acquiring unit 704 performs further feature recognition on the person in the picture. For example, to identify the physical features of the target human body, it first determines the body image corresponding to the target human body shown in the current preview picture, and then recognizes the physical features in that body image, such as the limbs, torso and head, thereby obtaining the physical feature data of the target human body.
Distance determining unit 705, configured to determine the target distance between the target human body and the electronic device according to the mapping relations in the mapping relation table and the physical feature data of the target human body.
Based on the foregoing, the mapping relation table contains the mappings between distances and the human physical feature data corresponding to each distance. The distance determining unit 705 therefore queries the mapping relation table for the distance corresponding to the physical feature data of the target human body, and takes that distance as the target distance between the target human body and the electronic device.
Lens control unit 706, configured to, based on the target distance, control the lens in the image acquisition device to move into a subregion within the focusing movement region of the lens.
The lens in the image acquisition device has a focusing movement region within which it moves to focus. As shown in Fig. 2a, in the prior art the lens moves back and forth over the entire focusing movement region to find a position at which the image obtained by the lens is sharper.
In the present embodiment, the focusing movement region is divided into subregions in advance according to the distance between the image acquisition device and the photographed scene. As shown in Fig. 2b, when the distance between the photographed scene and the image acquisition device is x1, image acquisition is sharper with the lens at a position point in subregion y1; when the distance is x2, it is sharper with the lens in subregion y2; and so on, until distance xn, at which it is sharper with the lens in subregion yn. Using this scheme, the lens control unit 706 can, based on the target distance, move the lens in the image acquisition device into the subregion corresponding to the target distance.
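The distance-to-subregion correspondence of Fig. 2b can be modelled as a banded lookup. The threshold/band representation below is an illustrative assumption; the patent only requires that each distance xi correspond to a subregion yi:

```python
def subregion_for_distance(target_distance, boundaries, subregions):
    """boundaries: ascending distance thresholds x1 < x2 < ... < xn.
    subregions: the corresponding position-point lists y1 ... yn
    (same length as boundaries).  Returns the subregion whose distance
    band contains target_distance; distances beyond the last threshold
    fall into the last (closest-focus or farthest-focus) subregion."""
    for xi, yi in zip(boundaries, subregions):
        if target_distance <= xi:
            return yi
    return subregions[-1]
```

With this in hand, lens control unit 706 reduces to one table query followed by a single direct lens move, instead of a scan over the whole focusing movement region.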
Image acquisition unit 707, configured to obtain image data using the lens.
Specifically, the image acquisition unit 707 uses the lens to collect the light of the photographed scene and images that light on the photosensitive chip provided in the image acquisition device, thereby obtaining the image data of the photographed scene.
As can be seen from the above technical scheme, compared with the prior art, embodiment five of the image focusing control device disclosed by the application presets a mapping relation table in the electronic device, containing the mappings between preset human physical feature data and the distance from the human body to the electronic device during image acquisition. Thus, when the image acquisition device of the electronic device performs image acquisition, the application obtains the current preview picture, performs image analysis on it, and, when a person is present in the current preview picture, obtains the physical feature data of the target human body in the picture. The target distance between the target human body corresponding to those feature data and the electronic device, that is, the distance between the image acquisition device and the target human body being photographed, is then determined from the mapping relations in the preset mapping relation table. Based on this distance, the lens is moved directly into the subregion of its focusing movement region corresponding to the target distance, after which the lens is used to obtain image data. In this process the lens need not traverse every position point of its focusing movement region to find a suitable focus; it moves directly to the subregion corresponding to the shooting distance, thereby saving lens movement time and improving the efficiency of image acquisition.
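The whole pipeline described above can be sketched as one control function. The callback names (`subregion_for`, `fine_focus`, `capture`) and the nearest-feature table query are assumptions for illustration; the fallback branch corresponds to the full-region sweep the patent uses when no person is detected:

```python
def focus_and_capture(feature_value, table, subregion_for, all_points,
                      fine_focus, capture):
    """feature_value: measured physical feature of the target human body,
       or None when no person was found in the current preview picture.
    table: (feature_value, distance) entries of the mapping relation table.
    subregion_for: maps a distance to the position points of its subregion.
    fine_focus: picks the best position point from a list of candidates.
    Returns (chosen_position_point, captured_frame)."""
    if feature_value is not None:
        # Person present: nearest-feature lookup, then search only
        # the subregion matching the target distance.
        distance = min(table, key=lambda e: abs(e[0] - feature_value))[1]
        candidates = subregion_for(distance)
    else:
        # No person: fall back to sweeping the whole focusing movement region.
        candidates = all_points
    point = fine_focus(candidates)
    return point, capture(point)
```

The efficiency gain comes from the size of `candidates`: a subregion of a few position points versus the full region's w1..wm.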
Based on the foregoing scheme, in order to improve focusing sharpness while maintaining image acquisition efficiency, the present embodiment may further control the lens to move within the subregion so as to complete focusing. Accordingly, referring to Fig. 8, which is a structural diagram of embodiment six of an image acquisition control device provided by the invention, the device may further comprise the following structure:
Subregion focusing unit 708, configured to, after the lens control unit 706 moves the lens in the image acquisition device into the subregion within the focusing movement region of the lens, and before the image acquisition unit 707 obtains image data using the lens, control the lens to move among the position points in the subregion until the lens reaches a target position point within the subregion.
The sharpness of the image data obtained when the image acquisition unit 707 uses the lens at the target position point satisfies a preset first condition.
The first condition may be that the sharpness exceeds a preset threshold; alternatively, it may be that the image data obtained with the lens at the target position point has the highest sharpness among the image data obtained at any position point in the subregion.
Specifically, the subregion focusing unit 708 controls the lens to move from the first position point in the subregion to the last, in order to find, among the position points of the subregion, the one at which the lens completes focusing and the collected light is imaged most sharply on the photosensitive chip; this is the target position point. The image acquisition unit 707 is then triggered again to obtain image data with the lens at the target position point, and the sharpness of the image data obtained there satisfies the first condition, e.g. is the highest.
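The fine search of subregion focusing unit 708 is the same contrast scan as the full sweep, restricted to the subregion, and it admits both forms of the first condition. A sketch under those assumptions, with `capture_at` again a hypothetical lens-driver callback:

```python
def subregion_fine_focus(points, capture_at, sharpness, threshold=None):
    """Scan only the position points of the chosen subregion.
    If `threshold` is given, stop at the first point whose sharpness
    exceeds it (the 'greater than a preset threshold' form of the first
    condition); otherwise scan all points and return the sharpest one
    (the 'highest sharpness in the subregion' form)."""
    best_point, best_score = None, float("-inf")
    for p in points:
        score = sharpness(capture_at(p))
        if threshold is not None and score > threshold:
            return p  # early exit: first condition already met
        if score > best_score:
            best_point, best_score = p, score
    return best_point
```

The threshold form trades a little sharpness for speed, since the scan can stop before reaching the end of the subregion.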
In a specific implementation, the physical feature data obtained by the feature acquiring unit 704 comprises: the somatic feature data of the target human body, or the facial feature data of the target human body.
Specifically, the facial feature data obtained by the feature acquiring unit 704 comprises: the relative spatial position features between at least two parts of the face of the target human body, for example, between the two eyes and the mouth of the target human body.
Accordingly, referring to Fig. 9, which is a structural diagram of the feature acquiring unit 704 in embodiment seven of an image acquisition control device provided by the invention, the feature acquiring unit 704 may include the following structure:
Part recognition subunit 741, configured to identify at least two parts of the face of the target human body in the current preview picture.
For example, the part recognition subunit 741 identifies three parts, the two eyes z1, z2 and the mouth z3, in the face of the target human body shown in the current preview picture, as illustrated in Fig. 2c.
Alternatively, as shown in Fig. 2d, the part recognition subunit 741 identifies three parts, the two eyes z1, z2 and the cheek z4, in the face of the target human body shown in the current preview picture.
Distance measurement subunit 742, configured to measure, based on the identified at least two parts, the spatial distance data between every two of the at least two parts shown in the face of the target human body in the current preview picture.
As shown in Fig. 2e, the distance measurement subunit 742 measures, in the current preview picture, the image spacing distance l1 between the eyes z1 and z2, the image spacing distance l2 between the eye z1 and the mouth z3, and the image spacing distance l3 between the eye z2 and the mouth z3.
As shown in Fig. 2f, it measures, in the current preview picture, the image spacing distance l1 between the eyes z1 and z2, and the spatial width l4 of the cheek z4.
Feature obtaining subunit 743, configured to obtain, according to the spatial distance data between every two parts, the relative spatial position features between the at least two parts of the face of the target human body.
Taking the scheme of Fig. 2e as an example, the feature obtaining subunit 743 obtains the relative spatial position features among the eyes z1, z2 and the mouth z3 based on the image spacing distance l1 between z1 and z2, the image spacing distance l2 between z1 and z3, and the image spacing distance l3 between z2 and z3.
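The measurements performed by subunits 741 through 743 amount to computing pairwise Euclidean distances between detected facial landmarks. A minimal sketch with hypothetical landmark coordinates (the patent does not specify how the landmarks are detected, only that the spacings between them are measured):

```python
from itertools import combinations
from math import hypot

def pairwise_distances(landmarks):
    """landmarks: mapping of part name -> (x, y) pixel coordinates,
    e.g. {'z1': eye1, 'z2': eye2, 'z3': mouth}.  Returns the image
    spacing distance between every two parts (l1, l2, l3 of Fig. 2e).
    Apparent spacings shrink as the subject moves away from the camera,
    which is what makes them usable as the distance cue stored in the
    mapping relation table."""
    return {
        tuple(sorted((a, b))): hypot(pa[0] - pb[0], pa[1] - pb[1])
        for (a, pa), (b, pb) in combinations(landmarks.items(), 2)
    }
```

For three landmarks this yields exactly the three distances l1, l2, l3; for two landmarks plus a cheek width it yields the Fig. 2f variant.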
Based on the above implementation, referring to Fig. 10, which is a structural diagram of the distance determining unit 705 in this embodiment of the invention, the distance determining unit 705 may comprise the following structure:
Target mapping obtaining subunit 751, configured to obtain, according to the mapping relations in the mapping relation table, the target mapping relation corresponding to the relative spatial position features between the at least two parts of the face of the target human body.
Target distance determining subunit 752, configured to determine, based on the target mapping relation, the target distance between the target human body and the electronic device.
As stated above, the mapping relation table contains the mappings between distances and the human physical feature data corresponding to each distance, that is, the relative spatial position features between at least two parts of the human face corresponding to each distance. The target mapping obtaining subunit 751 can therefore search the mapping relation table for the target mapping relation corresponding to the measured relative spatial position features. The target distance determining subunit 752 then takes the distance in this target mapping relation as the target distance between the image acquisition device and the photographed scene, i.e. the target human body, during the current image acquisition.
In the subsequent execution scheme, the present embodiment first controls, based on the target distance, the lens in the image acquisition device to move into the subregion within the focusing movement region of the lens, and then obtains image data using the lens.
Referring to Fig. 11, which is a structural diagram of embodiment eight of an image focusing control device provided by the invention, the device may further comprise the following structure:
Movement region focusing unit 709, configured to, when the analysis result obtained by the image analyzing unit 703 indicates that no person is present in the current preview picture, control the lens to move, within its focusing movement region, from the start position point through each successive position point until the end position point, so as to determine the focusing position point within the focusing movement region.
Lens focusing moving unit 710, configured to control the lens to move to the focusing position point and to trigger the image acquisition unit 707 to obtain image data using the lens.
The sharpness of the image data obtained when the image acquisition unit 707 uses the lens at the focusing position point satisfies a preset second condition. The second condition may be that the sharpness exceeds a preset threshold; alternatively, it may be that the image data obtained with the lens at the focusing position point has the highest sharpness among the image data obtained at any position point in the focusing movement region. As illustrated in Fig. 2g, the focusing movement region contains multiple position points: the start position point w1, farthest from the photosensitive chip of the image acquisition device; the end position point wm, closest to the photosensitive chip; and intermediate position points w2, w3, etc. between them. The movement region focusing unit 709 moves the lens from w1 through w2, w3, etc. until wm, so that the lens visits every position point in the focusing movement region; the image sharpness achievable at each position point is determined, and the position point with the highest sharpness is taken as the focusing position point. The lens focusing moving unit 710 then moves the lens to the focusing position point, so that the image data acquired with the lens at the focusing position point has the highest sharpness.
In addition, the somatic feature data of the target human body may include the body image area features of the target human body, or the relative spatial position features between the torso and the limbs of the body, etc.
As to the foregoing method embodiments, for simplicity of description each is expressed as a series of action combinations, but those skilled in the art should understand that the application is not limited by the described order of actions, since according to the application some steps may be performed in other orders or simultaneously. Further, those skilled in the art should also understand that the embodiments described in the specification are all preferred embodiments, and the actions and modules involved are not necessarily required by the application.
The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the others, and the identical or similar parts among the embodiments may be referred to mutually. Since the devices disclosed in the embodiments correspond to the methods disclosed in the embodiments, their description is relatively simple, and the relevant parts may be referred to the description of the method parts.
It should be noted that, herein, relational terms such as "first" and "second" are used only to distinguish one entity or operation from another, and do not necessarily require or imply any such actual relation or order between these entities or operations. Moreover, the terms "comprise", "include" or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article or device comprising a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article or device. Without further limitation, an element defined by the statement "comprising a ..." does not exclude the existence of other identical elements in the process, method, article or device comprising that element.
For convenience of description, the above device is described as divided into various units by function. Of course, when implementing the application, the functions of the units may be realized in one or more pieces of software and/or hardware.
From the above description of the embodiments, those skilled in the art can clearly understand that the application may be implemented by software plus the necessary general hardware platform. Based on this understanding, the technical scheme of the application, or in essence the part contributing to the prior art, may be embodied in the form of a software product. The computer software product may be stored in a storage medium, such as ROM/RAM, a magnetic disk or an optical disk, and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform the methods described in the embodiments of the application or in certain parts thereof.
The above description of the disclosed embodiments enables those skilled in the art to implement or use the application. Various modifications to these embodiments will be apparent to those skilled in the art, and the general principles defined herein may be realized in other embodiments without departing from the spirit or scope of the application. Therefore, the application is not limited to the embodiments shown herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. An image focusing control method, applied to an electronic device comprising an image acquisition device, the method comprising:
presetting, in the electronic device, a mapping relation table between human physical feature data and the distance from the human body to the electronic device, the mapping relation table comprising at least one mapping relation between human physical feature data and a distance;
obtaining a current preview picture when the image acquisition device performs image acquisition;
performing image analysis on the current preview picture to obtain an analysis result;
when the analysis result indicates that a person is present in the current preview picture, obtaining the physical feature data of the target human body in the current preview picture;
determining the target distance between the target human body and the electronic device according to the mapping relations in the mapping relation table and the physical feature data of the target human body;
based on the target distance, controlling the lens in the image acquisition device to move into a subregion within the focusing movement region of the lens;
obtaining image data using the lens.
2. The method according to claim 1, wherein, before obtaining image data using the lens, the method further comprises:
controlling the lens to move among the position points in the subregion until the lens reaches a target position point within the subregion;
wherein, when the lens is at the target position point, the sharpness of the image data it acquires satisfies a preset first condition.
3. The method according to claim 1, wherein the physical feature data comprises: the somatic feature data of the target human body or the facial feature data of the target human body.
4. The method according to claim 3, wherein the facial feature data comprises: the relative spatial position features between at least two parts of the face of the target human body;
correspondingly, obtaining the physical feature data of the target human body in the current preview picture comprises:
identifying at least two parts of the face of the target human body in the current preview picture;
measuring, based on the identified at least two parts, the spatial distance data between every two of the at least two parts of the face of the target human body in the current preview picture;
obtaining, according to the spatial distance data between every two parts, the relative spatial position features between the at least two parts of the face of the target human body;
correspondingly, determining the target distance between the target human body and the electronic device according to the mapping relations in the mapping relation table and the physical feature data of the target human body comprises:
obtaining, according to the mapping relations in the mapping relation table, the target mapping relation corresponding to the relative spatial position features between the at least two parts of the face of the target human body;
determining, based on the target mapping relation, the target distance between the target human body and the electronic device.
5. The method according to claim 1, wherein, when the analysis result indicates that no person is present in the current preview picture, the method further comprises:
controlling the lens to move, within its focusing movement region, from the start position point through each successive position point until the end position point, so as to determine the focusing position point within the focusing movement region;
controlling the lens to move to the focusing position point;
obtaining image data using the lens;
wherein, when the lens is at the focusing position point, the sharpness of the image data it acquires satisfies a preset second condition.
6. An image focusing control device, applied to an electronic device comprising an image acquisition device, the control device comprising:
a mapping preset unit, configured to preset, in the electronic device, a mapping relation table between human physical feature data and the distance from the human body to the electronic device, the mapping relation table comprising at least one mapping relation between human physical feature data and a distance;
a preview acquiring unit, configured to obtain a current preview picture when the image acquisition device performs image acquisition;
an image analyzing unit, configured to perform image analysis on the current preview picture to obtain an analysis result, and to trigger a feature acquiring unit when the analysis result indicates that a person is present in the current preview picture;
the feature acquiring unit, configured to obtain the physical feature data of the target human body in the current preview picture;
a distance determining unit, configured to determine the target distance between the target human body and the electronic device according to the mapping relations in the mapping relation table and the physical feature data of the target human body;
a lens control unit, configured to, based on the target distance, control the lens in the image acquisition device to move into a subregion within the focusing movement region of the lens;
an image acquisition unit, configured to obtain image data using the lens.
7. The device according to claim 6, further comprising:
a subregion focusing unit, configured to, after the lens control unit moves the lens in the image acquisition device into the subregion within the focusing movement region of the lens, and before the image acquisition unit obtains image data using the lens, control the lens to move among the position points in the subregion until the lens reaches a target position point within the subregion;
wherein the sharpness of the image data obtained when the image acquisition unit uses the lens at the target position point satisfies a preset first condition.
8. The device according to claim 6, wherein the physical feature data obtained by the feature acquiring unit comprises: the somatic feature data of the target human body or the facial feature data of the target human body.
9. The device according to claim 8, wherein the facial feature data obtained by the feature acquiring unit comprises: the relative spatial position features between at least two parts of the face of the target human body;
correspondingly, the feature acquiring unit comprises:
a part recognition subunit, configured to identify at least two parts of the face of the target human body in the current preview picture;
a distance measurement subunit, configured to measure, based on the identified at least two parts, the spatial distance data between every two of the at least two parts of the face of the target human body in the current preview picture;
a feature obtaining subunit, configured to obtain, according to the spatial distance data between every two parts, the relative spatial position features between the at least two parts of the face of the target human body;
correspondingly, the distance determining unit comprises:
a target mapping obtaining subunit, configured to obtain, according to the mapping relations in the mapping relation table, the target mapping relation corresponding to the relative spatial position features between the at least two parts of the face of the target human body;
a target distance determining subunit, configured to determine, based on the target mapping relation, the target distance between the target human body and the electronic device.
10. The device according to claim 6, further comprising:
a movement region focusing unit, configured to, when the analysis result obtained by the image analyzing unit indicates that no person is present in the current preview picture, control the lens to move, within its focusing movement region, from the start position point through each successive position point until the end position point, so as to determine the focusing position point within the focusing movement region;
a lens focusing moving unit, configured to control the lens to move to the focusing position point and to trigger the image acquisition unit to obtain image data using the lens;
wherein the sharpness of the image data obtained when the image acquisition unit uses the lens at the focusing position point satisfies a preset second condition.
CN201510548776.1A 2015-08-31 2015-08-31 Image acquisition control method and device Pending CN105120165A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510548776.1A CN105120165A (en) 2015-08-31 2015-08-31 Image acquisition control method and device


Publications (1)

Publication Number Publication Date
CN105120165A true CN105120165A (en) 2015-12-02

Family

ID=54668041

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510548776.1A Pending CN105120165A (en) 2015-08-31 2015-08-31 Image acquisition control method and device

Country Status (1)

Country Link
CN (1) CN105120165A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106899800A (en) * 2016-06-28 2017-06-27 阿里巴巴集团控股有限公司 Method, device and mobile terminal device that camera is focused
CN108989791A (en) * 2018-07-11 2018-12-11 昆山丘钛微电子科技有限公司 A kind of linear detection method of motor, device and computer readable storage medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070025722A1 (en) * 2005-07-26 2007-02-01 Canon Kabushiki Kaisha Image capturing apparatus and image capturing method
CN101086598A * 2006-06-09 2007-12-12 Sony Corporation Imaging apparatus, control method of imaging apparatus, and computer program
US20110134273A1 (en) * 2006-06-09 2011-06-09 Sony Corporation Imaging apparatus, control method of imaging apparatus, and computer program
CN103024338A * 2011-04-08 2013-04-03 DigitalOptics Corporation Europe Ltd Display device with image capture and analysis module
CN103246130A * 2013-04-16 2013-08-14 Guangdong OPPO Mobile Telecommunications Corp., Ltd. Focusing method and device
CN104205171A * 2012-04-09 2014-12-10 Intel Corporation System and method for avatar generation, rendering and animation
CN104360456A * 2014-11-14 2015-02-18 Guangdong OPPO Mobile Telecommunications Corp., Ltd. Lens focusing control method and device
CN104853088A * 2015-04-09 2015-08-19 Lai'an County Xinyuan Electromechanical Equipment Design Co., Ltd. Method for rapidly focusing a photographing mobile terminal and mobile terminal

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106899800A * 2016-06-28 2017-06-27 Alibaba Group Holding Ltd Camera focusing method and device, and mobile terminal device
CN108989791A * 2018-07-11 2018-12-11 Kunshan Qiutai Microelectronics Technology Co., Ltd. Motor linearity detection method and device, and computer-readable storage medium
CN108989791B * 2018-07-11 2020-10-09 Kunshan Qiutai Microelectronics Technology Co., Ltd. Motor linearity detection method and device, and computer-readable storage medium

Similar Documents

Publication Publication Date Title
CN106911922B Depth map generated from a single sensor
CN110266938B (en) Transformer substation equipment intelligent shooting method and device based on deep learning
CN107087107A (en) Image processing apparatus and method based on dual camera
CN105933589A (en) Image processing method and terminal
CN106101540B Focusing point determination method and device
CN110996002B (en) Microscope focusing method, device, computer equipment and storage medium
CN108235816A (en) Image recognition method, system, electronic device and computer program product
US20110150300A1 (en) Identification system and method
CN105354296A (en) Terminal positioning method and user terminal
CN113132717A (en) Data processing method, terminal and server
CN109379538B (en) Image acquisition device, system and method
CN105120165A (en) Image acquisition control method and device
CN109711287B (en) Face acquisition method and related product
CN110930437B (en) Target tracking method and device
JP3919722B2 (en) Skin shape measuring method and skin shape measuring apparatus
CN104748862A (en) Analyzing device and analyzing method
CN104349197A (en) Data processing method and device
CN104980663A (en) Control method and device for rapid focusing of image acquisition
CN103900714A (en) Device and method for thermal image matching
KR101133024B1 (en) Apparatus and method for training based auto-focusing
CN106713726A Method and apparatus for recognizing photographing mode
JP2014232373A (en) Collation object extraction system, collation object extraction method, and collation object extraction program
CN111220173A (en) POI (Point of interest) identification method and device
JPH1032751A (en) Image pickup device and image processor
CN111179408B (en) Three-dimensional modeling method and equipment

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20151202