CN103607537A - Control method of camera and the camera - Google Patents


Info

Publication number
CN103607537A
CN103607537A (application CN201310532546.7A; granted publication CN103607537B)
Authority
CN
China
Prior art keywords
detection
person
target
camera
target person
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201310532546.7A
Other languages
Chinese (zh)
Other versions
CN103607537B (en)
Inventor
施伟
黄伟才
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Zhigu Ruituo Technology Services Co Ltd
Original Assignee
Beijing Zhigu Ruituo Technology Services Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Zhigu Ruituo Technology Services Co Ltd filed Critical Beijing Zhigu Ruituo Technology Services Co Ltd
Priority to CN201310532546.7A priority Critical patent/CN103607537B/en
Publication of CN103607537A publication Critical patent/CN103607537A/en
Application granted
Publication of CN103607537B publication Critical patent/CN103607537B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Studio Devices (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a control method for a camera, and the camera itself, and relates to the field of image processing. The method comprises: a target determination step of determining at least one target person among at least one detected person entering a detection field of view of the camera; and a photographing step of triggering the camera to take a picture according to a motion of the target person. The camera comprises a target determination module, configured to determine the at least one target person among the at least one detected person entering the detection field of view, and a photographing module, configured to trigger photographing according to the motion of the target person. The method and camera reduce the time wasted when a camera shoots at the wrong moment, and improve self-timer efficiency and the user experience.

Description

Control method of a camera, and the camera
Technical field
The present invention relates to the field of communication technology, and in particular to a control method for a camera, and to the camera.
Background
With the popularity of mobile terminals such as smartphones and tablet computers, more and more people enjoy taking pictures and, in particular, taking self-portraits, but self-timer photography still has various limitations.
One common way to take a self-portrait is to hold the camera by hand. Because both the maximum distance and the angle between the camera and the body are limited by human physiology, this approach is inconvenient to operate and cannot produce photos with the best effect. To improve the result, another common approach is to shoot with a remotely placed camera: before shooting, the camera is fixed at a certain position and is then triggered, for example by a timer. However, the shortcoming of this approach is that the user cannot control the moment of shooting well; the user often has to pick the desired photo out of many shots, or may not find a desired photo at all, which wastes the user's time.
Therefore, the self-timer process of existing cameras is inefficient, and the user experience is poor.
Summary of the invention
The object of the invention is to provide a control method for a camera, and the camera, so as to make it convenient for a user to take self-portraits with the camera and to improve self-timer efficiency.
To solve the above technical problem, in a first aspect, the invention provides a control method for a camera, the method comprising:
a target determination step: determining at least one target person among at least one detected person entering a detection field of view of the camera; and
a photographing step: triggering the camera to take a picture according to an action of the target person.
In a second aspect, the invention further provides a camera, the camera comprising:
a target determination module, configured to determine at least one target person among at least one detected person entering a detection field of view of the camera; and
a photographing module, configured to trigger photographing according to an action of the target person.
The control method and camera of the invention reduce the time wasted when a camera shoots at the wrong moment, and improve self-timer efficiency and the user experience.
Brief description of the drawings
Fig. 1 is a flowchart of the camera control method according to an embodiment of the invention;
Fig. 2a is a schematic diagram of the detection field of view of the camera according to an embodiment of the invention;
Fig. 2b is another schematic diagram of the detection field of view of the camera according to an embodiment of the invention;
Fig. 2c is a schematic diagram of the internal module structure of a person determination submodule according to an embodiment of the invention;
Fig. 3 is a schematic diagram of the module structure of the camera according to an embodiment of the invention;
Fig. 4 is a schematic diagram of the internal module structure of the target determination module according to a first embodiment of the invention;
Fig. 5a is a schematic diagram of the internal module structure of the target determination module according to a second embodiment of the invention;
Fig. 5b is a schematic diagram of the internal module structure of the target determination submodule according to the second embodiment of the invention;
Fig. 6 is a schematic diagram of the internal module structure of the target determination module according to a third embodiment of the invention;
Fig. 7 is a schematic diagram of the internal module structure of the target determination module according to a fourth embodiment of the invention;
Figs. 8a and 8b are schematic diagrams of application scenarios of the camera according to an embodiment of the invention;
Fig. 9 is a schematic diagram of the hardware structure of the camera according to an embodiment of the invention.
Embodiments
Specific embodiments of the invention are described in further detail below with reference to the drawings and examples. The following examples are intended to illustrate the invention, not to limit its scope.
In many application scenarios, a user wishes to take self-portraits with a camera. With existing self-timer methods the user cannot control the moment of shooting well (generally the user only knows that the camera will shoot automatically at some point within a period of time), so several attempts are often needed before the desired photo is obtained; efficiency is low and the experience is poor. If, while keeping the user's privacy safe, the user could control the camera by his or her own means and complete the shooting in a self-service manner, self-timer efficiency would improve markedly, the user would feel more involved, and the user experience would be better. The embodiments of the invention therefore provide a control method for a camera; as shown in Fig. 1, the method comprises:
S100: determining at least one target person among at least one detected person entering a detection field of view of the camera. The camera may be a personal camera or a public camera; a public camera may be one dedicated to shooting at a scenic spot, or one normally used for surveillance and temporarily used to photograph visitors.
S200: triggering the camera to take a picture according to an action of the target person.
The action may be a facial expression, such as smiling or making a face, or a body movement, such as striking a certain posture or making a certain hand gesture. Photographing may include ordinary still photography, and may also include capturing a still image during a video shoot.
The camera of this embodiment automatically determines the target person among the detected persons entering the detection field of view, and triggers shooting according to the target person's action. That is, after the target person is determined, the camera automatically judges, for example by image comparison, whether the target person has moved, i.e. whether the posture has changed, and shoots automatically after each posture change. The user can thus indirectly control the moment of shooting by striking poses, which reduces the time wasted when the camera shoots at the wrong moment and improves self-timer efficiency and the user experience.
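The posture-change trigger described above can be sketched as a simple frame comparison. This is a minimal illustration only, assuming grayscale frames and a mean-absolute-difference test; the function name, threshold, and toy frames are assumptions for illustration, not the patent's actual implementation.

```python
import numpy as np

def pose_changed(prev_region, curr_region, threshold=0.1):
    """Return True if the target person's image region changed enough
    between two frames to count as a new pose (illustrative
    mean-absolute-difference test, normalized to [0, 1])."""
    diff = np.abs(curr_region.astype(float) - prev_region.astype(float))
    return bool(diff.mean() / 255.0 > threshold)

# Toy frames: an 8x8 grayscale patch where the "pose" changes.
prev = np.zeros((8, 8), dtype=np.uint8)
curr = prev.copy()
curr[2:6, 2:6] = 200                 # the person strikes a new pose
print(pose_changed(prev, curr))      # True: large change, trigger a shot
print(pose_changed(prev, prev))      # False: identical frames, no shot
```

In a real camera this comparison would run on the target person's region only, so that passersby elsewhere in the frame (as in the Fig. 8a/8b scenarios) do not trigger shooting.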
Specifically, in an optional embodiment, step S100 comprises:
S101: collecting at least one visual feature of the detected person entering the detection field of view.
The visual feature may be a human visual feature (such as a facial feature), a clothing feature, a specific identifier feature (such as an identity tag attached to the user), or the like.
S102: comparing the at least one visual feature of the detected person with at least one pre-stored visual feature of at least one candidate person to determine the target person.
The pre-stored visual features of the candidate persons may be stored locally on the camera, or on a server the camera can access, and so on.
When there are multiple detected persons and the visual features of all of them are pre-stored, multiple target persons may be determined. In this case the camera may trigger shooting according to the action of any of the target persons, i.e. the camera shoots whenever any target person's posture changes.
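The visual comparison of step S102 might be sketched as nearest-neighbor matching on feature vectors. The vectors, the distance threshold, and the names below are illustrative assumptions; the patent does not specify a particular feature representation or metric.

```python
import math

def match_targets(detected, candidates, max_dist=0.1):
    """Compare each detected person's feature vector against the
    pre-stored candidate features; every detected person whose nearest
    candidate lies within max_dist becomes a target person (note that
    several detected persons may match, yielding multiple targets)."""
    targets = []
    for name, feat in detected.items():
        best = min(math.dist(feat, c) for c in candidates.values())
        if best <= max_dist:
            targets.append(name)
    return targets

# Pre-stored candidate features (e.g. derived from face photos).
candidates = {"owner": [0.1, 0.9], "friend": [0.8, 0.2]}
# Features collected from persons currently in the detection field.
detected = {"A": [0.12, 0.88], "B": [0.5, 0.5], "C": [0.79, 0.22]}
print(match_targets(detected, candidates))  # ['A', 'C']
```

Person B matches no pre-stored candidate and is therefore not a target; B's movements would not trigger the camera.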
For the above case of multiple target persons, acoustic information can further be used to narrow down the target person. Therefore, in another optional embodiment of the invention, step S100 comprises:
S111: collecting at least one visual feature of the detected person entering the detection field of view.
S112: comparing the at least one visual feature of the detected person with at least one pre-stored visual feature of at least one first candidate person to obtain a first comparison result.
The first comparison result may record the detected persons whose visual features match those of the first candidate persons; these may be referred to as vision-matched persons for short.
S113: collecting acoustic information within the detection field of view.
The acoustic information within the detection field of view is the acoustic information within the range the camera can currently photograph. As shown in Fig. 2a, the arrow indicates the shooting direction of camera 210, and the detection field of view of camera 210 is the region bounded by a first ray L1 and a second ray L2; in Fig. 2a, a first point 221 lies inside the detection field while a second point 222 lies outside it. From the bearing of the current acoustic information, the line connecting the sound source and the camera can be determined; by judging the positional relationship of this line with the rays L1 and L2 of Fig. 2a, it can be determined whether the current acoustic information originates inside the detection field, and acoustic information from outside the field can then be excluded.
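The in-field test against rays L1 and L2 reduces to an angular check once the sound bearing is known. The following sketch assumes bearings in degrees relative to the camera, with the field taken as the counter-clockwise arc from L1 to L2; the angle convention is an assumption, not specified by the patent.

```python
def bearing_in_fov(sound_bearing_deg, l1_deg, l2_deg):
    """Check whether a sound source's bearing falls between the two rays
    L1 and L2 that bound the detection field of view. All angles are
    normalized to [0, 360); the field is the arc from L1 to L2 measured
    counter-clockwise."""
    width = (l2_deg - l1_deg) % 360
    offset = (sound_bearing_deg - l1_deg) % 360
    return offset <= width

# Detection field from -30 to +30 degrees around the shooting direction.
print(bearing_in_fov(10, -30, 30))    # True: sound source is in the field
print(bearing_in_fov(100, -30, 30))   # False: out-of-field sound, excluded
```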
S114: determining the target person according to the first comparison result and the acoustic information within the detection field of view.
In an optional implementation, step S114 comprises:
comparing the acoustic information within the detection field of view with pre-stored acoustic information of at least one second candidate person to obtain a second comparison result; and
determining the target person according to the first comparison result and the second comparison result.
The second candidate persons may, for example, be a subset of the first candidate persons, so that among the vision-matched persons the camera can further find those whose acoustic information also matches, and take them as the final target persons. Suppose the camera's owner is out playing with friends. For convenience, everyone's visual feature (such as a face photo) can be pre-stored in the camera, and the owner can additionally pre-store his or her own acoustic information (such as a voice recording). While the group takes pictures, if there is no acoustic information within the detection field, the camera identifies both the friends and the owner as target persons according to the visual features and triggers shooting according to each target person's actions; in a group photo, anyone's action can trigger the camera. But when there is acoustic information within the detection field, the camera further screens the target persons determined from the visual features. For example, if everyone in the group shouts "listen to mine" at the camera, only the owner's voice matches the pre-stored acoustic information, so the camera will trigger shooting only according to the owner's actions.
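The two-stage screening in this implementation of S114 amounts to filtering the vision-matched persons by the acoustic matches. A minimal sketch, with names and the fallback behavior (no acoustic information means the visual matches stand) taken from the group-photo example above:

```python
def refine_targets(vision_matched, acoustic_matched):
    """Second-stage screening: start from the vision-matched persons
    (first comparison result) and, when any acoustic match exists
    (second comparison result), keep only those who also matched a
    pre-stored voice sample. With no acoustic information at all, the
    visual matches stand unchanged."""
    if not acoustic_matched:
        return list(vision_matched)
    return [p for p in vision_matched if p in acoustic_matched]

group = ["owner", "friend1", "friend2"]     # all visually matched
print(refine_targets(group, []))            # quiet scene: all are targets
print(refine_targets(group, {"owner"}))     # "listen to mine": owner only
```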
In another optional implementation, step S114 comprises:
determining the target person according to the first comparison result and the bearing of the acoustic information within the detection field of view.
As shown in Fig. 2b, the arrow indicates the shooting direction of camera 210. Suppose that in step S112 both person A and person B, located within the detection field of camera 210, are identified as vision-matched persons. If person A then speaks toward the camera, the camera can further determine from the bearing of the acoustic information that person A is the target person.
In yet another optional embodiment of the invention, step S100 comprises:
S121: capturing an image of the detected person entering the detection field of view.
S122: determining the target person according to a characteristic of the detected person's image within an imaging region of the camera.
Specifically, in an optional implementation, step S122 comprises:
S1221: determining the target person according to the position of the detected person's image within the imaging region of the camera. For example, and preferably, the detected person whose image is located at a central position of the imaging region may be determined as the target person. Alternatively, the detected person whose image is located at a corner of the imaging region may be determined as the target person, so that the background can conveniently occupy the central position of the imaging region.
In another optional implementation, step S122 comprises:
S1222: determining the target person according to the area ratio of the detected person's image within the imaging region of the camera. For example, a detected person whose image occupies more than a certain area threshold of the imaging region may be determined as the target person, where the area threshold may be set to, for example, 50% or 30%.
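The area-ratio rule of S1222 can be sketched directly; the bounding-box areas and the person names below are illustrative assumptions.

```python
def targets_by_area(persons, frame_area, area_threshold=0.3):
    """Select as targets the detected persons whose image occupies more
    than area_threshold of the camera's imaging region. Each person is
    given as (name, bounding-box area in pixels); 0.3 matches the 30%
    threshold suggested in the text."""
    return [name for name, area in persons
            if area / frame_area > area_threshold]

frame = 640 * 480                        # imaging region, in pixels
people = [("subject", 120_000),          # large in frame: likely posing
          ("passerby", 15_000)]          # small in frame: background
print(targets_by_area(people, frame))    # ['subject']
```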
In yet another optional implementation, step S122 comprises:
S1223: determining the target person according to image changes of the detected person's image within the imaging region of the camera. For example, when a user poses with several simulated figures such as statues or wax figures, the camera can determine the user to be the target person from the changes in the user's image.
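For the statue scenario, S1223 reduces to asking which person-shaped region actually changes between frames. The region layout, change score, and threshold below are illustrative assumptions:

```python
import numpy as np

def moving_person(frame_t0, frame_t1, regions, min_change=5.0):
    """Among several person-shaped regions (statues, wax figures, the
    live user), pick the one whose pixels actually change between two
    frames. regions maps a name to a (row-slice, col-slice) pair; the
    change score is the mean absolute pixel difference in that region."""
    best, best_score = None, min_change
    for name, (rs, cs) in regions.items():
        score = np.abs(frame_t1[rs, cs].astype(float)
                       - frame_t0[rs, cs].astype(float)).mean()
        if score > best_score:
            best, best_score = name, score
    return best

f0 = np.zeros((10, 20), dtype=np.uint8)
f1 = f0.copy()
f1[:, 10:20] = 80                        # only the right half moved
regions = {"statue": (slice(0, 10), slice(0, 10)),
           "user": (slice(0, 10), slice(10, 20))}
print(moving_person(f0, f1, regions))    # 'user'
```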
In a further optional embodiment of the invention, step S100 comprises:
S131: capturing an image of the detected person entering the detection field of view.
S132: collecting acoustic information within the detection field of view.
S133: determining the target person according to the image and the bearing of the acoustic information.
The way this embodiment determines the target person is similar to the embodiment of Fig. 2b, except that the detected persons are directly taken as vision-matched persons by default (in effect, the visual feature comparison step is omitted from step S131), and the target person is then determined among the detected persons according to the bearing of the collected acoustic information. When only a few detected persons are in the camera's detection field, say two or three, the target person can be determined quickly.
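Picking the target from the sound bearing might look like the following: each detected person's bearing is estimated from the captured image, and the person closest in angle to the sound source is selected. The bearing values are illustrative assumptions.

```python
def target_by_sound_bearing(person_bearings, sound_bearing_deg):
    """With only a few detected persons in the field, pick as target the
    one whose bearing (estimated from the captured image) lies closest
    to the bearing of the collected sound, using wrap-around angular
    distance."""
    def angular_gap(a, b):
        d = abs(a - b) % 360
        return min(d, 360 - d)
    return min(person_bearings,
               key=lambda name: angular_gap(person_bearings[name],
                                            sound_bearing_deg))

bearings = {"A": -15.0, "B": 20.0}       # two detected persons, as in Fig. 2b
print(target_by_sound_bearing(bearings, -12.0))  # 'A': A spoke, A is target
```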
It should be understood that in the embodiments of the invention the numbering of the above steps does not imply any order of execution; the order of execution of the steps should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments.
In summary, the method of the embodiments of the invention can automatically determine the target person among the detectable persons according to visual features, sound features, image characteristics and so on, and shoots automatically after each new posture of the target person, reducing the time wasted when the camera shoots at the wrong moment and improving self-timer efficiency and the user experience.
Fig. 3 is a schematic diagram of the module structure of the camera according to an embodiment of the invention. The camera may be a personal camera or a public camera; a public camera may be one dedicated to shooting at a scenic spot, or one normally used for surveillance and temporarily used to photograph visitors. As shown in Fig. 3, the camera 300 comprises a target determination module 310 and a photographing module 320.
The target determination module 310 is configured to determine at least one target person among at least one detected person entering a detection field of view of the camera.
The photographing module 320 is configured to trigger shooting according to an action of the target person.
The action may be a facial expression, such as smiling or making a face, or a body movement, such as striking a certain posture or making a certain hand gesture.
As shown in Fig. 4, in an optional embodiment the target determination module 310 comprises a visual feature collection submodule 410 and a target determination submodule 420.
The visual feature collection submodule 410 is configured to collect at least one visual feature of the detected person entering the detection field of view.
The visual feature may be a human visual feature (such as a facial feature), a clothing feature, a specific identifier feature (such as an identity tag attached to the user), or the like.
The target determination submodule 420 is configured to compare the at least one visual feature of the detected person with at least one pre-stored visual feature of at least one candidate person to determine the target person.
The pre-stored visual features of the candidate persons may be stored locally on the camera, or on a server the camera can access, and so on.
When there are multiple detected persons and the visual features of all of them are pre-stored, multiple target persons may be determined. In this case the camera may trigger shooting according to the action of any of the target persons, i.e. the camera shoots whenever any target person's posture changes.
For the above case of multiple target persons, acoustic information can further be used to narrow down the target person. Referring to Fig. 5a, in another optional embodiment of the invention the target determination module 310 comprises a visual feature collection submodule 510, a comparison submodule 520, a sound collection submodule 530 and a target determination submodule 540.
The visual feature collection submodule 510 is configured to collect at least one visual feature of the detected person entering the detection field of view.
The comparison submodule 520 is configured to compare the at least one visual feature of the detected person with at least one pre-stored visual feature of at least one first candidate person to obtain a first comparison result.
The first comparison result may record the detected persons whose visual features match those of the first candidate persons; these may be referred to as vision-matched persons for short.
The sound collection submodule 530 is configured to collect acoustic information within the detection field of view.
The acoustic information within the detection field of view is the acoustic information within the range the camera can currently photograph, as described in detail in the method embodiments above and not repeated here.
The target determination submodule 540 is configured to determine the target person according to the first comparison result and the acoustic information within the detection field of view.
Referring to Fig. 5b, in an optional implementation the target determination submodule 540 comprises a comparison unit 541 and a target determination unit 542.
The comparison unit 541 is configured to compare the acoustic information within the detection field of view with pre-stored acoustic information of at least one second candidate person to obtain a second comparison result.
The target determination unit 542 is configured to determine the target person according to the first comparison result and the second comparison result.
Referring to Fig. 6, in yet another optional embodiment of the invention the target determination module 310 comprises an image acquisition submodule 610 and a target determination submodule 620.
The image acquisition submodule 610 is configured to capture an image of the detected person entering the detection field of view.
The target determination submodule 620 is configured to determine the target person according to a characteristic of the detected person's image within an imaging region of the camera.
The target determination submodule 620 may determine the target person according to the position of the detected person's image within the imaging region of the camera. For example, and preferably, the detected person whose image is located at a central position of the imaging region may be determined as the target person; alternatively, the detected person whose image is located at a corner of the imaging region may be determined as the target person, so that the background can conveniently occupy the central position of the imaging region.
Alternatively, the target determination submodule 620 may determine the target person according to the area ratio of the detected person's image within the imaging region. For example, a detected person whose image occupies more than a certain area threshold of the imaging region may be determined as the target person, where the area threshold may be set to, for example, 50% or 30%.
Alternatively, the target determination submodule 620 may determine the target person according to image changes of the detected person's image within the imaging region. For example, when a user poses with several simulated figures such as statues or wax figures, the camera can determine the user to be the target person from the changes in the user's image.
Referring to Fig. 7, in a further optional embodiment of the invention the target determination module 310 comprises an image acquisition submodule 710, a sound collection submodule 720 and a target determination submodule 730.
The image acquisition submodule 710 is configured to capture an image of the detected person entering the detection field of view.
The sound collection submodule 720 is configured to collect acoustic information within the detection field of view.
The target determination submodule 730 is configured to determine the target person according to the image and the bearing of the acoustic information.
Figs. 8a and 8b are schematic diagrams of application scenarios of the camera according to an embodiment of the invention.
As shown in Fig. 8a, after camera 810 determines that the woman 820 in the middle is the target person, it triggers shooting according to her actions and shoots automatically once the woman 820 strikes a first pose. Although a male passerby 830 is also walking within the detection field of camera 810 at this moment, he is not identified by camera 810 as a target person and therefore does not affect the shooting.
As shown in Fig. 8b, when the woman 820 in the middle changes her pose, camera 810 captures the posture change of the target person and automatically shoots again. Although a female passerby 840 is also walking within the detection field of camera 810 at this moment, she is not identified as a target person and likewise does not affect the shooting.
Fig. 9 is a schematic diagram of the hardware structure of the camera according to an embodiment of the invention; the embodiments do not limit the specific implementation of the camera. As shown in Fig. 9, the camera may comprise:
a processor 910, a communications interface 920, a memory 930 and a communication bus 940, wherein:
the processor 910, the communications interface 920 and the memory 930 communicate with one another via the communication bus 940;
the communications interface 920 is used to communicate with other network elements;
the processor 910 executes a program 932, and may specifically carry out the relevant steps of the method embodiment shown in Fig. 1 above.
In particular, the program 932 may comprise program code, the program code including computer operation instructions.
The processor 910 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the invention.
The memory 930 stores the program 932. The memory 930 may comprise high-speed RAM, and may also comprise non-volatile memory, for example at least one disk memory. The program 932 may specifically perform the following steps:
a target determination step: determining at least one target person among at least one detected person entering a detection field of view of the camera; and
a photographing step: triggering the camera to take a picture according to an action of the target person.
For the specific implementation of each step in the program 932, reference may be made to the corresponding steps or modules of the above embodiments, which are not repeated here. Those skilled in the art can clearly understand that, for convenience and brevity of description, the specific working processes of the devices and modules described above may be understood with reference to the corresponding processes in the foregoing method embodiments.
Those of ordinary skill in the art will appreciate that the units and method steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware, or in a combination of computer software and electronic hardware. Whether these functions are performed in hardware or in software depends on the particular application and the design constraints of the technical solution. Skilled artisans may implement the described functions in different ways for each particular application, but such implementations should not be considered to go beyond the scope of the invention.
If the functions are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the invention, in essence or in the part contributing to the prior art, may be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the invention. The storage medium includes any medium capable of storing program code, such as a USB flash drive, a portable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disc.
The above embodiments are intended only to illustrate the invention, not to limit it. Those of ordinary skill in the relevant art can make various changes and modifications without departing from the spirit and scope of the invention; all equivalent technical solutions therefore also fall within the scope of the invention, and the scope of patent protection of the invention shall be defined by the claims.

Claims (20)

1. A control method for a camera, characterized in that the method comprises:
a target determining step: determining at least one target person among at least one detectable person entering a detection field of view of the camera;
a photographing step: triggering photographing according to an action of the target person.
2. The method of claim 1, characterized in that the target determining step comprises:
a visual feature collection sub-step: collecting at least one visual feature of the detectable person entering the detection field of view;
a target determining sub-step: comparing the visual feature of the detectable person with at least one pre-stored visual feature of at least one candidate person, and determining the target person.
3. The method of claim 1, characterized in that the target determining step comprises:
a visual feature collection sub-step: collecting at least one visual feature of the detectable person entering the detection field of view;
a comparison sub-step: comparing the visual feature of the detectable person with at least one pre-stored visual feature of at least one first candidate person, to obtain a first comparison result;
a sound collection sub-step: collecting sound information within the detection field of view;
a target determining sub-step: determining the target person according to the first comparison result and the sound information within the detection field of view.
4. The method of claim 3, characterized in that the target determining sub-step comprises:
comparing the sound information within the detection field of view with pre-stored sound information of at least one second candidate person, to obtain a second comparison result;
determining the target person according to the first comparison result and the second comparison result.
5. The method of claim 3, characterized in that the target determining sub-step comprises:
determining the target person according to the first comparison result and a direction of the sound information within the detection field of view.
6. The method of any one of claims 2 to 5, characterized in that the visual feature comprises at least one of: a human body visual feature, a clothing feature, and a specific identifier feature.
7. The method of claim 1, characterized in that the target determining step comprises:
an image collection sub-step: collecting an image of the detectable person entering the detection field of view;
a target determining sub-step: determining the target person according to a characteristic of the image of the detectable person in an imaging region of the camera.
8. The method of claim 7, characterized in that the target determining sub-step comprises:
determining the target person according to a position of the image of the detectable person in the imaging region of the camera.
9. The method of claim 8, characterized in that, in the target determining sub-step:
the detectable person corresponding to an image located at a central position of the imaging region of the camera is determined as the target person.
10. The method of claim 7, characterized in that the target determining sub-step comprises:
determining the target person according to an area proportion of the image of the detectable person in the imaging region of the camera.
11. The method of claim 7, characterized in that the target determining sub-step comprises:
determining the target person according to an image change of the image of the detectable person in the imaging region of the camera.
12. The method of claim 1, characterized in that the target determining step comprises:
an image collection sub-step: collecting an image of the detectable person entering the detection field of view;
a sound collection sub-step: collecting sound information within the detection field of view;
a target determining sub-step: determining the target person according to the image and a direction of the sound information.
13. The method of any one of claims 1 to 12, characterized in that the action comprises a facial expression action and/or a limb action.
14. The method of any one of claims 1 to 13, characterized in that the photographing comprises obtaining a still image in a shooting process.
15. A camera, characterized in that the camera comprises:
a target determination module, configured to determine at least one target person among at least one detectable person entering a detection field of view of the camera;
a photographing module, configured to trigger photographing according to an action of the target person.
16. The camera of claim 15, characterized in that the target determination module comprises:
a visual feature collection sub-module, configured to collect at least one visual feature of the detectable person entering the detection field of view;
a target determining sub-module, configured to compare the visual feature of the detectable person with at least one pre-stored visual feature of at least one candidate person and to determine the target person.
17. The camera of claim 15, characterized in that the target determination module comprises:
a visual feature collection sub-module, configured to collect at least one visual feature of the detectable person entering the detection field of view;
a comparison sub-module, configured to compare the visual feature of the detectable person with at least one pre-stored visual feature of at least one first candidate person, to obtain a first comparison result;
a sound collection sub-module, configured to collect sound information within the detection field of view;
a target determining sub-module, configured to determine the target person according to the first comparison result and the sound information within the detection field of view.
18. The camera of claim 17, characterized in that the target determining sub-module comprises:
a comparison unit, configured to compare the sound information within the detection field of view with pre-stored sound information of at least one second candidate person, to obtain a second comparison result;
a target determining unit, configured to determine the target person according to the first comparison result and the second comparison result.
19. The camera of claim 15, characterized in that the target determination module comprises:
an image collection sub-module, configured to collect an image of the detectable person entering the detection field of view;
a target determining sub-module, configured to determine the target person according to a characteristic of the image of the detectable person in an imaging region of the camera.
20. The camera of claim 15, characterized in that the target determination module comprises:
an image collection sub-module, configured to collect an image of the detectable person entering the detection field of view;
a sound collection sub-module, configured to collect sound information within the detection field of view;
a target determining sub-module, configured to determine the target person according to the image and a direction of the sound information.
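For illustration, the target determination of claims 1 to 5 — comparing visual features against pre-stored candidates, optionally refining the result with the direction of detected sound, and triggering a photo only on the target's action — could look roughly like the sketch below. The candidate names, feature vectors, distance threshold, and the 15-degree sound-direction fusion rule are all assumptions invented for this example, not part of the patent.

```python
import math

# Hypothetical sketch: visual-feature matching (first comparison result),
# sound-direction fusion, and action-triggered photographing.

PRESTORED_CANDIDATES = {          # candidate -> stored visual-feature vector
    "alice": [0.9, 0.1, 0.3],
    "bob":   [0.2, 0.8, 0.5],
}
TRIGGER_ACTIONS = {"smile", "wave"}   # expression and/or limb actions

def feature_distance(a, b):
    """Euclidean distance between two visual-feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def determine_target(detected_features, threshold=0.5):
    """First comparison result: the best-matching pre-stored candidate,
    or None if no candidate is close enough."""
    best_name, best_dist = None, threshold
    for name, stored in PRESTORED_CANDIDATES.items():
        d = feature_distance(detected_features, stored)
        if d < best_dist:
            best_name, best_dist = name, d
    return best_name

def refine_with_sound(first_result, sound_direction_deg, person_directions):
    """Fuse the first comparison result with the sound direction: keep the
    visual match only if that person lies within 15 degrees of the sound."""
    if first_result is None:
        return None
    person_dir = person_directions.get(first_result)
    if person_dir is not None and abs(person_dir - sound_direction_deg) <= 15:
        return first_result
    return None

def maybe_take_photo(target, observed_action):
    """Trigger photographing only when the target performs a trigger action."""
    return target is not None and observed_action in TRIGGER_ACTIONS

target = determine_target([0.88, 0.12, 0.31])            # close to "alice"
target = refine_with_sound(target, 10.0, {"alice": 5.0})
print(target)                                            # -> alice
print(maybe_take_photo(target, "smile"))                 # -> True
print(maybe_take_photo(target, "walk"))                  # -> False
```

In a real camera the feature vectors would come from a face or clothing descriptor and the sound direction from a microphone array; the sketch only shows how the two comparison results gate the shutter.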
CN201310532546.7A 2013-10-31 2013-10-31 The control method and camera of camera Active CN103607537B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310532546.7A CN103607537B (en) 2013-10-31 2013-10-31 The control method and camera of camera


Publications (2)

Publication Number Publication Date
CN103607537A true CN103607537A (en) 2014-02-26
CN103607537B CN103607537B (en) 2017-10-27

Family

ID=50125736

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310532546.7A Active CN103607537B (en) 2013-10-31 2013-10-31 The control method and camera of camera

Country Status (1)

Country Link
CN (1) CN103607537B (en)

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104902185A (en) * 2015-05-29 2015-09-09 努比亚技术有限公司 Shooting method and shooting device
CN104917961A (en) * 2015-05-19 2015-09-16 广东欧珀移动通信有限公司 Camera rotation control method and terminal
CN104917967A (en) * 2015-05-30 2015-09-16 深圳市金立通信设备有限公司 Photographing method and terminal
CN105704389A (en) * 2016-04-12 2016-06-22 上海斐讯数据通信技术有限公司 Intelligent photo taking method and device
CN105847668A (en) * 2015-02-03 2016-08-10 株式会社Macron A gesture recognition driving method for selfie camera devices
CN105872338A (en) * 2016-05-31 2016-08-17 宇龙计算机通信科技(深圳)有限公司 Photographing method and device
CN107682632A (en) * 2017-10-16 2018-02-09 河南腾龙信息工程有限公司 A kind of method and multifunction camera of camera automatic camera
CN108093167A (en) * 2016-11-22 2018-05-29 谷歌有限责任公司 Use the operable camera of natural language instructions
CN108174095A (en) * 2017-12-28 2018-06-15 努比亚技术有限公司 Photographic method, mobile terminal and computer-readable medium based on smiling face's identification
CN109729268A (en) * 2018-12-26 2019-05-07 武汉市澜创信息科技有限公司 A kind of face image pickup method, device, equipment and medium
CN109788193A (en) * 2018-12-26 2019-05-21 武汉市澜创信息科技有限公司 A kind of camera unit control method, device, equipment and medium
CN111327814A (en) * 2018-12-17 2020-06-23 华为技术有限公司 Image processing method and electronic equipment
CN112712817A (en) * 2020-12-24 2021-04-27 惠州Tcl移动通信有限公司 Sound filtering method, mobile device and computer readable storage medium
CN111031249B (en) * 2019-12-26 2021-07-13 维沃移动通信有限公司 Auxiliary focusing method and electronic equipment

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101911670A (en) * 2008-01-07 2010-12-08 摩托罗拉公司 Digital camera focusing using stored object recognition
CN102292689A (en) * 2009-01-21 2011-12-21 汤姆森特许公司 Method to control media with face detection and hot spot motion
CN103024275A (en) * 2012-12-17 2013-04-03 东莞宇龙通信科技有限公司 Automatic shooting method and terminal
CN103108127A (en) * 2013-02-17 2013-05-15 华为终端有限公司 Method for shooting pictures through portable device and portable device


Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105847668A (en) * 2015-02-03 2016-08-10 株式会社Macron A gesture recognition driving method for selfie camera devices
CN104917961A (en) * 2015-05-19 2015-09-16 广东欧珀移动通信有限公司 Camera rotation control method and terminal
CN104902185A (en) * 2015-05-29 2015-09-09 努比亚技术有限公司 Shooting method and shooting device
CN104917967A (en) * 2015-05-30 2015-09-16 深圳市金立通信设备有限公司 Photographing method and terminal
CN105704389A (en) * 2016-04-12 2016-06-22 上海斐讯数据通信技术有限公司 Intelligent photo taking method and device
CN105872338A (en) * 2016-05-31 2016-08-17 宇龙计算机通信科技(深圳)有限公司 Photographing method and device
CN108093167B (en) * 2016-11-22 2020-07-17 谷歌有限责任公司 Apparatus, method, system, and computer-readable storage medium for capturing images
CN108093167A (en) * 2016-11-22 2018-05-29 谷歌有限责任公司 Use the operable camera of natural language instructions
CN107682632A (en) * 2017-10-16 2018-02-09 河南腾龙信息工程有限公司 A kind of method and multifunction camera of camera automatic camera
CN108174095A (en) * 2017-12-28 2018-06-15 努比亚技术有限公司 Photographic method, mobile terminal and computer-readable medium based on smiling face's identification
CN111327814A (en) * 2018-12-17 2020-06-23 华为技术有限公司 Image processing method and electronic equipment
CN109729268A (en) * 2018-12-26 2019-05-07 武汉市澜创信息科技有限公司 A kind of face image pickup method, device, equipment and medium
CN109788193A (en) * 2018-12-26 2019-05-21 武汉市澜创信息科技有限公司 A kind of camera unit control method, device, equipment and medium
CN109729268B (en) * 2018-12-26 2021-03-02 武汉市澜创信息科技有限公司 Face shooting method, device, equipment and medium
CN111031249B (en) * 2019-12-26 2021-07-13 维沃移动通信有限公司 Auxiliary focusing method and electronic equipment
CN112712817A (en) * 2020-12-24 2021-04-27 惠州Tcl移动通信有限公司 Sound filtering method, mobile device and computer readable storage medium
CN112712817B (en) * 2020-12-24 2024-04-09 惠州Tcl移动通信有限公司 Sound filtering method, mobile device and computer readable storage medium

Also Published As

Publication number Publication date
CN103607537B (en) 2017-10-27

Similar Documents

Publication Publication Date Title
CN103607537A (en) Control method of camera and the camera
CN108629791B (en) Pedestrian tracking method and device and cross-camera pedestrian tracking method and device
US8754934B2 (en) Dual-camera face recognition device and method
CN105095873B (en) Photo be shared method, apparatus
CN101834986B (en) Imaging apparatus, mobile body detecting method, mobile body detecting circuit and program
JP5990951B2 (en) Imaging apparatus, imaging apparatus control method, imaging apparatus control program, and computer-readable recording medium recording the program
JP7026225B2 (en) Biological detection methods, devices and systems, electronic devices and storage media
US20120300092A1 (en) Automatically optimizing capture of images of one or more subjects
CN106844492B (en) A kind of method of recognition of face, client, server and system
JP7261296B2 (en) Target object recognition system, method, apparatus, electronic device, and recording medium
CN103905727B (en) Object area tracking apparatus, control method, and program of the same
CN105163034B (en) A kind of photographic method and mobile terminal
CN106462240A (en) System and method for providing haptic feedback to assist in capturing images
CN106331504A (en) Shooting method and device
CN104333748A (en) Method, device and terminal for obtaining image main object
CN101855633A (en) Video analysis apparatus and method for calculating inter-person evaluation value using video analysis
CN108875476B (en) Automatic near-infrared face registration and recognition method, device and system and storage medium
WO2011153270A2 (en) Image retrieval
CN105654033A (en) Face image verification method and device
CN107836109A (en) The method that electronic equipment autofocuses on area-of-interest
CN105740379A (en) Photo classification management method and apparatus
CN103501410B (en) The based reminding method of shooting, the generation method of device and detection pattern, device
WO2015102711A2 (en) A method and system of enforcing privacy policies for mobile sensory devices
Smowton et al. Zero-effort payments: Design, deployment, and lessons
CN112347834A (en) Remote nursing method and device based on personnel category attributes and readable storage medium

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant