CN105100749A - Image pick-up method and device as well as terminal

Publication number: CN105100749A
Authority: CN (China)
Prior art keywords: shooting, coverage, module, time section, task
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.): Pending
Application number: CN201510551707.6A
Other languages: Chinese (zh)
Inventors: 刘铁俊, 李政, 程亮, 张鹏飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.): Beijing Xiaomi Technology Co Ltd; Xiaomi Inc
Original Assignee: Xiaomi Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by Xiaomi Inc; priority to CN201510551707.6A; publication of CN105100749A; legal status: Pending.

Landscapes

  • Studio Devices (AREA)

Abstract

The invention relates to an image capture method, an image capture device, and a terminal. The method comprises: reading the shooting time period and the shooting range of the next shooting task to be performed from a shooting task list, wherein the list records the shooting time periods of a plurality of shooting tasks and the shooting range corresponding to each task; and, when the start time of the shooting time period arrives, shooting the corresponding shooting range. With this method, device, and terminal, the image capture device can pre-store a shooting task list containing a plurality of shooting tasks, each recorded with its own shooting time period and shooting range, and can shoot the set range according to each task's time period. The device can therefore shoot different ranges in different time periods, which raises its utilization rate, serves multiple purposes, is convenient for the user, and improves the user experience.

Description

Image capture method, device and terminal
Technical field
The present disclosure relates to the field of communication networks, and in particular to an image capture method, an image capture device, and a terminal.
Background
With the development of technology and the rising awareness of safety, more and more modern households are equipped with cameras that monitor indoor and outdoor environments.
In the related art, a camera usually monitors the environment within its shooting range at a set time and uploads the recorded information to a remote control center. Both the shooting manner and the function of this monitoring scheme are rather limited.
Summary of the invention
To overcome the problems in the related art, the present disclosure provides an image capture method, an image capture device, and a terminal.
According to a first aspect of the embodiments of the present disclosure, an image capture method is provided, comprising:
reading the shooting time period and the shooting range of the next task to be captured from a shooting task list, the shooting task list recording the shooting time periods of a plurality of shooting tasks and the shooting range corresponding to each task;
when the start time of the shooting time period arrives, shooting the shooting range.
Optionally, shooting the shooting range comprises:
shooting the shooting range at the shooting angle corresponding to that range, a plurality of different shooting ranges corresponding one-to-one to a plurality of different shooting angles.
Optionally, before reading the shooting time period and shooting range of the next task to be captured from the shooting task list, the method further comprises:
receiving a first user instruction, the first user instruction carrying a shooting time period and a corresponding shooting range;
extracting the shooting time period and the shooting range from the first user instruction;
storing the shooting time period and the shooting range in the shooting task list as one task to be captured.
Optionally, before reading the shooting time period and shooting range of the next task to be captured from the shooting task list, the method further comprises:
receiving a second user instruction, the second user instruction carrying a shooting time period;
extracting the shooting time period from the second user instruction;
acquiring the current shooting range, the current shooting range being the shooting range after adjustment by the user;
storing the shooting time period and the current shooting range in the shooting task list as one task to be captured.
Optionally, after shooting the shooting range, the method further comprises:
analyzing the captured data to determine whether a potential safety hazard exists;
if the potential safety hazard exists, sending the analysis result of the potential safety hazard to a smart device; or,
if the potential safety hazard exists, outputting alarm information.
Optionally, analyzing the captured data to determine whether a potential safety hazard exists comprises:
identifying a face image within the shooting range;
matching the face image against the pre-stored face image of a legitimate user to obtain a similarity;
if the similarity is lower than a set threshold, determining that a potential safety hazard exists.
Optionally, analyzing the captured data to determine whether a potential safety hazard exists comprises:
collecting activity information of a child or pet within the shooting range;
determining, according to the activity information of the child or pet and the surrounding environment, whether the child or pet is exposed to a potential safety hazard.
Optionally, the smart device comprises a smart terminal, a smart home appliance, a wearable device, or a community security center.
Optionally, after shooting the shooting range, the method further comprises:
generating, at preset time intervals, a security report based on the captured data and the analysis results;
sending the security report to a smart terminal or a wearable device.
According to a second aspect of the embodiments of the present disclosure, an image capture device is provided, comprising a reading module and a shooting module;
the reading module is configured to read the shooting time period and the shooting range of the next task to be captured from a shooting task list, the shooting task list recording the shooting time periods of a plurality of shooting tasks and the shooting range corresponding to each task;
the shooting module is configured to shoot the shooting range when the start time of the shooting time period read by the reading module arrives.
Optionally, the shooting module comprises a shooting submodule;
the shooting submodule is configured to shoot the shooting range at the shooting angle corresponding to the shooting range read by the reading module, a plurality of different shooting ranges corresponding one-to-one to a plurality of different shooting angles.
Optionally, the device further comprises a first instruction receiving module, a first extraction module, and a first storage module;
the first instruction receiving module is configured to receive a first user instruction, the first user instruction carrying a shooting time period and a corresponding shooting range;
the first extraction module is configured to extract the shooting time period and the shooting range from the first user instruction received by the first instruction receiving module;
the first storage module is configured to store the shooting time period and the shooting range extracted by the first extraction module in the shooting task list as one task to be captured.
Optionally, the device further comprises a second instruction receiving module, a second extraction module, a shooting range acquisition module, and a second storage module;
the second instruction receiving module is configured to receive a second user instruction, the second user instruction carrying a shooting time period;
the second extraction module is configured to extract the shooting time period from the second user instruction received by the second instruction receiving module;
the shooting range acquisition module is configured to acquire the current shooting range, the current shooting range being the shooting range after adjustment by the user;
the second storage module is configured to store the shooting time period extracted by the second extraction module and the shooting range acquired by the shooting range acquisition module in the shooting task list as one task to be captured.
Optionally, the device further comprises an analysis module, an analysis result sending module, and an alarm module;
the analysis module is configured to analyze the captured data to determine whether a potential safety hazard exists;
the analysis result sending module is configured to, if the analysis module determines that the potential safety hazard exists, send the analysis result of the potential safety hazard to a smart device; or,
the alarm module is configured to, if the analysis module determines that the potential safety hazard exists, output alarm information.
Optionally, the analysis module comprises a recognition submodule, a matching submodule, and a first hazard determination submodule;
the recognition submodule is configured to identify a face image within the shooting range;
the matching submodule is configured to match the face image identified by the recognition submodule against the pre-stored face image of a legitimate user to obtain a similarity;
the first hazard determination submodule is configured to determine that a potential safety hazard exists if the similarity obtained by the matching submodule is lower than a set threshold.
Optionally, the analysis module comprises a collection submodule and a second hazard determination submodule;
the collection submodule is configured to collect activity information of a child or pet within the shooting range;
the second hazard determination submodule is configured to determine, according to the activity information of the child or pet collected by the collection submodule and the surrounding environment, whether the child or pet is exposed to a potential safety hazard.
Optionally, the smart device comprises a smart terminal, a smart home appliance, a wearable device, or a community security center.
Optionally, the device further comprises a report generation module and a report sending module;
the report generation module is configured to generate, at preset time intervals, a security report based on the captured data and the analysis results;
the report sending module is configured to send the security report generated by the report generation module to a smart terminal or a wearable device.
According to a third aspect of the embodiments of the present disclosure, a terminal is provided, comprising: a processor; and a memory configured to store instructions executable by the processor; wherein the processor is configured to:
read the shooting time period and the shooting range of the next task to be captured from a shooting task list, the shooting task list recording the shooting time periods of a plurality of shooting tasks and the shooting range corresponding to each task; and
shoot the shooting range when the start time of the shooting time period arrives.
The technical solutions provided by the embodiments of the present disclosure may include the following beneficial effects:
In the present disclosure, the image capture device can pre-store a shooting task list containing a plurality of shooting tasks, the list recording the shooting time period and shooting range of each task, and the device can shoot the set shooting range according to each task's time period. In this way, the device can shoot different ranges in different time periods, which raises its utilization rate, serves multiple purposes, is convenient for the user, and improves the user experience.
The shooting range stored in the image capture device may also be expressed as a shooting angle; determining the shooting range from a shooting angle allows the range to meet the user's requirements more precisely.
In the present disclosure, the image capture device can receive a user instruction carrying a shooting time period and a shooting range, and store them in the shooting task list as a task to be captured. This way of configuring the task list is simple and easy to implement.
The image capture device can also receive a user instruction carrying only a shooting time period, and acquire the shooting range that the user adjusts in real time, storing the two together in the shooting task list as a task to be captured. Because the user adjusts the shooting range in real time in this mode, the range can better match the user's needs.
The image capture device can further analyze the captured data and, when a potential safety hazard exists, notify a smart device of the analysis result or raise an alarm, so that the user is reminded to handle the hazard in time and unnecessary loss is avoided.
The image capture device can determine whether a potential safety hazard exists by checking whether a captured face image belongs to a legitimate user; this effectively detects intrusion by strangers and provides security monitoring.
The image capture device can also determine whether a potential safety hazard exists by analyzing the activity of a child or pet and its surrounding environment; this effectively monitors the safety of children and pets.
Regardless of whether the analysis result indicates a potential safety hazard, the image capture device can generate a security report and send it to the user's smart terminal or wearable device. When the user is on a business trip, traveling, or otherwise away from home for a long time, the smart terminal or wearable device is usually carried along, so the user can learn the security situation at home in time: when no hazard exists at home, the user can attend to work or other matters with peace of mind; when a hazard exists, the user can handle it promptly and avoid unnecessary loss.
It should be understood that the above general description and the following detailed description are merely exemplary and explanatory, and do not limit the present disclosure.
Brief description of the drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the present disclosure.
Fig. 1 is a flow chart of an image capture method according to an exemplary embodiment.
Fig. 2 is a flow chart of another image capture method according to an exemplary embodiment.
Fig. 3 is a schematic diagram of a shooting application scenario according to an exemplary embodiment.
Fig. 4 is a block diagram of an image capture device according to an exemplary embodiment.
Fig. 5 is a block diagram of another image capture device according to an exemplary embodiment.
Fig. 6 is a block diagram of another image capture device according to an exemplary embodiment.
Fig. 7 is a block diagram of another image capture device according to an exemplary embodiment.
Fig. 8 is a block diagram of another image capture device according to an exemplary embodiment.
Fig. 9 is a block diagram of another image capture device according to an exemplary embodiment.
Fig. 10 is a block diagram of another image capture device according to an exemplary embodiment.
Fig. 11 is a block diagram of another image capture device according to an exemplary embodiment.
Fig. 12 is a schematic structural diagram of a device for image capture according to an exemplary embodiment.
Detailed description
Exemplary embodiments will now be described in detail, examples of which are illustrated in the accompanying drawings. Where the following description refers to the drawings, the same numerals in different drawings denote the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present disclosure; rather, they are merely examples of apparatuses and methods consistent with some aspects of the present disclosure as detailed in the appended claims.
The terminology used in the present disclosure is for the purpose of describing particular embodiments only and is not intended to limit the disclosure. The singular forms "a", "said", and "the" used in the present disclosure and the appended claims are also intended to include the plural forms, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used in the present disclosure to describe various pieces of information, the information should not be limited by these terms; the terms are only used to distinguish information of the same type from one another. For example, without departing from the scope of the present disclosure, first information may also be called second information, and similarly, second information may be called first information. Depending on the context, the word "if" as used herein may be interpreted as "when", "while", or "in response to determining".
As shown in Fig. 1, which is a flow chart of an image capture method according to an exemplary embodiment, the method may be applied in an image capture device and comprises the following steps:
Step 101: read the shooting time period and the shooting range of the next task to be captured from a shooting task list.
In the embodiments of the present disclosure, the shooting task list is a list containing a plurality of shooting tasks; it can be stored in the image capture device and records the shooting time period of each task together with its corresponding shooting range.
Step 102: when the start time of the shooting time period arrives, shoot the shooting range.
In the above embodiment, the image capture device can pre-store a shooting task list containing a plurality of shooting tasks, the list recording the shooting time period and shooting range of each task, and the device shoots the set range according to each task's time period. In this way, the device can shoot different ranges in different time periods, which raises its utilization rate, serves multiple purposes, is convenient for the user, and improves the user experience.
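The scheduling in steps 101-102 can be sketched in a few lines. Everything below is illustrative rather than from the patent: the task-list layout, the field names, and the two example tasks are all assumptions.

```python
import datetime

# Hypothetical shooting-task list: each task pairs a shooting time
# period (start, end) with a shooting-range identifier.
SHOOTING_TASKS = [
    {"start": datetime.time(9, 0),  "end": datetime.time(17, 0), "range": "front door"},
    {"start": datetime.time(17, 0), "end": datetime.time(21, 0), "range": "study"},
]

def next_task(now, tasks):
    """Return the task currently in progress, or else the pending task
    whose start time is nearest after `now`; None if neither exists."""
    # A task already running: its period contains `now`.
    for task in tasks:
        if task["start"] <= now < task["end"]:
            return task
    # Otherwise the upcoming task with the smallest start time after `now`.
    upcoming = [t for t in tasks if t["start"] > now]
    return min(upcoming, key=lambda t: t["start"]) if upcoming else None

task = next_task(datetime.time(8, 30), SHOOTING_TASKS)
print(task["range"])  # the 9:00 front-door task is next
```

In a real device this lookup would run in a loop or timer callback, triggering the camera when the start time arrives.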
As shown in Fig. 2, which is a flow chart of another image capture method according to an exemplary embodiment, the method may be applied in an image capture device and comprises the following steps:
Step 201: receive a first user instruction carrying a shooting time period and a corresponding shooting range.
In the embodiments of the present disclosure, the user can configure the shooting task list in advance; for example, the user sends a user instruction to the image capture device carrying the shooting time period and shooting range of a task.
Step 202: extract the shooting time period and the shooting range from the first user instruction.
Here, shooting time periods correspond one-to-one to shooting ranges, and different shooting ranges correspond one-to-one to different shooting angles. For example, in one shooting task the time period is 9:00 to 17:00 and the range is aimed at the front door, the task serving to detect whether a stranger intrudes into the home; in another task the time period is 17:00 to 21:00 and the range is aimed at the study, the task serving to monitor a child's study.
Step 203: store the shooting time period and the shooting range in the shooting task list as one task to be captured.
In this way, the configuration of shooting tasks and of the shooting task list is achieved.
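Steps 201-203 amount to parsing a (time period, range) pair out of the instruction and appending it to the list. A minimal sketch, with hypothetical field names and a small validity check the patent does not mention:

```python
def add_task(task_list, start, end, shooting_range):
    """Store one (shooting time period, shooting range) pair as a task
    to be captured. `start`/`end` are "HH:MM" strings here for brevity."""
    if end <= start:
        raise ValueError("shooting time period must end after it starts")
    task_list.append({"start": start, "end": end, "range": shooting_range})
    # Keeping the list sorted by start time makes "next task" lookups trivial.
    task_list.sort(key=lambda t: t["start"])

tasks = []
add_task(tasks, "17:00", "21:00", "study")       # monitor the child's study
add_task(tasks, "09:00", "17:00", "front door")  # watch for strangers
print([t["range"] for t in tasks])  # ['front door', 'study']
```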
In another disclosed mode, a shooting task and the shooting task list can also be configured as follows:
receive a second user instruction carrying a shooting time period; extract the shooting time period from the second user instruction; acquire the current shooting range, which is the shooting range after adjustment by the user; and store the extracted shooting time period and the acquired current shooting range in the shooting task list as one task to be captured.
In this implementation, the shooting time period is set by the user and delivered to the image capture device in the second user instruction, while the shooting range is determined from the current range the user has adjusted, acquired by the device in real time. For example, the user enters the start and end times of a shooting task, the device stores this time period, the user then adjusts the shooting angle, and the device captures the adjusted shooting range and stores it in correspondence with the time period entered by the user.
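In this second mode the instruction carries only the time period; the range comes from wherever the user has just aimed the camera. A toy sketch (the Camera class and all of its fields are invented for illustration):

```python
class Camera:
    """Stand-in for the image capture device's current, user-adjusted aim."""

    def __init__(self):
        self.current_range = None

    def adjust(self, shooting_range):
        self.current_range = shooting_range

def store_task_from_current_range(task_list, camera, start, end):
    # The range is not carried in the second user instruction; it is read
    # back from the camera after the user has finished adjusting it.
    task_list.append({"start": start, "end": end, "range": camera.current_range})

cam = Camera()
cam.adjust("balcony")                       # user pans the camera by hand
schedule = []
store_task_from_current_range(schedule, cam, "21:00", "23:00")
print(schedule[0]["range"])  # balcony
```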
Step 204: read the shooting time period and the shooting range of the next task to be captured from the shooting task list, the list recording the shooting time periods of a plurality of shooting tasks and the shooting range corresponding to each task.
After the shooting task list has been configured through steps 201-203, the image capture device can read the shooting time period and shooting range of the next task to be captured, i.e. the shooting task whose start time is nearest to the current time.
Step 205: when the start time of the shooting time period arrives, shoot the shooting range at the shooting angle corresponding to that range.
In the embodiments of the present disclosure, when the start time of a task to be captured arrives, the image capture device adjusts the shooting angle of the camera so that it aims at the shooting range. The camera in the present disclosure has a motor mounted beneath it that drives the camera to rotate toward different shooting ranges; the camera can rotate in the horizontal direction, can also rotate to a certain degree in other directions, and may use a wide-angle lens. In addition, a wireless communication module can be provided in the image capture device for connecting wirelessly to a router, so that the device can communicate with the user's smart terminal and with smart home appliances that also have wireless communication modules.
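The one-to-one correspondence between shooting ranges and shooting angles can be modelled as a lookup table driving the motor. The angles below are invented; the patent only states that ranges and angles correspond one-to-one:

```python
# Hypothetical horizontal motor angles, one per shooting range.
RANGE_TO_ANGLE = {"front door": 0, "study": 120, "balcony": 240}

def rotation_needed(current_angle, shooting_range):
    """Signed rotation (degrees) the motor must apply so the camera
    aims at the given shooting range."""
    return RANGE_TO_ANGLE[shooting_range] - current_angle

print(rotation_needed(0, "study"))         # 120
print(rotation_needed(240, "front door"))  # -240
```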
Step 206: analyze the captured data to determine whether a potential safety hazard exists.
In one disclosed mode, whether a potential safety hazard exists can be determined as follows:
identify a face image within the shooting range; match the face image against the pre-stored face image of a legitimate user to obtain a similarity; if the similarity is lower than a set threshold, determine that a potential safety hazard exists.
In this mode, the face images of legitimate users can be stored in advance. If the shooting range is the interior of a house, the legitimate users can be the members of the household, or relatives and friends approved by them. As long as the similarity is judged to be above the set threshold, the person appearing in the shooting range is considered a legitimate, trusted user and no potential safety hazard exists; otherwise, the person is considered a stranger and a potential safety hazard is determined to exist.
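The face-matching rule reduces to a threshold test on the best similarity score against the stored legitimate-user faces. The scores and the 0.8 threshold below are invented; the patent only specifies "lower than a set threshold":

```python
def stranger_detected(similarities, threshold=0.8):
    """True (potential safety hazard) when the captured face matches no
    pre-stored legitimate-user face above the set threshold."""
    best = max(similarities, default=0.0)  # best match across stored faces
    return best < threshold

print(stranger_detected([0.31, 0.92]))  # False: matches a family member
print(stranger_detected([0.31, 0.45]))  # True: likely a stranger
```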
In another disclosed mode, whether a potential safety hazard exists can also be determined as follows:
collect activity information of a child or pet within the shooting range; and determine, according to the activity information and the surrounding environment, whether the child or pet is exposed to a potential safety hazard.
In this mode, household equipment that a child or pet should not touch can be treated as dangerous goods, and image information of these dangerous goods is stored in advance. Whether the subject captured in the shooting range is a child or a pet is determined from its height and body information; its activity information, such as touching actions, is determined; and its surrounding environment, i.e. whether dangerous goods are nearby, is determined. If the child or pet is within a set range of a dangerous item, or makes a touching action toward one, a potential safety hazard is determined to exist.
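The child/pet check amounts to a proximity test against the pre-registered dangerous goods. A sketch with invented 2-D positions and an invented 1-metre set range:

```python
import math

def hazard_near_dangerous_goods(subject_pos, dangerous_items, set_range=1.0):
    """True when the child or pet is within the set range of any
    pre-registered dangerous item (positions in metres)."""
    return any(math.dist(subject_pos, item) < set_range
               for item in dangerous_items)

stove, scissors = (0.0, 0.0), (3.0, 4.0)
print(hazard_near_dangerous_goods((0.5, 0.0), [stove, scissors]))  # True
print(hazard_near_dangerous_goods((6.0, 6.0), [stove, scissors]))  # False
```

A fuller version would also flag touching actions toward an item, as the text describes, but that requires pose estimation beyond this sketch.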
Step 207: if the potential safety hazard exists, send the analysis result of the potential safety hazard to a smart device; or, if the potential safety hazard exists, output alarm information.
In this step, when a potential safety hazard is determined to exist, the analysis result can be sent to the user's smart device: for example, the message "stranger photographed", or the captured face image of the stranger, is sent to the user's mobile phone. Alarm information can also be output directly through an alarm, or a warning can be sent to the security center of a smart residential community, in order to warn off the stranger or to seek help from the security center.
In the embodiments of the present disclosure, a security report can also be generated at preset time intervals based on the captured data and the analysis results, and sent to a smart terminal or wearable device.
In this mode, the security report is generated regardless of whether the analysis result indicates a potential safety hazard, and is sent to the user's smart terminal or wearable device. When the user is on a business trip, traveling, or otherwise away from home for a long time, the smart terminal or wearable device is usually carried along, so the user can learn the security situation at home in time: when no hazard exists at home, the user can attend to work or other matters with peace of mind; when a hazard exists, the user can handle it promptly and avoid unnecessary loss.
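The periodic report is assembled whether or not any hazard was found during the interval. A minimal sketch with an invented event-record shape:

```python
def build_security_report(period_label, events):
    """Summarize one preset interval of captured data and analysis results.
    `events`: hypothetical per-analysis records with a boolean `hazard` flag."""
    hazard_count = sum(1 for e in events if e["hazard"])
    return {
        "period": period_label,
        "events_analyzed": len(events),
        "hazards": hazard_count,
        # A report is produced either way, matching the text above.
        "status": "hazard detected" if hazard_count else "all clear",
    }

report = build_security_report("2015-09-01", [{"hazard": False}, {"hazard": True}])
print(report["status"])  # hazard detected
```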
The smart device in the embodiments of the present disclosure may comprise a smart terminal, a wearable device, a smart home appliance, or the security center of a smart residential community.
In addition, the image capture device can send captured video to a device designated by the user for storage. For example, because video occupies considerable space, the device can, to avoid being unable to keep all captured video, send it at set intervals to other equipment such as a smart TV for storage, and can periodically delete video judged to contain no potential safety hazard.
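The storage policy just described (keep hazard clips for upload, periodically delete hazard-free ones) can be sketched as a simple prune step; the clip records and sizes are invented:

```python
def prune_videos(videos):
    """Keep clips flagged as containing a potential safety hazard for later
    upload; drop the rest and report the space reclaimed (MB)."""
    kept = [v for v in videos if v["hazard"]]
    freed_mb = sum(v["size_mb"] for v in videos if not v["hazard"])
    return kept, freed_mb

clips = [{"name": "door_0900", "hazard": True,  "size_mb": 120},
         {"name": "study_1700", "hazard": False, "size_mb": 85}]
kept, freed = prune_videos(clips)
print(len(kept), freed)  # 1 85
```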
As shown in Fig. 3, which is a schematic diagram of a shooting application scenario according to an exemplary embodiment, the scenario includes a smart camera serving as the image capture device and a smartphone serving as the smart device. A shooting task list is stored in the smart camera, recording the shooting time periods of a plurality of shooting tasks and the corresponding shooting ranges. The smart camera reads the shooting time period and shooting range of the next task to be captured from the stored list; the start time of this time period is 9:00 a.m., and the shooting range is the area facing the front door. When 9:00 arrives, the device adjusts the camera's orientation to face the front door and starts shooting that area.
In the application scenario shown in Fig. 3, the detailed shooting process is as described above with reference to Fig. 1 and Fig. 2, and is not repeated here.
Corresponding to the foregoing embodiments of the image capture method, the present disclosure also provides embodiments of an image capture device and of the terminal in which it is applied.
As shown in Fig. 4, which is a block diagram of an image capture device according to an exemplary embodiment, the device may comprise a reading module 410 and a shooting module 420.
The reading module 410 is configured to read the shooting time period and the shooting range of the next task to be captured from a shooting task list, the list recording the shooting time periods of a plurality of shooting tasks and the shooting range corresponding to each task.
The shooting module 420 is configured to shoot the shooting range when the start time of the shooting time period read by the reading module arrives.
In the above embodiment, the image capture device can pre-store a shooting task list containing a plurality of shooting tasks, the list recording the shooting time period and shooting range of each task, and the device shoots the set range according to each task's time period. In this way, the device can shoot different ranges in different time periods, which raises its utilization rate, serves multiple purposes, is convenient for the user, and improves the user experience.
As shown in Fig. 5, which is a block diagram of another image capture device according to an exemplary embodiment, this embodiment builds on the embodiment of Fig. 4, and the shooting module 420 may comprise a shooting submodule 421.
The shooting submodule 421 is configured to shoot the shooting range at the shooting angle corresponding to the range read by the reading module 410, a plurality of different shooting ranges corresponding one-to-one to a plurality of different shooting angles.
In the above embodiment, the shooting range stored in the image capture device may also be expressed as a shooting angle; determining the shooting range from a shooting angle allows the range to meet the user's requirements more precisely.
As shown in Fig. 6, Fig. 6 is a block diagram of another image pick-up device according to an exemplary embodiment of the disclosure. In this embodiment, on the basis of the embodiment shown in Fig. 4, the device can further comprise: a first command reception module 430, a first extraction module 440 and a first memory module 450.
Wherein, the first command reception module 430 is configured to receive a first user instruction, the first user instruction carrying a shooting time period and a corresponding shooting range;
The first extraction module 440 is configured to extract the shooting time period and the shooting range from the first user instruction received by the first command reception module 430;
The first memory module 450 is configured to store the shooting time period and shooting range extracted by the first extraction module 440 in the shooting task list as a task to be performed.
In the above embodiment, the image pick-up device can receive a user instruction carrying a shooting time period and a shooting range, and store them in the shooting task list as a task to be performed. This way of building the shooting task list is simple and easy to implement.
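The receive-extract-store flow above could be outlined as below; representing the instruction as a plain dict, and the field names in it, are assumptions for illustration only:

```python
def add_task_from_instruction(instruction, task_list):
    """Extract the shooting time period and shooting range carried in a
    user instruction and store them in the task list as a pending task."""
    # Extraction step: the instruction is assumed to carry both fields.
    period = instruction["shooting_time_period"]    # e.g. ("22:00", "06:00")
    shooting_range = instruction["shooting_range"]  # e.g. "front_door"
    # Storage step: append as one task to be performed.
    task_list.append({"period": period, "range": shooting_range})
    return task_list
```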
As shown in Fig. 7, Fig. 7 is a block diagram of another image pick-up device according to an exemplary embodiment of the disclosure. In this embodiment, on the basis of the embodiment shown in Fig. 4, the device can further comprise: a second command reception module 460, a second extraction module 470, a shooting range acquisition module 480 and a second memory module 490.
Wherein, the second command reception module 460 is configured to receive a second user instruction, the second user instruction carrying a shooting time period;
The second extraction module 470 is configured to extract the shooting time period from the second user instruction received by the second command reception module 460;
The shooting range acquisition module 480 is configured to acquire the current shooting range, the current shooting range being the shooting range after adjustment by the user;
The second memory module 490 is configured to store the shooting time period extracted by the second extraction module 470 and the shooting range acquired by the shooting range acquisition module 480 in the shooting task list as a task to be performed.
In the above embodiment, the image pick-up device can also receive a user instruction carrying only a shooting time period, and acquire the shooting range that the user adjusts in real time; the time period and shooting range are then stored in the shooting task list as a task to be performed. Because the user adjusts the shooting range in real time in this mode, the resulting shooting range can better meet the user's needs.
As shown in Fig. 8, Fig. 8 is a block diagram of another image pick-up device according to an exemplary embodiment of the disclosure. In this embodiment, on the basis of the embodiment shown in Fig. 4, the device can further comprise: an analysis module 4100, an analysis result sending module 4110 and an alarm module 4120.
Wherein, the analysis module 4100 is configured to analyze the captured data and determine whether a potential safety hazard exists;
The analysis result sending module 4110 is configured to, if the analysis module 4100 determines that a potential safety hazard exists, send the analysis result of the potential safety hazard to a smart device; or,
The alarm module 4120 is configured to, if the analysis module 4100 determines that a potential safety hazard exists, output alarm information.
In the above embodiment, the image pick-up device can also analyze the captured data, and notify a smart device of the analysis result or raise an alarm when a potential safety hazard exists, thereby reminding the user to handle the hazard event in time and avoid unnecessary loss.
As shown in Fig. 9, Fig. 9 is a block diagram of another image pick-up device according to an exemplary embodiment of the disclosure. In this embodiment, on the basis of the embodiment shown in Fig. 8, the analysis module 4100 can comprise: a recognition submodule 4101, a matching submodule 4102 and a first hazard determination submodule 4103.
Wherein, the recognition submodule 4101 is configured to recognize a face image in the shooting range;
The matching submodule 4102 is configured to match the face image recognized by the recognition submodule 4101 against a preset face image of a legitimate user to obtain a similarity;
The first hazard determination submodule 4103 is configured to determine that a potential safety hazard exists if the similarity obtained by the matching submodule 4102 is lower than a set threshold.
In the above embodiment, the image pick-up device can determine whether a potential safety hazard exists by recognizing whether a captured face image is that of a legitimate user. This mode effectively detects whether a stranger has broken in and thus serves as security monitoring.
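The match-then-threshold decision can be sketched as below; the cosine similarity on face embeddings and the 0.8 threshold are placeholder assumptions, not values specified by the disclosure:

```python
def is_potential_hazard(similarity, threshold=0.8):
    """A hazard is flagged when the face similarity against the preset
    face image of a legitimate user falls below the set threshold."""
    return similarity < threshold

def check_face(face_embedding, legit_embeddings, threshold=0.8):
    """Return True when no legitimate user's preset image matches well
    enough, i.e. a stranger may have entered the shooting range."""
    def cosine(a, b):  # toy similarity on equal-length vectors
        dot = sum(x * y for x, y in zip(a, b))
        na = sum(x * x for x in a) ** 0.5
        nb = sum(y * y for y in b) ** 0.5
        return dot / (na * nb)
    best = max(cosine(face_embedding, e) for e in legit_embeddings)
    return is_potential_hazard(best, threshold)
```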
As shown in Fig. 10, Fig. 10 is a block diagram of another image pick-up device according to an exemplary embodiment of the disclosure. In this embodiment, on the basis of the embodiment shown in Fig. 8, the analysis module 4100 can comprise: a collection submodule 4104 and a second hazard determination submodule 4105.
Wherein, the collection submodule 4104 is configured to collect movement information of a child or pet in the shooting range;
The second hazard determination submodule 4105 is configured to determine, according to the movement information of the child or pet collected by the collection submodule 4104 and the surrounding environment, whether the child or pet is exposed to a potential safety hazard.
In the above embodiment, whether a potential safety hazard exists can be determined by analyzing the movement information of a child or pet together with the surrounding environment, which effectively monitors the safety of children and pets.
In the above embodiments, the smart device comprises a smart terminal, a smart home appliance, a wearable device or a community security center.
As shown in Fig. 11, Fig. 11 is a block diagram of another image pick-up device according to an exemplary embodiment of the disclosure. In this embodiment, on the basis of the embodiment shown in Fig. 4, the device further comprises: a report generation module 4130 and a report sending module 4140.
Wherein, the report generation module 4130 is configured to generate, at preset time intervals, a security report based on the captured data and the analysis result;
The report sending module 4140 is configured to send the security report generated by the report generation module 4130 to a smart terminal or wearable device.
In the above embodiment, a security report is generated and sent to the user's smart terminal or wearable device regardless of whether the analysis result indicates a potential safety hazard. When the user is on a business trip, travelling, or otherwise away from home for a long time, the smart terminal or wearable device is usually carried with the user, so this mode lets the user learn about the security situation at home in time: when there is no hazard at home, the user can attend to work or other matters with peace of mind; when a hazard exists, the user can conveniently handle it in time to avoid unnecessary loss.
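The periodic report step could be outlined as follows; the report fields are assumptions chosen for the example, and the report is produced whether or not a hazard was detected, as the embodiment describes:

```python
def generate_security_report(captured_frames, analysis_results):
    """Summarize one preset interval of captured data and analysis
    results into a security report, hazard or not."""
    hazards = [r for r in analysis_results if r.get("hazard")]
    return {
        "frames_analyzed": len(captured_frames),
        "hazard_detected": bool(hazards),
        "hazard_events": hazards,   # empty list means "all clear"
    }
```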
The image pick-up device embodiments shown in Figs. 4 to 11 above can be applied in a terminal.
For the devices above, the specific implementation of the function and effect of each unit follows the implementation of the corresponding step in the methods above, and is not repeated here.
Since the device embodiments essentially correspond to the method embodiments, the relevant parts can refer to the description of the method embodiments. The device embodiments described above are merely illustrative: the units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units, i.e. they may be located in one place or distributed over multiple network elements. Some or all of the modules can be selected according to actual needs to achieve the purpose of the disclosed solution, which those of ordinary skill in the art can understand and implement without creative effort.
Accordingly, the disclosure also provides a terminal, the terminal comprising: a processor; and a memory for storing processor-executable instructions; wherein the processor is configured to:
read the shooting time period and the shooting range of the next task to be performed from a shooting task list, the shooting task list recording the shooting time periods of multiple shooting tasks and the shooting range corresponding to each task; and
shoot the shooting range when the start time of the shooting time period arrives.
As shown in Fig. 12, Fig. 12 is a structural schematic diagram of a device 1200 for image capture according to an exemplary embodiment of the disclosure. For example, the device 1200 may be a mobile phone with a routing function, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, fitness equipment, a personal digital assistant, etc.
Referring to Fig. 12, the device 1200 may comprise one or more of the following components: a processing component 1202, a memory 1204, a power component 1206, a multimedia component 1208, an audio component 1210, an input/output (I/O) interface 1212, a sensor component 1214, and a communication component 1216.
The processing component 1202 typically controls the overall operation of the device 1200, such as operations associated with display, telephone calls, data communication, camera operation and recording. The processing component 1202 may comprise one or more processors 1220 to execute instructions to complete all or part of the steps of the methods above. In addition, the processing component 1202 may comprise one or more modules that facilitate interaction between the processing component 1202 and the other components; for example, the processing component 1202 may comprise a multimedia module to facilitate interaction between the multimedia component 1208 and the processing component 1202.
The memory 1204 is configured to store various types of data to support operation of the device 1200. Examples of such data include instructions for any application or method operating on the device 1200, contact data, phone book data, messages, pictures, video, etc. The memory 1204 may be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, a magnetic disk or an optical disc.
The power component 1206 provides power to the various components of the device 1200. The power component 1206 may comprise a power management system, one or more power supplies, and other components associated with generating, managing and distributing power for the device 1200.
The multimedia component 1208 comprises a screen providing an output interface between the device 1200 and the user. In some embodiments, the screen may comprise a liquid crystal display (LCD) and a touch panel (TP). If the screen comprises a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel comprises one or more touch sensors to sense touches, swipes and gestures on the touch panel. The touch sensors may not only sense the boundary of a touch or swipe action, but also detect the duration and pressure associated with the touch or swipe. In some embodiments, the multimedia component 1208 comprises a front camera and/or a rear camera. When the device 1200 is in an operating mode, such as a shooting mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each front camera and rear camera may be a fixed optical lens system or have focus and optical zoom capability.
The audio component 1210 is configured to output and/or input audio signals. For example, the audio component 1210 comprises a microphone (MIC) configured to receive external audio signals when the device 1200 is in an operating mode, such as a call mode, a recording mode or a speech recognition mode. The received audio signal may be further stored in the memory 1204 or sent via the communication component 1216. In some embodiments, the audio component 1210 also comprises a loudspeaker for outputting audio signals.
The I/O interface 1212 provides an interface between the processing component 1202 and peripheral interface modules, which may be a keyboard, a click wheel, buttons, etc. These buttons may include but are not limited to: a home button, volume buttons, a start button and a lock button.
The sensor component 1214 comprises one or more sensors for providing state assessment of various aspects of the device 1200. For example, the sensor component 1214 can detect the open/closed state of the device 1200 and the relative positioning of components, such as the display and keypad of the device 1200; the sensor component 1214 can also detect a change in position of the device 1200 or one of its components, the presence or absence of user contact with the device 1200, the orientation or acceleration/deceleration of the device 1200 and a change in its temperature. The sensor component 1214 may comprise a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor component 1214 may also comprise a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 1214 may also comprise an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, a microwave sensor or a temperature sensor.
The communication component 1216 is configured to facilitate wired or wireless communication between the device 1200 and other equipment. The device 1200 can access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 1216 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 1216 also comprises a near-field communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on radio-frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology and other technologies.
In an exemplary embodiment, the device 1200 may be implemented by one or more application-specific integrated circuits (ASIC), digital signal processors (DSP), digital signal processing devices (DSPD), programmable logic devices (PLD), field-programmable gate arrays (FPGA), controllers, microcontrollers, microprocessors or other electronic components, for performing the methods above.
In an exemplary embodiment, a non-transitory computer-readable storage medium comprising instructions is also provided, such as the memory 1204 comprising instructions, which can be executed by the processor 1220 of the device 1200 to perform the methods above. For example, the non-transitory computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, etc.
Those skilled in the art, after considering the specification and practicing the invention disclosed herein, will easily conceive of other embodiments of the disclosure. The disclosure is intended to cover any variations, uses or adaptations of the disclosure that follow its general principles and include common knowledge or conventional technical means in the art not disclosed herein. The specification and embodiments are to be regarded as exemplary only, with the true scope and spirit of the disclosure indicated by the claims below.
The above are only preferred embodiments of the disclosure and are not intended to limit it; any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the disclosure shall be included within its scope of protection.

Claims (19)

1. An image capture method, characterized in that it comprises:
reading the shooting time period and the shooting range of the next task to be performed from a shooting task list, the shooting task list recording the shooting time periods of multiple shooting tasks and the shooting range corresponding to each task; and
shooting the shooting range when the start time of the shooting time period arrives.
2. The image capture method according to claim 1, characterized in that shooting the shooting range comprises:
shooting the shooting range according to the shooting angle corresponding to the shooting range, multiple different shooting ranges corresponding one-to-one to multiple different shooting angles.
3. The image capture method according to claim 1, characterized in that, before reading the shooting time period and shooting range of the next task to be performed from the shooting task list, the method further comprises:
receiving a first user instruction, the first user instruction carrying a shooting time period and a corresponding shooting range;
extracting the shooting time period and the shooting range from the first user instruction; and
storing the shooting time period and the shooting range in the shooting task list as a task to be performed.
4. The image capture method according to claim 1, characterized in that, before reading the shooting time period and shooting range of the next task to be performed from the shooting task list, the method further comprises:
receiving a second user instruction, the second user instruction carrying a shooting time period;
extracting the shooting time period from the second user instruction;
acquiring the current shooting range, the current shooting range being the shooting range after adjustment by the user; and
storing the shooting time period and the current shooting range in the shooting task list as a task to be performed.
5. The method according to claim 1, characterized in that, after shooting the shooting range, the method further comprises:
analyzing the captured data to determine whether a potential safety hazard exists; and
if the potential safety hazard exists, sending the analysis result of the potential safety hazard to a smart device; or,
if the potential safety hazard exists, outputting alarm information.
6. The method according to claim 5, characterized in that analyzing the captured data to determine whether a potential safety hazard exists comprises:
recognizing a face image in the shooting range;
matching the face image against a preset face image of a legitimate user to obtain a similarity; and
determining that a potential safety hazard exists if the similarity is lower than a set threshold.
7. The method according to claim 5, characterized in that analyzing the captured data to determine whether a potential safety hazard exists comprises:
collecting movement information of a child or pet in the shooting range; and
determining, according to the movement information of the child or pet and the surrounding environment, whether the child or pet is exposed to a potential safety hazard.
8. The method according to claim 5, characterized in that the smart device comprises a smart terminal, a smart home appliance, a wearable device or a community security center.
9. The method according to claim 1, characterized in that, after shooting the shooting range, the method further comprises:
generating, at preset time intervals, a security report based on the captured data and an analysis result; and
sending the security report to a smart terminal or wearable device.
10. An image pick-up device, characterized in that it comprises: a read module and a shooting module;
the read module being configured to read the shooting time period and the shooting range of the next task to be performed from a shooting task list, the shooting task list recording the shooting time periods of multiple shooting tasks and the shooting range corresponding to each task; and
the shooting module being configured to shoot the shooting range when the start time of the shooting time period read by the read module arrives.
11. The device according to claim 10, characterized in that the shooting module comprises: a shooting submodule;
the shooting submodule being configured to shoot the shooting range according to the shooting angle corresponding to the shooting range read by the read module, multiple different shooting ranges corresponding one-to-one to multiple different shooting angles.
12. The device according to claim 10, characterized in that the device further comprises: a first command reception module, a first extraction module and a first memory module;
the first command reception module being configured to receive a first user instruction, the first user instruction carrying a shooting time period and a corresponding shooting range;
the first extraction module being configured to extract the shooting time period and the shooting range from the first user instruction received by the first command reception module; and
the first memory module being configured to store the shooting time period and shooting range extracted by the first extraction module in the shooting task list as a task to be performed.
13. The device according to claim 10, characterized in that the device further comprises: a second command reception module, a second extraction module, a shooting range acquisition module and a second memory module;
the second command reception module being configured to receive a second user instruction, the second user instruction carrying a shooting time period;
the second extraction module being configured to extract the shooting time period from the second user instruction received by the second command reception module;
the shooting range acquisition module being configured to acquire the current shooting range, the current shooting range being the shooting range after adjustment by the user; and
the second memory module being configured to store the shooting time period extracted by the second extraction module and the shooting range acquired by the shooting range acquisition module in the shooting task list as a task to be performed.
14. The device according to claim 10, characterized in that the device further comprises: an analysis module, an analysis result sending module and an alarm module;
the analysis module being configured to analyze the captured data and determine whether a potential safety hazard exists;
the analysis result sending module being configured to, if the analysis module determines that the potential safety hazard exists, send the analysis result of the potential safety hazard to a smart device; and
the alarm module being configured to, if the analysis module determines that the potential safety hazard exists, output alarm information.
15. The device according to claim 14, characterized in that the analysis module comprises: a recognition submodule, a matching submodule and a first hazard determination submodule;
the recognition submodule being configured to recognize a face image in the shooting range;
the matching submodule being configured to match the face image recognized by the recognition submodule against a preset face image of a legitimate user to obtain a similarity; and
the first hazard determination submodule being configured to determine that a potential safety hazard exists if the similarity obtained by the matching submodule is lower than a set threshold.
16. The device according to claim 14, characterized in that the analysis module comprises: a collection submodule and a second hazard determination submodule;
the collection submodule being configured to collect movement information of a child or pet in the shooting range; and
the second hazard determination submodule being configured to determine, according to the movement information of the child or pet collected by the collection submodule and the surrounding environment, whether the child or pet is exposed to a potential safety hazard.
17. The device according to claim 14, characterized in that the smart device comprises a smart terminal, a smart home appliance, a wearable device or a community security center.
18. The device according to claim 10, characterized in that the device further comprises: a report generation module and a report sending module;
the report generation module being configured to generate, at preset time intervals, a security report based on the captured data and an analysis result; and
the report sending module being configured to send the security report generated by the report generation module to a smart terminal or wearable device.
19. A terminal, characterized in that it comprises: a processor; and a memory configured to store processor-executable instructions; wherein the processor is configured to:
read the shooting time period and the shooting range of the next task to be performed from a shooting task list, the shooting task list recording the shooting time periods of multiple shooting tasks and the shooting range corresponding to each task; and
shoot the shooting range when the start time of the shooting time period arrives.
CN201510551707.6A 2015-09-01 2015-09-01 Image pick-up method and device as well as terminal Pending CN105100749A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510551707.6A CN105100749A (en) 2015-09-01 2015-09-01 Image pick-up method and device as well as terminal

Publications (1)

Publication Number Publication Date
CN105100749A true CN105100749A (en) 2015-11-25

Family

ID=54580169

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510551707.6A Pending CN105100749A (en) 2015-09-01 2015-09-01 Image pick-up method and device as well as terminal

Country Status (1)

Country Link
CN (1) CN105100749A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106878620A (en) * 2017-03-23 2017-06-20 北京小米移动软件有限公司 The method and apparatus for controlling IMAQ
CN109993946A (en) * 2017-12-29 2019-07-09 国民技术股份有限公司 A kind of monitoring alarm method, camera, terminal, server and system

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110317037A1 (en) * 2010-06-28 2011-12-29 Canon Kabushiki Kaisha Image pickup apparatus
CN102547241A (en) * 2011-12-31 2012-07-04 深圳市永达电子股份有限公司 Home anomaly detection method and home anomaly detection device as well as home anomaly detection system on basis of wireless network camera
CN102905110A (en) * 2012-09-07 2013-01-30 北京瀚景锦河科技有限公司 Remote multi-area image monitoring system and method
CN103268680A (en) * 2013-05-29 2013-08-28 北京航空航天大学 Intelligent monitoring and anti-theft system for family
CN103714648A (en) * 2013-12-06 2014-04-09 乐视致新电子科技(天津)有限公司 Monitoring and early warning method and device
CN204069205U (en) * 2014-09-17 2014-12-31 中国农业科学院农业信息研究所 Based on the agriculture production environment asynchronous wireless video monitoring apparatus of 4G network
CN104270609A (en) * 2014-10-09 2015-01-07 深圳市中控生物识别技术有限公司 Method, system and device for remote monitoring


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20151125

RJ01 Rejection of invention patent application after publication