Detailed Description of Embodiments
The present disclosure is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the relevant disclosure and are not limiting of the disclosure. It should also be noted that, for ease of description, only the parts related to the relevant disclosure are shown in the accompanying drawings.
It should be noted that, in the case of no conflict, the embodiments of the present disclosure and the features in the embodiments may be combined with each other. The present disclosure is described in detail below with reference to the accompanying drawings and in conjunction with the embodiments.
Fig. 1 shows an exemplary system architecture 100 in which embodiments of a method for shooting an image or an apparatus for shooting an image of the present disclosure may be applied.
As shown in Fig. 1, the system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 serves as a medium providing communication links between the terminal devices 101, 102, 103 and the server 105. The network 104 may include various connection types, such as wired or wireless communication links, or fiber optic cables.
A user may use the terminal devices 101, 102, 103 to interact with the server 105 via the network 104 to receive or send messages. Various communication client applications, such as image processing applications, video playback applications, and social platform software, may be installed on the terminal devices 101, 102, 103.
The terminal devices 101, 102, 103 may be hardware or software. When the terminal devices 101, 102, 103 are hardware, they may be various electronic devices. When the terminal devices 101, 102, 103 are software, they may be installed in the above-mentioned electronic devices. They may be implemented as a plurality of pieces of software or software modules (for example, software or software modules for providing distributed services), or as a single piece of software or software module, which is not specifically limited herein.
The server 105 may be a server providing various services, for example, a backend image processing server that processes an image sequence displayed on the terminal devices 101, 102, 103. The backend image processing server may process the acquired image sequence and generate a processing result (for example, an instruction for controlling a target camera to shoot).
It should be noted that the method for shooting an image provided by the embodiments of the present disclosure may be executed by the server 105 or by the terminal devices 101, 102, 103. Accordingly, the apparatus for shooting an image may be arranged in the server 105 or in the terminal devices 101, 102, 103.
It should be noted that the server may be hardware or software. When the server is hardware, it may be implemented as a distributed server cluster composed of a plurality of servers, or as a single server. When the server is software, it may be implemented as a plurality of pieces of software or software modules (for example, software or software modules for providing distributed services), or as a single piece of software or software module, which is not specifically limited herein.
It should be understood that the numbers of terminal devices, networks, and servers in Fig. 1 are merely illustrative. Any number of terminal devices, networks, and servers may be provided according to implementation requirements.
With continued reference to Fig. 2, a flow 200 of an embodiment of a method for shooting an image according to the present disclosure is shown. The method for shooting an image includes the following steps:
Step 201: acquiring a target image sequence that is played on a target interface and obtained by shooting a target person.
In this embodiment, an executing body of the method for shooting an image (for example, the server or terminal device shown in Fig. 1) may acquire, remotely through a wired or wireless connection or locally, a target image sequence that is played on a target interface and obtained by shooting a target person. Here, the target interface may be an interface for displaying images obtained by shooting the target person. For example, the target interface may be an interface of an application for shooting images installed on the executing body. The target person may be a person of whom images are shot; for example, the target person may be a user taking a selfie using the executing body. The target image sequence may be an image sequence on which moving object detection is to be performed. In general, the images included in the target image sequence may be a part of the image sequence obtained by shooting the target person, and the target image sequence includes the image currently displayed on the target interface. As an example, the target image sequence may include a preset number of images, including the image currently displayed on the target interface.
Step 202: performing moving object detection on the target image sequence to determine action state information corresponding to each image included in the target image sequence.
In this embodiment, the executing body may perform moving object detection on the target image sequence to determine the action state information corresponding to each image included in the target image sequence. Here, the action state information is used to characterize the action state of the target person at the display time of an image, and the action state includes a motion state and a stationary state. As an example, suppose the target image sequence includes two images; each of the images corresponds to one action state. The action state information may include, but is not limited to, information in at least one of the following forms: a number, text, a symbol, and the like. For example, when the action state information is the number "1", it characterizes the target person as being in the motion state; when the action state information is the number "0", it characterizes the target person as being in the stationary state. The image display time may be the display time corresponding to an image displayed on the target interface.
In general, for an image in the target image sequence, the action state corresponding to the image may be determined according to the moving distance, on the target interface, of a region composed of pixels that have moved in the image relative to a target image preceding the image (which may be the image adjacent to the image, or an image separated from the image by a preset number of images). For example, the moving distance may be the maximum of the moving distances of the pixels in the region composed of the moved pixels, or may be the average of the moving distances of the pixels. For example, if the moving distance is greater than or equal to a preset distance threshold, the action state corresponding to the image is determined to be the motion state. Alternatively, a movement speed may be determined according to the moving distance and the play-time difference between the image and the target image; if the movement speed is greater than or equal to a preset speed threshold, the action state corresponding to the image is determined to be the motion state.
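The distance- and speed-based decision described above can be sketched as follows. This is a minimal illustrative sketch: the helper name and the two threshold values are assumptions, since the disclosure only requires that some preset thresholds exist.

```python
# Assumed example thresholds; the disclosure does not fix concrete values.
DISTANCE_THRESHOLD = 12.0   # pixels
SPEED_THRESHOLD = 40.0      # pixels per second

def action_state(pixel_distances, play_time_diff):
    """Classify one image as 'motion' or 'stationary'.

    pixel_distances: moving distances of the pixels in the moved region,
    measured relative to the preceding target image.
    play_time_diff: play-time difference to that target image, in seconds.
    """
    moving_distance = max(pixel_distances)   # the text also allows the mean
    if moving_distance >= DISTANCE_THRESHOLD:
        return "motion"
    # Alternative criterion from the text: speed = distance / time difference.
    if moving_distance / play_time_diff >= SPEED_THRESHOLD:
        return "motion"
    return "stationary"

print(action_state([3.0, 15.0, 7.5], 0.1))  # -> motion (15.0 >= 12.0)
print(action_state([0.5, 1.0], 0.1))        # -> stationary
```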
The executing body may perform moving object detection on the target image sequence according to various methods. Optionally, the executing body may perform moving object detection according to at least one of the following existing methods: an optical flow method, a background segmentation method, an inter-frame difference method, and the like.
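Of the listed methods, the inter-frame difference method is the simplest to illustrate. A minimal sketch on grayscale frames represented as 2-D lists; the threshold value is an assumed example:

```python
def frame_diff_mask(prev_frame, curr_frame, diff_threshold=25):
    """Inter-frame difference: mark a pixel as moving when its grayscale
    value changes by more than diff_threshold between adjacent frames."""
    return [
        [abs(a - b) > diff_threshold for a, b in zip(prev_row, curr_row)]
        for prev_row, curr_row in zip(prev_frame, curr_frame)
    ]

prev_frame = [[10, 10], [10, 10]]
curr_frame = [[90, 12], [10, 10]]   # only the top-left pixel changed
print(frame_diff_mask(prev_frame, curr_frame))
# -> [[True, False], [False, False]]
```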
In some optional implementations of this embodiment, the executing body may perform moving object detection on the target image sequence by combining the optical flow method with the background segmentation method. Here, the optical flow method may be used to detect the instantaneous velocities of the image pixels onto which spatially moving objects are mapped; it is a method that uses the temporal variation of pixels in an image sequence and the correlation between adjacent images to determine the relationship between adjacent images, thereby calculating the motion information of an object within the time interval between adjacent images. The background segmentation method extracts a moving target region using difference operations on different images: it usually performs a difference operation between the current image and a continuously updated background image, and extracts the moving target region from the resulting difference image.
In general, the optical flow method detects the velocities of the pixels corresponding to the moving target; its detection is fast and its precision is high. The background segmentation method, in contrast, needs to subtract the background, so its detection of the moving target has a certain delay relative to the optical flow method. By combining the two methods, false triggering of the instruction for controlling the target camera to shoot, caused by noise in an image or by small movements of the target person, can be reduced. Specifically, the two moving object detection methods may be combined in various ways. As an example, the set of pixels detected by the optical flow method for characterizing the moving target may be determined as a foreground image, and the foreground image may then be further detected using the background segmentation method; if the foreground image of the currently displayed image has moved on the target interface relative to the foreground image of the previous image, the action state corresponding to the currently displayed image is determined to be the motion state, and otherwise the stationary state. Alternatively, the action state corresponding to the currently displayed image may be determined by the optical flow method and the background segmentation method separately; if both methods detect that the action state corresponding to the currently displayed image is the motion state, the action state corresponding to the currently displayed image is determined to be the motion state, and if both methods detect that the action state corresponding to the currently displayed image is the stationary state, the action state corresponding to the currently displayed image is determined to be the stationary state.
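The second combination strategy above (adopt a state only when the two detectors agree) can be sketched as a small fusion rule. The tie-breaking behavior on disagreement, keeping the previously fused state, is an assumption, since the text leaves that case open:

```python
def fuse_states(flow_state, bg_state, previous_state="stationary"):
    """Agreement rule: the optical-flow result and the background-segmentation
    result must coincide for the reported state to change."""
    if flow_state == bg_state:
        return flow_state
    return previous_state   # assumed behavior when the detectors disagree

print(fuse_states("motion", "motion"))                # -> motion
print(fuse_states("motion", "stationary", "motion"))  # -> motion (no change)
```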
Step 203: in response to detecting that the target person transitions from the motion state to the stationary state at the current time, generating an instruction for controlling a target camera to shoot.
In this embodiment, the executing body may, in response to detecting that the target person transitions from the motion state to the stationary state at the current time, generate an instruction for controlling the target camera to shoot. Specifically, the executing body may determine that the target person transitions from the motion state to the stationary state at the current time in response to detecting that the action state corresponding to the image currently displayed on the target interface is the stationary state while the action state corresponding to the previous image of the currently displayed image is the motion state. The form of the instruction for controlling the target camera to shoot may include, but is not limited to, at least one of the following: a number, text, a symbol, a level signal, and the like. The target camera may be a camera for shooting the target person (for example, it may shoot images, or may shoot videos). The target camera may be arranged on the executing body; in this case, the executing body may use the instruction to control the target camera to shoot (for example, when the instruction is generated, shooting of an image or video is triggered). The target camera may also be arranged on an electronic device communicatively connected with the executing body; in this case, the executing body may send the instruction to the electronic device, and the electronic device may use the received instruction to control the target camera to shoot (for example, when the instruction is received, shooting of an image or video is triggered).
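The trigger condition of step 203, where the previous image's state is the motion state and the current image's state is the stationary state, can be sketched as a small helper. The function name and the numeric encoding of the instruction are illustrative assumptions (a number is one of the forms the text allows):

```python
def capture_instruction(prev_state, curr_state):
    """Return a shoot instruction (here the number 1) when the person
    transitions from motion to stillness, and None otherwise."""
    if prev_state == "motion" and curr_state == "stationary":
        return 1   # instruction for controlling the target camera to shoot
    return None

print(capture_instruction("motion", "stationary"))      # -> 1
print(capture_instruction("stationary", "stationary"))  # -> None
```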
With continued reference to Fig. 3, Fig. 3 is a schematic diagram of an application scenario of the method for shooting an image according to this embodiment. In the application scenario of Fig. 3, the electronic device 301 maps, through a camera arranged thereon, the image of the target person 302 onto a target interface (for example, the interface currently displayed on the screen of the electronic device 301). The electronic device 301 first acquires the target image sequence 303 played on the target interface (for example, including a preset number of image frames including the currently displayed image). Then, the electronic device 301 performs moving object detection on the target image sequence 303 and determines the action state information corresponding to each image included in the target image sequence 303, where, when the action state information is the number "1", it characterizes the target person 302 as being in the motion state, and when the action state information is the number "0", it characterizes the target person 302 as being in the stationary state. Finally, in response to detecting that the action state information corresponding to the image 3031 currently displayed on the target interface is "0" and the action state information corresponding to the previous image 3032 adjacent to the image 3031 is "1", the electronic device 301 determines that the target person 302 transitions from the motion state to the stationary state at the current time and generates an instruction 304 for controlling the target camera to shoot, and the electronic device controls the camera to take a photo of the target person according to the instruction 304.
The method provided by the above embodiment of the present disclosure acquires a target image sequence that is played on a target interface and obtained by shooting a target person, performs moving object detection on the target image sequence to determine the current action state of the target person, and, in response to detecting that the target person transitions from the motion state to the stationary state at the current time, generates an instruction for controlling the target camera to shoot, thereby controlling the camera to shoot an image by recognizing a person's action, without manual control, improving the flexibility of controlling the camera to shoot images.
With further reference to Fig. 4, a flow 400 of another embodiment of the method for shooting an image is shown. The flow 400 of the method for shooting an image includes the following steps:
Step 401: acquiring a target image sequence that is played on a target interface and obtained by shooting a target person.
In this embodiment, step 401 is substantially the same as step 201 in the embodiment corresponding to Fig. 2, and details are not repeated here.
Step 402: acquiring a speed threshold corresponding to the target image sequence.
In this embodiment, the executing body of the method for shooting an image (for example, the server or terminal device shown in Fig. 1) may acquire, remotely through a wired or wireless connection or locally, a speed threshold corresponding to the target image sequence. Here, the speed threshold, as well as the correspondence between the speed threshold and the target image sequence, may be preset by a technician, or may be determined in advance by the executing body.
In some optional implementations of this embodiment, the speed threshold may be obtained according to the following steps:
First, for each image in the target image sequence, a human body image is determined from the image, and the size of the human body image in the image is determined. Specifically, as an example, the executing body may determine the human body image from the image using an existing object detection model. The object detection model may be a model obtained by training based on an existing object detection network (for example, SSD (Single Shot MultiBox Detector), DPM (Deformable Part Model), etc.). The object detection model may determine the position of the human body image from an image input into it. In general, the object detection model may output coordinate information, and the coordinate information may characterize the position of the human body image in the image. For example, the coordinate information may include two diagonal corner coordinates of a rectangular box; from the two diagonal corner coordinates, a rectangular region image may be determined in the image, and the rectangular region image is the human body image. The size of the human body image may include, but is not limited to, at least one of the following: the length, the width, the diagonal length, etc. of the rectangle containing the human body image.
Then, according to the determined sizes, a human body image size corresponding to the target image sequence is determined. As an example, the executing body may determine the average of the determined sizes as the human body image size corresponding to the target image sequence. Alternatively, the executing body may select a size from the determined sizes (for example, by random selection, or by selecting the size corresponding to the currently displayed image) as the human body image size corresponding to the target image sequence.
Finally, based on a preset correspondence between human body image sizes and speed thresholds, the speed threshold corresponding to the target image sequence is determined. Here, the correspondence between human body image sizes and speed thresholds may be preset. For example, the correspondence may be characterized by a preset two-dimensional table. In general, a larger human body image size indicates that the distance between the target person and the camera is shorter, and the corresponding speed threshold is larger; a smaller human body image size indicates that the distance between the target person and the camera is longer, and the corresponding speed threshold is smaller. Through this implementation, the target person can control the camera with the same range of movement at different distances from the camera.
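The three steps above can be sketched end to end. The bounding-box arithmetic follows the text; the size-to-threshold table is a made-up example of the preset two-dimensional table (larger figure means a closer person and a larger threshold), not a value from the disclosure:

```python
import math

# Assumed example table: (maximum human-image diagonal in pixels, speed
# threshold). Larger sizes map to larger thresholds, as the text describes.
SIZE_TO_THRESHOLD = [(150.0, 10.0), (300.0, 20.0), (600.0, 40.0)]

def human_image_size(x1, y1, x2, y2):
    """Diagonal length of the rectangle given by two diagonal corners,
    one of the size measures the text allows."""
    return math.hypot(x2 - x1, y2 - y1)

def sequence_threshold(box_list):
    """Average the per-image sizes, then look the average up in the table."""
    sizes = [human_image_size(*box) for box in box_list]
    avg = sum(sizes) / len(sizes)
    for max_size, threshold in SIZE_TO_THRESHOLD:
        if avg <= max_size:
            return threshold
    return SIZE_TO_THRESHOLD[-1][1]

# Two detected boxes (x1, y1, x2, y2) across the sequence, diagonal 200 px:
print(sequence_threshold([(0, 0, 120, 160), (0, 0, 120, 160)]))  # -> 20.0
```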
Step 403: for each image in the target image sequence, determining a movement speed corresponding to the image.
In this embodiment, for each image in the target image sequence, the executing body may determine the movement speed corresponding to the image. Specifically, for a given image, the movement speed corresponding to the image may be determined according to the moving distance, on the target interface, of a region composed of pixels that have moved in the image relative to a target image preceding the image (which may be the image adjacent to the image, or an image separated from the image by a preset number of images), together with the play-time difference between the image and the target image; that is, the movement speed is the quotient of the moving distance and the play-time difference. For example, the moving distance may be the maximum of the moving distances of the pixels in the region composed of the moved pixels, or may be the average of the moving distances of the pixels.
Step 404: determining, based on the speed threshold and the determined movement speeds, action state information corresponding to each image included in the target image sequence.
In this embodiment, the executing body may determine, based on the speed threshold and the determined movement speeds, the action state information corresponding to each image included in the target image sequence. Specifically, for an image in the target image sequence, if the movement speed corresponding to the image is greater than or equal to the speed threshold, the action state information corresponding to the image is determined to be information characterizing the motion state; if the movement speed corresponding to the image is less than the speed threshold, the action state information corresponding to the image is determined to be information characterizing the stationary state.
In some optional implementations of this embodiment, the executing body may determine the action state information corresponding to each image included in the target image sequence according to the following steps:
First, the determined movement speeds are smoothed to obtain smoothed movement speeds corresponding to the images in the target image sequence. Specifically, the executing body may smooth the determined movement speeds in various ways. For example, the executing body may smooth the determined movement speeds using existing algorithms such as a moving-window least-squares polynomial smoothing algorithm or a roughness penalty algorithm.
As an example, the executing body may smooth the determined movement speeds using an exponential smoothing algorithm. The exponential smoothing algorithm associates the currently generated data with all previously generated data; that is, the currently generated data is determined from the previously generated data, and the closer previously generated data is to the currently generated data, the greater the weight it carries in determining the currently generated data. This can eliminate abrupt changes in the movement speed while improving the accuracy of determining the movement speed corresponding to an image.
Then, based on the speed threshold and the determined smoothed movement speeds, the action state information corresponding to each image included in the target image sequence is determined. Specifically, for an image in the target image sequence, if the smoothed movement speed corresponding to the image is greater than or equal to the speed threshold, the action state information corresponding to the image is determined to be information characterizing the motion state; if the smoothed movement speed corresponding to the image is less than the speed threshold, the action state information corresponding to the image is determined to be information characterizing the stationary state.
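A minimal sketch of the exponential smoothing variant described above; the smoothing factor alpha is an assumed example value:

```python
def exponential_smooth(speeds, alpha=0.5):
    """Exponentially smooth a list of per-image movement speeds: each
    smoothed value blends the new raw speed with the previous smoothed
    value, so all earlier data contributes with geometrically decaying
    weight and one-frame spikes are damped."""
    smoothed = [speeds[0]]
    for raw in speeds[1:]:
        smoothed.append(alpha * raw + (1 - alpha) * smoothed[-1])
    return smoothed

# A single-frame speed spike of 10 is damped rather than flipping the state:
print(exponential_smooth([0.0, 10.0, 0.0]))  # -> [0.0, 5.0, 2.5]
```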
Step 405: in response to detecting that the target person transitions from the motion state to the stationary state at the current time, generating an instruction for controlling the target camera to shoot.
In this embodiment, step 405 is substantially the same as step 203 in the embodiment corresponding to Fig. 2, and details are not repeated here.
As can be seen from Fig. 4, compared with the embodiment corresponding to Fig. 2, the flow 400 of the method for shooting an image in this embodiment highlights the step of determining the action state information corresponding to each image included in the target image sequence based on the speed threshold and the movement speed corresponding to each image. Thus, the scheme described in this embodiment can flexibly determine the action state information according to the speed threshold, which helps to improve the accuracy and flexibility of controlling the target camera to shoot according to the action state of the person.
With further reference to Fig. 5, as an implementation of the methods shown in the above figures, the present disclosure provides an embodiment of an apparatus for shooting an image. The apparatus embodiment corresponds to the method embodiment shown in Fig. 2, and the apparatus may be specifically applied to various electronic devices.
As shown in Fig. 5, the apparatus 500 for shooting an image of this embodiment includes: an acquiring unit 501, configured to acquire a target image sequence that is played on a target interface and obtained by shooting a target person, where the target image sequence includes the image currently displayed on the target interface; a determining unit 502, configured to perform moving object detection on the target image sequence to determine action state information corresponding to each image included in the target image sequence, where the action state information is used to characterize the action state of the target person at the display time of an image, and the action state includes a motion state and a stationary state; and a generating unit 503, configured to, in response to detecting that the target person transitions from the motion state to the stationary state at the current time, generate an instruction for controlling a target camera to shoot.
In this embodiment, the acquiring unit 501 may acquire, remotely through a wired or wireless connection or locally, a target image sequence that is played on a target interface and obtained by shooting a target person. Here, the target interface may be an interface for displaying images obtained by shooting the target person. For example, the target interface may be an interface of an application for shooting images installed on the executing body. The target person may be a person of whom images are shot; for example, the target person may be a user taking a selfie using the executing body. The target image sequence may be an image sequence on which moving object detection is to be performed. In general, the images included in the target image sequence may be a part of the image sequence obtained by shooting the target person, and the target image sequence includes the image currently displayed on the target interface. As an example, the target image sequence may include a preset number of images, including the image currently displayed on the target interface.
In this embodiment, the determining unit 502 may perform moving object detection on the target image sequence to determine the action state information corresponding to each image included in the target image sequence. Here, the action state information is used to characterize the action state of the target person at the display time of an image, and the action state includes a motion state and a stationary state. As an example, suppose the target image sequence includes two images; each of the images corresponds to one action state. The action state information may include, but is not limited to, information in at least one of the following forms: a number, text, a symbol, and the like. For example, when the action state information is the number "1", it characterizes the target person as being in the motion state; when the action state information is the number "0", it characterizes the target person as being in the stationary state. The image display time may be the display time corresponding to an image displayed on the target interface.
In general, for an image in the target image sequence, the action state corresponding to the image may be determined according to the moving distance, on the target interface, of a region composed of pixels that have moved in the image relative to a target image preceding the image (which may be the image adjacent to the image, or an image separated from the image by a preset number of images). For example, the moving distance may be the maximum of the moving distances of the pixels in the region composed of the moved pixels, or may be the average of the moving distances of the pixels. For example, if the moving distance is greater than or equal to a preset distance threshold, the action state corresponding to the image is determined to be the motion state. Alternatively, a movement speed may be determined according to the moving distance and the play-time difference between the image and the target image; if the movement speed is greater than or equal to a preset speed threshold, the action state corresponding to the image is determined to be the motion state.
The determining unit 502 may perform moving object detection on the target image sequence according to various methods. Optionally, the determining unit 502 may perform moving object detection according to at least one of the following existing methods: an optical flow method, a background segmentation method, an inter-frame difference method, and the like.
In this embodiment, the generating unit 503 may, in response to detecting that the target person transitions from the motion state to the stationary state at the current time, generate an instruction for controlling the target camera to shoot. Specifically, the generating unit 503 may determine that the target person transitions from the motion state to the stationary state at the current time in response to detecting that the action state corresponding to the image currently displayed on the target interface is the stationary state while the action state corresponding to the previous image of the currently displayed image is the motion state. The form of the instruction for controlling the target camera to shoot may include, but is not limited to, at least one of the following: a number, text, a symbol, a level signal, and the like. The target camera may be a camera for shooting the target person (for example, it may shoot images, or may shoot videos). The target camera may be arranged on the apparatus 500; in this case, the apparatus 500 may use the instruction to control the target camera to shoot (for example, when the instruction is generated, shooting of an image or video is triggered). The target camera may also be arranged on an electronic device communicatively connected with the apparatus 500; in this case, the apparatus 500 may send the instruction to the electronic device, and the electronic device may use the received instruction to control the target camera to shoot (for example, when the instruction is received, shooting of an image or video is triggered).
In some optional implementations of this embodiment, the determining unit 502 may include: an acquiring module (not shown in the figure), configured to acquire a speed threshold corresponding to the target image sequence; a first determining module (not shown in the figure), configured to determine, for each image in the target image sequence, a movement speed corresponding to the image; and a second determining module (not shown in the figure), configured to determine, based on the speed threshold and the determined movement speeds, action state information corresponding to each image included in the target image sequence.
In some optional implementations of this embodiment, the speed threshold may be obtained according to the following steps: for each image in the target image sequence, determining a human body image from the image, and determining the size of the human body image in the image; determining, according to the determined sizes, a human body image size corresponding to the target image sequence; and determining, based on a preset correspondence between human body image sizes and speed thresholds, the speed threshold corresponding to the target image sequence.
In some optional implementations of this embodiment, the second determining module may include: a processing submodule, configured to smooth the determined movement speeds to obtain smoothed movement speeds corresponding to the images in the target image sequence; and a determining submodule, configured to determine, based on the speed threshold and the determined smoothed movement speeds, the action state information corresponding to each image included in the target image sequence.
In some optional implementations of this embodiment, the determining unit 502 may be further configured to perform moving object detection on the target image sequence using at least one of the following: an optical flow method, a background segmentation method, and an inter-frame difference method.
In some optional implementations of this embodiment, the determining unit 502 may be further configured to perform moving object detection on the target image sequence by combining the optical flow method with the background segmentation method.
The apparatus provided by the above embodiment of the disclosure acquires a target image sequence that is played in a target interface and obtained by shooting a target person, performs moving-object detection on the target image sequence to determine the target person's current action state, and, in response to detecting that the target person has transitioned from the moving state to the stationary state at the current time, generates an instruction for controlling a target camera to shoot. The camera is thus controlled to shoot images by recognizing the person's actions, without manual operation, which improves the flexibility of controlling the camera to shoot images.
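The overall control flow just summarized reduces to watching the per-frame action states for a moving-to-stationary transition. The sketch below models the instruction as a string for illustration; in a real device it would be a command sent to the camera driver, and the function name is an assumption.

```python
# Hedged end-to-end sketch: emit a "shoot" instruction exactly at the frame
# where the person transitions from the moving state to the stationary state.

def capture_instructions(states):
    """For each pair of consecutive action states, return "shoot" on a
    moving -> stationary transition and None otherwise."""
    instructions = []
    for prev, curr in zip(states, states[1:]):
        if prev == "moving" and curr == "stationary":
            instructions.append("shoot")  # trigger the target camera here
        else:
            instructions.append(None)
    return instructions
```

Triggering only on the transition, rather than whenever the subject is stationary, is what prevents the camera from firing continuously while the person holds a pose.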
Referring now to Fig. 6, there is shown a schematic structural diagram of an electronic device 600 (e.g., the server or a terminal device of Fig. 1) suitable for implementing embodiments of the disclosure. Terminal devices in embodiments of the disclosure may include, but are not limited to, mobile terminals such as mobile phones, laptops, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), and vehicle-mounted terminals (e.g., in-vehicle navigation terminals), as well as fixed terminals such as digital TVs and desktop computers. The electronic device shown in Fig. 6 is only an example and should not impose any limitation on the functionality or scope of use of the embodiments of the disclosure.
As shown in Fig. 6, the electronic device 600 may include a processing apparatus (e.g., a central processing unit, a graphics processor, etc.) 601, which can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 602 or a program loaded from a storage apparatus 608 into a random-access memory (RAM) 603. The RAM 603 also stores various programs and data required for the operation of the electronic device 600. The processing apparatus 601, the ROM 602, and the RAM 603 are connected to one another through a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
In general, the following apparatuses may be connected to the I/O interface 605: an input apparatus 606 including, for example, a touch screen, a touch pad, a keyboard, a mouse, a camera, a microphone, an accelerometer, a gyroscope, etc.; an output apparatus 607 including, for example, a liquid crystal display (LCD), a speaker, a vibrator, etc.; a storage apparatus 608 including, for example, a memory; and a communication apparatus 609. The communication apparatus 609 may allow the electronic device 600 to communicate wirelessly or by wire with other devices to exchange data. Although Fig. 6 shows the electronic device 600 with various apparatuses, it should be understood that not all of the illustrated apparatuses are required to be implemented or provided; more or fewer apparatuses may alternatively be implemented or provided. Each block shown in Fig. 6 may represent one apparatus, or may represent multiple apparatuses as needed.
In particular, according to embodiments of the disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, an embodiment of the disclosure includes a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for executing the methods shown in the flowcharts. In such an embodiment, the computer program may be downloaded and installed from a network through the communication apparatus 609, installed from the storage apparatus 608, or installed from the ROM 602. When the computer program is executed by the processing apparatus 601, the above-described functions defined in the methods of the embodiments of the disclosure are performed.
It should be noted that the computer-readable medium described in embodiments of the disclosure may be a computer-readable signal medium, a computer-readable storage medium, or any combination of the two. The computer-readable storage medium may be, for example, but is not limited to, an electric, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random-access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In embodiments of the disclosure, a computer-readable storage medium may be any tangible medium containing or storing a program, where the program may be used by, or in combination with, an instruction execution system, apparatus, or device. In embodiments of the disclosure, a computer-readable signal medium may include a data signal propagated in a baseband or as part of a carrier wave, in which computer-readable program code is carried. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. The computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium; the computer-readable signal medium can send, propagate, or transmit a program for use by, or in combination with, an instruction execution system, apparatus, or device. The program code contained on the computer-readable medium may be transmitted by any suitable medium, including but not limited to: a wire, an optical cable, RF (radio frequency), or any suitable combination of the above.
The above-mentioned computer-readable medium may be included in the electronic device described above, or may exist alone without being assembled into the electronic device. The above-mentioned computer-readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquire a target image sequence that is played in a target interface and obtained by shooting a target person, where the target image sequence includes an image currently displayed in the target interface; perform moving-object detection on the target image sequence to determine action state information corresponding to each image included in the target image sequence, where the action state information characterizes the action state of the target person at the display time of the image, the action state including a moving state and a stationary state; and, in response to detecting that the target person has transitioned from the moving state to the stationary state at the current time, generate an instruction for controlling a target camera to shoot.
Computer program code for executing the operations of embodiments of the disclosure may be written in one or more programming languages or a combination thereof. The programming languages include object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the internet using an internet service provider).
The flowcharts and block diagrams in the accompanying drawings illustrate the architectures, functions, and operations of possible implementations of the systems, methods, and computer program products of the various embodiments of the disclosure. In this regard, each block in a flowchart or block diagram may represent a module, a program segment, or a portion of code, and the module, program segment, or portion of code contains one or more executable instructions for implementing the specified logical functions. It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur in an order different from that noted in the drawings. For example, two successive blocks may actually be executed substantially in parallel, or they may sometimes be executed in the reverse order, depending on the functions involved. It should also be noted that each block in the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, may be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The units involved in the embodiments of the disclosure may be implemented by means of software, or by means of hardware. The described units may also be provided in a processor; for example, a processor may be described as including an acquisition unit, a determination unit, and a generation unit. The names of these units do not, in some cases, constitute a limitation on the units themselves; for example, the acquisition unit may also be described as "a unit for acquiring a target image sequence that is played in a target interface and obtained by shooting a target person".
The above description is merely a preferred embodiment of the disclosure and an explanation of the applied technical principles. Those skilled in the art should appreciate that the scope of the invention involved in embodiments of the disclosure is not limited to technical solutions formed by the specific combination of the above technical features, but should also cover other technical solutions formed by any combination of the above technical features or their equivalents without departing from the above inventive concept, for example, technical solutions formed by mutually replacing the above features with (but not limited to) technical features having similar functions disclosed in embodiments of the disclosure.