CN108307116A - Image capturing method, device, computer equipment and storage medium - Google Patents


Info

Publication number
CN108307116A
CN108307116A (application CN201810122474.1A)
Authority
CN
China
Prior art keywords
posture
target subject
image
preset target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810122474.1A
Other languages
Chinese (zh)
Other versions
CN108307116B (en)
Inventor
李科慧
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN201810122474.1A priority Critical patent/CN108307116B/en
Publication of CN108307116A publication Critical patent/CN108307116A/en
Application granted granted Critical
Publication of CN108307116B publication Critical patent/CN108307116B/en
Legal status: Active

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/61Control of cameras or camera modules based on recognised objects
    • H04N23/611Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80Camera processing pipelines; Components thereof

Abstract

This application relates to an image capturing method, apparatus, computer device, and storage medium. The method obtains an image captured by an image acquisition device, identifies at least one target subject in the image, continuously tracks the posture changes of the at least one target subject, and detects the posture of the at least one target subject through a trained deep learning neural network model. When the detected posture of the at least one target subject matches a preset target posture, a shooting instruction is triggered. The method can accurately seize the moment for capturing a dynamic posture, improving the snapshot effect.

Description

Image capturing method, device, computer equipment and storage medium
Technical field
This application relates to the field of computer technology, and in particular to an image capturing method, apparatus, computer device, and storage medium.
Background technology
With the development of computer technology, the demand for photography keeps growing. To record the most beautiful moments, the shutter must be pressed at exactly the right time. To obtain suitable photos, numerous shooting techniques have been developed; the main ones currently in use include countdown shooting, Bluetooth-triggered shooting, and the like.
Traditional shooting methods are essentially time-controlled. With such methods, the photo is often taken before the subject being photographed is ready with the intended action, or the subject's action has already ended by the time the photo is taken, so the best shooting moment is missed. Existing techniques therefore find it difficult to seize the right moment to capture a dynamic posture and cannot achieve the best snapshot effect.
Summary of the invention
On this basis, in view of the above technical problems, it is necessary to provide an image capturing method, apparatus, computer device, and storage medium that can accurately seize the moment for capturing a dynamic posture and improve the snapshot effect.
An image capturing method, including:
obtaining an image captured by an image acquisition device;
identifying at least one target subject in the image, and continuously tracking the posture changes of the at least one target subject;
detecting the posture of the at least one target subject through a trained deep learning neural network model; and
triggering a shooting instruction when the detected posture of the at least one target subject matches a preset target posture.
An image capturing apparatus, including:
an image acquisition module, configured to obtain an image captured by an image acquisition device;
a target subject recognition and tracking module, configured to identify at least one target subject in the image and continuously track the posture changes of the at least one target subject;
a posture detection module, configured to detect the posture of the at least one target subject through a trained deep learning neural network model; and
a shooting module, configured to trigger a shooting instruction when the detected posture of the at least one target subject matches a preset target posture.
A computer-readable storage medium storing a computer program which, when executed by a processor, causes the processor to perform the following steps:
obtaining an image captured by an image acquisition device;
identifying at least one target subject in the image, and continuously tracking the posture changes of the at least one target subject;
detecting the posture of the at least one target subject through a trained deep learning neural network model; and
triggering a shooting instruction when the detected posture of the at least one target subject matches a preset target posture.
A computer device, including a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to perform the following steps:
obtaining an image captured by an image acquisition device;
identifying at least one target subject in the image, and continuously tracking the posture changes of the at least one target subject;
detecting the posture of the at least one target subject through a trained deep learning neural network model; and
triggering a shooting instruction when the detected posture of the at least one target subject matches a preset target posture.
With the above image capturing method, apparatus, computer device, and storage medium, the posture of a target subject in the scene is monitored through the image acquisition device, and continuously tracking the target subject improves the efficiency of detecting it. Detecting the subject's posture with a trained deep learning neural network yields a more accurate dynamic posture. Shooting is then performed according to whether the detected dynamic posture matches the preset target posture, so the shot can be triggered at the instant the dynamic posture is completed, improving the snapshot effect.
Brief description of the drawings
Fig. 1 is the application environment diagram of the image capturing method in one embodiment;
Fig. 2 is the flow diagram of the image capturing method in one embodiment;
Fig. 3 is the flow diagram of target subject recognition and tracking in one embodiment;
Fig. 4 is the flow diagram of training the deep learning neural network in one embodiment;
Fig. 5 is the flow diagram of training the deep learning neural network in another embodiment;
Fig. 6 is the flow diagram of posture detection in one embodiment;
Fig. 7 is the flow diagram of completing continuously triggered shooting in one embodiment;
Fig. 8 is the flow diagram of completing video shooting in another embodiment;
Fig. 9 is the flow diagram of shooting triggered by multiple target subjects in a further embodiment;
Fig. 10 is a schematic terminal interface diagram of a picture obtained by multi-subject triggered shooting in one embodiment;
Fig. 11 is a schematic terminal interface diagram of a picture obtained by multi-subject triggered shooting in another embodiment;
Fig. 12 is a schematic terminal interface diagram of a picture obtained by multi-subject triggered shooting in a further embodiment;
Fig. 13 is a schematic diagram of postures in different states during a jump in one embodiment;
Fig. 14 is a schematic terminal interface diagram of a picture shot when preset state parameters are met in one embodiment;
Fig. 15 is the flow diagram of voice-triggered shooting in one embodiment;
Fig. 16 is the flow diagram of the image capturing method in a specific embodiment;
Fig. 17 is the structural block diagram of the image capturing apparatus in one embodiment;
Fig. 18 is the structural block diagram of target subject recognition and tracking in one embodiment;
Fig. 19 is the structural block diagram of the posture detection model in one embodiment;
Fig. 20 is the structural block diagram of the posture network model training unit in one embodiment;
Fig. 21 is the structural block diagram of the posture detection model in another embodiment;
Fig. 22 is the structural block diagram of the image capturing apparatus in another embodiment;
Fig. 23 is the structural block diagram of the video shooting module in one embodiment;
Fig. 24 is the structural block diagram of the shooting module in one embodiment;
Fig. 25 is the structural block diagram of a computer device in one embodiment.
Detailed description of the embodiments
To make the objectives, technical solutions, and advantages of this application clearer, the application is further described below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only intended to explain this application and are not intended to limit it.
Fig. 1 is the application environment diagram of the image capturing method provided in one embodiment. As shown in Fig. 1, the application environment includes a terminal 110 and a server 120. The terminal 110 includes an image acquisition device used to capture images. The terminal 110 obtains the image captured by the image acquisition device, detects the target subject in the image and its posture through an image recognition model and a deep learning neural network model, and matches the recognized posture of the target subject against a preset posture; the image acquisition device performs shooting according to the matching result. When the match succeeds, the image acquisition device triggers a shooting instruction; when the match fails, the image acquisition device continues to capture images. The captured picture is sent to the server over the network. Alternatively, the terminal may send the original image captured by the image acquisition device to the server 120, where the original image is processed to obtain the posture of the target subject in the image, and the posture is returned to the terminal 110. The terminal 110 matches the returned result against the preset target posture; when the match succeeds, the image acquisition device performs shooting, and when the match fails, the image acquisition device continues to capture images.
The server 120 may be an independent physical server, a server cluster composed of multiple physical servers, or a cloud server providing basic cloud computing services such as cloud servers, cloud databases, cloud storage, and CDN. The terminal 110 may be, but is not limited to, a smartphone, a tablet computer, a laptop, a desktop computer, a professional camera, or the like. The server 120 and the terminal 110 may be connected through a network or another communication connection, which is not limited in the present invention.
As shown in Fig. 2, in one embodiment, an image capturing method is provided. The method specifically includes the following steps:
Step S202: obtain an image captured by the image acquisition device.
Here, the image acquisition device is a device for capturing images, such as a camera. A camera generally has basic functions such as video capture/transmission and still image capture: after light is gathered through the lens, the photosensitive assembly circuit and control assembly in the camera process the image and convert it into a digital signal that a computer can recognize, completing image acquisition. The camera on the shooting device can generally be used directly, without further development. The image is one or more pictures captured by the image acquisition device and may contain subjects; subjects can be people, animals, or scenery.
Specifically, the image containing the target subject captured by the image acquisition device is obtained. This may be a set of images continuously captured by the image acquisition device, or a set of images captured by the device at fixed time intervals.
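The two acquisition modes above (continuous capture versus capture at fixed intervals) can be sketched as sampling a stream of frames. The sketch below is illustrative only and not part of the patent's disclosure; the function name `sample_frames` and the use of frame indices in place of real image data are assumptions.

```python
from typing import Iterable, List


def sample_frames(frames: Iterable, interval: int = 1) -> List:
    """Keep every `interval`-th frame from a captured stream.

    interval=1 models continuous acquisition (keep every frame);
    a larger interval models acquisition at fixed time steps.
    """
    return [frame for i, frame in enumerate(frames) if i % interval == 0]


# e.g. 10 captured frames (represented by their indices), sampled every 3rd frame
print(sample_frames(range(10), 3))  # → [0, 3, 6, 9]
```

In practice the stream would come from a camera API rather than `range`; the sampling logic is the same either way.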
Step S204 identifies at least one target subject in the picture, and continues to track at least one target subject Attitudes vibration.
Here, image recognition is the technology of using a computer to process, analyze, and understand an image in order to recognize the targets and objects in it. Image recognition is based on the main features of the image: these features are extracted and used to identify the target subject in the image. A feature is a set of data describing the target subject. For example, a deep learning neural network contains multiple network layers, and each layer extracts features of different dimensions. Taking a face as an example, after machine learning the bottom layers of the network mainly extract elementary features such as left diagonals, right diagonals, horizontal lines, vertical lines, and points; higher layers extract local features, such as the local features of the facial organs; and the top layers extract facial features, describing a face through the geometric features and positions of the facial organs extracted by the upper layers.
The target subject is the behavioral agent used to trigger the shooting instruction: the shooting instruction is triggered by tracking the posture changes of the target subject. The target subject to be identified can be preset, or the corresponding target subject can be obtained automatically by a recognition algorithm. Target subjects include people, animals, scenery, and so on; for example, a target face image saved before shooting can be used to identify the corresponding target subject in the captured image. An image may contain one or more target subjects. Image tracking locates the target subject detected in the images captured by the image acquisition device and obtains its position information in the image. Tracking algorithms including, but not limited to, neural networks, particle filtering, or Kalman filtering may be used to track the target subject. Any single one of these algorithms may be used, or several may be combined; for example, a particle filter alone may be used, or a particle filter may be combined with a Kalman filter to track the target subject.
Specifically, the main features contained in the image are extracted and analyzed to identify the target subject, and the posture changes of the identified target subject are tracked to continuously obtain its position information. Tracking the target subject includes, but is not limited to, tracking it in every frame, at fixed time intervals, or at fixed frame intervals. The posture change of the target subject can be the continuous evolution of a single posture or the succession of multiple postures.
Step S206: detect the posture of the at least one target subject through the trained deep learning neural network model.
Here, the trained deep learning neural network model is obtained by learning from a set of images carrying posture labels. The model can detect the posture of the target subject in an input image containing that subject, and output the posture. A posture is an action performed, or a pose displayed, by the subject being photographed. For a person, actions and postures include, but are not limited to, jumping, pointing to the sky, clapping, waving, turning around, pointing into the distance, throwing a hat, and so on. For an animal, they include, but are not limited to, jumping, scratching, sticking out the tongue, rearing up on the hind legs, rolling over, and so on.
Specifically, the image containing the target subject is input into the trained deep learning neural network model, the posture features of the target subject are extracted, and the posture of the target subject is determined from these features. For example, if the target subject is a human body, the image containing the human body is input into the trained model, which outputs the posture of the human body.
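As a hedged illustration of the detection step, the sketch below maps a model's raw output scores to a posture label via a softmax and argmax. The label set `POSTURE_LABELS`, the logit values, and the function names are invented for illustration; a real model's output layer and label set may differ.

```python
import math

POSTURE_LABELS = ["jump", "flip", "point_to_sky", "spin"]  # hypothetical label set


def softmax(logits):
    # Numerically stable softmax over the model's raw output scores.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]


def classify_posture(logits):
    # Return the most probable posture label and its probability.
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    return POSTURE_LABELS[best], probs[best]


label, confidence = classify_posture([4.0, 1.0, 0.5, 0.2])
print(label)  # → jump
```

In a full pipeline, the logits would come from the last layer of the trained network applied to the tracked subject's image region.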
Step S208: trigger a shooting instruction when the detected posture of the at least one target subject matches the preset target posture.
Here, the preset target posture is the posture preset for triggering the shooting instruction; one or more preset target postures can be set at a time. The preset target posture can be determined from postures learned from images by a posture learning algorithm, or from a user-defined pose template; for example, the postures learned from images by the deep learning neural network model can be used. Matching means that the posture features of the target subject are the same as, or similar to, the features of the preset target posture. A matching degree between the posture of the target subject and the preset target posture can be computed; when the matching degree reaches a preset matching-degree threshold, the match is judged successful.
Specifically, the posture of the target subject obtained from the trained deep learning neural network model is matched against the preset target posture for triggering the shot; when they match as the same posture, the shooting device starts shooting and completes the shot. The shot can be a photo or a video. When the shooting device is set to photo mode, the photo is taken after the shooting instruction is triggered; one or more photos may be taken. When the device is set to video mode, video shooting is completed after the instruction is triggered. After one shot is completed, the shooting instruction can be triggered again; the posture triggering the new shot may or may not be the same as the one that triggered the first. Multiple trigger postures can also be set for a single shot, and the shooting instruction is triggered when the target subject is detected to match at least one of them. A shooting prompt can be issued when the shot is triggered.
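One plausible way to implement the matching degree and threshold described above is cosine similarity between posture feature vectors. The feature vectors, the threshold value of 0.9, and the function names below are illustrative assumptions, not the patent's specified method.

```python
import math


def matching_degree(pose_a, pose_b):
    """Cosine similarity between two posture feature vectors.

    For non-negative feature values, the result lies in [0, 1].
    """
    dot = sum(a * b for a, b in zip(pose_a, pose_b))
    norm_a = math.sqrt(sum(a * a for a in pose_a))
    norm_b = math.sqrt(sum(b * b for b in pose_b))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0


def matches_preset(pose, preset_pose, threshold=0.9):
    # Trigger condition: the matching degree reaches the preset threshold.
    return matching_degree(pose, preset_pose) >= threshold


# A detected posture close to the preset one passes the threshold
print(matches_preset([1.0, 0.8, 0.1], [1.0, 0.9, 0.0]))  # → True
```

The threshold trades off false triggers against missed shots; a looser threshold fires on posture approximations, a tighter one only on near-exact matches.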
In one embodiment, when there is a single target subject in the image, the shooting instruction is triggered when the dynamic posture of that subject matches the posture preset for triggering shooting. For example, if one person is recognized in the image and the preset trigger posture is a jump, the shooting instruction is triggered and the shot completed when this target subject is detected making the jumping action.
In another embodiment, when there are multiple target subjects in the image, the shooting instruction is triggered when the posture of any one target subject, or of a preset number of target subjects, matches the preset trigger posture. The preset number can be customized as needed. For example, if 10 people are recognized in the image, the preset trigger posture is a jump, and the preset number is 3, the shooting instruction is triggered and the shot completed when 3 people are detected making the jumping action.
In yet another embodiment, when there are multiple target subjects in the image, the shooting instruction is triggered when the postures of all target subjects in the image match the preset trigger posture. For example, if 20 people are recognized in the image and the preset trigger posture is a jump, the shooting instruction is triggered and the shot completed when all 20 people are detected jumping at the same time.
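The three trigger embodiments above (any subject, at least a preset number of subjects, all subjects) can be sketched as a single decision function over per-subject match results. The mode names and function signature here are invented for illustration.

```python
def should_trigger(match_flags, mode="any", preset_count=1):
    """Decide whether to trigger shooting from per-subject match results.

    match_flags: list of booleans, one per detected target subject.
    mode: "any"   - a single subject (or any one of several) matches;
          "count" - at least preset_count subjects match;
          "all"   - every detected subject must match.
    """
    matched = sum(match_flags)
    if mode == "any":
        return matched >= 1
    if mode == "count":
        return matched >= preset_count
    if mode == "all":
        return bool(match_flags) and matched == len(match_flags)
    raise ValueError(f"unknown mode: {mode}")


flags = [True, False, True, True]  # 3 of 4 subjects hold the preset posture
print(should_trigger(flags, "count", 3))  # → True
print(should_trigger(flags, "all"))       # → False
```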
With the above shooting method, the image captured by the image acquisition device is obtained, the target subject is identified in the image, and the identified target subject is tracked. Tracking the target subject reduces the area of the image that must be examined, which shortens detection time and improves the efficiency of detecting the target subject. The posture of the target subject is detected through the trained deep learning neural network model, which can quickly learn image features and detect the subject's dynamic posture. Shooting according to whether the detected dynamic posture matches the preset target posture allows the shot to be triggered at the instant the dynamic posture is completed, improving the snapshot effect.
As shown in Fig. 3, in one embodiment, step S204 includes:
Step S204a: input the current image into the trained image recognition model, which obtains the historical position information of the at least one target subject in the history images corresponding to the current image.
Here, the image recognition model is a model for identifying the target subject in an image and obtaining its position information. It includes, but is not limited to, a model obtained by learning from a massive set of labeled photos, used to identify and locate the target subject in the photo being shot and to track it. Image tracking analyzes the history images to obtain the historical position information of the target subject, and predicts the subject's position in the current image from that information. Specifically, before the current image is input into the trained image recognition model, it can be preprocessed. Preprocessing includes scaling the image to the size used when training the model, and converting the image's color space according to the algorithm's requirements, since different recognition algorithms use different color spaces. The preprocessed current image is input into the trained image recognition model, which obtains the historical position information of the target subject in at least one history image preceding the current image. A history image is one or more frames before the current image, and the historical position information is the target subject's position in those frames, for example its position in the previous frame or several previous frames.
Step S204b: determine the predicted location area of the at least one target subject in the current image according to the historical position information.
Specifically, the predicted location area is the region of the current image where, as predicted by the image recognition model, the target subject is likely to appear; it is predicted from the subject's historical positions in the history images. Because the time interval between frames captured by the image acquisition device is small, the subject's movement is relatively limited, so its location in the current image can be predicted accurately from the historical position information. The predicted location area can also be computed by combining the historical position information with the subject's motion information; for example, the subject's predicted location area in the current frame can be obtained from its positions in the previous frame or frames using the Kalman state prediction equation.
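As a simplified stand-in for the Kalman state prediction equation mentioned above, the sketch below extrapolates the subject's position under a constant-velocity assumption from its last two historical positions, then pads it into a search region. The padding size, coordinate convention, and function name are assumptions made for illustration.

```python
def predict_region(history, pad=20):
    """Predict a search region for the subject in the current frame.

    history: list of (x, y) centre positions in past frames, oldest first.
    Uses the last two positions to extrapolate one step ahead
    (constant-velocity assumption: x_next = x + v * dt with dt = 1 frame),
    then pads the predicted centre into a (left, top, right, bottom) box.
    """
    (x0, y0), (x1, y1) = history[-2], history[-1]
    cx, cy = 2 * x1 - x0, 2 * y1 - y0  # extrapolated centre
    return (cx - pad, cy - pad, cx + pad, cy + pad)


# Subject moving right and slightly down across two history frames
print(predict_region([(100, 50), (110, 54)]))  # → (100, 38, 140, 78)
```

A full Kalman filter would additionally maintain an uncertainty estimate and correct the prediction with each new detection; this sketch keeps only the prediction step.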
Step S204c: when the at least one target subject is detected within the predicted location area, output the current position information of the at least one target subject in the current image.
Specifically, the predicted location area in the current image is examined; when the target subject is detected in this area, the detected position information together with the identified target subject forms the output data of the image recognition model. Considering the performance limits of mobile devices, the tracking algorithm can restrict detection to the neighborhood of the subject's historical position, improving tracking efficiency. Using the subject's position information to detect it within the predicted location area reduces detection time and improves detection efficiency; positioning assisted by image tracking achieves real-time tracking.
In one embodiment, after the target subject is identified and located, a subset of frames can be selected for locating the target subject, improving image processing speed.
In one embodiment, since no historical position information is available as a reference when the target subject is detected for the first time, the whole image area is searched to determine the target subject's position.
In one embodiment, when the target subject is not detected in the predicted location area of the current image, the entire current image is searched for the target subject, and the next round of the location tracking process begins, repeating the above steps of identifying and tracking the photographed subject in the image.
As shown in Fig. 4, in one embodiment, before step S204 the method further includes:
Step S402: input the training image set carrying posture labels into the deep learning neural network model.
Specifically, a posture label is data describing the posture of the target subject in an image; for example, a photo containing a jumping person carries the posture label "jump". The training image set carrying posture labels is a set of images with various posture labels, and it is input into the deep learning neural network model.
Step S404: obtain the status data corresponding to the posture labels.
Specifically, status data is data in a user-defined format corresponding to a posture label; it can be vector data, matrix data, and so on. For example, the status data corresponding to the "jump" label is (1, 0, 0, 0), the status data for the "flip" label is (0, 1, 0, 0), the status data for the "point to the sky" label is (0, 0, 1, 0), and the status data for the "spin" label is (0, 0, 0, 1). The status data corresponding to each posture label is obtained by the terminal.
Step S406: train the deep learning neural network model using the status data as its expected output.
Specifically, the deep learning neural network model is trained with the status data as its expected output, i.e., with the output guiding the training. For example, if an image's posture label is "jump" with corresponding status data (1, 0, 0, 0, ...), then the desired status data after the image is processed by the model is (1, 0, 0, 0, ...): the status data corresponding to the posture label is the result the model is expected to learn.
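The label-to-status-data correspondence described in steps S404–S406 amounts to one-hot encoding. It can be sketched as follows; the English label names stand in for the jump/flip/point-to-the-sky/spin labels in the example above, and the label order is an assumption.

```python
POSTURE_LABELS = ["jump", "flip", "point_to_sky", "spin"]  # assumed label order


def status_data(label):
    """One-hot status vector used as the expected network output for a
    posture label, e.g. "jump" -> (1, 0, 0, 0)."""
    vec = [0] * len(POSTURE_LABELS)
    vec[POSTURE_LABELS.index(label)] = 1
    return tuple(vec)


print(status_data("jump"))          # → (1, 0, 0, 0)
print(status_data("point_to_sky"))  # → (0, 0, 1, 0)
```

During training, the loss between the network's output vector and this one-hot target would drive the parameter updates described in step S408.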
Step S408: update the parameters of the deep learning neural network model to obtain the trained deep learning neural network model.
Here, the deep learning neural network model weights the posture features extracted from the input image to obtain the corresponding output status data. By continuously learning from the labeled input images, it learns the features of each posture as fully as possible, so that each posture is represented by a consistent posture feature set; during learning, the weight of each feature in the posture feature set corresponding to each posture is adjusted. The parameters of the model are updated through this learning so that the recognition accuracy of the postures identified in the image set is as high as possible; when the recognition accuracy meets the preset range, training ends and the trained deep learning neural network model is obtained. The model can quickly extract image features, improving processing speed and reducing time consumption, and recognizing the target subject's posture from the extracted features improves the accuracy of posture detection.
In one embodiment, according to the needs of the application scenario, different features can be extracted for the trigger postures of different scenarios, and feature learning performed so that the trained network model can recognize the different complex postures of different applications. For example, sports scenarios include complex postures such as dunking, diving, shooting a football, serving in tennis, and aerial gymnastics. When the training dataset is large enough and feature extraction refined enough, the trained network model can accurately recognize such complex postures, and the recognized postures can then assist in judging fouls in sports competitions.
In one embodiment, the deep learning neural network model may include a convolutional deep learning neural network model. Convolutional models generally use a weight-sharing network structure, which reduces the complexity of the network model and the number of weights. Specifically, the image collection carrying posture labels is input into the convolutional deep learning neural network model, the model is trained, and the network parameters are updated; when the updated parameters achieve the preset output accuracy, training stops and the trained network model is obtained.
As shown in figure 5, in one embodiment, step S408 includes:
Step S408a carries out posture feature to each image in image collection and extracts to obtain corresponding posture feature set.
Wherein, posture feature is the target subject action sent out or the corresponding each limb of the posture shown of triggering shooting The state of body, or trigger the behavioral characteristics of the landscape of shooting.Such as, human body basic exercise action form can mainly be summarized as pushing away It draws, whip, buffering, pedaling and stretch, swing, twist and move toward one another.The action of upper limb basic exercise can be summarized as pushing away, draw and whipping 3 Kind.The action of lower limb basic exercise can be summarized as buffering, pedal and stretch and whip 3 kinds.The athletic performance of whole body and trunk can be divided into swing, Reverse and move toward one another 3 kinds.Feature extraction is carried out to image according to above-mentioned basic exercise action form, the posture extracted is special Collection be combined into (upper limb pushes away, upper limb is drawn, upper limb is whipped, lower limb buffer, lower limb pedal stretch, lower limb are whipped, are swung, are reversed, transporting in opposite directions It is dynamic).
Step S408b: adjust the weight of each posture feature in the posture feature set corresponding to each image.
Here, a weight is the proportion that each posture feature occupies in each posture. For example, suppose the posture feature set is a vector x = (upper-limb push, upper-limb pull, upper-limb whip, lower-limb buffer, lower-limb kick/stretch, lower-limb whip, swing, twist, opposing motion), the supported photography trigger postures are (jump, point to the sky, spin, flip), and the probability of each posture is represented by a vector y = (P_jump, P_point, P_spin, P_flip). A matrix W is obtained through machine learning training; its elements describe the weight corresponding to each posture feature in the posture feature set. The order of the postures in the feature set is not limited to the arrangement above and may be arbitrary.
Step S408c: weight the corresponding posture features according to the weight of each posture feature to obtain current state data.
Here, the current state data is the output data of the deep learning neural network model, i.e., data corresponding to a posture; different state data indicate different postures. The posture feature set is weighted to obtain state data, and the posture corresponding to the image's target subject is obtained from the state data. For example, the probability of each posture is computed as y = W * x. If the result is (0.9, 0, 0, 0.1), then 0.9 is the probability of the jump posture and 0.1 is the probability of the flip posture. Since the probability of the jump posture is far greater than that of the other postures, the state data determined from (0.9, 0, 0, 0.1) is (1, 0, 0, 0), and the posture corresponding to state data (1, 0, 0, 0) is jump.
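The weighted scoring y = W * x and the one-hot state data described in this step can be sketched in a few lines of Python. The posture names, the toy feature vector, and the matrix W below are illustrative assumptions, not values fixed by the embodiment:

```python
# Hypothetical posture labels for a 4-way classifier.
POSTURES = ["jump", "point_sky", "spin", "flip"]

def classify_posture(x, W):
    """Compute y = W * x, then one-hot the winning posture as state data."""
    # y[i] is the dot product of row i of W with the feature vector x.
    y = [sum(w * f for w, f in zip(row, x)) for row in W]
    best = max(range(len(y)), key=lambda i: y[i])
    # One-hot state data: 1 at the winning posture, 0 elsewhere.
    state = [1 if i == best else 0 for i in range(len(y))]
    return y, state, POSTURES[best]
```

With a toy 2-feature input and a weight matrix chosen so that y comes out as (0.9, 0, 0, 0.1), the function reproduces the example above: the state data is (1, 0, 0, 0) and the posture is jump.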
Step S408d: when the current state data and the expected state data satisfy a convergence condition, obtain the target weight of each corresponding posture feature.
Specifically, convergence means the error gradually decreases into a certain threshold range. The convergence condition may be an error recognition rate threshold for the postures learned, under the target weights, from the image collection carrying posture labels. When the detected error recognition rate on the labeled image collection is within the posture error recognition rate threshold range, the target weight of each corresponding posture feature is obtained. The error recognition rate is calculated from the current state data and the expected state data: when they are consistent, the recognition is correct; when they are inconsistent, the recognition is wrong. The error recognition rate is obtained by counting the wrong recognitions over the test data. For example, suppose the error recognition rate threshold is 0.15 and a deep learning neural network model has been trained on the training images. Test data is input into the model and the error recognition rate on the test images is calculated. An error rate of 0.17 indicates the model has not yet converged, while an error rate of 0.15 indicates the convergence condition is met.
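The error-rate bookkeeping above can be sketched as follows. The 0.15 default threshold follows the example in this paragraph; the function names and the state representation are illustrative assumptions:

```python
def error_rate(predicted_states, expected_states):
    """Fraction of test samples whose output state disagrees with the label."""
    wrong = sum(1 for p, e in zip(predicted_states, expected_states) if p != e)
    return wrong / len(expected_states)

def meets_convergence(rate, threshold=0.15):
    """The convergence condition: error rate within the preset threshold."""
    return rate <= threshold
```

For example, 15 wrong recognitions out of 100 test samples gives an error rate of 0.15, which meets the condition, while 17 out of 100 gives 0.17, which does not.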
Step S408e: obtain the parameters of the deep learning neural network model according to the target weights, yielding the trained deep learning neural network model.
Specifically, the target weights serve as the parameters of the deep learning neural network model, yielding the trained model. An image is input into the trained deep learning neural network model, the posture features of the target subject are extracted, and the posture features are weighted according to the model parameters to obtain the corresponding state data, which serves as the model's output. With sufficient training data, the features learned by the trained deep learning neural network model are more accurate, and the posture detection results obtained through the model are more accurate.
As shown in FIG. 6, in one embodiment, step S206 includes:
Step S206a: input the image region containing at least one target subject into the trained deep learning neural network model.
Specifically, the target subject is the one recognized by the image recognition model described above. The image region containing the target subject may be an image segmented out around the target subject recognized by the image recognition model, or may be the acquired current image containing the target subject. The target subject may include a person, an animal, or scenery; the image region containing the target subject is input into the trained deep learning neural network model.
Step S206b: perform posture feature extraction on the image region containing the at least one target subject to obtain a target posture feature set corresponding to the at least one target subject.
Here, the target posture feature set is the set of features that together compose the action or posture made by the target subject. Taking an animal rolling over in the air as an example, the posture features extracted for the animal are the upward pushing action of its feet and the directions of its four limbs.
Specifically, the posture features of the image region containing the target subject are extracted according to a feature extraction algorithm, and the extracted posture features of the target subject are arranged in a certain order, or randomly, to obtain the corresponding posture feature set, which is a vector containing multiple posture features.
Step S206c: weight each posture feature according to the weight of each posture feature in the posture feature set of the at least one target subject to obtain corresponding target state data.
Specifically, the weight of each posture feature is the corresponding weight in the parameters of the trained deep learning neural network recognition model. Each posture feature in the posture feature set of the target subject is weighted by these parameters to obtain the target state data, which is state data corresponding to the posture of the target subject.
Step S206d: obtain the target posture of the at least one target subject according to the correspondence between target state data and postures.
Specifically, there is a correspondence between state data and postures, which is defined before the training of the deep learning neural network described above. Therefore, once the state data for the posture of the target subject in the image is calculated, the posture of the target subject is determined by looking up the correspondence between state data and postures.
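The lookup described here is a simple table from state data to posture, defined before training. The one-hot state vectors and posture names below are illustrative assumptions:

```python
# Hypothetical correspondence between state data and postures,
# defined before training (as described above).
STATE_TO_POSTURE = {
    (1, 0, 0, 0): "jump",
    (0, 1, 0, 0): "point_sky",
    (0, 0, 1, 0): "spin",
    (0, 0, 0, 1): "flip",
}

def posture_from_state(state_data):
    """Look up the posture for a given state vector; None if unknown."""
    return STATE_TO_POSTURE.get(tuple(state_data))
```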
Performing posture detection with the deep learning neural network model quickly obtains the target posture, and the model can learn the target subject's posture from multiple feature dimensions, obtaining more accurate posture features of the target subject and improving the shooting effect.
In one embodiment, the image recognition model and the deep learning neural network model can be merged into a single neural network model, which recognizes and tracks the target subject in the input image, detects the posture of the recognized target subject, and determines the posture of the target subject in the image. This neural network model is obtained by learning from multiple images containing target subjects and the postures of those target subjects.
As shown in FIG. 7, in one embodiment, after step S208, the method further includes:
Step S602: continue obtaining images acquired by the image acquisition device.
Specifically, after the shooting triggered by the shooting instruction is completed, new images acquired by the image acquisition device are obtained. A new image may be one collected after the camera has moved, or one acquired at the original position.
Step S604: return to the step of recognizing at least one target subject in the image and continuing to track the posture changes of the at least one target subject; when the posture of a target subject is detected to match a preset target posture, trigger the shooting instruction again.
Specifically, the image acquisition device continues to acquire new images, which are then processed: the target subject in the image is recognized by the image recognition algorithm, and the recognized target subject is tracked and located. Tracking and locating the recognized target subject includes, but is not limited to, locating target subjects in part of the images or in all of the images. The posture of the recognized target subject is detected by a posture detection model, which includes but is not limited to the trained deep learning neural network model. When the posture detected by the posture detection model matches a preset target posture, shooting is performed again, obtaining new videos and photos. There may be one or more preset target postures: when the detected posture of the target subject matches any one of multiple preset target postures, the shooting instruction is triggered again. When a frame contains multiple target subjects, the shooting instruction is triggered when the posture of at least one target subject is detected to match a preset target posture. When there are multiple target subjects and multiple preset target postures, the shooting instruction is triggered when the posture of at least one of the detected target subjects matches any one of the multiple preset target postures.
Step S606: repeat the step of continuing to obtain images acquired by the image acquisition device, completing continuous trigger shooting.
Specifically, after the repeat shooting is completed, the steps of detecting the acquired images are repeated. The image acquisition device remains in operating mode, constantly repeating the steps of acquiring images, detecting images, detecting postures, and triggering shooting instructions. Repeated shooting can obtain more and more natural photos and videos. When the image acquisition device is pointed at the target subject, it continually repeats the steps from image acquisition through triggering the shooting instruction and completing the shot. For example, with the image acquisition device facing the target subject, the preset target postures may include, but are not limited to, a child clapping, waving, rolling over, or skipping and hopping; when a child is detected making any of these actions, the shooting instruction is triggered. If three children are detected in the image, the shooting instruction is triggered whenever any one of them makes one of the above actions. The preset target postures may also include, but are not limited to, a cat or dog jumping, scratching, sticking out its tongue, rearing, or rolling over; when such an action is detected, the shooting instruction is triggered and the shot completed, obtaining various adorable photos and recording a child's precious moments or a pet's cute moments.
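The acquire-detect-trigger loop of steps S602 to S606 can be sketched as follows, assuming a hypothetical `detect_postures` callback that returns one posture label per subject recognized in a frame:

```python
def continuous_capture(frames, detect_postures, target_postures):
    """Scan frames in order; record a shot whenever any recognized
    subject's posture matches any preset target posture."""
    shots = []
    for frame in frames:
        postures = detect_postures(frame)  # one label per detected subject
        if any(p in target_postures for p in postures):
            shots.append(frame)
    return shots
```

In a real device the frame source would be a live camera stream rather than a list, but the any-subject-matches-any-target trigger condition is the same.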
As shown in FIG. 8, in one embodiment, the preset target posture includes a start posture and an end posture, and step S208 includes:
Step S208a: when the posture of at least one target subject is detected to match the start posture, trigger the shooting instruction and continuously obtain the pictures shot by the image acquisition device.
Specifically, when the preset target posture is used to trigger the shooting instruction, it serves as the start posture. When the action made by the target subject recognized in the image is detected to be consistent with the preset target posture for triggering the shooting instruction, the shooting instruction is triggered and the pictures acquired by the image acquisition device are continuously obtained.
Step S208b: when the posture of at least one target subject is detected to match the end posture, make the image acquisition device stop shooting pictures.
Specifically, when the preset target posture is a posture for ending shooting, it serves as the end posture. When the action made by the target subject recognized in the image is detected to be consistent with the target posture for ending shooting, the continuous acquisition of pictures from the image acquisition device stops. The images continuously acquired by the image acquisition device from the start posture to the end posture form a video. A video recording preserves more information and can record the entire process of many dynamic postures.
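The start-posture/end-posture control of steps S208a and S208b amounts to a small state machine. The sketch below, with illustrative posture labels, collects the frames that would form the video:

```python
def record_between_postures(posture_stream, start_posture, end_posture):
    """Start collecting frames when the start posture is first seen;
    stop (inclusively) when the end posture is seen."""
    recording = False
    clip = []
    for frame, posture in posture_stream:
        if not recording and posture == start_posture:
            recording = True
        if recording:
            clip.append(frame)
            if posture == end_posture:
                break
    return clip
```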
In one embodiment, the preset target posture includes multiple preset target sub-postures, and step S208 includes:
When the posture of at least one target subject is detected to match the preset target posture, trigger the shooting instruction and shoot multiple photos of the multiple sub-postures of the same preset target posture.
Here, a preset posture that includes multiple preset target sub-postures indicates that the entire period of an action, from start to end, contains multiple sub-postures that satisfy the preset posture; arranged in chronological order, these sub-postures record the change process of the photographed subject's posture over the entire period.
Specifically, multiple pictures can be shot continuously when shooting is triggered. Each action lasts for a period of time from start to finish, and multiple pictures can be shot continuously while the action lasts, recording its whole flow. For example, in a series of jump photos, a person rises from low to high after jumping and descends from high to low while landing; when the target subject's posture is detected to be a jump, photo shooting starts and multiple photos are shot continuously. Shooting may be continuous at a preset shooting time interval, or, after triggering, the most recent frames acquired by the image acquisition device may be saved. How many frames to save can be customized, such as setting a burst of 3 or 5 pictures.
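Saving the most recent frames after a trigger, as described above, can be sketched with a rolling buffer; the burst size of 3 follows the example in this paragraph, and the trigger predicate is an illustrative assumption:

```python
from collections import deque

def burst_on_trigger(frames, is_trigger, keep=3):
    """Keep a rolling buffer of the most recent frames; on the first
    trigger, return the last `keep` frames as the burst."""
    buf = deque(maxlen=keep)  # old frames fall out automatically
    for frame in frames:
        buf.append(frame)
        if is_trigger(frame):
            return list(buf)
    return []
```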
In one embodiment, a preset target posture set is formed from multiple preset target postures, and step S208 includes: triggering shooting when the posture of at least one target subject is detected to match any one target posture in the preset target posture set.
Specifically, the preset target posture set is composed of multiple preset target postures, which can be determined according to the photographer's intended shooting content. During shooting, when the detected posture of the target subject matches any one of the multiple preset target postures, the shooting instruction is triggered. For example, if the preset target postures include jumping, pointing to the sky, flipping, and so on, a picture is shot when the target subject is detected making the jump posture, or the shooting instruction is triggered when the target subject is detected making the point-to-the-sky action. Whenever the posture made by the target subject matches any one of the set preset target postures, the photographing instruction is triggered and the photo completed.
In one embodiment, there are multiple preset target postures, and the shooting device can trigger multiple shots according to them. When the posture of the target subject is first detected to match any one of the multiple preset target postures, the shooting instruction is triggered and the first shot completed; the image acquisition device then obtains images again and detects them, and when the target subject is again detected to match any one of the multiple preset target postures, the shooting instruction is triggered again and the repeat shot completed. Pictures continue to be obtained, and the above steps of image detection, triggering shooting, and completing shooting are repeated. For example, when shooting a basketball game, the preset target postures include but are not limited to shooting the ball, dribbling, passing, lay-ups, and dunks. When the ball-shooting action in the image is detected to match the ball-shooting action in the preset target postures, the shooting instruction is triggered and the picture containing that action is saved; pictures then continue to be obtained and the postures of the target subjects in them detected again. When a lay-up is detected, the current shot is completed and a picture containing the lay-up posture obtained; the image acquisition device continues acquiring, and when the posture of the target subject in the picture is detected to be dribbling, the current shot is completed and a picture containing the dribbling posture obtained. The steps of obtaining pictures, detecting whether the postures in them match the preset target postures, and completing the photo on a match are repeated. This shooting mode of triggering the shooting instruction multiple times against multiple preset target postures can obtain more and more natural images.
As shown in figure 9, in one embodiment, step S208 includes:
Step S208c: when multiple target subjects are detected, detect the postures of the multiple target subjects.
Specifically, the same frame may contain multiple target subjects, and the types of the target subjects may or may not be the same. Depending on the learned application scenario, the target subjects in the same image may be of the same type or of different types. The postures of all the target subjects recognized in the image are detected, which includes detecting the posture of each of the multiple target subjects, or detecting a posture made jointly by all the target subjects.
Step S208d: when the postures of the multiple target subjects are detected to match the preset target posture simultaneously, trigger the shooting instruction.
Specifically, when the condition for triggering the shooting instruction is that multiple target subjects satisfy the trigger simultaneously, the shooting instruction is triggered when the detected postures of all the target subjects satisfy the preset target posture. As shown in FIG. 10, the horizontal line at the bottom of FIG. 10 represents the ground; shooting occurs after all 4 people in the figure are recognized as airborne, obtaining a multi-person jump photo. If a posture made jointly by multiple target subjects in the image is detected, the shooting instruction is triggered when the jointly made posture is detected to be consistent with the preset target posture, for example when shooting acrobatics or athletic events completed by multiple people in coordination. Detecting the postures of multiple target subjects and triggering shooting according to them makes shooting more convenient: every target subject can be in the frame, and no additional photographer is needed.
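The all-subjects-simultaneously condition of step S208d reduces to a simple check over the detected postures; the "jump" label below is an illustrative assumption:

```python
def all_subjects_match(detected_postures, target_posture):
    """True only when at least one subject is present and every
    detected subject shows the target posture at the same time."""
    return bool(detected_postures) and all(
        p == target_posture for p in detected_postures
    )
```

This mirrors the FIG. 10 example: the shot is triggered only once all four people are airborne, not while any one of them is still on the ground.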
In the present embodiment, shooting is triggered when the actions of all the target subjects are consistent. As shown in FIG. 11, when smiles appear on everyone's faces, the shooting instruction is triggered; that is, the photo is shot when all 4 detected target subjects smile at the same time. As shown in FIG. 12, when the 5 people in the figure have all shown the action named "this is life," shooting is triggered, obtaining an action photo containing the action named "this is life."
In one embodiment, step S206 includes: the deep learning neural network model detects the posture of at least one target subject and obtains the posture of the at least one target subject and a state parameter corresponding to that posture, where changes in the state parameter reflect changes in the state of the posture of the corresponding target subject.
Here, the state parameter indicates the current state degree of the posture of the target subject; as the posture changes dynamically, the current state degree of the posture changes with it, and the state parameter detected by the deep learning neural network model changes dynamically. Specifically, the target subject recognized by the deep learning neural network model is detected to obtain the corresponding posture of that target subject and the state parameter corresponding to the posture; the state parameter indicates the state of the target subject, and the entire continuous process of a posture, from formation to end, includes multiple state parameters, each representing a different state of the target subject's posture.
In one embodiment, the state parameter is of a type corresponding to the posture type. For example, when the posture type is jumping, the state parameter corresponding to the jump posture type is jump height; when the posture type is a smile, the state parameter corresponding to the smile posture type is the degree of the smile, and so on.
In another embodiment, the state parameter may be expressed, without limitation, as a numerical value or as a grade. For example, when shooting a photo of a person jumping, the height grade of the jump in the image corresponds to a height value. The jump height grade may be divided into, without limitation, 3 grades: a jump height within a first height threshold range belongs to grade 1, a jump height between the first height threshold range and a second height threshold range belongs to grade 2, and a jump height above the second height threshold range belongs to grade 3.
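The three-grade jump-height mapping can be sketched as follows; the two threshold values are hypothetical placeholders, since the embodiment does not fix numbers:

```python
def jump_grade(height_cm, first_threshold=20.0, second_threshold=40.0):
    """Map a jump height to grade 1, 2, or 3.
    The two thresholds are illustrative placeholder values."""
    if height_cm < first_threshold:
        return 1  # within the first height threshold range
    if height_cm < second_threshold:
        return 2  # between the first and second threshold ranges
    return 3      # above the second height threshold range
```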
Step S208 includes: triggering the shooting instruction when the posture of at least one target subject is detected to match the preset target posture and the state parameter of the posture of the target subject matching the preset target posture satisfies a preset state parameter.
Specifically, when the posture of at least one of the multiple target subjects is detected to match the preset target posture, and the state parameter of the matching target subject matches the preset target state parameter, photographing is triggered. When shooting is jointly controlled by the state parameter and the posture, the state parameter allows the shooting posture to be captured more accurately.
In one embodiment, the state parameter corresponding to the posture of the target subject matching the preset target posture satisfying the preset state parameter includes: when the state parameter corresponding to the posture of the target subject is greater than or equal to the preset state parameter, judging that the state parameter corresponding to the posture of the target subject matching the preset target posture satisfies the preset state parameter.
Specifically, if the preset state parameter is a jump height threshold, then when the jump height of the target subject's posture is greater than or equal to the jump height threshold, the state parameter corresponding to the posture of the target subject matching the preset target posture satisfies the preset state parameter. Since the process from jumping to landing is one of continuous height change, an image collection of continuously changing postures can be captured by triggering shooting on the preset state parameter.
In one embodiment, the preset state parameter is a preset expression change coefficient range, and the state parameter corresponding to the posture of the target subject matching the preset target posture satisfying the preset state parameter includes: when the expression change coefficient corresponding to the posture of the target subject matching the preset target posture is within the preset expression change coefficient range, judging that the state parameter corresponding to the posture of the target subject matching the preset target posture satisfies the preset state parameter. Specifically, suppose the preset expression change coefficient range is [0.5, 0.8], indicating the amplitude of change of the preset expression, where 0.5 indicates smiling and 0.8 indicates laughing. If the expression change coefficient corresponding to the posture of the target subject matching the preset target posture is 0.6, then the target subject's expression is within the preset expression change coefficient range, and it is judged that the state parameter corresponding to the posture of the target subject matching the preset target posture satisfies the preset state parameter.
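The range check on the expression change coefficient can be sketched directly from the [0.5, 0.8] example in this paragraph:

```python
def expression_in_range(coefficient, low=0.5, high=0.8):
    """True when the expression change coefficient falls inside the
    preset range [0.5, 0.8] (0.5 ~ smiling, 0.8 ~ laughing)."""
    return low <= coefficient <= high
```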
In a specific embodiment, the state parameter may be set to jump height. As shown in FIG. 13, from left to right are the different states of the jump posture as the same person rises from the jump and falls back to the ground; different states correspond to different state parameters, i.e., different jump heights. The states are indicated by labels 001, 002, 003, 004, and 005 respectively. Although the 2 states labeled 001 and 005 indicate that the person's jump posture is detected, the jump height does not satisfy the preset jump height threshold, so the shooting instruction is not triggered; the 3 states labeled 002, 003, and 004 indicate that the jump posture is detected and the jump height reaches the preset jump height threshold. When the posture and the state parameter satisfy the preset conditions simultaneously, the shooting instruction is triggered, obtaining the shot images shown in FIG. 14: the shot images corresponding to the 3 states labeled 002, 003, and 004 are image 010, image 020, and image 030 respectively, and these images form the shot image collection.
As shown in FIG. 15, in one embodiment, step S208 includes:
Step S208e: obtain voice data of at least one target subject.
Specifically, the voice data is the sound made by the person being photographed, obtained through a voice acquisition device. The voice data may include posture information corresponding to a posture, or may be specific voice data. The voice data includes text information matching the posture of the target subject, such as, without limitation, voice data meaning "go," "raise your hand," or "jump."
Step S208f: perform speech detection and recognition on the voice data to obtain a corresponding speech recognition result.
Specifically, the acquired voice data is detected and recognized by a speech recognition device. The detection and recognition methods include, but are not limited to, extracting text information from the voice data as the speech recognition result, or extracting the time-domain and frequency-domain signals of the voice data to obtain the corresponding speech recognition result. For example, the text information detected from the voice data is "point to the sky," "go," "raise your hand," "jump," and so on; or the time-domain or frequency-domain waveform obtained after processing the voice data is similar or identical to the time-domain or frequency-domain waveform of the preset voice data.
Step S208g: trigger the shooting instruction when the posture of at least one target subject is detected to match the preset target posture and the speech recognition result matches the preset target voice data.
Specifically, the shooting instruction is triggered when the posture of at least one of the multiple target subjects is detected to match the preset target posture, and the speech recognition result matches the preset target voice data for triggering the shooting instruction. When the speech recognition result is text information, the text information includes, but is not limited to, text corresponding to the posture, or text determined through a preset correspondence between postures and text information. Whether this speech recognition result matches the text information in the preset voice data is checked, and when the match succeeds, the shooting instruction is triggered. When the speech recognition result is the time-domain or frequency-domain signal of the voice data, the shooting instruction is triggered when it successfully matches the time-domain or frequency-domain signal of the preset target voice data. An image shooting method in which voice data and posture jointly control shooting can capture the shooting posture more accurately: when either the voice data or the posture does not meet the preset condition, the shooting instruction is not triggered, reducing shooting errors. For example, if the posture of the target subject is detected to be a jump but the text recognized from the target subject's voice data is "point to the sky," the shooting instruction is not triggered; when the text recognized from the voice data is "jump," the shooting instruction is triggered and the shot completed.
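The joint posture-and-voice condition of step S208g is a conjunction of two matches. The sketch below assumes the speech recognition result is plain text, as in the "jump" example, with illustrative labels:

```python
def should_trigger(posture, speech_text, target_posture, target_text):
    """Trigger only when both conditions hold; a mismatch on either
    the posture or the recognized speech suppresses the shot."""
    return posture == target_posture and speech_text == target_text
```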
As shown in Figure 16, in a specific embodiment, the image capturing method includes:
Step S802: obtain an image acquired by an image acquisition device.
Step S804: perform target recognition and tracking on the image with a trained image recognition model, where the trained image recognition model has learned from images carrying target subject labels. The position of the target subject in the current image is predicted from the position information of the target subject in historical images, yielding a predicted location area. The target subject is then detected within the predicted location area and identified.
Step S806: input the image region containing at least one target subject into a trained deep learning neural network model, extract the posture features of the image region containing the target subject with a feature extraction algorithm, and weight each posture feature according to its corresponding weight in the parameters of the trained deep learning neural network model to obtain corresponding target state data. The target posture is found from the target state data according to the correspondence between state data and postures.
Step S808: match the target posture against the preset target posture; when the match succeeds, execute step S810. There may be one or more preset target postures; for example, the preset target postures may include actions such as jumping, pointing to the sky, and turning around. When the target subject is detected making the point-to-the-sky action, step S810 is executed. When the detected posture of the target subject does not match the preset target posture, return to step S802. For example, when it is detected that the target subject has made none of the above three preset target postures, return to step S802 and repeat steps S802 to S808.
Step S810: detect whether the capture device is set to continuous shooting. If continuous shooting is set, execute step S812; if not, execute step S822.
Step S812: trigger photo capture; multiple photos may be shot in succession, or only one photo may be shot. For example, shooting may include, but is not limited to, a single shot, a 3-shot burst, or a 5-shot burst. After shooting is complete, execute step S830.
Step S822: start shooting a video.
Step S824: detect whether a termination posture for ending video recording has been set. When a termination posture for ending video recording is set, execute step S826a; when no termination posture is set, execute step S826b.
Step S826a: detect whether the posture of the target subject in the image acquired by the image acquisition device matches the termination posture among the preset target postures. For example, if the termination posture is turning around, step S828 is executed when the target subject in the video image is detected making the turning action.
Step S826b: obtain the duration of the video capture and detect whether it has reached the preset shooting duration. When the video capture duration reaches the preset shooting duration, execute step S828. For example, if the video shooting duration is set to 4 minutes, step S828 is executed when the recorded video reaches 4 minutes.
Step S828: when the condition of step S826a or step S826b is satisfied, that is, when the target subject in the video image turns around or the video duration reaches 4 minutes, stop shooting the video. After shooting stops, execute step S830.
Step S830: save the photos shot in step S812 and the video obtained in step S828.
Step S832: after step S830 is completed, detect whether the capture device has repeated-trigger shooting enabled. If repeated-trigger shooting is set, return to step S802 and repeat steps S802 to S832; if not, proceed to step S834.
Step S834: end shooting.
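The photo branch of the Figure 16 flow (steps S802 through S834) can be sketched as a loop, under the assumption that posture detection and frame acquisition are supplied by the surrounding system; `detect_posture` here is a caller-provided stand-in, and the video branch (S822 onward) is omitted for brevity. All names are illustrative.

```python
def run_capture_flow(frames, detect_posture, target_postures,
                     continuous_photo=False, repeat=False):
    """Loop over frames and trigger photo capture on a target posture."""
    shots = []
    for frame in frames:                         # S802: acquire image
        posture = detect_posture(frame)          # S804-S806: detect posture
        if posture in target_postures:           # S808: match target posture
            burst = 3 if continuous_photo else 1 # S812: single shot or burst
            shots.extend([frame] * burst)        # S830: save captures
            if not repeat:                       # S832: repeated trigger off
                break                            # S834: end shooting
    return shots
```

With `repeat=True` the loop keeps scanning subsequent frames, which corresponds to the repeated-trigger path back to step S802.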
As shown in Figure 17, in one embodiment, an image capturing device 200 is provided, including:
an image acquisition module 202, configured to obtain an image acquired by an image acquisition device;
a target subject recognition and tracking module 204, configured to identify at least one target subject in the image and continuously track the posture changes of the at least one target subject;
a posture detection module 206, configured to detect the posture of the at least one target subject with a trained deep learning neural network model; and
a shooting module 208, configured to trigger a shooting instruction when it is detected that the posture of the at least one target subject matches a preset target posture.
As shown in Figure 18, in one embodiment, the target subject recognition and tracking module 204 includes:
a historical position acquisition unit 204a, configured to input the current image into a trained image recognition model, the trained image recognition model obtaining the historical position information of the at least one target subject in the history images corresponding to the current image;
a prediction unit 204b, configured to determine a predicted location area of the at least one target subject in the current image according to the historical position information; and
a current position output unit 204c, configured to output the current position information of the at least one target subject in the current image when the at least one target subject is detected within the predicted location area.
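The behavior of units 204a to 204c can be sketched as follows: expand the last known position into a predicted search region, then report a detection as the current position only when it falls inside that region. The fixed search margin and the point-based representation are simplifying assumptions; the disclosure does not specify the prediction method.

```python
def predict_region(history_xy, margin=50):
    """Expand the last historical position into a predicted search box."""
    x, y = history_xy
    return (x - margin, y - margin, x + margin, y + margin)

def locate_in_region(detection_xy, region):
    """Return the detection as the current position if it lies in the region."""
    x, y = detection_xy
    x0, y0, x1, y1 = region
    return detection_xy if x0 <= x <= x1 and y0 <= y <= y1 else None
```

Restricting detection to the predicted region is what lets the module track the subject across frames rather than re-searching the full image each time.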
As shown in Figure 19, in one embodiment, the image capturing device 200 includes:
an image data input unit 402, configured to input a training image set carrying posture labels into a deep learning neural network model;
a state data acquisition unit 404, configured to obtain state data corresponding to the posture labels; and
a network model training unit 406, configured to take the state data as the expected output of the deep learning neural network model, train the deep learning neural network model, and update the parameters of the deep learning neural network model to obtain the trained deep learning neural network model.
As shown in Figure 20, in one embodiment, the training unit 406 includes:
a feature extraction subunit 406a, configured to extract posture features from each image in the image set to obtain a corresponding posture feature set;
a weight adjustment subunit 406b, configured to adjust the weight of each posture feature in the posture feature set corresponding to each image;
a current state data computation subunit 406c, configured to weight the corresponding posture features according to the weight of each posture feature to obtain current state data;
a target weight computation subunit 406d, configured to obtain the target weight of each corresponding posture feature when the current state data and the expected state data satisfy a convergence condition; and
a network model determination subunit 406e, configured to obtain the parameters of the deep learning neural network model according to the target weights, yielding the trained deep learning neural network model.
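The training loop of subunits 406a to 406e can be illustrated with a deliberately simplified sketch: per-feature weights are adjusted until the weighted sum of posture features converges to the expected state data. A single shared weight and a plain gradient step stand in for the real deep-learning update, which the disclosure does not specify.

```python
def train_weights(feature_sets, expected_states, lr=0.01, tol=1e-3, max_iter=10000):
    """Fit one shared weight so sum(w * features) matches the expected state data."""
    w = 0.0
    for _ in range(max_iter):
        grad = 0.0
        for feats, target in zip(feature_sets, expected_states):
            pred = sum(w * f for f in feats)      # 406c: current state data
            grad += (pred - target) * sum(feats)  # squared-error gradient
        if abs(grad) < tol:                       # 406d: convergence condition
            break                                 # w is the target weight
        w -= lr * grad / len(feature_sets)        # 406b: adjust the weight
    return w
```

When the convergence condition is met, the returned weight plays the role of the target weight from which the trained model's parameters are assembled.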
As shown in Figure 21, in one embodiment, the posture detection module 206 includes:
an image input unit 206a, configured to input the image region containing the at least one target subject into the trained deep learning neural network model;
a target posture feature set extraction unit 206b, configured to extract posture features from the image region containing the at least one target subject to obtain the target posture feature set corresponding to the at least one target subject;
a target state data computation unit 206c, configured to weight each posture feature in the posture feature set of the at least one target subject according to its weight to obtain corresponding target state data; and
a target posture lookup unit 206d, configured to obtain the target posture of the at least one target subject according to the correspondence between target state data and postures.
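The inference path through units 206b to 206d can be sketched as follows: weight each extracted posture feature, take the weighted sum as the target state data, then look the posture up in a state-data-to-posture correspondence table. The feature names, trained weights, and table contents are illustrative assumptions only.

```python
FEATURE_WEIGHTS = {"arm_angle": 0.6, "leg_angle": 0.4}   # assumed trained weights
STATE_TO_POSTURE = {1: "point_to_sky", 2: "jump"}        # assumed correspondence

def detect_posture(features):
    """Weight the features into state data, then look up the posture."""
    state = sum(FEATURE_WEIGHTS[k] * v for k, v in features.items())  # 206c
    return STATE_TO_POSTURE.get(round(state))                         # 206d
```

Rounding to an integer state code is a stand-in for whatever nearest-match rule the real correspondence lookup would use.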
As shown in Figure 22, in one embodiment, the image capturing device 200 further includes:
the image acquisition module 202, further configured to continue obtaining images acquired by the image acquisition device.
The flow then re-enters the target subject recognition and tracking module 204 to identify the target subject in the image and continuously track the posture changes of the at least one target subject; the posture detection module 206 detects the posture of the identified at least one target subject; and the shooting module 208 triggers a shooting instruction and completes shooting when the detected posture of the at least one target subject matches the preset target posture.
The flow repeatedly re-enters the image acquisition module 202, completing continuous triggered shooting.
As shown in Figure 23, in one embodiment, the shooting module 208 includes:
a continuous shooting unit 208a, configured to trigger a shooting instruction and continuously obtain the pictures shot by the image acquisition device when it is detected that the posture of the at least one target subject matches a start posture; and
a stop shooting unit 208b, configured to make the image acquisition device stop shooting pictures when it is detected that the posture of the at least one target subject matches a termination posture.
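The start/stop behavior of units 208a and 208b can be sketched as a simple state machine over a stream of frames and their detected postures; the posture names are illustrative assumptions.

```python
def capture_between_postures(frames, postures, start="raise_hand", stop="turn"):
    """Collect the frames seen between the start posture and the stop posture."""
    captured, recording = [], False
    for frame, posture in zip(frames, postures):
        if not recording and posture == start:
            recording = True             # 208a: start posture begins capture
        elif recording and posture == stop:
            break                        # 208b: termination posture ends capture
        elif recording:
            captured.append(frame)       # continuously obtain shot pictures
    return captured
```
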
In one embodiment, the shooting module 208 is further configured to trigger a shooting instruction when it is detected that the posture of the at least one target subject matches a preset target sub-posture, shooting to obtain sub-posture photos corresponding to the multiple preset target sub-postures.
In one embodiment, the shooting module 208 is further configured to trigger a shooting instruction when it is detected that the posture of the at least one target subject matches any one preset target posture in a preset target posture set.
In one embodiment, the shooting module 208 is further configured to, when multiple target subjects are detected, detect the postures of the multiple target subjects and trigger a shooting instruction when the postures of the multiple target subjects are detected to match the preset target posture simultaneously.
In one embodiment, the posture detection module 206 is further configured to detect the posture of the at least one target subject with the deep learning neural network model, obtaining the posture of the at least one target subject and a state parameter corresponding to that posture, where changes in the state parameter reflect state changes of the posture of the corresponding target subject.
In this embodiment, the shooting module 208 is further configured to trigger a shooting instruction when it is detected that the posture of the at least one target subject matches the preset target posture and the state parameter of the posture of the matched target subject satisfies the preset state parameter.
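The state-parameter condition in this embodiment can be sketched as follows: the shutter fires only when the posture matches the preset target and the posture's state parameter (for example, how far a jump has progressed) reaches a preset threshold. The threshold value and names are assumptions for illustration.

```python
def should_shoot(posture, state_param, target="jump", threshold=0.8):
    """Trigger only for the target posture at or beyond the preset state."""
    return posture == target and state_param >= threshold
```

This gates the trigger on the *progress* of a matching posture, so a jump is captured near its peak rather than at its onset.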
As shown in Figure 24, in one embodiment, the shooting module 208 includes:
a voice acquisition unit 208e, configured to obtain voice data of the at least one target subject;
a voice recognition unit 208f, configured to detect and recognize the voice data to obtain a corresponding speech recognition result; and
a voice-and-posture shooting unit 208g, configured to trigger a shooting instruction when it is detected that the posture of the at least one target subject matches the preset target posture and the speech recognition result matches the preset target voice data.
As shown in Figure 25, which is an internal structure diagram of a computer device in one embodiment, the computer device connects a processor, a non-volatile storage medium, an internal memory, and a network interface through a system bus. The non-volatile storage medium of the computer device can store an operating system and computer-readable instructions; when executed, the computer-readable instructions may cause the processor to perform an image capturing method. The processor of the computer device provides computing and control capability and supports the operation of the entire computer device. Computer-readable instructions may also be stored in the internal memory; when executed by the processor, they may cause the processor to perform an image capturing method. The network interface of the computer device is used for network communication, such as sending images and receiving stop control instructions. Those skilled in the art will understand that the structure shown in Figure 25 is merely a block diagram of the part of the structure relevant to the present solution and does not constitute a limitation on the computer device to which the present solution is applied; a specific computer device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
In one embodiment, the capturing device provided by the present application may be implemented in the form of a computer program, and the computer program may run on a computer device as shown in Figure 25. The non-volatile storage medium of the computer device may store the program modules constituting the capturing device, such as the image acquisition module 202 in Figure 17. Each program module includes computer-readable instructions for causing the computer device to perform the steps of the image capturing methods of the embodiments of the present application described in this specification. For example, the computer device may obtain an image acquired by an image acquisition device through the image acquisition module 202 shown in Figure 17; identify at least one target subject in the image through the target subject recognition and tracking module 204 and continuously track the posture changes of the at least one target subject; detect the posture of the at least one target subject with the trained deep learning neural network model through the posture detection module 206; and, through the shooting module 208, trigger a shooting instruction and complete shooting when the detected posture of the at least one target subject matches the preset target posture.
In one embodiment, a computer-readable storage medium is provided, storing a computer program that, when executed by a processor, causes the processor to perform the following steps: obtaining an image acquired by an image acquisition device; identifying at least one target subject in the image and continuously tracking the posture changes of the at least one target subject; detecting the posture of the at least one target subject with a trained deep learning neural network model; and triggering a shooting instruction when it is detected that the posture of the at least one target subject matches a preset target posture.
In one embodiment, identifying at least one target subject in the image and continuously tracking the posture changes of the at least one target subject includes: inputting the current image into a trained image recognition model, the trained image recognition model obtaining the historical position information of the at least one target subject in the history images corresponding to the current image; determining a predicted location area of the at least one target subject in the current image according to the historical position information; and outputting the current position information of the at least one target subject in the current image when the at least one target subject is detected within the predicted location area.
In one embodiment, before detecting the posture of the at least one target subject with the trained deep learning neural network model, the computer program further causes the processor to perform the following steps: inputting a training image set carrying posture labels into a deep learning neural network model; obtaining state data corresponding to the posture labels; taking the state data as the expected output of the deep learning neural network model and training the deep learning neural network model; and updating the parameters of the deep learning neural network model to obtain the trained deep learning neural network model.
In one embodiment, updating the parameters of the deep learning neural network model to obtain the trained deep learning neural network model includes: extracting posture features from each image in the image set to obtain a corresponding posture feature set; adjusting the weight of each posture feature in the posture feature set corresponding to each image; weighting the corresponding posture features according to the weight of each posture feature to obtain current state data; obtaining the target weight of each corresponding posture feature when the current state data and the expected state data satisfy a convergence condition; and obtaining the parameters of the deep learning neural network model according to the target weights, yielding the trained deep learning neural network model.
In one embodiment, detecting the posture of the at least one target subject with the trained deep learning neural network model includes: inputting the image region containing the at least one target subject into the trained deep learning neural network model; extracting posture features from the image region containing the at least one target subject to obtain the target posture feature set corresponding to the at least one target subject; weighting each posture feature in the posture feature set of the at least one target subject according to its weight to obtain corresponding target state data; and obtaining the target posture of the at least one target subject according to the correspondence between target state data and postures.
In one embodiment, before triggering the shooting instruction, the computer program further causes the processor to perform the following steps: continuing to obtain images acquired by the image acquisition device; entering the step of identifying at least one target subject in the image and continuously tracking the posture changes of the at least one target subject; triggering the shooting instruction again when it is detected that the posture of the at least one target subject matches the preset target posture; and repeatedly entering the step of continuing to obtain images acquired by the image acquisition device, completing continuous triggered shooting.
In one embodiment, the preset target posture includes a start posture and a termination posture, and triggering the shooting instruction when it is detected that the posture of the at least one target subject matches the preset target posture includes: triggering the shooting instruction and continuously obtaining the pictures shot by the image acquisition device when it is detected that the posture of the at least one target subject matches the start posture; and making the image acquisition device stop shooting pictures when it is detected that the posture of the at least one target subject matches the termination posture.
In one embodiment, the preset target posture includes multiple preset target sub-postures, and triggering the shooting instruction when it is detected that the posture of the at least one target subject matches the preset target posture includes: triggering the shooting instruction when it is detected that the posture of the at least one target subject matches a preset target sub-posture, shooting to obtain sub-posture photos corresponding to the multiple preset target sub-postures.
In one embodiment, multiple preset target postures form a preset target posture set, and triggering the shooting instruction when it is detected that the posture of the at least one target subject matches the preset target posture includes: triggering the shooting instruction when it is detected that the posture of the at least one target subject matches any preset target posture in the preset target posture set.
In one embodiment, triggering the shooting instruction when it is detected that the posture of the at least one target subject matches the preset target posture includes: when multiple target subjects are detected, detecting the postures of the multiple target subjects, and triggering the shooting instruction when the postures of the multiple target subjects are detected to match the preset target posture simultaneously.
In one embodiment, detecting the posture of the at least one target subject with the deep learning neural network model and triggering the shooting instruction when it is detected that the posture of the at least one target subject matches the preset target posture includes: detecting the posture of the at least one target subject with the deep learning neural network model, obtaining the posture of the at least one target subject and a state parameter corresponding to that posture, where changes in the state parameter reflect state changes of the posture of the corresponding target subject; and triggering the shooting instruction when it is detected that the posture of the at least one target subject matches the preset target posture and the state parameter of the posture of the matched target subject satisfies a preset threshold.
In one embodiment, triggering the shooting instruction when it is detected that the posture of the at least one target subject matches the preset target posture includes: obtaining voice data of the at least one target subject; detecting and recognizing the voice data to obtain a corresponding speech recognition result; and triggering the shooting instruction when it is detected that the posture of the at least one target subject matches the preset target posture and the speech recognition result matches the preset target voice data.
In one embodiment, a computer device is provided, including a memory and a processor, the memory storing a computer program that, when executed by the processor, causes the processor to perform the following steps: obtaining an image acquired by an image acquisition device; identifying at least one target subject in the image and continuously tracking the posture changes of the at least one target subject; detecting the posture of the at least one target subject with a trained deep learning neural network model; and triggering a shooting instruction when it is detected that the posture of the at least one target subject matches a preset target posture.
In one embodiment, identifying at least one target subject in the image and continuously tracking the posture changes of the at least one target subject includes: inputting the current image into a trained image recognition model, the trained image recognition model obtaining the historical position information of the at least one target subject in the history images corresponding to the current image; determining a predicted location area of the at least one target subject in the current image according to the historical position information; and outputting the current position information of the at least one target subject in the current image when the at least one target subject is detected within the predicted location area.
In one embodiment, before detecting the posture of the at least one target subject with the trained deep learning neural network model, the computer program further causes the processor to perform the following steps: inputting a training image set carrying posture labels into a deep learning neural network model; obtaining state data corresponding to the posture labels; taking the state data as the expected output of the deep learning neural network model and training the deep learning neural network model; and updating the parameters of the deep learning neural network model to obtain the trained deep learning neural network model.
In one embodiment, updating the parameters of the deep learning neural network model to obtain the trained deep learning neural network model includes: extracting posture features from each image in the image set to obtain a corresponding posture feature set; adjusting the weight of each posture feature in the posture feature set corresponding to each image; weighting the corresponding posture features according to the weight of each posture feature to obtain current state data; obtaining the target weight of each corresponding posture feature when the current state data and the expected state data satisfy a convergence condition; and obtaining the parameters of the deep learning neural network model according to the target weights, yielding the trained deep learning neural network model.
In one embodiment, detecting the posture of the at least one target subject with the trained deep learning neural network model includes: inputting the image region containing the at least one target subject into the trained deep learning neural network model; extracting posture features from the image region containing the at least one target subject to obtain the target posture feature set corresponding to the at least one target subject; weighting each posture feature in the posture feature set of the at least one target subject according to its weight to obtain corresponding target state data; and obtaining the target posture of the at least one target subject according to the correspondence between target state data and postures.
In one embodiment, before triggering the shooting instruction, the computer program further causes the processor to perform the following steps: continuing to obtain images acquired by the image acquisition device; entering the step of identifying at least one target subject in the image and continuously tracking the posture changes of the at least one target subject; triggering the shooting instruction again when it is detected that the posture of the at least one target subject matches the preset target posture; and repeatedly entering the step of continuing to obtain images acquired by the image acquisition device, completing continuous triggered shooting.
In one embodiment, the preset target posture includes a start posture and a termination posture, and triggering the shooting instruction when it is detected that the posture of the at least one target subject matches the preset target posture includes: triggering the shooting instruction and continuously obtaining the pictures shot by the image acquisition device when it is detected that the posture of the at least one target subject matches the start posture; and making the image acquisition device stop shooting pictures when it is detected that the posture of the at least one target subject matches the termination posture.
In one embodiment, the preset target posture includes multiple preset target sub-postures, and triggering the shooting instruction when it is detected that the posture of the at least one target subject matches the preset target posture includes: triggering the shooting instruction when it is detected that the posture of the at least one target subject matches a preset target sub-posture, shooting to obtain sub-posture photos corresponding to the multiple preset target sub-postures.
In one embodiment, multiple preset target postures form a preset target posture set, and triggering the shooting instruction when it is detected that the posture of the at least one target subject matches the preset target posture includes: triggering the shooting instruction when it is detected that the posture of the at least one target subject matches any one preset target posture in the preset target posture set.
In one embodiment, triggering the shooting instruction when it is detected that the posture of the at least one target subject matches the preset target posture includes: when multiple target subjects are detected, detecting the postures of the multiple target subjects, and triggering the shooting instruction when the postures of the multiple target subjects are detected to match the preset target posture simultaneously.
In one embodiment, detecting the posture of the at least one target subject with the deep learning neural network model and triggering the shooting instruction when it is detected that the posture of the at least one target subject matches the preset target posture includes: detecting the posture of the at least one target subject with the deep learning neural network model, obtaining the posture of the at least one target subject and a state parameter corresponding to that posture, where changes in the state parameter reflect state changes of the posture of the corresponding target subject; and triggering the shooting instruction when it is detected that the posture of the at least one target subject matches the preset target posture and the state parameter of the posture of the matched target subject satisfies a preset threshold.
In one embodiment, triggering the shooting instruction when it is detected that the posture of the at least one target subject matches the preset target posture includes: obtaining voice data of the at least one target subject; detecting and recognizing the voice data to obtain a corresponding speech recognition result; and triggering the shooting instruction when it is detected that the posture of the at least one target subject matches the preset target posture and the speech recognition result matches the preset target voice data.
Those of ordinary skill in the art will understand that all or part of the flows in the above embodiment methods may be implemented by a computer program instructing the relevant hardware. The program may be stored in a non-volatile computer-readable storage medium, and when executed may include the flows of the embodiments of the above methods. Any reference to memory, storage, a database, or other media used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments may be combined arbitrarily. For brevity of description, not all possible combinations of the technical features in the above embodiments have been described; however, as long as there is no contradiction in a combination of these technical features, it should be considered within the scope of this specification.
The above embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not therefore be construed as limiting the scope of the claims of the application. It should be pointed out that those of ordinary skill in the art may make various modifications and improvements without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of this patent application shall be determined by the appended claims.

Claims (15)

1. An image capturing method, the method comprising:
obtaining an image acquired by an image acquisition device;
identifying at least one target subject in the image, and continuously tracking pose changes of the at least one target subject;
detecting a pose of the at least one target subject by a trained deep learning neural network model; and
triggering a shooting instruction when the detected pose of the at least one target subject matches a preset target pose.
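Not part of the claims — a minimal Python sketch of the claim-1 flow (capture, identify subjects, detect pose, trigger on match). The `detect_pose` stub stands in for the trained deep learning model; all names and pose labels are illustrative assumptions.

```python
PRESET_POSE = "v_sign"

def detect_pose(subject):
    # Stub for the trained deep learning neural network model, which would
    # map an image region of the subject to a pose label.
    return subject.get("pose")

def process_frame(frame, preset_pose=PRESET_POSE):
    """Return True (i.e. trigger the shooting instruction) if any tracked
    target subject's detected pose matches the preset target pose."""
    return any(detect_pose(s) == preset_pose for s in frame["subjects"])

frame = {"subjects": [{"pose": "standing"}, {"pose": "v_sign"}]}
assert process_frame(frame) is True
```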
2. The method according to claim 1, wherein the step of identifying at least one target subject in the image and continuously tracking pose changes of the at least one target subject comprises:
inputting a current image into a trained image recognition model, the trained image recognition model obtaining historical position information of the at least one target subject in a history image corresponding to the current image;
determining a predicted location area of the at least one target subject in the current image according to the historical position information; and
outputting current position information of the at least one target subject in the current image when the at least one target subject is detected within the predicted location area.
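Not part of the claims — an illustrative sketch of the claim-2 idea: historical positions predict a search region in the current frame, and a detection inside that region yields the current position. The constant-velocity extrapolation and the margin value are assumptions for illustration only.

```python
def predict_region(history, margin=20):
    """Extrapolate the next (x, y) center from the last two historical
    positions and pad it into a predicted location area."""
    (x0, y0), (x1, y1) = history[-2], history[-1]
    cx, cy = x1 + (x1 - x0), y1 + (y1 - y0)   # constant-velocity prediction
    return (cx - margin, cy - margin, cx + margin, cy + margin)

def locate_in_region(detections, region):
    """Return the first detection inside the predicted region, else None
    (the returned point is the subject's current position information)."""
    x_min, y_min, x_max, y_max = region
    for (x, y) in detections:
        if x_min <= x <= x_max and y_min <= y <= y_max:
            return (x, y)
    return None

history = [(100, 100), (110, 105)]
region = predict_region(history)              # (100, 90, 140, 130)
assert locate_in_region([(300, 300), (118, 112)], region) == (118, 112)
```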
3. The method according to claim 1, wherein before the step of detecting the pose of the at least one target subject by the trained deep learning neural network model, the method further comprises:
inputting a training image set carrying pose labels into a deep learning neural network model;
obtaining state data corresponding to the pose labels;
training the deep learning neural network model with the state data as its expected output; and
updating parameters of the deep learning neural network model to obtain the trained deep learning neural network model.
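Not part of the claims — a toy sketch of the claim-3 training setup: pose-labelled samples map to state data used as the expected output, and a parameter is updated against it. A one-parameter linear model stands in for the deep learning neural network; the label-to-state mapping is an assumption.

```python
LABEL_TO_STATE = {"standing": 0.0, "v_sign": 1.0}   # state data per pose label

def train(samples, lr=0.1, epochs=200):
    """samples: list of (feature, pose_label). Fits y = w * feature so the
    output approaches the state data corresponding to each pose label."""
    w = 0.0
    for _ in range(epochs):
        for feature, label in samples:
            expected = LABEL_TO_STATE[label]         # expected output
            w += lr * (expected - w * feature) * feature   # parameter update
    return w

w = train([(1.0, "v_sign"), (0.0, "standing")])
assert abs(w - 1.0) < 1e-6
```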
4. The method according to claim 3, wherein the step of updating the parameters of the deep learning neural network model to obtain the trained deep learning neural network model comprises:
performing pose feature extraction on each image in the image set to obtain a corresponding pose feature set;
adjusting a weight of each pose feature in the pose feature set corresponding to each image;
weighting the corresponding pose features according to the weight of each pose feature to obtain current state data;
obtaining a target weight of each corresponding pose feature when the current state data and the expected state data meet a convergence condition; and
obtaining the parameters of the deep learning neural network model according to the target weights, thereby obtaining the trained deep learning neural network model.
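Not part of the claims — an illustrative sketch of the claim-4 loop: pose feature weights are adjusted until the weighted sum (current state data) converges to the expected state data, yielding the target weights. Feature values, learning rate, and the convergence tolerance are assumptions.

```python
def fit_weights(features, expected, lr=0.05, tol=1e-4, max_iter=10000):
    """Adjust one weight per pose feature until the weighted sum of the
    features (the current state data) converges to the expected state data."""
    weights = [0.0] * len(features)
    for _ in range(max_iter):
        current = sum(w * f for w, f in zip(weights, features))
        error = expected - current
        if abs(error) < tol:          # convergence condition met
            return weights            # target weights of each pose feature
        weights = [w + lr * error * f for w, f in zip(weights, features)]
    return weights

target = fit_weights([0.5, 1.5], expected=2.0)
assert abs(sum(w * f for w, f in zip(target, [0.5, 1.5])) - 2.0) < 1e-3
```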
5. The method according to claim 1, wherein the step of detecting the pose of the at least one target subject by the trained deep learning neural network model comprises:
inputting an image region containing the at least one target subject into the trained deep learning neural network model;
performing pose feature extraction on the image region containing the at least one target subject to obtain a target pose feature set corresponding to the at least one target subject;
weighting each pose feature in the pose feature set of the at least one target subject according to the weight of each pose feature to obtain corresponding target state data; and
obtaining a target pose of the at least one target subject according to a correspondence between state data and poses.
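Not part of the claims — a sketch of the claim-5 detection path: weighted pose features produce target state data, which a state-to-pose correspondence table maps to the target pose. The numeric values and the nearest-state quantization are illustrative assumptions.

```python
STATE_TO_POSE = {0: "standing", 1: "v_sign"}   # correspondence of state data and pose

def classify(features, weights):
    """Weight the extracted pose features into target state data, then look
    up the target pose through the state-to-pose correspondence."""
    state = sum(w * f for w, f in zip(weights, features))
    key = round(state)                          # quantize to the nearest known state
    return STATE_TO_POSE.get(key, "unknown")

assert classify([0.4, 0.6], weights=[1.0, 1.0]) == "v_sign"
assert classify([0.1, 0.1], weights=[1.0, 1.0]) == "standing"
```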
6. The method according to claim 1, wherein after the step of triggering the shooting instruction, the method further comprises:
continuing to obtain images acquired by the image acquisition device;
returning to the step of identifying at least one target subject in the image and continuously tracking pose changes of the at least one target subject, and triggering the shooting instruction again when the detected pose of the at least one target subject matches the preset target pose; and
repeating the step of continuing to obtain images acquired by the image acquisition device, so as to trigger shooting continuously.
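Not part of the claims — a sketch of the claim-6 loop: after each trigger, frames keep being acquired and detection repeats, so shooting is triggered continuously. Frames are simulated as plain dictionaries.

```python
def continuous_capture(frames, preset_pose="v_sign"):
    """Count how many shooting instructions fire over a stream of frames."""
    shots = 0
    for frame in frames:                    # continue obtaining acquired images
        poses = [s["pose"] for s in frame["subjects"]]
        if preset_pose in poses:            # pose matches the preset target pose
            shots += 1                      # trigger the shooting instruction again
    return shots

stream = [{"subjects": [{"pose": "v_sign"}]},
          {"subjects": [{"pose": "standing"}]},
          {"subjects": [{"pose": "v_sign"}]}]
assert continuous_capture(stream) == 2
```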
7. The method according to claim 1, wherein the preset target pose comprises a start pose and a stop pose, and the step of triggering the shooting instruction when the detected pose of the at least one target subject matches the preset target pose comprises:
triggering the shooting instruction and continuing to obtain pictures shot by the image acquisition device when the detected pose of the at least one target subject matches the start pose; and
making the image acquisition device stop shooting pictures when the detected pose of the at least one target subject matches the stop pose.
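Not part of the claims — a sketch of the claim-7 state machine: a start pose begins capturing pictures and a stop pose ends it, so a pose sequence delimits a recording interval. The pose names are illustrative assumptions.

```python
def record_interval(poses, start_pose="hand_up", stop_pose="hand_down"):
    """Return the frame indices captured between the start and stop poses."""
    recording, captured = False, []
    for i, pose in enumerate(poses):
        if not recording and pose == start_pose:
            recording = True          # start pose matched: trigger shooting
        elif recording and pose == stop_pose:
            recording = False         # stop pose matched: stop shooting pictures
        elif recording:
            captured.append(i)        # keep obtaining shot pictures
    return captured

assert record_interval(["idle", "hand_up", "smile", "smile", "hand_down", "idle"]) == [2, 3]
```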
8. The method according to claim 1, wherein the preset target pose comprises a plurality of preset target sub-poses, and the step of triggering the shooting instruction when the detected pose of the at least one target subject matches the preset target pose comprises:
triggering the shooting instruction when the detected pose of the at least one target subject matches a preset target sub-pose, and shooting to obtain sub-pose photos corresponding to the plurality of preset target sub-poses.
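Not part of the claims — a sketch of the claim-8 behavior: each preset target sub-pose, once matched, yields its own photo, so a session collects one photo per matched sub-pose. The sub-pose names are illustrative.

```python
SUB_POSES = ["v_sign", "thumbs_up"]   # plurality of preset target sub-poses

def collect_sub_pose_photos(detected_poses, sub_poses=SUB_POSES):
    """Return {sub_pose: frame_index} for the first frame matching each sub-pose."""
    photos = {}
    for i, pose in enumerate(detected_poses):
        if pose in sub_poses and pose not in photos:
            photos[pose] = i          # trigger shooting: a photo for this sub-pose
    return photos

assert collect_sub_pose_photos(["idle", "v_sign", "thumbs_up", "v_sign"]) == \
    {"v_sign": 1, "thumbs_up": 2}
```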
9. The method according to claim 1, wherein a plurality of preset target poses form a preset target pose set, and the step of triggering the shooting instruction when the detected pose of the at least one target subject matches the preset target pose comprises:
triggering the shooting instruction when the detected pose of the at least one target subject matches any one of the preset target poses in the preset target pose set.
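Not part of the claims — a one-function sketch of the claim-9 rule: a match against any member of the preset target pose set triggers the shot. The set contents are illustrative.

```python
PRESET_POSE_SET = {"v_sign", "thumbs_up", "wave"}   # preset target pose set

def matches_any(pose, pose_set=PRESET_POSE_SET):
    """True when the detected pose matches any preset target pose in the set."""
    return pose in pose_set

assert matches_any("wave") is True
assert matches_any("standing") is False
```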
10. The method according to claim 1, wherein the step of triggering the shooting instruction when the detected pose of the at least one target subject matches the preset target pose comprises:
detecting poses of a plurality of target subjects when the target subjects comprise the plurality of target subjects; and
triggering the shooting instruction when the detected poses of the plurality of target subjects simultaneously match the preset target pose.
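Not part of the claims — a sketch of the claim-10 condition: with multiple target subjects, the shot fires only when every subject's detected pose matches the preset target pose at the same time. Pose names are illustrative.

```python
def all_subjects_match(poses, preset_pose="v_sign"):
    """poses: the detected pose of each target subject in the current frame.
    True only when all subjects simultaneously match the preset target pose."""
    return bool(poses) and all(p == preset_pose for p in poses)

assert all_subjects_match(["v_sign", "v_sign"]) is True
assert all_subjects_match(["v_sign", "standing"]) is False
```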
11. The method according to claim 1, wherein the steps of detecting the pose of the at least one target subject by the deep learning neural network model and triggering the shooting instruction when the detected pose of the at least one target subject matches the preset target pose comprise:
detecting, by the deep learning neural network model, the pose of the at least one target subject to obtain the pose of the at least one target subject and a state parameter corresponding to that pose, wherein a change in the state parameter reflects a state change of the pose of the target subject corresponding to the state parameter; and
triggering the shooting instruction when the detected pose of the at least one target subject matches the preset target pose and the state parameter of the pose of the target subject matching the preset target pose meets a preset threshold.
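Not part of the claims — a sketch of the claim-11 double condition: the pose must match and its state parameter (e.g. how fully the pose is formed) must meet a preset threshold. The 0-to-1 parameter scale and the threshold value are illustrative assumptions.

```python
def should_trigger(detections, preset_pose="jump", threshold=0.8):
    """detections: list of (pose, state_parameter) per target subject.
    Trigger only on a matched pose whose state parameter meets the threshold."""
    for pose, state_param in detections:
        if pose == preset_pose and state_param >= threshold:
            return True
    return False

assert should_trigger([("jump", 0.9)]) is True
assert should_trigger([("jump", 0.5)]) is False   # pose matches, state too low
```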
12. The method according to claim 1, wherein the step of triggering the shooting instruction when the detected pose of the at least one target subject matches the preset target pose comprises:
obtaining voice data of the at least one target subject;
performing detection and recognition on the voice data to obtain a corresponding voice recognition result; and
triggering the shooting instruction when the detected pose of the at least one target subject matches the preset target pose and the voice recognition result matches preset target voice data.
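Not part of the claims — a sketch of the claim-12 combined trigger: both the pose match and the speech-recognition result must agree before shooting. The keyword "cheese" and the substring match are illustrative assumptions.

```python
def voice_and_pose_trigger(pose, speech_text,
                           preset_pose="v_sign", preset_phrase="cheese"):
    """Trigger only when the detected pose matches the preset target pose AND
    the voice recognition result matches the preset target voice data."""
    pose_ok = (pose == preset_pose)
    voice_ok = (preset_phrase in speech_text.lower())
    return pose_ok and voice_ok

assert voice_and_pose_trigger("v_sign", "Say CHEESE!") is True
assert voice_and_pose_trigger("v_sign", "hello") is False
```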
13. An image capturing apparatus, wherein the apparatus comprises:
an image acquisition module, configured to obtain an image acquired by an image acquisition device;
a target subject recognition and tracking module, configured to identify at least one target subject in the image and continuously track pose changes of the at least one target subject;
a pose detection module, configured to detect a pose of the at least one target subject by a trained deep learning neural network model; and
a shooting module, configured to trigger a shooting instruction when the detected pose of the at least one target subject matches a preset target pose.
14. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, causes the processor to perform the steps of the method according to any one of claims 1 to 12.
15. A computer device comprising a memory and a processor, the memory storing a computer program, wherein the computer program, when executed by the processor, causes the processor to perform the steps of the method according to any one of claims 1 to 12.
CN201810122474.1A 2018-02-07 2018-02-07 Image shooting method and device, computer equipment and storage medium Active CN108307116B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810122474.1A CN108307116B (en) 2018-02-07 2018-02-07 Image shooting method and device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810122474.1A CN108307116B (en) 2018-02-07 2018-02-07 Image shooting method and device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN108307116A true CN108307116A (en) 2018-07-20
CN108307116B CN108307116B (en) 2022-03-29

Family

ID=62864537

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810122474.1A Active CN108307116B (en) 2018-02-07 2018-02-07 Image shooting method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN108307116B (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104767940A (en) * 2015-04-14 2015-07-08 深圳市欧珀通信软件有限公司 Photography method and device
US20170060251A1 (en) * 2015-09-01 2017-03-02 Samsung Electronics Co., Ltd. System and Method for Operating a Mobile Device Using Motion Gestures
CN107370942A (en) * 2017-06-30 2017-11-21 广东欧珀移动通信有限公司 Photographing method and device, storage medium, and terminal
CN107635095A (en) * 2017-09-20 2018-01-26 广东欧珀移动通信有限公司 Photo shooting method and apparatus, storage medium, and shooting device


Cited By (36)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109259726A (en) * 2018-08-19 2019-01-25 天津大学 Automatic tongue-image capture method based on bidirectional imaging
CN109241949A (en) * 2018-10-19 2019-01-18 珠海格力电器股份有限公司 Image processing method and air-conditioning equipment, terminal, storage medium, electronic device
CN109194879B (en) * 2018-11-19 2021-09-07 Oppo广东移动通信有限公司 Photographing method, photographing device, storage medium and mobile terminal
CN109194879A (en) * 2018-11-19 2019-01-11 Oppo广东移动通信有限公司 Photographing method and device, storage medium, and mobile terminal
CN109740593A (en) * 2018-12-18 2019-05-10 全球能源互联网研究院有限公司 Method and device for determining the position of at least one preset target in a sample
CN109658323A (en) * 2018-12-19 2019-04-19 北京旷视科技有限公司 Image acquiring method, device, electronic equipment and computer storage medium
CN109922266A (en) * 2019-03-29 2019-06-21 睿魔智能科技(深圳)有限公司 Snapshot method and system applied to video shooting, camera, and storage medium
CN109922266B (en) * 2019-03-29 2021-04-06 睿魔智能科技(深圳)有限公司 Snapshot method and system applied to video shooting, camera and storage medium
CN111787215A (en) * 2019-04-03 2020-10-16 阿里巴巴集团控股有限公司 Shooting method and device, electronic equipment and storage medium
US11037301B2 (en) 2019-04-10 2021-06-15 Neusoft Corporation Target object detection method, readable storage medium, and electronic device
CN110110604A (en) * 2019-04-10 2019-08-09 东软集团股份有限公司 Target object detection method and device, readable storage medium, and electronic device
CN110336939A (en) * 2019-05-29 2019-10-15 努比亚技术有限公司 Snapshot control method, wearable device, and computer-readable storage medium
CN110210406A (en) * 2019-06-04 2019-09-06 北京字节跳动网络技术有限公司 Method and apparatus for shooting image
WO2021022983A1 (en) * 2019-08-07 2021-02-11 Oppo广东移动通信有限公司 Image processing method and apparatus, electronic device and computer-readable storage medium
CN110650291A (en) * 2019-10-23 2020-01-03 Oppo广东移动通信有限公司 Target focus tracking method and device, electronic equipment and computer readable storage medium
CN110650291B (en) * 2019-10-23 2021-06-08 Oppo广东移动通信有限公司 Target focus tracking method and device, electronic equipment and computer readable storage medium
CN110677592A (en) * 2019-10-31 2020-01-10 Oppo广东移动通信有限公司 Subject focusing method and device, computer equipment and storage medium
CN110677592B (en) * 2019-10-31 2022-06-10 Oppo广东移动通信有限公司 Subject focusing method and device, computer equipment and storage medium
CN113114924A (en) * 2020-01-13 2021-07-13 北京地平线机器人技术研发有限公司 Image shooting method and device, computer readable storage medium and electronic equipment
CN112752016A (en) * 2020-02-14 2021-05-04 腾讯科技(深圳)有限公司 Shooting method, shooting device, computer equipment and storage medium
CN111263073B (en) * 2020-02-27 2021-11-09 维沃移动通信有限公司 Image processing method and electronic device
CN111263073A (en) * 2020-02-27 2020-06-09 维沃移动通信有限公司 Image processing method and electronic device
CN111967404A (en) * 2020-08-20 2020-11-20 苏州凝眸物联科技有限公司 Automatic snapshot method for specific scene
CN114520795B (en) * 2020-11-19 2024-02-09 腾讯科技(深圳)有限公司 Group creation method, group creation device, computer device and storage medium
CN114520795A (en) * 2020-11-19 2022-05-20 腾讯科技(深圳)有限公司 Group creation method, group creation device, computer equipment and storage medium
CN112700344A (en) * 2020-12-22 2021-04-23 成都睿畜电子科技有限公司 Farm management method, farm management device, farm management medium and farm management equipment
CN112843690B (en) * 2020-12-31 2023-05-12 上海米哈游天命科技有限公司 Shooting method, shooting device, shooting equipment and storage medium
CN112843722A (en) * 2020-12-31 2021-05-28 上海米哈游天命科技有限公司 Shooting method, device, equipment and storage medium
CN112843690A (en) * 2020-12-31 2021-05-28 上海米哈游天命科技有限公司 Shooting method, device, equipment and storage medium
CN112843693A (en) * 2020-12-31 2021-05-28 上海米哈游天命科技有限公司 Method and device for shooting image, electronic equipment and storage medium
CN112843693B (en) * 2020-12-31 2023-12-29 上海米哈游天命科技有限公司 Method and device for shooting image, electronic equipment and storage medium
CN112750437A (en) * 2021-01-04 2021-05-04 欧普照明股份有限公司 Control method, control device and electronic equipment
CN112967289A (en) * 2021-02-08 2021-06-15 上海西井信息科技有限公司 Security check package matching method, system, equipment and storage medium
CN113395452B (en) * 2021-06-24 2023-02-03 上海卓易科技股份有限公司 Automatic shooting method
CN113395452A (en) * 2021-06-24 2021-09-14 上海卓易科技股份有限公司 Automatic shooting method
CN115227273A (en) * 2022-08-26 2022-10-25 江西中科九峰智慧医疗科技有限公司 Method, apparatus, device and medium for controlling imaging process of DR equipment

Also Published As

Publication number Publication date
CN108307116B (en) 2022-03-29

Similar Documents

Publication Publication Date Title
CN108307116A (en) Image capturing method, device, computer equipment and storage medium
US11045705B2 (en) Methods and systems for 3D ball trajectory reconstruction
US10748376B2 (en) Real-time game tracking with a mobile device using artificial intelligence
CN108234870B (en) Image processing method, device, terminal and storage medium
CN109934115B (en) Face recognition model construction method, face recognition method and electronic equipment
CN110321754B (en) Human motion posture correction method and system based on computer vision
CN110503077B (en) Real-time human body action analysis method based on vision
CN110192168A (en) Unmanned aerial vehicle photographing method, image processing method, and device
US20130251246A1 (en) Method and a device for training a pose classifier and an object classifier, a method and a device for object detection
CN109063584B (en) Facial feature point positioning method, device, equipment and medium based on cascade regression
US10796448B2 (en) Methods and systems for player location determination in gameplay with a mobile device
CN112487965B (en) Intelligent fitness action guiding method based on 3D reconstruction
JP2012518236A (en) Method and system for gesture recognition
CN107341442A (en) Motion control method, device, computer equipment and service robot
CN110574040A (en) Automatic snapshot method and device, unmanned aerial vehicle and storage medium
CN108805058A (en) Method, device, and computer equipment for recognizing pose changes of a target object
CN109297489B (en) Indoor navigation method based on user characteristics, electronic equipment and storage medium
CN107690305A (en) Generating images from video
KR102594938B1 (en) Apparatus and method for comparing and correcting sports posture using neural network
CN109448025A (en) Automatic tracking and trajectory modeling of short-track speed skaters in video
CN109117753A (en) Position recognition method, device, terminal, and storage medium
CN112422946A (en) Intelligent yoga action guidance system based on 3D reconstruction
CN113435355A (en) Multi-target cow identity identification method and system
CN115331314A (en) Exercise effect evaluation method and system based on APP screening function
CN117061857A (en) Unmanned aerial vehicle automatic shooting method and device, unmanned aerial vehicle and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant