CN110007755A - Object event triggering method and apparatus based on action recognition, and related device - Google Patents

Object event triggering method and apparatus based on action recognition, and related device

Info

Publication number
CN110007755A
CN110007755A
Authority
CN
China
Prior art keywords
action
target object
video
classification
event
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910199231.2A
Other languages
Chinese (zh)
Inventor
雷超兵
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Baidu Online Network Technology Beijing Co Ltd
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN201910199231.2A
Publication of CN110007755A
Legal status: Pending (current)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03 Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/033 Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
    • G06F3/0346 Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser

Abstract

The invention discloses an object event triggering method and apparatus based on action recognition, an electronic device, and a storage medium. The method comprises: obtaining video collected for a target area; performing recognition processing on the action behavior of a target object in the video to determine the action category corresponding to the target object; and triggering a corresponding object event according to the action category corresponding to the target object. The method makes full use of the inherently diverse categories of object events, can trigger a larger variety of object events, and realizes object event triggering with vision-based processing, improving the accuracy of object event triggering.

Description

Object event triggering method and apparatus based on action recognition, and related device
Technical field
The present invention relates to the field of computer technology, and more particularly to an object event triggering method and apparatus based on action recognition, an electronic device, and a storage medium.
Background art
With the development of the Internet, and against the background of intelligent technologies emerging one after another, the "unmanned" concept is being promoted ever more comprehensively. Unmanned retail, as a new industry, relies mainly on the Internet while applying advanced technological means such as big data and artificial intelligence, so that the consumer's shopping experience is significantly improved compared with the traditional shopping experience.
In the related art, a corresponding object event is mainly triggered by a gravity sensor or by detecting the quantity of objects. In the gravity sensor approach, gravity sensors are installed on the shelves, and whether an object has been taken or put back is judged from the change in the load borne by the sensors, thereby triggering the object event. In the quantity detection approach, the number of objects on the shelves is measured in real time to determine whether an object has been taken or put back, thereby triggering the corresponding object event.
However, the problems that currently exist are these: the gravity sensor approach cannot achieve precise recognition when the sensor faces objects of similar weight, and environmental influences such as liquid sloshing also change the measured weight, so there are a large number of false triggers, leading to inaccurate object event triggering; the quantity detection approach can only detect that an object has been removed or put back, so the inherently diverse categories of object events are not fully exploited, and the variety of object events that can be triggered becomes small.
Summary of the invention
The purpose of the present invention is to solve at least one of the technical problems in the related art.
To this end, a first object of the present invention is to propose an object event triggering method based on action recognition. The method makes full use of the inherently diverse categories of object events, can trigger a larger variety of object events, and realizes object event triggering with vision-based processing, improving the accuracy of object event triggering.
A second object of the present invention is to propose an object event triggering apparatus based on action recognition.
A third object of the present invention is to propose an electronic device.
A fourth object of the present invention is to propose a computer-readable storage medium.
To achieve the above objects, an embodiment of the first aspect of the present invention proposes an object event triggering method based on action recognition, comprising: obtaining video collected for a target area; performing recognition processing on the action behavior of a target object in the video to determine the action category corresponding to the target object; and triggering a corresponding object event according to the action category corresponding to the target object.
According to the object event triggering method based on action recognition of the embodiment of the present invention, video collected for a target area can be obtained; recognition processing is then performed on the action behavior of a target object in the video to determine the action category corresponding to the target object; and a corresponding object event is afterwards triggered according to that action category. The method makes full use of the inherently diverse categories of object events, can trigger a larger variety of object events, and realizes object event triggering with vision-based processing, improving the accuracy of object event triggering.
To achieve the above objects, an embodiment of the second aspect of the present invention proposes an object event triggering apparatus based on action recognition, comprising: a video acquiring module for obtaining video collected for a target area; an action recognition module for performing recognition processing on the action behavior of a target object in the video to determine the action category corresponding to the target object; and an event trigger module for triggering a corresponding object event according to the action category corresponding to the target object.
According to the object event triggering apparatus based on action recognition of the embodiment of the present invention, video collected for a target area can be obtained; recognition processing is then performed on the action behavior of a target object in the video to determine the action category corresponding to the target object; and a corresponding object event is afterwards triggered according to that action category. The apparatus makes full use of the inherently diverse categories of object events, can trigger a larger variety of object events, and realizes object event triggering with vision-based processing, improving the accuracy of object event triggering.
To achieve the above objects, an embodiment of the third aspect of the present invention proposes an electronic device, comprising a memory, a processor, and a computer program stored on the memory and runnable on the processor; when the processor executes the computer program, the object event triggering method based on action recognition of the embodiment of the first aspect of the present invention is realized.
To achieve the above objects, an embodiment of the fourth aspect of the present invention proposes a computer-readable storage medium; when the computer program stored thereon is executed by a processor, the object event triggering method based on action recognition of the embodiment of the first aspect of the present invention is realized.
Additional aspects and advantages of the present invention will be set forth in part in the following description, and will in part become apparent from the description or be learned through practice of the invention.
Brief description of the drawings
The above and/or additional aspects and advantages of the present invention will become apparent and readily understood from the following description of embodiments taken in conjunction with the accompanying drawings, in which:
Fig. 1 is a flowchart of an object event triggering method based on action recognition according to an embodiment of the present invention.
Fig. 2 is a flowchart of an object event triggering method based on action recognition according to a specific embodiment of the present invention.
Fig. 3 is a flowchart of obtaining an action recognition model according to an embodiment of the present invention.
Fig. 4 is a structural schematic diagram of an object event triggering apparatus based on action recognition according to an embodiment of the present invention.
Fig. 5 is a structural schematic diagram of an object event triggering apparatus based on action recognition according to an embodiment of the present invention.
Fig. 6 is a structural schematic diagram of an object event triggering apparatus based on action recognition according to an embodiment of the present invention.
Fig. 7 is a structural schematic diagram of an electronic device according to an embodiment of the present invention.
Specific embodiments
Embodiments of the present invention are described in detail below, and examples of the embodiments are shown in the accompanying drawings, in which the same or similar reference numerals throughout denote the same or similar elements, or elements having the same or similar functions. The embodiments described below with reference to the accompanying drawings are exemplary and are intended to explain the present invention; they are not to be construed as limiting the present invention.
In the related art, a corresponding object event is mainly triggered by a gravity sensor or by detecting the quantity of objects. In the gravity sensor approach, gravity sensors are installed on the shelves, and whether an object has been taken or put back is judged from the change in the load borne by the sensors, thereby triggering the object event. In the quantity detection approach, the number of objects on the shelves is measured in real time to determine whether an object has been taken or put back, thereby triggering the corresponding object event.
However, the problems that currently exist are these: the gravity sensor approach cannot achieve precise recognition when the sensor faces objects of similar weight, and environmental influences such as liquid sloshing also change the measured weight, so there are a large number of false triggers, leading to inaccurate object event triggering; the quantity detection approach can only detect that an object has been removed or put back, so the inherently diverse categories of object events are not fully exploited, and the variety of object events that can be triggered becomes small.
To this end, the present invention proposes an object event triggering method and apparatus based on action recognition, an electronic device, and a storage medium. The present invention determines the action category corresponding to a target object by performing recognition processing on the action behavior of the target object in a video, and can then trigger a corresponding object event according to that action category. It can thus be seen that the present invention is insensitive to the weight of objects, makes full use of the inherently diverse categories of object events, can trigger more categories of object events, and realizes object event triggering with vision-based processing, improving the accuracy of object event triggering. The whole process relies only on the vision devices arranged in the scene, with no need for hardware such as weight sensors, which reduces hardware complexity.
Specifically, the object event triggering method and apparatus based on action recognition, the electronic device, and the computer-readable storage medium of the embodiments of the present invention are described below with reference to the accompanying drawings.
Fig. 1 is a flowchart of an object event triggering method based on action recognition according to an embodiment of the present invention. It should be noted that the object event triggering method based on action recognition of the embodiment of the present invention can be applied to the object event triggering apparatus based on action recognition of the embodiment of the present invention, and the apparatus can be configured on an electronic device.
As shown in Fig. 1, the object event triggering method based on action recognition may include:
S110: obtaining video collected for a target area.
It should be noted that the object event triggering method based on action recognition of the embodiment of the present invention is suitable for unmanned retail scenarios. For example, equipment with a video monitoring system can be arranged in an unmanned retail store: multiple cameras can be set up so that every region in the store can be captured, and the cameras acquire video monitoring data across the whole scene. By monitoring or performing recognition processing on the video monitoring data, the safety of the persons and property in the unmanned retail store can be monitored. When a user enters a target area, the camera corresponding to that target area captures video of the various actions the user performs after entering, so that the video collected by that camera for the target area can be obtained.
It should be noted that the equipment with a video monitoring system can be an infrared camera, a dome camera, or the like.
S120: performing recognition processing on the action behavior of a target object in the video to determine the action category corresponding to the target object.
In one embodiment of the present invention, a pre-established model of the target object can be obtained; based on the model of the target object, the video clip containing the target object is determined from the video, and action behavior recognition is performed on the video clip of the target object to obtain the corresponding action category.
In another embodiment of the present invention, the feature information of each frame in the video can be extracted, and the feature information of the target object is determined according to the feature information of each frame; the video clip for the target object is then determined from the video according to the feature information of the target object, after which action behavior recognition is performed on the video clip of the target object to obtain the corresponding action category. The specific implementation process can be found in the description of the subsequent embodiments.
S130: triggering a corresponding object event according to the action category corresponding to the target object.
In an embodiment of the present invention, the action category may include, but is not limited to, a taking action, a putting-back action, and/or a passing action. As an example, when the action category corresponding to the target object is a taking action, a corresponding take-away object event can be triggered; when the action category corresponding to the target object is a putting-back action, a corresponding put-back object event can be triggered; and when the action category corresponding to the target object is a passing action, a corresponding passing object event can be triggered.
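As an illustration only (the patent itself prescribes no implementation), the category-to-event dispatch of S130 might be sketched in Python as follows; the category strings, the handler bodies, and the trigger_object_event helper are hypothetical names introduced here, not part of the disclosure.
```python
# Hypothetical sketch of S130: dispatch a recognized action category to its
# object event. Category strings and handler bodies are illustrative only.

EVENT_HANDLERS = {
    "take": lambda target: print(f"take-away object event triggered for {target}"),
    "put_back": lambda target: print(f"put-back object event triggered for {target}"),
    "pass": lambda target: print(f"passing object event triggered for {target}"),
}

def trigger_object_event(action_category: str, target_object: str) -> None:
    """Trigger the object event corresponding to the action category, if any."""
    handler = EVENT_HANDLERS.get(action_category)
    if handler is not None:
        handler(target_object)  # e.g. wakes up a downstream program

trigger_object_event("take", "target object 1")
```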
It can be understood that performing some action or actions on an object is made up of a series of events. For example, every shopping behavior is composed of a series of events, and these events can be abstracted as taking goods, putting back goods, and passing goods; each event is completed by a person through a specific action, so an event is the result of a person's behavior, and events and behaviors correspond one to one. Therefore, the present invention can judge events through the behavior recognition of persons, trigger object events, and then wake up downstream programs for further processing.
According to the object event triggering method based on action recognition of the embodiment of the present invention, video collected for a target area can be obtained; recognition processing is then performed on the action behavior of a target object in the video to determine the action category corresponding to the target object; and a corresponding object event is afterwards triggered according to that action category. The method makes full use of the inherently diverse categories of object events, can trigger a larger variety of object events, and realizes object event triggering with vision-based processing, improving the accuracy of object event triggering.
Fig. 2 is a flowchart of an object event triggering method based on action recognition according to a specific embodiment of the present invention. As shown in Fig. 2, the object event triggering method based on action recognition may include:
S210: obtaining video collected for a target area.
S220: extracting the feature information of each frame in the video, and determining the feature information of a target object according to the feature information of each frame.
In an embodiment of the present invention, after the video for the target area is obtained, the video is decomposed into multiple frames, after which the feature information of each frame in the video is extracted, where the features of a frame include but are not limited to color, texture, shape, position, and so on. The feature information of the target object is determined according to the feature information of each frame, where the features of the target object include but are not limited to skin tone, face, hairstyle, and so on.
For example, in an unmanned retail store, the video of the daily necessities area is obtained and decomposed into multiple frames; the features of each frame of the video are then extracted, the feature information of each frame is determined to be the position of an object, and, according to the position of the object, a target object with white-skin features nearby is determined.
S230: determining the video clip for the target object from the video according to the feature information of the target object.
For example, if the target object is characterized by long hair, the video clip for the target object with long hair is determined from the video.
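The following is a minimal sketch of the frame decomposition and clip selection of S220-S230, assuming OpenCV for video decoding; extract_features and matches_target are stand-ins for whatever per-frame feature extractor and target matcher an implementation might use, and are not interfaces defined by the patent.
```python
# Minimal sketch of S220-S230 under assumed interfaces: decompose the video into
# frames, extract per-frame feature information, and keep the frames that match
# the target object as its clip.
import cv2


def clip_for_target(video_path, extract_features, matches_target):
    cap = cv2.VideoCapture(video_path)
    clip = []
    while True:
        ok, frame = cap.read()
        if not ok:                       # video fully decomposed into frames
            break
        feats = extract_features(frame)  # e.g. color, texture, shape, position
        if matches_target(feats):        # e.g. skin tone, face, hairstyle
            clip.append(frame)
    cap.release()
    return clip
```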
S240: inputting the video clip of the target object into a trained action recognition model. In the embodiment of the present invention, the action recognition model has learned the mapping relations between the spatio-temporal features of video and each action category; the action recognition model may include an input layer for performing feature extraction and an output layer for outputting the target action category.
In this step, the video clip of the target object can be input into the trained action recognition model so that the action recognition model performs the recognition processing of the action behavior on the video clip. For example, the action recognition model can extract spatio-temporal features from the video clip, find, based on the mapping relations between spatio-temporal features and action categories, the action category corresponding to the extracted spatio-temporal features, and output that action category through the output layer.
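An illustrative sketch of such a model is given below in PyTorch; the 3D-convolution architecture, layer sizes, and three-category output are assumptions chosen for brevity, not the patent's model.
```python
# Illustrative sketch (not the patent's model): a network that extracts
# spatio-temporal features from a video clip and maps them to an action category.
import torch
import torch.nn as nn

class ActionRecognitionModel(nn.Module):
    def __init__(self, num_classes=3):  # e.g. take / put back / pass
        super().__init__()
        self.features = nn.Sequential(   # "input layer": spatio-temporal features
            nn.Conv3d(3, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.classifier = nn.Linear(16, num_classes)  # "output layer"

    def forward(self, clip):             # clip: (N, 3, T, H, W)
        x = self.features(clip).flatten(1)
        return self.classifier(x)

model = ActionRecognitionModel()
logits = model(torch.randn(1, 3, 8, 64, 64))
action_category = logits.argmax(dim=1)   # index of the predicted category
```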
It should be noted that the action recognition model is obtained by training in advance. As an example, as shown in Fig. 3, the action recognition model can be obtained in the following way:
S310: obtaining sample videos;
S320: labeling the sample videos to obtain labeled training samples;
In this example, the label is used to indicate the expected action category among the action categories, and the value of the label is determined according to the action behavior performed on an object by one of the subjects in each sample video.
S330: training the action recognition model with the training samples, and adjusting the action recognition model according to the training results to obtain the trained action recognition model.
Thus, the action recognition model can be trained and obtained in advance through the above steps S310-S330.
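A hedged sketch of the S310-S330 training procedure follows; the optimizer, loss, and synthetic batch are assumptions, with the labeled (clip, category) pairs standing in for the annotated sample videos.
```python
# Sketch of S310-S330 under assumed choices: train on labeled clips and adjust
# the model from the training result via gradient descent.
import torch
import torch.nn as nn

def train(model, batches, epochs=10, lr=1e-3):
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()      # label = expected action category (S320)
    for _ in range(epochs):
        for clips, labels in batches:    # labeled training samples
            optimizer.zero_grad()
            loss = loss_fn(model(clips), labels)
            loss.backward()              # adjust the model according to
            optimizer.step()             # the training result (S330)
    return model

# One synthetic batch stands in for annotated sample videos (S310-S320).
toy_model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 8 * 64 * 64, 3))
batch = (torch.randn(4, 3, 8, 64, 64), torch.randint(0, 3, (4,)))
train(toy_model, [batch], epochs=1)
```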
S250: obtaining the action category corresponding to the target object output by the action recognition model.
In an embodiment of the present invention, the action categories include a taking action, a putting-back action, and/or a passing action. For ease of understanding, it is first pre-defined that the action recognition model has learned the mapping relation between spatio-temporal feature A and the taking action, between spatio-temporal feature B and the putting-back action, and between spatio-temporal feature C and the passing action. Three examples are presented below for understanding:
For example, as one possible implementation, taking the taking action as the action category: the video clip of the target object with long curly hair is input into the trained action recognition model, the action recognition model performs feature extraction on the video clip and extracts spatio-temporal feature A, and, according to the mapping relation the model has learned between the spatio-temporal feature A of the video and the action category, it can be determined that the target object with long curly hair maps to the taking action as the corresponding action category.
For example, as another possible implementation, taking the putting-back action as the action category: the video clip of the target object with pale skin is input into the trained action recognition model, the action recognition model performs feature extraction on the video clip and extracts spatio-temporal feature B, and, according to the mapping relation the model has learned between the spatio-temporal feature B of the video and the action category, it is determined that the target object with pale skin maps to the putting-back action as the corresponding action category.
As yet another possible implementation, taking the passing action as the action category: the video clip of the target object wearing glasses is input into the trained action recognition model, the action recognition model performs feature extraction on the video clip and extracts spatio-temporal feature C, and, according to the mapping relation the model has learned between the spatio-temporal feature C of the video and the action category, it is determined that the target object wearing glasses maps to the passing action as the corresponding action category.
S260: triggering a corresponding object event according to the action category corresponding to the target object.
In order to further improve the accuracy of object event triggering and the accuracy of action recognition, optionally, in one embodiment of the present invention, before the corresponding object event is triggered according to the action category corresponding to the target object, the determined action category corresponding to the target object can be verified to judge whether the determined action category is correct. As an example, the video clip involved in determining the action category corresponding to the target object can be obtained from the video; the object touched when the target object performs the action in the involved video clip is then determined; afterwards, according to the involved video clip, whether the quantity of that object in the target area has changed is detected, and whether the determined action category is correct is judged according to the detection result.
For example, from the video of the stationery target area, the video clip involved in determining the taking action corresponding to the target object is obtained, and the object "stationery item A" touched during the target object's taking action is determined; afterwards, according to the video clip, whether the quantity of "stationery item A" in the stationery target area has decreased is detected; if it has decreased, the taking action corresponding to the target object is determined to be correct.
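Purely for illustration, the count-based verification described above might reduce to a check like the following; the category names and the idea of comparing a before/after count come from the example, while the verify_action helper and its signature are hypothetical.
```python
# Hypothetical sketch of the verification step: test whether the change in the
# count of the touched object in the target area is consistent with the
# recognized action category.

def verify_action(category: str, count_before: int, count_after: int) -> bool:
    """Return True if the change in object count matches the action category."""
    if category == "take":
        return count_after < count_before    # a taking action removes an item
    if category == "put_back":
        return count_after > count_before    # a putting-back action adds an item
    if category == "pass":
        return count_after == count_before   # passing leaves the count unchanged
    return False                             # unknown category: fail verification

# Example from the description: the count rises from 5 to 6 after a put-back.
print(verify_action("put_back", 5, 6))  # True
```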
It should be noted that the present invention can judge events through the recognition of people's action behavior in video, trigger object events, and then wake up downstream programs for further processing. For example, taking the application of the object event triggering method based on action recognition of the embodiment of the present invention to an unmanned retail store: the action behavior of people in the video can be recognized, and a corresponding goods event is triggered according to that action behavior. If a taking event is triggered, a downstream program (such as one identifying the type of goods taken) can be woken up for further processing; if a put-back event is triggered, a downstream program can be woken up (such as one identifying the type of goods put back and increasing the quantity of the goods on the shelf by 1) for further processing; and if a passing event is triggered, a downstream program (such as one identifying the type of goods passed) can be woken up for further processing.
According to the object event triggering method based on action recognition of the embodiment of the present invention, video collected for a target area can be obtained; the feature information of each frame in the video is then extracted, and the feature information of a target object is determined according to the feature information of each frame; the video clip for the target object is then determined from the video according to the feature information of the target object; the video clip of the target object is afterwards input into the trained action recognition model; the action category corresponding to the target object output by the action recognition model is then obtained; and, finally, a corresponding object event is triggered according to the action category corresponding to the target object. The method makes full use of the inherently diverse categories of object events, can trigger a larger variety of object events, and realizes object event triggering with vision-based processing, improving the accuracy of object event triggering.
In order to enable those skilled in the art to understand the present invention more clearly, an example is given below.
For example, taking the application of the object event triggering method based on action recognition of the embodiment of the present invention to an unmanned retail scenario: in an unmanned retail store, the video of the food area is obtained and decomposed into multiple frames; the features of each frame of the video are then extracted, the location information of a food item is determined, and, according to that location information, an object with the long-hair feature near the food item's location is determined; the video clip for the object with the long-hair feature is then determined from the video and input into the trained action recognition model; the action recognition model performs feature extraction on the video clip, the extracted spatio-temporal feature is determined to be B, and, according to the mapping relation the action recognition model has learned between the spatio-temporal feature B of the video and the action category, the action category corresponding to the object is output as the putting-back action, so that the putting-back action category corresponding to the object output by the action recognition model is obtained.
In order to judge whether the action category of the object is correct, the video clip involved in determining the object's corresponding putting-back action needs to be obtained from the video of the food area, and the "food A" touched during the object's corresponding putting-back action is determined; afterwards, according to the video clip, an increase in the quantity of "food A" in the food area is detected, and the putting-back action corresponding to the object is therefore determined to be correct; the corresponding "food A" put-back event is then triggered according to the object's putting-back action.
Corresponding to the object event triggering methods based on action recognition provided by the above embodiments, an embodiment of the present invention also provides an object event triggering apparatus based on action recognition. Since the apparatus provided by the embodiment of the present invention corresponds to the object event triggering methods based on action recognition provided by the above embodiments, the embodiments of those methods are also applicable to the apparatus provided in this embodiment and will not be described in detail here. Fig. 4 is a structural schematic diagram of an object event triggering apparatus based on action recognition according to an embodiment of the present invention. It should be noted that the object event triggering apparatus based on action recognition of the embodiment of the present invention can be applied to an unmanned retail store.
As shown in Fig. 4, the object event triggering apparatus 400 based on action recognition includes a video acquiring module 410, an action recognition module 420, and an event trigger module 430, in which:
The video acquiring module 410 is used for obtaining video collected for a target area.
The action recognition module 420 is used for performing recognition processing on the action behavior of a target object in the video to determine the action category corresponding to the target object.
The event trigger module 430 is used for triggering a corresponding object event according to the action category corresponding to the target object. In one embodiment of the present invention, the action categories include a taking action, a putting-back action, and/or a passing action; the event trigger module 430 is specifically used for: when the action category corresponding to the target object is the taking action, triggering a corresponding take-away object event; when the action category corresponding to the target object is the putting-back action, triggering a corresponding put-back object event; and when the action category corresponding to the target object is the passing action, triggering a corresponding passing object event.
In one embodiment of the present invention, the action recognition module 420 is specifically used for: extracting the feature information of each frame in the video and determining the feature information of the target object according to the feature information of each frame; determining the video clip for the target object from the video according to the feature information of the target object; inputting the video clip of the target object into a trained action recognition model, where the action recognition model has learned the mapping relations between the spatio-temporal features of video and each action category and includes an input layer for performing feature extraction and an output layer for outputting the target action category; and obtaining the action category corresponding to the target object output by the action recognition model.
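The cooperation of the three modules might be sketched as follows; the class, the callable interfaces, and the demo wiring are illustrative assumptions, with only the module roles taken from the description.
```python
# Schematic composition of the apparatus of Fig. 4; module names follow the
# description, while the glue code is an assumption for illustration.

class ObjectEventTriggerDevice:
    def __init__(self, acquire_video, recognize_action, trigger_event):
        self.acquire_video = acquire_video        # video acquiring module 410
        self.recognize_action = recognize_action  # action recognition module 420
        self.trigger_event = trigger_event        # event trigger module 430

    def run(self, target_area):
        video = self.acquire_video(target_area)
        category, target = self.recognize_action(video)
        self.trigger_event(category, target)

device = ObjectEventTriggerDevice(
    acquire_video=lambda area: f"video of {area}",
    recognize_action=lambda video: ("put_back", "food A"),
    trigger_event=lambda cat, obj: print(f"{cat} event triggered for {obj}"),
)
device.run("food area")
```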
In one embodiment of the present invention, as shown in Fig. 5, the object event triggering apparatus based on action recognition further includes a model training module 440, where the model training module 440 is used for training the action recognition model in advance.
In one embodiment of the present invention, the model training module 440 is specifically used for: obtaining sample videos; labeling the sample videos to obtain labeled training samples, where the label is used to indicate the expected action category among the action categories, and the value of the label is determined according to the action behavior performed on an object by one of the subjects in each sample video; and training the action recognition model with the training samples and adjusting the action recognition model according to the training results to obtain the trained action recognition model.
In order to further improve the accuracy of object event triggering and the accuracy of action recognition, optionally, in one embodiment of the present invention, as shown in Fig. 6, the object event triggering apparatus based on action recognition further includes a verification module 450, where the verification module 450 is used for verifying the determined action category corresponding to the target object before the corresponding object event is triggered according to that action category, to judge whether the determined action category is correct.
In one embodiment of the present invention, the verification module 450 is specifically used for: obtaining, from the video, the video clip involved in determining the action category corresponding to the target object; determining the object touched when the target object performs the action in the involved video clip; detecting, according to the involved video clip, whether the quantity of that object in the target area has changed; and judging whether the determined action category is correct according to the detection result.
According to the object event triggering apparatus based on action recognition of the embodiment of the present invention, video collected for a target area can be obtained; recognition processing is then performed on the action behavior of a target object in the video to determine the action category corresponding to the target object; and a corresponding object event is afterwards triggered according to that action category. The apparatus makes full use of the inherently diverse categories of object events, can trigger a larger variety of object events, and realizes object event triggering with vision-based processing, improving the accuracy of object event triggering.
In order to realize the above embodiments, the present invention also provides an electronic device.
Fig. 7 is a structural schematic diagram of an electronic device according to an embodiment of the present invention. As shown in Fig. 7, the electronic device 700 may include a memory 710, a processor 720, and a computer program 730 stored on the memory 710 and runnable on the processor 720; when the processor 720 executes the program, the object event triggering method based on action recognition described in any of the above embodiments is realized.
In order to realize the above embodiments, the present invention also provides a computer-readable storage medium storing a computer program; when the computer program is executed by a processor, the object event triggering method based on action recognition described in any of the above embodiments is realized.
In the description of this specification, references to the terms "one embodiment", "some embodiments", "example", "specific example", or "some examples" mean that specific features, structures, materials, or characteristics described in connection with the embodiment or example are included in at least one embodiment or example of the present invention. In this specification, schematic expressions of the above terms do not necessarily refer to the same embodiment or example. Moreover, the specific features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. In addition, where no contradiction arises, those skilled in the art may combine the features of the different embodiments or examples described in this specification.
Any process or method description in a flowchart, or otherwise described herein, may be understood as representing a module, segment, or portion of code comprising one or more executable instructions for realizing specific logical functions or steps of the process; and the scope of the preferred embodiments of the present invention includes other realizations in which functions may be executed out of the order shown or discussed, including in a substantially simultaneous manner or in the reverse order according to the functions involved, which should be understood by those skilled in the art to which the embodiments of the present invention belong.
The logic and/or steps represented in the flowcharts, or otherwise described herein, may for example be considered an ordered list of executable instructions for realizing logical functions, and may be embodied in any computer-readable medium for use by, or in connection with, an instruction execution system, apparatus, or device (such as a computer-based system, a system including a processor, or another system that can fetch instructions from the instruction execution system, apparatus, or device and execute them). For the purposes of this specification, a "computer-readable medium" may be any means that can contain, store, communicate, propagate, or transmit a program for use by, or in connection with, an instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium include: an electrical connection (electronic device) having one or more wirings, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). In addition, the computer-readable medium may even be paper or another suitable medium on which the program can be printed, since the program can be obtained electronically, for example by optically scanning the paper or other medium and then editing, interpreting, or otherwise processing it in a suitable manner if necessary, and then stored in a computer memory.
It should be appreciated that parts of the present invention may be realized in hardware, software, firmware, or a combination thereof. In the above embodiments, multiple steps or methods may be realized with software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if realized in hardware, as in another embodiment, they may be realized by any one or a combination of the following techniques known in the art: a discrete logic circuit having logic gate circuits for realizing logic functions on data signals, an application-specific integrated circuit having suitable combinational logic gate circuits, a programmable gate array (PGA), a field programmable gate array (FPGA), and so on.
Those of ordinary skill in the art will understand that all or part of the steps carried by the methods of the above embodiments can be completed by instructing relevant hardware through a program, which can be stored in a computer-readable storage medium and which, when executed, includes one or a combination of the steps of the method embodiments.
In addition, the functional units in the embodiments of the present invention may be integrated in one processing module, or each unit may exist physically alone, or two or more units may be integrated in one module. The above integrated module may be realized in the form of hardware or in the form of a software functional module. If the integrated module is realized in the form of a software functional module and sold or used as an independent product, it may also be stored in a computer-readable storage medium.
The above-mentioned storage medium may be a read-only memory, a magnetic disk, an optical disc, or the like. Although the embodiments of the present invention have been shown and described above, it can be understood that the above embodiments are exemplary and should not be construed as limiting the present invention, and those skilled in the art can change, modify, replace, and vary the above embodiments within the scope of the present invention.

Claims (14)

1. An object event triggering method based on action recognition, characterized by comprising the following steps:
obtaining video collected for a target area;
performing recognition processing on the action behavior of a target object in the video to determine the action category corresponding to the target object;
triggering a corresponding object event according to the action category corresponding to the target object.
2. The method according to claim 1, characterized in that performing recognition processing on the action behavior of the target object in the video to determine the action category corresponding to the target object comprises:
extracting the feature information of each frame in the video, and determining the feature information of the target object according to the feature information of each frame;
determining the video clip for the target object from the video according to the feature information of the target object;
inputting the video clip of the target object into a trained action recognition model, wherein the action recognition model has learned the mapping relations between the spatio-temporal features of video and each action category, and the action recognition model comprises an input layer for performing feature extraction and an output layer for outputting a target action category;
obtaining the action category corresponding to the target object output by the action recognition model.
3. The method according to claim 2, characterized in that the action recognition model is obtained in the following way:
obtaining sample videos;
labeling the sample videos to obtain labeled training samples, wherein the label is used to indicate the expected action category among the action categories, and the value of the label is determined according to the action behavior performed on an object by one of the subjects in each sample video;
training the action recognition model with the training samples, and adjusting the action recognition model according to the training results to obtain the trained action recognition model.
4. The method according to claim 1, characterized in that the action category comprises a taking action, a putting-back action, and/or a passing action, and triggering the corresponding object event according to the action category corresponding to the target object comprises:
when the action category corresponding to the target object is the taking action, triggering a corresponding take-away object event;
when the action category corresponding to the target object is the putting-back action, triggering a corresponding put-back object event;
when the action category corresponding to the target object is the passing action, triggering a corresponding passing object event.
5. The method according to any one of claims 1 to 4, characterized in that, before triggering the corresponding object event according to the action category corresponding to the target object, the method further comprises:
verifying the determined action category corresponding to the target object to judge whether the determined action category is correct.
6. The method according to claim 5, characterized in that verifying the determined action category corresponding to the target object to judge whether the determined action category is correct comprises:
obtaining, from the video, the video clip involved in determining the action category corresponding to the target object;
determining the object touched when the target object performs the action in the involved video clip;
detecting, according to the involved video clip, whether the quantity of the object in the target area has changed;
judging whether the determined action category is correct according to the detection result.
7. An object event triggering apparatus based on action recognition, characterized by comprising:
a video acquiring module for obtaining video collected for a target area;
an action recognition module for performing recognition processing on the action behavior of a target object in the video to determine the action category corresponding to the target object;
an event trigger module for triggering a corresponding object event according to the action category corresponding to the target object.
8. The apparatus according to claim 7, characterized in that the action recognition module is specifically used for:
extracting the feature information of each frame in the video, and determining the feature information of the target object according to the feature information of each frame;
determining the video clip for the target object from the video according to the feature information of the target object;
inputting the video clip of the target object into a trained action recognition model, wherein the action recognition model has learned the mapping relations between the spatio-temporal features of video and each action category, and the action recognition model comprises an input layer for performing feature extraction and an output layer for outputting a target action category;
obtaining the action category corresponding to the target object output by the action recognition model.
9. The apparatus according to claim 8, characterized in that the apparatus further comprises:
a model training module for training the action recognition model in advance;
wherein the model training module is specifically used for:
obtaining sample videos;
labeling the sample videos to obtain labeled training samples, wherein the label is used to indicate the expected action category among the action categories, and the value of the label is determined according to the action behavior performed on an object by one of the subjects in each sample video;
training the action recognition model with the training samples, and adjusting the action recognition model according to the training results to obtain the trained action recognition model.
10. The apparatus according to claim 7, characterized in that the action category comprises a taking action, a putting-back action, and/or a passing action, and the event trigger module is specifically used for:
when the action category corresponding to the target object is the taking action, triggering a corresponding take-away object event;
when the action category corresponding to the target object is the putting-back action, triggering a corresponding put-back object event;
when the action category corresponding to the target object is the passing action, triggering a corresponding passing object event.
11. The apparatus according to any one of claims 7 to 10, characterized in that the apparatus further comprises:
a verification module for verifying the determined action category corresponding to the target object before the corresponding object event is triggered according to that action category, to judge whether the determined action category is correct.
12. The apparatus according to claim 11, characterized in that the verification module is specifically used for:
obtaining, from the video, the video clip involved in determining the action category corresponding to the target object;
determining the object touched when the target object performs the action in the involved video clip;
detecting, according to the involved video clip, whether the quantity of the object in the target area has changed;
judging whether the determined action category is correct according to the detection result.
13. An electronic device, characterized by comprising a memory, a processor, and a computer program stored on the memory and runnable on the processor, wherein, when the processor executes the program, the object event triggering method based on action recognition according to any one of claims 1 to 6 is realized.
14. A computer-readable storage medium on which a computer program is stored, characterized in that, when the computer program is executed by a processor, the object event triggering method based on action recognition according to any one of claims 1 to 6 is realized.
CN201910199231.2A 2019-03-15 2019-03-15 Object event triggering method and apparatus based on action recognition, and related device Pending CN110007755A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910199231.2A CN110007755A (en) 2019-03-15 2019-03-15 Object event triggering method and apparatus based on action recognition, and related device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910199231.2A CN110007755A (en) 2019-03-15 2019-03-15 Object event triggering method and apparatus based on action recognition, and related device

Publications (1)

Publication Number Publication Date
CN110007755A true CN110007755A (en) 2019-07-12

Family

ID=67167208

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910199231.2A Pending CN110007755A (en) Object event triggering method and apparatus based on action recognition, and related device

Country Status (1)

Country Link
CN (1) CN110007755A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170323376A1 (en) * 2016-05-09 2017-11-09 Grabango Co. System and method for computer vision driven applications within an environment
CN108551658A (en) * 2017-12-18 2018-09-18 上海云拿智能科技有限公司 Object positioning system and localization method
CN108805091A (en) * 2018-06-15 2018-11-13 北京字节跳动网络技术有限公司 Method and apparatus for generating model
CN108830251A (en) * 2018-06-25 2018-11-16 北京旷视科技有限公司 Information correlation method, device and system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
李建军 (Li Jianjun), 《基于图像深度信息的人体动作识别研究》 (Research on Human Action Recognition Based on Image Depth Information), 30 December 2018

Similar Documents

Publication Publication Date Title
CN109214373B (en) Face recognition system and method for attendance checking
CN104881637B Multimodal information system based on sensing information and target tracking, and fusion method thereof
CN103717124B Apparatus and method for obtaining and processing survey measurements of a living being
JP6427973B2 (en) Image recognition apparatus and feature data registration method in image recognition apparatus
CN109166007A Commodity recommendation method and device based on an automatic vending machine
CN105989174B Region-of-interest extraction device and region-of-interest extraction method
CN110472611A Method, apparatus, electronic device, and readable storage medium for person attribute recognition
CN111563396A Method and device for online identification of abnormal behavior, electronic device, and readable storage medium
CN106682473A (en) Method and device for identifying identity information of users
CA3014365C (en) System and method for gathering data related to quality of service in a customer service environment
CN111189494A (en) Measurement label, image color restoration method thereof and measurement identification method
CN107589968A Screen-off unlocking method and apparatus
EP3074844B1 (en) Estimating gaze from un-calibrated eye measurement points
JP2018081654A (en) Searching device, display unit, and searching method
CN109740474A Dynamic recognition mechanism for queue-jumping persons and corresponding terminal
JP2024023957A (en) Processing equipment, processing method and program
JPWO2021230316A5 (en)
CN113159876B (en) Clothing collocation recommendation device, method and storage medium
CN110007755A Object event triggering method and apparatus based on action recognition, and related device
CN111951058A (en) Commodity attention analysis method, device and system based on electronic price tags
CN104850225A (en) Activity identification method based on multi-level fusion
CN109145768A Method and device for obtaining human face data with face attributes
CN108154403A Self-service system and goods selling method thereof
Guo et al. Human face recognition using a spatially weighted Hausdorff distance
CN111311379A (en) Information interaction method and device for intelligent goods shelf, intelligent goods shelf and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20190712)