CN104598012A - Interactive advertising equipment and working method thereof - Google Patents

Info

Publication number
CN104598012A
CN104598012A
Authority
CN
China
Prior art keywords
user
action
node
vector
master control
Prior art date
Application number
CN201310526678.9A
Other languages
Chinese (zh)
Other versions
CN104598012B (en)
Inventor
张宜春
蒋伟
吴晓雨
郑伟
欧雪雯
Original Assignee
中国艺术科技研究所
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 中国艺术科技研究所
Priority to CN201310526678.9A
Publication of CN104598012A
Application granted
Publication of CN104598012B

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING; COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G: PHYSICS
    • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09F: DISPLAYING; ADVERTISING; SIGNS; LABELS OR NAME-PLATES; SEALS
    • G09F27/00: Combined visual and audible advertising or displaying, e.g. for public address
    • G09F27/005: Signs associated with a sensor
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING; COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00: Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/01: Indexing scheme relating to G06F3/01
    • G06F2203/011: Emotion or mood input determined on the basis of sensed human body parameters such as pulse, heart rate or beat, temperature of skin, facial expressions, iris, voice pitch, brain activity patterns

Abstract

The invention relates to interactive advertising equipment and a working method thereof. The advertising equipment comprises a display device, a somatosensory interaction device, a master control device, and a delivery execution device. The master control device controls the display device to play video information, identifies the user's selection of an advertised product from the user movement information captured by the somatosensory interaction device, evaluates the matching degree between the user's movement and the movement in the video associated with the selected product, and decides from the matching degree whether to output a delivery instruction to the delivery execution device, which controls the advertising equipment to output the product to the user as a reward. Because the product is given as a reward according to how well the user's movement matches the associated video movement, this interaction mode improves user participation and interest, and helps broaden the product's reach.

Description

Interactive advertising equipment and working method thereof

Technical field

The present invention relates to advertising equipment and a working method thereof, and in particular to interactive advertising equipment and a working method thereof.

Background technology

With the development of digital media technology, the forms of advertising and their carriers have undergone profound changes. Conventional one-way, customer-facing advertising in text, pictures, or video appears dull and flat against today's colorful media landscape; it rarely leaves a deep impression on customers and therefore fails to achieve the expected publicity effect. In practice, the best way to promote a product or service is to invite users to try or experience it in person. Taking beverage promotion as an example, merchants distribute free tasting samples to the public in crowded places such as shopping malls. However, this approach usually requires deploying and organizing staff, and because the time window is short and the audience is random, the promotion effect is often very limited.

Summary of the invention

In view of the above problems, the present invention provides interactive advertising equipment with a somatosensory interaction function that is intuitive, convenient, and practical, together with a working method thereof.

To achieve the above object, the present invention adopts the following technical solution:

Interactive advertising equipment is provided, characterized by comprising:

a display device;

a somatosensory interaction device for capturing user action information;

a master control device, electrically connected to the display device and the somatosensory interaction device, which controls the display device to play video information, identifies the user's selection of an advertised product from the user action information captured by the somatosensory interaction device, evaluates the matching degree between the user action and the action in the video associated with the selected product, and judges from the matching degree whether to output a delivery instruction; and

a delivery execution device, electrically connected to the master control device, which controls the advertising equipment to output the advertised product according to the delivery instruction sent by the master control device.

The somatosensory interaction device comprises an infrared camera or an RGB-D camera, to record user action information including depth data and user skeleton node data.

Further, the somatosensory interaction device may also comprise an optical camera.

According to one embodiment of the invention, the interactive advertising equipment may further comprise a communication device, electrically connected to the master control device, which connects to the Internet according to control instructions from the master control device so that the advertising equipment can be operated remotely.

According to one embodiment of the invention, the interactive advertising equipment may further comprise a sound device, electrically connected to the master control device, which plays voice information according to control instructions from the master control device.

In addition, the present invention also provides a working method for the above interactive advertising equipment, comprising the following steps:

S10: prompt the user to select an advertised product;

S20: capture user action information and identify the user's selection;

S30: check whether the selected advertised product is out of stock:

if it is in stock, play the video associated with the selected product and prompt the user to imitate the actions in the video;

S40: capture user action information and evaluate the matching degree between the user action and the video action;

S50: judge from the matching degree whether to output the selected advertised product.
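As a concrete illustration of steps S10 through S50, the loop below sketches one interaction cycle in Python. The patent does not prescribe an implementation, so every identifier here (and the default threshold) is an assumption for illustration only.

```python
# A minimal sketch of steps S10-S50; all names and the 0.8 threshold default
# are illustrative assumptions, not the patent's own implementation.
def run_session(catalog, stock, capture_selection, capture_imitation,
                evaluate_match, dispense, threshold=0.8):
    """One interaction cycle: select a product, imitate the video, maybe dispense."""
    product = capture_selection(catalog)        # S10/S20: gesture-based selection
    if stock.get(product, 0) <= 0:              # S30: out-of-stock check
        return "out_of_stock"
    action = capture_imitation()                # S40: capture the imitated action
    if evaluate_match(action) >= threshold:     # S50: reward only a good match
        stock[product] -= 1
        dispense(product)
        return "dispensed"
    return "retry"
```

The capture and evaluation callables would be backed by the somatosensory interaction device and the image processing algorithms described later in the text.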

According to one embodiment of the invention, the user action information captured in steps S20 and S40 includes depth data and user skeleton node data.

Specifically, in step S40 the matching degree between the user action and the video action is evaluated through the following steps:

S100: select user skeleton nodes and contour nodes from the depth data;

S200: build user limb vectors from the skeleton nodes, compute the space angles between the user limb vectors and the corresponding video template limb vectors, weight and normalize them, and compute the cumulative space-angle error between the user and template limb vectors as the human-action difference based on skeleton-node analysis;

S300: build user contour vectors from the contour nodes, compute the space angle between each pair of adjacent contour vectors, build an energy function from the differences between these angles and those of the corresponding video template contour vectors, and take the minimum of the energy function as the human-action difference based on contour-node analysis;

S400: take the weighted sum of the difference from skeleton-node analysis and the difference from contour-node analysis as the evaluation parameter measuring the matching degree between the user action and the video action.

In addition, in step S30, if the product is out of stock, out-of-stock information is shown to the user and sent to the operator over the Internet.

Compared with the prior art, the beneficial effects of the invention are:

1) The advertising equipment is equipped with a somatosensory interaction device and a master control device preset with image processing algorithms, which recognize user gestures and postures to switch among product selections; the traditional physical display window can therefore be omitted, saving hardware cost.

2) With the same somatosensory interaction device and master control device, after the user selects an advertised product the image processing algorithms evaluate the matching degree between the user action and the action in the product's associated video, and the product is given to the user as a reward according to the matching degree; this interaction mode improves user participation and interest and helps broaden the product's reach.

3) An infrared camera or RGB-D camera preferably captures user action information including depth data and user skeleton node data, combined with an automatic human-action evaluation algorithm based on both skeleton-node and contour-node analysis; body joints, gestures, and posture orientation are located accurately, so the evaluation results are more reliable.

4) The advertising equipment is equipped with a communication device with Internet access, further enabling remote operation and management.

The present invention changes the traditional advertising model: the somatosensory interaction between the user and the advertisement video enhances the promotion effect while letting users experience the product on their own.

Brief description of the drawings

Fig. 1 is a schematic diagram of the composition of an embodiment of the interactive advertising equipment of the present invention;

Fig. 2 is a workflow diagram of an embodiment of the interactive advertising equipment of the present invention;

Fig. 3 is a flowchart of the automatic human-action evaluation method preset in the master control device of the present invention;

Fig. 4 is a schematic diagram of the human contour nodes selected by the method of Fig. 3;

Fig. 5 illustrates how the under-hip contour node is selected in the method of Fig. 3;

Fig. 6 is a flowchart of computing the difference based on skeleton nodes in the method of Fig. 3;

Fig. 7 is a schematic diagram of building limb vectors from skeleton nodes in the method of Fig. 3;

Fig. 8 is a flowchart of computing the difference based on contour nodes in the method of Fig. 3.

Detailed description

Fig. 1 is a schematic diagram of the composition of a specific embodiment of the interactive advertising equipment provided by the present invention. This embodiment is an interactive advertising beverage vending machine, mainly comprising a master control device 10 and, electrically connected to it, a somatosensory interaction device 20, a display device 30, a sound device 40, a delivery execution device 50, a refrigeration device 60, and a communication device 70. On this basis the equipment can of course be expanded or adjusted according to user needs or operating conditions; for example, in some cold regions a heating device may be used instead of the refrigeration device. Those skilled in the art should understand that any modification in form or detail made without departing from the spirit disclosed by the present invention falls within its scope of protection.

Unlike traditional advertising equipment, the interactive advertising equipment provided by the invention mainly uses the somatosensory interaction device 20 and the display device 30 to interact with the user: gesture recognition during the somatosensory interaction switches among product selections, and after the user selects an advertised product, image processing algorithms preset in the master control device 10 analyze and evaluate in real time the matching degree between the user action and the action in the product's associated video, with the product given as a reward according to the matching degree. The function of each device is described in detail below with reference to the drawings, taking the interactive advertising beverage machine as an example, so that how the invention applies technical means to solve the technical problem and achieve its technical effect can be fully understood and implemented.

The master control device 10 is the core control component of the interactive advertising beverage machine. In this embodiment a computer is preferably used as the master control device 10 to perform arithmetic and logical operations, thereby controlling and scheduling the cooperation of the other devices in the machine.

The somatosensory interaction device 20 is the key component through which the interactive advertising beverage machine realizes human-computer interaction. To capture user actions adequately, this embodiment preferably combines an infrared camera with an optical camera to collect human action information during the advertising interaction. Specifically, on control instructions from the master control device 10, the infrared camera emits infrared rays at certain intervals and detects the rays reflected from the human body; the detection results are passed to the master control device 10, which computes the time of flight and phase difference of the reflected rays to obtain information such as depth data and skeleton node data, and thereby tracks the body's three-dimensional motion. The optical camera is generally used for image processing applications such as static action recognition, face recognition, and scene recognition.

The display device 30 shows prompt information to the user and plays the preset advertisement videos according to control instructions sent by the master control device 10.

The sound device 40 plays voice information to the user according to control instructions sent by the master control device 10.

The delivery execution device 50 acts on delivery instructions sent by the master control device 10 and controls the machine's dispensing, for example outputting the selected beverage product to the user as a reward.

The refrigeration device 60 cools the beverage products stored in the machine according to control instructions sent by the master control device 10.

The communication device 70 connects to the Internet according to control instructions sent by the master control device 10 and exchanges data with the operator's control terminal, so that the operator can at any time query the machine's sales, stock, running status, and fault conditions, add new interactive applications, update the system, analyze user behavior, and so on.

Naturally, a power supply unit also provides operating voltage to each device.

Fig. 2 is a workflow diagram of a working method for the above interactive advertising beverage machine. Note that this method is only a preferred embodiment of the invention; every equivalent variation of the flow made using the specification and drawings of the present invention is included in its technical solution.

S10: the master control device sends control instructions so that the display device shows the selection panel and related information, prompting the user to select a desired beverage product through somatosensory interaction.

For example, the user may wave left or right to switch among beverages and raise a hand to confirm the selection; the gestures are of course not limited to these.

S20: the master control device receives the user action information recorded by the somatosensory interaction device, such as depth data and user skeleton node data, and identifies the user's selection with its built-in image processing algorithms.

S30: the master control device checks whether the selected beverage is out of stock:

if out of stock, it prompts the user with out-of-stock information and returns to step S10;

if in stock, the master control device sends control instructions so that the display device plays the corresponding advertisement video, and the user is prompted to imitate the actions in the video.

S40: while the user imitates the video actions, the master control device receives the action information recorded by the somatosensory interaction device (depth data, user skeleton node data, etc.) and evaluates the matching degree between the user action and the video action with its built-in image processing algorithms.

S50: the master control device judges whether the matching degree meets the requirement, for example whether it exceeds a preset threshold of 80%:

if not, return to step S10;

if so, the master control device sends a delivery instruction that starts the delivery execution device, and the machine dispenses the selected beverage product to the user as a reward.

In step S30 above, when the user is prompted with out-of-stock information, that information can also be sent to the operator over the Internet.

In step S40 above, the master control device can evaluate the matching degree between the user action and the video action with the following automatic human-action evaluation algorithm, based on both skeleton-node analysis and contour-node analysis. As shown in Fig. 3, the algorithm comprises the following steps:

S100: select user skeleton nodes and contour nodes from the depth data;

S200: build user limb vectors from the skeleton nodes, compute the space angles between the user limb vectors and the corresponding video template limb vectors, weight and normalize them, and compute the cumulative space-angle error between the user and template limb vectors as the human-action difference based on skeleton-node analysis;

S300: build user contour vectors from the contour nodes, compute the space angle between each pair of adjacent contour vectors, build an energy function from the differences between these angles and those of the corresponding video template contour vectors, and take the minimum of the energy function as the human-action difference based on contour-node analysis;

S400: take the weighted sum of the difference from skeleton-node analysis and the difference from contour-node analysis as the evaluation parameter measuring the matching degree between the user action and the video action.

When the above automatic human-action evaluation algorithm is adopted, the somatosensory interaction device 20 preferably captures user action information with an RGB-D camera.

Of course, the present invention can also identify or evaluate user actions with image processing algorithms that, for example, compute the skeleton from two-dimensional image sequences, though the accuracy of such methods is relatively low.

The automatic human-action evaluation algorithm is described in further detail below through a specific embodiment.

In step S100 above, an RGB-D device can be used to obtain the depth data, making sure that the whole human body lies within the device's field of view. The collected depth data are then converted into a depth image of a certain resolution, from which a body segmentation image is built; in embodiments of the present invention, a body segmentation image is an image in which the human figure has been separated from the background.

The skeleton nodes used to analyze the human action are determined by fitting within the body segmentation image. In this embodiment the following 20 skeleton nodes are preferred: head, neck, left shoulder, left elbow, left wrist, left hand, right shoulder, right elbow, right wrist, right hand, spine, waist, left hip, left knee, left ankle, left foot, right hip, right knee, right ankle, right foot. According to how much each contributes to the motion of the body, these nodes can be roughly divided into the following classes:

Torso nodes: spine, waist, left shoulder, right shoulder, left hip, right hip, and neck, seven in total. Observation shows that torso nodes usually exhibit a strong collective movement trend and seldom move with high independence, so the torso can be treated as a rigid body of large motion inertia; the motion of torso nodes is therefore not considered in the general image-registration similarity measure.

Primary nodes: the head, left elbow, right elbow, left knee, and right knee, directly connected to the torso. Even a small displacement of a primary node causes a visually large difference.

Secondary nodes: the left wrist, right wrist, left ankle, and right ankle, connected to the primary nodes. Secondary nodes are farther from the torso than primary nodes, their movement trend is affected only by the primary nodes, and they rotate freely in space, so their motion amplitude is larger, but visually they tolerate angular deviation better.

End nodes: left hand, right hand, left foot, right foot. An end node is very close to its secondary node, is highly flexible, and is easily mislocated by noise during tracking and imaging, so the influence of end nodes on the human action is ignored in this embodiment.

Human contour nodes are selected in the body segmentation image. First the body contour line is extracted from the segmentation image and converted into a sequence of points, from which the contour nodes used to analyze the human action are selected. As shown in Fig. 4, according to the characteristics of limb motion, this embodiment preferably selects thirteen contour nodes: left armpit, left elbow, left wrist, left hip, left knee, left ankle, under-hip, right ankle, right knee, right hip, right wrist, right elbow, and right armpit. They are chosen as follows:

The left armpit contour node is chosen by drawing a straight line parallel to the X axis through the left shoulder skeleton node and taking the point on the contour line below that line nearest to the left shoulder node. The right armpit contour node is chosen in the same way.

The right elbow contour node is chosen by drawing a straight line parallel to the Y axis through the right elbow skeleton node and taking the point on the right-hand part of the contour nearest to the right elbow node. The right wrist, right hip, right knee, and right ankle contour nodes are chosen in the same way.

The left elbow contour node is chosen by drawing a straight line parallel to the Y axis through the left elbow skeleton node and taking the point on the left-hand part of the contour nearest to the left elbow node. The left wrist, left hip, left knee, and left ankle contour nodes are chosen in the same way.

The under-hip contour node is chosen as shown in Fig. 5: connect the left hip skeleton node with the left knee skeleton node and take the point A one quarter of the way along the segment; likewise connect the right hip skeleton node with the right knee skeleton node and take the point B one quarter of the way along the segment; through A and B draw straight lines parallel to the vertical axis; on the portion of the contour line between these two straight lines, take the point P nearest to the pelvis skeleton node O as the under-hip contour node.
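The nearest-point rule shared by all of these selections can be sketched as follows. The contour is assumed to be a list of (x, y) points in image coordinates, where "below" a horizontal line means a larger y value; that convention, and all identifiers, are assumptions for illustration.

```python
import math

def nearest_contour_point(contour, anchor, keep=lambda p: True):
    """Among contour points passing the filter, return the one closest to anchor."""
    return min((p for p in contour if keep(p)), key=lambda p: math.dist(p, anchor))

# Left-armpit node: the contour point nearest to the left-shoulder skeleton node
# among points below the horizontal line (parallel to the X axis) through that
# node. "Below" = larger y is an assumed image-coordinate convention.
def left_armpit_node(contour, left_shoulder):
    return nearest_contour_point(contour, left_shoulder,
                                 keep=lambda p: p[1] > left_shoulder[1])
```

The elbow, wrist, hip, knee, and ankle contour nodes would use the same helper with a vertical-line filter instead.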

Fig. 6 is the flowchart of computing the human-action difference based on skeleton-node analysis in step S200 of Fig. 3, which comprises the following steps:

S201: build human limb vectors from the skeleton nodes, as descriptors of the human action data.

Because the coordinates of skeleton nodes carry neither relativity nor direction, the present invention uses limb vectors instead of skeleton nodes as descriptors of the skeleton data. On the one hand, a limb vector has direction, and its spatial position can be represented by the three-dimensional coordinates of its skeleton nodes; on the other hand, a limb vector corresponds to a body limb, so the motion of the limb can be described by the motion of the vector, which greatly reduces the amount of data and the complexity of the computation. Moreover, judging from the way the body moves, the motion of the head and torso influences the overall action less while the motion of the limbs influences it more, so the present invention adopts certain simplifications when describing human motion with limb vectors. As shown in Fig. 7, this embodiment chooses twelve skeleton nodes, the left and right wrist, elbow, shoulder, hip, knee, and ankle joints, as the end points of the limb vectors, each vector pointing from the higher-level node to the lower-level node, that is, from a torso node to a primary node and from a primary node to a secondary node.
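A sketch of the limb-vector construction under these rules. Which of the eight indices of Fig. 7 corresponds to which limb is not stated in the text, so the mapping below is an assumption.

```python
import numpy as np

# Illustrative mapping of the twelve joints to the eight limb vectors of
# Fig. 7; the index assignment is an assumption, not the patent's own.
LIMB_PAIRS = [
    ("left_shoulder", "left_elbow"),    # 0: primary, upper limb
    ("right_shoulder", "right_elbow"),  # 1: primary, upper limb
    ("left_hip", "left_knee"),          # 2: primary, lower limb
    ("right_hip", "right_knee"),        # 3: primary, lower limb
    ("left_elbow", "left_wrist"),       # 4: secondary, upper limb
    ("right_elbow", "right_wrist"),     # 5: secondary, upper limb
    ("left_knee", "left_ankle"),        # 6: secondary, lower limb
    ("right_knee", "right_ankle"),      # 7: secondary, lower limb
]

def limb_vectors(joints):
    """joints: dict name -> (x, y, z). Each vector points from the higher-level
    node toward the lower-level one, as the text prescribes."""
    return np.array([np.subtract(joints[b], joints[a]) for a, b in LIMB_PAIRS])
```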

S202: compute the space angle between each human limb vector and the corresponding template limb vector by the following formula, as a measure of the matching degree between corresponding points of the skeleton data collected in real time and the template skeleton data preset in the system.

$$\cos\theta = \frac{\vec{a}_1 \cdot \vec{a}_2}{|\vec{a}_1|\,|\vec{a}_2|} = \frac{x_1 x_2 + y_1 y_2 + z_1 z_2}{\sqrt{x_1^2 + y_1^2 + z_1^2}\;\sqrt{x_2^2 + y_2^2 + z_2^2}}$$

In the formula, θ is the space angle between a human limb vector and the corresponding template limb vector (also called the limb-vector space angle); the smaller its value, the better the two vectors match, so in the skeleton-node analysis it measures the matching degree between the human action and the template action. $\vec{a}_1$ and $\vec{a}_2$ denote the human limb vector and the template limb vector, with three-dimensional coordinates (x_1, y_1, z_1) and (x_2, y_2, z_2) respectively. The coordinates of a human limb vector are determined by the three-dimensional coordinates of its skeleton nodes, which are in turn determined from the depth data obtained in step S100. In this embodiment, a spatial rectangular coordinate system is preferably set up with the waist skeleton node as origin, the horizontal direction as the X axis, and the vertical direction as the Y axis; the coordinates of skeleton nodes and limb vectors are rectangular coordinates in this system and are of the same order of magnitude.
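The formula above is the ordinary vector-angle computation and can be sketched directly; clipping guards against floating-point values just outside [-1, 1].

```python
import numpy as np

def space_angle(a1, a2):
    """Space angle (degrees) between two limb vectors, from
    cos(theta) = a1 . a2 / (|a1| |a2|); smaller means a better match."""
    cos_t = np.dot(a1, a2) / (np.linalg.norm(a1) * np.linalg.norm(a2))
    return float(np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0))))
```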

As noted above, during human motion the action differences at different classes of skeleton nodes feel different to a human observer, so based on extensive data comparison and practical experience the present invention assigns different weights in the difference expression to the limb vectors related to primary nodes, those related to secondary nodes, and those of the upper and lower body.

The settings can be as follows. In this embodiment, secondary nodes lie farther from the torso than primary nodes, their motion amplitude is affected only by the primary nodes, and they are easier to control during motion, so the limb vectors related to secondary nodes (limb vectors 4, 5, 6, and 7 in Fig. 7) carry less weight when the difficulty of matching an action is considered; primary nodes lie nearer the torso and are affected both by the torso's motional inertia and by the motion amplitude of the secondary nodes, so the limb vectors related to primary nodes (limb vectors 0, 1, 2, and 3 in Fig. 7) carry more weight. At the same time, to handle cases where a single space angle becomes excessive and to keep the space angles between each limb vector of the same action and its template counterpart as even as possible, this embodiment also takes the standard deviation of these space angles as a factor in measuring the matching degree. In addition, this embodiment preferably gives a smaller weight to the angle data of the upper-limb vectors (limb vectors 0, 1, 4, and 5 in Fig. 7) and a larger weight to those of the lower-limb vectors (limb vectors 2, 3, 6, and 7 in Fig. 7), to balance the visual experience.

S203: weight and normalize the space angles, and compute the cumulative space-angle error between the human limb vectors and the corresponding template limb vectors by the following formula, as the human-action difference based on skeleton-node analysis:

$$\mathrm{Metric} = SD + \mathrm{AngDiff}_1 \times f_1 + \mathrm{AngDiff}_2 \times f_2 + \mathrm{AngDiff}_U \times f_U + \mathrm{AngDiff}_L \times f_L$$

In the formula, Metric is the cumulative space-angle error between the human limb vectors and the corresponding template limb vectors, and SD is the standard deviation of the limb-vector space angles. AngDiff_U, AngDiff_L, AngDiff_1, and AngDiff_2 are the accumulated sums of the space angles of the limb vectors related to, respectively, the upper limbs, the lower limbs, the primary nodes, and the secondary nodes in the same action sample. In this embodiment only the eight limb vectors 0 through 7 shown in Fig. 7 are considered, so:

$$\mathrm{AngDiff}_1 = \theta_0 + \theta_1 + \theta_2 + \theta_3$$

$$\mathrm{AngDiff}_2 = \theta_4 + \theta_5 + \theta_6 + \theta_7$$

$$\mathrm{AngDiff}_U = \theta_0 + \theta_1 + \theta_4 + \theta_5$$

$$\mathrm{AngDiff}_L = \theta_2 + \theta_3 + \theta_6 + \theta_7$$

where θ_i, i ∈ {0, 1, …, 7}, is the space angle between the i-th of the eight human limb vectors and its corresponding template limb vector.

f_U, f_L, f_1, and f_2 are the weights in the difference expression of the limb vectors related to the upper limbs, the lower limbs, the primary nodes, and the secondary nodes respectively, reflecting the influence of each group of limb vectors on the human action:

f_U = AngDiff'_U / (AngDiff'_U + AngDiff'_L),    f_L = AngDiff'_L / (AngDiff'_U + AngDiff'_L)

f_1 = AngDiff'_1 / (AngDiff'_1 + AngDiff'_2),    f_2 = AngDiff'_2 / (AngDiff'_1 + AngDiff'_2)

In the above formulas, AngDiff'_U, AngDiff'_L, AngDiff'_1 and AngDiff'_2 denote the accumulated sums of the space angles of all upper-limb-related, lower-limb-related, first-level-node-related and second-level-node-related limb vectors, respectively, over multiple groups of experiment samples. Here a group of experiment samples is made up of multiple experiment samples. An experiment sample means the following: suppose a preset template action is A (e.g. standing upright with both arms stretched out horizontally), and the human action captured at a certain moment is an A-like action a; then the template action A and the human action a form one experiment sample of template action A. A group of experiment samples means that, for the same template action A, similar actions of the same person at different moments and similar actions of different people at different moments, together with the template action, form one group of experiment samples. Each template action thus has one group of experiment samples, and multiple different template actions (e.g. template actions A, B and C) construct the set of multiple groups of experiment samples.
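The weighting scheme above can be condensed into a short sketch. The following Python is illustrative only (function and variable names are not from the patent); it assumes the eight per-frame space angles have already been computed and that the accumulated sums AngDiff'_U/L/1/2 from the experiment-sample set are available:

```python
import math

def angle_metric(angles, sums_prime):
    """Weighted cumulative space-angle error (the Metric formula above).

    angles: the eight space angles between human limb vectors 0-7 and the
    corresponding template limb vectors, grouped as in the text:
    0-3 first-level-node related, 4-7 second-level-node related,
    {0, 1, 4, 5} upper-limb related, {2, 3, 6, 7} lower-limb related.
    sums_prime: accumulated sums AngDiff'_U/L/1/2 over the sample set.
    """
    assert len(angles) == 8
    ang_1 = sum(angles[0:4])                                # AngDiff_1
    ang_2 = sum(angles[4:8])                                # AngDiff_2
    ang_u = angles[0] + angles[1] + angles[4] + angles[5]   # AngDiff_U
    ang_l = angles[2] + angles[3] + angles[6] + angles[7]   # AngDiff_L
    # Normalised weight pairs f_U/f_L and f_1/f_2
    f_u = sums_prime['U'] / (sums_prime['U'] + sums_prime['L'])
    f_l = sums_prime['L'] / (sums_prime['U'] + sums_prime['L'])
    f_1 = sums_prime['1'] / (sums_prime['1'] + sums_prime['2'])
    f_2 = sums_prime['2'] / (sums_prime['1'] + sums_prime['2'])
    # SD term: standard deviation of the eight space angles
    mean = sum(angles) / len(angles)
    sd = math.sqrt(sum((a - mean) ** 2 for a in angles) / len(angles))
    return sd + ang_1 * f_1 + ang_2 * f_2 + ang_u * f_u + ang_l * f_l
```

Note that f_U + f_L = 1 and f_1 + f_2 = 1, so the four group sums enter the metric as two convex combinations added to the SD term.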

As shown in Fig. 8, the method of step S300 in Fig. 3 for calculating the human-action difference measure based on human contour node analysis comprises the following steps:

S301. Build human contour vectors based on the human contour nodes:

The human contour nodes are joined end to end in sequence, connected pairwise to build the human contour vectors. In the present embodiment, they can be joined end to end in the order: left armpit, left elbow, left wrist, left hip, left knee, left ankle, crotch, right ankle, right knee, right hip, right wrist, right elbow, right armpit, forming 13 human contour vectors.

S302. Calculate the space angle between each pair of adjacent human contour vectors according to the following formula, as a descriptor of the human action data:

cos θ = (b⃗_1 · b⃗_2) / (|b⃗_1| · |b⃗_2|) = (x_1·x_2 + y_1·y_2 + z_1·z_2) / (√(x_1² + y_1² + z_1²) · √(x_2² + y_2² + z_2²))

In the above formula, θ is the space angle between two adjacent human contour vectors (also called the contour-vector space angle), b⃗_1 and b⃗_2 are the two adjacent contour vectors, and x_1, y_1, z_1 and x_2, y_2, z_2 are their respective three-dimensional coordinates (defined differently from those in the formula of step S202 in the skeleton-node analysis method above). The three-dimensional coordinates of a human contour vector are determined by the three-dimensional coordinates of the human contour nodes, which in turn are determined from the depth data obtained in step S100. As in the skeleton-node analysis, the present embodiment preferably establishes a spatial rectangular coordinate system with the human lumbar bone node as the origin, the horizontal direction as the X axis and the vertical direction as the Y axis; the three-dimensional coordinates of the human contour nodes and contour vectors are the rectangular coordinates in this coordinate system and are of the same order of magnitude.
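The cosine formula of step S302 reduces to a dot product divided by the product of the vector norms; a minimal sketch (function name illustrative):

```python
import math

def space_angle(b1, b2):
    """Space angle (radians) between two 3-D contour vectors, computed
    from cos θ = (b1 · b2) / (|b1| · |b2|) as in step S302."""
    dot = sum(p * q for p, q in zip(b1, b2))
    n1 = math.sqrt(sum(p * p for p in b1))
    n2 = math.sqrt(sum(q * q for q in b2))
    # Clamp against floating-point rounding before acos
    return math.acos(max(-1.0, min(1.0, dot / (n1 * n2))))
```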

S303. Calculate the difference values between each human contour-vector space angle and all of the template contour-vector space angles:

In the present embodiment, the 13 human contour vectors of step S302 yield 13 human contour-vector space angles. The 13 template contour-vector space angles are subtracted in turn from the first human contour-vector space angle, then from the second human contour-vector space angle, and so on, obtaining 13 × 13 difference values in total.
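The all-pairs subtraction of step S303 is a simple outer difference; a sketch with illustrative names (it works for the 13-angle case or any length):

```python
def difference_matrix(human_angles, template_angles):
    """Row i holds the i-th human contour-vector space angle minus each
    of the template contour-vector space angles (13 x 13 in the text)."""
    return [[h - t for t in template_angles] for h in human_angles]
```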

S304. Construct an energy function based on the difference values, and take the minimum value of the energy function as the human-action difference measure based on human contour node analysis:

The 13 × 13 difference values obtained in step S303 above are taken as matrix elements to form a 13 × 13 difference matrix. Because the matrix elements can be positive or negative, each matrix element is squared in order to construct the energy function E(d) according to the following formula:

E(d) = Σ_{s=1}^{j} ( [k_1(s) − k_2(s − d(s))]² + α·|d(s)| )

In the above formula, s is the index of a contour-vector space angle, k_1(s) is the contour-vector space angle corresponding to index s in the template data, and d(s) is the offset at index s in the human data to be matched. In practice, the order of the contour-vector space angles in the template data may not match that in the human data to be matched; for example, the first space angle in the template data may be the left-armpit space angle, while the left-armpit space angle is the third in the human data to be matched. The offset d(s) is therefore defined so that, for the s-th space angle in the template data, after offsetting s by d(s) in the human data to be matched, the contour-vector space angles of the two correspond to each other; thus k_2(s − d(s)) is the contour-vector space angle in the human data to be matched after the offset transformation. α is a smoothing factor, and j is the number of contour-vector space angles, whose value in the present embodiment is 13.

Then, a graph-cut algorithm is used to find the minimum value of the energy function, which serves as the human-action difference measure based on human contour node analysis.
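As a stand-in for the graph-cut minimisation (which the text does not spell out), the sketch below evaluates E(d) directly and searches only constant offsets d(s) = d, treating the angle sequence as cyclic so the index wraps around; both are simplifying assumptions for illustration, and all names are illustrative:

```python
def energy(template_k, human_k, d, alpha):
    """E(d) from the formula above; the contour is assumed cyclic so the
    offset index wraps around (an assumption, not stated in the text)."""
    j = len(template_k)
    total = 0.0
    for s in range(j):
        total += (template_k[s] - human_k[(s - d[s]) % j]) ** 2 + alpha * abs(d[s])
    return total

def best_constant_offset(template_k, human_k, alpha=0.1, max_d=3):
    """Brute-force stand-in for graph-cut: minimise E over constant offsets,
    returning (minimum energy, best offset)."""
    return min((energy(template_k, human_k, [d] * len(template_k), alpha), d)
               for d in range(-max_d, max_d + 1))
```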

S400. The above human-action difference measures based on skeleton node analysis and on human contour node analysis are weighted and summed according to the following formula, and the result serves as the evaluation parameter measuring the matching degree between the human action and the template action: the larger its value, the lower the similarity between the human action and the template action; the smaller its value, the higher the similarity. A comprehensive and accurate automatic evaluation of the human action is thereby realised, achieving the technical effect of the present invention.

D = a × D_skeleton + (1 − a) × D_shape

In the above formula, D is the evaluation parameter measuring the matching degree between the human action and the template action; D_skeleton is the human-action difference measure based on skeleton node analysis, with weight coefficient a; and D_shape is the human-action difference measure based on human contour node analysis, with weight coefficient (1 − a); the weights are normalised.
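The fusion formula can be sketched as follows (illustrative names), including the fallback of forcing the skeleton weight to 1 when self-occlusion makes the contour term unusable:

```python
def fuse(d_skeleton, d_shape, a, occluded=False):
    """D = a * D_skeleton + (1 - a) * D_shape; when self-occlusion is
    detected, the contour term is dropped by forcing a to 1."""
    if occluded:
        a = 1.0
    return a * d_skeleton + (1.0 - a) * d_shape
```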

In assigning the above weights, a large number of data tests combined with human subjective perception are needed to determine the value of the weight coefficient a, and the weight coefficient can be further adjusted according to specific requirements. For example, whether the human action involves self-occlusion can be judged from the depth data and skeleton node data obtained in step S100. When the human action involves self-occlusion, part of the limb contour is lost, and the method combining human contour node analysis can no longer be used for the computation; the weight coefficient of the skeleton-node-based human-action difference measure must then be forced to 1, so that only the skeleton-node analysis method is used to assess the action matching degree.

The above method of judging whether the human action involves self-occlusion comprises the following steps:

S401. Search the edges of the human segmentation image to find depth-discontinuity pixels:

The edges of the human segmentation image are searched to find pixels whose depth data exceeds a given threshold; the depth at such a pixel is considered to have a discontinuity, so it is a depth-discontinuity pixel.

S402. Judge whether a depth-discontinuity pixel is a human-image pixel:

The coordinates of each depth-discontinuity pixel are checked. If a depth-discontinuity pixel lies within the range of the human image, it is a human-image pixel, which indicates that the human body exhibits a depth discontinuity, and it is then inferred that the human action involves self-occlusion.
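Steps S401–S402 can be sketched as follows; the data layout (an edge-pixel list, a depth-jump map and a body-pixel set) is an assumption for illustration, not taken from the patent:

```python
def self_occluded(edge_pixels, body_mask, depth_jump, threshold):
    """Flag self-occlusion when a depth-discontinuity pixel on the
    segmentation edge (S401) lies inside the body region (S402).

    edge_pixels: (x, y) pixels on the silhouette edge
    depth_jump:  maps (x, y) to the local depth-discontinuity magnitude
    body_mask:   set of (x, y) pixels belonging to the human image
    """
    for p in edge_pixels:
        if depth_jump.get(p, 0.0) > threshold and p in body_mask:
            return True
    return False
```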

By fusing the assessment results of skeleton node analysis and human contour node analysis, the matching degree between the human action and the template action is evaluated automatically. This compensates, to a certain extent, for the deficiency in the prior art of comparing actions based on skeleton node analysis or human contour node analysis alone, and better realises an accurate evaluation of the human action.

In the interactive advertising equipment provided by the present invention, the body sense interactive device cooperates with the master control device preset with image processing algorithms to track and locate the user's body joints, gestures and body-posture direction, identify the user's selection of an advertised product, and assess the matching degree between the user's action and the action in the video associated with that advertised product; if the matching degree meets expectations, a product reward is given to the user. This interaction mode enhances the user's participation and immersion, and helps expand the popularity of the advertised product.

The foregoing are only preferred embodiments of the present invention and do not thereby limit the scope of the claims of the present invention. Any equivalent structural or flow transformation made using the contents of the description and drawings of the present invention, or any direct or indirect use in other related technical fields, is likewise included within the patent protection scope of the present invention.

Claims (9)

1. An interactive advertising equipment, characterized in that it comprises:
a display device;
a body sense interactive device, for capturing user action information;
a master control device, electrically connected to the display device and the body sense interactive device, for controlling the display device to play video information, identifying the user's selection of an advertised product from the user action information captured by the body sense interactive device, assessing the matching degree between the user action and the action in the video associated with the selected advertised product, and judging whether to output a shipment instruction according to the matching degree;
a shipment actuating unit, electrically connected to the master control device, for controlling the advertising equipment to output the advertised product according to the shipment instruction sent by the master control device.
2. The interactive advertising equipment as claimed in claim 1, characterized in that:
the body sense interactive device comprises an infrared camera or an RGB-D camera, to record user action information comprising depth data and user bone node data.
3. The interactive advertising equipment as claimed in claim 2, characterized in that:
the body sense interactive device further comprises an optical camera.
4. The interactive advertising equipment as claimed in claim 1, 2 or 3, characterized in that it further comprises a communication device, electrically connected to the master control device, for connecting to the Internet according to control instructions transmitted by the master control device, so as to realise remote operation of the advertising equipment.
5. The interactive advertising equipment as claimed in claim 1, 2 or 3, characterized in that it further comprises a sound-producing device, electrically connected to the master control device, for playing voice information according to control instructions transmitted by the master control device.
6. A working method of the interactive advertising equipment as claimed in any one of claims 1 to 5, characterized in that it comprises the following steps:
S10. Prompt the user to select an advertised product;
S20. Capture user action information and identify the user's selection;
S30. Detect whether the user-selected advertised product is out of stock:
if it is in stock, play the video associated with the user-selected advertised product and prompt the user to imitate the video action;
S40. Capture user action information and assess the matching degree between the user action and the video action;
S50. Judge whether to output the user-selected advertised product according to the matching degree.
7. The working method as claimed in claim 6, characterized in that:
in said steps S20 and S40, the captured user action information comprises depth data and user bone node data.
8. The working method as claimed in claim 7, characterized in that in said step S40, the matching degree between the user action and the video action is assessed based on the following steps:
S100. Select user bone nodes and contour nodes based on the depth data;
S200. Build user limb vectors based on the user bone nodes, calculate the space angles between the user limb vectors and the corresponding video-template limb vectors, weight and normalise them, and calculate the cumulative space-angle error between the user limb vectors and the corresponding video-template limb vectors, as the human-action difference measure based on bone node analysis;
S300. Build user contour vectors based on the user contour nodes, calculate the space angles between adjacent pairs of user contour vectors, use the difference values between them and the space angles of the corresponding video-template contour vectors to build an energy function, and take the minimum value of the energy function as the human-action difference measure based on contour node analysis;
S400. Weight and sum the human-action difference measure based on bone node analysis and the human-action difference measure based on contour node analysis, as the evaluation parameter measuring the matching degree between the user action and the video action.
9. The working method as claimed in claim 7 or 8, characterized in that:
in said step S30, if the product is out of stock, the user is prompted with out-of-stock information, and the out-of-stock information is sent to the operator via the Internet.
CN201310526678.9A 2013-10-30 2013-10-30 A kind of interactive advertising equipment and its method of work CN104598012B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310526678.9A CN104598012B (en) 2013-10-30 2013-10-30 A kind of interactive advertising equipment and its method of work

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310526678.9A CN104598012B (en) 2013-10-30 2013-10-30 A kind of interactive advertising equipment and its method of work

Publications (2)

Publication Number Publication Date
CN104598012A true CN104598012A (en) 2015-05-06
CN104598012B CN104598012B (en) 2017-12-05

Family

ID=53123858

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310526678.9A CN104598012B (en) 2013-10-30 2013-10-30 A kind of interactive advertising equipment and its method of work

Country Status (1)

Country Link
CN (1) CN104598012B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104915011A (en) * 2015-06-28 2015-09-16 合肥金诺数码科技股份有限公司 Open environment gesture interaction game system
CN105138111A (en) * 2015-07-09 2015-12-09 中山大学 Single camera based somatosensory interaction method and system
CN108062533A (en) * 2017-12-28 2018-05-22 北京达佳互联信息技术有限公司 Analytic method, system and the mobile terminal of user's limb action

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102103408A (en) * 2009-12-18 2011-06-22 微软公司 Gesture style recognition and reward
CN102609871A (en) * 2012-02-18 2012-07-25 扬州市前景电子技术研究所有限公司 Network self-service store and shopping method thereof
CN103049873A (en) * 2013-01-23 2013-04-17 贵州宝森科技有限公司 3D shopping guide machine system and method


Also Published As

Publication number Publication date
CN104598012B (en) 2017-12-05

Similar Documents

Publication Publication Date Title
Taylor et al. Efficient and precise interactive hand tracking through joint, continuous optimization of pose and correspondences
Han et al. Enhanced computer vision with microsoft kinect sensor: A review
Han et al. A vision-based motion capture and recognition framework for behavior-based safety management
US8953844B2 (en) System for fast, probabilistic skeletal tracking
Chen et al. A survey of depth and inertial sensor fusion for human action recognition
Kim et al. Digits: freehand 3D interactions anywhere using a wrist-worn gloveless sensor
ES2227470T3 (en) System of supervision of the use of a toothbrush.
US20100205043A1 (en) Virtual reality system including smart objects
US9189886B2 (en) Method and apparatus for estimating body shape
Chaaraoui et al. Evolutionary joint selection to improve human action recognition with RGB-D devices
Ray et al. Real-time construction worker posture analysis for ergonomics training
US9189855B2 (en) Three dimensional close interactions
Yam et al. Automated person recognition by walking and running via model-based approaches
Hirshberg et al. Coregistration: Simultaneous alignment and modeling of articulated 3D shape
Zhou et al. An information fusion framework for robust shape tracking
Li et al. Facial performance sensing head-mounted display
JP2014501011A (en) Method, circuit and system for human machine interface with hand gestures
Valyear et al. Observing learned object-specific functional grasps preferentially activates the ventral stream
US20120223956A1 (en) Information processing apparatus, information processing method, and computer-readable storage medium
Kolsch Vision based hand gesture interfaces for wearable computing and virtual environments
TW201120684A (en) Human tracking system
CN102749991B (en) A kind of contactless free space sight tracing being applicable to man-machine interaction
US7761269B1 (en) System and method of subjective evaluation of a vehicle design within a virtual environment using a virtual reality
WO2017133009A1 (en) Method for positioning human joint using depth image of convolutional neural network
US20030031357A1 (en) System and method for analyzing an image

Legal Events

Date Code Title Description
PB01 Publication
C06 Publication
SE01 Entry into force of request for substantive examination
C10 Entry into substantive examination
GR01 Patent grant